Sample records for Nyquist sampling theorem

  1. Nyquist Sampling Theorem: Understanding the Illusion of a Spinning Wheel Captured with a Video Camera

    ERIC Educational Resources Information Center

    Levesque, Luc

    2014-01-01

    Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
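
    As a minimal illustration of the effect described above (the frame rate and rotation speed below are invented values, not from the record), the apparent rotation rate seen on video is the true rate folded into the band from -fs/2 to fs/2:

      import numpy as np

      def apparent_rotation(f_true_hz, frame_rate_hz):
          """Fold the true rotation rate into the [-fs/2, fs/2) band seen on video."""
          fs = frame_rate_hz
          return (f_true_hz + fs / 2) % fs - fs / 2

      # A wheel spinning at 23 rev/s filmed at 24 frames/s appears to turn
      # backwards at 1 rev/s:
      print(apparent_rotation(23.0, 24.0))  # -> -1.0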

  2. Color and Vector Flow Imaging in Parallel Ultrasound With Sub-Nyquist Sampling.

    PubMed

    Madiena, Craig; Faurie, Julia; Poree, Jonathan; Garcia, Damien

    2018-05-01

    RF acquisition with a high-performance multichannel ultrasound system generates massive data sets in short periods of time, especially in "ultrafast" ultrasound when digital receive beamforming is required. Sampling at a rate four times the carrier frequency is the standard procedure, since this rule complies with the Nyquist-Shannon sampling theorem and simplifies quadrature sampling. Bandpass sampling (or undersampling) outputs a bandpass signal at a rate lower than the maximal frequency without harmful aliasing. Advantages over Nyquist sampling are reduced storage volumes and data workflow, and simplified digital signal processing tasks. We used RF undersampling in color flow imaging (CFI) and vector flow imaging (VFI) to decrease the data volume significantly (by a factor of 3 to 13 in our configurations). CFI and VFI with Nyquist and sub-Nyquist sampling were compared in vitro and in vivo. The estimation errors due to undersampling were small or marginal, which illustrates that Doppler and vector Doppler images can be correctly computed with a drastically reduced number of RF samples. Undersampling can be a method of choice in CFI and VFI to avoid information overload and reduce data transfer and storage.
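
    A small numpy sketch of the bandpass-sampling idea summarized above (the carrier and rate below are hypothetical, not the paper's configuration): a band centered at fc, undersampled at fs, folds to a predictable intermediate frequency, provided fs exceeds twice the bandwidth and the band does not straddle a multiple of fs/2:

      def aliased_center(fc_hz, fs_hz):
          """Center frequency of a bandpass signal after undersampling at fs."""
          f = fc_hz % fs_hz
          return f if f <= fs_hz / 2 else fs_hz - f

      # A 5 MHz ultrasound carrier undersampled at 4 MS/s folds to 1 MHz:
      print(aliased_center(5e6, 4e6) / 1e6, "MHz")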

  3. Identification of Hot Moments and Hot Spots for Real-Time Adaptive Control of Multi-scale Environmental Sensor Networks

    NASA Astrophysics Data System (ADS)

    Wietsma, T.; Minsker, B. S.

    2012-12-01

    Increased sensor throughput combined with decreasing hardware costs has led to a disruptive growth in data volume. This disruption, popularly termed "the data deluge," has placed new demands for cyberinfrastructure and information technology skills among researchers in many academic fields, including the environmental sciences. Adaptive sampling has been well established as an effective means of improving network resource efficiency (energy, bandwidth) without sacrificing sample set quality relative to traditional uniform sampling. However, using adaptive sampling for the explicit purpose of improving resolution over events -- situations displaying intermittent dynamics and unique hydrogeological signatures -- is relatively new. In this paper, we define hot spots and hot moments in terms of sensor signal activity as measured through discrete Fourier analysis. Following this frequency-based approach, we apply the Nyquist-Shannon sampling theorem, a fundamental contribution from signal processing that led to the field of information theory, for analysis of uni- and multivariate environmental signal data. In the scope of multi-scale environmental sensor networks, we present several sampling control algorithms, derived from the Nyquist-Shannon theorem, that operate at local (field sensor), regional (base station for aggregation of field sensor data), and global (Cloud-based, computationally intensive models) scales. Evaluated over soil moisture data, results indicate significantly greater sample density during precipitation events while reducing overall sample volume. Using these algorithms as indicators rather than control mechanisms, we also discuss opportunities for spatio-temporal modeling as a tool for planning/modifying sensor network deployments. [Highlights: a locally adaptive model based on the Nyquist-Shannon sampling theorem; Pareto frontiers for local, regional, and global models relative to uniform sampling, with objectives of (1) overall sampling efficiency and (2) sampling efficiency during hot moments as identified using a heuristic approach.]
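
    A schematic sketch of the frequency-based control idea (the window length, energy fraction, and margin factor are invented for illustration; this is not the authors' algorithm): estimate the bandwidth holding most of a window's fluctuation energy and set the next sampling rate from the Nyquist rate of that activity:

      import numpy as np

      def next_sample_rate(window, fs, energy_frac=0.99, margin=2.5):
          """Sampling rate from the bandwidth holding `energy_frac` of the
          window's fluctuation energy, with a safety margin above Nyquist."""
          spec = np.abs(np.fft.rfft(window - window.mean())) ** 2
          freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
          cum = np.cumsum(spec) / max(spec.sum(), 1e-12)
          f_max = freqs[np.searchsorted(cum, energy_frac)]
          return max(margin * 2.0 * f_max, 1e-3)  # floor avoids a zero rate

      t = np.arange(1024) / 100.0                 # 100 Hz raw sensing rate
      quiet = 0.01 * np.sin(2 * np.pi * 0.2 * t)  # slow soil-moisture drift
      burst = quiet + np.sin(2 * np.pi * 8.0 * t) * (t > 8)  # a "hot moment"
      print(next_sample_rate(quiet, 100.0), next_sample_rate(burst, 100.0))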

  4. An efficient sampling technique for sums of bandpass functions

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1982-01-01

    A well known sampling theorem states that a bandlimited function can be completely determined by its values at a uniformly placed set of points whose density is at least twice the highest frequency component of the function (Nyquist rate). A less familiar but important sampling theorem states that a bandlimited narrowband function can be completely determined by its values at a properly chosen, nonuniformly placed set of points whose density is at least twice the passband width. This allows for efficient digital demodulation of narrowband signals, which are common in sonar, radar and radio interferometry, without the side effect of signal group delay from an analog demodulator. This theorem was extended by developing a technique which allows a finite sum of bandlimited narrowband functions to be determined by its values at a properly chosen, nonuniformly placed set of points whose density can be made arbitrarily close to the sum of the passband widths.
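
    For the uniform-sampling special case of this result, the admissible bandpass sampling rates can be enumerated directly from the classical condition 2*fH/n <= fs <= 2*fL/(n-1). A short sketch with made-up band edges:

      def bandpass_rates(f_low, f_high):
          """Valid uniform sampling-rate intervals for a band [f_low, f_high]."""
          n_max = int(f_high // (f_high - f_low))  # largest usable band position
          intervals = []
          for n in range(1, n_max + 1):
              lo = 2 * f_high / n
              hi = 2 * f_low / (n - 1) if n > 1 else float("inf")
              if lo <= hi:
                  intervals.append((lo, hi))
          return intervals

      # A 20-25 MHz band may be sampled as slowly as 10 MS/s instead of 50 MS/s:
      for lo, hi in bandpass_rates(20e6, 25e6):
          print(lo / 1e6, "to", hi / 1e6, "MS/s")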

  5. Infrared super-resolution imaging based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Sui, Xiubao; Chen, Qian; Gu, Guohua; Shen, Xuewei

    2014-03-01

    The theoretical basis of traditional infrared super-resolution imaging methods is the Nyquist sampling theorem. The reconstruction premise is that the relative positions of the infrared objects in the low-resolution image sequences remain fixed, and image restoration amounts to the inverse operation of an ill-posed problem without fixed rules. This limits the super-resolution reconstruction ability, the application area, and the stability of such algorithms. To this end, we propose a super-resolution reconstruction method based on compressed sensing in this paper. In the method, we selected a Toeplitz matrix as the measurement matrix and realized it by a phase mask method. We investigated the complementary matching pursuit algorithm and selected it as the recovery algorithm. In order to adapt to moving targets and decrease imaging time, we use an area infrared focal plane array to acquire multiple measurements at one time. Theoretically, the method breaks through the Nyquist sampling theorem and can greatly improve the spatial resolution of the infrared image. Image comparisons and experimental data indicate that our method is effective in improving the resolution of infrared images and is superior to some traditional super-resolution imaging methods. The compressed sensing super-resolution method is expected to find wide application.

  6. An interferometric fiber optic hydrophone with large upper limit of dynamic range

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Kan, Baoxi; Zheng, Baichao; Wang, Xuefeng; Zhang, Haiyan; Hao, Liangbin; Wang, Hailiang; Hou, Zhenxing; Yu, Wenpeng

    2017-10-01

    An interferometric fiber optic hydrophone based on heterodyne detection is used to measure the dropping point of a missile in the sea. The signal caused by the missile dropping into the water can be too large to detect, so it is necessary to boost the upper limit of dynamic range (ULODR) of the fiber optic hydrophone. In this article we analyze the factors that influence the ULODR of a fiber optic hydrophone based on heterodyne detection: the ULODR is determined by the sampling frequency fsam and the heterodyne frequency Δf. When the sampling frequency and the heterodyne frequency satisfy the Nyquist sampling theorem, i.e., fsam is at least twice Δf, the ULODR depends on the heterodyne frequency. In order to enlarge the ULODR, we deliberately break the Nyquist sampling theorem and propose a fiber optic hydrophone in which the heterodyne frequency is larger than the sampling frequency. Both simulation and experiment were carried out, with similar results: at a sampling frequency of 100 kHz, the ULODR of the large-heterodyne-frequency fiber optic hydrophone is 2.6 times that of the small-heterodyne-frequency one. When the heterodyne frequency is larger than the sampling frequency, the ULODR depends on the sampling frequency. If the sampling frequency is set to 2 MHz, the ULODR of the heterodyne-detection fiber optic hydrophone is boosted to 1000 rad at 1 kHz, and this large-heterodyne fiber optic hydrophone can be applied to locate the drop position of a missile in the sea.
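
    The quoted figure is consistent with the standard phase-tracking limit: unwrapping the demodulated phase requires less than pi of phase change per sample, so a sinusoidal phase of amplitude A at acoustic frequency f is recoverable only if 2*pi*f*A <= pi*fsam, i.e. A <= fsam/(2f). A one-line check of the abstract's numbers (a worked restatement, not code from the paper):

      fsam, f_sig = 2e6, 1e3          # sampling rate and acoustic frequency (Hz)
      ulodr_rad = fsam / (2 * f_sig)  # max recoverable phase amplitude
      print(ulodr_rad)                # -> 1000.0 rad, matching the abstract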

  7. Experimental scheme and restoration algorithm of block compression sensing

    NASA Astrophysics Data System (ADS)

    Zhang, Linxia; Zhou, Qun; Ke, Jun

    2018-01-01

    Compressed sensing (CS) can use the sparseness of a target to obtain its image with much less data than the Nyquist sampling theorem requires. In this paper, we study the hardware implementation of a block compressed sensing system and its reconstruction algorithms. Different block sizes are used. Two algorithms, orthogonal matching pursuit (OMP) and total variation minimization (TV), are used to obtain good reconstructions. The influence of block size on reconstruction is also discussed.
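
    For reference, a self-contained numpy sketch of generic orthogonal matching pursuit (the textbook algorithm, not necessarily the authors' implementation): greedily pick the dictionary column most correlated with the residual, then re-fit all selected columns by least squares:

      import numpy as np

      def omp(A, y, k):
          """Recover a k-sparse x from y = A @ x by orthogonal matching pursuit."""
          residual, support = y.copy(), []
          for _ in range(k):
              support.append(int(np.argmax(np.abs(A.T @ residual))))
              cols = A[:, support]
              coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
              residual = y - cols @ coef
          x = np.zeros(A.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 256)) / 8.0   # random measurement matrix
      x_true = np.zeros(256)
      x_true[[10, 99, 200]] = [1.0, -2.0, 0.5]   # 3-sparse target
      x_hat = omp(A, A @ x_true, k=3)
      print(np.abs(x_hat - x_true).max())        # ~0 in the noiseless case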

  8. Characterizing the zenithal night sky brightness in large territories: how many samples per square kilometre are needed?

    NASA Astrophysics Data System (ADS)

    Bará, Salvador

    2018-01-01

    A recurring question arises when trying to characterize, by means of measurements or theoretical calculations, the zenithal night sky brightness throughout a large territory: how many samples per square kilometre are needed? The optimum sampling distance should allow reconstructing, with sufficient accuracy, the continuous zenithal brightness map across the whole region, whilst at the same time avoiding unnecessary and redundant oversampling. This paper attempts to provide some tentative answers to this issue, using two complementary tools: the luminance structure function and the Nyquist-Shannon spatial sampling theorem. The analysis of several regions of the world, based on the data from the New world atlas of artificial night sky brightness, suggests that, as a rule of thumb, about one measurement per square kilometre could be sufficient for determining the zenithal night sky brightness of artificial origin at any point in a region to within ±0.1 mag_V arcsec^-2 (in the root-mean-square sense) of its true value in the Johnson-Cousins V band. The exact reconstruction of the zenithal night sky brightness maps from samples taken at the Nyquist rate seems to be considerably more demanding.
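
    In plain terms, the spatial Nyquist-Shannon criterion says a map whose smallest relevant feature scale is Lmin must be sampled at a grid spacing no larger than Lmin/2, i.e. at least 1/(Lmin/2)^2 samples per unit area. A back-of-envelope helper (the feature scale is an assumed input; the one-sample-per-km^2 output merely restates the paper's rule of thumb):

      def samples_per_km2(min_feature_km):
          """Sampling density implied by the spatial Nyquist criterion."""
          d_max = min_feature_km / 2.0   # largest admissible grid spacing (km)
          return 1.0 / d_max**2

      print(samples_per_km2(2.0))        # features >= 2 km wide -> 1 sample/km^2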

  9. Image Understanding and Information Extraction

    DTIC Science & Technology

    1977-11-01

    ...mentation and generalization of DeCarlo's Nyquist-like stability test [15,16]. The last step of the procedure is to check whether this zero ... Several general stability theorems which relate stability to the zero set of B(w,z) have been presented. These theorems led to the conclusion that ... A Spatial Stochastic Model for Contextual Pattern Recognition (T. S. Yu and K. S. Fu) ... V. PREPROCESSING: 1. Stability of General Two...

  10. How to choose a subset of frequencies in frequency-domain finite-difference migration

    NASA Astrophysics Data System (ADS)

    Mulder, W. A.; Plessix, R.-E.

    2004-09-01

    Finite-difference migration with the two-way wave equation can be accelerated by an order of magnitude if the frequency domain rather than the time domain is used. This gain is mainly accomplished by using a subset of the available frequencies. The implicit assumption is that the data have a certain amount of redundancy in the frequency domain. The choice of frequencies cannot be arbitrary. If the frequencies are chosen with a constant increment and their spacing is too large, the well-known wrap-around that occurs when transforming back to the time domain will also show up in the migration to the depth domain, albeit in a more subtle way. Because migration involves propagation in a given background velocity model and summation over shots and receivers, the effects of wrap-around may disappear even when the Nyquist theorem is not obeyed. We have studied these effects analytically for the constant-velocity case and determined sampling conditions that avoid wrap-around artefacts. The conditions depend on the velocity, depth of the migration grid and offset range. They show that the spacing between subsequent frequencies can be larger than the inverse of the time range prescribed by the Nyquist theorem. A 2-D example has been used to test the validity of these conditions for a more realistic velocity model. Finite-difference migration with the one-way wave equation shows a similar behaviour.
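
    For orientation, the Nyquist-prescribed frequency spacing for a time record of length T is df = 1/T; the paper's conditions permit a coarser spacing, hence fewer migrated frequencies. A toy count under assumed numbers (the 4x factor is purely illustrative, not the paper's result):

      T = 4.0                     # time record length (s), assumed
      f_min, f_max = 5.0, 60.0    # migrated frequency band (Hz), assumed
      df_nyquist = 1.0 / T
      for df in (df_nyquist, 4 * df_nyquist):
          n_freq = int((f_max - f_min) / df) + 1
          print(df, "Hz spacing ->", n_freq, "frequencies to migrate")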

  11. Multipinhole SPECT helical scan parameters and imaging volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Rutao, E-mail: rutaoyao@buffalo.edu; Deng, Xiao; Wei, Qingyang

    Purpose: The authors developed SPECT imaging capability on an animal PET scanner using a multiple-pinhole collimator and step-and-shoot helical data acquisition protocols. The objective of this work was to determine the preferred helical scan parameters, i.e., the angular and axial step sizes, and the imaging volume, that provide optimal imaging performance. Methods: The authors studied nine helical scan protocols formed by permuting three rotational and three axial step sizes. These step sizes were chosen around the reference values analytically calculated from the estimated spatial resolution of the SPECT system and the Nyquist sampling theorem. The nine helical protocols were evaluated by two figures-of-merit: the sampling completeness percentage (SCP) and the root-mean-square (RMS) resolution. SCP was an analytically calculated numerical index based on projection sampling. RMS resolution was derived from the reconstructed images of a sphere-grid phantom. Results: The RMS resolution results show that (1) the start and end pinhole planes of the helical scheme determine the axial extent of the effective field of view (EFOV), and (2) the diameter of the transverse EFOV is adequately calculated from the geometry of the pinhole opening, since the peripheral region beyond the EFOV would introduce projection multiplexing and its consequent effects. The RMS resolution results of the nine helical scan schemes show that optimal resolution is achieved when the axial step size is half, and the angular step size about twice, the corresponding values derived from the Nyquist theorem. The SCP results agree in general with those of the RMS resolution but are less critical in assessing the effects of helical parameters and EFOV. Conclusions: The authors quantitatively validated the effective FOV of multiple-pinhole helical scan protocols and proposed a simple method to calculate optimal helical scan parameters.

  12. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR

    PubMed Central

    Mobli, Mehdi; Hoch, Jeffrey C.

    2017-01-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. PMID:25456315
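
    The aliasing that the Nyquist condition guards against is easy to state: with uniform complex (quadrature) sampling at dwell time 1/SW, a resonance outside the spectral width SW folds back into the +/- SW/2 window. A small illustration (the frequencies are arbitrary, and the quadrature convention is an assumption of this sketch):

      def folded_frequency(f_hz, sw_hz):
          """Observed position of a resonance aliased into a +/- SW/2 window."""
          return (f_hz + sw_hz / 2) % sw_hz - sw_hz / 2

      print(folded_frequency(6500.0, 10000.0))  # 6.5 kHz peak appears at -3.5 kHz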

  13. Practical robustness measures in multivariable control system analysis. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Lehtomaki, N. A.

    1981-01-01

    The robustness of the stability of multivariable linear time invariant feedback control systems with respect to model uncertainty is considered using frequency domain criteria. Available robustness tests are unified under a common framework based on the nature and structure of model errors. These results are derived using a multivariable version of Nyquist's stability theorem in which the minimum singular value of the return difference transfer matrix is shown to be the multivariable generalization of the distance to the critical point on a single input, single output Nyquist diagram. Using the return difference transfer matrix, a very general robustness theorem is presented from which all of the robustness tests dealing with specific model errors may be derived. The robustness tests that explicitly utilized model error structure are able to guarantee feedback system stability in the face of model errors of larger magnitude than those robustness tests that do not. The robustness of linear quadratic Gaussian control systems are analyzed.

  14. Nonuniform sampling and non-Fourier signal processing methods in multidimensional NMR.

    PubMed

    Mobli, Mehdi; Hoch, Jeffrey C

    2014-11-01

    Beginning with the introduction of Fourier Transform NMR by Ernst and Anderson in 1966, time domain measurement of the impulse response (the free induction decay, FID) consisted of sampling the signal at a series of discrete intervals. For compatibility with the discrete Fourier transform (DFT), the intervals are kept uniform, and the Nyquist theorem dictates the largest value of the interval sufficient to avoid aliasing. With the proposal by Jeener of parametric sampling along an indirect time dimension, extension to multidimensional experiments employed the same sampling techniques used in one dimension, similarly subject to the Nyquist condition and suitable for processing via the discrete Fourier transform. The challenges of obtaining high-resolution spectral estimates from short data records using the DFT were already well understood, however. Despite techniques such as linear prediction extrapolation, the achievable resolution in the indirect dimensions is limited by practical constraints on measuring time. The advent of non-Fourier methods of spectrum analysis capable of processing nonuniformly sampled data has led to an explosion in the development of novel sampling strategies that avoid the limits on resolution and measurement time imposed by uniform sampling. The first part of this review discusses the many approaches to data sampling in multidimensional NMR, the second part highlights commonly used methods for signal processing of such data, and the review concludes with a discussion of other approaches to speeding up data acquisition in NMR. Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Measuring saccade peak velocity using a low-frequency sampling rate of 50 Hz.

    PubMed

    Wierts, Roel; Janssen, Maurice J A; Kingma, Herman

    2008-12-01

    During the last decades, small head-mounted video eye trackers have been developed to record eye movements. Real-time systems with a low sampling frequency of 50/60 Hz are used in clinical vestibular practice, but are generally considered unsuitable for measuring fast eye movements. In this paper, it is shown that saccadic eye movements with an amplitude of at least 5 degrees can, to a good approximation, be considered bandwidth-limited up to a frequency of 25-30 Hz. Using the Nyquist theorem to reconstruct saccadic eye movement signals at higher temporal resolution, it is shown that accurate values for saccade peak velocities recorded at 50 Hz can be obtained, but saccade peak accelerations and decelerations cannot. In conclusion, and in contrast to what has been stated up till now, video eye trackers sampling at 50/60 Hz are appropriate for detecting clinically relevant saccade peak velocities.
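
    A runnable sketch of the reconstruction idea on synthetic data (a smooth sigmoid stands in for a real saccade; this is not the authors' data or code): a position trace bandlimited well below 25 Hz is sampled at 50 Hz, interpolated back to 1 kHz with FFT-based (sinc) resampling, and the peak velocity of the reconstruction is compared with ground truth:

      import numpy as np
      from scipy.signal import resample

      fs_hi, fs_lo = 1000, 50
      t = np.arange(0, 1, 1 / fs_hi)
      pos = 10 / (1 + np.exp(-(t - 0.5) * 60))   # ~10 deg saccade-like profile
      samples = pos[:: fs_hi // fs_lo]           # the 50 Hz recording
      recon = resample(samples, len(t))          # sinc interpolation to 1 kHz

      interior = slice(100, -100)                # avoid FFT edge artifacts
      v_true = np.gradient(pos, 1 / fs_hi)[interior].max()
      v_recon = np.gradient(recon, 1 / fs_hi)[interior].max()
      print(v_true, v_recon)                     # peak velocities (deg/s)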

  16. Ultrafast Nyquist OTDM demultiplexing using optical Nyquist pulse sampling in an all-optical nonlinear switch.

    PubMed

    Hirooka, Toshihiko; Seya, Daiki; Harako, Koudai; Suzuki, Daiki; Nakazawa, Masataka

    2015-08-10

    We propose the ultrahigh-speed demultiplexing of Nyquist OTDM signals using an optical Nyquist pulse as both a signal and a sampling pulse in an all-optical nonlinear switch. The narrow spectral width of the Nyquist pulses means that the spectral overlap between data and control pulses is greatly reduced, and the control pulse itself can be made more tolerant to dispersion and nonlinear distortions inside the nonlinear switch. We apply the Nyquist control pulse to the 640 to 40 Gbaud demultiplexing of DPSK and DQPSK signals using a nonlinear optical loop mirror (NOLM), and demonstrate a large performance improvement compared with conventional Gaussian control pulses. We also show that the optimum spectral profile of the Nyquist control pulse depends on the walk-off property of the NOLM.

  17. A Novel Energy-Efficient Approach for Human Activity Recognition.

    PubMed

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Peng, Ao; Tang, Biyu; Lu, Hai; Shi, Haibin; Zheng, Huiru

    2017-09-08

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and six females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper.

  18. Digital timing: sampling frequency, anti-aliasing filter and signal interpolation filter dependence on timing resolution.

    PubMed

    Cho, Sanghee; Grazioso, Ron; Zhang, Nan; Aykac, Mehmet; Schmand, Matthias

    2011-12-07

    The main focus of our study is to investigate how the performance of digital timing methods is affected by the sampling rate, anti-aliasing and signal interpolation filters. We used the Nyquist sampling theorem to address some basic questions, such as: what is the minimum sampling frequency? How accurate will the signal interpolation be? How do we validate the timing measurements? The preferred sampling rate would be as low as possible, considering the high cost and power consumption of high-speed analog-to-digital converters. However, when the sampling rate is too low, the aliasing effect produces artifacts in the timing resolution estimates; the shape of the timing profile is distorted and the FWHM values of the profile fluctuate as the source location changes. Anti-aliasing filters are required in this case to avoid the artifacts, but the timing is degraded as a result. When the sampling rate is marginally above the Nyquist rate, proper signal interpolation is important. A sharp roll-off (higher-order) filter is required to separate the baseband signal from its replicates to avoid aliasing, but in return the computational cost is higher. We demonstrate the analysis through a digital timing study using fast LSO scintillation crystals as used in time-of-flight PET scanners. From the study, we observed no significant timing resolution degradation down to a 1.3 GHz sampling frequency, and the computational requirement for the signal interpolation is reasonably low. A so-called sliding test is proposed as a validation tool, checking that a given timing pick-off method maintains constant timing resolution regardless of the source location. Lastly, a performance comparison of several digital timing methods is also shown.

  19. Joint Transmit and Receive Filter Optimization for Sub-Nyquist Delay-Doppler Estimation

    NASA Astrophysics Data System (ADS)

    Lenz, Andreas; Stein, Manuel S.; Swindlehurst, A. Lee

    2018-05-01

    In this article, a framework is presented for the joint optimization of the analog transmit and receive filter with respect to a parameter estimation problem. At the receiver, conventional signal processing systems restrict the two-sided bandwidth of the analog pre-filter B to the rate of the analog-to-digital converter fs to comply with the well-known Nyquist-Shannon sampling theorem. In contrast, here we consider a transceiver that by design violates the common paradigm B ≤ fs. To this end, at the receiver, we allow for a higher pre-filter bandwidth B > fs and study the achievable parameter estimation accuracy under a fixed sampling rate when the transmit and receive filter are jointly optimized with respect to the Bayesian Cramér-Rao lower bound. For the case of delay-Doppler estimation, we propose to approximate the required Fisher information matrix and solve the transceiver design problem by an alternating optimization algorithm. The presented approach allows us to explore the Pareto-optimal region spanned by transmit and receive filters which are favorable under a weighted mean squared error criterion. We also discuss the computational complexity of the obtained transceiver design by visualizing the resulting ambiguity function. Finally, we verify the performance of the optimized designs by Monte-Carlo simulations of a likelihood-based estimator.

  20. Ultrasonic Phased Array Compressive Imaging in Time and Frequency Domain: Simulation, Experimental Verification and Real Application

    PubMed Central

    Bai, Zhiliang; Chen, Shili; Jia, Lecheng; Zeng, Zhoumo

    2018-01-01

    Embracing the fact that one can recover certain signals and images from far fewer measurements than traditional methods use, compressive sensing (CS) provides solutions to the huge amounts of data collected in phased array-based material characterization. This article describes how a CS framework can be utilized to effectively compress ultrasonic phased array images in the time and frequency domains. By projecting the image onto its Discrete Cosine Transform (DCT) domain, a novel scheme was implemented to verify the potential of CS for data reduction, as well as to explore its reconstruction accuracy. The results from CIVA simulations indicate that both time- and frequency-domain CS can accurately reconstruct array images using fewer samples than the minimum requirement of the Nyquist theorem. For experimental verification with three types of artificial flaws, although considerable data reduction can be achieved with defects clearly preserved, it is currently impossible to break the Nyquist limitation in the time domain. Fortunately, qualified recovery in the frequency domain makes it possible, representing a real breakthrough for phased array image reconstruction. As a case study, the proposed CS procedure is applied to the inspection of an engine cylinder cavity containing different pit defects, and the results show that orthogonal matching pursuit (OMP)-based CS guarantees the performance for real application. PMID:29738452
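
    The compressibility premise behind the scheme can be checked in a few lines with a synthetic image (scipy's DCT; not the authors' CIVA data): keep only the largest few percent of DCT coefficients and measure the reconstruction error:

      import numpy as np
      from scipy.fft import dctn, idctn

      rng = np.random.default_rng(1)
      img = np.zeros((64, 64))
      img[20:24, 30:40] = 1.0                       # a crude "flaw indication"
      img += 0.05 * rng.standard_normal(img.shape)  # measurement noise

      coeffs = dctn(img, norm="ortho")
      thresh = np.quantile(np.abs(coeffs), 0.95)    # keep top 5% of coefficients
      sparse = np.where(np.abs(coeffs) >= thresh, coeffs, 0)
      recon = idctn(sparse, norm="ortho")
      print(np.linalg.norm(img - recon) / np.linalg.norm(img))  # relative error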

  1. A Novel Energy-Efficient Approach for Human Activity Recognition

    PubMed Central

    Zheng, Lingxiang; Wu, Dihong; Ruan, Xiaoyang; Weng, Shaolin; Tang, Biyu; Lu, Hai; Shi, Haibin

    2017-01-01

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is less than the activity frequency, i.e., the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and six females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper. PMID:28885560

  2. SU-G-IeP1-13: Sub-Nyquist Dynamic MRI Via Prior Rank, Intensity and Sparsity Model (PRISM)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, B; Gao, H

    Purpose: Accelerated dynamic MRI is important for MRI-guided radiotherapy. Inspired by compressive sensing (CS), sub-Nyquist dynamic MRI has been an active research area, i.e., sparse sampling in k-t space for accelerated dynamic MRI. This work investigates sub-Nyquist dynamic MRI via a previously developed CS model, namely the Prior Rank, Intensity and Sparsity Model (PRISM). Methods: The proposed method utilizes PRISM with rank minimization and incoherent sampling patterns for sub-Nyquist reconstruction. In PRISM, the low-rank background image, which is automatically calculated by rank minimization, is excluded from the L1 minimization step of the CS reconstruction to further sparsify the residual image, thus allowing for higher acceleration rates. Furthermore, the sampling pattern in k-t space is made more incoherent by sampling a different set of k-space points at different temporal frames. Results: Reconstruction results from the L1-sparsity method and the PRISM method with 30% undersampled data and 15% undersampled data are compared to demonstrate the power of PRISM for dynamic MRI. Conclusion: A sub-Nyquist MRI reconstruction method based on PRISM is developed, with improved image quality compared to the L1-sparsity method.

  3. Sub-Nyquist Sampling and Moire-Like Waveform Distortions

    NASA Technical Reports Server (NTRS)

    Williams, Glenn L.

    2000-01-01

    Investigations of aliasing effects in digital waveform sampling have revealed the existence of a mathematical field and a pseudo-alias domain lying to the left of a "Nyquist line" in a plane defining the boundary between two domains of sampling. To the right of the line lies the classic alias domain. For signals band-limited below the Nyquist limit, the displayed output may show a false modulation envelope. The effect occurs whenever the sample rate and the signal frequency are related by ratios of mutually prime integers. Belying the principle of a 10:1 sampling ratio being "good enough", this distortion easily occurs in graphed one-dimensional waveforms and two-dimensional images, and occurs daily on television.
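
    The false envelope is easy to reproduce numerically (the 10.1:1 ratio below is an invented example of the mutually prime relationship): sample a constant-amplitude tone at 10.1 samples per cycle and track the largest sample in each near-cycle block; the sampled maxima drift between roughly 0.95 and 1.0 even though the tone never changes:

      import numpy as np

      fs_per_cycle = 10.1                      # samples per signal cycle
      n = np.arange(500)
      x = np.sin(2 * np.pi * n / fs_per_cycle)
      # Each 10-sample block covers ~one cycle; its maximum depends on where
      # the samples happen to land relative to the crest, producing the
      # moire-like envelope described above.
      block_maxima = [x[i:i + 10].max() for i in range(0, 490, 10)]
      print(min(block_maxima), max(block_maxima))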

  4. Sound-field measurement with moving microphones

    PubMed Central

    Katzberg, Fabrice; Mazur, Radoslaw; Maass, Marco; Koch, Philipp; Mertins, Alfred

    2017-01-01

    Closed-room scenarios are characterized by reverberation, which decreases the performance of applications such as hands-free teleconferencing and multichannel sound reproduction. However, exact knowledge of the sound field inside a volume of interest enables the compensation of room effects and allows for a performance improvement within a wide range of applications. The sampling of sound fields involves the measurement of spatially dependent room impulse responses, where the Nyquist-Shannon sampling theorem applies in the temporal and spatial domains. The spatial measurement often requires a huge number of sampling points and entails other difficulties, such as the need for exact calibration of a large number of microphones. In this paper, a method for measuring sound fields using moving microphones is presented. The number of microphones is customizable, allowing for a tradeoff between hardware effort and measurement time. The goal is to reconstruct room impulse responses on a regular grid from data acquired with microphones between grid positions, in general. For this, the sound field at equidistant positions is related to the measurements taken along the microphone trajectories via spatial interpolation. The benefits of using perfect sequences for excitation, a multigrid recovery, and the prospects for reconstruction by compressed sensing are presented. PMID:28599533

  5. Packet loss mitigation for biomedical signals in healthcare telemetry.

    PubMed

    Garudadri, Harinath; Baheti, Pawan K

    2009-01-01

    In this work, we propose an effective application layer solution for packet loss mitigation in the context of Body Sensor Networks (BSN) and healthcare telemetry. Packet losses occur due to many reasons including excessive path loss, interference from other wireless systems, handoffs, congestion, system loading, etc. A call for action is in order, as packet losses can have an extremely adverse impact on many healthcare applications relying on BAN and WAN technologies. Our approach for packet loss mitigation is based on Compressed Sensing (CS), an emerging signal processing concept, wherein significantly fewer sensor measurements than suggested by the Shannon/Nyquist sampling theorem can be used to recover signals with arbitrarily fine resolution. We present simulation results demonstrating graceful degradation of performance with increasing packet loss rate. We also compare the proposed approach with retransmissions. The CS based packet loss mitigation approach was found to maintain up to 99% beat-detection accuracy at packet loss rates of 20%, with a constant latency of less than 2.5 seconds.

  6. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar.

    PubMed

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-09-09

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampling. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fused result of each radar's estimation is fed to the extended Kalman filter (EKF) for the first filtering stage. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering stage, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy improves dramatically, and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method.
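
    The cyclic ambiguity mentioned above follows from spatial undersampling: a uniform array with element spacing d > lambda/2 cannot distinguish a direction theta from any direction whose sine differs by an integer multiple of lambda/d. A quick enumeration (the spacing and angle are made-up values):

      import numpy as np

      def ambiguous_angles(theta_deg, d_over_lambda):
          """Directions indistinguishable from theta for spacing d (in wavelengths)."""
          s = np.sin(np.radians(theta_deg)) + np.arange(-10, 11) / d_over_lambda
          return np.degrees(np.arcsin(s[np.abs(s) <= 1]))

      print(ambiguous_angles(10.0, 4.0))  # 4-wavelength spacing -> many aliases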

  7. Compressive power spectrum sensing for vibration-based output-only system identification of structural systems in the presence of noise

    NASA Astrophysics Data System (ADS)

    Tau Siesakul, Bamrung; Gkoktsi, Kyriaki; Giaralis, Agathoklis

    2015-05-01

    Motivated by the need to reduce monetary and energy consumption costs of wireless sensor networks in undertaking output-only/operational modal analysis of engineering structures, this paper considers a multi-coset analog-to-information converter for structural system identification from acceleration response signals of white-noise-excited linear damped structures sampled at sub-Nyquist rates. The underlying natural frequencies, peak gains in the frequency domain, and critical damping ratios of the vibrating structures are estimated directly from the sub-Nyquist measurements and, therefore, the computationally demanding signal reconstruction step is by-passed. This is accomplished by first employing a power spectrum blind sampling (PSBS) technique for multi-band wide-sense stationary stochastic processes in conjunction with deterministic non-uniform multi-coset sampling patterns derived from solving a weighted least squares optimization problem. Next, modal properties are derived by the standard frequency domain peak picking algorithm. Special attention is focused on assessing the potential of the adopted PSBS technique, which poses no sparsity requirements on the sensed signals, to derive accurate estimates of modal structural system properties from noisy sub-Nyquist measurements. To this aim, sub-Nyquist sampled acceleration response signals corrupted by various levels of additive white noise pertaining to a benchmark space truss structure with closely spaced natural frequencies are obtained within an efficient Monte Carlo simulation-based framework. Accurate estimates of natural frequencies and reasonable estimates of local peak spectral ordinates and critical damping ratios are derived from measurements sampled at about 70% below the Nyquist rate and for SNR as low as 0 dB, demonstrating that the adopted approach enjoys noise immunity.

  8. Practical Sub-Nyquist Sampling via Array-Based Compressed Sensing Receiver Architecture

    DTIC Science & Technology

    2016-07-10

    ...different array elements at different sub-Nyquist sampling rates. Signal processing inspired by the sparse fast Fourier transform allows for signal ... reconstruction algorithms can be computationally demanding (REF). The related sparse Fourier transform algorithms aim to reduce the processing time necessary to ... compute the DFT of frequency-sparse signals [7]. In particular, the sparse fast Fourier transform (sFFT) achieves processing time better than the ...

  9. Generative adversarial networks recover features in astrophysical images of galaxies beyond the deconvolution limit

    NASA Astrophysics Data System (ADS)

    Schawinski, Kevin; Zhang, Ce; Zhang, Hantian; Fowler, Lucas; Santhanam, Gokula Krishnan

    2017-05-01

    Observations of astrophysical objects such as galaxies are limited by various sources of random and systematic noise from the sky background, the optical system of the telescope and the detector used to record the data. Conventional deconvolution techniques are limited in their ability to recover features in imaging data by the Shannon-Nyquist sampling theorem. Here, we train a generative adversarial network (GAN) on a sample of 4550 images of nearby galaxies at 0.01 < z < 0.02 from the Sloan Digital Sky Survey and conduct 10× cross-validation to evaluate the results. We present a method using a GAN trained on galaxy images that can recover features from artificially degraded images with worse seeing and higher noise than the original with a performance that far exceeds simple deconvolution. The ability to better recover detailed features such as galaxy morphology from low signal to noise and low angular resolution imaging data significantly increases our ability to study existing data sets of astrophysical objects as well as future observations with observatories such as the Large Synoptic Sky Telescope (LSST) and the Hubble and James Webb space telescopes.

  10. Sampling in the light of Wigner distribution.

    PubMed

    Stern, Adrian; Javidi, Bahram

    2004-03-01

    We propose a new method for analysis of the sampling and reconstruction conditions of real and complex signals by use of the Wigner domain. It is shown that the Wigner domain may provide a better understanding of the sampling process than the traditional Fourier domain. For example, it explains how certain non-bandlimited complex functions can be sampled and perfectly reconstructed. On the basis of observations in the Wigner domain, we derive a generalization to the Nyquist sampling criterion. By using this criterion, we demonstrate simple preprocessing operations that can adapt a signal that does not fulfill the Nyquist sampling criterion. The preprocessing operations demonstrated can be easily implemented by optical means.

  11. New Techniques in Time-Frequency Analysis: Adaptive Band, Ultra-Wide Band and Multi-Rate Signal Processing

    DTIC Science & Technology

    2016-03-02

    ...Nyquist tiles and sampling groups in Euclidean geometry, and discussed the extension of these concepts to hyperbolic and spherical geometry and ... hyperbolic or spherical spaces. We look to develop a structure for the tiling of frequency spaces in both Euclidean and non-Euclidean domains. In particular ... we establish Nyquist tiles and sampling groups in Euclidean geometry, and discuss the extension of these concepts to hyperbolic and spherical geometry ...

  12. Target Tracking Using SePDAF under Ambiguous Angles for Distributed Array Radar

    PubMed Central

    Long, Teng; Zhang, Honggang; Zeng, Tao; Chen, Xinliang; Liu, Quanhua; Zheng, Le

    2016-01-01

    Distributed array radar can improve radar detection capability and measurement accuracy. However, it suffers cyclic ambiguity in its angle estimates according to the spatial Nyquist sampling theorem, since the large sparse array is undersampling. Consequently, the state estimation accuracy and track validity probability degrade when the ambiguous angles are used directly for target tracking. This paper proposes a second probability data association filter (SePDAF)-based tracking method for distributed array radar. Firstly, the target motion model and radar measurement model are built. Secondly, the fused result of each radar's estimation is fed to the extended Kalman filter (EKF) for the first filtering stage. Thirdly, taking this result as prior knowledge and associating it with the array-processed ambiguous angles, the SePDAF is applied to accomplish the second filtering stage, achieving a highly accurate and stable trajectory with relatively low computational complexity. Moreover, the azimuth filtering accuracy improves dramatically, and the position filtering accuracy also improves. Finally, simulations illustrate the effectiveness of the proposed method. PMID:27618058

  13. Quantification of extra virgin olive oil in dressing and edible oil blends using the representative TMS-4,4'-desmethylsterols gas-chromatographic-normalized fingerprint.

    PubMed

    Pérez-Castaño, Estefanía; Sánchez-Viñas, Mercedes; Gázquez-Evangelista, Domingo; Bagur-González, M Gracia

    2018-01-15

    This paper describes and discusses the application of trimethylsilyl (TMS)-4,4'-desmethylsterol derivative chromatographic fingerprints (obtained from an off-line HPLC-GC-FID system) for the quantification of extra virgin olive oil in commercial vinaigrettes, salad dressings and in-house reference materials (i-HRM) using two different partial least squares regression (PLS-R) multivariate quantification methods. Several data pre-processing strategies were applied, the full pipeline being: (i) internal normalization; (ii) sampling based on the Nyquist theorem; (iii) internal correlation optimized shifting (icoshift); (iv) baseline correction; (v) mean centering; and (vi) zone selection. The first model corresponds to a matrix of dimensions 'n×911' variables and the second one to a matrix of dimensions 'n×431' variables. It should be highlighted that the two proposed PLS-R models allow the quantification of extra virgin olive oil in binary blends, foodstuffs, etc., when its content is greater than 25%. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Photonic compressed sensing Nyquist folding receiver

    DTIC Science & Technology

    2017-09-01

    ...filter. Two independent photonic receiver architectures are designed and analyzed over the course of this research. Both receiver designs are ... undersamples the signals using an optical modulator configuration at 1550 nm and collects the detected samples in a low-pass interpolation filter ... [fragment of the report's abbreviations list: Electronic Intelligence; EW, Electronic Warfare; FM, Frequency Modulated; LNA, Low Noise Amplifier; LPF, Low Pass Filter; MZI, Mach-Zehnder Interferometer; NYFR, Nyquist ...]

  15. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Nagarajaiah, Satish; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-03-01

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration measurement sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in mass-loading effects and modification of the structure's surface. Non-contact measurement methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30-60 Hz, while high-speed cameras for higher-frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments where output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, on temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  16. Passive ultrasonics using sub-Nyquist sampling of high-frequency thermal-mechanical noise.

    PubMed

    Sabra, Karim G; Romberg, Justin; Lani, Shane; Degertekin, F Levent

    2014-06-01

    Monolithic integration of capacitive micromachined ultrasonic transducer arrays with low noise complementary metal oxide semiconductor electronics minimizes interconnect parasitics thus allowing the measurement of thermal-mechanical (TM) noise. This enables passive ultrasonics based on cross-correlations of diffuse TM noise to extract coherent ultrasonic waves propagating between receivers. However, synchronous recording of high-frequency TM noise puts stringent requirements on the analog to digital converter's sampling rate. To alleviate this restriction, high-frequency TM noise cross-correlations (12-25 MHz) were estimated instead using compressed measurements of TM noise which could be digitized at a sampling frequency lower than the Nyquist frequency.

  17. Dynamic measurement of temperature, velocity, and density in hot jets using Rayleigh scattering

    NASA Astrophysics Data System (ADS)

    Mielke, Amy F.; Elam, Kristie A.

    2009-10-01

    A molecular Rayleigh scattering technique is utilized to measure gas temperature, velocity, and density in unseeded gas flows at sampling rates up to 10 kHz, providing fluctuation information up to 5 kHz based on the Nyquist theorem. A high-power continuous-wave laser beam is focused at a point in an air flow field and Rayleigh scattered light is collected and fiber-optically transmitted to a Fabry-Perot interferometer for spectral analysis. Photomultiplier tubes operated in the photon counting mode allow high-frequency sampling of the total signal level and the circular interference pattern to provide dynamic density, temperature, and velocity measurements. Mean and root mean square velocity, temperature, and density, as well as power spectral density calculations, are presented for measurements in a hydrogen-combustor heated jet facility with a 50.8-mm diameter nozzle at NASA John H. Glenn Research Center at Lewis Field. The Rayleigh measurements are compared with particle image velocimetry data and computational fluid dynamics predictions. This technique is aimed at aeronautics research related to identifying noise sources in free jets, as well as applications in supersonic and hypersonic flows where measurement of flow properties, including mass flux, is required in the presence of shocks and ionization occurrence.

  18. Exploiting the Modified Colombo-Nyquist Rule for Co-estimating Sub-monthly Gravity Field Solutions from a GRACE-like Mission

    NASA Astrophysics Data System (ADS)

    Devaraju, B.; Weigelt, M.; Mueller, J.

    2017-12-01

    In order to suppress the impact of aliasing errors on the standard monthly GRACE gravity-field solutions, co-estimating sub-monthly (daily/two-day) low-degree solutions has been suggested. The maximum degree of the low-degree solutions is chosen via the Colombo-Nyquist rule of thumb. However, it is now established that the sampling of the satellites restricts the maximum estimable order, not the degree (the modified Colombo-Nyquist rule). Therefore, in this contribution, we co-estimate low-order sub-monthly solutions, and compare and contrast them with the low-degree sub-monthly solutions. We also investigate their efficacy in dealing with aliasing errors.

  19. Integral imaging based light field display with enhanced viewing resolution using holographic diffuser

    NASA Astrophysics Data System (ADS)

    Yan, Zhiqiang; Yan, Xingpeng; Jiang, Xiaoyu; Gao, Hui; Wen, Jun

    2017-11-01

    An integral imaging based light field display method is proposed by use of holographic diffuser, and enhanced viewing resolution is gained over conventional integral imaging systems. The holographic diffuser is fabricated with controlled diffusion characteristics, which interpolates the discrete light field of the reconstructed points to approximate the original light field. The viewing resolution can thus be improved and independent of the limitation imposed by Nyquist sampling frequency. An integral imaging system with low Nyquist sampling frequency is constructed, and reconstructed scenes of high viewing resolution using holographic diffuser are demonstrated, verifying the feasibility of the method.

  20. Breaking through the bandwidth barrier in distributed fiber vibration sensing by sub-Nyquist randomized sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zhu, Tao; Zheng, Hua; Kuang, Yang; Liu, Min; Huang, Wei

    2017-04-01

    The round trip time of the light pulse limits the maximum detectable frequency response range of vibration in phase-sensitive optical time domain reflectometry (φ-OTDR). We propose a method to break the frequency response range restriction of the φ-OTDR system by modulating the light pulse interval randomly, which enables random sampling for every vibration point along a long sensing fiber. This sub-Nyquist randomized sampling method is suited to detecting sparse, wideband-frequency vibration signals. Resonance vibration signals up to the MHz range with dozens of frequency components, as well as a 1.153 MHz single-frequency vibration signal, are clearly identified over a sensing range of 9.6 km with a 10 kHz maximum sampling rate.
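
    The enabling principle, that precisely known but irregular sample times let a tone far above the mean rate be identified, can be demonstrated with a standard tool on synthetic data (this is an illustration, not the paper's processing chain):

      import numpy as np
      from scipy.signal import lombscargle

      rng = np.random.default_rng(2)
      t = np.sort(rng.uniform(0, 1e-2, 200))   # ~20 kS/s mean rate, random times
      f_vib = 1.153e6                          # far above the mean Nyquist rate
      x = np.sin(2 * np.pi * f_vib * t)

      freqs = np.linspace(1.0e6, 1.3e6, 3001)
      power = lombscargle(t, x, 2 * np.pi * freqs)  # expects angular frequencies
      print(freqs[np.argmax(power)])                # peaks near 1.153 MHz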

  1. Impedance and modulus spectroscopic study of nano hydroxyapatite

    NASA Astrophysics Data System (ADS)

    Jogiya, B. V.; Jethava, H. O.; Tank, K. P.; Raviya, V. R.; Joshi, M. J.

    2016-05-01

    Hydroxyapatite (Ca10 (PO4)6 (OH)2, HAP) is the main inorganic component of the hard tissue in bones and an important material for orthopedic and dental implant applications. Nano HAP is of great interest due to its various bio-medical applications. In the present work, nano HAP was synthesized using a surfactant-mediated approach. The structure and morphology of the synthesized nano HAP were examined by powder XRD and TEM. An impedance study was carried out on a pelletized sample in a frequency range of 100 Hz to 20 MHz at room temperature. The variation of dielectric constant, dielectric loss, and a.c. conductivity with the frequency of the applied field was studied. Nyquist and modulus plots were drawn. The Nyquist plot showed two semicircular arcs, which indicated the presence of grain and grain-boundary effects in the sample. The typical behavior of the Nyquist plot was represented by an equivalent circuit having two parallel RC combinations in series.
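
    The two-semicircle behavior follows directly from the stated equivalent circuit. A short numpy check with invented component values (one RC pair for the grain, one for the grain boundary):

      import numpy as np

      def z_parallel_rc(r, c, w):
          """Impedance of a resistor and capacitor in parallel."""
          return r / (1 + 1j * w * r * c)

      w = 2 * np.pi * np.logspace(2, 7.3, 400)   # ~100 Hz to 20 MHz sweep
      z = z_parallel_rc(1e4, 1e-10, w) + z_parallel_rc(5e4, 1e-8, w)
      # Nyquist plot: Re(z) on x vs -Im(z) on y gives one semicircle per RC
      # element, with the low-frequency intercept at R1 + R2.
      print(z.real.min(), z.real.max(), (-z.imag).max())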

  2. Spectral reconstruction of signals from periodic nonuniform subsampling based on a Nyquist folding scheme

    NASA Astrophysics Data System (ADS)

    Jiang, Kaili; Zhu, Jun; Tang, Bin

    2017-12-01

    Periodic nonuniform sampling occurs in many applications, and the Nyquist folding receiver (NYFR) is an efficient, low-complexity, broadband spectrum sensing architecture. In this paper, we first show that the radio frequency (RF) sample clock function of the NYFR is periodically nonuniform. Then, the classical results on periodic nonuniform sampling are applied to the NYFR. We extend the spectral reconstruction algorithm of the time-series decomposition model to the subsampling case by using the spectrum characteristics of the NYFR. The subsampling case is common in broadband spectrum surveillance. Finally, we use the example of a large-bandwidth LFM signal to verify the proposed algorithm and compare it with the orthogonal matching pursuit (OMP) algorithm.

  3. Identification of multiple leaks in pipeline: Linearized model, maximum likelihood, and super-resolution localization

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Ghidaoui, Mohamed S.

    2018-07-01

    This paper considers the problem of identifying multiple leaks in a water-filled pipeline based on inverse transient wave theory. The analytical solution to this problem involves nonlinear interaction terms between the various leaks. This paper shows analytically and numerically that these nonlinear terms are of the order of the leak sizes to the power two and thus negligible. As a result of this simplification, a maximum likelihood (ML) scheme that identifies leak locations and leak sizes separately is formulated and tested. The ML estimation scheme is found to be highly efficient and robust with respect to noise. In addition, the ML method is a super-resolution leak localization scheme because its resolvable leak distance (approximately 0.15λmin, where λmin is the minimum wavelength) is below the Nyquist-Shannon sampling theorem limit (0.5λmin). Moreover, the Cramér-Rao lower bound (CRLB) is derived and used to show the efficiency of the ML estimates. The variance of the ML estimator approximates the CRLB, proving that the ML scheme belongs to the class of best unbiased estimators among leak localization methods.

  4. Digital signal processing at Bell Labs-Foundations for speech and acoustics research

    NASA Astrophysics Data System (ADS)

    Rabiner, Lawrence R.

    2004-05-01

    Digital signal processing (DSP) is a fundamental tool for much of the research that has been carried out at Bell Labs in the areas of speech and acoustics. The foundations of DSP include the sampling theorem of Nyquist, the method for digitization of analog signals by Shannon et al., methods of spectral analysis by Tukey, the cepstrum by Bogert et al., and the FFT by Tukey (and Cooley of IBM). Essentially all of these early foundations of DSP came out of the Bell Labs Research Lab in the 1930s, 1940s, 1950s, and 1960s. This research was motivated by practical applications (mainly in the areas of speech, sonar, and acoustics) that led to novel design methods for digital filters (Kaiser, Golden, Rabiner, Schafer), spectrum analysis methods (Rabiner, Schafer, Allen, Crochiere), fast convolution methods based on the FFT (Helms, Bergland), and advanced digital systems used to implement telephony channel banks (Jackson, McDonald, Freeny, Tewksbury). This talk summarizes the key contributions to DSP made at Bell Labs and illustrates how DSP was utilized in speech and acoustics research. It also shows the vast, worldwide impact of this DSP research on modern consumer electronics.

  5. A neural algorithm for the non-uniform and adaptive sampling of biomedical data.

    PubMed

    Mesin, Luca

    2016-04-01

    Body sensors are finding increasing application in self-monitoring for health care and in the remote surveillance of vulnerable people. The physiological data to be sampled can be non-stationary, with bursts of high amplitude and frequency content providing most of the information. Such data could be sampled efficiently with a non-uniform schedule that increases the sampling rate only during activity bursts. A real-time adaptive algorithm is proposed to select the sampling rate, in order to reduce the number of measured samples while still recording the main information. The algorithm is based on a neural network which predicts the subsequent samples and their uncertainties, requiring a measurement only when the risk of the prediction is larger than a selectable threshold. Four examples of application to biomedical data are discussed: electromyogram, electrocardiogram, electroencephalogram, and body acceleration. Sampling rates are reduced below the Nyquist limit while still preserving an accurate representation of the data and of their power spectral densities (PSD). For example, sampling at 60% of the Nyquist frequency, the percentage average rectified errors in estimating the signals are on the order of 10% and the PSD is fairly well represented up to the highest frequencies. The method outperforms both uniform sampling and compressive sensing applied to the same data. The discussed method makes it possible to go beyond the Nyquist limit while preserving the information content of non-stationary biomedical signals. It could find application in body sensor networks to lower the number of wireless communications (saving sensor power) and to reduce memory occupation. Copyright © 2016 Elsevier Ltd. All rights reserved.
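
    As a toy illustration of the sampling policy described above, the sketch below replaces the paper's neural predictor with a two-point linear extrapolator and runs offline on a known synthetic signal; the threshold and signal are arbitrary choices, not the paper's.

    ```python
    import numpy as np

    def adaptive_schedule(x, risk_threshold):
        kept = [0, 1]                        # always keep the first two samples
        for n in range(2, len(x)):
            i, j = kept[-2], kept[-1]
            pred = x[j] + (x[j] - x[i]) * (n - j) / (j - i)  # linear extrapolation
            if abs(pred - x[n]) > risk_threshold:            # prediction too risky:
                kept.append(n)                               # take a measurement
        return np.array(kept)

    t = np.linspace(0.0, 1.0, 2000)          # baseline: 2 kHz uniform sampling
    burst = (t > 0.4) & (t < 0.5)            # 100 ms activity burst
    x = np.where(burst, np.sin(2 * np.pi * 80 * t), 0.1 * np.sin(2 * np.pi * 2 * t))
    kept = adaptive_schedule(x, risk_threshold=0.05)
    print(f"kept {len(kept)}/{len(x)} samples; "
          f"{np.mean((kept >= 800) & (kept < 1000)):.0%} of them inside the burst")
    ```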

  6. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in mass-loading effects and modification of the structure's surface. Non-contact methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher-frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments in which output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  7. Blind identification of full-field vibration modes of output-only structures from uniformly-sampled, possibly temporally-aliased (sub-Nyquist), video measurements

    DOE PAGES

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; ...

    2016-12-05

    Enhancing the spatial and temporal resolution of vibration measurements and modal analysis could significantly benefit dynamic modelling, analysis, and health monitoring of structures. For example, spatially high-density mode shapes are critical for accurate vibration-based damage localization. In experimental or operational modal analysis, higher (frequency) modes, which may be outside the frequency range of the measurement, contain local structural features that can improve damage localization as well as the construction and updating of the modal-based dynamic model of the structure. In general, the resolution of vibration measurements can be increased by enhanced hardware. Traditional vibration sensors such as accelerometers have high-frequency sampling capacity; however, they are discrete point-wise sensors providing only sparse, low-spatial-resolution measurements, while dense deployment to achieve high spatial resolution is expensive and results in mass-loading effects and modification of the structure's surface. Non-contact methods such as scanning laser vibrometers provide high spatial and temporal resolution sensing capacity; however, they make measurements sequentially, which requires considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost, agile, and provide high-spatial-resolution, simultaneous measurements. Combined with vision-based algorithms (e.g., image correlation or template matching, optical flow, etc.), video camera based measurements have been successfully used for experimental and operational vibration measurement and subsequent modal analysis. However, the sampling frequency of most affordable digital cameras is limited to 30–60 Hz, while high-speed cameras for higher-frequency vibration measurements are extremely costly. This work develops a computational algorithm capable of performing vibration measurement at a uniform sampling frequency lower than what is required by the Shannon-Nyquist sampling theorem for output-only modal analysis. In particular, the spatio-temporal uncoupling property of the modal expansion of structural vibration responses enables a direct modal decoupling of the temporally-aliased vibration measurements by existing output-only modal analysis methods, yielding (full-field) mode shape estimates directly. The signal aliasing properties in modal analysis are then exploited to estimate the modal frequencies and damping ratios. The proposed method is validated by laboratory experiments in which output-only modal identification is conducted on temporally-aliased acceleration responses and, in particular, temporally-aliased video measurements of bench-scale structures, including a three-story building structure and a cantilever beam.

  8. Split Bregman's optimization method for image construction in compressive sensing

    NASA Astrophysics Data System (ADS)

    Skinner, D.; Foo, S.; Meyer-Bäse, A.

    2014-05-01

    The theory of compressive sampling (CS) was reintroduced by Candes, Romberg and Tao, and D. Donoho in 2006. Using a priori knowledge that a signal is sparse, it has been mathematically proven that CS can defy the Nyquist sampling theorem. Theoretically, reconstruction of a CS image relies on minimization and optimization techniques to solve this complex, almost NP-complete problem. There are many paths to consider when compressing and reconstructing an image, but these methods have remained untested and unclear on natural images, such as underwater sonar images. The goal of this research is to perfectly reconstruct the original sonar image from a sparse signal while maintaining pertinent information, such as mine-like objects, in side-scan sonar (SSS) images. Goldstein and Osher have shown how to reconstruct the original image iteratively through a method called Split Bregman iteration. This method "decouples" the energies using portions of the energy from both the ℓ1 and ℓ2 norms. Once the energies are split, Bregman iteration is used to solve the unconstrained optimization problem by recursively solving the subproblems simultaneously. The faster these two steps or energies can be solved, the faster the overall method becomes. While the majority of CS research is still focused on the medical field, this paper demonstrates the effectiveness of the Split Bregman method on sonar images.
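
    For readers unfamiliar with the method, here is a minimal sketch of Split Bregman iteration applied to 1-D anisotropic TV denoising, min_u (mu/2)||u - f||^2 + ||Du||_1; the parameters and test signal are illustrative, and the paper's 2-D sonar setting additionally involves a sensing operator in the data term.

    ```python
    import numpy as np

    def shrink(v, gamma):
        """Soft thresholding: the closed-form solution of the l1 subproblem."""
        return np.sign(v) * np.maximum(np.abs(v) - gamma, 0.0)

    def split_bregman_tv(f, mu=20.0, lam=2.0, iters=100):
        n = len(f)
        D = np.diff(np.eye(n), axis=0)           # forward-difference operator
        A = mu * np.eye(n) + lam * D.T @ D       # system matrix for the u-update
        d = np.zeros(n - 1)
        b = np.zeros(n - 1)
        u = f.copy()
        for _ in range(iters):
            u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))  # quadratic step
            Du = D @ u
            d = shrink(Du + b, 1.0 / lam)        # decoupled l1 step
            b = b + Du - d                       # Bregman update
        return u

    rng = np.random.default_rng(1)
    clean = np.repeat([0.0, 1.0, 0.3], 100)      # piecewise-constant test signal
    noisy = clean + 0.1 * rng.standard_normal(clean.size)
    u = split_bregman_tv(noisy)
    print(f"noisy RMSE {np.std(noisy - clean):.3f} -> denoised RMSE "
          f"{np.sqrt(np.mean((u - clean) ** 2)):.3f}")
    ```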

  9. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients

    PubMed Central

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-01-01

    Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have two drawbacks: (1) they are susceptible to subjective factors; (2) they have only a few rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, a large amount of data needs to be sampled and transmitted. This paper proposes a novel wearable sensor network system, based on compressed sensing technology, to monitor and quantitatively assess upper limb motor function. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder-touch exercises can be compressed, with the compressed signal length less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system not only reduces the amount of data sampled and transmitted, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information. PMID:26861337

  10. A Compressed Sensing-Based Wearable Sensor Network for Quantitative Assessment of Stroke Patients.

    PubMed

    Yu, Lei; Xiong, Daxi; Guo, Liquan; Wang, Jiping

    2016-02-05

    Clinical rehabilitation assessment is an important part of the therapy process because it is the premise for prescribing suitable rehabilitation interventions. However, the commonly used assessment scales have two drawbacks: (1) they are susceptible to subjective factors; (2) they have only a few rating levels and are influenced by a ceiling effect, making it impossible to exactly detect any further improvement in movement. Meanwhile, energy constraints are a primary design consideration in wearable sensor network systems since they are often battery-operated. Traditionally, for wearable sensor network systems that follow the Shannon/Nyquist sampling theorem, a large amount of data needs to be sampled and transmitted. This paper proposes a novel wearable sensor network system, based on compressed sensing technology, to monitor and quantitatively assess upper limb motor function. With the sparse representation model, less data is transmitted to the computer than with traditional systems. The experimental results show that the accelerometer signals of Bobath handshake and shoulder-touch exercises can be compressed, with the compressed signal length less than 1/3 of the raw signal length. More importantly, the reconstruction errors have no influence on the predictive accuracy of the Brunnstrom stage classification model. The results also indicate that the proposed system not only reduces the amount of data sampled and transmitted, but also that the reconstructed accelerometer signals can be used for quantitative assessment without any loss of useful information.

  11. Dictionary Learning for Data Recovery in Positron Emission Tomography

    PubMed Central

    Valiollahzadeh, SeyyedMajid; Clark, John W.; Mawlawi, Osama

    2015-01-01

    Compressed sensing (CS) aims to recover images from fewer measurements than required by the Nyquist sampling theorem. Most CS methods use analytically predefined sparsifying domains such as total variation (TV), wavelets, curvelets, and finite transforms to perform this task. In this study, we evaluated the use of dictionary learning (DL) as a sparsifying domain to reconstruct PET images from partially sampled data, and compared the results to the partially and fully sampled images (baseline). A CS model based on learning an adaptive dictionary over image patches was developed to recover missing observations in PET data acquisition. The recovery was done iteratively in two steps: a dictionary learning step and an image reconstruction step. Two experiments were performed to evaluate the proposed CS recovery algorithm: an IEC phantom study and five patient studies. In each case, 11% of the detectors of a GE PET/CT system were removed and the acquired sinogram data were recovered using the proposed DL algorithm. The recovered images (DL) as well as the partially sampled images (with detector gaps) for both experiments were then compared to the baseline. Comparisons were done by calculating the RMSE, contrast recovery, and SNR in ROIs drawn in the background and spheres of the phantom as well as in patient lesions. For the phantom experiment, the RMSE of the DL-recovered images was 5.8% compared with the baseline images, while it was 17.5% for the partially sampled images. In the patient studies, the RMSE of the DL-recovered images was 3.8%, while it was 11.3% for the partially sampled images. The proposed CS with DL is a good approach to recover partially sampled PET data. This approach has implications towards reducing scanner cost while maintaining accurate PET image quantification. PMID:26161630

  12. Dictionary learning for data recovery in positron emission tomography

    NASA Astrophysics Data System (ADS)

    Valiollahzadeh, SeyyedMajid; Clark, John W., Jr.; Mawlawi, Osama

    2015-08-01

    Compressed sensing (CS) aims to recover images from fewer measurements than required by the Nyquist sampling theorem. Most CS methods use analytically predefined sparsifying domains such as total variation, wavelets, curvelets, and finite transforms to perform this task. In this study, we evaluated the use of dictionary learning (DL) as a sparsifying domain to reconstruct PET images from partially sampled data, and compared the results to the partially and fully sampled images (baseline). A CS model based on learning an adaptive dictionary over image patches was developed to recover missing observations in PET data acquisition. The recovery was done iteratively in two steps: a dictionary learning step and an image reconstruction step. Two experiments were performed to evaluate the proposed CS recovery algorithm: an IEC phantom study and five patient studies. In each case, 11% of the detectors of a GE PET/CT system were removed and the acquired sinogram data were recovered using the proposed DL algorithm. The recovered images (DL) as well as the partially sampled images (with detector gaps) for both experiments were then compared to the baseline. Comparisons were done by calculating the RMSE, contrast recovery, and SNR in ROIs drawn in the background and spheres of the phantom as well as in patient lesions. For the phantom experiment, the RMSE of the DL-recovered images was 5.8% compared with the baseline images, while it was 17.5% for the partially sampled images. In the patient studies, the RMSE of the DL-recovered images was 3.8%, while it was 11.3% for the partially sampled images. The proposed CS with DL is a good approach to recover partially sampled PET data. This approach has implications toward reducing scanner cost while maintaining accurate PET image quantification.
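
    As a toy analogue of recovering data behind detector gaps, the sketch below drops roughly 11% of the entries of a synthetically sparse "patch" and recovers it by sparse coding against the rows of the dictionary that were actually observed. The dictionary here is random rather than learned, so this illustrates only the reconstruction step, not the dictionary learning step.

    ```python
    import numpy as np
    from sklearn.linear_model import orthogonal_mp

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 128))           # 64-dim patches, 128 atoms
    D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
    c = np.zeros(128)                            # 5-sparse ground-truth code
    support = rng.choice(128, 5, replace=False)
    c[support] = rng.uniform(1.0, 2.0, 5) * rng.choice([-1.0, 1.0], 5)
    x = D @ c                                    # the full patch

    keep = rng.random(64) > 0.11                 # ~11% of entries go missing
    c_hat = orthogonal_mp(D[keep], x[keep], n_nonzero_coefs=5)
    x_hat = D @ c_hat                            # reconstruct the full patch
    print(f"fill-in RMSE: {np.sqrt(np.mean((x_hat[~keep] - x[~keep]) ** 2)):.2e}")
    ```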

  13. Counting statistics of tunneling current

    NASA Astrophysics Data System (ADS)

    Levitov, L. S.; Reznikov, M.

    2004-09-01

    The form of electron counting statistics of the tunneling current noise in a generic many-body interacting electron system is obtained and universal relations between its different moments are derived. A generalized fluctuation-dissipation theorem providing a relation between current and noise at arbitrary bias-to-temperature ratio eV/kBT is established in the tunneling Hamiltonian approximation. The third correlator of current fluctuations S3 (the skewness of the charge counting distribution) has a universal Schottky-type relation with the current and quasiparticle charge that holds in a wide bias voltage range, both at large and small eV/kBT . The insensitivity of S3 to the Nyquist-Schottky crossover represents an advantage compared to the Schottky formula for the noise power. We discuss the possibility of using the correlator S3 for detecting quasiparticle charge at high temperatures.
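
    The "Nyquist-Schottky crossover" referred to above is, for an ordinary tunnel junction, captured by the textbook relation S(V) = 2eI coth(eV/2kBT); the short sketch below evaluates its two limits (4kBTG thermal noise and 2eI shot noise) for invented circuit values.

    ```python
    import numpy as np

    e, kB = 1.602176634e-19, 1.380649e-23

    def current_noise(V, G, T):
        """S = 2 e I coth(eV / 2 kB T) for a junction with conductance G."""
        I = G * V
        return 2 * e * I / np.tanh(e * V / (2 * kB * T))

    G, T = 1e-6, 1.0                             # 1 uS junction at 1 K
    print(f"thermal limit 4kBTG = {4 * kB * T * G:.2e} A^2/Hz")
    for V in (1e-6, 1e-3):                       # eV/kBT of about 0.01 and 12
        print(f"V = {V:.0e} V: S = {current_noise(V, G, T):.2e} A^2/Hz, "
              f"2eI = {2 * e * G * V:.2e} A^2/Hz")
    ```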

  14. Resonant paramagnetic enhancement of the thermal and zero-point Nyquist noise

    NASA Astrophysics Data System (ADS)

    França, H. M.; Santos, R. B. B.

    1999-01-01

    The interaction between a very thin macroscopic solenoid and a single magnetic particle precessing in an external magnetic field B0 is described by taking into account the thermal and zero-point fluctuations of stochastic electrodynamics. The inductor belongs to an RLC circuit without batteries, and the random motion of the magnetic dipole generates in the solenoid a fluctuating current Idip(t) and a fluctuating voltage εdip(t), with a spectral distribution quite different from that of the Nyquist noise. We show that the mean square value ⟨Idip²⟩ varies enormously when the precession frequency approaches the frequency of the circuit, but is still much smaller than the Nyquist current in the circuit. However, we also show that ⟨Idip²⟩ can reach measurable values if the inductor interacts with a macroscopic sample of magnetic particles (atoms or nuclei) close enough to its coils.

  15. Stimulated Raman scattering microscopy by Nyquist modulation of a two-branch ultrafast fiber source.

    PubMed

    Riek, Claudius; Kocher, Claudius; Zirak, Peyman; Kölbl, Christoph; Fimpel, Peter; Leitenstorfer, Alfred; Zumbusch, Andreas; Brida, Daniele

    2016-08-15

    A highly stable setup for stimulated Raman scattering (SRS) microscopy is presented. It is based on a two-branch femtosecond Er:fiber laser operating at a 40 MHz repetition rate. One of the outputs is directly modulated at the Nyquist frequency with an integrated electro-optic modulator (EOM). This compact source combines jitter-free pulse synchronization with broad tunability and allows for shot-noise-limited SRS detection. The performance of the SRS microscope is illustrated with measurements on samples from materials science and cell biology.

  16. Transmission and full-band coherent detection of polarization-multiplexed all-optical Nyquist signals generated by Sinc-shaped Nyquist pulses

    PubMed Central

    Zhang, Junwen; Yu, Jianjun; Chi, Nan

    2015-01-01

    All-optical methods are considered promising for high-symbol-rate Nyquist signal generation, which has attracted much research interest for high-spectral-efficiency, high-capacity optical communication systems. In this paper, we extend our previous work and report a fully experimental demonstration of polarization-division multiplexed (PDM) all-optical Nyquist signal generation based on sinc-shaped Nyquist pulses with advanced modulation formats, fiber transmission, and single-receiver full-band coherent detection. Using this scheme, we have successfully demonstrated the generation, fiber transmission, and single-receiver full-band coherent detection of all-optical Nyquist PDM-QPSK and PDM-16QAM signals at up to 125 GBaud. Generation and full-band coherent detection of a 1-Tb/s single-carrier PDM-16QAM signal is realized, which shows the advantage and feasibility of single-carrier all-optical Nyquist signals. PMID:26323238
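
    The principle behind sinc-shaped Nyquist pulse generation can be reproduced in a few lines: a flat, phase-locked frequency comb of N lines spaced by df sums to a periodic train of sinc-shaped pulses whose zero crossings fall on multiples of the symbol period 1/(N·df). The comb parameters below are invented, not the paper's.

    ```python
    import numpy as np

    N, df = 9, 10e9                          # 9 comb lines with 10 GHz spacing
    t = np.linspace(-0.2e-9, 0.2e-9, 4001)
    comb = sum(np.cos(2 * np.pi * k * df * t) for k in range(-(N // 2), N // 2 + 1))
    pulse = comb / N                         # normalized periodic sinc-pulse train
    T_symbol = 1 / (N * df)                  # zero-crossing (symbol) period
    print(f"symbol period {T_symbol * 1e12:.1f} ps, peak value {pulse[2000]:.3f}")
    ```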

  17. Interferometric Dynamic Measurement: Techniques Based on High-Speed Imaging or a Single Photodetector

    PubMed Central

    Fu, Yu; Pedrini, Giancarlo

    2014-01-01

    In recent years, optical interferometry-based techniques have been widely used to perform noncontact measurement of dynamic deformation in different industrial areas. In these applications, various physical quantities need to be measured at every instant, and the Nyquist sampling theorem has to be satisfied along the time axis at each measurement point. Two types of techniques have been developed for such measurements: one based on high-speed cameras and the other using a single photodetector. The limitation of the measurement range along the time axis in camera-based technology is mainly due to the low capture rate, while photodetector-based technology can only measure a single point. In this paper, several aspects of these two technologies are discussed. For camera-based interferometry, the discussion includes the introduction of the carrier, the processing of the recorded images, the phase extraction algorithms in various domains, and how to increase the temporal measurement range by using multiwavelength techniques. For detector-based interferometry, the discussion mainly focuses on single-point and multipoint laser Doppler vibrometers and their applications for measurement under extreme conditions. The results show the effort made by researchers to improve measurement capabilities using interferometry-based techniques to meet the requirements of industrial applications. PMID:24963503

  18. Research on compressive sensing reconstruction algorithm based on total variation model

    NASA Astrophysics Data System (ADS)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the Nyquist sampling theorem, provides a strong theoretical foundation for the compressive sampling of image signals. In imaging procedures that use compressed sensing theory, not only is the required storage space reduced, but the demand for detector resolution is also greatly reduced. By using the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging is realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. To verify the performance of the algorithm, we simulate and analyze the reconstruction results of the TV-based algorithm under different coding modes to verify its stability, and we compare it with typical reconstruction algorithms under the same coding mode. On the basis of the minimum-total-variation algorithm, an augmented Lagrangian term is added and the optimal value is found by the alternating direction method. Experimental results show that this reconstruction algorithm has great advantages over the traditional classical TV-based algorithms and can quickly and accurately recover the target image at low measurement rates.

  19. A Note on a Sampling Theorem for Functions over GF(q)n Domain

    NASA Astrophysics Data System (ADS)

    Ukita, Yoshifumi; Saito, Tomohiko; Matsushima, Toshiyasu; Hirasawa, Shigeichi

    In digital signal processing, the sampling theorem states that any real-valued function ƒ can be reconstructed from a sequence of values of ƒ that are discretely sampled with a frequency at least twice as high as the maximum frequency of the spectrum of ƒ. This theorem can also be applied to functions over a finite domain, in which case the range of frequencies of ƒ can be expressed in more detail by using a bounded set instead of the maximum frequency. A function whose range of frequencies is confined to a bounded set is referred to as a bandlimited function, and a sampling theorem for bandlimited functions over the Boolean domain has been obtained. It is important to obtain a sampling theorem for bandlimited functions not only over the Boolean domain (GF(2)n domain) but also over the GF(q)n domain, where q is a prime power and GF(q) is the Galois field of order q. For example, in experimental design, although the model can be expressed as a linear combination of the Fourier basis functions and the levels of each factor can be represented by GF(q)n, the number of levels often takes a value greater than two. However, a sampling theorem for bandlimited functions over the GF(q)n domain has not been obtained. On the other hand, the sampling points are closely related to the codewords of a linear code, but the relation between the parity check matrix of a linear code and distinct error vectors has not been obtained, although it is necessary for understanding the meaning of the sampling theorem for bandlimited functions. In this paper, we generalize the sampling theorem for bandlimited functions over the Boolean domain to a sampling theorem for bandlimited functions over the GF(q)n domain. We also present a theorem for the relation between the parity check matrix of a linear code and distinct error vectors. Lastly, we clarify the relation between the sampling theorem for functions over the GF(q)n domain and linear codes.

  20. A rigorous analysis of digital pre-emphasis and DAC resolution for interleaved DAC Nyquist-WDM signal generation in high-speed coherent optical transmission systems

    NASA Astrophysics Data System (ADS)

    Weng, Yi; Wang, Junyi; He, Xuan; Pan, Zhongqi

    2018-02-01

    Nyquist spectral shaping techniques offer a promising solution to enhance spectral efficiency (SE) and further reduce the cost per bit in high-speed wavelength-division multiplexing (WDM) transmission systems. In principle, Nyquist WDM signals with arbitrary shapes can be generated by digital signal processing (DSP) based electrical filters (E-filters). Nonetheless, in actual 100G/200G coherent systems, performance as well as DSP complexity are increasingly restricted by cost and power consumption; hence it is indispensable to optimize the DSP to accomplish the preferred performance at the least complexity. In this paper, we systematically investigate the minimum requirements and challenges of Nyquist WDM signal generation, particularly for higher-order modulation formats such as 16 quadrature amplitude modulation (QAM) and 64QAM. A variety of interrelated parameters, such as channel spacing and roll-off factor, have been evaluated to optimize the requirements on the digital-to-analog converter (DAC) resolution and transmitter E-filter bandwidth. The impact of spectral pre-emphasis is enhanced by at least 4% via the proposed interleaved DAC architecture, reducing the required optical signal-to-noise ratio (OSNR) at a bit error rate (BER) of 10^-3 by over 0.45 dB at a channel spacing of 1.05 times the symbol rate and an optimized roll-off factor of 0.1. Furthermore, the sampling rate requirements for different types of super-Gaussian E-filters are discussed for 64QAM Nyquist WDM transmission systems. Finally, the impact of the non-50% duty cycle error between sub-DACs on the quality of the generated signals is analyzed for the interleaved DAC structure.

  1. Compressed NMR: Combining compressive sampling and pure shift NMR techniques.

    PubMed

    Aguilar, Juan A; Kenwright, Alan M

    2017-12-26

    Historically, the resolution of multidimensional nuclear magnetic resonance (NMR) has been orders of magnitude lower than the intrinsic resolution that NMR spectrometers are capable of producing. The slowness of Nyquist sampling as well as the existence of signals as multiplets instead of singlets have been two of the main reasons for this underperformance. Fortunately, two compressive techniques have appeared that can overcome these limitations. Compressive sensing, also known as compressed sampling (CS), avoids the first limitation by exploiting the compressibility of typical NMR spectra, thus allowing sampling at sub-Nyquist rates, while pure shift techniques eliminate the second issue by "compressing" multiplets into singlets. This paper explores the possibilities and challenges presented by this combination (compressed NMR). First, a description of the CS framework is given, followed by a description of the importance of combining it with the right pure shift experiment. Second, examples of compressed NMR spectra and how they can be combined with covariance methods are shown. Copyright © 2017 John Wiley & Sons, Ltd.
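
    A minimal sketch of the sub-Nyquist side of this combination: reconstruct a sparse spectrum from a 25% non-uniform sampling schedule by iterative soft thresholding. The FID is idealized (undamped, noiseless resonances) and the scheme is a generic CS reconstruction, not the paper's processing pipeline.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 512
    t = np.arange(n)
    fid = np.exp(2j * np.pi * 60 * t / n) + 0.8 * np.exp(2j * np.pi * 150 * t / n)
    keep = np.sort(rng.choice(n, n // 4, replace=False))   # 25% NUS schedule

    x = np.zeros(n, complex)
    for _ in range(200):
        x[keep] = fid[keep]                      # re-enforce the measured points
        X = np.fft.fft(x)
        mag = np.abs(X)
        X *= np.maximum(mag - 20.0, 0.0) / np.maximum(mag, 1e-12)  # soft-threshold
        x = np.fft.ifft(X)

    peaks = np.sort(np.argsort(np.abs(np.fft.fft(x)))[-2:])
    print(f"recovered peaks at bins {peaks} (true: 60 and 150)")
    ```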

  2. Application of random coherence order selection in gradient-enhanced multidimensional NMR

    NASA Astrophysics Data System (ADS)

    Bostock, Mark J.; Nietlispach, Daniel

    2016-03-01

    The development of multidimensional NMR is essential to many applications, for example in high-resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, improving signal resolution, whilst also allowing identification of new connectivities. However, these advantages come at a significant cost. The Fourier transform theorem requires acquisition of a grid of regularly spaced points to satisfy the Nyquist criterion, while frequency discrimination and acquisition of a pure-phase spectrum require acquisition of both quadrature components for each time point in every indirect (non-acquisition) dimension, adding a factor of 2^(N-1) to the number of free-induction decays which must be acquired, where N is the number of dimensions. Compressed sensing (CS) ℓ1-norm minimisation in combination with non-uniform sampling (NUS) has been shown to be extremely successful in overcoming the Nyquist criterion. Previously, maximum entropy reconstruction has also been used to overcome the limitation of frequency discrimination, processing data acquired with only one quadrature component at a given time interval, known as random phase detection (RPD), allowing a factor of two reduction in the number of points for each indirect dimension (Maciejewski et al. 2011 PNAS 108 16640). However, whilst this approach can be easily applied in situations where the quadrature components are acquired as amplitude-modulated data, the same principle is not easily extended to phase-modulated (P-/N-type) experiments where data are acquired in the form exp(iωt) or exp(-iωt), and which make up many of the multidimensional experiments used in modern NMR. Here we demonstrate a modification of the CS ℓ1-norm approach to allow random coherence order selection (RCS) for phase-modulated experiments; we generalise the nomenclature for RCS and RPD as random quadrature detection (RQD). With this method, the power of RQD can be extended to the full suite of experiments available to modern NMR spectroscopy, allowing resolution enhancements for all indirect dimensions; alone or in combination with NUS, RQD can be used to improve experimental resolution, or shorten experiment times, of considerable benefit to the challenging applications undertaken by modern NMR.

  3. Study on sampling of continuous linear system based on generalized Fourier transform

    NASA Astrophysics Data System (ADS)

    Li, Huiguang

    2003-09-01

    In the study of signals and systems, a signal's spectrum and a system's frequency characteristic can be discussed through the Fourier Transform (FT) and the Laplace Transform (LT). However, some singular signals such as the impulse function and the signum signal satisfy neither Riemann nor Lebesgue integrability; in mathematics they are called generalized functions. This paper introduces a new definition, the Generalized Fourier Transform (GFT), and discusses generalized functions, the Fourier Transform, and the Laplace Transform within a unified framework. For sampled continuous linear systems, the paper proposes a new method to judge whether the spectrum will overlap after the generalized Fourier transform (GFT). Causal and non-causal systems are studied, and a sampling method that maintains the system's dynamic performance is presented. The results apply to both ordinary sampling and non-Nyquist sampling, and they also have practical significance for research on the discretization of continuous linear systems and on non-Nyquist sampling of signals and systems. In particular, the condition for ensuring controllability and observability of MIMO continuous systems in references 13 and 14 is an application example of this paper.

  4. Super-Nyquist White Dwarf Pulsations in K2 Long-Cadence Data

    NASA Astrophysics Data System (ADS)

    Bell, Keaton J.; Hermes, JJ; Montgomery, Michael H.; Vanderbosch, Zach

    2017-06-01

    The Kepler and K2 missions have recently revolutionized the field of white dwarf asteroseismology. Since white dwarfs pulsate on timescales of order 10 minutes, we aim to observe these objects at K2's short cadence (1 minute). Occasionally we find signatures of pulsations in white dwarf targets that were only observed by K2 at long cadence (30 minutes). These signals suffer extreme aliasing since the intrinsic frequencies exceed the Nyquist sampling limit. We present our work to recover accurate frequency determinations for these targets, guided by a limited amount of supplementary ground-based photometry from McDonald Observatory.
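
    The folding at work here is easy to compute: a frequency above the Nyquist limit appears reflected into [0, fs/2]. The sketch below assumes the quoted 30-minute long cadence and an invented 11-minute pulsation mode.

    ```python
    import numpy as np

    def alias(f_true, f_samp):
        """Fold a frequency into the principal range [0, f_samp/2]."""
        f = np.mod(f_true, f_samp)
        return np.minimum(f, f_samp - f)

    f_samp = 1.0 / (30 * 60)                 # long-cadence sampling rate (Hz)
    f_pulse = 1.0 / (11 * 60)                # an 11-minute pulsation mode
    print(f"Nyquist limit: {f_samp / 2 * 1e6:.1f} uHz; "
          f"mode observed at alias {alias(f_pulse, f_samp) * 1e6:.1f} uHz")
    ```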

  5. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    NASA Astrophysics Data System (ADS)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

    The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a sparse wideband signal can be identified and reconstructed. Such a method broadens the vibration frequency response range of φ-OTDR, which is of great significance for sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.

  6. A way around the Nyquist lag

    NASA Astrophysics Data System (ADS)

    Penland, C.

    2017-12-01

    One way to test for the linearity of a multivariate system is to apply Linear Inverse Modeling (LIM) to a multivariate time series. LIM yields an estimated operator by combining a lagged covariance matrix with the contemporaneous covariance matrix. If the underlying dynamics are linear, the resulting dynamical description should not depend on the particular lag at which the lagged covariance matrix is estimated. This test is known as the "tau test." The tau test will be severely compromised if the lag at which the analysis is performed is approximately half the period of an internal oscillation. In this case, the tau test will fail even though the dynamics are actually linear. Thus, until now, the tau test has only been possible for lags smaller than this "Nyquist lag." In this poster, we investigate the use of Hilbert transforms as a way to avoid the problems associated with Nyquist lags. By augmenting the data with dimensions orthogonal to those spanning the original system, information that would be inaccessible to LIM in its original form may be sampled.
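
    The failure mode itself is easy to reproduce. In the toy below (not the poster's data), LIM's propagator G(tau) = exp(L·tau) is inverted with the principal matrix logarithm; for an oscillation of period 2π, recovery is exact for lags below the Nyquist lag π and wrong beyond it.

    ```python
    import numpy as np
    from scipy.linalg import expm, logm

    L_true = np.array([[-0.1, -1.0],
                       [1.0, -0.1]])         # eigenvalues -0.1 +/- 1i, period 2*pi
    for tau in (0.5, 2.0, 4.0):              # the Nyquist lag here is pi ~ 3.14
        G = expm(L_true * tau)               # what the covariance ratio estimates
        L_rec = np.real(logm(G)) / tau       # naive inversion of the propagator
        err = np.abs(L_rec - L_true).max()
        print(f"tau = {tau}: max operator error = {err:.3f}")
    ```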

  7. Compressive sensing for efficient health monitoring and effective damage detection of structures

    NASA Astrophysics Data System (ADS)

    Jayawardhana, Madhuka; Zhu, Xinqun; Liyanapathirana, Ranjith; Gunawardana, Upul

    2017-02-01

    Real-world Structural Health Monitoring (SHM) systems consist of sensors on the scale of hundreds, each generating extremely large amounts of data, often raising the issue of the costs associated with data transfer and storage. Sensor energy is a major component of this cost, especially in Wireless Sensor Networks (WSN). Data compression is one of the techniques being explored to mitigate these issues. In contrast to traditional data compression techniques, Compressive Sensing (CS) - a very recent development - introduces the means of accurately reproducing a signal by acquiring far fewer samples than defined by Nyquist's theorem. CS achieves this by exploiting the sparsity of the signal. With the reduced number of data samples, CS may help reduce the energy consumption and storage costs associated with SHM systems. This paper investigates CS-based data acquisition in SHM, in particular the implications of CS for damage detection and localization. CS is implemented in a simulation environment to compress structural response data from a Reinforced Concrete (RC) structure. Promising results were obtained from the compressed data reconstruction process as well as the subsequent damage identification process using the reconstructed data. A reconstruction accuracy of 99% could be achieved at a Compression Ratio (CR) of 2.48 using the experimental data. Further analysis using the reconstructed signals provided accurate damage detection and localization results with two damage detection algorithms, showing that CS has not compromised the crucial information on structural damage during the compression process.

  8. Nonuniform sampling theorems for random signals in the linear canonical transform domain

    NASA Astrophysics Data System (ADS)

    Shuiqing, Xu; Congmei, Jiang; Yi, Chai; Youqiang, Hu; Lei, Huang

    2018-06-01

    Nonuniform sampling can be encountered in various practical processes because of random events or a poor timebase. The analysis and applications of nonuniform sampling for deterministic signals related to the linear canonical transform (LCT) have been well studied, but up to now no papers have been published regarding nonuniform sampling theorems for random signals related to the LCT. The aim of this article is to explore the nonuniform sampling and reconstruction of random signals associated with the LCT. First, some special nonuniform sampling models are briefly introduced. Second, based on these models, reconstruction theorems for random signals from various nonuniform samples associated with the LCT are derived. Finally, simulation results are presented to verify the accuracy of the sampling theorems. In addition, potential practical applications of nonuniform sampling for random signals are also discussed.

  9. A Low-cost 4 Bit, 10 Giga-samples-per-second Analog-to-digital Converter Printed Circuit Board Assembly for FPGA-based Backends

    NASA Astrophysics Data System (ADS)

    Jiang, Homin; Yu, Chen-Yu; Kubo, Derek; Chen, Ming-Tang; Guzzino, Kim

    2016-11-01

    In this study, a 4 bit, 10 giga-samples-per-second analog-to-digital converter (ADC) printed circuit board assembly (PCBA) was designed, manufactured, and characterized for digitizing signals from radio telescopes. For this purpose, an Adsantec ANST7120A-KMA flash ADC chip was used. Together with the field-programmable gate array platform developed by the Collaboration for Astronomy Signal Processing and Electronics Research community, the PCBA enables data acquisition over a wide bandwidth and simplifies the intermediate frequency section. In the current version, the PCBA and the chip exhibit analog bandwidths of 10 GHz (3 dB loss) and 20 GHz, respectively, which facilitates sampling in the second, third, and even fourth Nyquist zones. The following average performance parameters were obtained from the first and second Nyquist zones of three boards: a spurious-free dynamic range of 31.35/30.45 dB, a signal-to-noise and distortion ratio of 22.95/21.83 dB, and an effective number of bits of 3.65/3.43, respectively.

  10. Generalized analog thresholding for spike acquisition at ultralow sampling rates

    PubMed Central

    He, Bryan D.; Wein, Alex; Varshney, Lav R.; Kusuma, Julius; Richardson, Andrew G.

    2015-01-01

    Efficient spike acquisition techniques are needed to bridge the divide from creating large multielectrode arrays (MEA) to achieving whole-cortex electrophysiology. In this paper, we introduce generalized analog thresholding (gAT), which achieves millisecond temporal resolution with sampling rates as low as 10 Hz. Consider the torrent of data from a single 1,000-channel MEA, which would generate more than 3 GB/min using standard 30-kHz Nyquist sampling. Recent neural signal processing methods based on compressive sensing still require Nyquist sampling as a first step and use iterative methods to reconstruct spikes. Analog thresholding (AT) remains the best existing alternative, where spike waveforms are passed through an analog comparator and sampled at 1 kHz, with instant spike reconstruction. By generalizing AT, the new method reduces sampling rates another order of magnitude, detects more than one spike per interval, and reconstructs spike width. Unlike compressive sensing, the new method reveals a simple closed-form solution to achieve instant (noniterative) spike reconstruction. The base method is already robust to hardware nonidealities, including realistic quantization error and integration noise. Because it achieves these considerable specifications using hardware-friendly components like integrators and comparators, generalized AT could translate large-scale MEAs into implantable devices for scientific investigation and medical technology. PMID:25904712

  11. Users manual for program NYQUIST: Liquid rocket nyquist plots developed for use on a PC computer

    NASA Astrophysics Data System (ADS)

    Armstrong, Wilbur C.

    1992-06-01

    The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the NYQUIST code, and the capability to modify the input on line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. The program handles the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines. The code is too large to compile as one program using Microsoft FORTRAN 5; therefore, it was broken into two segments, NYQUIST1.FOR and NYQUIST2.FOR, which are compiled separately and then linked together. The final run code is not too large (approximately 344,000 bytes).

  12. Users manual for program NYQUIST: Liquid rocket nyquist plots developed for use on a PC computer

    NASA Technical Reports Server (NTRS)

    Armstrong, Wilbur C.

    1992-01-01

    The piping in a liquid rocket can assume complex configurations due to multiple tanks, multiple engines, and structures that must be piped around. The capability to handle some of these complex configurations has been incorporated into the NYQUIST code, and the capability to modify the input on line has been implemented. The configurations allowed include multiple tanks, multiple engines, and the splitting of a pipe into unequal segments going to different (or the same) engines. The program handles the following element types: straight pipes, bends, inline accumulators, tuned stub accumulators, Helmholtz resonators, parallel resonators, pumps, split pipes, multiple tanks, and multiple engines. The code is too large to compile as one program using Microsoft FORTRAN 5; therefore, it was broken into two segments, NYQUIST1.FOR and NYQUIST2.FOR, which are compiled separately and then linked together. The final run code is not too large (approximately 344,000 bytes).

  13. Understanding the Sampling Distribution and the Central Limit Theorem.

    ERIC Educational Resources Information Center

    Lewis, Charla P.

    The sampling distribution is a common source of misuse and misunderstanding in the study of statistics. The sampling distribution, underlying distribution, and the Central Limit Theorem are all interconnected in defining and explaining the proper use of the sampling distribution of various statistics. The sampling distribution of a statistic is…

  14. Real-time Nyquist signaling with dynamic precision and flexible non-integer oversampling.

    PubMed

    Schmogrow, R; Meyer, M; Schindler, P C; Nebendahl, B; Dreschmann, M; Meyer, J; Josten, A; Hillerkuss, D; Ben-Ezra, S; Becker, J; Koos, C; Freude, W; Leuthold, J

    2014-01-13

    We demonstrate two efficient processing techniques for Nyquist signals, namely computation of signals using dynamic precision as well as arbitrary rational oversampling factors. With these techniques along with massively parallel processing it becomes possible to generate and receive high data rate Nyquist signals with flexible symbol rates and bandwidths, a feature which is highly desirable for novel flexgrid networks. We achieved maximum bit rates of 252 Gbit/s in real-time.

  15. Image Reconstruction for Interferometric Imaging of Geosynchronous Satellites

    NASA Astrophysics Data System (ADS)

    DeSantis, Zachary J.

    Imaging distant objects at high resolution has always presented a challenge due to the diffraction limit. Larger apertures improve the resolution, but at some point the cost of engineering, building, and correcting the phase aberrations of large apertures becomes prohibitive. Interferometric imaging uses the Van Cittert-Zernike theorem to form an image from measurements of spatial coherence. This effectively allows the synthesis of a large aperture from two or more smaller telescopes to improve the resolution. We apply this method to imaging geosynchronous satellites with a ground-based system. Imaging a dim object from the ground presents unique challenges. The atmosphere introduces errors into the phase measurements, and the measurements are taken simultaneously across a large bandwidth of light; the atmospheric piston error therefore manifests as a linear phase error across the spectral measurements. Because the objects are faint, many of the measurements are expected to have a poor signal-to-noise ratio (SNR). This rules out commonly used techniques like closure phase, a standard technique in astronomical interferometric imaging for making partial phase measurements in the presence of atmospheric error. The bulk of our work has focused on forming an image from sub-Nyquist-sampled data in the presence of these linear phase errors without relying on closure phase techniques. We present an image reconstruction algorithm that successfully forms an image in the presence of these linear phase errors, and we demonstrate its success in both simulation and laboratory experiments.
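
    The structure of that error is worth spelling out: a piston delay tau adds a phase 2π·f·tau that is linear in optical frequency f across the spectral channels. In the wrapping-free toy below with mild noise (invented numbers, not the paper's algorithm), fitting and removing that slope recovers the object phase.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    f = np.linspace(180e12, 220e12, 32)      # spectral channels across the band (Hz)
    phi_obj, tau = 0.7, 5e-15                # object phase (rad), 5 fs piston delay
    phase = phi_obj + 2 * np.pi * f * tau + 0.02 * rng.standard_normal(f.size)

    fc = f - f.mean()                        # centre the abscissa for conditioning
    slope = np.sum(fc * phase) / np.sum(fc ** 2)      # least-squares slope
    phi_est = np.mean(phase - slope * f)              # slope-corrected phase
    print(f"estimated delay {slope / (2 * np.pi) * 1e15:.2f} fs (true 5.00); "
          f"object phase {phi_est:.2f} rad (true 0.70)")
    ```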

  16. Pepsi-SAXS: an adaptive method for rapid and accurate computation of small-angle X-ray scattering profiles.

    PubMed

    Grudinin, Sergei; Garkavenko, Maria; Kazennov, Andrei

    2017-05-01

    A new method called Pepsi-SAXS is presented that calculates small-angle X-ray scattering profiles from atomistic models. The method is based on the multipole expansion scheme and is significantly faster compared with other tested methods. In particular, using the Nyquist-Shannon-Kotelnikov sampling theorem, the multipole expansion order is adapted to the size of the model and the resolution of the experimental data. It is argued that by using the adaptive expansion order, this method has the same quadratic dependence on the number of atoms in the model as the Debye-based approach, but with a much smaller prefactor in the computational complexity. The method has been systematically validated on a large set of over 50 models collected from the BioIsis and SASBDB databases. Using a laptop, it was demonstrated that Pepsi-SAXS is about 7, 29 and 36 times faster compared with CRYSOL, FoXS and the three-dimensional Zernike method in SAStbx, respectively, when tested on data from the BioIsis database, and is about 5, 21 and 25 times faster compared with CRYSOL, FoXS and SAStbx, respectively, when tested on data from SASBDB. On average, Pepsi-SAXS demonstrates accuracy comparable to CRYSOL and FoXS in terms of χ² when tested on BioIsis and SASBDB profiles. Together with a small allowed variation of adjustable parameters, this demonstrates the effectiveness of the method. Pepsi-SAXS is available at http://team.inria.fr/nano-d/software/pepsi-saxs.

  17. Compact opto-electronic engine for high-speed compressive sensing

    NASA Astrophysics Data System (ADS)

    Tidman, James; Weston, Tyler; Hewitt, Donna; Herman, Matthew A.; McMackin, Lenore

    2013-09-01

    The measurement efficiency of Compressive Sensing (CS) enables the computational construction of images from far fewer measurements than what is usually considered necessary by the Nyquist-Shannon sampling theorem. A vast literature on CS mathematics and applications has developed since its theoretical principles were established about a decade ago, with applications ranging from quantum information to optical microscopy to seismic and hyperspectral imaging. For shortwave infrared imaging, InView has developed cameras based on the CS single-pixel camera architecture. This architecture comprises an objective lens that images the scene onto a Texas Instruments DLP® Micromirror Device (DMD), which, using its individually controllable mirrors, modulates the image with a selected basis set. The intensity of the modulated image is then recorded by a single detector. While the design of a CS camera is straightforward conceptually, its commercial implementation requires significant development effort in optics, electronics, hardware, and software, particularly if high efficiency and high-speed operation are required. In this paper, we describe the development of a high-speed CS engine as implemented in a lab-ready workstation. In this engine, configurable measurement patterns are loaded into the DMD at rates up to 31.5 kHz. The engine supports custom reconstruction algorithms that can be quickly implemented. Our work includes the optical path design, field-programmable gate arrays for DMD pattern generation, and circuit boards for front-end data acquisition, ADC, and system control, all packaged in a compact workstation.

  18. Informational analysis for compressive sampling in radar imaging.

    PubMed

    Zhang, Jingxiong; Yang, Ke

    2015-03-24

    Compressive sampling or compressed sensing (CS) works on the assumption that the underlying signal is sparse or compressible. It relies on the trans-informational capability of the measurement matrix employed and the resultant measurements, and it operates with optimization-based algorithms for signal reconstruction; CS is thus able to complete data compression while acquiring data, leading to sub-Nyquist sampling strategies that promote efficiency in data acquisition while ensuring certain accuracy criteria. Information theory provides a framework complementary to classic CS theory for analyzing information mechanisms and for determining the necessary number of measurements in a CS environment, such as CS-radar, a radar sensor conceptualized or designed with CS principles and techniques. Despite increasing awareness of information-theoretic perspectives on CS-radar, reported research has been rare. This paper seeks to bridge the gap in the interdisciplinary area of CS, radar, and information theory by analyzing information flows in CS-radar from sparse scenes to measurements and by determining the sub-Nyquist sampling rates necessary for scene reconstruction within certain distortion thresholds, given differing scene sparsity and average per-sample signal-to-noise ratios (SNRs). Simulated studies were performed to complement and validate the information-theoretic analysis. The combined strategy proposed in this paper is valuable for information-theoretic oriented CS-radar system analysis and performance evaluation.
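
    A common back-of-the-envelope version of the "necessary number of measurements" question is the rule of thumb m ≈ C·k·log(n/k) for k-sparse scenes of dimension n; the constant C below is an assumed value, and the actual bounds depend on SNR and the distortion threshold, as the paper analyzes.

    ```python
    import numpy as np

    def measurements_needed(n, k, C=2.0):
        """Rule-of-thumb CS measurement count for a k-sparse length-n scene."""
        return int(np.ceil(C * k * np.log(n / k)))

    n = 4096                                  # scene dimension
    for k in (10, 50, 200):
        m = measurements_needed(n, k)
        print(f"k = {k:3d}: m ~ {m:5d}  (fraction of Nyquist: {m / n:.1%})")
    ```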

  19. WE-G-204-03: Photon-Counting Hexagonal Pixel Array CdTe Detector: Optimal Resampling to Square Pixels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shrestha, S; Vedantham, S; Karellas, A

    Purpose: Detectors with hexagonal pixels require resampling to square pixels for distortion-free display of acquired images. In this work, the presampling modulation transfer function (MTF) of a hexagonal pixel array photon-counting CdTe detector for region-of-interest fluoroscopy was measured and the optimal square pixel size for resampling was determined. Methods: A 0.65 mm thick CdTe Schottky sensor capable of concurrently acquiring up to 3 energy-windowed images was operated in a single-energy-window mode to include ≥10 keV photons. The detector had hexagonal pixels with an apothem of 30 microns, resulting in pixel spacings of 60 and 51.96 microns along the two orthogonal directions. Images of a tungsten edge test device acquired under IEC RQA5 conditions were double Hough transformed to identify the edge and numerically differentiated. The presampling MTF was determined from the finely sampled line spread function, accounting for the hexagonal sampling. The optimal square pixel size was determined in two ways: the square pixel size for which the aperture function evaluated at the Nyquist frequencies along the two orthogonal directions matched that of the hexagonal pixel aperture functions, and the square pixel size for which the mean absolute difference between the square and hexagonal aperture functions was minimized over all frequencies up to the Nyquist limit. Results: Evaluation of the aperture functions over the entire frequency range resulted in a square pixel size of 53 microns with less than 2% difference from the hexagonal pixel. Evaluation of the aperture functions at the Nyquist frequencies alone resulted in 54-micron square pixels. For the photon-counting CdTe detector, after resampling to 53-micron square pixels using quadratic interpolation, the presampling MTF at the Nyquist frequency of 9.434 cycles/mm along the two directions was 0.501 and 0.507. Conclusion: A hexagonal pixel array photon-counting CdTe detector, after resampling to square pixels, provides high-resolution imaging suitable for fluoroscopy.

  20. Restoration of STORM images from sparse subset of localizations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Moiseev, Alexander A.; Gelikonov, Grigory V.; Gelikonov, Valentine M.

    2016-02-01

    To construct a Stochastic Optical Reconstruction Microscopy (STORM) image, one should collect a sufficient number of localized fluorophores to satisfy the Nyquist criterion. This requirement limits the time resolution of the method. In this work we propose a probabilistic approach to construct STORM images from a subset of localized fluorophores 3-4 times sparser than the Nyquist criterion requires. Using a set of STORM images constructed from a number of localizations sufficient for the Nyquist criterion, we derive a model that predicts, for every location, the probability of being occupied by a fluorophore at the end of a hypothetical acquisition, taking as input the distribution of already localized fluorophores in the proximity of that location. We show that the probability map obtained from a number of fluorophores 3-4 times smaller than the Nyquist criterion requires may itself be used as the superresolution image. We are thus able to construct a STORM image from a subset of localized fluorophores 3-4 times sparser than the Nyquist criterion requires, proportionally decreasing STORM data acquisition time. This method may be used in combination with other approaches designed to increase STORM time resolution.

  1. Robust Methods for Sensing and Reconstructing Sparse Signals

    ERIC Educational Resources Information Center

    Carrillo, Rafael E.

    2012-01-01

    Compressed sensing (CS) is an emerging signal acquisition framework that goes against the traditional Nyquist sampling paradigm. CS demonstrates that a sparse, or compressible, signal can be acquired using a low rate acquisition process. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are…

  2. Elaborate analysis and design of filter-bank-based sensing for wideband cognitive radios

    NASA Astrophysics Data System (ADS)

    Maliatsos, Konstantinos; Adamis, Athanasios; Kanatas, Athanasios G.

    2014-12-01

    The successful operation of a cognitive radio system strongly depends on its ability to sense the radio environment. With the use of spectrum sensing algorithms, the cognitive radio is required to detect co-existing licensed primary transmissions and to protect them from interference. This paper focuses on filter-bank-based sensing and provides a solid theoretical background for the design of these detectors. Optimum detectors based on the Neyman-Pearson theorem are developed for uniform discrete Fourier transform (DFT) and modified DFT filter banks with root-Nyquist filters. The proposed sensing framework does not require frequency alignment between the filter bank of the sensor and the primary signal. Each wideband primary channel is spanned and monitored by several sensor subchannels that decompose it into narrowband signals. Filter-bank-based sensing is shown to be robust and efficient under coloured noise. Moreover, the performance of the weighted energy detector as a sensing technique is evaluated. Finally, based on the Locally Most Powerful and the Generalized Likelihood Ratio tests, real-world sensing algorithms that do not require a priori knowledge are proposed and tested.

  3. Current Noise from a Magnetic Moment in a Helical Edge

    NASA Astrophysics Data System (ADS)

    Väyrynen, Jukka I.; Glazman, Leonid I.

    2017-03-01

    We calculate the two-terminal current noise generated by a magnetic moment coupled to a helical edge of a two-dimensional topological insulator. When the system is symmetric with respect to in-plane spin rotation, the noise is dominated by the Nyquist component even in the presence of a voltage bias V. The corresponding noise spectrum S(V, ω) is determined by a modified fluctuation-dissipation theorem with the differential conductance G(V, ω) in place of the linear one. The differential noise ∂S/∂V, commonly measured in experiments, is strongly dependent on frequency on a small scale τ_K^(-1) ≪ T set by the Korringa relaxation rate of the local moment. This is in stark contrast to the case of conventional mesoscopic conductors, where ∂S/∂V is frequency independent and defined by the shot noise. In a helical edge, a violation of the spin-rotation symmetry leads to shot noise, which becomes important only at a high bias. Uncharacteristically for a fermion system, this noise in the backscattered current is super-Poissonian.

  4. A variational theorem for creep with applications to plates and columns

    NASA Technical Reports Server (NTRS)

    Sanders, J Lyell, Jr; Mccomb, Harvey G , Jr; Schlechte, Floyd R

    1958-01-01

    A variational theorem is presented for a body undergoing creep. Solutions to problems of the creep behavior of plates, columns, beams, and shells can be obtained by means of the direct methods of the calculus of variations in conjunction with the stated theorem. The application of the theorem is illustrated for plates and columns by the solution of two sample problems.

  5. Widefield compressive multiphoton microscopy.

    PubMed

    Alemohammad, Milad; Shin, Jaewook; Tran, Dung N; Stroud, Jasper R; Chin, Sang Peter; Tran, Trac D; Foster, Mark A

    2018-06-15

    A single-pixel compressively sensed architecture is exploited to simultaneously achieve a 10× reduction in acquired data compared with the Nyquist rate, while alleviating limitations faced by conventional widefield temporal focusing microscopes due to scattering of the fluorescence signal. Additionally, we demonstrate an adaptive sampling scheme that further improves the compression and speed of our approach.

  6. The Variation Theorem Applied to H-2+: A Simple Quantum Chemistry Computer Project

    ERIC Educational Resources Information Center

    Robiette, Alan G.

    1975-01-01

    Describes a student project which requires limited knowledge of Fortran and only minimal computing resources. The results illustrate such important principles of quantum mechanics as the variation theorem and the virial theorem. Presents sample calculations and the subprogram for energy calculations. (GS)

  7. Fourier Theory Explanation for the Sampling Theorem Demonstrated by a Laboratory Experiment.

    ERIC Educational Resources Information Center

    Sharma, A.; And Others

    1996-01-01

    Describes a simple experiment that uses a CCD video camera, a display monitor, and a laser-printed bar pattern to illustrate signal sampling problems that produce aliasing or moiré fringes in images. Uses the Fourier transform to provide an appropriate and elegant means to explain the sampling theorem and the aliasing phenomenon in CCD-based…

  8. Computational integration of nanoscale physical biomarkers and cognitive assessments for Alzheimer’s disease diagnosis and prognosis

    PubMed Central

    Yue, Tao; Jia, Xinghua; Petrosino, Jennifer; Sun, Leming; Fan, Zhen; Fine, Jesse; Davis, Rebecca; Galster, Scott; Kuret, Jeff; Scharre, Douglas W.; Zhang, Mingjun

    2017-01-01

    With the increasing prevalence of Alzheimer’s disease (AD), significant efforts have been directed toward developing novel diagnostics and biomarkers that can enhance AD detection and management. AD affects the cognition, behavior, function, and physiology of patients through mechanisms that are still being elucidated. Current AD diagnosis is contingent on evaluating which symptoms and signs a patient does or does not display. Concerns have been raised that AD diagnosis may be affected by how those measurements are analyzed. Unbiased means of diagnosing AD, using computational algorithms that integrate multidisciplinary inputs ranging from nanoscale biomarkers to cognitive assessments and that capture both biochemical and physical changes, may overcome these limitations, which stem from an incomplete understanding of the disease's dynamic, multiscale progression and its multiple symptoms. We show that nanoscale physical properties of protein aggregates from the cerebral spinal fluid and blood of patients are altered during AD pathogenesis and that these properties can be used as a new class of “physical biomarkers.” Using a computational algorithm, developed to integrate these biomarkers and cognitive assessments, we demonstrate an approach to impartially diagnose AD and predict its progression. Real-time diagnostic updates of progression could be made on the basis of the changes in the physical biomarkers and the cognitive assessment scores of patients over time. Additionally, the Nyquist-Shannon sampling theorem was used to determine the minimum number of necessary patient checkups to effectively predict disease progression. This integrated computational approach can generate patient-specific, personalized signatures for AD diagnosis and prognosis. PMID:28782028
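
    The Nyquist-Shannon step at the end of the abstract reduces, in its simplest reading, to the familiar rate bound; a minimal sketch, with a hypothetical bandwidth figure not taken from the study:

    # If a patient's biomarker/score trajectory contains no significant
    # variation faster than f_max cycles per year, the Nyquist criterion
    # requires at least 2 * f_max checkups per year to capture the trend.
    def min_checkups_per_year(f_max_cycles_per_year: float) -> float:
        return 2.0 * f_max_cycles_per_year

    # e.g. if meaningful changes unfold over roughly six months (f_max = 2/year):
    print(min_checkups_per_year(2.0))   # -> 4.0 checkups per year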

  9. The Storage Ring Proton EDM Experiment

    NASA Astrophysics Data System (ADS)

    Semertzidis, Yannis; Storage Ring Proton EDM Collaboration

    2014-09-01

    The storage ring pEDM experiment utilizes an all-electric storage ring to store ~10^11 longitudinally polarized protons simultaneously in clockwise and counterclockwise directions for 10^3 seconds. The radial E-field acts on the proton EDM for the duration of the storage time to precess its spin in the vertical plane. The ring lattice is optimized to reduce intra-beam scattering, increase the statistical sensitivity and reduce the systematic errors of the method. The main systematic error is a net radial B-field integrated around the ring causing an EDM-like vertical spin precession. The counter-rotating beams sense this integrated field and are vertically shifted by an amount that depends on the strength of the vertical focusing in the ring, thus creating a radial B-field. Modulating the vertical focusing at 10 kHz makes possible the detection of this radial B-field by a SQUID magnetometer (SQUID-based BPM). For a total number of n SQUID-based BPMs distributed around the ring, the effectiveness of the method is limited to the N = n/2 harmonic of the background radial B-field due to the Nyquist sampling theorem limit. This limitation establishes the requirement to reduce the maximum radial B-field to 0.1-1 nT everywhere around the ring by layers of mu-metal and an aluminum vacuum tube. The method's sensitivity is 10^-29 e·cm, more than three orders of magnitude better than the present neutron EDM experimental limit, making it sensitive to a SUSY-like new physics mass scale up to 300 TeV.

  10. Ultra-dense WDM-PON delivering carrier-centralized Nyquist-WDM uplink with digital coherent detection.

    PubMed

    Dong, Ze; Yu, Jianjun; Chien, Hung-Chang; Chi, Nan; Chen, Lin; Chang, Gee-Kung

    2011-06-06

    We introduce an "ultra-dense" concept into next-generation WDM-PON systems, which transmits a Nyquist-WDM uplink with centralized uplink optical carriers and digital coherent detection for a future access network requiring both high capacity and high spectral efficiency. 80-km standard single mode fiber (SSMF) transmission of a Nyquist-WDM signal with 13 coherent 25-GHz-spaced, spectrally shaped optical carriers individually carrying 100-Gbit/s polarization-multiplexing quadrature phase-shift keying (PM-QPSK) upstream data has been experimentally demonstrated with negligible transmission penalty. The 13 frequency-locked wavelengths, with a uniform optical power level of -10 dBm and an OSNR of more than 50 dB, are generated from a single lightwave via a multi-carrier generator consisting of an optical phase modulator (PM), a Mach-Zehnder modulator (MZM), and a WSS. After the carriers are spaced at the baud rate, the sub-carriers are individually spectrally shaped to form the Nyquist-WDM signal. The Nyquist-WDM channels have less than 1-dB crosstalk penalty in optical signal-to-noise ratio (OSNR) at a 2 × 10^-3 bit-error rate (BER). The performance of a traditional coherent optical OFDM scheme and its restrictions on symbol synchronization and power difference are also experimentally compared and studied.

  11. Classroom Research: Assessment of Student Understanding of Sampling Distributions of Means and the Central Limit Theorem in Post-Calculus Probability and Statistics Classes

    ERIC Educational Resources Information Center

    Lunsford, M. Leigh; Rowell, Ginger Holmes; Goodson-Espy, Tracy

    2006-01-01

    We applied a classroom research model to investigate student understanding of sampling distributions of sample means and the Central Limit Theorem in post-calculus introductory probability and statistics courses. Using a quantitative assessment tool developed by previous researchers and a qualitative assessment tool developed by the authors, we…

  12. Asynchronous timing and Doppler recovery in DSP based DPSK modems for fixed and mobile satellite applications

    NASA Astrophysics Data System (ADS)

    Koblents, B.; Belanger, M.; Woods, D.; McLane, P. J.

    While conventional analog modems employ some kind of clock wave regenerator circuit for synchronous timing recovery, in sampled modem receivers the timing is recovered asynchronously to the incoming data stream, with no adjustment being made to the input sampling rate. All timing corrections are accomplished by digital operations on the sampled data stream, and timing recovery is asynchronous with the uncontrolled, input A/D system. A good timing error measurement algorithm is the zero crossing tracker proposed by Gardner. Digital, speech rate (2400 - 4800 bps) M-PSK modem receivers employing Gardner's zero crossing tracker were implemented and tested and found to achieve BER performance very close to theoretical values on the AWGN channel. Nyquist pulse shaped modem systems with excess bandwidth factors ranging from 60 to 100 percent were considered. We can show that for any symmetric M-PSK signal set, Gardner's NDA algorithm is free of pattern jitter for any carrier phase offset for rectangular pulses and for Nyquist pulses having 100 percent excess bandwidth. Also, the Nyquist pulse shaped system is studied on the mobile satellite channel, where Doppler shifts and multipath fading degrade the pi/4-DQPSK signal. Two simple modifications to Gardner's zero crossing tracker enable it to remain useful in the presence of multipath fading.
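
    For readers unfamiliar with Gardner's zero crossing tracker, the following is a minimal sketch of its non-data-aided timing error detector for a real-valued signal at two samples per symbol (the interpolation and loop filtering of a complete receiver are omitted; the complex-baseband form takes the real part of the analogous product):

    import numpy as np

    def gardner_ted(samples: np.ndarray) -> np.ndarray:
        """samples: baseband signal at 2 samples/symbol, strobes on even indices.
        Returns one timing-error estimate per symbol transition:
            e[k] = y(kT + T/2) * (y((k+1)T) - y(kT))
        """
        on_time = samples[0::2]   # y(kT)
        midway = samples[1::2]    # y(kT + T/2), near the zero crossings
        return midway[:-1] * (on_time[1:] - on_time[:-1])

    # In a tracking loop, e[k] is low-pass filtered and drives the digital
    # interpolator's phase, since the A/D sampling clock itself is untouched.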

  13. Asynchronous timing and Doppler recovery in DSP based DPSK modems for fixed and mobile satellite applications

    NASA Technical Reports Server (NTRS)

    Koblents, B.; Belanger, M.; Woods, D.; Mclane, P. J.

    1993-01-01

    While conventional analog modems employ some kind of clock wave regenerator circuit for synchronous timing recovery, in sampled modem receivers the timing is recovered asynchronously to the incoming data stream, with no adjustment being made to the input sampling rate. All timing corrections are accomplished by digital operations on the sampled data stream, and timing recovery is asynchronous with the uncontrolled, input A/D system. A good timing error measurement algorithm is the zero crossing tracker proposed by Gardner. Digital, speech rate (2400 - 4800 bps) M-PSK modem receivers employing Gardner's zero crossing tracker were implemented and tested and found to achieve BER performance very close to theoretical values on the AWGN channel. Nyquist pulse shaped modem systems with excess bandwidth factors ranging from 60 to 100 percent were considered. We can show that for any symmetric M-PSK signal set, Gardner's NDA algorithm is free of pattern jitter for any carrier phase offset for rectangular pulses and for Nyquist pulses having 100 percent excess bandwidth. Also, the Nyquist pulse shaped system is studied on the mobile satellite channel, where Doppler shifts and multipath fading degrade the pi/4-DQPSK signal. Two simple modifications to Gardner's zero crossing tracker enable it to remain useful in the presence of multipath fading.

  14. Optical and Radio Frequency Refractivity Fluctuations from High Resolution Point Sensors: Sea Breezes and Other Observations

    DTIC Science & Technology

    2007-03-01

    velocity and direction along with vertical velocities are derived from the measured time of flight of the ultrasonic signals (manufacturer's...data set. To prevent aliasing, a wave must be sampled at least twice per period, so the Nyquist frequency is f_N = f_s / 2. ...Sampling Requirements...an order of magnitude or more. To refine models or conduct climatological studies of Cn² requires direct measurements to identify the underlying

  15. Scaling and the frequency dependence of Nyquist plot maxima of the electrical impedance of the human thigh.

    PubMed

    Shiffman, Carl

    2017-11-30

    The aim is to define and elucidate the properties of reduced-variable Nyquist plots, based on non-invasive measurements of the electrical impedance of the human thigh. A retrospective analysis of the electrical impedances of 154 normal subjects measured over the past decade shows that 'scaling' of the Nyquist plots for human thigh muscles is a property shared by healthy thigh musculature, irrespective of subject and the length of muscle segment. Here the term scaling signifies the near and sometimes 'perfect' coalescence of the separate X versus R plots into one 'reduced' Nyquist plot by the simple expedient of dividing R and X by X_m, the value of X at the reactance maximum. To the extent allowed by noise levels, one can say that there is one 'universal' reduced Nyquist plot for the thigh musculature of healthy subjects. One feature of the Nyquist curves is not 'universal', however, namely the frequency f_m at which the maximum in X is observed, which is found to vary from 10 to 100 kHz depending on subject and segment length. Analysis shows, however, that the mean value of 1/f_m is an accurately linear function of segment length, though there is a small subject-to-subject random element as well. Also, following the recovery of an otherwise healthy victim of an ankle fracture demonstrates the clear superiority of measurements above about 800 kHz, where scaling is not observed, in contrast to measurements below about 400 kHz, where scaling is accurately obeyed. The ubiquity of 'scaling' casts new light on the interpretation of impedance results as they are used in electrical impedance myography and bioelectric impedance analysis.
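
    A minimal sketch of the reduction described above (hypothetical helper; sign conventions for the reactance are left to the reader):

    import numpy as np

    def reduced_nyquist(R: np.ndarray, X: np.ndarray):
        """R, X: resistance and reactance sampled over frequency for one
        subject/segment. Divides both by X_m, the value of X at the
        reactance maximum, so that curves from different subjects and
        segment lengths can be overlaid on one 'reduced' plot."""
        i_m = int(np.argmax(X))
        X_m = X[i_m]
        return R / X_m, X / X_m, i_m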

  16. Nyquist-WDM filter shaping with a high-resolution colorless photonic spectral processor.

    PubMed

    Sinefeld, David; Ben-Ezra, Shalva; Marom, Dan M

    2013-09-01

    We employ a spatial-light-modulator-based colorless photonic spectral processor with a spectral addressability of 100 MHz across a 100 GHz bandwidth for multichannel, high-resolution reshaping of the Gaussian channel response to a square-like shape, compatible with Nyquist WDM requirements.

  17. WEIGHTED LIKELIHOOD ESTIMATION UNDER TWO-PHASE SAMPLING

    PubMed Central

    Saegusa, Takumi; Wellner, Jon A.

    2013-01-01

    We develop asymptotic theory for weighted likelihood estimators (WLE) under two-phase stratified sampling without replacement. We also consider several variants of WLEs involving estimated weights and calibration. A set of empirical process tools are developed including a Glivenko–Cantelli theorem, a theorem for rates of convergence of M-estimators, and a Donsker theorem for the inverse probability weighted empirical processes under two-phase sampling and sampling without replacement at the second phase. Using these general results, we derive asymptotic distributions of the WLE of a finite-dimensional parameter in a general semiparametric model where an estimator of a nuisance parameter is estimable either at regular or nonregular rates. We illustrate these results and methods in the Cox model with right censoring and interval censoring. We compare the methods via their asymptotic variances under both sampling without replacement and the more usual (and easier to analyze) assumption of Bernoulli sampling at the second phase. PMID:24563559

  18. Zero Thermal Noise in Resistors at Zero Temperature

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes-Göran

    2016-06-01

    The bandwidth of transistors in logic devices approaches the quantum limit, where Johnson noise and associated error rates are supposed to be strongly enhanced. However, the related theory — asserting a temperature-independent quantum zero-point (ZP) contribution to Johnson noise, which dominates the quantum regime — is controversial and resolution of the controversy is essential to determine the real error rate and fundamental energy dissipation limits of logic gates in the quantum limit. The Callen-Welton formula (fluctuation-dissipation theorem) of voltage and current noise for a resistance is the sum of Nyquist’s classical Johnson noise equation and a quantum ZP term with a power density spectrum proportional to frequency and independent of temperature. The classical Johnson-Nyquist formula vanishes at the approach of zero temperature, but the quantum ZP term still predicts non-zero noise voltage and current. Here, we show that this noise cannot be reconciled with the Fermi-Dirac distribution, which defines the thermodynamics of electrons according to quantum-statistical physics. Consequently, Johnson noise must be nil at zero temperature, and non-zero noise found for certain experimental arrangements may be a measurement artifact, such as the one mentioned in Kleen’s uncertainty relation argument.
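
    The competing formulas can be compared numerically. The sketch below evaluates the classical Nyquist expression, the Planck form without the zero-point term, and the Callen-Welton form with it; the resistance, temperature, and frequency grid are arbitrary example values:

    import numpy as np

    k = 1.380649e-23    # Boltzmann constant, J/K
    h = 6.62607015e-34  # Planck constant, J s

    def s_nyquist(R, T, f):            # classical: S = 4 k T R
        return 4 * k * T * R * np.ones_like(f)

    def s_planck(R, T, f):             # quantum form without zero-point term
        return 4 * R * h * f / np.expm1(h * f / (k * T))

    def s_callen_welton(R, T, f):      # adds the zero-point term 2 R h f
        return s_planck(R, T, f) + 2 * R * h * f

    f = np.logspace(9, 13, 5)          # 1 GHz .. 10 THz
    R, T = 50.0, 0.1                   # 50 ohm resistor at 100 mK
    for name, fn in [("Nyquist", s_nyquist), ("Planck", s_planck),
                     ("Callen-Welton", s_callen_welton)]:
        print(name, fn(R, T, f))
    # As T -> 0 the first two vanish, while Callen-Welton tends to 2 R h f;
    # the abstract argues that this surviving term is unphysical.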

  19. Nonuniform sampling by quantiles.

    PubMed

    Craft, D Levi; Sonstrom, Reilly E; Rovnyak, Virginia G; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, whereas higher-dimensional schedules are similar to within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular-backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the General Public License.
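
    A minimal sketch of the 1D quantile step described above (hypothetical helper; the corner forcing, triangular backfill, edge forcing, and jittering options are omitted):

    import numpy as np

    def quantile_schedule(weights: np.ndarray, n_samples: int) -> np.ndarray:
        """weights: non-negative sampling density on the 1D Nyquist grid.
        Divides the CDF into n_samples equal-probability bins and returns
        the grid index at each bin's midpoint quantile."""
        cdf = np.cumsum(weights, dtype=float)
        cdf /= cdf[-1]
        q = (np.arange(n_samples) + 0.5) / n_samples
        return np.unique(np.searchsorted(cdf, q))

    # Exponentially decaying weights over a 128-point grid: the schedule is
    # deterministic and denser at early evolution times.
    grid = np.arange(128)
    print(quantile_schedule(np.exp(-grid / 40.0), 32))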

  20. Nonuniform sampling by quantiles

    NASA Astrophysics Data System (ADS)

    Craft, D. Levi; Sonstrom, Reilly E.; Rovnyak, Virginia G.; Rovnyak, David

    2018-03-01

    A flexible strategy for choosing samples nonuniformly from a Nyquist grid using the concept of statistical quantiles is presented for broad classes of NMR experimentation. Quantile-directed scheduling is intuitive and flexible for any weighting function, promotes reproducibility and seed independence, and is generalizable to multiple dimensions. In brief, weighting functions are divided into regions of equal probability, which define the samples to be acquired. Quantile scheduling therefore achieves close adherence to a probability distribution function, thereby minimizing gaps for any given degree of subsampling of the Nyquist grid. A characteristic of quantile scheduling is that one-dimensional, weighted NUS schedules are deterministic, whereas higher-dimensional schedules are similar to within a user-specified jittering parameter. To develop unweighted sampling, we investigated the minimum jitter needed to disrupt subharmonic tracts, and show that this criterion can be met in many cases by jittering within 25-50% of the subharmonic gap. For nD-NUS, three supplemental components to choosing samples by quantiles are proposed in this work: (i) forcing the corner samples to ensure sampling to specified maximum values in indirect evolution times, (ii) providing an option to triangular-backfill sampling schedules to promote dense/uniform tracts at the beginning of signal evolution periods, and (iii) providing an option to force the edges of nD-NUS schedules to be identical to the 1D quantiles. Quantile-directed scheduling meets the diverse needs of current NUS experimentation, but can also be used for future NUS implementations such as off-grid NUS and more. A computer program implementing these principles (a.k.a. QSched) in 1D- and 2D-NUS is available under the General Public License.

  1. Nyquist WDM superchannel using offset-16QAM and receiver-side digital spectral shaping.

    PubMed

    Xiang, Meng; Fu, Songnian; Tang, Ming; Tang, Haoyuan; Shum, Perry; Liu, Deming

    2014-07-14

    The performance of a Nyquist WDM superchannel using advanced modulation formats with coherent detection is degraded by both inter-symbol interference (ISI) and inter-channel interference (ICI). Here, we propose and numerically investigate a Nyquist WDM superchannel using offset-16QAM and receiver-side digital spectral shaping (RS-DSS), achieving a spectral efficiency of up to 7.44 bit/s/Hz with 7% hard-decision forward error correction (HD-FEC) overhead. Compared with a Nyquist WDM superchannel using 16QAM and RS-DSS, the proposed system has a 1.4 dB improvement in required OSNR at BER = 10^-3 in the case of back-to-back (B2B) transmission. Furthermore, the range of launched optical power allowed beyond the HD-FEC threshold is drastically increased, from -6 dBm to 1.2 dBm, after 960 km SSMF transmission with EDFA-only amplification. In particular, no more than 1.8 dB required-OSNR penalty at BER = 10^-3 is incurred by the proposed system even with the phase difference between channels varying from 0 to 360 degrees.

  2. Experimental demonstration of 608Gbit/s short reach transmission employing half-cycle 16QAM Nyquist-SCM signal and direct detection with 25Gbps EML.

    PubMed

    Zhong, Kangping; Zhou, Xian; Wang, Yiguang; Wang, Liang; Yuan, Jinhui; Yu, Changyuan; Lau, Alan Pak Tao; Lu, Chao

    2016-10-31

    In this paper, we experimentally demonstrate an IM/DD short-reach transmission system with a total capacity of 608 Gbit/s (net capacity of 565.4 Gbit/s excluding 7% FEC overhead) employing a half-cycle 16QAM Nyquist-SCM signal and a 25 Gbps EML at O band. The direct detection-faster than Nyquist (DD-FTN) technique was employed to compensate channel impairments. The number of taps of the DD-LMS and the tap coefficient of the post filter in DD-FTN were experimentally studied for different baud rates. Single-lane 152 Gbit/s transmission over 10 km of SSMF was experimentally demonstrated. Employing a 4-lane LAN-WDM architecture, a total capacity of 608 Gbit/s transmission over 2 km was successfully achieved with a receiver sensitivity lower than -4 dBm. To the best of the authors' knowledge, this is the highest reported baud rate for a half-cycle 16QAM Nyquist-SCM signal and the highest bit rate employing IM/DD and a 25 Gbps EML in a four-lane LAN-WDM architecture for short-reach systems in the O band.

  3. Joint digital signal processing for superchannel coherent optical communication systems.

    PubMed

    Liu, Cheng; Pan, Jie; Detwiler, Thomas; Stark, Andrew; Hsueh, Yu-Ting; Chang, Gee-Kung; Ralph, Stephen E

    2013-04-08

    Ultra-high-speed optical communication systems which can support ≥ 1 Tb/s per channel transmission will soon be required to meet the increasing capacity demand. However, 1 Tb/s over a single carrier requires a high-level modulation format (e.g., 1024QAM), a high baud rate, or both. Alternatively, grouping a number of tightly spaced "sub-carriers" to form a terabit superchannel increases channel capacity while minimizing the need for high-level modulation formats and high baud rates, which may allow existing formats, baud rates and components to be exploited. In ideal Nyquist-WDM superchannel systems, optical subcarriers with rectangular spectra are tightly packed at a channel spacing equal to the baud rate, thus achieving the Nyquist bandwidth limit. However, in practical Nyquist-WDM systems, precise electrical or optical control of channel spectra is required to avoid strong inter-channel interference (ICI). Here, we propose and demonstrate a new "super receiver" architecture for practical Nyquist-WDM systems, which jointly detects and demodulates multiple channels simultaneously and mitigates the penalties associated with the limitations of generating ideal Nyquist-WDM spectra. Our receiver-side solution relaxes the filter requirements imposed on the transmitter. Two joint DSP algorithms are developed for linear ICI cancellation and joint carrier-phase recovery. Improved system performance is observed with both experimental and simulation data. Performance analysis under different system configurations is conducted to demonstrate the feasibility and robustness of the proposed joint DSP algorithms.

  4. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices.

    PubMed

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; Shahid, Adnan; Moerman, Ingrid; De Poorter, Eli

    2017-09-12

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict requirements on sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and that RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or by directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for the recognition of wideband technologies on constrained devices in the context of dynamic spectrum access.

  5. Wireless Technology Recognition Based on RSSI Distribution at Sub-Nyquist Sampling Rate for Constrained Devices

    PubMed Central

    Liu, Wei; Kulin, Merima; Kazaz, Tarik; De Poorter, Eli

    2017-01-01

    Driven by the fast growth of wireless communication, the trend of sharing spectrum among heterogeneous technologies becomes increasingly dominant. Identifying concurrent technologies is an important step towards efficient spectrum sharing. However, due to the complexity of recognition algorithms and the strict requirements on sampling speed, communication systems capable of recognizing signals other than their own type are extremely rare. This work proves that the multi-modal distribution of the received signal strength indicator (RSSI) is related to the signals' modulation schemes and medium access mechanisms, and that RSSI from different technologies may exhibit highly distinctive features. A distinction is made between technologies with a streaming or a non-streaming property, and appropriate feature spaces can be established either by deriving parameters such as packet duration from RSSI or by directly using RSSI's probability distribution. An experimental study shows that even RSSI acquired at a sub-Nyquist sampling rate is able to provide sufficient features to differentiate technologies such as Wi-Fi, Long Term Evolution (LTE), Digital Video Broadcasting-Terrestrial (DVB-T) and Bluetooth. The usage of the RSSI distribution-based feature space is illustrated via a sample algorithm. Experimental evaluation indicates that more than 92% accuracy is achieved with the appropriate configuration. As the analysis of RSSI distribution is straightforward and less demanding in terms of system requirements, we believe it is highly valuable for the recognition of wideband technologies on constrained devices in the context of dynamic spectrum access. PMID:28895879

  6. Model-based frequency response characterization of a digital-image analysis system for epifluorescence microscopy

    NASA Technical Reports Server (NTRS)

    Hazra, Rajeeb; Viles, Charles L.; Park, Stephen K.; Reichenbach, Stephen E.; Sieracki, Michael E.

    1992-01-01

    Consideration is given to a model-based method for estimating the spatial frequency response of a digital-imaging system (e.g., a CCD camera) that is modeled as a linear, shift-invariant image acquisition subsystem that is cascaded with a linear, shift-variant sampling subsystem. The method characterizes the 2D frequency response of the image acquisition subsystem to beyond the Nyquist frequency by accounting explicitly for insufficient sampling and the sample-scene phase. Results for simulated systems and a real CCD-based epifluorescence microscopy system are presented to demonstrate the accuracy of the method.

  7. The fundamentals of average local variance--Part II: Sampling simple regular patterns with optical imagery.

    PubMed

    Bøcher, Peder Klith; McCloy, Keith R

    2006-02-01

    In this investigation, the characteristics of the average local variance (ALV) function are investigated through the acquisition of images at different spatial resolutions of constructed scenes of regular patterns of black and white squares. It is shown that the ALV plot consistently peaks at a spatial resolution at which the pixels have a size corresponding to half the distance between scene objects and that, under very specific conditions, it also peaks at a spatial resolution at which the pixel size corresponds to the whole distance between scene objects. It is argued that the peak at the object distance, when present, is an expression of the Nyquist sample rate. The presence of this peak is, hence, shown to be a function of the matching between the phase of the scene pattern and the phase of the sample grid, i.e., the image. When these phases match, a clear and distinct peak is produced on the ALV plot. The fact that the peak at half the distance consistently occurs in the ALV plot is linked to the circumstance that the sampling interval (distance between pixels) and the extent of the sampling unit (size of pixels) are equal. Hence, at twice the Nyquist sampling rate, each fundamental period of the pattern is covered by four pixels; therefore, at least one pixel is always completely embedded within one pattern element, regardless of sample-scene phase. If the objects in the scene are scattered with a distance larger than their extent, the peak will be related to the size by a factor larger than 1/2. This is suggested as the explanation for the results presented by others that the ALV plot is related to scene-object size by a factor of 1/2-3/4.
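
    A rough sketch of an ALV computation consistent with this description, where block averaging stands in for reimaging the scene at a coarser resolution (the 3x3 window and the test pattern are assumptions):

    import numpy as np

    def alv(image: np.ndarray, block: int) -> float:
        """Block-average 'image' by 'block' (a coarser pixel size), then
        return the mean local variance over sliding 3x3 windows."""
        h = (image.shape[0] // block) * block
        w = (image.shape[1] // block) * block
        coarse = image[:h, :w].reshape(h // block, block, w // block, block)
        coarse = coarse.mean(axis=(1, 3))
        H, W = coarse.shape
        vals = [np.var(coarse[i:i + 3, j:j + 3])
                for i in range(H - 2) for j in range(W - 2)]
        return float(np.mean(vals)) if vals else 0.0

    # Checkerboard of 8-pixel squares (pattern period 16 pixels): ALV is
    # expected to be largest near half the period and to vanish at the full
    # period, where every coarse pixel averages to the same constant.
    tile = np.kron(np.indices((16, 16)).sum(axis=0) % 2, np.ones((8, 8)))
    for b in (2, 4, 8, 16):
        print(b, round(alv(tile, b), 4))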

  8. The value of Bayes' theorem for interpreting abnormal test scores in cognitively healthy and clinical samples.

    PubMed

    Gavett, Brandon E

    2015-03-01

    The base rates of abnormal test scores in cognitively normal samples have been a focus of recent research. The goal of the current study is to illustrate how Bayes' theorem uses these base rates--along with the same base rates in cognitively impaired samples and prevalence rates of cognitive impairment--to yield probability values that are more useful for making judgments about the absence or presence of cognitive impairment. Correlation matrices, means, and standard deviations were obtained from the Wechsler Memory Scale--4th Edition (WMS-IV) Technical and Interpretive Manual and used in Monte Carlo simulations to estimate the base rates of abnormal test scores in the standardization and special groups (mixed clinical) samples. Bayes' theorem was applied to these estimates to identify probabilities of normal cognition based on the number of abnormal test scores observed. Abnormal scores were common in the standardization sample (65.4% scoring below a scaled score of 7 on at least one subtest) and more common in the mixed clinical sample (85.6% scoring below a scaled score of 7 on at least one subtest). Probabilities varied according to the number of abnormal test scores, base rates of normal cognition, and cutoff scores. The results suggest that interpretation of base rates obtained from cognitively healthy samples must also account for data from cognitively impaired samples. Bayes' theorem can help neuropsychologists answer questions about the probability that an individual examinee is cognitively healthy based on the number of abnormal test scores observed.
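
    The core calculation is compact; a minimal sketch in which the two base rates are taken from this abstract (at least one subtest scaled score below 7) while the 20% prevalence figure is purely hypothetical:

    def p_healthy(base_rate_healthy: float,
                  base_rate_impaired: float,
                  prevalence_impaired: float) -> float:
        """P(healthy | finding) by Bayes' theorem, where each base rate is
        P(finding | group), e.g. the finding '>= 1 abnormal score'."""
        prior_healthy = 1.0 - prevalence_impaired
        num = base_rate_healthy * prior_healthy
        den = num + base_rate_impaired * prevalence_impaired
        return num / den

    # 65.4% of the standardization sample and 85.6% of the mixed clinical
    # sample show the finding; with 20% prevalence, observing it moves
    # P(healthy) only from the 0.80 prior to about 0.75.
    print(round(p_healthy(0.654, 0.856, 0.20), 3))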

  9. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD).

    PubMed

    Bermúdez Ordoñez, Juan Carlos; Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-05-16

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the reconstructed output still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using SVD. Next, M-dimensional observation vectors are obtained from these left singular vectors, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can then be sampled below the Nyquist rate and still be reconstructed accurately via ℓ 1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect from retaining the useful signal and filtering out noise by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain.

  10. Energy Efficient GNSS Signal Acquisition Using Singular Value Decomposition (SVD)

    PubMed Central

    Arnaldo Valdés, Rosa María; Gómez Comendador, Fernando

    2018-01-01

    A significant challenge in global navigation satellite system (GNSS) signal processing is the requirement for a very high sampling rate. The recently emerging compressed sensing (CS) theory makes processing GNSS signals at a low sampling rate possible if the signal has a sparse representation in a certain space. Based on CS and SVD theories, an algorithm for sampling GNSS signals at a rate much lower than the Nyquist rate and reconstructing the compressed signal is proposed in this research; it is validated by confirming that the reconstructed output still supports signal detection using the standard fast Fourier transform (FFT) parallel frequency space search acquisition. A sparse representation of the GNSS signal is the most important precondition for CS; it is achieved by constructing a rectangular Toeplitz matrix (TZ) from the transmitted signal and calculating its left singular vectors using SVD. Next, M-dimensional observation vectors are obtained from these left singular vectors, which are equivalent to the sampler operator in standard compressive sensing theory; the signal can then be sampled below the Nyquist rate and still be reconstructed accurately via ℓ1 minimization using convex optimization. As an added value, there is a GNSS signal acquisition enhancement effect from retaining the useful signal and filtering out noise by projecting the signal onto the most significant proper orthogonal modes (PODs), which are the optimal distributions of signal power. The algorithm is validated with real recorded signals, and the results show that the proposed method is effective for sampling and reconstructing intermediate frequency (IF) GNSS signals in the discrete-time domain. PMID:29772731

  11. Development of a bio-magnetic measurement system and sensor configuration analysis for rats

    NASA Astrophysics Data System (ADS)

    Kim, Ji-Eun; Kim, In-Seon; Kim, Kiwoong; Lim, Sanghyun; Kwon, Hyukchan; Kang, Chan Seok; Ahn, San; Yu, Kwon Kyu; Lee, Yong-Ho

    2017-04-01

    Magnetoencephalography (MEG) based on superconducting quantum interference devices enables the measurement of very weak magnetic fields (10-1000 fT) generated from the human or animal brain. In this article, we introduce a small MEG system that we developed specifically for use with rats. Our system has the following characteristics: (1) variable distance between the pick-up coil and the outer Dewar bottom (~5 mm), (2) small pick-up coil (4 mm) for high spatial resolution, (3) good field sensitivity (45-80 fT/cm/√Hz), (4) a sensor interval satisfying the Nyquist spatial sampling theorem, and (5) small source localization error for the region to be investigated. To reduce source localization error, it is necessary to establish an optimal sensor layout. To this end, we simulated confidence volumes at each point on a grid on the surface of a virtual rat head. In this simulation, we used locally fitted spheres as model rat heads. This enabled us to consider more realistic volume currents. We constrained the model such that the dipoles could have only four possible orientations: the x- and y-axes from the original coordinates, and two tangentially layered dipoles (local x- and y-axes) in the locally fitted spheres. We considered the confidence volumes according to the sensor layout and dipole orientation and positions. We then conducted a preliminary test with a 4-channel MEG system prior to manufacturing the multi-channel system. Using the 4-channel MEG system, we measured rat magnetocardiograms. We obtained well defined P-, QRS-, and T-waves in rats with a maximum value of 15 pT/cm. Finally, we measured auditory evoked fields and steady state auditory evoked fields with maximum values of 400 fT/cm and 250 fT/cm, respectively.

  12. Phase Conjugated and Transparent Wavelength Conversions of Nyquist 16-QAM Signals Employing a Single-Layer Graphene Coated Fiber Device

    PubMed Central

    Hu, Xiao; Zeng, Mengqi; Long, Yun; Liu, Jun; Zhu, Yixiao; Zou, Kaiheng; Zhang, Fan; Fu, Lei; Wang, Jian

    2016-01-01

    We fabricate a nonlinear optical device based on a fiber pigtail cross-section coated with a single-layer graphene grown by chemical vapor deposition (CVD) method. Using the fabricated graphene-assisted nonlinear optical device and employing Nyquist 16-ary quadrature amplitude modulation (16-QAM) signal, we experimentally demonstrate phase conjugated wavelength conversion by degenerate four-wave mixing (FWM) and transparent wavelength conversion by non-degenerate FWM in graphene. We study the conversion efficiency as functions of the pump power and pump wavelength and evaluate the bit-error rate (BER) performance. We also compare the time-varying symbol sequence for graphene-assisted phase conjugated and transparent wavelength conversions of Nyquist 16-QAM signal. PMID:26932470

  13. Mechanical Characterization of Anion Exchange Membranes Under Controlled Environmental Conditions

    DTIC Science & Technology

    2015-05-11

    are a common mechanical failure in fuel cell membranes, and elongation at break correlates well with microcrack resistance [29]. In an effort to...TestEquity sample chamber controlled temperature and humidity during data acquisition. Membrane resistance was defined as the low-frequency intercept of...the Nyquist impedance plot, and conductivity, σ, was calculated using the film dimensions, where R is the membrane resistance and l is the length

  14. Computationally efficient video restoration for Nyquist sampled imaging sensors combining an affine-motion-based temporal Kalman filter and adaptive Wiener filter.

    PubMed

    Rucci, Michael; Hardie, Russell C; Barnard, Kenneth J

    2014-05-01

    In this paper, we present a computationally efficient video restoration algorithm to address both blur and noise for a Nyquist sampled imaging system. The proposed method utilizes a temporal Kalman filter followed by a correlation-model based spatial adaptive Wiener filter (AWF). The Kalman filter employs an affine background motion model and novel process-noise variance estimate. We also propose and demonstrate a new multidelay temporal Kalman filter designed to more robustly treat local motion. The AWF is a spatial operation that performs deconvolution and adapts to the spatially varying residual noise left in the Kalman filter stage. In image areas where the temporal Kalman filter is able to provide significant noise reduction, the AWF can be aggressive in its deconvolution. In other areas, where less noise reduction is achieved with the Kalman filter, the AWF balances the deconvolution with spatial noise reduction. In this way, the Kalman filter and AWF work together effectively, but without the computational burden of full joint spatiotemporal processing. We also propose a novel hybrid system that combines a temporal Kalman filter and BM3D processing. To illustrate the efficacy of the proposed methods, we test the algorithms on both simulated imagery and video collected with a visible camera.
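
    As a rough illustration of the temporal stage only, here is a scalar per-pixel Kalman filter on already-registered frames; the paper's affine background motion model, process-noise variance estimate, multidelay extension, and AWF deconvolution stage are not reproduced:

    import numpy as np

    def temporal_kalman(frames, q=1.0, r=25.0):
        """frames: iterable of registered 2-D arrays (same shape).
        q: process-noise variance, r: measurement-noise variance.
        Yields the temporally filtered estimate for each frame."""
        x = p = None
        for z in frames:
            if x is None:                  # initialize from the first frame
                x = z.astype(float)
                p = np.full(z.shape, r, dtype=float)
            else:
                p = p + q                  # predict (near-static scene model)
                g = p / (p + r)            # Kalman gain
                x = x + g * (z - x)        # update toward the new frame
                p = (1.0 - g) * p
            yield x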

  15. Resolution of the EPR Paradox for Fermion Spin Correlations

    NASA Astrophysics Data System (ADS)

    Close, Robert

    2011-10-01

    The EPR paradox addresses the question of whether a physical system can have a definite state independent of its measurement. Bell's Theorem places limits on correlations between local measurements of particles whose properties are established prior to measurement. Experimental violation of Bell's theorem has been regarded as evidence against the existence of a definite state prior to measurement. We model fermions as having a spatial distribution of spin values, so that a Stern-Gerlach device samples the spin distribution differently at different orientations. The computed correlations agree with quantum mechanical predictions and experimental observations. Bell's Theorem is not applicable because for any sampling of angles, different points on the sphere have different density of states.

  16. Robust Control Design for Flight Control

    DTIC Science & Technology

    1989-07-01

    controller may be designed to produce desired responses to pilot commands, responses to external (atmospheric) disturbances may be unusual and...suggested for stabilizing open-loop unstable aircraft result in nonminimum-phase zeros in the dynamics as seen by the pilot. This issue has not been...stability test it does retain several essential features of the popular single-loop test developed by Nyquist. In particular, it identifies a Nyquist

  17. Tutorial on Fourier space coverage for scattering experiments, with application to SAR

    NASA Astrophysics Data System (ADS)

    Deming, Ross W.

    2010-04-01

    The Fourier Diffraction Theorem relates the data measured during electromagnetic, optical, or acoustic scattering experiments to the spatial Fourier transform of the object under test. The theorem is well known, but since it is based on integral equations and complicated mathematical expansions, the typical derivation may be difficult for the non-specialist. In this paper, the theorem is derived and presented using simple geometry, plus undergraduate-level physics and mathematics. For practitioners of synthetic aperture radar (SAR) imaging, the theorem is important to understand because it leads to a simple geometric and graphical understanding of image resolution and sampling requirements, and of how they are affected by radar system parameters and experimental geometry. Also, the theorem can be used as a starting point for imaging algorithms and motion compensation methods. Several examples are given in this paper for realistic scenarios.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H.

    We discuss the results of SEM and TEM measurements with the BPRML test samples fabricated from a BPRML (WSi2/Si with fundamental layer thickness of 3 nm) with a Dual Beam FIB (focused ion beam)/SEM technique. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yong, E-mail: 83229994@qq.com; Ge, Hao, E-mail: haoge@pku.edu.cn; Xiong, Jie, E-mail: jiexiong@umac.mo

    The fluctuation theorem is one of the major achievements in the field of nonequilibrium statistical mechanics during the past two decades. Very few results exist on the steady-state fluctuation theorem for the sample entropy production rate, in the sense of a large deviation principle for diffusion processes, owing to technical difficulties. Here we give a proof of the steady-state fluctuation theorem for a diffusion process in magnetic fields, with explicit expressions for the free energy function and the rate function. The proof is based on the Karhunen-Loève expansion of a complex-valued Ornstein-Uhlenbeck process.

  20. Kharitonov's theorem: Generalizations and algorithms

    NASA Technical Reports Server (NTRS)

    Rublein, George

    1989-01-01

    In 1978, the Russian mathematician V. Kharitonov published a remarkably simple necessary and sufficient condition in order that a rectangular parallelepiped of polynomials be a stable set. Here, stable is taken to mean that the polynomials have no roots in the closed right-half of the complex plane. The possibility of generalizing this result was studied by numerous authors. A set, Q, of polynomials is given and a necessary and sufficient condition that the set be stable is sought. Perhaps the most general result is due to Barmish, who takes for Q a polytope and proceeds to construct a complicated nonlinear function, H, of the points in Q. With the notion of stability which was adopted, Barmish asks that the boundary of the closed right-half plane be swept: the set G = {jω : -∞ < ω < ∞} is considered, and for each δ = jω ∈ G, H(δ) > 0 is required. Barmish's scheme has the merit that it describes a true generalization of Kharitonov's theorem. On the other hand, even when Q is a polyhedron, the definition of H requires that one do an optimization over the entire set of vertices, and then a subsequent optimization over an auxiliary parameter. In the present work, only the case where Q is a polyhedron is considered, and the standard definition of stability described above is used. There are straightforward generalizations of the method to the case of discrete stability or to cases where certain root positions are deemed desirable. The cases where Q is non-polyhedral are less certain as candidates for the method. Essentially, a method of geometric programming was applied to the problem of finding maximum and minimum angular displacements of points in the Nyquist locus {Q(jω) : -∞ < ω < ∞}. There is an obvious connection with the boundary-sweeping requirement of Barmish.
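
    For reference, Kharitonov's own test is short enough to sketch; the four-polynomial construction and the root-based Hurwitz check below are standard, and this is an illustrative sketch rather than the geometric-programming method of the present work:

    import numpy as np

    def kharitonov_polys(lo, hi):
        """lo, hi: coefficient bounds in ascending powers. Returns the four
        Kharitonov polynomials, built from the repeating lower/upper bound
        patterns LLUU, UULL, LUUL and ULLU."""
        patterns = ("LLUU", "UULL", "LUUL", "ULLU")
        return [[(lo if pat[i % 4] == "L" else hi)[i] for i in range(len(lo))]
                for pat in patterns]

    def is_hurwitz(coeffs_ascending):
        roots = np.roots(coeffs_ascending[::-1])   # np.roots wants descending
        return bool(np.all(roots.real < 0))

    def interval_poly_stable(lo, hi):
        return all(is_hurwitz(p) for p in kharitonov_polys(lo, hi))

    # s^3 + a2 s^2 + a1 s + a0 with a0 in [1,2], a1 in [2,3], a2 in [3,4]:
    print(interval_poly_stable([1.0, 2.0, 3.0, 1.0], [2.0, 3.0, 4.0, 1.0]))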

  1. NPP ATMS Prelaunch Performance Assessment and Sensor Data Record Validation

    DTIC Science & Technology

    2011-04-29

    ...ATMS to sense scattering of cold cosmic background radiance from the tops of precipitating clouds allows the retrieval of precipitation intensities...operational and research missions over the last 40 years. The Cross-track Infrared and Microwave Sounding Suite (CrIMSS), consisting of the Cross-track...Infrared Sounder (CrIS) and the first space-based, Nyquist-sampled cross-track microwave sounder, the Advanced Technology Microwave Sounder (ATMS), will

  2. Joint correction of Nyquist artifact and minuscule motion-induced aliasing artifact in interleaved diffusion weighted EPI data using a composite two-dimensional phase correction procedure

    PubMed Central

    Chang, Hing-Chiu; Chen, Nan-kuei

    2016-01-01

    Diffusion-weighted imaging (DWI) obtained with an interleaved echo-planar imaging (EPI) pulse sequence has great potential for characterizing brain tissue properties at high spatial resolution. However, interleaved EPI based DWI data may be corrupted by various types of aliasing artifacts. First, inconsistencies in k-space data obtained with opposite readout gradient polarities result in Nyquist artifact, which is usually reduced with 1D phase correction in post-processing. When there exist eddy current cross terms (e.g., in oblique-plane EPI), 2D phase correction is needed to effectively reduce Nyquist artifact. Second, minuscule motion-induced phase inconsistencies in interleaved DWI scans result in image-domain aliasing artifact, which can be removed with reconstruction procedures that take shot-to-shot phase variations into consideration. In existing interleaved DWI reconstruction procedures, Nyquist artifact and minuscule motion-induced aliasing artifact are typically removed sequentially in two stages. Although the two-stage phase correction generally performs well for non-oblique plane EPI data obtained from a well-calibrated system, the residual artifacts may still be pronounced in oblique-plane EPI data or when there exist eddy current cross terms. To address this challenge, here we report a new composite 2D phase correction procedure, which effectively removes Nyquist artifact and minuscule motion-induced aliasing artifact jointly in a single step. Our experimental results demonstrate that the new 2D phase correction method reduces artifacts in interleaved EPI based DWI data much more effectively than the existing two-stage artifact correction procedures. The new method robustly enables high-resolution DWI and should prove highly valuable for clinical use and research studies of DWI. PMID:27114342

  3. Integrated control-system design via generalized LQG (GLQG) theory

    NASA Technical Reports Server (NTRS)

    Bernstein, Dennis S.; Hyland, David C.; Richter, Stephen; Haddad, Wassim M.

    1989-01-01

    Thirty years of control systems research has produced an enormous body of theoretical results in feedback synthesis. Yet such results see relatively little practical application, and there remains an unsettling gap between classical single-loop techniques (Nyquist, Bode, root locus, pole placement) and modern multivariable approaches (LQG and H-infinity theory). Large scale, complex systems, such as high performance aircraft and flexible space structures, now demand efficient, reliable design of multivariable feedback controllers which optimally trade off performance against modeling accuracy, bandwidth, sensor noise, actuator power, and control law complexity. A methodology is described which encompasses numerous practical design constraints within a single unified formulation. The approach, which is based upon coupled systems of modified Riccati and Lyapunov equations, encompasses time-domain linear-quadratic-Gaussian theory and frequency-domain H-infinity theory, as well as classical objectives such as gain and phase margin via the Nyquist circle criterion. In addition, this approach encompasses the optimal projection approach to reduced-order controller design. The current status of the overall theory will be reviewed, including both continuous-time and discrete-time (sampled-data) formulations.

  4. Image Quality Modeling and Characterization of Nyquist Sampled Framing Systems with Operational Considerations for Remote Sensing

    NASA Astrophysics Data System (ADS)

    Garma, Rey Jan D.

    The trade between detector and optics performance is often conveyed through the Q metric, defined as the ratio of detector sampling frequency to optical cutoff frequency. Historically, sensors have operated at Q ≈ 1, which introduces aliasing but increases the system modulation transfer function (MTF) and signal-to-noise ratio (SNR). Though mathematically suboptimal, such designs have been operationally ideal when considering system parameters such as pointing stability and detector performance. Substantial advances in the read noise and quantum efficiency of modern detectors may compensate for the negative aspects of balancing detector/optics performance, presenting an opportunity to revisit the potential for implementing Nyquist-sampled (Q ≈ 2) sensors. A digital image chain simulation is developed and validated against a laboratory testbed using objective and subjective assessments. Objective assessments are accomplished by comparing the modeled MTF against measurements from slant-edge photographs. Subjective assessments are carried out through a psychophysical study in which subjects rate simulation and testbed imagery on a DeltaNIIRS scale with the aid of a marker set. Using the validated model, additional test cases are simulated to study the effects of increased detector sampling on image quality with operational considerations. First, a factorial experiment using Q-sampling, pointing stability, integration time, and detector performance is conducted to measure the main effects and interactions of each on the response variable, DeltaNIIRS. To assess the fidelity of current models, variants of the General Image Quality Equation (GIQE) are evaluated against subject-provided ratings, and two modified GIQE versions are proposed. Finally, using the validated simulation and modified IQE, trades are conducted to ascertain the feasibility of implementing Q ≈ 2 designs in future systems.

  5. Blind I/Q imbalance and nonlinear ISI mitigation in Nyquist-SCM direct detection system with cascaded widely linear and Volterra equalizer

    NASA Astrophysics Data System (ADS)

    Liu, Na; Ju, Cheng

    2018-02-01

    A Nyquist-SCM signal after fiber transmission, direct detection (DD), and analog down-conversion suffers simultaneously from linear ISI, nonlinear ISI, and I/Q imbalance. A theoretical analysis based on widely linear (WL) processing and Volterra series is given to explain the relationship and interaction of these three interferences. A blind equalization algorithm, a cascaded WL and Volterra equalizer, is designed to mitigate all three. Furthermore, the feasibility of the proposed cascaded algorithm is experimentally demonstrated on a 40-Gbps 16-quadrature-amplitude-modulation (QAM) virtual single sideband (VSSB) Nyquist-SCM DD system over 100-km standard single mode fiber (SSMF) transmission. In addition, the performances of a conventional strictly linear equalizer, a WL equalizer, a Volterra equalizer, and the cascaded WL and Volterra equalizer are experimentally evaluated and compared.

  6. K2 Campaign 5 observations of pulsating subdwarf B stars: binaries and super-Nyquist frequencies

    NASA Astrophysics Data System (ADS)

    Reed, M. D.; Armbrecht, E. L.; Telting, J. H.; Baran, A. S.; Østensen, R. H.; Blay, Pere; Kvammen, A.; Kuutma, Teet; Pursimo, T.; Ketzer, L.; Jeffery, C. S.

    2018-03-01

    We report the discovery of three pulsating subdwarf B stars in binary systems observed with the Kepler space telescope during Campaign 5 of K2. EPIC 211696659 (SDSS J083603.98+155216.4) is a g-mode pulsator with a white dwarf companion and a binary period of 3.16 d. EPICs 211823779 (SDSS J082003.35+173914.2) and 211938328 (LB 378) are both p-mode pulsators with main-sequence F companions. The orbit of EPIC 211938328 is long (635 ± 146 d) while we cannot constrain that of EPIC 211823779. The p modes are near the Nyquist frequency and so we investigate ways to discriminate super- from sub-Nyquist frequencies. We search for rotationally induced frequency multiplets and all three stars appear to be slow rotators with EPIC 211696659 subsynchronous to its orbit.

  7. Zero-point term and quantum effects in the Johnson noise of resistors: a critical appraisal

    NASA Astrophysics Data System (ADS)

    Kish, Laszlo B.; Niklasson, Gunnar A.; Granqvist, Claes G.

    2016-05-01

    There is a longstanding debate about the zero-point term in the Johnson noise voltage of a resistor. This term originates from a quantum-theoretical treatment of the fluctuation-dissipation theorem (FDT). Is the zero-point term really there, or is it only an experimental artifact, due to the uncertainty principle, for phase-sensitive amplifiers? Could it be removed by renormalization of theories? We discuss some historical measurement schemes that do not lead to the effect predicted by the FDT, and we analyse new features that emerge when the consequences of the zero-point term are measured via the mean energy and force in a capacitor shunting the resistor. If these measurements verify the existence of a zero-point term in the noise, then two types of perpetual motion machines can be constructed. Further investigation with the same approach shows that, in the quantum limit, the Johnson-Nyquist formula is also invalid under general conditions even though it is valid for a resistor-antenna system. Therefore we conclude that in a satisfactory quantum theory of the Johnson noise, the FDT must, as a minimum, include also the measurement system used to evaluate the observed quantities. Issues concerning the zero-point term may also have implications for phenomena in advanced nanotechnology.

  8. Some limit theorems for ratios of order statistics from uniform random variables.

    PubMed

    Xu, Shou-Fang; Miao, Yu

    2017-01-01

    In this paper, we study the ratios of order statistics based on samples drawn from uniform distribution and establish some limit properties such as the almost sure central limit theorem, the large deviation principle, the Marcinkiewicz-Zygmund law of large numbers and complete convergence.

  9. Analysis of the faster-than-Nyquist optimal linear multicarrier system

    NASA Astrophysics Data System (ADS)

    Marquet, Alexandre; Siclet, Cyrille; Roque, Damien

    2017-02-01

    Faster-than-Nyquist signaling enables better spectral efficiency at the expense of increased computational complexity. Regarding multicarrier communications, previous work mainly relied on the study of non-linear systems exploiting coding and/or equalization techniques, with no particular optimization of the linear part of the system. In this article, we analyze the performance of the optimal linear multicarrier system when used together with non-linear receiving structures (iterative decoding and decision feedback equalization), or in a standalone fashion. We also investigate the limits of the normality assumption for the interference, used when implementing such non-linear systems. The use of this optimal linear system leads to a closed-form expression for the bit-error probability that can be used to predict performance and help the design of coded systems. Our work also highlights the great performance/complexity trade-off offered by decision feedback equalization in a faster-than-Nyquist context.

  10. Limit Theory for Panel Data Models with Cross Sectional Dependence and Sequential Exogeneity.

    PubMed

    Kuersteiner, Guido M; Prucha, Ingmar R

    2013-06-01

    The paper derives a general Central Limit Theorem (CLT) and asymptotic distributions for sample moments related to panel data models with large n. The results allow for the data to be cross sectionally dependent, while at the same time allowing the regressors to be only sequentially rather than strictly exogenous. The setup is sufficiently general to accommodate situations where cross sectional dependence stems from spatial interactions and/or from the presence of common factors. The latter leads to the need for random norming. The limit theorem for sample moments is derived by showing that the moment conditions can be recast such that a martingale difference array central limit theorem can be applied. We prove such a central limit theorem by first extending results for stable convergence in Hall and Heyde (1980) to non-nested martingale arrays relevant for our applications. We illustrate our result by establishing a generalized estimation theory for GMM estimators of a fixed effect panel model without imposing i.i.d. or strict exogeneity conditions. We also discuss a class of Maximum Likelihood (ML) estimators that can be analyzed using our CLT.

  11. Deterministic multidimensional nonuniform gap sampling.

    PubMed

    Worley, Bradley; Powers, Robert

    2015-12-01

    Born from empirical observations in nonuniformly sampled multidimensional NMR data relating to gaps between sampled points, the Poisson-gap sampling method has enjoyed widespread use in biomolecular NMR. While the majority of nonuniform sampling schemes are fully randomly drawn from probability densities that vary over a Nyquist grid, the Poisson-gap scheme employs constrained random deviates to minimize the gaps between sampled grid points. We describe a deterministic gap sampling method, based on the average behavior of Poisson-gap sampling, which performs comparably to its random counterpart with the additional benefit of completely deterministic behavior. We also introduce a general algorithm for multidimensional nonuniform sampling based on a gap equation, and apply it to yield a deterministic sampling scheme that combines burst-mode sampling features with those of Poisson-gap schemes. Finally, we derive a relationship between stochastic gap equations and the expectation value of their sampling probability densities. Copyright © 2015 Elsevier Inc. All rights reserved.
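
    For context, a compact sketch of the randomized Poisson-gap idea that the deterministic scheme above averages over, assuming a 1D Nyquist grid and a quarter-sine gap weighting; the bisection loop and parameter names are illustrative, not the authors' code (Python):

      import numpy as np

      def poisson_gap_schedule(grid_size, n_samples, seed=0):
          """Randomized 1D Poisson-gap schedule on a Nyquist grid.

          Gaps between sampled points are Poisson deviates whose mean
          follows a quarter-sine weight, keeping gaps small early on,
          where the decaying NMR signal is strongest.
          """
          rng = np.random.default_rng(seed)
          lam_lo, lam_hi = 0.0, 4.0 * grid_size / n_samples
          # Tune the gap scale by bisection until the schedule size is
          # close to the requested number of samples.
          for _ in range(40):
              lam = 0.5 * (lam_lo + lam_hi)
              points, k = [], 0
              while k < grid_size:
                  points.append(k)
                  k += 1 + rng.poisson(lam * np.sin(0.5 * np.pi * (k + 1) / grid_size))
              if len(points) > n_samples:
                  lam_lo = lam      # too dense: enlarge the mean gap
              else:
                  lam_hi = lam
          return np.array(points)

      # Example: roughly 128 of 1024 grid points.
      print(len(poisson_gap_schedule(1024, 128)))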

  12. Equalization enhanced phase noise in Nyquist-spaced superchannel transmission systems using multi-channel digital back-propagation

    PubMed Central

    Xu, Tianhua; Liga, Gabriele; Lavery, Domaniç; Thomsen, Benn C.; Savory, Seb J.; Killey, Robert I.; Bayvel, Polina

    2015-01-01

    Superchannel transmission with sub-channels spaced at the symbol rate, known as Nyquist spacing, has been demonstrated to effectively maximize optical communication channel capacity and spectral efficiency. However, the achievable capacity and reach of transmission systems using advanced modulation formats are affected by fibre nonlinearities and equalization enhanced phase noise (EEPN). Fibre nonlinearities can be effectively compensated using digital back-propagation (DBP). However, EEPN, which arises from the interaction between laser phase noise and dispersion, cannot be efficiently mitigated and can significantly degrade the performance of transmission systems. Here we report the first investigation of the origin and the impact of EEPN in a Nyquist-spaced superchannel system, employing electronic dispersion compensation (EDC) and multi-channel DBP (MC-DBP). Analysis was carried out in a Nyquist-spaced 9-channel 32-Gbaud DP-64QAM transmission system. Results confirm that EEPN significantly degrades the performance of all sub-channels of the superchannel system and that the distortions are more severe for the outer sub-channels, both with EDC and with MC-DBP. It is also found that the origin of EEPN depends on the relative position between the carrier phase recovery module and the EDC (or MC-DBP) module. Considering EEPN, diverse coding techniques and modulation formats have to be applied to optimize different sub-channels in superchannel systems. PMID:26365422

  13. Optical subcarrier processing for Nyquist SCM signals via coherent spectrum overlapping in four-wave mixing with coherent multi-tone pump.

    PubMed

    Lu, Guo-Wei; Luís, Ruben S; Mendinueta, José Manuel Delgado; Sakamoto, Takahide; Yamamoto, Naokatsu

    2018-01-22

    As one of the promising multiplexing and multicarrier modulation technologies, Nyquist subcarrier multiplexing (Nyquist SCM) has recently attracted research attention as a route to ultra-fast and ultra-spectral-efficient optical networks. In this paper, we propose and experimentally demonstrate optical subcarrier processing technologies for Nyquist SCM signals, such as frequency conversion, multicasting, and data aggregation of subcarriers, through coherent spectrum overlapping between subcarriers in four-wave mixing (FWM) with a coherent multi-tone pump. The data aggregation is realized by coherently superposing or combining low-level subcarriers to yield high-level subcarriers in the optical field. Moreover, multiple replicas of the data-aggregated subcarriers and of the subcarriers carrying the original data are obtained. In the experiment, two 5 Gbps quadrature phase-shift keying (QPSK) subcarriers are coherently combined to generate a 10 Gbps 16-ary quadrature amplitude modulation (16QAM) subcarrier with frequency conversion through the FWM with a coherent multi-tone pump. Less than 1 dB of optical signal-to-noise ratio (OSNR) penalty variation is observed for the synthesized 16QAM subcarriers after the data aggregation. In addition, some subcarriers are kept in the original QPSK format, with a power penalty of less than 0.4 dB with respect to the original input subcarriers. The proposed subcarrier processing technology enables flexible spectral management in future dynamic optical networks.

  14. Imaging single cells in a beam of live cyanobacteria with an X-ray laser (CXIDB ID 27)

    DOE Data Explorer

    van der Schot, Gijs

    2015-02-10

    Diffraction pattern of a micron-sized S. elongatus cell at 1,100 eV photon energy (1.13 nm wavelength) with ~10^11 photons per square micron on the sample in ~70 fs. The signal to noise ratio at 4 nm resolution is 3.7 with 0.24 photons per Nyquist pixel. The cell was alive at the time of the exposure. The central region of the pattern (dark red) is saturated and this prevented reliable image reconstruction.

  15. Membrane Vibration Analysis Above the Nyquist Limit with Fluorescence Videogrammetry

    NASA Technical Reports Server (NTRS)

    Dorrington, Adrian A.; Jones, Thomas W.; Danehy, Paul M.; Pappa, Richard S.

    2004-01-01

    A new method for generating photogrammetric targets by projecting an array of laser beams onto a membrane doped with fluorescent laser dye has recently been developed. In this paper we review this new fluorescence-based technique, then proceed to show how it can be used for dynamic measurements, and how a short-pulsed (10 ns) laser allows the measurement of vibration modes at frequencies several times the sampling frequency. In addition, we present experimental results showing the determination of fundamental and harmonic vibration modes of a drum-style dye-doped polymer membrane tautly mounted on a 12-inch circular hoop and excited with 30 Hz and 62 Hz sinusoidal acoustic waves. The projected laser dot pattern was generated by passing the beam from a pulsed Nd:YAG laser through a diffractive optical element, and the resulting fluorescence was imaged with three digital video cameras, all synchronized with a pulse and delay generator. Although the video cameras are capable of 240 Hz frame rates, the laser's output was limited to 30 Hz and below. Consequently, aliasing techniques were used to allow the measurement of vibration modes up to 186 Hz with a Nyquist limit of less than 15 Hz.
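
    As a minimal illustration of the folding arithmetic behind such sub-Nyquist measurements (the 30 Hz frame rate and 186 Hz mode are taken from the abstract; the helper name is made up, and this is not the authors' processing code; Python):

      def aliased_frequency(f_true, f_sample):
          """Apparent frequency of a tone sampled below its Nyquist rate."""
          f = f_true % f_sample            # remove whole sampling cycles
          return min(f, f_sample - f)      # fold into [0, f_sample / 2]

      # A 186 Hz membrane mode sampled at 30 Hz frames shows up at 6 Hz,
      # safely inside the < 15 Hz Nyquist limit quoted above.
      print(aliased_frequency(186.0, 30.0))   # -> 6.0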

  16. Verification or Proof: Justification of Pythagoras' Theorem in Chinese Mathematics Classrooms

    ERIC Educational Resources Information Center

    Huang, Rongjin

    2005-01-01

    This paper presents key findings of my research on approaches to justification, investigating how a sample of teachers in Hong Kong and Shanghai taught the topic of Pythagoras' theorem. In this study, 8 Hong Kong videos taken from the TIMSS 1999 Video Study and 11 Shanghai videos videotaped by the researcher comprised the database. It was found that…

  18. Experimental comparison of direct detection Nyquist SSB transmission based on silicon dual-drive and IQ Mach-Zehnder modulators with electrical packaging.

    PubMed

    Ruan, Xiaoke; Li, Ke; Thomson, David J; Lacava, Cosimo; Meng, Fanfan; Demirtzioglou, Iosif; Petropoulos, Periklis; Zhu, Yixiao; Reed, Graham T; Zhang, Fan

    2017-08-07

    We have designed and fabricated a silicon photonic in-phase/quadrature (IQ) modulator based on a nested dual-drive Mach-Zehnder structure incorporating electrical packaging. We have assessed its use for generating Nyquist-shaped single sideband (SSB) signals by operating it either as an IQ Mach-Zehnder modulator (IQ-MZM) or using just a single branch of the dual-drive Mach-Zehnder modulator (DD-MZM). The impact of electrical packaging on the modulator bandwidth is also analyzed. We demonstrate 40 Gb/s (10 Gbaud) 16-ary quadrature amplitude modulation (16-QAM) Nyquist-shaped SSB transmission over 160 km standard single mode fiber (SSMF). Without using any chromatic dispersion compensation, bit error rates (BERs) of 5.4 × 10^-4 and 9.0 × 10^-5 were measured for the DD-MZM and IQ-MZM, respectively, far below the 7% hard-decision forward error correction threshold. The performance difference between IQ-MZM and DD-MZM is most likely due to the non-ideal electrical packaging. Our work is the first experimental comparison between silicon IQ-MZM and silicon DD-MZM in generating SSB signals. We also demonstrate 50 Gb/s (12.5 Gbaud) 16-QAM Nyquist-shaped SSB transmission over 320 km SSMF with a BER of 2.7 × 10^-3. Both the silicon IQ-MZM and the DD-MZM show potential for optical transmission at metro scale and for data center interconnection.

  19. Performance and stability of telemanipulators using bilateral impedance control. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Moore, Christopher Lane

    1991-01-01

    A new method of control for telemanipulators called bilateral impedance control is investigated. This new method differs from previous approaches in that interaction forces are used as the communication signals between the master and slave robots. The new control architecture has several advantages: (1) It allows the master robot and the slave robot to be stabilized independently without becoming involved in the overall system dynamics; (2) It permits the system designers to arbitrarily specify desired performance characteristics such as the force and position ratios between the master and slave; (3) The impedance at both ends of the telerobotic system can be modulated to suit the requirements of the task. The main goals of the research are to characterize the performance and stability of the new control architecture. The dynamics of the telerobotic system are described by a bond graph model that illustrates how energy is transformed, stored, and dissipated. Performance can be completely described by a set of three independent parameters. These parameters are fundamentally related to the structure of the H matrix that regulates the communication of force signals within the system. Stability is analyzed with two mathematical techniques: the Small Gain Theorem and the Multivariable Nyquist Criterion. The theoretical predictions for performance and stability are experimentally verified by implementing the new control architecture on a multidegree of freedom telemanipulator.

  20. Are reconstruction filters necessary?

    NASA Astrophysics Data System (ADS)

    Holst, Gerald C.

    2006-05-01

    Shannon's sampling theorem (also called the Shannon-Whittaker-Kotel'nikov theorem) was developed for the digitization and reconstruction of sinusoids. Strict adherence is required when frequency preservation is important. Three conditions must be met to satisfy the sampling theorem: (1) The signal must be band-limited, (2) the digitizer must sample the signal at an adequate rate, and (3) a low-pass reconstruction filter must be present. In an imaging system, the signal is band-limited by the optics. For most imaging systems, the signal is not adequately sampled resulting in aliasing. While the aliasing seems excessive mathematically, it does not significantly affect the perceived image. The human visual system detects intensity differences, spatial differences (shapes), and color differences. The eye is less sensitive to frequency effects and therefore sampling artifacts have become quite acceptable. Indeed, we love our television even though it is significantly undersampled. The reconstruction filter, although absolutely essential, is rarely discussed. It converts digital data (which we cannot see) into a viewable analog signal. There are several reconstruction filters: electronic low-pass filters, the display media (monitor, laser printer), and your eye. These are often used in combination to create a perceived continuous image. Each filter modifies the MTF in a unique manner. Therefore image quality and system performance depends upon the reconstruction filter(s) used. The selection depends upon the application.
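
    A minimal sketch of the ideal reconstruction filter the author refers to: Whittaker-Shannon sinc interpolation of an adequately sampled sinusoid. Function and variable names are illustrative (Python):

      import numpy as np

      def sinc_reconstruct(samples, dt, t):
          """Whittaker-Shannon reconstruction from uniform samples x(n*dt).

          Each sample contributes one sinc kernel centred on its instant;
          np.sinc is the normalized sinc, sin(pi x) / (pi x).
          """
          n = np.arange(len(samples))
          return np.array([np.sum(samples * np.sinc((ti - n * dt) / dt))
                           for ti in t])

      # A 3 Hz sinusoid sampled at 10 Hz (above its 6 Hz Nyquist rate)
      # is recovered between the sample instants.
      dt = 0.1
      samples = np.sin(2 * np.pi * 3.0 * np.arange(64) * dt)
      t_fine = np.linspace(1.0, 5.0, 201)
      x_hat = sinc_reconstruct(samples, dt, t_fine)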

  1. The Power of Doing: A Learning Exercise That Brings the Central Limit Theorem to Life

    ERIC Educational Resources Information Center

    Price, Barbara A.; Zhang, Xiaolong

    2007-01-01

    This article demonstrates an active learning technique for teaching the Central Limit Theorem (CLT) in an introductory undergraduate business statistics class. Groups of students carry out one of two experiments in the lab, tossing a die in sets of 5 rolls or tossing a die in sets of 10 rolls. They are asked to calculate the sample average of each…

  2. Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data

    NASA Astrophysics Data System (ADS)

    Mobli, Mehdi

    2015-07-01

    The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension the delay is incremented at twice the rate of the maximum frequency (Nyquist rate). To achieve high-resolution requires acquisition of long data records sampled at the Nyquist rate. This is typically a prohibitive step due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum, compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS are heavily dependent on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random seed dependent reproducibility of multidimensional spectra generated from NUS data, compared to commonly applied NUS methods. It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
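
    A sketch of the jittered-sampling idea described above, assuming an exponential signal envelope: the Nyquist grid is cut into strata of equal probability mass and one point is drawn per stratum, so two random seeds can only disagree within a stratum. Names and the envelope are assumptions, not the author's code (Python):

      import numpy as np

      def jittered_nus_schedule(grid_size, n_samples, t2_points, seed=0):
          """Jittered NUS schedule on a 1D Nyquist grid.

          The cumulative exponential envelope exp(-k / t2_points) is cut
          into n_samples equal-mass strata; one grid index is drawn
          uniformly inside each, confining seed-to-seed variability to
          within-stratum jitter. Coinciding picks are merged, so the
          schedule may come out marginally shorter than requested.
          """
          rng = np.random.default_rng(seed)
          k = np.arange(grid_size)
          cdf = np.cumsum(np.exp(-k / t2_points))
          cdf /= cdf[-1]
          quantiles = np.linspace(0.0, 1.0, n_samples + 1)[1:-1]
          edges = np.searchsorted(cdf, quantiles)
          bounds = np.concatenate(([0], edges, [grid_size]))
          picks = [rng.integers(lo, max(hi, lo + 1))
                   for lo, hi in zip(bounds[:-1], bounds[1:])]
          return np.unique(picks)

      print(jittered_nus_schedule(512, 64, t2_points=170.0))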

  3. Characterizing the Effect of Shock on Isotopic Ages I: Ferroan Anorthosite Major Elements

    NASA Technical Reports Server (NTRS)

    Edmunson, J.; Cohen, B. A.; Spilde, M. N.

    2009-01-01

    A study underway at Marshall Space Flight Center is further characterizing the effects of shock on isotopic ages. The study was inspired by the work of L. Nyquist et al. [1, 2], but goes beyond their work by investigating the spatial distribution of elements in lunar ferroan anorthosites (FANs) and magnesium-suite (Mg-suite) rocks in order to understand the processes that may influence the radioisotope ages obtained on early lunar samples. This paper discusses the first data set (major elements) obtained on FANs 62236 and 67075.

  4. Faster and less phototoxic 3D fluorescence microscopy using a versatile compressed sensing scheme

    PubMed Central

    Woringer, Maxime; Darzacq, Xavier; Zimmer, Christophe

    2017-01-01

    Three-dimensional fluorescence microscopy based on Nyquist sampling of focal planes faces harsh trade-offs between acquisition time, light exposure, and signal-to-noise. We propose a 3D compressed sensing approach that uses temporal modulation of the excitation intensity during axial stage sweeping and can be adapted to fluorescence microscopes without hardware modification. We describe implementations on a lattice light sheet microscope and an epifluorescence microscope, and show that images of beads and biological samples can be reconstructed with a 5-10 fold reduction of light exposure and acquisition time. Our scheme opens a new door towards faster and less damaging 3D fluorescence microscopy. PMID:28788909

  5. A planar near-field scanning technique for bistatic radar cross section measurements

    NASA Technical Reports Server (NTRS)

    Tuhela-Reuning, S.; Walton, E. K.

    1990-01-01

    A progress report on the development of a bistatic radar cross section (RCS) measurement range is presented. A technique using one parabolic reflector and a planar scanning probe antenna is analyzed. The field pattern in the test zone is computed using a spatial array of signal sources. It achieved an illumination pattern with 1 dB amplitude ripple and 15-degree phase ripple over the target zone. The required scan plane size is found to be proportional to the size of the desired test target. Scan plane probe sample spacing can be increased beyond the Nyquist lambda/2 limit, permitting constant probe sample spacing over a range of frequencies.

  6. Compressed sensing: Radar signal detection and parameter measurement for EW applications

    NASA Astrophysics Data System (ADS)

    Rao, M. Sreenivasa; Naik, K. Krishna; Reddy, K. Maheshwara

    2016-09-01

    State-of-the-art system development is required for UAVs (unmanned aerial vehicles) and other airborne applications, where miniature, lightweight and low-power specifications are essential. Currently, airborne Electronic Warfare (EW) systems are developed with digital receiver technology using Nyquist sampling. The detection of radar signals and parameter measurement is a necessary requirement in EW digital receivers. The Random Modulator Pre-Integrator (RMPI) can be used for matched detection of signals using a smashed filter. RMPI hardware eliminates the high-sampling-rate analog-to-digital converter and reduces the number of samples using random sampling and detection of sparse orthonormal basis vectors. The RMPI exploits the structural and geometrical properties of the signal, beyond traditional time- and frequency-domain analysis, for improved detection. The concept has been proven with the help of MATLAB and LabVIEW simulations.

  7. A parameter estimation algorithm for LFM/BPSK hybrid modulated signal intercepted by Nyquist folding receiver

    NASA Astrophysics Data System (ADS)

    Qiu, Zhaoyang; Wang, Pei; Zhu, Jun; Tang, Bin

    2016-12-01

    The Nyquist folding receiver (NYFR) is a novel ultra-wideband receiver architecture that can realize wideband reception with a small amount of equipment. The linear frequency modulated/binary phase shift keying (LFM/BPSK) hybrid modulated signal is a novel kind of low-probability-of-intercept signal with wide bandwidth. The NYFR is an effective architecture for intercepting the LFM/BPSK signal, and the LFM/BPSK signal intercepted by the NYFR acquires an additional local oscillator modulation. A parameter estimation algorithm for the NYFR output signal is proposed. Based on the NYFR prior information, the chirp singular value ratio spectrum is proposed to estimate the chirp rate. Then, based on the output self-characteristic, a matching component function is designed to estimate the Nyquist zone (NZ) index. Finally, a matching code and a subspace method are employed to estimate the phase change points and code length. Compared with existing methods, the proposed algorithm performs better. It also does not need a multi-channel structure, which keeps the computational complexity of the NZ index estimation small. The simulation results demonstrate the efficacy of the proposed algorithm.

  8. Progress in multirate digital control system design

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1991-01-01

    A new methodology for multirate sampled-data control design based on a new generalized control law structure, two new parameter-optimization-based control law synthesis methods, and a new singular-value-based robustness analysis method are described. The control law structure can represent multirate sampled-data control laws of arbitrary structure and dynamic order, with arbitrarily prescribed sampling rates for all sensors and update rates for all processor states and actuators. The two control law synthesis methods employ numerical optimization to determine values for the control law parameters. The robustness analysis method is based on the multivariable Nyquist criterion applied to the loop transfer function for the sampling period equal to the period of repetition of the system's complete sampling/update schedule. The complete methodology is demonstrated by application to the design of a combination yaw damper and modal suppression system for a commercial aircraft.

  9. Method for utilizing properties of the sinc(x) function for phase retrieval on nyquist-under-sampled data

    NASA Technical Reports Server (NTRS)

    Dean, Bruce H. (Inventor); Smith, Jeffrey Scott (Inventor); Aronstein, David L. (Inventor)

    2012-01-01

    Disclosed herein are systems, methods, and non-transitory computer-readable storage media for simulating propagation of an electromagnetic field, performing phase retrieval, or sampling a band-limited function. A system practicing the method generates transformed data using a discrete Fourier transform which samples a band-limited function f(x) without interpolating or modifying received data associated with the function f(x), wherein an interval between repeated copies in a periodic extension of the function f(x) obtained from the discrete Fourier transform is associated with a sampling ratio Q, defined as a ratio of a sampling frequency to a band-limited frequency, and wherein Q is assigned a value between 1 and 2 such that substantially no aliasing occurs in the transformed data, and retrieves a phase in the received data based on the transformed data, wherein the phase is used as feedback to an optical system.
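
    The role of the sampling ratio Q can be illustrated numerically: sampling a band-limited sinc pulse at Q times its two-sided bandwidth (the reading of Q assumed here) leaves a fraction (Q - 1)/Q of the digital band empty, so the periodic spectral copies do not overlap. The test function and numbers are illustrative, not from the patent (Python):

      import numpy as np

      # np.sinc(x) is band-limited with a rectangular spectrum on
      # [-0.5, +0.5] cycles per unit x, i.e. two-sided bandwidth B = 1.
      B = 1.0
      Q = 1.5                   # sampling ratio, chosen between 1 and 2
      fs = Q * B                # sampling frequency
      n = np.arange(-2048, 2048)
      samples = np.sinc(n / fs)

      spectrum = np.abs(np.fft.fftshift(np.fft.fft(samples)))
      freqs = np.fft.fftshift(np.fft.fftfreq(n.size, d=1.0 / fs))

      # Essentially all spectral content sits inside |f| <= B/2; the rest
      # of the band out to fs/2 = 0.75 is an empty guard band.
      inside = spectrum[np.abs(freqs) <= 0.5 * B].sum() / spectrum.sum()
      print(f"fraction of |X(f)| inside the signal band: {inside:.4f}")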

  10. Subrandom methods for multidimensional nonuniform sampling.

    PubMed

    Worley, Bradley

    2016-08-01

    Methods of nonuniform sampling that utilize pseudorandom number sequences to select points from a weighted Nyquist grid are commonplace in biomolecular NMR studies, due to the beneficial incoherence introduced by pseudorandom sampling. However, these methods require the specification of a non-arbitrary seed number in order to initialize a pseudorandom number generator. Because the performance of pseudorandom sampling schedules can substantially vary based on seed number, this can complicate the task of routine data collection. Approaches such as jittered sampling and stochastic gap sampling are effective at reducing random seed dependence of nonuniform sampling schedules, but still require the specification of a seed number. This work formalizes the use of subrandom number sequences in nonuniform sampling as a means of seed-independent sampling, and compares the performance of three subrandom methods to their pseudorandom counterparts using commonly applied schedule performance metrics. Reconstruction results using experimental datasets are also provided to validate claims made using these performance metrics. Copyright © 2016 Elsevier Inc. All rights reserved.
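
    One concrete seed-free construction in this spirit, sketched below, uses the additive golden-ratio sequence (one of several low-discrepancy options; the paper's specific sequences may differ) pushed through the inverse CDF of a decaying envelope (Python):

      import numpy as np

      def golden_ratio_schedule(grid_size, n_samples, t2_points):
          """Deterministic, seed-free NUS schedule from a subrandom sequence.

          u_k = frac(k / phi) is a low-discrepancy sequence on [0, 1);
          mapping it through the inverse CDF of an exponential envelope
          yields indices whose density follows the signal decay. Duplicate
          indices are merged, so slightly fewer points may be returned.
          """
          phi = (1.0 + np.sqrt(5.0)) / 2.0
          u = (np.arange(1, n_samples + 1) / phi) % 1.0
          k = np.arange(grid_size)
          cdf = np.cumsum(np.exp(-k / t2_points))
          cdf /= cdf[-1]
          return np.unique(np.searchsorted(cdf, u))

      print(golden_ratio_schedule(512, 64, t2_points=170.0))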

  11. Application of the MNA design method to a nonlinear turbofan engine. [multivariable Nyquist array method

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.

    1981-01-01

    Using nonlinear digital simulation as a representative model of the dynamic operation of the QCSEE turbofan engine, a feedback control system is designed by variable frequency design techniques. Transfer functions are generated for each of five power level settings covering the range of operation from approach power to full throttle (62.5% to 100% full power). These transfer functions are then used by an interactive control system design synthesis program to provide a closed loop feedback control using the multivariable Nyquist array and extensions to multivariable Bode diagrams and Nichols charts.

  12. The Importance of Introductory Statistics Students Understanding Appropriate Sampling Techniques

    ERIC Educational Resources Information Center

    Menil, Violeta C.

    2005-01-01

    In this paper the author discusses the meaning of sampling, the reasons for sampling, the Central Limit Theorem, and the different techniques of sampling. Practical and relevant examples are given to make the appropriate sampling techniques understandable to students of Introductory Statistics courses. With a thorough knowledge of sampling…

  13. Comparisons of Mineralogy Between Cumulate Eucrites and Lunar Meteorites Possibly from the Farside Anorthositic Crust

    NASA Technical Reports Server (NTRS)

    Takeda, H.; Yamaguchi, A.; Hiroi, T.; Nyquist, L. E.; Shih, C.-Y.; Ohtake, M.; Karouji, Y.; Kobayashi, S.

    2011-01-01

    Anorthosites composed of nearly pure anorthite (PAN) have been observed at many locations in the farside highlands by the Kaguya multiband imager and spectral profiler [1]. Mineralogical studies of lunar meteorites of the Dhofar 489 group [2,3] and Yamato (Y-) 86032 [4], all possibly from the farside highlands, revealed some aspects of the farside crust. Nyquist et al. [5] performed Sm-Nd and Ar-Ar studies of pristine ferroan anorthosites (FANs) from the returned Apollo samples and of Dhofar 908 and 489, and discussed implications for lunar crustal history. Nyquist et al. [6] reported initial results of a combined mineralogical/chronological study of the Yamato (Y-) 980318 cumulate eucrite with a conventional Sm-Nd age of 4567 ± 24 Ma, and suggested that all eucrites, including cumulate eucrites, crystallized from parental magmas within a short interval following differentiation of their parent body, and that most eucrites participated in an event or events in the time interval 4400-4560 Ma in which many isotopic systems were partially reset. During the foregoing studies, we recognized that variations in the mineralogy and chronology of lunar anorthosites are more complex than those of the crustal materials of the HED parent body. In this study, we compared the mineralogies and reflectance spectra of the cumulate eucrites Y-980433 and Y-980318 to those of the Dhofar 307 lunar meteorite of the Dhofar 489 group [2]. Here we consider information from these samples to gain a better understanding of the feldspathic farside highlands and the Vesta-like body.

  14. Enabling Super-Nyquist Wavefront Control on WFIRST

    NASA Astrophysics Data System (ADS)

    Bendek, Eduardo; Belikov, Ruslan; Sirbu, Dan; Shaklan, Stuart B.; Eldorado Riggs, A. J.

    2018-01-01

    A large fraction of Sun-like stars is contained in binary systems. Within 10 pc there are 70 FGK stars, of which 43 belong to a multi-star system, and 28 of those have companion leakage greater than 1e-9 contrast, assuming typical Hubble-quality space optics. Currently, those binary stars are not included in the WFIRST-CGI target list, but they could be observed if high-contrast imaging around binary star systems using WFIRST is possible, increasing the number of possible FGK targets for the mission by 70%. The Multi-Star Wavefront Control (MSWC) algorithm can be used to suppress the companion star leakage. If the targets have angular separations larger than the Nyquist-controllable region of the deformable mirror, the MSWC must operate in its Super-Nyquist (SN) mode. This mode requires a replica of the target star within the SN region in order to provide the energy and coherent light necessary to null speckles at SN angular separations. For the case of WFIRST, about half of the targets that can be observed using MSWC have angular separations larger than the Nyquist-controllable region of the 48x48-actuator deformable mirror (DM) to be used. Here, we discuss multiple alternatives for generating those PSF replicas with minimal or no impact on the WFIRST coronagraph instrument, such as: 1) the addition of a movable diffractive pupil mounted on the shaped-pupil wheel; 2) a modified shaped-pupil design able to create a dark zone and at the same time diffract a small fraction of the starlight into the SN region; 3) predicting the minimum residual quilting on the Xinetics DM that would allow observing a given target.

  15. Techniques for High-contrast Imaging in Multi-star Systems. II. Multi-star Wavefront Control

    NASA Astrophysics Data System (ADS)

    Sirbu, D.; Thomas, S.; Belikov, R.; Bendek, E.

    2017-11-01

    Direct imaging of exoplanets represents a challenge for astronomical instrumentation due to the high-contrast ratio and small angular separation between the host star and the faint planet. Multi-star systems pose additional challenges for coronagraphic instruments due to the diffraction and aberration leakage caused by companion stars. Consequently, many scientifically valuable multi-star systems are excluded from direct imaging target lists for exoplanet surveys and characterization missions. Multi-star Wavefront Control (MSWC) is a technique that uses a coronagraphic instrument's deformable mirror (DM) to create high-contrast regions in the focal plane in the presence of multiple stars. MSWC uses "non-redundant" modes on the DM to independently control speckles from each star in the dark zone. Our previous paper also introduced the Super-Nyquist wavefront control technique, which uses a diffraction grating to generate high-contrast regions beyond the Nyquist limit (the nominal region correctable by the DM). These two techniques can be combined as MSWC-s to generate high-contrast regions for multi-star systems at wide (Super-Nyquist) angular separations, while MSWC-0 refers to close (Sub-Nyquist) angular separations. As a case study, a high-contrast wavefront control simulation that applies these techniques shows that the habitable region of the Alpha Centauri system can be imaged with a small aperture at 8 × 10^-9 mean raw contrast in 10% broadband light in one-sided dark holes from 1.6-5.5 λ/D. Another case study using a larger 2.4 m aperture telescope, such as the Wide-Field Infrared Survey Telescope, uses these techniques to image the habitable zone of Alpha Centauri at 3.2 × 10^-9 mean raw contrast in monochromatic light.

  16. Destroying Aliases from the Ground and Space: Super-Nyquist ZZ Cetis in K2 Long Cadence Data

    NASA Astrophysics Data System (ADS)

    Bell, Keaton J.; Hermes, J. J.; Vanderbosch, Z.; Montgomery, M. H.; Winget, D. E.; Dennihy, E.; Fuchs, J. T.; Tremblay, P.-E.

    2017-12-01

    With typical periods of the order of 10 minutes, the pulsation signatures of ZZ Ceti variables (pulsating hydrogen-atmosphere white dwarf stars) are severely undersampled by long-cadence (29.42 minutes per exposure) K2 observations. Nyquist aliasing renders the intrinsic frequencies ambiguous, stifling precision asteroseismology. We report the discovery of two new ZZ Cetis in long-cadence K2 data: EPIC 210377280 and EPIC 220274129. Guided by three to four nights of follow-up, high-speed (≤30 s) photometry from the McDonald Observatory, we recover accurate pulsation frequencies for K2 signals that reflected four to five times off the Nyquist frequency, with the full precision of over 70 days of monitoring (~0.01 μHz). In turn, the K2 observations enable us to select the correct peaks from the alias structure of the ground-based signals caused by gaps in the observations. We identify at least seven independent pulsation modes in the light curves of each of these stars. For EPIC 220274129, we detect three complete sets of rotationally split l = 1 (dipole-mode) triplets, which we use to asteroseismically infer the stellar rotation period of 12.7 ± 1.3 hr. We also detect two sub-Nyquist K2 signals that are likely combination (difference) frequencies. We attribute our inability to match some of the K2 signals to the ground-based data to changes in pulsation amplitudes between epochs of observation. Model fits to SOAR spectroscopy place both EPIC 210377280 and EPIC 220274129 near the middle of the ZZ Ceti instability strip, with Teff = 11,590 ± 200 K and 11,810 ± 210 K, and masses 0.57 ± 0.03 M⊙ and 0.62 ± 0.03 M⊙, respectively.
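
    The reflection bookkeeping involved can be made explicit; a small sketch, with K2 long-cadence numbers from the abstract and illustrative helper names (Python):

      def alias_candidates(f_obs, f_ny, max_reflections=5):
          """Candidate true frequencies folding to f_obs in [0, f_ny].

          A tone at f appears at |f - 2k*f_ny| for the integer k that
          folds it into [0, f_ny]; inverting gives 2k*f_ny +/- f_obs.
          """
          cands = set()
          for k in range(max_reflections + 1):
              for f in (2 * k * f_ny + f_obs, 2 * k * f_ny - f_obs):
                  if f > 0:
                      cands.add(f)
          return sorted(cands)

      # K2 long cadence: one frame per 29.42 min -> f_ny ~ 283 microhertz.
      f_ny = 1e6 / (2 * 29.42 * 60)    # in microhertz
      print(alias_candidates(150.0, f_ny))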

  17. Sensitivity of nonuniform sampling NMR.

    PubMed

    Palmer, Melissa R; Suiter, Christopher L; Henry, Geneive E; Rovnyak, James; Hoch, Jeffrey C; Polenova, Tatyana; Rovnyak, David

    2015-06-04

    Many information-rich multidimensional experiments in nuclear magnetic resonance spectroscopy can benefit from a signal-to-noise ratio (SNR) enhancement of up to about 2-fold if a decaying signal in an indirect dimension is sampled with nonconsecutive increments, termed nonuniform sampling (NUS). This work provides formal theoretical results and applications to resolve major questions about the scope of the NUS enhancement. First, we introduce the NUS Sensitivity Theorem: any decreasing sampling density applied to any exponentially decaying signal always results in higher sensitivity (SNR per square root of measurement time) than uniform sampling (US). Several cases illustrate this theorem and show that even conservative applications of NUS improve sensitivity by useful amounts. Next, we turn to a serious limitation of uniform sampling: the SNR obtained by US decreases when evolution times, and thus total experimental times, are extended beyond 1.26 T2 (T2 = signal decay constant). Thus, SNR and resolution cannot be simultaneously improved by extending US beyond 1.26 T2. We find that NUS can eliminate this constraint, and we introduce the matched NUS SNR Theorem: an exponential sampling density matched to the signal decay always improves the SNR with additional evolution time. Though proved for a specific case, broader classes of NUS densities also improve SNR with evolution time. Applications of these theoretical results are given for a soluble plant natural product and a solid tripeptide (u-(13)C,(15)N-MLF). These formal results clearly demonstrate the inadequacies of applying US to decaying signals in indirect nD-NMR dimensions, supporting a broader adoption of NUS.

  18. Application of wavefield compressive sensing in surface wave tomography

    NASA Astrophysics Data System (ADS)

    Zhan, Zhongwen; Li, Qingyang; Huang, Jianping

    2018-06-01

    Dense arrays allow sampling of seismic wavefield without significant aliasing, and surface wave tomography has benefitted from exploiting wavefield coherence among neighbouring stations. However, explicit or implicit assumptions about wavefield, irregular station spacing and noise still limit the applicability and resolution of current surface wave methods. Here, we propose to apply the theory of compressive sensing (CS) to seek a sparse representation of the surface wavefield using a plane-wave basis. Then we reconstruct the continuous surface wavefield on a dense regular grid before applying any tomographic methods. Synthetic tests demonstrate that wavefield CS improves robustness and resolution of Helmholtz tomography and wavefield gradiometry, especially when traditional approaches have difficulties due to sub-Nyquist sampling or complexities in wavefield.

  19. Sampling theorem for geometric moment determination and its application to a laser beam position detector.

    PubMed

    Loce, R P; Jodoin, R E

    1990-09-10

    Using the tools of Fourier analysis, a sampling requirement is derived that assures that sufficient information is contained within the samples of a distribution to calculate accurately geometric moments of that distribution. The derivation follows the standard textbook derivation of the Whittaker-Shannon sampling theorem, which is used for reconstruction, but further insight leads to a coarser minimum sampling interval for moment determination. The need for fewer samples to determine moments agrees with intuition since less information should be required to determine a characteristic of a distribution compared with that required to construct the distribution. A formula for calculation of the moments from these samples is also derived. A numerical analysis is performed to quantify the accuracy of the calculated first moment for practical nonideal sampling conditions. The theory is applied to a high speed laser beam position detector, which uses the normalized first moment to measure raster line positional accuracy in a laser printer. The effects of the laser irradiance profile, sampling aperture, number of samples acquired, quantization, and noise are taken into account.
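
    A minimal numerical illustration of the quantity at stake, the normalized first moment of a sampled irradiance profile; the Gaussian beam and grid parameters below are made up (Python):

      import numpy as np

      # Sampled irradiance profile of a Gaussian beam centred at x0 = 0.37
      # (arbitrary units), width sigma = 0.8, sampling interval dx = 0.5.
      dx = 0.5
      x = np.arange(-8.0, 8.0, dx)
      profile = np.exp(-((x - 0.37) ** 2) / (2 * 0.8 ** 2))

      # Normalized first moment (centroid) from the samples: the discrete
      # analogue of  integral x f(x) dx / integral f(x) dx.
      centroid = np.sum(x * profile) / np.sum(profile)
      print(centroid)    # ~0.37, recovered accurately from coarse samples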

  20. Nature's crucible: Manufacturing optical nonlinearities for high resolution, high sensitivity encoding in the compound eye of the fly, Musca domestica

    NASA Technical Reports Server (NTRS)

    Wilcox, Mike

    1993-01-01

    The number of pixels per unit area sampling an image determines Nyquist resolution. Therefore, the highest pixel density is the goal. Unfortunately, as reduction in pixel size approaches the wavelength of light, sensitivity is lost and noise increases. Animals face the same problems and have achieved novel solutions. Emulating these solutions offers potentially unlimited sensitivity with detector size approaching the diffraction limit. Once an image is 'captured', cellular preprocessing of information allows extraction of high resolution information from the scene. Computer simulation of this system promises hyperacuity for machine vision.

  1. Concretes of low environmental impact obtained by geopolymerization of Metakaolin

    NASA Astrophysics Data System (ADS)

    Sandoval, D. C.; Montaño, A. M.; González, C. P.; Gutiérrez, J.

    2018-04-01

    This work shows results of partial replacement of Portland Type I cement®, by geopolymers obtained through alkaline activation of Metakaolin, in concrete mixtures. Replacement was made with 10%, 20% and 30% of geopolymers at 7, 14, 28 and 90 days of setting. Cement samples were tested mechanically and electrically. The mechanical resistance-to-compression assay shows that the best replacement percentage is 10% for every setting time; the highest value is 26.75 MPa at 90 days. Nyquist diagrams at different immersion times exhibit the same trend: electrical resistance decreases as the assay time increases.

  2. Sampling theory for asynoptic satellite observations. I - Space-time spectra, resolution, and aliasing. II - Fast Fourier synoptic mapping

    NASA Technical Reports Server (NTRS)

    Salby, M. L.

    1982-01-01

    An evaluation of the information content of asynoptic data taken in the form of nadir sonde and limb scan observations is presented, and a one-to-one correspondence is established between the alias-free data and twice-daily synoptic maps. Attention is given to the space and time limitations of sampling, and the orbital geometry is discussed. The sampling pattern is demonstrated to determine unique space-time spectra at all wavenumbers and frequencies. Spectral resolution and aliasing are explored, while restrictions on sampling and information content are defined. It is noted that irregular sampling at high latitudes produces spurious contamination effects. An Asynoptic Sampling Theorem is thereby formulated, as is a Synoptic Retrieval Theorem in the second part of the article. In the latter, a procedure is developed for retrieving the unique correspondence between the asynoptic data and the synoptic maps. Application examples are provided using data from the Nimbus-6 satellite.

  3. Sub-GHz-resolution C-band Nyquist-filtering interleaver on a high-index-contrast photonic integrated circuit.

    PubMed

    Zhuang, Leimeng; Zhu, Chen; Corcoran, Bill; Burla, Maurizio; Roeloffzen, Chris G H; Leinse, Arne; Schröder, Jochen; Lowery, Arthur J

    2016-03-21

    Modern optical communications rely on high-resolution, high-bandwidth filtering to maximize the data-carrying capacity of fiber-optic networks. Such filtering typically requires high-speed, power-hungry digital processing in the electrical domain. Passive optical filters currently provide high bandwidths with low power consumption, but at the expense of resolution. Here, we present a passive filter chip that functions as an optical Nyquist-filtering interleaver featuring sub-GHz resolution and a near-rectangular passband with 8% roll-off. This performance is highly promising for high-spectral-efficiency Nyquist wavelength division multiplexed (N-WDM) optical super-channels. The chip implements a simple two-ring-resonator-assisted Mach-Zehnder interferometer, which has a sub-cm^2 footprint owing to the high-index-contrast Si3N4/SiO2 waveguide, while manifesting low wavelength dependency, enabling C-band (>4 THz) coverage with more than 160 effective free spectral ranges of 25 GHz. This device is anticipated to be a critical building block for spectrally efficient, chip-scale transceivers and ROADMs for N-WDM super-channels in next-generation optical communication networks.

  4. A Deeper Understanding of Stability in the Solar Wind: Applying Nyquist's Instability Criterion to Wind Faraday Cup Data

    NASA Astrophysics Data System (ADS)

    Alterman, B. L.; Klein, K. G.; Verscharen, D.; Stevens, M. L.; Kasper, J. C.

    2017-12-01

    Long-duration, in situ data sets enable large-scale statistical analysis of free-energy-driven instabilities in the solar wind. The plasma beta and temperature anisotropy plane provides a well-defined parameter space in which a single-fluid plasma's stability can be represented. Because this reduced parameter space can only represent instability thresholds due to the free energy of one ion species - typically the bulk protons - the true impact of instabilities on the solar wind is underestimated. Nyquist's instability criterion allows us to systematically account for other sources of free energy, including beams, drifts, and additional temperature anisotropies. Utilizing over 20 years of Wind Faraday cup and magnetic field observations, we have resolved the bulk parameters for three ion populations: the bulk protons, beam protons, and alpha particles. Applying Nyquist's criterion, we calculate the number of linearly growing modes supported by each spectrum and provide a more nuanced consideration of solar wind stability. Using collisional age measurements, we predict the stability of the solar wind close to the Sun. Accounting for the free energy from the three most common ion populations in the solar wind, our approach provides a more complete characterization of solar wind stability.

  5. Referenceless one-dimensional Nyquist ghost correction in multicoil single-shot spatiotemporally encoded MRI.

    PubMed

    Chen, Ying; Liao, Yupeng; Yuan, Lisha; Liu, Hui; Yun, Seong Dae; Shah, Nadim Joni; Chen, Zhong; Zhong, Jianhui

    2017-04-01

    Single-shot spatiotemporally encoded (SPEN) MRI is a novel fast imaging method capable of retaining the time efficiency of single-shot echo planar imaging (EPI) while significantly reducing distortion artifacts. Akin to EPI, phase inconsistencies between mismatched even and odd echoes result in the so-called Nyquist ghosts. However, the characteristics of SPEN signals make it possible to obtain ghost-free images directly from the even and odd echoes respectively, without acquiring additional reference scans. In this paper, a theoretical analysis of the Nyquist ghosts manifested in single-shot SPEN MRI is presented; a one-dimensional correction scheme is put forward that maintains the definition of image features without blurring when the phase inconsistency along the SPEN encoding direction is negligible; and a technique is introduced for convenient and robust correction of data from multi-channel receiver coils. The effectiveness of the proposed processing pipeline is validated by a series of experiments conducted on simulated data, in vivo rats, and healthy human brains. The robustness of the method is further verified by implementing distortion correction on ghost-corrected data. Copyright © 2016. Published by Elsevier Inc.

  6. A pulsation zoo in the hot subdwarf B star KIC 10139564 observed by Kepler

    NASA Astrophysics Data System (ADS)

    Baran, A. S.; Reed, M. D.; Stello, D.; Østensen, R. H.; Telting, J. H.; Pakštienë, E.; O'Toole, S. J.; Silvotti, R.; Degroote, P.; Bloemen, S.; Hu, H.; Van Grootel, V.; Clarke, B. D.; Van Cleve, J.; Thompson, S. E.; Kawaler, S. D.

    2012-08-01

    We present our analyses of 15 months of Kepler data on KIC 10139564. We detected 57 periodicities with a variety of properties not previously observed all together in one pulsating subdwarf B (sdB) star. Ten of the periodicities were found in the low-frequency region, and we associate them with nonradial g modes. The other periodicities were found in the high-frequency region, which are likely p modes. We discovered that most of the periodicities are components of multiplets with a common spacing. Assuming that multiplets are caused by rotation, we derive a rotation period of 25.6 ± 1.8 d. The multiplets also allow us to identify the pulsations to an unprecedented extent for this class of pulsator. We also detect l ≥ 2 multiplets, which are sensitive to the pulsation inclination and can constrain limb darkening via geometric cancellation factors. While most periodicities are stable, we detected several regions that show complex patterns. Detailed analyses showed that these regions are complicated by several factors. Two are combination frequencies that originate in the super-Nyquist region and were found to be reflected below the Nyquist frequency. The Fourier peaks are clear in the super-Nyquist region, but the orbital motion of Kepler smears the Nyquist frequency in the barycentric reference frame and this effect is passed on to the sub-Nyquist reflections. Others are likely multiplets but unstable in amplitudes and/or frequencies. The density of periodicities also makes KIC 10139564 challenging to explain using published models. This menagerie of properties should provide tight constraints on structural models, making this sdB star the most promising for applying asteroseismology. To support our photometric analysis, we have obtained spectroscopic radial-velocity measurements of KIC 10139564 using low-resolution spectra in the Balmer-line region. We did not find any radial-velocity variation. We used our high signal-to-noise average spectrum to improve the atmospheric parameters of the sdB star, deriving Teff = 31 859 K and log g = 5.673 dex. Based also on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland, Norway and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.

  7. A uniform LMI formulation for tuning PID, multi-term fractional-order PID, and Tilt-Integral-Derivative (TID) for integer and fractional-order processes.

    PubMed

    Merrikh-Bayat, Farshad

    2017-05-01

    In this paper, the Multi-term Fractional-Order PID (MFOPID), whose transfer function is equal to the sum ∑_j k_j s^{α_j}, where the k_j are unknown and the α_j are known real parameters, is first introduced. Without any loss of generality, a special form of MFOPID with transfer function k_p + k_i/s + k_d1 s + k_d2 s^μ, where k_p, k_i, k_d1, and k_d2 are unknown real parameters and μ is a known positive real parameter, is considered. Similar to PID and TID, MFOPID is linear in its parameters, which makes it possible to study all of them in the same framework. Tuning the parameters of PID, TID, and MFOPID based on loop shaping using Linear Matrix Inequalities (LMIs) is discussed. For this purpose, separate LMIs for closed-loop stability (of sufficient type) and for adjusting different aspects of the open-loop frequency response are developed. The proposed LMIs for stability are obtained based on the Nyquist stability theorem and can be applied to both integer and fractional-order (not necessarily commensurate) processes which are either stable or have one unstable pole. Numerical simulations show that the performance of the four-variable MFOPID can compete with the five-variable FOPID and often outperforms PID and TID. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
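
    The stability side of this can be illustrated numerically. A minimal Python sketch (ours, not the paper's LMI formulation) applies the Nyquist stability theorem to a proportional loop L(s) = K/(s+1)^3 by counting encirclements of the critical point -1; the plant and gain values are arbitrary assumptions:

        import numpy as np

        def encirclements(L, w_max=1e4, n=2_000_001):
            # Nyquist criterion: net winding of L(jw) about -1, i.e. the
            # phase change of 1 + L(jw) over w in (-w_max, w_max), / 2*pi
            w = np.linspace(-w_max, w_max, n)
            phase = np.unwrap(np.angle(1.0 + L(1j * w)))
            return (phase[-1] - phase[0]) / (2 * np.pi)

        plant = lambda s: 1.0 / (s + 1) ** 3     # stable third-order plant

        for K in (4.0, 12.0):                    # this loop's gain margin is K = 8
            N = encirclements(lambda s: K * plant(s))
            # no open-loop RHP poles, so the closed loop is stable iff N == 0
            print(f"K = {K}: encirclements of -1 ~ {N:+.2f}",
                  "(stable)" if abs(N) < 0.5 else "(unstable)")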

  8. How Sample Size Affects a Sampling Distribution

    ERIC Educational Resources Information Center

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
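
    A minimal simulation in the spirit of the article (our Python sketch; the skewed parent population is an arbitrary assumption) shows the standard error of the sample mean tracking sigma/sqrt(n) as the sample size grows:

        import numpy as np

        rng = np.random.default_rng(0)
        population = rng.exponential(scale=2.0, size=1_000_000)  # skewed parent
        sigma = population.std()

        for n in (4, 16, 64):
            # 10,000 sample means, each computed from a sample of size n
            means = rng.choice(population, size=(10_000, n)).mean(axis=1)
            print(f"n={n:3d}  observed SE={means.std():.4f}  "
                  f"sigma/sqrt(n)={sigma / np.sqrt(n):.4f}")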

  9. Application of multirate digital filter banks to wideband all-digital phase-locked loops design

    NASA Technical Reports Server (NTRS)

    Sadr, Ramin; Shah, Biren; Hinedi, Sami

    1993-01-01

    A new class of architecture for all-digital phase-locked loops (DPLL's) is presented in this article. These architectures, referred to as parallel DPLL (PDPLL), employ multirate digital filter banks (DFB's) to track signals with a lower processing rate than the Nyquist rate, without reducing the input (Nyquist) bandwidth. The PDPLL basically trades complexity for hardware-processing speed by introducing parallel processing in the receiver. It is demonstrated here that the DPLL performance is identical to that of a PDPLL for both steady-state and transient behavior. A test signal with a time-varying Doppler characteristic is used to compare the performance of both the DPLL and the PDPLL.
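
    The PDPLL itself is beyond a short sketch, but the enabling multirate identity — that filtering followed by M-fold decimation can be computed entirely at the reduced rate from polyphase components — can be checked in a few lines of Python (our illustration, not the paper's design; the filter and signal are arbitrary):

        import numpy as np

        rng = np.random.default_rng(1)
        M = 4                                   # decimation factor
        x = rng.standard_normal(512)            # full-rate input
        h = rng.standard_normal(32)             # prototype FIR filter

        # direct form: filter at the full rate, then keep every M-th sample
        y_direct = np.convolve(h, x)[::M]

        # polyphase form: M short convolutions, all running at rate fs/M
        y_poly = np.zeros(len(y_direct))
        for p in range(M):
            hp = h[p::M]                                  # p-th filter phase
            xp = x[M - p::M] if p else x[::M]             # p-th input phase
            if p:
                xp = np.concatenate(([0.0], xp))          # x[-p] = 0
            yp = np.convolve(hp, xp)
            m = min(len(yp), len(y_poly))
            y_poly[:m] += yp[:m]

        print("max |direct - polyphase|:", np.abs(y_direct - y_poly).max())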

  13. Review of image processing fundamentals

    NASA Technical Reports Server (NTRS)

    Billingsley, F. C.

    1985-01-01

    Image processing through convolution, transform coding, spatial frequency alterations, sampling, and interpolation are considered. It is postulated that convolution in one domain (real or frequency) is equivalent to multiplication in the other (frequency or real), and that the relative amplitudes of the Fourier components must be retained to reproduce any waveshape. It is suggested that all digital systems may be considered equivalent, with a frequency content approximately at the Nyquist limit and a Gaussian frequency response. An optimized cubic version of the interpolation continuum image is derived as a set of cubic splines. Pixel replication has been employed to enlarge the visible area of digital samples; however, suitable elimination of the extraneous high frequencies introduced by the visible edges, by defocusing, is necessary to allow the underlying object represented by the data values to be seen.
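
    The convolution-multiplication duality postulated above is easy to verify numerically (our Python sketch; the signal lengths are arbitrary): zero-padding both sequences to the linear-convolution length makes the FFT product reproduce direct convolution.

        import numpy as np

        rng = np.random.default_rng(2)
        x, h = rng.standard_normal(100), rng.standard_normal(30)

        n = len(x) + len(h) - 1                  # linear-convolution length
        direct = np.convolve(x, h)               # convolution in the real domain
        via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h, n), n)

        print("max difference:", np.abs(direct - via_fft).max())  # ~1e-13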

  14. Pedagogical Simulation of Sampling Distributions and the Central Limit Theorem

    ERIC Educational Resources Information Center

    Hagtvedt, Reidar; Jones, Gregory Todd; Jones, Kari

    2007-01-01

    Students often find the fact that a sample statistic is a random variable very hard to grasp. Even more mysterious is why a sample mean should become ever more Normal as the sample size increases. This simulation tool is meant to illustrate the process, thereby giving students some intuitive grasp of the relationship between a parent population…

  15. Optimized Quasi-Interpolators for Image Reconstruction.

    PubMed

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  16. Generalized quantum no-go theorems of pure states

    NASA Astrophysics Data System (ADS)

    Li, Hui-Ran; Luo, Ming-Xing; Lai, Hong

    2018-07-01

    Various results of the no-cloning theorem, no-deleting theorem and no-superposing theorem in quantum mechanics have been proved using the superposition principle and the linearity of quantum operations. In this paper, we investigate general transformations forbidden by quantum mechanics in order to unify these theorems. First, we prove that no useful information can be created from an unknown pure state which is randomly chosen from a Hilbert space according to the Haar measure. Second, we propose a unified no-go theorem based on a generalized no-superposing result. The new theorem includes the no-cloning theorem, no-anticloning theorem, no-partial-erasure theorem, no-splitting theorem, no-superposing theorem and no-encoding theorem as special cases, and it implies various new results. Third, we extend the new theorem into another form that includes the no-deleting theorem as a special case.

  17. Optical single side-band Nyquist PAM-4 transmission using dual-drive MZM modulation and direct detection.

    PubMed

    Zhu, Mingyue; Zhang, Jing; Yi, Xingwen; Ying, Hao; Li, Xiang; Luo, Ming; Song, Yingxiong; Huang, Xiatao; Qiu, Kun

    2018-03-19

    We present the design and optimization of optical single side-band (SSB) Nyquist four-level pulse amplitude modulation (PAM-4) transmission using dual-drive Mach-Zehnder modulator (DDMZM) modulation and direct detection (DD), aiming at cost-effective, high-speed and long-distance transmission in the C-band. At the transmitter, the laser linewidth should be small to avoid phase-noise-to-amplitude-noise conversion and equalization-enhanced phase noise due to the large chromatic dispersion (CD). The optical SSB signal is generated after optimizing the optical modulation index (OMI), and hence the minimum phase condition required by the Kramers-Kronig (KK) receiver can also be satisfied. At the receiver, a simple AC-coupled photodiode (PD) is used and a virtual carrier is added for the KK operation to alleviate the signal-to-signal beating interference (SSBI). A Volterra filter (VF) is cascaded to mitigate the remaining nonlinearities. When the fiber nonlinearity becomes significant, we elect to use an optical band-pass filter with offset filtering, which suppresses stimulated Brillouin scattering and the conjugated distortion by filtering out the image frequency components. With our design and optimization, we achieve single-channel, single-polarization 102.4-Gb/s Nyquist PAM-4 transmission over 800 km of standard single-mode fiber (SSMF).

  18. Compressed sensing system considerations for ECG and EMG wireless biosensors.

    PubMed

    Dixon, Anna M R; Allstot, Emily G; Gangopadhyay, Daibashish; Allstot, David J

    2012-04-01

    Compressed sensing (CS) is an emerging signal processing paradigm that enables sub-Nyquist processing of sparse signals such as electrocardiogram (ECG) and electromyogram (EMG) biosignals. Consequently, it can be applied to biosignal acquisition systems to reduce the data rate to realize ultra-low-power performance. CS is compared to conventional and adaptive sampling techniques and several system-level design considerations are presented for CS acquisition systems including sparsity and compression limits, thresholding techniques, encoder bit-precision requirements, and signal recovery algorithms. Simulation studies show that compression factors greater than 16X are achievable for ECG and EMG signals with signal-to-quantization noise ratios greater than 60 dB.
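
    The core CS claim — accurate recovery of a sparse signal from far fewer than Nyquist-rate samples — can be sketched with orthogonal matching pursuit (our Python illustration, not the paper's system; the dimensions, sparsity and Gaussian sensing matrix are arbitrary assumptions, and near-exact recovery is typical at these sizes):

        import numpy as np

        rng = np.random.default_rng(3)
        n, m, k = 256, 80, 8                   # ambient dim, measurements, sparsity

        x = np.zeros(n)
        support = rng.choice(n, size=k, replace=False)
        x[support] = rng.standard_normal(k)    # k-sparse "biosignal" coefficients

        A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
        y = A @ x                              # m << n compressed measurements

        # orthogonal matching pursuit: greedily grow the support, refit by LS
        idx, r = [], y.copy()
        for _ in range(k):
            idx.append(int(np.argmax(np.abs(A.T @ r))))
            coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
            r = y - A[:, idx] @ coef

        x_hat = np.zeros(n)
        x_hat[idx] = coef
        print("relative recovery error:",
              np.linalg.norm(x_hat - x) / np.linalg.norm(x))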

  19. A Planar Two-Dimensional Superconducting Bolometer Array for the Green Bank Telescope

    NASA Technical Reports Server (NTRS)

    Benford, Dominic; Staguhn, Johannes G.; Chervenak, James A.; Chen, Tina C.; Moseley, S. Harvey; Wollack, Edward J.; Devlin, Mark J.; Dicker, Simon R.; Supanich, Mark

    2004-01-01

    In order to provide high sensitivity rapid imaging at 3.3 mm (90 GHz) for the Green Bank Telescope - the world's largest steerable aperture - a camera is being built by the University of Pennsylvania, NASA/GSFC, and NRAO. The heart of this camera is an 8x8 close-packed, Nyquist-sampled detector array. We have designed and are fabricating a functional superconducting bolometer array system using a monolithic planar architecture. Read out by SQUID multiplexers, the superconducting transition edge sensors will provide fast, linear, sensitive response for high performance imaging. This will provide the first ever superconducting bolometer array on a facility instrument.

  20. Crystallization of hard spheres revisited. II. Thermodynamic modeling, nucleation work, and the surface of tension.

    PubMed

    Richard, David; Speck, Thomas

    2018-06-14

    Combining three numerical methods (forward flux sampling, seeding of droplets, and finite-size droplets), we probe the crystallization of hard spheres over the full range from close to coexistence to the spinodal regime. We show that all three methods allow us to sample different regimes and agree perfectly in the ranges where they overlap. By combining the nucleation work calculated from forward flux sampling of small droplets and the nucleation theorem, we show how to compute the nucleation work spanning three orders of magnitude. Using a variation of the nucleation theorem, we show how to extract the pressure difference between the solid droplet and ambient liquid. Moreover, combining the nucleation work with the pressure difference allows us to calculate the interfacial tension of small droplets. Our results demonstrate that employing bulk quantities yields inaccurate results for the nucleation rate.

  1. Eigenvector method for umbrella sampling enables error analysis

    PubMed Central

    Thiede, Erik H.; Van Koten, Brian; Weare, Jonathan; Dinner, Aaron R.

    2016-01-01

    Umbrella sampling efficiently yields equilibrium averages that depend on exploring rare states of a model by biasing simulations to windows of coordinate values and then combining the resulting data with physical weighting. Here, we introduce a mathematical framework that casts the step of combining the data as an eigenproblem. The advantage to this approach is that it facilitates error analysis. We discuss how the error scales with the number of windows. Then, we derive a central limit theorem for averages that are obtained from umbrella sampling. The central limit theorem suggests an estimator of the error contributions from individual windows, and we develop a simple and computationally inexpensive procedure for implementing it. We demonstrate this estimator for simulations of the alanine dipeptide and show that it emphasizes low free energy pathways between stable states in comparison to existing approaches for assessing error contributions. Our work suggests the possibility of using the estimator and, more generally, the eigenvector method for umbrella sampling to guide adaptation of the simulation parameters to accelerate convergence. PMID:27586912

  2. Super-Nyquist shaping and processing technologies for high-spectral-efficiency optical systems

    NASA Astrophysics Data System (ADS)

    Jia, Zhensheng; Chien, Hung-Chang; Zhang, Junwen; Dong, Ze; Cai, Yi; Yu, Jianjun

    2013-12-01

    Implementations of super-Nyquist pulse generation, both digitally using a digital-to-analog converter (DAC) and optically using an optical filter at the transmitter side, are introduced. Three corresponding receiver-side signal processing algorithms are presented and compared for high-spectral-efficiency (SE) optical systems employing spectral prefiltering. The algorithms are designed to mitigate the inter-symbol interference (ISI) and inter-channel interference (ICI) caused by the bandwidth constraint, and comprise a 1-tap constant modulus algorithm (CMA) with 3-tap maximum likelihood sequence estimation (MLSE), a regular CMA and digital filter with 2-tap MLSE, and a constant multi-modulus algorithm (CMMA) with 2-tap MLSE. The principles and prefiltering tolerance are given through numerical and experimental results.
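
    A bare-bones CMA equalizer of the kind referenced above can be sketched in a few lines of Python (ours, not the paper's receiver; the channel, tap count and step size are arbitrary assumptions, and CMA leaves a phase ambiguity that a real receiver resolves separately):

        import numpy as np

        rng = np.random.default_rng(4)
        phases = (rng.integers(0, 4, 20_000) * 2 + 1) * np.pi / 4
        s = np.exp(1j * phases)                    # unit-modulus QPSK symbols
        x = s + 0.3 * np.roll(s, 1)                # mild ISI channel
        x += 0.02 * (rng.standard_normal(s.size)
                     + 1j * rng.standard_normal(s.size))

        ntap, mu, R2 = 5, 1e-3, 1.0        # R2 = E|s|^4 / E|s|^2 = 1 for QPSK
        w = np.zeros(ntap, complex)
        w[ntap // 2] = 1.0                 # center-spike initialization

        cost = []
        for n in range(ntap, x.size):
            u = x[n - ntap:n][::-1]        # regressor, most recent sample first
            y = np.vdot(w, u)              # equalizer output y = w^H u
            e = np.abs(y) ** 2 - R2        # constant-modulus error
            w -= mu * e * np.conj(y) * u   # stochastic-gradient CMA update
            cost.append(e ** 2)

        print("mean CMA cost, first 1000 symbols:", np.mean(cost[:1000]))
        print("mean CMA cost, last 1000 symbols :", np.mean(cost[-1000:]))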

  3. Digital super-resolution holographic data storage based on Hermitian symmetry for achieving high areal density.

    PubMed

    Nobukawa, Teruyoshi; Nomura, Takanori

    2017-01-23

    Digital super-resolution holographic data storage based on Hermitian symmetry is proposed to store digital data in a tiny area of a medium. In general, reducing the recording area with an aperture improves the storage capacity of holographic data storage, but conventional holographic data storage systems face a lower limit on the recording area, called the Nyquist size. Unlike the conventional systems, the proposed system can overcome this limitation with the help of a digital holographic technique and digital signal processing. Experimental results show that the proposed system can record and retrieve a hologram in an area smaller than the Nyquist size on the basis of Hermitian symmetry.
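
    The redundancy being exploited is the elementary fact that a real-valued signal has a Hermitian-symmetric spectrum, so half of the Fourier data determines the rest (our Python check, unrelated to the authors' optics):

        import numpy as np

        rng = np.random.default_rng(5)
        x = rng.standard_normal(64)              # real-valued "data page" (1-D toy)

        X = np.fft.fft(x)
        # Hermitian symmetry: X[k] == conj(X[N-k]) for real x
        print(np.allclose(X[1:], np.conj(X[1:][::-1])))   # True

        half = np.fft.rfft(x)                    # keep only the non-redundant half
        x_rec = np.fft.irfft(half, n=len(x))     # ...and recover the full signal
        print("max reconstruction error:", np.abs(x_rec - x).max())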

  4. STUDIES IN RESEARCH METHODOLOGY. IV. A SAMPLING STUDY OF THE CENTRAL LIMIT THEOREM AND THE ROBUSTNESS OF ONE-SAMPLE PARAMETRIC TESTS,

    DTIC Science & Technology

    iconoclastic. Even at N=1024 these departures were quite appreciable at the testing tails, being greatest for chi-square and least for Z, and becoming worse in all cases at increasingly extreme tail areas. (Author)

  5. Narrowband Interference Suppression in Spread Spectrum Communication Systems

    DTIC Science & Technology

    1995-12-01

    receiver input. As stated earlier, these waveforms must be sampled to obtain the discrete time sequences. The sampling theorem states: A bandlimited... From the FFT chips, the data is passed to a Plessey PDSP16330 Pythagoras Processor. The 16330 is a high-speed digital CMOS IC that converts real and

  6. Enhanced intercarrier interference mitigation based on encoded bit-sequence distribution inside optical superchannels

    NASA Astrophysics Data System (ADS)

    Torres, Jhon James Granada; Soto, Ana María Cárdenas; González, Neil Guerrero

    2016-10-01

    In the context of gridless optical multicarrier systems, we propose a method for intercarrier interference (ICI) mitigation which allows bit error correction in scenarios with spectral non-flatness between the subcarriers composing the multicarrier system and sub-Nyquist carrier spacing. We propose a hybrid ICI mitigation technique which exploits the advantages of signal equalization at two levels: the physical level, for any digital and analog pulse shaping, and the bit-data level, with its ability to incorporate advanced correcting codes. The concatenation of these two complementary techniques consists of a nondata-aided equalizer applied to each optical subcarrier, and a hard-decision forward error correction applied to the sequence of bits distributed along the optical subcarriers, regardless of prior subchannel quality assessment as performed in orthogonal frequency-division multiplexing modulations for the implementation of the bit-loading technique. The impact of the ICI is systematically evaluated in terms of bit error rate as a function of the carrier frequency spacing and the roll-off factor of the digital pulse-shaping filter for a simulated 3×32-Gbaud single-polarization quadrature phase shift keying Nyquist-wavelength division multiplexing system. After the ICI mitigation, back-to-back error-free decoding was obtained for sub-Nyquist carrier spacings of 28.5 and 30 GHz and roll-off values of 0.1 and 0.4, respectively.

  7. Radar research on thunderstorms and lightning

    NASA Technical Reports Server (NTRS)

    Rust, W. D.; Doviak, R. J.

    1982-01-01

    Applications of Doppler radar to detection of storm hazards are reviewed. Normal radar sweeps reveal data on reflectivity fields of rain drops, ionized lightning paths, and irregularities in humidity and temperature. Doppler radar permits identification of the targets' speed toward or away from the transmitter through interpretation of the shifts in the microwave frequency. Wind velocity fields can be characterized in three dimensions by the use of two radar units, with a Nyquist limit on the highest wind speeds that may be recorded. Comparisons with models numerically derived from Doppler radar data show substantial agreement in storm formation predictions based on information gathered before the storm. Examples are provided of tornado observations with expanded Nyquist limits, gust fronts, turbulence, lightning and storm structures. Obtaining vertical velocities from reflectivity spectra is discussed.
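
    The Nyquist limit mentioned above is v_N = λ·PRF/4: the highest unambiguous radial velocity for radar wavelength λ and pulse repetition frequency PRF. A worked Python example (ours; the radar parameters are assumptions) shows how faster targets fold back into range:

        # Doppler Nyquist velocity: v_N = wavelength * PRF / 4
        wavelength = 0.10        # m  (assumed 10 cm radar)
        prf = 1000.0             # Hz (assumed pulse repetition frequency)
        v_n = wavelength * prf / 4
        print(f"Nyquist velocity: {v_n:.1f} m/s")          # 25.0 m/s

        def observed(v_true, v_n):
            # radial velocities alias (fold) into the interval [-v_N, +v_N)
            return (v_true + v_n) % (2 * v_n) - v_n

        for v in (10.0, 30.0, -40.0):
            print(f"true {v:+6.1f} m/s -> displayed {observed(v, v_n):+6.1f} m/s")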

  8. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramér-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  9. The Great Emch Closure Theorem and a combinatorial proof of Poncelet's Theorem

    NASA Astrophysics Data System (ADS)

    Avksentyev, E. A.

    2015-11-01

    The relations between the classical closure theorems (Poncelet's, Steiner's, Emch's, and the zigzag theorems) and some of their generalizations are discussed. It is known that Emch's Theorem is the most general of these, while the others follow as special cases. A generalization of Emch's Theorem to pencils of circles is proved, which (by analogy with the Great Poncelet Theorem) can be called the Great Emch Theorem. It is shown that the Great Emch and Great Poncelet Theorems are equivalent and can be derived one from the other using elementary geometry, and also that both hold in the Lobachevsky plane as well. A new closure theorem is also obtained, in which the construction of closure is slightly more involved: closure occurs on a variable circle which is tangent to a fixed pair of circles. In conclusion, a combinatorial proof of Poncelet's Theorem is given, which deduces the closure principle for an arbitrary number of steps from the principle for three steps using combinatorics and number theory. Bibliography: 20 titles.

  10. Effect of Cu-doping on structural and electrical properties of Ni0.4-xCu0.3+xMg0.3Fe2O4 ferrites prepared using sol-gel method

    NASA Astrophysics Data System (ADS)

    Dhaou, Mohamed Houcine

    2018-06-01

    Ni0.4-xCu0.3+xMg0.3Fe2O4 spinel ferrites were prepared by the sol-gel technique. X-ray diffraction results indicate that the ferrite samples have a cubic spinel-type structure with the Fd-3m space group. The electrical properties of the studied samples have been investigated using the complex impedance spectroscopy technique as a function of frequency at different temperatures. We found that the addition of copper in the Ni0.4-xCu0.3+xMg0.3Fe2O4 ferrite system can improve its conductivity. Dielectric properties have been discussed in terms of hopping of charge carriers between Fe2+ and Fe3+ ions. For all samples, the frequency dependence of the imaginary part of the impedance (Z″) shows the existence of a relaxation phenomenon. The appropriate equivalent circuit configuration for modeling the Nyquist plots of impedance is of the type (Rg + Rgb//Cgb).
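
    The quoted equivalent circuit is easy to sketch numerically (our Python illustration; the component values are arbitrary assumptions): the impedance Z(ω) = Rg + Rgb/(1 + jωRgbCgb) traces the familiar Nyquist semicircle of diameter Rgb offset by Rg.

        import numpy as np

        Rg, Rgb, Cgb = 50.0, 450.0, 1e-8      # assumed grain / grain-boundary values
        f = np.logspace(0, 7, 400)            # 1 Hz .. 10 MHz
        w = 2 * np.pi * f

        Z = Rg + Rgb / (1 + 1j * w * Rgb * Cgb)   # Rg in series with Rgb || Cgb

        # Nyquist-plot landmarks: high-f intercept Rg, low-f intercept Rg + Rgb,
        # apex at w = 1/(Rgb*Cgb) where -Z'' peaks at Rgb/2
        print("Z' range :", Z.real.min(), "->", Z.real.max())     # ~50 -> ~500
        print("-Z'' max :", (-Z.imag).max())                      # ~225 = Rgb/2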

  11. Analysis of Returned Comet Nucleus Samples

    NASA Astrophysics Data System (ADS)

    Chang, Sherwood

    1997-12-01

    This volume contains abstracts that have been accepted by the Program Committee for presentation at the Workshop on Analysis of Returned Comet Nucleus Samples, held in Milpitas, California, January 16-18, 1989. Conveners are Sherwood Chang (NASA Ames Research Center) and Larry Nyquist (NASA Johnson Space Center). Program Committee members are Thomas Ahrens (ex-officio; California Institute of Technology), Lou Allamandola (NASA Ames Research Center), David Blake (NASA Ames Research Center), Donald Brownlee (University of Washington, Seattle), Theodore E. Bunch (NASA Ames Research Center), Humberto Campins (Planetary Science Institute), Jeff Cuzzi (NASA Ames Research Center), Eberhard Grün (Max-Planck-Institut für Kernphysik), Martha Hanner (Jet Propulsion Laboratory), Alan Harris (Jet Propulsion Laboratory), John Kerridge (University of California, Los Angeles), Yves Langevin (University of Paris), Gerhard Schwehm (ESTEC), and Paul Weissman (Jet Propulsion Laboratory). Logistics and administrative support for the workshop were provided by the Lunar and Planetary Institute Projects Office.

  13. High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures.

    PubMed

    Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando

    2011-01-01

    Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability.

  14. On the precision of automated activation time estimation

    NASA Technical Reports Server (NTRS)

    Kaplan, D. T.; Smith, J. M.; Rosenbaum, D. S.; Cohen, R. J.

    1988-01-01

    We examined how the assignment of local activation times in epicardial and endocardial electrograms is affected by sampling rate, ambient signal-to-noise ratio, and sin(x)/x waveform interpolation. Algorithms used for the estimation of fiducial point locations included maximum dV/dt and a matched filter detection algorithm. Test signals included epicardial and endocardial electrograms overlying both normal and infarcted regions of dog myocardium. Signal-to-noise levels were adjusted by combining known data sets with white noise "colored" to match the spectral characteristics of experimentally recorded noise. For typical signal-to-noise ratios and sampling rates, the template-matching algorithm provided the greatest precision in reproducibly estimating fiducial point location, and sin(x)/x interpolation allowed for an additional significant improvement. With few restrictions, combining these two techniques may allow the use of digitization rates below the Nyquist rate without significant loss of precision.
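
    The sin(x)/x step is ordinary Whittaker-Shannon interpolation, which can place a fiducial point between the raw samples (our Python sketch; the bandlimited test waveform and timing are assumptions):

        import numpy as np

        fs = 1000.0                                # Hz, sampling rate
        t = np.arange(0, 0.05, 1 / fs)             # coarse sample times
        x = np.exp(-((t - 0.02371) / 0.004) ** 2)  # smooth "activation" waveform

        # Whittaker-Shannon: x(t) = sum_n x[n] * sinc((t - n/fs) * fs)
        tf = np.arange(0, 0.05, 1 / (20 * fs))     # 20x finer time grid
        xf = (x[None, :] * np.sinc((tf[:, None] - t[None, :]) * fs)).sum(axis=1)

        print("peak from raw samples :", t[np.argmax(x)])
        print("peak after sinc interp:", tf[np.argmax(xf)])  # closer to 0.02371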

  15. Sanov and central limit theorems for output statistics of quantum Markov chains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horssen, Merlijn van, E-mail: merlijn.vanhorssen@nottingham.ac.uk; Guţă, Mădălin, E-mail: madalin.guta@nottingham.ac.uk

    2015-02-15

    In this paper, we consider the statistics of repeated measurements on the output of a quantum Markov chain. We establish a large deviations result analogous to Sanov’s theorem for the multi-site empirical measure associated to finite sequences of consecutive outcomes of a classical stochastic process. Our result relies on the construction of an extended quantum transition operator (which keeps track of previous outcomes) in terms of which we compute moment generating functions, and whose spectral radius is related to the large deviations rate function. As a corollary to this, we obtain a central limit theorem for the empirical measure. Such higher level statistics may be used to uncover critical behaviour such as dynamical phase transitions, which are not captured by lower level statistics such as the sample mean. As a step in this direction, we give an example of a finite system whose level-1 (empirical mean) rate function is independent of a model parameter while the level-2 (empirical measure) rate is not.

  16. Applying Nyquist's method for stability determination to solar wind observations

    NASA Astrophysics Data System (ADS)

    Klein, Kristopher G.; Kasper, Justin C.; Korreck, K. E.; Stevens, Michael L.

    2017-10-01

    The role instabilities play in governing the evolution of solar and astrophysical plasmas is a matter of considerable scientific interest. The large number of sources of free energy accessible to such nearly collisionless plasmas makes general modeling of unstable behavior, accounting for the temperatures, densities, anisotropies, and relative drifts of a large number of populations, analytically difficult. We therefore seek a general method of stability determination that may be automated for future analysis of solar wind observations. This work describes an efficient application of the Nyquist instability method to the Vlasov dispersion relation appropriate for hot, collisionless, magnetized plasmas, including the solar wind. The algorithm recovers the familiar proton temperature anisotropy instabilities, as well as instabilities that had been previously identified using fits extracted from in situ observations in Gary et al. (2016). Future proposed applications of this method are discussed.

  17. Novel Oversampling Technique for Improving Signal-to-Quantization Noise Ratio on Accelerometer-Based Smart Jerk Sensors in CNC Applications.

    PubMed

    Rangel-Magdaleno, Jose J; Romero-Troncoso, Rene J; Osornio-Rios, Roque A; Cabal-Yepez, Eduardo

    2009-01-01

    Jerk, defined as the first derivative of acceleration, has become a major monitoring concern in computerized numeric controlled (CNC) machines. Several works highlight the necessity of measuring jerk in a reliable way for improving production processes. Nowadays, the computation of jerk is done by finite differences of the acceleration signal, computed at the Nyquist rate, which leads to a low signal-to-quantization-noise ratio (SQNR) during the estimation. The novelty of this work is the development of a smart sensor for jerk monitoring from a standard accelerometer with improved SQNR. The proposal is based on oversampling techniques that give a better estimation of jerk than that produced by a Nyquist-rate differentiator. Simulations and experimental results are presented to show the overall methodology performance.
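
    A hedged numpy sketch of the oversampling idea (ours, not the authors' smart-sensor design; the tone, dither and rates are assumptions): quantization noise spread over a wider band is removed by averaging and decimation, gaining roughly 3 dB of SQNR per doubling of the rate, i.e. about 12 dB for 16x.

        import numpy as np

        rng = np.random.default_rng(6)
        OSR, N, delta = 16, 1 << 14, 1 / 128       # oversampling ratio, quant step

        t = np.arange(N * OSR) / (N * OSR)
        x = np.sin(2 * np.pi * 5 * t)              # slow "acceleration" tone

        def quantize(v):
            # add a little dither so the quantization error is noise-like
            return np.round((v + rng.uniform(-delta/2, delta/2, v.shape))
                            / delta) * delta

        x_nyq = quantize(x[::OSR])                 # quantize at the base rate
        x_ovs = quantize(x)                        # quantize at OSR x base rate
        x_dec = x_ovs.reshape(-1, OSR).mean(axis=1)  # average + decimate

        ref = x[::OSR]
        ref_dec = x.reshape(-1, OSR).mean(axis=1)  # ideal averaged reference
        snr = lambda s, e: 10 * np.log10(np.mean(s**2) / np.mean(e**2))
        print("SQNR at base rate      :", round(snr(ref, x_nyq - ref), 1), "dB")
        print("SQNR oversampled (16x) :", round(snr(ref, x_dec - ref_dec), 1), "dB")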

  18. Spatial Resolution Characterization for QuickBird Image Products 2003-2004 Season

    NASA Technical Reports Server (NTRS)

    Blonski, Slawomir

    2006-01-01

    This presentation focuses on spatial resolution characterization for QuickBird panchromatic images in 2003-2004 and presents data measurements and analysis of SSC edge target deployment and edge response extraction and modeling. The results of the characterization are shown as values of the Modulation Transfer Function (MTF) at the Nyquist spatial frequency and as the Relative Edge Response (RER) components. The results show that the RER is much less sensitive to the accuracy of the curve fitting than the value of the MTF at the Nyquist frequency; the RER/edge response slope is therefore a more robust estimator of digital image spatial resolution than the MTF. For the QuickBird panchromatic images, the RER is consistently equal to 0.5 for images processed with Cubic Convolution resampling and to 0.8 for MTF resampling.

  19. High-contrast imaging in multi-star systems: progress in technology development and lab results

    NASA Astrophysics Data System (ADS)

    Belikov, Ruslan; Pluzhnik, Eugene; Bendek, Eduardo; Sirbu, Dan

    2017-09-01

    We present the continued progress and laboratory results advancing the technology readiness of Multi-Star Wavefront Control (MSWC), a method to directly image planets and disks in multi-star systems such as Alpha Centauri. This method works with almost any coronagraph (or external occulter with a DM) and requires little or no change to existing and mature hardware. In particular, it works with single-star coronagraphs and does not require the off-axis star(s) to be coronagraphically suppressed. Because of the ubiquity of multistar systems, this method increases the science yield of many missions and concepts such as WFIRST, Exo-C/S, HabEx, LUVOIR, and potentially enables the detection of Earthlike planets (if they exist) around our nearest neighbor star, Alpha Centauri, with a small and low-cost space telescope such as ACESat. Our lab demonstrations were conducted at the Ames Coronagraph Experiment (ACE) laboratory and show both the feasibility as well as the trade-offs involved in using MSWC. We show several simulations and laboratory tests at roughly TRL-3 corresponding to representative targets and missions, including Alpha Centauri with WFIRST. In particular, we demonstrate MSWC in Super-Nyquist mode, where the distance between the desired dark zone and the off-axis star is larger than the conventional (sub-Nyquist) control range of the DM. Our laboratory tests did not yet include a coronagraph, but did demonstrate significant speckle suppression from two independent light sources at sub- as well as super-Nyquist separations.

  20. Illustrating the Central Limit Theorem through Microsoft Excel Simulations

    ERIC Educational Resources Information Center

    Moen, David H.; Powell, John E.

    2005-01-01

    Using Microsoft Excel, several interactive, computerized learning modules are developed to demonstrate the Central Limit Theorem. These modules are used in the classroom to enhance the comprehension of this theorem. The Central Limit Theorem is a very important theorem in statistics, and yet because it is not intuitively obvious, statistics…

  1. Unified quantum no-go theorems and transforming of quantum pure states in a restricted set

    NASA Astrophysics Data System (ADS)

    Luo, Ming-Xing; Li, Hui-Ran; Lai, Hong; Wang, Xiaojun

    2017-12-01

    The linear superposition principle in quantum mechanics is essential for several no-go theorems such as the no-cloning theorem, the no-deleting theorem and the no-superposing theorem. In this paper, we investigate general quantum transformations forbidden or permitted by the superposition principle for various goals. First, we prove a no-encoding theorem that forbids linearly superposing of an unknown pure state and a fixed pure state in Hilbert space of a finite dimension. The new theorem is further extended for multiple copies of an unknown state as input states. These generalized results of the no-encoding theorem include the no-cloning theorem, the no-deleting theorem and the no-superposing theorem as special cases. Second, we provide a unified scheme for presenting perfect and imperfect quantum tasks (cloning and deleting) in a one-shot manner. This scheme may lead to fruitful results that are completely characterized with the linear independence of the representative vectors of input pure states. The upper bounds of the efficiency are also proved. Third, we generalize a recent superposing scheme of unknown states with a fixed overlap into new schemes when multiple copies of an unknown state are as input states.

  2. Establishing the moon as a spectral radiance standard

    USGS Publications Warehouse

    Kieffer, H.H.; Wildey, R.L.

    1996-01-01

    A new automated observatory dedicated to the radiometry of the moon has been constructed to provide new radiance information for calibration of earth-orbiting imaging instruments, particularly Earth Observing System instruments. Instrumentation includes an imaging photometer with 4.5-in. resolution on a fully digital mount and a full-aperture radiance calibration source. Interference filters within 0.35-0.95 μm correspond to standard stellar magnitude systems, accommodate wavelengths of lunar spectral contrast, and approximate some band-passes of planned earth-orbiting instruments (ASTER, Landsat-7 ETM, MISR, MODIS, and SeaWiFS). The same equipment is used for lunar and stellar observations, with the use of an aperture stop in lunar imaging to comply with Nyquist's theorem and lengthen exposure times to avoid scintillation effects. A typical robotic night run involves observation of about 60 photometric standard stars and the moon; about 10 of the standard stars are observed repeatedly to determine atmospheric extinction, and the moon is observed several times. Observations are to be made on every photometric night during the bright half of the month for at least 4.5 years to adequately cover phase and libration variation. Each lunar image is reduced to absolute exoatmospheric radiance and reprojected to a fixed selenographic grid system. The collection of these images at various librations and phase angles will be reduced to photometric models for each of the approximately 120 000 points in the lunar grid for each filter. Radiance models of the moon can then be produced for the precise geometry of an orbiting instrument observation. Expected errors are under 1% relative and 2.5% absolute. A second telescope operating from 1.0 to 2.5 μm is planned.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton

    This is a PowerPoint presentation which serves as lecture material for the Parallel Computing summer school. It covers the fundamentals of the Monte Carlo calculation method. The material is presented according to the following outline: Introduction (background; a simple example: estimating π), Why does this even work? (the Law of Large Numbers, the Central Limit Theorem), How to sample (inverse transform sampling, rejection sampling), and An example from particle transport.
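
    Two staples from such a lecture, sketched in Python (our illustration, not the presentation's own material): estimating π by random darts, and inverse-transform sampling of an exponential distribution.

        import numpy as np

        rng = np.random.default_rng(7)

        # 1) estimating pi: fraction of uniform darts inside the unit circle
        n = 1_000_000
        pts = rng.uniform(-1, 1, size=(n, 2))
        pi_hat = 4 * np.mean((pts ** 2).sum(axis=1) <= 1)
        print("pi estimate:", pi_hat, " (statistical error ~ 1/sqrt(n))")

        # 2) inverse transform sampling: if U ~ Uniform(0,1), then
        #    X = -ln(1 - U) / lam has CDF 1 - exp(-lam*x), i.e. Exponential(lam)
        lam = 2.0
        x = -np.log(1 - rng.uniform(size=n)) / lam
        print("sample mean:", x.mean(), " vs 1/lam =", 1 / lam)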

  4. Radiation Hardened, Modulator ASIC for High Data Rate Communications

    NASA Technical Reports Server (NTRS)

    McCallister, Ron; Putnam, Robert; Andro, Monty; Fujikawa, Gene

    2000-01-01

    Satellite-based telecommunication services are challenged by the need to generate down-link power levels adequate to support the high quality (BER ≈ 10^-12) links required for modern broadband data services. Bandwidth-efficient Nyquist signaling, using low values of excess bandwidth (alpha), can exhibit large peak-to-average-power ratio (PAPR) values. High PAPR values necessitate high-power amplifier (HPA) backoff greater than the PAPR, resulting in unacceptably low HPA efficiency. Given the high cost of on-board prime power, this inefficiency represents both an economic burden and a constraint on the rates and quality of data services supportable from satellite platforms. Constant-envelope signals offer improved power-efficiency, but only by imposing a severe bandwidth-efficiency penalty. This paper describes a radiation-hardened modulator which can improve satellite-based broadband data services by combining the bandwidth-efficiency of low-alpha Nyquist signals with high power-efficiency (negligible HPA backoff).

  5. Mitigation of time-varying distortions in Nyquist-WDM systems using machine learning

    NASA Astrophysics Data System (ADS)

    Granada Torres, Jhon J.; Varughese, Siddharth; Thomas, Varghese A.; Chiuchiarelli, Andrea; Ralph, Stephen E.; Cárdenas Soto, Ana M.; Guerrero González, Neil

    2017-11-01

    We propose a machine learning-based nonsymmetrical demodulation technique relying on clustering to mitigate time-varying distortions derived from several impairments such as IQ imbalance, bias drift, phase noise and interchannel interference. Experimental results show that those impairments cause centroid movements in the received constellations seen in time windows of 10k symbols in controlled scenarios. In our demodulation technique, the k-means algorithm iteratively identifies the cluster centroids in the constellation of the received symbols in short time windows by means of the optimization of decision thresholds for a minimum BER. We experimentally verified the effectiveness of this computationally efficient technique in multicarrier 16QAM Nyquist-WDM systems over 270 km links. Our nonsymmetrical demodulation technique outperforms the conventional QAM demodulation technique, reducing the OSNR requirement by up to ∼0.8 dB at a BER of 1 × 10^-2 for signals affected by interchannel interference.
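
    The clustering step can be sketched minimally in Python (ours, not the authors' receiver; shown for QPSK rather than 16QAM to keep it short, with an assumed drift and noise level): k-means re-centers the decision regions of a constellation whose centroids have drifted, lowering the symbol error rate relative to fixed ideal thresholds.

        import numpy as np

        rng = np.random.default_rng(8)
        ideal = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

        labels = rng.integers(0, 4, 10_000)
        drift = 0.2 * np.exp(0.3j)             # slow IQ/bias drift in this window
        rx = ideal[labels] + drift + 0.25 * (
            rng.standard_normal(10_000) + 1j * rng.standard_normal(10_000))

        # a few k-means iterations, initialized at the ideal constellation
        cent = ideal.copy()
        for _ in range(20):
            assign = np.argmin(np.abs(rx[:, None] - cent[None, :]), axis=1)
            cent = np.array([rx[assign == k].mean() for k in range(4)])

        fixed = np.argmin(np.abs(rx[:, None] - ideal[None, :]), axis=1)
        print("SER with fixed ideal thresholds :", np.mean(fixed != labels))
        print("SER with k-means centroids      :", np.mean(assign != labels))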

  6. A Decomposition Theorem for Finite Automata.

    ERIC Educational Resources Information Center

    Santa Coloma, Teresa L.; Tucci, Ralph P.

    1990-01-01

    Described is automata theory which is a branch of theoretical computer science. A decomposition theorem is presented that is easier than the Krohn-Rhodes theorem. Included are the definitions, the theorem, and a proof. (KR)

  7. Photon-counting hexagonal pixel array CdTe detector: Spatial resolution characteristics for image-guided interventional applications

    PubMed Central

    Shrestha, Suman; Karellas, Andrew; Shi, Linxi; Gounis, Matthew J.; Bellazzini, Ronaldo; Spandre, Gloria; Brez, Alessandro; Minuti, Massimo

    2016-01-01

    Purpose: High-resolution, photon-counting, energy-resolved detector with fast-framing capability can facilitate simultaneous acquisition of precontrast and postcontrast images for subtraction angiography without pixel registration artifacts and can facilitate high-resolution real-time imaging during image-guided interventions. Hence, this study was conducted to determine the spatial resolution characteristics of a hexagonal pixel array photon-counting cadmium telluride (CdTe) detector. Methods: A 650 μm thick CdTe Schottky photon-counting detector capable of concurrently acquiring up to two energy-windowed images was operated in a single energy-window mode to include photons of 10 keV or higher. The detector had hexagonal pixels with apothem of 30 μm resulting in pixel pitch of 60 and 51.96 μm along the two orthogonal directions. The detector was characterized at IEC-RQA5 spectral conditions. Linear response of the detector was determined over the air kerma rate relevant to image-guided interventional procedures ranging from 1.3 nGy/frame to 91.4 μGy/frame. Presampled modulation transfer was determined using a tungsten edge test device. The edge-spread function and the finely sampled line spread function accounted for hexagonal sampling, from which the presampled modulation transfer function (MTF) was determined. Since detectors with hexagonal pixels require resampling to square pixels for distortion-free display, the optimal square pixel size was determined by minimizing the root-mean-squared-error of the aperture functions for the square and hexagonal pixels up to the Nyquist limit. Results: At Nyquist frequencies of 8.33 and 9.62 cycles/mm along the apothem and orthogonal to the apothem directions, the modulation factors were 0.397 and 0.228, respectively. For the corresponding axis, the limiting resolution defined as 10% MTF occurred at 13.3 and 12 cycles/mm, respectively. Evaluation of the aperture functions yielded an optimal square pixel size of 54 μm. After resampling to 54 μm square pixels using trilinear interpolation, the presampled MTF at Nyquist frequency of 9.26 cycles/mm was 0.29 and 0.24 along the orthogonal directions and the limiting resolution (10% MTF) occurred at approximately 12 cycles/mm. Visual analysis of a bar pattern image showed the ability to resolve close to 12 line-pairs/mm and qualitative evaluation of a neurovascular nitinol-stent showed the ability to visualize its struts at clinically relevant conditions. Conclusions: Hexagonal pixel array photon-counting CdTe detector provides high spatial resolution in single-photon counting mode. After resampling to optimal square pixel size for distortion-free display, the spatial resolution is preserved. The dual-energy capabilities of the detector could allow for artifact-free subtraction angiography and basis material decomposition. The proposed high-resolution photon-counting detector with energy-resolving capability can be of importance for several image-guided interventional procedures as well as for pediatric applications. PMID:27147324
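
    The pitch and Nyquist figures quoted above follow directly from the 30 μm apothem of a regular hexagon; a quick arithmetic check (ours) in Python:

        import math

        a = 30e-3                      # apothem in mm
        pitch_1 = 2 * a                # along the apothem direction: 0.060 mm
        pitch_2 = math.sqrt(3) * a     # orthogonal direction: 0.05196 mm

        for name, p in (("apothem axis", pitch_1), ("orthogonal axis", pitch_2)):
            print(f"{name}: pitch = {p * 1e3:.2f} um, "
                  f"Nyquist = {1 / (2 * p):.2f} cycles/mm")
        # matches the stated 60 / 51.96 um and 8.33 / 9.62 cycles/mm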

  10. Correlation between k-space sampling pattern and MTF in compressed sensing MRSI.

    PubMed

    Heikal, A A; Wachowicz, K; Fallone, B G

    2016-10-01

    To investigate the relationship between the k-space sampling patterns used for compressed sensing MR spectroscopic imaging (CS-MRSI) and the modulation transfer function (MTF) of the metabolite maps. This relationship may allow the desired frequency content of the metabolite maps to be quantitatively tailored when designing an undersampling pattern. Simulations of a phantom were used to calculate the MTF of Nyquist-sampled (NS) 32 × 32 MRSI and of four-times-undersampled CS-MRSI reconstructions. The dependence of the CS-MTF on the k-space sampling pattern was evaluated for three sets of k-space sampling patterns generated using different probability distribution functions (PDFs). CS-MTFs were also evaluated for three more sets of patterns generated using a modified algorithm in which the sampling ratios are constrained to adhere to the PDFs. Strong visual correlation as well as high R² was found between the MTF of CS-MRSI and the product of the frequency-dependent sampling ratio and the NS 32 × 32 MTF. Also, PDF-constrained sampling patterns led to higher reproducibility of the CS-MTF and stronger correlations to the above-mentioned product. The relationship established in this work provides the user with a theoretical solution for the MTF of CS-MRSI that is both predictable and customizable to the user's needs.

  11. Slowly changing potential problems in Quantum Mechanics: Adiabatic theorems, ergodic theorems, and scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fishman, S., E-mail: fishman@physics.technion.ac.il; Soffer, A., E-mail: soffer@math.rutgers.edu

    2016-07-15

    We employ the recently developed multi-time scale averaging method to study the large time behavior of slowly changing (in time) Hamiltonians. We treat some known cases in a new way, such as the Zener problem, and we give another proof of the adiabatic theorem in the gapless case. We prove a new uniform ergodic theorem for slowly changing unitary operators. This theorem is then used to derive the adiabatic theorem, do the scattering theory for such Hamiltonians, and prove some classical propagation estimates and asymptotic completeness.

  12. Automatic optical inspection of regular grid patterns with an inspection camera used below the Shannon-Nyquist criterion for optical resolution

    NASA Astrophysics Data System (ADS)

    Ferreira, Flávio P.; Forte, Paulo M. F.; Felgueiras, Paulo E. R.; Bret, Boris P. J.; Belsley, Michael S.; Nunes-Pereira, Eduardo J.

    2017-02-01

    An Automatic Optical Inspection (AOI) system for imaging devices used in the automotive industry is described, in which the inspecting optics have lower spatial resolution than the device under inspection. The system is robust, has no moving parts, and has a short cycle time. Its main advantage is that it can detect and quantify defects in regular patterns while working below the Shannon-Nyquist criterion for optical resolution, using a single low-resolution image sensor. It is easily scalable, which is an important advantage in industrial applications, since the same inspecting sensor can be reused for increasingly higher spatial resolutions of the devices to be inspected. The optical inspection is implemented with a notch multi-band Fourier filter, making the procedure especially suited to regular patterns, like the ones that can be produced in image displays and Head Up Displays (HUDs). The regular patterns are used in the production line only, for inspection purposes. For image displays, functional defects are detected at the level of a single display grid element. Functional defects are the ones impairing the function of the display, and are preferred in AOI to direct geometric imaging, since they are the ones directly related to the end-user experience. The shift in emphasis from geometric imaging to functional imaging is critical, since this is what allows quantitative inspection below the Shannon-Nyquist limit. For HUDs, the functional defect detection addresses defects resulting from the combined effect of the image display and the image-forming optics.
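
    A hedged 1-D sketch of the notch multi-band Fourier idea (ours, not the authors' optical system; the pattern, period and defect are arbitrary assumptions): suppressing the harmonics of a regular pattern leaves a residual dominated by the defect.

        import numpy as np

        N, period = 1024, 16
        x = 0.5 + 0.5 * np.sign(np.sin(2 * np.pi * np.arange(N) / period))  # grid
        x[517] = 0.2                           # one defective grid element

        X = np.fft.rfft(x)
        f = np.arange(len(X))
        # notch the DC term and every harmonic of the pattern frequency N/period
        notch = (f % (N // period) == 0)
        X[notch] = 0.0
        residual = np.fft.irfft(X, n=N)

        print("defect location (argmax |residual|):", np.argmax(np.abs(residual)))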

  13. The Non-Signalling theorem in generalizations of Bell's theorem

    NASA Astrophysics Data System (ADS)

    Walleczek, J.; Grössing, G.

    2014-04-01

    Does "epistemic non-signalling" ensure the peaceful coexistence of special relativity and quantum nonlocality? The possibility of an affirmative answer is of great importance to deterministic approaches to quantum mechanics given recent developments towards generalizations of Bell's theorem. By generalizations of Bell's theorem we here mean efforts that seek to demonstrate the impossibility of any deterministic theories to obey the predictions of Bell's theorem, including not only local hidden-variables theories (LHVTs) but, critically, of nonlocal hidden-variables theories (NHVTs) also, such as de Broglie-Bohm theory. Naturally, in light of the well-established experimental findings from quantum physics, whether or not a deterministic approach to quantum mechanics, including an emergent quantum mechanics, is logically possible, depends on compatibility with the predictions of Bell's theorem. With respect to deterministic NHVTs, recent attempts to generalize Bell's theorem have claimed the impossibility of any such approaches to quantum mechanics. The present work offers arguments showing why such efforts towards generalization may fall short of their stated goal. In particular, we challenge the validity of the use of the non-signalling theorem as a conclusive argument in favor of the existence of free randomness, and therefore reject the use of the non-signalling theorem as an argument against the logical possibility of deterministic approaches. We here offer two distinct counter-arguments in support of the possibility of deterministic NHVTs: one argument exposes the circularity of the reasoning which is employed in recent claims, and a second argument is based on the inconclusive metaphysical status of the non-signalling theorem itself. We proceed by presenting an entirely informal treatment of key physical and metaphysical assumptions, and of their interrelationship, in attempts seeking to generalize Bell's theorem on the basis of an ontic, foundational interpretation of the non-signalling theorem. We here argue that the non-signalling theorem must instead be viewed as an epistemic, operational theorem i.e. one that refers exclusively to what epistemic agents can, or rather cannot, do. That is, we emphasize that the non-signalling theorem is a theorem about the operational inability of epistemic agents to signal information. In other words, as a proper principle, the non-signalling theorem may only be employed as an epistemic, phenomenological, or operational principle. Critically, our argument emphasizes that the non-signalling principle must not be used as an ontic principle about physical reality as such, i.e. as a theorem about the nature of physical reality independently of epistemic agents e.g. human observers. One major reason in favor of our conclusion is that any definition of signalling or of non-signalling invariably requires a reference to epistemic agents, and what these agents can actually measure and report. Otherwise, the non-signalling theorem would equal a general "no-influence" theorem. In conclusion, under the assumption that the non-signalling theorem is epistemic (i.e. "epistemic non-signalling"), the search for deterministic approaches to quantum mechanics, including NHVTs and an emergent quantum mechanics, continues to be a viable research program towards disclosing the foundations of physical reality at its smallest dimensions.

  14. Consistency of the adiabatic theorem.

    PubMed

    Amin, M H S

    2009-06-05

    The adiabatic theorem provides the basis for the adiabatic model of quantum computation. Recently the conditions required for the adiabatic theorem to hold have become a subject of some controversy. Here we show that the reported violations of the adiabatic theorem all arise from resonant transitions between energy levels. In the absence of fast driven oscillations the traditional adiabatic theorem holds. Implications for adiabatic quantum computation are discussed.

  15. Counting Penguins.

    ERIC Educational Resources Information Center

    Perry, Mike; Kader, Gary

    1998-01-01

    Presents an activity on the simplification of penguin counting by employing the basic ideas and principles of sampling to teach students to understand and recognize its role in statistical claims. Emphasizes estimation, data analysis and interpretation, and central limit theorem. Includes a list of items for classroom discussion. (ASK)

  16. Optimal no-go theorem on hidden-variable predictions of effect expectations

    NASA Astrophysics Data System (ADS)

    Blass, Andreas; Gurevich, Yuri

    2018-03-01

    No-go theorems prove that, under reasonable assumptions, classical hidden-variable theories cannot reproduce the predictions of quantum mechanics. Traditional no-go theorems proved that hidden-variable theories cannot predict correctly the values of observables. Recent expectation no-go theorems prove that hidden-variable theories cannot predict the expectations of observables. We prove the strongest expectation-focused no-go theorem to date. It is optimal in the sense that the natural weakenings of the assumptions and the natural strengthenings of the conclusion make the theorem fail. The literature on expectation no-go theorems strongly suggests that the expectation-focused approach is more general than the value-focused one. We establish that the expectation approach is not more general.

  17. Using Pictures to Enhance Students' Understanding of Bayes' Theorem

    ERIC Educational Resources Information Center

    Trafimow, David

    2011-01-01

    Students often have difficulty understanding algebraic proofs of statistics theorems. However, it sometimes is possible to prove statistical theorems with pictures in which case students can gain understanding more easily. I provide examples for two versions of Bayes' theorem.
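
    As a quick complement to the pictorial approach, here is a worked numeric instance of Bayes' theorem in both its standard and odds forms; the diagnostic-test numbers are invented for illustration.

    ```python
    # Bayes' theorem for a diagnostic test: P(D|+) = P(+|D) P(D) / P(+).
    prior = 0.01          # P(D): base rate of the condition (illustrative)
    sens = 0.95           # P(+|D): sensitivity
    fpr = 0.05            # P(+|not D): false-positive rate

    evidence = sens * prior + fpr * (1 - prior)     # P(+), total probability
    posterior = sens * prior / evidence             # standard form
    print(f"P(D|+) = {posterior:.3f}")              # about 0.161

    # Odds form: posterior odds = likelihood ratio x prior odds.
    post_odds = (sens / fpr) * (prior / (1 - prior))
    print(f"Same answer via odds: {post_odds / (1 + post_odds):.3f}")
    ```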

  18. A hierarchical generalization of the acoustic reciprocity theorem involving higher-order derivatives and interaction quantities.

    PubMed

    Lin, Ju; Li, Jie; Li, Xiaolei; Wang, Ning

    2016-10-01

    An acoustic reciprocity theorem is generalized, for a smoothly varying perturbed medium, to a hierarchy of reciprocity theorems including higher-order derivatives of acoustic fields. The standard reciprocity theorem is the first member of the hierarchy. It is shown that the conservation of higher-order interaction quantities is related closely to higher-order derivative distributions of perturbed media. Then integral reciprocity theorems are obtained by applying Gauss's divergence theorem, which give explicit integral representations connecting higher-order interactions and higher-order derivative distributions of perturbed media. Some possible applications to an inverse problem are also discussed.

  19. Methods to Directly Image Exoplanets around Alpha Centauri and Other Multi-Star Systems

    NASA Astrophysics Data System (ADS)

    Belikov, R.; Sirbu, D.; Bendek, E.; Pluzhnik, E.

    2017-12-01

    The majority of FGK stars exist in multi-star systems and thus form a potentially rich target sample for direct imaging of exoplanets. A large fraction of these stars have starlight leakage from their companion that is brighter than rocky planets. This is particularly true of Alpha Centauri, which is 2.4x closer and about an order of magnitude brighter than any other FGK star, and thus may be the best target for any direct imaging mission, if the light of both stars can be suppressed. Thus, the ability to suppress starlight from two stars improves both the quantity and quality of Sun-like targets for missions such as WFIRST, LUVOIR, and HabEx. We present an analysis of starlight-leak challenges in multi-star systems and techniques to solve those challenges, with an emphasis on imaging Alpha Centauri with WFIRST. For the case of internal coronagraphs, the fundamental problem appears to be independent wavefront control of multiple stars (at least if the companion is close enough or bright enough that it cannot simply be removed by longer exposure times or post-processing). We present a technique called Multi-Star Wavefront Control (MSWC) as a solution to this challenge and describe the results of our technology development program that advanced MSWC to TRL 3. Our program consisted of lab demonstrations of dark zones in two-star systems, validated simulations, as well as simulated predictions demonstrating that, with this technology, contrasts needed for Earth-like planets are in principle achievable. We also demonstrate MSWC in Super-Nyquist mode, which allows suppression of multiple stars at separations greater than the spatial Nyquist limit of the deformable mirror.

  20. Confidence intervals for the population mean tailored to small sample sizes, with applications to survey sampling.

    PubMed

    Rosenblum, Michael A; Laan, Mark J van der

    2009-01-07

    The validity of standard confidence intervals constructed in survey sampling is based on the central limit theorem. For small sample sizes, the central limit theorem may give a poor approximation, resulting in confidence intervals that are misleading. We discuss this issue and propose methods for constructing confidence intervals for the population mean tailored to small sample sizes. We present a simple approach for constructing confidence intervals for the population mean based on tail bounds for the sample mean that are correct for all sample sizes. Bernstein's inequality provides one such tail bound. The resulting confidence intervals have guaranteed coverage probability under much weaker assumptions than are required for standard methods. A drawback of this approach, as we show, is that these confidence intervals are often quite wide. In response to this, we present a method for constructing much narrower confidence intervals, which are better suited for practical applications, and that are still more robust than confidence intervals based on standard methods, when dealing with small sample sizes. We show how to extend our approaches to much more general estimation problems than estimating the sample mean. We describe how these methods can be used to obtain more reliable confidence intervals in survey sampling. As a concrete example, we construct confidence intervals using our methods for the number of violent deaths between March 2003 and July 2006 in Iraq, based on data from the study "Mortality after the 2003 invasion of Iraq: A cross sectional cluster sample survey," by Burnham et al. (2006).
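
    A minimal sketch of the tail-bound idea follows: a two-sided confidence interval from Bernstein's inequality for data bounded in a known interval, using the worst-case variance so that coverage holds at every sample size. This is a conservative illustration, not the paper's exact construction; the Beta-distributed sample is invented for the example.

    ```python
    import numpy as np

    def bernstein_ci(x, alpha=0.05, lo=0.0, hi=1.0):
        """Two-sided CI for the mean of data bounded in [lo, hi], derived
        from Bernstein's tail inequality.  Conservative: it bounds the
        variance by the worst case (1/4 after rescaling to [0, 1]) rather
        than estimating it, so coverage holds for all sample sizes."""
        x = (np.asarray(x, float) - lo) / (hi - lo)   # rescale to [0, 1]
        n = x.size
        m = 1.0                                       # |X - mu| <= 1 on [0, 1]
        var = 0.25                                    # worst-case variance
        L = np.log(2.0 / alpha)
        # Solve 2 exp(-n t^2 / (2 var + 2 m t / 3)) = alpha for t.
        t = m * L / (3 * n) + np.sqrt((m * L / (3 * n)) ** 2 + 2 * var * L / n)
        mean = x.mean()
        return (lo + (hi - lo) * max(mean - t, 0.0),
                lo + (hi - lo) * min(mean + t, 1.0))

    rng = np.random.default_rng(1)
    sample = rng.beta(2, 5, size=30)                  # small, skewed sample
    print(bernstein_ci(sample))                       # wide but always-valid CI
    ```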

  1. Supersampling multiframe blind deconvolution resolution enhancement of adaptive-optics-compensated imagery of LEO satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2000-10-01

    A post-processing methodology for reconstructing undersampled image sequences with randomly varying blur is described that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-compensated imagery taken by the Starfire Optical Range 3.5-meter telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest-quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that includes a representation of spatial sampling by the focal plane array elements in the forward stochastic model of the imaging system. This generalization enables the random shifts and shape of the adaptively compensated PSF to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce the resolution loss which occurs when imaging in wide-FOV modes.

  2. Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites

    NASA Astrophysics Data System (ADS)

    Gerwe, David R.; Lee, David J.; Barchers, Jeffrey D.

    2002-09-01

    We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide-field-of-view (FOV) modes.

  3. Adjustable Nyquist-rate System for Single-Bit Sigma-Delta ADC with Alternative FIR Architecture

    NASA Astrophysics Data System (ADS)

    Frick, Vincent; Dadouche, Foudil; Berviller, Hervé

    2016-09-01

    This paper presents a new smart and compact system dedicated to controlling the output sampling frequency of an analogue-to-digital converter (ADC) based on a single-bit sigma-delta (ΣΔ) modulator. The system dramatically improves the spectral analysis capabilities of power network analysers (power meters) by adjusting the ADC's sampling frequency to the input signal's fundamental frequency with an accuracy of a few parts per million. The trade-off between straightforwardness and performance that motivated the choice of the ADC's architecture is discussed first, along with design considerations for an ultra-steep direct-form FIR filter that is optimised in terms of size and operating speed. Thanks to a compact standard VHDL description, the architecture of the proposed system is particularly suitable for application-specific integrated circuit (ASIC) implementations aimed at low-power and low-cost power meter applications. Field-programmable gate array (FPGA) prototyping and experimental results validate the adjustable sampling frequency concept. They also show that the system performs better, in terms of implementation and power, than dedicated IP resources.

  4. Wireless AE Event and Environmental Monitoring for Wind Turbine Blades at Low Sampling Rates

    NASA Astrophysics Data System (ADS)

    Bouzid, Omar M.; Tian, Gui Y.; Cumanan, K.; Neasham, J.

    Integration of acoustic wireless technology in structural health monitoring (SHM) applications introduces new challenges due to requirements for high sampling rates, additional communication bandwidth, memory space, and power resources. In order to circumvent these challenges, this chapter proposes a novel solution: a wireless SHM technique built around acoustic emission (AE), with field deployment on the structure of a wind turbine. This solution requires a sampling rate lower than the Nyquist rate. In addition, features extracted from the aliased AE signals, rather than reconstructions of the original signals on board the wireless nodes, are exploited to monitor AE events, such as wind, rain, strong hail, and bird strikes, under different environmental conditions and in conjunction with artificial AE sources. A time-domain feature extraction algorithm, together with the principal component analysis (PCA) method, is used to extract and classify the relevant information, which in turn is used to recognise a testing condition represented by the response signals. The proposed technique yields a significant data reduction during the monitoring of wind turbine blades.
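
    A toy numpy sketch of this idea: simple time-domain features are computed directly on a deliberately undersampled (aliased) burst and projected with PCA. The burst model, feature set, and all parameters are illustrative assumptions, not the chapter's algorithm.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    fs = 1_000.0                      # deliberately sub-Nyquist rate, Hz
    t = np.arange(0, 0.5, 1 / fs)

    def ae_burst(amp, decay, f_hz):
        """Toy AE event: a decaying tone whose frequency exceeds fs/2."""
        x = amp * np.exp(-t / decay) * np.sin(2 * np.pi * f_hz * t)
        return x + rng.normal(0, 0.05, t.size)

    def features(x):
        """Simple time-domain features of the aliased waveform."""
        rms = np.sqrt(np.mean(x ** 2))
        crest = np.max(np.abs(x)) / rms
        kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2
        return np.array([rms, crest, kurt])

    # Two event types; the 7.8 kHz tone aliases at fs = 1 kHz, yet the
    # time-domain features still discriminate the two conditions.
    X = np.array([features(ae_burst(1.0, 0.10, 7800)) for _ in range(20)] +
                 [features(ae_burst(3.0, 0.02, 7800)) for _ in range(20)])

    # PCA via SVD of the centered feature matrix.
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:2].T            # first two principal components
    print(scores[:3], scores[-3:])    # the two conditions separate in PC space
    ```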

  5. Sm-Nd, Rb-Sr, and Mn-Cr Ages of Yamato 74013

    NASA Technical Reports Server (NTRS)

    Nyquist, L. E.; Shih, C.- Y.; Reese, Y.D.

    2009-01-01

    Yamato 74013 is one of 29 paired diogenites having granoblastic textures. The Ar-39 - Ar-40 age of Y-74097 is approximately 1100 Ma. Rb-Sr and Sm-Nd analyses of Y-74013, -74037, -74097, and -74136 suggested that multiple young metamorphic events disturbed their isotopic systems. Masuda et al. reported that REE abundances were heterogeneous even within the same sample (Y-74010) for sample sizes less than approximately 2 g. Both they and Nyquist et al. reported data for some samples showing significant LREE enrichment. In addition to its granoblastic texture, Y-74013 is characterized by large, isolated clots of chromite up to 5 mm in diameter. Takeda et al. suggested that these diogenites originally represented a single or very small number of coarse orthopyroxene crystals that were recrystallized by shock processes. They further suggested that initial crystallization may have occurred very early within the deep crust of the HED parent body. Here we report the chronology of Y-74013 as recorded in chronometers based on long-lived Rb-87 and Sm-147, intermediate-lived Sm-146, and short-lived Mn-53.

  6. Driven Langevin systems: fluctuation theorems and faithful dynamics

    NASA Astrophysics Data System (ADS)

    Sivak, David; Chodera, John; Crooks, Gavin

    2014-03-01

    Stochastic differential equations of motion (e.g., Langevin dynamics) provide a popular framework for simulating molecular systems. Any computational algorithm must discretize these equations, yet the resulting finite time step integration schemes suffer from several practical shortcomings. We show how any finite time step Langevin integrator can be thought of as a driven, nonequilibrium physical process. Amended by an appropriate work-like quantity (the shadow work), nonequilibrium fluctuation theorems can characterize or correct for the errors introduced by the use of finite time steps. We also quantify, for the first time, the magnitude of deviations between the sampled stationary distribution and the desired equilibrium distribution for equilibrium Langevin simulations of solvated systems of varying size. We further show that the incorporation of a novel time step rescaling in the deterministic updates of position and velocity can correct a number of dynamical defects in these integrators. Finally, we identify a particular splitting that has essentially universally appropriate properties for the simulation of Langevin dynamics for molecular systems in equilibrium, nonequilibrium, and path sampling contexts.
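
    For orientation, here is a minimal sketch of one common finite-time-step Langevin splitting (BAOAB-style kick/drift/Ornstein-Uhlenbeck updates), shown on a harmonic well; the specific splitting and the time-step rescaling analyzed in the abstract may differ, and all parameters are illustrative.

    ```python
    import numpy as np

    def baoab_step(x, v, force, dt, rng, mass=1.0, gamma=1.0, kT=1.0):
        """One BAOAB step of Langevin dynamics: half kick (B), half drift (A),
        exact Ornstein-Uhlenbeck update (O), half drift (A), half kick (B)."""
        v += 0.5 * dt * force(x) / mass                     # B: half kick
        x += 0.5 * dt * v                                   # A: half drift
        c = np.exp(-gamma * dt)                             # O: exact OU update
        v = c * v + np.sqrt((1 - c ** 2) * kT / mass) * rng.normal()
        x += 0.5 * dt * v                                   # A: half drift
        v += 0.5 * dt * force(x) / mass                     # B: half kick
        return x, v

    # Harmonic well: sampled positions should approach the Gibbs distribution.
    force = lambda x: -x
    rng = np.random.default_rng(3)
    x, v, xs = 0.0, 0.0, []
    for _ in range(100_000):
        x, v = baoab_step(x, v, force, dt=0.1, rng=rng)
        xs.append(x)
    print(np.var(xs))   # close to kT/k = 1 for small dt
    ```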

  7. On the symmetry foundation of double soft theorems

    NASA Astrophysics Data System (ADS)

    Li, Zhi-Zhong; Lin, Hung-Hwa; Zhang, Shun-Qing

    2017-12-01

    Double-soft theorems, like their single-soft counterparts, arise from the underlying symmetry principles that constrain the interactions of massless particles. While single-soft theorems can be derived in a non-perturbative fashion by employing current algebras, recent attempts to extend such an approach to known double-soft theorems have been met with difficulties. In this work, we trace the difficulty to two inequivalent expansion schemes, depending on whether the soft limit is taken asymmetrically or symmetrically, which we denote as type A and type B, respectively. The soft behaviour for the type A scheme can simply be derived from single-soft theorems and is thus non-perturbatively protected. For type B, the information of the four-point vertex is required to determine the corresponding soft theorems, which are thus in general not protected. This argument can be readily extended to general multi-soft theorems. We also ask whether unitarity can emerge from locality together with the two kinds of soft theorems, which has not been fully investigated before.

  8. Nonequilibrium study of the intrinsic free-energy profile across a liquid-vapour interface

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Braga, Carlos, E-mail: ccorreia@imperial.ac.uk; Muscatello, Jordan, E-mail: jordan.muscatello@imperial.ac.uk; Lau, Gabriel, E-mail: gabriel.lau07@imperial.ac.uk

    2016-01-28

    We calculate an atomistically detailed free-energy profile across a heterogeneous system using a nonequilibrium approach. The path-integral formulation of Crooks fluctuation theorem is used in conjunction with the intrinsic sampling method to calculate the free-energy profile for the liquid-vapour interface of the Lennard-Jones fluid. Free-energy barriers are found corresponding to the atomic layering in the liquid phase as well as a barrier associated with the presence of an adsorbed layer as revealed by the intrinsic density profile. Our findings are in agreement with profiles calculated using Widom's potential distribution theorem applied to both the average and the intrinsic profiles, as well as with the literature values for the excess chemical potential.

  9. Chemical Equilibrium and Polynomial Equations: Beware of Roots.

    ERIC Educational Resources Information Center

    Smith, William R.; Missen, Ronald W.

    1989-01-01

    Describes two easily applied mathematical theorems, Budan's rule and Rolle's theorem, that in addition to Descartes's rule of signs and intermediate-value theorem, are useful in chemical equilibrium. Provides examples that illustrate the use of all four theorems. Discusses limitations of the polynomial equation representation of chemical…

  10. Approaching Cauchy's Theorem

    ERIC Educational Resources Information Center

    Garcia, Stephan Ramon; Ross, William T.

    2017-01-01

    We hope to initiate a discussion about various methods for introducing Cauchy's Theorem. Although Cauchy's Theorem is the fundamental theorem upon which complex analysis is based, there is no "standard approach." The appropriate choice depends upon the prerequisites for the course and the level of rigor intended. Common methods include…

  11. Using Bayes' theorem for free energy calculations

    NASA Astrophysics Data System (ADS)

    Rogers, David M.

    Statistical mechanics is fundamentally based on calculating the probabilities of molecular-scale events. Although Bayes' theorem has generally been recognized as providing key guiding principles for the setup and analysis of statistical experiments [83], classical frequentist models still predominate in the world of computational experimentation. As a starting point for widespread application of Bayesian methods in statistical mechanics, we investigate the central quantity of free energies from this perspective. This dissertation thus reviews the basics of Bayes' view of probability theory and the maximum entropy formulation of statistical mechanics before providing examples of its application to several advanced research areas. We first apply Bayes' theorem to a multinomial counting problem in order to determine inner-shell and hard-sphere solvation free energy components of Quasi-Chemical Theory [140]. We proceed to consider the general problem of free energy calculations from samples of interaction energy distributions. From there, we turn to spline-based estimation of the potential of mean force [142], and empirical modeling of observed dynamics using integrator matching. The results of this research are expected to advance the state of the art in coarse-graining methods, as they allow a systematic connection from high-resolution (atomic) to low-resolution (coarse) structure and dynamics. In total, our work on these problems constitutes a critical starting point for further application of Bayes' theorem in all areas of statistical mechanics. It is hoped that the understanding so gained will allow for improvements in comparisons between theory and experiment.

  12. Phase-locked-loop interferometry applied to aspheric testing with a computer-stored compensator.

    PubMed

    Servin, M; Malacara, D; Rodriguez-Vera, R

    1994-05-01

    A recently developed technique for continuous-phase determination of interferograms with a digital phase-locked loop (PLL) is applied to the null testing of aspheres. Although this PLL demodulating scheme is also a synchronous or direct interferometric technique, a separate unwrapping process is not explicitly required: the unwrapping and the phase-detection processes are achieved simultaneously within the PLL. The proposed method uses a computer-generated holographic compensator. The holographic compensator does not need to be printed out by any means; it is calculated and used from within the computer. This computer-stored compensator is used as the reference signal to phase demodulate a sample interferogram obtained from the asphere being tested. Consequently, the demodulated phase contains information about the wave-front departures from the ideal computer-stored aspheric interferogram. Wave-front differences of ~1 λ are handled easily by the proposed PLL scheme. The maximum recorded frequency in the template's interferogram, as well as in the sampled interferogram, is assumed to be below the Nyquist frequency.

  13. Investigation of electrical studies of spinel FeCo2O4 synthesized by sol-gel method

    NASA Astrophysics Data System (ADS)

    Lobo, Laurel Simon; Kalainathan, S.; Kumar, A. Ruban

    2015-12-01

    In this work, spinel FeCo2O4 was synthesized by the sol-gel method using succinic acid as a chelating agent at 900 °C. Structural, spectroscopic and morphological characterization was carried out using X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FT-IR) and scanning electron microscopy equipped with an energy-dispersive X-ray spectrometer (SEM-EDX). The M-H loop at room temperature confirms the ferromagnetic property of the sample. The frequency and temperature dependence of the dielectric constant (εʹ) and dielectric loss (tan δ) shows the presence of Maxwell-Wagner relaxation in the sample due to the presence of oxygen vacancies. Nyquist plots over the frequency and temperature domains signify the presence of grain effects, grain-boundary effects and the electrode interface in the conduction process. The electric modulus, which suppresses electrode polarization, shows the grain and grain-boundary effects. Electrode polarization is observed in the lower frequency range of the conductivity graph.

  14. On the Spectrum of the Plenoptic Function.

    PubMed

    Gilliam, Christopher; Dragotti, Pier-Luigi; Brookes, Mike

    2014-02-01

    The plenoptic function is a powerful tool to analyze the properties of multi-view image data sets. In particular, the understanding of the spectral properties of the plenoptic function is essential in many computer vision applications, including image-based rendering. In this paper, we derive for the first time an exact closed-form expression of the plenoptic spectrum of a slanted plane with finite width and use this expression as the elementary building block to derive the plenoptic spectrum of more sophisticated scenes. This is achieved by approximating the geometry of the scene with a set of slanted planes and evaluating the closed-form expression for each plane in the set. We then use this closed-form expression to revisit uniform plenoptic sampling. In this context, we derive a new Nyquist rate for the plenoptic sampling of a slanted plane and a new reconstruction filter. Through numerical simulations, on both real and synthetic scenes, we show that the new filter outperforms alternative existing filters.

  15. High-Performance 3D Compressive Sensing MRI Reconstruction Using Many-Core Architectures

    PubMed Central

    Kim, Daehyun; Trzasko, Joshua; Smelyanskiy, Mikhail; Haider, Clifton; Dubey, Pradeep; Manduca, Armando

    2011-01-01

    Compressive sensing (CS) describes how sparse signals can be accurately reconstructed from many fewer samples than required by the Nyquist criterion. Since MRI scan duration is proportional to the number of acquired samples, CS has been gaining significant attention in MRI. However, the computationally intensive nature of CS reconstructions has precluded their use in routine clinical practice. In this work, we investigate how different throughput-oriented architectures can benefit one CS algorithm and what levels of acceleration are feasible on different modern platforms. We demonstrate that a CUDA-based code running on an NVIDIA Tesla C2050 GPU can reconstruct a 256 × 160 × 80 volume from an 8-channel acquisition in 19 seconds, which is in itself a significant improvement over the state of the art. We then show that Intel's Knights Ferry can perform the same 3D MRI reconstruction in only 12 seconds, bringing CS methods even closer to clinical viability. PMID:21922017
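
    For context, here is a minimal single-coil, 1-D sketch of the kind of iterative CS reconstruction being accelerated (ISTA: a gradient step on the data term followed by soft thresholding, on undersampled Fourier data). The actual 3D multi-channel algorithm and its GPU mapping are far more involved, and all parameters below are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 256
    x_true = np.zeros(n)                           # sparse ground truth
    x_true[rng.choice(n, 10, replace=False)] = rng.normal(0, 1, 10)

    mask = np.zeros(n, bool)                       # random 25% of "k-space"
    mask[rng.choice(n, n // 4, replace=False)] = True
    y = np.fft.fft(x_true)[mask]

    def soft(z, lam):                              # complex soft-thresholding
        mag = np.maximum(np.abs(z), 1e-12)
        return z * np.maximum(1 - lam / mag, 0)

    x = np.zeros(n, complex)
    for _ in range(500):                           # ISTA iterations
        r = np.zeros(n, complex)
        r[mask] = np.fft.fft(x)[mask] - y          # residual at sampled points
        x = soft(x - np.fft.ifft(r), 0.02)         # gradient step + shrinkage
    # Reconstruction error is small; a soft-threshold bias remains.
    print(np.round(np.abs(x - x_true).max(), 3))
    ```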

  16. Early Vector Calculus: A Path through Multivariable Calculus

    ERIC Educational Resources Information Center

    Robertson, Robert L.

    2013-01-01

    The divergence theorem, Stokes' theorem, and Green's theorem appear near the end of calculus texts. These are important results, but many instructors struggle to reach them. We describe a pathway through a standard calculus text that allows instructors to emphasize these theorems. (Contains 2 figures.)

  17. Pick's Theorem: What a Lemon!

    ERIC Educational Resources Information Center

    Russell, Alan R.

    2004-01-01

    Pick's theorem can be used in various ways, just like a lemon. The theorem generally finds its way into the syllabus at about the middle school level; at times students have even calculated the area of a state from its outline with the help of the theorem.
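
    A short sketch verifying Pick's theorem A = I + B/2 - 1 on a lattice polygon, using the shoelace formula for the area and gcd counting for boundary lattice points:

    ```python
    from math import gcd

    def pick_area(vertices):
        """Area of a simple lattice polygon two ways: the shoelace formula,
        and Pick's theorem A = I + B/2 - 1 with B counted via gcd."""
        edges = list(zip(vertices, vertices[1:] + vertices[:1]))
        # Shoelace formula for the area.
        s = sum(x1 * y2 - x2 * y1 for (x1, y1), (x2, y2) in edges)
        area = abs(s) / 2
        # Boundary lattice points: gcd(|dx|, |dy|) per edge.
        b = sum(gcd(abs(x2 - x1), abs(y2 - y1)) for (x1, y1), (x2, y2) in edges)
        i = area - b / 2 + 1          # interior points, from Pick's theorem
        return area, b, i

    print(pick_area([(0, 0), (4, 0), (4, 3), (0, 3)]))   # (12.0, 14, 6.0)
    ```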

  18. Generalized Optical Theorem Detection in Random and Complex Media

    NASA Astrophysics Data System (ADS)

    Tu, Jing

    The problem of detecting changes of a medium or environment based on active, transmit-plus-receive wave sensor data is at the heart of many important applications including radar, surveillance, remote sensing, nondestructive testing, and cancer detection. This is a challenging problem because both the change or target and the surrounding background medium are in general unknown and can be quite complex. This Ph.D. dissertation presents a new wave-physics-based approach for the detection of targets or changes in rather arbitrary backgrounds. The proposed methodology is rooted in a fundamental result of wave theory called the optical theorem, which gives real physical energy meaning to the statistics used for detection. This dissertation is composed of two main parts. The first part significantly expands the theory and understanding of the optical theorem for arbitrary probing fields and arbitrary media, including nonreciprocal media, active media, as well as time-varying and nonlinear scatterers. The proposed formalism addresses both scalar and full vector electromagnetic fields. The second contribution of this dissertation is the application of the optical theorem to change detection, with particular emphasis on random, complex, and active media, including single-frequency probing fields and broadband probing fields. The first part of this work focuses on the generalization of the existing theoretical repertoire and interpretation of the scalar and electromagnetic optical theorem. Several fundamental generalizations of the optical theorem are developed. A new theory is developed for the optical theorem for scalar fields in nonhomogeneous media, which can be bounded or unbounded. The bounded-media context is essential for applications such as intrusion detection and surveillance in enclosed environments such as indoor facilities, caves, and tunnels, as well as for nondestructive testing and communication systems based on wave-guiding structures. The developed scalar optical theorem theory applies to arbitrary lossless backgrounds and quite general probing fields, including near fields, which play a key role in super-resolution imaging. The derived formulation holds for arbitrary passive scatterers, which can be dissipative, as well as for the more general class of active scatterers, which are composed of a (passive) scatterer component and an active, radiating (antenna) component. Furthermore, the generalization of the optical theorem to active scatterers is relevant to many applications such as surveillance of active targets, including certain cloaks and invisible scatterers, and wireless communications. The latter developments have important military applications. The derived theoretical framework includes the familiar real-power optical theorem, describing power extinction due to both dissipation and scattering, as well as a reactive optical theorem related to the reactive power changes. Meanwhile, the developed approach naturally leads to three optical theorem indicators or statistics, which can be used to detect changes or targets in unknown complex media. In addition, the optical theorem theory is generalized in the time domain so that it applies to arbitrary full vector fields and arbitrary media, including anisotropic media, nonreciprocal media, active media, as well as time-varying and nonlinear scatterers. The second component of this Ph.D. research program focuses on the application of the optical theorem to change detection.
    Three different forms of indicators or statistics are developed for change detection in unknown background media: a real-power optical theorem detector, a reactive-power optical theorem detector, and a total-apparent-power optical theorem detector. No prior knowledge of the background or of the change or target is required. The performance of the three proposed optical theorem detectors is compared with the classical energy detector approach for change detection. The latter uses a mathematical or functional energy, while the optical theorem detectors are based on real physical energy. For reference, the optical theorem detectors are also compared with the matched filter approach, which (unlike the optical theorem detectors) assumes perfect target and medium information. The practical implementation of the optical theorem detectors is based, for certain random and complex media, on the exploitation of time-reversal focusing ideas developed over the past 20 years in electromagnetics and acoustics. In the final part of the dissertation, we discuss the implementation of the optical theorem sensors for one-dimensional propagation systems such as transmission lines. We also present a new generalized likelihood ratio test for detection that exploits a prior data constraint based on the optical theorem. Finally, we address the practical implementation of the optical theorem sensors for optical imaging systems by means of holography. The latter is the first holographic implementation of the optical theorem for arbitrary scenes and targets.

  19. Non-parametric methods for cost-effectiveness analysis: the central limit theorem and the bootstrap compared.

    PubMed

    Nixon, Richard M; Wonderling, David; Grieve, Richard D

    2010-03-01

    Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB), with 95% confidence intervals, and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that while in general using either method is appropriate, the CLT is easier to implement, and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
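
    A compact sketch of the two interval constructions being compared, on skewed synthetic cost data; the lognormal parameters and sample size are illustrative, not taken from the trial data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    costs = rng.lognormal(mean=7.0, sigma=1.5, size=40)   # skewed cost data

    # CLT interval: mean +/- 1.96 standard errors.
    se = costs.std(ddof=1) / np.sqrt(costs.size)
    clt = (costs.mean() - 1.96 * se, costs.mean() + 1.96 * se)

    # Non-parametric bootstrap percentile interval.
    boot = np.array([rng.choice(costs, costs.size, replace=True).mean()
                     for _ in range(5000)])
    bs = tuple(np.percentile(boot, [2.5, 97.5]))

    print("CLT      :", np.round(clt))
    print("bootstrap:", np.round(bs))
    ```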

  20. Performance Comparison of 112-Gb/s DMT, Nyquist PAM4, and Partial-Response PAM4 for Future 5G Ethernet-Based Fronthaul Architecture

    NASA Astrophysics Data System (ADS)

    Eiselt, Nicklas; Muench, Daniel; Dochhan, Annika; Griesser, Helmut; Eiselt, Michael; Olmos, Juan Jose Vegas; Monroy, Idelfonso Tafur; Elbers, Joerg-Peter

    2018-05-01

    For a future 5G Ethernet-based fronthaul architecture, 100G trunk lines with a transmission distance of up to 10 km of standard single-mode fiber (SSMF), in combination with cheap grey optics to daisy-chain cell site network interfaces, are a promising cost- and power-efficient solution. For such a scenario, different intensity-modulation and direct-detection (IMDD) formats at a data rate of 112 Gb/s, namely Nyquist four-level pulse amplitude modulation (PAM4), discrete multi-tone transmission (DMT) and partial-response (PR) PAM4, are experimentally investigated, using a low-cost electro-absorption modulated laser (EML), a 25G driver and current state-of-the-art high-speed 84 GS/s CMOS digital-to-analog converter (DAC) and analog-to-digital converter (ADC) test chips. Each modulation format is optimized independently for the desired scenario and its digital signal processing (DSP) requirements are investigated. The performance of Nyquist PAM4 and PR PAM4 depends very much on the efficiency of pre- and post-equalization. We show the necessity of at least 11 FFE taps for pre-emphasis and up to 41 FFE coefficients at the receiver side. In addition, PR PAM4 requires an MLSE with four states to decode the signal back to a PAM4 signal. On the contrary, bit- and power-loading (BL, PL) is crucial for DMT, and an FFT length of at least 512 is necessary. With optimized parameters, all modulation formats result in very similar performance, demonstrating a transmission distance of up to 10 km over SSMF with bit error rates (BERs) below the FEC threshold of 4.4E-3, allowing error-free transmission.

  1. Fiber-laser frequency combs for the generation of tunable single-frequency laser lines, mm- and THz-waves and sinc-shaped Nyquist pulses

    NASA Astrophysics Data System (ADS)

    Schneider, Thomas

    2015-03-01

    High-quality frequency comb sources like femtosecond lasers have revolutionized the metrology of fundamental physical constants. The generated comb consists of frequency lines with an equidistant separation over a bandwidth of several THz. This bandwidth can be broadened further to a super-continuum of more than an octave through propagation in nonlinear media. The frequency separation between the lines is defined by the repetition rate, and the width of each comb line can be below 1 Hz, even without external stabilization. By extracting just one of these lines, an ultra-narrow-linewidth, tunable laser line for applications in communications and spectroscopy can be generated. If two lines are extracted, their superposition in an appropriate photo-mixer produces high-quality millimeter- and THz-waves. The extraction of several lines can be used for the creation of almost ideally sinc-shaped Nyquist pulses, which enable optical communications at the maximum possible baud rate. Combs generated by low-cost, small-footprint fs-fiber lasers are especially promising. However, due to the resonator length, the comb frequencies have a typical separation of 80-100 MHz, far too narrow for the selection of single tones with standard optical filters. Here, the extraction of single lines of an fs-fiber laser by polarization-pulling-assisted stimulated Brillouin scattering is presented. The application of these extracted lines as ultra-narrow, stable and tunable laser lines, for the generation of very high-quality mm- and THz-waves with ultra-narrow linewidth and phase noise, and for the generation of sinc-shaped Nyquist pulses with arbitrary bandwidth and repetition rate is discussed.
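
    The sinc-pulse mechanism is easy to reproduce numerically: summing N phase-locked, equal-amplitude comb lines yields a periodic sinc (Dirichlet kernel) with the Nyquist zero-crossing property. The line count and 10 GHz grid below are illustrative assumptions, not the paper's experimental values.

    ```python
    import numpy as np

    # N phase-locked comb lines of equal amplitude on a grid df give a
    # periodic sinc pulse train: period 1/df, zero crossings every 1/(N*df).
    N, df = 9, 10e9                       # 9 lines on a 10 GHz grid
    t = np.linspace(-150e-12, 150e-12, 4001)
    k = np.arange(N) - (N - 1) // 2       # symmetric line indices about DC
    x = np.cos(2 * np.pi * np.outer(k * df, t)).sum(axis=0) / N

    # Nyquist property: x is 1 at t = 0 and 0 at nonzero multiples of 1/(N*df).
    Ts = 1 / (N * df)
    for m in range(4):
        print(f"x({m}*Ts) = {np.interp(m * Ts, t, x):+.3f}")
    ```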

  2. Experimental Test of the Differential Fluctuation Theorem and a Generalized Jarzynski Equality for Arbitrary Initial States

    NASA Astrophysics Data System (ADS)

    Hoang, Thai M.; Pan, Rui; Ahn, Jonghoon; Bang, Jaehoon; Quan, H. T.; Li, Tongcang

    2018-02-01

    Nonequilibrium processes of small systems such as molecular machines are ubiquitous in biology, chemistry, and physics but are often challenging to comprehend. In the past two decades, several exact thermodynamic relations of nonequilibrium processes, collectively known as fluctuation theorems, have been discovered and provided critical insights. These fluctuation theorems are generalizations of the second law and can be unified by a differential fluctuation theorem. Here we perform the first experimental test of the differential fluctuation theorem using an optically levitated nanosphere in both underdamped and overdamped regimes and in both spatial and velocity spaces. We also test several theorems that can be obtained from it directly, including a generalized Jarzynski equality that is valid for arbitrary initial states, and the Hummer-Szabo relation. Our study experimentally verifies these fundamental theorems and initiates the experimental study of stochastic energetics with the instantaneous velocity measurement.

  3. Generalized virial theorem for massless electrons in graphene and other Dirac materials

    NASA Astrophysics Data System (ADS)

    Sokolik, A. A.; Zabolotskiy, A. D.; Lozovik, Yu. E.

    2016-05-01

    The virial theorem for a system of interacting electrons in a crystal, which is described within the framework of the tight-binding model, is derived. We show that, in the particular case of interacting massless electrons in graphene and other Dirac materials, the conventional virial theorem is violated. Starting from the tight-binding model, we derive the generalized virial theorem for Dirac electron systems, which contains an additional term associated with a momentum cutoff at the bottom of the energy band. Additionally, we derive the generalized virial theorem within the Dirac model using the minimization of the variational energy. The obtained theorem is illustrated by many-body calculations of the ground-state energy of an electron gas in graphene carried out in Hartree-Fock and self-consistent random-phase approximations. Experimental verification of the theorem in the case of graphene is discussed.

  4. The geometric Mean Value Theorem

    NASA Astrophysics Data System (ADS)

    de Camargo, André Pierro

    2018-05-01

    In a previous article published in the American Mathematical Monthly, Tucker (Amer Math Monthly. 1997; 104(3): 231-240) made severe criticism of the Mean Value Theorem and, unfortunately, the majority of calculus textbooks also do not help to improve its reputation. The standard argument for proving it is to apply Rolle's theorem to an auxiliary function like h(x) = f(x) - f(a) - [(f(b) - f(a))/(b - a)](x - a). Although short and effective, such reasoning is not intuitive. Perhaps for this reason, Tucker classified the Mean Value Theorem as a technical existence theorem used to prove intuitively obvious statements. Moreover, he argued that there is nothing obvious about the Mean Value Theorem without the continuity of the derivative. In the face of such unfair criticism, we felt the need to come to the defense of this beautiful theorem in order to clear up these misunderstandings.
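
    For reference, the Rolle reduction the abstract alludes to, written out (this is standard textbook material, not taken from the article itself):

    ```latex
    % Auxiliary function for the usual proof of the Mean Value Theorem:
    % h measures the gap between f and its chord over [a, b].
    \[
      h(x) = f(x) - f(a) - \frac{f(b)-f(a)}{b-a}\,(x-a),
      \qquad h(a) = h(b) = 0 .
    \]
    % Rolle's theorem then yields a point c in (a, b) with h'(c) = 0, i.e.
    \[
      f'(c) = \frac{f(b)-f(a)}{b-a}.
    \]
    ```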

  5. A note on generalized Weyl's theorem

    NASA Astrophysics Data System (ADS)

    Zguitti, H.

    2006-04-01

    We prove that if either T or T* has the single-valued extension property, then the spectral mapping theorem holds for the B-Weyl spectrum. If, moreover, T is isoloid and generalized Weyl's theorem holds for T, then generalized Weyl's theorem holds for f(T) for every f in H(σ(T)), the set of functions analytic on a neighborhood of the spectrum of T. An application is given for algebraically paranormal operators.

  6. On the addition theorem of spherical functions

    NASA Astrophysics Data System (ADS)

    Shkodrov, V. G.

    The addition theorem of spherical functions is expressed in two reference systems, viz., an inertial system and a system rigidly fixed to a planet. A generalized addition theorem of spherical functions and a particular addition theorem for the rigidly fixed system are derived. The results are applied to the theory of a planetary potential.

  7. Integrated Chassis Control of Active Front Steering and Yaw Stability Control Based on Improved Inverse Nyquist Array Method

    PubMed Central

    2014-01-01

    An integrated chassis control (ICC) system with active front steering (AFS) and yaw stability control (YSC) is introduced in this paper. The proposed ICC algorithm uses the improved Inverse Nyquist Array (INA) method based on a 2-degree-of-freedom (DOF) planar vehicle reference model to decouple the plant dynamics under different frequency bands, and changes in velocity and cornering stiffness were considered in calculating the analytical solution in the precompensator design, so that the INA-based algorithm runs well and fast on the nonlinear vehicle system. The stability of the system is guaranteed by a dynamic compensator together with a proposed PI feedback controller. After a response analysis of the system in the frequency and time domains, simulations under a step-steering maneuver were carried out using a 2-DOF vehicle model and a 14-DOF vehicle model in Matlab/Simulink. The results show that the system is decoupled and that the vehicle handling and stability performance are significantly improved by the proposed method. PMID:24782676

  8. Diagonal dominance for the multivariable Nyquist array using function minimization

    NASA Technical Reports Server (NTRS)

    Leininger, G. G.

    1977-01-01

    A new technique for the design of multivariable control systems using the multivariable Nyquist array method was developed. A conjugate direction function minimization algorithm is utilized to achieve a diagonally dominant condition over the extended frequency range of the control system. The minimization is performed on the ratio of the moduli of the off-diagonal terms to the moduli of the diagonal terms of either the inverse or the direct open-loop transfer function matrix. Several new feedback design concepts were also developed, including: (1) dominance control parameters for each control loop; (2) compensator normalization to evaluate open-loop conditions for alternative design configurations; and (3) an interaction index to determine the degree and type of system interaction when all feedback loops are closed simultaneously. This new design capability was implemented on an IBM 360/75 in batch mode but can be easily adapted to an interactive computer facility. The method was applied to the Pratt and Whitney F100 turbofan engine.
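
    A small sketch of the dominance measure being minimized: per-row ratios of off-diagonal to diagonal moduli of the open-loop frequency response. The 2x2 plant below is invented for illustration; the study used the F100 engine model.

    ```python
    import numpy as np

    def dominance_ratios(G, s):
        """Row dominance ratios of a transfer-function matrix G(s): sum of
        off-diagonal moduli over the diagonal modulus, per row.  Values
        below 1 at all frequencies of interest indicate diagonal dominance
        in the Nyquist-array sense."""
        Gs = G(s)
        off = np.abs(Gs).sum(axis=1) - np.abs(np.diag(Gs))
        return off / np.abs(np.diag(Gs))

    # Illustrative 2x2 plant (not the F100 model).
    G = lambda s: np.array([[1 / (s + 1), 0.3 / (s + 2)],
                            [0.2 / (s + 1), 2 / (s + 3)]])

    for w in [0.1, 1.0, 10.0]:
        print(w, dominance_ratios(G, 1j * w).round(3))
    ```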

  9. Integrated chassis control of active front steering and yaw stability control based on improved inverse nyquist array method.

    PubMed

    Zhu, Bing; Chen, Yizhou; Zhao, Jian

    2014-01-01

    An integrated chassis control (ICC) system with active front steering (AFS) and yaw stability control (YSC) is introduced in this paper. The proposed ICC algorithm uses the improved Inverse Nyquist Array (INA) method based on a 2-degree-of-freedom (DOF) planar vehicle reference model to decouple the plant dynamics under different frequency bands, and changes in velocity and cornering stiffness were considered in calculating the analytical solution in the precompensator design, so that the INA-based algorithm runs well and fast on the nonlinear vehicle system. The stability of the system is guaranteed by a dynamic compensator together with a proposed PI feedback controller. After a response analysis of the system in the frequency and time domains, simulations under a step-steering maneuver were carried out using a 2-DOF vehicle model and a 14-DOF vehicle model in Matlab/Simulink. The results show that the system is decoupled and that the vehicle handling and stability performance are significantly improved by the proposed method.

  10. Coherent ultra dense wavelength division multiplexing passive optical networks

    NASA Astrophysics Data System (ADS)

    Shahpari, Ali; Ferreira, Ricardo; Ribeiro, Vitor; Sousa, Artur; Ziaie, Somayeh; Tavares, Ana; Vujicic, Zoran; Guiomar, Fernando P.; Reis, Jacklyn D.; Pinto, Armando N.; Teixeira, António

    2015-12-01

    In this paper, we first review progress in ultra-dense wavelength division multiplexing passive optical networks (UDWDM-PON), making use of the key attributes of this technology in the context of optical access and metro networks. Besides the inherent properties of coherent technology, we explore different modulation formats and pulse shaping. The performance is experimentally demonstrated through a 12 × 10 Gb/s bidirectional UDWDM-PON over a hybrid link of 80 km of standard single-mode fiber (SSMF) and an optical wireless link. High density on a 6.25 GHz grid, Nyquist-shaped 16-ary quadrature amplitude modulation (16QAM) and digital frequency shifting are some of the properties exploited together in the tests. Also, bidirectional transmission in fiber, relevant in this context, is analyzed in terms of nonlinear and back-reflection effects on receiver sensitivity. In addition, as a basis for the discussion on market readiness, we experimentally demonstrate real-time detection of a Nyquist-shaped quaternary phase-shift keying (QPSK) signal using simple 8-bit digital signal processing (DSP) on a field-programmable gate array (FPGA).

  11. A 6-bit 4 GS/s pseudo-thermometer segmented CMOS DAC

    NASA Astrophysics Data System (ADS)

    Yijun, Song; Wenyuan, Li

    2014-06-01

    A 6-bit 4 GS/s, high-speed and power-efficient DAC for ultra-high-speed transceivers in 60 GHz band millimeter wave technology is presented. A novel pseudo-thermometer architecture is proposed to realize a good compromise between fast conversion speed and chip area. Symmetrical and compact floor planning and layout techniques, including tree-like routing, cross-quading and the common-centroid method, are adopted to guarantee that the chip is fully functional up to near-Nyquist frequencies in a standard 0.18 μm CMOS process. Post-simulation results corroborate the feasibility of the designed DAC, which can deliver good static and dynamic linearity without calibration. DNL and INL errors can be controlled within ±0.28 LSB and ±0.26 LSB, respectively. The SFDR at a 4 GHz clock frequency for a 1.9 GHz near-Nyquist sinusoidal output signal is 40.83 dB, and the power dissipation is less than 37 mW.

  12. Characterization of a maximum-likelihood nonparametric density estimator of kernel type

    NASA Technical Reports Server (NTRS)

    Geman, S.; Mcclure, D. E.

    1982-01-01

    Kernel-type density estimators calculated by the method of sieves are characterized. Proofs are presented for the characterization theorem: let x(1), x(2), ..., x(n) be a random sample from a population with density f(0), let sigma > 0, and consider estimators f of f(0) defined by (1).

  13. Discovering the Theorem of Pythagoras

    NASA Technical Reports Server (NTRS)

    Lattanzio, Robert (Editor)

    1988-01-01

    In this 'Project Mathematics!' series, sponsored by the California Institute of Technology, Pythagoras' theorem a^2 + b^2 = c^2 is discussed and the history behind this theorem is explained. Through live film footage and computer animation, applications in real life are presented and the significance of and uses for this theorem are put into practice.

  14. Bertrand's theorem and virial theorem in fractional classical mechanics

    NASA Astrophysics Data System (ADS)

    Yu, Rui-Yan; Wang, Towe

    2017-09-01

    Fractional classical mechanics is the classical counterpart of fractional quantum mechanics. The central force problem in this theory is investigated. Bertrand's theorem is generalized, and the virial theorem is revisited, both in three spatial dimensions. In order to produce stable, closed, non-circular orbits, the inverse-square law and Hooke's law should be modified in fractional classical mechanics.

  15. Guided Discovery of the Nine-Point Circle Theorem and Its Proof

    ERIC Educational Resources Information Center

    Buchbinder, Orly

    2018-01-01

    The nine-point circle theorem is one of the most beautiful and surprising theorems in Euclidean geometry. It establishes an existence of a circle passing through nine points, all of which are related to a single triangle. This paper describes a set of instructional activities that can help students discover the nine-point circle theorem through…

  16. Renyi entropy measures of heart rate Gaussianity.

    PubMed

    Lake, Douglas E

    2006-01-01

    Sample entropy and approximate entropy are measures that have been successfully utilized to study the deterministic dynamics of heart rate (HR). A complementary stochastic point of view and a heuristic argument using the Central Limit Theorem suggest that the Gaussianity of HR is a complementary measure of the physiological complexity of the underlying signal transduction processes. Renyi entropy (or q-entropy) is a widely used measure of Gaussianity in many applications. Particularly important members of this family are the differential (or Shannon) entropy (q = 1) and the quadratic entropy (q = 2). We introduce the concepts of differential and conditional Renyi entropy rate and, in conjunction with Burg's theorem, develop a measure of the Gaussianity of a linear random process. Robust algorithms for estimating these quantities are presented, along with estimates of their standard errors.
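
    One way to see Renyi entropy as a Gaussianity measure is to compare a kernel estimate of the quadratic (q = 2) entropy against its closed-form value for a variance-matched Gaussian; the gap is near zero only for Gaussian data. A numpy sketch of this idea (not the paper's HR algorithm; bandwidth and test distributions are illustrative):

    ```python
    import numpy as np

    def renyi2_kde(x, h=None):
        """Quadratic (q = 2) Renyi entropy, -log of the information potential
        int p(x)^2 dx, estimated as the mean of N(xi - xj; 0, 2 h^2)."""
        x = np.asarray(x, float)
        n = x.size
        h = h or 1.06 * x.std() * n ** -0.2           # Silverman bandwidth
        d2 = (x[:, None] - x[None, :]) ** 2
        ip = np.exp(-d2 / (4 * h ** 2)).sum() / (n ** 2 * 2 * np.sqrt(np.pi) * h)
        return -np.log(ip)

    def renyi2_gauss(x):
        """Quadratic Renyi entropy of a Gaussian matched to the sample
        variance: 0.5 * log(4 * pi * sigma^2)."""
        return 0.5 * np.log(4 * np.pi * np.var(x))

    rng = np.random.default_rng(6)
    for name, x in [("gaussian", rng.normal(0, 1, 2000)),
                    ("laplace ", rng.laplace(0, 1 / np.sqrt(2), 2000))]:
        gap = renyi2_gauss(x) - renyi2_kde(x)
        print(name, round(gap, 3))   # gap near 0 only for the Gaussian sample
    ```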

  17. General solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging.

    PubMed

    Nakata, Toshihiko; Ninomiya, Takanori

    2006-10-10

    A general solution of undersampling frequency conversion and its optimization for parallel photodisplacement imaging is presented. Phase-modulated heterodyne interference light generated by a linear region of periodic displacement is captured by a charge-coupled device image sensor, in which the interference light is sampled at a rate lower than the Nyquist frequency. The frequencies of the components of the light, such as the sideband and carrier (which include photodisplacement and topography information, respectively), are downconverted and sampled simultaneously based on the integration and sampling effects of the sensor. A general solution for frequency and amplitude in this downconversion is derived by Fourier analysis of the sampling procedure. The optimal frequency condition for the heterodyne beat signal, modulation signal, and sensor gate pulse is derived such that undesirable components are eliminated and each information component is converted into an orthogonal function, allowing each to be discretely reproduced from the Fourier coefficients. The optimal frequency parameters that maximize the sideband-to-carrier amplitude ratio are determined, theoretically demonstrating a high selectivity of over 80 dB. Preliminary experiments demonstrate that this technique is capable of simultaneous imaging of reflectivity, topography, and photodisplacement for the detection of subsurface lattice defects at a speed corresponding to an acquisition time of only 0.26 s per 256 x 256 pixel area.
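
    The bookkeeping behind such undersampling downconversion is the usual aliasing rule: a component at frequency f sampled at rate fs appears at the distance of f from the nearest multiple of fs. A small sketch (the frequencies are illustrative, not the paper's operating point):

    ```python
    import numpy as np

    def alias_frequency(f, fs):
        """Apparent frequency of a tone at f after sampling at rate fs."""
        f_mod = f % fs
        return min(f_mod, fs - f_mod)

    print(alias_frequency(10.2e6, 1.0e6))          # 200000.0 Hz

    # Cross-check with an FFT of an actually sampled tone.
    fs, f = 1.0e6, 10.2e6
    t = np.arange(4096) / fs
    x = np.cos(2 * np.pi * f * t)
    spec = np.abs(np.fft.rfft(x))
    print(np.fft.rfftfreq(t.size, 1 / fs)[spec.argmax()])   # ~2.0e5 Hz
    ```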

  18. Interpreting the power spectrum of Dansgaard-Oeschger events via stochastic dynamical systems

    NASA Astrophysics Data System (ADS)

    Mitsui, Takahito; Lenoir, Guillaume; Crucifix, Michel

    2017-04-01

    Dansgaard-Oeschger (DO) events are abrupt climate shifts, which are particularly pronounced in the North Atlantic region during glacial periods [Dansgaard et al. 1993]. The signals are most clearly found in δ18O or log[Ca2+] records of Greenland ice cores. The power spectrum S(f) of DO events has attracted attention over two decades, with debates on the apparent 1.5-kyr periodicity [Grootes & Stuiver 1997; Schultz et al. 2002; Ditlevsen et al. 2007] and on a scaling property holding over several time scales [Schmitt, Lovejoy, & Schertzer 1995; Rypdal & Rypdal 2016]. The scaling property is written most simply as S(f) ~ f^(-β), β ≈ 1.4. However, the physics as well as the underlying dynamics of the periodicity and the scaling property are still not clear. Pioneering works on modelling the spectrum of DO events were done by Cessi (1994) and Ditlevsen (1999), but their model-data comparisons of the spectra are rather qualitative. Here, we show that simple stochastic dynamical systems can generate power spectra statistically consistent with the observed spectra over a wide range of frequencies, from orbital frequencies to the Nyquist frequency (= 1/40 yr^-1). We characterize the scaling property of the spectrum by defining a local scaling exponent β_loc. For the NGRIP log[Ca2+] record, β_loc increases from ~1 to ~2 as the frequency increases from ~1/5000 yr^-1 to ~1/500 yr^-1, and decreases toward zero as the frequency increases from ~1/500 yr^-1 to the Nyquist frequency. For the δ18O record, β_loc increases from ~1 to ~1.5 as the frequency increases from ~1/5000 yr^-1 to ~1/1000 yr^-1, and decreases toward zero as the frequency increases from ~1/1000 yr^-1 to the Nyquist frequency. This systematic breaking of a single scaling is reproduced by the simple stochastic models. In particular, the models suggest that the flattening of the spectra, starting from the multi-centennial scale and ending at the Nyquist frequency, results from both non-dynamical (or non-system) noise and the 20-yr binning of the ice core records. The modelling part of this research is partially based on the following work: Takahito Mitsui and Michel Crucifix, Influence of external forcings on abrupt millennial-scale climate changes: a statistical modelling study, Climate Dynamics (first online). doi:10.1007/s00382-016-3235-z
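
    A minimal periodogram version of the local scaling exponent described above, applied to an AR(1) surrogate sampled at 20-yr steps; all parameters are illustrative and the study's estimator may differ.

    ```python
    import numpy as np

    def local_beta(x, dt, f_center, octaves=0.5):
        """Local spectral scaling exponent beta_loc: minus the slope of
        log S(f) vs log f, fitted over +/- `octaves` around f_center."""
        f = np.fft.rfftfreq(x.size, dt)[1:]
        S = np.abs(np.fft.rfft(x - x.mean())[1:]) ** 2
        sel = (f > f_center / 2 ** octaves) & (f < f_center * 2 ** octaves)
        slope, _ = np.polyfit(np.log(f[sel]), np.log(S[sel]), 1)
        return -slope

    # AR(1) surrogate with 20-yr steps: beta_loc rises with frequency,
    # then falls toward zero approaching the Nyquist frequency (1/40 yr^-1).
    rng = np.random.default_rng(7)
    n, dt, phi = 4096, 20.0, 0.95
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    for fc in [1 / 5000, 1 / 500, 1 / 50]:
        print(f"beta_loc({fc:.5f}/yr) = {local_beta(x, dt, fc):.2f}")
    ```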

  19. Blessing of dimensionality: mathematical foundations of the statistical physics of data.

    PubMed

    Gorban, A N; Tyukin, I Y

    2018-04-28

    The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue 'Hilbert's sixth problem'. © 2018 The Author(s).

  20. Blessing of dimensionality: mathematical foundations of the statistical physics of data

    NASA Astrophysics Data System (ADS)

    Gorban, A. N.; Tyukin, I. Y.

    2018-04-01

    The concentrations of measure phenomena were discovered as the mathematical background to statistical mechanics at the end of the nineteenth/beginning of the twentieth century and have been explored in mathematics ever since. At the beginning of the twenty-first century, it became clear that the proper utilization of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarizes recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension, and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median-level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the thin structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and determine a non-iterative (one-shot) procedure for their construction. This article is part of the theme issue `Hilbert's sixth problem'.
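    The separation claim is easy to probe numerically. The toy sketch below (my construction, not the authors' code) checks how often a random Gaussian point x is linearly separated from a large random set by the simple Fisher-type functional ⟨x, y⟩ < ⟨x, x⟩, and shows the success fraction climbing toward one as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def separable_fraction(dim, n_set=2000, trials=50):
    """Fraction of trials in which a random point x is linearly separated
    from all n_set other i.i.d. Gaussian points by the hyperplane
    <x, y> = <x, x> (a Fisher-type linear functional)."""
    hits = 0
    for _ in range(trials):
        x = rng.standard_normal(dim)
        Y = rng.standard_normal((n_set, dim))
        if np.all(Y @ x < x @ x):        # every other point on one side
            hits += 1
    return hits / trials

for d in (10, 100, 500):
    print(d, separable_fraction(d))      # fraction climbs toward 1 with d
```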

  1. Characterization of Generalized Young Measures Generated by Symmetric Gradients

    NASA Astrophysics Data System (ADS)

    De Philippis, Guido; Rindler, Filip

    2017-06-01

    This work establishes a characterization theorem for (generalized) Young measures generated by symmetric derivatives of functions of bounded deformation (BD) in the spirit of the classical Kinderlehrer-Pedregal theorem. Our result places such Young measures in duality with symmetric-quasiconvex functions with linear growth. The "local" proof strategy combines blow-up arguments with the singular structure theorem in BD (the analogue of Alberti's rank-one theorem in BV), which was recently proved by the authors. As an application of our characterization theorem we show how an atomic part in a BD-Young measure can be split off in generating sequences.

  2. The Poincaré-Hopf Theorem for line fields revisited

    NASA Astrophysics Data System (ADS)

    Crowley, Diarmuid; Grant, Mark

    2017-07-01

    A Poincaré-Hopf Theorem for line fields with point singularities on orientable surfaces can be found in Hopf's 1956 Lecture Notes on Differential Geometry. In 1955 Markus presented such a theorem in all dimensions, but Markus' statement only holds in even dimensions 2k ≥ 4. In 1984 Jänich presented a Poincaré-Hopf theorem for line fields with more complicated singularities and focussed on the complexities arising in the generalized setting. In this expository note we review the Poincaré-Hopf Theorem for line fields with point singularities, presenting a careful proof which is valid in all dimensions.

  3. Common fixed point theorems for maps under a contractive condition of integral type

    NASA Astrophysics Data System (ADS)

    Djoudi, A.; Merghadi, F.

    2008-05-01

    Two common fixed point theorems for mappings of complete metric spaces under a general contractive inequality of integral type and satisfying minimal commutativity conditions are proved. These results extend and improve several previous results, particularly Theorem 4 of Rhoades [B.E. Rhoades, Two fixed point theorems for mappings satisfying a general contractive condition of integral type, Int. J. Math. Math. Sci. 63 (2003) 4007-4013] and Theorem 4 of Sessa [S. Sessa, On a weak commutativity condition of mappings in fixed point considerations, Publ. Inst. Math. (Beograd) (N.S.) 32 (46) (1982) 149-153].

  4. UWB pulse detection and TOA estimation using GLRT

    NASA Astrophysics Data System (ADS)

    Xie, Yan; Janssen, Gerard J. M.; Shakeri, Siavash; Tiberius, Christiaan C. J. M.

    2017-12-01

    In this paper, a novel statistical approach is presented for time-of-arrival (TOA) estimation based on first path (FP) pulse detection using a sub-Nyquist sampling ultra-wideband (UWB) receiver. The TOA measurement accuracy, which cannot be improved by averaging of the received signal, can be enhanced by the statistical processing of a number of TOA measurements. The TOA statistics are modeled and analyzed for a UWB receiver using threshold-crossing detection of a pulse signal with noise. The detection and estimation scheme based on the Generalized Likelihood Ratio Test (GLRT) detector, which captures the full statistical information of the measurement data, is shown to achieve accurate TOA estimation and allows for a trade-off among the threshold level, the noise level, the amplitude and arrival time of the first-path pulse, and the accuracy of the final TOA estimate.
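    The measurement primitive modelled in this work is threshold-crossing detection of the first-path pulse. A hedged sketch of that primitive (pulse shape, sampling rate and threshold are illustrative assumptions; the GLRT statistics built on top of it are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def first_crossing_toa(signal, threshold, fs):
    """Return the time of the first sample exceeding threshold, or None."""
    idx = np.argmax(signal > threshold)        # first True index, 0 if none
    if not signal[idx] > threshold:
        return None
    return idx / fs

fs = 2e9                                       # assumed 2 GS/s receiver
t = np.arange(0, 50e-9, 1 / fs)
toa_true = 20e-9
pulse = np.exp(-((t - toa_true) / 1e-9) ** 2)  # assumed 1 ns first path

# Repeated noisy measurements: the empirical spread of these raw TOAs is
# exactly what a GLRT-style statistical model exploits to refine the estimate.
toas = []
for _ in range(1000):
    toa = first_crossing_toa(pulse + 0.2 * rng.standard_normal(t.size),
                             threshold=0.5, fs=fs)
    if toa is not None:
        toas.append(toa)
print(np.mean(toas), np.std(toas))             # biased early by noise crossings
```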

  5. GMTIFS: The Giant Magellan Telescope integral fields spectrograph and imager

    NASA Astrophysics Data System (ADS)

    Sharp, Rob; Bloxham, G.; Boz, R.; Bundy, D.; Davies, J.; Espeland, B.; Fordham, B.; Hart, J.; Herrald, N.; Nielsen, J.; Vaccarella, A.; Vest, C.; Young, P.; McGregor, P.

    2016-08-01

    GMTIFS is the first-generation adaptive optics integral-field spectrograph for the GMT, having been selected through a competitive review process in 2011. The GMTIFS concept is a workhorse single-object integral-field spectrograph, operating at intermediate resolution (R ≈ 5,000 and 10,000) with a parallel imaging channel. The IFS offers variable spaxel scales to Nyquist sample the diffraction-limited GMT PSF from λ = 1-2.5 μm, as well as a 50 mas scale to provide high sensitivity for low surface brightness objects. GMTIFS will operate with all AO modes of the GMT (Natural Guide Star - NGSAO, Laser Tomography - LTAO, and Ground Layer - GLAO), with an emphasis on achieving high sky coverage for LTAO observations. We summarize the principal science drivers for GMTIFS and the major design concepts that allow these goals to be achieved.
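    Nyquist sampling a diffraction-limited PSF means spaxels no coarser than λ/2D. A quick check with an assumed 24.5 m GMT aperture (my number, not from the abstract):

```python
import math

def nyquist_spaxel_mas(wavelength_m, aperture_m):
    """Nyquist spaxel scale lambda/(2D), converted to milliarcseconds."""
    rad = wavelength_m / (2.0 * aperture_m)
    return math.degrees(rad) * 3600e3          # radians -> mas

D = 24.5                                       # assumed GMT aperture (m)
for lam in (1.0e-6, 2.5e-6):
    print(f"{lam * 1e6:.1f} um: {nyquist_spaxel_mas(lam, D):.1f} mas")
# ~4.2 mas at 1 um and ~10.5 mas at 2.5 um, far finer than the
# 50 mas sensitivity scale quoted above.
```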

  6. A Converse of the Mean Value Theorem Made Easy

    ERIC Educational Resources Information Center

    Mortici, Cristinel

    2011-01-01

    The aim of this article is to discuss some results about the converse mean value theorem stated by Tong and Braza [J. Tong and P. Braza, "A converse of the mean value theorem", Amer. Math. Monthly 104(10), (1997), pp. 939-942] and Almeida [R. Almeida, "An elementary proof of a converse mean-value theorem", Internat. J. Math. Ed. Sci. Tech. 39(8)…

  7. Recurrence theorems: A unified account

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallace, David, E-mail: david.wallace@balliol.ox.ac.uk

    I discuss classical and quantum recurrence theorems in a unified manner, treating both as generalisations of the fact that a system with a finite state space only has so many places to go. Along the way, I prove versions of the recurrence theorem applicable to dynamics on linear and metric spaces and make some comments about applications of the classical recurrence theorem in the foundations of statistical mechanics.

  8. Zero-Bounded Limits as a Special Case of the Squeeze Theorem for Evaluating Single-Variable and Multivariable Limits

    ERIC Educational Resources Information Center

    Gkioulekas, Eleftherios

    2013-01-01

    Many limits, typically taught as examples of applying the "squeeze" theorem, can be evaluated more easily using the proposed zero-bounded limit theorem. The theorem applies to functions defined as a product of a factor going to zero and a factor that remains bounded in some neighborhood of the limit. This technique is immensely useful…

  9. Correcting Duporcq's theorem

    PubMed Central

    Nawratil, Georg

    2014-01-01

    In 1898, Ernest Duporcq stated a famous theorem about rigid-body motions with spherical trajectories, without giving a rigorous proof. Today, this theorem is again of interest, as it is strongly connected with the topic of self-motions of planar Stewart–Gough platforms. We discuss Duporcq's theorem from this point of view and demonstrate that it is not correct. Moreover, we also present a revised version of this theorem. PMID:25540467

  10. Knowledge-based nonuniform sampling in multidimensional NMR.

    PubMed

    Schuyler, Adam D; Maciejewski, Mark W; Arthanari, Haribabu; Hoch, Jeffrey C

    2011-07-01

    The full resolution afforded by high-field magnets is rarely realized in the indirect dimensions of multidimensional NMR experiments because of the time cost of uniformly sampling to long evolution times. Emerging methods utilizing nonuniform sampling (NUS) enable high resolution along indirect dimensions by sampling long evolution times without sampling at every multiple of the Nyquist sampling interval. While the earliest NUS approaches matched the decay of sampling density to the decay of the signal envelope, recent approaches based on coupled evolution times attempt to optimize sampling by choosing projection angles that increase the likelihood of resolving closely-spaced resonances. These approaches employ knowledge about chemical shifts to predict optimal projection angles, whereas prior applications of tailored sampling employed only knowledge of the decay rate. In this work we adapt the matched filter approach as a general strategy for knowledge-based nonuniform sampling that can exploit prior knowledge about chemical shifts and is not restricted to sampling projections. Based on several measures of performance, we find that exponentially weighted random sampling (envelope matched sampling) performs better than shift-based sampling (beat matched sampling). While shift-based sampling can yield small advantages in sensitivity, the gains are generally outweighed by diminished robustness. Our observation that more robust sampling schemes are only slightly less sensitive than schemes highly optimized using prior knowledge about chemical shifts has broad implications for any multidimensional NMR study employing NUS. The results derived from simulated data are demonstrated with a sample application to PfPMT, the phosphoethanolamine methyltransferase of the human malaria parasite Plasmodium falciparum.
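    An envelope-matched (exponentially weighted random) schedule of the kind compared above can be generated in a few lines; this sketch assumes a simple exp(-t/T2) envelope and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(42)

def envelope_matched_schedule(n_grid, n_samples, t2_points):
    """Pick n_samples of n_grid Nyquist-grid evolution times, with
    sampling density matched to an exp(-t/T2) signal envelope
    (T2 expressed in grid points)."""
    t = np.arange(n_grid)
    weights = np.exp(-t / t2_points)
    weights /= weights.sum()
    picks = rng.choice(n_grid, size=n_samples, replace=False, p=weights)
    return np.sort(picks)

# E.g. keep 25% of a 512-point indirect dimension, T2 ~ 128 points.
schedule = envelope_matched_schedule(512, 128, 128.0)
print(schedule[:10])   # dense at short evolution times, sparse at long ones
```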

  11. Voronovskaja's theorem revisited

    NASA Astrophysics Data System (ADS)

    Tachev, Gancho T.

    2008-07-01

    We present a new quantitative variant of Voronovskaja's theorem for the Bernstein operator. This estimate improves the recent quantitative versions of Voronovskaja's theorem for certain Bernstein-type operators obtained by H. Gonska, P. Pitul and I. Rasa in 2006.

  12. Riemannian and Lorentzian flow-cut theorems

    NASA Astrophysics Data System (ADS)

    Headrick, Matthew; Hubeny, Veronika E.

    2018-05-01

    We prove several geometric theorems using tools from the theory of convex optimization. In the Riemannian setting, we prove the max flow-min cut (MFMC) theorem for boundary regions, applied recently to develop a ‘bit-thread’ interpretation of holographic entanglement entropies. We also prove various properties of the max flow and min cut, including respective nesting properties. In the Lorentzian setting, we prove the analogous MFMC theorem, which states that the volume of a maximal slice equals the flux of a minimal flow, where a flow is defined as a divergenceless timelike vector field with norm at least 1. This theorem includes as a special case a continuum version of Dilworth’s theorem from the theory of partially ordered sets. We include a brief review of the necessary tools from the theory of convex optimization, in particular Lagrangian duality and convex relaxation.

  13. Random Walks on Cartesian Products of Certain Nonamenable Groups and Integer Lattices

    NASA Astrophysics Data System (ADS)

    Vishnepolsky, Rachel

    A random walk on a discrete group satisfies a local limit theorem with power law exponent α if the return probabilities follow the asymptotic law P{return to starting point after n steps} ~ C ρ^n n^{-α}. A group has a universal local limit theorem if all random walks on the group with finitely supported step distributions obey a local limit theorem with the same power law exponent. Given two groups that obey universal local limit theorems, it is not known whether their cartesian product also has a universal local limit theorem. We settle the question affirmatively in one case, by considering a random walk on the cartesian product of a nonamenable group whose Cayley graph is a tree, and the integer lattice. As corollaries, we derive large deviations estimates and a central limit theorem.

  14. An Introduction to Kristof's Theorem for Solving Least-Square Optimization Problems Without Calculus.

    PubMed

    Waller, Niels

    2018-01-01

    Kristof's Theorem (Kristof, 1970) describes a matrix trace inequality that can be used to solve a wide class of least-square optimization problems without calculus. Considering its generality, it is surprising that Kristof's Theorem is rarely used in statistics and psychometric applications. The underutilization of this method likely stems, in part, from the mathematical complexity of Kristof's (1964, 1970) writings. In this article, I describe the underlying logic of Kristof's Theorem in simple terms by reviewing four key mathematical ideas that are used in the theorem's proof. I then show how Kristof's Theorem can be used to provide novel derivations of two cognate models from statistics and psychometrics. This tutorial includes a glossary of technical terms and an online supplement with R (R Core Team, 2017) code to perform the calculations described in the text.

  15. Covariant information-density cutoff in curved space-time.

    PubMed

    Kempf, Achim

    2004-06-04

    In information theory, the link between continuous information and discrete information is established through well-known sampling theorems. Sampling theory explains, for example, how frequency-filtered music signals are reconstructible perfectly from discrete samples. In this Letter, sampling theory is generalized to pseudo-Riemannian manifolds. This provides a new set of mathematical tools for the study of space-time at the Planck scale: theories formulated on a differentiable space-time manifold can be equivalent to lattice theories. There is a close connection to generalized uncertainty relations which have appeared in string theory and other studies of quantum gravity.

  16. Exact Interval Estimation, Power Calculation, and Sample Size Determination in Normal Correlation Analysis

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2006-01-01

    This paper considers the problem of analysis of correlation coefficients from a multivariate normal population. A unified theorem is derived for the regression model with normally distributed explanatory variables and the general results are employed to provide useful expressions for the distributions of simple, multiple, and partial-multiple…

  17. Oversampling of digitized images. [effects on interpolation in signal processing

    NASA Technical Reports Server (NTRS)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
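    The interpolation the author recommends is the Whittaker-Shannon formula, x(t) = Σₙ x[n] sinc(f_s t − n). A minimal sketch (the finite record, not the theorem, causes the small residual error):

```python
import numpy as np

def sinc_interp(samples, fs, t_new):
    """Whittaker-Shannon reconstruction: x(t) = sum_n x[n] sinc(fs*t - n)."""
    n = np.arange(samples.size)
    return np.array([np.sum(samples * np.sinc(fs * t - n)) for t in t_new])

fs = 10.0                                   # Hz
t_s = np.arange(32) / fs
x = np.cos(2 * np.pi * 1.3 * t_s)           # 1.3 Hz < fs/2: reconstructible
t_fine = np.linspace(0.5, 2.5, 9)
err = sinc_interp(x, fs, t_fine) - np.cos(2 * np.pi * 1.3 * t_fine)
print(np.max(np.abs(err)))                  # small; the residual comes from
                                            # truncating the infinite sum
```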

  18. Double soft graviton theorems and Bondi-Metzner-Sachs symmetries

    NASA Astrophysics Data System (ADS)

    Anupam, A. H.; Kundu, Arpan; Ray, Krishnendu

    2018-05-01

    It is now well understood that Ward identities associated with the (extended) BMS algebra are equivalent to single soft graviton theorems. In this work, we show that if we consider nested Ward identities constructed out of two BMS charges, a class of double soft factorization theorems can be recovered. By making connections with earlier works in the literature, we argue that at the subleading order, these double soft graviton theorems are the so-called consecutive double soft graviton theorems. We also show how these nested Ward identities can be understood as Ward identities associated with BMS symmetries in scattering states defined around (non-Fock) vacua parametrized by supertranslations or superrotations.

  19. A fermionic de Finetti theorem

    NASA Astrophysics Data System (ADS)

    Krumnow, Christian; Zimborás, Zoltán; Eisert, Jens

    2017-12-01

    Quantum versions of de Finetti's theorem are powerful tools, yielding conceptually important insights into the security of key distribution protocols or tomography schemes and allowing one to bound the error made by mean-field approaches. Such theorems link the symmetry of a quantum state under the exchange of subsystems to negligible quantum correlations and are well understood and established in the context of distinguishable particles. In this work, we derive a de Finetti theorem for finite sized Majorana fermionic systems. It is shown, much reflecting the spirit of other quantum de Finetti theorems, that a state which is invariant under certain permutations of modes loses most of its anti-symmetric character and is locally well described by a mode separable state. We discuss the structure of the resulting mode separable states and establish in specific instances a quantitative link to the quality of the Hartree-Fock approximation of quantum systems. We hint at a link to generalized Pauli principles for one-body reduced density operators. Finally, building upon the obtained de Finetti theorem, we generalize and extend the applicability of Hudson's fermionic central limit theorem.

  20. Visual Theorems.

    ERIC Educational Resources Information Center

    Davis, Philip J.

    1993-01-01

    Argues for a mathematics education that interprets the word "theorem" in a sense that is wide enough to include the visual aspects of mathematical intuition and reasoning. Defines the term "visual theorems" and illustrates the concept using the Marigold of Theodorus. (Author/MDH)

  1. Note on the theorems of Bjerknes and Crocco

    NASA Technical Reports Server (NTRS)

    Theodorsen, Theodore

    1946-01-01

    The theorems of Bjerknes and Crocco are of great interest in the theory of flow around airfoils at Mach numbers near and above unity. A brief note shows how both theorems are developed by short vector transformations.

  2. Analysis of non locality proofs in Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Nisticò, Giuseppe

    2012-02-01

    Two kinds of non-locality theorems in Quantum Mechanics are taken into account: the theorems based on the criterion of reality and the quite different theorem proposed by Stapp. In the present work, analyses of the theorem due to Greenberger, Horne, Shimony and Zeilinger, based on the criterion of reality, and of Stapp's argument are presented. The results of these analyses show that the alleged violations of locality cannot be considered definitive.

  3. PYGMALION: A Creative Programming Environment

    DTIC Science & Technology

    1975-06-01

    OCR-degraded excerpt; recoverable fragments: ...Examples of Purely Iconic Reasoning... Pythagoras' original proof of the Pythagorean Theorem... Theorem Proving Machine. His program employed properties of the representation to guide the proof of theorems. His simple heuristic "Reject..." ...one theorem the square of the hypotenuse. "Every proposition is presented as a self-contained fact relying on its own intrinsic evidence."

  4. A Maximal Element Theorem in FWC-Spaces and Its Applications

    PubMed Central

    Hu, Qingwen; Miao, Yulin

    2014-01-01

    A maximal element theorem is proved in finite weakly convex spaces (FWC-spaces, in short) which have no linear, convex, and topological structure. Using the maximal element theorem, we develop new existence theorems of solutions to the variational relation problem, the generalized equilibrium problem, the equilibrium problem with lower and upper bounds, and the minimax problem in FWC-spaces. The results presented in this paper unify and extend some known results in the literature. PMID:24782672

  5. Generalized Bloch theorem and topological characterization

    NASA Astrophysics Data System (ADS)

    Dobardžić, E.; Dimitrijević, M.; Milovanović, M. V.

    2015-03-01

    The Bloch theorem enables reduction of the eigenvalue problem of the single-particle Hamiltonian that commutes with the translational group. Based on a group theory analysis we present a generalization of the Bloch theorem that incorporates all additional symmetries of a crystal. The generalized Bloch theorem constrains the form of the Hamiltonian which becomes manifestly invariant under additional symmetries. In the case of isotropic interactions the generalized Bloch theorem gives a unique Hamiltonian. This Hamiltonian coincides with the Hamiltonian in the periodic gauge. In the case of anisotropic interactions the generalized Bloch theorem allows a family of Hamiltonians. Due to the continuity argument we expect that even in this case the Hamiltonian in the periodic gauge defines observables, such as Berry curvature, in the inverse space. For both cases we present examples and demonstrate that the average of the Berry curvatures of all possible Hamiltonians in the Bloch gauge is the Berry curvature in the periodic gauge.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, V.V.; Conley, R.; Anderson, E.H.

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
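    BPR test patterns of this kind are typically built on maximum-length binary sequences, whose flat power spectrum is what makes them useful for MTF calibration. A sketch of generating one with a linear-feedback shift register (the register length and taps are standard choices, not necessarily the pattern used in the paper):

```python
import numpy as np

def mls(n_bits=10, taps=(10, 7)):
    """Maximum-length sequence from a Fibonacci LFSR (period 2**n_bits - 1).

    taps correspond to a primitive polynomial; (10, 7) is a standard choice.
    """
    state = [1] * n_bits
    out = []
    for _ in range(2 ** n_bits - 1):
        out.append(state[-1])
        fb = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [fb] + state[:-1]
    return np.array(out)

seq = mls()
spec = np.abs(np.fft.rfft(2 * seq - 1))            # spectrum of the +/-1 pattern
print(seq.size, spec[1:].std() / spec[1:].mean())  # 1023, near-flat spectrum
```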

  7. Revisiting Ramakrishnan's approach to relativity [Velocity addition theorem uniqueness]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nandi, K.K.; Shankara, T.S.

    The conditions under which the velocity addition theorem (VAT) is formulated by Ramakrishnan gave rise to doubts about the uniqueness of the theorem. These conditions are rediscussed with reference to their algebraic and experimental implications. 9 references.

  8. General Theorems about Homogeneous Ellipsoidal Inclusions

    ERIC Educational Resources Information Center

    Korringa, J.; And Others

    1978-01-01

    Mathematical theorems about the properties of ellipsoids are developed. Included are Poisson's theorem concerning the magnetization of a homogeneous body of ellipsoidal shape, the polarization of a dielectric, the transport of heat or electricity through an ellipsoid, and other problems. (BB)

  9. A no-hair theorem for black holes in f(R) gravity

    NASA Astrophysics Data System (ADS)

    Cañate, Pedro

    2018-01-01

    In this work we present a no-hair theorem which discards the existence of four-dimensional asymptotically flat, static and spherically symmetric or stationary axisymmetric, non-trivial black holes in the framework of f(R) gravity under the metric formalism. We also show that this no-hair theorem can discard asymptotically de Sitter, stationary and axisymmetric, non-trivial black holes. The novelty is that the theorem is built without resorting to the known mapping between f(R) gravity and scalar-tensor theory. Thus, an advantage is that our no-hair theorem applies as well to metric f(R) models that cannot be mapped to scalar-tensor theory.

  10. Generalized Browder's and Weyl's theorems for Banach space operators

    NASA Astrophysics Data System (ADS)

    Curto, Raúl E.; Han, Young Min

    2007-12-01

    We find necessary and sufficient conditions for a Banach space operator T to satisfy the generalized Browder's theorem. We also prove that the spectral mapping theorem holds for the Drazin spectrum and for analytic functions on an open neighborhood of σ(T). As applications, we show that if T is algebraically M-hyponormal, or if T is algebraically paranormal, then the generalized Weyl's theorem holds for f(T), where f ∈ H(σ(T)), the space of functions analytic on an open neighborhood of σ(T). We also show that if T is reduced by each of its eigenspaces, then the generalized Browder's theorem holds for f(T), for each f ∈ H(σ(T)).

  11. Lanchester-Type Models of Warfare. Volume II

    DTIC Science & Technology

    1980-10-01

    OCR-degraded excerpt: ...by the so-called Perron-Frobenius theorem for nonnegative matrices one can guarantee that (without any further assumptions about A and B) there always exists a vector of nonnegative values such that, for example, (7.18.6) holds. Before we state the Perron-Frobenius theorem for nonnegative... (a proof of this important theorem). THEOREM 5.1.1 (Perron [121] and Frobenius [60]): Let C ≥ 0 be an n x n matrix. Then, 1. C has a nonnegative real...

  12. A remark on the energy conditions for Hawking's area theorem

    NASA Astrophysics Data System (ADS)

    Lesourd, Martin

    2018-06-01

    Hawking's area theorem is a fundamental result in black hole theory that is universally associated with the null energy condition. That this condition can be weakened is illustrated by the formulation of a strengthened version of the theorem based on an energy condition that allows for violations of the null energy condition. With the semi-classical context in mind, some brief remarks pertaining to the suitability of the area theorem and its energy condition are made.

  13. Gibbs-Curie-Wulff Theorem in Organic Materials: A Case Study on the Relationship between Surface Energy and Crystal Growth.

    PubMed

    Li, Rongjin; Zhang, Xiaotao; Dong, Huanli; Li, Qikai; Shuai, Zhigang; Hu, Wenping

    2016-02-24

    The equilibrium crystal shape and shape evolution of organic crystals are found to follow the Gibbs-Curie-Wulff theorem. Organic crystals are grown by the physical vapor transport technique and exhibit exactly the same shape as predicted by the Gibbs-Curie-Wulff theorem under optimal conditions. This accordance provides concrete proof for the theorem. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    V Yashchuk; R Conley; E Anderson

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  15. Frequency position modulation using multi-spectral projections

    NASA Astrophysics Data System (ADS)

    Goodman, Joel; Bertoncini, Crystal; Moore, Michael; Nousain, Bryan; Cowart, Gregory

    2012-10-01

    In this paper we present an approach that harnesses multi-spectral projections (MSPs) to carefully shape and locate tones in the spectrum, enabling a new and robust modulation in which a signal's discrete frequency support is used to represent symbols. This method, called Frequency Position Modulation (FPM), is an innovative extension of MT-FSK and OFDM and can be non-uniformly spread over many GHz of instantaneous bandwidth (IBW), resulting in a communications system that is difficult to intercept and jam. The FPM symbols are recovered using adaptive projections that in part employ an analog polynomial nonlinearity paired with an analog-to-digital converter (ADC) sampling at a rate that is only a fraction of the IBW of the signal. MSPs also facilitate using commercial off-the-shelf (COTS) ADCs with uniform sampling, standing in sharp contrast to random linear projections by random sampling, which require a full Nyquist-rate sample-and-hold. Our novel communication system concept provides an order of magnitude improvement in processing gain over conventional LPI/LPD communications (e.g., FH- or DS-CDMA) and facilitates operation in interference-laden environments where conventional compressed sensing receivers would fail. We quantitatively analyze the bit error rate (BER) and processing gain (PG) for a maximum-likelihood-based FPM demodulator and demonstrate its performance in interference-laden conditions.
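    The core idea, using a signal's discrete frequency support as the symbol, can be illustrated in a toy baseband form (this ignores the multi-spectral projections and sub-Nyquist receiver that make the real system work; bin counts and SNR are illustrative):

```python
import numpy as np

N, K = 64, 3          # FFT bins available and tones per symbol

def fpm_modulate(bins, n_samp=N):
    """Place one tone in each chosen bin; the bin set is the symbol."""
    t = np.arange(n_samp)
    return sum(np.cos(2 * np.pi * b * t / n_samp) for b in bins)

def fpm_demodulate(x, k=K):
    """Recover the symbol as the k strongest spectral bins."""
    spec = np.abs(np.fft.rfft(x))
    return set(int(i) for i in np.argsort(spec)[-k:])

symbol = {5, 19, 27}
rx = fpm_modulate(symbol) + 0.3 * np.random.default_rng(3).standard_normal(N)
print(fpm_demodulate(rx) == symbol)   # True at this SNR
```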

  16. Annual Pennsylvania Conference on Postsecondary Occupational Education: Programming Postsecondary Occupational Education. (Ninth, Pennsylvania State University, September 28-29, 1977).

    ERIC Educational Resources Information Center

    Martorana, S. V., Ed.; And Others

    This publication contains the text of the main presentations and the highlights of discussion groups from the Ninth Annual Pennsylvania Conference on Postsecondary Occupational Education. The conference theme was "Programming Postsecondary Occupational Education." Ewald Nyquist, the first speaker, delineated the problems faced by…

  17. Discrete-Time Demodulator Architectures for Free-Space Broadband Optical Pulse-Position Modulation

    NASA Technical Reports Server (NTRS)

    Gray, A. A.; Lee, C.

    2004-01-01

    The objective of this work is to develop discrete-time demodulator architectures for broadband optical pulse-position modulation (PPM) that are capable of processing Nyquist or near-Nyquist data rates. These architectures are motivated by the numerous advantages of realizing communications demodulators in digital very large scale integrated (VLSI) circuits. The architectures are developed within a framework that encompasses a large body of work in optical communications, synchronization, and multirate discrete-time signal processing and are constrained by the limitations of the state of the art in digital hardware. This work attempts to create a bridge between theoretical communication algorithms and analysis for deep-space optical PPM and modern digital VLSI. The primary focus of this work is on the synthesis of discrete-time processing architectures for accomplishing the most fundamental functions required in PPM demodulators, post-detection filtering, synchronization, and decision processing. The architectures derived are capable of closely approximating the theoretical performance of the continuous-time algorithms from which they are derived. The work concludes with an outline of the development path that leads to hardware.
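    The most basic of the demodulator functions listed, slot-wise decision processing, reduces to picking the slot with the most post-detection energy in each PPM frame. A minimal hard-decision sketch (slot counts and noise level are illustrative):

```python
import numpy as np

def ppm_demodulate(samples, slots_per_symbol):
    """Hard-decision PPM: the symbol is the slot holding the most energy.

    samples: post-detection filter output, one value per slot, whose
    length is a multiple of slots_per_symbol.
    """
    frames = samples.reshape(-1, slots_per_symbol)
    return np.argmax(frames, axis=1)

# 4-ary PPM, 3 symbols: pulses land in slot offsets 2, 0 and 3.
rng = np.random.default_rng(7)
tx = np.zeros(12)
tx[[2, 4, 11]] = 1.0
rx = tx + 0.2 * rng.standard_normal(12)
print(ppm_demodulate(rx, 4))          # [2 0 3] with high probability here
```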

  18. Digital nonlinearity compensation in high-capacity optical communication systems considering signal spectral broadening effect.

    PubMed

    Xu, Tianhua; Karanov, Boris; Shevchenko, Nikita A; Lavery, Domaniç; Liga, Gabriele; Killey, Robert I; Bayvel, Polina

    2017-10-11

    Nyquist-spaced transmission and digital signal processing have proved effective in maximising the spectral efficiency and reach of optical communication systems. In these systems, Kerr nonlinearity determines the performance limits, and leads to spectral broadening of the signals propagating in the fibre. Although digital nonlinearity compensation was validated to be promising for mitigating Kerr nonlinearities, the impact of spectral broadening on nonlinearity compensation has never been quantified. In this paper, the performance of multi-channel digital back-propagation (MC-DBP) for compensating fibre nonlinearities in Nyquist-spaced optical communication systems is investigated, when the effect of signal spectral broadening is considered. It is found that accounting for the spectral broadening effect is crucial for achieving the best performance of DBP in both single-channel and multi-channel communication systems, independent of modulation formats used. For multi-channel systems, the degradation of DBP performance due to neglecting the spectral broadening effect in the compensation is more significant for outer channels. Our work also quantified the minimum bandwidths of optical receivers and signal processing devices to ensure the optimal compensation of deterministic nonlinear distortions.
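    Single-channel DBP is conceptually a split-step Fourier solver run with inverted fibre coefficients. The sketch below is a lossless, single-channel toy under one common NLSE sign convention (parameter values are typical textbook numbers, not the paper's system):

```python
import numpy as np

def dbp_single_channel(field, fs, span_m, n_steps,
                       beta2=-21.7e-27, gamma=1.3e-3):
    """Symmetric split-step DBP over one lossless span (a sketch).

    Convention: forward propagation applies dispersion exp(+i*beta2/2*w^2*dz)
    and Kerr phase exp(+i*gamma*|A|^2*dz); DBP applies the conjugates.
    beta2 in s^2/m, gamma in 1/(W*m); loss and inter-channel terms omitted.
    """
    w = 2 * np.pi * np.fft.fftfreq(field.size, d=1 / fs)   # angular freq grid
    dz = span_m / n_steps
    half_disp = np.exp(-0.25j * beta2 * w**2 * dz)         # inverse dispersion, dz/2
    for _ in range(n_steps):
        field = np.fft.ifft(np.fft.fft(field) * half_disp)
        field = field * np.exp(-1j * gamma * np.abs(field)**2 * dz)
        field = np.fft.ifft(np.fft.fft(field) * half_disp)
    return field

# Sanity check: running the same routine with negated length plays the
# fibre "forward"; DBP then recovers the input to machine precision.
fs = 64e9
t = np.arange(-512, 512) / fs
a0 = 0.03 * np.exp(-(t / 50e-12) ** 2) + 0j    # ~50 ps, ~1 mW peak pulse
fwd = dbp_single_channel(a0, fs, span_m=-80e3, n_steps=200)
print(np.max(np.abs(dbp_single_channel(fwd, fs, 80e3, 200) - a0)))
```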

  19. Corrosion Properties of Dissimilar Friction Stir Welded 6061 Aluminum and HT590 Steel

    NASA Astrophysics Data System (ADS)

    Seo, Bosung; Song, Kuk Hyun; Park, Kwangsuk

    2018-05-01

    Corrosion properties of dissimilar friction stir welded 6061 aluminum and HT590 steel were investigated to understand the effects of galvanic corrosion. As the cathode when coupled, HT590 was cathodically protected. However, the passivation of AA6061 temporarily made the aluminum alloy the cathode, which led to corrosion of HT590. From the EIS analysis showing a Warburg diffusion element in the Nyquist plots, it can be inferred that a stable passivation layer formed on AA6061. However, neither the weld nor HT590 showed Warburg diffusion in their Nyquist plots, suggesting that there was no barrier to corrosion, or that any barrier present could not prevent or retard charge transport through the passivation layer. Open circuit potential measurements showed that the potential of the weld was similar to that of HT590 and lay in the pitting region for AA6061, keeping the aluminum alloy part of the weld in a corroding state. This resulted in a cracked oxide film on the AA6061 side of the weld, which could not act as a corrosion barrier.

  20. The Sampling Distribution and the Central Limit Theorem: What They Are and Why They're Important.

    ERIC Educational Resources Information Center

    Kennedy, Charlotte A.

    The use of and emphasis on statistical significance testing has pervaded educational and behavioral research for many decades in spite of criticism by prominent researchers in this field. Much of the controversy is caused by lack of understanding or misinterpretations. This paper reviews criticisms of statistical significance testing and discusses…

  1. The Holographic Electron Density Theorem, de-quantization, re-quantization, and nuclear charge space extrapolations of the Universal Molecule Model

    NASA Astrophysics Data System (ADS)

    Mezey, Paul G.

    2017-11-01

    Two strongly related theorems on non-degenerate ground state electron densities serve as the basis of "Molecular Informatics". The Hohenberg-Kohn theorem is a statement on global molecular information, ensuring that the complete electron density contains the complete molecular information. However, the Holographic Electron Density Theorem states more: the local information present in each and every positive volume density fragment is already complete: the information in the fragment is equivalent to the complete molecular information. In other words, the complete molecular information provided by the Hohenberg-Kohn Theorem is already provided, in full, by any positive volume, otherwise arbitrarily small electron density fragment. In this contribution some of the consequences of the Holographic Electron Density Theorem are discussed within the framework of the "Nuclear Charge Space" and the Universal Molecule Model. In the "Nuclear Charge Space" the nuclear charges are regarded as continuous variables, and in the more general Universal Molecule Model some other quantized parameters are also allowed to become "de-quantized" and then "re-quantized", leading to interrelations among real molecules through abstract molecules. Here the specific role of the Holographic Electron Density Theorem is discussed within the above context.

  2. Generalized Dandelin’s Theorem

    NASA Astrophysics Data System (ADS)

    Kheyfets, A. L.

    2017-11-01

    The paper gives a geometric proof of the theorem which states that a plane section of a second-order surface of rotation (a quadric of rotation, QR) is a conic: an ellipse, a hyperbola or a parabola. The theorem supplements the well-known Dandelin's theorem, which gives the geometric proof only for a circular cone, and extends the proof to all QR, namely the ellipsoid, hyperboloid, paraboloid and cylinder. That is why the considered theorem is known as the generalized Dandelin's theorem (GDT). The GDT proof is based on a relatively unknown generalized directrix definition (GDD) of conics. The work outlines the GDD proof for all types of conics as their necessary and sufficient condition. Based on the GDD, the author proves the GDT for all QR for an arbitrary position of the cutting plane. The graphical stereometric constructions necessary for the proof are given, and their implementation by 3D computer methods is considered. The article shows examples of the constructions made in the AutoCAD package. The theorem is intended for theoretical training courses for advanced student groups in architectural and construction specialties.

  3. The B-field soft theorem and its unification with the graviton and dilaton

    NASA Astrophysics Data System (ADS)

    Di Vecchia, Paolo; Marotta, Raffaele; Mojaza, Matin

    2017-10-01

    In theories of Einstein gravity coupled with a dilaton and a two-form, a soft theorem for the two-form, known as the Kalb-Ramond B-field, has so far been missing. In this work we fill the gap, and in turn formulate a unified soft theorem valid for gravitons, dilatons and B-fields in any tree-level scattering amplitude involving the three massless states. The new soft theorem is fixed by means of on-shell gauge invariance and enters at the subleading order of the graviton's soft theorem. In contrast to the subsubleading soft behavior of gravitons and dilatons, we show that the soft behavior of B-fields at this order cannot be fully fixed by gauge invariance. Nevertheless, we show that it is possible to establish a gauge invariant decomposition of the amplitudes to any order in the soft expansion. We check explicitly the new soft theorem in the bosonic string and in Type II superstring theories, and furthermore demonstrate that, at the next order in the soft expansion, totally gauge invariant terms appear in both string theories which cannot be factorized into a soft theorem.

  4. Magnetoelectric coupling and electrical properties of inorganic-organic based LSMO - PVDF hybrid nanocomposites

    NASA Astrophysics Data System (ADS)

    Debnath, Rajesh; Mandal, S. K.; Dey, P.; Nath, A.

    2018-04-01

    We have investigated strain-mediated magnetoelectric coupling and ac electrical properties of 0.5 La0.7Sr0.3MnO3 - 0.5 polyvinylidene fluoride nanocomposites at room temperature. The sample was prepared through a low temperature pyrophoric chemical process. A detailed study of the X-ray diffraction pattern shows the simultaneous co-existence of the two phases of nanometric grains. Field emission scanning electron micrographs show the absence of any phase segregation and good chemical homogeneity in the composites. The magnetoelectric voltage was measured in both longitudinal and transverse directions at a frequency of 73 Hz. The magnetoelectric coefficient is found to be ~0.17 mV/cm·Oe in the transverse direction and ~0.08 mV/cm·Oe in the longitudinal direction. With the application of a dc magnetic field the real and imaginary parts of the impedance increase while the dielectric constant decreases. Nyquist plots have been fitted using two parallel resistance - constant phase element circuits, reflecting the dominant roles of grain and grain boundary resistance in the conduction process of the sample.

  5. Enhancing the Electrochemical Behavior of Pure Copper by Cyclic Potentiodynamic Passivation: A Comparison between Coarse- and Nano-Grained Pure Copper

    NASA Astrophysics Data System (ADS)

    Fattah-alhosseini, Arash; Imantalab, Omid; Attarzadeh, Farid Reza

    2016-10-01

    The electrochemical behavior of coarse- and nano-grained pure copper was modified and improved to a large extent by the application of cyclic potentiodynamic passivation. The efficacy of this method was evaluated on the basis of grain size, which is of great importance in corrosion studies. In this study, eight passes of the accumulative roll bonding process at room temperature were performed to produce nano-grained pure copper. Transmission electron microscopy images indicated that the average grain size fell below 100 nm after eight passes. Cyclic voltammetry and the electrochemical tests performed after it revealed that cyclic potentiodynamic passivation significantly improved the passive behavior of both coarse- and nano-grained samples. In addition, the superior behavior of the nano-grained sample in comparison to the coarse-grained one was distinguished by its smaller cyclic voltammogram loops, nobler free potentials, larger capacitive arcs in the Nyquist plots, and lower charge carrier densities within the passive film.

  6. Ba doped Fe3O4 nanocrystals: Magnetic field and temperature tuning dielectric and electrical transport

    NASA Astrophysics Data System (ADS)

    Dutta, Papia; Mandal, S. K.; Nath, A.

    2018-05-01

    Nanocrystalline BaFe2O4 has been prepared through a low temperature pyrophoric reaction method. The structural, dielectric and electrical transport properties of BaFe2O4 are investigated in detail. AC electrical properties have been studied over a wide range of frequencies, applied dc magnetic fields and temperatures. The impedance is found to increase with increasing magnetic field, which is attributed to the magnetostriction of the sample. The observed magneto-impedance and magnetodielectric responses are ~32% and ~33% at room temperature. Nyquist plots have been fitted using resistor-capacitor circuits at different magnetic fields and temperatures, showing the dominant roles of the grains and grain boundaries of the sample. A metal-semiconductor transition at ~403 K is discussed in terms of delocalized and localized charge carriers. We have estimated the activation energy using the Arrhenius relation, indicating a temperature dependent electrical relaxation process in the system. The AC conductivity follows Jonscher's single power law, indicating large and small polaronic hopping conduction mechanisms in the system.
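    Jonscher's law, σ(ω) = σ_dc + Aω^n, is a three-parameter fit; a sketch using scipy's curve_fit on synthetic data standing in for the measured conductivity (all numbers illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def jonscher(omega, sigma_dc, A, n):
    """Jonscher universal power law for AC conductivity."""
    return sigma_dc + A * omega**n

# Synthetic "measured" conductivity with 3% multiplicative noise.
omega = np.logspace(2, 7, 60)                     # rad/s
sigma = jonscher(omega, 1e-6, 3e-11, 0.72)
sigma *= 1 + 0.03 * np.random.default_rng(5).standard_normal(omega.size)

popt, _ = curve_fit(jonscher, omega, sigma, p0=(1e-6, 1e-10, 0.7))
print(popt)    # recovers sigma_dc, A and the exponent n ~ 0.72
```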

  7. Sparse dictionary for synthetic transmit aperture medical ultrasound imaging.

    PubMed

    Wang, Ping; Jiang, Jin-Yang; Li, Na; Luo, Han-Wu; Li, Fang; Cui, Shi-Gang

    2017-07-01

    It is possible to recover a signal below the Nyquist sampling limit using a compressive sensing technique in ultrasound imaging. However, the reconstruction enabled by common sparse transform approaches does not achieve satisfactory results. Considering the ultrasound echo signal's features of attenuation, repetition, and superposition, a sparse dictionary with the emission pulse signal is proposed. Sparse coefficients in the proposed dictionary have high sparsity. Images reconstructed with this dictionary were compared with those obtained with the three other common transforms, namely, discrete Fourier transform, discrete cosine transform, and discrete wavelet transform. The performance of the proposed dictionary was analyzed via a simulation and experimental data. The mean absolute error (MAE) was used to quantify the quality of the reconstructions. Experimental results indicate that the MAE associated with the proposed dictionary was always the smallest, the reconstruction time required was the shortest, and the lateral resolution and contrast of the reconstructed images were also the closest to the original images. The proposed sparse dictionary performed better than the other three sparse transforms. With the same sampling rate, the proposed dictionary achieved excellent reconstruction quality.
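    The reconstruction pipeline implied here, a sparse code in a pulse-shaped dictionary, can be sketched with orthogonal matching pursuit; the pulse shape and echo positions below are illustrative, and OMP stands in for whatever solver the authors used:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with k columns of D."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    return support, coeffs

# Dictionary of time-shifted emission pulses (echoes are superpositions).
n, pulse = 256, np.array([0.3, 0.8, 1.0, 0.8, 0.3])
D = np.zeros((n, n))
for j in range(n):
    seg = pulse[: min(len(pulse), n - j)]
    D[j : j + len(seg), j] = seg
D /= np.linalg.norm(D, axis=0)

y = 1.0 * D[:, 40] + 0.6 * D[:, 120]       # two well-separated echoes
sup, c = omp(D, y, 2)
print(sorted(sup))                          # [40, 120]
```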

  8. Abel's theorem in the noncommutative case

    NASA Astrophysics Data System (ADS)

    Leitenberger, Frank

    2004-03-01

    We define noncommutative binary forms. Using the typical representation of Hermite we prove the fundamental theorem of algebra and we derive a noncommutative Cardano formula for cubic forms. We define quantized elliptic and hyperelliptic differentials of the first kind. Following Abel we prove Abel's theorem.

  9. Impossible colorings and Bell's theorem

    NASA Astrophysics Data System (ADS)

    Aravind, P. K.

    1999-11-01

    An argument due to Zimba and Penrose is generalized to show how all known non-coloring proofs of the Bell-Kochen-Specker (BKS) theorem can be converted into inequality-free proofs of Bell's nonlocality theorem. A compilation of many such inequality-free proofs is given.

  10. Understanding Rolle's Theorem

    ERIC Educational Resources Information Center

    Parameswaran, Revathy

    2009-01-01

    This paper reports on an experiment studying twelfth grade students' understanding of Rolle's Theorem. In particular, we study the influence of different concept images that students employ when solving reasoning tasks related to Rolle's Theorem. We argue that students' "container schema" and "motion schema" allow for rich…

  11. An Application of the Perron-Frobenius Theorem to a Damage Model Problem.

    DTIC Science & Technology

    1985-04-01

    OCR-degraded report excerpt; recoverable content: An Application of the Perron-Frobenius Theorem to a Damage Model Problem (University of Pittsburgh, Center for Multivariate Analysis; University of Sheffield, U.K.). Summary: Using the Perron-Frobenius theorem, it is established that if (X,Y) is a random vector of non-negative...

  12. International Conference on Fixed Point Theory and Applications (Colloque International Theorie Du Point Fixe et Applications)

    DTIC Science & Technology

    1989-06-09

    OCR-degraded proceedings excerpt; recoverable fragments: ...Theorem and the Perron-Frobenius Theorem in matrix theory. We use the Hahn-Banach theorem and do not use any fixed-point related concepts. ...Isac, G., Fixed point theorems on convex cones, generalized pseudo-contractive mappings and the complementarity problem. ...and (II), ∂f(x)° denotes the negative polar cone of ∂f(x). These conditions are respectively called "inward" and "outward". Indeed, when X is convex...

  13. Numerical solution of linear and nonlinear Fredholm integral equations by using weighted mean-value theorem.

    PubMed

    Altürk, Ahmet

    2016-01-01

    Mean value theorems for both derivatives and integrals are very useful tools in mathematics. They can be used to obtain very important inequalities and to prove basic theorems of mathematical analysis. In this article, a semi-analytical method that is based on weighted mean-value theorem for obtaining solutions for a wide class of Fredholm integral equations of the second kind is introduced. Illustrative examples are provided to show the significant advantage of the proposed method over some existing techniques.

  14. Markov Property of the Conformal Field Theory Vacuum and the a Theorem.

    PubMed

    Casini, Horacio; Testé, Eduardo; Torroba, Gonzalo

    2017-06-30

    We use strong subadditivity of entanglement entropy, Lorentz invariance, and the Markov property of the vacuum state of a conformal field theory to give a new proof of the irreversibility of the renormalization group in d=4 space-time dimensions: the a-theorem. This extends the proofs of the c- and F-theorems in dimensions d=2 and d=3 based on vacuum entanglement entropy, and gives a unified picture of all known irreversibility theorems in relativistic quantum field theory.

  15. A Polarimetric Extension of the van Cittert-Zernike Theorem for Use with Microwave Interferometers

    NASA Technical Reports Server (NTRS)

    Piepmeier, J. R.; Simon, N. K.

    2004-01-01

    The van Cittert-Zernike theorem describes the Fourier-transform relationship between an extended source and its visibility function. Developments in classical optics texts use scalar field formulations for the theorem. Here, we develop a polarimetric extension to the van Cittert-Zernike theorem with applications to passive microwave Earth remote sensing. The development provides insight into the mechanics of two-dimensional interferometric imaging, particularly the effects of polarization basis differences between the scene and the observer.

  16. Nonlocal Quantum Information Transfer Without Superluminal Signalling and Communication

    NASA Astrophysics Data System (ADS)

    Walleczek, Jan; Grössing, Gerhard

    2016-09-01

    It is a frequent assumption that—via superluminal information transfers—superluminal signals capable of enabling communication are necessarily exchanged in any quantum theory that posits hidden superluminal influences. However, does the presence of hidden superluminal influences automatically imply superluminal signalling and communication? The non-signalling theorem mediates the apparent conflict between quantum mechanics and the theory of special relativity. However, as a `no-go' theorem there exist two opposing interpretations of the non-signalling constraint: foundational and operational. Concerning Bell's theorem, we argue that Bell employed both interpretations, and that he finally adopted the operational position which is associated often with ontological quantum theory, e.g., de Broglie-Bohm theory. This position we refer to as "effective non-signalling". By contrast, associated with orthodox quantum mechanics is the foundational position referred to here as "axiomatic non-signalling". In search of a decisive communication-theoretic criterion for differentiating between "axiomatic" and "effective" non-signalling, we employ the operational framework offered by Shannon's mathematical theory of communication, whereby we distinguish between Shannon signals and non-Shannon signals. We find that an effective non-signalling theorem represents two sub-theorems: (1) Non-transfer-control (NTC) theorem, and (2) Non-signification-control (NSC) theorem. Employing NTC and NSC theorems, we report that effective, instead of axiomatic, non-signalling is entirely sufficient for prohibiting nonlocal communication. Effective non-signalling prevents the instantaneous, i.e., superluminal, transfer of message-encoded information through the controlled use—by a sender-receiver pair —of informationally-correlated detection events, e.g., in EPR-type experiments. An effective non-signalling theorem allows for nonlocal quantum information transfer yet—at the same time—effectively denies superluminal signalling and communication.

  17. On Euler's Theorem for Homogeneous Functions and Proofs Thereof.

    ERIC Educational Resources Information Center

    Tykodi, R. J.

    1982-01-01

    Euler's theorem for homogeneous functions is useful when developing the thermodynamic distinction between extensive and intensive variables of state and when deriving the Gibbs-Duhem relation. Discusses Euler's theorem and thermodynamic applications. Includes a six-step instructional strategy for introducing the material to students. (Author/JN)

  18. Validation of a Custom-made Software for DQE Assessment in Mammography Digital Detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ayala-Dominguez, L.; Perez-Ponce, H.; Brandan, M. E.

    2010-12-07

    This work presents the validation of custom-made software, designed and developed in Matlab, intended for routine evaluation of the detective quantum efficiency (DQE) according to the algorithms described in the IEC 62220-1-2 standard. DQE, normalized noise power spectrum (NNPS) and pre-sampling modulation transfer function (MTF) were calculated from RAW images from a GE Senographe DS (FineView disabled) and a Siemens Novation system. The calculated MTF is in close agreement with results obtained with alternative codes: MTF_tool (Maidment), an ImageJ plug-in (Perez-Ponce) and MIQuaELa (Ayala). Overall agreement better than ≈90% was found in the MTF; the largest differences were observed at frequencies close to the Nyquist limit. For the measurement of NNPS and DQE, agreement is similar to that obtained in the MTF. These results suggest that the developed software can be used with confidence for image quality assessment.

  19. Single lens system for forward-viewing navigation and scanning side-viewing optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Tate, Tyler H.; McGregor, Davis; Barton, Jennifer K.

    2017-02-01

    The optical design for a dual-modality endoscope based on piezo scanning fiber technology is presented, including a novel technique to combine forward-viewing navigation and side-viewing OCT. Potential applications include navigating body lumens such as the fallopian tube, biliary ducts and cardiovascular system. A custom cover plate provides a rotationally symmetric double reflection of the OCT beam to deviate and focus the OCT beam out the side of the endoscope for cross-sectional imaging of the tubal lumen. Considerations in the choice of the scanning fiber are explored and a new technique to increase the divergence angle of the scanning fiber to improve system performance is presented. Resolution and the necessary scanning density requirements to achieve Nyquist sampling of the full image are considered. The novel optical design lays the groundwork for a new approach integrating side-viewing OCT into multimodality endoscopes for small lumen imaging.

  20. Vertical blind phase search for low-complexity carrier phase recovery of offset-QAM Nyquist WDM transmission

    NASA Astrophysics Data System (ADS)

    Lu, Jianing; Fu, Songnian; Tang, Haoyuan; Xiang, Meng; Tang, Ming; Liu, Deming

    2017-01-01

    A low-complexity carrier phase recovery (CPR) scheme based on vertical blind phase search (V-BPS) for M-ary offset quadrature amplitude modulation (OQAM) is proposed and numerically verified. After investigating the constellations of both even and odd samples with respect to the phase noise, we identify that CPR can be realized by measuring the verticality of the constellation with respect to different test phase angles. A verticality measurement that requires no multiplications in the complex plane is then found, yielding low complexity. Furthermore, a two-stage configuration is put forward to further reduce the computational complexity (CC). Compared with our recently proposed modified blind phase search (M-BPS) algorithm, the proposed algorithm shows comparable tolerance to phase noise, but reduces the CC by a factor of 3.81 (or 3.05) in terms of multipliers (or adders) for the CPR of 16-OQAM.
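
    For orientation, a minimal sketch of the conventional blind-phase-search (BPS) baseline that such schemes refine (a hypothetical illustration for QPSK, not the authors' V-BPS; function names and parameters are our own):

        import numpy as np

        def bps_qpsk(rx, num_test_phases=32, window=64):
            # Conventional BPS: try a grid of test phases, hard-decide each
            # rotated symbol to the nearest constellation point, average the
            # squared decision distances over a window, keep the best phase.
            consts = np.array([1+1j, 1-1j, -1+1j, -1-1j]) / np.sqrt(2)
            phases = np.arange(num_test_phases) * (np.pi / 2) / num_test_phases
            rot = rx[:, None] * np.exp(-1j * phases[None, :])          # N x B
            d2 = np.min(np.abs(rot[:, :, None] - consts) ** 2, axis=2)
            kernel = np.ones(window) / window
            cost = np.apply_along_axis(np.convolve, 0, d2, kernel, 'same')
            best = np.argmin(cost, axis=1)                             # per symbol
            return rx * np.exp(-1j * phases[best])

    Note the N x B complex multiplies in the rotation step; the V-BPS idea described in the abstract is precisely to replace this multiply-heavy distance metric with a multiplication-free verticality measure.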

  1. Numerical simulation and optimal design of Segmented Planar Imaging Detector for Electro-Optical Reconnaissance

    NASA Astrophysics Data System (ADS)

    Chu, Qiuhui; Shen, Yijie; Yuan, Meng; Gong, Mali

    2017-12-01

    Segmented Planar Imaging Detector for Electro-Optical Reconnaissance (SPIDER) is a cutting-edge electro-optical imaging technology for realizing miniaturized, planar imaging systems. In this paper, the principle of SPIDER is numerically demonstrated based on partially coherent light theory, and a novel concept of an adjustable-baseline-pairing SPIDER system is further proposed. Based on the simulation results, it is verified that the imaging quality can be effectively improved by adjusting the Nyquist sampling density, optimizing the baseline pairing method and increasing the spectral channels of the demultiplexer. An adjustable baseline pairing algorithm is therefore established for further enhancing the image quality, and the optimal design procedure in SPIDER for arbitrary targets is also summarized. The SPIDER system with the adjustable baseline pairing method can broaden its applications and reduce cost while maintaining the same imaging quality.

  2. Gigahertz repetition rate, sub-femtosecond timing jitter optical pulse train directly generated from a mode-locked Yb:KYW laser.

    PubMed

    Yang, Heewon; Kim, Hyoji; Shin, Junho; Kim, Chur; Choi, Sun Young; Kim, Guang-Hoon; Rotermund, Fabian; Kim, Jungwon

    2014-01-01

    We show that a 1.13 GHz repetition rate optical pulse train with 0.70 fs high-frequency timing jitter (integration bandwidth of 17.5 kHz-10 MHz, where the measurement instrument-limited noise floor contributes 0.41 fs in 10 MHz bandwidth) can be directly generated from a free-running, single-mode diode-pumped Yb:KYW laser mode-locked by single-wall carbon nanotube-coated mirrors. To our knowledge, this is the lowest-timing-jitter optical pulse train with gigahertz repetition rate ever measured. If this pulse train is used for direct sampling of 565 MHz signals (the Nyquist frequency of the pulse train), the demonstrated jitter level would correspond to a projected effective number of bits (ENOB) of 17.8, which is much higher than the thermal noise limit of a 50 Ω load resistance (~14 bits).
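
    Jitter-to-resolution projections of this kind are conventionally made with the aperture-jitter limit (our gloss; the exact figure in the abstract presumably folds in further noise terms):

        % Jitter-limited SNR for sampling a full-scale sine at frequency f_in
        % with rms aperture jitter \sigma_j:
        \mathrm{SNR} = -20 \log_{10}\!\left( 2\pi f_{\mathrm{in}} \sigma_j \right)\ \mathrm{dB},
        \qquad
        \mathrm{ENOB} \approx \frac{\mathrm{SNR} - 1.76}{6.02}.

        % f_in = 565 MHz, sigma_j = 0.70 fs  =>  SNR ~ 112 dB, ENOB ~ 18,
        % the same order as the 17.8 bits quoted above.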

  3. Ergodic theorem, ergodic theory, and statistical mechanics

    PubMed Central

    Moore, Calvin C.

    2015-01-01

    This perspective highlights the mean ergodic theorem established by John von Neumann and the pointwise ergodic theorem established by George Birkhoff, proofs of which were published nearly simultaneously in PNAS in 1931 and 1932. These theorems were of great significance both in mathematics and in statistical mechanics. In statistical mechanics they provided a key insight into a 60-y-old fundamental problem of the subject—namely, the rationale for the hypothesis that time averages can be set equal to phase averages. The evolution of this problem is traced from the origins of statistical mechanics and Boltzmann's ergodic hypothesis to the Ehrenfests' quasi-ergodic hypothesis, and then to the ergodic theorems. We discuss communications between von Neumann and Birkhoff in the Fall of 1931 leading up to the publication of these papers and related issues of priority. These ergodic theorems initiated a new field of mathematical research called ergodic theory that has thrived ever since, and we discuss some recent developments in ergodic theory that are relevant for statistical mechanics. PMID:25691697
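
    For the reader's convenience, the two theorems in their standard modern form (our statement, not from the abstract): for a measure-preserving transformation T of a probability space (X, B, mu),

        % von Neumann (mean): for f in L^2(mu), the averages converge in L^2-norm:
        \frac{1}{n} \sum_{k=0}^{n-1} f \circ T^{k} \;\xrightarrow{\;L^2\;}\; \bar f .

        % Birkhoff (pointwise): for f in L^1(mu), for mu-almost every x:
        \lim_{n \to \infty} \frac{1}{n} \sum_{k=0}^{n-1} f(T^{k} x) = \bar f(x),
        \quad \text{with } \bar f = \int_X f \, d\mu \text{ when } T \text{ is ergodic.}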

  4. From Einstein's theorem to Bell's theorem: a history of quantum non-locality

    NASA Astrophysics Data System (ADS)

    Wiseman, H. M.

    2006-04-01

    In this Einstein Year of Physics it seems appropriate to look at an important aspect of Einstein's work that is often down-played: his contribution to the debate on the interpretation of quantum mechanics. Contrary to physics ‘folklore', Bohr had no defence against Einstein's 1935 attack (the EPR paper) on the claimed completeness of orthodox quantum mechanics. I suggest that Einstein's argument, as stated most clearly in 1946, could justly be called Einstein's reality-locality-completeness theorem, since it proves that one of these three must be false. Einstein's instinct was that completeness of orthodox quantum mechanics was the falsehood, but he failed in his quest to find a more complete theory that respected reality and locality. Einstein's theorem, and possibly Einstein's failure, inspired John Bell in 1964 to prove his reality-locality theorem. This strengthened Einstein's theorem (but showed the futility of his quest) by demonstrating that either reality or locality is a falsehood. This revealed the full non-locality of the quantum world for the first time.

  5. The spectral theorem for quaternionic unbounded normal operators based on the S-spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alpay, Daniel, E-mail: dany@math.bgu.ac.il; Kimsey, David P., E-mail: dpkimsey@gmail.com; Colombo, Fabrizio, E-mail: fabrizio.colombo@polimi.it

    In this paper we prove the spectral theorem for quaternionic unbounded normal operators using the notion of S-spectrum. The proof technique consists of first establishing a spectral theorem for quaternionic bounded normal operators and then using a transformation which maps a quaternionic unbounded normal operator to a quaternionic bounded normal operator. With this paper we complete the foundation of spectral analysis of quaternionic operators. The S-spectrum has been introduced to define the quaternionic functional calculus, but it turns out to be the correct object also for the spectral theorem for quaternionic normal operators. The lack of a suitable notion of spectrum was a major obstruction to fully understanding the spectral theorem for quaternionic normal operators. A prime motivation for studying the spectral theorem for quaternionic unbounded normal operators is given by the subclass of unbounded anti-self-adjoint quaternionic operators, which play a crucial role in quaternionic quantum mechanics.

  6. Bring the Pythagorean Theorem "Full Circle"

    ERIC Educational Resources Information Center

    Benson, Christine C.; Malm, Cheryl G.

    2011-01-01

    Middle school mathematics generally explores applications of the Pythagorean theorem and lays the foundation for working with linear equations. The Grade 8 Curriculum Focal Points recommend that students "apply the Pythagorean theorem to find distances between points in the Cartesian coordinate plane to measure lengths and analyze polygons and…

  7. Using Discovery in the Calculus Class

    ERIC Educational Resources Information Center

    Shilgalis, Thomas W.

    1975-01-01

    This article shows how two discoverable theorems from elementary calculus can be presented to students in a manner that assists them in making the generalizations themselves. The theorems are the mean value theorems for derivatives and for integrals. A conjecture is suggested by pictures and then refined. (Author/KM)

  8. Three Lectures on Theorem-proving and Program Verification

    NASA Technical Reports Server (NTRS)

    Moore, J. S.

    1983-01-01

    Topics concerning theorem proving and program verification are discussed with particular emphasis on the Boyer/Moore theorem prover, and approaches to program verification such as the functional and interpreter methods and the inductive assertion approach. A history of the discipline and specific program examples are included.

  9. Maximum entropy PDF projection: A review

    NASA Astrophysics Data System (ADS)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having the highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  10. Nonequilibrium thermodynamics of restricted Boltzmann machines.

    PubMed

    Salazar, Domingos S P

    2017-08-01

    In this work, we analyze the nonequilibrium thermodynamics of a class of neural networks known as restricted Boltzmann machines (RBMs) in the context of unsupervised learning. We show how the network is described as a discrete Markov process and how the detailed balance condition and the Maxwell-Boltzmann equilibrium distribution are sufficient conditions for a complete thermodynamic description, including nonequilibrium fluctuation theorems. Numerical simulations in a fully trained RBM are performed and the heat exchange fluctuation theorem is verified with excellent agreement with the theory. We observe how the contrastive divergence functional, mostly used in unsupervised learning of RBMs, is closely related to nonequilibrium thermodynamic quantities. We also use the framework to interpret the estimation of the partition function of RBMs with the annealed importance sampling method from a thermodynamics standpoint. Finally, we argue that unsupervised learning of RBMs is equivalent to a work protocol in a system driven by the laws of thermodynamics in the absence of labeled data.

  11. Experimental violation of local causality in a quantum network.

    PubMed

    Carvacho, Gonzalo; Andreoli, Francesco; Santodonato, Luca; Bentivegna, Marco; Chaves, Rafael; Sciarrino, Fabio

    2017-03-16

    Bell's theorem plays a crucial role in quantum information processing and thus several experimental investigations of Bell inequalities violations have been carried out over the years. Despite their fundamental relevance, however, previous experiments did not consider an ingredient of relevance for quantum networks: the fact that correlations between distant parties are mediated by several, typically independent sources. Here, using a photonic setup, we investigate a quantum network consisting of three spatially separated nodes whose correlations are mediated by two distinct sources. This scenario allows for the emergence of the so-called non-bilocal correlations, incompatible with any local model involving two independent hidden variables. We experimentally witness the emergence of this kind of quantum correlations by violating a Bell-like inequality under the fair-sampling assumption. Our results provide a proof-of-principle experiment of generalizations of Bell's theorem for networks, which could represent a potential resource for quantum communication protocols.

  12. Two Universality Properties Associated with the Monkey Model of Zipf's Law

    NASA Astrophysics Data System (ADS)

    Perline, Richard; Perline, Ron

    2016-03-01

    The distribution of word probabilities in the monkey model of Zipf's law is associated with two universality properties: (1) the power law exponent converges strongly to -1 as the alphabet size increases and the letter probabilities are specified as the spacings from a random division of the unit interval for any distribution with a bounded density function on [0,1]; and (2) on a logarithmic scale, the version of the model with a finite word length cutoff and unequal letter probabilities is approximately normally distributed in the part of the distribution away from the tails. The first property is proved using a remarkably general limit theorem for the logarithm of sample spacings from Shao and Hahn, and the second property follows from Anscombe's central limit theorem for a random number of i.i.d. random variables. The finite word length model leads to a hybrid Zipf-lognormal mixture distribution closely related to work in other areas.
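
    A quick way to see property (1) empirically is to simulate the monkey directly; a minimal sketch (alphabet size, terminator probability and sample length are our own illustrative choices):

        import numpy as np
        from collections import Counter

        rng = np.random.default_rng(0)
        K = 8                                       # alphabet size (assumption)
        cuts = np.sort(rng.uniform(size=K - 1))     # random division of [0, 1]
        letter_p = np.diff(np.concatenate(([0.0], cuts, [1.0])))
        p_space = 0.2                               # word-terminator probability

        # Generate a long stream of random typing; index K encodes the space.
        stream = rng.choice(K + 1, size=2_000_000,
                            p=np.concatenate((letter_p * (1 - p_space), [p_space])))
        text = ''.join(chr(97 + s) if s < K else ' ' for s in stream)
        freq = np.array(sorted(Counter(text.split()).values(), reverse=True), float)

        # Fit the rank-frequency slope on a log-log scale; it should drift
        # toward -1 as the alphabet size K grows, per property (1).
        ranks = np.arange(1, len(freq) + 1)
        slope = np.polyfit(np.log(ranks), np.log(freq), 1)[0]
        print(f"fitted Zipf exponent: {slope:.2f}")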

  13. Experimental violation of local causality in a quantum network

    PubMed Central

    Carvacho, Gonzalo; Andreoli, Francesco; Santodonato, Luca; Bentivegna, Marco; Chaves, Rafael; Sciarrino, Fabio

    2017-01-01

    Bell's theorem plays a crucial role in quantum information processing and thus several experimental investigations of Bell inequalities violations have been carried out over the years. Despite their fundamental relevance, however, previous experiments did not consider an ingredient of relevance for quantum networks: the fact that correlations between distant parties are mediated by several, typically independent sources. Here, using a photonic setup, we investigate a quantum network consisting of three spatially separated nodes whose correlations are mediated by two distinct sources. This scenario allows for the emergence of the so-called non-bilocal correlations, incompatible with any local model involving two independent hidden variables. We experimentally witness the emergence of this kind of quantum correlations by violating a Bell-like inequality under the fair-sampling assumption. Our results provide a proof-of-principle experiment of generalizations of Bell's theorem for networks, which could represent a potential resource for quantum communication protocols. PMID:28300068

  14. Experimental violation of local causality in a quantum network

    NASA Astrophysics Data System (ADS)

    Carvacho, Gonzalo; Andreoli, Francesco; Santodonato, Luca; Bentivegna, Marco; Chaves, Rafael; Sciarrino, Fabio

    2017-03-01

    Bell's theorem plays a crucial role in quantum information processing and thus several experimental investigations of Bell inequalities violations have been carried out over the years. Despite their fundamental relevance, however, previous experiments did not consider an ingredient of relevance for quantum networks: the fact that correlations between distant parties are mediated by several, typically independent sources. Here, using a photonic setup, we investigate a quantum network consisting of three spatially separated nodes whose correlations are mediated by two distinct sources. This scenario allows for the emergence of the so-called non-bilocal correlations, incompatible with any local model involving two independent hidden variables. We experimentally witness the emergence of this kind of quantum correlations by violating a Bell-like inequality under the fair-sampling assumption. Our results provide a proof-of-principle experiment of generalizations of Bell's theorem for networks, which could represent a potential resource for quantum communication protocols.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Austin, Anthony P.; Trefethen, Lloyd N.

    The trigonometric interpolants to a periodic function f in equispaced points converge if f is Dini-continuous, and the associated quadrature formula, the trapezoidal rule, converges if f is continuous. What if the points are perturbed? With equispaced grid spacing h, let each point be perturbed by an arbitrary amount <= alpha h, where alpha, an element of [0, 1/2), is a fixed constant. The Kadec 1/4 theorem of sampling theory suggests there may be trouble for alpha >= 1/4. We show that convergence of both the interpolants and the quadrature estimates is guaranteed for all alpha < 1/2 if f is twice continuously differentiable, with the convergence rate depending on the smoothness of f. More precisely, it is enough for f to have 4 alpha derivatives in a certain sense, and we conjecture that 2 alpha derivatives are enough. Connections with the Fejer-Kalmar theorem are discussed.
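
    A minimal numerical check of this setup (our illustration, not the authors' code): build the trigonometric interpolant through perturbed points by solving a small linear system, and integrate it by taking the constant Fourier coefficient.

        import numpy as np

        rng = np.random.default_rng(1)
        f = lambda x: np.exp(np.sin(x))        # smooth 2*pi-periodic test function
        exact = 7.954926521012845              # integral over [0, 2*pi]: 2*pi*I_0(1)

        n, alpha = 32, 0.3                     # alpha < 1/2, as the theorem requires
        N = 2 * n + 1
        h = 2 * np.pi / N
        x = h * (np.arange(N) + alpha * rng.uniform(-1, 1, N))   # perturbed grid

        # Interpolate with exp(i*k*x), k = -n..n, then integrate: only the k = 0
        # mode contributes, so the quadrature value is 2*pi times its coefficient.
        k = np.arange(-n, n + 1)
        c = np.linalg.solve(np.exp(1j * np.outer(x, k)), f(x))
        quad = (2 * np.pi * c[n]).real         # c[n] is the k = 0 coefficient
        print(abs(quad - exact))               # small, despite the perturbation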

  16. Generalized chaos synchronization theorems for bidirectional differential equations and discrete systems with applications

    NASA Astrophysics Data System (ADS)

    Ji, Ye; Liu, Ting; Min, Lequan

    2008-05-01

    Two constructive generalized chaos synchronization (GCS) theorems for bidirectional differential equations and discrete systems are introduced. Using the two theorems, one can construct new chaos systems to make the system variables be in GCS. Five examples are presented to illustrate the effectiveness of the theoretical results.

  17. The Law of Cosines for an "n"-Dimensional Simplex

    ERIC Educational Resources Information Center

    Ding, Yiren

    2008-01-01

    Using the divergence theorem technique of L. Eifler and N.H. Rhee, "The n-dimensional Pythagorean Theorem via the Divergence Theorem" (to appear: Amer. Math. Monthly), we extend the law of cosines for a triangle in a plane to an "n"-dimensional simplex in an "n"-dimensional space.

  18. When 95% Accurate Isn't: Exploring Bayes's Theorem

    ERIC Educational Resources Information Center

    CadwalladerOlsker, Todd D.

    2011-01-01

    Bayes's theorem is notorious for being a difficult topic to learn and to teach. Problems involving Bayes's theorem (either implicitly or explicitly) generally involve calculations based on two or more given probabilities and their complements. Further, a correct solution depends on students' ability to interpret the problem correctly. Most people…

  19. Optimal Keno Strategies and the Central Limit Theorem

    ERIC Educational Resources Information Center

    Johnson, Roger W.

    2006-01-01

    For the casino game Keno we determine optimal playing strategies. To decide such optimal strategies, both exact (hypergeometric) and approximate probability calculations are used. The approximate calculations are obtained via the Central Limit Theorem and simulation, and an important lesson about the application of the Central Limit Theorem is…

  20. Computer Algebra Systems and Theorems on Real Roots of Polynomials

    ERIC Educational Resources Information Center

    Aidoo, Anthony Y.; Manthey, Joseph L.; Ward, Kim Y.

    2010-01-01

    A computer algebra system is used to derive a theorem on the existence of roots of a quadratic equation on any bounded real interval. This is extended to a cubic polynomial. We discuss how students could be led to derive and prove these theorems. (Contains 1 figure.)

  1. Fluctuation theorem for Hamiltonian Systems: Le Chatelier's principle

    NASA Astrophysics Data System (ADS)

    Evans, Denis J.; Searles, Debra J.; Mittag, Emil

    2001-05-01

    For thermostated dissipative systems, the fluctuation theorem gives an analytical expression for the ratio of probabilities that the time-averaged entropy production in a finite system observed for a finite time takes on a specified value compared to the negative of that value. In the past, it has been generally thought that the presence of some thermostating mechanism was an essential component of any system that satisfies a fluctuation theorem. In the present paper, we point out that a fluctuation theorem can be derived for purely Hamiltonian systems, with or without applied dissipative fields.
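
    Schematically, the analytical expression referred to above takes the standard Evans-Searles form (our paraphrase of the generic statement, with Sigma-bar_t the time-averaged entropy production over an interval t):

        \frac{P(\bar{\Sigma}_t = A)}{P(\bar{\Sigma}_t = -A)} = e^{A t}

        % i.e., positive time-averaged entropy production is exponentially more
        % probable than the corresponding negative value, with the asymmetry
        % growing with the observation time t.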

  2. Nambu-Goldstone theorem and spin-statistics theorem

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo

    On December 19-21 in 2001, we organized a yearly workshop at Yukawa Institute for Theoretical Physics in Kyoto on the subject of "Fundamental Problems in Field Theory and their Implications". Prof. Yoichiro Nambu attended this workshop and explained a necessary modification of the Nambu-Goldstone theorem when applied to nonrelativistic systems. At the same workshop, I talked on a path integral formulation of the spin-statistics theorem. The present essay is on this memorable workshop, where I really enjoyed the discussions with Nambu, together with a short comment on the color freedom of quarks.

  3. Counting Heron Triangles with Constraints

    DTIC Science & Technology

    2013-01-25

    Heron triangle is an integer, then b is even, say b = 2b_1. By Pythagoras' theorem, a^2 = h^2 + 4b_1^2, and since in a Heron triangle, the heights are always... our first result, which follows an idea of [10, Theorem 2.3]. Theorem 4. Let a, b be two fixed integers, and let ab be factored as in (1). Then H(a, b)... which we derive the result. Theorem 4 immediately offers us an interesting observation regarding a special class of fixed sides (a, b). Corollary 5. If

  4. On Pythagoras Theorem for Products of Spectral Triples

    NASA Astrophysics Data System (ADS)

    D'Andrea, Francesco; Martinetti, Pierre

    2013-05-01

    We discuss a version of Pythagoras' theorem in noncommutative geometry. The usual Pythagoras theorem can be formulated in terms of Connes' distance between pure states in the product of commutative spectral triples. We investigate the generalization to both non-pure states and arbitrary spectral triples. We show that Pythagoras' theorem is replaced by some Pythagoras inequalities, which we prove for the product of arbitrary (i.e., not necessarily commutative) spectral triples, assuming only a unitality condition. We show that these inequalities are optimal, and we provide non-unital counter-examples inspired by K-homology.

  5. Which symmetry? Noether, Weyl, and conservation of electric charge

    NASA Astrophysics Data System (ADS)

    Brading, Katherine A.

    In 1918, Emmy Noether published a (now famous) theorem establishing a general connection between continuous 'global' symmetries and conserved quantities. In fact, Noether's paper contains two theorems, and the second of these deals with 'local' symmetries; prima facie, this second theorem has nothing to do with conserved quantities. In the same year, Hermann Weyl independently made the first attempt to derive conservation of electric charge from a postulated gauge symmetry. In the light of Noether's work, it is puzzling that Weyl's argument uses local gauge symmetry. This paper explores the relationships between Weyl's work, Noether's two theorems, and the modern connection between gauge symmetry and conservation of electric charge. This includes showing that Weyl's connection is essentially an application of Noether's second theorem, with a novel twist.

  6. Quotable Quotes on the Value of Language Study.

    ERIC Educational Resources Information Center

    Language Association Bulletin, 1974

    1974-01-01

    Quotations about the importance of and need for foreign language teaching and learning by well-known U. S. politicians, college and university presidents, religious leaders, and government officials are presented. Those quoted include: (1) J. M. Leslie; (2) E. Nyquist; (3) J. M. Hester; (4) C. B. Saunders; (5) L. White, Jr.; (6) H. Humphrey; (7)…

  7. Mineralogical Study of a Gray Anorthositic Clast in the Yamato 86032 Lunar Meteorite: Windows to the Far-Side Highland

    NASA Astrophysics Data System (ADS)

    Takeda, H.; Nyquist, L. E.; Kojima, H.

    2002-03-01

    We performed a mineralogical study of a large gray clast (Y86032,83-1). Comparing our data with the Ar-Ar age of 4.49 Ga and negative εNd data of Nyquist et al., we propose that the original anorthosite is an important ferroan anorthosite (FAN) of the farside highland.

  8. Preclinical Testing of a New MR Imaging Approach to Distinguish Aggressive from Indolent Disease

    DTIC Science & Technology

    2014-06-01

    Litwin, M. S. (2004) Predicting quality of life after radical prostatectomy: results from CaPSURE. J Urol 171, 703-7; discussion 707-8. 4. Wei, J. T., Dunn, R. L., Sandler, H. M., McLaughlin, P. W., Montie, J. E., Litwin, M. S., Nyquist, L., & Sanda, M. G. (2002) Comprehensive comparison of

  9. Collected Papers in Teaching English as a Second Language and Bilingual Education: Themes, Practices, Viewpoints.

    ERIC Educational Resources Information Center

    Light, Richard L., Ed.; Osman, Alice H., Ed.

    This volume contains the following papers: (1) "Linguistics, TESOL, and Bilingual Education: An Overview," by J.E. Alatis; (2) "TESOL: Meeting a Social Need," by M. Galvan; (3) "Bilingual Education, TESL, and Ethnicity in New York State," by E.B. Nyquist; (4) "Control, Initiative, and the Whole Learner," by…

  10. Rocks: A Concrete Activity That Introduces Normal Distribution, Sampling Error, Central Limit Theorem and True Score Theory

    ERIC Educational Resources Information Center

    Van Duzer, Eric

    2011-01-01

    This report introduces a short, hands-on activity that addresses a key challenge in teaching quantitative methods to students who lack confidence or experience with statistical analysis. Used near the beginning of the course, this activity helps students develop an intuitive insight regarding a number of abstract concepts which are key to…

  11. Estimation of a closed population size of tadpoles in temporary pond.

    PubMed

    Lima, M S C S; Pederassi, J; Souza, C A S

    2018-05-01

    The practice of capture-recapture to estimate diversity is well known for many animal groups; however, its use in the larval phase of anuran amphibians is incipient. We aimed to evaluate the Lincoln estimator, the Venn diagram and Bayes' theorem for inferring the population size of a larval-phase anurocenose in a lotic environment. The adherence of the results was evaluated using the Kolmogorov-Smirnov test. Tadpoles were marked for later recapture with eosin methylene blue. When comparing the results of the Lincoln-Petersen estimator with those of the Venn diagram and Bayes' theorem, we detected percentage differences per sampling, i.e., the proportion of sampled anuran genera is kept among the three methods, although the values are numerically different. Submitting these results to the Kolmogorov-Smirnov test, we found no significant differences. Therefore, no matter the estimator, the measured value is adherent and estimates the total population. Together with the marking methodology, which did not change the behavior of the tadpoles, the present study helps to fill the need for more studies on the larval phase of amphibians in Brazil, especially in the semi-arid northeast.
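
    The classical closed-population estimate referred to above is a one-line computation; a minimal sketch (the counts are hypothetical, not data from this study):

        def lincoln_petersen(M, C, R):
            """Closed-population size from one mark-recapture pass:
            M marked in the first sample, C caught in the second, R recaptures."""
            return M * C / R

        def chapman(M, C, R):
            # Bias-corrected variant; also defined when R == 0.
            return (M + 1) * (C + 1) / (R + 1) - 1

        # Hypothetical counts for a single tadpole genus:
        print(lincoln_petersen(150, 120, 30))   # 600.0
        print(chapman(150, 120, 30))            # ~588.4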

  12. Terrestrial Chemical Alteration of Hot Desert Meteorites

    NASA Astrophysics Data System (ADS)

    Crozaz, G.; Floss, C.

    2001-12-01

    Large numbers of meteorites have recently been recovered from terrestrial hot deserts. They include objects whose study holds the promise of significantly increasing our knowledge of the origin and petrogenesis of rare groups of meteorites (e.g., martian and lunar rocks, ureilites, etc.). However, these meteorites have typically been exposed to harsh desert conditions for more than 10,000 yr since their fall on Earth. A number of alterations have been described, including mineralogical and chemical changes (e.g., Crozaz and Wadhwa, 2001, and references therein). Through weathering, Fe-bearing minerals are progressively altered into clays and iron oxides and hydroxides, which often fill cracks and mineral fractures, together with terrestrial quartz and carbonates. In addition, for whole-rock samples, elevated Ba, Sr, and U seem to be the telltale signs of terrestrial contamination (e.g., Barrat et al., 1999). In our work, we use the rare earth elements (REE) as monitors of terrestrial alteration. These elements are important because they are commonly used to decipher the petrogenesis and chronology of meteorites. We have made in-situ concentration measurements, by secondary ion mass spectrometry (SIMS), of individual grains in shergottites (assumed to have formed on Mars), lunar, and angritic meteorites. Terrestrial contamination, in the form of LREE enrichment and Ce anomalies, is encountered in the olivine and pyroxene, the two minerals with the lowest REE concentrations, of all objects analyzed. However, the contamination is highly heterogeneous, affecting some grains and not others of a given phase. Therefore, provided one uses a measurement technique such as SIMS to obtain data on individual grains and to identify the unaltered ones, it is still possible to obtain geochemical information about the origins of hot desert meteorites. On the other hand, great caution must be exercised if one uses data for whole rocks or mineral separates. The U-Pb, Rb-Sr and Sm-Nd systematics are likely to be affected by terrestrial contamination even in samples with a fresh appearance. Leachates are particularly suspicious (Crozaz and Wadhwa, 2001; Dreibus et al., 2001). In the case of shergottites, which have proven difficult to date (Nyquist et al., 2001), this is a complicating and especially unfortunate factor. References: Barrat et al. (1999) MAPS 34, 91-97. Crozaz G. and Wadhwa M. (2001) GCA 65, 971-978. Dreibus et al. (2001) MAPS, in press. Nyquist et al. (2001) Space Sci. Rev., in press.

  13. Potential applications of microtesla magnetic resonance imaging detected using a superconducting quantum interference device

    NASA Astrophysics Data System (ADS)

    Myers, Whittier Ryan

    This dissertation describes magnetic resonance imaging (MRI) of protons performed in a precession field of 132 μT. In order to increase the signal-to-noise ratio (SNR), a pulsed 40-300 mT magnetic field prepolarizes the sample spins and an untuned second-order superconducting gradiometer coupled to a low-transition-temperature superconducting quantum interference device (SQUID) detects the subsequent 5.6-kHz spin precession. Imaging sequences including multiple echoes and partial Fourier reconstruction are developed. Calculating the SNR of prepolarized SQUID-detected MRI shows that three-dimensional Fourier imaging yields higher SNR than slice-selection imaging. An experimentally demonstrated field-cycling pulse sequence and post-processing algorithm mitigate image artifacts caused by concomitant gradients in low-field MRI. The magnetic field noise of SQUID untuned detection is compared to the noise of SQUID tuned detection, conventional Faraday detection, and the Nyquist noise generated by conducting biological samples. A second-generation microtesla MRI system employing a low-noise SQUID is constructed to increase SNR. A 2.4-m cubic eddy-current shield with 6-mm thick aluminum walls encloses the experiment to attenuate external noise. The measured noise is 0.75 fT/Hz^(1/2) referred to the bottom gradiometer loop. Solenoids wound from 30-strand braided wire to decrease Nyquist noise and cooled by either liquid nitrogen or water polarize the spins. Copper wire coils wound on wooden supports produce the imaging magnetic fields and field gradients. Water phantom images with 0.8 x 0.8 x 10 mm3 resolution have an SNR of 6. Three-dimensional 1.6 x 1.9 x 14 mm3 images of bell peppers and 3 x 3 x 26 mm3 in vivo images of the human arm are presented. Since contrast based on the longitudinal spin relaxation time (T1) is enhanced at low magnetic fields, microtesla MRI could potentially be used for tumor imaging. The measured T1 of ex vivo normal and cancerous prostate tissue differ significantly at 132 μT. A single-sided MRI system designed for prostate imaging could achieve 3 x 3 x 5 mm3 resolution in 8 minutes. Existing SQUID-based magnetoencephalography (MEG) systems could be used as microtesla MRI detectors. A commercial 275-channel MEG system could acquire 6-minute brain images with (4 mm)^3 resolution and an SNR of 16.

  14. Discrimination of bed form scales using robust spline filters and wavelet transforms: Methods and application to synthetic signals and bed forms of the Río Paraná, Argentina

    NASA Astrophysics Data System (ADS)

    Gutierrez, Ronald R.; Abad, Jorge D.; Parsons, Daniel R.; Best, James L.

    2013-09-01

    There is no standard nomenclature and procedure to systematically identify the scale and magnitude of bed forms such as bars, dunes, and ripples that are commonly present in many sedimentary environments. This paper proposes a standardization of the nomenclature and symbolic representation of bed forms and details the combined application of robust spline filters and continuous wavelet transforms to discriminate these morphodynamic features, allowing the quantitative recognition of bed form hierarchies. Herein the proposed methodology for bed form discrimination is first applied to synthetic bed form profiles, which are sampled at a Nyquist ratio interval of 2.5-50 and a signal-to-noise ratio interval of 1-20, and subsequently applied to a detailed 3-D bed topography from the Río Paraná, Argentina, which exhibits large-scale dunes with superimposed, smaller bed forms. After discriminating the synthetic bed form signals into three bed form hierarchies that represent bars, dunes, and ripples, the accuracy of the methodology is quantified by estimating the reproducibility, the cross correlation, and the standard deviation ratio of the actual and retrieved signals. For the case of the field measurements, the proposed method is used to discriminate small and large dunes and subsequently obtain and statistically analyze the common morphological descriptors such as wavelength, slope, and amplitude of both stoss and lee sides of these different size bed forms. Analysis of the synthetic signals demonstrates that the Morlet wavelet function is the most efficient in retrieving smaller periodicities such as ripples and smaller dunes and that the proposed methodology effectively discriminates waves of different periods for Nyquist ratios higher than 25 and signal-to-noise ratios higher than 5. The analysis of bed forms in the Río Paraná reveals that, in most cases, a Gamma probability distribution, with a positive skewness, best describes the dimensionless wavelength and amplitude for both the lee and stoss sides of large dunes. For the case of smaller superimposed dunes, the dimensionless wavelength shows a discrete behavior that is governed by the sampling frequency of the data, and the dimensionless amplitude better fits the Gamma probability distribution, again with a positive skewness. This paper thus provides a robust methodology for systematically identifying the scales and magnitudes of bed forms in a range of environments.

  15. Time Evolution of the Dynamical Variables of a Stochastic System.

    ERIC Educational Resources Information Center

    de la Pena, L.

    1980-01-01

    By using the method of moments, it is shown that several important and apparently unrelated theorems describing average properties of stochastic systems are in fact particular cases of a general law; this method is applied to generalize the virial theorem and the fluctuation-dissipation theorem to the time-dependent case. (Author/SK)

  16. A Generalization of the Prime Number Theorem

    ERIC Educational Resources Information Center

    Bruckman, Paul S.

    2008-01-01

    In this article, the author begins with the prime number theorem (PNT), and then develops this into a more general theorem, of which many well-known number theoretic results are special cases, including PNT. He arrives at an asymptotic relation that allows the replacement of certain discrete sums involving primes into corresponding differentiable…

  17. A Fascinating Application of Steiner's Theorem for Trapezium: Geometric Constructions Using Straightedge Alone

    ERIC Educational Resources Information Center

    Stupel, Moshe; Ben-Chaim, David

    2013-01-01

    Based on Steiner's fascinating theorem for trapezium, seven geometrical constructions using straight-edge alone are described. These constructions provide an excellent base for teaching theorems and the properties of geometrical shapes, as well as challenging thought and inspiring deeper insight into the world of geometry. In particular, this…

  18. Leaning on Socrates to Derive the Pythagorean Theorem

    ERIC Educational Resources Information Center

    Percy, Andrew; Carr, Alistair

    2010-01-01

    The one theorem just about every student remembers from school is the theorem about the side lengths of a right angled triangle which Euclid attributed to Pythagoras when writing Proposition 47 of "The Elements". Usually first met in middle school, the student will be continually exposed throughout their mathematical education to the…

  19. Unpacking Rouché's Theorem

    ERIC Educational Resources Information Center

    Howell, Russell W.; Schrohe, Elmar

    2017-01-01

    Rouché's Theorem is a standard topic in undergraduate complex analysis. It is usually covered near the end of the course with applications relating to pure mathematics only (e.g., using it to produce an alternate proof of the Fundamental Theorem of Algebra). The "winding number" provides a geometric interpretation relating to the…

  20. Geometry of the Adiabatic Theorem

    ERIC Educational Resources Information Center

    Lobo, Augusto Cesar; Ribeiro, Rafael Antunes; Ribeiro, Clyffe de Assis; Dieguez, Pedro Ruas

    2012-01-01

    We present a simple and pedagogical derivation of the quantum adiabatic theorem for two-level systems (a single qubit) based on geometrical structures of quantum mechanics developed by Anandan and Aharonov, among others. We have chosen to use only the minimum geometric structure needed for the understanding of the adiabatic theorem for this case.…

  1. The Classical Version of Stokes' Theorem Revisited

    ERIC Educational Resources Information Center

    Markvorsen, Steen

    2008-01-01

    Using only fairly simple and elementary considerations--essentially from first year undergraduate mathematics--we show how the classical Stokes' theorem for any given surface and vector field in R[superscript 3] follows from an application of Gauss' divergence theorem to a suitable modification of the vector field in a tubular shell around the…

  2. The Parity Theorem Shuffle

    ERIC Educational Resources Information Center

    Smith, Michael D.

    2016-01-01

    The Parity Theorem states that any permutation can be written as a product of transpositions, but no permutation can be written as a product of both an even number and an odd number of transpositions. Most proofs of the Parity Theorem take several pages of mathematical formalism to complete. This article presents an alternative but equivalent…

  3. Visualizing the Central Limit Theorem through Simulation

    ERIC Educational Resources Information Center

    Ruggieri, Eric

    2016-01-01

    The Central Limit Theorem is one of the most important concepts taught in an introductory statistics course, however, it may be the least understood by students. Sure, students can plug numbers into a formula and solve problems, but conceptually, do they really understand what the Central Limit Theorem is saying? This paper describes a simulation…
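
    In the spirit of the simulation described above, a minimal sketch of the kind of demonstration involved (our own example, not the paper's code): sample means of a markedly non-normal population become approximately normal, with spread shrinking as 1/sqrt(n).

        import numpy as np

        rng = np.random.default_rng(42)
        # Means of n exponential draws (population mean 1, population std 1):
        for n in (1, 5, 30):
            means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
            print(f"n={n:2d}  mean={means.mean():.3f}  std={means.std():.3f}  "
                  f"(CLT predicts std ~ {1 / np.sqrt(n):.3f})")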

  4. Virtual continuity of measurable functions and its applications

    NASA Astrophysics Data System (ADS)

    Vershik, A. M.; Zatitskii, P. B.; Petrov, F. V.

    2014-12-01

    A classical theorem of Luzin states that a measurable function of one real variable is `almost' continuous. For measurable functions of several variables the analogous statement (continuity on a product of sets having almost full measure) does not hold in general. The search for a correct analogue of Luzin's theorem leads to a notion of virtually continuous functions of several variables. This apparently new notion implicitly appears in the statements of embedding theorems and trace theorems for Sobolev spaces. In fact it reveals the nature of such theorems as statements about virtual continuity. The authors' results imply that under the conditions of Sobolev theorems there is a well-defined integration of a function with respect to a wide class of singular measures, including measures concentrated on submanifolds. The notion of virtual continuity is also used for the classification of measurable functions of several variables and in some questions on dynamical systems, the theory of polymorphisms, and bistochastic measures. In this paper the necessary definitions and properties of admissible metrics are recalled, several definitions of virtual continuity are given, and some applications are discussed. Bibliography: 24 titles.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koenig, Robert; Institute for Quantum Information, California Institute of Technology, Pasadena, California 91125; Mitchison, Graeme

    In its most basic form, the finite quantum de Finetti theorem states that the reduced k-partite density operator of an n-partite symmetric state can be approximated by a convex combination of k-fold product states. Variations of this result include Renner's 'exponential' approximation by 'almost-product' states, a theorem which deals with certain triples of representations of the unitary group, and the result of D'Cruz et al. [e-print quant-ph/0606139; Phys. Rev. Lett. 98, 160406 (2007)] for infinite-dimensional systems. We show how these theorems follow from a single, general de Finetti theorem for representations of symmetry groups, each instance corresponding to a particular choice of symmetry group and representation of that group. This gives some insight into the nature of the set of approximating states and leads to some new results, including an exponential theorem for infinite-dimensional systems.

  6. The Levy sections theorem revisited

    NASA Astrophysics Data System (ADS)

    Figueiredo, Annibal; Gleria, Iram; Matsushita, Raul; Da Silva, Sergio

    2007-06-01

    This paper revisits the Levy sections theorem. We extend the scope of the theorem to time series and apply it to historical daily returns of selected dollar exchange rates. The elevated kurtosis usually observed in such series is then explained by their volatility patterns. And the duration of exchange rate pegs explains the extra elevated kurtosis in the exchange rates of emerging markets. In the end, our extension of the theorem provides an approach that is simpler than the more common explicit modelling of fat tails and dependence. Our main purpose is to build up a technique based on the sections that allows one to artificially remove the fat tails and dependence present in a data set. By analysing data through the lenses of the Levy sections theorem one can find common patterns in otherwise very different data sets.

  7. Cellular compartmentation follows rules: The Schnepf theorem, its consequences and exceptions: A biological membrane separates a plasmatic from a non-plasmatic phase.

    PubMed

    Moog, Daniel; Maier, Uwe G

    2017-08-01

    Is the spatial organization of membranes and compartments within cells subject to any rules? Cellular compartmentation differs between prokaryotic and eukaryotic life, because it is present to a high degree only in eukaryotes. In 1964, Prof. Eberhard Schnepf formulated the compartmentation rule (Schnepf theorem), which posits that a biological membrane, the main physical structure responsible for cellular compartmentation, usually separates a plasmatic from a non-plasmatic phase. Here we review and re-investigate the Schnepf theorem by applying it to different cellular structures, from bacterial cells to eukaryotes with their organelles and compartments. In conclusion, we can confirm the general correctness of the Schnepf theorem, noting explicit exceptions only in special cases such as endosymbiosis and parasitism.

  8. Guided discovery of the nine-point circle theorem and its proof

    NASA Astrophysics Data System (ADS)

    Buchbinder, Orly

    2018-01-01

    The nine-point circle theorem is one of the most beautiful and surprising theorems in Euclidean geometry. It establishes an existence of a circle passing through nine points, all of which are related to a single triangle. This paper describes a set of instructional activities that can help students discover the nine-point circle theorem through investigation in a dynamic geometry environment, and consequently prove it using a method of guided discovery. The paper concludes with a variety of suggestions for the ways in which the whole set of activities can be implemented in geometry classrooms.

  9. Kato type operators and Weyl's theorem

    NASA Astrophysics Data System (ADS)

    Duggal, B. P.; Djordjevic, S. V.; Kubrusly, Carlos

    2005-09-01

    A Banach space operator T satisfies Weyl's theorem if and only if T or T* has SVEP at all complex numbers λ in the complement of the Weyl spectrum of T and T is Kato type at all λ which are isolated eigenvalues of T of finite algebraic multiplicity. If T* (respectively, T) has SVEP and T is Kato type at all λ which are isolated eigenvalues of T of finite algebraic multiplicity (respectively, T is Kato type at all λ ∈ iso σ(T)), then T satisfies a-Weyl's theorem (respectively, T* satisfies a-Weyl's theorem).

  10. Cooperation Among Theorem Provers

    NASA Technical Reports Server (NTRS)

    Waldinger, Richard J.

    1998-01-01

    In many years of research, a number of powerful theorem-proving systems have arisen with differing capabilities and strengths. Resolution theorem provers (such as Kestrel's KITP or SRI's SNARK) deal with first-order logic with equality but not the principle of mathematical induction. The Boyer-Moore theorem prover excels at proof by induction but cannot deal with full first-order logic. Both are highly automated but cannot accept user guidance easily. The purpose of this project, and the companion project at Kestrel, has been to use the category-theoretic notion of logic morphism to combine systems with different logics and languages.

  11. Fluctuation theorem: A critical review

    NASA Astrophysics Data System (ADS)

    Malek Mansour, M.; Baras, F.

    2017-10-01

    The fluctuation theorem for entropy production is revisited in the framework of stochastic processes. The applicability of the fluctuation theorem to physico-chemical systems, and the resulting stochastic thermodynamics, is analyzed. Some unexpected limitations are highlighted in the context of jump Markov processes. We show that these limitations handicap the ability of the resulting stochastic thermodynamics to correctly describe the state of non-equilibrium systems in terms of the thermodynamic properties of individual processes therein. Finally, we consider the case of diffusion processes and prove that the fluctuation theorem for entropy production becomes irrelevant at the stationary state in the case of one-variable systems.

  12. The C^r dependence problem of eigenvalues of the Laplace operator on domains in the plane

    NASA Astrophysics Data System (ADS)

    Haddad, Julian; Montenegro, Marcos

    2018-03-01

    The C^r dependence problem of multiple Dirichlet eigenvalues on domains is discussed for elliptic operators by considering C^(r+1)-smooth one-parameter families of C^1 perturbations of domains in R^n. As applications of our main theorem (Theorem 1), we provide a fairly complete description for all eigenvalues of the Laplace operator on disks and squares in R^2 and also for its second eigenvalue on balls in R^n for any n ≥ 3. The central tool used in our proof is a degenerate implicit function theorem on Banach spaces (Theorem 2) of independent interest.

  13. Nambu-Goldstone theorem and spin-statistics theorem

    NASA Astrophysics Data System (ADS)

    Fujikawa, Kazuo

    2016-05-01

    On December 19-21 in 2001, we organized a yearly workshop at Yukawa Institute for Theoretical Physics in Kyoto on the subject of “Fundamental Problems in Field Theory and their Implications”. Prof. Yoichiro Nambu attended this workshop and explained a necessary modification of the Nambu-Goldstone theorem when applied to non-relativistic systems. At the same workshop, I talked on a path integral formulation of the spin-statistics theorem. The present essay is on this memorable workshop, where I really enjoyed the discussions with Nambu, together with a short comment on the color freedom of quarks.

  14. Solving a Class of Spatial Reasoning Problems: Minimal-Cost Path Planning in the Cartesian Plane.

    DTIC Science & Technology

    1987-06-01

    as in Figure 72. By the Theorem of Pythagoras, the cost of going along (a,b,c) is greater than the... preceding lemmas to an indefinite number of boundary-crossing episodes is accomplished by the following theorems. Theorem 1 extends the result of Lemma 1... Theorem 1: Any two Snell's-law paths within a K-explored wedge defined by Snell's-law paths RL and R. do not intersect within the K-explored portion of

  15. Discovering Theorems in Abstract Algebra Using the Software "GAP"

    ERIC Educational Resources Information Center

    Blyth, Russell D.; Rainbolt, Julianne G.

    2010-01-01

    A traditional abstract algebra course typically consists of the professor stating and then proving a sequence of theorems. As an alternative to this classical structure, the students could be expected to discover some of the theorems even before they are motivated by classroom examples. This can be done by using a software system to explore a…

  16. Bell's Theorem and Einstein's "Spooky Actions" from a Simple Thought Experiment

    ERIC Educational Resources Information Center

    Kuttner, Fred; Rosenblum, Bruce

    2010-01-01

    In 1964 John Bell proved a theorem allowing the experimental test of whether what Einstein derided as "spooky actions at a distance" actually exist. We will see that they "do". Bell's theorem can be displayed with a simple, nonmathematical thought experiment suitable for a physics course at "any" level. And a simple, semi-classical derivation of…

  17. Unique Factorization and the Fundamental Theorem of Arithmetic

    ERIC Educational Resources Information Center

    Sprows, David

    2017-01-01

    The fundamental theorem of arithmetic is one of those topics in mathematics that somehow "falls through the cracks" in a student's education. When asked to state this theorem, those few students who are willing to give it a try (most have no idea of its content) will say something like "every natural number can be broken down into a…

  18. Viète's Formula and an Error Bound without Taylor's Theorem

    ERIC Educational Resources Information Center

    Boucher, Chris

    2018-01-01

    This note presents a derivation of Viète's classic product approximation of pi that relies on only the Pythagorean Theorem. We also give a simple error bound for the approximation that, while not optimal, still reveals the exponential convergence of the approximation and whose derivation does not require Taylor's Theorem.
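
    The recursion behind the product, and the exponential convergence mentioned above, are easy to see numerically; a minimal sketch (the iteration count is arbitrary):

        import math

        # Viete's product: 2/pi = (sqrt(2)/2) * (sqrt(2+sqrt(2))/2) * ...
        a, prod = 0.0, 1.0
        for k in range(1, 11):
            a = math.sqrt(2.0 + a)      # nested radical: a_k = sqrt(2 + a_(k-1))
            prod *= a / 2.0
            approx = 2.0 / prod
            print(k, approx, abs(approx - math.pi))   # error shrinks ~4x per term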

  19. A Physical Proof of the Pythagorean Theorem

    ERIC Educational Resources Information Center

    Treeby, David

    2017-01-01

    What proof of the Pythagorean theorem might appeal to a physics teacher? A proof that involved the notion of mass would surely be of interest. While various proofs of the Pythagorean theorem employ the circumcenter and incenter of a right-angled triangle, we are not aware of any proof that uses the triangle's center of mass. This note details one…

  20. Quantum Field Theory on Spacetimes with a Compactly Generated Cauchy Horizon

    NASA Astrophysics Data System (ADS)

    Kay, Bernard S.; Radzikowski, Marek J.; Wald, Robert M.

    1997-02-01

    We prove two theorems which concern difficulties in the formulation of the quantum theory of a linear scalar field on a spacetime, (M, g_{ab}), with a compactly generated Cauchy horizon. These theorems demonstrate the breakdown of the theory at certain base points of the Cauchy horizon, which are defined as 'past terminal accumulation points' of the horizon generators. Thus, the theorems may be interpreted as giving support to Hawking's 'Chronology Protection Conjecture', according to which the laws of physics prevent one from manufacturing a 'time machine'. Specifically, we prove: Theorem 1. There is no extension to (M, g_{ab}) of the usual field algebra on the initial globally hyperbolic region which satisfies the condition of F-locality at any base point. In other words, any extension of the field algebra must, in any globally hyperbolic neighbourhood of any base point, differ from the algebra one would define on that neighbourhood according to the rules for globally hyperbolic spacetimes. Theorem 2. The two-point distribution for any Hadamard state defined on the initial globally hyperbolic region must (when extended to a distributional bisolution of the covariant Klein-Gordon equation on the full spacetime) be singular at every base point x in the sense that the difference between this two-point distribution and a local Hadamard distribution cannot be given by a bounded function in any neighbourhood (in M × M) of (x, x). In consequence of Theorem 2, quantities such as the renormalized expectation value of φ² or of the stress-energy tensor are necessarily ill-defined or singular at any base point. The proof of these theorems relies on the 'Propagation of Singularities' theorems of Duistermaat and Hörmander.

  1. Enter the reverend: introduction to and application of Bayes' theorem in clinical ophthalmology.

    PubMed

    Thomas, Ravi; Mengersen, Kerrie; Parikh, Rajul S; Walland, Mark J; Muliyil, Jayprakash

    2011-12-01

    Ophthalmic practice utilizes numerous diagnostic tests, some of which are used to screen for disease. Interpretation of test results and many clinical management issues are actually problems in inverse probability that can be solved using Bayes' theorem. We use two-by-two tables to introduce the concepts, to understand Bayes' theorem, and to apply it to clinical examples; specific examples then demonstrate the utility of Bayes' theorem in diagnosis and management. The application to the interpretation of diagnostic tests is explained in terms of positive predictive value and conditional probability. The theorem demonstrates the futility of testing when the prior probability of disease is low. Application to untreated ocular hypertension demonstrates that the estimate of glaucomatous optic neuropathy is similar to that obtained from the Ocular Hypertension Treatment Study. Similar calculations are used to predict the risk of acute angle closure in a primary angle closure suspect, the risk of pupillary block in a diabetic undergoing cataract surgery, and the probability that an observed decrease in intraocular pressure is due to the medication that has been started. The examples demonstrate how data required for management can at times be easily obtained from available information. Knowledge of Bayes' theorem helps in interpreting test results and supports the clinical teaching that testing for conditions with a low prevalence has a poor predictive value. In some clinical situations Bayes' theorem can be used to calculate vital data required for patient management.
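
    The two-by-two-table logic reduces to a one-line computation of the positive predictive value; a minimal sketch (the sensitivity, specificity and prevalence figures are illustrative, not from the paper):

        def ppv(sens, spec, prevalence):
            # Bayes' theorem via the two-by-two table: true positives divided
            # by all positives.
            tp = sens * prevalence
            fp = (1.0 - spec) * (1.0 - prevalence)
            return tp / (tp + fp)

        # A "95% accurate" test (sens = spec = 0.95) at 1% disease prevalence:
        print(ppv(0.95, 0.95, 0.01))   # ~0.16: most positives are false positives

    This is the quantitative content of the clinical teaching quoted above: at low prior probability, even an accurate test has poor predictive value.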

  2. Communication. Kinetics of scavenging of small, nucleating clusters. First nucleation theorem and sum rules

    DOE PAGES

    Malila, Jussi; McGraw, Robert; Laaksonen, Ari; ...

    2015-01-07

    Despite recent advances in monitoring nucleation from a vapor at close-to-molecular resolution, the identity of the critical cluster, forming the bottleneck for the nucleation process, remains elusive. During the past twenty years, the first nucleation theorem has often been used to extract the size of the critical cluster from nucleation rate measurements. However, derivations of the first nucleation theorem invoke certain questionable assumptions that may fail, e.g., in the case of atmospheric new particle formation, including the absence of subcritical cluster losses and of heterogeneous nucleation on pre-existing nanoparticles. Here we extend the kinetic derivation of the first nucleation theorem to give a more general framework that includes such processes, yielding sum rules connecting the size-dependent particle formation and loss rates to the corresponding loss-free nucleation rate and the apparent critical size from a naïve application of the first nucleation theorem that neglects them.

  3. A new blackhole theorem and its applications to cosmology and astrophysics

    NASA Astrophysics Data System (ADS)

    Wang, Shouhong; Ma, Tian

    2015-04-01

    We shall present a black hole theorem and a theorem on the structure of our Universe, proved in a recently published paper, based on 1) the Einstein general theory of relativity, and 2) the cosmological principle that the universe is homogeneous and isotropic. These two theorems are rigorously proved using astrophysical dynamical models coupling fluid dynamics and general relativity based on a symmetry-breaking principle. With the new black hole theorem, we further demonstrate that supernova explosions and AGN jets, as well as many other astronomical phenomena, including recently reported observations, are due to combined relativistic, magnetic and thermal effects. The radial temperature gradient causes vertical Bénard-type convection cells, and the relativistic viscous force (via electromagnetic, weak and strong interactions) gives rise to a huge explosive radial force near the Schwarzschild radius, leading, e.g., to supernova explosions and AGN jets.

  4. Atiyah-Patodi-Singer index theorem for domain-wall fermion Dirac operator

    NASA Astrophysics Data System (ADS)

    Fukaya, Hidenori; Onogi, Tetsuya; Yamaguchi, Satoshi

    2018-03-01

    Recently, the Atiyah-Patodi-Singer (APS) index theorem has attracted attention as a tool for understanding physics on the surface of materials in topological phases. Although it is widely applied to physics, the mathematical set-up in the original APS index theorem is too abstract and general (allowing a non-trivial metric, and so on), and the connection between the APS boundary condition and the physical boundary condition on the surface of a topological material is unclear. For this reason, in contrast to the Atiyah-Singer index theorem, a derivation of the APS index theorem in the language of physics is still missing. In this talk, we attempt to reformulate the APS index in a "physicist-friendly" way, similar to the Fujikawa method on closed manifolds, for our familiar domain-wall fermion Dirac operator in a flat Euclidean space. We find that the APS index is naturally embedded in the determinant of domain-wall fermions, representing the so-called anomaly descent equations.

  5. The detailed balance principle and the reciprocity theorem between photocarrier collection and dark carrier distribution in solar cells

    NASA Astrophysics Data System (ADS)

    Rau, Uwe; Brendel, Rolf

    1998-12-01

    It is shown that a recently described general relationship between the local collection efficiency of solar cells and the dark carrier concentration (reciprocity theorem) follows directly from the principle of detailed balance. We derive the relationship for situations where transport of charge carriers occurs between discrete states as well as for the situation where electronic transport is described in terms of continuous functions. Combining both situations allows us to extend the range of applicability of the reciprocity theorem to all types of solar cells, including, e.g., metal-insulator-semiconductor-type and electrochemical solar cells, as well as to include the impurity photovoltaic effect. We generalize the theorem further to situations where the occupation probability of electronic states is governed by Fermi-Dirac statistics instead of the Boltzmann statistics underlying preceding work. In such a situation the reciprocity theorem is restricted to small departures from equilibrium.

  6. Dynamic relaxation of a levitated nanoparticle from a non-equilibrium steady state.

    PubMed

    Gieseler, Jan; Quidant, Romain; Dellago, Christoph; Novotny, Lukas

    2014-05-01

    Fluctuation theorems are a generalization of thermodynamics on small scales and provide the tools to characterize the fluctuations of thermodynamic quantities in non-equilibrium nanoscale systems. They are particularly important for understanding irreversibility and the second law in fundamental chemical and biological processes that are actively driven, thus operating far from thermal equilibrium. Here, we apply the framework of fluctuation theorems to investigate the important case of a system relaxing from a non-equilibrium state towards equilibrium. Using a vacuum-trapped nanoparticle, we demonstrate experimentally the validity of a fluctuation theorem for the relative entropy change occurring during relaxation from a non-equilibrium steady state. The platform established here allows non-equilibrium fluctuation theorems to be studied experimentally for arbitrary steady states and can be extended to investigate quantum fluctuation theorems as well as systems that do not obey detailed balance.

  7. Exploiting structure: Introduction and motivation

    NASA Technical Reports Server (NTRS)

    Xu, Zhong Ling

    1994-01-01

    This annual report summarizes the research activities that were performed from 26 Jun. 1993 to 28 Feb. 1994. We continued to investigate the robust stability of systems whose transfer functions or characteristic polynomials are affine multilinear functions of parameters. An approach that differs from 'Stability by Linear Process' and that reduces the computational burden of checking the robust stability of a system with multilinear uncertainty was found for the low-order (second- and third-order) cases. We proved a crucial theorem, the so-called Face Theorem. Previously, we had proven Kharitonov's Vertex Theorem and the Edge Theorem of Bartlett. The detail of this proof is contained in the Appendix. This theorem provides a tool to describe the boundary of the image of the affine multilinear function. For SPR design, we have developed some new results. The third objective for this period is to design a controller for IHM by the H-infinity optimization technique. The details are presented in the Appendix.

  8. Orbit-averaged quantities, the classical Hellmann-Feynman theorem, and the magnetic flux enclosed by gyro-motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkins, R. J., E-mail: rperkins@pppl.gov; Bellan, P. M.

    Action integrals are often used to average a system over fast oscillations and obtain reduced dynamics. It is not surprising, then, that action integrals play a central role in the Hellmann-Feynman theorem of classical mechanics, which furnishes the values of certain quantities averaged over one period of rapid oscillation. This paper revisits the classical Hellmann-Feynman theorem, rederiving it in connection to an analogous theorem involving the time-averaged evolution of canonical coordinates. We then apply a modified version of the Hellmann-Feynman theorem to obtain a new result: the magnetic flux enclosed by one period of gyro-motion of a charged particle in a non-uniform magnetic field. These results further demonstrate the utility of the action integral in regard to obtaining orbit-averaged quantities and the usefulness of this formalism in characterizing charged particle motion.

  9. An Integrated Environment for Efficient Formal Design and Verification

    NASA Technical Reports Server (NTRS)

    1998-01-01

    The general goal of this project was to improve the practicality of formal methods by combining techniques from model checking and theorem proving. At the time the project was proposed, the model checking and theorem proving communities were applying different tools to similar problems, but there was not much cross-fertilization. This project involved a group from SRI that had substantial experience in the development and application of theorem-proving technology, and a group at Stanford that specialized in model checking techniques. Now, over five years after the proposal was submitted, there are many research groups working on combining theorem-proving and model checking techniques, and much more communication between the model checking and theorem proving research communities. This project contributed significantly to this research trend. The research work under this project covered a variety of topics: new theory and algorithms; prototype tools; verification methodology; and applications to problems in particular domains.

  10. Compressed sensing with cyclic-S Hadamard matrix for terahertz imaging applications

    NASA Astrophysics Data System (ADS)

    Ermeydan, Esra Şengün; Çankaya, Ilyas

    2018-01-01

    Compressed Sensing (CS) with a cyclic-S Hadamard matrix is proposed for single-pixel imaging applications in this study. In the single-pixel imaging scheme, N = r · c samples should be taken for an r × c pixel image. CS is a popular technique claiming that sparse signals can be reconstructed from fewer samples than the Nyquist rate requires. Therefore, to solve the slow data acquisition problem in Terahertz (THz) single-pixel imaging, CS is a good candidate. However, changing the mask for each measurement is a challenging problem since there are no commercial Spatial Light Modulators (SLM) for the THz band yet; therefore, circular masks are suggested so that shifting one or two columns is enough to change the mask for each measurement. The CS masks are designed using cyclic-S matrices based on the Hadamard transform for 9 × 7 and 15 × 17 pixel images within the framework of this study. The 50% compressed images are reconstructed using the total-variation-based TVAL3 algorithm. Matlab simulations demonstrate that cyclic-S matrices can be used for single-pixel imaging based on CS. The circular masks have the advantage of reducing the mechanical SLM to a single sliding strip, whereas CS helps to reduce acquisition time and energy since it allows the image to be reconstructed from fewer samples.
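
    A minimal Python sketch of the sliding-strip mask idea (the 0/1 strip below is a generic pseudo-random stand-in, not the paper's cyclic-S construction, and the TVAL3 reconstruction step is omitted):

      import numpy as np

      rng = np.random.default_rng(0)
      r, c = 9, 7                       # image size used in the study
      n = r * c                         # pixels per mask
      row = rng.integers(0, 2, size=n)  # pseudo-random 0/1 strip (stand-in for cyclic-S)

      m = n // 2                        # roughly 50% compression, as in the paper
      masks = np.stack([np.roll(row, k) for k in range(m)])  # one circular shift per measurement

      image = np.zeros(n)
      image[rng.choice(n, 5, replace=False)] = 1.0  # a sparse test scene
      y = masks @ image                 # single-pixel detector readings, one per mask
      print(y.shape)                    # (31,) -- 31 samples instead of 63 pixels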

  11. Effect of Ca2+ addition on the properties of Ce0.8Gd0.2O2-δ for IT-SOFC

    NASA Astrophysics Data System (ADS)

    Koteswararao, P.; Buchi Suresh, M.; Wani, B. N.; Bhaskara Rao, P. V.; Varalaxmi, P.

    2018-03-01

    This paper reports the effect of Ca2+ addition on the structural and electrical properties of Ce0.8Gd0.2O2-δ (GDC) electrolyte for low-temperature solid oxide fuel cell applications. The Ca (0, 0.5, 1 and 2 mol%) doped GDC solid electrolytes have been prepared by the solid-state method. The sintered densities of the samples are greater than 95%. XRD study reveals the cubic fluorite structure. The microstructure of the samples sintered at 1400 °C resulted in grain sizes in the range of 1.72 to 10.20 μm. Raman spectra show the presence of a single GDC phase. AC impedance analysis is used to measure the ionic conductivity of the electrolyte. Among all the compositions, the highest conductivity is observed in the GDC sample with 0.5 mol% Ca addition. Nyquist plots revealed multiple relaxation processes, separating the grain and grain-boundary contributions to the total conductivity. The estimated blocking factor is lower for the GDC electrolyte with 0.5 mol% Ca, indicating that Ca addition promoted grain-boundary conduction. Activation energies were calculated from Arrhenius plots and are found to be in the range of 1 eV.

  12. Preclinical Testing of a New MR Imaging Approach to Distinguish Aggressive from Indolent Disease

    DTIC Science & Technology

    2015-08-01

    Lubeck, D. P., Kattan, M. W., Carroll, P. R., & Litwin, M. S. (2004) Predicting quality of life after radical prostatectomy: results from CaPSURE. J Urol...171, 703-7; discussion 707-8. 4. Wei, J. T., Dunn, R. L., Sandler, H. M., McLaughlin, P. W., Montie, J. E., Litwin, M. S., Nyquist, L., & Sanda, M. G

  13. Quantum shot noise in tunnel junctions

    NASA Technical Reports Server (NTRS)

    Ben-Jacob, E.; Mottola, E.; Schoen, G.

    1983-01-01

    The current and voltage fluctuations in a normal tunnel junction are calculated from microscopic theory. The power spectrum can deviate from the familiar Johnson-Nyquist form when the self-capacitance of the junction is small and the temperature is low, permitting experimental verification. The deviation reflects the discrete nature of the charge transfer across the junction and should be present in a wide class of similar systems.
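
    For orientation, a small numeric sketch of the two limiting noise forms involved (this uses the standard textbook interpolation S_I = 2eI coth(eV/2kT) for a tunnel junction as an illustration, not the paper's microscopic result; the junction parameters are hypothetical):

      import math

      e, kB = 1.602e-19, 1.381e-23            # elementary charge (C), Boltzmann constant (J/K)
      R, T = 1e3, 4.2                         # hypothetical junction: 1 kOhm at 4.2 K

      def current_noise(V):
          """S_I = 2 e I coth(eV / 2kT): Johnson-Nyquist at low V, shot noise at high V."""
          I = V / R
          return 2 * e * I / math.tanh(e * V / (2 * kB * T))

      for V in (1e-6, 1e-4, 1e-2):            # from eV << kT to eV >> kT
          print(V, current_noise(V), 4 * kB * T / R, 2 * e * V / R)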

  14. High-speed digital phonoscopy images analyzed by Nyquist plots

    NASA Astrophysics Data System (ADS)

    Yan, Yuling

    2012-02-01

    Vocal-fold vibration is a key dynamic event in voice production, and the vibratory characteristics of the vocal fold correlate closely with voice quality and health condition. Laryngeal imaging provides a direct means to observe vocal fold vibration; in the past, however, available modalities were either too slow or impractical to resolve the actual vocal fold vibrations. This limitation has now been overcome by high-speed digital imaging (HSDI) (or high-speed digital phonoscopy), which records images of the vibrating vocal folds at a rate of 2000 frames per second or higher, fast enough to resolve a specific, sustained phonatory vocal fold vibration. The subsequent image-based functional analysis of voice is essential to better understanding the mechanism underlying voice production, as well as assisting the clinical diagnosis of voice disorders. Our primary objective is to develop a comprehensive analytical platform for voice analysis using the HSDI recordings. So far, we have developed various analytical approaches for the HSDI-based voice analyses. These include Nyquist plots and associated analyses that are used along with FFT and spectrogram methods in the analysis of HSDI data representing normal voice and specific voice pathologies.

  15. Single-shot EPI with Nyquist ghost compensation: Interleaved Dual-Echo with Acceleration (IDEA) EPI

    PubMed Central

    Poser, Benedikt A; Barth, Markus; Goa, Pål-Erik; Deng, Weiran; Stenger, V Andrew

    2012-01-01

    Echo planar imaging is most commonly used for BOLD fMRI, owing to its sensitivity and acquisition speed. A major problem with EPI is Nyquist (N/2) ghosting, most notably at high field. EPI data are acquired under an oscillating readout gradient and hence vulnerable to gradient imperfections such as eddy current delays and off-resonance effects, as these cause inconsistencies between odd and even k-space lines after time reversal. We propose a straightforward and pragmatic method herein termed Interleaved Dual Echo with Acceleration (IDEA) EPI: Two k-spaces (echoes) are acquired under the positive and negative readout lobes, respectively, by performing phase blips only before alternate readout gradients. From these two k-spaces, two almost entirely ghost free images per shot can be constructed, without need for phase correction. The doubled echo train length can be compensated by parallel imaging and/or partial Fourier acquisition. The two k-spaces can either be complex-averaged during reconstruction, which results in near-perfect cancellation of residual phase errors, or reconstructed into separate images. We demonstrate the efficacy of IDEA EPI and show phantom and in vivo images at both 3 and 7 Tesla. PMID:22411762

  16. Distortions in Distributions of Impact Estimates in Multi-Site Trials: The Central Limit Theorem Is Not Your Friend

    ERIC Educational Resources Information Center

    May, Henry

    2014-01-01

    Interest in variation in program impacts--How big is it? What might explain it?--has inspired recent work on the analysis of data from multi-site experiments. One critical aspect of this problem involves the use of random or fixed effect estimates to visualize the distribution of impact estimates across a sample of sites. Unfortunately, unless the…

  17. Use of Bayes theorem to correct size-specific sampling bias in growth data.

    PubMed

    Troynikov, V S

    1999-03-01

    The bayesian decomposition of posterior distribution was used to develop a likelihood function to correct bias in the estimates of population parameters from data collected randomly with size-specific selectivity. Positive distributions with time as a parameter were used for parametrization of growth data. Numerical illustrations are provided. The alternative applications of the likelihood to estimate selectivity parameters are discussed.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yashchuk, Valeriy V; Conley, Raymond; Anderson, Erik H

    Verification of the reliability of metrology data from high quality x-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [Proc. SPIE 7077-7 (2007), Opt. Eng. 47(7), 073602-1-5 (2008)] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [Nucl. Instr. and Meth. A 616, 172-82 (2010)]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
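
    The usefulness of binary pseudo-random patterns for MTF work comes from their white-noise-like spectrum. A small Python sketch (a generic maximal-length LFSR sequence, not the papers' BPRML design rules) shows the effectively flat power spectrum:

      import numpy as np

      def m_sequence(nbits=7, taps=(7, 6)):   # feedback x^7 + x^6 + 1, period 2^7 - 1 = 127
          state = [1] * nbits
          out = []
          for _ in range(2**nbits - 1):
              out.append(state[-1])
              fb = state[taps[0] - 1] ^ state[taps[1] - 1]
              state = [fb] + state[:-1]
          return np.array(out)

      seq = 2 * m_sequence() - 1              # map {0, 1} -> {-1, +1}
      spectrum = np.abs(np.fft.rfft(seq))**2
      print(spectrum[1:].std() / spectrum[1:].mean())  # ~0: all non-DC bins carry equal power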

  19. Systematic Approaches to Experimentation: The Case of Pick's Theorem

    ERIC Educational Resources Information Center

    Papadopoulos, Ioannis; Iatridou, Maria

    2010-01-01

    In this paper two 10th graders having an accumulated experience on problem-solving ancillary to the concept of area confronted the task to find Pick's formula for a lattice polygon's area. The formula was omitted from the theorem in order for the students to read the theorem as a problem to be solved. Their working is examined and emphasis is…

  20. Topology and the Lay of the Land: A Mathematician on the Topographer's Turf.

    ERIC Educational Resources Information Center

    Shubin, Mikhail

    1992-01-01

    Presents a proof of Euler's Theorem on polyhedra by relating the theorem to the field of modern topology, specifically to the topology of relief maps. An analogous theorem involving the features of mountain summits, basins, and passes on a terrain is proved and related to the faces, vertices, and edges on a convex polyhedron. (MDH)

  1. Weak Compactness and Control Measures in the Space of Unbounded Measures

    PubMed Central

    Brooks, James K.; Dinculeanu, Nicolae

    1972-01-01

    We present a synthesis theorem for a family of locally equivalent measures defined on a ring of sets. This theorem is then used to exhibit a control measure for weakly compact sets of unbounded measures. In addition, the existence of a local control measure for locally strongly bounded vector measures is proved by means of the synthesis theorem. PMID:16591980

  2. A Layer Framework to Investigate Student Understanding and Application of the Existence and Uniqueness Theorems of Differential Equations

    ERIC Educational Resources Information Center

    Raychaudhuri, D.

    2007-01-01

    The focus of this paper is on student interpretation and usage of the existence and uniqueness theorems for first-order ordinary differential equations. The inherent structure of the theorems is made explicit by the introduction of a framework of layers concepts-conditions-connectives-conclusions, and we discuss the manners in which students'…

  3. Erratum: Correction to: Information Transmission and Criticality in the Contact Process

    NASA Astrophysics Data System (ADS)

    Cassandro, M.; Galves, A.; Löcherbach, E.

    2018-01-01

    The original publication of the article unfortunately contained a mistake in the first sentence of Theorem 1 and in the second part of the proof of Theorem 1. The corrected statement of Theorem 1 as well as the corrected proof are given below. The full text of the corrected version is available at http://arxiv.org/abs/1705.11150.

  4. Optical theorem for acoustic non-diffracting beams and application to radiation force and torque

    PubMed Central

    Zhang, Likun; Marston, Philip L.

    2013-01-01

    Acoustical and optical non-diffracting beams are potentially useful for manipulating particles and larger objects. An extended optical theorem for a non-diffracting beam was given recently in the context of acoustics. The theorem relates the extinction by an object to the scattering at the forward direction of the beam’s plane wave components. Here we use this theorem to examine the extinction cross section of a sphere centered on the axis of the beam, with a non-diffracting Bessel beam as an example. The results are applied to recover the axial radiation force and torque on the sphere by the Bessel beam. PMID:24049681

  5. Republication of: A theorem on Petrov types

    NASA Astrophysics Data System (ADS)

    Goldberg, J. N.; Sachs, R. K.

    2009-02-01

    This is a republication of the paper “A Theorem on Petrov Types” by Goldberg and Sachs, Acta Phys. Pol. 22 (supplement), 13 (1962), in which they proved the Goldberg-Sachs theorem. The article has been selected for publication in the Golden Oldies series of General Relativity and Gravitation. Typographical errors of the original publication were corrected by the editor. The paper is accompanied by a Golden Oldie Editorial containing an editorial note written by Andrzej Krasiński and Maciej Przanowski and Goldberg’s brief autobiography. The editorial note explains some difficult parts of the proof of the theorem and discusses the influence of results of the paper on later research.

  6. A general Kastler-Kalau-Walze type theorem for manifolds with boundary

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Wang, Yong

    2016-11-01

    In this paper, we establish a general Kastler-Kalau-Walze type theorem for manifolds with boundary of any dimension, which generalizes the results in [Y. Wang, Lower-dimensional volumes and Kastler-Kalau-Walze type theorem for manifolds with boundary, Commun. Theor. Phys. 54 (2010) 38-42]. This solves a problem posed by the referee of [J. Wang and Y. Wang, A Kastler-Kalau-Walze type theorem for five-dimensional manifolds with boundary, Int. J. Geom. Meth. Mod. Phys. 12(5) (2015), Article ID: 1550064, 34 pp.], namely to find a general expression of the lower-dimensional volumes in terms of the geometric data on the manifold.

  7. Electrostatic Hellmann-Feynman theorem applied to long-range interatomic forces - The hydrogen molecule.

    NASA Technical Reports Server (NTRS)

    Steiner, E.

    1973-01-01

    The use of the electrostatic Hellmann-Feynman theorem for the calculation of the leading term in the 1/R expansion of the force of interaction between two well-separated hydrogen atoms is discussed. Previous work has suggested that whereas this term is determined wholly by the first-order wavefunction when calculated by perturbation theory, the use of the Hellmann-Feynman theorem apparently requires the wavefunction through second order. It is shown how the two results may be reconciled and that the Hellmann-Feynman theorem may be reformulated in such a way that only the first-order wavefunction is required.

  8. A Benes-like theorem for the shuffle-exchange graph

    NASA Technical Reports Server (NTRS)

    Schwabe, Eric J.

    1992-01-01

    One of the first theorems on permutation routing, proved by V. E. Benes (1965), shows that given a set of source-destination pairs in an N-node butterfly network with at most a constant number of sources or destinations in each column of the butterfly, there exists a set of paths of lengths O(log N) connecting each pair such that the total congestion is constant. An analogous theorem yielding constant-congestion paths for off-line routing in the shuffle-exchange graph is proved here. The necklaces of the shuffle-exchange graph play the same structural role as the columns of the butterfly in Benes' theorem.

  9. Tree-manipulating systems and Church-Rosser theorems.

    NASA Technical Reports Server (NTRS)

    Rosen, B. K.

    1973-01-01

    Study of a broad class of tree-manipulating systems called subtree replacement systems. The use of this framework is illustrated by general theorems analogous to the Church-Rosser theorem and by applications of these theorems. Sufficient conditions are derived for the Church-Rosser property, and their applications to recursive definitions, the lambda calculus, and parallel programming are discussed. McCarthy's (1963) recursive calculus is extended by allowing a choice between call-by-value and call-by-name. It is shown that recursively defined functions are single-valued despite the nondeterminism of the evaluation algorithm. It is also shown that these functions solve their defining equations in a 'canonical' manner.

  10. Quantum voting and violation of Arrow's impossibility theorem

    NASA Astrophysics Data System (ADS)

    Bao, Ning; Yunger Halpern, Nicole

    2017-06-01

    We propose a quantum voting system in the spirit of quantum games such as the quantum prisoner's dilemma. Our scheme enables a constitution to violate a quantum analog of Arrow's impossibility theorem. Arrow's theorem is a claim proved deductively in economics: Every (classical) constitution endowed with three innocuous-seeming properties is a dictatorship. We construct quantum analogs of constitutions, of the properties, and of Arrow's theorem. A quantum version of majority rule, we show, violates this quantum Arrow conjecture. Our voting system allows for tactical-voting strategies reliant on entanglement, interference, and superpositions. This contribution to quantum game theory helps elucidate how quantum phenomena can be harnessed for strategic advantage.

  11. Common fixed points in best approximation for Banach operator pairs with Ciric type I-contractions

    NASA Astrophysics Data System (ADS)

    Hussain, N.

    2008-02-01

    The common fixed point theorems, similar to those of Ciric [Lj.B. Ciric, On a common fixed point theorem of a Gregus type, Publ. Inst. Math. (Beograd) (N.S.) 49 (1991) 174-178; Lj.B. Ciric, On Diviccaro, Fisher and Sessa open questions, Arch. Math. (Brno) 29 (1993) 145-152; Lj.B. Ciric, On a generalization of Gregus fixed point theorem, Czechoslovak Math. J. 50 (2000) 449-458], Fisher and Sessa [B. Fisher, S. Sessa, On a fixed point theorem of Gregus, Internat. J. Math. Math. Sci. 9 (1986) 23-28], Jungck [G. Jungck, On a fixed point theorem of Fisher and Sessa, Internat. J. Math. Math. Sci. 13 (1990) 497-500] and Mukherjee and Verma [R.N. Mukherjee, V. Verma, A note on fixed point theorem of Gregus, Math. Japon. 33 (1988) 745-749], are proved for a Banach operator pair. As applications, common fixed point and approximation results for Banach operator pair satisfying Ciric type contractive conditions are obtained without the assumption of linearity or affinity of either T or I. Our results unify and generalize various known results to a more general class of noncommuting mappings.

  12. Restoring the consistency with the contact density theorem of a classical density functional theory of ions at a planar electrical double layer.

    PubMed

    Gillespie, Dirk

    2014-11-01

    Classical density functional theory (DFT) of fluids is a fast and efficient theory to compute the structure of the electrical double layer in the primitive model of ions, where ions are modeled as charged, hard spheres in a background dielectric. While the hard-core repulsive component of this ion-ion interaction can be accurately computed using well-established DFTs, the electrostatic component is less accurate. Moreover, many electrostatic functionals fail to satisfy a basic theorem, the contact density theorem, that relates the bulk pressure, surface charge, and ion densities at their distances of closest approach for ions in equilibrium at a smooth, hard, planar wall. One popular electrostatic functional that fails to satisfy the contact density theorem is a perturbation approach developed by Kierlik and Rosinberg [Phys. Rev. A 44, 5025 (1991)] and Rosenfeld [J. Chem. Phys. 98, 8126 (1993)], where the full free-energy functional is Taylor-expanded around a bulk (homogeneous) reference fluid. Here, it is shown that this functional fails to satisfy the contact density theorem because it also fails to satisfy the known low-density limit. When the functional is corrected to satisfy this limit, a corrected bulk pressure is derived and it is shown that with this pressure both the contact density theorem and the Gibbs adsorption theorem are satisfied.

  13. Central Limit Theorem for Exponentially Quasi-local Statistics of Spin Models on Cayley Graphs

    NASA Astrophysics Data System (ADS)

    Reddy, Tulasi Ram; Vadlamani, Sreekar; Yogeshwaran, D.

    2018-04-01

    Central limit theorems for linear statistics of lattice random fields (including spin models) are usually proven under suitable mixing conditions or quasi-associativity. Many interesting examples of spin models do not satisfy mixing conditions, and on the other hand, it does not seem easy to show central limit theorems for local statistics via quasi-associativity. In this work, we prove general central limit theorems for local statistics and exponentially quasi-local statistics of spin models on discrete Cayley graphs with polynomial growth. Further, we supplement these results by proving similar central limit theorems for random fields on discrete Cayley graphs taking values in a countable space, but under the stronger assumptions of α-mixing (for local statistics) and exponential α-mixing (for exponentially quasi-local statistics). All our central limit theorems assume a suitable variance lower bound like many others in the literature. We illustrate our general central limit theorem with specific examples of lattice spin models and statistics arising in computational topology, statistical physics and random networks. Examples of clustering spin models include quasi-associated spin models with fast decaying covariances like the off-critical Ising model, level sets of Gaussian random fields with fast decaying covariances like the massive Gaussian free field and determinantal point processes with fast decaying kernels. Examples of local statistics include intrinsic volumes, face counts, component counts of random cubical complexes while exponentially quasi-local statistics include nearest neighbour distances in spin models and Betti numbers of sub-critical random cubical complexes.

  14. Performance parameters of a liquid filled ionization chamber array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poppe, B.; Stelljes, T. S.; Looe, H. K.

    2013-08-15

    Purpose: In this work, the properties of the two-dimensional liquid filled ionization chamber array Octavius 1000SRS (PTW-Freiburg, Germany) for use in clinical photon-beam dosimetry are investigated. Methods: Measurements were carried out at an Elekta Synergy and Siemens Primus accelerator. For measurements of stability, linearity, and saturation effects of the 1000SRS array a Semiflex 31013 ionization chamber (PTW-Freiburg, Germany) was used as a reference. The effective point of measurement was determined by TPR measurements of the array in comparison with a Roos chamber (type 31004, PTW-Freiburg, Germany). The response of the array with varying field size and depth of measurement was evaluated using a Semiflex 31010 ionization chamber as a reference. Output factor measurements were carried out with a Semiflex 31010 ionization chamber, a diode (type 60012, PTW-Freiburg, Germany), and the detector array under investigation. The dose response function for a single detector of the array was determined by measuring 1 cm wide slit-beam dose profiles and comparing them against diode-measured profiles. Theoretical aspects of the low pass properties and of the sampling frequency of the detector array were evaluated. Dose profiles measured with the array and the diode detector were compared, and an intensity modulated radiation therapy (IMRT) field was verified using the Gamma-Index method and the visualization of line dose profiles. Results: The array showed a short and long term stability better than 0.1% and 0.2%, respectively. Fluctuations in linearity were found to be within ±0.2% for the vendor specified dose range. Saturation effects were found to be similar to those reported in other studies for liquid-filled ionization chambers. The detector's relative response varied with field size and depth of measurement, showing a small energy dependence accounting for maximum signal deviations of ±2.6% from the reference condition for the setup used. The σ-values of the Gaussian dose response function for a single detector of the array were found to be (0.72 ± 0.25) mm at 6 MV and (0.74 ± 0.25) mm at 15 MV and the corresponding low pass cutoff frequencies are 0.22 and 0.21 mm⁻¹, respectively. For the inner 5 × 5 cm² region and the outer 11 × 11 cm² region of the array the Nyquist theorem is fulfilled for maximum sampling frequencies of 0.2 and 0.1 mm⁻¹, respectively. An IMRT field verification with a Gamma-Index analysis yielded a passing rate of 95.2% for a 3 mm/3% criterion with a TPS calculation as reference. Conclusions: This study shows the applicability of the Octavius 1000SRS in modern dosimetry. Output factor and dose profile measurements illustrated the applicability of the array in small field and stereotactic dosimetry. The high spatial resolution ensures adequate measurements of dose profiles in regular and intensity modulated photon-beam fields.
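
    To connect the quoted numbers, a quick Python sketch (the 2.5 mm and 5 mm detector pitches are inferred from the quoted 0.2 and 0.1 mm⁻¹ Nyquist frequencies, not stated in the abstract): a Gaussian dose response with standard deviation σ has presampling MTF exp(−2π²σ²f²), which can be evaluated at each region's sampling Nyquist frequency f_N = 1/(2·pitch):

      import math

      sigma = 0.72                            # mm, single-detector response at 6 MV
      def mtf(f):
          return math.exp(-2 * math.pi**2 * sigma**2 * f**2)

      for pitch in (2.5, 5.0):                # mm, assumed inner / outer detector spacing
          f_nyq = 1.0 / (2.0 * pitch)         # 0.2 and 0.1 mm^-1, as quoted above
          print(pitch, f_nyq, round(mtf(f_nyq), 2))   # residual modulation at Nyquist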

  15. A Microsoft® Excel Simulation Illustrating the Central Limit Theorem's Appropriateness for Comparing the Difference between the Means of Any Two Populations

    ERIC Educational Resources Information Center

    Moen, David H.; Powell, John E.

    2008-01-01

    Using Microsoft® Excel, several interactive, computerized learning modules are developed to illustrate the Central Limit Theorem's appropriateness for comparing the difference between the means of any two populations. These modules are used in the classroom to enhance the comprehension of this theorem as well as the concepts that provide the…

  16. Optimal Repairman Allocation Models

    DTIC Science & Technology

    1976-03-01

    state X under policy π. Then lim_{k→0} e(X, k) = 0. (3.1.1) Proof: The result is proven by induction on |C_0(X)|...following theorem. Theorem 3.1.D. Under the conditions of Theorem 3.1.A, define g^(1)(X) = g*(X); then lim_{k→0}...

  17. Individual and Collective Analyses of the Genesis of Student Reasoning Regarding the Invertible Matrix Theorem in Linear Algebra

    ERIC Educational Resources Information Center

    Wawro, Megan Jean

    2011-01-01

    In this study, I considered the development of mathematical meaning related to the Invertible Matrix Theorem (IMT) for both a classroom community and an individual student over time. In this particular linear algebra course, the IMT was a core theorem in that it connected many concepts fundamental to linear algebra through the notion of…

  18. A Converse of Fermat's Little Theorem

    ERIC Educational Resources Information Center

    Bruckman, P. S.

    2007-01-01

    As the name of the paper implies, a converse of Fermat's Little Theorem (FLT) is stated and proved. FLT states the following: if p is any prime, and x any integer, then x^p ≡ x (mod p). There is already a well-known converse of FLT, known as Lehmer's Theorem, which is as follows: if x is an integer coprime with m, such…
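
    A small Python sketch of Lehmer's converse as stated above (naive trial-division factoring; for illustration only): m is prime if some x satisfies x^(m−1) ≡ 1 (mod m) while x^((m−1)/q) ≢ 1 (mod m) for every prime q dividing m − 1:

      def prime_factors(n):
          out, d = set(), 2
          while d * d <= n:
              while n % d == 0:
                  out.add(d)
                  n //= d
              d += 1
          if n > 1:
              out.add(n)
          return out

      def lehmer_witness(x, m):
          """True if x certifies the primality of m by Lehmer's criterion."""
          return (pow(x, m - 1, m) == 1 and
                  all(pow(x, (m - 1) // q, m) != 1 for q in prime_factors(m - 1)))

      print(lehmer_witness(2, 13))   # True: 2 has full order mod 13, so 13 is prime
      print(lehmer_witness(2, 15))   # False: no witness exists for composite 15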

  19. Bayes' Theorem: An Old Tool Applicable to Today's Classroom Measurement Needs. ERIC/AE Digest.

    ERIC Educational Resources Information Center

    Rudner, Lawrence M.

    This digest introduces ways of responding to the call for criterion-referenced information using Bayes' Theorem, a method that was coupled with criterion-referenced testing in the early 1970s (see R. Hambleton and M. Novick, 1973). To illustrate Bayes' Theorem, an example is given in which the goal is to classify an examinee as being a master or…

  20. CONTRIBUTIONS TO RATIONAL APPROXIMATION,

    DTIC Science & Technology

    Some of the key results of linear Chebyshev approximation theory are extended to generalized rational functions. Prominent among these is Haar’s...linear theorem which yields necessary and sufficient conditions for uniqueness. Some new results in the classic field of rational function Chebyshev...Furthermore a Weierstrass type theorem is proven for rational Chebyshev approximation. A characterization theorem for rational trigonometric Chebyshev approximation in terms of sign alternation is developed. (Author)

  1. Generalization of the Bogoliubov-Zubarev Theorem for Dynamic Pressure to the Case of Compressibility

    NASA Astrophysics Data System (ADS)

    Rudoi, Yu. G.

    2018-01-01

    We present the motivation, formulation, and modified proof of the Bogoliubov-Zubarev theorem connecting the pressure of a dynamical object with its energy within the framework of a classical description and obtain a generalization of this theorem to the case of dynamical compressibility. In both cases, we introduce the volume of the object into consideration using a singular addition to the Hamiltonian function of the physical object, which allows using the concept of the Bogoliubov quasiaverage explicitly already on a dynamical level of description. We also discuss the relation to the same result known as the Hellmann-Feynman theorem in the framework of the quantum description of a physical object.

  2. Some constructions of biharmonic maps and Chen’s conjecture on biharmonic hypersurfaces

    NASA Astrophysics Data System (ADS)

    Ou, Ye-Lin

    2012-04-01

    We give several construction methods and use them to produce many examples of proper biharmonic maps including biharmonic tori of any dimension in Euclidean spheres (Theorem 2.2, Corollaries 2.3, 2.4 and 2.6), biharmonic maps between spheres (Theorem 2.9) and into spheres (Theorem 2.10) via orthogonal multiplications and eigenmaps. We also study biharmonic graphs of maps, derive the equation for a function whose graph is a biharmonic hypersurface in a Euclidean space, and give an equivalent formulation of Chen's conjecture on biharmonic hypersurfaces by using the biharmonic graph equation (Theorem 4.1) which paves a way for the analytic study of the conjecture.

  3. Reciprocity relations in aerodynamics

    NASA Technical Reports Server (NTRS)

    Heaslet, Max A; Spreiter, John R

    1953-01-01

    Reverse flow theorems in aerodynamics are shown to be based on the same general concepts involved in many reciprocity theorems in the physical sciences. Reciprocal theorems for both steady and unsteady motion are found as a logical consequence of this approach. No restrictions on wing plan form or flight Mach number are made beyond those required in linearized compressible-flow analysis. A number of examples are listed, including general integral theorems for lifting, rolling, and pitching wings and for wings in nonuniform downwash fields. Correspondence is also established between the buildup of circulation with time of a wing starting impulsively from rest and the buildup of lift of the same wing moving in the reverse direction into a sharp-edged gust.

  4. Fluctuation theorem for channel-facilitated membrane transport of interacting and noninteracting solutes.

    PubMed

    Berezhkovskii, Alexander M; Bezrukov, Sergey M

    2008-05-15

    In this paper, we discuss the fluctuation theorem for channel-facilitated transport of solutes through a membrane separating two reservoirs. The transport is characterized by the probability, P_n(t), that n solute particles have been transported from one reservoir to the other in time t. The fluctuation theorem establishes a relation between P_n(t) and P_{-n}(t): the ratio P_n(t)/P_{-n}(t) is independent of time and equal to exp(nβA), where βA is the affinity measured in thermal energy units. We show that the same fluctuation theorem is true for both single- and multichannel transport of noninteracting particles and particles which strongly repel each other.
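
    A quick Monte Carlo sketch of this relation for a toy single-channel model (forward and backward hop numbers drawn as independent Poisson counts; the rates are hypothetical, not from the paper): the ratio P_n(t)/P_{-n}(t) stays at (a/b)^n = exp(nβA) at different times:

      import numpy as np

      rng = np.random.default_rng(1)
      a, b = 2.0, 1.0                          # hypothetical forward / backward hop rates
      for t in (1.0, 4.0):
          n = rng.poisson(a * t, 200_000) - rng.poisson(b * t, 200_000)  # net transport
          for k in (1, 2, 3):
              ratio = np.mean(n == k) / np.mean(n == -k)
              print(t, k, round(ratio, 2), (a / b) ** k)  # the ratio is time-independent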

  5. One-range addition theorems for derivatives of Slater-type orbitals.

    PubMed

    Guseinov, Israfil

    2004-06-01

    Using addition theorems for STOs introduced by the author with the help of complete orthonormal sets of ψ^α-ETOs (Guseinov, I. I. (2003) J Mol Model 9:190-194), where α = 1, 0, -1, -2, ..., a large number of one-range addition theorems for first and second derivatives of STOs are established. These addition theorems are especially useful for computation of multicenter-multielectron integrals over STOs that arise in the Hartree-Fock-Roothaan approximation and also in the Hylleraas function method, which play a significant role in the study of electronic structure and electron-nuclei interaction properties of atoms, molecules, and solids. The relationships obtained are valid for arbitrary quantum numbers, screening constants and locations of STOs.

  6. Out-of-time-order fluctuation-dissipation theorem

    NASA Astrophysics Data System (ADS)

    Tsuji, Naoto; Shitara, Tomohiro; Ueda, Masahito

    2018-01-01

    We prove a generalized fluctuation-dissipation theorem for a certain class of out-of-time-ordered correlators (OTOCs) with a modified statistical average, which we call bipartite OTOCs, for general quantum systems in thermal equilibrium. The difference between the bipartite and physical OTOCs defined by the usual statistical average is quantified by a measure of quantum fluctuations known as the Wigner-Yanase skew information. Within this difference, the theorem describes a universal relation between chaotic behavior in quantum systems and a nonlinear-response function that involves a time-reversed process. We show that the theorem can be generalized to higher-order n-partite OTOCs as well as in the form of generalized covariance.

  7. Some theorems and properties of multi-dimensional fractional Laplace transforms

    NASA Astrophysics Data System (ADS)

    Ahmood, Wasan Ajeel; Kiliçman, Adem

    2016-06-01

    The aim of this work is to study theorems and properties of the one-dimensional fractional Laplace transform, to generalize some of its properties so that they remain valid for the multi-dimensional fractional Laplace transform, and to give the definition of the multi-dimensional fractional Laplace transform. This study includes: treating the one-dimensional fractional Laplace transform for functions of only one independent variable, with some important theorems and properties, and developing some properties of the one-dimensional fractional Laplace transform for the multi-dimensional case. Also, we obtain a fractional Laplace inversion theorem after a short survey of fractional analysis based on the modified Riemann-Liouville derivative.

  8. A coupled mode formulation by reciprocity and a variational principle

    NASA Technical Reports Server (NTRS)

    Chuang, Shun-Lien

    1987-01-01

    A coupled mode formulation for parallel dielectric waveguides is presented via two methods: a reciprocity theorem and a variational principle. In the first method, a generalized reciprocity relation for two sets of field solutions satisfying Maxwell's equations and the boundary conditions in two different media, respectively, is derived. Based on the generalized reciprocity theorem, the coupled mode equations can then be formulated. The second method using a variational principle is also presented for a general waveguide system which can be lossy. The results of the variational principle can also be shown to be identical to those from the reciprocity theorem. The exact relations governing the 'conventional' and the new coupling coefficients are derived. It is shown analytically that the present formulation satisfies the reciprocity theorem and power conservation exactly, while the conventional theory violates the power conservation and reciprocity theorem by as much as 55 percent and the Hardy-Streifer (1985, 1986) theory by 0.033 percent, for example.

  9. Does the Coase theorem hold in real markets? An application to the negotiations between waterworks and farmers in Denmark.

    PubMed

    Abildtrup, Jens; Jensen, Frank; Dubgaard, Alex

    2012-01-01

    The Coase theorem depends on a number of assumptions, among others, perfect information about each other's payoff function, maximising behaviour and zero transaction costs. An important question is whether the Coase theorem holds for real market transactions when these assumptions are violated. This is the question examined in this paper. We consider the results of Danish waterworks' attempts to establish voluntary cultivation agreements with Danish farmers. A survey of these negotiations shows that the Coase theorem is not robust in the presence of imperfect information, non-maximising behaviour and transaction costs. Thus, negotiations between Danish waterworks and farmers may not be a suitable mechanism to achieve efficiency in the protection of groundwater quality due to violations of the assumptions of the Coase theorem. The use of standard schemes or government intervention (e.g. expropriation) may, under some conditions, be a more effective and cost efficient approach for the protection of vulnerable groundwater resources in Denmark. Copyright © 2011 Elsevier Ltd. All rights reserved.

  10. A Formally-Verified Decision Procedure for Univariate Polynomial Computation Based on Sturm's Theorem

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony J.; Munoz, Cesar A.

    2014-01-01

    Sturm's Theorem is a well-known result in real algebraic geometry that provides a function that computes the number of roots of a univariate polynomial in a semiopen interval. This paper presents a formalization of this theorem in the PVS theorem prover, as well as a decision procedure that checks whether a polynomial is always positive, nonnegative, nonzero, negative, or nonpositive on any input interval. The soundness and completeness of the decision procedure is proven in PVS. The procedure and its correctness properties enable the implementation of a PVS strategy for automatically proving existential and universal univariate polynomial inequalities. Since the decision procedure is formally verified in PVS, the soundness of the strategy depends solely on the internal logic of PVS rather than on an external oracle. The procedure itself uses a combination of Sturm's Theorem, an interval bisection procedure, and the fact that a polynomial with exactly one root in a bounded interval is always nonnegative on that interval if and only if it is nonnegative at both endpoints.
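
    For readers unfamiliar with the root-counting procedure that underlies the formalization, here is a plain floating-point Python sketch of Sturm's Theorem (an illustration only, not the verified PVS procedure):

      from numpy.polynomial import Polynomial
      import numpy as np

      def sturm_sequence(p):
          seq = [p, p.deriv()]
          while seq[-1].degree() > 0:
              seq.append(-(seq[-2] % seq[-1]))   # negated polynomial remainders
          return seq

      def sign_changes(seq, x):
          signs = [s for s in (np.sign(q(x)) for q in seq) if s != 0]
          return sum(1 for u, v in zip(signs, signs[1:]) if u != v)

      def count_roots(p, lo, hi):
          """Distinct real roots of p in the interval (lo, hi]."""
          seq = sturm_sequence(p)
          return sign_changes(seq, lo) - sign_changes(seq, hi)

      p = Polynomial([6, -7, 0, 1])   # x^3 - 7x + 6 = (x - 1)(x - 2)(x + 3)
      print(count_roots(p, 0, 5))     # 2  (roots 1 and 2)
      print(count_roots(p, -4, 0))    # 1  (root -3)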

  11. Nonlinear Cascades of Surface Oceanic Geostrophic Kinetic Energy in the Frequency Domain

    DTIC Science & Technology

    2012-09-01

    kinetic energy in wavenumber k space for surface ocean geostrophic flows have been computed from satellite altimetry data of sea surface height (Scott... ≤ 0.65 k_N, where k_N corresponds to the Nyquist scale. The filter is applied to q̂_1 and q̂_2, the Fourier transforms of q_1 and q_2, at every time step

  12. Linear Modulation Techniques for Digital Microwave

    DTIC Science & Technology

    1979-08-01

    impulse response. Following Forney, a polynomial R(D) is defined such that R(D) = Σ_{i=-∞}^{+∞} R_h(iT) D^i. (2-2) The coefficients of R(D) are symmetrical... [Figure: bit-error performance versus peak amplifier Eb/N0 (0-40 dB), comparing Nyquist-equalized and duobinary-equalized modified QAM.]

  13. Objective evaluation of slanted edge charts

    NASA Astrophysics Data System (ADS)

    Hornung, Harvey

    2015-01-01

    Camera objective characterization methodologies are widely used in the digital camera industry. Most objective characterization systems rely on a chart with specific patterns; a software algorithm measures a degradation or difference between the captured image and the chart itself. The Spatial Frequency Response (SFR) method, which is part of the ISO 12233 standard, is now very commonly used in the imaging industry, as it is a very convenient way to measure a camera Modulation Transfer Function (MTF). The SFR algorithm can measure frequencies beyond the Nyquist frequency thanks to super-resolution, so it does provide useful information on aliasing and can provide modulation for frequencies between half Nyquist and Nyquist on all color channels of a color sensor with a Bayer pattern. The measurement process relies on a chart that is simple to manufacture: a straight transition from a bright reflectance to a dark one (black and white, for instance), while a sine chart requires handling precise shades of gray, which can also create all sorts of issues with printers that rely on half-toning. However, no technology can create a perfect edge, so it is important to assess the quality of the chart and understand how it affects the accuracy of the measurement. In this article, I describe a protocol to characterize the MTF of a slanted edge chart, using a high-resolution flatbed scanner. The main idea is to use the RAW output of the scanner as a high-resolution micro-densitometer: since the signal is linear, it is suitable for measuring the chart MTF with the SFR algorithm. The scanner needs to be calibrated in sharpness: the scanner MTF is measured with a calibrated sine chart and inverted to compensate for the modulation loss from the scanner. Then the true chart MTF is computed. This article compares measured MTF from commercial charts and charts printed on printers, compares how the contrast of the edge (using different shades of gray) can affect the chart MTF, and then concludes on the distance range and camera resolution over which the chart can reliably measure the camera MTF.
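
    A much-simplified 1-D Python sketch of the edge-based MTF computation discussed above (the ISO 12233 slant-projection and oversampling steps are omitted, and a Gaussian blur stands in for the system under test):

      import numpy as np

      n = 256
      edge = (np.arange(n) >= n // 2).astype(float)     # ideal dark-to-bright edge
      sigma = 2.0                                       # hypothetical blur, in pixels
      k = np.arange(-12, 13)
      kernel = np.exp(-k**2 / (2 * sigma**2))
      kernel /= kernel.sum()
      esf = np.convolve(edge, kernel, mode="same")      # edge spread function

      lsf = np.gradient(esf) * np.hanning(n)            # line spread function, windowed
      mtf = np.abs(np.fft.rfft(lsf))
      mtf /= mtf[0]
      freq = np.fft.rfftfreq(n)                         # cycles/pixel; Nyquist is 0.5
      print(freq[np.argmin(np.abs(mtf - 0.5))])         # MTF50, ~0.09 for this blur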

  14. SU-D-204-05: Quantitative Comparison of a High Resolution Micro-Angiographic Fluoroscopic (MAF) Detector with a Standard Flat Panel Detector (FPD) Using the New Metric of Generalized Measured Relative Object Detectability (GM-ROD)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Russ, M; Ionita, C; Bednarek, D

    Purpose: In endovascular image-guided neuro-interventions, visualization of fine detail is paramount. For example, the ability of the interventionist to visualize the stent struts depends heavily on the x-ray imaging detector performance. Methods: A study to examine the relative performance of the high resolution MAF-CMOS (pixel size 75 µm, Nyquist frequency 6.6 cycles/mm) and standard Flat Panel (pixel size 194 µm, Nyquist frequency 2.5 cycles/mm) detectors in imaging a neuro stent was done using the Generalized Measured Relative Object Detectability (GM-ROD) metric. Low quantum noise images of a deployed stent were obtained by averaging 95 frames obtained by both detectors without changing other exposure or geometric parameters. The square of the Fourier transform of each image is taken and divided by the generalized normalized noise power spectrum to give an effective measured task-specific signal-to-noise ratio. This expression is then integrated from 0 to each of the detector's Nyquist frequencies, and the GM-ROD value is determined by taking a ratio of the integrals for the MAF-CMOS to that of the FPD. The lower bound of integration can be varied to emphasize high frequencies in the detector comparisons. Results: The MAF-CMOS detector exhibits vastly superior performance over the FPD when integrating over all frequencies, yielding a GM-ROD value of 63.1. The lower bound of integration was stepped up in increments of 0.5 cycles/mm for higher frequency comparisons. As the lower bound increased, the GM-ROD value was augmented, reflecting the superior performance of the MAF-CMOS in the high frequency regime. Conclusion: GM-ROD is a versatile metric that can provide quantitative detector and task dependent comparisons that can be used as a basis for detector selection. Supported by NIH Grant: 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
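
    A schematic Python sketch of the GM-ROD computation as described (synthetic random images and flat noise power spectra stand in for the measured stent images and NNPS; the pixel pitches 0.075 mm and 0.194 mm match the quoted Nyquist frequencies):

      import numpy as np

      def gm_rod(img_a, pitch_a, nnps_a, img_b, pitch_b, nnps_b, f_lo=0.0):
          """Ratio of integrated |FT|^2 / NNPS over [f_lo, each detector's Nyquist]."""
          def integral(img, pitch, nnps):
              fx = np.fft.fftfreq(img.shape[0], d=pitch)   # cycles/mm
              fy = np.fft.fftfreq(img.shape[1], d=pitch)
              f = np.hypot(*np.meshgrid(fx, fy, indexing="ij"))
              snr2 = np.abs(np.fft.fft2(img)) ** 2 / nnps  # task SNR^2 density
              mask = (f >= f_lo) & (f <= 1.0 / (2.0 * pitch))
              return snr2[mask].sum()                      # discrete stand-in for the integral
          return integral(img_a, pitch_a, nnps_a) / integral(img_b, pitch_b, nnps_b)

      rng = np.random.default_rng(2)
      img_maf = rng.normal(size=(128, 128))   # stand-in for the MAF-CMOS stent image
      img_fpd = rng.normal(size=(64, 64))     # stand-in for the flat-panel image
      print(gm_rod(img_maf, 0.075, 1.0, img_fpd, 0.194, 1.0, f_lo=0.5))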

  15. Some functional limit theorems for compound Cox processes

    NASA Astrophysics Data System (ADS)

    Korolev, Victor Yu.; Chertok, A. V.; Korchagin, A. Yu.; Kossova, E. V.; Zeifman, Alexander I.

    2016-06-01

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.
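
    As a toy illustration of the simplest instance of this convergence (not the paper's construction), a compound Poisson sum with finite-variance jumps, suitably scaled, is already close to Gaussian at high intensity:

      import numpy as np

      rng = np.random.default_rng(3)
      lam, t, trials = 50.0, 10.0, 20_000
      counts = rng.poisson(lam * t, size=trials)                       # jumps per path
      sums = np.array([rng.normal(0.0, 1.0, k).sum() for k in counts]) # compound sums
      z = sums / np.sqrt(lam * t)               # each jump has unit variance
      print(z.mean(), z.var())                  # ~0 and ~1: the Gaussian limit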

  16. Quantum Mechanics, Can It Be Consistent with Locality?

    NASA Astrophysics Data System (ADS)

    Nisticò, Giuseppe; Sestito, Angela

    2011-07-01

    We single out an alternative, strict interpretation of the Einstein-Podolsky-Rosen criterion of reality, and identify the implied extensions of quantum correlations. Then we prove that the theorem of Bell, and the non-locality theorems without inequalities, fail if the new extensions are adopted. Therefore, these theorems can be interpreted as arguments against the wide interpretation of the criterion of reality rather than as a violation of locality.

  17. Specification Improvement Through Analysis of Proof Structure (SITAPS): High Assurance Software Development

    DTIC Science & Technology

    2016-02-01

    proof in mathematics. For example, consider the proof of the Pythagorean Theorem illustrated at: http://www.cut-the-knot.org/pythagoras/ where 112...methods and tools have made significant progress in their ability to model software designs and prove correctness theorems about the systems modeled...“assumption criticality” or “theorem root set size”. SITAPS detects potentially brittle verification cases. SITAPS provides tools and techniques that

  18. Delaunay Refinement Mesh Generation

    DTIC Science & Technology

    1997-05-18

    edge is locally Delaunay; thus, by Lemma 3, every edge is Delaunay. Theorem 5 Let V be a set of three or more vertices in the plane that are not all...this document. Delaunay triangulations are valuable in part because they have the following optimality properties. Theorem 6 Among all triangulations of...no locally Delaunay edges. By Theorem 5, a triangulation with no locally Delaunay edges is the Delaunay triangulation. The property of max-min

  19. Development of a Dependency Theory Toolbox for Database Design.

    DTIC Science & Technology

    1987-12-01

    published algorithms and theorems, and hand simulating these algorithms can be a tedious and error-prone chore. Additionally, since the process of...to design and study relational databases exists in the form of published algorithms and theorems. However, hand simulating these algorithms can be a tedious and error-prone chore...published algorithms and theorems. Hand simulating these algorithms can be a tedious and error-prone chore. Therefore, a toolbox of algorithms and

  20. Field Computation and Nonpropositional Knowledge.

    DTIC Science & Technology

    1987-09-01

    field computer. It is based on a generalization of Taylor's theorem to continuous dimensional vector spaces...generalization of Taylor's theorem to continuous dimensional vector spaces. A number of field computations are illustrated, including several transforma...paradigm. The "old" AI has been quite successful in performing a number of difficult tasks, such as theorem proving, chess playing, medical diagnosis and

  1. Ignoring the Innocent: Non-combatants in Urban Operations and in Military Models and Simulations

    DTIC Science & Technology

    2006-01-01

    such a model yields is a sufficiency theorem, a single run does not provide any information on the robustness of such theorems. That is, given that...often formally resolvable via inspection, simple differentiation, the implicit function theorem, comparative statics, and so on. The only way to...Pythagoras, and Bactowars. For each, Grieger discusses model parameters, data collection, terrain, and other features. Grieger also discusses

  2. Some functional limit theorems for compound Cox processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korolev, Victor Yu.; Institute of Informatics Problems FRC CSC RAS; Chertok, A. V.

    2016-06-08

    An improved version of the functional limit theorem is proved establishing weak convergence of random walks generated by compound doubly stochastic Poisson processes (compound Cox processes) to Lévy processes in the Skorokhod space under more realistic moment conditions. As corollaries, theorems are proved on convergence of random walks with jumps having finite variances to Lévy processes with variance-mean mixed normal distributions, in particular, to stable Lévy processes.

  3. Mean energy of some interacting bosonic systems derived by virtue of the generalized Hellmann-Feynman theorem

    NASA Astrophysics Data System (ADS)

    Fan, Hong-yi; Xu, Xue-xiang

    2009-06-01

    By virtue of the generalized Hellmann-Feynman theorem [H. Y. Fan and B. Z. Chen, Phys. Lett. A 203, 95 (1995)], we derive the mean energy of some interacting bosonic systems for some Hamiltonian models without diagonalizing the Hamiltonians. Our work extends the field of applications of the Hellmann-Feynman theorem and may enrich the theory of quantum statistics.

  4. Reduction theorems for optimal unambiguous state discrimination of density matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Raynal, Philippe; Luetkenhaus, Norbert; Enk, Steven J. van

    2003-08-01

    We present reduction theorems for the problem of optimal unambiguous state discrimination of two general density matrices. We show that this problem can be reduced to that of two density matrices that have the same rank n and are described in a Hilbert space of dimension 2n. We also show how to use the reduction theorems to discriminate unambiguously between N mixed states (N ≥ 2).

  5. Proof of factorization using background field method of QCD

    NASA Astrophysics Data System (ADS)

    Nayak, Gouranga C.

    2010-02-01

    The factorization theorem plays a central role at high-energy colliders in the study of standard model and beyond-standard-model physics. The proof of the factorization theorem was given by Collins, Soper and Sterman to all orders in perturbation theory by using a diagrammatic approach. One might wonder if the proof of the factorization theorem can be obtained through symmetry considerations at the Lagrangian level. In this paper we provide such a proof.

  6. Proof of factorization using background field method of QCD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nayak, Gouranga C.

    The factorization theorem plays a central role at high-energy colliders in the study of standard model and beyond-standard-model physics. The proof of the factorization theorem was given by Collins, Soper and Sterman to all orders in perturbation theory by using a diagrammatic approach. One might wonder if the proof of the factorization theorem can be obtained through symmetry considerations at the Lagrangian level. In this paper we provide such a proof.

  7. Formalization of the Integral Calculus in the PVS Theorem Prover

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    2004-01-01

    The PVS theorem prover is a widely used formal verification tool for the analysis of safety-critical systems. Though the PVS prover is fully equipped to support deduction in a very general logic framework, namely higher-order logic, it must nevertheless be augmented with the definitions and associated theorems for every branch of mathematics and computer science that is used in a verification. This is a formidable task, ultimately requiring the contributions of researchers and developers all over the world. This paper reports on the formalization of the integral calculus in the PVS theorem prover. All of the basic definitions and theorems covered in a first course on integral calculus have been completed. The theory and proofs were based on Rosenlicht's classic text on real analysis and follow the traditional epsilon-delta method. The goal of this work was to provide a practical set of PVS theories that could be used for the verification of hybrid systems that arise in air traffic management systems and other aerospace applications. All of the basic linearity, integrability, boundedness, and continuity properties of the integral calculus were proved. The work culminated in the proof of the Fundamental Theorem of Calculus. There is a brief discussion about why mechanically checked proofs are so much longer than standard mathematics textbook proofs.

  8. The Knaster-Kuratowski-Mazurkiewicz theorem and abstract convexities

    NASA Astrophysics Data System (ADS)

    Cain, George L., Jr.; González, Luis

    2008-02-01

    The Knaster-Kuratowski-Mazurkiewicz covering theorem (KKM) is the basic ingredient in the proofs of many so-called "intersection" theorems and related fixed point theorems (including the famous Brouwer fixed point theorem). The KKM theorem was extended from Rn to Hausdorff linear spaces by Ky Fan. There has subsequently been a plethora of attempts at extending KKM-type results to arbitrary topological spaces. Virtually all of these involve the introduction of some sort of abstract convexity structure for a topological space; among others we could mention H-spaces and G-spaces. We have introduced a new abstract convexity structure that generalizes the concept of a metric space with a convex structure, introduced by E. Michael in [E. Michael, Convex structures and continuous selections, Canad. J. Math. 11 (1959) 556-575], and called a topological space endowed with this structure an M-space. In an article by Sehie Park and Hoonjoo Kim [S. Park, H. Kim, Coincidence theorems for admissible multifunctions on generalized convex spaces, J. Math. Anal. Appl. 197 (1996) 173-187], the concepts of G-spaces and metric spaces with Michael's convex structure were mentioned together, but no kind of relationship was shown. In this article, we prove that G-spaces and M-spaces are closely related. We also introduce the concept of an L-space, which is inspired by the MC-spaces of J.V. Llinares [J.V. Llinares, Unified treatment of the problem of existence of maximal elements in binary relations: A characterization, J. Math. Econom. 29 (1998) 285-302], and establish relationships between the convexities of these spaces and the spaces previously mentioned.

  9. A generalized measurement equation and van Cittert-Zernike theorem for wide-field radio astronomical interferometry

    NASA Astrophysics Data System (ADS)

    Carozzi, T. D.; Woan, G.

    2009-05-01

    We derive a generalized van Cittert-Zernike (vC-Z) theorem for radio astronomy that is valid for partially polarized sources over an arbitrarily wide field of view (FoV). The classical vC-Z theorem is the theoretical foundation of radio astronomical interferometry, and its application is the basis of interferometric imaging. Existing generalized vC-Z theorems in radio astronomy assume, however, either paraxiality (narrow FoV) or scalar (unpolarized) sources. Our theorem uses neither of these assumptions, which are seldom fulfilled in practice in radio astronomy, and treats the full electromagnetic field. To handle wide, partially polarized fields, we extend the two-dimensional (2D) electric field (Jones vector) formalism of the standard `Measurement Equation' (ME) of radio astronomical interferometry to the full three-dimensional (3D) formalism developed in optical coherence theory. The resulting vC-Z theorem enables full-sky imaging in a single telescope pointing, and imaging based not only on standard dual-polarized interferometers (that measure 2D electric fields) but also electric tripoles and electromagnetic vector-sensor interferometers. We show that the standard 2D ME is easily obtained from our formalism in the case of dual-polarized antenna element interferometers. We also exploit an extended 2D ME to determine that dual-polarized interferometers can have polarimetric aberrations at the edges of a wide FoV. Our vC-Z theorem is particularly relevant to proposed, and recently developed, wide-FoV interferometers such as the Low Frequency Array (LOFAR) and the Square Kilometer Array (SKA), for which direction-dependent effects will be important.

  10. Efficient and Adaptive Methods for Computing Accurate Potential Surfaces for Quantum Nuclear Effects: Applications to Hydrogen-Transfer Reactions.

    PubMed

    DeGregorio, Nicole; Iyengar, Srinivasan S

    2018-01-09

    We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, an approximation to the (b) potential surface, its (c) gradients, and (d) a Shannon information theory based expression that estimates the local entropy associated with the quantum wavepacket. These four criteria together enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or the local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to compute (a) potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene) where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen-bonded systems is demonstrated here.

  11. A proof of the Woodward-Lawson sampling method for a finite linear array

    NASA Technical Reports Server (NTRS)

    Somers, Gary A.

    1993-01-01

    An extension of the continuous aperture Woodward-Lawson sampling theorem has been developed for a finite linear array of equidistant identical elements with arbitrary excitations. It is shown that by sampling the array factor at a finite number of specified points in the far field, the exact array factor over all space can be efficiently reconstructed in closed form. The specified sample points lie in real space and hence are measurable provided that the interelement spacing is greater than approximately one half of a wavelength. This paper provides insight as to why the length parameter used in the sampling formulas for discrete arrays is larger than the physical span of the lattice points in contrast with the continuous aperture case where the length parameter is precisely the physical aperture length.
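
    The central claim, that finitely many far-field samples determine the entire array factor of a discrete array, can be checked numerically. The sketch below is illustrative only: it assumes a uniformly spaced array and takes samples at psi_m = 2*pi*m/N in the variable psi = k*d*cos(theta); it is not the paper's sampling formula.

    ```python
    # Check: N far-field samples at psi_m = 2*pi*m/N determine the array factor
    # AF(psi) = sum_n a_n exp(j*n*psi) of an N-element equispaced array everywhere.
    # Illustrative only; excitations are random, not a designed pattern.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 8
    a = rng.normal(size=N) + 1j * rng.normal(size=N)    # arbitrary excitations

    def array_factor(a, psi):
        n = np.arange(len(a))
        return (a[None, :] * np.exp(1j * np.outer(psi, n))).sum(axis=1)

    psi_m = 2 * np.pi * np.arange(N) / N
    af_samples = array_factor(a, psi_m)

    a_rec = np.fft.fft(af_samples) / N                  # invert the sample map
    psi = np.linspace(-np.pi, np.pi, 1001)
    err = np.max(np.abs(array_factor(a, psi) - array_factor(a_rec, psi)))
    print(f"max reconstruction error: {err:.2e}")       # ~machine precision
    ```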

  12. Infinite Set of Soft Theorems in Gauge-Gravity Theories as Ward-Takahashi Identities

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Shiu, Gary

    2018-05-01

    We show that the soft photon, gluon, and graviton theorems can be understood as the Ward-Takahashi identities of large gauge transformation, i.e., diffeomorphism that does not fall off at spatial infinity. We found infinitely many new identities which constrain the higher order soft behavior of the gauge bosons and gravitons in scattering amplitudes of gauge and gravity theories. Diagrammatic representations of these soft theorems are presented.

  13. Teaching the Jahn-Teller Theorem: A Simple Exercise That Illustrates How the Magnitude of Distortion Depends on the Number of Electrons and Their Occupation of the Degenerate Energy Level

    ERIC Educational Resources Information Center

    Johansson, Adam Johannes

    2013-01-01

    Teaching the Jahn-Teller theorem offers several challenges. For many students, the first encounter comes in coordination chemistry, which can be difficult due to the already complicated nature of transition-metal complexes. Moreover, a deep understanding of the Jahn-Teller theorem requires that one is well acquainted with quantum mechanics and…

  14. Research on Quantum Algorithms at the Institute for Quantum Information

    DTIC Science & Technology

    2009-10-17

    accuracy threshold theorem for the one-way quantum computer. Their proof is based on a novel scheme, in which a noisy cluster state in three spatial...detected. The proof applies to independent stochastic noise but (in contrast to proofs of the quantum accuracy threshold theorem based on concatenated...proved quantum threshold theorems for long-range correlated non-Markovian noise, for leakage faults, for the one-way quantum computer, for postselected

  15. Deductive Synthesis of the Unification Algorithm,

    DTIC Science & Technology

    1981-06-01

    DEDUCTIVE SYNTHESIS OF THE UNIFICATION ALGORITHM Zohar Manna Richard Waldinger Computer Science Department Artificial Intelligence Center...theorem proving," Artificial Intelligence Journal, Vol. 9, No. 1, pp. 1-35. Boyer, R. S. and J. S. Moore [Jan. 1975], "Proving theorems about LISP...d'Intelligence Artificielle, U.E.R. de Luminy, Université d'Aix-Marseille II. Green, C. C. [May 1969], "Application of theorem proving to problem

  16. Generalized Synchronization in AN Array of Nonlinear Dynamic Systems with Applications to Chaotic Cnn

    NASA Astrophysics Data System (ADS)

    Min, Lequan; Chen, Guanrong

    This paper establishes some generalized synchronization (GS) theorems for a coupled discrete array of difference systems (CDADS) and a coupled continuous array of differential systems (CCADS). These constructive theorems provide general representations of GS in CDADS and CCADS. Based on these theorems, one can design GS-driven CDADS and CCADS via appropriate (invertible) transformations. As applications, the results are applied to autonomous and nonautonomous coupled Chen cellular neural network (CNN) CDADS and CCADS, discrete bidirectional Lorenz CNN CDADS, nonautonomous bidirectional Chua CNN CCADS, and nonautonomous bidirectional Chen CNN CDADS and CCADS, respectively. Extensive numerical simulations show their complex dynamic behaviors. These theorems provide new means for understanding the GS phenomena of complex discrete and continuously differentiable networks.

  17. Fixed-point theorems for families of weakly non-expansive maps

    NASA Astrophysics Data System (ADS)

    Mai, Jie-Hua; Liu, Xin-He

    2007-10-01

    In this paper, we present some fixed-point theorems for families of weakly non-expansive maps under some relatively weaker and more general conditions. Our results generalize and improve several results due to Jungck [G. Jungck, Fixed points via a generalized local commutativity, Int. J. Math. Math. Sci. 25 (8) (2001) 497-507], Jachymski [J. Jachymski, A generalization of the theorem by Rhoades and Watson for contractive type mappings, Math. Japon. 38 (6) (1993) 1095-1102], Guo [C. Guo, An extension of fixed point theorem of Krasnoselski, Chinese J. Math. (P.O.C.) 21 (1) (1993) 13-20], Rhoades [B.E. Rhoades, A comparison of various definitions of contractive mappings, Trans. Amer. Math. Soc. 226 (1977) 257-290], and others.

  18. Common Coupled Fixed Point Theorems for Two Hybrid Pairs of Mappings under φ-ψ Contraction

    PubMed Central

    Handa, Amrish

    2014-01-01

    We introduce the concept of (EA) property and occasional w-compatibility for a hybrid pair F : X × X → 2^X and f : X → X. We also introduce the common (EA) property for two hybrid pairs F, G : X × X → 2^X and f, g : X → X. We establish some common coupled fixed point theorems for two hybrid pairs of mappings under φ-ψ contraction on noncomplete metric spaces. An example is also given to validate our results. We improve, extend and generalize several known results. The results of this paper generalize the common fixed point theorems for hybrid pairs of mappings and essentially contain fixed point theorems for hybrid pairs of mappings. PMID:27340688

  19. Transactions of the Conference of Army Mathematicians (25th).

    DTIC Science & Technology

    1980-01-01

    hypothesis (see description of H in Theorem 1). It follows from (4.16) and (4.17) that [display equations garbled in the scanned original] and, since the greatest eigenvalue of H is...[garbled display equations omitted]...Theorem 8.10 and Theorem 8.11. For these tables, use of (8.36) to get bounds for |a_m| is not possible. It will be noted that Theorems 8.10 and 8.11 give

  20. Lindeberg theorem for Gibbs-Markov dynamics

    NASA Astrophysics Data System (ADS)

    Denker, Manfred; Senti, Samuel; Zhang, Xuan

    2017-12-01

    A dynamical array consists of a family of functions {f_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1} and a family of initial times {τ_{n,i} : 1 ≤ i ≤ k_n, n ≥ 1}. For a dynamical system (X, T) we identify distributional limits for sums of the form s_n^{-1} Σ_{i=1}^{k_n} (f_{n,i} ∘ T^{τ_{n,i}} − a_{n,i}) for suitable (non-random) constants s_n > 0 and a_{n,i} ∈ ℝ. We derive a Lindeberg-type central limit theorem for dynamical arrays. Applications include new central limit theorems for functions which are not locally Lipschitz continuous and central limit theorems for statistical functions of time series obtained from Gibbs-Markov systems. Our results, which hold for more general dynamics, are stated in the context of Gibbs-Markov dynamical systems for convenience.

  1. A reciprocal theorem for a mixture theory. [development of linearized theory of interacting media

    NASA Technical Reports Server (NTRS)

    Martin, C. J.; Lee, Y. M.

    1972-01-01

    A dynamic reciprocal theorem for a linearized theory of interacting media is developed. The constituents of the mixture are a linear elastic solid and a linearly viscous fluid. In addition to Steel's field equations, boundary conditions and inequalities on the material constants that have been shown by Atkin, Chadwick and Steel to be sufficient to guarantee uniqueness of solution to initial-boundary value problems are used. The elements of the theory are given and two different boundary value problems are considered. The reciprocal theorem is derived with the aid of the Laplace transform and the divergence theorem and this section is concluded with a discussion of the special cases which arise when one of the constituents of the mixture is absent.

  2. A Theorem on the Rank of a Product of Matrices with Illustration of Its Use in Goodness of Fit Testing.

    PubMed

    Satorra, Albert; Neudecker, Heinz

    2015-12-01

    This paper develops a theorem that facilitates computing the degrees of freedom of Wald-type chi-square tests for moment restrictions when there is rank deficiency of key matrices involved in the definition of the test. An if and only if (iff) condition is developed for a simple rule of difference of ranks to be used when computing the desired degrees of freedom of the test. The theorem is developed by exploiting basic tools of matrix algebra. The theorem is shown to play a key role in proving the asymptotic chi-squaredness of a goodness of fit test in moment structure analysis, and in finding the degrees of freedom of this chi-square statistic.

  3. Stochastic stability properties of jump linear systems

    NASA Technical Reports Server (NTRS)

    Feng, Xiangbo; Loparo, Kenneth A.; Ji, Yuandong; Chizeck, Howard J.

    1992-01-01

    Jump linear systems are defined as a family of linear systems with randomly jumping parameters (usually governed by a Markov jump process) and are used to model systems subject to failures or changes in structure. The authors study stochastic stability properties in jump linear systems and the relationship among various moment and sample path stability properties. It is shown that all second moment stability properties are equivalent and are sufficient for almost sure sample path stability, and a testable necessary and sufficient condition for second moment stability is derived. The Lyapunov exponent method for the study of almost sure sample stability is discussed, and a theorem which characterizes the Lyapunov exponents of jump linear systems is presented.
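
    For concreteness, the sketch below implements one standard testable second-moment condition from the discrete-time MJLS literature: the system x_{k+1} = A_{theta_k} x_k with transition matrix P is mean-square stable iff the spectral radius of (P^T ⊗ I) blockdiag(A_i ⊗ A_i) is less than one. This is offered as a representative form, not necessarily the exact condition derived by the authors, and the two-mode example is hypothetical.

    ```python
    # Hedged sketch: a standard mean-square stability test for a discrete-time
    # jump linear system x_{k+1} = A_{theta_k} x_k, theta a Markov chain with
    # transition matrix P (p_ij = Prob(theta_{k+1} = j | theta_k = i)).
    # Representative form from the MJLS literature, not necessarily the paper's.
    import numpy as np
    from scipy.linalg import block_diag

    def mean_square_stable(A_list, P):
        n = A_list[0].shape[0]
        D = block_diag(*[np.kron(A, A) for A in A_list])   # per-mode second moments
        aug = np.kron(P.T, np.eye(n * n)) @ D              # coupled moment recursion
        return np.max(np.abs(np.linalg.eigvals(aug))) < 1.0

    # Hypothetical two-mode example: one stable mode, one unstable mode; with
    # fast enough switching the overall system is still mean-square stable.
    A1 = np.array([[0.3, 0.0], [0.0, 0.2]])
    A2 = np.array([[1.1, 0.0], [0.0, 1.05]])
    P = np.array([[0.1, 0.9], [0.9, 0.1]])
    print(mean_square_stable([A1, A2], P))    # True for these values
    ```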

  4. Manganese oxide micro-supercapacitors with ultra-high areal capacitance

    NASA Astrophysics Data System (ADS)

    Wang, Xu; Myers, Benjamin D.; Yan, Jian; Shekhawat, Gajendra; Dravid, Vinayak; Lee, Pooi See

    2013-05-01

    A symmetric micro-supercapacitor is constructed by electrochemically depositing manganese oxide onto micro-patterned current collectors. High surface-to-volume ratio of manganese oxide and short diffusion distance between electrodes give an ultra-high areal capacitance of 56.3 mF cm-2 at a current density of 27.2 μA cm-2. Electronic supplementary information (ESI) available: Experimental procedures; optical images of micro-supercapacitors; areal capacitances of samples M-0.3C, M-0.6C and M-0.9C; illustration of interdigital finger electrodes; Nyquist plot of Co(OH)2 deposited on micro-electrodes. See DOI: 10.1039/c3nr00210a

  5. A 0.9-V 12-bit 40-MSPS Pipeline ADC for Wireless Receivers

    NASA Astrophysics Data System (ADS)

    Ito, Tomohiko; Itakura, Tetsuro

    A 0.9-V 12-bit 40-MSPS pipeline ADC with an I/Q amplifier sharing technique is presented for wireless receivers. To achieve high linearity even at a 0.9-V supply, the clock signals to the sampling switches are boosted over 0.9 V in the conversion stages. The clock-boosting circuit for lifting these clocks is shared between the I-ch ADC and the Q-ch ADC, reducing the area penalty. The low supply voltage narrows the available output range of the operational amplifier. A pseudo-differential (PD) amplifier with two-gain-stage common-mode feedback (CMFB) is proposed in view of its wide output range and power efficiency. This ADC is fabricated in 90-nm CMOS technology. At 40 MS/s, the measured SNDR is 59.3 dB and the corresponding effective number of bits (ENOB) is 9.6. Up to the Nyquist frequency, the ENOB remains above 9.3. The ADC dissipates 17.3 mW/ch; this performance makes it suitable for mobile wireless systems such as WLAN/WiMAX.
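
    The quoted ENOB follows from the standard SNDR-to-ENOB conversion; a one-line check:

    ```python
    # Standard conversion between SNDR and effective number of bits:
    # ENOB = (SNDR - 1.76 dB) / 6.02 dB/bit.
    sndr_db = 59.3
    enob = (sndr_db - 1.76) / 6.02
    print(f"ENOB = {enob:.2f} bits")   # ~9.56, i.e. the reported 9.6
    ```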

  6. Ar-40/Ar-39 age of the Shergotty achondrite and implications for its post-shock thermal history

    NASA Technical Reports Server (NTRS)

    Bogard, D. D.; Nyquist, L. E.; Husain, L.

    1979-01-01

    Ar-40/Ar-39 measurements are used to determine the age of the Shergotty achondrite and the chronology of the shock event responsible for the complete conversion of its plagioclase to maskelynite is discussed. Apparent ages are found to vary between 240 and 640 million years for the whole rock sample, with a plateau age of 254 million years for a maskelynite separate. The Rb-Sr age of 165 million years determined by Nyquist et al. (1978) suggests that the maskelynite as well as the whole rock was incompletely degassed. Argon diffusion characteristics indicate a post-shock cooling time greater than 1000 years and a burial depth greater than 300 m for a thermal model of a cooling ejecta blanket of variable thickness. It is concluded that the shock event which degassed the argon and reset the Rb-Sr systematics occurred between 165 and 250 million years ago when the parent body experienced a collision in the asteroid belt.

  7. A 90GHz Bolometer Camera Detector System for the Green Bank Telescope

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Allen, Christine A.; Buchanan, Ernest D.; Chen, Tina C.; Chervenak, James A.; Devlin, Mark J.; Dicker, Simon R.; Forgione, Joshua B.

    2004-01-01

    We describe a close-packed, two-dimensional imaging detector system for operation at 90 GHz (3.3 mm) for the 100 m Green Bank Telescope (GBT). This system will provide high-sensitivity (<1 mJy in 1 s) rapid imaging (15'x15' to 250 microJy in 1 hr) at the world's largest steerable aperture. The heart of this camera is an 8x8 close-packed, Nyquist-sampled array of superconducting transition edge sensor (TES) bolometers. We have designed and are producing a functional superconducting bolometer array system using a monolithic planar architecture and high-speed multiplexed readout electronics. With an NEP of approximately 2 x 10(exp -17) W/square root Hz, the TES bolometers will provide fast, linear, sensitive response for high performance imaging. The detectors are read out by an 8x8 time domain SQUID multiplexer. A digital/analog electronics system has been designed to enable read out by SQUID multiplexers. First light for this instrument on the GBT is expected within a year.

  8. A 90GHz Bolometer Camera Detector System for the Green

    NASA Technical Reports Server (NTRS)

    Benford, Dominic J.; Allen, Christine A.; Buchanan, Ernest; Chen, Tina C.; Chervenak, James A.; Devlin, Mark J.; Dicker, Simon R.; Forgione, Joshua B.

    2004-01-01

    We describe a close-packed, two-dimensional imaging detector system for operation at 90 GHz (3.3 mm) for the 100 m Green Bank Telescope (GBT). This system will provide high sensitivity (less than 1 mJy in 1 s) rapid imaging (15'x15' to 150 microJy in 1 hr) at the world's largest steerable aperture. The heart of this camera is an 8x8 close-packed, Nyquist-sampled array of superconducting transition edge sensor (TES) bolometers. We have designed and are producing a functional superconducting bolometer array system using a monolithic planar architecture and high-speed multiplexed readout electronics. With an NEP of approximately 2 x 10(exp -17) W/square root Hz, the TES bolometers will provide fast, linear, sensitive response for high performance imaging. The detectors are read out by an 8x8 time domain SQUID multiplexer. A digital/analog electronics system has been designed to enable read out by SQUID multiplexers. First light for this instrument on the GBT is expected within a year.

  9. VizieR Online Data Catalog: 61 main-sequence and subgiant oscillations (Appourchaux+, 2012)

    NASA Astrophysics Data System (ADS)

    Appourchaux, T.; Chaplin, W. J.; Garcia, R. A.; Gruberbauer, M.; Verner, G. A.; Antia, H. M.; Benomar, O.; Campante, T. L.; Davies, G. R.; Deheuvels, S.; Handberg, R.; Hekker, S.; Howe, R.; Regulo, C.; Salabert, D.; Bedding, T. R.; White, T. R.; Ballot, J.; Mathur, S.; Silva Aguirre, V.; Elsworth, Y. P.; Basu, S.; Gilliland, R. L.; Christensen-Dalsgaard, J.; Kjeldsen, H.; Uddin, K.; Stumpe, M. C.; Barclay, T.

    2017-11-01

    Kepler observations are obtained in two different operating modes: long cadence (LC) and short cadence (SC) (Gilliland et al., 2010ApJ...713L.160G; Jenkins et al., 2010ApJ...713L..87J). This work is based on SC data. For the brightest stars (down to Kepler magnitude, Kp~=12), SC observations can be obtained for a limited number of stars (up to 512 at any given time) with a faster sampling cadence of 58.84876s (Nyquist frequency of ~8.5mHz), which permits a more precise transit timing and the performance of asteroseismology. Kepler observations are divided into three-month-long quarters (Q). A subset of 61 solar-type stars observed during quarters Q5-Q7 (March 22, 2010 to December 22, 2010) were chosen because they have oscillation modes with high signal-to-noise ratios. This length of data gives a frequency resolution of about 0.04uHz. (2 data files).
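
    Both quoted figures follow directly from the cadence and the observing span; a quick check (the nine-month span is approximated):

    ```python
    # Nyquist frequency of the 58.84876 s short cadence, and the frequency
    # resolution of a ~9-month time series (Q5-Q7); the span is approximate.
    cadence_s = 58.84876
    f_nyquist_hz = 1.0 / (2.0 * cadence_s)       # ~8.5e-3 Hz = 8.5 mHz
    span_s = 9 * 30.44 * 86400.0                 # about nine months in seconds
    resolution_hz = 1.0 / span_s                 # ~4.2e-8 Hz = 0.042 uHz
    print(f"Nyquist: {f_nyquist_hz * 1e3:.2f} mHz, "
          f"resolution: {resolution_hz * 1e6:.3f} uHz")
    ```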

  10. Time-jittered marine seismic data acquisition via compressed sensing and sparsity-promoting wavefield reconstruction

    NASA Astrophysics Data System (ADS)

    Wason, H.; Herrmann, F. J.; Kumar, R.

    2016-12-01

    Current efforts towards dense shot (or receiver) sampling and full azimuthal coverage to produce high-resolution images have led to the deployment of multiple source vessels (or streamers) across marine survey areas. Densely sampled marine seismic data acquisition, however, is expensive, and hence necessitates the adoption of sampling schemes that save acquisition costs and time. Compressed sensing is a sampling paradigm that aims to reconstruct a signal--that is sparse or compressible in some transform domain--from relatively fewer measurements than required by the Nyquist sampling criterion. Leveraging ideas from the field of compressed sensing, we show how marine seismic acquisition can be set up as a compressed sensing problem. A step ahead from multi-source seismic acquisition is simultaneous source acquisition--an emerging technology that is stimulating both geophysical research and commercial efforts--where multiple source arrays/vessels fire shots simultaneously, resulting in better coverage in marine surveys. Following the design principles of compressed sensing, we propose a pragmatic simultaneous time-jittered, time-compressed marine acquisition scheme where single or multiple source vessels sail across an ocean-bottom array firing airguns at jittered times and source locations, resulting in better spatial sampling and faster acquisition. Our acquisition is low cost since our measurements are subsampled. Simultaneous source acquisition generates data with overlapping shot records, which need to be separated for further processing. The reconstruction of conventional seismic data from jittered data can be of high quality, and we demonstrate successful recovery by sparsity promotion. In contrast to random (sub)sampling, acquisition via jittered (sub)sampling helps in controlling the maximum gap size, which is a practical requirement of wavefield reconstruction with localized sparsifying transforms. We illustrate our results with simulations of simultaneous time-jittered marine acquisition for 2D and 3D ocean-bottom cable surveys.
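
    A minimal sketch of the jittered subsampling idea follows: one shot per bin of a regular grid at a random in-bin offset, which bounds the maximum gap between shots. The grid size and 4x subsampling factor are arbitrary choices, not survey parameters.

    ```python
    # Jittered subsampling: one sample per bin, at a random offset inside the bin.
    # The gap between consecutive samples is then at most 2*bin_size - 1, unlike
    # fully random subsampling, where arbitrarily large gaps can occur.
    import numpy as np

    rng = np.random.default_rng(1)

    def jittered_indices(n_grid, n_shots):
        bin_size = n_grid // n_shots
        starts = np.arange(n_shots) * bin_size
        return starts + rng.integers(0, bin_size, size=n_shots)

    idx = jittered_indices(n_grid=1000, n_shots=250)        # 4x subsampling
    max_gap = np.diff(np.sort(idx)).max()
    print(f"max gap: {max_gap} (bound: {2 * (1000 // 250) - 1})")
    ```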

  11. Generalization of the Ehrenfest theorem to quantum systems with periodical boundary conditions

    NASA Astrophysics Data System (ADS)

    Sanin, Andrey L.; Bagmanov, Andrey T.

    2005-04-01

    A generalization of Ehrenfest's theorem is discussed. For this purpose, quantum systems with periodic boundary conditions are revisited. The relations for the time derivatives of the mean coordinate and momentum are derived anew. In comparison with Ehrenfest's theorem and its conventional quantities, additional local terms occur which are caused by the boundaries. Because of this, the new relations obtained can be called generalized. An example of the use of these relations is given.

  12. Tomographic Processing of Synthetic Aperture Radar Signals for Enhanced Resolution

    DTIC Science & Technology

    1989-11-01

    to image larger scenes, this problem becomes more important. A byproduct of this investigation is a duality theorem which is a generalization of the...well-known Projection-Slice Theorem. The second problem proposed is that of imaging a rapidly-spinning object, for example in inverse SAR mode...slices is absent. There is a possible connection of the word to the Projection-Slice Theorem, but, as seen in Chapter 4, even this is absent in the

  13. Existence and discrete approximation for optimization problems governed by fractional differential equations

    NASA Astrophysics Data System (ADS)

    Bai, Yunru; Baleanu, Dumitru; Wu, Guo-Cheng

    2018-06-01

    We investigate a class of generalized differential optimization problems driven by the Caputo derivative. Existence of a weak Carathéodory solution is proved by using the Weierstrass existence theorem, a fixed point theorem, and the Filippov implicit function lemma. Then a numerical approximation algorithm is introduced, and a convergence theorem is established. Finally, a nonlinear programming problem constrained by a fractional differential equation is illustrated, and the results verify the validity of the algorithm.

  14. Cosmological singularity theorems and splitting theorems for N-Bakry-Émery spacetimes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woolgar, Eric, E-mail: ewoolgar@ualberta.ca; Wylie, William, E-mail: wwylie@syr.edu

    We study Lorentzian manifolds with a weight function such that the N-Bakry-Émery tensor is bounded below. Such spacetimes arise in the physics of scalar-tensor gravitation theories, including Brans-Dicke theory, theories with Kaluza-Klein dimensional reduction, and low-energy approximations to string theory. In the “pure Bakry-Émery” N = ∞ case with f uniformly bounded above and initial data suitably bounded, cosmological-type singularity theorems are known, as are splitting theorems which determine the geometry of timelike geodesically complete spacetimes for which the bound on the initial data is borderline violated. We extend these results in a number of ways. We are able to extend the singularity theorems to finite N-values N ∈ (n, ∞) and N ∈ (−∞, 1]. In the N ∈ (n, ∞) case, no bound on f is required, while for N ∈ (−∞, 1] and N = ∞, we are able to replace the boundedness of f by a weaker condition on the integral of f along future-inextendible timelike geodesics. The splitting theorems extend similarly, but when N = 1, the splitting is only that of a warped product for all cases considered. A similar limited loss of rigidity has been observed in a prior work on the N-Bakry-Émery curvature in Riemannian signature when N = 1 and appears to be a general feature.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkatesan, R.C., E-mail: ravi@systemsresearchcorp.com; Plastino, A., E-mail: plastino@fisica.unlp.edu.ar

    The (i) reciprocity relations for the relative Fisher information (RFI, hereafter) and (ii) a generalized RFI–Euler theorem are self-consistently derived from the Hellmann–Feynman theorem. These new reciprocity relations generalize the RFI–Euler theorem and constitute the basis for building up a mathematical Legendre transform structure (LTS, hereafter), akin to that of thermodynamics, that underlies the RFI scenario. This demonstrates the possibility of translating the entire mathematical structure of thermodynamics into a RFI-based theoretical framework. Virial theorems play a prominent role in this endeavor, as a Schrödinger-like equation can be associated to the RFI. Lagrange multipliers are determined invoking the RFI–LTS link and the quantum mechanical virial theorem. An appropriate ansatz allows for the inference of probability density functions (pdf’s, hereafter) and energy-eigenvalues of the above mentioned Schrödinger-like equation. The energy-eigenvalues obtained here via inference are benchmarked against established theoretical and numerical results. A principled theoretical basis to reconstruct the RFI-framework from the FIM framework is established. Numerical examples for exemplary cases are provided. - Highlights: • Legendre transform structure for the RFI is obtained with the Hellmann–Feynman theorem. • Inference of the energy-eigenvalues of the SWE-like equation for the RFI is accomplished. • Basis for reconstruction of the RFI framework from the FIM-case is established. • Substantial qualitative and quantitative distinctions with prior studies are discussed.

  16. Generalized Fourier slice theorem for cone-beam image reconstruction.

    PubMed

    Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang

    2015-01-01

    The cone-beam reconstruction theory was proposed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985, and Pierre Grangeat in 1990. The Fourier slice theorem was proposed by Bracewell in 1956, and it leads to the Fourier image reconstruction method for parallel-beam geometry. The Fourier slice theorem was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above-mentioned cone-beam image reconstruction theory and the Fourier slice theory of fan-beam geometry, the Fourier slice theorem in cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain has been overcome, namely that the value at the origin of Fourier space is of the indeterminate form 0/0; this limit is properly handled. As examples, implementation results for the single-circle and two-perpendicular-circle source orbits are shown. If an interpolation process is included in the cone-beam reconstruction, the number of calculations for the generalized Fourier slice theorem algorithm is O(N^4), which is close to the filtered back-projection method, where N is the one-dimensional image size. However, the interpolation process can be avoided, in which case the number of calculations is O(N^5).
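
    The parallel-beam Fourier slice theorem on which these generalizations build is easy to verify numerically; the check below (with a random test image) confirms that the 1-D FFT of a projection equals the central slice of the 2-D FFT.

    ```python
    # Numerical check of the classical (parallel-beam) Fourier slice theorem:
    # the 1-D FFT of a projection equals the ky = 0 slice of the 2-D FFT.
    import numpy as np

    rng = np.random.default_rng(2)
    img = rng.normal(size=(64, 64))

    projection = img.sum(axis=0)              # project along y
    slice_1d = np.fft.fft(projection)         # transform of the projection
    central_row = np.fft.fft2(img)[0, :]      # central slice of the 2-D transform
    print(np.allclose(slice_1d, central_row)) # True
    ```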

  17. Anomaly manifestation of Lieb-Schultz-Mattis theorem and topological phases

    NASA Astrophysics Data System (ADS)

    Cho, Gil Young; Hsieh, Chang-Tse; Ryu, Shinsei

    2017-11-01

    The Lieb-Schultz-Mattis (LSM) theorem dictates that emergent low-energy states from a lattice model cannot be a trivial symmetric insulator if the filling per unit cell is not integral and if the lattice translation symmetry and particle number conservation are strictly imposed. In this paper, we compare the one-dimensional gapless states enforced by the LSM theorem and the boundaries of one-higher dimensional strong symmetry-protected topological (SPT) phases from the perspective of quantum anomalies. We first note that they can both be described by the same low-energy effective field theory with the same effective symmetry realizations on low-energy modes, wherein non-on-site lattice translation symmetry is encoded as if it were an internal symmetry. In spite of the identical form of the low-energy effective field theories, we show that the quantum anomalies of the theories play different roles in the two systems. In particular, we find that the chiral anomaly is equivalent to the LSM theorem, whereas there is another anomaly that is not related to the LSM theorem but is intrinsic to the SPT states. As an application, we extend the conventional LSM theorem to multiple-charge multiple-species problems and construct several exotic symmetric insulators. We also find that the (3+1)d chiral anomaly provides only the perturbative stability of the gaplessness local in the parameter space.

  18. Cosmological singularity theorems and splitting theorems for N-Bakry-Émery spacetimes

    NASA Astrophysics Data System (ADS)

    Woolgar, Eric; Wylie, William

    2016-02-01

    We study Lorentzian manifolds with a weight function such that the N-Bakry-Émery tensor is bounded below. Such spacetimes arise in the physics of scalar-tensor gravitation theories, including Brans-Dicke theory, theories with Kaluza-Klein dimensional reduction, and low-energy approximations to string theory. In the "pure Bakry-Émery" N = ∞ case with f uniformly bounded above and initial data suitably bounded, cosmological-type singularity theorems are known, as are splitting theorems which determine the geometry of timelike geodesically complete spacetimes for which the bound on the initial data is borderline violated. We extend these results in a number of ways. We are able to extend the singularity theorems to finite N-values N ∈ (n, ∞) and N ∈ (-∞, 1]. In the N ∈ (n, ∞) case, no bound on f is required, while for N ∈ (-∞, 1] and N = ∞, we are able to replace the boundedness of f by a weaker condition on the integral of f along future-inextendible timelike geodesics. The splitting theorems extend similarly, but when N = 1, the splitting is only that of a warped product for all cases considered. A similar limited loss of rigidity has been observed in a prior work on the N-Bakry-Émery curvature in Riemannian signature when N = 1 and appears to be a general feature.

  19. Stable sequential Kuhn-Tucker theorem in iterative form or a regularized Uzawa algorithm in a regular nonlinear programming problem

    NASA Astrophysics Data System (ADS)

    Sumin, M. I.

    2015-06-01

    A parametric nonlinear programming problem in a metric space with an operator equality constraint in a Hilbert space is studied assuming that its lower semicontinuous value function at a chosen individual parameter value has certain subdifferentiability properties in the sense of nonlinear (nonsmooth) analysis. Such subdifferentiability can be understood as the existence of a proximal subgradient or a Fréchet subdifferential. In other words, an individual problem has a corresponding generalized Kuhn-Tucker vector. Under this assumption, a stable sequential Kuhn-Tucker theorem in nondifferential iterative form is proved and discussed in terms of minimizing sequences on the basis of the dual regularization method. This theorem provides necessary and sufficient conditions for the stable construction of a minimizing approximate solution in the sense of Warga in the considered problem, whose initial data can be approximately specified. A substantial difference of the proved theorem from its classical same-named analogue is that the former takes into account the possible instability of the problem in the case of perturbed initial data and, as a consequence, allows for the inherited instability of classical optimality conditions. This theorem can be treated as a regularized generalization of the classical Uzawa algorithm to nonlinear programming problems. Finally, the theorem is applied to the "simplest" nonlinear optimal control problem, namely, to a time-optimal control problem.

  20. Advanced Topics in Space Situational Awareness

    DTIC Science & Technology

    2007-11-07

    "super-resolution." Such optical superresolution is characteristic of many model-based image processing algorithms, and reflects the incorporation of...Sampling Theorem," J. Opt. Soc. Am. A, vol. 24, 311-325 (2007). [39] S. Prasad, "Digital and Optical Superresolution of Low-Resolution Image Sequences," Un...wavefront coding for the specific application of extension of image depth well beyond what is possible in a standard imaging system. The problem of optical

  1. Waves and instabilities in plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, L.

    1987-01-01

    The contents of this book are: Plasma as a Dielectric Medium; Nyquist Technique; Absolute and Convective Instabilities; Landau Damping and Phase Mixing; Particle Trapping and Breakdown of Linear Theory; Solution of Vlasov Equation via Guiding-Center Transformation; Kinetic Theory of Magnetohydrodynamic Waves; Geometric Optics; Wave-Kinetic Equation; Cutoff and Resonance; Resonant Absorption; Mode Conversion; Gyrokinetic Equation; Drift Waves; Quasi-Linear Theory; Ponderomotive Force; Parametric Instabilities; Problem Sets for Homework, Midterm and Final Examinations.

  2. Analysis of Data Contained in "School District Basic Fiscal Data, 1974-1975" and "New York State Consolidated Data Base, 1974-1975." Revised Edition.

    ERIC Educational Resources Information Center

    Berks, Joel S.; Moskowitz, Jay H.

    A revision of a report introduced as evidence in the school finance case Levittown v. Nyquist, this report analyzes the way educational revenues are raised and distributed in New York State and demonstrates the impact of these methods on educational services. The study was based on 1974-75 official New York State data and utilized analytic…

  3. Carbon-Based Materials for Lithium-Ion Batteries, Electrochemical Capacitors, and Their Hybrid Devices.

    PubMed

    Yao, Fei; Pham, Duy Tho; Lee, Young Hee

    2015-07-20

    A rapidly developing market for portable electronic devices and hybrid electrical vehicles requires an urgent supply of mature energy-storage systems. As a result, lithium-ion batteries and electrochemical capacitors have lately attracted broad attention. Nevertheless, it is well known that both devices have their own drawbacks. With the fast development of nanoscience and nanotechnology, various structures and materials have been proposed to overcome the deficiencies of both devices to improve their electrochemical performance further. In this Review, electrochemical storage mechanisms based on carbon materials for both lithium-ion batteries and electrochemical capacitors are introduced. Non-faradic processes (electric double-layer capacitance) and faradic reactions (pseudocapacitance and intercalation) are generally explained. Electrochemical performance based on different types of electrolytes is briefly reviewed. Furthermore, impedance behavior based on Nyquist plots is discussed. We demonstrate the influence of cell conductivity, electrode/electrolyte interface, and ion diffusion on impedance performance. We illustrate that relaxation time, which is closely related to ion diffusion, can be extracted from Nyquist plots and compared between lithium-ion batteries and electrochemical capacitors. Finally, recent progress in the design of anodes for lithium-ion batteries, electrochemical capacitors, and their hybrid devices based on carbonaceous materials are reviewed. Challenges and future perspectives are further discussed. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
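
    As a toy illustration of reading a relaxation time off a Nyquist plot, the sketch below evaluates the impedance of a simple R_s + (R_ct parallel C) interface model; the component values are arbitrary and the circuit is far simpler than a real electrode.

    ```python
    # Toy Nyquist-plot data for Z(w) = R_s + R_ct / (1 + j*w*R_ct*C).
    # The apex of the semicircle sits at w = 1/(R_ct*C), so the relaxation
    # time tau = R_ct*C can be read off the plot. Values are arbitrary.
    import numpy as np

    R_s, R_ct, C = 5.0, 50.0, 1e-3               # ohm, ohm, farad (assumed)
    w = 2 * np.pi * np.logspace(-2, 5, 400)      # angular frequency grid
    Z = R_s + R_ct / (1 + 1j * w * R_ct * C)

    apex = np.argmax(-Z.imag)                    # top of the Nyquist semicircle
    print(f"tau ~ {1.0 / w[apex]:.3f} s (exact R_ct*C = {R_ct * C:.3f} s)")
    ```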

  4. Mucosal wave characteristics in three voice modes (fry, hiss & overpressure) produced by a female speaker: a preliminary study using stroboscopy, HSDI and analyzed by kymography, P-FFT & Nyquist plots

    NASA Astrophysics Data System (ADS)

    Izdebski, Krzysztof; Ward, Ronald R.; Yan, Yuling

    2012-02-01

    HSDI provides a whole new way to investigate visually intra-laryngeal behavior and posturing during phonation by providing detailed real-time information about laryngeal biomechanics, including observations of the mucosal wave, wave motion directionality, the glottic area waveform, asymmetry of vibrations within and across the vocal folds, and the contact area of the glottis including posterior commissure closure. These observations are fundamental to our understanding and modeling of both normal and disordered phonation. In this preliminary report we focus on direct in vivo HSDI observations not only of the glottic region but also of the entire supraglottic laryngeal posturing during fry, breathy/hiss, and over-pressured phonation modes produced in a non-pathological setting. Analysis included spatio-temporal vibration patterns of the vocal folds, multi-line kymograms, spectral P-FFT analysis, and Nyquist spatio-temporal plots. The presented examples reveal that supraglottic contraction assists in prolonging the closed phase of the vibratory cycle, and that the closed phase is longest in fry and overpressure and shortest, albeit complex, in hiss. Hiss also allows for vocal fold vibration despite glottic separation. These findings need to be compared to pathologic phonation in the three voice modes to arrive at a better differential diagnosis.

  5. Using non-linear analogue of Nyquist diagrams for analysis of the equation describing the hemodynamics in blood vessels near pathologies

    NASA Astrophysics Data System (ADS)

    Cherevko, A. A.; Bord, E. E.; Khe, A. K.; Panarin, V. A.; Orlov, K. J.; Chupakhin, A. P.

    2016-06-01

    This article considers a method of describing the behaviour of hemodynamic parameters near vascular pathologies. We study the influence of arterial aneurysms and arteriovenous malformations on the vascular system. The proposed method uses a generalized Van der Pol-Duffing model to find the characteristic behaviour of blood flow parameters, namely the blood velocity and pressure in the vessel. The velocity and pressure are obtained during neurosurgical measurements. It is noted that substituting the velocity into the right side of the equation gives a good approximation of the pressure; thus, the model reproduces the clinical data well. The right side of the equation represents an external forcing acting on the system. Harmonic functions with various frequencies and amplitudes are substituted into the right side of the equation to investigate its properties. Besides, variation of the right-side parameters provides additional information about the pressure. A non-linear analogue of Nyquist diagrams is used to find out how the properties of the solution depend on the parameter values. We have analysed 60 cases with aneurysms and 14 cases with arteriovenous malformations. It is shown that the diagrams are divided into classes, and that the classes replace one another in a definite order as the right-side amplitude increases.
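
    A minimal sketch of this class of model: a harmonically forced Van der Pol-Duffing oscillator integrated with SciPy, whose phase-plane orbit is the kind of object such diagrams classify. The coefficients and forcing parameters below are placeholders, not values fitted to the clinical measurements.

    ```python
    # Harmonically forced Van der Pol-Duffing oscillator:
    #   x'' = mu*(1 - x^2)*x' - alpha*x - beta*x^3 + A*cos(omega*t)
    # All coefficients are placeholders, not values fitted to clinical data.
    import numpy as np
    from scipy.integrate import solve_ivp

    mu, alpha, beta = 1.0, 1.0, 0.5
    A, omega = 2.0, 1.3                      # forcing amplitude and frequency

    def rhs(t, y):
        x, v = y
        return [v, mu * (1 - x**2) * v - alpha * x - beta * x**3 + A * np.cos(omega * t)]

    sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.0], max_step=0.01)
    x, v = sol.y                             # the (x, x') orbit traces the diagram
    print(x[-1], v[-1])
    ```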

  6. Mechanistic slumber vs. statistical insomnia: the early history of Boltzmann's H-theorem (1868-1877)

    NASA Astrophysics Data System (ADS)

    Badino, M.

    2011-11-01

    An intricate, long, and occasionally heated debate surrounds Boltzmann's H-theorem (1872) and his combinatorial interpretation of the second law (1877). After almost a century of devoted and knowledgeable scholarship, there is still no agreement as to whether Boltzmann changed his view of the second law after Loschmidt's 1876 reversibility argument or whether he had already been holding a probabilistic conception for some years at that point. In this paper, I argue that there was no abrupt statistical turn. In the first part, I discuss the development of Boltzmann's research from 1868 to the formulation of the H-theorem. This reconstruction shows that Boltzmann adopted a pluralistic strategy based on the interplay between a kinetic and a combinatorial approach. Moreover, it shows that the extensive use of asymptotic conditions allowed Boltzmann to bracket the problem of exceptions. In the second part I suggest that both Loschmidt's challenge and Boltzmann's response to it did not concern the H-theorem. The close relation between the theorem and the reversibility argument is a consequence of later investigations on the subject.

  7. Special ergodic theorems and dynamical large deviations

    NASA Astrophysics Data System (ADS)

    Kleptsyn, Victor; Ryzhov, Dmitry; Minkov, Stanislav

    2012-11-01

    Let f : M → M be a self-map of a compact Riemannian manifold M, admitting a global SRB measure μ. For a continuous test function φ : M → ℝ and a constant α > 0, consider the set K_{φ,α} of the initial points for which the Birkhoff time averages of the function φ differ from its μ-space average by at least α. As the measure μ is a global SRB one, the set K_{φ,α} should have zero Lebesgue measure. The special ergodic theorem, whenever it holds, claims that, moreover, this set has a Hausdorff dimension less than the dimension of M. We prove that for Lipschitz maps, the special ergodic theorem follows from the dynamical large deviations principle. We also define and prove an analogous result for flows. Applying the theorems of Young and of Araújo and Pacifico, we conclude that the special ergodic theorem holds for transitive hyperbolic attractors of C2-diffeomorphisms, as well as for some other known classes of maps (including the one of partially hyperbolic non-uniformly expanding maps) and flows.

  8. Heuristic analogy in Ars Conjectandi: From Archimedes' De Circuli Dimensione to Bernoulli's theorem.

    PubMed

    Campos, Daniel G

    2018-02-01

    This article investigates the way in which Jacob Bernoulli proved the main mathematical theorem that undergirds his art of conjecturing-the theorem that founded, historically, the field of mathematical probability. It aims to contribute a perspective into the question of problem-solving methods in mathematics while also contributing to the comprehension of the historical development of mathematical probability. It argues that Bernoulli proved his theorem by a process of mathematical experimentation in which the central heuristic strategy was analogy. In this context, the analogy functioned as an experimental hypothesis. The article expounds, first, Bernoulli's reasoning for proving his theorem, describing it as a process of experimentation in which hypothesis-making is crucial. Next, it investigates the analogy between his reasoning and Archimedes' approximation of the value of π, by clarifying both Archimedes' own experimental approach to the said approximation and its heuristic influence on Bernoulli's problem-solving strategy. The discussion includes some general considerations about analogy as a heuristic technique to make experimental hypotheses in mathematics. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. A Stochastic Version of the Noether Theorem

    NASA Astrophysics Data System (ADS)

    González Lezcano, Alfredo; Cabo Montes de Oca, Alejandro

    2018-06-01

    A stochastic version of the Noether theorem is derived for systems under the action of external random forces. The concept of a moment generating functional is employed to describe the symmetry of the stochastic forces. The theorem is applied to two kinds of random covariant forces, one generated in an electrodynamic way and the other defined in the rest frame of the particle as a function of the proper time. For both of them, the conservation of the mean value of a random drift momentum is shown. The validity of the theorem makes clear that random systems can produce causal stochastic correlations between two far-separated systems that had interacted in the past. In addition, possible connections of the discussion with Yves Couder's experimental results are remarked upon.

  10. Noether’s second theorem and Ward identities for gauge symmetries

    DOE PAGES

    Avery, Steven G.; Schwab, Burkhard U. W.

    2016-02-04

    Recently, a number of new Ward identities for large gauge transformations and large diffeomorphisms have been discovered. Some of the identities are reinterpretations of previously known statements, while some appear to be genuinely new. We present and use Noether’s second theorem with the path integral as a powerful way of generating these kinds of Ward identities. We reintroduce Noether’s second theorem and discuss how to work with the physical remnant of gauge symmetry in gauge fixed systems. We illustrate our mechanism in Maxwell theory, Yang-Mills theory, p-form field theory, and Einstein-Hilbert gravity. We comment on multiple connections between Noether’s second theorem and known results in the recent literature. Finally, our approach suggests a novel point of view with important physical consequences.

  11. A mathematical proof and example that Bayes's Theorem is fundamental to actuarial estimates of sexual recidivism risk.

    PubMed

    Donaldson, Theodore; Wollert, Richard

    2008-06-01

    Expert witnesses in sexually violent predator (SVP) cases often rely on actuarial instruments to make risk determinations. Many questions surround their use, however. Bayes's Theorem holds much promise for addressing these questions. Some experts nonetheless claim that Bayesian analyses are inadmissible in SVP cases because they are not accepted by the relevant scientific community. This position is illogical because Bayes's Theorem is simply a probabilistic restatement of the way that frequency data are combined to arrive at whatever recidivism rates are paired with each test score in an actuarial table. This article presents a mathematical proof and example validating this assertion. The advantages and implications of a logic model that combines Bayes's Theorem and the null hypothesis are also discussed.
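
    The article's central point can be made concrete in a few lines: the recidivism rate paired with a score in an actuarial table is exactly what Bayes's Theorem returns when the frequency data are expressed as probabilities. The numbers below are hypothetical.

    ```python
    # Bayes's Theorem applied to hypothetical actuarial frequency data:
    # P(recidivate | score) =
    #   P(score | R) P(R) / (P(score | R) P(R) + P(score | not R) P(not R)).
    base_rate = 0.15                  # P(R): recidivism rate in the norming sample
    p_score_given_r = 0.30            # fraction of recidivists with this score
    p_score_given_not_r = 0.10        # fraction of non-recidivists with this score

    posterior = (base_rate * p_score_given_r) / (
        base_rate * p_score_given_r + (1 - base_rate) * p_score_given_not_r
    )
    print(f"P(recidivate | score) = {posterior:.3f}")   # ~0.346
    ```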

  12. Sharp comparison theorems for the Klein-Gordon equation in d dimensions

    NASA Astrophysics Data System (ADS)

    Hall, Richard L.; Zorin, Petr

    2016-06-01

    We establish sharp (or ‘refined’) comparison theorems for the Klein-Gordon equation. We show that the condition V_a ≤ V_b, which leads to E_a ≤ E_b, can be replaced by the weaker assumption U_a ≤ U_b, which still implies the spectral ordering E_a ≤ E_b. In the simplest case, for d = 1, U_i(x) = ∫_0^x V_i(t) dt, i = a or b, and for d > 1, U_i(r) = ∫_0^r V_i(t) t^(d-1) dt, i = a or b. We also consider sharp comparison theorems in the presence of a scalar potential S (a ‘variable mass’) in addition to the vector term V (the time component of a four-vector). The theorems are illustrated by a variety of explicit detailed examples.
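    A quick numerical sketch of why the integral condition is genuinely weaker (example potentials of my own choosing, for d = 1): the pointwise ordering V_a ≤ V_b fails, yet U_a ≤ U_b holds everywhere.

```python
import numpy as np

# V_a is NOT everywhere below V_b, yet U_a(x) <= U_b(x) for all x >= 0,
# which is the weaker hypothesis the sharpened theorem needs.
x = np.linspace(0.0, 5.0, 2001)
V_b = x
V_a = x - 0.3 * np.sin(4 * x)      # crosses above V_b wherever sin(4x) < 0

dx = x[1] - x[0]
U_a = np.cumsum(V_a) * dx          # crude cumulative integral of V_a
U_b = np.cumsum(V_b) * dx

print(bool(np.any(V_a > V_b)))             # True: pointwise ordering fails
print(bool(np.all(U_a <= U_b + 1e-3)))     # True: integral ordering holds
                                           # (tolerance covers the Riemann-sum error)
```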

  13. Logical errors on proving theorem

    NASA Astrophysics Data System (ADS)

    Sari, C. K.; Waluyo, M.; Ainur, C. M.; Darmaningsih, E. N.

    2018-01-01

    At the tertiary level, students in mathematics education departments attend abstract courses, such as Introduction to Real Analysis, which require the ability to prove mathematical statements almost constantly. In fact, many students have not adequately mastered this ability. In their Introduction to Real Analysis tests, even when they completed their proofs of theorems, they achieved unsatisfactory scores. They thought they had succeeded, but their proofs were not valid. In this study, qualitative research was conducted to describe the logical errors that students made in proving the cluster point theorem. The theorem was given to 54 students. Misconceptions appear to arise in understanding the definitions of cluster point, limit of a function, and limit of a sequence. The habit of using routine symbols may cause these misconceptions. Suggestions for dealing with this situation are also described.
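    For reference, the standard definition at issue (textbook form, not quoted from the study):

```latex
% $c$ is a cluster point of $A \subseteq \mathbb{R}$ when every punctured
% neighborhood of $c$ meets $A$:
\[
  \forall \delta > 0 \ \exists x \in A : \ 0 < |x - c| < \delta .
\]
% A common slip is dropping the condition $0 < |x - c|$, which wrongly
% makes every isolated point of $A$ a cluster point.
```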

  14. Effect of Sr substitution on the room temperature electrical properties of La1-xSrxFeO3 nano-crystalline materials

    NASA Astrophysics Data System (ADS)

    Kafa, C. A.; Triyono, D.; Laysandra, H.

    2017-07-01

    LaFeO3 is a perovskite-structured material whose electrical properties have been widely investigated because, as a p-type semiconductor, it shows good gas-sensing behavior through changes in resistivity. Sr doping of LaFeO3 improves the electrical conductivity through structural modification. With Sr doping concentrations (x) from 0.1 to 0.4, La1-xSrxFeO3 nanocrystalline pellets were synthesized by the sol-gel method, followed by gradual heat treatment and uniaxial compaction. XRD characterization shows that the materials have an orthorhombic perovskite structure. SEM imaging of the samples reveals grains and grain boundaries with some agglomeration. The frequency-dependent electrical properties were measured by impedance spectroscopy using an RLC meter. Nyquist and Bode plots show that both grains and grain boundaries contribute to the electrical conductivity of La1-xSrxFeO3. The La0.6Sr0.4FeO3 sample has the highest electrical conductivity of all samples, while the electrical permittivities of La0.8Sr0.2FeO3 and La0.7Sr0.3FeO3 are the most stable.
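    The attribution of conductivity to grain and grain-boundary contributions via the Nyquist plot follows the usual equivalent-circuit reading: each contribution behaves as a parallel R-C element, and two such elements in series trace two separate semicircles. A minimal sketch with illustrative values (not the study's fitted parameters):

```python
import numpy as np
import matplotlib.pyplot as plt

def parallel_rc(R, C, omega):
    return R / (1 + 1j * omega * R * C)   # impedance of R in parallel with C

omega = np.logspace(0, 8, 500)            # angular frequency sweep (rad/s)
Z = parallel_rc(1e3, 1e-9, omega) + parallel_rc(1e5, 1e-7, omega)

# Two semicircles appear because the time constants RC differ by ~10^4;
# the low-frequency intercept on the real axis is R_grain + R_gb.
plt.plot(Z.real, -Z.imag)
plt.xlabel("Z' (ohm)")
plt.ylabel("-Z'' (ohm)")
plt.axis("equal")
plt.show()
```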

  15. Quantized Spectral Compressed Sensing: Cramer–Rao Bounds and Recovery Algorithms

    NASA Astrophysics Data System (ADS)

    Fu, Haoyu; Chi, Yuejie

    2018-06-01

    Efficient estimation of wideband spectrum is of great importance for applications such as cognitive radio. Recently, sub-Nyquist sampling schemes based on compressed sensing have been proposed to greatly reduce the sampling rate. However, the important issue of quantization has not been fully addressed, particularly for high-resolution spectrum and parameter estimation. In this paper, we aim to recover spectrally-sparse signals and the corresponding parameters, such as frequencies and amplitudes, from heavy quantizations of their noisy complex-valued random linear measurements, e.g., only the quadrant information. We first characterize the Cramer-Rao bound under Gaussian noise, which highlights the trade-off between sample complexity and bit depth under different signal-to-noise ratios for a fixed budget of bits. Next, we propose a new algorithm based on atomic norm soft thresholding for signal recovery, which is equivalent to proximal mapping of properly designed surrogate signals with respect to the atomic norm that promotes spectral sparsity. The proposed algorithm can be applied to both the single measurement vector case and the multiple measurement vector case. It is shown that under the Gaussian measurement model, the spectral signals can be reconstructed accurately with high probability as soon as the number of quantized measurements exceeds the order of K log n, where K is the level of spectral sparsity and n is the signal dimension. Finally, numerical simulations are provided to validate the proposed approaches.
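    The phrase "only the quadrant information" means keeping just the signs of the real and imaginary parts of each complex measurement, one bit per component. A sketch of that quantization under a simplified Gaussian measurement model (my simplification of the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, K = 64, 128, 2

# Spectrally sparse signal: a sum of K complex sinusoids.
freqs = rng.uniform(0, 1, K)
t = np.arange(n)
x = np.exp(2j * np.pi * np.outer(t, freqs)).sum(axis=1)

# Complex Gaussian random linear measurements.
A = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
y = A @ x

# Quadrant-only quantization: sign of real and imaginary parts (2 bits each).
y_quant = np.sign(y.real) + 1j * np.sign(y.imag)
print(y_quant[:4])
```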

  16. The dependence of the modulation transfer function on the blocking layer thickness in amorphous selenium x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, David M.; Belev, Gueorgi; DeCrescenzo, Giovanni

    2007-08-15

    Blocking layers are used to reduce leakage current in amorphous selenium detectors. The effect of the thickness of the blocking layer on the presampling modulation transfer function (MTF) and on dark current was experimentally determined in prototype single-line CCD-based amorphous selenium (a-Se) x-ray detectors. The sampling pitch of the detectors evaluated was 25 μm and the blocking layer thicknesses varied from 1 to 51 μm. The blocking layers resided on the signal collection electrodes which, in this configuration, were used to collect electrons. The combined thickness of the blocking layer and a-Se bulk in each detector was approximately 200 μm. As expected, the dark current increased monotonically as the thickness of the blocking layer was decreased. It was found that if the blocking layer thickness was small compared to the sampling pitch, it caused a negligible reduction in MTF. However, the MTF was observed to decrease dramatically at spatial frequencies near the Nyquist frequency as the blocking layer thickness approached or exceeded the electrode sampling pitch. This observed reduction in MTF is shown to be consistent with predictions of an electrostatic model wherein the image charge from the a-Se is trapped at a characteristic depth within the blocking layer, generally near the interface between the blocking layer and the a-Se bulk.
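    For concreteness: a 25 μm pitch puts the Nyquist frequency at 1/(2 × 0.025 mm) = 20 cycles/mm. A toy electrostatic check (my simplification, not the authors' model) of how trapping depth degrades the response at that frequency:

```python
import numpy as np

pitch_mm = 0.025
f_nyq = 1 / (2 * pitch_mm)              # Nyquist frequency: 20 cycles/mm

# Toy model: a point charge trapped at depth d above a grounded electrode
# plane induces a footprint whose transfer function is exp(-2*pi*f*d).
# Relative response at Nyquist for several trapping depths:
for d_um in (1, 10, 25, 50):
    d_mm = d_um * 1e-3
    print(f"{d_um:2d} um: {np.exp(-2 * np.pi * f_nyq * d_mm):.3f}")
# The drop is mild while d << pitch and severe once d approaches the pitch,
# qualitatively matching the reported MTF behavior.
```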

  17. Necessary and sufficient conditions for the stability of a sleeping top described by three forms of dynamic equations

    NASA Astrophysics Data System (ADS)

    Ge, Zheng-Ming

    2008-04-01

    Necessary and sufficient conditions for the stability of a sleeping top are obtained for three forms of dynamic equations: a six-state-variable formulation (the Euler and Poisson equations), a two-degree-of-freedom system (the Krylov equations), and a one-degree-of-freedom system (the nutation angle equation). The conditions are derived by the Lyapunov direct method, the Ge-Liu second instability theorem, an instability theorem, and a Ge-Yao-Chen partial region stability theorem, without any use of first approximation theory.

  18. Twelve years before the quantum no-cloning theorem

    NASA Astrophysics Data System (ADS)

    Ortigoso, Juan

    2018-03-01

    The celebrated quantum no-cloning theorem establishes the impossibility of making a perfect copy of an unknown quantum state. The discovery of this important theorem for the field of quantum information is conventionally dated to 1982. I show here that an article published in 1970 [J. L. Park, Found. Phys. 1, 23-33 (1970)] contained an explicit mathematical proof of the impossibility of cloning quantum states. I analyze Park's demonstration in the light of published explanations concerning the genesis of the better-known papers on no-cloning.
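    For context, the now-standard linearity/unitarity argument (compressed here; not Park's original derivation):

```latex
% Suppose a single unitary $U$ cloned two distinct, non-orthogonal states:
\[
  U|\psi\rangle|0\rangle = |\psi\rangle|\psi\rangle , \qquad
  U|\phi\rangle|0\rangle = |\phi\rangle|\phi\rangle .
\]
% Unitarity preserves inner products, so
\[
  \langle\psi|\phi\rangle = \langle\psi|\phi\rangle^{2} ,
\]
% forcing $\langle\psi|\phi\rangle \in \{0, 1\}$: only identical or
% orthogonal states can be copied, hence no universal cloner exists.
```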

  19. Analytic solution and pulse area theorem for three-level atoms

    NASA Astrophysics Data System (ADS)

    Shchedrin, Gavriil; O'Brien, Chris; Rostovtsev, Yuri; Scully, Marlan O.

    2015-12-01

    We report an analytic solution for a three-level atom driven by arbitrary time-dependent electromagnetic pulses. In particular, we consider far-detuned driving pulses and show an excellent match between our analytic result and the numerical simulations. We use our solution to derive a pulse area theorem for three-level V and Λ systems without making the rotating wave approximation. Formulated as an energy conservation law, this pulse area theorem can be used to understand pulse propagation through three-level media.
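    For context, the two-level pulse area that such theorems generalize (definition only; the three-level V and Λ results are in the article):

```latex
% The McCall-Hahn pulse area for a two-level transition:
\[
  \theta = \int_{-\infty}^{\infty} \Omega(t)\,dt ,
  \qquad
  \Omega(t) = \frac{d\,\mathcal{E}(t)}{\hbar} ,
\]
% where $\Omega(t)$ is the Rabi frequency, $\mathcal{E}(t)$ the field
% envelope, and $d$ the transition dipole moment.
```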

  20. A Pseudo-Reversing Theorem for Rotation and its Application to Orientation Theory

    DTIC Science & Technology

    2012-03-01

    approach to the task of constructing the appropriate course a ship must steer in order for the wind to appear to come from some given direction with some...axes, although the theorem doesn’t actually require such axes. The Pseudo-Reversing Theorem can often be invoked to give a different pedagogical basis to...of validity will quickly become obvious when it’s implemented on a computer. It does not seem to me that a great deal of pedagogical effort has found
