Sample records for the maximum-entropy method (MEM)

  1. Development and application of the maximum entropy method and other spectral estimation techniques

    NASA Astrophysics Data System (ADS)

    King, W. R.

    1980-09-01

    This summary report collects four separate progress reports prepared under three contracts, all sponsored by the Office of Naval Research in Arlington, Virginia. It presents the results of investigations into the application of the maximum entropy method (MEM), a high-resolution frequency and wavenumber estimation technique, and, in its final section, describes two new, stable, high-resolution spectral estimation techniques. Many examples of wavenumber spectral patterns for all investigated techniques are included throughout the report. The maximum entropy method is also known as the maximum entropy spectral analysis (MESA) technique, and both names are used in the report. Many MEM wavenumber spectral patterns are demonstrated using both simulated and measured radar signal and noise data. Methods for obtaining stable MEM wavenumber spectra are discussed, broadband signal detection using the MEM prediction error transform (PET) is discussed, and Doppler radar narrowband signal detection is demonstrated using the MEM technique. It is also shown that MEM cannot be applied to randomly sampled data. The two new techniques discussed in the final report section, named the Wiener-King and the Fourier spectral estimation techniques, share a similar derivation based on the Wiener prediction filter but are otherwise quite different. Further development of the techniques and measurement of their spectral characteristics are recommended for subsequent investigation.

  2. Remarks on the maximum entropy method applied to finite temperature lattice QCD.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Umeda, T.; Matsufuru, H.

    2005-07-25

    We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral functions of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests that one should perform to ensure the reliability of the results, and we apply these tests to mock and lattice QCD data.

  3. Maximum Entropy Approach in Dynamic Contrast-Enhanced Magnetic Resonance Imaging.

    PubMed

    Farsani, Zahra Amini; Schmid, Volker J

    2017-01-01

    In the estimation of physiological kinetic parameters from Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) data, the determination of the arterial input function (AIF) plays a key role. This paper proposes a Bayesian method to estimate the physiological parameters of DCE-MRI along with the AIF in situations where no measurement of the AIF is available. In the proposed algorithm, the maximum entropy method (MEM) is combined with the maximum a posteriori (MAP) approach. To this end, MEM is used to specify a prior probability distribution for the unknown AIF; the ability of this method to estimate the AIF is validated using the Kullback-Leibler divergence. The kinetic parameters can then be estimated with MAP. The proposed algorithm is evaluated with a data set from a breast cancer MRI study. The application shows that the AIF can reliably be determined from the DCE-MRI data using MEM, and the kinetic parameters can be estimated subsequently. The maximum entropy method is a powerful tool for reconstructing images from many types of data and is useful for generating a probability distribution from given information. The proposed method offers an alternative way to assess the input function from the existing data; it allows a good fit of the data and therefore a better estimation of the kinetic parameters, which in turn allows a more reliable use of DCE-MRI. Schattauer GmbH.

  4. Maximum entropy analysis of polarized fluorescence decay of (E)GFP in aqueous solution

    NASA Astrophysics Data System (ADS)

    Novikov, Eugene G.; Skakun, Victor V.; Borst, Jan Willem; Visser, Antonie J. W. G.

    2018-01-01

    The maximum entropy method (MEM) was used for the analysis of polarized fluorescence decays of enhanced green fluorescent protein (EGFP) in buffered water/glycerol mixtures, obtained with time-correlated single-photon counting (Visser et al 2016 Methods Appl. Fluoresc. 4 035002). To this end, we used a general-purpose software module of MEM that was earlier developed to analyze (complex) laser photolysis kinetics of ligand rebinding reactions in oxygen binding proteins. We demonstrate that the MEM software provides reliable results and is easy to use for the analysis of both total fluorescence decay and fluorescence anisotropy decay of aqueous solutions of EGFP. The rotational correlation times of EGFP in water/glycerol mixtures, obtained by MEM as maxima of the correlation-time distributions, are identical to the single correlation times determined by global analysis of parallel and perpendicular polarized decay components. The MEM software is also able to determine homo-FRET in another dimeric GFP, for which the transfer correlation time is an order of magnitude shorter than the rotational correlation time. One important advantage of MEM analysis is that no initial guesses of parameters are required, since MEM is able to select the least correlated solution from the feasible set of solutions.

  5. Propane spectral resolution enhancement by the maximum entropy method

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Stewart, K. P.; Hurley, E. J.; Yeh, K. C.; Inguva, R.

    1990-01-01

    The Burg algorithm for maximum entropy power spectral density estimation is applied to a time series of data obtained from a Michelson interferometer and compared with a standard FFT estimate for resolution capability. The propane transmittance spectrum was estimated by use of the FFT with a 2^18-sample interferogram, giving a maximum unapodized resolution of 0.06/cm. This estimate was then interpolated by zero filling an additional 2^18 points, and the final resolution was taken to be 0.06/cm. Comparison of the maximum entropy method (MEM) estimate with the FFT was made over a 45/cm region of the spectrum for several increasing record lengths of interferogram data beginning at 2^10 samples. It is found that over this region the MEM estimate with 2^16 data samples is in close agreement with the FFT estimate using 2^18 samples.
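
    The Burg recursion behind such MEM estimates can be sketched in a few lines (an illustrative NumPy implementation, not the code used in the study; the model order and FFT grid are arbitrary choices):

```python
import numpy as np

def burg_mem_psd(x, order, nfft=1024):
    """Burg maximum-entropy PSD: fit an AR model by minimizing
    forward+backward prediction error, then evaluate E / |A(f)|^2."""
    f = np.asarray(x, dtype=float).copy()   # forward prediction errors
    b = f.copy()                            # backward prediction errors
    a = np.array([1.0])                     # AR polynomial, a[0] = 1
    E = f.dot(f) / len(f)                   # prediction error power
    for m in range(order):
        ef, eb = f[m + 1:], b[m:-1]
        # reflection coefficient minimizing the combined error power
        k = -2.0 * ef.dot(eb) / (ef.dot(ef) + eb.dot(eb))
        f[m + 1:], b[m + 1:] = ef + k * eb, eb + k * ef
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]                 # Levinson coefficient update
        E *= 1.0 - k * k
    psd = E / np.abs(np.fft.rfft(a, nfft)) ** 2
    return np.fft.rfftfreq(nfft), psd
```

    For a noisy sinusoid a low-order fit already places the spectral peak at the correct frequency, which is the resolution advantage the abstract describes.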

  6. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source imaged by a Uniformly Redundant Array (URA) system. Although MEM has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. Massively Parallel Processing (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use MEM in future coded-aperture experiments with the help of the MPP.

  7. MEM application to IRAS CPC images

    NASA Technical Reports Server (NTRS)

    Marston, A. P.

    1994-01-01

    A method for applying the Maximum Entropy Method (MEM) to Chopped Photometric Channel (CPC) IRAS additional observations is illustrated. The original CPC data suffered from problems with repeatability, which MEM is able to cope with by using a noise image produced from the results of separate data scans of objects. The process produces images of small areas of sky with circular Gaussian beams of approximately 30 in. full width at half maximum resolution at 50 and 100 microns. Comparison is made to previous reconstructions made in the far-infrared, as well as to the morphologies of objects at other wavelengths. Some projects with this dataset are discussed.

  8. Multi-GPU maximum entropy image synthesis for radio astronomy

    NASA Astrophysics Data System (ADS)

    Cárcamo, M.; Román, P. E.; Casassus, S.; Moral, V.; Rannou, F. R.

    2018-01-01

    The maximum entropy method (MEM) is a well-known deconvolution technique in radio interferometry. It solves a non-linear optimization problem with an entropy regularization term. Other heuristics such as CLEAN are faster but highly user dependent. MEM, however, has the following advantages: it is unsupervised, it has a statistical basis, and it achieves better resolution and image quality under certain conditions. This work presents a high-performance GPU version of non-gridding MEM, which is tested using real and simulated data. We propose a single-GPU and a multi-GPU implementation for single- and multi-spectral data, respectively. We also make use of the Peer-to-Peer and Unified Virtual Addressing features of newer GPUs, which allow multiple GPUs to be exploited transparently and efficiently. Several ALMA data sets are used to demonstrate the effectiveness in imaging and to evaluate GPU performance. The results show speedups of 1000 to 5000 over a sequential version, depending on data and image size. This allows the HD142527 CO(6-5) short baseline data set to be reconstructed in 2.1 min, instead of the 2.5 days taken by the sequential CPU version.
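
    The entropy-regularized objective that MEM minimizes can be illustrated with a toy 1-D deconvolution (a hypothetical sketch: the default model `m`, step size, and regularization weight are illustrative choices, and the paper's GPU solver for interferometric visibilities is far more elaborate):

```python
import numpy as np

def mem_deconvolve(A, y, m, lam=0.01, lr=0.05, iters=3000):
    """Minimize ||A x - y||^2 - lam * S(x), where the image entropy
    S(x) = -sum_i x_i log(x_i / m_i) is taken relative to a default
    model m, via plain gradient descent with a positivity clip."""
    x = m.copy()
    for _ in range(iters):
        grad_chi2 = 2.0 * A.T @ (A @ x - y)
        grad_S = -(np.log(x / m) + 1.0)       # dS/dx
        x -= lr * (grad_chi2 - lam * grad_S)  # step on chi^2 - lam*S
        x = np.clip(x, 1e-10, None)           # entropy needs x > 0
    return x
```

    The entropy term biases pixels toward the default model and enforces positivity, while the data term sharpens the blurred source; real MEM imagers replace the fixed-step descent with Newton-type solvers.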

  9. The MEM of spectral analysis applied to L.O.D.

    NASA Astrophysics Data System (ADS)

    Fernandez, L. I.; Arias, E. F.

    The maximum entropy method (MEM) has been widely applied in polar motion studies, taking advantage of its performance in the management of complex time series. The authors used the MEM algorithm to estimate the cross-spectral function in order to compare interannual length-of-day (LOD) time series with Southern Oscillation Index (SOI) and Sea Surface Temperature (SST) series, which are closely related to El Niño-Southern Oscillation (ENSO) events.

  10. Resolution of Closely Spaced Optical Targets Using Maximum Likelihood Estimator and Maximum Entropy Method: A Comparison Study

    DTIC Science & Technology

    1981-03-03

    Government Agencies. The views and conclusions contained in this document are those of the contractor and should not be interpreted as necessarily...resolving closely spaced optical point targets are compared using Monte Carlo simulation results for three different examples. It is found that the MEM is...although no direct comparison was given. The objective of this report is to compare the capabilities of MLE and MEM in resolving two optical CSO's

  11. Kinetic analysis of hyperpolarized data with minimum a priori knowledge: Hybrid maximum entropy and nonlinear least squares method (MEM/NLS).

    PubMed

    Mariotti, Erika; Veronese, Mattia; Dunn, Joel T; Southworth, Richard; Eykyn, Thomas R

    2015-06-01

    To assess the feasibility of using a hybrid Maximum-Entropy/Nonlinear Least Squares (MEM/NLS) method for analyzing the kinetics of hyperpolarized dynamic data with minimum a priori knowledge. A continuous distribution of rates obtained through the Laplace inversion of the data is used as a constraint on the NLS fitting to derive a discrete spectrum of rates. Performance of the MEM/NLS algorithm was assessed through Monte Carlo simulations and validated by fitting the longitudinal relaxation time curves of hyperpolarized [1-(13)C]pyruvate acquired at 9.4 Tesla and at three different flip angles. The method was further used to assess the kinetics of hyperpolarized pyruvate-lactate exchange acquired in vitro in whole blood and to re-analyze the previously published in vitro reaction of hyperpolarized (15)N-choline with choline kinase. The MEM/NLS method was found to be adequate for the kinetic characterization of hyperpolarized in vitro time series. Additional insights were obtained from the experimental data in blood as well as from the previously published (15)N-choline data. The proposed method informs on the compartmental model that best approximates the biological system observed using hyperpolarized (13)C MR, especially when the metabolic pathway assessed is complex or a new hyperpolarized probe is used. © 2014 The Authors. Magnetic Resonance in Medicine published by Wiley Periodicals, Inc.

  12. Test images for the maximum entropy image restoration method

    NASA Technical Reports Server (NTRS)

    Mackey, James E.

    1990-01-01

    One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.

  13. Maximum Entropy Method applied to Real-time Time-Dependent Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Zempo, Yasunari; Toogoshi, Mitsuki; Kano, Satoru S.

    The Maximum Entropy Method (MEM) is widely used for the analysis of time-series data, such as earthquake records, that have fairly long periodicity but only a short observable record. We have examined MEM for the optical analysis of time-series data from real-time TDDFT. The Fourier Transform (FT) is usually used in this analysis, but attention must be paid to the lower-energy part of the spectrum, such as the band gap, which requires long time evolution; the computational cost then becomes quite expensive. Since MEM is based on the autocorrelation of the signal, in which periodicity is described through differences of time lags, its value at lower energies is naturally small compared to that at higher energies. To overcome this difficulty, our MEM has two features: the raw data are repeated many times and concatenated, which resolves the lower-energy region in high resolution; and, together with the repeated data, an appropriate phase for the target frequency is introduced to reduce the side effects of the artificial periodicity. We have compared our improved MEM spectra with FT spectra for small-to-medium size molecules; the MEM spectrum is clearly sharper than that of FT, and our new technique provides higher resolution in fewer time steps. This work was partially supported by JSPS Grants-in-Aid for Scientific Research (C) Grant number 16K05047, Sumitomo Chemical, Co. Ltd., and Simulatio Corp.

  14. [Evaluation of a simplified index (spectral entropy) about sleep state of electrocardiogram recorded by a simplified polygraph, MemCalc-Makin2].

    PubMed

    Ohisa, Noriko; Ogawa, Hiromasa; Murayama, Nobuki; Yoshida, Katsumi

    2010-02-01

    Polysomnography (PSG) is the gold standard for the diagnosis of sleep apnea hypopnea syndrome (SAHS), but analyzing the PSG takes time, and PSG cannot be performed repeatedly because of the effort and cost involved. Therefore, simplified sleep respiratory disorder indices that reflect the PSG results are needed. The Memcalc method, a combination of the maximum entropy method for spectral analysis and the non-linear least squares method for fitting analysis (Makin2, Suwa Trust, Tokyo, Japan), has recently been developed. Spectral entropy derived by the Memcalc method may be useful for expressing trends in time-series behavior. Spectral entropy of the ECG, calculated with the Memcalc method, was evaluated by comparison with PSG results. In obstructive SAHS patients (n = 79) and control volunteers (n = 7), the ECG was recorded using MemCalc-Makin2 (GMS) together with PSG recording using Alice IV (Respironics) from 20:00 to 6:00. Spectral entropy of the ECG, calculated every 2 seconds using the Memcalc method, was compared to sleep stages analyzed manually from the PSG recordings. Spectral entropy values (-0.473 vs. -0.418, p < 0.05) were significantly increased in the OSAHS patients compared to the controls. For an entropy cutoff level of -0.423, sensitivity and specificity for OSAHS were 86.1% and 71.4%, respectively, resulting in a receiver operating characteristic curve with an area under the curve of 0.837. The absolute value of entropy was inversely correlated with stage 3 sleep. Spectral entropy calculated with the Memcalc method might be a useful index for evaluating the quality of sleep.
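
    The notion of spectral entropy can be illustrated generically (a plain-FFT sketch, not the MemCalc computation, whose spectra come from MEM): the power spectrum is normalized to a probability distribution and its Shannon entropy is taken, so near-periodic signals score low and broadband signals score high.

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectrum: low for
    near-periodic signals, high for broadband (noisy) signals."""
    psd = np.abs(np.fft.rfft(signal - np.mean(signal))) ** 2
    psd = psd[1:]                    # drop the DC bin
    p = psd / psd.sum()              # treat the PSD as probabilities
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

    A pure sinusoid concentrates its power in one bin and therefore yields a much lower entropy than white noise of the same length.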

  15. Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories

    NASA Astrophysics Data System (ADS)

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T=2.33T_C.

  16. Bayesian approach to spectral function reconstruction for Euclidean quantum field theories.

    PubMed

    Burnier, Yannis; Rothkopf, Alexander

    2013-11-01

    We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T=2.33T_C.

  17. Global Ray Tracing Simulations of the SABER Gravity Wave Climatology

    DTIC Science & Technology

    2009-01-01

    atmosphere, the residual temperature profiles are analyzed by a combination of maximum entropy method (MEM) and harmonic analysis, thus providing the...accepted 24 February 2009; published 30 April 2009. [1] Since February 2002, the SABER (sounding of the atmosphere using broadband emission radiometry...satellite instrument has measured temperatures throughout the entire middle atmosphere. Employing the same techniques as previously used for CRISTA

  18. Stochastic reconstructions of spectral functions: Application to lattice QCD

    NASA Astrophysics Data System (ADS)

    Ding, H.-T.; Kaczmarek, O.; Mukherjee, Swagato; Ohno, H.; Shu, H.-T.

    2018-05-01

    We present a detailed study of the application of two stochastic approaches, the stochastic optimization method (SOM) and stochastic analytical inference (SAI), to the extraction of spectral functions from Euclidean correlation functions. SOM has the advantage that it does not require prior information. SAI, on the other hand, is a more general method based on Bayesian inference. Under a mean field approximation SAI reduces to the often-used maximum entropy method (MEM), and for a specific choice of the prior SAI becomes equivalent to SOM. To test the applicability of these two stochastic methods to lattice QCD, we first apply them to various reasonably chosen model correlation functions and present detailed comparisons of the reconstructed spectral functions obtained from SOM, SAI and MEM. Next, we present similar studies for charmonia correlation functions obtained from lattice QCD computations using clover-improved Wilson fermions on large, fine, isotropic lattices at 0.75 and 1.5 Tc, Tc being the deconfinement transition temperature of a pure gluon plasma. We find that SAI and SOM give results consistent with MEM at these two temperatures.

  19. Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Chang, Lei; Liu, Yu-xin

    2017-07-01

    We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function, as well as the corresponding light-front PDA, can be well determined. We confirm prior work on PDA computations that was based on different methods.

  20. MEM spectral analysis for predicting influenza epidemics in Japan.

    PubMed

    Sumi, Ayako; Kamo, Ken-ichi

    2012-03-01

    The prediction of influenza epidemics has long been the focus of attention in epidemiology and mathematical biology. In this study, we tested whether time series analysis was useful for predicting the incidence of influenza in Japan. The method of time series analysis we used consists of spectral analysis based on the maximum entropy method (MEM) in the frequency domain and the nonlinear least squares method in the time domain. Using this time series analysis, we analyzed the incidence data of influenza in Japan from January 1948 to December 1998; these data are unique in that they cover the influenza pandemics in Japan of 1957, 1968, and 1977. On the basis of the MEM spectral analysis, we identified the periodic modes explaining the underlying variations of the incidence data. The optimum least squares fitting (LSF) curve calculated with the periodic modes reproduced the underlying variation of the incidence data. An extension of the LSF curve could be used to predict the incidence of influenza quantitatively. Our study suggests that MEM spectral analysis allows temporal variations of influenza epidemics with multiple periodic modes to be modeled much more effectively than by the conventional time series analysis methods previously used to investigate influenza data.
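
    The time-domain step of such an analysis — least-squares fitting of the periodic modes whose frequencies were identified from the MEM spectrum — can be sketched as follows (a minimal version that assumes the mode frequencies are already known; the identification step is omitted):

```python
import numpy as np

def fit_periodic_modes(t, x, freqs):
    """Least-squares fit of x(t) by a constant plus a cosine/sine
    pair for each given mode frequency (cycles per time unit)."""
    cols = [np.ones_like(t)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    G = np.column_stack(cols)               # design matrix
    coef, *_ = np.linalg.lstsq(G, x, rcond=None)
    return G @ coef                         # the optimum LSF curve
```

    Extrapolating the fitted curve beyond the data window is what turns the mode decomposition into a quantitative forecast.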

  1. Adaptive filtering and maximum entropy spectra with application to changes in atmospheric angular momentum

    NASA Technical Reports Server (NTRS)

    Penland, Cecile; Ghil, Michael; Weickmann, Klaus M.

    1991-01-01

    The spectral resolution and statistical significance of a harmonic analysis obtained by low-order MEM can be improved by subjecting the data to an adaptive filter. This adaptive filter consists of projecting the data onto the leading temporal empirical orthogonal functions obtained from singular spectrum analysis (SSA). The combined SSA-MEM method is applied both to a synthetic time series and to a time series of AAM data. The procedure is very effective when the background noise is white and less so when the background noise is red; the latter case obtains in the AAM data. Nevertheless, reliable evidence for intraseasonal and interannual oscillations in AAM is detected. The interannual periods include a quasi-biennial one and a low-frequency one of about 5 years, both related to the El Nino/Southern Oscillation. In the intraseasonal band, separate oscillations of about 48.5 and 51 days are ascertained.

  2. System for uncollimated digital radiography

    DOEpatents

    Wang, Han; Hall, James M.; McCarrick, James F.; Tang, Vincent

    2015-08-11

    The inversion algorithm based on the maximum entropy method (MEM) removes unwanted effects in high energy imaging resulting from an uncollimated source interacting with a finitely thick scintillator. The algorithm takes as input the image from the thick scintillator (TS) and the radiography setup geometry. The algorithm then outputs a restored image which appears as if taken with an infinitesimally thin scintillator (ITS). Inversion is accomplished by numerically generating a probabilistic model relating the ITS image to the TS image and then inverting this model on the TS image through MEM. This reconstruction technique can reduce the exposure time or the required source intensity without undesirable object blurring on the image by allowing the use of both thicker scintillators with higher efficiencies and closer source-to-detector distances to maximize incident radiation flux. The technique is applicable in radiographic applications including fast neutron, high-energy gamma and x-ray radiography using thick scintillators.

  3. High-resolution Linear Polarimetric Imaging for the Event Horizon Telescope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh

    Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.

  4. High-resolution Linear Polarimetric Imaging for the Event Horizon Telescope

    NASA Astrophysics Data System (ADS)

    Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh; Doeleman, Sheperd S.; Wardle, John F. C.; Bouman, Katherine L.

    2016-09-01

    Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.

  5. Localization Accuracy of Distributed Inverse Solutions for Electric and Magnetic Source Imaging of Interictal Epileptic Discharges in Patients with Focal Epilepsy.

    PubMed

    Heers, Marcel; Chowdhury, Rasheda A; Hedrich, Tanguy; Dubeau, François; Hall, Jeffery A; Lina, Jean-Marc; Grova, Christophe; Kobayashi, Eliane

    2016-01-01

    Distributed inverse solutions aim to realistically reconstruct the origin of interictal epileptic discharges (IEDs) from noninvasively recorded electroencephalography (EEG) and magnetoencephalography (MEG) signals. Our aim was to compare the performance of different distributed inverse solutions in localizing IEDs: coherent maximum entropy on the mean (cMEM), and hierarchical Bayesian implementations of independent identically distributed sources (IID, minimum norm prior) and spatially coherent sources (COH, spatial smoothness prior). Source maxima (i.e., the vertex with the maximum source amplitude) of IEDs in 14 EEG and 19 MEG studies from 15 patients with focal epilepsy were analyzed. We visually compared their concordance with intracranial EEG (iEEG) based on 17 cortical regions of interest, and their spatial dispersion around source maxima. Magnetic source imaging (MSI) maxima from cMEM were most often confirmed by iEEG (cMEM: 14/19, COH: 9/19, IID: 8/19 studies). COH electric source imaging (ESI) maxima co-localized best with iEEG (cMEM: 8/14, COH: 11/14, IID: 10/14 studies). In addition, cMEM was less spatially spread than COH and IID for both ESI and MSI (p < 0.001, Bonferroni-corrected post hoc t test). The highest positive predictive values for cortical regions with IEDs in iEEG were obtained with cMEM for MSI and with COH for ESI. Additional realistic EEG/MEG simulations confirmed our findings. Accurate spatially extended sources, as found in cMEM (ESI and MSI) and COH (ESI), are desirable for source imaging of IEDs because this might influence surgical decisions. Our simulations suggest that COH and IID overestimate the spatial extent of the generators compared to cMEM.

  6. MEG-EEG Information Fusion and Electromagnetic Source Imaging: From Theory to Clinical Application in Epilepsy.

    PubMed

    Chowdhury, Rasheda Arman; Zerouali, Younes; Hedrich, Tanguy; Heers, Marcel; Kobayashi, Eliane; Lina, Jean-Marc; Grova, Christophe

    2015-11-01

    The purpose of this study is to develop and quantitatively assess whether fusion of EEG and MEG (MEEG) data within the maximum entropy on the mean (MEM) framework increases the spatial accuracy of source localization, by yielding better recovery of the spatial extent and propagation pathway of the underlying generators of inter-ictal epileptic discharges (IEDs). The key element in this study is the integration of the complementary information from EEG and MEG data within the MEM framework. MEEG was compared with EEG and MEG when localizing single transient IEDs. The fusion approach was evaluated using realistic simulation models involving one or two spatially extended sources mimicking propagation patterns of IEDs. We also assessed the number of EEG electrodes required for an efficient EEG-MEG fusion. MEM was compared with minimum norm estimate, dynamic statistical parametric mapping, and standardized low-resolution electromagnetic tomography. The fusion approach was finally assessed on real epileptic data recorded from two patients showing IEDs simultaneously in EEG and MEG. Overall, the localization of MEEG data using MEM provided better recovery of the source spatial extent, more sensitivity to source depth and more accurate detection of the onset and propagation of IEDs than EEG or MEG alone. MEM was more accurate than the other methods. MEEG proved more robust than EEG and MEG for single IED localization in low signal-to-noise ratio conditions. We also showed that only a few EEG electrodes are required to bring additional relevant information to MEG during MEM fusion.

  7. An adaptive cubature formula for efficient reliability assessment of nonlinear structural dynamic systems

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Kong, Fan

    2018-05-01

    Extreme value distribution (EVD) evaluation is a critical topic in reliability analysis of nonlinear structural dynamic systems. In this paper, a new method is proposed to obtain the EVD. The maximum entropy method (MEM) with fractional moments as constraints is employed to derive the entire range of the EVD. Then, an adaptive cubature formula is proposed for assessing the fractional moments involved in MEM, which is closely related to the efficiency and accuracy of the reliability analysis. Three point sets, comprising a total of 2d² + 1 integration points in dimension d, are generated by the proposed formula, which ensures its efficiency. Besides, a "free" parameter is introduced, which makes the proposed formula adaptive to the dimension. The "free" parameter is determined by arranging one point set adjacent to the boundary of the hyper-sphere which contains the bulk of the total probability. In this way, the tail distribution may be better reproduced and the fractional moments can be evaluated accurately. Finally, the proposed method is applied to a ten-storey shear frame structure under seismic excitation, which exhibits strong nonlinearity. The numerical results demonstrate the efficacy of the proposed method.

  8. Gravity wave momentum flux estimation from CRISTA satellite data

    NASA Astrophysics Data System (ADS)

    Ern, M.; Preusse, P.; Alexander, M. J.; Offermann, D.

    2003-04-01

    Temperature altitude profiles measured by the CRISTA satellite were analyzed for gravity waves (GWs). Amplitudes and vertical and horizontal wavelengths of GWs are retrieved by applying a combination of the maximum entropy method (MEM) and harmonic analysis (HA) to the temperature height profiles and subsequently comparing the retrieved GW phases of adjacent altitude profiles. From these results, global maps of the absolute value of the vertical flux of horizontal momentum have been estimated. Significant differences exist between distributions of the temperature variance and distributions of the momentum flux. For example, global maps of the momentum flux show a pronounced northward shift of the equatorial maximum, whereas temperature variance maps of the tropics/subtropics are nearly symmetric with respect to the equator. This indicates the important influence of the horizontal and vertical wavelength distributions on the global structure of the momentum flux.
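
    For orientation, the absolute momentum flux in such analyses is typically obtained from the MEM/HA-derived temperature amplitude and wavelengths via the mid-frequency relation used in this group's later work (Ern et al., 2004); the notation below is mine and the exact prefactor should be checked against that reference:

```latex
\left|F\right| \;=\; \frac{\bar{\rho}}{2}\,\frac{\lambda_z}{\lambda_h}\,
\left(\frac{g}{N}\right)^{2}\left(\frac{\hat{T}}{\bar{T}}\right)^{2}
```

    where ρ̄ is the background density, λz and λh the vertical and horizontal wavelengths, N the buoyancy frequency, T̂ the wave temperature amplitude, and T̄ the background temperature.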

  9. Elemental Study on Auscultating Diagnosis Support System of Hemodialysis Shunt Stenosis by ANN

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaka; Fukasawa, Mizuya; Mori, Takahiro; Sakata, Osamu; Hattori, Asobu; Kato, Takaya

    It is desired to detect stenosis at an early stage so that a hemodialysis shunt can be used for a longer time. Stethoscope auscultation of vascular murmurs is a useful noninvasive diagnostic approach, but an experienced expert operator is necessary. Experts often note that high-pitch murmurs are present when the shunt becomes stenosed, and some studies report features detected at high frequency by time-frequency analysis. However, some of the murmurs are difficult to detect, and the final judgment is difficult. This study proposes a new diagnosis support system to screen for stenosis using vascular murmurs. The system applies artificial neural networks (ANN) to frequency data analyzed by the maximum entropy method (MEM). The authors recorded vascular murmurs both before and after percutaneous transluminal angioplasty (PTA). By examining the MEM spectral characteristics of the high-pitch stenosis murmurs, three features could be classified, covering 85 percent of stenosis vascular murmurs. The ANN was trained on these features and used for judgment. As a result, the classified stenosis murmurs were judged correctly in 100% of cases, and normal murmurs in 86%.

  10. In Vivo potassium-39 NMR spectra by the burg maximum-entropy method

    NASA Astrophysics Data System (ADS)

    Uchiyama, Takanori; Minamitani, Haruyuki

    The Burg maximum-entropy method was applied to estimate 39K NMR spectra of mung bean root tips. The maximum-entropy spectra have as good a linearity between peak areas and potassium concentrations as those obtained by fast Fourier transform and give a better estimation of intracellular potassium concentrations. Therefore potassium uptake and loss processes of mung bean root tips are shown to be more clearly traced by the maximum-entropy method.
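
    The Burg recursion underlying such maximum-entropy spectra can be sketched as follows (an illustrative implementation, not the paper's code; function names and the test signal are my own):

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR polynomial (a[0] = 1) and prediction-error power."""
    x = np.asarray(x, dtype=float)
    ef = x.copy()                      # forward prediction error
    eb = x.copy()                      # backward prediction error
    a = np.array([1.0])                # AR polynomial coefficients
    E = np.dot(x, x) / len(x)          # zeroth-order error power
    for _ in range(order):
        efp, ebp = ef[1:], eb[:-1]
        # reflection coefficient minimizing forward + backward error power
        k = -2.0 * np.dot(efp, ebp) / (np.dot(efp, efp) + np.dot(ebp, ebp))
        a = np.concatenate([a, [0.0]])
        a = a + k * a[::-1]            # Levinson-Durbin order update
        ef, eb = efp + k * ebp, ebp + k * efp
        E *= 1.0 - k * k
    return a, E

def burg_psd(x, order, nfft=512):
    """Maximum-entropy PSD: E / |A(e^{-i 2 pi f k})|^2 on an nfft-point grid."""
    a, E = burg_ar(x, order)
    freqs = np.arange(nfft) / nfft
    A = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a)))) @ a
    return E / np.abs(A) ** 2
```

    With a noiseless sinusoid the spectrum peaks sharply at the true frequency even at low model order, which is the resolution advantage exploited in the NMR application above.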

  11. Electronic structure and chemical bonding in La1-x Sr x MnO3 perovskite ceramics

    NASA Astrophysics Data System (ADS)

    Thenmozhi, N.; Sasikumar, S.; Sonai, S.; Saravanan, R.

    2017-04-01

    This study reports on the synthesis of La1-x Sr x MnO3 (x  =  0.3, 0.4 and 0.5) manganites by a high-temperature solid state reaction method using lanthanum oxide, strontium carbonate and manganese oxide as starting materials. The synthesized samples were characterized by XRD, UV-vis, SEM/EDS and VSM. Structural characterization shows that all the prepared samples have the perovskite rhombohedral structure. The influence of Sr doping on the electron density distribution in the lattice structure of LaMnO3 was analyzed through the maximum entropy method (MEM). Cell parameters are found to decrease with the addition of Sr. The qualitative and quantitative MEM analysis reveals that incorporation of Sr into the LaMnO3 lattice enhances the ionic nature between La and O ions and decreases the covalent nature between Mn and O ions. Optical band gap values are determined from the UV-visible absorption spectra. Particles with polygonal form are observed in the SEM micrographs. The elemental compositions of the synthesized samples are confirmed by EDS. The magnetic properties, studied from the M-H plot taken at room temperature, indicate that the prepared samples exhibit ferromagnetic behavior.

  12. Maximum entropy method applied to deblurring images on a MasPar MP-1 computer

    NASA Technical Reports Server (NTRS)

    Bonavito, N. L.; Dorband, John; Busse, Tim

    1991-01-01

    A statistical inference method based on the principle of maximum entropy is developed for the purpose of enhancing and restoring satellite images. The proposed maximum entropy image restoration method is shown to overcome the difficulties associated with image restoration and provide the smoothest and most appropriate solution consistent with the measured data. An implementation of the method on the MP-1 computer is described, and results of tests on simulated data are presented.

  13. DEM interpolation weight calculation modulus based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Chen, Tian-wei; Yang, Xia

    2015-12-01

    Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is used to analyze the model system that depends on the modulus of the spatial weights. The negative-weight problem of DEM interpolation is studied by building a maximum entropy model; by adding nonnegativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm in a MATLAB program. The method is compared with the Yang Chizhong interpolation method and with quadratic programming. The comparison shows that the magnitude and scaling of the maximum entropy weights fit the spatial relations, and that the accuracy is superior to the latter two methods.

  14. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  15. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
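
    The link between moment constraints and exponential-family densities that this work builds on can be illustrated on a discrete grid. The following is a minimal sketch of plain maximum entropy density estimation (not the Bayesian field theory software mentioned in the abstract; all names are illustrative), fitting p(x) ∝ exp(Σᵢ λᵢ fᵢ(x)) by gradient descent on the dual objective:

```python
import numpy as np

def maxent_density(grid, features, targets, lr=0.01, steps=5000):
    """Discretized maximum-entropy density p(x) ~ exp(sum_i lam_i * f_i(x))
    matching E_p[f_i] = targets[i], fit by gradient descent on the dual."""
    F = np.array([f(grid) for f in features])   # feature matrix, shape (k, n)
    lam = np.zeros(len(features))
    p = np.full(grid.size, 1.0 / grid.size)
    for _ in range(steps):
        logp = lam @ F
        logp -= logp.max()                      # stabilize the exponential
        p = np.exp(logp)
        p /= p.sum()
        lam -= lr * (F @ p - targets)           # dual gradient: E_p[f] - target
    return p

# Constrain mean = 0 and second moment = 1 on [-5, 5]: the maximum-entropy
# solution is a (truncated) standard Gaussian.
grid = np.linspace(-5.0, 5.0, 201)
p = maxent_density(grid, [lambda x: x, lambda x: x ** 2], np.array([0.0, 1.0]))
```

    The fitted density matches the prescribed moments and peaks at the origin, as expected for the Gaussian-shaped maximum entropy solution.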

  16. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    PubMed

    de Beer, Alex G F; Samson, Jean-Sébastien; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  17. Nonadditive entropy maximization is inconsistent with Bayesian updating.

    PubMed

    Pressé, Steve

    2014-11-01

    The maximum entropy method-used to infer probabilistic models from data-is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  18. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476

  19. Random versus maximum entropy models of neural population activity

    NASA Astrophysics Data System (ADS)

    Ferrari, Ulisse; Obuchi, Tomoyuki; Mora, Thierry

    2017-04-01

    The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here, data from the collective spiking activity of retinal neurons is reanalyzed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. For most of the tested networks, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as eight being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.

  20. Electronic structure and bonding interactions in Ba1- x Sr x Zr0.1Ti0.9O3 ceramics

    NASA Astrophysics Data System (ADS)

    Mangaiyarkkarasi, Jegannathan; Sasikumar, Subramanian; Saravanan, Olai Vasu; Saravanan, Ramachandran

    2017-06-01

    An investigation of the precise electronic structure and bonding interactions has been carried out on Ba1- x Sr x Zr0.1Ti0.9O3 (short for BSZT, x = 0, 0.05, 0.07 and 0.14) ceramic systems prepared via a high-temperature solid state reaction technique. The influence of Sr doping on the BSZT structure has been examined by characterizing the prepared samples using PXRD, UV-visible spectrophotometry, SEM and EDS. Powder profile refinement of the X-ray data confirms that all the synthesized samples crystallize in a single-phase cubic perovskite structure. The charge density distribution of the BSZT systems has been fully analyzed by the maximum entropy method (MEM). The MEM calculations reveal the ionic nature between Ba and O ions and the covalent nature between Ti and O ions upon co-substitution of Sr at the Ba site and Zr at the Ti site in the BaTiO3 structure. Optical band gap values have been evaluated from the UV-visible absorption spectra. Particles with irregular shapes and well defined grain boundaries are clearly visualized in the SEM images. The phase purity of the prepared samples is further confirmed by EDS qualitative spectral analysis.

  1. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra against those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is strong, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. We therefore believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.

  2. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140

  3. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

    In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting the importance of the different positions of pixels with the same gray level, obtained by computing their local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment the image. The novel method not only yields better segmentation results but also runs faster than traditional 2D histogram-based segmentation methods.

  4. Consistent maximum entropy representations of pipe flow networks

    NASA Astrophysics Data System (ADS)

    Waldrip, Steven H.; Niven, Robert K.; Abel, Markus; Schlegel, Michael

    2017-06-01

    The maximum entropy method is used to predict flows on water distribution networks. This analysis extends the water distribution network formulation of Waldrip et al. (2016) Journal of Hydraulic Engineering (ASCE), by the use of a continuous relative entropy defined on a reduced parameter set. This reduction in the parameters that the entropy is defined over ensures consistency between different representations of the same network. The performance of the proposed reduced parameter method is demonstrated with a one-loop network case study.

  5. Application of Bayesian Maximum Entropy Filter in parameter calibration of groundwater flow model in PingTung Plain

    NASA Astrophysics Data System (ADS)

    Cheung, Shao-Yong; Lee, Chieh-Han; Yu, Hwa-Lung

    2017-04-01

    Due to limited hydrogeological observation data and the high levels of uncertainty within them, parameter estimation for groundwater models has been an important issue. Many estimation methods exist; for example, the Kalman filter provides real-time calibration of parameters from measurements at groundwater monitoring wells, and related methods such as the Extended Kalman Filter and the Ensemble Kalman Filter are widely applied in groundwater research. However, Kalman filter methods are limited to linear systems. This study proposes a novel method, Bayesian Maximum Entropy Filtering, which accounts for the uncertainty of the data during parameter estimation: parameters are estimated from hard (certain) and soft (uncertain) data at the same time. We use Python and QGIS with the MODFLOW groundwater model and implement both the Extended Kalman Filter and Bayesian Maximum Entropy Filtering in Python for parameter estimation. The study was conducted as a numerical experiment combining the Bayesian maximum entropy filter with a hypothetical MODFLOW groundwater model, using virtual observation wells to observe the simulated groundwater system periodically. The results showed that, by considering the uncertainty of the data, the Bayesian maximum entropy filter provides sound real-time parameter estimates.
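
    For orientation, the linear Kalman update that this abstract contrasts with BME filtering looks like this in the scalar case (an illustrative sketch with assumed model and noise parameters, not the study's MODFLOW implementation):

```python
def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
    """One scalar Kalman predict/update cycle.
    x, P: state estimate and its variance; z: new measurement.
    F, Q: linear state model and process noise; H, R: observation model."""
    x_pred = F * x                          # predict state
    P_pred = F * P * F + Q                  # predict variance
    K = P_pred * H / (H * P_pred * H + R)   # Kalman gain
    x_new = x_pred + K * (z - H * x_pred)   # correct with the innovation
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new
```

    Because the correction is a fixed linear blend of prediction and measurement, the filter cannot represent non-Gaussian ("soft") uncertainty, which is the limitation the BME approach is meant to address.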

  6. Monitoring of Time-Dependent System Profiles by Multiplex Gas Chromatography with Maximum Entropy Demodulation

    NASA Technical Reports Server (NTRS)

    Becker, Joseph F.; Valentin, Jose

    1996-01-01

    The maximum entropy technique was successfully applied to the deconvolution of overlapped chromatographic peaks. An algorithm was written in which the chromatogram was represented as a vector of sample concentrations multiplied by a peak shape matrix. Simulation results demonstrated a trade-off between detector noise and peak resolution, in the sense that an increase in the noise level reduced the peak separation that could be recovered by the maximum entropy method. Real data originating from a sample storage column were also deconvoluted using maximum entropy. Deconvolution is useful in this type of system because the conservation of time-dependent profiles depends on the band spreading processes in the chromatographic column, which might smooth out the finer details in the concentration profile. The method was also applied to the deconvolution of previously interpreted Pioneer Venus chromatograms. It was found in this case that the correct choice of peak shape function was critical to the sensitivity of maximum entropy in the reconstruction of these chromatograms.

  7. Intracranial EEG potentials estimated from MEG sources: A new approach to correlate MEG and iEEG data in epilepsy.

    PubMed

    Grova, Christophe; Aiguabella, Maria; Zelmann, Rina; Lina, Jean-Marc; Hall, Jeffery A; Kobayashi, Eliane

    2016-05-01

    Detection of epileptic spikes in MagnetoEncephaloGraphy (MEG) requires synchronized neuronal activity over a minimum of 4 cm². We previously validated the Maximum Entropy on the Mean (MEM) as a source localization method able to recover the spatial extent of the epileptic spike generators. The purpose of this study was to evaluate quantitatively, using intracranial EEG (iEEG), the spatial extent recovered from MEG sources by estimating iEEG potentials generated by these MEG sources. We evaluated five patients with focal epilepsy who had a pre-operative MEG acquisition and iEEG with MRI-compatible electrodes. Individual MEG epileptic spikes were localized along the cortical surface segmented from a pre-operative MRI, which was co-registered with the MRI obtained with iEEG electrodes in place for identification of iEEG contacts. An iEEG forward model estimated the influence of every dipolar source of the cortical surface on each iEEG contact. This iEEG forward model was applied to MEG sources to estimate the iEEG potentials that would have been generated by these sources. MEG-estimated iEEG potentials were compared with measured iEEG potentials using four source localization methods: two variants of MEM and two standard methods equivalent to minimum norm and LORETA estimates. Our results demonstrated an excellent MEG/iEEG correspondence in the presumed focus for four out of five patients. In one patient, the deep generator identified in iEEG could not be localized in MEG. Estimating iEEG potentials from MEG sources is a promising way to evaluate which MEG sources can be retrieved and validated with iEEG data, providing accurate results especially when applied to MEM localizations. Hum Brain Mapp 37:1661-1683, 2016. © 2016 Wiley Periodicals, Inc.

  8. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    PubMed

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges are detectable against background brain activity provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², regardless of the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  9. MEG Source Localization of Spatially Extended Generators of Epileptic Activity: Comparing Entropic and Hierarchical Bayesian Approaches

    PubMed Central

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges are detectable against background brain activity provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², regardless of the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485

  10. An approach to optimal semi-active control of vibration energy harvesting based on MEMS

    NASA Astrophysics Data System (ADS)

    Rojas, Rafael A.; Carcaterra, Antonio

    2018-07-01

    In this paper the energy harvesting problem involving typical MEMS technology is reduced to an optimal control problem, where the objective is to absorb the maximum amount of energy in a given time interval from a vibrating environment. The interest here is to identify a physical upper bound for this energy storage. The mathematical tool is Krotov's method, an optimal control technique that has not yet been applied to engineering problems outside quantum dynamics. This approach leads to new maximum bounds on energy harvesting performance. Novel MEMS-based device control configurations for vibration energy harvesting are proposed, with particular emphasis on piezoelectric, electromagnetic and capacitive circuits.

  11. Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions

    NASA Astrophysics Data System (ADS)

    Xie, Jigang; Song, Wenyun

    The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of the risk analysis of mergers and acquisitions among Chinese listed firms is provided to demonstrate the feasibility and practicality of the method.
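
As a generic illustration of the ME method (not the paper's acquisition-risk model), the maximum entropy distribution subject to a linear constraint has exponential-family form p_i ∝ exp(λ x_i), and the single Lagrange multiplier λ can be found by bisection. The support and target mean below are hypothetical:

```python
import numpy as np

def maxent_with_mean(x, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum-entropy pmf on support x subject to E[x] = target_mean.
    The solution is p_i ∝ exp(lam * x_i); lam is found by bisection,
    using the fact that the mean is increasing in lam."""
    x = np.asarray(x, dtype=float)

    def mean_for(lam):
        w = np.exp(lam * (x - x.max()))   # shift exponent for stability
        p = w / w.sum()
        return p @ x, p

    for _ in range(200):
        mid = 0.5 * (lo + hi)
        m, p = mean_for(mid)
        if abs(m - target_mean) < tol:
            break
        if m < target_mean:
            lo = mid
        else:
            hi = mid
    return p

# Hypothetical risk scores 1..5 with a required expected score of 2.0
p = maxent_with_mean(np.arange(1, 6), 2.0)
print(p, p @ np.arange(1, 6))
```

Because the target mean (2.0) is below the support midpoint, the resulting multiplier is negative and the pmf decays geometrically across the scores.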

  12. Nonadditive entropy maximization is inconsistent with Bayesian updating

    NASA Astrophysics Data System (ADS)

    Pressé, Steve

    2014-11-01

    The maximum entropy method—used to infer probabilistic models from data—is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. In contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.

  13. Classic maximum entropy recovery of the average joint distribution of apparent FRET efficiency and fluorescence photons for single-molecule burst measurements.

    PubMed

    DeVore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2012-04-05

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions.

  14. Entropy-based goodness-of-fit test: Application to the Pareto distribution

    NASA Astrophysics Data System (ADS)

    Lequesne, Justine

    2013-08-01

    Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
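
Entropy-based goodness-of-fit tests of this kind require a nonparametric entropy estimate from the sample; a standard choice in this literature is Vasicek's spacing estimator. The sketch below is illustrative (a normality check rather than the paper's Pareto test statistic):

```python
import numpy as np

def vasicek_entropy(sample, m=None):
    """Vasicek's spacing-based estimator of differential entropy:
    H = (1/n) * sum_i ln( n/(2m) * (x_(i+m) - x_(i-m)) ),
    with order statistics clamped at the sample edges."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(round(np.sqrt(n))))   # common window choice
    upper = x[np.minimum(np.arange(n) + m, n - 1)]
    lower = x[np.maximum(np.arange(n) - m, 0)]
    return np.mean(np.log(n / (2.0 * m) * (upper - lower)))

rng = np.random.default_rng(0)
z = rng.normal(size=5000)
# For a standard normal the true entropy is 0.5*ln(2*pi*e), about 1.419
print(vasicek_entropy(z))
```

A test of fit compares the estimate against the maximum entropy attainable under the hypothesized parametric family; a large shortfall is evidence against the null.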

  15. Inverting ion images without Abel inversion: maximum entropy reconstruction of velocity maps.

    PubMed

    Dick, Bernhard

    2014-01-14

    A new method for the reconstruction of velocity maps from ion images is presented, which is based on the maximum entropy concept. In contrast to other methods used for Abel inversion the new method never applies an inversion or smoothing to the data. Instead, it iteratively finds the map which is the most likely cause for the observed data, using the correct likelihood criterion for data sampled from a Poissonian distribution. The entropy criterion minimizes the information content in this map, which hence contains no information for which there is no evidence in the data. Two implementations are proposed, and their performance is demonstrated with simulated and experimental data: Maximum Entropy Velocity Image Reconstruction (MEVIR) obtains a two-dimensional slice through the velocity distribution and can be compared directly to Abel inversion. Maximum Entropy Velocity Legendre Reconstruction (MEVELER) finds one-dimensional distribution functions Q_l(v) in an expansion of the velocity distribution in Legendre polynomials P_l(cos θ) for the angular dependence. Both MEVIR and MEVELER can be used for the analysis of ion images with intensities as low as 0.01 counts per pixel, with MEVELER performing significantly better than MEVIR for images with low intensity. Both methods perform better than pBASEX, in particular for images with less than one average count per pixel.

  16. An Auscultating Diagnosis Support System for Assessing Hemodialysis Shunt Stenosis by Using Self-organizing Map

    NASA Astrophysics Data System (ADS)

    Suzuki, Yutaka; Fukasawa, Mizuya; Sakata, Osamu; Kato, Hatsuhiro; Hattori, Asobu; Kato, Takaya

    Vascular access for hemodialysis is a lifeline for over 280,000 chronic renal failure patients in Japan. Early detection of stenosis may facilitate long-term use of hemodialysis shunts. Stethoscope auscultation of vascular murmurs has some utility in the assessment of access patency; however, the sensitivity of this diagnostic approach is skill dependent. This study proposes a novel diagnosis support system to detect stenosis by using vascular murmurs. The system is based on a self-organizing map (SOM) and the short-time maximum entropy method (STMEM) for data analysis. SOM is an artificial neural network, trained using unsupervised learning, which produces a feature map useful for visualizing relationships among input data. The authors recorded vascular murmurs before and after percutaneous transluminal angioplasty (PTA). The SOM-based classification was consistent with the classification based on MEM spectral and spectrogram characteristics. The ratio of pre-PTA murmurs in the stenosis category was much higher than that of post-PTA murmurs. The results suggest that the proposed method may be an effective tool in the determination of shunt stenosis.

  17. Classic Maximum Entropy Recovery of the Average Joint Distribution of Apparent FRET Efficiency and Fluorescence Photons for Single-molecule Burst Measurements

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2012-01-01

    We describe a method for analysis of single-molecule Förster resonance energy transfer (FRET) burst measurements using classic maximum entropy. Classic maximum entropy determines the Bayesian inference for the joint probability describing the total fluorescence photons and the apparent FRET efficiency. The method was tested with simulated data and then with DNA labeled with fluorescent dyes. The most probable joint distribution can be marginalized to obtain both the overall distribution of fluorescence photons and the apparent FRET efficiency distribution. This method proves to be ideal for determining the distance distribution of FRET-labeled biomolecules, and it successfully predicts the shape of the recovered distributions. PMID:22338694

  18. Maximum entropy PDF projection: A review

    NASA Astrophysics Data System (ADS)

    Baggenstoss, Paul M.

    2017-06-01

    We review maximum entropy (MaxEnt) PDF projection, a method with wide potential applications in statistical inference. The method constructs a sampling distribution for a high-dimensional vector x based on knowing the sampling distribution p(z) of a lower-dimensional feature z = T(x). Under mild conditions, the distribution p(x) having highest possible entropy among all distributions consistent with p(z) may be readily found. Furthermore, the MaxEnt p(x) may be sampled, making the approach useful in Monte Carlo methods. We review the theorem and present a case study in model order selection and classification for handwritten character recognition.

  19. Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics

    NASA Astrophysics Data System (ADS)

    Abe, Sumiyoshi

    2014-11-01

    The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown, in particular, how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.

  20. Bayesian or Laplacian inference, entropy and information theory and information geometry in data and signal processing

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools using Bayesian approach, Entropy, Information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, Fisher information, we will study their use in different fields of data and signal processing such as: entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and finally, general linear inverse problems.

  1. The maximum entropy method of moments and Bayesian probability theory

    NASA Astrophysics Data System (ADS)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1 weighted image, and in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
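
The maximum entropy method of moments can be sketched numerically: the density has exponential-family form p(x) ∝ exp(Σ_k λ_k x^k), and the Lagrange multipliers can be found by gradient descent on the convex dual. The grid, step size, and two-moment example below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def maxent_moments(grid, targets, iters=20000, step=0.001):
    """Maximum-entropy density on a grid matching moment targets
    E[x^k] for k = 1..K. p(x) ∝ exp(sum_k lam_k x^k); the Lagrange
    multipliers are found by gradient descent on the convex dual,
    whose gradient is (model moments - target moments)."""
    K = len(targets)
    phi = np.stack([grid**(k + 1) for k in range(K)])   # (K, n) features
    lam = np.zeros(K)
    dx = grid[1] - grid[0]
    for _ in range(iters):
        logp = lam @ phi
        logp -= logp.max()                  # shift for numerical stability
        p = np.exp(logp)
        p /= p.sum() * dx                   # normalize so the density integrates to 1
        moments = (phi * p).sum(axis=1) * dx
        lam -= step * (moments - targets)   # dual gradient step
    return p, lam

grid = np.linspace(-6, 6, 1201)
# Mean 0 and second moment 1: the maxent density is the standard normal,
# so lam_2 should approach -0.5 and lam_1 should stay near 0
p, lam = maxent_moments(grid, targets=np.array([0.0, 1.0]))
print(lam)
```

With only the first two moments constrained, the solution recovers the Gaussian, which is the classic special case; higher moments produce genuinely non-Gaussian maxent densities.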

  2. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
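
A pairwise maximum entropy model of this kind can be sketched for a toy system small enough to enumerate all states, fitting the fields and couplings by gradient ascent on the likelihood. The binary data, unit count, and learning rate below are illustrative assumptions, not the study's fMRI pipeline:

```python
import itertools
import numpy as np

def fit_pairwise_maxent(data, iters=5000, step=0.1):
    """Fit a pairwise maximum-entropy (Ising-like) model
    P(s) ∝ exp(h·s + s·J·s/2), s in {0,1}^n, to binary data of shape
    (n_samples, n_units), by matching means and pairwise co-activations."""
    n = data.shape[1]
    states = np.array(list(itertools.product([0, 1], repeat=n)), float)
    emp_mean = data.mean(axis=0)
    emp_corr = data.T @ data / len(data)
    h = np.zeros(n)
    J = np.zeros((n, n))
    for _ in range(iters):
        energy = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states)
        p = np.exp(energy - energy.max())
        p /= p.sum()
        mod_mean = p @ states
        mod_corr = states.T @ (states * p[:, None])
        # Likelihood gradient: empirical minus model statistics
        h += step * (emp_mean - mod_mean)
        dJ = emp_corr - mod_corr
        np.fill_diagonal(dJ, 0.0)           # diagonal handled by h
        J += step * (dJ + dJ.T) / 2         # keep couplings symmetric
    return h, J

rng = np.random.default_rng(1)
# Hypothetical binarized activity of 3 regions with different rates
data = (rng.random((2000, 3)) < [0.2, 0.5, 0.7]).astype(float)
h, J = fit_pairwise_maxent(data)
```

Exhaustive enumeration of the 2^n states is only feasible for small n; real applications use sampling or mean-field approximations for the model statistics.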

  3. Wavelet-based localization of oscillatory sources from magnetoencephalography data.

    PubMed

    Lina, J M; Chowdhury, R; Lemay, E; Kobayashi, E; Grova, C

    2014-08-01

    Transient brain oscillatory activities recorded with electroencephalography (EEG) or magnetoencephalography (MEG) are characteristic features in physiological and pathological processes. This study is aimed at describing, evaluating, and illustrating with clinical data a new method for localizing the sources of oscillatory cortical activity recorded by MEG. The method combines time-frequency representation and an entropic regularization technique in a common framework, assuming that brain activity is sparse in time and space. Spatial sparsity relies on the assumption that brain activity is organized among cortical parcels. Sparsity in time is achieved by transposing the inverse problem in the wavelet representation, for both data and sources. We propose an estimator of the wavelet coefficients of the sources based on the maximum entropy on the mean (MEM) principle. The full dynamics of the sources is obtained from the inverse wavelet transform, and principal component analysis of the reconstructed time courses is applied to extract oscillatory components. This methodology is evaluated using realistic simulations of single-trial signals, combining fast and sudden discharges (spikes) along with bursts of oscillating activity. The method is finally illustrated with a clinical application using MEG data acquired on a patient with a right orbitofrontal epilepsy.

  4. Investigation of Structures of Microwave Microelectromechanical-System Switches by Taguchi Method

    NASA Astrophysics Data System (ADS)

    Lai, Yeong-Lin; Lin, Chien-Hung

    2007-10-01

    The optimal design of microwave microelectromechanical-system (MEMS) switches by the Taguchi method is presented. The structures of the switches are analyzed and optimized in terms of the effective stiffness constant, the maximum von Mises stress, and the natural frequency in order to improve the reliability and the performance of the MEMS switches. There are four factors, each with three levels, in the Taguchi design for the MEMS switches. An L9(3^4) orthogonal array is used for the matrix experiments. The characteristics of the experiments are studied by the finite-element method and the analytical method. The responses of the signal-to-noise (S/N) ratios of the characteristics of the switches are investigated. Statistical analysis of variance (ANOVA) is used to interpret the experimental results and decide the significant factors. The final optimum setting, A1B3C1D2, predicts that the effective stiffness constant is 1.06 N/m, the maximum von Mises stress is 76.9 MPa, and the natural frequency is 29.331 kHz. The corresponding switching time is 34 μs, and the pull-down voltage is 9.8 V.
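
The L9(3^4) matrix experiment and S/N analysis can be sketched as follows. The response values are hypothetical, and a larger-the-better S/N ratio is assumed purely for illustration (the paper optimizes several different characteristics):

```python
import numpy as np

# Standard L9(3^4) orthogonal array, four factors at three levels (coded 0,1,2)
L9 = np.array([
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
])

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for a larger-the-better response:
    S/N = -10 log10( mean(1/y^2) ) over the repeats of each run."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2, axis=1))

def main_effects(array, sn):
    """Mean S/N at each level of each factor; the best level of a factor
    is the one with the highest mean S/N."""
    return np.array([[sn[array[:, f] == lvl].mean() for lvl in range(3)]
                     for f in range(array.shape[1])])

# Hypothetical repeated measurements for the nine runs
y = np.array([[9.8, 10.1], [8.7, 9.0], [7.6, 7.9],
              [9.1, 9.3], [8.2, 8.5], [9.9, 10.2],
              [7.1, 7.4], [9.5, 9.6], [8.8, 9.1]])
sn = sn_larger_is_better(y)
effects = main_effects(L9, sn)
best_levels = effects.argmax(axis=1)   # one optimum level per factor
```

The orthogonality of the array (each level appears equally often in every column) is what allows the main effects to be separated from only nine runs.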

  5. A unified approach to computational drug discovery.

    PubMed

    Tseng, Chih-Yuan; Tuszynski, Jack

    2015-11-01

    It has been reported that a slowdown in the development of new medical therapies is affecting clinical outcomes. The FDA has thus initiated the Critical Path Initiative project investigating better approaches. We review the current strategies in drug discovery and focus on the advantages of the maximum entropy method being introduced in this area. The maximum entropy principle is derived from statistical thermodynamics and has been demonstrated to be an inductive inference tool. We propose a unified method to drug discovery that hinges on robust information processing using entropic inductive inference. Increasingly, applications of maximum entropy in drug discovery employ this unified approach and demonstrate the usefulness of the concept in the area of pharmaceutical sciences. Copyright © 2015. Published by Elsevier Ltd.

  6. Optimized mixed Markov models for motif identification

    PubMed Central

    Huang, Weichun; Umbach, David M; Ohler, Uwe; Li, Leping

    2006-01-01

    Background Identifying functional elements, such as transcriptional factor binding sites, is a fundamental step in reconstructing gene regulatory networks and remains a challenging issue, largely due to limited availability of training samples. Results We introduce a novel and flexible model, the Optimized Mixture Markov model (OMiMa), and related methods to allow adjustment of model complexity for different motifs. In comparison with other leading methods, OMiMa can incorporate more than NNSplice's pairwise dependencies; OMiMa avoids model over-fitting better than the Permuted Variable Length Markov Model (PVLMM); and OMiMa requires smaller training samples than the Maximum Entropy Model (MEM). Testing on both simulated and actual data (regulatory cis-elements and splice sites), we found OMiMa's performance superior to the other leading methods in terms of prediction accuracy, required size of training data or computational time. Our OMiMa system, to our knowledge, is the only motif finding tool that incorporates automatic selection of the best model. OMiMa is freely available at [1]. Conclusion Our optimized mixture of Markov models represents an alternative to the existing methods for modeling dependent structures within a biological motif. Our model is conceptually simple and effective, and can improve prediction accuracy and/or computational speed over other leading methods. PMID:16749929

  7. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  8. Maximum Relative Entropy of Coherence: An Operational Coherence Measure.

    PubMed

    Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde

    2017-10-13

    The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.

  9. Estimation of Lithological Classification in Taipei Basin: A Bayesian Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Wu, Meng-Ting; Lin, Yuan-Chien; Yu, Hwa-Lung

    2015-04-01

    In environmental and other scientific applications, a certain understanding of the lithological composition of the subsurface is required. Because of practical constraints, only a limited amount of data can be acquired. To determine the lithological distribution in a study area, many spatial statistical methods have been used to estimate the lithological composition at unsampled points or grids. This study applied the Bayesian Maximum Entropy (BME) method, an emerging method in the field of geological spatiotemporal statistics. The BME method can identify the spatiotemporal correlation of the data and combine not only hard data but also soft data to improve estimation. Lithological classification data are discrete categorical data; therefore, this research applied categorical BME to establish a complete three-dimensional lithological estimation model. Limited hard data from cores and soft data generated from geological dating data and virtual wells were applied to estimate the three-dimensional lithological classification in the Taipei Basin. Keywords: Categorical Bayesian Maximum Entropy method, Lithological Classification, Hydrogeological Setting

  10. How multiplicity determines entropy and the derivation of the maximum entropy principle for complex systems.

    PubMed

    Hanel, Rudolf; Thurner, Stefan; Gell-Mann, Murray

    2014-05-13

    The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy over whether the notion of the maximum entropy principle can be extended in a meaningful way to nonextensive, nonergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for nonergodic and complex statistical systems if their relative entropy can be factored into a generalized multiplicity and a constraint term. The problem of finding such a factorization reduces to finding an appropriate representation of relative entropy in a linear basis. In a particular example we show that path-dependent random processes with memory naturally require specific generalized entropies. The example is to our knowledge the first exact derivation of a generalized entropy from the microscopic properties of a path-dependent random process.

  11. A multiwavelength study of the Eridanus soft X-ray enhancement

    NASA Technical Reports Server (NTRS)

    Burrows, D. N.; Singh, K. P.; Nousek, J. A.; Garmire, G. P.; Good, J.

    1993-01-01

    We present soft X-ray, N(H), and IR maps of the Eridanus soft X-ray enhancement. Soft X-ray maps from the HEAO 1 A-2 LED experiment, processed with a maximum entropy method (MEM) algorithm, show that the enhancement consists of two distinct components: a large hook-shaped component and a small circular component at different temperatures. Both of these are located in 'holes' in the IR emission, and they correspond to N(H) features at very different velocities. The dust surrounding the X-ray enhancements appears to be associated with several high-latitude molecular clouds, which allow us to obtain a probable distance of about 130 pc to the near edge of the main enhancement. The total power emitted by the hot gas is then about 10^35 to 10^36 ergs/s. We consider alternative interpretations of these objects as adiabatic supernova remnants or as stellar wind bubbles and conclude that they are more likely to be stellar wind bubbles, possibly reheated by a SN explosion in the case of the main, hook-shaped object.

  12. Diurnal variation of eye movement and heart rate variability in the human fetus at term.

    PubMed

    Morokuma, S; Horimoto, N; Satoh, S; Nakano, H

    2001-07-01

    To elucidate diurnal variations in eye movement and fetal heart rate (FHR) variability in the term fetus, we observed these two parameters continuously for 24 h, using real-time ultrasound and Doppler cardiotocograph, respectively. Studied were five uncomplicated fetuses at term. The time series data of the presence and absence of eye movement and mean FHR value for each 1 min were analyzed using the maximum entropy method (MEM) and subsequent nonlinear least squares fitting. According to the power value of eye movement, all five cases were classified into two groups: three cases in the large power group and two cases in the small power group. The acrophases of eye movement and FHR variability in the large power group were close, thereby implying the existence of a diurnal rhythm in both these parameters and also that they are synchronized. In the small power group, the acrophases were separated. The synchronization of eye movement and FHR variability in the large power group suggests that these phenomena are governed by a common central mechanism related to diurnal rhythm generation.

  13. Low Streamflow Forecasting using Minimum Relative Entropy

    NASA Astrophysics Data System (ADS)

    Cui, H.; Singh, V. P.

    2013-12-01

    Minimum relative entropy spectral analysis is derived in this study and applied to forecast streamflow time series. The proposed method extends the autocorrelation function in such a way that the relative entropy of the underlying process is minimized, so that the time series can be forecasted. Different prior assumptions, such as uniform, exponential and Gaussian, are used to estimate the spectral density depending on the autocorrelation structure. Seasonal and nonseasonal low streamflow series obtained from the Colorado River (Texas) under drought conditions are successfully forecasted using the proposed method. Minimum relative entropy determines the spectrum of low streamflow series with higher resolution than conventional methods. The forecasted streamflow is compared with predictions using Burg's maximum entropy spectral analysis (MESA) and configurational entropy. The advantages and disadvantages of each method in forecasting low streamflow are discussed.
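
Several records in this collection refer to Burg's maximum entropy spectral analysis (MESA). As an illustrative sketch (not tied to any particular study above), Burg's recursion estimates AR coefficients directly from the data and yields the MEM power spectrum; the test signal below is a hypothetical sinusoid in noise:

```python
import numpy as np

def burg(x, order):
    """Burg's method: estimate AR coefficients and residual power,
    the basis of maximum entropy spectral analysis (MESA)."""
    x = np.asarray(x, dtype=float)
    a = np.array([1.0])        # AR polynomial, a[0] = 1
    e = np.dot(x, x) / len(x)  # prediction error power
    f = x.copy()               # forward prediction errors
    b = x.copy()               # backward prediction errors
    for _ in range(order):
        fs, bs = f[1:], b[:-1]
        # Reflection coefficient minimizing forward+backward error power
        k = -2.0 * np.dot(fs, bs) / (np.dot(fs, fs) + np.dot(bs, bs))
        ap = np.concatenate([a, [0.0]])
        a = ap + k * ap[::-1]            # Levinson-type polynomial update
        f, b = fs + k * bs, bs + k * fs
        e *= 1.0 - k * k
    return a, e

def mem_spectrum(a, e, nfreq=1024):
    """MEM power spectrum P(f) = e / |A(f)|^2 for f in [0, 0.5] cycles/sample."""
    freqs = np.linspace(0.0, 0.5, nfreq)
    A = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a)))) @ a
    return freqs, e / np.abs(A)**2

rng = np.random.default_rng(2)
n = np.arange(512)
x = np.sin(2 * np.pi * 0.1 * n) + 0.1 * rng.normal(size=512)
a, e = burg(x, order=8)
freqs, P = mem_spectrum(a, e)
print(freqs[P.argmax()])   # expected near the true frequency 0.1
```

The sharp spectral peak from a short AR model is exactly the high-resolution property that makes MEM attractive relative to periodogram-based estimates.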

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khosla, D.; Singh, M.

    The estimation of three-dimensional dipole current sources on the cortical surface from the measured magnetoencephalogram (MEG) is a highly underdetermined inverse problem, as there are many "feasible" images which are consistent with the MEG data. Previous approaches to this problem have concentrated on the use of weighted minimum norm inverse methods. While these methods ensure a unique solution, they often produce overly smoothed solutions and exhibit severe sensitivity to noise. In this paper we explore the maximum entropy approach to obtain better solutions to the problem. This estimation technique selects, from the set of feasible images, the image which has the maximum entropy permitted by the information available to us. In order to account for the presence of noise in the data, we have also incorporated a noise rejection or likelihood term into our maximum entropy method. This makes our approach mirror a Bayesian maximum a posteriori (MAP) formulation. Additional information from other functional techniques like functional magnetic resonance imaging (fMRI) can be incorporated in the proposed method in the form of a prior bias function to improve solutions. We demonstrate the method with experimental phantom data from a clinical 122-channel MEG system.

  15. Entropic Inference

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2011-03-01

    In this tutorial we review the essential arguments behind entropic inference. We focus on the epistemological notion of information and its relation to the Bayesian beliefs of rational agents. The problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME) includes as special cases both MaxEnt and Bayes' rule, and therefore unifies the two themes of these workshops, the Maximum Entropy and the Bayesian methods, into a single general inference scheme.

  16. [Maximum entropy model versus remote sensing-based methods for extracting Oncomelania hupensis snail habitats].

    PubMed

    Cong-Cong, Xia; Cheng-Fang, Lu; Si, Li; Tie-Jun, Zhang; Sui-Heng, Lin; Yi, Hu; Ying, Liu; Zhi-Jie, Zhang

    2016-12-02

    To explore the use of the maximum entropy model for extracting Oncomelania hupensis snail habitats in the Poyang Lake zone, information on snail habitats and related environmental factors collected in the Poyang Lake zone was integrated to set up a maximum entropy based species model and generate a snail habitat distribution map. Two Landsat 7 ETM+ remote sensing images of the Poyang Lake zone, covering both wet and drought seasons, were obtained, and two indices, the modified normalized difference water index (MNDWI) and the normalized difference vegetation index (NDVI), were applied to extract snail habitats. The ROC curve, sensitivity, and specificity were used to assess the results, and the importance of the variables for snail habitats was analyzed using the jackknife approach. The evaluation showed that the area under the receiver operating characteristic curve (AUC) of the testing data for the remote sensing-based method was only 0.56, with a sensitivity of 0.23 and a specificity of 0.89. The corresponding indices for the maximum entropy model were 0.876, 0.89 and 0.74, respectively. The snail habitats in the Poyang Lake zone were mainly concentrated in the northeast part of Yongxiu County, the northwest of Yugan County, the southwest of Poyang County, and the middle of Xinjian County. Elevation was the most important environmental variable affecting the distribution of snails, followed by land surface temperature (LST). The maximum entropy model is more reliable and accurate than the remote sensing-based method for extracting snail habitats, and can guide relevant departments in targeting prevention and control measures at high-risk snail habitats.

  17. Block entropy and quantum phase transition in the anisotropic Kondo necklace model

    NASA Astrophysics Data System (ADS)

    Mendoza-Arenas, J. J.; Franco, R.; Silva-Valencia, J.

    2010-06-01

    We study the von Neumann block entropy in the Kondo necklace model for different anisotropies η in the XY interaction between conduction spins using the density matrix renormalization group method. It was found that the block entropy presents a maximum for each η considered, and, comparing it with the results of the quantum criticality of the model based on the behavior of the energy gap, we observe that the maximum block entropy occurs at the quantum critical point between an antiferromagnetic and a Kondo singlet state, so this measure of entanglement is useful for giving information about where a quantum phase transition occurs in this model. We observe that the block entropy also presents a maximum at the quantum critical points that are obtained when an anisotropy Δ is included in the Kondo exchange between localized and conduction spins; when Δ diminishes for a fixed value of η, the critical point increases, favoring the antiferromagnetic phase.
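
The von Neumann block entropy used here can be computed for small systems directly from the Schmidt decomposition of the state vector; the two-qubit examples below are illustrative and are not the paper's DMRG calculation:

```python
import numpy as np

def block_entropy(psi, dim_block):
    """Von Neumann entropy S = -Tr(rho_A ln rho_A) of a block,
    where rho_A is obtained by tracing out the rest of the system.
    The eigenvalues of rho_A are the squared Schmidt (singular) values
    of the state vector reshaped into a (block, rest) matrix."""
    dim_rest = psi.size // dim_block
    m = psi.reshape(dim_block, dim_rest)
    s = np.linalg.svd(m, compute_uv=False)
    lam = s**2
    lam = lam[lam > 1e-12]          # drop numerically zero weights
    return float(-np.sum(lam * np.log(lam)))

# Bell state (|00> + |11>)/sqrt(2): maximal single-qubit block entropy
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(block_entropy(bell, 2))   # ln 2 ≈ 0.6931

# Product state |00>: no entanglement between the blocks
prod = np.array([1.0, 0.0, 0.0, 0.0])
print(block_entropy(prod, 2))   # ≈ 0
```

In a DMRG sweep the same quantity is read off from the eigenvalues of the reduced density matrix of the block, and its maximum over a control parameter is the signature of criticality discussed in the abstract.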

  18. Chapman Enskog-maximum entropy method on time-dependent neutron transport equation

    NASA Astrophysics Data System (ADS)

    Abdou, M. A.

    2006-09-01

    The time-dependent neutron transport equation in semi-infinite and infinite media with linear anisotropic and Rayleigh scattering is considered. The problem is solved by means of the flux-limited Chapman-Enskog maximum entropy approach. The solution gives the neutron distribution density function, which is used to compute numerically the radiant energy density E(x,t), the net flux F(x,t) and the reflectivity Rf. The behaviour of the approximate flux-limited maximum entropy neutron density function is compared with that found by other theories. Numerical results for the radiant energy, net flux and reflectivity of the proposed medium are given at different times and positions.

  19. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision-makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the limitations of the FMED, namely that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution when, in addition to its fractiles, we also know it is continuous, and work through full examples to illustrate the approach.
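Between elicited fractiles, the maximum entropy density is flat on each interval, which is exactly the FMED shape (and the discontinuity) discussed above; a minimal sketch with hypothetical fractiles:

```python
import numpy as np

def fmed_pdf(x, fractile_x, fractile_p):
    """Density of the maximum entropy distribution subject to fractile
    constraints: uniform (flat) within each fractile interval, with
    height = (probability increment) / (interval width)."""
    fx, fp = np.asarray(fractile_x, float), np.asarray(fractile_p, float)
    heights = np.diff(fp) / np.diff(fx)
    idx = np.clip(np.searchsorted(fx, x, side='right') - 1, 0, len(heights) - 1)
    return np.where((x < fx[0]) | (x > fx[-1]), 0.0, heights[idx])

# Hypothetical elicited fractiles: P(X<=0)=0, P(X<=1)=0.5, P(X<=4)=1
xs = np.array([0.5, 2.0])
print(fmed_pdf(xs, [0, 1, 4], [0, 0.5, 1.0]))  # [0.5, 0.1667]: flat, with a jump at x=1
```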

  20. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function, to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
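For a toy cost function small enough to enumerate, the maximum entropy (Boltzmann) decoding described here can be computed exactly; a sketch with hypothetical couplings and fields, not the paper's random Ising instances:

```python
import numpy as np
from itertools import product

def boltzmann_marginals(J, h, beta):
    """Exact Boltzmann distribution over all spin configurations of a small
    Ising cost function E(s) = -1/2 sum_ij J_ij s_i s_j - sum_i h_i s_i,
    and the resulting per-bit marginal P(s_i = +1)."""
    n = len(h)
    states = np.array(list(product([-1, 1], repeat=n)))
    E = -np.einsum('ki,ij,kj->k', states, J, states) / 2 - states @ h
    w = np.exp(-beta * (E - E.min()))          # shift for numerical stability
    p = w / w.sum()
    marg = ((states == 1).T * p).sum(axis=1)   # P(s_i = +1)
    return p, marg

# Hypothetical 3-bit decoding problem: fields bias the bits toward +1, -1, +1
J = np.zeros((3, 3)); J[0, 1] = J[1, 0] = 0.5
h = np.array([1.0, -1.0, 0.2])
_, marg = boltzmann_marginals(J, h, beta=1.0)
print(np.round(marg, 3))  # maximum entropy decoding: threshold each marginal at 0.5
```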

  1. Crowd macro state detection using entropy model

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao

    2015-08-01

    In crowd security research, a primary concern is to identify the macro state of crowd behaviors in order to prevent disasters and supervise the crowd. In physics, entropy describes the macro state of a self-organizing system, and a change in entropy indicates a change in the system's macro state. This paper provides a method to construct crowd behavior microstates and the corresponding probability distribution using the individuals' velocity information (magnitude and direction); an entropy model is then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. They verified that in a disordered state the crowd behavior entropy is close to the theoretical maximum entropy, while in an ordered state the entropy is much lower than half of the theoretical maximum. A sudden change in the crowd macro state leads to a change in entropy. The proposed entropy model is more applicable than the order parameter model in crowd behavior detection; by recognizing the entropy mutation, it is possible to detect the crowd behavior macro state automatically using cameras. The results provide data support for crowd emergency prevention and manual emergency intervention.
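The entropy-from-microstates idea can be sketched by binning individual velocities (direction and magnitude) and comparing the Shannon entropy of the resulting histogram with its theoretical maximum; bin counts and data below are illustrative:

```python
import numpy as np

def crowd_entropy(angles, speeds, n_dir=8, n_spd=4, max_speed=2.0):
    """Shannon entropy of the empirical distribution over velocity
    microstates (direction x speed bins), plus the theoretical maximum."""
    d = np.floor(np.mod(angles, 2 * np.pi) / (2 * np.pi / n_dir)).astype(int)
    s = np.clip((np.asarray(speeds) / max_speed * n_spd).astype(int), 0, n_spd - 1)
    hist = np.bincount(d * n_spd + s, minlength=n_dir * n_spd).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p))), float(np.log(n_dir * n_spd))

rng = np.random.default_rng(0)
# Disordered crowd: random directions and speeds -> entropy near the maximum
H_dis, H_max = crowd_entropy(rng.uniform(0, 2*np.pi, 5000), rng.uniform(0, 2, 5000))
# Ordered crowd: everyone moving the same way -> entropy far below the maximum
H_ord, _ = crowd_entropy(np.full(5000, 0.1), np.full(5000, 1.0))
print(H_dis, H_ord, H_max)
```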

  2. A Maximum Entropy Method for Particle Filtering

    NASA Astrophysics Data System (ADS)

    Eyink, Gregory L.; Kim, Sangil

    2006-06-01

    Standard ensemble or particle filtering schemes do not properly represent states of low prior probability when the number of available samples is too small, as is often the case in practical applications. We introduce here a set of parametric resampling methods to solve this problem. Motivated by a general H-theorem for relative entropy, we construct parametric models for the filter distributions as maximum-entropy/minimum-information models consistent with moments of the particle ensemble. When the prior distributions are modeled as mixtures of Gaussians, our method naturally generalizes the ensemble Kalman filter to systems with highly non-Gaussian statistics. We apply the new particle filters presented here to two simple test cases: a one-dimensional diffusion process in a double-well potential and the three-dimensional chaotic dynamical system of Lorenz.

  3. Interatomic potentials in condensed matter via the maximum-entropy principle

    NASA Astrophysics Data System (ADS)

    Carlsson, A. E.

    1987-09-01

    A general method is described for the calculation of interatomic potentials in condensed-matter systems by use of a maximum-entropy Ansatz for the interatomic correlation functions. The interatomic potentials are given explicitly in terms of statistical correlation functions involving the potential energy and the structure factor of a ``reference medium.'' Illustrations are given for Al-Cu alloys and a model transition metal.

  4. Maximum entropy, fluctuations and priors

    NASA Astrophysics Data System (ADS)

    Caticha, A.

    2001-05-01

    The method of maximum entropy (ME) is extended to address the following problem: Once one accepts that the ME distribution is to be preferred over all others, the question is to what extent are distributions with lower entropy supposed to be ruled out. Two applications are given. The first is to the theory of thermodynamic fluctuations. The formulation is exact, covariant under changes of coordinates, and allows fluctuations of both the extensive and the conjugate intensive variables. The second application is to the construction of an objective prior for Bayesian inference. The prior obtained by following the ME method to its inevitable conclusion turns out to be a special case (α=1) of what are currently known under the name of entropic priors.

  5. A Noise Reduction Method for Dual-Mass Micro-Electromechanical Gyroscopes Based on Sample Entropy Empirical Mode Decomposition and Time-Frequency Peak Filtering

    PubMed Central

    Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun

    2016-01-01

    The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a trade-off in TFPF: selecting a short window length leads to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length leads to serious attenuation of the signal amplitude but effective random noise reduction. To achieve a good compromise between signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the sample entropy of each IMF is calculated in order to classify the IMFs into three different components; then short-window TFPF is employed for the low-frequency IMFs, long-window TFPF is employed for the high-frequency IMFs, and the noise IMFs are discarded directly; finally, the de-noised signal is obtained by reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods. PMID:27258276
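The sample entropy used to classify the IMFs can be sketched in a few lines; this is a generic SampEn implementation, not the authors' code:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r) = -ln(A/B): B counts pairs of matching
    templates of length m, A of length m+1 (Chebyshev distance <= r,
    self-matches excluded); higher values mean a more irregular signal."""
    x = np.asarray(x, float)
    N = len(x)
    r = r_factor * x.std()
    def matches(mm):
        templ = np.array([x[i:i + mm] for i in range(N - m)])
        c = 0
        for i in range(len(templ) - 1):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += int(np.sum(d <= r))
        return c
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float('inf')

rng = np.random.default_rng(1)
noise = rng.standard_normal(500)         # irregular signal -> higher SampEn
regular = np.sin(0.1 * np.arange(500))   # highly regular signal -> lower SampEn
print(sample_entropy(noise), sample_entropy(regular))
```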

  7. Maximum Tsallis entropy with generalized Gini and Gini mean difference indices constraints

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2017-04-01

    Using the maximum entropy principle with Tsallis entropy, some distribution families for modeling income distribution are obtained. By considering income inequality measures, maximum Tsallis entropy distributions under constraints on the generalized Gini and Gini mean difference indices are derived. It is shown that the Tsallis entropy maximizers under the considered constraints belong to the generalized Pareto family.
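The Tsallis entropy maximized in this setting has a simple closed form, recovering Shannon entropy in the q → 1 limit; a minimal sketch:

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); reduces to
    Shannon entropy -sum p ln p as q -> 1."""
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float((1.0 - np.sum(p ** q)) / (q - 1.0))

p = np.array([0.5, 0.25, 0.25])
print(tsallis_entropy(p, 2.0))  # (1 - 0.375) / 1 = 0.625
print(tsallis_entropy(p, 1.0))  # Shannon entropy ≈ 1.0397
```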

  8. Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization

    NASA Astrophysics Data System (ADS)

    Li, Li

    2018-03-01

    To extract targets from complex backgrounds more quickly and accurately, and to further improve defect detection, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization is proposed. Firstly, single-threshold selection based on Arimoto entropy is extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the Arimoto entropy dual-threshold formulae are calculated by recursion, eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm is improved with a chaotic sequence based on the tent map, so that the two optimal thresholds are found quickly by the improved optimization algorithm. Extensive experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments targets more quickly and accurately with superior segmentation effect, proving to be a fast and effective method for image segmentation.
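The exhaustive core of dual-threshold entropy selection can be sketched as follows; for simplicity this uses Shannon class entropies as a stand-in for the Arimoto entropy of the paper, and omits the recursion and bee colony speed-ups:

```python
import numpy as np

def _class_entropy(p):
    """Shannon entropy of one gray-level class, normalized by its mass."""
    w = p.sum()
    if w <= 0:
        return 0.0
    q = p[p > 0] / w
    return float(-np.sum(q * np.log(q)))

def dual_threshold(hist):
    """Exhaustive dual-threshold selection maximizing the summed
    within-class entropies of the three classes [0,t1), [t1,t2), [t2,L)."""
    p = hist / hist.sum()
    L = len(p)
    best, best_t = -np.inf, (0, 0)
    for t1 in range(1, L - 1):
        for t2 in range(t1 + 1, L):
            h = (_class_entropy(p[:t1]) + _class_entropy(p[t1:t2])
                 + _class_entropy(p[t2:]))
            if h > best:
                best, best_t = h, (t1, t2)
    return best_t

# Trimodal toy histogram (16 gray levels) with modes near bins 2, 8 and 14
hist = np.zeros(16)
hist[[1, 2, 3]] = [10, 30, 10]
hist[[7, 8, 9]] = [10, 30, 10]
hist[[13, 14, 15]] = [10, 30, 10]
t1, t2 = dual_threshold(hist)
print(t1, t2)  # thresholds fall in the gaps between the three modes
```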

  9. Vertical electrostatic force in MEMS cantilever IR sensor

    NASA Astrophysics Data System (ADS)

    Rezadad, Imen; Boroumand Azad, Javaneh; Smith, Evan M.; Alhasan, Ammar; Peale, Robert E.

    2014-06-01

    A MEMS cantilever IR detector that repetitively lifts from the surface under the influence of a saw-tooth electrostatic force, where the contact duty cycle is a measure of the absorbed IR radiation, is analyzed. The design comprises three parallel conducting plates. Fixed buried and surface plates are held at opposite potential, and a moveable cantilever is biased the same as the surface plate. Calculations based on energy methods with position-dependent capacitance and electrostatic induction coefficients demonstrate the upward sign of the force on the cantilever and determine the force magnitude. 2D finite element method calculations of the local fields confirm the sign of the force and determine its distribution across the cantilever. The upward force is maximized when the surface plate is slightly larger than the other two. The electrostatic repulsion is compared with the Casimir sticking force to determine the maximum useful contact area. MEMS devices were fabricated and the vertical displacement of the cantilever was observed in a number of experiments. The approach may also be applied to MEMS actuators and micromirrors.

  10. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    NASA Astrophysics Data System (ADS)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
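The information entropy production of a stationary Markov chain has a standard closed form, vanishing exactly when detailed balance holds; a small numpy sketch:

```python
import numpy as np

def entropy_production_rate(P):
    """Steady-state entropy production of a Markov chain with transition
    matrix P: sum_ij pi_i P_ij ln(pi_i P_ij / (pi_j P_ji)). It is zero
    iff detailed balance holds (the chain is reversible)."""
    P = np.asarray(P, float)
    # stationary distribution = left eigenvector of P for eigenvalue 1
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmin(np.abs(vals - 1))])
    pi = pi / pi.sum()
    ep = 0.0
    for i in range(len(pi)):
        for j in range(len(pi)):
            if P[i, j] > 0 and P[j, i] > 0:
                ep += pi[i] * P[i, j] * np.log((pi[i] * P[i, j]) / (pi[j] * P[j, i]))
    return float(ep)

# Reversible chain (detailed balance) -> zero entropy production
P_rev = np.array([[0.5, 0.5], [0.5, 0.5]])
# Irreversible 3-cycle -> strictly positive entropy production
P_cyc = np.array([[0.0, 0.9, 0.1], [0.1, 0.0, 0.9], [0.9, 0.1, 0.0]])
print(entropy_production_rate(P_rev))  # ≈ 0
print(entropy_production_rate(P_cyc))  # > 0
```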

  11. Maximum entropy formalism for the analytic continuation of matrix-valued Green's functions

    NASA Astrophysics Data System (ADS)

    Kraberger, Gernot J.; Triebl, Robert; Zingl, Manuel; Aichhorn, Markus

    2017-10-01

    We present a generalization of the maximum entropy method to the analytic continuation of matrix-valued Green's functions. To treat off-diagonal elements correctly based on Bayesian probability theory, the entropy term has to be extended for spectral functions that are possibly negative in some frequency ranges. In that way, all matrix elements of the Green's function matrix can be analytically continued; we introduce a computationally cheap element-wise method for this purpose. However, this method cannot ensure important constraints on the mathematical properties of the resulting spectral functions, namely positive semidefiniteness and Hermiticity. To improve on this, we present a full matrix formalism, where all matrix elements are treated simultaneously. We show the capabilities of these methods using insulating and metallic dynamical mean-field theory (DMFT) Green's functions as test cases. Finally, we apply the methods to realistic material calculations for LaTiO3, where off-diagonal matrix elements in the Green's function appear due to the distorted crystal structure.

  12. Information and Entropy

    NASA Astrophysics Data System (ADS)

    Caticha, Ariel

    2007-11-01

    What is information? Is it physical? We argue that in a Bayesian theory the notion of information must be defined in terms of its effects on the beliefs of rational agents. Information is whatever constrains rational beliefs and therefore it is the force that induces us to change our minds. This problem of updating from a prior to a posterior probability distribution is tackled through an eliminative induction process that singles out the logarithmic relative entropy as the unique tool for inference. The resulting method of Maximum relative Entropy (ME), which is designed for updating from arbitrary priors given information in the form of arbitrary constraints, includes as special cases both MaxEnt (which allows arbitrary constraints) and Bayes' rule (which allows arbitrary priors). Thus, ME unifies the two themes of these workshops—the Maximum Entropy and the Bayesian methods—into a single general inference scheme that allows us to handle problems that lie beyond the reach of either of the two methods separately. I conclude with a couple of simple illustrative examples.

  13. Application of a multiscale maximum entropy image restoration algorithm to HXMT observations

    NASA Astrophysics Data System (ADS)

    Guan, Ju; Song, Li-Ming; Huo, Zhuo-Xi

    2016-08-01

    This paper introduces a multiscale maximum entropy (MSME) algorithm for image restoration of the Hard X-ray Modulation Telescope (HXMT), a collimated scan X-ray satellite mainly devoted to a sensitive all-sky survey and pointed observations in the 1-250 keV range. The novelty of the MSME method is to use wavelet decomposition and multiresolution support to control noise amplification at different scales. Our work focuses on the application and modification of this method to restore diffuse sources detected by HXMT scanning observations. An improved method, the ensemble multiscale maximum entropy (EMSME) algorithm, is proposed to alleviate the problem of mode mixing existing in MSME. Simulations have been performed on the detection of the diffuse source Cen A by HXMT in all-sky survey mode. The results show that the MSME method is suited to the deconvolution task of HXMT for diffuse source detection, and that the improved method suppresses noise and improves the correlation and signal-to-noise ratio, proving itself a better algorithm for image restoration. Through one all-sky survey, HXMT could reach the capacity of detecting a diffuse source with a maximum differential flux of 0.5 mCrab. Supported by the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (XDA04010300) and the National Natural Science Foundation of China (11403014)

  14. The equivalence of minimum entropy production and maximum thermal efficiency in endoreversible heat engines.

    PubMed

    Haseli, Y

    2016-05-01

    The objective of this study is to investigate the thermal efficiency and power production of typical models of endoreversible heat engines at the regime of minimum entropy generation rate. The study considers the Curzon-Ahlborn engine, Novikov's engine, and the Carnot vapor cycle. The operational regimes at maximum thermal efficiency, maximum power output and minimum entropy production rate are compared for each of these engines. The results reveal that in an endoreversible heat engine, a reduction in entropy production corresponds to an increase in thermal efficiency. The three criteria of minimum entropy production, maximum thermal efficiency, and maximum power may become equivalent under the condition of fixed heat input.
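The key quantities compared in this study have closed forms for the Curzon-Ahlborn model; a short numeric sketch (the reservoir temperatures are illustrative):

```python
import numpy as np

# Endoreversible Curzon-Ahlborn engine between reservoirs Th and Tc:
# efficiency at maximum power is 1 - sqrt(Tc/Th), versus Carnot's 1 - Tc/Th.
def carnot_efficiency(Th, Tc):
    return 1.0 - Tc / Th

def curzon_ahlborn_efficiency(Th, Tc):
    return 1.0 - np.sqrt(Tc / Th)

Th, Tc = 600.0, 300.0
eta_c = carnot_efficiency(Th, Tc)            # 0.5
eta_ca = curzon_ahlborn_efficiency(Th, Tc)   # ≈ 0.293
# Entropy production per unit heat input at efficiency eta:
# sigma = (1 - eta)/Tc - 1/Th, which vanishes at the Carnot limit,
# illustrating that higher efficiency means lower entropy production.
sigma = (1 - eta_ca) / Tc - 1 / Th
print(eta_c, eta_ca, sigma)
```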

  15. Pareto versus lognormal: A maximum entropy test

    NASA Astrophysics Data System (ADS)

    Bee, Marco; Riccaboni, Massimo; Schiavo, Stefano

    2011-08-01

    It is commonly found that distributions that seem to be lognormal over a broad range change to a power-law (Pareto) distribution for the last few percentiles. The distributions of many physical, natural, and social events (earthquake size, species abundance, income and wealth, as well as file, city, and firm sizes) display this structure. We present a test for the occurrence of power-law tails in statistical distributions based on maximum entropy. This methodology allows one to identify the true data-generating process even when it is neither lognormal nor Pareto. The maximum entropy approach is then compared with other widely used methods and applied to different levels of aggregation of complex systems. Our results support the theory that distributions with a lognormal body and a Pareto tail can be generated as mixtures of lognormally distributed units.

  16. Optimized Kernel Entropy Components.

    PubMed

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting kernel eigenvectors by entropy instead of by variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance; here, we analyze the most common kernel length-scale selection criteria. Both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
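The entropy-based ranking that distinguishes KECA from kernel PCA can be sketched directly: each kernel eigenpair contributes lam*(1^T e)^2 to the Renyi entropy estimate, and components are kept in decreasing order of that contribution (a generic KECA sketch, without the OKECA rotation):

```python
import numpy as np

def keca(X, sigma, n_components=2):
    """Kernel ECA sketch: eigendecompose an RBF kernel matrix and rank
    eigenpairs by their entropy contribution lam * (1^T e)^2 instead of
    by eigenvalue (the kernel PCA ordering)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * sigma ** 2))
    lam, E = np.linalg.eigh(K)                 # ascending eigenvalues
    contrib = lam * (E.sum(axis=0) ** 2)       # entropy contribution per pair
    sel = np.argsort(contrib)[::-1][:n_components]
    # projections onto the selected kernel eigen-directions
    return E[:, sel] * np.sqrt(np.clip(lam[sel], 0, None))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
Z = keca(X, sigma=1.0, n_components=2)
print(Z.shape)  # (40, 2)
```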

  17. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints, and conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.

  18. The calculation of transport properties in quantum liquids using the maximum entropy numerical analytic continuation method: Application to liquid para-hydrogen

    PubMed Central

    Rabani, Eran; Reichman, David R.; Krilov, Goran; Berne, Bruce J.

    2002-01-01

    We present a method based on augmenting an exact relation between a frequency-dependent diffusion constant and the imaginary time velocity autocorrelation function, combined with the maximum entropy numerical analytic continuation approach to study transport properties in quantum liquids. The method is applied to the case of liquid para-hydrogen at two thermodynamic state points: a liquid near the triple point and a high-temperature liquid. Good agreement for the self-diffusion constant and for the real-time velocity autocorrelation function is obtained in comparison to experimental measurements and other theoretical predictions. Improvement of the methodology and future applications are discussed. PMID:11830656

  19. n-Order and maximum fuzzy similarity entropy for discrimination of signals of different complexity: Application to fetal heart rate signals.

    PubMed

    Zaylaa, Amira; Oudjemia, Souad; Charara, Jamal; Girault, Jean-Marc

    2015-09-01

    This paper presents two new concepts for discriminating between signals of different complexity. The first addresses the problem of setting entropy descriptors by varying the pattern size instead of the tolerance, leading to a search for the optimal pattern size that maximizes the similarity entropy. The second is based on the n-order similarity entropy, which encompasses the 1-order similarity entropy; to improve statistical stability, n-order fuzzy similarity entropy is proposed. Fractional Brownian motion was simulated to validate the proposed methods, and fetal heart rate signals were used to discriminate normal from abnormal fetuses. In all cases it was possible to discriminate time series of different complexity, such as fractional Brownian motion and fetal heart rate signals. The best levels of performance in terms of sensitivity (90%) and specificity (90%) were obtained with the n-order fuzzy similarity entropy. However, the optimal pattern size and the maximum similarity measurement were found to be related to intrinsic features of the time series.

  20. Numerical optimization using flow equations.

    PubMed

    Punk, Matthias

    2014-12-01

    We develop a method for multidimensional optimization using flow equations. This method is based on homotopy continuation in combination with a maximum entropy approach. Extrema of the optimizing functional correspond to fixed points of the flow equation. While ideas based on Bayesian inference such as the maximum entropy method always depend on a prior probability, the additional step in our approach is to perform a continuous update of the prior during the homotopy flow. The prior probability thus enters the flow equation only as an initial condition. We demonstrate the applicability of this optimization method for two paradigmatic problems in theoretical condensed matter physics: numerical analytic continuation from imaginary to real frequencies and finding (variational) ground states of frustrated (quantum) Ising models with random or long-range antiferromagnetic interactions.

  2. Analysis of rapid eye movement periodicity in narcoleptics based on maximum entropy method.

    PubMed

    Honma, H; Ohtomo, N; Kohsaka, M; Fukuda, N; Kobayashi, R; Sakakibara, S; Nakamura, F; Koyama, T

    1999-04-01

    We examined REM sleep periodicity in typical narcoleptics and in patients who had shown signs of a narcoleptic tetrad without HLA-DRB1*1501/DQB1*0602 or DR2 antigens, using spectral analysis based on the maximum entropy method. The REM sleep period of typical narcoleptics showed two peaks at night, one at 70-90 min and one at 110-130 min, and a single peak at around 70-90 min during the daytime. The nocturnal REM sleep period of typical narcoleptics may therefore be composed of several different periods, one of which corresponds to that of their daytime REM sleep.
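The maximum entropy (Burg) spectral estimate underlying this kind of periodicity analysis can be sketched compactly: fit an autoregressive model by Burg's recursion, then read the dominant period off the AR spectrum (a generic sketch applied to a synthetic oscillation, not the REM data):

```python
import numpy as np

def burg(x, order):
    """Burg (maximum entropy) estimate of AR coefficients a and error power E."""
    f = np.asarray(x, float).copy()   # forward prediction errors
    b = f.copy()                      # backward prediction errors
    a = np.array([1.0])               # AR polynomial A(z)
    E = np.dot(f, f) / len(f)         # prediction error power
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
        f, b = ff + k * bb, bb + k * ff
        ext = np.concatenate([a, [0.0]])
        a = ext + k * ext[::-1]       # Levinson update of the AR polynomial
        E *= 1.0 - k * k
    return a, E

def mem_spectrum(a, E, freqs):
    """MEM power spectral density P(f) = E / |A(e^{-2*pi*i*f})|^2."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return E / np.abs(z @ a) ** 2

rng = np.random.default_rng(0)
n = np.arange(400)
x = np.sin(2 * np.pi * 0.2 * n) + 0.05 * rng.standard_normal(400)
a, E = burg(x, order=4)
freqs = np.linspace(0.01, 0.49, 481)
peak = freqs[np.argmax(mem_spectrum(a, E, freqs))]
print(peak)  # ≈ 0.2, the true oscillation frequency
```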

  3. Entropy and equilibrium via games of complexity

    NASA Astrophysics Data System (ADS)

    Topsøe, Flemming

    2004-09-01

    It is suggested that thermodynamical equilibrium equals game theoretical equilibrium. Aspects of this thesis are discussed. The philosophy is consistent with maximum entropy thinking of Jaynes, but goes one step deeper by deriving the maximum entropy principle from an underlying game theoretical principle. The games introduced are based on measures of complexity. Entropy is viewed as minimal complexity. It is demonstrated that Tsallis entropy (q-entropy) and Kaniadakis entropy (κ-entropy) can be obtained in this way, based on suitable complexity measures. A certain unifying effect is obtained by embedding these measures in a two-parameter family of entropy functions.

  4. Entropy and climate. I - ERBE observations of the entropy production of the earth

    NASA Technical Reports Server (NTRS)

    Stephens, G. L.; O'Brien, D. M.

    1993-01-01

    An approximate method for estimating the global distributions of the entropy fluxes flowing through the upper boundary of the climate system is introduced, and an estimate of the entropy exchange between the earth and space and the entropy production of the planet is provided. Entropy fluxes calculated from the Earth Radiation Budget Experiment measurements show how the long-wave entropy flux densities dominate the total entropy fluxes at all latitudes compared with the entropy flux densities associated with reflected sunlight, although the short-wave flux densities are important in the context of clear sky-cloudy sky net entropy flux differences. It is suggested that the entropy production of the planet is both constant for the 36 months of data considered and very near its maximum possible value. The mean value of this production is 0.68 × 10^15 W/K, and the amplitude of the annual cycle is approximately 1 to 2 percent of this value.
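The order of magnitude of the quoted entropy production can be checked with the (4/3)F/T form of the radiation entropy flux; the numbers below are illustrative textbook values, not the ERBE data:

```python
import numpy as np

# Order-of-magnitude check of planetary entropy production: absorbed solar
# flux arrives as low-entropy radiation at the solar temperature and is
# re-emitted at the much lower terrestrial temperature.
F_abs = 240.0        # absorbed solar flux, W m^-2 (global mean, illustrative)
T_sun = 5770.0       # effective solar emission temperature, K
T_earth = 255.0      # effective terrestrial emission temperature, K
R = 6.371e6          # Earth radius, m
area = 4 * np.pi * R ** 2

# entropy flux out minus entropy flux in, per unit area, using (4/3) F / T
sigma = (4.0 / 3.0) * F_abs * (1.0 / T_earth - 1.0 / T_sun)
total = sigma * area
print(total)  # ~0.6e15 W/K, the same order as the quoted 0.68e15 W/K
```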

  5. Moisture sorption isotherms and thermodynamic properties of bovine leather

    NASA Astrophysics Data System (ADS)

    Fakhfakh, Rihab; Mihoubi, Daoued; Kechaou, Nabil

    2018-04-01

This study aimed to determine the moisture sorption characteristics of bovine leather using a static gravimetric method at 30, 40, 50, 60 and 70 °C. The curves exhibit type II behaviour according to the BET classification. Fitting the sorption isotherms with seven equations shows that the GAB model reproduces the evolution of equilibrium moisture content with water activity over the moisture range 0.02 to 0.83 kg/kg d.b (0.9898 < R2 < 0.999). The sorption isotherms exhibit a hysteresis effect. Additionally, the sorption isotherm data were used to determine thermodynamic properties such as the isosteric heat of sorption, sorption entropy, spreading pressure, and net integral enthalpy and entropy. The net isosteric heat of sorption and the differential entropy were evaluated directly from the moisture isotherms by applying the Clausius-Clapeyron equation, and were used to investigate the enthalpy-entropy compensation theory. Both desorption enthalpy and entropy increase to a maximum with increasing moisture content and then decrease sharply as moisture content rises further. Adsorption enthalpy decreases with increasing moisture content, whereas adsorption entropy increases smoothly with moisture content to a maximum of 6.29 J/K.mol. Spreading pressure increases with rising water activity. The net integral enthalpy appeared to decrease and then increase toward an asymptote. The net integral entropy decreased with increasing moisture content.
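The two computational steps mentioned above, the GAB isotherm and the Clausius-Clapeyron evaluation of the net isosteric heat, can be sketched as follows; the parameter values used in the usage example are hypothetical, not the paper's fitted values:

```python
import math

R_GAS = 8.314  # J/(mol K), universal gas constant

def gab(aw, xm, c, k):
    """GAB isotherm: equilibrium moisture content X(aw) given monolayer
    moisture xm and constants c, k."""
    return xm * c * k * aw / ((1.0 - k * aw) * (1.0 - k * aw + c * k * aw))

def net_isosteric_heat(t1, aw1, t2, aw2):
    """Net isosteric heat of sorption (J/mol) from the water activities aw1,
    aw2 at the same moisture content on two isotherms at temperatures t1, t2
    (K), via the integrated Clausius-Clapeyron relation
        ln(aw2 / aw1) = -(q_st / R) * (1/t2 - 1/t1)."""
    return R_GAS * math.log(aw2 / aw1) / (1.0 / t1 - 1.0 / t2)
```

For example, hypothetical activities of 0.30 at 303.15 K and 0.40 at 343.15 K at one moisture content give a net isosteric heat of about 6.2 kJ/mol.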

  6. Research on Sustainable Development Level Evaluation of Resource-based Cities Based on Shapley Entropy and Choquet Integral

    NASA Astrophysics Data System (ADS)

    Zhao, Hui; Qu, Weilu; Qiu, Weiting

    2018-03-01

In order to evaluate the sustainable development level of resource-based cities, an evaluation method based on Shapley entropy and the Choquet integral is proposed. First, a systematic index system is constructed and the importance of each attribute is calculated from the maximum Shapley entropy principle; the Choquet integral is then introduced to calculate the comprehensive evaluation value of each city from the bottom up; finally, the method is applied to 10 typical resource-based cities in China. The empirical results show that the evaluation method is scientific and reasonable, providing theoretical support for the sustainable development path and reform direction of resource-based cities.
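The aggregation step named here is the discrete Choquet integral, which generalizes a weighted mean by evaluating a capacity (fuzzy measure) on nested sets of criteria. A minimal sketch of that standard operator (the index system and capacity are the paper's, not reproduced here):

```python
def choquet_integral(scores, mu):
    """Discrete Choquet integral of `scores` (criterion -> value) with respect
    to the capacity `mu`, a function on frozensets of criteria with
    mu(all criteria) = 1. Values are sorted ascending; each increment is
    weighted by the capacity of the criteria still at or above that level."""
    remaining = set(scores)
    total, prev = 0.0, 0.0
    for crit, val in sorted(scores.items(), key=lambda kv: kv[1]):
        total += (val - prev) * mu(frozenset(remaining))
        prev = val
        remaining.discard(crit)
    return total
```

When the capacity is additive, the Choquet integral reduces to the ordinary weighted sum; non-additive capacities let it model interaction between attributes, which is the point of pairing it with Shapley-type importance weights.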

  7. Single-particle spectral density of the unitary Fermi gas: Novel approach based on the operator product expansion, sum rules and the maximum entropy method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gubler, Philipp, E-mail: pgubler@riken.jp; RIKEN Nishina Center, Wako, Saitama 351-0198; Yamamoto, Naoki

    2015-05-15

    Making use of the operator product expansion, we derive a general class of sum rules for the imaginary part of the single-particle self-energy of the unitary Fermi gas. The sum rules are analyzed numerically with the help of the maximum entropy method, which allows us to extract the single-particle spectral density as a function of both energy and momentum. These spectral densities contain basic information on the properties of the unitary Fermi gas, such as the dispersion relation and the superfluid pairing gap, for which we obtain reasonable agreement with the available results based on quantum Monte-Carlo simulations.

  8. A probability space for quantum models

    NASA Astrophysics Data System (ADS)

    Lemmens, L. F.

    2017-06-01

A probability space contains a set of outcomes, a collection of events formed by subsets of the set of outcomes, and probabilities defined for all events. A reformulation in terms of propositions allows the maximum entropy method to be used to assign the probabilities, taking some constraints into account. The construction of a probability space for quantum models is determined by choosing the propositions, choosing the constraints, and making the probability assignment by the maximum entropy method. This approach shows how typical quantum distributions such as Maxwell-Boltzmann, Fermi-Dirac and Bose-Einstein are partly related to well-known classical distributions. The relation between the conditional probability density, given some averages as constraints, and the appropriate ensemble is elucidated.
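The assignment mechanism referred to, maximizing entropy subject to average constraints, can be illustrated on a toy example rather than the paper's quantum construction: Jaynes' die with a prescribed mean. The maximizer is the exponential-family distribution p_i ∝ exp(λ v_i), with the multiplier fixed by the constraint:

```python
import math

def maxent_distribution(values, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Maximum entropy pmf on `values` subject to a fixed mean:
    p_i proportional to exp(lam * v_i), lam found by bisection (the
    constrained mean is monotone increasing in lam)."""
    def mean_for(lam):
        w = [math.exp(lam * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]
```

For a die constrained to mean 4.5 the resulting probabilities increase monotonically from face 1 to face 6, the discrete analogue of the Maxwell-Boltzmann assignment mentioned in the abstract.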

  9. Estimating parameter of Rayleigh distribution by using Maximum Likelihood method and Bayes method

    NASA Astrophysics Data System (ADS)

    Ardianti, Fitri; Sutarman

    2018-01-01

In this paper, we use maximum likelihood estimation and the Bayes method under several risk functions to estimate the parameter of the Rayleigh distribution and determine which method performs best. The prior used in the Bayes method is Jeffreys' non-informative prior. Maximum likelihood estimation and the Bayes method under the precautionary loss function, the entropy loss function, and the L1 loss function are compared. We compare these methods by bias and MSE values computed with the R program. The results are then displayed in tables to facilitate the comparisons.
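The maximum likelihood side of this comparison has a closed form: for Rayleigh data the MLE of the scale is σ̂ = sqrt(Σ x_i² / 2n). A minimal Python sketch (the paper's Bayes estimators under the various loss functions are not reproduced here, and the sample sizes below are illustrative):

```python
import math
import random

def rayleigh_sample(sigma, n, rng):
    """Inverse-CDF sampling of Rayleigh(sigma): X = sigma * sqrt(-2 ln U)."""
    return [sigma * math.sqrt(-2.0 * math.log(rng.random() or 1e-300))
            for _ in range(n)]

def rayleigh_mle(xs):
    """Maximum likelihood estimate of the Rayleigh scale:
    sigma_hat = sqrt(sum x_i^2 / (2 n))."""
    return math.sqrt(sum(x * x for x in xs) / (2.0 * len(xs)))
```

Repeating the sample-and-estimate step over many replicates and averaging (σ̂ − σ) and (σ̂ − σ)² gives the bias and MSE columns such a comparison tabulates.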

  10. Efficient reliability analysis of structures with the rotational quasi-symmetric point- and the maximum entropy methods

    NASA Astrophysics Data System (ADS)

    Xu, Jun; Dang, Chao; Kong, Fan

    2017-10-01

    This paper presents a new method for efficient structural reliability analysis. In this method, a rotational quasi-symmetric point method (RQ-SPM) is proposed for evaluating the fractional moments of the performance function. Then, the derivation of the performance function's probability density function (PDF) is carried out based on the maximum entropy method in which constraints are specified in terms of fractional moments. In this regard, the probability of failure can be obtained by a simple integral over the performance function's PDF. Six examples, including a finite element-based reliability analysis and a dynamic system with strong nonlinearity, are used to illustrate the efficacy of the proposed method. All the computed results are compared with those by Monte Carlo simulation (MCS). It is found that the proposed method can provide very accurate results with low computational effort.

  11. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes' maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
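The sample-space-reducing processes mentioned as the second example are easy to simulate: start at the top state and repeatedly jump to a uniformly chosen strictly lower state until state 1 is reached. The known result is that interior states are visited with probability proportional to 1/i, i.e. Zipf's law. A minimal sketch (state count and run count are arbitrary choices):

```python
import random

def ssr_visit_counts(n_states, n_runs, rng):
    """Simulate n_runs sample-space-reducing cascades over states 1..n_states.
    Each run starts at the top state and jumps uniformly to a strictly lower
    state until it is absorbed at state 1. Returns visit counts per state;
    for interior states i the visit probability per run is 1/i."""
    counts = [0] * (n_states + 1)
    for _ in range(n_runs):
        state = n_states
        while state > 1:
            counts[state] += 1
            state = rng.randint(1, state - 1)
        counts[1] += 1
    return counts
```

Doubling the state index roughly halves the visit count, the power-law statistics the abstract associates with SSR processes.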

  12. Critical issues for the application of integrated MEMS/CMOS technologies to inertial measurement units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, J.H.; Ellis, J.R.; Montague, S.

    1997-03-01

One of the principal applications of monolithically integrated micromechanical/microelectronic systems has been accelerometers for automotive applications. As integrated MEMS/CMOS technologies such as those developed by U.C. Berkeley, Analog Devices, and Sandia National Laboratories mature, additional systems for more sensitive inertial measurements will enter the commercial marketplace. In this paper, the authors will examine key technology design rules which impact the performance and cost of inertial measurement devices manufactured in integrated MEMS/CMOS technologies. These design parameters include: (1) minimum MEMS feature size, (2) minimum CMOS feature size, (3) maximum MEMS linear dimension, (4) number of mechanical MEMS layers, (5) MEMS/CMOS spacing. In particular, the embedded approach to integration developed at Sandia will be examined in the context of these technology features. Presently, this technology offers MEMS feature sizes as small as 1 µm, CMOS critical dimensions of 1.25 µm, MEMS linear dimensions of 1,000 µm, a single mechanical level of polysilicon, and a 100 µm space between MEMS and CMOS. This is applicable to modern precision guided munitions.

  13. Quantitative LC-MS of polymers: determining accurate molecular weight distributions by combined size exclusion chromatography and electrospray mass spectrometry with maximum entropy data processing.

    PubMed

    Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher

    2008-09-15

We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer-stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.

  14. Predicting protein β-sheet contacts using a maximum entropy-based correlated mutation measure.

    PubMed

    Burkoff, Nikolas S; Várnai, Csilla; Wild, David L

    2013-03-01

The problem of ab initio protein folding is one of the most difficult in modern computational biology. The prediction of residue contacts within a protein provides a more tractable immediate step. Recently introduced maximum entropy-based correlated mutation measures (CMMs), such as direct information, have been successful in predicting residue contacts. However, most correlated mutation studies focus on proteins that have large, good-quality multiple sequence alignments (MSA) because the power of correlated mutation analysis falls as the size of the MSA decreases. However, even with small autogenerated MSAs, maximum entropy-based CMMs contain information. To make use of this information, in this article, we focus not on general residue contacts but on contacts between residues in β-sheets. The strong constraints and prior knowledge associated with β-contacts are ideally suited for prediction using a method that incorporates an often noisy CMM. Using contrastive divergence, a statistical machine learning technique, we have calculated a maximum entropy-based CMM. We have integrated this measure with a new probabilistic model for β-contact prediction, which is used to predict both residue- and strand-level contacts. Using our model on a standard non-redundant dataset, we significantly outperform a 2D recurrent neural network architecture, achieving a 5% improvement in true positives at the 5% false-positive rate at the residue level. At the strand level, our approach is competitive with the state-of-the-art single methods, achieving precision of 61.0% and recall of 55.4%, while not requiring residue solvent accessibility as an input. http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/

  15. Use of Electrochemical Noise (EN) Technique to Study the Effect of sulfate and Chloride Ions on Passivation and Pitting Corrosion Behavior of 316 Stainless Steel

    NASA Astrophysics Data System (ADS)

    Pujar, M. G.; Anita, T.; Shaikh, H.; Dayal, R. K.; Khatak, H. S.

    2007-08-01

In the present paper, studies were conducted on AISI Type 316 stainless steel (SS) in deaerated solutions of sodium sulfate as well as sodium chloride to establish the effect of sulfate and chloride ions on the electrochemical corrosion behavior of the material. The experiments were conducted in deaerated solutions of 0.5 M sodium sulfate as well as 0.5 M sodium chloride using electrochemical noise (EN) technique at open circuit potential (OCP) to collect the correlated current and potential signals. Visual records of the current and potential, analysis of data to arrive at the statistical parameters, and spectral density estimation using the maximum entropy method (MEM) showed that sulfate ions were incorporated in the passive film to strengthen the same. However, the adsorption of chloride ions resulted in pitting corrosion, thereby adversely affecting noise resistance (R_N). Distinct current and potential signals were observed for metastable pitting, stable pitting and passive film build-up. Distinct changes in the values of the statistical parameters like R_N and the spectral noise resistance at zero frequency (R°_SN) revealed adsorption and incorporation of sulfate and chloride ions on the passive film/solution interface.

  16. MEMS-based thermally-actuated image stabilizer for cellular phone camera

    NASA Astrophysics Data System (ADS)

    Lin, Chun-Ying; Chiou, Jin-Chern

    2012-11-01

This work develops an image stabilizer (IS) that is fabricated using micro-electro-mechanical system (MEMS) technology and is designed to counteract vibrations when humans use cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm³ and is strong enough to suspend an image sensor. The process utilized to fabricate the IS includes inductively coupled plasma (ICP) processes, reactive ion etching (RIE) processes and the flip-chip bonding method. The IS is designed so that the electrical signals from the suspended image sensor can be successfully routed out through signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm when the driving current is 155 mA. By integrating the MEMS device with the designed controller, the proposed IS can decrease hand tremor by 72.5%.

  17. Respiration-Averaged CT for Attenuation Correction of PET Images – Impact on PET Texture Features in Non-Small Cell Lung Cancer Patients

    PubMed Central

    Cheng, Nai-Ming; Fang, Yu-Hua Dean; Tsan, Din-Li

    2016-01-01

    Purpose We compared attenuation correction of PET images with helical CT (PET/HCT) and respiration-averaged CT (PET/ACT) in patients with non-small-cell lung cancer (NSCLC) with the goal of investigating the impact of respiration-averaged CT on 18F FDG PET texture parameters. Materials and Methods A total of 56 patients were enrolled. Tumors were segmented on pretreatment PET images using the adaptive threshold. Twelve different texture parameters were computed: standard uptake value (SUV) entropy, uniformity, entropy, dissimilarity, homogeneity, coarseness, busyness, contrast, complexity, grey-level nonuniformity, zone-size nonuniformity, and high grey-level large zone emphasis. Comparisons of PET/HCT and PET/ACT were performed using Wilcoxon signed-rank tests, intraclass correlation coefficients, and Bland-Altman analysis. Receiver operating characteristic (ROC) curves as well as univariate and multivariate Cox regression analyses were used to identify the parameters significantly associated with disease-specific survival (DSS). A fixed threshold at 45% of the maximum SUV (T45) was used for validation. Results SUV maximum and total lesion glycolysis (TLG) were significantly higher in PET/ACT. However, texture parameters obtained with PET/ACT and PET/HCT showed a high degree of agreement. The lowest levels of variation between the two modalities were observed for SUV entropy (9.7%) and entropy (9.8%). SUV entropy, entropy, and coarseness from both PET/ACT and PET/HCT were significantly associated with DSS. Validation analyses using T45 confirmed the usefulness of SUV entropy and entropy in both PET/HCT and PET/ACT for the prediction of DSS, but only coarseness from PET/ACT achieved the statistical significance threshold. Conclusions Our results indicate that 1) texture parameters from PET/ACT are clinically useful in the prediction of survival in NSCLC patients and 2) SUV entropy and entropy are robust to attenuation correction methods. PMID:26930211

  18. Overcoming urban GPS navigation challenges through the use of MEMS inertial sensors and proper verification of navigation system performance

    NASA Astrophysics Data System (ADS)

    Vinande, Eric T.

    This research proposes several means to overcome challenges in the urban environment to ground vehicle global positioning system (GPS) receiver navigation performance through the integration of external sensor information. The effects of narrowband radio frequency interference and signal attenuation, both common in the urban environment, are examined with respect to receiver signal tracking processes. Low-cost microelectromechanical systems (MEMS) inertial sensors, suitable for the consumer market, are the focus of receiver augmentation as they provide an independent measure of motion and are independent of vehicle systems. A method for estimating the mounting angles of an inertial sensor cluster utilizing typical urban driving maneuvers is developed and is able to provide angular measurements within two degrees of truth. The integration of GPS and MEMS inertial sensors is developed utilizing a full state navigation filter. Appropriate statistical methods are developed to evaluate the urban environment navigation improvement due to the addition of MEMS inertial sensors. A receiver evaluation metric that combines accuracy, availability, and maximum error measurements is presented and evaluated over several drive tests. Following a description of proper drive test techniques, record and playback systems are evaluated as the optimal way of testing multiple receivers and/or integrated navigation systems in the urban environment as they simplify vehicle testing requirements.

  19. Combining Experiments and Simulations Using the Maximum Entropy Principle

    PubMed Central

    Boomsma, Wouter; Ferkinghoff-Borg, Jesper; Lindorff-Larsen, Kresten

    2014-01-01

A key component of computational biology is to compare the results of computer modelling with experimental measurements. Despite substantial progress in the models and algorithms used in many areas of computational biology, such comparisons sometimes reveal that the computations are not in quantitative agreement with experimental data. The principle of maximum entropy is a general procedure for constructing probability distributions in the light of new data, making it a natural tool in cases when an initial model provides results that are at odds with experiments. The number of maximum entropy applications in our field has grown steadily in recent years, in areas as diverse as sequence analysis, structural modelling, and neurobiology. In this Perspectives article, we give a broad introduction to the method, in an attempt to encourage its further adoption. The general procedure is explained in the context of a simple example, after which we proceed with a real-world application in the field of molecular simulations, where the maximum entropy procedure has recently provided new insight. Given the limited accuracy of force fields, macromolecular simulations sometimes produce results that are not in complete quantitative accordance with experiments. A common solution to this problem is to explicitly ensure agreement between the two by perturbing the potential energy function towards the experimental data. So far, a general consensus for how such perturbations should be implemented has been lacking. Three very recent papers have explored this problem using the maximum entropy approach, providing both new theoretical and practical insights to the problem. We highlight each of these contributions in turn and conclude with a discussion on remaining challenges. PMID:24586124

  20. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations

    NASA Astrophysics Data System (ADS)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-01

Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  1. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    PubMed

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  2. Review of MEMS differential scanning calorimetry for biomolecular study

    NASA Astrophysics Data System (ADS)

    Yu, Shifeng; Wang, Shuyu; Lu, Ming; Zuo, Lei

    2017-12-01

Differential scanning calorimetry (DSC) is one of the few techniques that allow direct determination of enthalpy values for binding reactions and conformational transitions in biomolecules. It provides thermodynamic information on biomolecules, comprising Gibbs free energy, enthalpy and entropy, in a straightforward manner that enables a deep understanding of structure-function relationships in biomolecules, such as the folding/unfolding of proteins and DNA, and ligand binding. This review provides an up-to-date overview of the applications of DSC in biomolecular studies, such as the bovine serum albumin denaturation study and the relationship between the melting point of lysozyme and the scanning rate. We also introduce recent advances in the development of micro-electro-mechanical-system (MEMS) based DSCs.

  3. Maximum entropy methods for extracting the learned features of deep neural networks.

    PubMed

    Finnegan, Alex; Song, Jun S

    2017-10-01

    New architectures of multilayer artificial neural networks and new methods for training them are rapidly revolutionizing the application of machine learning in diverse fields, including business, social science, physical sciences, and biology. Interpreting deep neural networks, however, currently remains elusive, and a critical challenge lies in understanding which meaningful features a network is actually learning. We present a general method for interpreting deep neural networks and extracting network-learned features from input data. We describe our algorithm in the context of biological sequence analysis. Our approach, based on ideas from statistical physics, samples from the maximum entropy distribution over possible sequences, anchored at an input sequence and subject to constraints implied by the empirical function learned by a network. Using our framework, we demonstrate that local transcription factor binding motifs can be identified from a network trained on ChIP-seq data and that nucleosome positioning signals are indeed learned by a network trained on chemical cleavage nucleosome maps. Imposing a further constraint on the maximum entropy distribution also allows us to probe whether a network is learning global sequence features, such as the high GC content in nucleosome-rich regions. This work thus provides valuable mathematical tools for interpreting and extracting learned features from feed-forward neural networks.

  4. An improved method for predicting the evolution of the characteristic parameters of an information system

    NASA Astrophysics Data System (ADS)

    Dushkin, A. V.; Kasatkina, T. I.; Novoseltsev, V. I.; Ivanov, S. V.

    2018-03-01

The article proposes a forecasting method that allows, for given values of entropy and of the type I and type II error rates, determination of the allowable horizon for forecasting the evolution of the characteristic parameters of a complex information system. The main feature of the method under consideration is that changes in the characteristic parameters of the information system's development are expressed as increments of its entropy ratios. When a predetermined value of the prediction error ratio, that is, of the system entropy, is reached, the characteristic parameters of the system and the depth of the prediction in time are estimated. The resulting values of the characteristics are then optimal, since at that moment the system possessed the best entropy ratio as a measure of the degree of organization and orderliness of the structure of the system. To construct a method for estimating the depth of prediction, it is expedient to use the principle of maximum entropy.

  5. Maximum-entropy probability distributions under Lp-norm constraints

    NASA Technical Reports Server (NTRS)

    Dolinar, S.

    1991-01-01

Continuous probability density functions and discrete probability mass functions are tabulated which maximize the differential entropy or absolute entropy, respectively, among all probability distributions with a given Lp norm (i.e., a given pth absolute moment when p is a finite integer) and unconstrained or constrained value set. Expressions for the maximum entropy are evaluated as functions of the Lp norm. The most interesting results are obtained and plotted for unconstrained (real valued) continuous random variables and for integer valued discrete random variables. The maximum entropy expressions are obtained in closed form for unconstrained continuous random variables, and in this case there is a simple straight-line relationship between the maximum differential entropy and the logarithm of the Lp norm. Corresponding expressions for arbitrary discrete and constrained continuous random variables are given parametrically; closed form expressions are available only for special cases. However, simpler alternative bounds on the maximum entropy of integer valued discrete random variables are obtained by applying the differential entropy results to continuous random variables which approximate the integer valued random variables in a natural manner. All the results are presented in an integrated framework that includes continuous and discrete random variables, constraints on the permissible value set, and all possible values of p. Understanding such as this is useful in evaluating the performance of data compression schemes.
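The best-known special case of this tabulation is p = 2: among zero-mean densities with a fixed second moment, the Gaussian maximizes differential entropy, and the straight-line relationship is visible in the closed form H = (1/2) ln(2πeσ²), which grows with unit slope in ln σ. A sketch comparing closed-form entropies of three distributions matched to the same standard deviation (the choice of comparison distributions is ours, not the report's):

```python
import math

def entropy_gaussian(sigma):
    """Differential entropy of N(0, sigma^2): 0.5 * ln(2*pi*e*sigma^2)."""
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

def entropy_uniform_with_std(sigma):
    """Uniform density with standard deviation sigma has width sigma*sqrt(12),
    so its differential entropy is ln(width)."""
    return math.log(sigma * math.sqrt(12.0))

def entropy_laplace_with_std(sigma):
    """Laplace density with standard deviation sigma has scale b = sigma/sqrt(2)
    and differential entropy 1 + ln(2b)."""
    return 1.0 + math.log(2.0 * sigma / math.sqrt(2.0))
```

At any fixed σ the Gaussian entropy exceeds the Laplace and uniform values, and doubling σ raises each entropy by exactly ln 2, the logarithmic straight line described above.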

  6. Estimation of depth to magnetic source using maximum entropy power spectra, with application to the Peru-Chile Trench

    USGS Publications Warehouse

    Blakely, Richard J.

    1981-01-01

Estimates of the depth to magnetic sources using the power spectrum of magnetic anomalies generally require long magnetic profiles. The method developed here uses the maximum entropy power spectrum (MEPS) to calculate depth to source on short windows of magnetic data; resolution is thereby improved. The method operates by dividing a profile into overlapping windows, calculating a maximum entropy power spectrum for each window, linearizing the spectra, and calculating the various depth estimates by least squares. The assumptions of the method are that the source is two dimensional and that the intensity of magnetization includes random noise; knowledge of the direction of magnetization is not required. The method is applied to synthetic data and to observed marine anomalies over the Peru-Chile Trench. The analyses indicate a continuous magnetic basement extending from the eastern margin of the Nazca plate and into the subduction zone. The computed basement depths agree with acoustic basement seaward of the trench axis, but deepen as the plate approaches the inner trench wall. This apparent increase in the computed depths may result from the deterioration of magnetization in the upper part of the ocean crust, possibly caused by compressional disruption of the basaltic layer. Landward of the trench axis, the depth estimates indicate possible thrusting of the oceanic material into the lower slope of the continental margin.
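The depth information in such spectra comes from the exponential decay of anomaly power with wavenumber: for a source at depth d, P(k) ∝ exp(-2kd), so the slope of ln P versus k gives the depth as -slope/2. The sketch below fits that slope by least squares to an already-computed spectrum; it is a simplification of the paper's windowed MEPS pipeline, not a reproduction of it:

```python
import math

def depth_from_spectrum(wavenumbers, power):
    """Estimate source depth from a magnetic-anomaly power spectrum using the
    model P(k) = c * exp(-2 k d): least-squares slope of ln P(k) versus k,
    then d = -slope / 2. Wavenumbers in rad per length unit."""
    n = len(wavenumbers)
    ys = [math.log(p) for p in power]
    kbar = sum(wavenumbers) / n
    ybar = sum(ys) / n
    slope = (sum((k - kbar) * (y - ybar) for k, y in zip(wavenumbers, ys))
             / sum((k - kbar) ** 2 for k in wavenumbers))
    return -0.5 * slope
```

Applying this to overlapping short windows, with the spectrum in each window supplied by a maximum entropy estimator rather than the periodogram, is what lets the paper trace depth variations along the profile.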

  7. Perspective: Maximum caliber is a general variational principle for dynamical systems

    NASA Astrophysics Data System (ADS)

    Dixit, Purushottam D.; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A.

    2018-01-01

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics—such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production—are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.

  8. Perspective: Maximum caliber is a general variational principle for dynamical systems.

    PubMed

    Dixit, Purushottam D; Wagoner, Jason; Weistuch, Corey; Pressé, Steve; Ghosh, Kingshuk; Dill, Ken A

    2018-01-07

    We review here Maximum Caliber (Max Cal), a general variational principle for inferring distributions of paths in dynamical processes and networks. Max Cal is to dynamical trajectories what the principle of maximum entropy is to equilibrium states or stationary populations. In Max Cal, you maximize a path entropy over all possible pathways, subject to dynamical constraints, in order to predict relative path weights. Many well-known relationships of non-equilibrium statistical physics, such as the Green-Kubo fluctuation-dissipation relations, Onsager's reciprocal relations, and Prigogine's minimum entropy production, are limited to near-equilibrium processes. Max Cal is more general. While it can readily derive these results under those limits, Max Cal is also applicable far from equilibrium. We give examples of Max Cal as a method of inference about trajectory distributions from limited data, finding reaction coordinates in bio-molecular simulations, and modeling the complex dynamics of non-thermal systems such as gene regulatory networks or the collective firing of neurons. We also survey its basis in principle and some limitations.
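    The core Max Cal recipe — maximize path entropy subject to a dynamical constraint — can be made concrete on a toy two-state system. The sketch below is entirely illustrative (not from the paper): it enumerates binary trajectories, constrains the mean number of state switches, and finds the Lagrange multiplier of the resulting exponential path weight by bisection.

```python
import numpy as np
from itertools import product

def maxcal_path_weights(L, target_mean_switches):
    """Max Cal on binary trajectories of length L: maximize path entropy
    subject to a fixed mean number of state switches per path.
    The maximizer is an exponential tilt exp(lam * switches), with the
    Lagrange multiplier lam found here by bisection."""
    paths = np.array(list(product([0, 1], repeat=L)))
    switches = np.sum(paths[:, 1:] != paths[:, :-1], axis=1)

    def mean_switches(lam):
        w = np.exp(lam * switches)
        return (w @ switches) / w.sum()

    lo, hi = -30.0, 30.0   # mean_switches is monotone in lam
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_switches(mid) < target_mean_switches:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * switches)
    return w / w.sum(), lam
```

    With no constraint (lam = 0) the distribution is uniform over paths; constraining the mean switch count below the uniform average tilts the weights toward more persistent trajectories.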

  9. An entropy-based method for determining the flow depth distribution in natural channels

    NASA Astrophysics Data System (ADS)

    Moramarco, Tommaso; Corato, Giovanni; Melone, Florisa; Singh, Vijay P.

    2013-08-01

    A methodology is developed for determining the bathymetry of river cross-sections during floods from sampled surface flow velocities and existing low-flow hydraulic data. Similar to Chiu (1988), who proposed an entropy-based velocity distribution, the flow depth distribution in a cross-section of a natural channel is derived by entropy maximization. The depth distribution depends on one parameter, whose estimate is straightforward, and on the maximum flow depth. Applied to velocity data sets from five river gage sites, the method modeled the flow area observed during flow measurements and accurately assessed the corresponding discharge by coupling the flow depth distribution with the entropic relation between mean velocity and maximum velocity. The methodology opens a new perspective for flow monitoring by remote sensing, considering that the two main quantities on which it is based, i.e., surface flow velocity and flow depth, might potentially be sensed by new sensors operating aboard an aircraft or satellite.
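    Chiu's entropy-derived velocity distribution, to which the depth distribution here is analogous, has a simple closed form. A minimal sketch of the standard Chiu (1988) relations (not the authors' depth formulation) shows the one-parameter family and the entropic mean/maximum relation used to pass from a sampled maximum to a mean:

```python
import numpy as np

def chiu_velocity(xi, u_max, M):
    """Chiu's entropy-based velocity distribution: velocity at normalized
    coordinate xi in [0, 1], with entropy parameter M."""
    return (u_max / M) * np.log(1.0 + (np.exp(M) - 1.0) * xi)

def mean_over_max(M):
    """Entropic relation between mean and maximum velocity:
    u_mean / u_max = e^M / (e^M - 1) - 1 / M."""
    return np.exp(M) / (np.exp(M) - 1.0) - 1.0 / M
```

    Discharge then follows as mean velocity times flow area, which is how the coupling described in the abstract operates in principle.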

  10. A secure image encryption method based on dynamic harmony search (DHS) combined with chaotic map

    NASA Astrophysics Data System (ADS)

    Mirzaei Talarposhti, Khadijeh; Khaki Jamei, Mehrzad

    2016-06-01

    In recent years, there has been increasing interest in the security of digital images. This study focuses on grayscale image encryption using dynamic harmony search (DHS). First, a chaotic map is used to create cipher images; maximum entropy and minimum correlation coefficient are then obtained by applying a harmony search algorithm to them. The process has two steps. In the first step, diffusion of the plain image is performed, with DHS maximizing the entropy as a fitness function. In the second step, horizontal and vertical permutations are applied to the best cipher image obtained in the previous step, with DHS minimizing the correlation coefficient as the fitness function. Simulation results show that the proposed method achieves maximum entropy and minimum correlation coefficient of approximately 7.9998 and 0.0001, respectively.
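    The two fitness functions quoted here — the entropy of the cipher image (ideally 8 bits/pixel for an 8-bit image) and the correlation between adjacent pixels (ideally near zero) — are straightforward to compute. A sketch (function names are ours):

```python
import numpy as np

def shannon_entropy(img):
    """Shannon entropy in bits/pixel of an 8-bit grayscale image;
    the theoretical maximum for 256 gray levels is 8."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def adjacent_correlation(img):
    """Correlation coefficient between horizontally adjacent pixels."""
    x = img[:, :-1].ravel().astype(float)
    y = img[:, 1:].ravel().astype(float)
    return np.corrcoef(x, y)[0, 1]
```

    A good cipher image scores near 8 on the first measure and near 0 on the second, which is exactly what the reported 7.9998 and 0.0001 indicate.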

  11. The maximum entropy production principle: two basic questions.

    PubMed

    Martyushev, Leonid M

    2010-05-12

    The overwhelming majority of maximum entropy production applications to ecological and environmental systems are based on thermodynamics and statistical physics. Here, we briefly discuss the maximum entropy production principle and raise two questions: (i) can this principle be used as the basis for non-equilibrium thermodynamics and statistical mechanics, and (ii) is it possible to 'prove' the principle? We adduce one more proof, which is the most concise available to date.

  12. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    NASA Astrophysics Data System (ADS)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservation are considered as optimization constraints. The optimal steady-state enzyme rate constants computed in this way also yield the most uniform probability distribution of the enzyme states, which corresponds to the maximal Shannon information entropy. Stability analysis also demonstrates that maximal density of entropy production in the enzyme reaction requires a flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example in which the density of entropy production and the Shannon information entropy are numerically maximized for the enzyme glucose isomerase.

  13. Nonlinear Dynamics, Poor Data, and What to Make of Them?

    NASA Astrophysics Data System (ADS)

    Ghil, M.; Zaliapin, I. V.

    2005-12-01

    The analysis of univariate or multivariate time series provides crucial information to describe, understand, and predict variability in the geosciences. The discovery and implementation of a number of novel methods for extracting useful information from time series have recently revitalized this classical field of study. Considerable progress has also been made in interpreting the information so obtained in terms of dynamical systems theory. In this talk we will describe the connections between time series analysis and nonlinear dynamics, discuss signal-to-noise enhancement, and present some of the novel methods for spectral analysis. These fall into two broad categories: (i) methods that try to ferret out regularities of the time series; and (ii) methods aimed at describing the characteristics of irregular processes. The former include singular-spectrum analysis (SSA), the multi-taper method (MTM), and the maximum-entropy method (MEM). The various steps, as well as the advantages and disadvantages of these methods, will be illustrated by their application to several important climatic time series, such as the Southern Oscillation Index (SOI), paleoclimatic time series, and instrumental temperature time series. The SOI index captures major features of interannual climate variability and is used extensively in its prediction. The other time series cover interdecadal and millennial time scales. The second category includes the calculation of fractional dimension, leading Lyapunov exponents, and Hurst exponents. More recently, multi-trend analysis (MTA), binary-decomposition analysis (BDA), and related methods have attempted to describe the structure of time series that include both regular and irregular components. Within the time available, I will try to give a feeling for how these methods work, and how well.
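    Of the irregular-process measures mentioned, the Hurst exponent is the simplest to sketch. Below is a basic rescaled-range (R/S) estimator (illustrative only; the talk does not prescribe an implementation, and R/S estimates carry a known finite-sample bias):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(mean R/S) versus log(window size) over dyadic window sizes."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs = [], []
    size = min_chunk
    while size <= n // 2:
        vals = []
        for start in range(0, n - size + 1, size):
            seg = x[start:start + size]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviation
            r = dev.max() - dev.min()           # range
            s = seg.std()                       # scale
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope
```

    White noise gives an exponent near 0.5, while a persistent process such as a random walk gives a value approaching 1.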

  14. Identification of a Threshold Value for the DEMATEL Method: Using the Maximum Mean De-Entropy Algorithm

    NASA Astrophysics Data System (ADS)

    Chung-Wei, Li; Gwo-Hshiung, Tzeng

    When dealing with complex problems, structuring them through graphical representations and analyzing causal influences can help illuminate the issues, systems, or concepts involved. The DEMATEL method is a methodology which can be used for researching and solving complicated and intertwined problem groups. The end product of the DEMATEL process is a visual representation, the impact-relations map, by which respondents organize their own actions in the world. The applicability of the DEMATEL method is widespread, ranging from analyzing world problematique decision making to industrial planning. The most important property of the DEMATEL method used in the multi-criteria decision making (MCDM) field is to construct interrelations between criteria. In order to obtain a suitable impact-relations map, an appropriate threshold value is needed to obtain adequate information for further analysis and decision-making. In this paper, we propose a method based on the entropy approach, the maximum mean de-entropy algorithm, to achieve this purpose. Using real cases of finding the interrelationships between criteria for evaluating the effects of E-learning programs as examples, we compare the results obtained from the respondents with those from our method and discuss the differences between the impact-relations maps the two approaches produce.

  15. DECONV-TOOL: An IDL based deconvolution software package

    NASA Technical Reports Server (NTRS)

    Varosi, F.; Landsman, W. B.

    1992-01-01

    There are a variety of algorithms for deconvolution of blurred images, each having its own criteria or statistic to be optimized in order to estimate the original image data. Using the Interactive Data Language (IDL), we have implemented the Maximum Likelihood, Maximum Entropy, Maximum Residual Likelihood, and sigma-CLEAN algorithms in a unified environment called DeConv_Tool. Most of the algorithms have as their goal the optimization of statistics such as standard deviation and mean of residuals. Shannon entropy, log-likelihood, and chi-square of the residual auto-correlation are computed by DeConv_Tool for the purpose of determining the performance and convergence of any particular method and comparisons between methods. DeConv_Tool allows interactive monitoring of the statistics and the deconvolved image during computation. The final results, and optionally, the intermediate results, are stored in a structure convenient for comparison between methods and review of the deconvolution computation. The routines comprising DeConv_Tool are available via anonymous FTP through the IDL Astronomy User's Library.
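    Of the listed algorithms, Maximum Likelihood deconvolution under Poisson noise is classically realized by the Richardson-Lucy iteration. A one-dimensional Python sketch follows (DeConv_Tool itself is IDL; this only illustrates the statistic being optimized):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=200):
    """Richardson-Lucy iteration: maximum likelihood estimate of the
    unblurred signal under Poisson noise (1-D, direct convolution).
    Each step multiplies the estimate by the back-projected ratio of
    data to the current model, which preserves positivity and flux."""
    psf = np.asarray(psf, dtype=float)
    psf = psf / psf.sum()
    est = np.full(len(blurred), blurred.mean(), dtype=float)
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode='same')
        ratio = blurred / np.maximum(conv, 1e-12)
        est = est * np.convolve(ratio, psf[::-1], mode='same')
    return est
```

    The multiplicative update is what makes the iteration monotonically increase the Poisson log-likelihood, one of the convergence statistics DeConv_Tool tracks.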

  16. Entropic criterion for model selection

    NASA Astrophysics Data System (ADS)

    Tseng, Chih-Yuan

    2006-10-01

    Model or variable selection is usually achieved through ranking models according to increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
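    As a concrete instance of ranking by relative entropy, one can score candidate model distributions against an empirical one and prefer the smallest divergence. This is a generic sketch, not the paper's inductive-inference machinery:

```python
import numpy as np

def kl_divergence(p, q):
    """Relative entropy D(p||q) in nats; q must be positive wherever p is."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def rank_models(empirical, candidates):
    """Return candidate indices ordered by increasing D(empirical || model),
    i.e., best-matching model first."""
    scores = [kl_divergence(empirical, q) for q in candidates]
    return sorted(range(len(candidates)), key=scores.__getitem__)
```

    D(p||q) is zero exactly when the model reproduces the empirical distribution, and strictly positive otherwise, which is what makes it usable as a ranking criterion.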

  17. Epidemiological and Economic Evaluation of Alternative On-Farm Management Scenarios for Ovine Footrot in Switzerland

    PubMed Central

    Zingg, Dana; Steinbach, Sandro; Kuhlgatz, Christian; Rediger, Matthias; Schüpbach-Regula, Gertraud; Aepli, Matteo; Grøneng, Gry M.; Dürr, Salome

    2017-01-01

    Footrot is a multifactorial infectious disease mostly affecting sheep, caused by the bacterium Dichelobacter nodosus. It causes painful foot lesions resulting in animal welfare issues, weight loss, and reduced wool production, which leads to a considerable economic burden in animal production. In Switzerland, the disease is endemic and mandatory coordinated control programs exist only in some parts of the country. This study aimed to compare two nationwide control strategies and a no-intervention scenario with the current situation, and to quantify their net economic effect. This was done by sequential application of a maximum entropy model (MEM), epidemiological simulation, and calculation of the net economic effect using the net present value method. Building upon data from a questionnaire, the MEM revealed a nationwide footrot prevalence of 40.2%. Regional prevalence values were used as inputs for the epidemiological model. Under the application of the nationwide coordinated control program without (scenario B) and with (scenario C) improved diagnostics [polymerase chain reaction (PCR) test], the Swiss-wide prevalence decreased within 10 years to 14 and 5%, respectively. Conversely, an increase to 48% prevalence was observed when the current control strategies were terminated (scenario D). Management costs included labor and material costs. Management benefits included reduction of fattening time and improved animal welfare, which is valued by Swiss consumers and therefore reduces societal costs. The net economic effect of the alternative scenarios B and C was positive, while that of scenario D was negative; over a period of 17 years they were quantified at CHF 422.3, 538.3, and −172.3 million (1 CHF = 1.040 US$), respectively. This implies that a systematic Swiss-wide management program under the application of the PCR diagnostic test is the most recommendable strategy for cost-effective control of footrot in Switzerland. PMID:28560223
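    The net present value calculation used for the economic comparison is the standard discounted cash flow sum; a minimal sketch (discount rate and cash flows below are placeholders, not the study's figures):

```python
def npv(rate, cashflows):
    """Net present value: cashflows[t] is the net cash flow (benefits minus
    costs) in year t, discounted at the given annual rate; cashflows[0]
    is undiscounted."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))
```

    Comparing control scenarios then reduces to comparing the NPVs of their respective cost/benefit streams over the evaluation horizon, as done here over 17 years.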

  18. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including goodness-of-fit methods such as least-squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixons and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than that of the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  19. Dynamic metasurface lens based on MEMS technology

    NASA Astrophysics Data System (ADS)

    Roy, Tapashree; Zhang, Shuyan; Jung, Il Woong; Troccoli, Mariano; Capasso, Federico; Lopez, Daniel

    2018-02-01

    In recent years, metasurfaces, being flat and lightweight, have been designed to replace bulky optical components for a variety of functions. We demonstrate a monolithic Micro-Electro-Mechanical System (MEMS) integrated with a metasurface-based flat lens that focuses light in the mid-infrared spectrum. A two-dimensional scanning MEMS platform controls the angle of the lens along two orthogonal axes by ±9°, thus enabling dynamic beam steering. The device could be used to compensate for off-axis incident light and thus correct for aberrations such as coma. We show that for low angular displacements, the integrated lens-on-MEMS system does not affect the mechanical performance of the MEMS actuators and preserves the focused beam profile as well as the measured full width at half maximum. We envision a new class of flat optical devices with active control provided by the combination of metasurfaces and MEMS for a wide range of applications, such as miniaturized MEMS-based microscope systems, LIDAR scanners, and projection systems.

  20. Simultaneous Multi-Scale Diffusion Estimation and Tractography Guided by Entropy Spectrum Pathways

    PubMed Central

    Galinsky, Vitaly L.; Frank, Lawrence R.

    2015-01-01

    We have developed a method for the simultaneous estimation of local diffusion and global fiber tracts based upon the information entropy flow that computes the maximum entropy trajectories between locations and depends upon the global structure of the multi-dimensional and multi-modal diffusion field. Computation of the entropy spectrum pathways requires only solving a simple eigenvector problem for the probability distribution, for which efficient numerical routines exist, and a straightforward integration of the probability conservation through ray tracing of the convective modes guided by the global structure of the entropy spectrum coupled with small-scale local diffusion. The intervoxel diffusion is sampled by multi-b-shell, multi-q-angle DWI data expanded in spherical waves. This novel approach to fiber tracking incorporates global information about multiple fiber crossings in every individual voxel and ranks it in a scientifically rigorous way. This method has potential significance for a wide range of applications, including studies of brain connectivity. PMID:25532167
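    The "simple eigenvector problem" underlying maximum entropy trajectories can be sketched with the maximal-entropy random walk construction, in which the principal eigenvector of a symmetric coupling matrix yields both the transition probabilities and the stationary distribution. This is a schematic of the eigenvector step only, not the authors' full pipeline:

```python
import numpy as np

def max_entropy_walk(A):
    """Maximal-entropy random walk on a symmetric nonnegative coupling
    matrix A: transition matrix P[i, j] = A[i, j] * psi[j] / (lam * psi[i])
    and stationary distribution pi[i] proportional to psi[i]**2, where
    (lam, psi) is the principal (Perron-Frobenius) eigenpair of A."""
    w, v = np.linalg.eigh(A)
    lam = w[-1]
    psi = np.abs(v[:, -1])   # Perron vector has one sign; take it positive
    P = A * psi[None, :] / (lam * psi[:, None])
    pi = psi ** 2 / np.sum(psi ** 2)
    return P, pi
```

    Rows of P sum to one by the eigenvector relation A @ psi = lam * psi, and pi is stationary under P; among all walks respecting the connectivity of A, this one maximizes the entropy rate of trajectories.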

  1. Multiple Diffusion Mechanisms Due to Nanostructuring in Crowded Environments

    PubMed Central

    Sanabria, Hugo; Kubota, Yoshihisa; Waxham, M. Neal

    2007-01-01

    One of the key questions regarding intracellular diffusion is how the environment affects molecular mobility. Mostly, intracellular diffusion has been described as hindered, and the physical reasons for this behavior are: immobile barriers, molecular crowding, and binding interactions with immobile or mobile molecules. Using results from multi-photon fluorescence correlation spectroscopy, we describe how immobile barriers and crowding agents affect translational mobility. To study the hindrance produced by immobile barriers, we used sol-gels (silica nanostructures) that consist of a continuous solid phase and aqueous phase in which fluorescently tagged molecules diffuse. In the case of molecular crowding, translational mobility was assessed in increasing concentrations of 500 kDa dextran solutions. Diffusion of fluorescent tracers in both sol-gels and dextran solutions shows clear evidence of anomalous subdiffusion. In addition, data from the autocorrelation function were analyzed using the maximum entropy method as adapted to fluorescence correlation spectroscopy data and compared with the standard model that incorporates anomalous diffusion. The maximum entropy method revealed evidence of different diffusion mechanisms that had not been revealed using the anomalous diffusion model. These mechanisms likely correspond to nanostructuring in crowded environments and to the relative dimensions of the crowding agent with respect to the tracer molecule. Analysis with the maximum entropy method also revealed information about the degree of heterogeneity in the environment as reported by the behavior of diffusive molecules. PMID:17040979

  2. High-resolution observations of small-scale gravity waves and turbulence features in the OH airglow layer

    NASA Astrophysics Data System (ADS)

    Sedlak, René; Hannawald, Patrick; Schmidt, Carsten; Wüst, Sabine; Bittner, Michael

    2016-12-01

    A new version of the Fast Airglow Imager (FAIM) for the detection of atmospheric waves in the OH airglow layer has been set up at the German Remote Sensing Data Center (DFD) of the German Aerospace Center (DLR) at Oberpfaffenhofen (48.09° N, 11.28° E), Germany. The spatial resolution of the instrument is 17 m pixel⁻¹ in zenith direction with a field of view (FOV) of 11.1 km × 9.0 km at the OH layer height of ca. 87 km. Since November 2015, the system has been in operation in two different setups (zenith angles 46° and 0°) with a temporal resolution of 2.5 to 2.8 s. In a first case study we present observations of two small wave-like features that might be attributed to gravity wave instabilities. In order to spectrally analyse harmonic structures even on small spatial scales down to 550 m horizontal wavelength, we made use of the maximum entropy method (MEM) since this method exhibits an excellent wavelength resolution. MEM further allows analysing relatively short data series, which considerably helps to reduce problems such as stationarity of the underlying data series from a statistical point of view. We present an observation of the subsequent decay of well-organized wave fronts into eddies, which we tentatively interpret as an indication of the onset of turbulence. Another remarkable event, which demonstrates the technical capabilities of the instrument, was observed during the night of 4-5 April 2016. It reveals the disintegration of a rather homogeneous brightness variation into several filaments moving in different directions and with different speeds. It resembles the formation of a vortex with a horizontal axis of rotation, likely related to a vertical wind shear. This case shows a notable similarity to what is expected from theoretical modelling of Kelvin-Helmholtz instabilities (KHIs).
The comparatively high spatial resolution of the presented new version of the FAIM provides new insights into the structure of atmospheric wave instability and turbulent processes. Infrared imaging of wave dynamics on the sub-kilometre scale in the airglow layer supports the findings of theoretical simulations and modelling.

  3. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation.

    PubMed

    Bergeron, Dominic; Tremblay, A-M S

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as open-source, user-friendly software.

  4. Algorithms for optimized maximum entropy and diagnostic tools for analytic continuation

    NASA Astrophysics Data System (ADS)

    Bergeron, Dominic; Tremblay, A.-M. S.

    2016-08-01

    Analytic continuation of numerical data obtained in imaginary time or frequency has become an essential part of many branches of quantum computational physics. It is, however, an ill-conditioned procedure and thus a hard numerical problem. The maximum-entropy approach, based on Bayesian inference, is the most widely used method to tackle that problem. Although the approach is well established and among the most reliable and efficient ones, useful developments of the method and of its implementation are still possible. In addition, while a few free software implementations are available, a well-documented, optimized, general purpose, and user-friendly software dedicated to that specific task is still lacking. Here we analyze all aspects of the implementation that are critical for accuracy and speed and present a highly optimized approach to maximum entropy. Original algorithmic and conceptual contributions include (1) numerical approximations that yield a computational complexity that is almost independent of temperature and spectrum shape (including sharp Drude peaks in broad background, for example) while ensuring quantitative accuracy of the result whenever precision of the data is sufficient, (2) a robust method of choosing the entropy weight α that follows from a simple consistency condition of the approach and the observation that information- and noise-fitting regimes can be identified clearly from the behavior of χ² with respect to α, and (3) several diagnostics to assess the reliability of the result. Benchmarks with test spectral functions of different complexity and an example with an actual physical simulation are presented. Our implementation, which covers most typical cases for fermions, bosons, and response functions, is available as open-source, user-friendly software.
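    The ill-conditioning that motivates the maximum entropy treatment in these two records is easy to exhibit numerically: the singular values of the imaginary-time kernel decay so fast that naive inversion is hopeless. A sketch (the value of β, the grids, and the fermionic kernel form below are illustrative choices, not the papers' settings):

```python
import numpy as np

beta = 10.0                               # inverse temperature (illustrative)
tau = np.linspace(0.0, beta, 51)          # imaginary-time grid
omega = np.linspace(-5.0, 5.0, 101)       # real-frequency grid

# Fermionic kernel relating a spectral function A(w) to G(tau):
#   G(tau) = -integral dw K(tau, w) A(w),
#   K(tau, w) = exp(-tau * w) / (1 + exp(-beta * w))
K = (np.exp(-tau[:, None] * omega[None, :])
     / (1.0 + np.exp(-beta * omega[None, :])))

s = np.linalg.svd(K, compute_uv=False)    # singular values, descending
condition_number = s[0] / s[-1]
```

    The enormous condition number means that tiny statistical noise in the imaginary-time data is amplified without bound in a direct inversion, which is why a regularizing prior such as entropy is indispensable.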

  5. Study and characterization of a MEMS micromirror device

    NASA Astrophysics Data System (ADS)

    Furlong, Cosme; Pryputniewicz, Ryszard J.

    2004-08-01

    In this paper, advances in our study and characterization of a MEMS micromirror device are presented. The micromirror device, of 510 μm characteristic length, operates in a dynamic mode with a maximum displacement on the order of 10 μm along its principal optical axis and oscillation frequencies of up to 1.3 kHz. Developments are carried out by analytical, computational, and experimental methods. Analytical and computational nonlinear geometrical models are developed in order to determine the optimal loading-displacement operational characteristics of the micromirror. Due to the operational mode of the micromirror, the experimental characterization of its loading-displacement transfer function requires the use of advanced optical metrology methods. The optoelectronic holography (OEH) methodologies based on multiple wavelengths that we are developing to perform such characterization are described. It is shown that the combined analytical, computational, and experimental approach is effective in our developments.

  6. LensEnt2: Maximum-entropy weak lens reconstruction

    NASA Astrophysics Data System (ADS)

    Marshall, P. J.; Hobson, M. P.; Gull, S. F.; Bridle, S. L.

    2013-08-01

    LensEnt2 is a maximum entropy reconstructor of weak lensing mass maps. The method takes each galaxy shape as an independent estimator of the reduced shear field and incorporates an intrinsic smoothness, determined by Bayesian methods, into the reconstruction. The uncertainties from both the intrinsic distribution of galaxy shapes and galaxy shape estimation are carried through to the final mass reconstruction, and the mass within arbitrarily shaped apertures is calculated with corresponding uncertainties. The input is a galaxy ellipticity catalog with each measured galaxy shape treated as a noisy tracer of the reduced shear field, which is inferred on a fine pixel grid assuming positivity, and smoothness on scales of w arcsec, where w is an input parameter. The ICF width w can be chosen by computing the evidence for it.

  7. A maximum entropy model for chromatin structure

    NASA Astrophysics Data System (ADS)

    Farre, Pau; Emberly, Eldon; Emberly Group Team

    The DNA inside the nucleus of eukaryotic cells shows a variety of conserved structures at different length scales. These structures are formed by interactions between protein complexes that bind to the DNA and regulate gene activity. Recent high-throughput sequencing techniques allow for the measurement of both the genome-wide contact map of the folded DNA within a cell (HiC) and where various proteins are bound to the DNA (ChIP-seq). In this talk I will present a maximum-entropy method capable of both predicting HiC contact maps from binding data, and binding data from HiC contact maps. This method results in an intuitive Ising-type model that is able to predict how altering the presence of binding factors can modify chromosome conformation, without the need for polymer simulations.

  8. Prediction of Metabolite Concentrations, Rate Constants and Post-Translational Regulation Using Maximum Entropy-Based Simulations with Application to Central Metabolism of Neurospora crassa

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cannon, William; Zucker, Jeremy; Baxter, Douglas

    We report the application of a recently proposed approach for modeling biological systems using a maximum entropy production rate principle in lieu of having in vivo rate constants. The method is applied in four steps: (1) a new ODE-based optimization approach based on Marcelin's 1910 mass action equation is used to obtain the maximum entropy distribution, (2) the predicted metabolite concentrations are compared to those generally expected from experiment using a loss function from which post-translational regulation of enzymes is inferred, (3) the system is re-optimized with the inferred regulation, from which rate constants are determined from the metabolite concentrations and reaction fluxes, and finally (4) a full ODE-based, mass action simulation with rate parameters and allosteric regulation is obtained. From the last step, the power characteristics and resistance of each reaction can be determined. The method is applied to the central metabolism of Neurospora crassa, and the flow of material through the three competing pathways of upper glycolysis, the non-oxidative pentose phosphate pathway, and the oxidative pentose phosphate pathway is evaluated as a function of the NADP/NADPH ratio. It is predicted that regulation of phosphofructokinase (PFK) and flow through the pentose phosphate pathway are essential for preventing an extreme level of fructose 1,6-bisphosphate accumulation. Such an extreme level of fructose 1,6-bisphosphate would otherwise result in a glassy cytoplasm with limited diffusion, dramatically decreasing the entropy and energy production rate and, consequently, biological competitiveness.

  9. Sample entropy analysis of cervical neoplasia gene-expression signatures

    PubMed Central

    Botting, Shaleen K; Trzeciakowski, Jerome P; Benoit, Michelle F; Salama, Salama A; Diaz-Arrastia, Concepcion R

    2009-01-01

    Background We introduce Approximate Entropy as a mathematical method of analysis for microarray data. Approximate entropy is applied here as a method to classify the complex gene expression patterns resulting from a clinical sample set. Since entropy is a measure of disorder in a system, we believe that by choosing genes which display minimum entropy in normal controls and maximum entropy in the cancerous sample set we will be able to distinguish those genes which display the greatest variability in the cancerous set. Here we describe a method of utilizing Approximate Sample Entropy (ApSE) analysis to identify genes of interest with the highest probability of producing an accurate, predictive classification model from our data set. Results In the development of a diagnostic gene-expression profile for cervical intraepithelial neoplasia (CIN) and squamous cell carcinoma of the cervix, we identified 208 genes which are unchanging in all normal tissue samples, yet exhibit a random pattern indicative of the genetic instability and heterogeneity of malignant cells. This may be measured in terms of the ApSE when compared to normal tissue. We have validated 10 of these genes on 10 normal and 20 cancer and CIN3 samples. We report that the predictive value of the sample entropy calculation for these 10 genes of interest is promising (75% sensitivity, 80% specificity for prediction of cervical cancer over CIN3). Conclusion The success of the Approximate Sample Entropy approach in discerning alterations in complexity from a biological system with such a relatively small sample set, and in extracting biologically relevant genes of interest, holds great promise. PMID:19232110
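The core quantity above can be sketched in a few lines. The following is a minimal, self-contained sample entropy (SampEn) implementation on synthetic data, not the authors' pipeline: SampEn(m, r) is the negative log of the conditional probability that sequences matching for m points (Chebyshev distance at most r) also match for m + 1 points, with self-matches excluded.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D series; higher values indicate more
    irregularity. r defaults to the common choice 0.2 * std(x)."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    def count_matches(w):
        # all length-w templates, compared pairwise (self-matches excluded)
        t = np.array([x[i:i + w] for i in range(len(x) - w)])
        c = 0
        for i in range(len(t) - 1):
            d = np.max(np.abs(t[i + 1:] - t[i]), axis=1)  # Chebyshev distance
            c += int(np.sum(d <= r))
        return c
    B = count_matches(m)      # matches of length m
    A = count_matches(m + 1)  # matches of length m + 1
    return -np.log(A / B)

rng = np.random.default_rng(0)
noisy = rng.standard_normal(400)                  # irregular series
regular = np.sin(np.linspace(0, 8 * np.pi, 400))  # periodic series
se_noisy, se_regular = sample_entropy(noisy), sample_entropy(regular)
```

As the abstract's premise requires, the irregular series scores much higher than the periodic one.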

  10. Maximum entropy reconstruction of poloidal magnetic field and radial electric field profiles in tokamaks

    NASA Astrophysics Data System (ADS)

    Chen, Yihang; Xiao, Chijie; Yang, Xiaoyi; Wang, Tianbo; Xu, Tianchao; Yu, Yi; Xu, Min; Wang, Long; Lin, Chen; Wang, Xiaogang

    2017-10-01

    The Laser-driven Ion beam trace probe (LITP) is a new diagnostic method for measuring the poloidal magnetic field (Bp) and radial electric field (Er) in tokamaks. LITP injects a laser-driven ion beam into the tokamak, and the Bp and Er profiles can be reconstructed using tomography methods. A reconstruction code has been developed to validate the LITP theory, and both 2D reconstruction of Bp and simultaneous reconstruction of Bp and Er have been attained. To reconstruct from experimental data with noise, Maximum Entropy and Gaussian-Bayesian tomography methods were applied and improved according to the characteristics of the LITP problem. With these improved methods, a reconstruction error level below 15% has been attained with a data noise level of 10%. These methods will be further tested and applied in upcoming LITP experiments. Supported by the ITER-CHINA program 2015GB120001, CHINA MOST under 2012YQ030142, and the National Natural Science Foundation of China under 11575014 and 11375053.

  11. Long-term variation of radar-auroral backscatter and the interplanetary sector structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yeoman, T.K.; Burrage, M.D.; Lester, M.

    Recurrent variation of geomagnetic activity at the {approximately}27-day solar rotation period and higher harmonics is a well-documented phenomenon. Auroral radar backscatter data from the Sweden and Britain Radar-Auroral Experiment (SABRE) radar provide a continuous time series from 1981 to the present which is a highly sensitive monitor of geomagnetic activity. In this study, Maximum Entropy Method (MEM) dynamic power spectra of SABRE backscatter data from 1981 to 1989, concurrent interplanetary magnetic field (IMF) and solar wind parameters from 1981 to 1987, and the Kp index since 1932 are examined. Data since 1977 are compared with previously published heliospheric current sheet measurements mapped out from the solar photosphere. Strong periodic behavior is observed in the radar backscatter during the declining phase of solar cycle 21, but this periodicity disappears at the start of solar cycle 22. Similar behavior is observed in earlier solar cycles in the Kp spectra. Details of the radar backscatter, IMF, and solar wind spectra indicate that the solar wind momentum density is the dominant parameter in determining the backscatter periodicity. The temporal evolution of two- and four-sector structures, as predicted by SABRE backscatter spectra throughout solar cycle 21, generally still agrees well with heliospheric current sheet measurements. For one interval, however, there is evidence that evolution of the current sheet has occurred between the photospheric source surface and the Earth.
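The MEM power spectra used here are conventionally computed with Burg's recursion, which fits an autoregressive (AR) model and evaluates its spectrum. Below is a minimal sketch of that estimator on a synthetic two-tone signal (the signal, order, and frequencies are illustrative assumptions, not SABRE data):

```python
import numpy as np

def burg_ar(x, order):
    """Burg's lattice recursion: returns AR coefficients a (a[0] == 1)
    and the final prediction-error power E."""
    x = np.asarray(x, dtype=float)
    f = x.copy()                      # forward prediction errors
    b = x.copy()                      # backward prediction errors
    a = np.array([1.0])
    E = x.dot(x) / len(x)
    for _ in range(order):
        f, b = f[1:], b[:-1]
        k = -2.0 * f.dot(b) / (f.dot(f) + b.dot(b))   # reflection coefficient
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        f, b = f + k * b, b + k * f
        E *= 1.0 - k * k
    return a, E

def mem_psd(a, E, freqs):
    """MEM spectrum P(f) = E / |A(e^{-i 2 pi f})|^2 (unit sample rate)."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return E / np.abs(z @ a) ** 2

rng = np.random.default_rng(1)
n = np.arange(512)
x = (np.sin(2 * np.pi * 0.10 * n) + 0.5 * np.sin(2 * np.pi * 0.27 * n)
     + 0.05 * rng.standard_normal(512))
a, E = burg_ar(x, order=12)
freqs = np.linspace(0.0, 0.5, 2001)
psd = mem_psd(a, E, freqs)
```

Sliding this estimator over successive windows of a long series gives the "dynamic power spectra" referred to in the abstract.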

  12. Maximum entropy production: Can it be used to constrain conceptual hydrological models?

    Treesearch

    M.C. Westhoff; E. Zehe

    2013-01-01

    In recent years, optimality principles have been proposed to constrain hydrological models. The principle of maximum entropy production (MEP) is one such principle and is the subject of this study. It states that a steady-state system is organized in such a way that entropy production is maximized. Although successful applications have been reported in...

  13. Thresholding Based on Maximum Weighted Object Correlation for Rail Defect Detection

    NASA Astrophysics Data System (ADS)

    Li, Qingyong; Huang, Yaping; Liang, Zhengping; Luo, Siwei

    Automatic thresholding is an important technique for rail defect detection, but traditional methods do not fit the characteristics of this application well. This paper proposes the Maximum Weighted Object Correlation (MWOC) thresholding method, tailored to two features of rail images: the gray-level histogram is unimodal and the defect proportion is small. MWOC selects a threshold by optimizing the product of the object correlation and a weight term that expresses the proportion of thresholded defects. Our experimental results demonstrate that MWOC achieves a misclassification error of 0.85% and outperforms the other well-established thresholding methods, including Otsu, maximum correlation thresholding, maximum entropy thresholding and the valley-emphasis method, for the application of rail defect detection.

  14. Using Data-Dependent Priors to Mitigate Small Sample Bias in Latent Growth Models: A Discussion and Illustration Using Mplus

    ERIC Educational Resources Information Center

    McNeish, Daniel M.

    2016-01-01

    Mixed-effects models (MEMs) and latent growth models (LGMs) are often considered interchangeable save the discipline-specific nomenclature. Software implementations of these models, however, are not interchangeable, particularly with small sample sizes. Restricted maximum likelihood estimation that mitigates small sample bias in MEMs has not been…

  15. Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns.

    PubMed

    Lezon, Timothy R; Banavar, Jayanth R; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V

    2006-12-12

    We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems.
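For continuous data constrained by means and pairwise covariances, the maximum entropy distribution is Gaussian, and the pairwise interaction matrix is (minus) the inverse covariance. A minimal sketch of that inference on synthetic "expression" data follows; the three-gene chain and sample counts are illustrative assumptions, not the paper's yeast data:

```python
import numpy as np

def pairwise_maxent_network(expr):
    """Pairwise maximum entropy interactions from data (genes x samples):
    the Gaussian maxent model's couplings are minus the precision matrix."""
    C = np.cov(expr)
    return -np.linalg.pinv(C)

# ground truth: genes coupled in a chain (1-2 and 2-3, no direct 1-3 link)
precision = np.array([[ 2.0, -1.0,  0.0],
                      [-1.0,  2.0, -1.0],
                      [ 0.0, -1.0,  2.0]])
rng = np.random.default_rng(3)
expr = rng.multivariate_normal(np.zeros(3), np.linalg.inv(precision),
                               size=20000).T
M = pairwise_maxent_network(expr)
```

The inferred matrix recovers the direct 1-2 coupling while correctly assigning (near) zero to the indirect 1-3 pair, even though genes 1 and 3 are correlated, which is the key advantage over raw correlation networks.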

  16. Maximum-entropy reconstruction method for moment-based solution of the Boltzmann equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2013-11-01

    We describe a method for a moment-based solution of the Boltzmann equation. This starts with moment equations for a (10 + 9N)-moment representation, N = 0, 1, 2, .... The partial-differential equations (PDEs) for these moments are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy construction of the velocity distribution function f(c, x, t), using the known moments, within a finite-box domain of single-particle-velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature while collision terms are calculated using a Monte-Carlo method. This allows integration of the moment PDEs in time. Illustrative examples will include zero-space-dimensional relaxation of f(c, t) from a Mott-Smith-like initial condition toward equilibrium and one-space-dimensional, finite-Knudsen-number, planar Couette flow. Comparison with results using the direct-simulation Monte-Carlo method will be presented.
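The reconstruction step can be sketched in one velocity dimension: seek f(c) = exp(sum_k lam_k c^k) on a finite interval matching known moments, by minimizing the convex dual D(lam) = integral exp(sum_k lam_k c^k) dc - sum_k lam_k m_k. The target, grid, and quadrature rule below are illustrative assumptions, not the authors' scheme:

```python
import numpy as np
from scipy.optimize import minimize

def maxent_reconstruct(moments, grid):
    """Maximum-entropy reconstruction on a finite interval from the first
    K moments, via the convex dual problem (rectangle-rule quadrature)."""
    K = len(moments)
    dc = grid[1] - grid[0]
    powers = np.vstack([grid ** k for k in range(K)])   # K x len(grid)

    def dual(lam):
        f = np.exp(lam @ powers)
        return f.sum() * dc - lam @ moments

    def grad(lam):
        f = np.exp(lam @ powers)
        return (powers * f).sum(axis=1) * dc - moments  # moment mismatches

    lam0 = np.zeros(K)
    lam0[-1] = -0.5                 # start from a decaying tail
    res = minimize(dual, lam0, jac=grad, method="BFGS")
    return np.exp(res.x @ powers), res.x

grid = np.linspace(-5.0, 5.0, 2001)
dc = grid[1] - grid[0]
target = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)      # standard normal
moments = np.array([(grid**k * target).sum() * dc for k in range(3)])
f_rec, lam = maxent_reconstruct(moments, grid)
```

With three moments of a Gaussian target, the reconstruction recovers the Gaussian itself, since it lies in the assumed exponential family.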

  17. Comment and some questions on "Puzzles and the maximum effective moment (MEM) criterion in structural Geology"

    NASA Astrophysics Data System (ADS)

    Tong, Hengmao

    2012-03-01

    Zheng et al. (Zheng and Wang, 2004; Zheng et al., 2011) proposed a new mechanism for the formation of ductile deformation zones that is governed by effective moment rather than shear stress, with the deformation zone developing along the plane of maximum effective moment. The mathematical expression of the maximum effective moment (the criterion of maximum effective moment, abbreviated MEM criterion; Zheng and Wang, 2004; Zheng et al., 2011) is Meff = 0.5 (σ1 - σ3) L sin2α sinα, where σ1 - σ3 is the yield strength of a material or rock, L is the unit length (of cleavage) in the σ1 direction, and α is the angle between σ1 and a given plane. The effective moment reaches its maximum value when α is ±54.7°, and deformation zones tend to appear in pairs with a conjugate angle 2α of 109.4° facing σ1. There is no remarkable drop in Meff from the maximum value within the range 54.7° ± 10°, which is favorable for the formation of ductile deformation zones. As a result, the origin of low-angle normal faults, high-angle reverse faults and certain types of conjugate strike-slip faults, which are incompatible with the Mohr-Coulomb criterion, can be reasonably explained with the MEM criterion (Zheng et al., 2011). Furthermore, many natural and experimental cases have been found or collected to support the criterion.
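The two numerical claims above, the maximizer at 54.7° and the flatness of Meff within ±10° of it, are easy to verify, since the prefactor 0.5(σ1 - σ3)L does not move the maximizer of sin2α sinα:

```python
import numpy as np

# Meff(alpha) is proportional to sin(2*alpha) * sin(alpha)
alpha = np.linspace(0.0, 90.0, 900001)               # degrees, fine grid
meff = np.sin(2 * np.radians(alpha)) * np.sin(np.radians(alpha))
alpha_star = alpha[np.argmax(meff)]

# analytic maximizer: d/dalpha = 0 gives tan^2(alpha) = 2,
# i.e. alpha = arctan(sqrt(2)) = 54.7356... degrees
alpha_analytic = np.degrees(np.arctan(np.sqrt(2.0)))

# flatness check: Meff at alpha_star + 10 degrees, relative to the maximum
ratio_10deg = (np.sin(2 * np.radians(alpha_star + 10))
               * np.sin(np.radians(alpha_star + 10))) / meff.max()
```

The numeric maximizer matches arctan(sqrt 2), and Meff stays above roughly 90% of its maximum across the ±10° band, consistent with the abstract's "no remarkable drop" statement.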

  18. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    NASA Astrophysics Data System (ADS)

    Schaerer, Roman Pascal; Bansal, Pratyuksh; Torrilhon, Manuel

    2017-07-01

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following an approach similar to that of Garrett et al. (2015) [13], we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution by matching its inherent fine-grained parallelism with the parallelism offered by multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.

  19. Maximum and minimum entropy states yielding local continuity bounds

    NASA Astrophysics Data System (ADS)

    Hanson, Eric P.; Datta, Nilanjana

    2018-04-01

    Given an arbitrary quantum state (σ), we obtain an explicit construction of a state ρε*(σ) [respectively, ρ*,ε(σ)] which has the maximum (respectively, minimum) entropy among all states which lie in a specified neighborhood (ε-ball) of σ. Computing the entropy of these states leads to a local strengthening of the continuity bound of the von Neumann entropy, i.e., the Audenaert-Fannes inequality. Our bound is local in the sense that it depends on the spectrum of σ. The states ρε*(σ) and ρ*,ε(σ) depend only on the geometry of the ε-ball and are in fact optimizers for a larger class of entropies. These include the Rényi entropy and the minimum- and maximum-entropies, providing explicit formulas for certain smoothed quantities. This allows us to obtain local continuity bounds for these quantities as well. In obtaining this bound, we first derive a more general result which may be of independent interest, namely, a necessary and sufficient condition under which a state maximizes a concave and Gâteaux-differentiable function in an ε-ball around a given state σ. Examples of such a function include the von Neumann entropy and the conditional entropy of bipartite states. Our proofs employ tools from the theory of convex optimization under non-differentiable constraints, in particular Fermat's rule, and majorization theory.

  20. Identifying topological-band insulator transitions in silicene and other 2D gapped Dirac materials by means of Rényi-Wehrl entropy

    NASA Astrophysics Data System (ADS)

    Calixto, M.; Romera, E.

    2015-02-01

    We propose a new method to identify transitions from a topological insulator to a band insulator in silicene (the silicon equivalent of graphene) in the presence of perpendicular magnetic and electric fields, by using the Rényi-Wehrl entropy of the quantum state in phase space. Electron-hole entropies display an inversion/crossing behavior at the charge neutrality point for any Landau level, and the combined entropy of particles plus holes turns out to be maximum at this critical point. The result is interpreted in terms of delocalization of the quantum state in phase space. The entropic description presented in this work will be valid in general 2D gapped Dirac materials, with a strong intrinsic spin-orbit interaction, isostructural with silicene.

  1. Method for integrating microelectromechanical devices with electronic circuitry

    DOEpatents

    Montague, Stephen; Smith, James H.; Sniegowski, Jeffry J.; McWhorter, Paul J.

    1998-01-01

    A method for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry.

  2. 16QAM Blind Equalization via Maximum Entropy Density Approximation Technique and Nonlinear Lagrange Multipliers

    PubMed Central

    Mauda, R.; Pinchas, M.

    2014-01-01

    Recently a new blind equalization method was proposed for the 16QAM constellation input inspired by the maximum entropy density approximation technique with improved equalization performance compared to the maximum entropy approach, Godard's algorithm, and others. In addition, an approximated expression for the minimum mean square error (MSE) was obtained. The idea was to find those Lagrange multipliers that bring the approximated MSE to minimum. Since the derivation of the obtained MSE with respect to the Lagrange multipliers leads to a nonlinear equation for the Lagrange multipliers, the part in the MSE expression that caused the nonlinearity in the equation for the Lagrange multipliers was ignored. Thus, the obtained Lagrange multipliers were not those Lagrange multipliers that bring the approximated MSE to minimum. In this paper, we derive a new set of Lagrange multipliers based on the nonlinear expression for the Lagrange multipliers obtained from minimizing the approximated MSE with respect to the Lagrange multipliers. Simulation results indicate that for the high signal to noise ratio (SNR) case, a faster convergence rate is obtained for a channel causing a high initial intersymbol interference (ISI) while the same equalization performance is obtained for an easy channel (initial ISI low). PMID:24723813

  3. Forest Tree Species Distribution Mapping Using Landsat Satellite Imagery and Topographic Variables with the Maximum Entropy Method in Mongolia

    NASA Astrophysics Data System (ADS)

    Hao Chiang, Shou; Valdez, Miguel; Chen, Chi-Farn

    2016-06-01

    Forest is a very important ecosystem and natural resource for living things. Based on forest inventories, governments are able to make decisions to conserve, improve and manage forests in a sustainable way. Field work for forestry investigation is difficult and time consuming, because it needs intensive physical labor and the costs are high, especially when surveying remote mountainous regions. A reliable forest inventory can give us more accurate and timely information with which to develop new and efficient approaches to forest management. Remote sensing technology has recently been used for large-scale forest investigation. To produce an informative forest inventory, forest attributes, including tree species, must be considered. The aim of this study is to classify forest tree species in Erdenebulgan County, Huwsgul province, Mongolia, using the Maximum Entropy method. The study area is covered by a dense forest which accounts for almost 70% of the total territory of Erdenebulgan County and is located in a high mountain region in northern Mongolia. For this study, Landsat satellite imagery and a Digital Elevation Model (DEM) were acquired to perform tree species mapping. The forest tree species inventory map was collected from the Forest Division of the Mongolian Ministry of Nature and Environment as training data and also used as ground truth for the accuracy assessment of the tree species classification. Landsat images and the DEM were processed for maximum entropy modeling, and the model was applied in two experiments: the first uses Landsat surface reflectance alone for tree species classification; the second incorporates terrain variables in addition to the Landsat surface reflectance. All experimental results were compared with the tree species inventory to assess the classification accuracy. Results show that the second experiment, which couples Landsat surface reflectance with terrain variables, produced the better result, with a higher overall accuracy and kappa coefficient than the first. The results indicate that the Maximum Entropy method is applicable, and that classifying tree species from satellite imagery coupled with terrain information can improve the classification of tree species in the study area.
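A maximum entropy classifier over discrete classes is equivalent to multinomial logistic regression. The sketch below trains such a model by plain gradient descent on hypothetical per-pixel features (one "reflectance" band plus one scaled terrain variable); the data, feature names, and hyperparameters are illustrative assumptions, not the study's setup:

```python
import numpy as np

def train_maxent(X, y, n_classes, lr=0.5, epochs=300):
    """Minimal multinomial-logistic (maximum entropy) classifier,
    trained by batch gradient descent on the cross-entropy loss."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    Y = np.eye(n_classes)[y]                    # one-hot labels
    for _ in range(epochs):
        z = X @ W
        z -= z.max(axis=1, keepdims=True)       # numerical stability
        p = np.exp(z)
        p /= p.sum(axis=1, keepdims=True)       # softmax probabilities
        W -= lr * X.T @ (p - Y) / n             # gradient step
    return W

def predict(W, X):
    return np.argmax(X @ W, axis=1)

# toy 'pixels': two species separated in (reflectance, scaled elevation)
rng = np.random.default_rng(4)
X0 = rng.normal([0.2, 0.8], 0.05, size=(200, 2))
X1 = rng.normal([0.4, 1.4], 0.05, size=(200, 2))
X = np.vstack([X0, X1])
X = np.hstack([X, np.ones((len(X), 1))])        # constant bias feature
y = np.array([0] * 200 + [1] * 200)
W = train_maxent(X, y, n_classes=2)
acc = (predict(W, X) == y).mean()
```

Adding a terrain column to X is exactly the second-experiment idea: any feature that separates the classes further can only sharpen the fitted maximum entropy model.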

  4. A maximum entropy thermodynamics of small systems.

    PubMed

    Dixit, Purushottam D

    2013-05-14

    We present a maximum entropy approach to analyze the state space of a small system in contact with a large bath, e.g., a solvated macromolecular system. For the solute, the fluctuations around the mean values of observables are not negligible and the probability distribution P(r) of the state space depends on the intricate details of the interaction of the solute with the solvent. Here, we employ a superstatistical approach: P(r) is expressed as a marginal distribution summed over the variation in β, the inverse temperature of the solute. The joint distribution P(β, r) is estimated by maximizing its entropy. We also calculate the first order system-size corrections to the canonical ensemble description of the state space. We test the development on a simple harmonic oscillator interacting with two baths with very different chemical identities, viz., (a) Lennard-Jones particles and (b) water molecules. In both cases, our method captures the state space of the oscillator sufficiently well. Future directions and connections with traditional statistical mechanics are discussed.

  5. From Maximum Entropy Models to Non-Stationarity and Irreversibility

    NASA Astrophysics Data System (ADS)

    Cofre, Rodrigo; Cessac, Bruno; Maldonado, Cesar

    The maximum entropy distribution can be obtained from a variational principle. This is important as a matter of principle and for the purpose of finding approximate solutions. One can exploit this fact to obtain relevant information about the underlying stochastic process. We report here on recent progress in three aspects of this approach. (1) Biological systems are expected to show some degree of irreversibility in time. Based on the transfer matrix technique to find the spatio-temporal maximum entropy distribution, we build a framework to quantify the degree of irreversibility of any maximum entropy distribution. (2) The maximum entropy solution is characterized by a functional called the Gibbs free energy (solution of the variational principle). The Legendre transformation of this functional is the rate function, which controls the speed of convergence of empirical averages to their ergodic mean. We show how the correct description of this functional is decisive for a more rigorous characterization of first- and higher-order phase transitions. (3) We assess the impact of a weak time-dependent external stimulus on the collective statistics of spiking neuronal networks. We show how to evaluate this impact on any higher-order spatio-temporal correlation. RC supported by ERC advanced Grant ``Bridges'', BC: KEOPS ANR-CONICYT, Renvision and CM: CONICYT-FONDECYT No. 3140572.

  6. Regularization of Grad’s 13 -Moment-Equations in Kinetic Gas Theory

    DTIC Science & Technology

    2011-01-01

    variant of the moment method has been proposed by Eu (1980) and is used, e.g., in Myong (2001). Recently, a maximum-entropy 10-moment system has been used...small amplitude linear waves, the R13 system is linearly stable in time for all modes and wavelengths. The instability of the Burnett system indicates...Boltzmann equation. Related to the problem of global hyperbolicity is the question of the existence of an entropy law for the R13 system. In the linear

  7. Maximum Kolmogorov-Sinai Entropy Versus Minimum Mixing Time in Markov Chains

    NASA Astrophysics Data System (ADS)

    Mihelich, M.; Dubrulle, B.; Paillard, D.; Kral, Q.; Faranda, D.

    2018-01-01

    We establish a link between the maximization of the Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since maximizing the KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics. It could be of interest in computer science and statistical physics, for computations that use random walks on graphs that can be represented as Markov chains.
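The KSE of an ergodic Markov chain has a closed form: h = -sum_i pi_i sum_j P_ij log P_ij, where pi is the stationary distribution. A minimal sketch on a two-state chain, where the entropy rate reduces to the binary entropy of the flip probability:

```python
import numpy as np

def ks_entropy(P):
    """Kolmogorov-Sinai entropy rate of an ergodic Markov chain:
    h = -sum_i pi_i sum_j P_ij log P_ij, with pi the stationary
    distribution (left eigenvector of P for eigenvalue 1)."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    pi /= pi.sum()                              # normalize (fixes sign too)
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    return -pi @ plogp.sum(axis=1)

# symmetric two-state chain with flip probability 0.3
P = np.array([[0.7, 0.3],
              [0.3, 0.7]])
h = ks_entropy(P)
h_max = ks_entropy(np.full((2, 2), 0.5))        # flip prob 0.5: h = log 2
```

On this family, the KSE is maximized at flip probability 0.5, which is also the fastest-mixing symmetric chain, illustrating the link the abstract describes.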

  8. LIBOR troubles: Anomalous movements detection based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Bariviera, Aurelio F.; Martín, María T.; Plastino, Angelo; Vampa, Victoria

    2016-05-01

    According to the definition of the London Interbank Offered Rate (LIBOR), contributing banks should give fair estimates of their own borrowing costs in the interbank market. Between 2007 and 2009, several banks made inappropriate submissions of LIBOR, sometimes motivated by profit-seeking from their trading positions. In 2012, several newspaper articles began to cast doubt on LIBOR integrity, leading surveillance authorities to conduct investigations into banks' behavior. These procedures resulted in severe fines imposed on the banks involved, which acknowledged their inappropriate financial conduct. In this paper, we uncover such unfair behavior by using a forecasting method based on the Maximum Entropy principle. Our results are robust against changes in parameter settings and could be of great help for market surveillance.

  9. Decrease in heart rate variability response to task is related to anxiety and depressiveness in normal subjects.

    PubMed

    Shinba, Toshikazu; Kariya, Nobutoshi; Matsui, Yasue; Ozawa, Nobuyuki; Matsuda, Yoshiki; Yamamoto, Ken-Ichi

    2008-10-01

    Previous studies have shown that heart rate variability (HRV) measurement is useful in investigating the pathophysiology of various psychiatric disorders. The present study further examined its usefulness in evaluating the mental health of normal subjects with respect to anxiety and depressiveness. Heart rate (HR) and HRV were measured tonometrically at the wrist in 43 normal subjects, not only in the resting condition but also during a task (random number generation), to assess responsiveness. For HRV measurement, high-frequency (HF; 0.15-0.4 Hz) and low-frequency (LF; 0.04-0.15 Hz) components of HRV were obtained using MemCalc, a time series analysis technique that combines a non-linear least squares method with the maximum entropy method. For psychological evaluation of anxiety and depressiveness, two self-report questionnaires were used: the State-Trait Anxiety Inventory (STAI) and the Self-Rating Depression Scale (SDS). No significant relation was observed between the HR and HRV indices and the psychological scores in either the resting or the task condition. With task application, HF decreased, LF/HF and HR increased, and significant correlation with the psychological scores was found for the responsiveness to the task, measured by the ratio of the HRV and HR indices during the task to those at rest (task/rest ratio). A positive relationship was found between the task/rest ratio for HF and the STAI and SDS scores. The task/rest ratio of HR was negatively correlated with the STAI-state score. A decreased HRV response to task application is related to anxiety and depressiveness. Decreased autonomic responsiveness could serve as a sign of psychological dysfunction.

  10. Investigation of radio astronomy image processing techniques for use in the passive millimetre-wave security screening environment

    NASA Astrophysics Data System (ADS)

    Taylor, Christopher T.; Hutchinson, Simon; Salmon, Neil A.; Wilkinson, Peter N.; Cameron, Colin D.

    2014-06-01

    Image processing techniques can be used to improve the cost-effectiveness of future interferometric Passive MilliMetre Wave (PMMW) imagers. The implementation of such techniques will allow for a reduction in the number of collecting elements whilst ensuring adequate image fidelity is maintained. Various techniques have been developed by the radio astronomy community to enhance the imaging capability of sparse interferometric arrays. The most prominent are Multi-Frequency Synthesis (MFS) and non-linear deconvolution algorithms, such as the Maximum Entropy Method (MEM) and variations of the CLEAN algorithm. This investigation focuses on the implementation of these methods in the de facto standard for radio astronomy image processing, the Common Astronomy Software Applications (CASA) package, building upon the discussion presented in Taylor et al., SPIE 8362-0F. We describe the image conversion process into a CASA-suitable format, followed by a series of simulations that exploit the highlighted deconvolution and MFS algorithms assuming far-field imagery. The primary target application used for this investigation is an outdoor security scanner for soft-sided Heavy Goods Vehicles. A quantitative analysis of the effectiveness of the aforementioned image processing techniques is presented, with thoughts on the potential cost-savings such an approach could yield. Consideration is also given to how the implementation of these techniques in CASA might be adapted to operate in a near-field target environment. This may enable a much wider usability by the imaging community outside of radio astronomy and thus would be directly relevant to portal screening security systems in the microwave and millimetre wave bands.
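The CLEAN variant mentioned above is, at its core, a simple iterative loop. Below is a one-dimensional sketch of Högbom CLEAN on a synthetic point source blurred by a Gaussian "dirty beam" (the beam shape and sky are illustrative assumptions, not CASA output):

```python
import numpy as np

def hogbom_clean_1d(dirty, beam, gain=0.2, threshold=1e-3, max_iter=500):
    """Hogbom CLEAN loop: repeatedly locate the residual peak, subtract a
    gain-scaled shifted copy of the dirty beam there, and accumulate the
    removed flux as a point-source component."""
    res = dirty.astype(float).copy()
    comps = np.zeros_like(res)
    c = len(beam) // 2                       # beam peak index (odd length)
    for _ in range(max_iter):
        p = int(np.argmax(np.abs(res)))
        if abs(res[p]) < threshold:
            break
        amp = gain * res[p]
        comps[p] += amp
        lo = p - c
        for i, bi in enumerate(beam):        # subtract the shifted beam
            j = lo + i
            if 0 <= j < len(res):
                res[j] -= amp * bi
    return comps, res

# point source at pixel 30, blurred by a peak-normalized Gaussian beam
x = np.arange(21) - 10
beam = np.exp(-x**2 / (2 * 3.0**2))
sky = np.zeros(64)
sky[30] = 1.0
dirty = np.convolve(sky, beam, mode="same")
comps, res = hogbom_clean_1d(dirty, beam)
```

The component image recovers the source position and flux while the residual shrinks below the stopping threshold; 2D imaging CLEAN is the same loop with 2D shifts.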

  11. Using the principle of entropy maximization to infer genetic interaction networks from gene expression patterns

    PubMed Central

    Lezon, Timothy R.; Banavar, Jayanth R.; Cieplak, Marek; Maritan, Amos; Fedoroff, Nina V.

    2006-01-01

    We describe a method based on the principle of entropy maximization to identify the gene interaction network with the highest probability of giving rise to experimentally observed transcript profiles. In its simplest form, the method yields the pairwise gene interaction network, but it can also be extended to deduce higher-order interactions. Analysis of microarray data from genes in Saccharomyces cerevisiae chemostat cultures exhibiting energy metabolic oscillations identifies a gene interaction network that reflects the intracellular communication pathways that adjust cellular metabolic activity and cell division to the limiting nutrient conditions that trigger metabolic oscillations. The success of the present approach in extracting meaningful genetic connections suggests that the maximum entropy principle is a useful concept for understanding living systems, as it is for other complex, nonequilibrium systems. PMID:17138668

  12. Chemical vapor deposition techniques and related methods for manufacturing microminiature thermionic converters

    DOEpatents

    King, Donald B.; Sadwick, Laurence P.; Wernsman, Bernard R.

    2002-06-25

    Methods of manufacturing microminiature thermionic converters (MTCs) having high energy-conversion efficiencies and variable operating temperatures using MEMS manufacturing techniques including chemical vapor deposition. The MTCs made using the methods of the invention incorporate cathode to anode spacing of about 1 micron or less and use cathode and anode materials having work functions ranging from about 1 eV to about 3 eV. The MTCs also exhibit maximum efficiencies of just under 30%, and thousands of the devices can be fabricated at modest costs.

  13. Method for integrating microelectromechanical devices with electronic circuitry

    DOEpatents

    Montague, S.; Smith, J.H.; Sniegowski, J.J.; McWhorter, P.J.

    1998-08-25

    A method is disclosed for integrating one or more microelectromechanical (MEM) devices with electronic circuitry. The method comprises the steps of forming each MEM device within a cavity below a device surface of the substrate; encapsulating the MEM device prior to forming electronic circuitry on the substrate; and releasing the MEM device for operation after fabrication of the electronic circuitry. Planarization of the encapsulated MEM device prior to formation of the electronic circuitry allows the use of standard processing steps for fabrication of the electronic circuitry. 13 figs.

  14. Entropy of adsorption of mixed surfactants from solutions onto the air/water interface

    USGS Publications Warehouse

    Chen, L.-W.; Chen, J.-H.; Zhou, N.-F.

    1995-01-01

    The partial molar entropy change for mixed surfactant molecules adsorbed from solution at the air/water interface has been investigated by surface thermodynamics based upon the experimental surface tension isotherms at various temperatures. Results for different surfactant mixtures of sodium dodecyl sulfate and sodium tetradecyl sulfate, decylpyridinium chloride and sodium alkylsulfonates have shown that the partial molar entropy changes for adsorption of the mixed surfactants were generally negative and decreased with increasing adsorption to a minimum near the maximum adsorption and then increased abruptly. The entropy decrease can be explained by the adsorption-orientation of surfactant molecules in the adsorbed monolayer, and the abrupt entropy increase at the maximum adsorption is possibly due to the strong repulsion between the adsorbed molecules.

  15. Learning Probabilities From Random Observables in High Dimensions: The Maximum Entropy Distribution and Others

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Cocco, Simona; Monasson, Rémi

    2015-11-01

    We consider the problem of learning a target probability distribution over a set of N binary variables from the knowledge of the expectation values (with this target distribution) of M observables, drawn uniformly at random. The space of all probability distributions compatible with these M expectation values within some fixed accuracy, called the version space, is studied. We introduce a biased measure over the version space, which gives a boost increasing exponentially with the entropy of the distributions and with an arbitrary inverse `temperature' Γ. The choice of Γ allows us to interpolate smoothly between the unbiased measure over all distributions in the version space (Γ = 0) and the pointwise measure concentrated at the maximum entropy distribution (Γ → ∞). Using the replica method we compute the volume of the version space and other quantities of interest, such as the distance R between the target distribution and the center-of-mass distribution over the version space, as functions of α = (log M)/N and Γ for large N. Phase transitions at critical values of α are found, corresponding to qualitative improvements in the learning of the target distribution and to the decrease of the distance R. However, for fixed α the distance R does not vary with Γ, which means that the maximum entropy distribution is not closer to the target distribution than any other distribution compatible with the observable values. Our results are confirmed by Monte Carlo sampling of the version space for small system sizes (N ≤ 10).
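    A minimal numerical sketch of the constraint-matching step described above; the system size, random observables, learning rate, and iteration count below are illustrative assumptions, not the paper's replica-theoretic setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 3, 4                                   # N binary variables, M random observables

# Evaluate M random +/-1 observables on all 2^N states, and draw a random
# full-support target distribution that supplies the expectation values to match.
obs = rng.choice([-1.0, 1.0], size=(M, 2 ** N))
target = rng.random(2 ** N)
target /= target.sum()
c = obs @ target                              # expectation values to be reproduced

# Dual ascent for the maximum entropy distribution p(s) ∝ exp(sum_a lam_a O_a(s)):
# the gradient of the convex dual log Z(lam) - lam·c is <O>_lam - c.
lam = np.zeros(M)
for _ in range(20000):
    w = np.exp(lam @ obs)
    p = w / w.sum()
    lam -= 0.2 * (obs @ p - c)                # step model expectations toward c
```

    Because the target has full support, the constraints are realizable in the interior of the simplex and the dual optimum is finite; the fitted p is then the maximum entropy distribution among all distributions matching the M expectation values exactly.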

  16. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    The spatial distribution of the compression modulus Es, an important geotechnical parameter, contributes considerably to the understanding of the underlying geological processes and to the adequate assessment of the effects of Es on the differential settlement of large continuous structure foundations. Such analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve this, the Es distribution of a silty clay stratum in region A of the China Expo Center (Shanghai) is studied using the Bayesian maximum entropy method, which rigorously and efficiently integrates geotechnical investigations of different precisions together with their sources of uncertainty. Individual CPT soundings were modeled as probability density curves by maximum entropy theory. The spatial prior multivariate probability density function (PDF) and the likelihood PDF linking the CPT positions and the prediction point were built from borehole experiments; the posterior probability density curve at the prediction point was then calculated within a Bayesian interpolation framework by numerical integration over the CPT probability density curves. The results were compared with Gaussian sequential stochastic simulation, and the differences between treating single CPT soundings as normally distributed and as maximum entropy probability density curves were also discussed. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, and that more informative estimates are generated by considering CPT uncertainty at the estimation points. The calculations illustrate the significance of stochastic Es characterization in a stratum and identify limitations of inadequate geostatistical interpolation techniques. These characterization results provide a multi-precision information assimilation method for other geotechnical parameters.

  17. Tsallis Entropy and the Transition to Scaling in Fragmentation

    NASA Astrophysics Data System (ADS)

    Sotolongo-Costa, Oscar; Rodriguez, Arezky H.; Rodgers, G. J.

    2000-12-01

    By using the maximum entropy principle with Tsallis entropy we obtain a fragment size distribution function which undergoes a transition to scaling. This distribution function reduces to those obtained by other authors using Shannon entropy. The treatment is easily generalisable to any process of fractioning with suitable constraints.
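    As a small numerical companion (the probability values are illustrative), the Tsallis entropy S_q = (1 − Σ p_i^q)/(q − 1) can be checked to reduce to the Shannon entropy as q → 1, the limit in which the fragment size distribution above reduces to the Shannon-derived one:

```python
import numpy as np

def tsallis_entropy(p, q):
    """S_q = (1 - sum p_i^q) / (q - 1); Shannon entropy in the q -> 1 limit."""
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:
        nz = p[p > 0]
        return -np.sum(nz * np.log(nz))       # Shannon limit
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

p = np.array([0.5, 0.3, 0.2])                 # illustrative fragment probabilities
s_shannon = tsallis_entropy(p, 1.0)
s_near_one = tsallis_entropy(p, 1.001)        # approaches s_shannon as q -> 1
```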

  18. Image restoration and superresolution as probes of small scale far-IR structure in star forming regions

    NASA Technical Reports Server (NTRS)

    Lester, D. F.; Harvey, P. M.; Joy, M.; Ellis, H. B., Jr.

    1986-01-01

    Far-infrared continuum studies from the Kuiper Airborne Observatory are described that are designed to fully exploit the small-scale spatial information that this facility can provide. This work gives the clearest picture to date of the structure of galactic and extragalactic star forming regions in the far infrared. Work is presently being done with slit scans taken simultaneously at 50 and 100 microns, yielding one-dimensional data. Scans of sources in different directions have been used to obtain some information on two-dimensional structure. Planned work with linear arrays will allow us to generalize our techniques to two-dimensional image restoration. For faint sources, spatial information at the diffraction limit of the telescope is obtained, while for brighter sources, nonlinear deconvolution techniques have allowed us to improve over the diffraction limit by as much as a factor of four. Information on the details of the color temperature distribution is derived as well. This is made possible by the accuracy with which the instrumental point-source profile (PSP) is determined at both wavelengths. While these two PSPs are different, data at different wavelengths can be compared by proper spatial filtering. Considerable effort has been devoted to implementing deconvolution algorithms. Nonlinear deconvolution methods offer the potential of superresolution -- that is, inference of power at spatial frequencies that exceed D/λ. This potential arises from the algorithm's implicit assumption of positivity of the deconvolved data, a universally justifiable constraint for photon processes. We have tested two nonlinear deconvolution algorithms on our data: the Richardson-Lucy (R-L) method and the Maximum Entropy Method (MEM). The limits of image deconvolution techniques for achieving spatial resolution are addressed.
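    A one-dimensional sketch of the Richardson-Lucy iteration mentioned above; the Gaussian point-source profile, source positions, and iteration count are illustrative assumptions. Each step is multiplicative, so positivity of the estimate is preserved, which is what licenses the superresolution:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=200):
    """R-L iteration: multiplicative, flux-conserving, positivity-preserving."""
    psf = psf / psf.sum()
    estimate = np.full_like(observed, observed.mean())   # flat positive start
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Two point sources blurred by a sigma = 3-sample Gaussian PSP (odd length, centered).
psf = np.exp(-0.5 * ((np.arange(31) - 15) / 3.0) ** 2)
truth = np.zeros(64)
truth[28], truth[36] = 1.0, 0.8
observed = np.convolve(truth, psf / psf.sum(), mode="same")
restored = richardson_lucy(observed, psf)
```

    After a few hundred iterations the restored profile is markedly sharper than the data: the peaks grow, the dip between the two sources deepens, and the estimate stays non-negative throughout.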

  19. Two-dimensional fluorescence lifetime correlation spectroscopy. 2. Application.

    PubMed

    Ishii, Kunihiko; Tahara, Tahei

    2013-10-03

    In the preceding article, we introduced the theoretical framework of two-dimensional fluorescence lifetime correlation spectroscopy (2D FLCS). In this article, we report the experimental implementation of 2D FLCS. In this method, two-dimensional emission-delay correlation maps are constructed from the photon data obtained with time-correlated single photon counting (TCSPC), and then converted to 2D lifetime correlation maps by the inverse Laplace transform. We develop a numerical method to realize reliable transformation, employing the maximum entropy method (MEM). We apply the developed 2D FLCS to two real systems, a dye mixture and a DNA hairpin. For the dye mixture, we show that 2D FLCS is experimentally feasible and that it can identify different species in an inhomogeneous sample without any prior knowledge. The application to the DNA hairpin demonstrates that 2D FLCS can disclose microsecond spontaneous dynamics of biological molecules in a visually comprehensible manner, by identifying species as unique lifetime distributions. A FRET pair is attached to both ends of the DNA hairpin, and the different structures of the DNA hairpin are distinguished as different fluorescence lifetimes in 2D FLCS. By constructing the 2D correlation maps of the fluorescence lifetime of the FRET donor, the equilibrium dynamics between the open and closed forms of the DNA hairpin is clearly observed as the appearance of cross peaks between the corresponding fluorescence lifetimes. This equilibrium dynamics of the DNA hairpin is clearly separated from the acceptor-missing DNA, which appears as an isolated diagonal peak in the 2D maps. The present study clearly shows that the newly developed 2D FLCS can disclose spontaneous structural dynamics of biological molecules with microsecond time resolution.

  20. Stationary properties of maximum-entropy random walks.

    PubMed

    Dixit, Purushottam D

    2015-10-01

    Maximum-entropy (ME) inference of state probabilities using state-dependent constraints is popular in the study of complex systems. In stochastic systems, how state space topology and path-dependent constraints affect ME-inferred state probabilities remains unknown. To that end, we derive the transition probabilities and the stationary distribution of a maximum path entropy Markov process subject to state- and path-dependent constraints. A main finding is that the stationary distribution over states differs significantly from the Boltzmann distribution and reflects a competition between path multiplicity and imposed constraints. We illustrate our results with particle diffusion on a two-dimensional landscape. Connections with the path integral approach to diffusion are discussed.
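    For the unconstrained special case, the classic maximum-entropy random walk, the transition and stationary probabilities follow directly from the leading eigenpair of the adjacency matrix; the paper's state- and path-constrained process generalizes this construction. A sketch on an illustrative four-node graph:

```python
import numpy as np

# Maximum-entropy random walk on a small undirected graph: every path of a given
# length is equally likely, giving transitions p_ij = A_ij * psi_j / (lam * psi_i)
# with (lam, psi) the leading (Perron) eigenpair of A, and a stationary
# distribution pi_i ∝ psi_i**2 that reflects path multiplicity.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)     # illustrative adjacency matrix

eigvals, eigvecs = np.linalg.eigh(A)          # ascending order for symmetric A
lam, psi = eigvals[-1], np.abs(eigvecs[:, -1])

P = A * psi[None, :] / (lam * psi[:, None])   # row-stochastic transition matrix
pi = psi ** 2 / np.sum(psi ** 2)              # stationary distribution
```

    The stationary weight π_i ∝ ψ_i² favors well-connected nodes through path multiplicity, which is exactly how the ME stationary distribution comes to differ from a Boltzmann distribution over states.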

  1. MEMS/ECD Method for Making Bi(2-x)Sb(x)Te3 Thermoelectric Devices

    NASA Technical Reports Server (NTRS)

    Lim, James; Huang, Chen-Kuo; Ryan, Margaret; Snyder, G. Jeffrey; Herman, Jennifer; Fleurial, Jean-Pierre

    2008-01-01

    A method of fabricating Bi(2-x)Sb(x)Te3-based thermoelectric microdevices involves a combination of (1) techniques used previously in the fabrication of integrated circuits and of microelectromechanical systems (MEMS) and (2) a relatively inexpensive MEMS-oriented electrochemical-deposition (ECD) technique. The present method overcomes the limitations of prior MEMS fabrication techniques and makes it possible to satisfy requirements.

  2. Estimation of typhoon rainfall in GaoPing River: A Multivariate Maximum Entropy Method

    NASA Astrophysics Data System (ADS)

    Pei-Jui, Wu; Hwa-Lung, Yu

    2016-04-01

    Heavy rainfall from typhoons is the main driver of natural disasters in Taiwan, causing significant losses of human lives and property. On average, 3.5 typhoons strike Taiwan every year, and Typhoon Morakot in 2009 was among the most severe on record. Because the duration, path, and intensity of a typhoon affect the temporal and spatial rainfall pattern in a given region, identifying the characteristics of typhoon rainfall patterns is advantageous for estimating rainfall quantities. This study developed a rainfall prediction model in three parts. First, extended empirical orthogonal functions (EEOF) are used to classify the typhoon events, decomposing the standardized rainfall pattern at all stations for each event into EOFs and principal components (PCs), so that events with similar temporal and spatial variation are grouped into the same typhoon type. Next, based on this classification, probability density functions (PDFs) are constructed across space and time by the multivariate maximum entropy method, using the first through fourth statistical moments, giving the probability at each station and time. Finally, the Bayesian maximum entropy (BME) method is used to construct the typhoon rainfall prediction model and to estimate rainfall for the case of the GaoPing River in southern Taiwan. This study could be useful for future typhoon rainfall prediction and for government typhoon disaster prevention.

  3. Entropy generation in biophysical systems

    NASA Astrophysics Data System (ADS)

    Lucia, U.; Maino, G.

    2013-03-01

    Recently, in theoretical biology and in biophysical engineering the entropy production has been verified to approach asymptotically its maximum rate, by using the probability of individual elementary modes distributed in accordance with the Boltzmann distribution. The basis of this approach is the hypothesis that the entropy production rate is maximum at the stationary state. In the present work, this hypothesis is explained and motivated, starting from the entropy generation analysis. This latter quantity is obtained from the entropy balance for open systems considering the lifetime of the natural real process. The Lagrangian formalism is introduced in order to develop an analytical approach to the thermodynamic analysis of the open irreversible systems. The stationary conditions of the open systems are thus obtained in relation to the entropy generation and the least action principle. Consequently, the considered hypothesis is analytically proved and it represents an original basic approach in theoretical and mathematical biology and also in biophysical engineering. It is worth remarking that the present results show that entropy generation not only increases but increases as fast as possible.

  4. MEMS-based tunable gratings and their applications

    NASA Astrophysics Data System (ADS)

    Yu, Yiting; Yuan, Weizheng; Qiao, Dayong

    2015-03-01

    The marriage of optics and MEMS has resulted in a new category of optical devices and systems with unprecedented advantages over their traditional counterparts. As an important spatial light modulation technology, diffractive optical MEMS has found a wide variety of successful commercial applications, e.g. projection displays, optical communication and spectral analysis, owing to its compactness, low cost, IC compatibility and excellent performance, and it opens possibilities for developing entirely new, smart devices and systems. Three of the most successful MEMS diffraction gratings (GLVs, the Polychromator and DMDs) are briefly introduced and their potential applications analyzed. Then, three MEMS tunable gratings developed by our group, namely micro programmable blazed gratings (μPBGs) and micro pitch-tunable gratings (μPTGs) working in either digital or analog mode, are demonstrated. Strategies to greatly enhance the maximum blazed angle and grating period are described, and some preliminary application explorations based on the developed grating devices are shown. In our ongoing research, we will further improve the device performance to meet engineering application requirements.

  5. Electrical latching of microelectromechanical devices

    DOEpatents

    Garcia, Ernest J.; Sleefe, Gerard E.

    2004-11-02

    Methods are disclosed for row and column addressing of an array of microelectromechanical (MEM) devices. The methods of the present invention are applicable to MEM micromirrors or memory elements and allow the MEM array to be programmed and maintained latched in a programmed state with a voltage that is generally lower than the voltage required for electrostatically switching the MEM devices.

  6. Determining Dynamical Path Distributions usingMaximum Relative Entropy

    DTIC Science & Technology

    2015-05-31

    …entropy to a one-dimensional continuum labeled by a parameter η. The resulting η-entropies are equivalent to those proposed by Rényi or by Tsallis.

  7. Ergodicity, Maximum Entropy Production, and Steepest Entropy Ascent in the Proofs of Onsager's Reciprocal Relations

    NASA Astrophysics Data System (ADS)

    Benfenati, Francesco; Beretta, Gian Paolo

    2018-04-01

    We show that to prove the Onsager relations using the microscopic time reversibility one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to that. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).

  8. Elements of the cognitive universe

    NASA Astrophysics Data System (ADS)

    Topsøe, Flemming

    2017-06-01

    "The least biased inference, taking available information into account, is the one with maximum entropy". So we are taught by Jaynes. The many followers from a broad spectrum of the natural and social sciences point to the wisdom of this principle, the maximum entropy principle, MaxEnt. But "entropy" need not be tied only to classical entropy and thus to probabilistic thinking. In fact, the arguments found in Jaynes' writings and elsewhere can, as we shall attempt to demonstrate, profitably be revisited, elaborated and transformed to apply in a much more general abstract setting. The approach is based on game theoretical thinking. Philosophical considerations dealing with notions of cognition - basically truth and belief - lie behind. Quantitative elements are introduced via a concept of description effort. An interpretation of Tsallis Entropy is indicated.

  9. Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Cooper, William S.

    1983-01-01

    Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…

  10. Exploration of the Maximum Entropy/Optimal Projection Approach to Control Design Synthesis for Large Space Structures.

    DTIC Science & Technology

    1985-02-01

    …Maximum Entropy Stochastic Modelling and Reduced-Order Design Synthesis is a rigorous new approach to this class of problems. Inspired by Statistical Energy Analysis, a branch of dynamic modal analysis developed for analyzing acoustic vibration problems, its present stage of development embodies a…

  11. Autonomous entropy-based intelligent experimental design

    NASA Astrophysics Data System (ADS)

    Malakar, Nabin Kumar

    2011-07-01

    The aim of this thesis is to explore the application of probability and information theory in experimental design, and to do so in a way that combines what we know about inference and inquiry in a comprehensive and consistent manner. Present day scientific frontiers involve data collection at an ever-increasing rate. This requires that we find a way to collect the most relevant data in an automated fashion. By following the logic of the scientific method, we couple an inference engine with an inquiry engine to automate the iterative process of scientific learning. The inference engine involves Bayesian machine learning techniques to estimate model parameters based upon both prior information and previously collected data, while the inquiry engine implements data-driven exploration. By choosing an experiment whose distribution of expected results has the maximum entropy, the inquiry engine selects the experiment that maximizes the expected information gain. The coupled inference and inquiry engines constitute an autonomous learning method for scientific exploration. We apply it to a robotic arm to demonstrate the efficacy of the method. Optimizing inquiry involves searching for an experiment that promises, on average, to be maximally informative. If the set of potential experiments is described by many parameters, the search involves a high-dimensional entropy space. In such cases, a brute force search method will be slow and computationally expensive. We develop an entropy-based search algorithm, called nested entropy sampling, to select the most informative experiment. This helps to reduce the number of computations necessary to find the optimal experiment. We also extended the method of maximizing entropy, and developed a method of maximizing joint entropy so that it could be used as a principle of collaboration between two robots. 
This is a major achievement of this thesis, as it allows the information-based collaboration between two robotic units towards a same goal in an automated fashion.
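    The selection step of the inquiry engine, choosing the experiment whose distribution of expected results has maximum entropy, can be sketched schematically; the candidate predictive distributions below are invented for illustration:

```python
import numpy as np

def shannon_entropy(p):
    """Entropy of a discrete distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Predictive outcome distributions for three candidate experiments (rows sum to 1).
predictive = np.array([
    [0.97, 0.01, 0.02],   # outcome nearly certain -> little expected information gain
    [0.50, 0.30, 0.20],   # broad predictive distribution -> most informative choice
    [0.70, 0.25, 0.05],
])

# Inquiry engine: select the experiment whose expected results are most uncertain.
best = int(np.argmax([shannon_entropy(row) for row in predictive]))
```

    The experiment with the broadest predictive distribution wins the argmax; nested entropy sampling, as described above, replaces this exhaustive search when the space of candidate experiments is high-dimensional.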

  12. Development of the micro pixel chamber based on MEMS technology

    NASA Astrophysics Data System (ADS)

    Takemura, T.; Takada, A.; Kishimoto, T.; Komura, S.; Kubo, H.; Matsuoka, Y.; Miuchi, K.; Miyamoto, S.; Mizumoto, T.; Mizumura, Y.; Motomura, T.; Nakamasu, Y.; Nakamura, K.; Oda, M.; Ohta, K.; Parker, J. D.; Sawano, T.; Sonoda, S.; Tanimori, T.; Tomono, D.; Yoshikawa, K.

    2018-02-01

    Micro pixel chambers (μ-PIC) are gaseous two-dimensional imaging detectors originally manufactured using printed circuit board (PCB) technology. They are used in MeV gamma-ray astronomy, medical imaging, neutron imaging, the search for dark matter, and dose monitoring. The position resolution of the present μ-PIC is approximately 120 μm (RMS); however, some applications require a position resolution finer than 100 μm. To this end, we have started to develop a μ-PIC based on micro electro mechanical system (MEMS) technology, which provides better manufacturing accuracy than PCB technology. Our simulation predicted the gains of MEMS μ-PICs to be twice those of PCB μ-PICs at the same anode voltage. We manufactured two MEMS μ-PICs and tested them to study their behavior. In these experiments, we successfully operated the fabricated MEMS μ-PICs, achieved a maximum gain of approximately 7×10³, and collected energy spectra under irradiation with X-rays from 55Fe. However, the measured gains of the MEMS μ-PICs were less than half the values predicted by the simulations. We postulate that the gains of the MEMS μ-PICs are diminished by the effect of the silicon used as a semiconducting substrate.

  13. Development of a compact optical MEMS scanner with integrated VCSEL light source and diffractive optics

    NASA Astrophysics Data System (ADS)

    Krygowski, Thomas W.; Reyes, David; Rodgers, M. Steven; Smith, James H.; Warren, Mial E.; Sweatt, William C.; Blum-Spahn, Olga; Wendt, Joel R.; Asbill, Randolph E.

    1999-09-01

    In this work the design and initial fabrication results are reported for the components of a compact optical-MEMS laser scanning system. This system integrates a silicon MEMS laser scanner, a Vertical Cavity Surface Emitting Laser (VCSEL) and passive optical components. The MEMS scanner and VCSEL are mounted onto a fused silica substrate which serves as an optical interconnect between the devices. Two Diffractive Optical Elements (DOE's) are etched into the fused silica substrate to focus the VCSEL beam and increase the scan range. The silicon MEMS scanner consists of an actuator that continuously scans the position of a large polysilicon gold-coated shuttle containing a third DOE. Interferometric measurements show that the residual stress in the 50 micrometer × 1000 micrometer shuttle is extremely low, with a maximum deflection of only 0.18 micrometer over an 800 micrometer span for the unmetallized case and a deflection of 0.56 micrometer for the metallized case. A conservative estimate for the scan range is approximately plus or minus 4 degrees, with a spot size of about 0.5 mm, producing 50 resolvable spots. The basic system architecture and the optical and MEMS design are reported in this paper, with an emphasis on the design and fabrication of the silicon MEMS scanner portion of the system.

  14. High resolution observations of small-scale gravity waves and turbulence features in the OH airglow layer

    NASA Astrophysics Data System (ADS)

    Sedlak, René; Hannawald, Patrick; Schmidt, Carsten; Wüst, Sabine; Bittner, Michael

    2017-04-01

    A new version of the Fast Airglow Imager (FAIM) for the detection of atmospheric waves in the OH airglow layer has been set up at the German Remote Sensing Data Centre (DFD) of the German Aerospace Centre (DLR) at Oberpfaffenhofen (48.09° N, 11.28° E), Germany. The spatial resolution of the instrument is 17 m/pixel in the zenith direction with a field of view (FOV) of 11.1 km × 9.0 km at the OH layer height of ca. 87 km. Since November 2015, the system has been in operation in two different setups (zenith angles 46° and 0°) with a temporal resolution of 2.5 to 2.8 s. In a first case study we present observations of two small wave-like features that might be attributed to gravity wave instabilities. In order to spectrally analyse harmonic structures even on small spatial scales down to 550 m horizontal wavelength, we made use of the Maximum Entropy Method (MEM), since this method exhibits an excellent wavelength resolution. MEM further allows analysing relatively short data series, which considerably helps to reduce problems such as stationarity of the underlying data series from a statistical point of view. We present an observation of the subsequent decay of well-organized wave fronts into eddies, which we tentatively interpret as an indication of the onset of turbulence. Another remarkable event, which demonstrates the technical capabilities of the instrument, was observed during the night of 4-5 April 2016. It reveals the disintegration of a rather homogeneous brightness variation into several filaments moving in different directions and at different speeds, resembling the formation of a vortex with a horizontal axis of rotation, likely related to a vertical wind shear. This case shows a notable similarity to what is expected from theoretical modelling of Kelvin-Helmholtz instabilities (KHIs). The comparatively high spatial resolution of the presented new version of the FAIM airglow imager provides new insights into the structure of atmospheric wave instability and turbulent processes. Infrared imaging of wave dynamics on the sub-kilometre scale in the airglow layer supports the findings of theoretical simulations and modelling. Parts of this research received funding from the Bavarian State Ministry of the Environment and Consumer Protection.
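    Burg's recursion is the standard route to an MEM spectrum of the kind used for these short data series; a sketch with an illustrative test signal (the line frequency, model order, and noise level are assumptions for the example):

```python
import numpy as np

def burg(x, order):
    """Burg recursion: AR coefficients whose power spectrum is the MEM estimate."""
    f = x.astype(float).copy()        # forward prediction errors
    b = x.astype(float).copy()        # backward prediction errors
    a = np.array([1.0])               # AR polynomial, a[0] = 1
    e = np.mean(x ** 2)               # prediction error power
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
        f, b = ff + k * bb, bb + k * ff
        a = np.concatenate([a, [0.0]]) + k * np.concatenate([[0.0], a[::-1]])
        e *= 1.0 - k ** 2
    return a, e

def mem_spectrum(a, e, freqs):
    """Maximum entropy spectral estimate e / |A(e^{i 2 pi f})|^2."""
    z = np.exp(-2j * np.pi * np.outer(freqs, np.arange(len(a))))
    return e / np.abs(z @ a) ** 2

# Short, slightly noisy sinusoid: MEM pins down the line frequency from 64 samples.
n = np.arange(64)
x = np.sin(2 * np.pi * 0.2 * n) + 0.05 * np.random.default_rng(2).standard_normal(64)
a, e = burg(x, order=8)
freqs = np.linspace(0.0, 0.5, 501)
peak = freqs[np.argmax(mem_spectrum(a, e, freqs))]
```

    The poles of the fitted AR polynomial sit close to the unit circle at the line frequency, which is why MEM typically resolves such a line far more sharply than a 64-sample periodogram.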

  15. A non-uniformly sampled 4D HCC(CO)NH-TOCSY experiment processed using maximum entropy for rapid protein sidechain assignment

    PubMed Central

    Mobli, Mehdi; Stern, Alan S.; Bermel, Wolfgang; King, Glenn F.; Hoch, Jeffrey C.

    2010-01-01

    One of the stiffest challenges in structural studies of proteins using NMR is the assignment of sidechain resonances. Typically, a panel of lengthy 3D experiments is acquired in order to establish connectivities and resolve ambiguities due to overlap. We demonstrate that these experiments can be replaced by a single 4D experiment that is time-efficient, yields excellent resolution, and captures unique carbon-proton connectivity information. The approach is made practical by the use of non-uniform sampling in the three indirect time dimensions and maximum entropy reconstruction of the corresponding 3D frequency spectrum. This 4D method will facilitate automated resonance assignment procedures and it should be particularly beneficial for increasing throughput in NMR-based structural genomics initiatives. PMID:20299257

  16. Novel analytical model for optimizing the pull-in voltage in a flexured MEMS switch incorporating beam perforation effect

    NASA Astrophysics Data System (ADS)

    Guha, K.; Laskar, N. M.; Gogoi, H. J.; Borah, A. K.; Baishnab, K. L.; Baishya, S.

    2017-11-01

    This paper presents a new method for the design, modelling and optimization of a uniform serpentine meander based MEMS shunt capacitive switch with perforation on the upper beam. The new approach is proposed to improve the pull-in voltage performance of a MEMS switch. First, a new analytical model of the pull-in voltage is proposed using the modified Mejis-Fokkema capacitance model, which simultaneously accounts for the nonlinear electrostatic force, the fringing field due to beam thickness, and the etched holes in the beam; the model is validated against simulations with the benchmark full 3D FEM solver CoventorWare over a wide range of structural parameter variations and shows good agreement. Secondly, an optimization method is presented to determine the switch configuration achieving the minimum pull-in voltage, with the proposed analytical model as the objective function. Several high-performance evolutionary optimization algorithms have been utilized to obtain the optimum dimensions with low computational cost and complexity. Among the applied algorithms, the Dragonfly Algorithm is found to be the most suitable in terms of minimum pull-in voltage and convergence speed. Optimized values are validated against CoventorWare simulations with very satisfactory results, showing a small deviation of 0.223 V. In addition, the paper proposes, for the first time, a novel algorithmic approach for the uniform arrangement of square perforation holes in a given beam area of an RF MEMS switch. The algorithm dynamically accommodates all the square holes within the given beam area such that the maximum space is utilized. This automated arrangement of perforation holes further reduces the computational complexity and improves the design accuracy of the complex perforated MEMS switch design.
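    As a baseline for the pull-in analysis, the textbook parallel-plate expression, which the paper's modified model extends with fringing-field and perforation corrections, can be evaluated directly; all parameter values below are illustrative, not the paper's:

```python
import numpy as np

# Textbook parallel-plate pull-in voltage V_pi = sqrt(8*k*g0^3 / (27*eps0*A)).
# This is only the baseline that the modified capacitance model in the abstract
# refines with fringing-field and perforation corrections; values are illustrative.
eps0 = 8.854e-12            # permittivity of free space, F/m
k = 10.0                    # effective spring constant of the meander suspension, N/m
g0 = 3.0e-6                 # initial beam-to-electrode gap, m
A = 100e-6 * 100e-6         # electrode overlap area (no perforation), m^2

V_pi = np.sqrt(8.0 * k * g0 ** 3 / (27.0 * eps0 * A))   # ~30 V for these values
```

    Perforation reduces the effective area A (raising V_pi) while softening squeeze-film damping, which is why the hole arrangement enters the optimization at all.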

  17. Prediction of pKa Values for Neutral and Basic Drugs based on Hybrid Artificial Intelligence Methods.

    PubMed

    Li, Mengshan; Zhang, Huaijing; Chen, Bingsheng; Wu, Yan; Guan, Lixin

    2018-03-05

    The pKa value of drugs is an important parameter in drug design and pharmacology. In this paper, an improved particle swarm optimization (PSO) algorithm was proposed based on population entropy diversity. In the improved algorithm, when the population entropy was higher than the set maximum threshold, the convergence strategy was adopted; when the population entropy was lower than the set minimum threshold, the divergence strategy was adopted; and when the population entropy was between the two thresholds, the self-adaptive adjustment strategy was maintained. The improved PSO algorithm was applied to the training of a radial basis function artificial neural network (RBF ANN) model and the selection of molecular descriptors. A quantitative structure-activity relationship model based on the RBF ANN trained by the improved PSO algorithm was proposed to predict the pKa values of 74 neutral and basic drugs and was then validated on another database containing 20 molecules. The validation results showed that the model had good prediction performance: the absolute average relative error, root mean square error, and squared correlation coefficient were 0.3105, 0.0411, and 0.9685, respectively. The model can be used as a reference for exploring other quantitative structure-activity relationships.
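    A schematic of the entropy-based strategy switch described above; the histogram-based diversity measure and the threshold values are hypothetical illustrations of the idea, not the paper's exact formulation:

```python
import numpy as np

def population_entropy(positions, bins=10):
    """Shannon entropy of a histogram of particle positions: a diversity measure."""
    hist, _ = np.histogram(positions, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def choose_strategy(h, h_min, h_max):
    # Strategy switch as described in the abstract; these threshold semantics
    # follow the abstract, but the numeric values passed in are hypothetical.
    if h > h_max:
        return "convergence"
    if h < h_min:
        return "divergence"
    return "self-adaptive"

rng = np.random.default_rng(1)
spread_out = rng.uniform(-5.0, 5.0, 100)   # diverse swarm -> high entropy
collapsed = np.full(100, 1.3)              # swarm collapsed on a point -> zero entropy
```

    A widely spread swarm triggers the convergence strategy, a collapsed one the divergence strategy, and anything in between is left to the self-adaptive adjustment.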

  18. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle.

    PubMed

    Shalymov, Dmitry S; Fradkov, Alexander L

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for discrete and continuous cases, using both a discrete random variable and a probability density function (PDF). We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined.

  19. Dynamics of non-stationary processes that follow the maximum of the Rényi entropy principle

    PubMed Central

    2016-01-01

    We propose dynamics equations which describe the behaviour of non-stationary processes that follow the maximum Rényi entropy principle. The equations are derived on the basis of the speed-gradient principle originating in control theory. The maximum Rényi entropy principle is analysed for discrete and continuous cases, using both a discrete random variable and a probability density function (PDF). We consider mass conservation and energy conservation constraints and demonstrate the uniqueness of the limit distribution and asymptotic convergence of the PDF for both cases. The coincidence of the limit distribution of the proposed equations with the Rényi distribution is examined. PMID:26997886

  20. Maximum-entropy description of animal movement.

    PubMed

    Fleming, Chris H; Subaşı, Yiğit; Calabrese, Justin M

    2015-03-01

    We introduce a class of maximum-entropy states that naturally includes within it all of the major continuous-time stochastic processes that have been applied to animal movement, including Brownian motion, Ornstein-Uhlenbeck motion, integrated Ornstein-Uhlenbeck motion, a recently discovered hybrid of the previous models, and a new model that describes central-place foraging. We are also able to predict a further hierarchy of new models that will emerge as data quality improves to better resolve the underlying continuity of animal movement. Finally, we show that Langevin equations must obey a fluctuation-dissipation theorem to generate processes that fall within this class of maximum-entropy distributions when the constraints are purely kinematic.

  1. Exact computation of the maximum-entropy potential of spiking neural-network models.

    PubMed

    Cofré, R; Cessac, B

    2014-05-01

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.

  2. Uncertainty quantification in capacitive RF MEMS switches

    NASA Astrophysics Data System (ADS)

    Pax, Benjamin J.

    Development of radio frequency microelectromechanical systems (RF MEMS) has led to novel approaches to implement electrical circuitry. The introduction of capacitive MEMS switches, in particular, has shown promise in low-loss, low-power devices. However, the promise of MEMS switches has not yet been completely realized. RF-MEMS switches are known to fail after only a few months of operation, and nominally similar designs show wide variability in lifetime. Modeling switch operation using nominal or as-designed parameters cannot predict the statistical spread in the number of cycles to failure, and probabilistic methods are necessary. A Bayesian framework for calibration, validation and prediction offers an integrated approach to quantifying the uncertainty in predictions of MEMS switch performance. The objective of this thesis is to use the Bayesian framework to predict the creep-related deflection of the PRISM RF-MEMS switch over several thousand hours of operation. The PRISM switch used in this thesis is the focus of research at Purdue's PRISM center and is a capacitive contacting RF-MEMS switch. It employs a fixed-fixed nickel membrane which is electrostatically actuated by applying a voltage between the membrane and a pull-down electrode. Creep plays a central role in the reliability of this switch. The focus of this thesis is on the creep model, which is calibrated against experimental data measured for a frog-leg varactor fabricated and characterized at Purdue University. Creep plasticity is modeled using plate element theory, with electrostatic forces generated using either parallel-plate approximations where appropriate, or by solving for the full 3D potential field. For the latter, the structure-electrostatics interaction is determined through an immersed boundary method.
A probabilistic framework using generalized polynomial chaos (gPC) is used to create surrogate models that mitigate the cost of the full-physics simulations, and Bayesian calibration and forward propagation of uncertainty are performed using this surrogate model. The first step in the analysis is Bayesian calibration of the creep-related parameters. A computational model of the frog-leg varactor is created, and the computed creep deflection of the device over 800 hours is used to generate a surrogate model using a polynomial chaos expansion in Hermite polynomials. Parameters related to the creep phenomenon are calibrated using Bayesian calibration with experimental deflection data from the frog-leg device. The calibrated input distributions are subsequently propagated through a surrogate gPC model for the PRISM MEMS switch to produce probability density functions of the maximum deflection of the membrane over several thousand hours. The assumptions underlying the Bayesian calibration and forward propagation are analyzed to determine the sensitivity of the calibrated input distributions and propagated output distributions of the PRISM device to these assumptions. The work is an early step in understanding the role of geometric variability, model uncertainty, numerical errors and experimental uncertainties in the long-term performance of RF-MEMS.
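
    The surrogate step can be sketched in a few lines: a 1-D non-intrusive polynomial-chaos expansion in probabilists' Hermite polynomials, with coefficients estimated by Monte Carlo projection. The model function here is a cheap stand-in, not the PRISM creep model:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite, He_n

def expensive_model(xi):
    """Stand-in for a costly full-physics simulation (hypothetical)."""
    return np.exp(0.3 * xi) + 0.1 * xi**2

# Non-intrusive projection: c_n = E[f(xi) He_n(xi)] / n!  for xi ~ N(0, 1),
# using the orthogonality E[He_m He_n] = n! * delta_mn.
rng = np.random.default_rng(0)
xi = rng.standard_normal(200_000)
order = 6
basis = np.eye(order + 1)  # basis[n] is the coefficient vector selecting He_n
coeffs = np.array([
    np.mean(expensive_model(xi) * He.hermeval(xi, basis[n])) / math.factorial(n)
    for n in range(order + 1)
])

def surrogate(x):
    """Cheap polynomial surrogate: sum_n c_n He_n(x)."""
    return He.hermeval(x, coeffs)
```

    Calibration and forward propagation then sample the surrogate instead of the solver; with these settings the surrogate matches the stand-in model to within about a percent over the bulk of the input distribution.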

  3. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083

  4. Pre-release plastic packaging of MEMS and IMEMS devices

    DOEpatents

    Peterson, Kenneth A.; Conley, William R.

    2002-01-01

    A method is disclosed for pre-release plastic packaging of MEMS and IMEMS devices. The method can include encapsulating the MEMS device in a transfer-molded plastic package. Next, a perforation can be made in the package by a non-ablative material removal process to provide access to the MEMS elements; this process can include wet etching, dry etching, mechanical machining, water jet cutting, and ultrasonic machining, or any combination thereof. Finally, the MEMS elements can be released by using either a wet etching or dry plasma etching process. The MEMS elements can be protected with a parylene protective coating. After releasing the MEMS elements, an anti-stiction coating can be applied. The perforating step can be applied to both sides of the device or package. A cover lid can be attached to the face of the package after releasing any MEMS elements. The cover lid can include a window for providing optical access. The method can be applied to any plastic-packaged microelectronic device that requires access to the environment, including chemical, pressure, or temperature-sensitive microsensors; CCD chips, photocells, laser diodes, VCSELs, and UV-EPROMs. The present method places the high-risk packaging steps ahead of the release of the fragile portions of the device. It also provides protection for the die in shipment between the molding house and the house that will release the MEMS elements and subsequently treat the surfaces.

  5. Holographic equipartition and the maximization of entropy

    NASA Astrophysics Data System (ADS)

    Krishna, P. B.; Mathew, Titus K.

    2017-09-01

    The accelerated expansion of the Universe can be interpreted as a tendency to satisfy holographic equipartition. It can be expressed by a simple law, ΔV = Δt (Nsurf − ε Nbulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, and Nsurf/bulk is the number of degrees of freedom on the horizon/in the bulk of the Universe. We show that this holographic equipartition law effectively implies the maximization of entropy. In the cosmological context, a system that obeys the holographic equipartition law behaves as an ordinary macroscopic system that proceeds to an equilibrium state of maximum entropy. We consider the standard ΛCDM model of the Universe and show that it is consistent with the holographic equipartition law. Analyzing the entropy evolution, we find that it also proceeds to an equilibrium state of maximum entropy.
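
    In conventional notation the law and the entropy-maximization statement it implies read as follows (the surface degree-of-freedom count is the standard choice from the emergent-gravity literature, not spelled out in the abstract):

```latex
% Holographic equipartition law (V, t in Planck units):
\Delta V = \Delta t \left( N_{\mathrm{surf}} - \varepsilon N_{\mathrm{bulk}} \right),
\qquad
N_{\mathrm{surf}} = \frac{4\pi}{H^2} \quad \text{(horizon area in Planck units)}.

% "Proceeds to an equilibrium state of maximum entropy" means the horizon
% entropy S satisfies, asymptotically,
\dot{S} \ge 0, \qquad \ddot{S} < 0 \ \text{as } t \to \infty .
```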

  6. Fast autonomous holographic adaptive optics

    NASA Astrophysics Data System (ADS)

    Andersen, G.

    2010-07-01

    We have created a new adaptive optics system using a holographic modal wavefront sensing method capable of autonomous (computer-free) closed-loop control of a MEMS deformable mirror. A multiplexed hologram is recorded using the maximum and minimum actuator positions on the deformable mirror as the "modes". On reconstruction, an input beam is diffracted into pairs of focal spots; the ratio of a particular pair determines the absolute wavefront phase at a particular actuator location. The wavefront measurement is made using a fast, sensitive photo-detector array such as a multi-pixel photon counter. This information is then used to directly control each actuator in the MEMS DM without the need for any computer in the loop. We present initial results of a 32-actuator prototype device. We further demonstrate that, being an all-optical, parallel processing scheme, the speed is independent of the number of actuators; the limitations on speed are ultimately determined by the maximum driving speed of the DM actuators themselves. Finally, being modal in nature, the system is largely insensitive to both obscuration and scintillation. This should make it ideal for laser beam transmission or imaging under highly turbulent conditions.

  7. Merging daily sea surface temperature data from multiple satellites using a Bayesian maximum entropy method

    NASA Astrophysics Data System (ADS)

    Tang, Shaolei; Yang, Xiaofeng; Dong, Di; Li, Ziwei

    2015-12-01

    Sea surface temperature (SST) is an important variable for understanding interactions between the ocean and the atmosphere. SST fusion is crucial for acquiring SST products of high spatial resolution and coverage. This study introduces a Bayesian maximum entropy (BME) method for blending daily SSTs from multiple satellite sensors. A new spatiotemporal covariance model of an SST field is built to integrate not only single-day SSTs but also time-adjacent SSTs. In addition, AVHRR 30-year SST climatology data are introduced as soft data at the estimation points to improve the accuracy of blended results within the BME framework. The merged SSTs, with a spatial resolution of 4 km and a temporal resolution of 24 hours, are produced in the Western Pacific Ocean region to demonstrate and evaluate the proposed methodology. Comparisons with in situ drifting buoy observations show that the merged SSTs are accurate and the bias and root-mean-square errors for the comparison are 0.15°C and 0.72°C, respectively.

  8. Maximum entropy estimation of a Benzene contaminated plume using ecotoxicological assays.

    PubMed

    Wahyudi, Agung; Bartzke, Mariana; Küster, Eberhard; Bogaert, Patrick

    2013-01-01

    Ecotoxicological bioassays, e.g. based on Danio rerio teratogenicity (DarT) or the acute luminescence inhibition of Vibrio fischeri, could potentially lead to significant benefits for detecting on-site contamination on a qualitative or semi-quantitative basis. The aim was to use the observed effects of two ecotoxicological assays to estimate the extent of a benzene groundwater contamination plume. We used a Maximum Entropy (MaxEnt) method to rebuild a bivariate probability table that links the observed toxicity from the bioassays with benzene concentrations. Compared with direct mapping of the contamination plume as obtained from groundwater samples, the MaxEnt concentration map exhibits on average slightly higher concentrations, though the global pattern is close to it. This suggests that MaxEnt is a valuable method for building a relationship between quantitative data, e.g. contaminant concentrations, and more qualitative or indirect measurements in a spatial mapping framework, which is especially useful when a clear quantitative relation is not at hand. Copyright © 2012 Elsevier Ltd. All rights reserved.

  9. Magnetocaloric effect in potassium doped lanthanum manganite perovskites prepared by a pyrophoric method

    NASA Astrophysics Data System (ADS)

    Das, Soma; Dey, T. K.

    2006-08-01

    The magnetocaloric effect (MCE) in fine grained perovskite manganites of the type La1-xKxMnO3 (0

  10. Fast Maximum Entropy Moment Closure Approach to Solving the Boltzmann Equation

    NASA Astrophysics Data System (ADS)

    Summy, Dustin; Pullin, Dale

    2015-11-01

    We describe a method for a moment-based solution of the Boltzmann Equation (BE). This is applicable to an arbitrary set of velocity moments whose transport is governed by partial differential equations (PDEs) derived from the BE. The equations are unclosed, containing both higher-order moments and molecular-collision terms. These are evaluated using a maximum-entropy reconstruction of the velocity distribution function f(c, x, t), from the known moments, within a finite-box domain of single-particle velocity (c) space. Use of a finite domain alleviates known problems (Junk and Unterreiter, Continuum Mech. Thermodyn., 2002) concerning existence and uniqueness of the reconstruction. Unclosed moments are evaluated with quadrature, while collision terms are calculated using any desired method. This allows integration of the moment PDEs in time. The high computational cost of the general method is greatly reduced by careful choice of the velocity moments, allowing the necessary integrals to be reduced from three- to one-dimensional in the case of strictly 1D flows. A method to extend this enhancement to fully 3D flows is discussed. Comparisons with relaxation and shock-wave problems using the DSMC method will be presented. Partially supported by NSF grant DMS-1418903.
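
    A minimal version of the finite-domain reconstruction can be written down directly: solve the convex dual problem for the multipliers of f(c) = exp(Σ λ_k c^k) on [−L, L] so that the prescribed moments are matched. The quadrature and solver choices here are illustrative, not the authors':

```python
import numpy as np
from scipy.optimize import minimize

def maxent_reconstruct(moments, L=5.0, n_quad=400):
    """Maximum-entropy reconstruction of f(c) = exp(sum_k lam_k c^k) on the
    finite box [-L, L], matching target moments m_k = int c^k f dc, k=0..K.
    Solves the (convex) dual problem by quasi-Newton minimization."""
    moments = np.asarray(moments, dtype=float)
    K = len(moments) - 1
    c, w = np.polynomial.legendre.leggauss(n_quad)
    c, w = L * c, L * w                            # map nodes/weights to [-L, L]
    powers = np.vander(c, K + 1, increasing=True)  # columns c^0 .. c^K

    def dual(lam):
        f = np.exp(powers @ lam)
        value = w @ f - lam @ moments              # dual objective
        grad = powers.T @ (w * f) - moments        # computed - target moments
        return value, grad

    lam = minimize(dual, np.zeros(K + 1), jac=True, method="BFGS").x
    return lambda x: np.exp(np.vander(np.atleast_1d(x), K + 1, increasing=True) @ lam)
```

    Feeding in the first three moments of exp(−c²) restricted to [−5, 5] recovers that density, i.e. the multipliers converge to (0, 0, −1).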

  11. Bayesian Methods and Universal Darwinism

    NASA Astrophysics Data System (ADS)

    Campbell, John

    2009-12-01

    Bayesian methods since the time of Laplace have been understood by their practitioners as closely aligned to the scientific method. Indeed, a recent champion of Bayesian methods, E. T. Jaynes, titled his textbook on the subject Probability Theory: The Logic of Science. Many philosophers of science, including Karl Popper and Donald Campbell, have interpreted the evolution of science as a Darwinian process consisting of a 'copy with selective retention' algorithm abstracted from Darwin's theory of natural selection. Arguments are presented for an isomorphism between Bayesian methods and Darwinian processes. Universal Darwinism, as the term has been developed by Richard Dawkins, Daniel Dennett and Susan Blackmore, is the collection of scientific theories which explain the creation and evolution of their subject matter as due to the operation of Darwinian processes. These subject matters span the fields of atomic physics, chemistry, biology and the social sciences. The principle of maximum entropy states that systems will evolve to states of highest entropy subject to the constraints of scientific law. This principle may be inverted to provide illumination as to the nature of scientific law. Our best cosmological theories suggest the universe contained much less complexity during the period shortly after the Big Bang than it does at present. The scientific subject matter of atomic physics, chemistry, biology and the social sciences has been created since that time. An explanation is proposed for the existence of this subject matter as due to the evolution of constraints in the form of adaptations imposed on maximum entropy. It is argued these adaptations were discovered and instantiated through the operation of a succession of Darwinian processes.

  12. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  13. Bayesian Probability Theory

    NASA Astrophysics Data System (ADS)

    von der Linden, Wolfgang; Dose, Volker; von Toussaint, Udo

    2014-06-01

    Preface; Part I. Introduction: 1. The meaning of probability; 2. Basic definitions; 3. Bayesian inference; 4. Combinatorics; 5. Random walks; 6. Limit theorems; 7. Continuous distributions; 8. The central limit theorem; 9. Poisson processes and waiting times; Part II. Assigning Probabilities: 10. Transformation invariance; 11. Maximum entropy; 12. Qualified maximum entropy; 13. Global smoothness; Part III. Parameter Estimation: 14. Bayesian parameter estimation; 15. Frequentist parameter estimation; 16. The Cramer-Rao inequality; Part IV. Testing Hypotheses: 17. The Bayesian way; 18. The frequentist way; 19. Sampling distributions; 20. Bayesian vs frequentist hypothesis tests; Part V. Real World Applications: 21. Regression; 22. Inconsistent data; 23. Unrecognized signal contributions; 24. Change point problems; 25. Function estimation; 26. Integral equations; 27. Model selection; 28. Bayesian experimental design; Part VI. Probabilistic Numerical Techniques: 29. Numerical integration; 30. Monte Carlo methods; 31. Nested sampling; Appendixes; References; Index.

  14. Multi-Group Maximum Entropy Model for Translational Non-Equilibrium

    NASA Technical Reports Server (NTRS)

    Jayaraman, Vegnesh; Liu, Yen; Panesi, Marco

    2017-01-01

    The aim of the current work is to describe a new model for flows in translational non-equilibrium. Starting from the statistical description of a gas proposed by Boltzmann, the model relies on a domain decomposition technique in velocity space. Using the maximum entropy principle, the logarithm of the distribution function in each velocity sub-domain (group) is expressed as a power series in molecular velocity. New governing equations are obtained using the method of weighted residuals by taking the velocity moments of the Boltzmann equation. The model is applied to a spatially homogeneous Boltzmann equation with a Bhatnagar-Gross-Krook (BGK) model collision operator, and the relaxation of an initial non-equilibrium distribution to a Maxwellian is studied using the model. In addition, numerical results obtained using the model for a 1D shock tube problem are also reported.

  15. Development of a Compact Optical-MEMS Scanner with Integrated VCSEL Light Source and Diffractive Optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krygowski, Thomas W.; Reyes, David; Rodgers, M. Steven

    1999-06-30

    In this work the design and initial fabrication results are reported for the components of a compact optical-MEMS laser scanning system. This system integrates a silicon MEMS laser scanner, a Vertical Cavity Surface Emitting Laser (VCSEL) and passive optical components. The MEMS scanner and VCSEL are mounted onto a fused silica substrate which serves as an optical interconnect between the devices. Two Diffractive Optical Elements (DOEs) are etched into the fused silica substrate to focus the VCSEL beam and increase the scan range. The silicon MEMS scanner consists of an actuator that continuously scans the position of a large polysilicon, gold-coated shuttle containing a third DOE. Interferometric measurements show that the residual stress in the 500 µm x 1000 µm shuttle is extremely low, with a maximum deflection of only 0.18 µm over an 800 µm span for the unmetallized case and a deflection of 0.56 µm for the metallized case. A conservative estimate for the scan range is ~±4°, with a spot size of about 0.5 mm, producing 50 resolvable spots. The basic system architecture, optical and MEMS design is reported in this paper, with an emphasis on the design and fabrication of the silicon MEMS scanner portion of the system.

  16. [Quantitative assessment of urban ecosystem services flow based on entropy theory: A case study of Beijing, China].

    PubMed

    Li, Jing Xin; Yang, Li; Yang, Lei; Zhang, Chao; Huo, Zhao Min; Chen, Min Hao; Luan, Xiao Feng

    2018-03-01

    Quantitative evaluation of ecosystem services is a primary prerequisite for rational resource exploitation and sustainable development. Examining ecosystem services flow provides a scientific method to quantify ecosystem services. We built an assessment indicator system based on land cover/land use within the framework of four types of ecosystem services, and reclassified the types of ecosystem services flow. Using entropy theory, the degree of disorder and the developing trend of the indicators and of the urban ecosystem were quantitatively assessed. Beijing was chosen as the study area, and twenty-four indicators were selected for evaluation. The results showed that the entropy value of the Beijing urban ecosystem during 2004 to 2015 was 0.794 and the entropy flow was -0.024, suggesting a high degree of disorder, with the system near the verge of ill health. The system reached maximum values three times, while the mean annual variation of the system entropy value increased gradually over three periods, indicating that human activities had negative effects on the urban ecosystem. Entropy flow reached its minimum value in 2007, implying that environmental quality was best in 2007. The determination coefficient for the fitting function of the total permanent population of Beijing and the urban ecosystem entropy flow was 0.921, indicating that urban ecosystem health was highly correlated with total permanent population.
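
    The entropy calculation behind such indicator assessments is commonly the normalized Shannon entropy of each indicator's year-by-year shares. A generic sketch, not necessarily the paper's exact formulation:

```python
import numpy as np

def indicator_entropy(X):
    """Normalized Shannon entropy of each indicator across years.

    X: (n_years, n_indicators) matrix of non-negative indicator values.
    Returns one entropy per indicator in [0, 1]: 1 means the indicator is
    spread evenly over the years, 0 means it is concentrated in one year.
    """
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                    # share of each year, per indicator
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    return -plogp.sum(axis=0) / np.log(X.shape[0])
```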

  17. Ku to V-band 4-bit MEMS phase shifter bank using high isolation SP4T switches and DMTL structures

    NASA Astrophysics Data System (ADS)

    Dey, Sukomal; Koul, Shiban K.; Poddar, Ajay K.; Rohde, Ulrich L.

    2017-10-01

    This work presents a micro-electro-mechanical system (MEMS) based wide-band 4-bit phase shifter using two back-to-back single-pole-four-throw (SP4T) switches and four different distributed MEMS transmission line (DMTL) structures, implemented on a 635 µm alumina substrate using a surface micromachining process. An SP4T switch is designed with a series-shunt configuration and demonstrates an average return loss of >17 dB, an insertion loss of <1.97 dB and a maximum isolation of >28 dB up to 60 GHz. The maximum area of the SP4T switch is ~0.76 mm2. Single-pole-single-throw and SP4T switches are capable of handling 1 W of radio frequency (RF) power for >100 million cycles at 25 °C, and can sustain >70 million cycles with 1 W at 85 °C. The proposed wide-band phase shifter works at 17 GHz (Ku-band), 25 GHz (K-band), 35 GHz (Ka-band) and 60 GHz (V-band) frequencies. Finally, the 4-bit phase shifter demonstrates an average insertion loss of <6 dB, a return loss of >10 dB and a maximum phase error of ~3.8° at 60 GHz over a 500 MHz bandwidth. The total area of the fabricated device is ~11 mm2. In addition, the proposed device works well up to >10^7 cycles with 1 W of RF power. To the best of the authors' knowledge, this is the best reported wide-band MEMS 4-bit phase shifter in the literature that works with a constant resolution.

  18. Method for spatially modulating X-ray pulses using MEMS-based X-ray optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez, Daniel; Shenoy, Gopal; Wang, Jin

    A method and apparatus are provided for spatially modulating X-rays or X-ray pulses using microelectromechanical systems (MEMS) based X-ray optics. A torsionally-oscillating MEMS micromirror and a method of leveraging the grazing-angle reflection property are provided to modulate X-ray pulses with a high-degree of controllability.

  19. In-flight performance analysis of MEMS GPS receiver and its application to precise orbit determination of APOD-A satellite

    NASA Astrophysics Data System (ADS)

    Gu, Defeng; Liu, Ye; Yi, Bin; Cao, Jianfeng; Li, Xie

    2017-12-01

    An experimental satellite mission termed atmospheric density detection and precise orbit determination (APOD) was developed by China and launched on 20 September 2015. The micro-electro-mechanical system (MEMS) GPS receiver provides the basis for precise orbit determination (POD) within the range of a few decimetres. The in-flight performance of the MEMS GPS receiver was assessed. The average number of tracked GPS satellites is 10.7. However, only 5.1 GPS satellites are available for dual-frequency navigation because of the loss of many L2 observations at low elevations. The variations in the multipath error for C1 and P2 were estimated, and the maximum multipath error can reach up to 0.8 m. The average code noises are 0.28 m (C1) and 0.69 m (P2). Using the MEMS GPS receiver, the orbit of the APOD nanosatellite (APOD-A) was precisely determined. Two types of orbit solutions are proposed: a dual-frequency solution and a single-frequency solution. The antenna phase center variations (PCVs) and code residual variations (CRVs) were estimated, and the maximum value of the PCVs is 4.0 cm. After correcting for the antenna PCVs and CRVs, the final orbit precisions for the dual-frequency and single-frequency solutions were 7.71 cm and 12.91 cm, respectively, as validated using satellite laser ranging (SLR) data; the corrections improved the precision by 3.35 cm and 25.25 cm, respectively. The average RMS of the 6-h overlap differences in the dual-frequency solution between two consecutive days in three dimensions (3D) is 4.59 cm. The MEMS GPS receiver is a Chinese indigenous onboard receiver which was successfully used in the POD of a nanosatellite. This study has important reference value for improving the MEMS GPS receiver and for its application in other low Earth orbit (LEO) nanosatellites.

  20. Hardware platforms for MEMS gyroscope tuning based on evolutionary computation using open-loop and closed-loop frequency response

    NASA Technical Reports Server (NTRS)

    Keymeulen, Didier; Ferguson, Michael I.; Fink, Wolfgang; Oks, Boris; Peay, Chris; Terrile, Richard; Cheng, Yen; Kim, Dennis; MacDonald, Eric; Foor, David

    2005-01-01

    We propose a tuning method for MEMS gyroscopes based on evolutionary computation that efficiently increases the sensitivity of MEMS gyroscopes through tuning. The tuning method was tested on the second-generation JPL/Boeing post-resonator MEMS gyroscope using measurements of the frequency response of the MEMS device in open-loop operation. We also report on the development of a hardware platform for integrated tuning and closed-loop operation of MEMS gyroscopes. The control of this device is implemented through a digital design on a Field Programmable Gate Array (FPGA). The hardware platform easily transitions to an embedded solution that allows for the miniaturization of the system to a single chip.
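
    As a sketch of the idea (not JPL's implementation; in the real system the cost function is a measured frequency response of the hardware), a minimal (1+1) evolution strategy over a vector of bias voltages looks like:

```python
import random

def tune_bias(cost, n_params=4, sigma=0.05, iters=200, seed=1):
    """(1+1) evolution strategy: mutate the bias-voltage vector with Gaussian
    noise and keep the mutant whenever the measured cost (e.g. the gyroscope's
    frequency split) improves. `cost` stands in for a hardware measurement."""
    rng = random.Random(seed)
    v = [0.0] * n_params
    best = cost(v)
    for _ in range(iters):
        candidate = [x + rng.gauss(0.0, sigma) for x in v]
        c = cost(candidate)
        if c < best:              # greedy selection: keep only improvements
            v, best = candidate, c
    return v, best
```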

  1. Economics and Maximum Entropy Production

    NASA Astrophysics Data System (ADS)

    Lorenz, R. D.

    2003-04-01

    Price differentials, sales volume and profit can be seen as analogues of temperature difference, heat flow, and work or entropy production in the climate system. One aspect in which economic systems exhibit more clarity than the climate is that the empirical and/or statistical mechanical tendency for systems to seek a maximum in production is very evident in economics, in that the profit motive is very clear. Noting the common link of 1/f noise, power laws and self-organized criticality with maximum entropy production, the power-law fluctuations in security and commodity prices are not inconsistent with the analogy. There is an additional thermodynamic analogy, in that scarcity is valued. A commodity concentrated among a few traders is valued highly by the many who do not have it. The market therefore encourages, via prices, the spreading of those goods among a wider group, just as heat tends to diffuse, increasing entropy. I explore some empirical price-volume relationships of metals and meteorites in this context.

  2. Metabolic networks evolve towards states of maximum entropy production.

    PubMed

    Unrean, Pornkamol; Srienc, Friedrich

    2011-11-01

    A metabolic network can be described by a set of elementary modes or pathways representing discrete metabolic states that support cell function. We have recently shown that in the most likely metabolic state the usage probability of individual elementary modes is distributed according to the Boltzmann distribution law while complying with the principle of maximum entropy production. To demonstrate that a metabolic network evolves towards such a state, we have carried out adaptive evolution experiments with Thermoanaerobacterium saccharolyticum operating with a reduced metabolic functionality based on a reduced set of elementary modes. In such a reduced metabolic network, metabolic fluxes can be conveniently computed from the measured metabolite secretion pattern. Over a time span of 300 generations, the specific growth rate of the strain continuously increased, together with a continuous increase in the rate of entropy production. We show that the rate of entropy production asymptotically approaches the maximum entropy production rate predicted from the state in which the usage probability of individual elementary modes is distributed according to the Boltzmann distribution. Therefore, the outcome of evolution of a complex biological system can be predicted in highly quantitative terms using basic statistical mechanical principles. Copyright © 2011 Elsevier Inc. All rights reserved.
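
    The Boltzmann-law usage distribution itself is a one-liner: probabilities proportional to exponential weights over the elementary modes. The `scores` argument below is an illustrative stand-in for the thermodynamic quantity the paper derives the weights from:

```python
import numpy as np

def boltzmann_usage(scores, beta=1.0):
    """Boltzmann-distributed usage probabilities over elementary modes:
    p_i proportional to exp(beta * score_i), computed with the usual
    max-shift (log-sum-exp) trick for numerical stability."""
    s = beta * np.asarray(scores, dtype=float)
    s -= s.max()                 # shift so the largest exponent is 0
    p = np.exp(s)
    return p / p.sum()
```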

  3. Time dependence of Hawking radiation entropy

    NASA Astrophysics Data System (ADS)

    Page, Don N.

    2013-09-01

    If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM0², or about 7.509M0² ≈ 6.268 × 10⁷⁶(M0/Msolar)², using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018M0² ≈ 1.254 × 10⁷⁷(M0/Msolar)², and then decreases back down to 4πM0² = 1.049 × 10⁷⁷(M0/Msolar)².
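
    The abstract's headline numbers can be checked with a few lines of arithmetic in Planck units (the physical constants below are standard rounded values):

```python
import math

M_sun_kg = 1.989e30        # solar mass
m_planck_kg = 2.176e-8     # Planck mass
M0 = M_sun_kg / m_planck_kg   # one solar mass expressed in Planck units

# original Bekenstein-Hawking entropy 4*pi*M0^2, abstract quotes 1.049e77
S_BH0 = 4 * math.pi * M0**2

# coefficient of M0^2 at the crossover, 59.75% of 4*pi, abstract quotes 7.509
S_cross_coeff = 0.5975 * 4 * math.pi
```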

  4. Using maximum entropy modeling to identify and prioritize red spruce forest habitat in West Virginia

    Treesearch

    Nathan R. Beane; James S. Rentch; Thomas M. Schuler

    2013-01-01

    Red spruce forests in West Virginia are found in island-like distributions at high elevations and provide essential habitat for the endangered Cheat Mountain salamander and the recently delisted Virginia northern flying squirrel. Therefore, it is important to identify restoration priorities of red spruce forests. Maximum entropy modeling was used to identify areas of...

  5. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    PubMed Central

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed in this paper for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments. PMID:27455279
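
    As context for the comparison above, the traditional Otsu method that the improved TSM builds on picks the threshold maximizing the between-class variance of the image histogram. A minimal NumPy sketch of that baseline (not the authors' improved algorithm):

```python
import numpy as np

def otsu_threshold(img):
    # exhaustive search over thresholds for maximum between-class variance
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0   # class means
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# toy bimodal "image": dark background at 50, bright foreground at 200
img = np.concatenate([np.full(100, 50), np.full(100, 200)])
t = otsu_threshold(img)
```

    For a cleanly bimodal histogram the returned threshold falls between the two modes, separating foreground from background.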

  6. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed in this paper for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments.

  7. Coupling diffusion and maximum entropy models to estimate thermal inertia

    USDA-ARS?s Scientific Manuscript database

    Thermal inertia is a physical property of soil at the land surface related to water content. We have developed a method for estimating soil thermal inertia using two daily measurements of surface temperature, to capture the diurnal range, and diurnal time series of net radiation and specific humidi...

  8. Stochastic modeling and control system designs of the NASA/MSFC Ground Facility for large space structures: The maximum entropy/optimal projection approach

    NASA Technical Reports Server (NTRS)

    Hsia, Wei-Shen

    1986-01-01

    In the Control Systems Division of the Systems Dynamics Laboratory of the NASA/MSFC, a Ground Facility (GF), in which the dynamics and control system concepts being considered for Large Space Structures (LSS) applications can be verified, was designed and built. One of the important aspects of the GF is to design an analytical model which will be as close to experimental data as possible so that a feasible control law can be generated. Using Hyland's Maximum Entropy/Optimal Projection Approach, a procedure was developed in which the maximum entropy principle is used for stochastic modeling and the optimal projection technique is used for a reduced-order dynamic compensator design for a high-order plant.

  9. Inertial measurement unit using rotatable MEMS sensors

    DOEpatents

    Kohler, Stewart M [Albuquerque, NM; Allen, James J [Albuquerque, NM

    2007-05-01

    A MEM inertial sensor (e.g. accelerometer, gyroscope) having integral rotational means for providing static and dynamic bias compensation is disclosed. A bias compensated MEM inertial sensor is described comprising a MEM inertial sense element disposed on a rotatable MEM stage. A MEM actuator drives the rotation of the stage between at least two predetermined rotational positions. Measuring and comparing the output of the MEM inertial sensor in the at least two rotational positions allows for both static and dynamic bias compensation in inertial calculations based on the sensor's output. An inertial measurement unit (IMU) comprising a plurality of independently rotatable MEM inertial sensors and methods for making bias compensated inertial measurements are disclosed.
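
    The two-position measurement can be illustrated with a toy model: rotating the sense axis by 180 degrees negates the inertial signal but leaves the bias term unchanged, so the sum and difference of the two readings separate bias from signal. The function and numbers below are illustrative, not from the patent:

```python
def two_position_split(out_0deg, out_180deg):
    # bias is common to both positions; the true signal flips sign at 180 deg
    bias = (out_0deg + out_180deg) / 2.0
    signal = (out_0deg - out_180deg) / 2.0
    return bias, signal

true_rate, bias = 1.5, 0.2   # hypothetical rate and static bias
b_est, s_est = two_position_split(true_rate + bias, -true_rate + bias)
```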

  10. Inertial measurement unit using rotatable MEMS sensors

    DOEpatents

    Kohler, Stewart M.; Allen, James J.

    2006-06-27

    A MEM inertial sensor (e.g. accelerometer, gyroscope) having integral rotational means for providing static and dynamic bias compensation is disclosed. A bias compensated MEM inertial sensor is described comprising a MEM inertial sense element disposed on a rotatable MEM stage. A MEM actuator drives the rotation of the stage between at least two predetermined rotational positions. Measuring and comparing the output of the MEM inertial sensor in the at least two rotational positions allows for both static and dynamic bias compensation in inertial calculations based on the sensor's output. An inertial measurement unit (IMU) comprising a plurality of independently rotatable MEM inertial sensors and methods for making bias compensated inertial measurements are disclosed.

  11. A geometrical defect detection method for non-silicon MEMS part based on HU moment invariants of skeleton image

    NASA Astrophysics Data System (ADS)

    Cheng, Xu; Jin, Xin; Zhang, Zhijing; Lu, Jun

    2014-01-01

    In order to improve the accuracy of geometrical defect detection, this paper presents a method based on Hu moment invariants of skeleton images. The method has four steps: first, grayscale images of non-silicon MEMS parts are collected and converted into binary images; second, skeletons of the binary images are extracted using the medial-axis-transform method; then, Hu moment invariants of the skeleton images are calculated; finally, the differences in Hu moment invariants between measured parts and qualified parts are obtained to determine whether there are geometrical defects. To demonstrate the validity of this method, experiments were carried out on both skeleton images and grayscale images, and the results show that when the defects of a non-silicon MEMS part are the same, Hu moment invariants of skeleton images are more sensitive than those of grayscale images, and detection accuracy is higher. Therefore, this method can more accurately determine whether non-silicon MEMS parts are qualified, and can be applied to a non-silicon MEMS part detection system.
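
    The translation invariance that makes Hu moments suitable for comparing measured and qualified parts can be checked with the first invariant alone, φ1 = η20 + η02. A pure-NumPy sketch on a toy binary image (the full method uses all seven invariants, computed on skeleton images):

```python
import numpy as np

def hu_phi1(img):
    # first Hu moment invariant phi1 = eta20 + eta02 of a binary image,
    # with normalized central moments eta_pq = mu_pq / m00^((p+q)/2 + 1)
    ys, xs = np.nonzero(img)
    m00 = len(xs)
    xbar, ybar = xs.mean(), ys.mean()
    mu20 = ((xs - xbar) ** 2).sum()
    mu02 = ((ys - ybar) ** 2).sum()
    return mu20 / m00**2 + mu02 / m00**2

img = np.zeros((20, 20), dtype=int)
img[5:10, 5:12] = 1                          # toy rectangular "part"
shifted = np.roll(img, (4, 3), axis=(0, 1))  # same shape, translated
```

    A defect check would then threshold the difference in invariants between a measured part and a qualified reference.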

  12. Control of solid-state lasers using an intra-cavity MEMS micromirror.

    PubMed

    Lubeigt, Walter; Gomes, Joao; Brown, Gordon; Kelly, Andrew; Savitski, Vasili; Uttamchandani, Deepak; Burns, David

    2011-01-31

    High reflectivity, electrothermal and electrostatic MEMS (Micro-Electro-Mechanical Systems) micromirrors were used as a control element within a Nd-doped laser cavity. Stable continuous-wave oscillation of a 3-mirror Nd:YLF laser at a maximum output power of 200 mW was limited by thermally-induced surface deformation of the micromirror. An electrostatic micromirror was used to induce Q-switching, resulting in pulse durations of 220 ns - 2 μs over a repetition frequency range of 6 kHz - 40 kHz.

  13. Recent advances in phase shifted time averaging and stroboscopic interferometry

    NASA Astrophysics Data System (ADS)

    Styk, Adam; Józwik, Michał

    2016-08-01

    Classical Time Averaging and Stroboscopic Interferometry are widely used for MEMS/MOEMS dynamic-behavior investigations. Unfortunately, both methods require extensive measurement and data-processing strategies in order to evaluate the maximum amplitude of the vibrating object at a given load. In this paper, modified data-processing strategies for both techniques are introduced. These modifications allow fast and reliable calculation of the sought value without additional complication of the measurement systems. Throughout the paper, both approaches are discussed and experimentally verified.

  14. Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.

    PubMed

    Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman

    2010-08-07

    We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
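
    The anato-functional prior rests on the joint entropy of the MR and PET features, which is computable from their joint histogram. A minimal sketch with synthetic data (the bin count is an arbitrary choice, not a value from the paper):

```python
import numpy as np

def joint_entropy(a, b, bins=16):
    # Shannon joint entropy (bits) from the 2D histogram of two images
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
a = rng.random(10000)
b = rng.random(10000)          # statistically independent of a
aligned = joint_entropy(a, a)  # perfectly dependent "images"
mixed = joint_entropy(a, b)
```

    Dependent images yield lower joint entropy than independent ones, which is why minimizing this quantity (via the prior) pulls the PET parametric image toward the anatomical structure in the MR image.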

  15. Method for fabricating a microelectromechanical resonator

    DOEpatents

    Wojciechowski, Kenneth E; Olsson, III, Roy H

    2013-02-05

    A method is disclosed which calculates dimensions for a MEM resonator in terms of integer multiples of a grid width G for reticles used to fabricate the resonator, including an actual sub-width L_a = NG and an effective electrode width W_e = MG, where N and M are integers which minimize a frequency error f_e = f_d - f_a between a desired resonant frequency f_d and an actual resonant frequency f_a. The method can also be used to calculate an overall width W_o for the MEM resonator, and an effective electrode length L_e which provides a desired motional impedance for the MEM resonator. The MEM resonator can then be fabricated using these values for L_a, W_e, W_o and L_e. The method can also be applied to a number j of MEM resonators formed on a common substrate.
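
    The grid-quantization idea can be illustrated with a toy helper that snaps a target dimension to the nearest integer multiple of the reticle grid width G. This is only a sketch of the constraint; the patent's actual method jointly chooses N and M against a resonant-frequency model, which is not reproduced here:

```python
def snap_to_grid(target, G):
    # nearest integer multiple of the reticle grid width G
    N = max(1, round(target / G))
    return N, N * G

# hypothetical example: a 13.7 um target width on a 0.5 um grid
N, W = snap_to_grid(13.7, 0.5)
```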

  16. Modelling of a bridge-shaped nonlinear piezoelectric energy harvester

    NASA Astrophysics Data System (ADS)

    Gafforelli, G.; Xu, R.; Corigliano, A.; Kim, S. G.

    2013-12-01

    Piezoelectric MicroElectroMechanical Systems (MEMS) energy harvesting is an attractive technology for harvesting small magnitudes of energy from ambient vibrations. Increasing the operating frequency bandwidth of such devices is one of the major issues for real world applications. A MEMS-scale doubly clamped nonlinear beam resonator is designed and developed to demonstrate very wide bandwidth and high power density. In this paper a first complete theoretical discussion of nonlinear resonating piezoelectric energy harvesting is provided. The sectional behaviour of the beam is studied through the Classical Lamination Theory (CLT), specifically modified to introduce the piezoelectric coupling and the nonlinear Green-Lagrange strain tensor. A lumped parameter model is built through the Rayleigh-Ritz Method and the resulting nonlinear coupled equations are solved in the frequency domain through the Harmonic Balance Method (HBM). Finally, the influence of external load resistance on the dynamic behaviour is studied. The theoretical model shows that nonlinear resonant harvesters have much wider power bandwidth than linear resonators, but their maximum power is still bounded by the mechanical damping as is the case for linear resonating harvesters.

  17. Application of the maximum entropy principle to determine ensembles of intrinsically disordered proteins from residual dipolar couplings.

    PubMed

    Sanchez-Martinez, M; Crehuet, R

    2014-12-21

    We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between both force fields. Thus the spread of IDP ensembles highlights the need for better force fields. We distribute our algorithm in an open-source Python code.
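
    The re-weighting step can be sketched in a few lines: maximum-entropy weights take the exponential form w_i ∝ exp(λ f_i), with λ tuned so the ensemble average of a back-calculated observable matches the measurement. The toy version below fits a single scalar observable by grid search over λ; the authors' open-source code handles full RDC data sets, which this sketch does not attempt:

```python
import numpy as np

def maxent_reweight(obs_per_structure, target):
    # re-weight a uniform ensemble so the weighted mean of a predicted
    # observable matches the measured target, using exponential
    # (maximum-entropy) weights w_i ~ exp(lam * f_i)
    f = np.asarray(obs_per_structure, dtype=float)
    best = None
    for lam in np.linspace(-10.0, 10.0, 2001):   # crude 1D search for lam
        w = np.exp(lam * (f - f.max()))          # shift for stability
        w /= w.sum()
        err = abs((w * f).sum() - target)
        if best is None or err < best[0]:
            best = (err, w)
    return best[1]

f = np.array([0.0, 1.0, 2.0])       # hypothetical per-structure observable
w = maxent_reweight(f, target=1.5)  # measured value to reproduce
```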

  18. Image reconstruction of IRAS survey scans

    NASA Technical Reports Server (NTRS)

    Bontekoe, Tj. Romke

    1990-01-01

    The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector for each scan, were resolved. The results of three different image reconstruction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented, using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Estimates of the physical parameters (temperature, density and composition) can be made from the data without prior image (re-)construction. An increase in the accuracy of these parameters is expected as the result of this more systematic approach.

  19. Water-Immersible MEMS scanning mirror designed for wide-field fast-scanning photoacoustic microscopy

    NASA Astrophysics Data System (ADS)

    Yao, Junjie; Huang, Chih-Hsien; Martel, Catherine; Maslov, Konstantin I.; Wang, Lidai; Yang, Joon-Mo; Gao, Liang; Randolph, Gwendalyn; Zou, Jun; Wang, Lihong V.

    2013-03-01

    By offering images with high spatial resolution and unique optical absorption contrast, optical-resolution photoacoustic microscopy (OR-PAM) has gained increasing attention in biomedical research. Recent developments in OR-PAM have improved its imaging speed, but have sacrificed either the detection sensitivity or field of view or both. We have developed a wide-field fast-scanning OR-PAM by using a water-immersible MEMS scanning mirror (MEMS-OR-PAM). Made of silicon with a gold coating, the MEMS mirror plate can reflect both optical and acoustic beams. Because it uses an electromagnetic driving force, the whole MEMS scanning system can be submerged in water. In MEMS-OR-PAM, the optical and acoustic beams are confocally configured and simultaneously steered, which ensures uniform detection sensitivity. A B-scan imaging speed as high as 400 Hz can be achieved over a 3 mm scanning range. A diffraction-limited lateral resolution of 2.4 μm in water and a maximum imaging depth of 1.1 mm in soft tissue have been experimentally determined. Using the system, we imaged the flow dynamics of both red blood cells and carbon particles in a mouse ear in vivo. By using Evans blue dye as the contrast agent, we also imaged the flow dynamics of lymphatic vessels in a mouse tail in vivo. The results show that MEMS-OR-PAM could be a powerful tool for studying highly dynamic and time-sensitive biological phenomena.

  20. Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations

    DTIC Science & Technology

    2017-08-21

    distributions, and we discuss some applications for engineered and biological information transmission systems. Keywords: information theory; minimum...of its interpretation as a measure of the amount of information communicable by a neural system to groups of downstream neurons. Previous authors...of the maximum entropy approach. Our results also have relevance for engineered information transmission systems. We show that empirically measured

  1. Stochastic characteristics of different duration annual maximum rainfall and its spatial difference in China based on information entropy

    NASA Astrophysics Data System (ADS)

    Li, X.; Sang, Y. F.

    2017-12-01

    Mountain torrents, urban floods and other disasters caused by extreme precipitation bring great losses to the ecological environment, socio-economic development, and people's lives and property. The study of the spatial distribution of extreme precipitation is therefore of great significance for flood prevention and control. Based on annual maximum rainfall data for durations of 60 min, 6 h and 24 h, this paper generates long sequences following the Pearson-III distribution and then uses an information-entropy index to study the spatial distribution and differences across durations. The results show that the information entropy of annual maximum rainfall in the southern region is greater than that in the northern region, indicating more obvious stochastic characteristics of annual maximum rainfall in the latter. However, the spatial distribution of stochastic characteristics differs across durations. For example, the stochastic characteristics of the 60 min annual maximum rainfall in eastern Tibet are smaller than in the surrounding area, but those of the 6 h and 24 h annual maximum rainfall are larger. In the Haihe and Huaihe River Basins, the stochastic characteristics of the 60 min annual maximum rainfall were not significantly different from those in the surrounding area, while those of the 6 h and 24 h rainfall were smaller. We conclude that the spatial distribution of information-entropy values of annual maximum rainfall for different durations can reflect the spatial distribution of its stochastic characteristics, and the results can thus serve as an important scientific basis for flood prevention and control, agriculture, socio-economic development, and urban waterlogging control.
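
    The information-entropy index used here is essentially the Shannon entropy of the fitted rainfall distribution, which can be estimated from a histogram of the generated annual-maximum series. A sketch with a gamma-distributed surrogate (the Pearson-III distribution is a shifted gamma; the shape and scale parameters below are illustrative, not fitted values):

```python
import numpy as np

def hist_entropy(x, bins=20):
    # Shannon entropy (nats) of a sample, estimated from a histogram
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(42)
# surrogate annual-maximum rainfall series (Pearson-III-type, shift omitted)
sample = rng.gamma(shape=3.0, scale=10.0, size=5000)
H = hist_entropy(sample)
```

    Regions whose series give larger H would, in the paper's terms, show more obvious stochastic characteristics.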

  2. Entropy generation minimization for the sloshing phenomenon in half-full elliptical storage tanks

    NASA Astrophysics Data System (ADS)

    Saghi, Hassan

    2018-02-01

    In this paper, the entropy generation in the sloshing phenomenon was obtained for elliptical storage tanks and the optimum tank geometry was suggested. To do this, a numerical model was developed to simulate the sloshing phenomenon using a coupled Reynolds-Averaged Navier-Stokes (RANS) solver and the Volume-of-Fluid (VOF) method. The RANS equations were discretized and solved using the staggered-grid finite difference and SMAC methods, and the available data were used for model validation. Several parameters, consisting of maximum free surface displacement (MFSD), maximum horizontal force exerted on the tank perimeter (MHF), tank perimeter (TP), and total entropy generation (Sgen), were introduced as design criteria for elliptical storage tanks. The entropy generation distribution provides designers with useful information about the causes of energy loss. Horizontal periodic sway motions of the form X = a_m sin(ωt) were applied to elliptical storage tanks with different aspect ratios, namely the ratio of the large to the small diameter of the elliptical tank (AR), and the effect of a_m and ω on the results was studied. The results show that the relation between MFSD and MHF is almost linear with respect to the sway-motion amplitude, and that an increase in AR causes a decrease in MFSD and MHF. The relation between MFSD and MHF is nonlinear with respect to the sway-motion angular frequency, but an increase in AR causes this relation to become linear. In addition, MFSD and MHF were minimized for a sway motion with a 7 rad/s angular frequency. Finally, the results show that an elliptical storage tank with AR = 1.2-1.4 is the optimum section.

  3. Tribo-functionalizing Si and SU8 materials by surface modification for application in MEMS/NEMS actuator-based devices

    NASA Astrophysics Data System (ADS)

    Singh, R. A.; Satyanarayana, N.; Kustandi, T. S.; Sinha, S. K.

    2011-01-01

    Micro/nano-electro-mechanical-systems (MEMS/NEMS) are miniaturized devices built at micro/nanoscales. At these scales, the surface/interfacial forces are extremely strong and they adversely affect the smooth operation and the useful operating lifetimes of such devices. When these forces manifest in severe forms, they lead to material removal and thereby reduce the wear durability of the devices. In this paper, we present a simple, yet robust, two-step surface modification method to significantly enhance the tribological performance of MEMS/NEMS materials. The two-step method involves oxygen plasma treatment of polymeric films and the application of a nanolubricant, namely perfluoropolyether. We apply the two-step method to the two most important MEMS/NEMS structural materials, namely silicon and SU8 polymer. On applying surface modification to these materials, their initial coefficient of friction reduces by ~4-7 times and the steady-state coefficient of friction reduces by ~2.5-3.5 times. Simultaneously, the wear durability of both materials increases by >1000 times. The two-step method is time effective, as each step takes approximately 1 min. It is also cost effective, as the oxygen plasma treatment is already part of the MEMS/NEMS fabrication process. The two-step method can be readily and easily integrated into MEMS/NEMS fabrication processes. It is anticipated that this method will work for any kind of structural material from which MEMS/NEMS are or can be made.

  4. MaxEnt alternatives to pearson family distributions

    NASA Astrophysics Data System (ADS)

    Stokes, Barrie J.

    2012-05-01

    In a previous MaxEnt conference [11] a method of obtaining MaxEnt univariate distributions under a variety of constraints was presented. The Mathematica function Interpolation[], normally used with numerical data, can also process "semi-symbolic" data, and Lagrange Multiplier equations were solved for a set of symbolic ordinates describing the required MaxEnt probability density function. We apply a more developed version of this approach to finding MaxEnt distributions having prescribed β1 and β2 values, and compare the entropy of the MaxEnt distribution to that of the Pearson family distribution having the same β1 and β2. These MaxEnt distributions do have, in general, greater entropy than the related Pearson distribution. In accordance with Jaynes' Maximum Entropy Principle, these MaxEnt distributions are thus to be preferred to the corresponding Pearson distributions as priors in Bayes' Theorem.
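
    The Lagrange-multiplier construction referred to above reduces, for a single mean constraint on a discrete support, to one-dimensional root finding: the MaxEnt pmf has the form p_i ∝ exp(λ x_i), and λ is chosen so the mean matches. A self-contained bisection sketch (the paper solves the harder case of prescribed β1 and β2, which this toy does not attempt):

```python
import numpy as np

def maxent_mean(support, mean, lo=-50.0, hi=50.0, iters=200):
    # discrete maximum-entropy pmf p_i ~ exp(lam * x_i) with a given mean;
    # the mean is monotone increasing in lam, so bisection converges
    x = np.asarray(support, dtype=float)

    def pmf(lam):
        w = np.exp(lam * (x - x.max()))   # shift exponent for stability
        return w / w.sum()

    for _ in range(iters):
        mid = (lo + hi) / 2.0
        p = pmf(mid)
        if (p * x).sum() < mean:
            lo = mid
        else:
            hi = mid
    return pmf((lo + hi) / 2.0)

xs = np.arange(11)               # support {0, 1, ..., 10}
p = maxent_mean(xs, mean=3.0)    # MaxEnt pmf with mean 3
```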

  5. Stability of Tsallis entropy and instabilities of Rényi and normalized Tsallis entropies: a basis for q-exponential distributions.

    PubMed

    Abe, Sumiyoshi

    2002-10-01

    The q-exponential distributions, which are generalizations of the Zipf-Mandelbrot power-law distribution, are frequently encountered in complex systems at their stationary states. From the viewpoint of the principle of maximum entropy, they can apparently be derived from three different generalized entropies: the Rényi entropy, the Tsallis entropy, and the normalized Tsallis entropy. Accordingly, mere fittings of observed data by the q-exponential distributions do not lead to identification of the correct physical entropy. Here, stabilities of these entropies, i.e., their behaviors under arbitrarily small deformations of a distribution, are examined. It is shown that, among the three, the Tsallis entropy is stable and can provide an entropic basis for the q-exponential distributions, whereas the others are unstable and cannot represent any experimentally observable quantities.
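
    For reference, the three generalized entropies compared in this record can be written (standard definitions) as

```latex
S_q^{\mathrm{T}}[p] = \frac{1 - \sum_i p_i^q}{q - 1}, \qquad
S_q^{\mathrm{R}}[p] = \frac{\ln \sum_i p_i^q}{1 - q}, \qquad
S_q^{\mathrm{N}}[p] = \frac{S_q^{\mathrm{T}}[p]}{\sum_i p_i^q},
```

    and maximizing any of them under suitable constraints yields distributions of the q-exponential form

```latex
e_q(x) = \bigl[\, 1 + (1 - q)\, x \,\bigr]_{+}^{1/(1-q)},
```

    which reduces to the ordinary exponential e^x as q → 1, recovering the Boltzmann-Gibbs case.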

  6. Time dependence of Hawking radiation entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Page, Don N., E-mail: profdonpage@gmail.com

    2013-09-01

    If a black hole starts in a pure quantum state and evaporates completely by a unitary process, the von Neumann entropy of the Hawking radiation initially increases and then decreases back to zero when the black hole has disappeared. Here numerical results are given for an approximation to the time dependence of the radiation entropy under an assumption of fast scrambling, for large nonrotating black holes that emit essentially only photons and gravitons. The maximum of the von Neumann entropy then occurs after about 53.81% of the evaporation time, when the black hole has lost about 40.25% of its original Bekenstein-Hawking (BH) entropy (an upper bound for its von Neumann entropy) and then has a BH entropy that equals the entropy in the radiation, which is about 59.75% of the original BH entropy 4πM0², or about 7.509M0² ≈ 6.268 × 10⁷⁶(M0/Msun)², using my 1976 calculations that the photon and graviton emission process into empty space gives about 1.4847 times the BH entropy loss of the black hole. Results are also given for black holes in initially impure states. If the black hole starts in a maximally mixed state, the von Neumann entropy of the Hawking radiation increases from zero up to a maximum of about 119.51% of the original BH entropy, or about 15.018M0² ≈ 1.254 × 10⁷⁷(M0/Msun)², and then decreases back down to 4πM0² = 1.049 × 10⁷⁷(M0/Msun)².

  7. A Maximum Entropy Test for Evaluating Higher-Order Correlations in Spike Counts

    PubMed Central

    Onken, Arno; Dragoi, Valentin; Obermayer, Klaus

    2012-01-01

    Evaluating the importance of higher-order correlations of neural spike counts has been notoriously hard. A large number of samples are typically required in order to estimate higher-order correlations and the resulting information theoretic quantities. In typical electrophysiology data sets with many experimental conditions, however, the number of samples in each condition is rather small. Here we describe a method that allows one to quantify evidence for higher-order correlations in exactly these cases. We construct a family of reference distributions: maximum entropy distributions, which are constrained only by marginals and by linear correlations as quantified by the Pearson correlation coefficient. We devise a Monte Carlo goodness-of-fit test, which tests, for a given divergence measure of interest, whether the experimental data lead to the rejection of the null hypothesis that they were generated by one of the reference distributions. Applying our test to artificial data shows that the effects of higher-order correlations on these divergence measures can be detected even when the number of samples is small. Subsequently, we apply our method to spike count data which were recorded with multielectrode arrays from the primary visual cortex of anesthetized cat during an adaptation experiment. Using mutual information as a divergence measure, we find that there are spike count bin sizes at which the maximum entropy hypothesis can be rejected for a substantial number of neuronal pairs. These results demonstrate that higher-order correlations can matter when estimating information theoretic quantities in V1. They also show that our test is able to detect their presence in typical in-vivo data sets, where the number of samples is too small to estimate higher-order correlations directly. PMID:22685392

  8. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models

    PubMed Central

    Grün, Sonja; Helias, Moritz

    2017-01-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity; for some population sizes the second mode would peak at high activities that, experimentally, would correspond to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings with population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition. PMID:28968396

  9. Bistability, non-ergodicity, and inhibition in pairwise maximum-entropy models.

    PubMed

    Rostami, Vahid; Porta Mana, PierGianLuca; Grün, Sonja; Helias, Moritz

    2017-10-01

    Pairwise maximum-entropy models have been used in neuroscience to predict the activity of neuronal populations, given only the time-averaged correlations of the neuron activities. This paper provides evidence that the pairwise model, applied to experimental recordings, would produce a bimodal distribution for the population-averaged activity; for some population sizes the second mode would peak at high activities that, experimentally, would correspond to 90% of the neuron population being active within time windows of a few milliseconds. Several problems are connected with this bimodality: 1. The presence of the high-activity mode is unrealistic in view of observed neuronal activity and on neurobiological grounds. 2. Boltzmann learning becomes non-ergodic, hence the pairwise maximum-entropy distribution cannot be found: in fact, Boltzmann learning would produce an incorrect distribution; similarly, common variants of mean-field approximations also produce an incorrect distribution. 3. The Glauber dynamics associated with the model is unrealistically bistable and cannot be used to generate realistic surrogate data. This bimodality problem is first demonstrated for an experimental dataset from 159 neurons in the motor cortex of a macaque monkey. Evidence is then provided that this problem affects typical neural recordings with population sizes of a couple of hundred or more neurons. The cause of the bimodality problem is identified as the inability of standard maximum-entropy distributions with a uniform reference measure to model neuronal inhibition. To eliminate this problem a modified maximum-entropy model is presented, which reflects a basic effect of inhibition in the form of a simple but non-uniform reference measure. This model does not lead to unrealistic bimodalities, can be found with Boltzmann learning, and has an associated Glauber dynamics which incorporates a minimal asymmetric inhibition.
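
    The bistability of the associated Glauber dynamics can be illustrated with a homogeneous toy model (all parameters below are my own illustrative choices, not values fitted to the recordings): with uniform excitatory couplings, runs started from silence and from full activity settle into two distinct population-activity modes.

```python
# Illustrative sketch: Glauber dynamics of a homogeneous pairwise
# maximum-entropy (Ising-like) model of 0/1 neurons with mean-field coupling
# J/N. For suitable (J, h) the population-averaged activity is bistable.
import math
import random

def glauber_mean_activity(N=100, J=10.0, h=-5.5, steps=20000, init=0.0, seed=1):
    rng = random.Random(seed)
    s = [1 if rng.random() < init else 0 for _ in range(N)]
    active = sum(s)
    for _ in range(steps):
        i = rng.randrange(N)
        field = h + (J / N) * (active - s[i])   # input from the other units
        p_on = 1.0 / (1.0 + math.exp(-field))   # Glauber update probability
        new = 1 if rng.random() < p_on else 0
        active += new - s[i]
        s[i] = new
    return active / N

low = glauber_mean_activity(init=0.0)   # started all-silent
high = glauber_mean_activity(init=1.0)  # started all-active
print(low, high)  # two distinct modes under the same parameters
```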

  10. A capacitive CMOS-MEMS sensor designed by multi-physics simulation for integrated CMOS-MEMS technology

    NASA Astrophysics Data System (ADS)

    Konishi, Toshifumi; Yamane, Daisuke; Matsushima, Takaaki; Masu, Kazuya; Machida, Katsuyuki; Toshiyoshi, Hiroshi

    2014-01-01

    This paper reports the design and evaluation results of a capacitive CMOS-MEMS sensor that consists of the proposed sensor circuit and a capacitive MEMS device implemented on the circuit. To design a capacitive CMOS-MEMS sensor, a multi-physics simulation of the electromechanical behavior of both the MEMS structure and the sensing LSI was carried out simultaneously. In order to verify the validity of the design, we applied the capacitive CMOS-MEMS sensor to a MEMS accelerometer implemented by the post-CMOS process onto a 0.35-µm CMOS circuit. The experimental results of the CMOS-MEMS accelerometer exhibited good agreement with the simulation results within the input acceleration range between 0.5 and 6 G (1 G = 9.8 m/s²), corresponding to output voltages between 908.6 and 915.4 mV, respectively. Therefore, we have confirmed that our capacitive CMOS-MEMS sensor and the multi-physics simulation will be beneficial for realizing integrated CMOS-MEMS technology.
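
    If the transfer characteristic is roughly linear between the two quoted endpoints (an assumption of mine; the abstract only gives the endpoints), the implied sensitivity is about 1.24 mV/G:

```python
# Back-of-envelope check of the quoted accelerometer transfer characteristic,
# assuming linearity between the two reported endpoints.
g_lo, g_hi = 0.5, 6.0        # input acceleration, G
v_lo, v_hi = 908.6, 915.4    # output voltage, mV
sensitivity = (v_hi - v_lo) / (g_hi - g_lo)   # mV per G
print(f"{sensitivity:.2f} mV/G")  # ≈ 1.24 mV/G
```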

  11. The conical conformal MEMS quasi-end-fire array antenna

    NASA Astrophysics Data System (ADS)

    Cong, Lin; Xu, Lixin; Li, Jianhua; Wang, Ting; Han, Qi

    2017-03-01

    The microelectromechanical system (MEMS) quasi-end-fire array antenna based on a liquid crystal polymer (LCP) substrate is designed and fabricated in this paper. The maximum radiation direction of the antenna tends toward the cone axis, forming an angle of less than 90°, which meets the needs of proximity detection systems used for forward target detection. Furthermore, the proposed antenna is fed at the end side in order to save internal space, and it occupies only a small covering area of the proximity detection system. The proposed antenna is fabricated by using the flexible MEMS process, and the measurement results agree well with the simulation results. This is the first time that a conical conformal array antenna has been fabricated by the flexible MEMS process to realize quasi-end-fire radiation. A pair of conformal MEMS array antennas resonates at 14.2 GHz, with their mainlobes tending toward the cone axis at angles of 30° and 31°, respectively, and a gain of 1.82 dB is achieved in both directions. The proposed antenna meets the performance requirements of the proximity detection system, which has vast application prospects.

  12. H∞ Robust Control of a Large-Piston MEMS Micromirror for Compact Fourier Transform Spectrometer Systems.

    PubMed

    Chen, Huipeng; Li, Mengyuan; Zhang, Yi; Xie, Huikai; Chen, Chang; Peng, Zhangming; Su, Shaohui

    2018-02-08

    Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm⁻¹ under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions.
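
    A rough consistency check (my own, using the textbook relation for an unapodized FTS and assuming the piston displacement d maps to an optical path difference of 2d in a Michelson-type layout): the idealized resolution for a 110.9 μm scan is of the same order as the reported 96 cm⁻¹ FWHM linewidth.

```python
# Idealized FTS resolution bound: delta_nu ≈ 1 / OPD_max, with the assumption
# (mine) that OPD_max = 2 * mirror piston displacement.
d_cm = 110.9e-4           # 110.9 um scan displacement, in cm
opd_max = 2.0 * d_cm      # assumed maximum optical path difference, cm
delta_nu = 1.0 / opd_max  # idealized (unapodized) resolution, cm^-1
print(f"{delta_nu:.1f} cm^-1")  # ≈ 45 cm^-1, same order as the measured 96 cm^-1
```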

  13. H∞ Robust Control of a Large-Piston MEMS Micromirror for Compact Fourier Transform Spectrometer Systems

    PubMed Central

    Li, Mengyuan; Zhang, Yi; Chen, Chang; Peng, Zhangming; Su, Shaohui

    2018-01-01

    Incorporating linear-scanning micro-electro-mechanical systems (MEMS) micromirrors into Fourier transform spectral acquisition systems can greatly reduce the size of the spectrometer equipment, making portable Fourier transform spectrometers (FTS) possible. How to minimize the tilting of the MEMS mirror plate during its large linear scan is a major problem in this application. In this work, an FTS system has been constructed based on a biaxial MEMS micromirror with a large-piston displacement of 180 μm, and a biaxial H∞ robust controller is designed. Compared with open-loop control and proportional-integral-derivative (PID) closed-loop control, H∞ robust control has good stability and robustness. The experimental results show that the stable scanning displacement reaches 110.9 μm under the H∞ robust control, and the tilting angle of the MEMS mirror plate in that full scanning range falls within ±0.0014°. Without control, the FTS system cannot generate meaningful spectra. In contrast, the FTS yields a clean spectrum with a full width at half maximum (FWHM) spectral linewidth of 96 cm−1 under the H∞ robust control. Moreover, the FTS system can maintain good stability and robustness under various driving conditions. PMID:29419765

  14. Estimation of Fine Particulate Matter in Taipei Using Landuse Regression and Bayesian Maximum Entropy Methods

    PubMed Central

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-01-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007. PMID:21776223

  15. Estimation of fine particulate matter in Taipei using landuse regression and Bayesian maximum entropy methods.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsih; Liu, Ming-Che; Kuo, Yi-Ming

    2011-06-01

    Fine airborne particulate matter (PM2.5) has adverse effects on human health. Assessing the long-term effects of PM2.5 exposure on human health and ecology is often limited by a lack of reliable PM2.5 measurements. In Taipei, PM2.5 levels were not systematically measured until August, 2005. Due to the popularity of geographic information systems (GIS), the landuse regression method has been widely used in the spatial estimation of PM concentrations. This method accounts for the potential contributing factors of the local environment, such as traffic volume. Geostatistical methods, on the other hand, account for the spatiotemporal dependence among the observations of ambient pollutants. This study assesses the performance of the landuse regression model for the spatiotemporal estimation of PM2.5 in the Taipei area. Specifically, this study integrates the landuse regression model with the geostatistical approach within the framework of the Bayesian maximum entropy (BME) method. The resulting epistemic framework can assimilate knowledge bases including: (a) empirical-based spatial trends of PM concentration based on landuse regression, (b) the spatio-temporal dependence among PM observation information, and (c) site-specific PM observations. The proposed approach performs the spatiotemporal estimation of PM2.5 levels in the Taipei area (Taiwan) from 2005 to 2007.
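
    The trend-plus-geostatistics structure can be sketched in simplified form (my own stand-in, not the BME implementation): a regression on a land-use covariate supplies the spatial trend, and simple kriging of the residuals (exponential covariance, parameters assumed) adds the spatially dependent correction. BME generalizes this by rigorously propagating uncertainty through the knowledge bases.

```python
# Simplified regression-kriging sketch of the "trend + spatial dependence"
# combination; covariance parameters and data below are invented.
import numpy as np

def regression_kriging(X, cov, y, X0, cov0, length=2.0, sill=1.0, nugget=1e-6):
    # 1) linear trend from the covariate (land-use regression step)
    A = np.column_stack([np.ones(len(cov)), cov])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    # 2) simple kriging of the residuals with an exponential covariance
    D = np.abs(X[:, None] - X[None, :])
    C = sill * np.exp(-D / length) + nugget * np.eye(len(X))
    c0 = sill * np.exp(-np.abs(X - X0) / length)
    w = np.linalg.solve(C, c0)
    trend0 = beta[0] + beta[1] * cov0
    return trend0 + w @ resid

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])           # station coordinates (1-D)
cov = np.array([1.0, 2.0, 1.5, 3.0, 2.5])         # e.g. local traffic intensity
y = 2.0 + 1.0 * cov + np.array([0.3, -0.1, 0.2, -0.2, 0.1])
est = regression_kriging(X, cov, y, X0=2.5, cov0=2.0)
print(est)
```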

  16. Cosmic equilibration: A holographic no-hair theorem from the generalized second law

    NASA Astrophysics Data System (ADS)

    Carroll, Sean M.; Chatwin-Davies, Aidan

    2018-02-01

    In a wide class of cosmological models, a positive cosmological constant drives cosmological evolution toward an asymptotically de Sitter phase. Here we connect this behavior to the increase of entropy over time, based on the idea that de Sitter spacetime is a maximum-entropy state. We prove a cosmic no-hair theorem for Robertson-Walker and Bianchi I spacetimes that admit a Q-screen ("quantum" holographic screen) with certain entropic properties: If generalized entropy, in the sense of the cosmological version of the generalized second law conjectured by Bousso and Engelhardt, increases up to a finite maximum value along the screen, then the spacetime is asymptotically de Sitter in the future. Moreover, the limiting value of generalized entropy coincides with the de Sitter horizon entropy. We do not use the Einstein field equations in our proof, nor do we assume the existence of a positive cosmological constant. As such, asymptotic relaxation to a de Sitter phase can, in a precise sense, be thought of as cosmological equilibration.

  17. Maximum Entropy for the International Division of Labor.

    PubMed

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country's strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares across different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product's complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country's strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter.

  18. Maximum Entropy for the International Division of Labor

    PubMed Central

    Lei, Hongmei; Chen, Ying; Li, Ruiqi; He, Deli; Zhang, Jiang

    2015-01-01

    As a result of the international division of labor, the trade value distribution on different products substantiated by international trade flows can be regarded as one country’s strategy for competition. According to the empirical data of trade flows, countries may spend a large fraction of export values on ubiquitous and competitive products. Meanwhile, countries may also diversify their export shares across different types of products to reduce risk. In this paper, we report that the export share distribution curves can be derived by maximizing the entropy of shares on different products under the product’s complexity constraint once the international market structure (the country-product bipartite network) is given. Therefore, a maximum entropy model provides a good fit to empirical data. The empirical data is consistent with maximum entropy subject to a constraint on the expected value of the product complexity for each country. One country’s strategy is mainly determined by the types of products this country can export. In addition, our model is able to fit the empirical export share distribution curves of nearly every country very well by tuning only one parameter. PMID:26172052
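
    The constrained maximization step can be sketched directly (complexity values and the target mean below are invented): maximizing share entropy subject to a fixed expected product complexity yields the exponential family p_i ∝ exp(-λ c_i), with the single parameter λ tuned so the constraint holds, here by bisection.

```python
# Sketch: maximum-entropy export shares under an expected-complexity constraint.
import math

def maxent_shares(complexities, target_mean, lo=-50.0, hi=50.0, tol=1e-10):
    def shares(lam):
        w = [math.exp(-lam * c) for c in complexities]
        z = sum(w)
        return [x / z for x in w]
    def mean(lam):
        p = shares(lam)
        return sum(pi * ci for pi, ci in zip(p, complexities))
    # mean(lam) decreases monotonically in lam, so bisection converges
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    return shares(0.5 * (lo + hi))

p = maxent_shares([1.0, 2.0, 3.0, 4.0], target_mean=2.0)
print(p)  # exponentially decaying share profile with mean complexity 2.0
```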

  19. Maximum entropy production in environmental and ecological systems.

    PubMed

    Kleidon, Axel; Malhi, Yadvinder; Cox, Peter M

    2010-05-12

    The coupled biosphere-atmosphere system entails a vast range of processes at different scales, from ecosystem exchange fluxes of energy, water and carbon to the processes that drive global biogeochemical cycles, atmospheric composition and, ultimately, the planetary energy balance. These processes are generally complex with numerous interactions and feedbacks, and they are irreversible in their nature, thereby producing entropy. The proposed principle of maximum entropy production (MEP), based on statistical mechanics and information theory, states that thermodynamic processes far from thermodynamic equilibrium will adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate. This issue focuses on the latest development of applications of MEP to the biosphere-atmosphere system including aspects of the atmospheric circulation, the role of clouds, hydrology, vegetation effects, ecosystem exchange of energy and mass, biogeochemical interactions and the Gaia hypothesis. The examples shown in this special issue demonstrate the potential of MEP to contribute to improved understanding and modelling of the biosphere and the wider Earth system, and also explore limitations and constraints to the application of the MEP principle.
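
    The MEP selection principle can be illustrated with the classic two-box toy model (the numbers and linearized emission law below are my own illustrative choices): energy balance fixes the box temperatures as functions of the heat flux F, and MEP selects the F that maximizes the entropy production F(1/Tc - 1/Th).

```python
# Toy two-box MEP illustration: grid-search the heat flux that maximizes
# entropy production under a linearized energy balance.
def entropy_production(F, R_hot=300.0, R_cold=100.0, k=1.0):
    # linearized emission: each box radiates k*T, so k*Th = R_hot - F, etc.
    Th = (R_hot - F) / k
    Tc = (R_cold + F) / k
    return F * (1.0 / Tc - 1.0 / Th)

Fs = [i * 0.01 for i in range(1, 10000)]   # candidate steady-state fluxes
F_mep = max(Fs, key=entropy_production)    # the MEP-selected steady state
print(F_mep, entropy_production(F_mep))
```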

  20. Spectral and correlation analysis with applications to middle-atmosphere radars

    NASA Technical Reports Server (NTRS)

    Rastogi, Prabhat K.

    1989-01-01

    The correlation and spectral analysis methods for uniformly sampled stationary random signals, estimation of their spectral moments, and problems arising due to nonstationarity are reviewed. Some of these methods are already in routine use in atmospheric radar experiments. Other methods based on the maximum entropy principle and time series models have been used in analyzing data, but are just beginning to receive attention in the analysis of radar signals. These methods are also briefly discussed.
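
    The maximum-entropy (all-pole/autoregressive) spectral estimate central to many of these records can be sketched minimally: the Levinson-Durbin recursion converts autocorrelation lags into AR coefficients, and the MEM spectrum is σ² / |1 + Σ_k a_k e^(-ikω)|². Feeding it the exact lags of an AR(1) process (my toy input) should recover that model.

```python
# Minimal MEM/AR spectral estimator via the Levinson-Durbin recursion.
import cmath

def levinson_durbin(r, order):
    """Autocorrelations r[0..order] -> (AR coefficients a[1..order], noise var)."""
    a = [0.0] * (order + 1)
    err = r[0]
    for m in range(1, order + 1):
        acc = r[m] + sum(a[k] * r[m - k] for k in range(1, m))
        k_m = -acc / err                     # reflection coefficient
        new_a = a[:]
        new_a[m] = k_m
        for k in range(1, m):
            new_a[k] = a[k] + k_m * a[m - k]
        a = new_a
        err *= (1.0 - k_m * k_m)
    return a[1:], err

def mem_spectrum(a, err, w):
    denom = 1.0 + sum(ak * cmath.exp(-1j * (k + 1) * w) for k, ak in enumerate(a))
    return err / abs(denom) ** 2

rho = 0.9
r = [rho ** k for k in range(4)]   # exact AR(1) autocorrelation, unit variance
a, err = levinson_durbin(r, 3)
print(a, err)                      # a ≈ [-0.9, 0, 0]: the AR(1) model is recovered
print(mem_spectrum(a, err, 0.0))   # spectrum peaks at w = 0 for rho > 0
```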

  1. LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.

    PubMed

    Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu

    2005-01-01

    Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.
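
    The final log-linear rescoring step can be sketched in a few lines (the scores, weights, and hypothesis names below are invented): each knowledge source contributes a log-probability per lattice hypothesis, and the second pass picks the hypothesis with the highest weighted sum.

```python
# Sketch of log-linear score combination for lattice rescoring.
def rescore(hypotheses, weights):
    def combined(h):
        return sum(w * h[name] for name, w in weights.items())
    return max(hypotheses, key=combined)

hyps = [
    {"word": "ship", "acoustic": -12.0, "landmark": -3.1, "lm": -4.0},
    {"word": "sip",  "acoustic": -11.5, "landmark": -6.0, "lm": -4.2},
]
weights = {"acoustic": 1.0, "landmark": 0.8, "lm": 0.6}
best = rescore(hyps, weights)
print(best["word"])  # the landmark score tips the decision toward "ship"
```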

  2. Precipitation Interpolation by Multivariate Bayesian Maximum Entropy Based on Meteorological Data in Yun-Gui-Guang region, Mainland China

    NASA Astrophysics Data System (ADS)

    Wang, Chaolin; Zhong, Shaobo; Zhang, Fushen; Huang, Quanyi

    2016-11-01

    Precipitation interpolation has been a hot area of research for many years and is closely related to meteorological factors. In this paper, precipitation from 91 meteorological stations located in and around Yunnan, Guizhou and Guangxi Zhuang provinces (or autonomous region), Mainland China, was considered for spatial interpolation. The multivariate Bayesian maximum entropy (BME) method with auxiliary variables, including mean relative humidity, water vapour pressure, mean temperature, mean wind speed and terrain elevation, was used to obtain a more accurate regional distribution of annual precipitation. The means, standard deviations, skewness and kurtosis of the meteorological factors were calculated. Variograms and cross-variograms were fitted between precipitation and the auxiliary variables. The results showed that the multivariate BME method was precise when using hard and soft data in the form of probability density functions. Annual mean precipitation was positively correlated with mean relative humidity, mean water vapour pressure, mean temperature and mean wind speed, and negatively correlated with terrain elevation. The results are expected to provide a substantial reference for research on drought and waterlogging in the region.

  3. Thermodynamic resource theories, non-commutativity and maximum entropy principles

    NASA Astrophysics Data System (ADS)

    Lostaglio, Matteo; Jennings, David; Rudolph, Terry

    2017-04-01

    We discuss some features of thermodynamics in the presence of multiple conserved quantities. We prove a generalisation of the Landauer principle illustrating tradeoffs between the erasure costs paid in different ‘currencies’. We then show how the maximum entropy and complete passivity approaches give different answers in the presence of multiple observables. We discuss how this seems to prevent current resource theories from fully capturing thermodynamic aspects of non-commutativity.

  4. Using the Maximum Entropy Principle as a Unifying Theory Characterization and Sampling of Multi-Scaling Processes in Hydrometeorology

    DTIC Science & Technology

    2015-08-20

    An on-going study suggests that the global annual evapotranspiration (ET) over oceans may be significantly lower than previously thought. The MEP model parameterized turbulent transfer coefficients [...] fluxes, ocean freshwater fluxes, regional crop yield among others. Related publication: Bras, Jingfeng Wang, "A model of evapotranspiration based on the theory of maximum entropy production," Water Resources Research (03 2011).

  5. Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barletti, Luigi, E-mail: luigi.barletti@unifi.it

    2014-08-15

    The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.

  6. Maximum Entropy/Optimal Projection (MEOP) control design synthesis: Optimal quantification of the major design tradeoffs

    NASA Technical Reports Server (NTRS)

    Hyland, D. C.; Bernstein, D. S.

    1987-01-01

    The underlying philosophy and motivation of the optimal projection/maximum entropy (OP/ME) stochastic modeling and reduced control design methodology for high order systems with parameter uncertainties are discussed. The OP/ME design equations for reduced-order dynamic compensation including the effect of parameter uncertainties are reviewed. The application of the methodology to several Large Space Structures (LSS) problems of representative complexity is illustrated.

  7. Modeling and Compensation of Random Drift of MEMS Gyroscopes Based on Least Squares Support Vector Machine Optimized by Chaotic Particle Swarm Optimization.

    PubMed

    Xing, Haifeng; Hou, Bo; Lin, Zhihui; Guo, Meifeng

    2017-10-13

    MEMS (Micro Electro Mechanical System) gyroscopes have been widely applied to various fields, but MEMS gyroscope random drift has nonlinear and non-stationary characteristics. Modeling and compensating this random drift has attracted much attention because it can improve the precision of inertial devices. This paper proposes using wavelet filtering to reduce noise in the original data of MEMS gyroscopes, then reconstructing the random drift data with PSR (phase space reconstruction), and establishing a model for the reconstructed data by LSSVM (least squares support vector machine), whose parameters were optimized using CPSO (chaotic particle swarm optimization). A comparison of modeling the MEMS gyroscope random drift with BP-ANN (back propagation artificial neural network) and with the proposed method showed that the latter had better prediction accuracy. After compensation of three groups of MEMS gyroscope random drift data, the standard deviations of the three groups of experimental data dropped from 0.00354°/s, 0.00412°/s, and 0.00328°/s to 0.00065°/s, 0.00072°/s and 0.00061°/s, respectively, which demonstrates that the proposed method can reduce the influence of MEMS gyroscope random drift and verifies its effectiveness for modeling MEMS gyroscope random drift.
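
    The middle stages of the pipeline can be sketched in simplified form (my own stand-in: ordinary least squares replaces the LSSVM, and the wavelet-denoising and CPSO stages are omitted): delay-embed the drift series via phase space reconstruction, then fit a one-step-ahead predictor on the embedded vectors.

```python
# Sketch: phase space reconstruction (delay embedding) plus a linear one-step
# predictor, on a deterministic toy "drift" x_{t+1} = 0.95 x_t where a linear
# model is exact.
import numpy as np

def phase_space_reconstruct(x, dim=3, tau=1):
    """Rows are delay vectors [x_t, x_{t-tau}, ..., x_{t-(dim-1)tau}]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[(dim - 1 - j) * tau : (dim - 1 - j) * tau + n]
                            for j in range(dim)])

x = 0.95 ** np.arange(60)                 # toy deterministic drift series
emb = phase_space_reconstruct(x, dim=3, tau=1)
targets = x[3:]                           # one-step-ahead value for each row
w, *_ = np.linalg.lstsq(emb[:-1], targets, rcond=None)
pred = emb[-1] @ w                        # predict the step after the last vector
print(pred, 0.95 * x[-1])                 # prediction matches the true next value
```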

  8. Maximum caliber inference of nonequilibrium processes

    NASA Astrophysics Data System (ADS)

    Otten, Moritz; Stock, Gerhard

    2010-07-01

    Thirty years ago, Jaynes suggested a general theoretical approach to nonequilibrium statistical mechanics, called maximum caliber (MaxCal) [Annu. Rev. Phys. Chem. 31, 579 (1980)]. MaxCal is a variational principle for dynamics in the same spirit that maximum entropy is a variational principle for equilibrium statistical mechanics. Motivated by the success of maximum entropy inference methods for equilibrium problems, in this work the MaxCal formulation is applied to the inference of nonequilibrium processes. That is, given some time-dependent observables of a dynamical process, one constructs a model that reproduces these input data and moreover, predicts the underlying dynamics of the system. For example, the observables could be some time-resolved measurements of the folding of a protein, which are described by a few-state model of the free energy landscape of the system. MaxCal then calculates the probabilities of an ensemble of trajectories such that on average the data are reproduced. From this probability distribution, any dynamical quantity of the system can be calculated, including population probabilities, fluxes, or waiting time distributions. After briefly reviewing the formalism, the practical numerical implementation of MaxCal in the case of an inference problem is discussed. Adopting various few-state models of increasing complexity, it is demonstrated that the MaxCal principle indeed works as a practical method of inference: The scheme is fairly robust and yields correct results as long as the input data are sufficient. As the method is unbiased and general, it can deal with any kind of time dependency such as oscillatory transients and multitime decays.
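
    The MaxCal construction can be sketched on a toy system (trajectory length, the switch constraint, and its target value are my own choices): over all binary trajectories, maximize path entropy subject to a target mean number of state switches. The solution weights each trajectory by exp(-λ n_switches), with λ tuned, here by bisection over an exact enumeration.

```python
# Toy maximum-caliber inference: exponential trajectory weights under a
# mean-switches constraint, with exact enumeration for short trajectories.
import math
from itertools import product

def mean_switches(lam, L):
    z = 0.0
    acc = 0.0
    for path in product((0, 1), repeat=L):
        n_sw = sum(path[i] != path[i + 1] for i in range(L - 1))
        w = math.exp(-lam * n_sw)   # MaxCal trajectory weight
        z += w
        acc += n_sw * w
    return acc / z

def fit_lambda(target, L=8, lo=-20.0, hi=20.0, tol=1e-9):
    # mean_switches is monotonically decreasing in lam, so bisection works
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_switches(mid, L) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lam = fit_lambda(target=2.0, L=8)
print(lam, mean_switches(lam, 8))  # constraint satisfied: mean switches ≈ 2.0
```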

  9. Piezoelectric MEMS: Ferroelectric thin films for MEMS applications

    NASA Astrophysics Data System (ADS)

    Kanno, Isaku

    2018-04-01

    In recent years, piezoelectric microelectromechanical systems (MEMS) have attracted attention as next-generation functional microdevices. Typical applications of piezoelectric MEMS are micropumps for inkjet heads or micro-gyrosensors, which are composed of piezoelectric Pb(Zr,Ti)O3 (PZT) thin films and have already been commercialized. In addition, piezoelectric vibration energy harvesters (PVEHs), which are regarded as one of the key devices for Internet of Things (IoT)-related technologies, are promising future applications of piezoelectric MEMS. Significant features of piezoelectric MEMS are their simple structure and high energy conversion efficiency between mechanical and electrical domains even on the microscale. The device performance strongly depends on the function of the piezoelectric thin films, especially on their transverse piezoelectric properties, indicating that the deposition of high-quality piezoelectric thin films is a crucial technology for piezoelectric MEMS. On the other hand, although the difficulty in measuring the precise piezoelectric coefficients of thin films is a serious obstacle in the research and development of piezoelectric thin films, a simple unimorph cantilever measurement method has been proposed to obtain precise values of the direct or converse transverse piezoelectric coefficient of thin films, and this method has recently become the standardized testing method. In this article, I will introduce fundamental technologies of piezoelectric thin films and related microdevices, especially focusing on the deposition of PZT thin films and evaluation methods for their transverse piezoelectric properties.

  10. Quantifying and minimizing entropy generation in AMTEC cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hendricks, T.J.; Huang, C.

    1997-12-31

    Entropy generation in an AMTEC cell represents inherent power loss to the AMTEC cell. Minimizing cell entropy generation directly maximizes cell power generation and efficiency. An internal project is on-going at AMPS to identify, quantify and minimize entropy generation mechanisms within an AMTEC cell, with the goal of determining cost-effective design approaches for maximizing AMTEC cell power generation. Various entropy generation mechanisms have been identified and quantified. The project has investigated several cell design techniques in a solar-driven AMTEC system to minimize cell entropy generation and produce maximum power cell designs. In many cases, various sources of entropy generation are interrelated such that minimizing entropy generation requires cell and system design optimization. Some of the tradeoffs between various entropy generation mechanisms are quantified and explained and their implications on cell design are discussed. The relationship between AMTEC cell power and efficiency and entropy generation is presented and discussed.

  11. Apparatus and method for sensing motion in a microelectro-mechanical system

    DOEpatents

    Dickey, Fred M.; Holswade, Scott C.

    1999-01-01

    An apparatus and method are disclosed for optically sensing motion in a microelectromechanical system (also termed a MEMS device) formed by surface micromachining or LIGA. The apparatus operates by reflecting or scattering a light beam off a corrugated surface (e.g. gear teeth or a reference feature) of a moveable member (e.g. a gear, rack or linkage) within the MEMS device and detecting the reflected or scattered light. The apparatus can be used to characterize a MEMS device, measuring one or more performance characteristic such as spring and damping coefficients, torque and friction, or uniformity of motion of the moveable member. The apparatus can also be used to determine the direction and extent of motion of the moveable member; or to determine a particular mechanical state that a MEMS device is in. Finally, the apparatus and method can be used for providing feedback to the MEMS device to improve performance and reliability.

  12. Characterization of Residual Stress in Microelectromechanical Systems (MEMS) Devices Using Raman Spectroscopy

    DTIC Science & Technology

    2002-04-01

    …residual and induced stress curves. A key to modelling MEMS structures, especially micromirrors, is … outlined in Figure 4.20. A line marker is used to extract the FEM data as displayed across the micromirror flexure. The MEMCAD FEM stress curve for the … curved as observed by the number of fringe lines displayed on the micromirror surface. The maximum peak deformation for this series of micromirrors is …

  13. Entropy jump across an inviscid shock wave

    NASA Technical Reports Server (NTRS)

    Salas, Manuel D.; Iollo, Angelo

    1995-01-01

    The shock jump conditions for the Euler equations in their primitive form are derived by using generalized functions. The shock profiles for specific volume, speed, and pressure are shown to be the same; however, density has a different shock profile. Careful study of the equations that govern the entropy shows that the inviscid entropy profile has a local maximum within the shock layer. We demonstrate that, because of this phenomenon, the entropy propagation equation cannot be used as a conservation law.

  14. The moving-window Bayesian maximum entropy framework: estimation of PM(2.5) yearly average concentration across the contiguous United States.

    PubMed

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L

    2012-09-01

    Geostatistical methods are widely used in estimating long-term exposures for epidemiological studies on air pollution, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and the uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian maximum entropy (BME) method and applied this framework to estimate fine particulate matter (PM(2.5)) yearly average concentrations over the contiguous US. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air-monitoring system. In the cross-validation analyses conducted on a set of randomly selected complete PM(2.5) data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM(2.5). Moreover, the MWBME method further reduces the MSE by 8.4-43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM(2.5) across large geographical domains with expected spatial non-stationarity.

  15. An improved wavelet neural network medical image segmentation algorithm with combined maximum entropy

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoqian; Tao, Jinxu; Ye, Zhongfu; Qiu, Bensheng; Xu, Jinzhang

    2018-05-01

    To address the problem of medical image segmentation, a wavelet neural network segmentation algorithm based on a combined maximum entropy criterion is proposed. First, a bee colony algorithm is used to optimize the network parameters of the wavelet neural network (network structure, initial weights, threshold values, and so on), so that training converges quickly to high precision and avoids falling into local extrema. Then, the optimal number of iterations is obtained by calculating the maximum entropy of the segmented image, achieving automatic and accurate segmentation. Medical image segmentation experiments show that the proposed algorithm effectively reduces sample training time and improves convergence precision, and its segmentation is more accurate and effective than that of a traditional BP neural network (back-propagation neural network: a multilayer feed-forward network trained with the error back-propagation algorithm).
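
    The maximum-entropy criterion the abstract relies on can be illustrated in isolation. The sketch below (hypothetical Python, not the authors' wavelet-network code) picks a gray-level threshold by maximizing the summed Shannon entropies of background and foreground, in the spirit of Kapur's method:

```python
import numpy as np

def max_entropy_threshold(image, bins=256):
    """Kapur-style maximum-entropy threshold: pick the gray level t that
    maximizes the sum of the Shannon entropies of the two histogram halves."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0          # normalized background distribution
        q1 = p[t:][p[t:] > 0] / p1          # normalized foreground distribution
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

    On a bimodal gray-level distribution, the selected threshold falls in the valley between the two modes.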

  16. Maximum entropy deconvolution of the optical jet of 3C 273

    NASA Technical Reports Server (NTRS)

    Evans, I. N.; Ford, H. C.; Hui, X.

    1989-01-01

    The technique of maximum entropy image restoration is applied to the problem of deconvolving the point spread function from a deep, high-quality V band image of the optical jet of 3C 273. The resulting maximum entropy image has an approximate spatial resolution of 0.6 arcsec and has been used to study the morphology of the optical jet. Four regularly-spaced optical knots are clearly evident in the data, together with an optical 'extension' at each end of the optical jet. The jet oscillates around its center of gravity, and the spatial scale of the oscillations is very similar to the spacing between the optical knots. The jet is marginally resolved in the transverse direction and has an asymmetric profile perpendicular to the jet axis. The distribution of V band flux along the length of the jet, and accurate astrometry of the optical knot positions are presented.
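
    Maximum-entropy image restoration of the kind described above balances fidelity to the blurred data against the entropy of the restored image. The following toy 1D sketch (hypothetical data and regularization weight, not the actual 3C 273 pipeline) minimizes chi-squared minus a small entropy term by projected gradient descent:

```python
import numpy as np

# Toy 1D entropy-regularized deconvolution: minimize ||K f - d||^2 - lam * S(f),
# S(f) = -sum f ln f, with positivity enforced by clipping. All values here are
# illustrative; this is not the 3C 273 analysis itself.

n = 64
x = np.arange(n)
kernel = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 2.0) ** 2)
K = kernel / kernel.sum(axis=1, keepdims=True)      # row-normalized Gaussian PSF

f_true = np.zeros(n)
f_true[[20, 28, 45]] = [1.0, 0.6, 0.8]              # "knots" along the jet
rng = np.random.default_rng(1)
d = K @ f_true + 1e-3 * rng.standard_normal(n)      # blurred, noisy data

f = np.full(n, d.mean())                             # flat (maximum-entropy) start
lam, step = 1e-3, 0.1
chi2_start = np.sum((K @ f - d) ** 2)
for _ in range(500):
    grad = 2 * K.T @ (K @ f - d) + lam * (np.log(np.clip(f, 1e-12, None)) + 1)
    f = np.clip(f - step * grad, 1e-12, None)        # keep the image positive
chi2_end = np.sum((K @ f - d) ** 2)
```

    Production MEM codes replace the fixed step with a constrained optimizer that drives chi-squared to its statistically expected value, but the trade-off being solved is the same.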

  17. Intensity modulation of a terahertz bandpass filter: utilizing image currents induced on MEMS reconfigurable metamaterials.

    PubMed

    Hu, Fangrong; Fan, Yixing; Zhang, Xiaowen; Jiang, Wenying; Chen, Yuanzhi; Li, Peng; Yin, Xianhua; Zhang, Wentao

    2018-01-01

    We experimentally demonstrated a tunable terahertz bandpass filter based on microelectromechanical systems (MEMS) reconfigurable metamaterials. The unit cell of the filter consists of two split-ring resonators (SRRs) and a movable bar. Initially, the movable bar is situated at the center of the unit cell, and the filter has two passbands whose central frequencies are located at 0.65 and 0.96 THz. The intensity of the two passbands can be actively modulated by the movable bar, and a maximum modulation depth of 96% is achieved at 0.96 THz. The mechanism of tunability is investigated using the finite-integration time-domain method. The result shows that the image currents induced on the movable bar are opposite to the resonance currents induced on the SRRs and thus weaken the oscillating intensity of the resonance currents. This scheme paves the way to dynamically controlling and switching the terahertz wave at certain fixed frequencies by means of induced image currents.

  18. Method for integrating microelectromechanical devices with electronic circuitry

    DOEpatents

    Barron, Carole C.; Fleming, James G.; Montague, Stephen

    1999-01-01

    A method is disclosed for integrating one or more microelectromechanical (MEM) devices with electronic circuitry on a common substrate. The MEM device can be fabricated within a substrate cavity and encapsulated with a sacrificial material. This allows the MEM device to be annealed and the substrate planarized prior to forming electronic circuitry on the substrate using a series of standard processing steps. After fabrication of the electronic circuitry, the electronic circuitry can be protected by a two-ply protection layer of titanium nitride (TiN) and tungsten (W) during an etch release process whereby the MEM device is released for operation by etching away a portion of a sacrificial material (e.g. silicon dioxide or a silicate glass) that encapsulates the MEM device. The etch release process is preferably performed using a mixture of hydrofluoric acid (HF) and hydrochloric acid (HCl), which reduces the time for releasing the MEM device compared to use of a buffered oxide etchant. After release of the MEM device, the TiN:W protection layer can be removed with a peroxide-based etchant without damaging the electronic circuitry.

  19. Efficient algorithms and implementations of entropy-based moment closures for rarefied gases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schaerer, Roman Pascal, E-mail: schaerer@mathcces.rwth-aachen.de; Bansal, Pratyuksh; Torrilhon, Manuel

    We present efficient algorithms and implementations of the 35-moment system equipped with the maximum-entropy closure in the context of rarefied gases. While closures based on the principle of entropy maximization have been shown to yield very promising results for moderately rarefied gas flows, the computational cost of these closures is in general much higher than for closure theories with explicit closed-form expressions of the closing fluxes, such as Grad's classical closure. Following a similar approach as Garrett et al. (2015), we investigate efficient implementations of the computationally expensive numerical quadrature method used for the moment evaluations of the maximum-entropy distribution, mapping its inherent fine-grained parallelism onto multi-core processors and graphics cards. We show that using a single graphics card as an accelerator allows speed-ups of two orders of magnitude when compared to a serial CPU implementation. To accelerate the time-to-solution for steady-state problems, we propose a new semi-implicit time discretization scheme. The resulting nonlinear system of equations is solved with a Newton-type method in the Lagrange multipliers of the dual optimization problem in order to reduce the computational cost. Additionally, fully explicit time-stepping schemes of first and second order accuracy are presented. We investigate the accuracy and efficiency of the numerical schemes for several numerical test cases, including a steady-state shock-structure problem.
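
    The dual Newton iteration mentioned in the abstract can be sketched for a single velocity dimension. Here the maximum-entropy ansatz exp(lam . phi(v)) is matched to prescribed moments by Newton's method in the Lagrange multipliers, with a plain quadrature grid standing in for the optimized quadratures of the paper (all values are illustrative, not the 35-moment solver):

```python
import numpy as np

# 1D maximum-entropy moment matching in the dual (Lagrange-multiplier) variables.
v = np.linspace(-10, 10, 2001)                    # quadrature grid
dv = v[1] - v[0]
phi = np.vstack([np.ones_like(v), v, v**2])       # moment basis (1, v, v^2)

target = np.array([1.0, 0.0, 1.0])                # moments of a standard normal
lam = np.array([-1.0, 0.0, -0.4])                 # initial multiplier guess

for _ in range(50):
    g = np.exp(phi.T @ lam)                       # maximum-entropy ansatz
    moments = phi @ g * dv                        # current moments by quadrature
    r = moments - target                          # residual of the moment match
    if np.abs(r).max() < 1e-12:
        break
    J = (phi * g) @ phi.T * dv                    # Hessian of the dual problem
    lam -= np.linalg.solve(J, r)                  # Newton step
```

    For this Gaussian target the converged multiplier on v^2 is the analytic value -1/2; the per-iteration cost is dominated by the quadrature sums, which is exactly the part the paper offloads to the GPU.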

  20. Evaluation of physiologic complexity in time series using generalized sample entropy and surrogate data analysis

    NASA Astrophysics Data System (ADS)

    Eduardo Virgilio Silva, Luiz; Otavio Murta, Luiz

    2012-12-01

    Complexity in time series is an intriguing feature of living dynamical systems, with potential use for identification of system state. Although various methods have been proposed for measuring physiologic complexity, uncorrelated time series are often assigned high values of complexity, erroneously classifying them as complex physiological signals. Here, we propose and discuss a method for complex system analysis based on generalized statistical formalism and surrogate time series. Sample entropy (SampEn) was rewritten, inspired by the Tsallis generalized entropy, as a function of the q parameter (qSampEn). qSDiff curves were calculated, which consist of the differences between the qSampEn of the original and surrogate series. We evaluated qSDiff for 125 real heart rate variability (HRV) dynamics, divided into groups of 70 healthy, 44 congestive heart failure (CHF), and 11 atrial fibrillation (AF) subjects, and for simulated series of stochastic and chaotic processes. The evaluations showed that, for nonperiodic signals, qSDiff curves have a maximum point (qSDiffmax) for q ≠ 1. Values of q where the maximum point occurs and where qSDiff is zero were also evaluated. Only qSDiffmax values were capable of distinguishing the HRV groups (p-values 5.10×10^-3, 1.11×10^-7, and 5.50×10^-7 for healthy vs. CHF, healthy vs. AF, and CHF vs. AF, respectively), consistently with the concept of physiologic complexity, which suggests a potential use for chaotic system analysis.
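
    For reference, the standard (q = 1) sample entropy that qSampEn generalizes can be computed as follows. This is a minimal sketch using the usual Chebyshev-distance template matching (with one fewer template at length m+1 than some definitions), not the authors' generalized implementation:

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): -ln(A/B), where B counts template pairs of length m within
    tolerance r (Chebyshev distance) and A counts pairs of length m + 1.
    Returns inf if no length-(m+1) matches exist."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()          # common default tolerance
    N = len(x)

    def count_matches(mm):
        templ = np.array([x[i:i + mm] for i in range(N - mm)])
        c = 0
        for i in range(len(templ)):
            d = np.max(np.abs(templ[i + 1:] - templ[i]), axis=1)
            c += np.sum(d <= r)    # each pair counted once, self-matches excluded
        return c

    B = count_matches(m)
    A = count_matches(m + 1)
    return np.inf if A == 0 else -np.log(A / B)
```

    As expected, a regular signal (e.g. a sampled sine) yields a much lower SampEn than white noise of the same length.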

  1. Space-based measurements of stratospheric mountain waves by CRISTA 1. Sensitivity, analysis method, and a case study

    NASA Astrophysics Data System (ADS)

    Preusse, Peter; Dörnbrack, Andreas; Eckermann, Stephen D.; Riese, Martin; Schaeler, Bernd; Bacmeister, Julio T.; Broutman, Dave; Grossmann, Klaus U.

    2002-09-01

    The Cryogenic Infrared Spectrometers and Telescopes for the Atmosphere (CRISTA) instrument measured stratospheric temperatures and trace species concentrations with high precision and spatial resolution during two missions. The measuring technique is infrared limb-sounding of optically thin emissions. In a general approach, we investigate the applicability of the technique to measuring gravity waves (GWs) in the retrieved temperature data. It is shown that GWs with horizontal wavelengths of the order of 100-200 km can be detected. The results are applicable to any instrument using the same technique. We discuss additional constraints inherent to the CRISTA instrument. The vertical field of view and the influence of the sampling and retrieval imply that waves with vertical wavelengths of ~3-5 km or larger can be retrieved. Global distributions of GW fluctuations were extracted from temperature data measured by CRISTA using the Maximum Entropy Method (MEM) and Harmonic Analysis (HA), yielding height profiles of vertical wavelength and peak amplitude for the fluctuations in each scanned profile. The method is discussed and compared to Fourier transform analyses and standard deviations. Analysis of data from the first mission reveals large GW amplitudes in the stratosphere over southernmost South America. These waves obey the dispersion relation for linear two-dimensional mountain waves (MWs). The horizontal structure on 6 November 1994 is compared to temperature fields calculated by the Pennsylvania State University (PSU)/National Center for Atmospheric Research (NCAR) mesoscale model (MM5). It is demonstrated that precise knowledge of the instrument's sensitivity is essential. Particularly good agreement is found at the southern tip of South America, where the MM5 accurately reproduces the amplitudes and phases of a large-scale wave with 400 km horizontal wavelength. Targeted ray-tracing simulations allow us to interpret some of the observed wave features. A companion paper will discuss MWs on a global scale and estimate the fraction that MWs contribute to the total GW energy (Preusse et al., in preparation, 2002).

  2. Accuracy in the estimation of quantitative minimal area from the diversity/area curve.

    PubMed

    Vives, Sergi; Salicrú, Miquel

    2005-05-01

    The problem of representativity is fundamental in ecological studies. A qualitative minimal area that gives a good representation of the species pool [C.M. Bouderesque, Methodes d'etude qualitative et quantitative du benthos (en particulier du phytobenthos), Tethys 3(1) (1971) 79] can be discerned from a quantitative minimal area which reflects the structural complexity of the community [F.X. Niell, Sobre la biologia de Ascophyllum nosodum (L.) Le Jolis en Galicia, Invest. Pesq. 43 (1979) 501]. This suggests that the populational diversity can be considered as the value of the horizontal asymptote of the sample diversity/biomass curve [F.X. Niell, Les applications de l'index de Shannon a l'etude de la vegetation interdidale, Soc. Phycol. Fr. Bull. 19 (1974) 238]. In this study we develop an expression to determine minimal areas and use it to obtain information about community structure from diversity/area curve graphs. The expression is based on the functional relationship between the expected value of the diversity and the sample size used to estimate it. To establish the quality of the estimation process, we obtained confidence intervals as a particularization of the (h,phi)-entropies proposed in [M. Salicru, M.L. Menendez, D. Morales, L. Pardo, Asymptotic distribution of (h,phi)-entropies, Commun. Stat. (Theory Methods) 22 (7) (1993) 2015]. As an example demonstrating the possibilities of this method, and for illustrative purposes only, data from a study of the rocky intertidal seaweed populations in the Ria of Vigo (N.W. Spain) are analyzed [F.X. Niell, Estudios sobre la estructura, dinamica y produccion del Fitobentos intermareal (Facies rocosa) de la Ria de Vigo. Ph.D. Mem. University of Barcelona, Barcelona, 1979].

  3. Quantum entropy and uncertainty for two-mode squeezed, coherent and intelligent spin states

    NASA Technical Reports Server (NTRS)

    Aragone, C.; Mundarain, D.

    1993-01-01

    We compute the quantum entropy for monomode and two-mode systems set in squeezed states. Thereafter, the quantum entropy is also calculated for angular momentum algebra when the system is either in a coherent or in an intelligent spin state. These values are compared with the corresponding values of the respective uncertainties. In general, quantum entropies and uncertainties have the same minimum and maximum points. However, for coherent and intelligent spin states, it is found that some minima for the quantum entropy turn out to be uncertainty maxima. We feel that the quantum entropy we use provides the right answer, since it is given in an essentially unique way.

  4. Microelectromechanical resonator and method for fabrication

    DOEpatents

    Wittwer, Jonathan W [Albuquerque, NM; Olsson, Roy H [Albuquerque, NM

    2009-11-10

    A method is disclosed for the robust fabrication of a microelectromechanical (MEM) resonator. In this method, a pattern of holes is formed in the resonator mass, with the position, size and number of holes in the pattern being optimized to minimize an uncertainty Δf in the resonant frequency f0 of the MEM resonator due to manufacturing process variations (e.g. edge bias). A number of different types of MEM resonators are disclosed which can be formed using this method, including capacitively transduced Lamé, wineglass and extensional resonators, and piezoelectric length-extensional resonators.

  5. Microelectromechanical resonator and method for fabrication

    DOEpatents

    Wittwer, Jonathan W [Albuquerque, NM; Olsson, Roy H [Albuquerque, NM

    2010-01-26

    A method is disclosed for the robust fabrication of a microelectromechanical (MEM) resonator. In this method, a pattern of holes is formed in the resonator mass, with the position, size and number of holes in the pattern being optimized to minimize an uncertainty Δf in the resonant frequency f0 of the MEM resonator due to manufacturing process variations (e.g. edge bias). A number of different types of MEM resonators are disclosed which can be formed using this method, including capacitively transduced Lamé, wineglass and extensional resonators, and piezoelectric length-extensional resonators.

  6. Application of SPM interferometry in MEMS vibration measurement

    NASA Astrophysics Data System (ADS)

    Tang, Chaowei; He, Guotian; Xu, Changbiao; Zhao, Lijuan; Hu, Jun

    2007-12-01

    The measurement of the resonant frequency of cantilevers has an important place in MEMS (Micro Electro Mechanical Systems) research. SPM interferometry is a high-precision optical measurement technique that can be used to measure physical quantities such as vibration, displacement, and surface profile. In this paper we propose to apply SPM interferometry to measuring the vibration of a MEMS cantilever; in the experiment, the vibration of the cantilever was driven by a light source and measured with nm precision. The characteristics of MEMS cantilever vibration under optical excitation were thereby obtained, and the measurement principle is analyzed. A feedback control loop eliminates the influence of external interference and light-intensity changes on the measuring precision. Experimental results prove that this measurement method works well.

  7. Gravity waves in the thermosphere observed by the AE satellites

    NASA Technical Reports Server (NTRS)

    Gross, S. H.; Reber, C. A.; Huang, F. T.

    1983-01-01

    Atmospheric Explorer (AE) satellite data were used to investigate the spectral characteristics of wave-like structure observed in the neutral and ionized components of the thermosphere. Power spectra derived by the maximum entropy method indicate the existence of a broad spectrum of scale sizes for the fluctuations, ranging from tens to thousands of kilometers.
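
    A maximum entropy (Burg) power spectrum of the kind used in such analyses can be sketched as follows. This is a generic textbook implementation of Burg's recursion, not the processing actually applied to the AE data:

```python
import numpy as np

def burg_mem(x, order, nfreq=512):
    """Maximum entropy (Burg) power spectrum of a real series x.
    Returns normalized frequencies in [0, 0.5] and the MEM spectrum."""
    x = np.asarray(x, float) - np.mean(x)
    a = np.array([1.0])                       # AR polynomial, a[0] = 1
    E = np.mean(x**2)                         # prediction error power
    f, b = x.copy(), x.copy()                 # forward/backward prediction errors
    for _ in range(order):
        ff, bb = f[1:], b[:-1]
        k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
        a = np.append(a, 0.0) + k * np.append(a, 0.0)[::-1]   # Levinson update
        E *= 1.0 - k * k
        f, b = ff + k * bb, bb + k * ff
    w = np.linspace(0.0, 0.5, nfreq)
    A = np.exp(-2j * np.pi * np.outer(w, np.arange(order + 1))) @ a
    return w, E / np.abs(A) ** 2
```

    A low-order MEM spectrum resolves a narrowband component far more sharply than a periodogram of the same short record, which is why the technique suits the limited along-track sampling of satellite data.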

  8. Mixed memory, (non) Hurst effect, and maximum entropy of rainfall in the tropical Andes

    NASA Astrophysics Data System (ADS)

    Poveda, Germán

    2011-02-01

    Diverse linear and nonlinear statistical parameters of rainfall under aggregation in time, and the kind of temporal memory, are investigated. Data sets from the Andes of Colombia at different resolutions (15 min and 1 h) and record lengths (21 months and 8-40 years) are used. A mixture of two timescales is found in the autocorrelation and autoinformation functions, with short-term memory holding for time lags less than 15-30 min, and long-term memory onwards. Consistently, rainfall variance exhibits different temporal scaling regimes separated at 15-30 min and 24 h. Tests for the Hurst effect evidence the frailty of the R/S approach in discerning the kind of memory in high resolution rainfall, whereas rigorous statistical tests for short-memory processes do reject the existence of the Hurst effect. Rainfall information entropy grows as a power law of aggregation time, S(T) ~ T^β with ⟨β⟩ = 0.51, up to a timescale T_MaxEnt (70-202 h) at which entropy saturates, with β = 0 onwards. Maximum entropy is reached through a dynamic Generalized Pareto distribution, consistently with the maximum information-entropy principle for heavy-tailed random variables, and with its asymptotically infinitely divisible property. The dynamics towards the limit distribution is quantified. Tsallis q-entropies also exhibit power laws with T, S_q(T) ~ T^β(q), with β(q) ⩽ 0 for q ⩽ 0 and β(q) ≃ 0.5 for q ⩾ 1. No clear patterns are found in the geographic distribution within and among the statistical parameters studied, confirming the strong variability of tropical Andean rainfall.

  9. Optimizing an estuarine water quality monitoring program through an entropy-based hierarchical spatiotemporal Bayesian framework

    NASA Astrophysics Data System (ADS)

    Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.

    2013-10-01

    The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.

  10. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    PubMed

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America
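
    The core construction, a canonical distribution whose sensitivity factor β is fixed by the expectation value of the error function, can be sketched for a discrete set of candidate models. The error values and target expectation below are hypothetical stand-ins, not the New Jersey shelf data:

```python
import numpy as np

def canonical_distribution(errors, expected_error, iters=200):
    """Maximum-entropy distribution p_j ∝ exp(-beta * E_j) over a discrete set
    of candidate models, with beta chosen so that <E> equals expected_error."""
    E = np.asarray(errors, float)

    def weights_and_mean(beta):
        w = np.exp(-beta * (E - E.min()))      # shift for numerical stability
        w /= w.sum()
        return w, np.dot(w, E)

    lo, hi = -100.0, 100.0                     # <E>(beta) is decreasing in beta
    for _ in range(iters):                     # bisection on the constraint
        mid = 0.5 * (lo + hi)
        _, m = weights_and_mean(mid)
        if m > expected_error:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    p, _ = weights_and_mean(beta)
    return beta, p
```

    Marginals for an individual parameter then follow by summing p over the other parameters, as the abstract describes.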

  11. Dynamic MEMS devices for multi-axial fatigue and elastic modulus measurement

    NASA Astrophysics Data System (ADS)

    White, Carolyn D.; Xu, Rui; Sun, Xiaotian; Komvopoulos, Kyriakos

    2003-01-01

    For reliable MEMS device fabrication and operation, there is a continued demand for precise characterization of materials at the micron scale. This paper presents a novel material characterization device for fatigue lifetime testing. The fatigue specimen is subjected to multi-axial loading, which is typical of most MEMS devices. Polycrystalline silicon (polysilicon) fatigue devices were fabricated using the MUMPs process with a three-layer mask process (ground plane, anchor, and structural layer of polysilicon). A fatigue device consists of two or three beams, attached to a rotating ring and anchored to the substrate on each end. In order to generate a sufficiently large stress, the fatigue devices were tested in resonance to produce a von Mises equivalent stress as high as 1 GPa, which is in the fracture strength range reported for polysilicon. A further increase of the stress in the beam specimens was obtained by introducing a notch with a focused ion beam. The notch resulted in a stress concentration factor of about 3.8, thereby producing maximum von Mises equivalent stresses in the range of 1 to 4 GPa. This study provides insight into multi-axial fatigue testing under typical MEMS conditions and additional information about the micron-scale mechanical behavior of polysilicon, currently the basic building material for MEMS devices.

  12. Nondestructive surface profiling of hidden MEMS using an infrared low-coherence interferometric microscope

    NASA Astrophysics Data System (ADS)

    Krauter, Johann; Osten, Wolfgang

    2018-03-01

    There is a wide range of applications for micro-electro-mechanical systems (MEMS). The automotive and consumer market is the strongest driver of the growing MEMS industry. A 100% test of MEMS is particularly necessary since these are often used for safety-related purposes such as the ESP (Electronic Stability Program) system. The production of MEMS is a fully automated process that generates 90% of the costs during the packaging and dicing steps. Nowadays, an electrical test is carried out on each individual MEMS component before these steps. However, after encapsulation, MEMS are opaque to visible light and other defects cannot be detected. Therefore, we apply an infrared low-coherence interferometer for the topography measurement of those hidden structures. A lock-in algorithm-based method is shown to calculate the object height and to reduce ghost steps due to the 2π ambiguity. Finally, measurements of different MEMS-based sensors are presented.
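
    The lock-in idea, correlating a sampled fringe signal with quadrature references to recover phase (and hence height), can be sketched as follows. Carrier frequency, amplitudes and phase are illustrative values, not the instrument's actual parameters:

```python
import numpy as np

# Minimal lock-in demodulation sketch: recover the interferometric phase of a
# sampled fringe signal by correlating with quadrature references over an
# integer number of carrier periods.

f0 = 0.1                                   # carrier frequency (cycles/sample)
n = np.arange(100)                         # 10 full carrier periods
phase_true = 0.7                           # phase to recover (radians)
I = 2.0 + 1.5 * np.cos(2 * np.pi * f0 * n + phase_true)       # fringe signal

C = (2.0 / len(n)) * np.sum(I * np.cos(2 * np.pi * f0 * n))   # =  B cos(phase)
S = (2.0 / len(n)) * np.sum(I * np.sin(2 * np.pi * f0 * n))   # = -B sin(phase)
phase_est = np.arctan2(-S, C)
# With wavelength lam, the corresponding height step is lam * phase_est / (4*pi).
```

    Over an integer number of periods the DC term and cross terms cancel exactly, so the estimate is immune to the constant background level.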

  13. Competition between Homophily and Information Entropy Maximization in Social Networks

    PubMed Central

    Zhao, Jichang; Liang, Xiao; Xu, Ke

    2015-01-01

    In social networks, it is conventionally thought that two individuals with more overlapping friends tend to establish a new friendship, which could be stated as homophily breeding new connections, while the recent hypothesis of maximum information entropy has been presented as a possible origin of effective navigation in small-world networks. We find that there exists a competition between information entropy maximization and homophily in local structure, through both theoretical and experimental analysis. This competition suggests that a newly built relationship between two individuals with more common friends leads to less information entropy gain for them. We demonstrate that in the evolution of the social network both assumptions coexist: the rule of maximum information entropy produces weak ties in the network, while the law of homophily makes the network highly clustered locally and gives individuals strong, trusted ties. A toy model is also presented to demonstrate the competition and evaluate the roles of the different rules in the evolution of real networks. Our findings could shed light on social network modeling from a new perspective. PMID:26334994

  14. Nonequilibrium thermodynamics and maximum entropy production in the Earth system: applications and implications.

    PubMed

    Kleidon, Axel

    2009-06-01

    The Earth system is maintained in a unique state far from thermodynamic equilibrium, as, for instance, reflected in the high concentration of reactive oxygen in the atmosphere. The myriad of processes that transform energy, that result in the motion of mass in the atmosphere, in oceans, and on land, processes that drive the global water, carbon, and other biogeochemical cycles, all have in common that they are irreversible in their nature. Entropy production is a general consequence of these processes and measures their degree of irreversibility. The proposed principle of maximum entropy production (MEP) states that systems are driven to steady states in which they produce entropy at the maximum possible rate given the prevailing constraints. In this review, the basics of nonequilibrium thermodynamics are described, as well as how these apply to Earth system processes. Applications of the MEP principle are discussed, ranging from the strength of the atmospheric circulation, the hydrological cycle, and biogeochemical cycles to the role that life plays in these processes. Nonequilibrium thermodynamics and the MEP principle have potentially wide-ranging implications for our understanding of Earth system functioning, how it has evolved in the past, and why it is habitable. Entropy production allows us to quantify an objective direction of Earth system change (closer to vs further away from thermodynamic equilibrium, or, equivalently, towards a state of MEP). When a maximum in entropy production is reached, MEP implies that the Earth system reacts to perturbations primarily with negative feedbacks. In conclusion, this nonequilibrium thermodynamic view of the Earth system shows great promise to establish a holistic description of the Earth as one system. This perspective is likely to allow us to better understand and predict its function as one entity, how it has evolved in the past, and how it is modified by human activities in the future.

  15. Applications of the principle of maximum entropy: from physics to ecology.

    PubMed

    Banavar, Jayanth R; Maritan, Amos; Volkov, Igor

    2010-02-17

    There are numerous situations in physics and other disciplines which can be described at different levels of detail in terms of probability distributions. Such descriptions arise either intrinsically as in quantum mechanics, or because of the vast amount of details necessary for a complete description as, for example, in Brownian motion and in many-body systems. We show that an application of the principle of maximum entropy for estimating the underlying probability distribution can depend on the variables used for describing the system. The choice of characterization of the system carries with it implicit assumptions about fundamental attributes such as whether the system is classical or quantum mechanical or equivalently whether the individuals are distinguishable or indistinguishable. We show that the correct procedure entails the maximization of the relative entropy subject to known constraints and, additionally, requires knowledge of the behavior of the system in the absence of these constraints. We present an application of the principle of maximum entropy to understanding species diversity in ecology and introduce a new statistical ensemble corresponding to the distribution of a variable population of individuals into a set of species not defined a priori.

  16. Moisture sorption isotherms and thermodynamic properties of mexican mennonite-style cheese.

    PubMed

    Martinez-Monteagudo, Sergio I; Salais-Fierro, Fabiola

    2014-10-01

    Moisture adsorption isotherms of fresh and ripened Mexican Mennonite-style cheese were investigated using the static gravimetric method at 4, 8, and 12 °C over a water activity (aw) range of 0.08-0.96. The isotherms were modeled using the GAB, BET, Oswin, and Halsey equations through weighted non-linear regression. All isotherms were sigmoid in shape, corresponding to a type II BET isotherm, and the data were best described by the GAB model. The GAB model coefficients revealed that water adsorption by the cheese matrix is a multilayer process characterized by molecules that are strongly bound in the monolayer and slightly structured in the multilayer. Using the GAB model, it was possible to estimate thermodynamic functions (net isosteric heat, differential entropy, integral enthalpy and entropy, and enthalpy-entropy compensation) as functions of moisture content. For both samples, the isosteric heat and differential entropy decreased exponentially with moisture content. The integral enthalpy gradually decreased with increasing moisture content after reaching a maximum value, while the integral entropy decreased with increasing moisture content after reaching a minimum value. A linear compensation was found between integral enthalpy and entropy, suggesting enthalpy-controlled adsorption. Determining the relationship between moisture content and aw yields important information for controlling ripening, drying, and storage operations, as well as for understanding the state of water within the cheese matrix.
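    The GAB model referred to above has the standard three-parameter form M = M0·C·K·aw / [(1 − K·aw)(1 − K·aw + C·K·aw)]. A minimal fitting sketch under that standard equation follows; the sorption data and parameter values are synthetic, invented purely for illustration (the paper's measurements are not reproduced here). A weighted regression, as in the paper, would additionally pass measurement uncertainties via curve_fit's sigma argument.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gab(aw, M0, C, K):
        """GAB isotherm: equilibrium moisture content vs water activity aw.
        M0 = monolayer moisture content, C = Guggenheim constant, K = multilayer factor."""
        return (M0 * C * K * aw) / ((1 - K * aw) * (1 - K * aw + C * K * aw))

    # Synthetic sorption data generated from assumed parameters, for illustration only
    aw = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])
    m_obs = gab(aw, 5.0, 10.0, 0.85)

    # Non-linear least-squares fit recovers the generating parameters
    params, _ = curve_fit(gab, aw, m_obs, p0=[4.0, 8.0, 0.8])
    ```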

  17. MOEMS FPI sensors for NIR-MIR microspectrometer applications

    NASA Astrophysics Data System (ADS)

    Akujärvi, A.; Guo, B.; Mannila, R.; Rissanen, A.

    2016-03-01

    This paper presents near- and mid-infrared (NIR-MIR) wavelength range optical MEMS Fabry-Perot interferometers (FPIs) developed for automotive and multi-gas sensing applications. The MEMS FPI platform for the NIR range consists of polySi-SiN λ/4 thin-film Bragg reflectors deposited by LPCVD (low-pressure chemical vapour deposition), with the air gap formed by etching a sacrificial SiO2 layer in HF vapour. Characterization results for the NIR MEMS FPI devices for λ = 1.5 - 2.0 μm show a resolution of 15 nm at the optimization wavelength of 1750 nm. We also present a MIR-range MEMS FPI for λ = 2.5 - 3.5 μm, which uses silicon and air within the Bragg reflector structure to provide a high contrast for improved resolution. Characterization results show a FWHM (full width at half maximum) of 20 nm, in comparison to the 50 nm resolution provided by earlier MEMS FPIs realized for hydrocarbon sensing with conventional CVD thin-film materials. The improved resolution and the extended operation region show potential to enable simultaneous sensing of CO2 and multiple hydrocarbons.

  18. Design and implementation of fiber-based multiphoton endoscopy with microelectromechanical systems scanning

    PubMed Central

    Tang, Shuo; Jung, Woonggyu; McCormick, Daniel; Xie, Tuqiang; Su, Jiangping; Ahn, Yeh-Chan; Tromberg, Bruce J.; Chen, Zhongping

    2010-01-01

    A multiphoton endoscopy system has been developed using a two-axis microelectromechanical systems (MEMS) mirror and double-cladding photonic crystal fiber (DCPCF). The MEMS mirror has a 2-mm-diam, 20-deg optical scanning angle, and 1.26-kHz and 780-Hz resonance frequencies on the x and y axes. The maximum number of resolvable focal spots of the MEMS scanner is 720×720 on the x and y axes, which indicates that the MEMS scanner can potentially support high-resolution multiphoton imaging. The DCPCF is compared with standard single-mode fiber and hollow-core photonic bandgap fiber on the basis of dispersion, attenuation, and coupling efficiency properties. The DCPCF has high collection efficiency, and its dispersion can be compensated by grating pairs. Three configurations of probe design are investigated, and their imaging quality and field of view are compared. A two-lens configuration with a collimation and a focusing lens provides the optimum imaging performance and packaging flexibility. The endoscope is applied to image fluorescent microspheres and bovine knee joint cartilage. PMID:19566298

  19. Clauser-Horne-Shimony-Holt violation and the entropy-concurrence plane

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Derkacz, Lukasz; Jakobczyk, Lech

    2005-10-15

    We characterize violation of Clauser-Horne-Shimony-Holt (CHSH) inequalities for mixed two-qubit states by their mixedness and entanglement. The class of states that have maximum degree of CHSH violation for a given linear entropy is also constructed.

  20. Stochastic model of the NASA/MSFC ground facility for large space structures with uncertain parameters: The maximum entropy approach, part 2

    NASA Technical Reports Server (NTRS)

    Hsia, Wei Shen

    1989-01-01

    A validated technology data base is being developed in the areas of control/structures interaction, deployment dynamics, and system performance for Large Space Structures (LSS). A Ground Facility (GF), in which the dynamics and control systems being considered for LSS applications can be verified, was designed and built. One of the important aspects of the GF is to verify the analytical model for the control system design. The procedure is to describe the control system mathematically as well as possible, then to perform tests on the control system, and finally to factor those results into the mathematical model. The reduction of the order of a higher-order control plant was addressed. The computer program for the maximum entropy principle adopted in Hyland's MEOP method was improved and tested against a test problem, producing a very close match. Two methods of model reduction were examined: Wilson's model reduction method and Hyland's optimal projection (OP) method. Design of a computer program for Hyland's OP method was attempted. Due to the difficulty encountered at the stage where a special matrix factorization technique is needed to obtain the required projection matrix, the program was successful up to finding the Linear Quadratic Gaussian solution but not beyond. Numerical results are presented along with computer programs that employed ORACLS.

  1. Research of MPPT for photovoltaic generation based on two-dimensional cloud model

    NASA Astrophysics Data System (ADS)

    Liu, Shuping; Fan, Wei

    2013-03-01

    The cloud model is a mathematical representation of fuzziness and randomness in linguistic concepts. It represents a qualitative concept with an expected value Ex, entropy En, and hyper-entropy He, and integrates the fuzziness and randomness of a linguistic concept in a unified way. The model offers a new method for transformation between qualitative knowledge and quantitative data. This paper introduces an MPPT (maximum power point tracking) controller based on a two-dimensional cloud model, developed by analyzing the auto-optimizing MPPT control of a photovoltaic power system and combining it with cloud-model theory. Simulation results show that the cloud controller is simple and intuitive, and offers strong robustness and better control performance.
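    The forward normal cloud generator that underlies controllers of this kind is commonly described as: draw a randomized entropy En' ~ N(En, He²), then a cloud drop x ~ N(Ex, En'²) with membership exp(−(x − Ex)²/(2·En'²)). A minimal one-dimensional sketch under that standard formulation follows; the paper's two-dimensional controller details are not reproduced, and the parameter values are illustrative.

    ```python
    import math
    import random

    def normal_cloud(Ex, En, He, n, rng=None):
        """One-dimensional forward normal cloud generator.
        Ex = expected value, En = entropy, He = hyper-entropy of the concept.
        Returns n cloud drops as (x, membership) pairs."""
        rng = rng or random.Random(0)
        drops = []
        for _ in range(n):
            En_prime = rng.gauss(En, He)          # randomized entropy
            En_prime = max(abs(En_prime), 1e-12)  # guard against a zero draw
            x = rng.gauss(Ex, En_prime)           # cloud-drop position
            mu = math.exp(-(x - Ex) ** 2 / (2 * En_prime ** 2))
            drops.append((x, mu))
        return drops

    drops = normal_cloud(Ex=0.0, En=1.0, He=0.1, n=1000)
    ```

    A cloud-based controller maps measured deviations (e.g. power and voltage changes around the maximum power point) through such concepts to a control action; the generator above is only the qualitative-to-quantitative half of that transformation.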

  2. Fusion of current technologies with real-time 3D MEMS ladar for novel security and defense applications

    NASA Astrophysics Data System (ADS)

    Siepmann, James P.

    2006-05-01

    Through the utilization of scanning MEMS mirrors in ladar devices, a whole new range of potential military, Homeland Security, law enforcement, and civilian applications is now possible. Currently, ladar devices are typically large (>15,000 cc), heavy (>15 kg), and expensive (>$100,000), while current MEMS ladar designs are smaller by more than an order of magnitude, opening up a myriad of potential new applications. One such application with current technology is a GPS-integrated MEMS ladar unit, which could be used for real-time border monitoring or the creation of virtual 3D battlefields after being dropped or propelled into hostile territory. Another current technology that can be integrated into a MEMS ladar unit is digital video, which can give high resolution and true color to a picture that is then enhanced with range information in a real-time display format that is easier for the user to understand and assimilate than typical gray-scale or false-color images. The problem with using 2-axis MEMS mirrors in ladar devices is that in order to have a resonance frequency capable of practical real-time scanning, they must either be quite small and/or have a low maximum tilt angle; typically, this value has been less than or equal to 10 (mg·mm²·kHz²)·degrees. We have been able to solve this problem by using angle amplification techniques that utilize a series of MEMS mirrors and/or a specialized set of optics to achieve a broad field of view. These techniques and some of their novel applications are explained and discussed herein.

  3. Optimal resolution in maximum entropy image reconstruction from projections with multigrid acceleration

    NASA Technical Reports Server (NTRS)

    Limber, Mark A.; Manteuffel, Thomas A.; Mccormick, Stephen F.; Sholl, David S.

    1993-01-01

    We consider the problem of image reconstruction from a finite number of projections over the space L¹(Ω), where Ω is a compact subset of ℝ². We prove that, given a discretization of the projection space, the function that generates the correct projection data and maximizes the Boltzmann-Shannon entropy is piecewise constant on a certain discretization of Ω, which we call the 'optimal grid'. It is on this grid that one obtains the maximum resolution given the problem setup. The size of this grid grows very quickly as the number of projections and the number of cells per projection grow, indicating that fast computational methods are essential to make its use feasible. We use a Fenchel duality formulation of the problem to keep the number of variables small while still using the optimal discretization, and propose a multilevel scheme to improve convergence of a simple cyclic maximization scheme applied to the dual problem.

  4. Image construction from the IRAS survey and data fusion

    NASA Technical Reports Server (NTRS)

    Bontekoe, Tj. R.

    1990-01-01

    The IRAS survey data can be used successfully to produce images of extended objects. The major difficulties, viz. non-uniform sampling, different response functions for each detector, and varying signal-to-noise levels for each detector and each scan, were resolved. The results of three different image construction techniques are compared: co-addition, constrained least squares, and maximum entropy. The maximum entropy result is superior. An image of the galaxy M51 with an average spatial resolution of 45 arc seconds is presented using 60 micron survey data. This exceeds the telescope diffraction limit of 1 minute of arc at this wavelength. Data fusion is a proposed method for combining data from different instruments, with different spatial resolutions, at different wavelengths. Direct estimates of the physical parameters, temperature, density, and composition can be made from the data without prior image (re)construction. An increase in the accuracy of these parameters is expected as a result of this more systematic approach.

  5. An Improved Evidential-IOWA Sensor Data Fusion Approach in Fault Diagnosis

    PubMed Central

    Zhou, Deyun; Zhuang, Miaoyan; Fang, Xueyi; Xie, Chunhe

    2017-01-01

    As an important tool of information fusion, Dempster–Shafer evidence theory is widely applied to handling uncertain information in fault diagnosis. However, an incorrect result may be obtained if the combined evidence is highly conflicting, which may lead to failure in locating the fault. To deal with this problem, an improved evidential Induced Ordered Weighted Averaging (IOWA) sensor data fusion approach is proposed within the framework of Dempster–Shafer evidence theory. In the new method, the IOWA operator is used to determine the weight of each sensor data source; in determining the parameter of the IOWA, both the distance of evidence and the belief entropy are taken into consideration. First, the α value of the IOWA is obtained from the global distance of evidence and the global belief entropy. Simultaneously, a weight vector is derived from the maximum entropy method model. Then, according to the IOWA operator, the evidence is modified before applying Dempster's combination rule. The proposed method performs better in conflict management and fault diagnosis because the information volume of each piece of evidence is taken into consideration. A numerical example and a case study in fault diagnosis are presented to show the rationality and efficiency of the proposed method. PMID:28927017
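    Dempster's combination rule, which the method above applies after modifying the evidence, can be sketched for two mass functions as follows. The fault hypotheses and mass values are hypothetical examples, not taken from the paper's case study; the modification and IOWA weighting steps are omitted.

    ```python
    def dempster_combine(m1, m2):
        """Dempster's rule of combination for two mass functions whose focal
        elements are frozensets of hypotheses. The conflict mass K (product mass
        falling on empty intersections) is removed by renormalization."""
        combined, conflict = {}, 0.0
        for A, mA in m1.items():
            for B, mB in m2.items():
                inter = A & B
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + mA * mB
                else:
                    conflict += mA * mB
        norm = 1.0 - conflict
        if norm <= 0.0:
            raise ValueError("totally conflicting evidence")
        return {A: v / norm for A, v in combined.items()}

    # Hypothetical masses from two sensors over fault hypotheses F1, F2
    m1 = {frozenset({"F1"}): 0.8, frozenset({"F1", "F2"}): 0.2}
    m2 = {frozenset({"F1"}): 0.6, frozenset({"F2"}): 0.3, frozenset({"F1", "F2"}): 0.1}
    fused = dempster_combine(m1, m2)
    ```

    The failure mode the paper targets is visible here: as the conflict mass approaches 1, the renormalization blows up, which is why highly conflicting evidence is modified before combination.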

  6. High Sensitivity MEMS Strain Sensor: Design and Simulation

    PubMed Central

    Mohammed, Ahmed A. S.; Moussa, Walied A.; Lou, Edmond

    2008-01-01

    In this article, we report on the new design of a miniaturized strain microsensor. The proposed sensor utilizes the piezoresistive properties of doped single-crystal silicon. Employing Micro Electro Mechanical Systems (MEMS) technology, high sensor sensitivities and resolutions have been achieved. The current sensor design employs signal amplification at the geometric, material, and electronic levels. The sensor and the electronic circuits can be integrated on a single chip and packaged as a small functional unit. The sensor converts input strain to a resistance change, which can be transformed to a bridge imbalance voltage. An analog output that demonstrates high sensitivity (0.03 mV/με), high absolute resolution (1 με), and low power consumption (100 μA) with a maximum range of ±4000 με has been reported. These performance characteristics have been achieved with high signal stability over a wide temperature range (±50 °C), which makes the proposed MEMS strain sensor a strong candidate for wireless strain sensing applications under harsh environmental conditions. Moreover, this sensor has been designed and verified, and can easily be modified to measure other quantities such as force and torque. In this work, the sensor design is achieved using the Finite Element Method (FEM) together with piezoresistivity theory. The design process and the microfabrication process flow for prototyping the design are presented. PMID:27879841

  7. A laboratory demonstration of high-resolution hard X-ray and gamma-ray imaging using Fourier-transform techniques

    NASA Technical Reports Server (NTRS)

    Palmer, David; Prince, Thomas A.

    1987-01-01

    A laboratory imaging system has been developed to study the use of Fourier-transform techniques in high-resolution hard X-ray and gamma-ray imaging, with particular emphasis on possible applications to high-energy astronomy. Considerations for the design of a Fourier-transform imager and the instrumentation used in the laboratory studies are described. Several analysis methods for image reconstruction are discussed, including the CLEAN algorithm and maximum entropy methods. Images obtained using these methods are presented.

  8. A hybrid indoor ambient light and vibration energy harvester for wireless sensor nodes.

    PubMed

    Yu, Hua; Yue, Qiuqin; Zhou, Jielin; Wang, Wei

    2014-05-19

    To take advantage of applications where both light and vibration energy are available, a hybrid indoor ambient light and vibration energy harvesting scheme is proposed in this paper. This scheme uses only one power conditioning circuit to condition the combined output power harvested from both energy sources, so as to reduce power dissipation. In order to predict more accurately the instantaneous power harvested from the solar panel, an improved five-parameter model for small-scale solar panels operating under low-light illumination is presented. The output voltage is increased by using a MEMS piezoelectric cantilever array architecture, which overcomes the low output voltage of traditional MEMS vibration energy harvesters. Maximum power point tracking (MPPT) for the indoor ambient light source is implemented using discrete analog components, which improves the efficiency of the whole harvester significantly compared to a digital signal processor. The output power of the vibration energy harvester is improved by using an impedance matching technique. An efficient mechanism of energy accumulation and bleed-off is also discussed. Experimental results obtained from an amorphous-silicon (a-Si) solar panel of 4.8 × 2.0 cm² and a fabricated piezoelectric MEMS generator of 11 × 12.4 mm² show that the hybrid energy harvester achieves a maximum efficiency of around 76.7%.

  9. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information to an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method, which turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
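    The MRE-update idea (choose member weights that minimize relative entropy to the original uniform weights, with the forecast imposed as an equality constraint) can be sketched numerically as below. The ensemble values and forecast mean are invented for illustration, and a direct constrained optimizer stands in for whatever solution method the paper uses.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def mre_weights(members, target_mean):
        """Minimum relative entropy update: find weights w minimizing
        KL(w || uniform) = sum_i w_i ln(n w_i), subject to the forecast
        constraint sum_i w_i x_i = target_mean and sum_i w_i = 1."""
        n = len(members)
        objective = lambda w: np.sum(w * np.log(n * w))
        constraints = [
            {"type": "eq", "fun": lambda w: w.sum() - 1.0},
            {"type": "eq", "fun": lambda w: w @ members - target_mean},
        ]
        w0 = np.full(n, 1.0 / n)
        res = minimize(objective, w0, constraints=constraints,
                       bounds=[(1e-9, 1.0)] * n, method="SLSQP")
        return res.x

    ens = np.array([10.0, 12.0, 15.0, 18.0, 25.0])  # invented historical members
    w = mre_weights(ens, target_mean=14.0)          # forecast pulls the mean below 16
    ```

    Because only the weights change, the ensemble members themselves are untouched, which is exactly the property the abstract emphasizes.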

  10. Optimal information networks: Application for data-driven integrated health in populations

    PubMed Central

    Servadio, Joseph L.; Convertino, Matteo

    2018-01-01

    Development of composite indicators for integrated health in populations typically relies on a priori assumptions rather than model-free, data-driven evidence. Traditional variable selection processes tend not to consider relatedness and redundancy among variables, instead considering only individual correlations. In addition, a unified method for assessing integrated health statuses of populations is lacking, making systematic comparison among populations impossible. We propose the use of maximum entropy networks (MENets) that use transfer entropy to assess interrelatedness among selected variables considered for inclusion in a composite indicator. We also define optimal information networks (OINs) that are scale-invariant MENets, which use the information in constructed networks for optimal decision-making. Health outcome data from multiple cities in the United States are applied to this method to create a systemic health indicator, representing integrated health in a city. PMID:29423440

  11. Effect of diet processing method and ingredient substitution on feed characteristics and survival of larval walleye, Sander vitreus

    USGS Publications Warehouse

    Barrows, F.T.; Lellis, W.A.

    2006-01-01

    Two methods were developed for the production of larval fish diets. The first method, microextrusion marumerization (MEM), has been tested in laboratory feeding trials for many years and produces particles that are palatable and water stable. The second method, particle-assisted rotational agglomeration (PARA), produced diets with lower density than diets produced by MEM. Each method was used to produce diets in the 250- to 400- and 400- to 700-μm range and compared with a reference diet (Fry Feed Kyowa [FFK]) for feeding larval walleye in two experiments. The effect of substituting 4% of the fish meal with freeze-dried Artemia fines was also investigated. In the first experiment, 30-d survival was greater (P < 0.05) for fish fed a diet produced by PARA without Artemia (49.1%) than for fish fed the same diet produced by MEM (27.6%). The addition of Artemia to a diet produced by MEM did not increase survival of larval walleye. Fish fed the reference diet had 24.4% survival. In the second experiment, there was an effect of both processing method and Artemia supplementation, and an interaction of these effects, on survival. Fish fed a diet produced by PARA without Artemia supplementation had 48.4% survival, and fish fed the same diet produced by MEM had only 19.6% survival. Inclusion of 4% freeze-dried Artemia improved (P < 0.04) survival of fish fed MEM particles but not those fed PARA particles. Fish fed FFK had greater weight gain than fish fed other diets in both experiments. Data indicate that the PARA method of diet processing produces smaller, lower-density particles than the MEM process and that diets produced by the PARA process support higher survival of larval walleye with low capital and operating costs. © Copyright by the World Aquaculture Society 2006.

  12. Computational Amide I Spectroscopy for Refinement of Disordered Peptide Ensembles: Maximum Entropy and Related Approaches

    NASA Astrophysics Data System (ADS)

    Reppert, Michael; Tokmakoff, Andrei

    The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary-structure information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly report on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.

  13. Using Maximum Entropy to Find Patterns in Genomes

    NASA Astrophysics Data System (ADS)

    Liu, Sophia; Hockenberry, Adam; Lancichinetti, Andrea; Jewett, Michael; Amaral, Luis

    The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. To accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. This approach can also easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs, which will lead to a better understanding of biological processes. National Institute of General Medical Science, Northwestern University Presidential Fellowship, National Science Foundation, David and Lucile Packard Foundation, Camille Dreyfus Teacher Scholar Award.
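    A maximum-entropy choice among synonymous codons under a GC constraint takes the exponential-family form p(codon) ∝ exp(λ·GC(codon)) within each amino acid's codon set. The sketch below is a minimal illustration under that assumption, with a deliberately truncated codon table; it is not the authors' actual tool, and in practice λ would be tuned (e.g. by bisection) so that the expected GC content matches the target.

    ```python
    import math
    import random

    # Truncated codon table for illustration; a real tool would cover all 61 sense codons
    CODONS = {
        "K": ["AAA", "AAG"],
        "F": ["TTT", "TTC"],
        "G": ["GGT", "GGC", "GGA", "GGG"],
    }

    def gc_count(codon):
        return sum(base in "GC" for base in codon)

    def codon_probs(aa, lam):
        """Maximum-entropy distribution over synonymous codons:
        p(codon) proportional to exp(lam * GC(codon)) is the least-biased
        choice consistent with a constraint on expected GC content."""
        weights = [math.exp(lam * gc_count(c)) for c in CODONS[aa]]
        total = sum(weights)
        return [w / total for w in weights]

    def sample_sequence(protein, lam, rng=None):
        """Sample one coding sequence with the given amino acid sequence fixed."""
        rng = rng or random.Random(0)
        return "".join(
            rng.choices(CODONS[aa], weights=codon_probs(aa, lam))[0] for aa in protein
        )

    seq = sample_sequence("KGFK", lam=1.0)  # lam > 0 biases toward GC-rich codons
    ```

    Because the amino acid sequence is held fixed and only the single GC constraint tilts the codon choice, the sampled sequences are maximally unbiased given those two constraints, which is the property the abstract claims for its null models.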

  14. It is not the entropy you produce, rather, how you produce it

    PubMed Central

    Volk, Tyler; Pauluis, Olivier

    2010-01-01

    The principle of maximum entropy production (MEP) seeks to better understand a large variety of the Earth's environmental and ecological systems by postulating that processes far from thermodynamic equilibrium will ‘adapt to steady states at which they dissipate energy and produce entropy at the maximum possible rate’. Our aim in this ‘outside view’, invited by Axel Kleidon, is to focus on what we think is an outstanding challenge for MEP and for irreversible thermodynamics in general: making specific predictions about the relative contribution of individual processes to entropy production. Using studies that compared entropy production in the atmosphere of a dry versus humid Earth, we show that two systems might have the same entropy production rate but very different internal dynamics of dissipation. Using the results of several of the papers in this special issue and a thought experiment, we show that components of life-containing systems can evolve to either lower or raise the entropy production rate. Our analysis makes explicit fundamental questions for MEP that should be brought into focus: can MEP predict not just the overall state of entropy production of a system but also the details of the sub-systems of dissipaters within the system? Which fluxes of the system are those that are most likely to be maximized? How it is possible for MEP theory to be so domain-neutral that it can claim to apply equally to both purely physical–chemical systems and also systems governed by the ‘laws’ of biological evolution? We conclude that the principle of MEP needs to take on the issue of exactly how entropy is produced. PMID:20368249

  15. Application of Micro-Electro-Mechanical Sensors Contactless NDT of Concrete Structures.

    PubMed

    Ham, Suyun; Popovics, John S

    2015-04-17

    The utility of micro-electro-mechanical systems (MEMS) sensors for air-coupled (contactless or noncontact) sensing in concrete nondestructive testing (NDT) is studied in this paper. The fundamental operation and characteristics of MEMS are first described. Then the application of MEMS sensors to established concrete test methods, including vibration resonance, impact-echo, ultrasonic surface wave, and multi-channel analysis of surface waves (MASW), is demonstrated. In each test application, the performance of MEMS is compared with conventional contactless and contact sensing technology. The favorable performance of the MEMS sensors, as compared with conventional sensor technology, demonstrates the potential of air-coupled MEMS for applied contactless NDT of concrete.

  16. MEMS actuators and sensors: observations on their performance and selection for purpose

    NASA Astrophysics Data System (ADS)

    Bell, D. J.; Lu, T. J.; Fleck, N. A.; Spearing, S. M.

    2005-07-01

    This paper presents an exercise in comparing the performance of microelectromechanical systems (MEMS) actuators and sensors as a function of operating principle. Data have been obtained from the literature for the mechanical performance characteristics of actuators, force sensors and displacement sensors. On-chip and off-chip actuators and sensors are each sub-grouped into families, classes and members according to their principle of operation. The performance of MEMS sharing common operating principles is compared with each other and with equivalent macroscopic devices. The data are used to construct performance maps showing the capability of existing actuators and sensors in terms of maximum force and displacement capability, resolution and frequency. These can also be used as a preliminary design tool, as shown in a case study on the design of an on-chip tensile test machine for materials in thin-film form.

  17. A multi wavelength study of the circumnuclear region of NGC 1365

    NASA Astrophysics Data System (ADS)

    Kristen, H.; Sandqvist, A. A.; Lindblad, P. O.

    We select a sample of five barred spiral galaxies in order to test previous findings concerning NGC 1365. The H I halos of the investigated objects are found to be compact, and the effect of companions on the H I extent is illustrated. The central region of NGC 1365 has been mapped in the J = 3-2 CO emission line with the 15-m SEST, which has a HPBW of 15" at the frequency of this transition. The observing grid has a 5"-spacing in the inner and a 10"-spacing in the outer region. A Maximum Entropy Method (MEM) deconvolution has been performed on the inner-region observations. A circumnuclear molecular torus with a radius of about 5" is the dominant feature. Molecular emission is also seen coming from various dust streamers in the bar of the galaxy. The velocity field of the molecular region agrees well with predictions of models of gas streaming in the bar and nuclear region. Comparisons with 7"-resolution VLA observations of H I absorption in the nuclear region are discussed. The morphology and kinematics of the high-excitation outflow cone in the nuclear region of the Seyfert 1.5 galaxy NGC 1365 are investigated. The opening angle of the cone is 100 degrees, and the orientation is such that the line of sight to the Seyfert 1.5 nucleus falls inside the cone. The outflow velocities within the cone are accelerated and fall off towards the edge. An HST FOC exposure of the nuclear region in [OIII] emission shows the cone to contain a large number of discrete clouds. Seen in the B band, a number of star-forming regions surround the nucleus, of which the brightest appear to have absolute magnitudes of about -16.6.

  18. Self-Alignment MEMS IMU Method Based on the Rotation Modulation Technique on a Swing Base

    PubMed Central

    Chen, Zhiyong; Yang, Haotian; Wang, Chengbin; Lin, Zhihui; Guo, Meifeng

    2018-01-01

    The micro-electro-mechanical-system (MEMS) inertial measurement unit (IMU) has been widely used in the field of inertial navigation due to its small size, low cost, and light weight, but aligning MEMS IMUs remains a challenge for researchers. MEMS IMUs have been conventionally aligned on a static base, requiring other sensors, such as magnetometers or satellites, to provide auxiliary information, which limits its application range to some extent. Therefore, improving the alignment accuracy of MEMS IMU as much as possible under swing conditions is of considerable value. This paper proposes an alignment method based on the rotation modulation technique (RMT), which is completely self-aligned, unlike the existing alignment techniques. The effect of the inertial sensor errors is mitigated by rotating the IMU. Then, inertial frame-based alignment using the rotation modulation technique (RMT-IFBA) achieved coarse alignment on the swing base. The strong tracking filter (STF) further improved the alignment accuracy. The performance of the proposed method was validated with a physical experiment, and the results of the alignment showed that the standard deviations of pitch, roll, and heading angle were 0.0140°, 0.0097°, and 0.91°, respectively, which verified the practicality and efficacy of the proposed method for the self-alignment of the MEMS IMU on a swing base. PMID:29649150

  19. Reconstruction of calmodulin single-molecule FRET states, dye interactions, and CaMKII peptide binding by MultiNest and classic maximum entropy

    NASA Astrophysics Data System (ADS)

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2013-08-01

    We analyzed single molecule FRET burst measurements using Bayesian nested sampling. The MultiNest algorithm produces accurate FRET efficiency distributions from single-molecule data. FRET efficiency distributions recovered by MultiNest and classic maximum entropy are compared for simulated data and for calmodulin labeled at residues 44 and 117. MultiNest compares favorably with maximum entropy analysis for simulated data, judged by the Bayesian evidence. FRET efficiency distributions recovered for calmodulin labeled with two different FRET dye pairs depended on the dye pair and changed upon Ca2+ binding. We also looked at the FRET efficiency distributions of calmodulin bound to the calcium/calmodulin dependent protein kinase II (CaMKII) binding domain. For both dye pairs, the FRET efficiency distribution collapsed to a single peak in the case of calmodulin bound to the CaMKII peptide. These measurements strongly suggest that consideration of dye-protein interactions is crucial in forming an accurate picture of protein conformations from FRET data.

  20. Reconstruction of Calmodulin Single-Molecule FRET States, Dye-Interactions, and CaMKII Peptide Binding by MultiNest and Classic Maximum Entropy

    PubMed Central

    DeVore, Matthew S.; Gull, Stephen F.; Johnson, Carey K.

    2013-01-01

    We analyze single molecule FRET burst measurements using Bayesian nested sampling. The MultiNest algorithm produces accurate FRET efficiency distributions from single-molecule data. FRET efficiency distributions recovered by MultiNest and classic maximum entropy are compared for simulated data and for calmodulin labeled at residues 44 and 117. MultiNest compares favorably with maximum entropy analysis for simulated data, judged by the Bayesian evidence. FRET efficiency distributions recovered for calmodulin labeled with two different FRET dye pairs depended on the dye pair and changed upon Ca2+ binding. We also looked at the FRET efficiency distributions of calmodulin bound to the calcium/calmodulin dependent protein kinase II (CaMKII) binding domain. For both dye pairs, the FRET efficiency distribution collapsed to a single peak in the case of calmodulin bound to the CaMKII peptide. These measurements strongly suggest that consideration of dye-protein interactions is crucial in forming an accurate picture of protein conformations from FRET data. PMID:24223465

  1. Reconstruction of Calmodulin Single-Molecule FRET States, Dye-Interactions, and CaMKII Peptide Binding by MultiNest and Classic Maximum Entropy.

    PubMed

    Devore, Matthew S; Gull, Stephen F; Johnson, Carey K

    2013-08-30

    We analyze single molecule FRET burst measurements using Bayesian nested sampling. The MultiNest algorithm produces accurate FRET efficiency distributions from single-molecule data. FRET efficiency distributions recovered by MultiNest and classic maximum entropy are compared for simulated data and for calmodulin labeled at residues 44 and 117. MultiNest compares favorably with maximum entropy analysis for simulated data, judged by the Bayesian evidence. FRET efficiency distributions recovered for calmodulin labeled with two different FRET dye pairs depended on the dye pair and changed upon Ca2+ binding. We also looked at the FRET efficiency distributions of calmodulin bound to the calcium/calmodulin dependent protein kinase II (CaMKII) binding domain. For both dye pairs, the FRET efficiency distribution collapsed to a single peak in the case of calmodulin bound to the CaMKII peptide. These measurements strongly suggest that consideration of dye-protein interactions is crucial in forming an accurate picture of protein conformations from FRET data.

  2. Monolithic integration of a MOSFET with a MEMS device

    DOEpatents

    Bennett, Reid; Draper, Bruce

    2003-01-01

    An integrated microelectromechanical system comprises at least one MOSFET interconnected to at least one MEMS device on a common substrate. A method for integrating the MOSFET with the MEMS device comprises fabricating the MOSFET and MEMS device monolithically on the common substrate. Conveniently, the gate insulator, gate electrode, and electrical contacts for the gate, source, and drain can be formed simultaneously with the MEMS device structure, thereby eliminating many process steps and materials. In particular, the gate electrode and electrical contacts of the MOSFET and the structural layers of the MEMS device can be doped polysilicon. Dopant diffusion from the electrical contacts is used to form the source and drain regions of the MOSFET. The thermal diffusion step for forming the source and drain of the MOSFET can comprise one or more of the thermal anneal steps to relieve stress in the structural layers of the MEMS device.

  3. The moving-window Bayesian Maximum Entropy framework: Estimation of PM2.5 yearly average concentration across the contiguous United States

    PubMed Central

    Akita, Yasuyuki; Chen, Jiu-Chiuan; Serre, Marc L.

    2013-01-01

    Geostatistical methods are widely used in estimating long-term exposures for air pollution epidemiological studies, despite their limited capabilities to handle spatial non-stationarity over large geographic domains and uncertainty associated with missing monitoring data. We developed a moving-window (MW) Bayesian Maximum Entropy (BME) method and applied this framework to estimate fine particulate matter (PM2.5) yearly average concentrations over the contiguous U.S. The MW approach accounts for the spatial non-stationarity, while the BME method rigorously processes the uncertainty associated with data missingness in the air monitoring system. In cross-validation analyses conducted on a set of randomly selected complete PM2.5 data in 2003 and on simulated data with different degrees of missing data, we demonstrate that the MW approach alone leads to at least a 17.8% reduction in mean square error (MSE) in estimating the yearly PM2.5. Moreover, the MWBME method further reduces the MSE by 8.4% to 43.7% as the proportion of incomplete data increases from 18.3% to 82.0%. The MWBME approach leads to significant reductions in estimation error and thus is recommended for epidemiological studies investigating the effect of long-term exposure to PM2.5 across large geographical domains with expected spatial non-stationarity. PMID:22739679

  4. Molecular extended thermodynamics of rarefied polyatomic gases and wave velocities for increasing number of moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arima, Takashi, E-mail: tks@stat.nitech.ac.jp; Mentrelli, Andrea, E-mail: andrea.mentrelli@unibo.it; Ruggeri, Tommaso, E-mail: tommaso.ruggeri@unibo.it

    Molecular extended thermodynamics of rarefied polyatomic gases is characterized by two hierarchies of equations for moments of a suitable distribution function, in which the internal degrees of freedom of a molecule are taken into account. On the basis of physical relevance, the truncation orders of the two hierarchies are proven not to be independent of each other, and the closure procedures based on the maximum entropy principle (MEP) and on the entropy principle (EP) are proven to be equivalent. The characteristic velocities of the emerging hyperbolic system of differential equations are compared to those obtained for monatomic gases, and the lower bound estimate for the maximum equilibrium characteristic velocity established for monatomic gases (characterized by only one hierarchy for moments, with truncation order N) by Boillat and Ruggeri (1997), λ(N)^(E,max)/c0 ≥ √((6/5)(N − 1/2)) with c0 = √((5/3)(k/m)T), is proven to hold also for rarefied polyatomic gases, independently of the degrees of freedom of a molecule. -- Highlights: • Molecular extended thermodynamics of rarefied polyatomic gases is studied. • The relation between the two hierarchies of equations for moments is derived. • The equivalence of the maximum entropy principle and the entropy principle is proven. • The characteristic velocities are compared to those of monatomic gases. • The lower bound of the maximum characteristic velocity is estimated.

  5. Quantum Rényi relative entropies affirm universality of thermodynamics.

    PubMed

    Misra, Avijit; Singh, Uttam; Bera, Manabendra Nath; Rajagopal, A K

    2015-10-01

    We formulate a complete theory of quantum thermodynamics in the Rényi entropic formalism, exploiting the Rényi relative entropies and starting from the maximum entropy principle. In establishing the first and second laws of quantum thermodynamics, we correctly identify accessible work and heat exchange in both equilibrium and nonequilibrium cases. The free energy (internal energy minus temperature times entropy) remains unaltered when all the entities entering this relation are suitably defined. Exploiting the Rényi relative entropies, we show that this "form invariance" holds even beyond equilibrium and has profound operational significance in isothermal processes. These results reduce to the Gibbs-von Neumann results when the Rényi entropic parameter α approaches 1. Moreover, it is shown that the universality of the Carnot statement of the second law is a consequence of the form invariance of the free energy, which is in turn a consequence of the maximum entropy principle. Further, the Clausius inequality, which is the precursor to the Carnot statement, is also shown to hold based on the data processing inequalities for the traditional and sandwiched Rényi relative entropies. Thus, we find that the thermodynamics of the nonequilibrium state and its deviation from equilibrium together determine the thermodynamic laws. This is another important manifestation of the concepts of information theory in thermodynamics when they are extended to the quantum realm. Our work is a substantial step towards formulating a complete theory of quantum thermodynamics and a corresponding resource theory.
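
    The α → 1 reduction to the Gibbs-von Neumann (Umegaki) relative entropy mentioned in this record is easy to check numerically. The sketch below evaluates the sandwiched Rényi relative entropy for a pair of 2×2 density matrices and compares it with the ordinary quantum relative entropy near α = 1; the specific states are illustrative assumptions, not from the paper.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def mpow(A, t):
        # fractional power of a positive-definite Hermitian matrix
        w, V = np.linalg.eigh(A)
        return (V * w**t) @ V.conj().T

    def sandwiched_renyi(rho, sigma, alpha):
        # D_alpha(rho||sigma) = log Tr[(sigma^((1-a)/2a) rho sigma^((1-a)/2a))^a] / (a-1)
        s = mpow(sigma, (1 - alpha) / (2 * alpha))
        M = s @ rho @ s
        return np.log(np.trace(mpow(M, alpha)).real) / (alpha - 1)

    def umegaki(rho, sigma):
        # ordinary quantum relative entropy, the alpha -> 1 limit
        return np.trace(rho @ (logm(rho) - logm(sigma))).real

    rho = np.array([[0.7, 0.2], [0.2, 0.3]])     # a valid density matrix
    sigma = np.eye(2) / 2.0                       # maximally mixed state
    d_near_one = sandwiched_renyi(rho, sigma, 1.0001)
    d_limit = umegaki(rho, sigma)                 # the two should nearly agree
    ```

    The same check with the traditional (Petz) Rényi relative entropy gives the same α → 1 limit, consistent with the reduction to the Gibbs-von Neumann results stated above.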

  6. MEMS micromirrors for optical switching in multichannel spectrophotometers

    NASA Astrophysics Data System (ADS)

    Tuantranont, Adisorn; Lomas, Tanom; Bright, Victor M.

    2004-04-01

    This paper reports, for the first time, a novel MEMS-based micromirror switch successfully demonstrated for optical switching in a multi-channel fiber-optic spectrophotometer system. The conventional optomechanical fiber-optic switches for multi-channel spectrophotometers available on the market are bulky, slow, expensive, and limited in channel count. Our foundry MEMS-based micromirror switch, designed for integration with commercially available spectrophotometers, offers a more compact device, an increased number of probing channels, higher performance, and lower cost. Our MEMS-based micromirror switch is a surface-micromachined mirror fabricated through the MUMPs foundry. The 280 μm x 280 μm gold-coated mirror is suspended by a double-gimbal structure for X- and Y-axis scanning. Self-assembly by solders is used to elevate the torsion mirror 30 μm over the substrate to achieve a large scan angle. The solder self-assembly approach dramatically reduces the time to assemble the switch. The scan mirror is electrostatically controlled by applying voltages. The individual probing signal from each probing head is guided by fibers with collimated lenses and is incident on the center of the mirror. The operating scan angle is in the range of 3.5 degrees with a driving voltage of 0-100 V. The fastest switching time of 4 milliseconds (1 ms rise time and 3 ms fall time) is measured, corresponding to a maximum mirror speed of 0.25 kHz when the mirror is scanning at +/- 1.5 degrees. The micromirror switch is packaged with a multi-mode fiber bundle using an active alignment technique. A centered fiber is the output fiber that is connected to the spectrophotometer. A maximum insertion loss of 5 dB has been obtained. The accuracy of the measured spectral data is equivalent to that of a single-channel spectrophotometer, with a small degradation of the probing signal due to fiber coupling.

  7. Fabrication and performance evaluation of a metal-based bimorph piezoelectric MEMS generator for vibration energy harvesting

    NASA Astrophysics Data System (ADS)

    Kuo, Chun-Liang; Lin, Shun-Chiu; Wu, Wen-Jong

    2016-10-01

    This paper presents the development of a bimorph microelectromechanical system (MEMS) generator for vibration energy harvesting. The bimorph generator has a cantilever beam structure formed by laminating two lead zirconate titanate thick-film layers on both sides of a stainless steel substrate. Aiming to scavenge vibration energy efficiently from the environment and transform it into useful electrical energy, the two piezoelectric layers on the device can be poled for serial or parallel connection to enhance the output voltage or output current, respectively. In addition, a tungsten proof mass is bonded at the tip of the device to adjust the resonance frequency. The experimental results show the superior performance of the generator. At the 0.5 g base excitation acceleration level, the device poled for serial connection and the device poled for parallel connection possess open-circuit output voltages of 11.6 VP-P and 20.1 VP-P, respectively. The device poled for parallel connection reaches a maximum power output of 423 μW and an output voltage of 15.2 VP-P at an excitation frequency of 143.4 Hz and an externally applied base excitation acceleration of 1.5 g, whereas the device poled for serial connection achieves a maximum power output of 413 μW and an output voltage of 33.0 VP-P at an excitation frequency of 140.8 Hz and an externally applied base excitation acceleration of 1.5 g. To demonstrate the feasibility of the MEMS generator for real applications, we demonstrated a self-powered Bluetooth low energy wireless temperature sensor sending readings to a smartphone using only the power harvested from vibration by the MEMS generator.

  8. Energy conservation and maximal entropy production in enzyme reactions.

    PubMed

    Dobovišek, Andrej; Vitas, Marko; Brumen, Milan; Fajmut, Aleš

    2017-08-01

    A procedure for the maximization of the density of entropy production in a single stationary two-step enzyme reaction is developed. Under the constraints of mass conservation, a fixed equilibrium constant of the reaction, and fixed products of the forward and backward enzyme rate constants, the existence of a maximum in the density of entropy production is demonstrated. In the state with maximal density of entropy production, the optimal enzyme rate constants, the stationary concentrations of the substrate and the product, the stationary product yield, and the stationary reaction flux are calculated. The test of whether these calculated values of the reaction parameters are consistent with their corresponding measured values is performed for the enzyme Glucose Isomerase. It is found that the calculated and measured rate constants agree within an order of magnitude, whereas the calculated reaction flux and product yield differ from their corresponding measured values by less than 20% and 5%, respectively. This indicates that the enzyme Glucose Isomerase, considered in a non-equilibrium stationary state, as found in experiments using continuous stirred tank reactors, possibly operates close to the state with the maximum in the density of entropy production. Copyright © 2017 Elsevier B.V. All rights reserved.
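
    The existence of an interior maximum under constraints of this kind can be illustrated numerically. The sketch below uses a generic two-step reversible cycle E+S ⇌ ES ⇌ E+P with the standard steady-state cycle flux, holding the overall equilibrium constant and the products of paired forward and backward rate constants fixed while one rate constant is scanned. All parameter values are hypothetical, not the measured Glucose Isomerase constants.

    ```python
    import numpy as np

    # Hypothetical parameters for the two-step cycle E+S <-> ES <-> E+P.
    S, P = 1.0, 0.1                 # fixed substrate and product concentrations
    c1, c2 = 1.0, 1.0               # fixed products k1*k_m1 and k2*k_m2
    K_eq = 100.0                    # fixed overall equilibrium constant (k1*k2)/(k_m1*k_m2)
    q = np.sqrt(K_eq * c1 * c2)     # choosing k2 = q/k1 keeps K_eq fixed

    def entropy_production(k1):
        k_m1 = c1 / k1
        k2 = q / k1
        k_m2 = c2 / k2
        # steady-state cycle flux per enzyme (King-Altman, two-state cycle)
        J = (k1 * S * k2 - k_m1 * k_m2 * P) / (k_m1 + k2 + k1 * S + k_m2 * P)
        affinity = np.log(K_eq * S / P)   # reaction affinity in units of RT
        return J * affinity               # entropy production, in units of R

    k1_grid = np.logspace(-2, 2, 2000)
    sigma = entropy_production(k1_grid)
    k1_opt = k1_grid[np.argmax(sigma)]    # interior maximum, here near k1 ~ 3.3
    ```

    With these constraints the affinity is fixed, so the scan effectively maximizes the flux; the entropy production rises, peaks at an intermediate k1, and falls off at both extremes, which is the qualitative behavior the procedure above formalizes.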

  9. The Development and Validation of Novel, Simple High-Performance Liquid Chromatographic Method with Refractive Index Detector for Quantification of Memantine Hydrochloride in Dissolution Samples.

    PubMed

    Sawant, Tukaram B; Wakchaure, Vikas S; Rakibe, Udyakumar K; Musmade, Prashant B; Chaudhari, Bhata R; Mane, Dhananjay V

    2017-07-01

    The present study aimed to develop an analytical method for the quantification of memantine (MEM) hydrochloride in dissolution samples using high-performance liquid chromatography with a refractive index (RI) detector. The chromatographic separation was achieved on a C18 (250 × 4.5 mm, 5 μm) column using an isocratic mobile phase comprising buffer (pH 5.2):methanol (40:60 v/v) pumped at a flow rate of 1.0 mL/min. The column effluents were monitored using the RI detector. The retention time of MEM was found to be ~6.5 ± 0.3 min. The developed chromatographic method was validated and found to be linear over the concentration range of 5.0-45.0 μg/mL for MEM. The mean recovery of MEM was found to be 99.2 ± 0.5% (w/w). The method was found to be simple, fast, precise and accurate, and it can be utilized for the quantification of MEM in dissolution samples. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  10. A large-scan-angle piezoelectric MEMS optical scanner actuated by a Nb-doped PZT thin film

    NASA Astrophysics Data System (ADS)

    Naono, Takayuki; Fujii, Takamichi; Esashi, Masayoshi; Tanaka, Shuji

    2014-01-01

    Resonant 1D microelectromechanical systems (MEMS) optical scanners actuated by piezoelectric unimorph actuators with a Nb-doped lead zirconate titanate (PNZT) thin film were developed for endoscopic optical coherence tomography (OCT) applications. The MEMS scanners were designed so that the resonance frequency was less than 125 Hz, to obtain enough pixels per frame in OCT images. The device size was within 3.4 mm × 2.5 mm, which is compact enough to be installed in a side-imaging probe with a 4 mm inner diameter. The fabrication process started with a silicon-on-insulator wafer, followed by PNZT deposition by RF sputtering and a Si bulk micromachining process. The fabricated MEMS scanners showed maximum optical scan angles of 146° at 90 Hz, 148° at 124 Hz, 162° at 180 Hz, and 152° at 394 Hz at resonance in atmospheric pressure. Such wide scan angles were obtained with a drive voltage below 1.3 Vpp, ensuring intrinsic safety for in vivo use. The scanner with the unpoled PNZT film showed three times as large a scan angle as that with a poled PZT film. A swept-source OCT system was constructed using the fabricated MEMS scanner, and cross-sectional images of a fingertip with image widths of 4.6 and 2.3 mm were acquired. In addition, a PNZT-based angle sensor was studied for feedback operation.

  11. Shock heating of the solar wind plasma

    NASA Technical Reports Server (NTRS)

    Whang, Y. C.; Liu, Shaoliang; Burlaga, L. F.

    1990-01-01

    The role played by shocks in heating solar-wind plasma is investigated using data on 413 shocks which were identified from the plasma and magnetic-field data collected between 1973 and 1982 by Pioneer and Voyager spacecraft. It is found that the average shock strength increased with the heliocentric distance outside 1 AU, reaching a maximum near 5 AU, after which the shock strength decreased with the distance; the entropy of the solar wind protons also reached a maximum at 5 AU. An MHD simulation model in which shock heating is the only heating mechanism available was used to calculate the entropy changes for the November 1977 event. The calculated entropy agreed well with the value calculated from observational data, suggesting that shocks are chiefly responsible for heating solar wind plasma between 1 and 15 AU.

  12. Entropy maximization under the constraints on the generalized Gini index and its application in modeling income distributions

    NASA Astrophysics Data System (ADS)

    Khosravi Tanak, A.; Mohtashami Borzadaran, G. R.; Ahmadi, J.

    2015-11-01

    In economics and the social sciences, inequality measures such as the Gini index, the Pietra index, etc., are commonly used to measure statistical dispersion. There is a generalization of the Gini index which includes it as a special case. In this paper, we use the principle of maximum entropy to approximate the model of income distribution with a given mean and generalized Gini index. Many distributions have been used as descriptive models for the distribution of income. The most widely known of these models are the generalized beta of the second kind and its subclass distributions. The obtained maximum entropy distributions are fitted to the US family total money income in 2009, 2011 and 2013, and their relative performances with respect to the generalized beta of the second kind family are compared.
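
    The constrained maximization described in this record can be sketched numerically for a discretized income distribution: maximize the Shannon entropy of a probability vector subject to a fixed mean and a fixed (ordinary) Gini index. The bin grid, target mean, and target Gini value below are illustrative assumptions, and a generic SLSQP solver stands in for the paper's analytical treatment of the generalized Gini constraint.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    x = np.linspace(1.0, 10.0, 30)          # income bin midpoints (assumed)
    target_mean, target_gini = 4.0, 0.3     # illustrative targets

    def gini(p):
        # Gini index of a discrete distribution with support x and weights p
        mu = np.sum(p * x)
        diff = np.abs(x[:, None] - x[None, :])
        return np.sum(p[:, None] * p[None, :] * diff) / (2.0 * mu)

    def neg_entropy(p):
        q = np.clip(p, 1e-12, None)
        return np.sum(q * np.log(q))        # minimizing -H(p) maximizes entropy

    constraints = [
        {"type": "eq", "fun": lambda p: np.sum(p) - 1.0},
        {"type": "eq", "fun": lambda p: np.sum(p * x) - target_mean},
        {"type": "eq", "fun": lambda p: gini(p) - target_gini},
    ]
    p0 = np.full(x.size, 1.0 / x.size)      # start from the uniform distribution
    res = minimize(neg_entropy, p0, method="SLSQP", constraints=constraints,
                   bounds=[(0.0, 1.0)] * x.size, options={"maxiter": 500})
    p = res.x                               # maximum-entropy weights
    ```

    The continuous analogue replaces the probability vector with a density and the Gini constraint with its generalized form, which is the setting the paper treats.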

  13. Maximum entropy and equations of state for random cellular structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rivier, N.

    Random, space-filling cellular structures (biological tissues, metallurgical grain aggregates, foams, etc.) are investigated. Maximum entropy inference under a few constraints yields structural equations of state, relating the size of cells to their topological shape. These relations are known empirically as Lewis's law in botany, or Desch's relation in metallurgy. Here, the functional form of the constraints is not known a priori, and one takes advantage of this arbitrariness to increase the entropy further. The resulting structural equations of state are independent of priors; they are measurable experimentally and therefore constitute a direct test for the applicability of MaxEnt inference (given that the structure is in statistical equilibrium, a fact which can be tested by another simple relation, Aboav's law). 23 refs., 2 figs., 1 tab.

  14. Use of mutual information to decrease entropy: Implications for the second law of thermodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, S.

    1989-05-15

    Several theorems on the mechanics of gathering information are proved, and the possibility of violating the second law of thermodynamics by obtaining information is discussed in light of these theorems. Maxwell's demon can lower the entropy of his surroundings by an amount equal to the difference between the maximum entropy of his recording device and its initial entropy, without generating a compensating entropy increase. A demon with human-scale recording devices can reduce the entropy of a gas by a negligible amount only, but the proof of the demon's impracticability leaves open the possibility that systems highly correlated with their environment can reduce the environment's entropy by a substantial amount without increasing entropy elsewhere. In the event that a boundary condition for the universe requires it to be in a state of low entropy when small, the correlations induced between different particle modes during the expansion phase allow the modes to behave like Maxwell's demons during the contracting phase, reducing the entropy of the universe to a low value.

  15. Exact Maximum-Entropy Estimation with Feynman Diagrams

    NASA Astrophysics Data System (ADS)

    Netser Zernik, Amitai; Schlank, Tomer M.; Tessler, Ran J.

    2018-02-01

    A longstanding open problem in statistics is finding an explicit expression for the probability measure which maximizes entropy with respect to given constraints. In this paper a solution to this problem is found, using perturbative Feynman calculus. The explicit expression is given as a sum over weighted trees.

  16. Numerical and Statistical Analysis of Fractures in Mechanically Dissimilar Rocks of Limestone Interbedded with Shale from Nash Point in Bristol Channel, South Wales, UK.

    NASA Astrophysics Data System (ADS)

    Adeoye-Akinde, K.; Gudmundsson, A.

    2017-12-01

    Heterogeneity and anisotropy, especially with layered strata within the same reservoir, make the geometry and permeability of an in-situ fracture network challenging to forecast. This study looks at outcrops analogous to reservoir rocks for a better understanding of in-situ fracture networks and permeability, especially fracture formation, propagation, and arrest/deflection. Here, fracture geometry (e.g. length and aperture) from interbedded limestone and shale is combined with statistical and numerical modelling (using the Finite Element Method) to better forecast fracture network properties and permeability. The main aim is to bridge the gap between fracture data obtained at the core level (cm-scale) and at the seismic level (km-scale). Analysis has been made of the geometric properties of over 250 fractures from the Blue Lias at Nash Point, UK. As fractures propagate, energy is required to keep them going, and according to the laws of thermodynamics, this energy can be linked to entropy. As fractures grow, entropy increases; accordingly, the results show a strong linear correlation between entropy and the scaling exponents of the fracture length and aperture-size distributions. Modelling is used to numerically simulate the stress/fracture behaviour in mechanically dissimilar rocks. Results show that the maximum principal compressive stress orientation changes in the host rock as the fracture-induced stress concentration at the tip moves towards a more compliant (shale) layer. This behaviour can be related to the three mechanisms of fracture arrest/deflection at an interface, namely: elastic mismatch, stress barrier, and Cook-Gordon debonding. Tensile stress concentrates at the contact between the stratigraphic layers, ahead of and around the propagating fracture. However, as shale stiffens with time, the stresses concentrated at the contact start to dissipate into it. This can happen in nature through diagenesis, and with greater depth of burial.
This study also investigates how induced fractures propagate and interact with existing discontinuities in layered rocks using analogue modelling. Further work will introduce the Maximum Entropy Method for more accurate statistical modelling. This method is mainly useful to forecast likely fracture-size probability distributions from incomplete subsurface information.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Naghavi, S. Shahab; Emery, Antoine A.; Hansen, Heine A.

    Previous studies have shown that a large solid-state entropy of reduction increases the thermodynamic efficiency of metal oxides, such as ceria, for two-step thermochemical water splitting cycles. In this context, the configurational entropy arising from oxygen off-stoichiometry in the oxide has been the focus of most previous work. Here we report a different source of entropy, the onsite electronic configurational entropy, arising from coupling between orbital and spin angular momenta in lanthanide f orbitals. We find that onsite electronic configurational entropy is sizable in all lanthanides, and reaches a maximum value of ≈4.7 kB per oxygen vacancy for Ce4+/Ce3+ reduction. This unique and large positive entropy source in ceria explains its excellent performance for high-temperature catalytic redox reactions such as water splitting. Our calculations also show that terbium dioxide has a high electronic entropy and thus could also be a potential candidate for solar thermochemical reactions.

  18. Nonequilibrium Entropy in a Shock

    DOE PAGES

    Margolin, Len G.

    2017-07-19

    In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. As a result, I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.

  19. Nonequilibrium Entropy in a Shock

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, Len G.

    In a classic paper, Morduchow and Libby use an analytic solution for the profile of a Navier–Stokes shock to show that the equilibrium thermodynamic entropy has a maximum inside the shock. There is no general nonequilibrium thermodynamic formulation of entropy; the extension of equilibrium theory to nonequilibrium processes is usually made through the assumption of local thermodynamic equilibrium (LTE). However, gas kinetic theory provides a perfectly general formulation of a nonequilibrium entropy in terms of the probability distribution function (PDF) solutions of the Boltzmann equation. In this paper I will evaluate the Boltzmann entropy for the PDF that underlies the Navier–Stokes equations and also for the PDF of the Mott–Smith shock solution. I will show that both monotonically increase in the shock. As a result, I will propose a new nonequilibrium thermodynamic entropy and show that it is also monotone and closely approximates the Boltzmann entropy.

  20. Science with High Spatial Resolution Far-Infrared Data

    NASA Technical Reports Server (NTRS)

    Terebey, Susan (Editor); Mazzarella, Joseph M. (Editor)

    1994-01-01

    The goal of this workshop was to discuss new science and techniques relevant to high spatial resolution processing of far-infrared data, with particular focus on high resolution processing of IRAS data. Users of the maximum correlation method, maximum entropy, and other resolution enhancement algorithms applicable to far-infrared data met at the Infrared Processing and Analysis Center (IPAC) for two days in June 1993 to compare techniques and discuss new results. During a special session on the third day, interested astronomers were introduced to IRAS HIRES processing, which is IPAC's implementation of the maximum correlation method for the IRAS data. Topics discussed during the workshop included: (1) image reconstruction; (2) random noise; (3) imagery; (4) interacting galaxies; (5) spiral galaxies; (6) galactic dust and elliptical galaxies; (7) star formation in Seyfert galaxies; (8) wavelet analysis; and (9) supernova remnants.

  1. Methodes entropiques appliquees au probleme inverse en magnetoencephalographie

    NASA Astrophysics Data System (ADS)

    Lapalme, Ervig

    2005-07-01

    This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions, so methods are required to take anatomical and functional information about the solution into account. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution. This method originates from statistical mechanics and information theory. The thesis is divided into two main parts containing three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses for simplifying the problem. In the last chapter of this first part, the maximum entropy on the mean method is presented: its origins are explained, along with how it is applied to our problem. The second part is the original work of this thesis, presenting three articles: one of them already published and two others submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical context, while still demonstrating the efficiency of the method. In the second article, we go one step further towards a realistic modelization of cerebral activation. The main priors are estimated using the magnetoencephalographic data. This method proved to be very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals, thus exploiting the excellent time resolution offered by magnetoencephalography. Compared with our previous work, the temporal method is applied to real magnetoencephalographic data coming from a somatotopy experiment, and the results agree with previous physiological knowledge about this kind of cognitive process.

  2. Correlation between magnetocaloric and electrical properties based on phenomenological models in La0.47Pr0.2Pb0.33MnO3 perovskite

    NASA Astrophysics Data System (ADS)

    Mechi, Nesrine; Alzahrani, Bandar; Hcini, Sobhi; Bouazizi, Mohamed Lamjed; Dhahri, Abdessalem

    2018-06-01

    We have investigated the correlation between magnetocaloric and electrical properties of La0.47Pr0.2Pb0.33MnO3 perovskite prepared using the sol-gel method. Rietveld analysis of the X-ray diffraction (XRD) pattern shows a pure crystalline phase with rhombohedral structure. The magnetic entropy change, relative cooling power (RCP) and specific heat were predicted from M(T, μ0H) data at different magnetic fields with the help of a phenomenological model. The magnetic entropy change reaches a maximum value of about 3.96 J kg-1 K-1 for μ0H = 5 T, corresponding to an RCP of 183 J kg-1. These values are relatively high, making our sample a promising candidate for magnetic refrigeration. Electrical-resistivity measurements were well fitted with the phenomenological percolation model, which is based on the phase segregation of ferromagnetic-metallic clusters and paramagnetic-semiconductor regions. The temperature and magnetic field dependences of the resistivity data, ρ(T, μ0H), allowed us to determine the magnetic entropy change. The results show that the magnetic entropy change values obtained this way are similar to those determined from the phenomenological model.
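The entropy change quoted above is conventionally extracted from M(T, μ0H) isotherms through the Maxwell relation. A minimal numerical sketch (using synthetic magnetization data, not the paper's phenomenological model) might look like:

```python
import numpy as np

# Sketch: estimate -dS_M(T) from M(T, H) isotherms via the Maxwell relation
# dS_M(T) = integral over H of (dM/dT)_H dH  (synthetic data, illustrative only)
def entropy_change(T, H, M):
    """T: (nT,) temperatures; H: (nH,) fields; M: (nT, nH) magnetization.
    Returns -dS_M(T), shape (nT,)."""
    dMdT = np.gradient(M, T, axis=0)                      # (dM/dT) at constant H
    # trapezoidal integration over the field axis
    dS = ((dMdT[:, 1:] + dMdT[:, :-1]) / 2.0 * np.diff(H)).sum(axis=1)
    return -dS

# Synthetic data: magnetization drops near an assumed Tc of 320 K
T = np.linspace(250.0, 350.0, 101)
H = np.linspace(0.0, 5.0, 51)
M = np.tanh((320.0 - T)[:, None] / 20.0) * (1.0 - np.exp(-H[None, :]))
dS = entropy_change(T, H, M)   # peaks near the assumed Tc
```

The peak of `-dS_M(T)` then marks the transition temperature, just as the measured maximum near the Curie point does in the abstract.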

  3. Tsallis entropy and decoherence of CsI quantum pseudo dot qubit

    NASA Astrophysics Data System (ADS)

    Tiotsop, M.; Fotue, A. J.; Fotsin, H. B.; Fai, L. C.

    2017-05-01

    A polaron in a CsI quantum pseudo dot under an electromagnetic field was considered, and the ground and first excited state energies were derived by employing the combined Pekar variational and unitary transformation methods. With the two-level system obtained, a single qubit was envisioned and its decoherence was studied using non-extensive (Tsallis) entropy. Numerical results showed: (i) the energy levels increase (and the period of oscillation decreases) with increasing chemical potential, zero point of the pseudo dot, cyclotron frequency, and transverse and longitudinal confinements; (ii) the Tsallis entropy evolves as a wave envelope that grows with the non-extensive parameter, and with increasing electric field strength, zero point of the pseudo dot and cyclotron frequency the wave envelope evolves periodically with a reduced period; (iii) the transition probability increases from the boundary to the centre of the dot, where it reaches its maximum value. It was also noted that the probability density oscillates with period T0 = ℏ/ΔΕ under tunnelling of the chemical potential and zero point of the pseudo dot. These results are helpful for the control of decoherence in quantum systems and may also be useful for the design of quantum computers.

  4. Evaluation of entropy and JM-distance criterions as features selection methods using spectral and spatial features derived from LANDSAT images

    NASA Technical Reports Server (NTRS)

    Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II

    1984-01-01

    A study area near Ribeirao Preto in Sao Paulo state, with a predominance of sugar cane, was selected. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites from which the necessary parameters were acquired. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was set by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas with a maximum likelihood classifier, and by defining performance indexes based on those matrices for each group of channels. Results show that for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces misclassification within training areas.
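For two classes modeled as Gaussians, the JM (Jeffries-Matusita) distance used above is a saturating transform of the Bhattacharyya distance. A small sketch with hypothetical class statistics (not the actual LANDSAT class signatures):

```python
import numpy as np

# Jeffries-Matusita distance between two Gaussian classes,
# JM = 2(1 - exp(-B)), with B the Bhattacharyya distance.
def jm_distance(m1, c1, m2, c2):
    c = (c1 + c2) / 2.0                       # pooled covariance
    d = m1 - m2
    b = (d @ np.linalg.solve(c, d) / 8.0
         + 0.5 * np.log(np.linalg.det(c)
                        / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2))))
    return 2.0 * (1.0 - np.exp(-b))

eye = np.eye(2)
same = jm_distance(np.zeros(2), eye, np.zeros(2), eye)           # identical classes: 0
near = jm_distance(np.zeros(2), eye, np.array([1.0, 0.0]), eye)  # partial overlap
far = jm_distance(np.zeros(2), eye, np.array([10.0, 0.0]), eye)  # saturates near 2
```

The saturation at 2 is what makes JM a useful proxy for attainable classification accuracy, and also why, as the abstract notes, optimizing it tends to concentrate on separating the training classes.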

  5. Proposed principles of maximum local entropy production.

    PubMed

    Ross, John; Corlan, Alexandru D; Müller, Stefan C

    2012-07-12

    Articles have appeared that rely on the application of some form of "maximum local entropy production principle" (MEPP). This is usually an optimization principle that is supposed to compensate for the lack of structural information and measurements about complex systems, even systems as complex and as little characterized as the whole biosphere or the atmosphere of the Earth or even of less known bodies in the solar system. We select a number of claims from a few well-known papers that advocate this principle and we show, with the help of simple examples of well-known chemical and physical systems, that they are in error. These erroneous interpretations can be attributed to ignoring well-established and verified theoretical results such as: (1) entropy does not necessarily increase in nonisolated systems, such as "local" subsystems; (2) macroscopic systems, as described by classical physics, are in general intrinsically deterministic: there are no "choices" in their evolution to be selected by using supplementary principles; (3) macroscopic deterministic systems are predictable to the extent to which their state and structure are sufficiently well known; usually they are not sufficiently known, and probabilistic methods need to be employed for their prediction; and (4) there is no causal relationship between the thermodynamic constraints and the kinetics of reaction systems. In conclusion, any predictions based on MEPP-like principles should not be considered scientifically founded.

  6. In situ MEMS testing: correlation of high-resolution X-ray diffraction with mechanical experiments and finite element analysis

    NASA Astrophysics Data System (ADS)

    Schifferle, Andreas; Dommann, Alex; Neels, Antonia

    2017-12-01

    New methods are needed in microsystems technology for evaluating microelectromechanical systems (MEMS) because of their reduced size. The assessment and characterization of mechanical and structural relations of MEMS are essential to assure the long-term functioning of devices, and have a significant impact on design and fabrication.

  7. Calibration of High Frequency MEMS Microphones

    NASA Technical Reports Server (NTRS)

    Shams, Qamar A.; Humphreys, William M.; Bartram, Scott M.; Zuckewar, Allan J.

    2007-01-01

    Understanding and controlling aircraft noise is one of the major research topics of the NASA Fundamental Aeronautics Program. One of the measurement technologies used to acquire noise data is the microphone directional array (DA). Traditional directional array hardware, consisting of commercially available condenser microphones and preamplifiers, can be too expensive, and its installation in hard-walled wind tunnel test sections too complicated. Emerging micro-machining technology, coupled with the latest cutting-edge technologies for smaller and faster systems, has opened the way for the development of MEMS microphones. MEMS microphone devices are available in the market but suffer from certain important shortcomings. Based on early experiments with array prototypes, it was found that both the bandwidth and the sound pressure level dynamic range of the microphones should be increased significantly to improve the performance and flexibility of the overall array. Thus, in collaboration with an outside MEMS design vendor, NASA Langley modified a commercially available MEMS microphone, as shown in Figure 1, to meet the new requirements. Coupled with the design of the enhanced MEMS microphones was the development of a new calibration method for simultaneously obtaining the sensitivity and phase response of the devices over their entire broadband frequency range. Over the years, several methods have been used for microphone calibration. Some of the common methods are coupler (reciprocity, substitution, and simultaneous), pistonphone, electrostatic actuator, and free-field calibration (reciprocity, substitution, and simultaneous). Traditionally, electrostatic actuators (EA) have been used to characterize air-condenser microphones over wideband frequency ranges; however, MEMS microphones are not adaptable to the EA method due to their construction and very small diaphragm size. Hence a substitution-based, free-field method was developed to calibrate these microphones at frequencies up to 80 kHz. The technique relied on the use of a random, ultrasonic broadband centrifugal sound source located in a small anechoic chamber. Phase calibrations of the MEMS microphones were derived from cross-spectral phase comparisons between the reference and test substitution microphones and an adjacent, invariant grazing-incidence 1/8-inch standard microphone.

  8. Maximum Entropy Production As a Framework for Understanding How Living Systems Evolve, Organize and Function

    NASA Astrophysics Data System (ADS)

    Vallino, J. J.; Algar, C. K.; Huber, J. A.; Fernandez-Gonzalez, N.

    2014-12-01

    The maximum entropy production (MEP) principle holds that non-equilibrium systems with sufficient degrees of freedom will likely be found in a state that maximizes entropy production or, analogously, maximizes the potential energy destruction rate. The theory does not distinguish between abiotic and biotic systems; however, we will show that systems that can coordinate function over time and/or space can potentially dissipate more free energy than purely Markovian processes (such as fire or a rock rolling down a hill) that only maximize instantaneous entropy production. Biological systems have the ability to store useful information, acquired via evolution and curated by natural selection, in genomic sequences that allow them to execute temporal strategies and coordinate function over space. For example, circadian rhythms allow phototrophs to "predict" that sunlight will return and orchestrate their metabolic machinery appropriately before sunrise, which not only gives them a competitive advantage, but also increases the total entropy production rate compared to systems that lack such anticipatory control. Similarly, coordination over space, such as quorum sensing in microbial biofilms, can increase acquisition of spatially distributed resources and free energy and thereby enhance entropy production. In this talk we will develop a modeling framework to describe microbial biogeochemistry based on the MEP conjecture, constrained by information and resource availability. Results from model simulations will be compared to laboratory experiments to demonstrate the usefulness of the MEP approach.

  9. Giant onsite electronic entropy enhances the performance of ceria for water splitting

    DOE PAGES

    Naghavi, S. Shahab; Emery, Antoine A.; Hansen, Heine A.; ...

    2017-08-18

    Previous studies have shown that a large solid-state entropy of reduction increases the thermodynamic efficiency of metal oxides, such as ceria, for two-step thermochemical water splitting cycles. In this context, the configurational entropy arising from oxygen off-stoichiometry in the oxide has been the focus of most previous work. Here we report a different source of entropy, the onsite electronic configurational entropy, arising from coupling between orbital and spin angular momenta in lanthanide f orbitals. We find that onsite electronic configurational entropy is sizable in all lanthanides, and reaches a maximum value of ≈4.7 kB per oxygen vacancy for Ce4+/Ce3+ reduction. This unique and large positive entropy source in ceria explains its excellent performance for high-temperature catalytic redox reactions such as water splitting. Our calculations also show that terbium dioxide has a high electronic entropy and thus could also be a potential candidate for solar thermochemical reactions.

  10. A fault diagnosis scheme for planetary gearboxes using modified multi-scale symbolic dynamic entropy and mRMR feature selection

    NASA Astrophysics Data System (ADS)

    Li, Yongbo; Yang, Yuantao; Li, Guoyan; Xu, Minqiang; Huang, Wenhu

    2017-07-01

    Health condition identification of planetary gearboxes is crucial to reduce downtime and maximize productivity. This paper aims to develop a novel fault diagnosis method based on modified multi-scale symbolic dynamic entropy (MMSDE) and minimum redundancy maximum relevance (mRMR) to identify the different health conditions of planetary gearboxes. MMSDE is proposed to quantify the regularity of time series, assessing the dynamical characteristics over a range of scales. MMSDE has clear advantages in the detection of dynamical changes and in computational efficiency. The mRMR approach is then introduced to refine the fault features. Lastly, the obtained features are fed into a least squares support vector machine (LSSVM) to complete the fault pattern identification. The proposed method is numerically and experimentally demonstrated to be able to recognize the different fault types of planetary gearboxes.
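The idea behind symbolic dynamic entropy can be sketched in a few lines. This simplified single-scale version (quantile symbolization; not the authors' exact MMSDE definition) quantizes the series into symbols and takes the Shannon entropy of short symbol words, with a coarse-graining step standing in for the multi-scale extension:

```python
import numpy as np
from collections import Counter

# Simplified symbolic dynamic entropy: quantize the series into n_symbols
# states by quantile, then take the Shannon entropy of the empirical
# distribution of length-word_len symbol words.
def symbolic_dynamic_entropy(x, n_symbols=4, word_len=2):
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    sym = np.digitize(x, edges)                     # symbols 0..n_symbols-1
    words = Counter(tuple(sym[i:i + word_len])
                    for i in range(len(sym) - word_len + 1))
    p = np.array(list(words.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum())

def coarse_grain(x, scale):
    """Multi-scale step: average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)           # irregular signal: high entropy
tone = np.sin(0.2 * np.arange(4000))        # regular signal: lower entropy
h_noise = symbolic_dynamic_entropy(noise)
h_tone = symbolic_dynamic_entropy(tone)
```

Fault signatures change how this entropy varies across scales, which is the profile that the mRMR step then ranks for the classifier.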

  11. Laser-assisted advanced assembly for MEMS fabrication

    NASA Astrophysics Data System (ADS)

    Atanasov, Yuriy Andreev

    Micro Electro-Mechanical Systems (MEMS) are currently fabricated using methods originally designed for manufacturing semiconductor devices, using minimum if any assembly at all. The inherited limitations of this approach narrow the materials that can be employed and reduce the design complexity, imposing limitations on MEMS functionality. The proposed Laser-Assisted Advanced Assembly (LA3) method solves these problems by first fabricating components followed by assembly of a MEMS device. Components are micro-machined using a laser or by photolithography followed by wet/dry etching out of any material available in a thin sheet form. A wide range of materials can be utilized, including biocompatible metals, ceramics, polymers, composites, semiconductors, and materials with special properties such as memory shape alloys, thermoelectric, ferromagnetic, piezoelectric, and more. The approach proposed allows enhancing the structural and mechanical properties of the starting materials through heat treatment, tribological coatings, surface modifications, bio-functionalization, and more, a limited, even unavailable possibility with existing methods. Components are transferred to the substrate for assembly using the thermo-mechanical Selective Laser Assisted Die Transfer (tmSLADT) mechanism for microchips assembly, already demonstrated by our team. Therefore, the mechanical and electronic part of the MEMS can be fabricated using the same equipment/method. The viability of the Laser-Assisted Advanced Assembly technique for MEMS is demonstrated by fabricating magnetic switches for embedding in a conductive carbon-fiber metamaterial for use in an Electromagnetic-Responsive Mobile Cyber-Physical System (E-RMCPS), which is expected to improve the wireless communication system efficiency within a battery-powered device.

  12. Theory and implementation of a very high throughput true random number generator in field programmable gate array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yonggang, E-mail: wangyg@ustc.edu.cn; Hui, Cong; Liu, Chong

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.

  13. Theory and implementation of a very high throughput true random number generator in field programmable gate array.

    PubMed

    Wang, Yonggang; Hui, Cong; Liu, Chong; Xu, Chao

    2016-04-01

    The contribution of this paper is proposing a new entropy extraction mechanism based on sampling phase jitter in ring oscillators to make a high throughput true random number generator in a field programmable gate array (FPGA) practical. Starting from experimental observation and analysis of the entropy source in FPGA, a multi-phase sampling method is exploited to harvest the clock jitter with a maximum entropy and fast sampling speed. This parametrized design is implemented in a Xilinx Artix-7 FPGA, where the carry chains in the FPGA are explored to realize the precise phase shifting. The generator circuit is simple and resource-saving, so that multiple generation channels can run in parallel to scale the output throughput for specific applications. The prototype integrates 64 circuit units in the FPGA to provide a total output throughput of 7.68 Gbps, which meets the requirement of current high-speed quantum key distribution systems. The randomness evaluation, as well as its robustness to ambient temperature, confirms that the new method in a purely digital fashion can provide high-speed high-quality random bit sequences for a variety of embedded applications.
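The jitter-harvesting idea can be illustrated with a toy software model (illustrative only; the paper's design realizes the phase shifts with FPGA carry chains and evaluates randomness far more rigorously): a sampling clock whose period carries Gaussian jitter observes an ideal square-wave oscillator at several phase offsets, and the sampled states are interleaved into one bit stream.

```python
import numpy as np

rng = np.random.default_rng(1)

def jitter_bits(n_per_phase, period=1.0, jitter=0.2,
                phases=(0.0, 0.11, 0.23, 0.37)):
    # accumulated sample times of a jittered sampling clock
    t = np.cumsum(period + jitter * rng.standard_normal(n_per_phase))
    # oscillator state = parity of elapsed half-periods at each (shifted) sample
    streams = [((np.floor((t + ph) / (period / 2)) % 2).astype(int))
               for ph in phases]
    return np.stack(streams, axis=1).ravel()    # interleave the phase samples

def entropy_per_bit(bits):
    """Shannon entropy (bits per bit) of the stream's marginal distribution."""
    p = float(bits.mean())
    return 0.0 if p in (0.0, 1.0) else -(p * np.log2(p)
                                         + (1 - p) * np.log2(1 - p))

bits = jitter_bits(20000)[400:]   # drop a warm-up while jitter accumulates
h = entropy_per_bit(bits)         # approaches 1 bit/bit as the phase decorrelates
```

As accumulated jitter diffuses the sampling phase over many periods, the sampled parity becomes unbiased, which is the property the multi-phase design exploits at speed.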

  14. Giant onsite electronic entropy enhances the performance of ceria for water splitting.

    PubMed

    Naghavi, S Shahab; Emery, Antoine A; Hansen, Heine A; Zhou, Fei; Ozolins, Vidvuds; Wolverton, Chris

    2017-08-18

    Previous studies have shown that a large solid-state entropy of reduction increases the thermodynamic efficiency of metal oxides, such as ceria, for two-step thermochemical water splitting cycles. In this context, the configurational entropy arising from oxygen off-stoichiometry in the oxide has been the focus of most previous work. Here we report a different source of entropy, the onsite electronic configurational entropy, arising from coupling between orbital and spin angular momenta in lanthanide f orbitals. We find that onsite electronic configurational entropy is sizable in all lanthanides, and reaches a maximum value of ≈4.7 kB per oxygen vacancy for Ce4+/Ce3+ reduction. This unique and large positive entropy source in ceria explains its excellent performance for high-temperature catalytic redox reactions such as water splitting. Our calculations also show that terbium dioxide has a high electronic entropy and thus could also be a potential candidate for solar thermochemical reactions. Solid-state entropy of reduction increases the thermodynamic efficiency of ceria for two-step thermochemical water splitting. Here, the authors report a large and different source of entropy, the onsite electronic configurational entropy arising from coupling between orbital and spin angular momenta in f orbitals.

  15. Soft context clustering for F0 modeling in HMM-based speech synthesis

    NASA Astrophysics Data System (ADS)

    Khorram, Soheil; Sameti, Hossein; King, Simon

    2015-12-01

    This paper proposes the use of a new binary decision tree, which we call a soft decision tree, to improve generalization performance compared to the conventional `hard' decision tree method that is used to cluster context-dependent model parameters in statistical parametric speech synthesis. We apply the method to improve the modeling of fundamental frequency, which is an important factor in synthesizing natural-sounding high-quality speech. Conventionally, hard decision tree-clustered hidden Markov models (HMMs) are used, in which each model parameter is assigned to a single leaf node. However, this `divide-and-conquer' approach leads to data sparsity, with the consequence that it suffers from poor generalization, meaning that it is unable to accurately predict parameters for models of unseen contexts: the hard decision tree is a weak function approximator. To alleviate this, we propose the soft decision tree, which is a binary decision tree with soft decisions at the internal nodes. In this soft clustering method, internal nodes select both their children with certain membership degrees; therefore, each node can be viewed as a fuzzy set with a context-dependent membership function. The soft decision tree improves model generalization and provides a superior function approximator because it is able to assign each context to several overlapped leaves. In order to use such a soft decision tree to predict the parameters of the HMM output probability distribution, we derive the smoothest (maximum entropy) distribution which captures all partial first-order moments and a global second-order moment of the training samples. Employing such a soft decision tree architecture with maximum entropy distributions, a novel speech synthesis system is trained using maximum likelihood (ML) parameter re-estimation and synthesis is achieved via maximum output probability parameter generation. 
In addition, a soft decision tree construction algorithm optimizing a log-likelihood measure is developed. Both subjective and objective evaluations were conducted and indicate a considerable improvement over the conventional method.
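The soft-gating idea at the heart of the soft decision tree can be sketched directly. In this minimal depth-2 example (illustrative gating with assumed parameters, not the paper's trained context questions), each internal node splits on one context feature with a sigmoid gate, so every context reaches all four leaves with membership degrees that sum to one:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def leaf_memberships(x, gates):
    """x: (d,) context features; gates: list of (feature_idx, weight, bias)
    for the nodes [root, left child, right child]. Returns (4,) memberships."""
    g = [sigmoid(w * x[i] + b) for (i, w, b) in gates]   # P(go right) per node
    root, left, right = g
    return np.array([
        (1 - root) * (1 - left),    # leaf LL
        (1 - root) * left,          # leaf LR
        root * (1 - right),         # leaf RL
        root * right,               # leaf RR
    ])

gates = [(0, 4.0, 0.0), (1, 4.0, 0.0), (1, -4.0, 0.0)]   # assumed parameters
m = leaf_memberships(np.array([0.2, -0.5]), gates)
# a prediction would mix the leaf distributions: mu = m @ leaf_means
```

In the limit of very large gate weights the memberships collapse to one-hot, recovering the conventional hard decision tree; with finite weights each context shares data across overlapping leaves, which is the generalization benefit the abstract describes.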

  16. The limit behavior of the evolution of the Tsallis entropy in self-gravitating systems

    NASA Astrophysics Data System (ADS)

    Zheng, Yahui; Du, Jiulin; Liang, Faku

    2017-06-01

    In this letter, we study the limit behavior of the evolution of the Tsallis entropy in self-gravitating systems. The study is carried out under two different situations, drawing the same conclusion. Whether in the energy transfer process or in the mass transfer process inside the system, when the nonextensive parameter q is greater than unity, the total entropy is bounded; on the contrary, when this parameter is less than unity, the total entropy is unbounded. Both theory and observation indicate that q is always greater than unity, so the Tsallis entropy in self-gravitating systems generally exhibits a bounded property. This indicates the existence of a global maximum of the Tsallis entropy. It is thus possible for self-gravitating systems to evolve to thermodynamically stable states.

  17. Maximum Entropy Methods as the Bridge Between Microscopic and Macroscopic Theory

    NASA Astrophysics Data System (ADS)

    Taylor, Jamie M.

    2016-09-01

    This paper is concerned with an investigation into a function of macroscopic variables known as the singular potential, building on previous work by Ball and Majumdar. The singular potential is a function of the admissible statistical averages of probability distributions on a state space, defined so that it corresponds to the maximum possible entropy given known observed statistical averages, although non-classical entropy-like objective functions will also be considered. First the set of admissible moments must be established, and under the conditions presented in this work the set is open, bounded and convex allowing a description in terms of supporting hyperplanes, which provides estimates on the development of singularities for related probability distributions. Under appropriate conditions it is shown that the singular potential is strictly convex, as differentiable as the microscopic entropy, and blows up uniformly as the macroscopic variable tends to the boundary of the set of admissible moments. Applications of the singular potential are then discussed, and particular consideration will be given to certain free-energy functionals typical in mean-field theory, demonstrating an equivalence between certain microscopic and macroscopic free-energy functionals. This allows statements about L^1-local minimisers of Onsager's free energy to be obtained which cannot be given by two-sided variations, and overcomes the need to ensure local minimisers are bounded away from zero and +∞ before taking L^∞ variations. The analysis also permits the definition of a dual order parameter for which Onsager's free energy allows an explicit representation. Also, the difficulties in approximating the singular potential by everywhere defined functions, in particular by polynomial functions, are addressed, with examples demonstrating the failure of the Taylor approximation to preserve relevant shape properties of the singular potential.

  18. Efficient option valuation of single and double barrier options

    NASA Astrophysics Data System (ADS)

    Kabaivanov, Stanimir; Milev, Mariyan; Koleva-Petkova, Dessislava; Vladev, Veselin

    2017-12-01

    In this paper we present an implementation of a pricing algorithm for single and double barrier options using the Mellin transformation with Maximum Entropy Inversion, and assess its suitability for real-world applications. A detailed analysis of the applied algorithm is accompanied by an implementation in C++ that is then compared to existing solutions in terms of efficiency and computational power. We then compare the applied method with existing closed-form solutions and with well-known methods of pricing barrier options based on finite differences.
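A plain Monte Carlo pricer is a common cross-check for methods like this. The sketch below (illustrative; not the paper's Mellin-transform/maximum-entropy method) prices a down-and-out call under Black-Scholes dynamics by simulating GBM paths and zeroing the payoff of any path that touches the barrier at a monitoring date:

```python
import numpy as np

def down_and_out_call_mc(s0, k, barrier, r, sigma, t,
                         n_paths=50000, n_steps=100, seed=7):
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    z = rng.standard_normal((n_paths, n_steps))
    # log-price paths under risk-neutral GBM
    log_s = np.log(s0) + np.cumsum(
        (r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    alive = s.min(axis=1) > barrier            # paths never knocked out
    payoff = np.where(alive, np.maximum(s[:, -1] - k, 0.0), 0.0)
    return float(np.exp(-r * t) * payoff.mean())

price = down_and_out_call_mc(100.0, 100.0, 80.0, 0.05, 0.2, 1.0)
# with discrete monitoring the result sits slightly above the continuous-
# barrier price, and at most around the vanilla Black-Scholes value (~10.45)
```

Closed-form continuous-barrier formulas and finite-difference solvers, the comparison baselines named in the abstract, should bracket such Monte Carlo estimates within sampling error.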

  19. Spectral functions at small energies and the electrical conductivity in hot quenched lattice QCD.

    PubMed

    Aarts, Gert; Allton, Chris; Foley, Justin; Hands, Simon; Kim, Seyong

    2007-07-13

    In lattice QCD, the maximum entropy method can be used to reconstruct spectral functions from Euclidean correlators obtained in numerical simulations. We show that at finite temperature the most commonly used algorithm, employing Bryan's method, is inherently unstable at small energies, and we give a modification that avoids this instability. We demonstrate the approach using the vector current-current correlator obtained in quenched QCD at finite temperature. Our first results indicate a small electrical conductivity above the deconfinement transition.

  20. MEMS-based fuel cells with integrated catalytic fuel processor and method thereof

    DOEpatents

    Jankowski, Alan F [Livermore, CA; Morse, Jeffrey D [Martinez, CA; Upadhye, Ravindra S [Pleasanton, CA; Havstad, Mark A [Davis, CA

    2011-08-09

    Described herein is a means to incorporate catalytic materials into the fuel flow field structures of MEMS-based fuel cells, which enable catalytic reforming of a hydrocarbon based fuel, such as methane, methanol, or butane. Methods of fabrication are also disclosed.

  1. Modularity-like objective function in annotated networks

    NASA Astrophysics Data System (ADS)

    Xie, Jia-Rong; Wang, Bing-Hong

    2017-12-01

    We ascertain the modularity-like objective function whose optimization is equivalent to the maximum likelihood in annotated networks. We demonstrate that the modularity-like objective function is a linear combination of modularity and conditional entropy. In contrast with statistical inference methods, in our method, the influence of the metadata is adjustable; when its influence is strong enough, the metadata can be recovered. Conversely, when it is weak, the detection may correspond to another partition. Between the two, there is a transition. This paper provides a concept for expanding the scope of modularity methods.
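The linear combination described above can be made concrete in a few lines. This sketch assumes a simple form (Newman modularity minus a weighted conditional entropy H(metadata | community)) rather than the paper's exact coefficients:

```python
import numpy as np
from collections import Counter
from math import log

def modularity(adj, comm):
    """Newman modularity of a partition of an undirected 0/1 graph."""
    m2 = adj.sum()                             # equals 2m for a symmetric matrix
    deg = adj.sum(axis=1)
    q = sum(adj[i, j] - deg[i] * deg[j] / m2
            for i in range(len(comm)) for j in range(len(comm))
            if comm[i] == comm[j])
    return q / m2

def cond_entropy(meta, comm):
    """H(metadata | community) of the empirical joint distribution (nats)."""
    n = len(meta)
    joint = Counter(zip(comm, meta))
    marg = Counter(comm)
    return -sum(c / n * log(c / marg[g]) for (g, _), c in joint.items())

def objective(adj, comm, meta, lam=1.0):
    # modularity-like score: high when the partition is both well-separated
    # in the graph and aligned with the node annotations
    return modularity(adj, comm) - lam * cond_entropy(meta, comm)

# Two triangles joined by a single edge, annotated consistently.
adj = np.zeros((6, 6))
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[a, b] = adj[b, a] = 1.0
comm = [0, 0, 0, 1, 1, 1]
meta = ['x', 'x', 'x', 'y', 'y', 'y']
q_split = modularity(adj, comm)
score_split = objective(adj, comm, meta)
score_merged = objective(adj, [0] * 6, meta)
```

Raising `lam` strengthens the pull of the metadata, which mirrors the abstract's adjustable-influence behavior: strong enough and the metadata partition is recovered, weak and the graph structure dominates.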

  2. Design and characterization of MEMS interferometric sensing

    NASA Astrophysics Data System (ADS)

    Snyder, R.; Siahmakoun, A.

    2010-02-01

    A MEMS-based interferometric sensor is produced using multi-user MEMS processing standard (MUMPS) micromirrors, movable by thermal actuation. The interferometer comprises gold reflection surfaces, polysilicon thermal actuators, hinges, latches and thin-film polarization beam splitters. A polysilicon film of 3.5 microns reflects and transmits incident polarized light from an external laser source coupled to a multi-mode optical fiber. The input beam is shaped to a diameter of 10 to 20 microns for incidence upon the 100 micron mirrors. Losses in the optical path include diffraction effects from etch holes created in the manufacturing process, surface roughness of both gold and polysilicon layers, and misalignment of micro-scale optical components. Numerous optical paths on the chip vary by length, number of reflections, and mirror subsystems employed. Subsystems include thermal actuator batteries producing lateral position displacement, angularly tunable mirrors, double reflection surfaces, and static vertical mirrors. All mirror systems are raised via manual stimulation using two-micron, residue-free probe tips, and some may be aligned using electrical signals causing resistive heating in thermal actuators. The characterization of the thermal actuator batteries includes maximum displacement, deflection, and frequency response, which coincide with theoretical thermodynamic simulations using finite-element analysis. A maximum deflection of 35 microns at 400 mW input electrical power is shown for three types of actuator batteries, as is deflection-dependent frequency response data for electrical input signals up to 10 kHz.

  3. Radio synthesis imaging during the GRO solar campaign

    NASA Technical Reports Server (NTRS)

    Gary, Dale E.

    1992-01-01

    The Owens Valley (OVRO) Solar Array was recently expanded to 5 antennas. Using frequency synthesis, the 5-element OVRO Solar Array has up to 450 effective baselines, which can be employed as necessary to make maps at frequencies in the range 1 to 18 GHz. Fortuitously, the last of the 5 antennas was completed and brought into operation on 7 June, just in time for the Gamma Ray Observatory (GRO)/Max 1991 observing campaign. Many events were observed jointly with OVRO and the BATSE experiment on GRO, including the six larger events that are presented in tabular form. Unfortunately, the X flares that occurred during the campaign all occurred outside the OVRO observing window. The UV coverage of the newly expanded solar array, combined with frequency synthesis, should give a more complete view of solar flares in the microwave range by providing simultaneous spatial and spectral resolution. A promising application of MEM (maximum entropy) is also being pursued that will use smoothness criteria in both the spatial and spectral domains to give brightness temperature maps at each observed frequency (up to 45 frequencies every 10 s). Such maps can be compared directly with the theory of microwave emission to yield plasma parameters in the source, notably the number and energy distribution of electrons, for comparison with the X-ray and gamma-ray results from GRO.

  4. System Modeling of a MEMS Vibratory Gyroscope and Integration to Circuit Simulation.

    PubMed

    Kwon, Hyukjin J; Seok, Seyeong; Lim, Geunbae

    2017-11-18

    Recently, consumer applications have dramatically increased the demand for low-cost and compact gyroscopes. Therefore, on the basis of microelectromechanical systems (MEMS) technology, many gyroscopes have been developed and successfully commercialized. A MEMS gyroscope consists of a MEMS device and an electrical circuit for self-oscillation and angular-rate detection. Since the MEMS device and circuit interact, the entire system should be analyzed together to design or test the gyroscope. In this study, a MEMS vibratory gyroscope is analyzed based on system dynamic modeling; thus, it can be expressed mathematically and integrated into a circuit simulator. A behavioral simulation of the entire system was conducted to prove the self-oscillation and angular-rate detection and to determine the circuit parameters to be optimized. From the simulation, the operating characteristics as functions of vacuum pressure and scale factor were obtained, showing trends similar to the experimental results. The simulation method presented in this paper can be generalized to a wide range of MEMS devices.

  5. A non-resonant fiber scanner based on an electrothermally-actuated MEMS stage

    PubMed Central

    Zhang, Xiaoyang; Duan, Can; Liu, Lin; Li, Xingde; Xie, Huikai

    2015-01-01

    Scanning the fiber tip is the most convenient way to achieve forward-viewing fiber-optic microendoscopy. In this paper, a distal fiber scanning method based on a large-displacement MEMS actuator is presented. A single-mode fiber is glued on the micro-platform of an electrothermal MEMS stage to realize large-range non-resonant scanning. The micro-platform has a large piston scan range of up to 800 µm at only 6 V. The tip deflection of the fiber can be further amplified by placing the MEMS stage at a proper location along the fiber. A quasi-static model of the fiber-MEMS assembly has been developed and validated experimentally. The frequency response has also been studied and measured. A fiber tip deflection of up to 1650 µm was achieved for a 45 mm-long movable fiber portion with the MEMS electrothermal stage placed 25 mm from the free end. The electrothermally actuated MEMS stage shows great potential for forward-viewing fiber scanning and other optical applications. PMID:26347583

  6. Pilot study to harmonize the reported influenza intensity levels within the Spanish Influenza Sentinel Surveillance System (SISSS) using the Moving Epidemic Method (MEM).

    PubMed

    Bangert, M; Gil, H; Oliva, J; Delgado, C; Vega, T; DE Mateo, S; Larrauri, A

    2017-03-01

    The intensity of annual Spanish influenza activity is currently estimated from historical data of the Spanish Influenza Sentinel Surveillance System (SISSS) using qualitative indicators from the European Influenza Surveillance Network. However, these indicators are subjective, being based on qualitative comparison with historical influenza-like-illness rates. This pilot study assesses the implementation of Moving Epidemic Method (MEM) intensity levels during the 2014-2015 influenza season within the 17 sentinel networks covered by SISSS, comparing them with the historically reported indicators. Intensity levels reported at the epidemic peak of the influenza wave and those obtained with MEM, at both national and regional level, showed no statistically significant difference (P = 0.74, Wilcoxon signed-rank test), suggesting that implementing MEM would have a limited disruptive effect on notification dynamics within the surveillance system. MEM allows objective monitoring of influenza surveillance and standardizes the criteria for comparing the intensity of influenza epidemics across regions of Spain. Following this pilot study, MEM has been adopted to harmonize the reporting of influenza intensity levels in Spain, starting in the 2015-2016 season.
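    The intensity-level idea behind MEM can be sketched as follows: fit a distribution to the historical seasonal peak rates and place the level boundaries at upper one-sided confidence limits. This is a simplified illustration only; the published MEM uses the top pre-epidemic values of each season and Student-t based limits, and the distribution and level choices below are assumptions:

```python
import math
import statistics
from statistics import NormalDist

def intensity_thresholds(season_peaks, levels=(0.40, 0.90, 0.975)):
    """Boundaries separating low/medium/high/very-high activity: fit a
    log-normal to historical seasonal peak rates and return the upper
    one-sided confidence limits at the given levels (a sketch of the
    intensity-level idea only, not the published MEM algorithm)."""
    logs = [math.log(p) for p in season_peaks]
    mu, sd = statistics.mean(logs), statistics.stdev(logs)
    z = NormalDist()
    return [math.exp(mu + z.inv_cdf(q) * sd) for q in levels]
```

Given five historical peak rates per 100,000, the function returns three increasing cut-offs; a new season's peak is then graded by which band it falls into.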

  7. Performance characterization of a single bi-axial scanning MEMS mirror-based head-worn display

    NASA Astrophysics Data System (ADS)

    Liang, Minhua

    2002-06-01

    The Nomad(TM) Personal Display System is a head-worn display (HWD) with see-through, high-resolution, high-luminance capability, based on a single bi-axial scanning MEMS mirror. In the Nomad HWD, a red laser diode emits a beam of light that is scanned bi-axially by a single MEMS mirror; a diffractive beam diffuser and an ocular expand the beam to form a 12 mm exit pupil for comfortable viewing. The Nomad display has SVGA (800 x 600) resolution, a 60 Hz frame rate, a 23-degree horizontal field of view (FOV) with a 3:4 vertical-to-horizontal aspect ratio, a luminance of 800-900 foot-lamberts, see-through capability, a 30 mm eye-relief distance, and focus adjustment from 1 foot to infinity. We have characterized performance parameters such as field of view, distortion, contrast ratio (4 x 4 black-and-white checkerboard), modulation depth, exit pupil size, eye-relief distance, maximum luminance, dynamic-range ratio (full-on-to-full-off), dimming ratio, and luminance uniformity at the image plane. The Class-1 eye-safety requirements per IEC 60825-1 Amendment 2 (CDRH Laser Notice No. 50) are analyzed and verified by experiments. The paper describes all of the testing methods and setups as well as representative test results, which demonstrate that the Nomad display is an eye-safe display product with good image quality and good user ergonomics.

  8. Research on the attitude of small UAV based on MEMS devices

    NASA Astrophysics Data System (ADS)

    Shi, Xiaojie; Lu, Libin; Jin, Guodong; Tan, Lining

    2017-05-01

    This paper presents the principle and implementation of a navigation attitude system for a small UAV based on MEMS devices. A Gauss-Newton method based on least squares is used to calibrate the MEMS accelerometer and gyroscope, and a modified complementary filter corrects the attitude-angle error to improve attitude accuracy. Experimental data show that the attitude system designed in this paper meets the attitude-accuracy requirements of a small UAV while remaining small and low cost.
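    The abstract does not specify its complementary-filter modification; a minimal standard complementary filter for one attitude angle, which such a modification would build on, looks like this (the gain alpha and all signal values are assumed illustrative choices):

```python
import math

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse integrated gyro rate (accurate short-term, drifts long-term)
    with the accelerometer-derived angle (noisy short-term, unbiased
    long-term) for one attitude angle; alpha sets the crossover."""
    angle = accel_angles[0]
    fused = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        fused.append(angle)
    return fused
```

With a constant gyro bias, pure rate integration drifts without bound, while the fused estimate stays near the accelerometer's long-term mean; that bounded error is what makes the approach attractive for low-cost MEMS sensors.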

  9. Novel sonar signal processing tool using Shannon entropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quazi, A.H.

    1996-06-01

    Traditionally, conventional signal processing extracts information from sonar signals using amplitude, signal energy or frequency-domain quantities obtained with spectral analysis techniques. The objective here is to investigate an entirely different approach: using the Shannon entropy as a tool for processing sonar signals, with emphasis on detection, classification and localization, leading to superior sonar system performance. Traditionally, sonar signals are processed coherently, semi-coherently or incoherently, depending on the a priori knowledge of the signals and noise. Here, the detection, classification and localization technique is based on the entropy of the random process. Under a constant energy constraint, the entropy of a received process with a finite number of sample points is maximum when hypothesis H0 (the received process consists of noise alone) is true, and decreases when a correlated signal is present (H1). The detection strategy is therefore: (I) calculate the entropy of the received data; (II) compare the entropy with the maximum value; and (III) make the decision: H1 is declared if the difference exceeds a pre-assigned threshold, and H0 is declared otherwise. The test statistics differ between the entropies under H0 and H1. We show simulated results for detecting stationary and non-stationary signals in noise, and results on the detection of defects in a Plexiglas bar using an ultrasonic experiment conducted by Hughes. © 1996 American Institute of Physics.

  10. Microelectromechanical safe arm device

    DOEpatents

    Roesler, Alexander W [Tijeras, NM

    2012-06-05

    Microelectromechanical (MEM) apparatus, and methods for operating it, for preventing unintentional detonation of energetic components comprising pyrotechnic and explosive materials, such as air-bag deployment systems, munitions and pyrotechnics. The MEM apparatus comprises an interrupting member that can be moved to block (interrupt) or complete (uninterrupt) an explosive train that is part of an energetic component. One or more latching members are provided that engage and prevent movement of the interrupting member until they are disengaged from it. The MEM apparatus can be utilized as a safe-and-arm device (SAD) or an electronic safe-and-arm device (ESAD) to prevent unintentional detonations. Methods for operating the MEM apparatus include independently applying drive signals to the actuators coupled to the latching members and to an actuator coupled to the interrupting member.

  11. Nonequilibrium Thermodynamics in Biological Systems

    NASA Astrophysics Data System (ADS)

    Aoki, I.

    2005-12-01

    1. Respiration. Oxygen uptake by respiration decomposes macromolecules such as carbohydrates, proteins and lipids and liberates high-quality chemical energy, which then drives the chemical reactions and motions of matter that support the ordered structure and function of organisms. This chemical energy ultimately degrades into low-quality heat and is discarded to the surroundings (dissipation function). Along with this heat, the entropy production that irreversibility inevitably generates is also discarded. Both the dissipation function and the entropy production can be estimated from respiration data. 2. Human body. From observed respiration (oxygen absorption) data, the entropy production of the human body can be estimated. It has been obtained for humans from 0 to 75 years old and extrapolated back to the fertilized egg (the beginning of human life) and forward to 120 years (the maximum human life span). Entropy production shows characteristic behavior over the life span: a rapid early increase during the short growing phase and a slow later decrease during the long aging phase. It is proposed that this tendency is ubiquitous and constitutes a Principle of Organization in complex biotic systems. 3. Ecological communities. From respiration data of eighteen aquatic communities, specific (per-biomass) entropy productions are obtained. They show two-phase behavior with respect to trophic diversity: an early increase and a later decrease as trophic diversity grows. Trophic diversity in these aquatic ecosystems is positively correlated with the degree of eutrophication, and the degree of eutrophication is an "arrow of time" in the hierarchy of aquatic ecosystems. Hence specific entropy production likewise shows two phases: an early increase and a later decrease with time. 4. Entropy principle for living systems. The Second Law of Thermodynamics has been expressed as follows.
    1) In isolated systems, entropy increases with time and approaches a maximum value; this is the well-known classical Clausius principle. 2) In open systems near equilibrium, entropy production always decreases with time, approaching a minimum stationary level; this is Prigogine's minimum entropy production principle. Both principles are well established, but living systems are neither isolated nor near equilibrium, so neither principle applies to them. What, then, is the entropy principle for living systems? Answer: entropy production in living systems proceeds in multiple stages with time, with early increasing, later decreasing and/or intermediate stages. This tendency is supported by various living systems.

  12. MEMS Louvers for Thermal Control

    NASA Technical Reports Server (NTRS)

    Champion, J. L.; Osiander, R.; Darrin, M. A. Garrison; Swanson, T. D.

    1998-01-01

    Mechanical louvers have frequently been used for spacecraft and instrument thermal control. These devices typically consist of parallel or radial vanes that can be opened or closed to vary the effective emissivity of the underlying surface. This project demonstrates the feasibility of using Micro-ElectroMechanical Systems (MEMS) technology to miniaturize louvers for such purposes, offering the possibility of substituting smaller, lighter, more rugged and less costly MEMS devices for mechanical louvers. In effect, a smart skin could be developed, composed of arrays of thousands of miniaturized louvers, that self-adjusts in response to environmental influences. Decreases of several orders of magnitude in size, weight and volume are potentially achievable with micro-electromechanical techniques. The use of this technology offers substantial benefits in spacecraft/instrument design, integration, testing and flight operations, and it will be particularly beneficial for the emerging smaller spacecraft and instruments of the future. In addition, this MEMS thermal louver technology can form the basis for related spacecraft instrument applications. The specific goal of this effort was to develop a preliminary MEMS device capable of modulating the effective emissivity of radiators on spacecraft. The concept pursued uses hinged panels, or louvers, such that the heat emitted from the radiators is a function of louver angle. An electrostatic comb drive or other actuator can control the louver position. The initial design calls for the louvers to be gold coated while the underlying surface has high emissivity. Since the base MEMS material, silicon, is transparent in the infrared (IR) spectrum, the device has a minimum emissivity when closed and a maximum emissivity when open.
An initial set of polysilicon louver devices was designed at the Johns Hopkins Applied Physics Laboratory in conjunction with the Thermal Engineering Branch at NASA's Goddard Space Flight Center.

  13. Evaluation of MEMS-Based Wireless Accelerometer Sensors in Detecting Gear Tooth Faults in Helicopter Transmissions

    NASA Technical Reports Server (NTRS)

    Lewicki, David George; Lambert, Nicholas A.; Wagoner, Robert S.

    2015-01-01

    The diagnostics capability of micro-electro-mechanical systems (MEMS) based rotating accelerometer sensors in detecting gear-tooth crack failures in helicopter main-rotor transmissions was evaluated. MEMS sensors were installed on a pre-notched OH-58C spiral-bevel pinion gear. Endurance tests were performed and the gear was run to tooth fracture failure. Results from the MEMS sensors were compared to conventional accelerometers mounted on the transmission housing. Most of the four stationary accelerometers mounted on the gearbox housing, and most of the condition indicators (CIs) used, gave indications of failure at the end of the test. The MEMS system performed well and lasted the entire test, and all MEMS accelerometers gave an indication of failure at the end of the test. With regard to gear-tooth fault detection, the MEMS systems performed as well as, if not better than, the stationary accelerometers mounted on the gearbox housing. For both the MEMS sensors and the stationary sensors, the fault detection time was not much sooner than the actual tooth fracture time. The MEMS sensor spectrum data showed large first-order shaft-frequency sidebands due to the rotating frame of reference of the measurement. The method of constructing a pseudo tach signal from periodic characteristics of the vibration data succeeded in deriving a time-synchronous-averaged (TSA) signal without an actual tach, and proved to be an effective way to improve fault detection for the MEMS sensors.
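    The TSA step mentioned above can be sketched with an ordinary time-synchronous average: once a (pseudo) tach fixes the samples per revolution, averaging over many revolutions retains the shaft-synchronous gear-mesh components while suppressing asynchronous noise. The signal model below is a toy assumption, not the OH-58C data:

```python
import numpy as np

def time_synchronous_average(vib, samples_per_rev, n_revs):
    """Average whole shaft revolutions of the vibration record:
    shaft-synchronous components (gear mesh, tooth-fault signatures) add
    coherently while asynchronous noise shrinks roughly as 1/sqrt(n_revs)."""
    frames = vib[: samples_per_rev * n_revs].reshape(n_revs, samples_per_rev)
    return frames.mean(axis=0)

rng = np.random.default_rng(1)
spr, revs = 256, 200
phase = 2.0 * np.pi * np.arange(spr) / spr
mesh = 0.5 * np.sin(20.0 * phase)       # synchronous gear-mesh tone, 20/rev
raw = np.tile(mesh, revs) + rng.standard_normal(spr * revs)  # buried in noise
tsa = time_synchronous_average(raw, spr, revs)
```

With 200 revolutions the residual noise power in the average drops by roughly a factor of 200, which is why the mesh tone emerges cleanly even though it is far below the raw noise floor.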

  14. A novel method of calibrating a MEMS inertial reference unit on a turntable under limited working conditions

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Liang, Shufang; Yang, Yanqiang

    2017-10-01

    Micro-electro-mechanical systems (MEMS) inertial measurement devices are widely used in inertial navigation systems and have quickly emerged on the market due to their low cost, high reliability and small size. Calibration is the most effective way to remove the deterministic errors of an inertial reference unit (IRU), which in this paper consists of three orthogonally mounted MEMS gyros. However, common laboratory testing methods cannot predict the corresponding errors precisely when the turntable's working condition is restricted; in this paper, the turntable can only provide a relatively small rotation angle. Moreover, the errors must be compensated accurately because of the great effect of the craft's high angular velocity. To address this problem, a new method is proposed to evaluate the MEMS IRU's performance. In the calibration procedure, a one-axis table that rotates through a limited angle following a sine function provides the MEMS IRU's angular velocity. A new algorithm based on Fourier series is designed to calculate the misalignment and scale-factor errors. The proposed method is tested in a set of experiments, and the calibration results are compared to those of a traditional calibration method performed under normal working conditions to verify their correctness. In addition, a verification test at the given rotation speed is implemented for further demonstration.
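    The Fourier-coefficient idea can be sketched for a single gyro channel: with the table swinging as theta(t) = A*sin(w*t), the reference rate is A*w*cos(w*t), and a least-squares fit of the channel output against that fundamental recovers one coupling coefficient and the bias. This is a simplified illustration with assumed parameters, not the paper's exact algorithm:

```python
import numpy as np

def fit_gyro_channel(y, t, A, w):
    """Least-squares fit of one gyro channel against the known table rate
    A*w*cos(w*t); returns (coupling coefficient, bias). Run once per
    table axis to fill one column of the scale-factor/misalignment
    matrix."""
    rate = A * w * np.cos(w * t)
    X = np.column_stack([rate, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[0], coef[1]

t = np.arange(0.0, 10.0, 0.01)
A, w = np.deg2rad(5.0), 2.0 * np.pi * 0.5    # +/-5 deg swing at 0.5 Hz
rng = np.random.default_rng(2)
true_rate = A * w * np.cos(w * t)
y = 1.02 * true_rate + 0.01 + 0.002 * rng.standard_normal(t.size)  # scale 1.02, bias 0.01
scale, bias = fit_gyro_channel(y, t, A, w)
```

The point of the small-angle sinusoid is that even a few degrees of swing produces a known periodic rate whose fundamental the fit can lock onto, so the restricted turntable still separates scale factor from bias.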

  15. Preparation, electronic structure, and chemical bonding of lead-free (1 - x)(K0.5Bi0.5)TiO3- xBaTiO3 solid solution

    NASA Astrophysics Data System (ADS)

    Sasikumar, S.; Saravanan, R.; Saravanakumar, S.; Robert, M. Charles

    2018-01-01

    Polycrystalline lead-free (1 - x)(K0.5Bi0.5)TiO3- xBaTiO3, ((1 - x)KBT- xBT) ( x = 0.00, 0.08, 0.12) ceramics were synthesized via the solid-state reaction method. The powder X-ray diffraction (PXRD) and structural refinement results confirm a single-phase tetragonal structure with space group P4mm. The charge density distribution inside the unit cell of (1 - x)KBT- xBT was investigated by the maximum entropy method; such studies of (1 - x)KBT- xBT have not been reported previously. Charge density analysis reveals a reduction in ionic character along the K/Bi-O bond and an enhancement of covalent character along the Ti-O bond with the addition of BaTiO3. The surface morphology was studied by scanning electron microscopy (SEM), and energy-dispersive X-ray spectra (EDS) were used to determine the elemental compositions present in the system. The dielectric constant and loss tangent were studied as functions of frequency, and both decreased with increasing frequency. The room-temperature dielectric constant ( ɛ) and loss (tan δ) for x = 0.00 were about 511 and 0.51, respectively, at a frequency of 10 kHz.

  16. MEMS earthworm: a thermally actuated peristaltic linear micromotor

    NASA Astrophysics Data System (ADS)

    Arthur, Craig; Ellerington, Neil; Hubbard, Ted; Kujath, Marek

    2011-03-01

    This paper examines the design, fabrication and testing of a bio-mimetic MEMS (micro-electro mechanical systems) earthworm motor with external actuators. The motor consists of a passive mobile shuttle with two flexible diamond-shaped segments; each segment is independently squeezed by a pair of stationary chevron-shaped thermal actuators. Applying a specific sequence of squeezes to the earthworm segments, the shuttle can be driven backward or forward. Unlike existing inchworm drives that use clamping and thrusting actuators, the earthworm actuators apply only clamping forces to the shuttle, and lateral thrust is produced by the shuttle's compliant geometry. The earthworm assembly is fabricated using the PolyMUMPs process with planar dimensions of 400 µm width by 800 µm length. The stationary actuators operate within the range of 4-9 V and provide a maximum shuttle range of motion of 350 µm (approximately half its size), a maximum shuttle speed of 17 mm s-1 at 10 kHz, and a maximum dc shuttle force of 80 µN. The shuttle speed was found to vary linearly with both input voltage and input frequency. The shuttle force was found to vary linearly with the actuator voltage.

  17. Predictive modeling and mapping of Malayan Sun Bear (Helarctos malayanus) distribution using maximum entropy.

    PubMed

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    Species distribution models are among the available tools for mapping geographical distributions and potential suitable habitats. These techniques are very helpful for characterizing the poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. The MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives at marginal thresholds of bio-climatic variables. Moreover, the current protected-area networks within Peninsular Malaysia do not cover most of the sun bear's potentially suitable habitat. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear population.
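    The core of a MaxEnt species-distribution model can be sketched in a few lines: choose the distribution over grid cells that matches the empirical feature means at the presence records while otherwise maximizing entropy, i.e. a Gibbs distribution p(x) proportional to exp(lam . f(x)) fit by gradient ascent. The grid, features and presence records below are synthetic assumptions, not the MaxEnt software or the sun-bear data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_cells, n_feat = 500, 3
F = rng.standard_normal((n_cells, n_feat))   # environmental covariates per cell

# synthetic presence records, concentrated where feature 0 is high
true_p = np.exp(2.0 * F[:, 0])
true_p /= true_p.sum()
records = rng.choice(n_cells, size=400, p=true_p)
f_emp = F[records].mean(axis=0)              # empirical feature means

# maximum entropy subject to matching f_emp -> p(x) ~ exp(lam . f(x));
# fit lam by gradient ascent on the concave log-likelihood
lam = np.zeros(n_feat)
for _ in range(5000):
    p = np.exp(F @ lam)
    p /= p.sum()
    lam += 0.1 * (f_emp - F.T @ p)           # gradient: empirical minus model means

p = np.exp(F @ lam)
p /= p.sum()                                 # predicted habitat-suitability map
```

At convergence the model feature means equal the empirical means exactly, and the fitted weights identify which covariates shape the distribution, which is the "key factors" output the abstract refers to.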

  18. Predictive Modeling and Mapping of Malayan Sun Bear (Helarctos malayanus) Distribution Using Maximum Entropy

    PubMed Central

    Nazeri, Mona; Jusoff, Kamaruzaman; Madani, Nima; Mahmud, Ahmad Rodzi; Bahman, Abdul Rani; Kumar, Lalit

    2012-01-01

    Species distribution models are among the available tools for mapping geographical distributions and potential suitable habitats. These techniques are very helpful for characterizing the poorly known distributions of species in poorly sampled areas, such as the tropics. Maximum Entropy (MaxEnt) is a recently developed modeling method that can be successfully calibrated using a relatively small number of records. In this research, the MaxEnt model was applied to describe the distribution and identify the key factors shaping the potential distribution of the vulnerable Malayan Sun Bear (Helarctos malayanus) in one of its main remaining habitats in Peninsular Malaysia. The MaxEnt results showed that even though Malayan sun bear habitat is tied to tropical evergreen forests, the species lives at marginal thresholds of bio-climatic variables. Moreover, the current protected-area networks within Peninsular Malaysia do not cover most of the sun bear's potentially suitable habitat. Assuming that the predicted suitability map covers the sun bear's actual distribution, future climate change, forest degradation and illegal hunting could severely affect the sun bear population. PMID:23110182

  19. Human vision is determined based on information theory.

    PubMed

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-03

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
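    The intensity-only part of the argument is easy to reproduce: the Planck spectral radiance of a blackbody at the Sun's effective temperature peaks near 502 nm (Wien's displacement law), not at the observed 555 nm photopic or 508 nm scotopic maxima, which is the gap the entropy argument addresses. A quick numerical check:

```python
import numpy as np

def planck_radiance(lam_m, T):
    """Planck spectral radiance B_lambda up to a constant factor
    (sufficient for locating the peak)."""
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    return lam_m ** -5 / (np.exp(h * c / (lam_m * kB * T)) - 1.0)

lam = np.linspace(200e-9, 1200e-9, 5001)   # near-UV through near-IR, in meters
T_sun = 5778.0                             # effective solar temperature, K
peak_nm = 1e9 * lam[np.argmax(planck_radiance(lam, T_sun))]
# Wien displacement law predicts 2.898e-3 / 5778 ~ 502 nm
```

The numerical peak agrees with Wien's law to within the grid resolution; the offset of the measured vision peaks from this value is what the paper attributes to the entropy of the radiation and atmospheric composition.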

  20. Human vision is determined based on information theory

    NASA Astrophysics Data System (ADS)

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  1. Quantifying Extrinsic Noise in Gene Expression Using the Maximum Entropy Framework

    PubMed Central

    Dixit, Purushottam D.

    2013-01-01

    We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed. PMID:23790383
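    The flavor of the framework can be sketched for the mRNA case: take the maximum-entropy distribution of the extrinsic rate lambda given only its mean (an exponential), mix Poisson copy-number distributions over it, and observe that the result is wider than Poisson (Fano factor > 1). The grid and mean value below are illustrative assumptions, not the paper's fitted quantities:

```python
import numpy as np
from math import lgamma

def poisson_pmf(n, lam):
    """Poisson pmf evaluated on an array of rates lam, computed in log
    space for numerical safety."""
    return np.exp(n * np.log(lam) - lam - lgamma(n + 1))

mean_rate = 8.0                              # assumed mean extrinsic rate
lam_grid = np.linspace(0.01, 60.0, 2000)     # discretized rate axis
q = np.exp(-lam_grid / mean_rate)            # max-ent given the mean: exponential
q /= q.sum()

ns = np.arange(100)
p_mix = np.array([np.sum(q * poisson_pmf(n, lam_grid)) for n in ns])

mean = np.sum(ns * p_mix)
var = np.sum((ns - mean) ** 2 * p_mix)
fano = var / mean    # > 1: wider than Poisson, as extrinsic variation predicts
```

A pure Poisson process has Fano factor 1; the extrinsic mixing inflates it well above 1, illustrating how variation in extrinsic factors can account for the wider-than-Poisson mRNA copy-number distributions mentioned above.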

  2. Quantifying extrinsic noise in gene expression using the maximum entropy framework.

    PubMed

    Dixit, Purushottam D

    2013-06-18

    We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in Escherichia coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli whereas others need verification. Application of the presented framework to more complex situations is also discussed. Copyright © 2013 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  3. Human vision is determined based on information theory

    PubMed Central

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-01-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition. PMID:27808236

  4. Classification of pulmonary pathology from breath sounds using the wavelet packet transform and an extreme learning machine.

    PubMed

    Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S

    2017-06-08

    Auscultation is a medical procedure used for the initial diagnosis and assessment of lung and heart diseases. Here, we assess the performance of extreme learning machine (ELM) classifiers for the diagnosis of pulmonary pathology from breath sounds. Energy and entropy features were extracted from the breath sounds using the wavelet packet transform, and their statistical significance was evaluated by one-way analysis of variance (ANOVA). The extracted features were fed to the ELM classifier. The maximum classification accuracies obtained with conventional validation (CV) of the energy and entropy features were 97.36% and 98.37%, respectively, whereas the accuracies obtained with cross-validation (CRV) were 96.80% and 97.91%, respectively. In addition, maximum classification accuracies of 98.25% and 99.25% were obtained for the CV and CRV of the ensemble features, respectively. The results indicate that the classification accuracy obtained with the ensemble features was higher than that obtained with the energy or entropy features alone.
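    The feature extraction described above can be sketched as a full wavelet-packet split followed by per-subband energies and a Shannon entropy over the normalized energies. The abstract does not state the mother wavelet or decomposition depth, so the Haar basis and 3 levels below are assumptions:

```python
import numpy as np

def haar_step(x):
    """One Haar analysis step: (approximation, detail), each half length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def wavelet_packet_features(x, levels=3):
    """Full wavelet-packet split of x to `levels` (2**levels subbands),
    returning per-subband energies and the Shannon entropy of the
    normalized energy distribution."""
    bands = [np.asarray(x, dtype=float)]
    for _ in range(levels):
        nxt = []
        for b in bands:
            a, d = haar_step(b)
            nxt.extend([a, d])
        bands = nxt
    energies = np.array([np.sum(b ** 2) for b in bands])
    p = energies / energies.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    return energies, entropy
```

A narrowband (e.g. wheeze-like) sound concentrates energy in few subbands and yields low entropy, while broadband (e.g. crackle-like) sounds spread energy and yield high entropy; those contrasts are what the classifier exploits.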

  5. Stability, Nonlinearity and Reliability of Electrostatically Actuated MEMS Devices

    PubMed Central

    Zhang, Wen-Ming; Meng, Guang; Chen, Di

    2007-01-01

    Electrostatic actuation is a special branch of micro-electro-mechanical systems (MEMS) with a wide range of applications in sensing and actuating devices. This paper provides a detailed survey and analysis of the electrostatic forces of importance in MEMS: their physical model, scaling effects, stability, nonlinearity and reliability. Understanding the effects of electrostatic forces in MEMS makes it possible to explain scientifically many phenomena of practical importance, such as pull-in instability; the effects of effective stiffness, dielectric charging, stress gradients and temperature on the pull-in voltage; nonlinear dynamic effects; and reliability problems caused by electrostatic forces, and consequently to explore and utilize the great potential of MEMS technology effectively. A simplified parallel-plate capacitor model is proposed to investigate the resonance response, inherent nonlinearity, stiffness-softening effect and coupled nonlinear effects of typical electrostatically actuated MEMS devices. Many failure modes and mechanisms, and the various methods and techniques used to analyze and reduce the failures, including materials selection, sound design and extension of the controllable travel range, are discussed for electrostatically actuated MEMS devices. Numerical simulations and discussions indicate that the effects of instability, nonlinear characteristics and reliability under electrostatic forces cannot be ignored and need further investigation.
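    The simplified parallel-plate model mentioned above has a well-known closed form for pull-in: the electrostatic force eps0*A*V^2/(2*g^2) overcomes the spring force k*(g0 - g) once the plate travels one third of the gap, giving V_pi = sqrt(8*k*g0^3 / (27*eps0*A)). A sketch with illustrative parameter values (not taken from the paper):

```python
import numpy as np

def pullin_voltage(k, g0, area, eps0=8.854e-12):
    """Closed-form pull-in voltage of the 1-DOF parallel-plate model:
    V_pi = sqrt(8*k*g0^3 / (27*eps0*A)); instability sets in once the
    plate has traveled one third of the initial gap g0."""
    return np.sqrt(8.0 * k * g0 ** 3 / (27.0 * eps0 * area))

def equilibrium_gap(V, k, g0, area, eps0=8.854e-12):
    """Stable gap solving k*(g0 - g) = eps0*A*V^2 / (2*g^2), found by
    bisection on the stable branch [2*g0/3, g0]; None past pull-in."""
    f = lambda g: k * (g0 - g) - eps0 * area * V ** 2 / (2.0 * g ** 2)
    lo, hi = 2.0 * g0 / 3.0, g0
    if f(lo) < 0.0:              # spring can no longer balance: pulled in
        return None
    for _ in range(100):         # f decreases in g on this branch
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For an assumed k = 10 N/m, 2 µm gap and 100 µm x 100 µm plate, the model gives a pull-in voltage of roughly 16 V; the voltage-controllable travel is limited to the first third of the gap, which is why travel-range extension is among the techniques the survey discusses.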

  6. MEMS in Space Systems

    NASA Technical Reports Server (NTRS)

    Lyke, J. C.; Michalicek, M. A.; Singaraju, B. K.

    1995-01-01

    Micro-electro-mechanical systems (MEMS) provide an emerging technology that has the potential for revolutionizing the way space systems are designed, assembled, and tested. The high launch costs of current space systems are a major determining factor in the amount of functionality that can be integrated in a typical space system. MEMS devices have the ability to increase the functionality of selected satellite subsystems while simultaneously decreasing spacecraft weight. The Air Force Phillips Laboratory (PL) is supporting the development of a variety of MEMS related technologies as one of several methods to reduce the weight of space systems and increase their performance. MEMS research is a natural extension of PL research objectives in micro-electronics and advanced packaging. Examples of applications that are under research include on-chip micro-coolers, micro-gyroscopes, vibration sensors, and three-dimensional packaging technologies to integrate electronics with MEMS devices. The first on-orbit space flight demonstration of these and other technologies is scheduled for next year.

  7. Lattice NRQCD study of S- and P-wave bottomonium states in a thermal medium with Nf=2 +1 light flavors

    NASA Astrophysics Data System (ADS)

    Kim, Seyong; Petreczky, Peter; Rothkopf, Alexander

    2015-03-01

    We investigate the properties of S- and P-wave bottomonium states in the vicinity of the deconfinement transition temperature. The light degrees of freedom are represented by dynamical lattice quantum chromodynamics (QCD) configurations of the HotQCD collaboration with Nf = 2+1 flavors. Bottomonium correlators are obtained from bottom quark propagators, computed in nonrelativistic QCD under the background of these gauge field configurations. The spectral functions for the 3S1 (ϒ) and 3P1 (χb1) channels are extracted from the Euclidean time correlators using a novel Bayesian approach in the temperature region 140 MeV ≤ T ≤ 249 MeV, and the results are contrasted with those from the standard maximum entropy method. We find that the new Bayesian approach is far superior to the maximum entropy method. It enables us to study reliably the presence or absence of the lowest state signal in the spectral function of a certain channel, even under the limitations present in the finite temperature setup. We find that χb1 survives up to T = 249 MeV, the highest temperature considered in our study, and put stringent constraints on the size of the medium modification of the ϒ and χb1 states.

  8. Spatiotemporal analysis and mapping of oral cancer risk in changhua county (taiwan): an application of generalized bayesian maximum entropy method.

    PubMed

    Yu, Hwa-Lung; Chiang, Chi-Ting; Lin, Shu-De; Chang, Tsun-Kuo

    2010-02-01

    The incidence rate of oral cancer in Changhua County was the highest among the 23 counties of Taiwan in 2001. However, in health data analysis, crude or adjusted incidence rates of a rare event (e.g., cancer) for small populations often exhibit high variances and are, thus, less reliable. We proposed a generalized Bayesian Maximum Entropy (GBME) analysis of spatiotemporal disease mapping under conditions of considerable data uncertainty. GBME was used to study the oral cancer population incidence in Changhua County (Taiwan). Methodologically, GBME is based on an epistematics principles framework and generates spatiotemporal estimates of oral cancer incidence rates. It accounts for the multi-sourced uncertainty of the rates, including small population effects, and the composite space-time dependence of rare events in terms of an extended Poisson-based semivariogram. The results showed that GBME analysis reduces the noise in the oral cancer data arising from the population size effect. Compared to the raw incidence data, the maps of GBME-estimated results can identify high-risk oral cancer regions in Changhua County, where the prevalence of betel quid chewing and cigarette smoking is relatively higher than in the rest of the areas. The GBME method is a valuable tool for spatiotemporal disease mapping under conditions of uncertainty. 2010 Elsevier Inc. All rights reserved.

  9. Representing and computing regular languages on massively parallel networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, M.I.; O'Sullivan, J.A.; Boysam, B.

    1991-01-01

    This paper proposes a general method for incorporating rule-based constraints corresponding to regular languages into stochastic inference problems, thereby allowing for a unified representation of stochastic and syntactic pattern constraints. The authors' approach first establishes the formal connection of rules to Chomsky grammars, and generalizes the original work of Shannon on the encoding of rule-based channel sequences to Markov chains of maximum entropy. This maximum entropy probabilistic view leads to Gibbs representations with potentials whose number of minima grows at precisely the exponential rate at which the language of deterministically constrained sequences grows. These representations are coupled to stochastic diffusion algorithms, which sample the language-constrained sequences by visiting the energy minima according to the underlying Gibbs probability law. The coupling to stochastic search methods yields the all-important practical result that fully parallel stochastic cellular automata may be derived to generate samples from the rule-based constraint sets. The production rules and neighborhood state structure of the language of sequences directly determine the necessary connection structures of the required parallel computing surface. Representations of this type have been mapped to the DAP-510 massively parallel processor, consisting of 1024 mesh-connected bit-serial processing elements, for performing automated segmentation of electron-micrograph images.
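    Shannon's maximum-entropy construction referenced above has a compact numerical form: given the 0/1 transition matrix of the constraint automaton, the maximum-entropy Markov chain follows from its dominant eigenvalue and eigenvector. The sketch below illustrates this on the "no two consecutive ones" language, a standard toy example rather than the authors' image-segmentation setting.

```python
import numpy as np

def maxent_chain(A):
    """Maximum-entropy Markov chain on the sequences allowed by a 0/1
    transition-constraint matrix A (Shannon's construction). The entropy
    rate is log2(lam) for the largest eigenvalue lam of A, and the
    transition probabilities are P[i,j] = A[i,j]*v[j] / (lam*v[i]),
    with v the corresponding right eigenvector."""
    A = np.asarray(A, dtype=float)
    w, V = np.linalg.eig(A)
    k = np.argmax(w.real)
    lam, v = w[k].real, np.abs(V[:, k].real)
    P = A * v[None, :] / (lam * v[:, None])
    return P, float(np.log2(lam))

# constraint: no two consecutive 1s (the golden-mean shift)
A = [[1, 1],
     [1, 0]]
P, h = maxent_chain(A)
print(P, h)  # entropy rate log2(golden ratio) ~ 0.694 bits/symbol
```

    The same recipe applies to any regular-language constraint once its automaton's transition matrix is written down; the dominant eigenvalue controls the exponential growth rate of the constrained language, mirroring the growth of energy minima noted in the abstract.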

  10. Derivation of Hunt equation for suspension distribution using Shannon entropy theory

    NASA Astrophysics Data System (ADS)

    Kundu, Snehasis

    2017-12-01

    In this study, the Hunt equation for computing suspension concentration in sediment-laden flows is derived using Shannon entropy theory. Considering the inverse of the void ratio as a random variable and using the principle of maximum entropy, the probability density function and cumulative distribution function of suspension concentration are derived. A new and more general cumulative distribution function for the flow domain is proposed, which includes several other specific CDF models reported in the literature. This general form of the cumulative distribution function also helps to derive the Rouse equation. The entropy-based approach helps to estimate model parameters using measured suspension concentration data, which shows the advantage of using entropy theory. Finally, the model parameters in the entropy-based model are also expressed as functions of the Rouse number to establish a link between the parameters of the deterministic and probabilistic approaches.
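    As a minimal illustration of the entropy-maximization step (not the Hunt derivation itself, which uses additional flow-specific constraints), fixing only the mean of a nonnegative random quantity and maximizing Shannon entropy yields the exponential density:

```python
import numpy as np

def maxent_exponential(mean):
    """Maximum-entropy density on [0, inf) under a single mean constraint:
    f(c) = (1/mu) * exp(-c/mu), the exponential density with mu = mean."""
    return lambda c: np.exp(-np.asarray(c) / mean) / mean

rng = np.random.default_rng(1)
data = rng.exponential(scale=0.3, size=5000)  # synthetic 'concentration' sample
f = maxent_exponential(data.mean())           # Lagrange multiplier fixed by the data

# numerical check of normalization over a support wide enough for the tail
c = np.linspace(0.0, 10.0, 20001)
y = f(c)
integral = float(np.sum((y[1:] + y[:-1]) * np.diff(c)) / 2.0)
print(integral)
```

    The same machinery, with more constraints and the inverse void ratio as the random variable, is what produces the generalized CDF of the paper.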

  11. MEMS-based handheld confocal microscope for in-vivo skin imaging

    PubMed Central

    Arrasmith, Christopher L.; Dickensheets, David L.; Mahadevan-Jansen, Anita

    2010-01-01

    This paper describes a handheld laser scanning confocal microscope for skin microscopy. Beam scanning is accomplished with an electromagnetic MEMS bi-axial micromirror developed for pico projector applications, providing 800x600 (SVGA) resolution at 56 frames per second. The design uses commercial objective lenses with an optional hemisphere front lens, operating with a range of numerical aperture from NA=0.35 to NA=1.1 and corresponding diagonal field of view ranging from 653 μm to 216 μm. Using NA=1.1 and a laser wavelength of 830 nm we measured the axial response to be 1.14 μm full width at half maximum, with a corresponding 10%-90% lateral edge response of 0.39 μm. Image examples showing both epidermal and dermal features including capillary blood flow are provided. These images represent the highest resolution and frame rate yet achieved for tissue imaging with a MEMS bi-axial scan mirror. PMID:20389391

  12. The Matrix Element Method: Past, Present, and Future

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gainer, James S.; Lykken, Joseph; Matchev, Konstantin T.

    2013-07-12

    The increasing use of multivariate methods, and in particular the Matrix Element Method (MEM), represents a revolution in experimental particle physics. With continued exponential growth in computing capabilities, the use of sophisticated multivariate methods, already common, will soon become ubiquitous and ultimately almost compulsory. While the existence of sophisticated algorithms for disentangling signal and background might naively suggest a diminished role for theorists, the use of the MEM, with its inherent connection to the calculation of differential cross sections, will benefit from collaboration between theorists and experimentalists. In this white paper, we briefly describe the MEM and some of its recent uses, note some current issues and potential resolutions, and speculate about exciting future opportunities.

  13. The Development of a Portable Hard Disk Encryption/Decryption System with a MEMS Coded Lock.

    PubMed

    Zhang, Weiping; Chen, Wenyuan; Tang, Jian; Xu, Peng; Li, Yibin; Li, Shengyong

    2009-01-01

    In this paper, a novel portable hard-disk encryption/decryption system with a MEMS coded lock is presented, which can authenticate the user and provide the key for the AES encryption/decryption module. The portable hard-disk encryption/decryption system is composed of the authentication module, the USB portable hard-disk interface card, the ATA protocol command decoder module, the data encryption/decryption module, the cipher key management module, the MEMS coded lock controlling circuit module, the MEMS coded lock and the hard disk. The ATA protocol circuit, the MEMS control circuit and AES encryption/decryption circuit are designed and realized by FPGA(Field Programmable Gate Array). The MEMS coded lock with two couplers and two groups of counter-meshing-gears (CMGs) are fabricated by a LIGA-like process and precision engineering method. The whole prototype was fabricated and tested. The test results show that the user's password could be correctly discriminated by the MEMS coded lock, and the AES encryption module could get the key from the MEMS coded lock. Moreover, the data in the hard-disk could be encrypted or decrypted, and the read-write speed of the dataflow could reach 17 MB/s in Ultra DMA mode.

  14. Discretization and Preconditioning Algorithms for the Euler and Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS, and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement, which greatly adds to the robustness of the methods. Discrete maximum principle theory will be presented for general finite volume approximations on unstructured meshes. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed, followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance, such as the meshing of geometries, code implementation, turbulence modeling, and global convergence, will be addressed as needed.

  15. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies.

    PubMed

    Lorenz, Ralph D

    2010-05-12

    The 'two-box model' of planetary climate is discussed. This model has been used to demonstrate consistency of the equator-pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day : night temperature contrast observed on the extrasolar planet HD 189733b.
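    The two-box MEP construction can be sketched numerically: sweep the meridional heat transport, solve each box's steady-state energy balance, and keep the transport that maximizes entropy production. The absorbed-flux values and blackbody emission below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def two_box_mep(s_warm=300.0, s_cold=150.0):
    """Sweep the warm-to-cold heat transport F (W/m^2), solve each box's
    energy balance, and return the F maximizing the entropy production
    rate F * (1/T_cold - 1/T_warm)."""
    f_best, ep_best = 0.0, -np.inf
    for f in np.linspace(0.0, 149.0, 14901):
        t_warm = ((s_warm - f) / SIGMA) ** 0.25  # absorbed = emitted + exported
        t_cold = ((s_cold + f) / SIGMA) ** 0.25  # absorbed + imported = emitted
        ep = f * (1.0 / t_cold - 1.0 / t_warm)
        if ep > ep_best:
            f_best, ep_best = f, ep
    return f_best, ep_best

f_opt, ep_max = two_box_mep()
print(f_opt, ep_max)
```

    Entropy production vanishes both at zero transport and at the transport that equalizes the two temperatures, so the maximum lies strictly between; this interior maximum is the MEP selection principle the record describes.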

  16. The two-box model of climate: limitations and applications to planetary habitability and maximum entropy production studies

    PubMed Central

    Lorenz, Ralph D.

    2010-01-01

    The ‘two-box model’ of planetary climate is discussed. This model has been used to demonstrate consistency of the equator–pole temperature gradient on Earth, Mars and Titan with what would be predicted from a principle of maximum entropy production (MEP). While useful for exposition and for generating first-order estimates of planetary heat transports, it has too low a resolution to investigate climate systems with strong feedbacks. A two-box MEP model agrees well with the observed day : night temperature contrast observed on the extrasolar planet HD 189733b. PMID:20368253

  17. Optimal behavior of viscoelastic flow at resonant frequencies.

    PubMed

    Lambert, A A; Ibáñez, G; Cuevas, S; del Río, J A

    2004-11-01

    The global entropy generation rate in the zero-mean oscillatory flow of a Maxwell fluid in a pipe is analyzed with the aim of determining its behavior at resonant flow conditions. This quantity is calculated explicitly using the analytic expression for the velocity field and assuming isothermal conditions. The global entropy generation rate shows well-defined peaks at the resonant frequencies where the flow displays maximum velocities. It was found that resonant frequencies can be considered optimal in the sense that they maximize the power transmitted to the pulsating flow at the expense of maximum dissipation.

  18. Twenty-five years of maximum-entropy principle

    NASA Astrophysics Data System (ADS)

    Kapur, J. N.

    1983-04-01

    The strengths and weaknesses of the maximum entropy principle (MEP) are examined and some challenging problems that remain outstanding at the end of the first quarter century of the principle are discussed. The original formalism of the MEP is presented and its relationship to statistical mechanics is set forth. The use of MEP for characterizing statistical distributions, in statistical inference, nonlinear spectral analysis, transportation models, population density models, models for brand-switching in marketing and vote-switching in elections is discussed. Its application to finance, insurance, image reconstruction, pattern recognition, operations research and engineering, biology and medicine, and nonparametric density estimation is considered.

  19. Deep coupling of star tracker and MEMS-gyro data under highly dynamic and long exposure conditions

    NASA Astrophysics Data System (ADS)

    Sun, Ting; Xing, Fei; You, Zheng; Wang, Xiaochu; Li, Bin

    2014-08-01

    Star trackers and gyroscopes are the two most widely used attitude measurement devices in spacecraft. The star tracker is considered to have the highest accuracy under stable conditions among the different types of attitude measurement devices. In general, to detect faint stars and reduce the size of the star tracker, a long exposure time is usually used. Thus, under dynamic conditions, smearing of the star image may appear and result in decreased accuracy or even failed extraction of the star spot. This may cause inaccuracies in attitude measurement. Gyros have relatively good dynamic performance and are usually used in combination with star trackers. However, current combination methods focus mainly on data fusion at the output attitude level, which is inadequate for utilizing and processing the internal blurred star image information. A method for deep coupling of star tracker and MEMS-gyro data is proposed in this work. The method achieves deep fusion at the star image level. First, dynamic star image processing is performed based on the angular velocity information of the MEMS-gyro. The signal-to-noise ratio (SNR) of the star spot can thus be improved, and extraction is achieved more effectively. Then, a prediction model for optimal estimation of the star spot position is obtained through the MEMS-gyro, and an extended Kalman filter is introduced. Meanwhile, the MEMS-gyro drift can be estimated and compensated through the proposed method. These enable the star tracker to achieve high star centroid determination accuracy under dynamic conditions. The MEMS-gyro drift can be corrected even when attitude data of the star tracker cannot be solved and only one navigation star is captured in the field of view. Laboratory experiments were performed to verify the effectiveness of the proposed method and the whole system.
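    The prediction-correction idea, where gyro rates propagate the star-spot position between frames and centroid measurements in turn make the gyro bias observable, can be sketched with a linear Kalman filter on a single axis. All values, including the rad-to-pixel scale and noise levels, are invented for illustration; the paper uses an extended Kalman filter on real star images.

```python
import numpy as np

def track_spot(omega_meas, z_meas, dt=0.01, scale=1000.0):
    """Toy 1-axis coupling of star-spot centroids and gyro rates.
    State: [spot position (px), gyro bias (rad/s)]. The gyro rate
    propagates the spot between frames; centroid measurements correct
    the position and let the constant gyro bias be estimated."""
    x = np.array([z_meas[0], 0.0])                 # initial state
    P = np.diag([1.0, 1.0])                        # state covariance
    H = np.array([[1.0, 0.0]])                     # only position is measured
    Q = np.diag([1e-4, 1e-8])                      # process noise
    R = np.array([[0.05]])                         # centroid noise (px^2)
    F = np.array([[1.0, -dt * scale], [0.0, 1.0]])
    for w, z in zip(omega_meas[1:], z_meas[1:]):
        x = F @ x + np.array([w * dt * scale, 0.0])     # predict with gyro rate
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
        x = x + (K @ (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    return x

# simulate: true rate 0.02 rad/s, gyro bias 0.005 rad/s
rng = np.random.default_rng(2)
n, dt, scale = 500, 0.01, 1000.0
true_pos = np.cumsum(np.full(n, 0.02 * dt * scale))
omega = 0.02 + 0.005 + 0.001 * rng.normal(size=n)  # biased, noisy gyro
z = true_pos + 0.2 * rng.normal(size=n)            # noisy centroids
x_hat = track_spot(omega, z, dt, scale)
print(x_hat)  # [final position estimate, estimated gyro bias]
```

    Because the bias enters the prediction systematically while the centroid noise is zero-mean, the filter separates the two over time, which is the mechanism behind the drift compensation described in the abstract.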

  20. Statistical mechanical theory for steady state systems. VI. Variational principles

    NASA Astrophysics Data System (ADS)

    Attard, Phil

    2006-12-01

    Several variational principles that have been proposed for nonequilibrium systems are analyzed. These include the principle of minimum rate of entropy production due to Prigogine [Introduction to Thermodynamics of Irreversible Processes (Interscience, New York, 1967)], the principle of maximum rate of entropy production, which is common on the internet and in the natural sciences, two principles of minimum dissipation due to Onsager [Phys. Rev. 37, 405 (1931)] and to Onsager and Machlup [Phys. Rev. 91, 1505 (1953)], and the principle of maximum second entropy due to Attard [J. Chem. Phys. 122, 154101 (2005); Phys. Chem. Chem. Phys. 8, 3585 (2006)]. The approaches of Onsager and Attard are argued to be the only viable theories. These two are related, although their physical interpretation and mathematical approximations differ. A numerical comparison with computer simulation results indicates that Attard's expression is the only accurate theory. The implications for the Langevin and other stochastic differential equations are discussed.

  1. Energy transports by ocean and atmosphere based on an entropy extremum principle. I - Zonal averaged transports

    NASA Technical Reports Server (NTRS)

    Sohn, Byung-Ju; Smith, Eric A.

    1993-01-01

    The maximum entropy production principle suggested by Paltridge (1975) is applied to separating the satellite-determined required total transports into atmospheric and oceanic components. Instead of using the excessively restrictive equal energy dissipation hypothesis as a deterministic tool for separating transports between the atmosphere and ocean fluids, the satellite-inferred required 2D energy transports are imposed on Paltridge's energy balance model, which is then solved as a variational problem using the equal energy dissipation hypothesis only to provide an initial guess field. It is suggested that Southern Ocean transports are weaker than previously reported. It is argued that a maximum entropy production principle can serve as a governing rule on macroscale global climate, and, in conjunction with conventional satellite measurements of the net radiation balance, provides a means to decompose atmosphere and ocean transports from the total transport field.

  2. Direct measurement of the electrocaloric effect in poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) terpolymer films

    NASA Astrophysics Data System (ADS)

    Basso, Vittorio; Russo, Florence; Gerard, Jean-François; Pruvost, Sébastien

    2013-11-01

    We investigated the entropy change in poly(vinylidene fluoride-trifluoroethylene-chlorotrifluoroethylene) (P(VDF-TrFE-CTFE)) films in the temperature range between −5 °C and 60 °C by direct heat flux calorimetry using Peltier cell heat flux sensors. At the electric field E = 50 MV/m the isothermal entropy change attains a maximum of |Δs| = 4.2 J kg−1 K−1 at 31 °C, with an adiabatic temperature change ΔTad = 1.1 K. At temperatures below the maximum, in the range from 25 °C to −5 °C, the entropy change |Δs| rapidly decreases and the unipolar P vs. E relationship becomes hysteretic. This phenomenon is interpreted as indicating that the fluctuations of the polar segments of the polymer chain, responsible for the electrocaloric effect (ECE) in the polymer, become progressively frozen below the relaxor transition.

  3. Maximum Renyi entropy principle for systems with power-law Hamiltonians.

    PubMed

    Bashkirov, A G

    2004-09-24

    The Renyi distribution ensuring the maximum of Renyi entropy is investigated for a particular case of a power-law Hamiltonian. Both Lagrange parameters alpha and beta can be eliminated. It is found that beta does not depend on a Renyi parameter q and can be expressed in terms of an exponent kappa of the power-law Hamiltonian and an average energy U. The Renyi entropy for the resulting Renyi distribution reaches its maximal value at q=1/(1+kappa) that can be considered as the most probable value of q when we have no additional information on the behavior of the stochastic process. The Renyi distribution for such q becomes a power-law distribution with the exponent -(kappa+1). When q=1/(1+kappa)+epsilon (0
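    The quantities in the abstract are easy to compute directly: the Renyi entropy of a discrete distribution, its Shannon limit at q = 1, and the special order q = 1/(1+kappa) for a power-law distribution. The finite support below is an illustrative truncation, not part of the paper's analysis.

```python
import numpy as np

def renyi_entropy(p, q):
    """Renyi entropy H_q(p) = log(sum_i p_i**q) / (1 - q); the q -> 1
    limit recovers the Shannon entropy (natural log units)."""
    p = np.asarray(p, dtype=float)
    if np.isclose(q, 1.0):
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** q)) / (1.0 - q))

# power-law distribution p_i ~ i**-(kappa+1) on a truncated support
kappa = 1.0
i = np.arange(1, 1001, dtype=float)
p = i ** -(kappa + 1.0)
p /= p.sum()

q_star = 1.0 / (1.0 + kappa)  # the 'most probable' q discussed in the abstract
print(renyi_entropy(p, q_star), renyi_entropy(p, 1.0))
```

    Renyi entropy is nonincreasing in q, so H at q_star < 1 upper-bounds the Shannon entropy of the same distribution.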

  4. Maximum one-shot dissipated work from Rényi divergences

    NASA Astrophysics Data System (ADS)

    Yunger Halpern, Nicole; Garner, Andrew J. P.; Dahlsten, Oscar C. O.; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.

  5. Maximum one-shot dissipated work from Rényi divergences.

    PubMed

    Yunger Halpern, Nicole; Garner, Andrew J P; Dahlsten, Oscar C O; Vedral, Vlatko

    2018-05-01

    Thermodynamics describes large-scale, slowly evolving systems. Two modern approaches generalize thermodynamics: fluctuation theorems, which concern finite-time nonequilibrium processes, and one-shot statistical mechanics, which concerns small scales and finite numbers of trials. Combining these approaches, we calculate a one-shot analog of the average dissipated work defined in fluctuation contexts: the cost of performing a protocol in finite time instead of quasistatically. The average dissipated work has been shown to be proportional to a relative entropy between phase-space densities, to a relative entropy between quantum states, and to a relative entropy between probability distributions over possible values of work. We derive one-shot analogs of all three equations, demonstrating that the order-infinity Rényi divergence is proportional to the maximum possible dissipated work in each case. These one-shot analogs of fluctuation-theorem results contribute to the unification of these two toolkits for small-scale, nonequilibrium statistical physics.
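    The order-infinity Rényi divergence that appears in the one-shot bounds above reduces to a worst-case likelihood ratio, which finite orders approach from below. A minimal sketch on two hand-picked discrete distributions:

```python
import numpy as np

def renyi_divergence_inf(p, q):
    """Order-infinity Renyi divergence D_inf(p||q) = log max_i p_i / q_i."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.max(p / q)))

def renyi_divergence(p, q, a):
    """Renyi divergence of order a != 1:
    D_a(p||q) = log(sum_i p_i**a * q_i**(1-a)) / (a - 1)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.log(np.sum(p ** a * q ** (1.0 - a))) / (a - 1.0))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.4, 0.4, 0.2])
d50 = renyi_divergence(p, q, 50.0)
dinf = renyi_divergence_inf(p, q)
print(d50, dinf)  # d50 slightly below dinf = log(0.7/0.4)
```

    Because the Rényi divergence is nondecreasing in its order, D_inf bounds every finite-order divergence, matching its role as the maximum possible dissipated work in the abstract.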

  6. A graphic approach to include dissipative-like effects in reversible thermal cycles

    NASA Astrophysics Data System (ADS)

    Gonzalez-Ayala, Julian; Arias-Hernandez, Luis Antonio; Angulo-Brown, Fernando

    2017-05-01

    Since the 1980s, a connection between a family of maximum-work reversible thermal cycles and maximum-power finite-time endoreversible cycles has been established. The endoreversible cycles produce entropy at their couplings with the external heat baths. Thus, this kind of cycle can be optimized under criteria of merit that involve entropy production terms. While the relation between the concepts of work and power is quite direct, the finite-time objective functions involving entropy production apparently have no reversible counterparts. In the present paper we show that it is also possible to establish a connection between irreversible cycle models and reversible ones by means of the concept of "geometric dissipation", which has to do with the equivalent role of a deficit of areas between some reversible cycles and the Carnot cycle and actual dissipative terms in a Curzon-Ahlborn engine.

  7. A Novel Method to Increase LinLog CMOS Sensors’ Performance in High Dynamic Range Scenarios

    PubMed Central

    Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J.; Iborra, Andrés

    2011-01-01

    Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor’s maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method. PMID:22164083
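    The exposure-control loop can be sketched as a discrete PID controller driving measured image entropy toward a set-point. The gains, the set-point and the toy saturating sensor response below are invented for illustration; the paper's controller is adaptive and also monitors the saturation level.

```python
class ExposureController:
    """Discrete PID loop nudging sensor exposure toward a target image
    entropy (an illustrative sketch, not the authors' exact controller)."""

    def __init__(self, target, kp=0.3, ki=0.05, kd=0.05, dt=1.0):
        self.target, self.kp, self.ki, self.kd, self.dt = target, kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, measured_entropy):
        # returns an exposure-time adjustment (positive -> expose longer)
        err = self.target - measured_entropy
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


# closed loop against a toy plant whose entropy saturates at 8 bits
exposure = 1.0
ctl = ExposureController(target=7.0)
for _ in range(50):
    entropy = 8.0 * (1.0 - 2.0 ** (-exposure))  # hypothetical sensor response
    exposure = max(0.1, exposure + ctl.step(entropy))
print(exposure, entropy)
```

    An underexposed frame (low entropy) yields a positive correction and a saturated frame a negative one, which is the qualitative behavior the LinLog parameter adjustment needs.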

  8. A novel method to increase LinLog CMOS sensors' performance in high dynamic range scenarios.

    PubMed

    Martínez-Sánchez, Antonio; Fernández, Carlos; Navarro, Pedro J; Iborra, Andrés

    2011-01-01

    Images from high dynamic range (HDR) scenes must be obtained with minimum loss of information. For this purpose it is necessary to take full advantage of the quantification levels provided by the CCD/CMOS image sensor. LinLog CMOS sensors satisfy the above demand by offering an adjustable response curve that combines linear and logarithmic responses. This paper presents a novel method to quickly adjust the parameters that control the response curve of a LinLog CMOS image sensor. We propose to use an Adaptive Proportional-Integral-Derivative controller to adjust the exposure time of the sensor, together with control algorithms based on the saturation level and the entropy of the images. With this method the sensor's maximum dynamic range (120 dB) can be used to acquire good quality images from HDR scenes with fast, automatic adaptation to scene conditions. Adaptation to a new scene is rapid, with a sensor response adjustment of less than eight frames when working in real time video mode. At least 67% of the scene entropy can be retained with this method.

  9. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams.

    PubMed

    Gao, Lili; Zhou, Zai-Fa; Huang, Qing-An

    2017-11-08

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, i.e., generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. The doubly-clamped polybeam has been utilized to verify the accuracy of GPC, compared with our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the GPC approximations compared with the MC simulations. Appropriate choices of the 4-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress, concluded from experimental tests, differs by about 1.1% from that of the 4-order GPC method. The 4-order GPC approximation attains the mean test value of the residual stress with a probability of around 54.3%. The corresponding yield is over 90 percent within two standard deviations of the mean.
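    The non-intrusive side of a GPC analysis can be sketched with Gauss-Hermite collocation for a single standard-normal process deviation, compared against plain Monte Carlo. The quadratic response model below is hypothetical, not the paper's residual-stress model.

```python
import numpy as np

def pce_moments(g, order=4):
    """Mean and standard deviation of g(xi), xi ~ N(0,1), via Gauss-Hermite
    quadrature -- the collocation step behind a generalized polynomial
    chaos expansion of order 'order'."""
    x, w = np.polynomial.hermite_e.hermegauss(order + 1)  # probabilists' Hermite
    w = w / np.sqrt(2.0 * np.pi)                          # normalize weights
    m1 = float(np.sum(w * g(x)))
    m2 = float(np.sum(w * g(x) ** 2))
    return m1, np.sqrt(m2 - m1 ** 2)

# toy 'residual stress vs. process deviation' response (hypothetical model)
g = lambda xi: 20.0 + 3.0 * xi + 0.5 * xi ** 2

mean_q, std_q = pce_moments(g, order=4)
rng = np.random.default_rng(0)
mc = g(rng.normal(size=200_000))
print(mean_q, std_q, mc.mean(), mc.std())
```

    Five quadrature evaluations reproduce the moments that Monte Carlo needs hundreds of thousands of samples to estimate, which is the "reduced MC simulation labor" the abstract reports.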

  10. A Generalized Polynomial Chaos-Based Approach to Analyze the Impacts of Process Deviations on MEMS Beams

    PubMed Central

    Gao, Lili

    2017-01-01

    A microstructure beam is one of the fundamental elements in MEMS devices like cantilever sensors, RF/optical switches, varactors, resonators, etc. It is still difficult to precisely predict the performance of MEMS beams with currently available simulators due to the inevitable process deviations. Feasible numerical methods are required and can be used to improve the yield and profits of MEMS devices. In this work, process deviations are considered to be stochastic variables, and a newly developed numerical method, i.e., generalized polynomial chaos (GPC), is applied to the simulation of the MEMS beam. The doubly-clamped polybeam has been utilized to verify the accuracy of GPC, compared with our Monte Carlo (MC) approaches. Performance predictions have been made on the residual stress by obtaining its distributions in GaAs Monolithic Microwave Integrated Circuit (MMIC)-based MEMS beams. The results show that errors are within 1% for the GPC approximations compared with the MC simulations. Appropriate choices of the 4-order GPC expansions with orthogonal terms have also succeeded in reducing the MC simulation labor. The mean value of the residual stress, concluded from experimental tests, differs by about 1.1% from that of the 4-order GPC method. The 4-order GPC approximation attains the mean test value of the residual stress with a probability of around 54.3%. The corresponding yield is over 90 percent within two standard deviations of the mean. PMID:29117096

  11. Programmable Low-Power Low-Noise Capacitance to Voltage Converter for MEMS Accelerometers

    PubMed Central

    Royo, Guillermo; Sánchez-Azqueta, Carlos; Gimeno, Cecilia; Aldea, Concepción; Celma, Santiago

    2016-01-01

    In this work, we present a capacitance-to-voltage converter (CVC) for capacitive accelerometers based on microelectromechanical systems (MEMS). Based on a fully-differential transimpedance amplifier (TIA), it features a 34-dB transimpedance gain control and over one decade of programmable bandwidth, from 75 kHz to 1.2 MHz. The TIA is aimed at low-cost, low-power capacitive sensor applications. It has been designed in a standard 0.18-μm CMOS technology and its power consumption is only 54 μW. At the maximum transimpedance configuration, the TIA shows an equivalent input noise of 42 fA/√Hz at 50 kHz, which corresponds to 100 μg/√Hz. PMID:28042830

  12. Programmable Low-Power Low-Noise Capacitance to Voltage Converter for MEMS Accelerometers.

    PubMed

    Royo, Guillermo; Sánchez-Azqueta, Carlos; Gimeno, Cecilia; Aldea, Concepción; Celma, Santiago

    2016-12-30

    In this work, we present a capacitance-to-voltage converter (CVC) for capacitive accelerometers based on microelectromechanical systems (MEMS). Based on a fully-differential transimpedance amplifier (TIA), it features a 34-dB transimpedance gain control and over one decade of programmable bandwidth, from 75 kHz to 1.2 MHz. The TIA is aimed at low-cost, low-power capacitive sensor applications. It has been designed in a standard 0.18-μm CMOS technology and its power consumption is only 54 μW. At the maximum transimpedance configuration, the TIA shows an equivalent input noise of 42 fA/√Hz at 50 kHz, which corresponds to 100 μg/√Hz.

  13. 128×128 three-dimensional MEMS optical switch module with simultaneous optical path connection for optical cross-connect systems.

    PubMed

    Mizukami, Masato; Yamaguchi, Joji; Nemoto, Naru; Kawajiri, Yuko; Hirata, Hirooki; Uchiyama, Shingo; Makihara, Mitsuhiro; Sakata, Tomomi; Shimoyama, Nobuhiro; Oda, Kazuhiro

    2011-07-20

    A 128×128 three-dimensional MEMS optical switch module and a switching-control algorithm for high-speed connection and optical power stabilization are described. A prototype switch module enables the simultaneous switching of all optical paths. The insertion loss is less than 4.6 dB and is 2.3 dB on average. The switching time is less than 38 ms and is 8 ms on average. We confirmed that the maximum optical power can be obtained and optical power stabilization control is possible. The results confirm that the module is suitable for practical use in optical cross-connect systems. © 2011 Optical Society of America

  14. 5 V Compatible Two-Axis PZT Driven MEMS Scanning Mirror with Mechanical Leverage Structure for Miniature LiDAR Application.

    PubMed

    Ye, Liangchen; Zhang, Gaofei; You, Zheng

    2017-03-05

    The MEMS (Micro-Electro-Mechanical System) scanning mirror is an optical MEMS device that can scan laser beams across one or two dimensions. MEMS scanning mirrors can be applied in a variety of applications, such as laser display, bio-medical imaging and Light Detection and Ranging (LiDAR). These commercial applications have recently created a great demand for low-driving-voltage and low-power MEMS mirrors. However, no reported two-axis MEMS scanning mirror can operate from a universal supply voltage such as 5 V. In this paper, we present an ultra-low-voltage-driven two-axis MEMS scanning mirror that is 5 V compatible. To realize low voltage and low power, a two-axis MEMS scanning mirror with mechanical leverage driven by PZT (lead zirconate titanate) ceramic is designed, modeled, fabricated and characterized. To further decrease the power of the MEMS scanning mirror, a new method of impedance matching for PZT ceramic driven by a two-frequency mixed signal is established. As experimental results show, this MEMS scanning mirror reaches a two-axis scanning angle of 41.9° × 40.3° at a total driving voltage of 4.2 Vpp and a total power of 16 mW. The effective reflective diameter of the mirror is 2 mm and the operating frequencies of the two-axis scanning are 947.51 Hz and 1464.66 Hz, respectively.

  15. 5 V Compatible Two-Axis PZT Driven MEMS Scanning Mirror with Mechanical Leverage Structure for Miniature LiDAR Application

    PubMed Central

    Ye, Liangchen; Zhang, Gaofei; You, Zheng

    2017-01-01

    The MEMS (Micro-Electro-Mechanical System) scanning mirror is an optical MEMS device that can scan laser beams across one or two dimensions. MEMS scanning mirrors can be applied in a variety of applications, such as laser display, bio-medical imaging and Light Detection and Ranging (LiDAR). These commercial applications have recently created a great demand for low-driving-voltage and low-power MEMS mirrors. However, no reported two-axis MEMS scanning mirror can operate from a universal supply voltage such as 5 V. In this paper, we present an ultra-low-voltage-driven two-axis MEMS scanning mirror that is 5 V compatible. To realize low voltage and low power, a two-axis MEMS scanning mirror with mechanical leverage driven by PZT (lead zirconate titanate) ceramic is designed, modeled, fabricated and characterized. To further decrease the power of the MEMS scanning mirror, a new method of impedance matching for PZT ceramic driven by a two-frequency mixed signal is established. As experimental results show, this MEMS scanning mirror reaches a two-axis scanning angle of 41.9° × 40.3° at a total driving voltage of 4.2 Vpp and a total power of 16 mW. The effective reflective diameter of the mirror is 2 mm and the operating frequencies of the two-axis scanning are 947.51 Hz and 1464.66 Hz, respectively. PMID:28273880

  16. Thermally-induced voltage alteration for analysis of microelectromechanical devices

    DOEpatents

    Walraven, Jeremy A.; Cole, Jr., Edward I.

    2002-01-01

    A thermally-induced voltage alteration (TIVA) apparatus and method are disclosed for analyzing a microelectromechanical (MEM) device with or without on-board integrated circuitry. One embodiment of the TIVA apparatus uses constant-current biasing of the MEM device while scanning a focused laser beam over electrically-active members therein to produce localized heating which alters the power demand of the MEM device and thereby changes the voltage of the constant-current source. This changing voltage of the constant-current source can be measured and used in combination with the position of the focused and scanned laser beam to generate an image of any short-circuit defects in the MEM device (e.g. due to stiction or fabrication defects). In another embodiment of the TIVA apparatus, an image can be generated directly from a thermoelectric potential produced by localized laser heating at the location of any short-circuit defects in the MEM device, without any need for supplying power to the MEM device. The TIVA apparatus can be formed, in part, from a scanning optical microscope, and has applications for qualification testing or failure analysis of MEM devices.

  17. Use of thermal cycling to reduce adhesion of OTS coated MEMS cantilevers

    NASA Astrophysics Data System (ADS)

    Ali, Shaikh M.; Phinney, Leslie M.

    2003-01-01

    Microelectromechanical systems (MEMS) have enormous potential to contribute in diverse fields such as automotive, health care, aerospace, consumer products, and biotechnology, but successful commercial applications of MEMS are still small in number. Reliability of MEMS is a major impediment to the commercialization of laboratory prototypes. Due to the multitude of MEMS applications and the numerous processing and packaging steps, MEMS are exposed to a variety of environmental conditions, making the prediction of operational reliability difficult. In this paper, we investigate the effects of operating temperature on the in-use adhesive failure of electrostatically actuated MEMS microcantilevers coated with octadecyltrichlorosilane (OTS) films. The cantilevers are subjected to repeated temperature cycles and electrostatically actuated at temperatures between 25°C and 300°C in ambient air. The experimental results indicate that temperature cycling of the OTS coated cantilevers in air reduces the sticking probability of the microcantilevers. The sticking probability of the OTS coated cantilevers was highest during heating, decreased during cooling, and was lowest during reheating. Modifications to the OTS release method to increase its yield are also discussed.

  18. Wavelength specific excitation of gold nanoparticle thin-films

    NASA Astrophysics Data System (ADS)

    Lucas, Thomas M.; James, Kurtis T.; Beharic, Jasmin; Moiseeva, Evgeniya V.; Keynton, Robert S.; O'Toole, Martin G.; Harnett, Cindy K.

    2014-01-01

    Advances in microelectromechanical systems (MEMS) continue to empower researchers with the ability to sense and actuate at the micro scale. Thermally driven MEMS components are often used for their rapid response and ability to apply relatively high forces. However, thermally driven MEMS often have high power consumption and require physical wiring to the device. This work demonstrates a basis for designing light-powered MEMS with a wavelength-specific response. This is accomplished by patterning surface regions with a thin film containing gold nanoparticles that are tuned to have an absorption peak at a particular wavelength. The heating behavior of these patterned surfaces is selected by the wavelength of the laser directed at the sample. This method also eliminates the need for wires to power a device. The results demonstrate that gold nanoparticle films are effective wavelength-selective absorbers. This "hybrid" of infrared-absorbent gold nanoparticles and MEMS fabrication technology has potential applications in light-actuated switches and other mechanical structures that must bend at specific regions. Deposition methods and surface chemistry will be integrated with three-dimensional MEMS structures in the next phase of this work. The long-term goal of this project is a system of light-powered microactuators for exploring cellular responses to mechanical stimuli, increasing our fundamental understanding of tissue response to everyday mechanical stresses at the molecular level.

  19. Time-series analysis of sleep wake stage of rat EEG using time-dependent pattern entropy

    NASA Astrophysics Data System (ADS)

    Ishizaki, Ryuji; Shinba, Toshikazu; Mugishima, Go; Haraguchi, Hikaru; Inoue, Masayoshi

    2008-05-01

    We performed electroencephalography (EEG) for six male Wistar rats to clarify temporal behaviors at different levels of consciousness. Levels were identified both by conventional sleep analysis methods and by our novel entropy method. In our method, time-dependent pattern entropy is introduced, by which EEG is reduced to binary symbolic dynamics and the pattern of symbols in a sliding temporal window is considered. A high correlation was obtained between level of consciousness as measured by the conventional method and mean entropy in our entropy method. Mean entropy was maximal while awake (stage W) and decreased as sleep deepened. These results suggest that time-dependent pattern entropy may offer a promising method for future sleep research.
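The sliding-window pattern entropy described above can be sketched in a few lines. The binarization threshold (the signal median), word length and window size below are illustrative assumptions, not the authors' exact choices; the `pattern_entropy` helper is hypothetical, reducing a signal to binary symbols, encoding length-m words and computing the Shannon entropy of the word distribution per window.

```python
import numpy as np

def pattern_entropy(x, m=3, win=256):
    """Shannon entropy (bits) of length-m binary patterns per window.

    Sketch of time-dependent pattern entropy; median-threshold
    binarization and the (m, win) values are assumptions.
    """
    sym = (x > np.median(x)).astype(int)
    # Encode each length-m pattern as an integer in [0, 2**m)
    codes = np.zeros(len(sym) - m + 1, dtype=int)
    for k in range(m):
        codes += sym[k:len(sym) - m + 1 + k] << (m - 1 - k)
    ent = []
    for start in range(0, len(codes) - win + 1, win):
        counts = np.bincount(codes[start:start + win], minlength=2 ** m)
        p = counts[counts > 0] / win
        ent.append(-(p * np.log2(p)).sum())
    return np.array(ent)

rng = np.random.default_rng(1)
noisy = rng.standard_normal(4096)            # irregular, "awake"-like
regular = np.sin(np.arange(4096) * 0.05)     # regular, deep-sleep-like
```

An irregular signal spreads probability over many patterns and so has higher mean entropy than a regular one, mirroring the finding that entropy is maximal in stage W and decreases as sleep deepens.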

  20. Fabrication of Microhotplates Based on Laser Micromachining of Zirconium Oxide

    NASA Astrophysics Data System (ADS)

    Oblov, Konstantin; Ivanova, Anastasia; Soloviev, Sergey; Samotaev, Nikolay; Lipilin, Alexandr; Vasiliev, Alexey; Sokolov, Andrey

    We present a novel approach to the fabrication of MEMS devices, which can be used for gas sensors operating in harsh environments in wireless and autonomous information systems. MEMS platforms based on ZrO2/Y2O3 (YSZ) are applied in these devices. Methods of fabricating ceramic MEMS devices by laser micromachining are considered. It is shown that the application of YSZ membranes permits a decrease in MEMS power consumption at 450 °C down to ∼75 mW in continuous heating mode and down to ∼1 mW in pulsed heating mode. The application of the platforms is not restricted to gas sensors: they can be used for fast thermometers, bolometric matrices, flowmeters and other MEMS devices working under harsh environmental conditions.

  1. Estimation of the magnetic entropy change by means of Landau theory and phenomenological model in La0.6Ca0.2Sr0.2MnO3/Sb2O3 ceramic composites

    NASA Astrophysics Data System (ADS)

    Nasri, M.; Dhahri, E.; Hlil, E. K.

    2018-06-01

    In this paper, the magnetocaloric properties of La0.6Ca0.2Sr0.2MnO3/Sb2O3 oxides have been investigated. The composite samples were prepared using the conventional solid-state reaction method. The second-order nature of the phase transition is confirmed by the positive slope of the Arrott plots. An excellent agreement has been found between the -ΔSM values estimated by Landau theory and those obtained using the classical Maxwell relation. Analysis of the field dependence of the magnetic entropy change shows a power-law dependence, |ΔSM| ∝ H^n, with n(TC) = 0.65. Moreover, the scaling analysis of the magnetic entropy change shows that the ΔSM(T) curves collapse onto a single universal curve, indicating that the observed paramagnetic-to-ferromagnetic phase transition is an authentic second-order phase transition. The maximum value of the magnetic entropy change of the composites is found to decrease slightly with increasing Sb2O3 concentration. A phenomenological model was used to predict the magnetocaloric properties of La0.6Ca0.2Sr0.2MnO3/Sb2O3 composites. The theoretical calculations are compared with the available experimental data.
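The power-law field dependence quoted in this record, |ΔSM| ∝ H^n, is typically extracted as the slope of a log-log fit. The sketch below uses synthetic data (the field grid and prefactor are illustrative, not measured values) to show the exponent n = 0.65 being recovered by linear regression in log space.

```python
import numpy as np

# Synthetic |ΔS_M| obeying |ΔS_M| = a * H**n with n = 0.65 (the exponent
# reported at T_C); the prefactor a = 2.0 and the field grid are
# illustrative values only.
H = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # applied field, arbitrary units
dS = 2.0 * H ** 0.65
# Slope of ln|ΔS_M| versus ln H is the exponent n
n_fit = np.polyfit(np.log(H), np.log(dS), 1)[0]
```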

  2. Steepest entropy ascent for two-state systems with slowly varying Hamiltonians

    NASA Astrophysics Data System (ADS)

    Militello, Benedetto

    2018-05-01

    The steepest entropy ascent approach is considered and applied to two-state systems. When the Hamiltonian of the system is time-dependent, the principle of maximum entropy production can still be exploited; arguments to support this fact are given. In the limit of slowly varying Hamiltonians, which allows for the adiabatic approximation for the unitary part of the dynamics, the system exhibits significant robustness to the thermalization process. Specific examples such as a spin in a rotating field and a generic two-state system undergoing an avoided crossing are considered.

  3. Entropy Inequality Violations from Ultraspinning Black Holes.

    PubMed

    Hennigar, Robie A; Mann, Robert B; Kubizňák, David

    2015-07-17

    We construct a new class of rotating anti-de Sitter (AdS) black hole solutions with noncompact event horizons of finite area in any dimension and study their thermodynamics. In four dimensions these black holes are solutions to gauged supergravity. We find that their entropy exceeds the maximum implied from the conjectured reverse isoperimetric inequality, which states that for a given thermodynamic volume, the black hole entropy is maximized for Schwarzschild-AdS space. We use this result to suggest more stringent conditions under which this conjecture may hold.

  4. The non-equilibrium statistical mechanics of a simple geophysical fluid dynamics model

    NASA Astrophysics Data System (ADS)

    Verkley, Wim; Severijns, Camiel

    2014-05-01

    Lorenz [1] has devised a dynamical system that has proved to be very useful as a benchmark system in geophysical fluid dynamics. The system in its simplest form consists of a periodic array of variables that can be associated with an atmospheric field on a latitude circle. The system is driven by a constant forcing, is damped by linear friction and has a simple advection term that causes the model to behave chaotically if the forcing is large enough. Our aim is to predict the statistics of Lorenz' model on the basis of a given average value of its total energy - obtained from a numerical integration - and the assumption of statistical stationarity. Our method is the principle of maximum entropy [2], which in this case reads: the information entropy of the system's probability density function shall be maximal under the constraints of normalization, a given value of the average total energy and statistical stationarity. Statistical stationarity is incorporated approximately by using `stationarity constraints', i.e., by requiring that the average first and possibly higher-order time-derivatives of the energy are zero in the maximization of entropy. The analysis [3] reveals that, if the first stationarity constraint is used, the resulting probability density function rather accurately reproduces the statistics of the individual variables. If the second stationarity constraint is used as well, the correlations between the variables are also reproduced quite adequately. The method can be generalized straightforwardly and holds the promise of a viable non-equilibrium statistical mechanics of the forced-dissipative systems of geophysical fluid dynamics. [1] E.N. Lorenz, 1996: Predictability - A problem partly solved, in Proc. Seminar on Predictability (ECMWF, Reading, Berkshire, UK), Vol. 1, pp. 1-18. [2] E.T. Jaynes, 2003: Probability Theory - The Logic of Science (Cambridge University Press, Cambridge). [3] W.T.M. Verkley and C.A. Severijns, 2014: The maximum entropy principle applied to a dynamical system proposed by Lorenz, Eur. Phys. J. B, 87:7, http://dx.doi.org/10.1140/epjb/e2013-40681-2 (open access).
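Entropy maximization under a mean-energy constraint, as used above, has the standard exponential-family solution p ∝ exp(-λE). The minimal discretized sketch below (not the Lorenz-model calculation itself; grid, target value and "energy" x² are illustrative assumptions) finds the multiplier λ by bisection so that the constrained expectation is met.

```python
import numpy as np

# Discretized maximum-entropy problem: maximize -sum p ln p subject to
# normalization and a prescribed mean "energy" <x^2> = E_target. The
# stationary solution is p_i ∝ exp(-lam * x_i**2); lam is found by
# bisection on the monotone map lam -> <x^2>.
x = np.linspace(-5.0, 5.0, 2001)

def mean_energy(lam):
    w = np.exp(-lam * x ** 2)
    p = w / w.sum()
    return (p * x ** 2).sum()

E_target = 0.5
lo, hi = 1e-6, 50.0            # brackets: <x^2> is large at lo, small at hi
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean_energy(mid) > E_target:
        lo = mid               # need stronger damping, increase lam
    else:
        hi = mid
lam = 0.5 * (lo + hi)
```

For a Gaussian weight exp(-λx²) the variance is 1/(2λ), so E_target = 0.5 should give λ close to 1, which the bisection recovers.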

  5. Method and system for automated on-chip material and structural certification of MEMS devices

    DOEpatents

    Sinclair, Michael B.; DeBoer, Maarten P.; Smith, Norman F.; Jensen, Brian D.; Miller, Samuel L.

    2003-05-20

    A new approach toward MEMS quality control and materials characterization is provided by a combined test structure measurement and mechanical response modeling approach. Simple test structures are cofabricated with the MEMS devices being produced. These test structures are designed to isolate certain types of physical response, so that measurement of their behavior under applied stress can be easily interpreted as quality control and material properties information.

  6. Application of the Moment Method in the Slip and Transition Regime for Microfluidic Flows

    DTIC Science & Technology

    2011-01-01

    systems (MEMS), fluid flow at the micro- and nano-scale has received considerable attention [1]. A basic understanding of the nature of flow and heat ... Couette Flow: Many MEMS devices contain oscillating parts where air (viscous) damping plays an important role. To understand the damping mechanisms ... transfer in these devices is considered essential for efficient design and control of MEMS. Engineering applications for gas microflows include

  7. A MEMS-based, wireless, biometric-like security system

    NASA Astrophysics Data System (ADS)

    Cross, Joshua D.; Schneiter, John L.; Leiby, Grant A.; McCarter, Steven; Smith, Jeremiah; Budka, Thomas P.

    2010-04-01

    We present a system for secure identification applications that is based upon biometric-like MEMS chips. The MEMS chips have unique frequency signatures resulting from fabrication process variations. The MEMS chips possess something analogous to a "voiceprint". The chips are vacuum encapsulated, rugged, and suitable for low-cost, high-volume mass production. Furthermore, the fabrication process is fully integrated with standard CMOS fabrication methods. One is able to operate the MEMS-based identification system similarly to a conventional RFID system: the reader (essentially a custom network analyzer) detects the power reflected across a frequency spectrum from a MEMS chip in its vicinity. We demonstrate prototype "tags" - MEMS chips placed on a credit card-like substrate - to show how the system could be used in standard identification or authentication applications. We have integrated power scavenging to provide DC bias for the MEMS chips through the use of a 915 MHz source in the reader and an RF-DC conversion circuit on the tag. The system enables a high level of protection against typical RFID hacking attacks. There is no need for signal encryption, so back-end infrastructure is minimal. We believe this system would make a viable low-cost, high-security system for a variety of identification and authentication applications.

  8. The Development of a Portable Hard Disk Encryption/Decryption System with a MEMS Coded Lock

    PubMed Central

    Zhang, Weiping; Chen, Wenyuan; Tang, Jian; Xu, Peng; Li, Yibin; Li, Shengyong

    2009-01-01

    In this paper, a novel portable hard-disk encryption/decryption system with a MEMS coded lock is presented, which can authenticate the user and provide the key for the AES encryption/decryption module. The portable hard-disk encryption/decryption system is composed of the authentication module, the USB portable hard-disk interface card, the ATA protocol command decoder module, the data encryption/decryption module, the cipher key management module, the MEMS coded lock controlling circuit module, the MEMS coded lock and the hard disk. The ATA protocol circuit, the MEMS control circuit and the AES encryption/decryption circuit are designed and realized by FPGA (Field Programmable Gate Array). The MEMS coded lock, with two couplers and two groups of counter-meshing gears (CMGs), is fabricated by a LIGA-like process and precision engineering methods. The whole prototype was fabricated and tested. The test results show that the user's password could be correctly discriminated by the MEMS coded lock, and the AES encryption module could get the key from the MEMS coded lock. Moreover, the data in the hard disk could be encrypted or decrypted, and the read-write speed of the dataflow could reach 17 MB/s in Ultra DMA mode. PMID:22291566

  9. MemBrain: An Easy-to-Use Online Webserver for Transmembrane Protein Structure Prediction

    NASA Astrophysics Data System (ADS)

    Yin, Xi; Yang, Jing; Xiao, Feng; Yang, Yang; Shen, Hong-Bin

    2018-03-01

    Membrane proteins are an important class of proteins embedded in the membranes of cells and play crucial roles in living organisms, such as ion channels, transporters, and receptors. Because it is difficult to determine membrane protein structures by wet-lab experiments, accurate and fast amino acid sequence-based computational methods are highly desired. In this paper, we report an online prediction tool called MemBrain, whose input is the amino acid sequence. MemBrain consists of specialized modules for predicting transmembrane helices, residue-residue contacts and relative accessible surface area of α-helical membrane proteins. MemBrain achieves a prediction accuracy of 97.9% (A_TMH) and 87.1% (A_P), with an N-score of 3.2 ± 3.0 and a C-score of 3.1 ± 2.8. MemBrain-Contact obtains 62% and 64.1% prediction accuracy on the training and independent datasets, respectively, for top-L/5 contact prediction. MemBrain-Rasa achieves a Pearson correlation coefficient of 0.733 with a mean absolute error of 13.593. These prediction results provide valuable hints for revealing the structure and function of membrane proteins. The MemBrain web server is free for academic use and available at www.csbio.sjtu.edu.cn/bioinf/MemBrain/.

  10. Toward the Application of the Maximum Entropy Production Principle to a Broader Range of Far From Equilibrium Dissipative Systems

    NASA Astrophysics Data System (ADS)

    Lineweaver, C. H.

    2005-12-01

    The principle of Maximum Entropy Production (MEP) is being usefully applied to a wide range of non-equilibrium processes, including flows in planetary atmospheres and the bioenergetics of photosynthesis. Our goal of applying the principle of maximum entropy production to an even wider range of Far From Equilibrium Dissipative Systems (FFEDS) depends on the reproducibility of the evolution of the system from macro-state A to macro-state B. In an attempt to apply the principle of MEP to astronomical and cosmological structures, we investigate the problematic relationship between gravity and entropy. In the context of open and non-equilibrium systems, we use a generalization of the Gibbs free energy to include the sources of free energy extracted by non-living FFEDS such as hurricanes and convection cells. Redox potential gradients and thermal and pressure gradients provide the free energy for a broad range of FFEDS, both living and non-living. However, these gradients have to be within certain ranges: if the gradients are too weak, FFEDS do not appear; if the gradients are too strong, FFEDS disappear. Living and non-living FFEDS often have different source gradients (redox potential gradients vs. thermal and pressure gradients), and when they share the same gradient, they exploit different ranges of it. In a preliminary attempt to distinguish living from non-living FFEDS, we investigate the parameter space of type of gradient and steepness of gradient.

  11. Solar Flare Physics

    NASA Technical Reports Server (NTRS)

    Schmahl, Edward J.; Kundu, Mukul R.

    1998-01-01

    We have continued our previous efforts in studies of Fourier imaging methods applied to hard X-ray flares. We have performed physical and theoretical analysis of rotating collimator grids submitted to GSFC (Goddard Space Flight Center) for the High Energy Solar Spectroscopic Imager (HESSI). We have produced simulation algorithms which are currently being used to test imaging software and hardware for HESSI. We have developed Maximum-Entropy, Maximum-Likelihood, and "CLEAN" methods for reconstructing HESSI images from count-rate profiles. This work is expected to continue through the launch of HESSI in July 2000. Section 1 shows a poster presentation "Image Reconstruction from HESSI Photon Lists" at the Solar Physics Division Meeting, June 1998; Section 2 shows the text and viewgraphs prepared for "Imaging Simulations" at HESSI's Preliminary Design Review on July 30, 1998.

  12. Nonlinear dynamic modeling of a V-shaped metal based thermally driven MEMS actuator for RF switches

    NASA Astrophysics Data System (ADS)

    Bakri-Kassem, Maher; Dhaouadi, Rached; Arabi, Mohamed; Estahbanati, Shahabeddin V.; Abdel-Rahman, Eihab

    2018-05-01

    In this paper, we propose a new dynamic model to describe the nonlinear characteristics of a V-shaped (chevron) metallic-based thermally driven MEMS actuator. We developed two models for the thermal actuator in two configurations. The first MEMS configuration has a small tip connected to the shuttle, while the second configuration has a folded spring and a wide beam attached to the shuttle. A detailed finite element model (FEM) and a lumped element model (LEM) are proposed for each configuration to completely characterize the electro-thermal and thermo-mechanical behaviors. The nonlinear resistivity of the polysilicon layer is extracted from the measured current-voltage (I-V) characteristics of the actuator and the corresponding simulated temperatures in the FEM model, knowing the resistivity of the polysilicon at room temperature from the manufacturer's handbook. Both developed models include the nonlinear temperature-dependent material properties. Numerical simulations, compared with experimental data obtained using a dedicated MEMS test apparatus, verify the accuracy of the proposed LEM in representing the complex dynamics of the thermal MEMS actuator. The LEM and FEM simulation results show errors ranging from a maximum of 13% down to a minimum of 1.4%. The actuator with the lower thermal load to air, which includes a folded spring (FS) and is also known as the high-surface-area actuator, is compared to the actuator without the FS, also known as the low-surface-area actuator, in terms of the I-V characteristics, power consumption, and experimental static and dynamic responses of the tip displacement.

  13. Design, simulation, fabrication, and characterization of MEMS vibration energy harvesters

    NASA Astrophysics Data System (ADS)

    Oxaal, John

    Energy harvesting from ambient sources has been a longtime goal for microsystem engineers. The energy available from ambient sources is substantial and could be used to power wireless micro devices, making them fully autonomous. Self-powered wireless sensors could have many applications in the autonomous monitoring of residential, commercial, industrial, geological, or biological environments. Ambient vibrations are of particular interest for energy harvesting as they are ubiquitous and carry ample kinetic energy. In this work, a MEMS device for vibration energy harvesting using a variable-capacitor structure is presented. The nonlinear electromechanical dynamics of a gap-closing type structure is experimentally studied. Important experimental considerations such as the importance of reducing off-axis vibration during testing, characterization methods, dust contamination, and the effect of grounding on parasitic capacitance are discussed. A comprehensive physics-based model is developed and validated with two different microfabricated devices. To achieve maximal power, devices with high-aspect-ratio electrodes and a novel two-level stopper system are designed and fabricated. The maximum power achieved from the MEMS device when driven by sinusoidal vibrations was 3.38 μW. Vibrations from HVAC air ducts, which have a primary frequency of 65 Hz and an amplitude of 155 mg (rms), are targeted as the vibration source, and devices are designed for maximal power harvesting at those conditions. Harvesting from the air ducts, the devices reached 118 nW of power. When normalized to the operating conditions, the best figure of merit of the devices tested was an order of magnitude above the state of the art (1.24E-6).

  14. Statistical theory on the analytical form of cloud particle size distributions

    NASA Astrophysics Data System (ADS)

    Wu, Wei; McFarquhar, Greg

    2017-11-01

    Several analytical forms of cloud particle size distributions (PSDs) have been used in numerical modeling and remote sensing retrieval studies of clouds and precipitation, including exponential, gamma, lognormal, and Weibull distributions. However, there is no satisfying physical explanation as to why certain distribution forms preferentially occur instead of others. Theoretically, the analytical form of a PSD can be derived by directly solving the general dynamic equation, but no analytical solutions have been found yet. Instead of using a process-level approach, the use of the principle of maximum entropy (MaxEnt) for determining the analytical form of PSDs from a system perspective is examined here. The issue of variability under coordinate transformations that arises when using the Gibbs/Shannon definition of entropy is identified, and the use of the concept of relative entropy to avoid these problems is discussed. Focusing on cloud physics, the four-parameter generalized gamma distribution is proposed as the analytical form of a PSD using the principle of maximum (relative) entropy, with assumptions of power-law relations between state variables, scale invariance and a further constraint on the expectation of one state variable (e.g. bulk water mass). DOE ASR.
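For reference, one common parameterization of the four-parameter generalized gamma PSD that maximum (relative) entropy yields under such power-law constraints is (notation assumed here, not necessarily the authors'):

```latex
% Generalized gamma particle size distribution; the four parameters are
% N_0 (intercept), \mu (shape), \lambda (slope) and b (power-law exponent).
% MaxEnt with constraints on E[\ln D] and E[D^b] gives this
% exponential-family form.
N(D) = N_0 \, D^{\mu} \exp\!\left( -\lambda D^{b} \right)
```

Setting b = 1 recovers the familiar gamma PSD, and μ = 0 with b = 1 the exponential PSD, which is why those special cases appear so often in the literature.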

  15. A homotopy algorithm for synthesizing robust controllers for flexible structures via the maximum entropy design equations

    NASA Technical Reports Server (NTRS)

    Collins, Emmanuel G., Jr.; Richter, Stephen

    1990-01-01

    One well-known deficiency of LQG compensators is that they do not guarantee any measure of robustness. This deficiency is especially highlighted when considering control design for complex systems such as flexible structures. There has thus been a need to generalize LQG theory to incorporate robustness constraints. Here we describe the maximum entropy approach to robust control design for flexible structures, a generalization of LQG theory pioneered by Hyland, which has proved useful in practice. The design equations consist of a set of coupled Riccati and Lyapunov equations. A homotopy algorithm that is used to solve these design equations is presented.

  16. Stimulus-dependent Maximum Entropy Models of Neural Population Codes

    PubMed Central

    Segev, Ronen; Schneidman, Elad

    2013-01-01

    Neural populations encode information about their stimulus in a collective fashion, by joint activity patterns of spiking and silence. A full account of this mapping from stimulus to neural activity is given by the conditional probability distribution over neural codewords given the sensory input. For large populations, direct sampling of these distributions is impossible, and so we must rely on constructing appropriate models. We show here that in a population of 100 retinal ganglion cells in the salamander retina responding to temporal white-noise stimuli, dependencies between cells play an important encoding role. We introduce the stimulus-dependent maximum entropy (SDME) model—a minimal extension of the canonical linear-nonlinear model of a single neuron, to a pairwise-coupled neural population. We find that the SDME model gives a more accurate account of single cell responses and in particular significantly outperforms uncoupled models in reproducing the distributions of population codewords emitted in response to a stimulus. We show how the SDME model, in conjunction with static maximum entropy models of population vocabulary, can be used to estimate information-theoretic quantities like average surprise and information transmission in a neural population. PMID:23516339

  17. Bivariate Rainfall and Runoff Analysis Using Shannon Entropy Theory

    NASA Astrophysics Data System (ADS)

    Rahimi, A.; Zhang, L.

    2012-12-01

    Rainfall-runoff analysis is a key component of many hydrological and hydraulic designs in which the dependence between rainfall and runoff needs to be studied. Conventional bivariate distributions are often unable to model rainfall-runoff variables because they either constrain the range of the dependence or fix the form of the marginal distributions. Thus, this paper presents an approach to derive an entropy-based joint rainfall-runoff distribution using Shannon entropy theory. The derived distribution can model the full range of dependence and allows different specified marginals. The modeling and estimation proceed as follows: (i) univariate analysis of the marginal distributions in two steps, (a) using a nonparametric statistical approach to detect modes and the underlying probability density, and (b) fitting appropriate parametric probability density functions; (ii) definition of the constraints based on the univariate analysis and the dependence structure; and (iii) derivation and validation of the entropy-based joint distribution. To validate the method, rainfall-runoff data were collected from the small agricultural experimental watersheds located in a semi-arid region near Riesel (Waco), Texas, maintained by the USDA. The results of the univariate analysis show that the rainfall variables follow the gamma distribution, whereas the runoff variables have a mixed structure and follow the mixed-gamma distribution. With this information, the entropy-based joint distribution is derived using the first moments, the first moments of the logarithm-transformed rainfall and runoff, and the covariance between rainfall and runoff.
    The results for the entropy-based joint distribution indicate that: (1) the derived joint distribution successfully preserves the dependence between rainfall and runoff; and (2) the K-S goodness-of-fit tests confirm that the re-derived marginal distributions reveal the underlying univariate probability densities, which further assures that the entropy-based joint rainfall-runoff distribution is satisfactorily derived. Overall, the study shows that Shannon entropy theory can be satisfactorily applied to model the dependence between rainfall and runoff, and that the entropy-based joint distribution is an appropriate approach for capturing dependence structures that cannot be captured by conventional bivariate joint distributions. [Figure: joint rainfall-runoff entropy-based PDF, with corresponding marginal PDFs and histograms, for the W12 watershed. Table: K-S test results and RMSE for univariate distributions derived from the maximum-entropy-based joint probability distribution.]
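
    A minimal sketch of the derivation step (not from the paper; the grid, features, and multiplier values are illustrative): on a discretized rainfall-runoff domain, the MaxEnt joint density matching constraints on E[x], E[y], and E[xy] has the exponential-family form p(x, y) ∝ exp(l1*x + l2*y + l3*x*y), and the Lagrange multipliers can be recovered numerically from the target moments:

```python
# MaxEnt joint density on a discretized (rainfall, runoff) grid with
# constraints E[x], E[y], E[x*y]: p ∝ exp(l1*x + l2*y + l3*x*y).
# The multipliers are recovered from target moments by root finding.
import numpy as np
from scipy.optimize import fsolve

x = np.linspace(0.0, 5.0, 80)
X, Y = np.meshgrid(x, x, indexing="ij")
feats = np.stack([X, Y, X * Y])  # one array per moment constraint

def moments(lams):
    v = np.tensordot(lams, feats, axes=1)
    w = np.exp(v - v.max())  # subtract max for stability; cancels in p
    p = w / w.sum()
    return (feats * p).sum(axis=(1, 2))

true_lams = np.array([-1.0, -0.5, 0.15])  # illustrative multipliers
target = moments(true_lams)               # pretend these came from data

est = fsolve(lambda l: moments(l) - target, x0=np.zeros(3))
print(np.round(est, 3))  # should recover ≈ [-1.0, -0.5, 0.15]
```

    The study's constraint set also includes log-moments, which is what produces gamma-type marginals; the same dual-solving pattern applies with extra features such as log x and log y.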

  18. High Order Entropy-Constrained Residual VQ for Lossless Compression of Images

    NASA Technical Reports Server (NTRS)

    Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen

    1995-01-01

    High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
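
    The distortion measure mentioned above, the first-order entropy of the residual image, can be sketched in a few lines (the toy residual below is illustrative, not the paper's coder):

```python
# First-order (marginal) entropy of a residual image in bits per symbol,
# i.e. the rate term an entropy-constrained quantizer trades off against
# distortion. The 2x4 "residual" is a toy example.
import numpy as np

def first_order_entropy(img):
    _, counts = np.unique(img, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

residual = np.array([[0, 1, 2, 3], [0, 1, 2, 3]])
print(first_order_entropy(residual))  # 4 equiprobable symbols -> 2.0 bits
```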

  19. High-Resolution Measurement of the Turbulent Frequency-Wavenumber Power Spectrum in a Laboratory Magnetosphere

    NASA Astrophysics Data System (ADS)

    Qian, T. M.; Mauel, M. E.

    2017-10-01

    In a laboratory magnetosphere, plasma is confined by a strong dipole magnet, and interchange and entropy-mode turbulence can be studied and controlled in near steady-state conditions. Whole-plasma imaging shows turbulence dominated by long-wavelength modes having chaotic amplitudes and phases. Here we report, for the first time, a high-resolution measurement of the frequency-wavenumber power spectrum obtained by applying Capon's method to simultaneous multi-point measurements of electrostatic entropy modes using an array of floating-potential probes. Unlike previously reported measurements, in which ensemble correlation between two probes detected only the dominant wavenumber, Capon's ``maximum likelihood method'' uses all available probes to produce a frequency-wavenumber spectrum, showing the existence of modes propagating in both the electron and ion magnetic drift directions. We also discuss the wider application of this technique to laboratory and magnetospheric plasmas with simultaneous multi-point measurements. Supported by NSF-DOE Partnership in Plasma Science Grant DE-FG02-00ER54585.
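
    Capon's estimator itself is compact: P(k) = 1 / (a(k)^H R^{-1} a(k)), with a(k) the steering vector and R the probe-signal covariance. A minimal simulation sketch follows (array geometry, wavenumber, noise level, and diagonal loading are illustrative assumptions, not the experiment's parameters):

```python
# Capon spectrum for a line array: P(k) = 1 / (a(k)^H R^{-1} a(k)).
# One simulated plane wave (k_true) plus white noise; geometry and SNR
# are illustrative.
import numpy as np

rng = np.random.default_rng(0)
pos = np.arange(8)   # 8 probes, unit spacing
k_true = 1.2         # wavenumber, rad per unit spacing
snaps = 200

a_true = np.exp(1j * k_true * pos)
s = rng.standard_normal(snaps) + 1j * rng.standard_normal(snaps)
noise = rng.standard_normal((8, snaps)) + 1j * rng.standard_normal((8, snaps))
Xd = np.outer(a_true, s) + 0.3 * noise

R = Xd @ Xd.conj().T / snaps
Rinv = np.linalg.inv(R + 1e-3 * np.eye(8))  # diagonal loading for stability

ks = np.linspace(-np.pi, np.pi, 721)
A = np.exp(1j * np.outer(pos, ks))          # steering vectors, one per k
P = 1.0 / np.einsum("ik,ij,jk->k", A.conj(), Rinv, A).real

print(ks[np.argmax(P)])  # peak lands near k_true = 1.2
```

    Unlike a two-probe cross-correlation, the quadratic form in R^{-1} uses every probe pair at once, which is what resolves counter-propagating modes.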

  20. A feasibility study on embedded micro-electromechanical sensors and systems (MEMS) for monitoring highway structures.

    DOT National Transportation Integrated Search

    2011-06-01

    Micro-electromechanical systems (MEMS) provide vast improvements over existing sensing methods in the context of structural health monitoring (SHM) of highway infrastructure systems, including improved system reliability, improved longevity and enhan...

  1. Entropy in self-similar shock profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Margolin, Len G.; Reisner, Jon Michael; Jordan, Pedro M.

    In this paper, we study the structure of a gaseous shock, and in particular the distribution of entropy within, in both a thermodynamics and a statistical mechanics context. The problem of shock structure has a long and distinguished history that we review. We employ the Navier–Stokes equations to construct a self-similar version of Becker's solution for a shock assuming a particular (physically plausible) Prandtl number; that solution reproduces the well-known result of Morduchow & Libby that features a maximum of the equilibrium entropy inside the shock profile. We then construct an entropy profile, based on gas kinetic theory, that is smooth and monotonically increasing. The extension of equilibrium thermodynamics to irreversible processes is based in part on the assumption of local thermodynamic equilibrium. We show that this assumption is not valid except for the weakest shocks. Finally, we conclude by hypothesizing a thermodynamic nonequilibrium entropy and demonstrating that it closely estimates the gas kinetic nonequilibrium entropy within a shock.

  2. Entropy in self-similar shock profiles

    DOE PAGES

    Margolin, Len G.; Reisner, Jon Michael; Jordan, Pedro M.

    2017-07-16

    In this paper, we study the structure of a gaseous shock, and in particular the distribution of entropy within, in both a thermodynamics and a statistical mechanics context. The problem of shock structure has a long and distinguished history that we review. We employ the Navier–Stokes equations to construct a self-similar version of Becker's solution for a shock assuming a particular (physically plausible) Prandtl number; that solution reproduces the well-known result of Morduchow & Libby that features a maximum of the equilibrium entropy inside the shock profile. We then construct an entropy profile, based on gas kinetic theory, that is smooth and monotonically increasing. The extension of equilibrium thermodynamics to irreversible processes is based in part on the assumption of local thermodynamic equilibrium. We show that this assumption is not valid except for the weakest shocks. Finally, we conclude by hypothesizing a thermodynamic nonequilibrium entropy and demonstrating that it closely estimates the gas kinetic nonequilibrium entropy within a shock.

  3. Finding the quantum thermoelectric with maximal efficiency and minimal entropy production at given power output

    NASA Astrophysics Data System (ADS)

    Whitney, Robert S.

    2015-03-01

    We investigate the nonlinear scattering theory for quantum systems with strong Seebeck and Peltier effects, and consider their use as heat engines and refrigerators with finite power outputs. This paper gives detailed derivations of the results summarized in a previous paper [R. S. Whitney, Phys. Rev. Lett. 112, 130601 (2014), 10.1103/PhysRevLett.112.130601]. It shows how to use the scattering theory to find (i) the quantum thermoelectric with maximum possible power output, and (ii) the quantum thermoelectric with maximum efficiency at given power output. The latter corresponds to minimal entropy production at that power output. These quantities are of quantum origin since they depend on system size over electronic wavelength, and so have no analog in classical thermodynamics. The maximal efficiency coincides with Carnot efficiency at zero power output, but decreases with increasing power output. This gives a fundamental lower bound on entropy production, which means that reversibility (in the thermodynamic sense) is impossible at finite power output. The suppression of efficiency by (nonlinear) phonon and photon effects is addressed in detail; when these effects are strong, maximum efficiency coincides with maximum power. Finally, we show in particular limits (typically without magnetic fields) that relaxation within the quantum system does not allow the system to exceed the bounds derived for relaxation-free systems; however, a general proof of this remains elusive.

  4. Identification of Watershed-scale Critical Source Areas Using Bayesian Maximum Entropy Spatiotemporal Analysis

    NASA Astrophysics Data System (ADS)

    Roostaee, M.; Deng, Z.

    2017-12-01

    State environmental agencies are required by the Clean Water Act to assess all waterbodies and evaluate potential sources of impairment. The spatial and temporal distributions of water quality parameters are critical in identifying Critical Source Areas (CSAs). However, due to limited monetary resources and the large number of waterbodies, available monitoring stations are typically sparse, with intermittent periods of data collection. Hence, the scarcity of water quality data is a major obstacle in addressing sources of pollution through management strategies. In this study, the spatiotemporal Bayesian Maximum Entropy (BME) method is employed to model the inherent temporal and spatial variability of measured water quality indicators, such as dissolved oxygen (DO) concentration, for the Turkey Creek Watershed. Turkey Creek is located in northern Louisiana and has been listed for DO impairment on the 303(d) list in Louisiana Water Quality Inventory Reports since 2014, due to agricultural practices. The BME method has been shown to provide more accurate estimates than purely spatial analysis methods by incorporating the space/time distribution and the uncertainty in available measured soft and hard data. This model is used to estimate DO concentration at unmonitored locations and times and subsequently to identify CSAs. The USDA's crop-specific land cover data layers for the watershed were then used to determine the practices and changes that led to low DO concentrations in the identified CSAs. Preliminary results revealed that the cultivation of corn and soybean, as well as urban runoff, are the main sources contributing to low dissolved oxygen in the Turkey Creek Watershed.

  5. Ecosystem functioning and maximum entropy production: a quantitative test of hypotheses.

    PubMed

    Meysman, Filip J R; Bruers, Stijn

    2010-05-12

    The idea that entropy production puts a constraint on ecosystem functioning is quite popular in ecological thermodynamics. Yet, until now, such claims have received little quantitative verification. Here, we examine three 'entropy production' hypotheses that have been put forward in the past. The first states that increased entropy production serves as a fingerprint of living systems. The other two hypotheses invoke stronger constraints. The state selection hypothesis states that when a system can attain multiple steady states, the stable state will show the highest entropy production rate. The gradient response principle requires that when the thermodynamic gradient increases, the system's new stable state should always be accompanied by a higher entropy production rate. We test these three hypotheses by applying them to a set of conventional food web models. Each time, we calculate the entropy production rate associated with the stable state of the ecosystem. This analysis shows that the first hypothesis holds for all the food webs tested: the living state always shows increased entropy production over the abiotic state. In contrast, the state selection and gradient response hypotheses break down when the food web incorporates more than one trophic level, indicating that they are not generally valid.

  6. Towards operational interpretations of generalized entropies

    NASA Astrophysics Data System (ADS)

    Topsøe, Flemming

    2010-12-01

    The driving force behind our study has been to overcome the difficulties you encounter when you try to extend the clear and convincing operational interpretations of classical Boltzmann-Gibbs-Shannon entropy to other notions, especially to generalized entropies as proposed by Tsallis. Our approach is philosophical, based on speculations regarding the interplay between truth, belief and knowledge. The main result demonstrates that, accepting philosophically motivated assumptions, the only possible measures of entropy are those suggested by Tsallis - which, as we know, include classical entropy. This result constitutes, so it seems, a more transparent interpretation of entropy than previously available. However, further research to clarify the assumptions is still needed. Our study points to the thesis that one should never consider the notion of entropy in isolation - in order to enable a rich and technically smooth study, further concepts, such as divergence, score functions and descriptors or controls should be included in the discussion. This will clarify the distinction between Nature and Observer and facilitate a game theoretical discussion. The usefulness of this distinction and the subsequent exploitation of game theoretical results - such as those connected with the notion of Nash equilibrium - is demonstrated by a discussion of the Maximum Entropy Principle.

  7. Diffraction leveraged modulation of X-ray pulses using MEMS-based X-ray optics

    DOEpatents

    Lopez, Daniel; Shenoy, Gopal; Wang, Jin; Walko, Donald A.; Jung, Il-Woong; Mukhopadhyay, Deepkishore

    2016-08-09

    A method and apparatus are provided for implementing Bragg-diffraction leveraged modulation of X-ray pulses using MicroElectroMechanical systems (MEMS) based diffractive optics. An oscillating crystalline MEMS device generates a controllable time-window for diffraction of the incident X-ray radiation. The Bragg-diffraction leveraged modulation of X-ray pulses includes isolating a particular pulse, spatially separating individual pulses, and spreading a single pulse from an X-ray pulse-train.

  8. Magnetic field dependence of Griffith phase and magnetocaloric effect in Ca0.85Dy0.15MnO3

    NASA Astrophysics Data System (ADS)

    Nag, Ripan; Sarkar, Bidyut; Pal, Sudipta

    2018-03-01

    Temperature- and magnetic-field-dependent magnetization properties of the electron-doped polycrystalline sample Ca0.85Dy0.15MnO3 (CDMO), prepared by the solid-state reaction method, have been studied. The sample undergoes a ferromagnetic-to-paramagnetic phase transition at about 111 K. From the study of the magnetic properties in terms of Arrott plots, the phase transition is observed to be of second order. The Griffiths phase behavior of the sample is suppressed as the applied magnetic field strength H increases. We have estimated the magnetic entropy change from experimental magnetization and temperature data. For a magnetic field change of 8000 Oe, the maximum magnetic entropy change reaches 1.126 J kg-1 K-1 in this magnetocaloric material.

  9. Deformation analysis of MEMS structures by modified digital moiré methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhanwei; Lou, Xinhao; Gao, Jianxin

    2010-11-01

    Quantitative deformation analysis of micro-fabricated electromechanical systems is important for the design and functional control of microsystems. In this paper, two modified digital moiré processing methods, a Gaussian blurring algorithm combined with digital phase shifting, and a geometrical phase analysis (GPA) technique based on the digital moiré method, are developed to quantitatively analyse the deformation behaviour of micro-electro-mechanical system (MEMS) structures. The measuring principles and experimental procedures of the two methods are described in detail. A digital moiré fringe pattern is generated by superimposing a specimen grating, etched directly on the microstructure surface, with a digital reference grating (DRG). Most of the grating noise is removed from the digital moiré fringes, which enables the phase distribution of the moiré fringes to be obtained directly. Strain measurement results for a MEMS structure demonstrate the feasibility of the two methods.

  10. Quantile-based Bayesian maximum entropy approach for spatiotemporal modeling of ambient air quality levels.

    PubMed

    Yu, Hwa-Lung; Wang, Chih-Hsin

    2013-02-05

    Understanding the daily changes in ambient air quality concentrations is important for assessing human exposure and environmental health. However, the fine temporal scales (e.g., hourly) involved in this assessment often lead to high variability in air quality concentrations, because of the complex short-term physical and chemical mechanisms among the pollutants. Consequently, high heterogeneity is usually present not only in the averaged pollution levels but also in the intraday variance levels of the daily observations of ambient concentration across space and time. This characteristic degrades the estimation performance of common techniques. This study proposes a novel quantile-based Bayesian maximum entropy (QBME) method to account for the nonstationary and nonhomogeneous characteristics of ambient air pollution dynamics. The QBME method characterizes the spatiotemporal dependence among the ambient air quality levels based on their location-specific quantiles and accounts for spatiotemporal variations using a local weighted smoothing technique. The epistemic framework of the QBME method allows researchers to further consider the uncertainty of space-time observations. This study presents the spatiotemporal modeling of daily CO and PM10 concentrations across Taiwan from 1998 to 2009 using the QBME method. Results show that the QBME method can effectively improve estimation accuracy in terms of lower mean absolute errors and standard deviations over space and time, especially for pollutants with strongly nonhomogeneous variances across space. In addition, the epistemic framework allows researchers to assimilate site-specific secondary information where observations are absent because of the common preferential sampling issues of environmental data. The proposed QBME method provides a practical and powerful framework for the spatiotemporal modeling of ambient pollutants.

  11. MEMS tunable optical filter based on multi-ring resonator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dessalegn, Hailu, E-mail: hailudessalegn@yahoo.com, E-mail: tsrinu@ece.iisc.ernet.in; Srinivas, T., E-mail: hailudessalegn@yahoo.com, E-mail: tsrinu@ece.iisc.ernet.in

    We propose a novel MEMS tunable optical filter with a flat-top passband based on a multi-ring resonator in an electrostatically actuated microcantilever for communication applications. The filter is structured on a microcantilever beam with a built-in optical integrated ring resonator placed at one end of the beam to gain maximum stress on the resonator. Thus, when a DC voltage is applied, the beam bends, inducing stress and strain in the ring; this changes the refractive index and perimeter of the rings, shifting the output spectrum and providing tunability as high as 0.68 nm/μN, with a tuning range of up to 1.7 nm.

  12. Entropy of international trades

    NASA Astrophysics Data System (ADS)

    Oh, Chang-Young; Lee, D.-S.

    2017-05-01

    The organization of international trade is highly complex, shaped by the collective efforts of participating countries toward economic profit given inhomogeneous resources for production. Considering the trade flux as the probability of exporting a product from one country to another, we evaluate the entropy of world trade in the period 1950-2000. The trade entropy has increased with time, and we show that this is mainly due to the extension of trade partnerships. For a given number of trade partners, the mean trade entropy is about 60% of the maximum possible entropy, independent of time, which can be regarded as a characteristic of the heterogeneity of the trade fluxes and is shown to derive from the scaling and functional behaviors of the universal trade-flux distribution. The correlation and time evolution of individual countries' gross domestic products and numbers of trade partners show that most countries achieved their economic growth partly by extending their trade relationships.
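
    The trade-entropy measure can be sketched directly (the flux values below are invented for illustration): normalize a country's export fluxes to probabilities, then compare their Shannon entropy to the maximum ln N attained when trade is spread evenly over the same N partners:

```python
# Trade entropy: treat one country's normalized export fluxes as
# probabilities; H / ln(N) measures how evenly trade is spread over its
# N partners (1 = perfectly homogeneous). Flux values are invented.
import numpy as np

flux = np.array([120.0, 80.0, 40.0, 10.0, 5.0])  # exports to 5 partners
p = flux / flux.sum()
H = -np.sum(p * np.log(p))
H_max = np.log(len(flux))  # uniform fluxes over the same partners

print(H / H_max)  # ≈ 0.75 for this flux pattern
```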

  13. Jarzynski equality in the context of maximum path entropy

    NASA Astrophysics Data System (ADS)

    González, Diego; Davis, Sergio

    2017-06-01

    In the global framework of finding an axiomatic derivation of nonequilibrium statistical mechanics from fundamental principles, such as the principle of maximum path entropy (also known as the Maximum Caliber principle), this work proposes an alternative derivation of the well-known Jarzynski equality, a nonequilibrium identity of great importance today due to its applications to irreversible processes: biological systems (protein folding), mechanical systems, and others. This equality relates the free energy difference between two equilibrium thermodynamic states to the work performed when going between those states, through an average over a path ensemble. In this work the analysis of Jarzynski's equality is performed using the formalism of inference over path space. This derivation highlights the wide generality of Jarzynski's original result, which could even be used in non-thermodynamical settings such as social, financial, and ecological systems.
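
    The equality itself, <exp(-beta*W)> = exp(-beta*dF), can be checked numerically in the exactly solvable Gaussian case, where <W> = dF + beta*sigma^2/2 (the parameters are illustrative; this is a consistency check, not part of the paper's path-space derivation):

```python
# Jarzynski check in the exactly solvable Gaussian case: if work samples
# satisfy W ~ N(dF + beta*sigma**2/2, sigma**2), then <exp(-beta*W)>
# equals exp(-beta*dF). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
beta, dF, sigma = 1.0, 2.0, 0.8
W = rng.normal(dF + beta * sigma**2 / 2, sigma, size=500_000)

lhs = np.mean(np.exp(-beta * W))
print(lhs, np.exp(-beta * dF))  # both ≈ 0.135
```

    Note that the average work exceeds dF by the dissipated work beta*sigma^2/2, yet the exponential average still recovers the equilibrium free energy difference exactly.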

  14. A thickness-mode piezoelectric micromachined ultrasound transducer annular array using a PMN–PZT single crystal

    NASA Astrophysics Data System (ADS)

    Kang, Woojin; Jung, Joontaek; Lee, Wonjun; Ryu, Jungho; Choi, Hongsoo

    2018-07-01

    Micro-electromechanical system (MEMS) technologies were used to develop a thickness-mode piezoelectric micromachined ultrasonic transducer (Tm-pMUT) annular array utilizing a lead magnesium niobate–lead zirconate titanate (PMN–PZT) single crystal prepared by the solid-state single-crystal-growth method. Dicing is the conventional processing method for PMN–PZT single crystals, but MEMS technology can be adopted for the development of Tm-pMUT annular arrays and has various advantages, including fabrication reliability, repeatability, and a curved element shape. An inductively coupled plasma reactive-ion etching process was used to etch the brittle PMN–PZT single crystal selectively. Using this process, eight ring-shaped elements were realized in an area of 1 × 1 cm2. The resonance frequency, anti-resonance frequency, and effective electromechanical coupling coefficient of the Tm-pMUT annular array in air were 2.66 (±0.04) MHz, 3.18 (±0.03) MHz, and 30.05%, respectively. The maximum positive acoustic pressure in water, measured at a distance of 7.27 mm, was 40 kPa for the Tm-pMUT annular array driven by a 10 Vpp sine wave at 2.66 MHz without beamforming. The proposed Tm-pMUT annular array using a PMN–PZT single crystal has potential for various applications, such as fingerprint sensing, ultrasonic cell stimulation, and low-intensity tissue stimulation.

  15. Temperature variation effects on stochastic characteristics for low-cost MEMS-based inertial sensor error

    NASA Astrophysics Data System (ADS)

    El-Diasty, M.; El-Rabbany, A.; Pagiatakis, S.

    2007-11-01

    We examine the effect of varying temperature on MEMS inertial sensors' noise models using Allan variance and least-squares spectral analysis (LSSA). Allan variance is a method of representing root-mean-square random-drift error as a function of averaging time. LSSA is an alternative to classical Fourier methods and has been applied successfully by a number of researchers to study the noise characteristics of experimental series. Static data sets are collected at different temperature points using two MEMS-based IMUs, namely the MotionPakII and the Crossbow AHRS300CC. The performance of the two MEMS inertial sensors is predicted from the Allan variance estimation results at different temperature points, and the LSSA is used to study the noise characteristics and define the sensors' stochastic model parameters. It is shown that the stochastic characteristics of MEMS-based inertial sensors can be identified using Allan variance estimation and LSSA, and that the sensors' stochastic model parameters are temperature dependent. A Kaiser-window FIR low-pass filter is also used to investigate the effect of the de-noising stage on the stochastic model, which is shown to depend on the chosen cut-off frequency as well.
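
    A minimal sketch of the non-overlapping Allan-variance estimator used in such analyses (simulated white noise stands in for static gyro data; the cluster sizes are illustrative):

```python
# Non-overlapping Allan variance: average the signal in clusters of m
# samples, then take half the mean squared difference of successive
# cluster means. White noise stands in for static MEMS sensor output.
import numpy as np

def allan_variance(y, m):
    means = y[: len(y) // m * m].reshape(-1, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
y = rng.standard_normal(200_000)  # simulated white-noise sensor output

av1, av100 = allan_variance(y, 1), allan_variance(y, 100)
print(av1 / av100)  # ≈ 100: white-noise Allan variance scales as 1/m
```

    On a log-log plot of Allan deviation versus averaging time, white noise gives a slope of -1/2; other slopes (0 for bias instability, +1/2 for random walk) identify the other stochastic error terms.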

  16. High resolution schemes and the entropy condition

    NASA Technical Reports Server (NTRS)

    Osher, S.; Chakravarthy, S.

    1983-01-01

    A systematic procedure for constructing semidiscrete, second-order accurate, variation-diminishing, five-point bandwidth approximations to scalar conservation laws is presented. These schemes are constructed to also satisfy a single discrete entropy inequality. Thus, in the convex flux case, they are proven to converge to the unique physically correct solution. For hyperbolic systems of conservation laws, this construction is used formally to extend the first author's first-order accurate scheme, and it is shown (under some minor technical hypotheses) that limit solutions satisfy an entropy inequality. Results concerning discrete shocks, a maximum principle, and maximal order of accuracy are obtained. Numerical applications are also presented.

  17. Automatically quantifying the scientific quality and sensationalism of news records mentioning pandemics: validating a maximum entropy machine-learning model.

    PubMed

    Hoffman, Steven J; Justicz, Victoria

    2016-07-01

    To develop and validate a method for automatically quantifying the scientific quality and sensationalism of individual news records. After retrieving 163,433 news records mentioning the Severe Acute Respiratory Syndrome (SARS) and H1N1 pandemics, a maximum entropy model for inductive machine learning was used to identify relationships among 500 randomly sampled news records that correlated with systematic human assessments of their scientific quality and sensationalism. These relationships were then computationally applied to automatically classify 10,000 additional randomly sampled news records. The model was validated by randomly sampling 200 records and comparing human assessments of them with the computer assessments. The computer model correctly assessed the relevance of 86% of news records, the quality of 65%, and the sensationalism of 73%, as compared with human assessments. Overall, the scientific quality of SARS and H1N1 news media coverage had potentially important shortcomings, but coverage was not overly sensationalized. Coverage improved slightly between the two pandemics. Automated methods can evaluate news records faster, more cheaply, and possibly better than humans. The specific procedure implemented in this study can at the very least identify subsets of news records that are far more likely to have particular scientific and discursive qualities.
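
    A maximum entropy (logistic) classifier of this general kind can be sketched with a tiny binary example (the features, labels, and training settings below are invented for illustration, not the study's model or corpus):

```python
# Binary maximum entropy (logistic) classifier fit by batch gradient
# ascent on the log-likelihood. Features/labels are an invented toy set.
import numpy as np

Xf = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1],
               [0, 0, 1], [1, 1, 1], [0, 1, 0]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0], dtype=float)  # "human" labels

w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Xf @ w))   # model probabilities
    w += 0.5 * Xf.T @ (y - p) / len(y)  # gradient of mean log-likelihood

pred = (1.0 / (1.0 + np.exp(-Xf @ w)) > 0.5).astype(float)
print((pred == y).mean())  # training accuracy on the separable toy set
```

    The maximum entropy framing and the logistic form are dual views of the same model: the distribution with highest entropy subject to matching the empirical feature expectations is exactly this exponential-family classifier.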

  18. A maximum entropy principle for inferring the distribution of 3D plasmoids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lingam, Manasvi; Comisso, Luca

    The principle of maximum entropy, a powerful and general method for inferring the distribution function given a set of constraints, is applied to deduce the overall distribution of 3D plasmoids (flux ropes/tubes) for systems where resistive MHD is applicable and large numbers of plasmoids are produced. The analysis is undertaken for the 3D case, with mass, total flux, and velocity serving as the variables of interest, on account of their physical and observational relevance. The distribution functions for the mass, width, total flux, and helicity exhibit a power-law behavior with exponents of -4/3, -2, -3, and -2, respectively, for small values, whilst all of them display an exponential falloff for large values. In contrast, the velocity distribution, as a function of v=|v|, is shown to be flat for v→0, and becomes a power law with an exponent of -7/3 for v→∞. Most of these results are nearly independent of the free parameters involved in this specific problem. In conclusion, a preliminary comparison of our results with the observational evidence is presented, and some of the ensuing space and astrophysical implications are briefly discussed.

  19. The DOSY experiment provides insights into the protegrin-lipid interaction

    NASA Astrophysics Data System (ADS)

    Malliavin, T. E.; Louis, V.; Delsuc, M. A.

    1998-02-01

    The measurement of translational diffusion using PFG NMR has seen renewed interest with the development of DOSY experiments. Extracting diffusion coefficients from these experiments requires an inverse Laplace transform. We present here the use of the Maximum Entropy technique to perform this transform, and an application of this method to investigate the protegrin-lipid interaction. We show that analysis by DOSY experiments makes it possible to determine some of the features of the interaction.

  20. Spatiotemporal modeling of PM2.5 concentrations at the national scale combining land use regression and Bayesian maximum entropy in China.

    PubMed

    Chen, Li; Gao, Shuang; Zhang, Hui; Sun, Yanling; Ma, Zhenxing; Vedal, Sverre; Mao, Jian; Bai, Zhipeng

    2018-05-03

Concentrations of particulate matter with aerodynamic diameter <2.5 μm (PM2.5) are relatively high in China. Estimation of PM2.5 exposure is complex because PM2.5 exhibits complex spatiotemporal patterns. To improve the validity of exposure predictions, several methods have been developed and applied worldwide. A hybrid approach combining a land use regression (LUR) model with Bayesian Maximum Entropy (BME) interpolation of the LUR space-time residuals was developed to estimate PM2.5 concentrations on a national scale in China. This hybrid model could potentially provide more valid predictions than a commonly-used LUR model. The LUR/BME model had good performance characteristics, with R2 = 0.82 and a root mean square error (RMSE) of 4.6 μg/m3. Prediction errors of the LUR/BME model were reduced by incorporating soft data accounting for data uncertainty, with the R2 increasing by 6%. The performance of LUR/BME is also better than that of ordinary kriging combined with BME (OK/BME). The LUR/BME model is the most accurate fine-spatial-scale PM2.5 model developed to date for China.
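The two-step structure of such a hybrid model (deterministic land-use regression plus geostatistical interpolation of its residuals) can be sketched on synthetic data. BME itself requires a space-time covariance model and soft-data machinery; as a simplified stand-in, the residual step below uses inverse-distance weighting. All station data, covariate names, and coefficients here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical monitoring stations: coordinates and two land-use covariates.
n = 120
xy = rng.uniform(0, 10, size=(n, 2))
road_density = rng.uniform(0, 1, n)
green_space = rng.uniform(0, 1, n)

# Synthetic "truth": a linear land-use signal plus a smooth spatial field
# (the part a pure LUR model cannot capture).
spatial = np.sin(xy[:, 0]) + np.cos(xy[:, 1])
pm25 = 30 + 12 * road_density - 8 * green_space + 5 * spatial

train, test = np.arange(90), np.arange(90, n)

# Step 1: land use regression (LUR) fitted by ordinary least squares.
X = np.column_stack([np.ones(n), road_density, green_space])
beta, *_ = np.linalg.lstsq(X[train], pm25[train], rcond=None)
lur_pred = X @ beta

# Step 2: interpolate the LUR residuals at unmonitored sites.
resid = pm25[train] - lur_pred[train]

def idw(points, values, targets, power=2.0, eps=1e-9):
    """Inverse-distance-weighted interpolation (simplified stand-in for BME)."""
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * values).sum(axis=1) / w.sum(axis=1)

hybrid_pred = lur_pred[test] + idw(xy[train], resid, xy[test])

rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
rmse_lur = rmse(pm25[test], lur_pred[test])
rmse_hyb = rmse(pm25[test], hybrid_pred)
print(f"LUR RMSE: {rmse_lur:.2f}  hybrid RMSE: {rmse_hyb:.2f}")
```

On data whose residuals are spatially smooth, the hybrid predictor should recover most of what the regression alone misses, mirroring the error reduction reported above.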

  1. A maximum entropy principle for inferring the distribution of 3D plasmoids

    DOE PAGES

    Lingam, Manasvi; Comisso, Luca

    2018-01-18

The principle of maximum entropy, a powerful and general method for inferring the distribution function given a set of constraints, is applied to deduce the overall distribution of 3D plasmoids (flux ropes/tubes) for systems where resistive MHD is applicable and large numbers of plasmoids are produced. The analysis is undertaken for the 3D case, with mass, total flux, and velocity serving as the variables of interest, on account of their physical and observational relevance. The distribution functions for the mass, width, total flux, and helicity exhibit a power-law behavior with exponents of -4/3, -2, -3, and -2, respectively, for small values, whilst all of them display an exponential falloff for large values. In contrast, the velocity distribution, as a function of v=|v|, is shown to be flat for v→0, and becomes a power law with an exponent of -7/3 for v→∞. Most of these results are nearly independent of the free parameters involved in this specific problem. In conclusion, a preliminary comparison of our results with the observational evidence is presented, and some of the ensuing space and astrophysical implications are briefly discussed.
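The variational structure behind a calculation of this kind can be stated compactly. The following is a generic sketch (the symbols F and C_i are generic constraint notation, not the paper's):

```latex
\max_{F}\; S[F] = -\int F(\Gamma)\,\ln F(\Gamma)\,\mathrm{d}\Gamma
\quad\text{subject to}\quad
\int F\,\mathrm{d}\Gamma = 1,\qquad
\int C_i(\Gamma)\,F\,\mathrm{d}\Gamma = \langle C_i\rangle .
% Introducing a Lagrange multiplier \lambda_i per constraint yields the
% exponential family
F(\Gamma) \;\propto\; \exp\!\Big(-\sum_i \lambda_i\, C_i(\Gamma)\Big),
% whose power-law or exponential behavior in each variable follows from
% the functional form of the constraints C_i.
```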

  2. MEMS for Tunable Photonic Metamaterial Applications

    NASA Astrophysics Data System (ADS)

    Stark, Thomas

Photonic metamaterials are materials whose optical properties are derived from artificially-structured sub-wavelength unit cells, rather than from the bulk properties of the constituent materials. Examples of metamaterials include plasmonic materials, negative index materials, and electromagnetic cloaks. While advances in simulation tools and nanofabrication methods have allowed this field to grow over the past several decades, many challenges still exist. This thesis addresses two of these challenges: fabrication of photonic metamaterials with tunable responses and high-throughput nanofabrication methods for these materials. The design, fabrication, and optical characterization of a microelectromechanical systems (MEMS) tunable plasmonic spectrometer are presented. An array of holes in a gold film, with plasmon resonance in the mid-infrared, is suspended above a gold reflector, forming a Fabry-Perot interferometer of tunable length. The spectra exhibit the convolution of extraordinary optical transmission through the holes and Fabry-Perot resonances. Using MEMS, the interferometer length is modulated from 1.7 μm to 21.67 μm, thereby tuning the free spectral range from about 2900 cm⁻¹ to 230.7 cm⁻¹ and shifting the reflection minima and maxima across the infrared. Due to its broad spectral tunability in the fingerprint region of the mid-infrared, this device shows promise as a tunable biological sensing device. To address the issue of high-throughput, high-resolution fabrication of optical metamaterials, atomic calligraphy, a MEMS-based dynamic stencil lithography technique for resist-free fabrication of photonic metamaterials on unconventional substrates, has been developed. The MEMS consists of a moveable stencil, which can be actuated with nanometer precision using electrostatic comb drive actuators.
A fabrication method and flip-chip method have been developed, enabling evaporation of metals through the device handle for fabrication on an external substrate. While the MEMS can be used to fabricate over areas of approximately 100 μm², a piezoelectric step-and-repeat system enables fabrication over cm length scales. Thus, this technique leverages the precision inherent to MEMS actuation, while enhancing nanofabrication throughput. Fabricating metamaterials on new substrates will enable novel and tunable metamaterials. For example, by fabricating unit cells on a periodic auxetic mechanical scaffold, the optical properties can be tuned by straining the mechanical scaffold.
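The quoted tuning range follows from the standard Fabry-Perot relation for free spectral range in wavenumber units, FSR = 1/(2L). A quick numerical check (generic formula, no device-specific assumptions):

```python
# Free spectral range of a Fabry-Perot cavity in wavenumbers: FSR = 1/(2L).
def fsr_wavenumbers(length_um: float) -> float:
    """Free spectral range in cm^-1 for a cavity of the given length in microns."""
    length_cm = length_um * 1e-4
    return 1.0 / (2.0 * length_cm)

print(round(fsr_wavenumbers(1.7)))    # 2941 cm^-1 ("about 2900")
print(round(fsr_wavenumbers(21.67)))  # 231 cm^-1 (230.7 quoted)
```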

  3. Identification of capacitive MEMS accelerometer structure parameters for human body dynamics measurements.

    PubMed

    Benevicius, Vincas; Ostasevicius, Vytautas; Gaidys, Rimvydas

    2013-08-22

Due to their small size, low weight, low cost and low energy consumption, MEMS accelerometers have achieved great commercial success in recent decades. The aim of this research work is to identify a MEMS accelerometer structure for human body dynamics measurements. Photogrammetry was used to measure the maximum possible accelerations of human body parts and the bandwidth of the digital acceleration signal. A capacitive accelerometer configuration was chosen as the primary structure, with the sensing element measuring along all three axes (a 3D accelerometer) with equal sensitivity on each axis. Hill-climbing optimization was used to find the structure parameters. Proof-mass displacements were simulated over the full acceleration range given by the optimization problem constraints. The final model was constructed in Comsol Multiphysics. Eigenfrequencies were calculated and the model's response was found when vibration-stand displacement data were fed into the model as the base excitation law. Model output was compared with experimental data for all excitation frequencies used during the experiments.

  4. Experimental Investigation and Modeling of Scale Effects in Micro Jet Pumps

    NASA Astrophysics Data System (ADS)

    Gardner, William Geoffrey

    2011-12-01

    Since the mid-1990s there has been an active effort to develop hydrocarbon-fueled power generation and propulsion systems on the scale of centimeters or smaller. This effort led to the creation and expansion of a field of research focused around the design and reduction to practice of Power MEMS (microelectromechanical systems) devices, beginning first with microscale jet engines and a generation later more broadly encompassing MEMS devices which generate power or pump heat. Due to small device scale and fabrication techniques, design constraints are highly coupled and conventional solutions for device requirements may not be practicable. This thesis describes the experimental investigation, modeling and potential applications for two classes of microscale jet pumps: jet ejectors and jet injectors. These components pump fluids with no moving parts and can be integrated into Power MEMS devices to satisfy pumping requirements by supplementing or replacing existing solutions. This thesis presents models developed from first principles which predict losses experienced at small length scales and agree well with experimental results. The models further predict maximum achievable power densities at the onset of detrimental viscous losses.

  5. Method for fabricating five-level microelectromechanical structures and microelectromechanical transmission formed

    DOEpatents

    Rodgers, M. Steven; Sniegowski, Jeffry J.; Miller, Samuel L.; McWhorter, Paul J.

    2000-01-01

    A process for forming complex microelectromechanical (MEM) devices having five layers or levels of polysilicon, including four structural polysilicon layers wherein mechanical elements can be formed, and an underlying polysilicon layer forming a voltage reference plane. A particular type of MEM device that can be formed with the five-level polysilicon process is a MEM transmission for controlling or interlocking mechanical power transfer between an electrostatic motor and a self-assembling structure (e.g. a hinged pop-up mirror for use with an incident laser beam). The MEM transmission is based on an incomplete gear train and a bridging set of gears that can be moved into place to complete the gear train to enable power transfer. The MEM transmission has particular applications as a safety component for surety, and for this purpose can incorporate a pin-in-maze discriminator responsive to a coded input signal.

  6. Data fusion algorithm for rapid multi-mode dust concentration measurement system based on MEMS

    NASA Astrophysics Data System (ADS)

    Liao, Maohao; Lou, Wenzhong; Wang, Jinkui; Zhang, Yan

    2018-03-01

As a single measurement method cannot fully meet the technical requirements of dust concentration measurement, a multi-mode detection method is put forward, which brings new requirements for data processing. This paper presents a new dust concentration measurement system which contains a MEMS ultrasonic sensor and a MEMS capacitance sensor, and presents a new data fusion algorithm for this multi-mode dust concentration measurement system. After analyzing the relation between the data from the composite measurement methods, a data fusion algorithm based on Kalman filtering is established, which effectively improves the measurement accuracy and ultimately forms a rapid data fusion model for dust concentration measurement. Test results show that the data fusion algorithm is able to realize rapid and exact concentration detection.
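Kalman-style fusion of two sensors measuring a common quantity can be sketched in a few lines. The noise figures and the near-constant-concentration process model below are illustrative placeholders, not the paper's MEMS sensor characteristics:

```python
import numpy as np

rng = np.random.default_rng(1)

true_conc = 50.0                  # constant dust concentration (arbitrary units)
R_ultra, R_cap = 4.0**2, 2.5**2   # illustrative measurement-noise variances

x_hat, P = 0.0, 1e6               # state estimate and its variance (diffuse prior)
Q = 1e-4                          # small process noise: near-constant concentration

for _ in range(200):
    # Predict step for a random-walk concentration model.
    P += Q
    # Sequentially update with each sensor's measurement.
    for R, noise_std in ((R_ultra, 4.0), (R_cap, 2.5)):
        z = true_conc + noise_std * rng.standard_normal()
        K = P / (P + R)           # Kalman gain
        x_hat += K * (z - x_hat)
        P *= (1.0 - K)

print(f"fused estimate: {x_hat:.2f} (true {true_conc})")
```

The fused estimate weights each reading by its inverse noise variance, so its error variance ends up below that of either sensor alone.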

  7. A basic introduction to the thermodynamics of the Earth system far from equilibrium and maximum entropy production

    PubMed Central

    Kleidon, A.

    2010-01-01

    The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion. PMID:20368248
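The "toy model" style of MEP argument can be illustrated with a classic two-box energy-balance setup: a meridional heat flux F cools a warm box and warms a cold box, and entropy production is maximized over F. This is a generic sketch with illustrative Budyko-style constants, not the paper's exact model:

```python
import numpy as np

A, B = 203.3, 2.09      # linearized longwave cooling A + B*T (W m^-2, W m^-2 K^-1)
R1, R2 = 300.0, 160.0   # absorbed shortwave in the warm and cold box (W m^-2)

def temperatures(F):
    # Steady-state energy balance per box: R1 - F = A + B*T1, R2 + F = A + B*T2.
    T1 = (R1 - F - A) / B + 273.15   # warm box, in kelvin
    T2 = (R2 + F - A) / B + 273.15   # cold box, in kelvin
    return T1, T2

def entropy_production(F):
    T1, T2 = temperatures(F)
    return F * (1.0 / T2 - 1.0 / T1)  # positive while heat flows warm -> cold

# Entropy production vanishes at F = 0 and where T1 = T2; MEP selects the
# interior maximum in between.
F_grid = np.linspace(0.0, 70.0, 7001)
sigma = np.array([entropy_production(F) for F in F_grid])
F_mep = float(F_grid[np.argmax(sigma)])
print(f"MEP-selected heat flux: {F_mep:.1f} W m^-2")
```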

  8. A basic introduction to the thermodynamics of the Earth system far from equilibrium and maximum entropy production.

    PubMed

    Kleidon, A

    2010-05-12

    The Earth system is remarkably different from its planetary neighbours in that it shows pronounced, strong global cycling of matter. These global cycles result in the maintenance of a unique thermodynamic state of the Earth's atmosphere which is far from thermodynamic equilibrium (TE). Here, I provide a simple introduction of the thermodynamic basis to understand why Earth system processes operate so far away from TE. I use a simple toy model to illustrate the application of non-equilibrium thermodynamics and to classify applications of the proposed principle of maximum entropy production (MEP) to such processes into three different cases of contrasting flexibility in the boundary conditions. I then provide a brief overview of the different processes within the Earth system that produce entropy, review actual examples of MEP in environmental and ecological systems, and discuss the role of interactions among dissipative processes in making boundary conditions more flexible. I close with a brief summary and conclusion.

  9. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
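The core PME computation, choosing the density of maximum entropy consistent with known moments, can be sketched numerically. With only the mean and variance constrained, the solver should recover a Gaussian (multiplier 1/2 on x²). This is a generic illustration of the principle, not the paper's code:

```python
import numpy as np

# Discretized support and target moments (mean 0, variance 1).
x = np.linspace(-6.0, 6.0, 1201)
dx = x[1] - x[0]
g = np.stack([x, x**2])           # constraint functions g_k(x)
mu = np.array([0.0, 1.0])         # target moments <g_k>

# The maximum entropy density has the form p(x) ~ exp(-lam . g(x));
# solve for the multipliers by gradient descent on the dual problem.
lam = np.array([0.0, 0.3])
for _ in range(20000):
    w = np.exp(-lam @ g)
    p = w / (w.sum() * dx)                 # current density on the grid
    moments = (g * p).sum(axis=1) * dx     # current <g_k>
    lam -= 0.05 * (mu - moments)           # dual gradient step

mean = (x * p).sum() * dx
var = (x**2 * p).sum() * dx - mean**2
print(f"mean={mean:.3f}  var={var:.3f}  lambda={lam.round(3)}")
```

At convergence the multipliers reproduce exp(-x²/2) up to normalization, i.e. the least-biased density given the two moments.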

  10. Statistical mechanics of letters in words

    PubMed Central

    Stephens, Greg J.; Bialek, William

    2013-01-01

    We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial and arbitrary, we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of words, capturing ~92% of the multi-information in four-letter words and even “discovering” words that were not represented in the data. These maximum entropy models incorporate letter interactions through a set of pairwise potentials and thus define an energy landscape on the space of possible words. Guided by the large letter redundancy we seek a lower-dimensional encoding of the letter distribution and show that distinctions between local minima in the landscape account for ~68% of the four-letter entropy. We suggest that these states provide an effective vocabulary which is matched to the frequency of word use and much smaller than the full lexicon. PMID:20866490

  11. A diameter-sensitive flow entropy method for reliability consideration in water distribution system design

    NASA Astrophysics Data System (ADS)

    Liu, Haixing; Savić, Dragan; Kapelan, Zoran; Zhao, Ming; Yuan, Yixing; Zhao, Hongbin

    2014-07-01

Flow entropy is a measure of the uniformity of pipe flows in water distribution systems. By maximizing flow entropy one can identify reliable layouts or connectivity in networks. In order to overcome the disadvantage of the common definition of flow entropy, which does not consider the impact of pipe diameter on reliability, an extended definition of flow entropy, termed diameter-sensitive flow entropy, is proposed. This new methodology is then assessed against other reliability methods, including Monte Carlo simulation, a pipe failure probability model, and a surrogate measure (the resilience index) integrated with water demand and pipe failure uncertainty. The reliability assessment is based on a sample of WDS designs derived from an optimization process for each of two benchmark networks. Correlation analysis is used to evaluate quantitatively the relationship between entropy and reliability, and a comparative analysis between simple flow entropy and the new method is conducted. The results demonstrate that diameter-sensitive flow entropy shows a consistently much stronger correlation with the three reliability measures than simple flow entropy. Therefore, the new flow entropy method can be taken as a better surrogate measure for reliability and could potentially be integrated into the optimal design problem of WDSs. Sensitivity analysis results show that the velocity parameters used in the new flow entropy have no significant impact on the relationship between diameter-sensitive flow entropy and reliability.
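At a single node, the flow-entropy idea reduces to the Shannon entropy of the pipe-flow fractions q_i/Q, which is maximal (ln n) when flows are uniform. The diameter-weighted variant below is a schematic illustration of the "diameter-sensitive" idea, with a hypothetical weighting, not the paper's exact formulation:

```python
import numpy as np

def flow_entropy(flows):
    """Shannon entropy (natural log) of the flow fractions q_i/Q."""
    q = np.asarray(flows, float)
    p = q / q.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def diameter_weighted_entropy(flows, diameters, d_ref=300.0):
    """Schematic variant: discount each pipe's contribution according to how
    far its diameter (mm) falls below a reference diameter (hypothetical)."""
    q = np.asarray(flows, float)
    d = np.asarray(diameters, float)
    weights = np.minimum(d / d_ref, 1.0)
    p = q / q.sum()
    mask = p > 0
    return float(-(weights[mask] * p[mask] * np.log(p[mask])).sum())

# Uniform flows across n pipes maximize entropy at ln(n).
print(flow_entropy([10, 10, 10, 10]))   # ln 4 ~= 1.386
print(flow_entropy([37, 1, 1, 1]))      # far less uniform, lower entropy
print(diameter_weighted_entropy([10, 10, 10, 10], [300, 300, 150, 100]))
```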

  12. Weak scale from the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Hamada, Yuta; Kawai, Hikaru; Kawana, Kiyoharu

    2015-03-01

The theory of the multiverse and wormholes suggests that the parameters of the Standard Model (SM) are fixed in such a way that the entropy of radiation of the S^3 universe at its final stage, S_rad, becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle in general, for a few parameters of the SM we can check whether S_rad actually becomes maximum at the observed values. In this paper, we regard S_rad at the final stage as a function of the weak scale (the Higgs expectation value) v_h, and show that it becomes maximum around v_h = O(300 GeV) when the dimensionless couplings in the SM, i.e., the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by v_h ~ T_BBN^2 / (M_Pl y_e^5), where y_e is the Yukawa coupling of the electron, T_BBN is the temperature at which Big Bang nucleosynthesis starts, and M_Pl is the Planck mass.

  13. The performance evaluation model of mining project founded on the weight optimization entropy value method

    NASA Astrophysics Data System (ADS)

    Mao, Chao; Chen, Shou

    2017-01-01

Because the traditional entropy value method still has low evaluation accuracy when assessing the performance of mining projects, a performance evaluation model for mining projects based on an improved entropy value method is proposed. First, a new weight assignment model is established, founded on compatibility matrix analysis from the analytic hierarchy process (AHP) together with the entropy value method: once the compatibility matrix analysis achieves the consistency requirement, any differences between the subjective and objective weights are resolved by moderately adjusting the two proportions, and a fuzzy evaluation matrix is then constructed on this basis for performance evaluation. Simulation experiments show that, compared with the traditional entropy value method and the compatibility matrix analysis method, the proposed performance evaluation model has higher assessment accuracy.
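The entropy value method assigns objective criterion weights from the dispersion of the normalized decision matrix: criteria whose scores barely vary carry little information and get near-zero weight. A generic sketch of the standard formula (the decision matrix is illustrative):

```python
import numpy as np

def entropy_weights(X):
    """Objective weights for criteria (columns) via the entropy value method."""
    X = np.asarray(X, float)
    m, _ = X.shape
    P = X / X.sum(axis=0)               # normalize each criterion column
    k = 1.0 / np.log(m)
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    e = -k * (P * logs).sum(axis=0)     # entropy of each criterion, in [0, 1]
    d = 1.0 - e                         # degree of diversification
    return d / d.sum()

# Four mining projects scored on three criteria; criterion 2 barely varies,
# so it should receive a near-zero weight.
X = np.array([
    [0.8, 0.50, 0.9],
    [0.6, 0.51, 0.2],
    [0.9, 0.49, 0.7],
    [0.3, 0.50, 0.1],
])
w = entropy_weights(X)
print(w.round(3))
```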

  14. Economic cycles and their synchronization: Spectral analysis of macroeconomic series from Italy, The Netherlands, and the UK

    NASA Astrophysics Data System (ADS)

    Sella, Lisa; Vivaldo, Gianna; Ghil, Michael; Groth, Andreas

    2010-05-01

    The present work applies several advanced spectral methods (Ghil et al., Rev. Geophys., 2002) to the analysis of macroeconomic fluctuations in Italy, The Netherlands, and the United Kingdom. These methods provide valuable time-and-frequency-domain tools that complement traditional time-domain analysis, and are thus fairly well known by now in the geosciences and life sciences, but not yet widespread in quantitative economics. In particular, they enable the identification and characterization of nonlinear trends and dominant cycles --- including low-frequency and seasonal components --- that characterize the behavior of each time series. We explore five fundamental indicators of the real (i.e., non-monetary), aggregate economy --- namely gross domestic product (GDP), consumption, fixed investments, exports and imports --- in a univariate as well as multivariate setting. A single-channel analysis by means of three independent spectral methods --- singular spectrum analysis (SSA), the multi-taper method (MTM), and the maximum-entropy method (MEM) --- reveals very similar near-annual cycles, as well as several longer periodicities, in the macroeconomic indicators of all the countries analyzed. Since each indicator represents different features of an economic system, we combine them to infer if common oscillatory modes are present, either among different indicators within the same country or among the same indicators across different countries. Multichannel-SSA (M-SSA) reinforces the previous results, and shows that the common modes agree in character with solutions of a non-equilibrium dynamic model (NEDyM) that produces endogenous business cycles (Hallegatte et al., JEBO, 2008). The presence of these modes in NEDyM results from adjustment delays and other nonequilibrium effects that were added to a neoclassical Solow (Q. J. Econ., 1956) growth model. 
Their confirmation by the present analysis has important consequences for the net impact of natural disasters on the economy of a country: Hallegatte and Ghil (Ecol. Econ., 2008) have shown that the presence of business cycles substantially modifies this impact relative to the impact on an economy in or near equilibrium. The present work concludes with a study of the synchronization of economic fluctuations, which follows a similar study of macroeconomic indicators for the United States, presented in a nearby poster. Since business cycles are not country-specific phenomena, but show common characteristics across countries, our aim is to uncover hidden global behavior across the European economies (cf. Mazzi and Savio, Macmillan, 2006).
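The maximum-entropy method used here is autoregressive (AR) spectral estimation: fit an AR model to the series and read the spectrum off the model. A compact sketch using Burg's recursion on a synthetic near-annual cycle sampled quarterly (a generic implementation, not the authors' code):

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: AR coefficients a (a[0] = 1) and prediction-error variance E."""
    x = np.asarray(x, float)
    a = np.zeros(order + 1)
    a[0] = 1.0
    E = np.dot(x, x) / len(x)
    f, b = x[1:].copy(), x[:-1].copy()   # forward/backward prediction errors
    for m in range(1, order + 1):
        k = -2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        prev = a.copy()
        for i in range(1, m + 1):        # Levinson update of the AR polynomial
            a[i] = prev[i] + k * prev[m - i]
        E *= 1.0 - k * k
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a, E

def mem_spectrum(a, E, nfft=4096):
    """MEM power spectral density on [0, 0.5] cycles/sample."""
    A = np.fft.rfft(a, nfft)
    freqs = np.arange(len(A)) / nfft
    return freqs, E / np.abs(A) ** 2

# A noisy cycle at 0.05 cycles/sample (a 5-year period in quarterly data).
rng = np.random.default_rng(2)
n = np.arange(120)
x = np.sin(2 * np.pi * 0.05 * n) + 0.2 * rng.standard_normal(120)
a, E = burg_ar(x, order=8)
freqs, psd = mem_spectrum(a, E)
f_peak = float(freqs[np.argmax(psd)])
print(f"spectral peak at {f_peak:.3f} cycles/sample")
```

The sharp peak from a short record is the property that makes MEM attractive for short macroeconomic series, at the cost of sensitivity to the chosen AR order.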

  15. CFD-ACE+: a CAD system for simulation and modeling of MEMS

    NASA Astrophysics Data System (ADS)

    Stout, Phillip J.; Yang, H. Q.; Dionne, Paul; Leonard, Andy; Tan, Zhiqiang; Przekwas, Andrzej J.; Krishnan, Anantha

    1999-03-01

Computer aided design (CAD) systems are a key to designing and manufacturing MEMS with higher performance/reliability, reduced costs, shorter prototyping cycles and improved time-to-market. One such system is CFD-ACE+MEMS, a modeling and simulation environment for MEMS which includes grid generation, data visualization, graphical problem setup, and coupled fluidic, thermal, mechanical, electrostatic, and magnetic physical models. The fluid model is a 3D multi-block, structured/unstructured/hybrid, pressure-based, implicit Navier-Stokes code with capabilities for multi-component diffusion, multi-species transport, multi-step gas phase chemical reactions, surface reactions, and multi-media conjugate heat transfer. The thermal model solves the total-enthalpy form of the energy equation. The energy equation includes unsteady, convective, conductive, species energy, viscous dissipation, work, and radiation terms. The electrostatic model solves Poisson's equation. Both the finite volume method and the boundary element method (BEM) are available for solving Poisson's equation; the BEM is useful for unbounded problems. The magnetic model solves for the vector magnetic potential from Maxwell's equations including eddy currents but neglecting displacement currents. The mechanical model is a finite element stress/deformation solver which has been coupled to the flow, heat, electrostatic, and magnetic calculations to study flow-induced, thermally induced, electrostatically induced, and magnetically induced deformations of structures. The mechanical or structural model can accommodate elastic and plastic materials, can handle large non-linear displacements, and can model isotropic and anisotropic materials. The thermal-mechanical coupling involves the solution of the steady state Navier equation with thermoelastic deformation. The electrostatic-mechanical coupling is a calculation of the pressure force due to surface charge on the mechanical structure.
Results of CFD-ACE+MEMS modeling of MEMS such as cantilever beams, accelerometers, and comb drives are discussed.

  16. Predicting the potential environmental suitability for Theileria orientalis transmission in New Zealand cattle using maximum entropy niche modelling.

    PubMed

    Lawrence, K E; Summers, S R; Heath, A C G; McFadden, A M J; Pulford, D J; Pomroy, W E

    2016-07-15

The tick-borne haemoparasite Theileria orientalis is the most important infectious cause of anaemia in New Zealand cattle. Since 2012 a previously unrecorded type, T. orientalis type 2 (Ikeda), has been associated with disease outbreaks of anaemia, lethargy, jaundice and deaths on over 1000 New Zealand cattle farms, with most of the affected farms found in the upper North Island. The aim of this study was to model the relative environmental suitability for T. orientalis transmission throughout New Zealand, to predict the proportion of cattle farms potentially suitable for active T. orientalis infection by region, island and the whole of New Zealand and to estimate the average relative environmental suitability per farm by region, island and the whole of New Zealand. The relative environmental suitability for T. orientalis transmission was estimated using the Maxent (maximum entropy) modelling program. The Maxent model predicted that 99% of North Island cattle farms (n=36,257), 64% of South Island cattle farms (n=15,542) and 89% of New Zealand cattle farms overall (n=51,799) could potentially be suitable for T. orientalis transmission. The average relative environmental suitability of T. orientalis transmission at the farm level was 0.34 in the North Island, 0.02 in the South Island and 0.24 overall. The study showed that the potential spatial distribution of T. orientalis environmental suitability was much greater than presumed in the early part of the Theileria associated bovine anaemia (TABA) epidemic. Maximum entropy offers a computationally efficient method of modelling the probability of habitat suitability for an arthropod-vectored disease. This model could help estimate the boundaries of the endemically stable and endemically unstable areas for T. orientalis transmission within New Zealand and be of considerable value in informing practitioner and farmer biosecurity decisions in these respective areas.

  17. Learning probability distributions from smooth observables and the maximum entropy principle: some remarks

    NASA Astrophysics Data System (ADS)

    Obuchi, Tomoyuki; Monasson, Rémi

    2015-09-01

The maximum entropy principle (MEP) is a very useful working hypothesis in a wide variety of inference problems, ranging from biological to engineering tasks. To better understand the reasons for the success of the MEP, we propose a statistical-mechanical formulation to treat the space of probability distributions constrained by the measurements of (experimental) observables. In this paper we first review the results of a detailed analysis of the simplest case of randomly chosen observables. In addition, we investigate by numerical and analytical means the case of smooth observables, which is of practical relevance. Our preliminary results are presented and discussed with respect to the efficiency of the MEP.

  18. Sleep Estimates Using Microelectromechanical Systems (MEMS)

    PubMed Central

    te Lindert, Bart H. W.; Van Someren, Eus J. W.

    2013-01-01

    Study Objectives: Although currently more affordable than polysomnography, actigraphic sleep estimates have disadvantages. Brand-specific differences in data reduction impede pooling of data in large-scale cohorts and may not fully exploit movement information. Sleep estimate reliability might improve by advanced analyses of three-axial, linear accelerometry data sampled at a high rate, which is now feasible using microelectromechanical systems (MEMS). However, it might take some time before these analyses become available. To provide ongoing studies with backward compatibility while already switching from actigraphy to MEMS accelerometry, we designed and validated a method to transform accelerometry data into the traditional actigraphic movement counts, thus allowing for the use of validated algorithms to estimate sleep parameters. Design: Simultaneous actigraphy and MEMS-accelerometry recording. Setting: Home, unrestrained. Participants: Fifteen healthy adults (23-36 y, 10 males, 5 females). Interventions: None. Measurements: Actigraphic movement counts/15-sec and 50-Hz digitized MEMS-accelerometry. Analyses: Passing-Bablok regression optimized transformation of MEMS-accelerometry signals to movement counts. Kappa statistics calculated agreement between individual epochs scored as wake or sleep. Bland-Altman plots evaluated reliability of common sleep variables both between and within actigraphs and MEMS-accelerometers. Results: Agreement between epochs was almost perfect at the low, medium, and high threshold (kappa = 0.87 ± 0.05, 0.85 ± 0.06, and 0.83 ± 0.07). Sleep parameter agreement was better between two MEMS-accelerometers or a MEMS-accelerometer and an actigraph than between two actigraphs. Conclusions: The algorithm allows for continuity of outcome parameters in ongoing actigraphy studies that consider switching to MEMS-accelerometers. 
Its implementation makes backward compatibility feasible, while collecting raw data that, in time, could provide better sleep estimates and promote cross-study data pooling. Citation: te Lindert BHW; Van Someren EJW. Sleep estimates using microelectromechanical systems (MEMS). SLEEP 2013;36(5):781-789. PMID:23633761
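The general shape of such a transformation (band-limit the raw 50 Hz signal, rectify, and aggregate into 15-s epochs) can be sketched as follows. The gravity-removal step and scaling constant are placeholders for illustration, not the calibrated Passing-Bablok mapping from the paper:

```python
import numpy as np

FS = 50          # Hz, MEMS accelerometer sampling rate
EPOCH_S = 15     # actigraphic epoch length in seconds

def movement_counts(acc, scale=100.0):
    """Convert raw 3-axis acceleration (n x 3, in g) to counts per 15-s epoch.
    Placeholder pipeline: magnitude -> remove gravity/DC -> rectify -> sum."""
    mag = np.linalg.norm(acc, axis=1)     # combine the three axes
    dyn = np.abs(mag - np.median(mag))    # crude gravity/DC removal + rectify
    n_epochs = len(dyn) // (FS * EPOCH_S)
    dyn = dyn[: n_epochs * FS * EPOCH_S]
    epochs = dyn.reshape(n_epochs, FS * EPOCH_S)
    return (scale * epochs.sum(axis=1)).astype(int)

# Two epochs of synthetic data: lying still (gravity only), then moving.
rng = np.random.default_rng(3)
still = np.tile([0.0, 0.0, 1.0], (FS * EPOCH_S, 1))
moving = still + 0.1 * rng.standard_normal((FS * EPOCH_S, 3))
counts = movement_counts(np.vstack([still, moving]))
print(counts)  # the movement epoch yields a far larger count than the still epoch
```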

  19. Maximum entropy production, carbon assimilation, and the spatial organization of vegetation in river basins

    PubMed Central

    del Jesus, Manuel; Foti, Romano; Rinaldo, Andrea; Rodriguez-Iturbe, Ignacio

    2012-01-01

    The spatial organization of functional vegetation types in river basins is a major determinant of their runoff production, biodiversity, and ecosystem services. The optimization of different objective functions has been suggested to control the adaptive behavior of plants and ecosystems, often without a compelling justification. Maximum entropy production (MEP), rooted in thermodynamics principles, provides a tool to justify the choice of the objective function controlling vegetation organization. The application of MEP at the ecosystem scale results in maximum productivity (i.e., maximum canopy photosynthesis) as the thermodynamic limit toward which the organization of vegetation appears to evolve. Maximum productivity, which incorporates complex hydrologic feedbacks, allows us to reproduce the spatial macroscopic organization of functional types of vegetation in a thoroughly monitored river basin, without the need for a reductionist description of the underlying microscopic dynamics. The methodology incorporates the stochastic characteristics of precipitation and the associated soil moisture on a spatially disaggregated framework. Our results suggest that the spatial organization of functional vegetation types in river basins naturally evolves toward configurations corresponding to dynamically accessible local maxima of the maximum productivity of the ecosystem. PMID:23213227

  20. Polymer-based wireless implantable sensor and platform for systems biology study

    NASA Astrophysics Data System (ADS)

    Xue, Ning

Wireless implantable MEMS (microelectromechanical systems) devices have been developed over the past decade based on the combination of bio-MEMS and radio frequency (RF) MEMS technology. These devices require a wireless telemetric antenna and the corresponding circuit, and biocompatible materials must be used in the device design. To deliver maximum power to the implantable device for a given power supply to the external coil circuit, this dissertation theoretically analyzes the mutual inductance between two coils over a variety of vertical distances, lateral displacements and angular misalignments, covering the coil misalignment situations that arise in surgery. A planar spiral coil has been developed as the receiver coil of the coupling system. To obtain maximum induced voltage over the receiver circuit, different geometries of the power coil and different system operating frequencies were investigated. An intraocular pressure (IOP) sensor has been developed consisting only of biocompatible materials, SU-8 and gold. Its size is sufficiently small for it to be implanted in the eye. The measurement results showed that it has a relatively linear pressure response, high resolution and relatively long working stability in a saline environment. Finally, a simple and low-cost micro-well biochip has been developed solely from polydimethylsiloxane (PDMS), to be used for isolating single cells or small groups of cells. By performing atomic force microscopy (AFM), contact angle and X-ray photoelectron spectroscopy (XPS) measurements on the PDMS surfaces under various surface treatment conditions, the physical and chemical surface natures were thoroughly analyzed as the basis for studying cell attachment and isolation on these surfaces.

  1. Controlling the Shannon Entropy of Quantum Systems

    PubMed Central

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking. PMID:23818819
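
    The discrete quantity being controlled is the ordinary Shannon entropy of the state's measurement probabilities; a minimal sketch:

    ```python
    import math

    def shannon_entropy(probs):
        """Discrete Shannon entropy H = -sum p * log2(p) of a probability vector."""
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A basis (pure, fully localized) state has zero entropy, while the
    # uniform distribution over d outcomes attains the maximum log2(d);
    # entropy control steers the state between such extremes.
    h_pure = shannon_entropy([1.0, 0.0])
    h_uniform = shannon_entropy([0.5, 0.5])
    ```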

  2. Controlling the Shannon entropy of quantum systems.

    PubMed

    Xing, Yifan; Wu, Jun

    2013-01-01

    This paper proposes a new quantum control method which controls the Shannon entropy of quantum systems. For both discrete and continuous entropies, controller design methods are proposed based on probability density function control, which can drive the quantum state to any target state. To drive the entropy to any target at any prespecified time, another discretization method is proposed for the discrete entropy case, and the conditions under which the entropy can be increased or decreased are discussed. Simulations are done on both two- and three-dimensional quantum systems, where division and prediction are used to achieve more accurate tracking.

  3. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data.

    PubMed

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points.
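
    The temperature-to-respiration relationship that supplies the soft data can be illustrated with a generic Q10-type exponential model; the functional form is a common choice in the soil-flux literature, and the parameter values below are hypothetical, not those fitted in the study:

    ```python
    def respiration_from_temperature(t_soil, r_ref=2.0, q10=2.2, t_ref=10.0):
        """Q10-type exponential soil respiration model (illustrative only).

        r_ref: respiration at reference temperature t_ref (deg C);
        q10: factor by which respiration grows per 10 deg C warming.
        """
        return r_ref * q10 ** ((t_soil - t_ref) / 10.0)

    # Dense, cheap temperature readings become "soft" respiration estimates
    # that a BME-style interpolator can fuse with sparse hard flux data.
    soft = [respiration_from_temperature(t) for t in (8.0, 12.0, 16.0)]
    ```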

  4. Improving Estimations of Spatial Distribution of Soil Respiration Using the Bayesian Maximum Entropy Algorithm and Soil Temperature as Auxiliary Data

    PubMed Central

    Hu, Junguo; Zhou, Jian; Zhou, Guomo; Luo, Yiqi; Xu, Xiaojun; Li, Pingheng; Liang, Junyi

    2016-01-01

    Soil respiration inherently shows strong spatial variability. It is difficult to obtain an accurate characterization of soil respiration with an insufficient number of monitoring points. However, it is expensive and cumbersome to deploy many sensors. To solve this problem, we proposed employing the Bayesian Maximum Entropy (BME) algorithm, using soil temperature as auxiliary information, to study the spatial distribution of soil respiration. The BME algorithm used the soft data (auxiliary information) effectively to improve the estimation accuracy of the spatiotemporal distribution of soil respiration. Based on the functional relationship between soil temperature and soil respiration, the BME algorithm satisfactorily integrated soil temperature data into said spatial distribution. As a means of comparison, we also applied the Ordinary Kriging (OK) and Co-Kriging (Co-OK) methods. The results indicated that the root mean squared errors (RMSEs) and absolute values of bias for both Day 1 and Day 2 were the lowest for the BME method, thus demonstrating its higher estimation accuracy. Further, we compared the performance of the BME algorithm coupled with auxiliary information, namely soil temperature data, and the OK method without auxiliary information in the same study area for 9, 21, and 37 sampled points. The results showed that the RMSEs for the BME algorithm (0.972 and 1.193) were less than those for the OK method (1.146 and 1.539) when the number of sampled points was 9 and 37, respectively. This indicates that the former method using auxiliary information could reduce the required number of sampling points for studying spatial distribution of soil respiration. Thus, the BME algorithm, coupled with soil temperature data, can not only improve the accuracy of soil respiration spatial interpolation but can also reduce the number of sampling points. PMID:26807579

  5. Advanced packaging for Integrated Micro-Instruments

    NASA Technical Reports Server (NTRS)

    Lyke, James L.

    1995-01-01

    The relationship between packaging, microelectronics, and micro-electro-mechanical systems (MEMS) is an important one, particularly when the edges of performance boundaries are pressed, as in the case of miniaturized systems. Packaging is a sort of physical backbone that enables the maximum performance of these systems to be realized, and the penalties imposed by conventional packaging approaches are particularly limiting for MEMS devices. As such, advanced packaging approaches, such as multi-chip modules (MCMs), have been touted as a true means of electronic 'enablement' for a variety of application domains. Realizing an optimum system of packaging, however, is not as simple as replacing a set of single-chip packages with a substrate of interconnections. Research at Phillips Laboratory has turned up a number of integration options in the two- and three-dimensional rendering of miniature systems, with physical interconnection structures of intrinsically high performance. Not only do these structures motivate the redesign of integrated circuits (ICs) for lower power, but they possess interesting features that provide a framework for the direct integration of MEMS devices. Cost remains a barrier to the application of MEMS devices, even in space systems. Several innovations are suggested that would result in lower cost and more rapid cycle time. First, the novelty of a 'constant floor plan' MCM, which encapsulates a variety of commonly used components into a stockable, easily customized assembly, is discussed. Next, the use of low-cost substrates is examined. The anticipated advent of ultra-high-density interconnect (UHDI) is presented as the limiting case of advanced packaging. Finally, the concept of a heterogeneous 3-D MCM system is outlined that allows different compatible packaging approaches to be combined into a uniformly dense structure that could also include MEMS-based sensors.

  6. Piezoelectric Lead Zirconate Titanate (PZT) Ring Shaped Contour-Mode MEMS Resonators

    NASA Astrophysics Data System (ADS)

    Kasambe, P. V.; Asgaonkar, V. V.; Bangera, A. D.; Lokre, A. S.; Rathod, S. S.; Bhoir, D. V.

    2018-02-01

    Flexibility in setting the fundamental frequency of a resonator independently of its motional resistance is a desirable property in micro-electromechanical (MEMS) resonator design. Ring-shaped piezoelectric contour-mode MEMS resonators satisfy this criterion better than rectangular-plate MEMS resonators do. A ring-shaped contour-mode piezoelectric MEMS resonator also has the advantage that its fundamental frequency is defined by its in-plane dimensions, although that frequency shifts with platinum (Pt) electrode thickness, quantified here as the change in the ratio fNEW/fO. This paper presents the effects of variations in geometrical parameters and in the piezoelectric material on the resonant frequencies and electrical parameters of platinum-piezoelectric-aluminium ring-shaped contour-mode MEMS resonators. With lead zirconate titanate (PZT) as the piezoelectric material, the proposed structure exhibited minimal change in fundamental resonant frequency due to platinum thickness variation. This structure was also found to exhibit extremely low motional resistance, 0.03 Ω, compared with the 31-35 Ω range obtained when using AlN as the piezoelectric material. CoventorWare 10, a finite element method (FEM) analysis and design tool for MEMS devices, was used for the design, simulation, and analysis of the resonators.

  7. Design and Fabrication of High Gain Multi-element Multi-segment Quarter-sector Cylindrical Dielectric Resonator Antenna

    NASA Astrophysics Data System (ADS)

    Ranjan, Pinku; Gangwar, Ravi Kumar

    2017-12-01

    A novel design and analysis of a quarter cylindrical dielectric resonator antenna (q-CDRA) with a multi-element and multi-segment (MEMS) approach is presented. The MEMS q-CDRA was designed by splitting a solid cylinder into four identical quarters and then applying a multi-segmentation approach. The proposed antenna was designed for enhanced bandwidth as well as high gain. For bandwidth enhancement, the multi-segmentation method guides the selection of the dielectric constants of the materials. The performance of the proposed MEMS q-CDRA is demonstrated along with design guidelines for the MEMS approach. To validate the antenna performance, a three-segment q-CDRA was fabricated and measured. The simulated results are in good agreement with the measured ones. The MEMS q-CDRA has a wide impedance bandwidth (|S11| ≤ -10 dB) of 133.8% with a monopole-like radiation pattern. It operates in the TM01δ mode with a measured peak gain of 6.65 dBi and a minimum gain of 4.5 dBi over the entire operating frequency band (5.1-13.7 GHz). The proposed MEMS q-CDRA may find applications in the WiMAX and WLAN bands.

  8. Financial time series analysis based on effective phase transfer entropy

    NASA Astrophysics Data System (ADS)

    Yang, Pengbo; Shang, Pengjian; Lin, Aijing

    2017-02-01

    Transfer entropy is a powerful technique which is able to quantify the impact of one dynamic system on another system. In this paper, we propose the effective phase transfer entropy method based on the transfer entropy method. We use simulated data to test the performance of this method, and the experimental results confirm that the proposed approach is capable of detecting the information transfer between the systems. We also explore the relationship between effective phase transfer entropy and some variables, such as data size, coupling strength and noise. The effective phase transfer entropy is positively correlated with the data size and the coupling strength. Even in the presence of a large amount of noise, it can detect the information transfer between systems, and it is very robust to noise. Moreover, this measure is indeed able to accurately estimate the information flow between systems compared with phase transfer entropy. In order to reflect the application of this method in practice, we apply this method to financial time series and gain new insight into the interactions between systems. It is demonstrated that the effective phase transfer entropy can be used to detect some economic fluctuations in the financial market. To summarize, the effective phase transfer entropy method is a very efficient tool to estimate the information flow between systems.
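
    A minimal plug-in estimate of lag-1 transfer entropy for symbolic series illustrates the directional information flow the method quantifies; this is a generic histogram estimator, not the authors' effective phase variant:

    ```python
    import math
    from collections import Counter

    def transfer_entropy(x, y):
        """Plug-in transfer entropy TE(X -> Y) for symbol sequences (lag 1).

        TE = sum over (y1, y0, x0) of p(y1, y0, x0) *
             log2( p(y1 | y0, x0) / p(y1 | y0) ).
        """
        triples = Counter(zip(y[1:], y[:-1], x[:-1]))
        pairs_yx = Counter(zip(y[:-1], x[:-1]))
        pairs_yy = Counter(zip(y[1:], y[:-1]))
        singles_y = Counter(y[:-1])
        n = len(y) - 1
        te = 0.0
        for (y1, y0, x0), c in triples.items():
            p_joint = c / n
            p_y1_given_yx = c / pairs_yx[(y0, x0)]
            p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
            te += p_joint * math.log2(p_y1_given_yx / p_y1_given_y)
        return te

    # y copies x with a one-step delay, so information flows from x to y:
    # TE(X -> Y) is large while TE(Y -> X) stays near zero.
    x = [0, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
    y = [0] + x[:-1]
    te_xy = transfer_entropy(x, y)
    te_yx = transfer_entropy(y, x)
    ```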

  9. Hydrodynamic cavitation: from theory towards a new experimental approach

    NASA Astrophysics Data System (ADS)

    Lucia, Umberto; Gervino, Gianpiero

    2009-09-01

    Hydrodynamic cavitation is analysed by a global thermodynamics principle following an approach based on the maximum irreversible entropy variation that has already given promising results for open systems and has been successfully applied in specific engineering problems. In this paper we present a new phenomenological method to evaluate the conditions inducing cavitation. We think this method could be useful in the design of turbo-machineries and related technologies: it represents both an original physical approach to cavitation and an economical saving in planning because the theoretical analysis could allow engineers to reduce the experimental tests and the costs of the design process.

  10. Isotope Induced Proton Ordering in Partially Deuterated Aspirin

    NASA Astrophysics Data System (ADS)

    Schiebel, P.; Papoular, R. J.; Paulus, W.; Zimmermann, H.; Detken, A.; Haeberlen, U.; Prandl, W.

    1999-08-01

    We report the nuclear density distribution of partially deuterated aspirin, C8H5O4-CH2D, at 300 and 15 K, as determined by neutron diffraction coupled with maximum entropy method image reconstruction. While fully protonated and fully deuterated methyl groups in aspirin are delocalized at low temperatures due to quantum mechanical tunneling, we provide here direct evidence that in aspirin-CH2D at 15 K the methyl hydrogens are localized, while randomly distributed over three sites at 300 K. This is the first observation by diffraction methods of low-temperature isotopic ordering in condensed matter.

  11. On the morphological instability of a bubble during inertia-controlled growth

    NASA Astrophysics Data System (ADS)

    Martyushev, L. M.; Birzina, A. I.; Soboleva, A. S.

    2018-06-01

    The morphological stability of a spherical bubble growing under inertia control is analyzed. By comparing the entropy production for distorted and undistorted surfaces and applying the maximum entropy production principle, the bubble is shown to be morphologically unstable under distortions of arbitrary amplitude. This result helps explain a number of experiments in which the surface roughness of bubbles was observed during their explosive growth.

  12. Entropy Production in Collisionless Systems. II. Arbitrary Phase-space Occupation Numbers

    NASA Astrophysics Data System (ADS)

    Barnes, Eric I.; Williams, Liliya L. R.

    2012-04-01

    We present an analysis of two thermodynamic techniques for determining equilibria of self-gravitating systems. One is the Lynden-Bell (LB) entropy maximization analysis that introduced violent relaxation. Since we do not use the Stirling approximation, which is invalid at small occupation numbers, our systems have finite mass, unlike LB's isothermal spheres. (Instead of Stirling, we utilize a very accurate smooth approximation for ln x!.) The second analysis extends entropy production extremization to self-gravitating systems, also without the use of the Stirling approximation. In addition to the LB statistical family characterized by the exclusion principle in phase space, and designed to treat collisionless systems, we also apply the two approaches to the Maxwell-Boltzmann (MB) families, which have no exclusion principle and hence represent collisional systems. We implicitly assume that all of the phase space is equally accessible. We derive entropy production expressions for both families and give the extremum conditions for entropy production. Surprisingly, our analysis indicates that extremizing entropy production rate results in systems that have maximum entropy, in both LB and MB statistics. In other words, both thermodynamic approaches lead to the same equilibrium structures.
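
    The motivation for avoiding the Stirling approximation can be seen numerically: the simple form ln n! ≈ n ln n − n is very poor at the small occupation numbers discussed above, while the exact value is available via the log-gamma function:

    ```python
    import math

    def stirling_ln_factorial(n):
        """Crude Stirling approximation ln n! ~ n ln n - n."""
        return n * math.log(n) - n

    def rel_err(n):
        """Relative error of Stirling against the exact ln n! = lgamma(n + 1)."""
        exact = math.lgamma(n + 1)
        return abs(stirling_ln_factorial(n) - exact) / abs(exact)

    # The approximation is unusable at small occupation numbers (relative
    # error above 100% at n = 2) but improves steadily as n grows.
    err_small = rel_err(2)
    err_large = rel_err(100)
    ```

    Using `lgamma` (or any accurate smooth approximation to ln x!) keeps the entropy functional well behaved where occupation numbers are of order one.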

  13. Trends in entropy production during ecosystem development in the Amazon Basin.

    PubMed

    Holdaway, Robert J; Sparrow, Ashley D; Coomes, David A

    2010-05-12

    Understanding successional trends in energy and matter exchange across the ecosystem-atmosphere boundary layer is an essential focus in ecological research; however, a general theory describing the observed pattern remains elusive. This paper examines whether the principle of maximum entropy production could provide the solution. A general framework is developed for calculating entropy production using data from terrestrial eddy covariance and micrometeorological studies. We apply this framework to data from eight tropical forest and pasture flux sites in the Amazon Basin and show that forest sites had consistently higher entropy production rates than pasture sites (0.461 versus 0.422 W m(-2) K(-1), respectively). It is suggested that during development, changes in canopy structure minimize surface albedo, and development of deeper root systems optimizes access to soil water and thus potential transpiration, resulting in lower surface temperatures and increased entropy production. We discuss our results in the context of a theoretical model of entropy production versus ecosystem developmental stage. We conclude that, although further work is required, entropy production could potentially provide a much-needed theoretical basis for understanding the effects of deforestation and land-use change on the land-surface energy balance.

  14. On the pH Dependence of the Potential of Maximum Entropy of Ir(111) Electrodes.

    PubMed

    Ganassin, Alberto; Sebastián, Paula; Climent, Víctor; Schuhmann, Wolfgang; Bandarenka, Aliaksandr S; Feliu, Juan

    2017-04-28

    Studies of the entropy of the components forming the electrode/electrolyte interface can give fundamental insights into the properties of electrified interphases. In particular, the potential at which the entropy of formation of the double layer is maximal (the potential of maximum entropy, PME) is an important parameter for the characterization of electrochemical systems, as it influences the majority of electrode processes. In this work, we determine PMEs for Ir(111) electrodes, which currently play an important role in understanding electrocatalysis for energy provision; at the same time, iridium is one of the most stable metals against corrosion. For the experiments, we used a combination of the laser-induced potential transient to determine the PME and CO charge displacement to determine the potentials of zero total charge (E PZTC ). Both the PME and E PZTC were assessed for perchlorate solutions in the pH range from 1 to 4. Surprisingly, we found that they are located in the potential regions where the adsorption of hydrogen and hydroxyl species takes place, respectively. The PMEs shift by ~30 mV per pH unit (on the RHE scale). Connections between the PME and the electrocatalytic properties of the electrode surface are discussed.

  15. Maximum Path Information and Fokker Planck Equation

    NASA Astrophysics Data System (ADS)

    Li, Wei; Wang, Q. A.; LeMehaute, A.

    2008-04-01

    We present a rigorous method to derive the nonlinear Fokker-Planck (FP) equation of anomalous diffusion directly from a generalization of the principle of least action of Maupertuis proposed by Wang [Chaos, Solitons & Fractals 23 (2005) 1253] for smooth or quasi-smooth irregular dynamics evolving as a Markovian process. The FP equation obtained may take two different but equivalent forms. It was also found that the diffusion constant may depend on both q (the index of the Tsallis entropy [J. Stat. Phys. 52 (1988) 479]) and the time t.

  16. Application of digital image processing techniques to astronomical imagery, 1979

    NASA Technical Reports Server (NTRS)

    Lorre, J. J.

    1979-01-01

    Several areas of applications of image processing to astronomy were identified and discussed. These areas include: (1) deconvolution for atmospheric seeing compensation; a comparison between maximum entropy and conventional Wiener algorithms; (2) polarization in galaxies from photographic plates; (3) time changes in M87 and methods of displaying these changes; (4) comparing emission line images in planetary nebulae; and (5) log intensity, hue saturation intensity, and principal component color enhancements of M82. Examples are presented of these techniques applied to a variety of objects.

  17. Approximation of the ruin probability using the scaled Laplace transform inversion

    PubMed Central

    Mnatsakanov, Robert M.; Sarkisian, Khachatur; Hakobyan, Artak

    2015-01-01

    The problem of recovering the ruin probability in the classical risk model based on the scaled Laplace transform inversion is studied. It is shown how to overcome the problem of evaluating the ruin probability at large values of an initial surplus process. Comparisons of proposed approximations with the ones based on the Laplace transform inversions using a fixed Talbot algorithm as well as on the ones using the Trefethen–Weideman–Schmelzer and maximum entropy methods are presented via a simulation study. PMID:26752796

  18. Developing the fuzzy c-means clustering algorithm based on maximum entropy for multitarget tracking in a cluttered environment

    NASA Astrophysics Data System (ADS)

    Chen, Xiao; Li, Yaan; Yu, Jing; Li, Yuxing

    2018-01-01

    For fast and more effective tracking of multiple targets in a cluttered environment, we propose a multiple target tracking (MTT) algorithm, called maximum entropy fuzzy c-means clustering joint probabilistic data association, that combines fuzzy c-means clustering with the joint probabilistic data association (PDA) algorithm. The algorithm uses the membership value to express the probability that a measurement originates from a target. The membership value is obtained from a fuzzy c-means clustering objective function optimized under the maximum entropy principle. To account for the effect of shared measurements, a correction factor adjusts the association probability matrix used to estimate the target states. Because the algorithm avoids splitting the confirmation matrix, it sidesteps the high computational load of the joint PDA algorithm. Simulations and analysis for tracking neighboring parallel targets and crossing targets in cluttered environments of different densities show that the proposed algorithm realizes MTT quickly and efficiently, and that its performance remains stable as the process noise variance increases. The proposed algorithm is efficient and has a low computational load, ensuring good performance when tracking multiple targets in a dense cluttered environment.
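
    The entropy-regularized membership update at the heart of such methods is a Gibbs (softmax) weighting of squared distances, which is the distribution maximizing entropy subject to a mean-distance constraint; a one-dimensional sketch with illustrative values (the regularization weight `lam` is not taken from the paper):

    ```python
    import math

    def entropy_memberships(point, centers, lam=1.0):
        """Maximum-entropy membership update: u_j proportional to exp(-d_j^2 / lam).

        Returns a probability vector over cluster centers, playing the role
        of association probabilities between a measurement and the targets.
        """
        d2 = [(point - c) ** 2 for c in centers]
        w = [math.exp(-d / lam) for d in d2]
        s = sum(w)
        return [wi / s for wi in w]

    # A measurement near the first target receives nearly all of its
    # membership there, mimicking a hard association in low clutter.
    u = entropy_memberships(0.1, centers=[0.0, 5.0])
    ```

    Smaller `lam` sharpens the memberships toward hard assignment; larger `lam` spreads probability across targets, which is what lets the tracker hedge in dense clutter.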

  19. Formulating the shear stress distribution in circular open channels based on the Renyi entropy

    NASA Astrophysics Data System (ADS)

    Khozani, Zohreh Sheikh; Bonakdari, Hossein

    2018-01-01

    The principle of maximum entropy is employed to derive the shear stress distribution by maximizing the Renyi entropy subject to some constraints and by assuming that dimensionless shear stress is a random variable. A Renyi entropy-based equation can be used to model the shear stress distribution along the entire wetted perimeter of circular channels and circular channels with flat beds and deposited sediments. A wide range of experimental results for 12 hydraulic conditions with different Froude numbers (0.375 to 1.71) and flow depths (20.3 to 201.5 mm) were used to validate the derived shear stress distribution. For circular channels, model performance enhanced with increasing flow depth (mean relative error (RE) of 0.0414) and only deteriorated slightly at the greatest flow depth (RE of 0.0573). For circular channels with flat beds, the Renyi entropy model predicted the shear stress distribution well at lower sediment depth. The Renyi entropy model results were also compared with Shannon entropy model results. Both models performed well for circular channels, but for circular channels with flat beds the Renyi entropy model displayed superior performance in estimating the shear stress distribution. The Renyi entropy model was highly precise and predicted the shear stress distribution in a circular channel with RE of 0.0480 and in a circular channel with a flat bed with RE of 0.0488.
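
    For reference, the Renyi entropy reduces to the Shannon entropy as its order approaches 1, which is why the two entropy-based shear stress models coincide in that limit; a quick numerical check:

    ```python
    import math

    def renyi_entropy(probs, alpha):
        """Renyi entropy H_a = log(sum p^a) / (1 - a), for a > 0, a != 1 (nats)."""
        return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

    def shannon_entropy(probs):
        """Shannon entropy in nats, the a -> 1 limit of the Renyi entropy."""
        return -sum(p * math.log(p) for p in probs if p > 0)

    # With an order just above 1 the Renyi value is numerically
    # indistinguishable from the Shannon value.
    p = [0.5, 0.3, 0.2]
    h_near_one = renyi_entropy(p, 1.0001)
    h_shannon = shannon_entropy(p)
    ```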

  20. A novel optimal configuration for redundant MEMS inertial sensors based on the orthogonal rotation method.

    PubMed

    Cheng, Jianhua; Dong, Jinlu; Landry, Rene; Chen, Daidai

    2014-07-29

    In order to improve the accuracy and reliability of micro-electro-mechanical systems (MEMS) navigation systems, a nine-gyro redundant MEMS configuration based on an orthogonal rotation method is presented. By analyzing the accuracy and reliability characteristics of an inertial navigation system (INS), criteria for redundant configuration design are introduced. The orthogonal rotation configuration is then formed through two rotations of a set of orthogonal inertial sensors around a space vector. A feasible installation method is given for the practical engineering realization of the proposed configuration. The performance of the novel configuration is comprehensively compared with that of six other configurations. Simulation and experiment show that the orthogonal rotation configuration has the best reliability, accuracy, and fault detection and isolation (FDI) performance when the number of gyros is nine.
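
    One benefit of building the redundant array from rotated orthonormal triads is that the least-squares rate estimate collapses to a simple average; a sketch with three triads rotated about a single axis (the paper's space-vector rotation geometry will differ):

    ```python
    import math

    def triad(angle):
        """Orthonormal sensing triad: identity axes rotated by `angle` about z.

        A stand-in for the paper's rotation about a general space vector.
        """
        c, s = math.cos(angle), math.sin(angle)
        return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

    def fuse(h_rows, z):
        """Least-squares angular-rate estimate for rotated orthonormal triads.

        Each triad's rows are orthonormal, so H^T H = n_triads * I and the
        normal equations reduce to omega = H^T z / n_triads.
        """
        n_triads = len(h_rows) // 3
        omega = [0.0, 0.0, 0.0]
        for row, zi in zip(h_rows, z):
            for k in range(3):
                omega[k] += row[k] * zi
        return [w / n_triads for w in omega]

    # Nine gyros: three triads rotated by 0, 40 and 80 degrees about z.
    H = [row for a in (0.0, math.radians(40), math.radians(80)) for row in triad(a)]
    true_omega = [0.1, -0.2, 0.3]
    z = [sum(r[k] * true_omega[k] for k in range(3)) for r in H]
    est = fuse(H, z)
    ```

    With noisy gyros the same structure averages the noise down, and comparing each measurement against the fused estimate is the basis for simple FDI residual tests.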
