Sample records for expectation maximization (EM)

  1. Deterministic quantum annealing expectation-maximization algorithm

    NASA Astrophysics Data System (ADS)

    Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki

    2017-11-01

    Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
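
    For context, the plain EM iteration that DQAEM generalizes can be written down in a few lines for a two-component, one-dimensional Gaussian mixture. The sketch below is a minimal illustration of our own (not the authors' implementation); it shows how strongly the fitted optimum depends on the initial means, which is the local-optimum problem the annealing extension is meant to mitigate.

    ```python
    import numpy as np

    def em_gmm_1d(x, mu_init, n_iter=200):
        """Plain EM for a two-component 1D Gaussian mixture.

        The fitted parameters depend strongly on mu_init, illustrating the
        sensitivity to initial configurations noted in the abstract."""
        pi = np.array([0.5, 0.5])
        mu = np.array(mu_init, dtype=float)
        var = np.array([1.0, 1.0])
        for _ in range(n_iter):
            # E-step: responsibilities (posterior component memberships)
            dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            resp = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate mixing weights, means and variances
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-9
        return pi, mu, var

    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
    print(em_gmm_1d(x, [-1.0, 1.0]))  # well-separated initial means
    print(em_gmm_1d(x, [5.0, 5.1]))   # poor initialization can settle on a worse optimum
    ```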

  2. Text Classification for Intelligent Portfolio Management

    DTIC Science & Technology

    2002-05-01

    years including nearest neighbor classification [15], naive Bayes with EM (Expectation Maximization) [11] [13], Winnow with active learning [10... Active Learning and Expectation Maximization (EM). In particular, active learning is used to actively select documents for labeling, then EM assigns... generalization with active learning. Machine Learning, 15(2):201–221, 1994. [3] I. Dagan and P. Engelson. Committee-based sampling for training

  3. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kagie, Matthew J.; Lanterman, Aaron D.

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.

  4. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    ERIC Educational Resources Information Center

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  5. A Comparative Study of Online Item Calibration Methods in Multidimensional Computerized Adaptive Testing

    ERIC Educational Resources Information Center

    Chen, Ping

    2017-01-01

    Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…

  6. An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models

    ERIC Educational Resources Information Center

    Lee, Taehun

    2010-01-01

    In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…

  7. A constraint-based evolutionary learning approach to the expectation maximization for optimal estimation of the hidden Markov model for speech signal modeling.

    PubMed

    Huda, Shamsul; Yearwood, John; Togneri, Roberto

    2009-02-01

    This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable to constraint-based models with many constraints and large numbers of parameters that are usually estimated with EM, such as the HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed, using a constraint-based EA and EM, for better estimation of the HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms the constrained optimization problem into an unconstrained one using Lagrange multipliers. The fusion strategies of the CEL-EM follow a staged approach in which EM is plugged into the EA periodically, after the EA has run for a specified period, to maintain the global sampling capabilities of the EA in the hybrid algorithm. A variable initialization approach (VIA) using variable segmentation has been proposed to provide a better initialization for the EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that the CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a strong EM baseline (VIA-EM, constructed by applying the VIA to EM).

  8. Using an EM Covariance Matrix to Estimate Structural Equation Models with Missing Data: Choosing an Adjusted Sample Size to Improve the Accuracy of Inferences

    ERIC Educational Resources Information Center

    Enders, Craig K.; Peugh, James L.

    2004-01-01

    Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
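
    The EM covariance matrix referred to above comes from the classic EM algorithm for a multivariate normal sample with missing values. A minimal sketch of that algorithm, assuming missing entries are marked as NaN (our own illustration, not the authors' code), is:

    ```python
    import numpy as np

    def em_mvn_missing(X, n_iter=100):
        """EM estimates of the mean and covariance of a multivariate normal
        sample X (rows = cases), with np.nan marking missing entries."""
        n, p = X.shape
        mu = np.nanmean(X, axis=0)
        sigma = np.diag(np.nanvar(X, axis=0))
        for _ in range(n_iter):
            sum_x = np.zeros(p)
            sum_xx = np.zeros((p, p))
            for i in range(n):
                obs = ~np.isnan(X[i])
                xi = np.where(obs, X[i], 0.0)
                C = np.zeros((p, p))
                if (~obs).any():
                    # E-step: conditional mean / covariance of the missing block
                    B = sigma[np.ix_(~obs, obs)] @ np.linalg.inv(sigma[np.ix_(obs, obs)])
                    xi[~obs] = mu[~obs] + B @ (X[i, obs] - mu[obs])
                    C[np.ix_(~obs, ~obs)] = (sigma[np.ix_(~obs, ~obs)]
                                             - B @ sigma[np.ix_(obs, ~obs)])
                sum_x += xi
                sum_xx += np.outer(xi, xi) + C
            # M-step: update mean and covariance from expected sufficient statistics
            mu = sum_x / n
            sigma = sum_xx / n - np.outer(mu, mu)
        return mu, sigma
    ```

    The resulting mean vector and covariance matrix can then be supplied to the SEM analysis, with the adjusted sample size chosen along the lines the article discusses.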

  9. Model-Based Clustering of Regression Time Series Data via APECM -- An AECM Algorithm Sung to an Even Faster Beat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Wei-Chen; Maitra, Ranjan

    2011-01-01

    We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.

  10. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    PubMed

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.

  11. A Heavy Tailed Expectation Maximization Hidden Markov Random Field Model with Applications to Segmentation of MRI

    PubMed Central

    Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego

    2017-01-01

    A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, intensity histograms of White Matter and Gray Matter are not symmetric and they exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) in which the components are modeled using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: firstly, it is more robust to outliers; secondly, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194

  12. Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images

    PubMed Central

    Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali

    2015-01-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077

  13. Time Series Modeling of Nano-Gold Immunochromatographic Assay via Expectation Maximization Algorithm.

    PubMed

    Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui

    2013-12-01

    In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as a stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.

  14. Clustering performance comparison using K-means and expectation maximization algorithms.

    PubMed

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
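
    A comparison of this kind is straightforward to reproduce with standard tooling. The sketch below uses scikit-learn on synthetic placeholder data standing in for the red-wine features (our illustration, not the study's code or data):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture
    from sklearn.metrics import adjusted_rand_score

    rng = np.random.default_rng(1)
    # Placeholder data standing in for the red-wine quality features used in the paper
    X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
    y = np.repeat([0, 1], 200)

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    gm = GaussianMixture(n_components=2, random_state=0).fit(X)  # EM under the hood

    print("K-means ARI:", adjusted_rand_score(y, km.labels_))
    print("EM (GMM) ARI:", adjusted_rand_score(y, gm.predict(X)))
    ```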

  15. A Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    NASA Astrophysics Data System (ADS)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm rests on the assumption that the point cloud can be seen as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters, and using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled as the component with the larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired with the EM method. The proposed algorithm was tested on two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, the dataset provided by the ISPRS was adopted for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
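
    Read this way, the core separation step amounts to fitting a two-component mixture to the point elevations with EM and labelling each point by its larger posterior probability. A minimal sketch under that reading (our own simplification, using elevation only and ignoring the intensity refinement):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def label_ground(z):
        """Label LiDAR points as ground / non-ground from elevations z by fitting
        a two-component Gaussian mixture with EM and assigning each point to the
        component with the larger posterior probability."""
        gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
        post = gm.predict_proba(z.reshape(-1, 1))
        ground_comp = np.argmin(gm.means_.ravel())   # lower-mean component taken as ground
        return post[:, ground_comp] > 0.5

    z = np.concatenate([np.random.normal(102.0, 0.3, 1000),   # ground returns
                        np.random.normal(110.0, 3.0, 300)])   # vegetation / buildings
    print(label_ground(z).sum(), "points labelled as ground")
    ```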

  16. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    PubMed

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
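
    The EM baseline these scoring methods are compared against is the classic ML-EM iteration for Poisson emission data, which takes a simple multiplicative form. A toy sketch with a random system matrix (our illustration, not the paper's implementation):

    ```python
    import numpy as np

    def ml_em(A, y, n_iter=50):
        """Classic ML-EM for emission tomography: y ~ Poisson(A @ lam).

        A : (n_detectors, n_pixels) system matrix, y : measured counts.
        Each iteration is a multiplicative update that keeps lam non-negative."""
        lam = np.ones(A.shape[1])
        sens = A.sum(axis=0)                 # sensitivity image (column sums)
        for _ in range(n_iter):
            proj = A @ lam                   # forward projection
            ratio = y / np.maximum(proj, 1e-12)
            lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return lam

    rng = np.random.default_rng(2)
    A = rng.random((60, 20))
    true = rng.random(20) * 10
    y = rng.poisson(A @ true)
    print(ml_em(A, y)[:5])
    ```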

  17. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    PubMed

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance.
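
    OS-EM accelerates ML-EM by cycling over ordered subsets of the projection data, performing one multiplicative update per subset within each pass. A hedged sketch of the update (generic system matrix, not the dental CT geometry used in the study):

    ```python
    import numpy as np

    def os_em(A, y, n_subsets=4, n_iter=10):
        """Ordered-subsets EM: one ML-EM-style update per projection subset,
        so each full pass over the data performs n_subsets image updates."""
        lam = np.ones(A.shape[1])
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        for _ in range(n_iter):
            for idx in subsets:
                As, ys = A[idx], y[idx]
                proj = As @ lam
                lam *= (As.T @ (ys / np.maximum(proj, 1e-12))) / np.maximum(As.sum(axis=0), 1e-12)
        return lam
    ```

    Because each pass makes n_subsets image updates instead of one, OS-EM typically reaches a useful image in far fewer passes than plain ML-EM, which is why it is attractive when processing time matters.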

  18. Directly reconstructing principal components of heterogeneous particles from cryo-EM images.

    PubMed

    Tagare, Hemant D; Kucukelbir, Alp; Sigworth, Fred J; Wang, Hongwei; Rao, Murali

    2015-08-01

    Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the posterior likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP.

  19. Deterministic annealing for density estimation by multivariate normal mixtures

    NASA Astrophysics Data System (ADS)

    Kloppenburg, Martin; Tavan, Paul

    1997-03-01

    An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
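
    In deterministic annealing EM, the E-step responsibilities are tempered by an inverse temperature that is raised gradually towards one, so early iterations are smoother and less prone to poor local optima. The sketch below shows only the tempered E-step for a 1D Gaussian mixture (our own generic formulation, not the soft-constraint scheme of the paper):

    ```python
    import numpy as np

    def tempered_responsibilities(x, pi, mu, var, beta):
        """E-step of deterministic-annealing EM for a 1D Gaussian mixture.

        beta in (0, 1] is an inverse temperature: small beta flattens the
        responsibilities (more exploration), beta = 1 recovers standard EM."""
        log_dens = (np.log(pi)
                    - 0.5 * np.log(2 * np.pi * var)
                    - (x[:, None] - mu) ** 2 / (2 * var))
        w = beta * log_dens
        w -= w.max(axis=1, keepdims=True)          # numerical stabilization
        resp = np.exp(w)
        return resp / resp.sum(axis=1, keepdims=True)

    # Annealing schedule: run EM at each beta, warm-starting from the previous estimates,
    # e.g. for beta in (0.2, 0.4, 0.7, 1.0); the M-step is identical to standard EM.
    ```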

  20. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Youngrok

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent the survival time distribution of a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well to select the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.

  1. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    NASA Astrophysics Data System (ADS)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates the process of convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.

  2. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  3. Stochastic Approximation Methods for Latent Regression Item Response Models

    ERIC Educational Resources Information Center

    von Davier, Matthias; Sinharay, Sandip

    2010-01-01

    This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…

  4. A Probability Based Framework for Testing the Missing Data Mechanism

    ERIC Educational Resources Information Center

    Lin, Johnny Cheng-Han

    2013-01-01

    Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…

  5. Acceleration of image-based resolution modelling reconstruction using an expectation maximization nested algorithm.

    PubMed

    Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2013-08-07

    Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.

  6. The Effect of Missing Data Handling Methods on Goodness of Fit Indices in Confirmatory Factor Analysis

    ERIC Educational Resources Information Center

    Köse, Alper

    2014-01-01

    The primary objective of this study was to examine the effect of missing data on goodness of fit statistics in confirmatory factor analysis (CFA). For this aim, four missing data handling methods; listwise deletion, full information maximum likelihood, regression imputation and expectation maximization (EM) imputation were examined in terms of…

  7. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    PubMed

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained with the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing.

  8. EM in high-dimensional spaces.

    PubMed

    Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim

    2005-06-01

    This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.

  9. NREL, Giner Evaluated Polymer Electrolyte Membrane for Maximizing Renewable Energy on Grid

    Science.gov Websites

    NREL and Giner evaluated a polymer electrolyte membrane (PEM) stack at the Energy Systems Integration Facility, designed to maximize renewable energy on the grid by converting it to hydrogen when supply exceeds demand.

  10. Maximizing the Benefits of Plug-in Electric Vehicles - Continuum Magazine

    Science.gov Websites

    Article from NREL's Continuum Magazine on maximizing the benefits of plug-in electric vehicles; photo captions reference electric vehicle charging stations in NREL's parking garage and the Testing and Integration Facility (photos by Dennis Schroeder, NREL).

  11. SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction

    PubMed Central

    Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.

    2015-01-01

    Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831

  12. Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM

    NASA Astrophysics Data System (ADS)

    Jiji, G. Wiselin; Dehmeshki, Jamshid

    2014-04-01

    Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multiresolution algorithm is proposed, along with scaled versions using Gaussian filtering and wavelet analysis, that extends the expectation maximization (EM) algorithm. It is found to be less sensitive to noise and to give more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 CT data sets of the human brain and compared with other works. The segmentation results show that the proposed work achieves more promising results, and the results have been verified by doctors.

  13. Estimation of multiple sound sources with data and model uncertainties using the EM and evidential EM algorithms

    NASA Astrophysics Data System (ADS)

    Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme

    2016-01-01

    This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm known as the evidential EM (E2M) algorithm. Finally, both simulations and real experiments illustrate the advantage of using EM in the case without uncertainty and E2M in the case of uncertain measurements.

  14. Application of the EM algorithm to radiographic images.

    PubMed

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

    The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.

  15. Estimation of snow in extratropical cyclones from multiple frequency airborne radar observations. An Expectation-Maximization approach

    NASA Astrophysics Data System (ADS)

    Grecu, M.; Tian, L.; Heymsfield, G. M.

    2017-12-01

    A major challenge in deriving accurate estimates of physical properties of falling snow particles from single frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-reflectivity-ratio (DFR) space. However, the derivation of accurate snow estimates from triple frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that the EM methodology formulated in this study can yield snowfall estimates above the freezing level in ETCs that are consistent with the triple frequency radar observations as well as with independent rainfall estimates below the freezing level.

  16. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    ERIC Educational Resources Information Center

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  17. Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.

    PubMed

    Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias

    2016-12-01

    Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in the clinical practice.

  18. EM algorithm applied for estimating non-stationary region boundaries using electrical impedance tomography

    NASA Astrophysics Data System (ADS)

    Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.

    2010-04-01

    EIT has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of the lungs of patients suffering from diseases such as pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters by inaccurate selection of model parameters are overcome using the EM algorithm. Numerical experiments using a chest-shaped geometry are carried out with the proposed method and the performance is compared with the extended Kalman filter (EKF). Results show superior performance of EM in estimation of the lung boundary.

  19. Semi-supervised Learning for Phenotyping Tasks.

    PubMed

    Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K

    2015-01-01

    Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
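
    The weighting factor mentioned above can be illustrated with a small semi-supervised EM in which the unlabeled data's responsibilities are downweighted by a factor lam. The sketch uses two one-dimensional Gaussian classes as a stand-in for the phenotyping models (our own simplified model, not the authors' pipeline):

    ```python
    import numpy as np

    def semisup_em(x_lab, y_lab, x_unlab, lam=0.3, n_iter=50):
        """Semi-supervised EM for two 1D Gaussian classes.

        Labeled points contribute with weight 1; unlabeled points contribute
        through their responsibilities, downweighted by lam (the 'weighting
        factor' that augmented EM tunes)."""
        mu = np.array([x_lab[y_lab == 0].mean(), x_lab[y_lab == 1].mean()])
        var = np.array([x_lab[y_lab == 0].var() + 1e-6, x_lab[y_lab == 1].var() + 1e-6])
        pi = np.array([np.mean(y_lab == 0), np.mean(y_lab == 1)])
        onehot = np.eye(2)[y_lab]                      # fixed "responsibilities" for labeled data
        for _ in range(n_iter):
            dens = pi * np.exp(-(x_unlab[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
            resp_u = dens / dens.sum(axis=1, keepdims=True)   # E-step on unlabeled data only
            w_lab, w_unlab = onehot, lam * resp_u             # downweight unlabeled evidence
            nk = w_lab.sum(axis=0) + w_unlab.sum(axis=0)
            pi = nk / nk.sum()
            mu = (w_lab * x_lab[:, None] + w_unlab * x_unlab[:, None]).sum(axis=0) / nk
            var = ((w_lab * (x_lab[:, None] - mu) ** 2
                    + w_unlab * (x_unlab[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        return pi, mu, var
    ```

    Setting lam = 0 ignores the unlabeled data entirely, while lam = 1 recovers basic semi-supervised EM; the augmented EM described above tunes this factor rather than fixing it.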

  20. Nonlinear spatio-temporal filtering of dynamic PET data using a four-dimensional Gaussian filter and expectation-maximization deconvolution

    NASA Astrophysics Data System (ADS)

    Floberg, J. M.; Holden, J. E.

    2013-02-01

    We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.

  1. Adaptive Baseline Enhances EM-Based Policy Search: Validation in a View-Based Positioning Task of a Smartphone Balancer

    PubMed Central

    Wang, Jiexin; Uchibe, Eiji; Doya, Kenji

    2017-01-01

    EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters using the maximum of a lower bound of expected return, which makes gradient calculation and learning rate tuning unnecessary. Previous algorithms like Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration implemented mechanisms to discard useless low-return episodes either implicitly or using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean and multiples of SDs from the mean. The simulation results on the benchmark tasks of pendulum swing-up and cart-pole balancing, and of standing up and balancing of a two-wheeled smartphone robot, showed improved performance. We further implemented the adaptive baseline with the mean in our two-wheeled smartphone robot hardware to test its performance in the standing up and balancing task, and in a view-based approaching task. Our results showed that with the adaptive baseline, the method outperformed the previous algorithms and achieved faster and more precise behaviors at a higher success rate. PMID:28167910

  2. Evidential analysis of difference images for change detection of multitemporal remote sensing images

    NASA Astrophysics Data System (ADS)

    Chen, Yin; Peng, Lijuan; Cremers, Armin B.

    2018-03-01

    In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer's theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence theory based EM method (EEM) which incorporates spatial contextual information in EM by iteratively fusing the belief assignments of neighboring pixels to the central pixel. Second, an evidential labeling method in the sense of maximum a posteriori (MAP) probability is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of the difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.

  3. Generalized expectation-maximization segmentation of brain MR images

    NASA Astrophysics Data System (ADS)

    Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.

    2006-03-01

    Manual segmentation of medical images is impractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.

  4. The mean field theory in EM procedures for blind Markov random field image restoration.

    PubMed

    Zhang, J

    1993-01-01

    A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are more visually pleasing.

  5. Comparison between electrically evoked and voluntary isometric contractions for biceps brachii muscle oxidative metabolism using near-infrared spectroscopy.

    PubMed

    Muthalib, Makii; Jubeau, Marc; Millet, Guillaume Y; Maffiuletti, Nicola A; Nosaka, Kazunori

    2009-09-01

    This study compared voluntary (VOL) and electrically evoked isometric contractions by muscle stimulation (EMS) for changes in biceps brachii muscle oxygenation (tissue oxygenation index, DeltaTOI) and total haemoglobin concentration (DeltatHb = oxygenated haemoglobin + deoxygenated haemoglobin) determined by near-infrared spectroscopy. Twelve men performed EMS with one arm followed 24 h later by VOL with the contralateral arm, consisting of 30 repeated (1-s contraction, 1-s relaxation) isometric contractions at 30% of maximal voluntary contraction (MVC) for the first 60 s, and maximal intensity contractions thereafter (MVC for VOL and maximal tolerable current at 30 Hz for EMS) until MVC decreased approximately 30% of pre-exercise MVC. During the 30 contractions at 30% MVC, DeltaTOI decrease was significantly (P < 0.05) greater and DeltatHb was significantly (P < 0.05) lower for EMS than VOL, suggesting that the metabolic demand for oxygen in EMS is greater than VOL at the same torque level. However, during maximal intensity contractions, although EMS torque (approximately 40% of VOL) was significantly (P < 0.05) lower than VOL, DeltaTOI was similar and tHb was significantly (P < 0.05) lower for EMS than VOL towards the end, without significant differences between the two sessions in the recovery period. It is concluded that the oxygen demand of the activated biceps brachii muscle in EMS is comparable to VOL at maximal intensity.

  6. Estimation for general birth-death processes

    PubMed Central

    Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.

    2013-01-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261

  7. Estimation for general birth-death processes.

    PubMed

    Crawford, Forrest W; Minin, Vladimir N; Suchard, Marc A

    2014-04-01

    Birth-death processes (BDPs) are continuous-time Markov chains that track the number of "particles" in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution.

  8. Image segmentation by EM-based adaptive pulse coupled neural networks in brain magnetic resonance imaging.

    PubMed

    Fu, J C; Chen, C C; Chai, J W; Wong, S T C; Li, I C

    2010-06-01

    We propose an automatic hybrid image segmentation model that integrates the statistical expectation maximization (EM) model and the spatial pulse coupled neural network (PCNN) for brain magnetic resonance imaging (MRI) segmentation. In addition, an adaptive mechanism is developed to fine-tune the PCNN parameters. The EM model serves two functions: evaluation of the PCNN image segmentation and adaptive adjustment of the PCNN parameters for optimal segmentation. To evaluate the performance of the adaptive EM-PCNN, we use it to segment MR brain images into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). The performance of the adaptive EM-PCNN is compared with that of the non-adaptive EM-PCNN, EM, and Bias Corrected Fuzzy C-Means (BCFCM) algorithms. The result is four sets of boundaries for the GM and the brain parenchyma (GM+WM), the two regions of most interest in medical research and clinical applications. Each set of boundaries is compared with the gold standard to evaluate the segmentation performance. The adaptive EM-PCNN significantly outperforms the non-adaptive EM-PCNN, EM, and BCFCM algorithms in gray matter segmentation. In brain parenchyma segmentation, the adaptive EM-PCNN significantly outperforms the BCFCM only. However, the adaptive EM-PCNN is better than the non-adaptive EM-PCNN and EM on average. We conclude that of the three approaches, the adaptive EM-PCNN yields the best results for gray matter and brain parenchyma segmentation. Copyright 2009 Elsevier Ltd. All rights reserved.
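
    As a point of reference for the statistical half of such a pipeline, here is a minimal sketch of fitting a three-class Gaussian mixture to voxel intensities with EM (GM, WM, CSF). The PCNN spatial stage and the adaptive parameter tuning described in the abstract are not reproduced; the function and initialization choices are assumptions for illustration.

      import numpy as np

      def gmm_em(intensities, n_classes=3, n_iter=50):
          """Fit a 1-D Gaussian mixture to voxel intensities with EM and return hard labels."""
          x = np.asarray(intensities, dtype=float)
          # crude initialization from intensity quantiles
          mu = np.quantile(x, np.linspace(0.2, 0.8, n_classes))
          var = np.full(n_classes, x.var())
          pi = np.full(n_classes, 1.0 / n_classes)
          for _ in range(n_iter):
              # E-step: posterior probability of each class for every voxel
              dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
              resp = dens / dens.sum(axis=1, keepdims=True)
              # M-step: weighted updates of mixing weights, means, and variances
              nk = resp.sum(axis=0)
              pi = nk / len(x)
              mu = (resp * x[:, None]).sum(axis=0) / nk
              var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
          return resp.argmax(axis=1), mu, var, pi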

  9. Maximum likelihood estimates, from censored data, for mixed-Weibull distributions

    NASA Astrophysics Data System (ADS)

    Jiang, Siyuan; Kececioglu, Dimitri

    1992-06-01

    A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters (a 5-subpopulation mixed-Weibull distribution). Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
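
    The paper's own derivations are not reproduced here; the following is a minimal sketch, under simplifying assumptions, of EM for a k-component Weibull mixture with right-censored data, where mixing weights update in closed form and the per-component Weibull parameters are obtained by a generic numerical M-step (scipy) on the weighted censored log-likelihood.

      import numpy as np
      from scipy.optimize import minimize

      def weibull_logpdf(t, beta, eta):
          return np.log(beta / eta) + (beta - 1) * np.log(t / eta) - (t / eta) ** beta

      def weibull_logsf(t, beta, eta):  # log survival function
          return -(t / eta) ** beta

      def em_mixed_weibull(t, delta, k=2, n_iter=50):
          """EM for a k-component Weibull mixture; delta=1 for observed failures, 0 for right-censored."""
          t, delta = np.asarray(t, float), np.asarray(delta, int)
          pi = np.full(k, 1.0 / k)
          beta = np.ones(k)
          eta = np.quantile(t, np.linspace(0.3, 0.9, k))
          for _ in range(n_iter):
              # E-step: responsibilities use the pdf for failures and the survival function for censored times
              logc = np.where(delta[:, None] == 1,
                              weibull_logpdf(t[:, None], beta, eta),
                              weibull_logsf(t[:, None], beta, eta)) + np.log(pi)
              logc -= logc.max(axis=1, keepdims=True)
              r = np.exp(logc); r /= r.sum(axis=1, keepdims=True)
              # M-step: mixing weights in closed form, Weibull parameters numerically
              pi = r.mean(axis=0)
              for j in range(k):
                  def nll(p, j=j):
                      b, e = np.exp(p)  # optimize on the log scale to keep parameters positive
                      ll = delta * weibull_logpdf(t, b, e) + (1 - delta) * weibull_logsf(t, b, e)
                      return -(r[:, j] * ll).sum()
                  res = minimize(nll, np.log([beta[j], eta[j]]), method="Nelder-Mead")
                  beta[j], eta[j] = np.exp(res.x)
          return pi, beta, eta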

  10. Counting malaria parasites with a two-stage EM-based algorithm using crowdsourced data.

    PubMed

    Cabrera-Bean, Margarita; Pages-Zamora, Alba; Diaz-Vilor, Carles; Postigo-Camps, Maria; Cuadrado-Sanchez, Daniel; Luengo-Oroz, Miguel Angel

    2017-07-01

    Worldwide malaria eradication is currently one of the WHO's main global goals. In this work, we focus on the use of human-machine interaction strategies for low-cost, fast, and reliable malaria diagnosis based on a crowdsourced approach. The technical problem addressed consists in detecting spots in images even under very harsh conditions, when positive objects are very similar to artifacts. The clicks or tags delivered by several annotators labeling an image are modeled as a robust finite mixture, and techniques based on the Expectation-Maximization (EM) algorithm are proposed for accurately counting malaria parasites on thick blood smears obtained by Giemsa-stained microscopy. This approach outperforms traditional methods, as shown through experiments with real data.

  11. Results on the neutron energy distribution measurements at the RECH-1 Chilean nuclear reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilera, P., E-mail: paguilera87@gmail.com; Romero-Barrientos, J.; Universidad de Chile, Dpto. de Física, Facultad de Ciencias, Las Palmeras 3425, Nuñoa, Santiago

    2016-07-07

    Neutron activation experiments have been performed at the RECH-1 Chilean Nuclear Reactor to measure its neutron flux energy distribution. Samples of pure elements were activated to obtain the saturation activities for each reaction. Using gamma-ray spectroscopy, we identified and measured the activity of the reaction product nuclei, obtaining the saturation activities of 20 reactions. GEANT4 and MCNP were used to compute the self-shielding factors to correct the cross section for each element. With the Expectation-Maximization (EM) algorithm we were able to unfold the neutron flux energy distribution at the dry tube position, near the RECH-1 core. In this work, we present the unfolding results using the EM algorithm.
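
    For illustration, a minimal sketch of the multiplicative EM (MLEM-style) unfolding update for measured saturation activities given a response matrix is shown below. The variable names and the discretization into energy groups are assumptions for this sketch, not taken from the paper.

      import numpy as np

      def em_unfold(A, R, n_iter=200, phi0=None):
          """
          MLEM-style unfolding of a neutron flux spectrum.
          A : measured saturation activities, shape (n_reactions,)
          R : response matrix (cross section times bin width), shape (n_reactions, n_energy_bins)
          Returns the unfolded group fluxes phi, shape (n_energy_bins,).
          """
          A = np.asarray(A, float)
          R = np.asarray(R, float)
          phi = np.ones(R.shape[1]) if phi0 is None else np.asarray(phi0, float)
          sens = R.sum(axis=0)                      # sensitivity of each energy bin
          for _ in range(n_iter):
              pred = R @ phi + 1e-30                # predicted activities for the current spectrum
              phi *= (R.T @ (A / pred)) / sens      # multiplicative EM update, keeps phi non-negative
          return phi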

  12. Detection of delamination defects in CFRP materials using ultrasonic signal processing.

    PubMed

    Benammar, Abdessalem; Drai, Redouane; Guessoum, Abderrezak

    2008-12-01

    In this paper, signal processing techniques are tested for their ability to resolve echoes associated with delaminations in carbon fiber-reinforced polymer multi-layered composite materials (CFRP) detected by ultrasonic methods. These methods include split spectrum processing (SSP) and the expectation-maximization (EM) algorithm. A simulation study on defect detection was performed, and results were validated experimentally on CFRP with and without delamination defects taken from aircraft. A comparison of the methods' ability to resolve echoes is made.

  13. Maximize Energy Efficiency in Buildings | Climate Neutral Research Campuses

    Science.gov Websites

    Buildings on a research campus, especially laboratory buildings, often represent the most cost-effective plans, campuses can evaluate the following: Energy Management Building Management New Buildings Design

  14. Marketplace Impact | Transportation Research | NREL

    Science.gov Websites

    Marketplace Impact Marketplace Impact This is the December 2014 issue of the Transportation and Hydrogen Newsletter. An illustration showing the outside of an electric vehicle with some portion of the partnership with the private sector to maximize market impact. Illustration by Joshua Bauer/NREL Public

  15. Steve Frank | NREL

    Science.gov Websites

    Commercial Buildings Research Group. Steve's areas of expertise are electric power distribution systems, DC techniques for maximizing the energy efficiency of electrical distribution systems in commercial buildings

  16. Energy Systems Integration Facility Videos | Energy Systems Integration

    Science.gov Websites

    Facility | NREL Energy Systems Integration Facility Videos Energy Systems Integration Facility Integration Facility NREL + SolarCity: Maximizing Solar Power on Electrical Grids Redefining What's Possible for Renewable Energy: Grid Integration Robot-Powered Reliability Testing at NREL's ESIF Microgrid

  17. Characterization of computer network events through simultaneous feature selection and clustering of intrusion alerts

    NASA Astrophysics Data System (ADS)

    Chen, Siyue; Leung, Henry; Dondo, Maxwell

    2014-05-01

    As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.

  18. Alternative Fuels Data Center

    Science.gov Websites

    remaining 85% of the appropriation to maximize total air pollution reduction and health benefits, improve air quality in areas disproportionately affected by air pollution, leverage additional matching funds

  19. Innovation and Entrepreneurship Events | NREL

    Science.gov Websites

    Innovation and Entrepreneurship Events Innovation and Entrepreneurship Events Industry Growth Forum NREL's annual Industry Growth Forum (IGF) provides clean energy innovators an opportunity to maximize communities. Learn more and register for the 2018 Industry Growth Forum. Text Version

  20. Distributed Optimization and Control | Grid Modernization | NREL

    Science.gov Websites

    developing an innovative, distributed photovoltaic (PV) inverter control architecture that maximizes PV communications systems to support distribution grid operations. The growth of PV capacity has introduced prescribed limits, while fast variations in PV output tend to cause transients that lead to wear-out of

  1. Integrated Energy Solutions Research | Integrated Energy Solutions | NREL

    Science.gov Websites

    that spans the height and width of the wall they are facing. Decision Science and Informatics Enabling decision makers with rigorous, technology-neutral, data-backed decision support to maximize the impact of security in energy systems through analysis, decision support, advanced energy technology development, and

  2. DefenseLink: Securing Afganistan, Stabilization & Growth

    Science.gov Websites

    and maximize legitimate agricultural food crops. Story Army Engineers Repair Levee for Neighborhood near the site of the new agricultural research center in Shindand, Afghanistan. U.S. Army photo by Lt

  3. Coral Bleaching Products - Office of Satellite and Product Operations

    Science.gov Websites

    weeks. One DHW is equivalent to one week of sea surface temperatures one degree Celsius greater than the expected summertime maximum. Two DHWs are equivalent to two weeks at one degree above the expected summertime maximum OR one week of two degrees above the expected summertime maximum. Also called Coral Reef

  4. Steganalysis feature improvement using expectation maximization

    NASA Astrophysics Data System (ADS)

    Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.

    2007-04-01

    Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which focuses on both blind identification, in which only normal images are available for training, and multi-class identification, in which both the clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (Expectation Maximization with mixture models) is investigated to determine whether a digital image contains hidden information. The steganalysis problem is addressed as both anomaly detection and multi-class detection. The various clusters represent clean images and stego images with between 1% and 10% embedding percentage. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.

  5. NREL to Assist in Development and Evaluation of Class 6 Plug-in Hybrid

    Science.gov Websites

    , and emissions, as well as the potential impacts on life-cycle costs, barriers to implementation, and application and maximizing potential energy efficiency, emissions, economic, and performance impacts."

  6. NREL + SolarCity: Maximizing Solar Power on Electrical Grids Video Text

    Science.gov Websites

    Electrical Grids video. RYAN HANLEY: The growth of distributed energy resources is becoming real and tangible . BRYAN HANNEGAN: Solar technologies, particularly those distributed, rooftop, PV solar technologies, add Hawaiian Electric Company was concerned about as far as installing distributed energy resources on their

  7. NREL Bridges Fuels and Engines R&D to Maximize Vehicle Efficiency and

    Science.gov Websites

    innovation-from fuel chemistry, conversion, and combustion to the evaluation of advanced fuels in actual -cylinder engine for advanced compression ignition fuels research will be installed and commissioned in the vehicle performance and emissions research, two engine dynamometer test cells for advanced fuels research

  8. Perovskite Patent Portfolio | Photovoltaic Research | NREL

    Science.gov Websites

    deposition of high-quality perovskite films. These techniques have been published in multiple peer-reviewed substrates that are suitable for high-throughput manufacturing and that can maximize the yield of the % to 3% increase in conversion efficiency when compared to a MAPbI3 film prepared with a standard

  9. NREL Fuels and Engines Research: Maximizing Vehicle Efficiency and

    Science.gov Websites

    Laboratory, we analyze the effects of fuel chemistry on ignition and the potential emissions impacts. Our lab research. It can be used to investigate fuel chemistry effects on current and near-term engine technology , independent control allows for deeper interrogation of fuel effects on future-generation engine strategies

  10. Maximizing Energy Savings for Small Business Text Version | Buildings |

    Science.gov Websites

    owners have a big opportunity to save money and energy, while cutting greenhouse gas emissions. Drawing have the money, nor time, to pursue something like that. Drawing of computer screen, showing NREL's energy and non-energy related benefits. Drawing of money, buildings, machinery, and furniture. Narrator

  11. Maximizing Energy Savings for Small Businesses | Buildings | NREL

    Science.gov Websites

    significant amounts of money and energy, increase profits, promote their business, and cut greenhouse gas goals and save money: NREL's four-page lender's guide with discussion on timing and low-cost methods for information and design and decision support guides, available for free download The USDA's Business and

  12. Multimodality Prediction of Chaotic Time Series with Sparse Hard-Cut EM Learning of the Gaussian Process Mixture Model

    NASA Astrophysics Data System (ADS)

    Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng

    2017-05-01

    The contribution of this work is twofold: (1) a multimodality prediction method of chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide and conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms the traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and insusceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.

  13. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    PubMed

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  14. American Energy Data Challenge - by IdeaScale

    Science.gov Websites

    part of this community and receive updates on Open Data by Design Contest. PUBLIC VOTING HAS CLOSED - WINNERS WILL BE ANNOUNCED THIS MONTH! Arrow About Contest #3: Open Data by Design The Department of Energy will award $17,500 in prizes for the best designs that maximize the potential of our open energy data

  15. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
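
    The PAPA algorithm itself relies on proximity operators for the TV term and is not reproduced here. As a point of reference, the following is a minimal sketch of the conventional ML-EM update that serves as the comparison baseline (and motivates the EM-preconditioner), for a generic non-negative system matrix; names and sizes are illustrative assumptions.

      import numpy as np

      def mlem(y, A, n_iter=50, eps=1e-12):
          """
          Plain ML-EM reconstruction for emission tomography.
          y : measured counts per detector bin, shape (m,)
          A : system matrix mapping image voxels to detector bins, shape (m, n)
          """
          y = np.asarray(y, float)
          A = np.asarray(A, float)
          x = np.ones(A.shape[1])                 # non-negative initial image
          norm = A.sum(axis=0)                    # sensitivity image A^T 1
          for _ in range(n_iter):
              proj = A @ x + eps                  # forward projection
              x *= (A.T @ (y / proj)) / norm      # multiplicative EM update
          return x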

  16. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.

  17. Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model

    PubMed Central

    Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR ) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070

  18. Joint segmentation and deformable registration of brain scans guided by a tumor growth model.

    PubMed

    Gooya, Ali; Pohl, Kilian M; Bilello, Michel; Biros, George; Davatzikos, Christos

    2011-01-01

    This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth.

  19. The EM Method in a Probabilistic Wavelet-Based MRI Denoising

    PubMed Central

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method to address this problem. This method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959

  20. The EM Method in a Probabilistic Wavelet-Based MRI Denoising.

    PubMed

    Martin-Fernandez, Marcos; Villullas, Sergio

    2015-01-01

    Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method to address this problem. This method performs shrinkage of wavelet coefficients based on the conditional probability of being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need to use an estimator of noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images.

  1. Investigation of contrast-enhanced subtracted breast CT images with MAP-EM based on projection-based weighting imaging.

    PubMed

    Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo

    2018-06-01

    Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolved photon counting detector can be helpful to enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, thereby possibly leading to high noise in separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging for reconstruction of CESBCT images acquired using an energy-resolving photon counting detector is proposed, and its performance was investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with FBP based on projection-based weighting imaging. When compared with the energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows a significant improvement in the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.

  2. Performance analysis of EM-based blind detection for ON-OFF keying modulation over atmospheric optical channels

    NASA Astrophysics Data System (ADS)

    Dabiri, Mohammad Taghi; Sadough, Seyed Mohammad Sajad

    2018-04-01

    In free-space optical (FSO) links, atmospheric turbulence leads to scintillation in the received signal. Due to its ease of implementation, intensity modulation with direct detection (IM/DD) based on ON-OFF keying (OOK) is a popular signaling scheme in these systems. Over a turbulence channel, to detect OOK symbols in a blind way, i.e., without sending pilot symbols, an expectation-maximization (EM)-based detection method was recently proposed in the FSO communication literature. However, the performance of EM-based detection methods severely depends on the length of the observation interval (Ls). To choose the optimum values of Ls at target bit error rates (BERs) of FSO communications, which are commonly lower than 10^-9, Monte-Carlo simulations would be very cumbersome and require a very long processing time. To facilitate performance evaluation, in this letter we derive the analytic expressions for BER and outage probability. Numerical results validate the accuracy of our derived analytic expressions. Our results may serve to evaluate the optimum value for Ls without resorting to time-consuming Monte-Carlo simulations.
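
    For illustration only, a minimal sketch of blind OOK detection over one observation window of length Ls is shown below: EM fits a two-component Gaussian mixture to the received samples and the posterior of the brighter component gives the bit decisions. The specific turbulence statistics and the analytic BER expressions of the paper are not reproduced; initialization and names are assumptions.

      import numpy as np

      def em_ook_detect(r, n_iter=30):
          """
          Blind OOK detection: fit a 2-component Gaussian mixture to the received
          samples r (one observation window of length Ls), then threshold.
          Returns hard bit decisions (1 = the component with the larger mean).
          """
          r = np.asarray(r, float)
          mu = np.array([r.min(), r.max()])        # crude init: off/on levels
          var = np.full(2, r.var() / 4 + 1e-12)
          pi = np.array([0.5, 0.5])
          for _ in range(n_iter):
              dens = pi * np.exp(-0.5 * (r[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
              resp = dens / dens.sum(axis=1, keepdims=True)                  # E-step
              nk = resp.sum(axis=0)
              pi, mu = nk / len(r), (resp * r[:, None]).sum(axis=0) / nk     # M-step
              var = (resp * (r[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-12
          on = np.argmax(mu)
          return (resp[:, on] > 0.5).astype(int)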

  3. Detection of the power lines in UAV remote sensed images using spectral-spatial methods.

    PubMed

    Bhola, Rishav; Krishna, Nandigam Hari; Ramesh, K N; Senthilnath, J; Anand, Gautham

    2018-01-15

    In this paper, detection of power lines in images acquired by Unmanned Aerial Vehicle (UAV) based remote sensing is carried out using spectral-spatial methods. Spectral clustering was performed using the K-means and Expectation Maximization (EM) algorithms to classify the pixels into power-line and non-power-line classes. The spectral clustering methods used in this study are parametric in nature; to automate the choice of the number of clusters, the Davies-Bouldin index (DBI) is used. The UAV remote-sensed image is clustered into the number of clusters determined by DBI. The k-cluster image is then merged into two clusters (power lines and non-power lines). Further, spatial segmentation was performed using morphological and geometric operations to eliminate the non-power-line regions. In this study, UAV images acquired at different altitudes and angles were analyzed to validate the robustness of the proposed method. It was observed that EM with spatial segmentation (EM-Seg) performed better than K-means with spatial segmentation (Kmeans-Seg) on most of the UAV images. Copyright © 2017 Elsevier Ltd. All rights reserved.
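
    A minimal sketch of the "automate the number of clusters with DBI" step, using scikit-learn's Gaussian mixture (EM) and Davies-Bouldin score, is shown below. The actual preprocessing of UAV imagery and the subsequent morphological segmentation in the paper are not reproduced; the feature layout is an assumption.

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.metrics import davies_bouldin_score

      def em_cluster_with_dbi(pixels, k_range=range(2, 8), seed=0):
          """
          Cluster pixel feature vectors with EM (Gaussian mixture), choosing the
          number of clusters k that minimizes the Davies-Bouldin index.
          pixels : array of shape (n_pixels, n_features), e.g. flattened RGB values.
          """
          best = None
          for k in k_range:
              gm = GaussianMixture(n_components=k, random_state=seed).fit(pixels)
              labels = gm.predict(pixels)
              dbi = davies_bouldin_score(pixels, labels)
              if best is None or dbi < best[0]:
                  best = (dbi, k, labels)
          return best  # (best DBI, chosen k, per-pixel labels)

      # usage sketch: img is an (H, W, 3) array from a UAV image
      # labels = em_cluster_with_dbi(img.reshape(-1, 3))[2].reshape(img.shape[:2])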

  4. Varying-energy CT imaging method based on EM-TV

    NASA Astrophysics Data System (ADS)

    Chen, Ping; Han, Yan

    2016-11-01

    For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in small, fixed steps. Next, grey-consistency fusion and logarithmic demodulation are applied to obtain complete, lower-noise projections with a high dynamic range (HDR). In addition, to address the noise-suppression problem of analytical methods, EM-TV (expectation maximization-total variation) iterative reconstruction is used. In the process of iteration, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provide a higher reconstruction quality relative to the fusion reconstruction method.

  5. Sparse-view proton computed tomography using modulated proton beams.

    PubMed

    Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong

    2015-02-01

    Proton imaging that uses a modulated proton beam and an intensity detector allows a relatively fast image acquisition compared to the imaging approach based on a trajectory tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The geometric straight-ray model assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and the EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views that are equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. For improving the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors have implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square-error (RMSE). Objects of higher electron density have been reconstructed more accurately than those of lower density. The bone, for example, has been reconstructed within 1% error. EM-based algorithms produced an increased image noise and RMSE as the iteration count reaches about 20, while the POCS-based algorithms showed a monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the region of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although it still seems that the images need to be improved for practical application to treatment planning, proton CT imaging by use of the modulated beams in sparse-view sampling has demonstrated its feasibility.

  6. Simultaneously learning DNA motif along with its position and sequence rank preferences through expectation maximization algorithm.

    PubMed

    Zhang, ZhiZhuo; Chang, Cheng Wei; Hugo, Willy; Cheung, Edwin; Sung, Wing-Kin

    2013-03-01

    Although de novo motifs can be discovered through mining over-represented sequence patterns, this approach misses some real motifs and generates many false positives. To improve accuracy, one solution is to consider some additional binding features (i.e., position preference and sequence rank preference). This information is usually required from the user. This article presents a de novo motif discovery algorithm called SEME (sampling with expectation maximization for motif elicitation), which uses pure probabilistic mixture model to model the motif's binding features and uses expectation maximization (EM) algorithms to simultaneously learn the sequence motif, position, and sequence rank preferences without asking for any prior knowledge from the user. SEME is both efficient and accurate thanks to two important techniques: the variable motif length extension and importance sampling. Using 75 large-scale synthetic datasets, 32 metazoan compendium benchmark datasets, and 164 chromatin immunoprecipitation sequencing (ChIP-Seq) libraries, we demonstrated the superior performance of SEME over existing programs in finding transcription factor (TF) binding sites. SEME is further applied to a more difficult problem of finding the co-regulated TF (coTF) motifs in 15 ChIP-Seq libraries. It identified significantly more correct coTF motifs and, at the same time, predicted coTF motifs with better matching to the known motifs. Finally, we show that the learned position and sequence rank preferences of each coTF reveals potential interaction mechanisms between the primary TF and the coTF within these sites. Some of these findings were further validated by the ChIP-Seq experiments of the coTFs. The application is available online.

  7. Genomic selection and complex trait prediction using a fast EM algorithm applied to genome-wide markers

    PubMed Central

    2010-01-01

    Background The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However a full Bayesian analysis is often not feasible due to the large computational time involved. Results This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. Conclusions emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time. PMID:20969788

  8. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    PubMed Central

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-01-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complimentary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991

  9. Whole-body direct 4D parametric PET imaging employing nested generalized Patlak expectation-maximization reconstruction

    NASA Astrophysics Data System (ADS)

    Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib

    2016-08-01

    Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate K i as a complimentary metric to the semi-quantitative standardized uptake value (SUV). The resulting K i images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit K i bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced K i target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in K i % bias and improved TBR were observed for gPatlak versus sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior K i CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging.

  10. Robust EM Continual Reassessment Method in Oncology Dose Finding

    PubMed Central

    Yuan, Ying; Yin, Guosheng

    2012-01-01

    The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092

  11. Identification of the focal plane wavefront control system using E-M algorithm

    NASA Astrophysics Data System (ADS)

    Sun, He; Kasdin, N. Jeremy; Vanderbei, Robert

    2017-09-01

    In a typical focal plane wavefront control (FPWC) system, such as the adaptive optics system of NASA's WFIRST mission, the efficient controllers and estimators in use are usually model-based. As a result, the modeling accuracy of the system influences the ultimate performance of the control and estimation. Currently, a linear state space model is used and calculated based on lab measurements using Fourier optics. Although the physical model is clearly defined, it is usually biased due to incorrect distance measurements, imperfect diagnoses of the optical aberrations, and our lack of knowledge of the deformable mirrors (actuator gains and influence functions). In this paper, we present a new approach for measuring/estimating the linear state space model of an FPWC system using the expectation-maximization (E-M) algorithm. Simulation and lab results in Princeton's High Contrast Imaging Lab (HCIL) show that the E-M algorithm handles both amplitude and phase errors well and accurately recovers the system. Using the recovered state space model, the controller creates dark holes faster. The final accuracy of the model depends on the amount of data used for learning.

  12. Improved Correction of Atmospheric Pressure Data Obtained by Smartphones through Machine Learning

    PubMed Central

    Kim, Yong-Hyuk; Ha, Ji-Hun; Kim, Na-Young; Im, Hyo-Hyuc; Sim, Sangjin; Choi, Reno K. Y.

    2016-01-01

    A correction method using machine learning aims to improve the conventional linear regression (LR) based method for correction of atmospheric pressure data obtained by smartphones. The method proposed in this study conducts clustering and regression analysis with time domain classification. Data obtained in Gyeonggi-do, one of the most populous provinces in South Korea surrounding Seoul, with an area of 10,000 km2, from July 2014 through December 2014, using smartphones were classified with respect to time of day (daytime or nighttime) as well as day of the week (weekday or weekend) and the user's mobility, prior to the expectation-maximization (EM) clustering. Subsequently, the results were analyzed for comparison by applying machine learning methods such as multilayer perceptron (MLP) and support vector regression (SVR). The results showed a mean absolute error (MAE) 26% lower on average when regression analysis was performed through EM clustering compared to that obtained without EM clustering. Among the machine learning methods, the MAE for SVR was around 31% lower than that for LR and about 19% lower than that for MLP. It is concluded that pressure data from smartphones are as good as those from the national automatic weather station (AWS) network. PMID:27524999
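
    The following is a minimal sketch of the "EM clustering followed by per-cluster regression" idea using scikit-learn (GaussianMixture plus SVR). The actual features, time-domain classification, and model tuning used in the study are not reproduced; the sketch assumes every cluster receives training samples.

      import numpy as np
      from sklearn.mixture import GaussianMixture
      from sklearn.svm import SVR

      def fit_cluster_regressors(X, y, n_clusters=4, seed=0):
          """EM-cluster the inputs, then fit one SVR per cluster (X: features, y: reference pressure)."""
          gm = GaussianMixture(n_components=n_clusters, random_state=seed).fit(X)
          labels = gm.predict(X)
          models = {k: SVR().fit(X[labels == k], y[labels == k]) for k in range(n_clusters)}
          return gm, models

      def predict(gm, models, X_new):
          """Route each new sample to its EM cluster and apply that cluster's regressor."""
          labels = gm.predict(X_new)
          out = np.empty(len(X_new))
          for k, m in models.items():
              mask = labels == k
              if mask.any():
                  out[mask] = m.predict(X_new[mask])
          return out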

  13. Inferential Precision in Single-Case Time-Series Data Streams: How Well Does the EM Procedure Perform When Missing Observations Occur in Autocorrelated Data?

    PubMed Central

    Smith, Justin D.; Borckardt, Jeffrey J.; Nash, Michael R.

    2013-01-01

    The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Mainly, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization Algorithm, more commonly known as the EM Procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. PMID:22697454
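
    For intuition only, here is a simplified sketch (not the exact EM Procedure used in the simulations) that iterates conditional-mean imputation of isolated missing points in a lag-1 autocorrelated series with re-estimation of the autocorrelation; the second-moment corrections of a full EM are omitted, and the assumptions (isolated gaps, not at the series ends) are stated in the code.

      import numpy as np

      def em_style_impute_ar1(y, missing, n_iter=25):
          """
          Iteratively impute isolated missing points in a lag-1 autocorrelated series.
          y       : observations (values at missing positions may be NaN)
          missing : boolean mask of missing positions (assumed isolated and not at the ends)
          Note: this is conditional-mean imputation with a re-estimated phi, a simplified
          stand-in for the full EM Procedure (second-moment corrections are omitted).
          """
          y = np.asarray(y, float).copy()
          y[missing] = np.nanmean(np.where(missing, np.nan, y))   # start from the observed mean
          for _ in range(n_iter):
              mu = y.mean()
              d = y - mu
              phi = np.sum(d[1:] * d[:-1]) / np.sum(d[:-1] ** 2)  # lag-1 autocorrelation estimate
              for t in np.flatnonzero(missing):
                  # conditional mean of an AR(1) value given both (observed) neighbors
                  y[t] = mu + phi * (d[t - 1] + d[t + 1]) / (1.0 + phi ** 2)
          return y, phi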

  14. Estimation of mating system parameters in plant populations using marker loci with null alleles.

    PubMed

    Ross, H A

    1986-06-01

    An Expectation-Maximization (EM)-algorithm procedure is presented that extends the method of Cheliak et al. (1983) for maximum-likelihood estimation of mating system parameters in mixed mating system models. The extension permits the estimation of the rate of self-fertilization (s) and of allele frequencies (Pi) in outcrossing pollen at marker loci having recessive null alleles. The algorithm makes use of maternal and filial genotypic arrays obtained by the electrophoretic analysis of cohorts of progeny. The genotypes of maternal plants must be known. Explicit equations are given for cases when the genotype of the maternal gamete inherited by a seed can (gymnosperms) or cannot (angiosperms) be determined. The procedure can accommodate any number of codominant alleles, but only one recessive null allele at each locus. An example, using actual data from Pinus banksiana, is presented to illustrate the application of this EM algorithm to the estimation of mating system parameters using marker loci having both codominant and recessive alleles.

  15. Estimation of parameters in Shot-Noise-Driven Doubly Stochastic Poisson processes using the EM algorithm--modeling of pre- and postsynaptic spike trains.

    PubMed

    Mino, H

    2007-01-01

    The aim is to estimate the parameters, namely the impulse response (IR) functions of the linear time-invariant systems generating the intensity processes, in Shot-Noise-Driven Doubly Stochastic Poisson Processes (SND-DSPPs), in which multivariate presynaptic spike trains and postsynaptic spike trains can be assumed to be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.

  16. Application and performance of an ML-EM algorithm in NEXT

    NASA Astrophysics Data System (ADS)

    Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.

    2017-08-01

    The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.

  17. Test of 3D CT reconstructions by EM + TV algorithm from undersampled data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da

    2013-05-06

    Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves ionizing radiation exposure of patients. Therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation Based Model for CT Reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections in comparison with the usual filtered back projection (FBP) technique. Thus, it could significantly reduce the overall dose of radiation in CT. This work reports the results of an independent numerical simulation for cone beam CT geometry with alternative virtual phantoms. As in the original report, the 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.

  18. NREL Begins On-Site Validation of Drivetrain Gearbox and Bearings | News |

    Science.gov Websites

    drivetrain failure often leads to higher-than-expected operations and maintenance costs. NREL researchers operations and maintenance costs for the wind industry. The validation is expected to last through the spring

  19. Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors

    PubMed Central

    Pan, Jin; Ma, Boyuan

    2018-01-01

    This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superposition of phase measurements from multiple sources, into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also yield optimal estimates. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived to characterize the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323

  20. Effective regurgitant orifice area by the color Doppler flow convergence method for evaluating the severity of chronic aortic regurgitation. An animal study.

    PubMed

    Shiota, T; Jones, M; Yamada, I; Heinrich, R S; Ishii, M; Sinclair, B; Holcomb, S; Yoganathan, A P; Sahn, D J

    1996-02-01

    The aim of the present study was to evaluate dynamic changes in aortic regurgitant (AR) orifice area with the use of calibrated electromagnetic (EM) flowmeters and to validate a color Doppler flow convergence (FC) method for evaluating effective AR orifice area and regurgitant volume. In 6 sheep, 8 to 20 weeks after surgically induced AR, 22 hemodynamically different states were studied. Instantaneous regurgitant flow rates were obtained by aortic and pulmonary EM flowmeters balanced against each other. Instantaneous AR orifice areas were determined by dividing these actual AR flow rates by the corresponding continuous wave velocities (over 25 to 40 points during each diastole) matched for each steady state. Echo studies were performed to obtain maximal aliasing distances of the FC in a low range (0.20 to 0.32 m/s) and a high range (0.70 to 0.89 m/s) of aliasing velocities; the corresponding maximal AR flow rates were calculated using the hemispheric flow convergence assumption for the FC isovelocity surface. AR orifice areas were derived by dividing the maximal flow rates by the maximal continuous wave Doppler velocities. AR orifice sizes obtained with the use of EM flowmeters showed little change during diastole. Maximal and time-averaged AR orifice areas during diastole obtained by EM flowmeters ranged from 0.06 to 0.44 cm2 (mean, 0.24 +/- 0.11 cm2) and from 0.05 to 0.43 cm2 (mean, 0.21 +/- 0.06 cm2), respectively. Maximal AR orifice areas by FC using low aliasing velocities overestimated reference EM orifice areas; however, at high AV, FC predicted the reference areas more reliably (0.25 +/- 0.16 cm2, r = .82, difference = 0.04 +/- 0.07 cm2). The product of the maximal orifice area obtained by the FC method using high AV and the velocity time integral of the regurgitant orifice velocity showed good agreement with regurgitant volumes per beat (r = .81, difference = 0.9 +/- 7.9 mL/beat). This study, using strictly quantified AR volume, demonstrated little change in AR orifice size during diastole. When high aliasing velocities are chosen, the FC method can be useful for determining effective AR orifice size and regurgitant volume.

  1. Deep neural network and noise classification-based speech enhancement

    NASA Astrophysics Data System (ADS)

    Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei

    2017-07-01

    In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
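
    As a rough illustration of the noise-classification stage only, the sketch below fits one Gaussian mixture model per noise type by EM (scikit-learn's GaussianMixture) and assigns a block of test frames to the type with the highest log-likelihood. The feature matrices stand in for MFCC frames and the noise-type names are hypothetical; the DNN enhancement stage and the VAD-based update are not shown.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# stand-in "MFCC" features: one matrix of frames per known noise type
train = {
    "babble": rng.normal(0.0, 1.0, size=(500, 13)),
    "street": rng.normal(2.0, 1.5, size=(500, 13)),
}

# one GMM per noise type, fitted by the EM algorithm inside GaussianMixture
models = {name: GaussianMixture(n_components=4, random_state=0).fit(X)
          for name, X in train.items()}

def classify_noise(frames):
    """Pick the noise type whose GMM gives the highest average log-likelihood."""
    scores = {name: gmm.score(frames) for name, gmm in models.items()}
    return max(scores, key=scores.get)

test_frames = rng.normal(2.1, 1.4, size=(50, 13))   # should resemble "street"
print(classify_noise(test_frames))
```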

  2. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    PubMed

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis of early stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification with each group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions are selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR × (1 − FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.

  3. Probabilistic n/γ discrimination with robustness against outliers for use in neutron profile monitors

    NASA Astrophysics Data System (ADS)

    Uchida, Y.; Takada, E.; Fujisaki, A.; Kikuchi, T.; Ogawa, K.; Isobe, M.

    2017-08-01

    A method to stochastically discriminate neutron and γ-ray signals measured with a stilbene organic scintillator is proposed. Each pulse signal was stochastically categorized into two groups: neutron and γ-ray. In previous work, the Expectation Maximization (EM) algorithm was used with the assumption that the measured data followed a Gaussian mixture distribution. It was shown that probabilistic discrimination between these groups is possible. Moreover, by setting the initial parameters for the Gaussian mixture distribution with a k-means algorithm, the possibility of automatic discrimination was demonstrated. In this study, the Student's t-mixture distribution was used as a probabilistic distribution with the EM algorithm to improve the robustness against the effect of outliers caused by pileup of the signals. To validate the proposed method, the figures of merit (FOMs) were compared for the EM algorithm assuming a t-mixture distribution and a Gaussian mixture distribution. The t-mixture distribution resulted in an improvement of the FOMs compared with the Gaussian mixture distribution. The proposed data processing technique is a promising tool not only for neutron and γ-ray discrimination in fusion experiments but also in other fields, for example, homeland security, cancer therapy with high energy particles, nuclear reactor decommissioning, pattern recognition, and so on.
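
    For orientation, the sketch below runs the Gaussian-mixture baseline of this idea: a two-component mixture is fitted by EM to a one-dimensional pulse-shape feature, each pulse gets a posterior probability of belonging to either group, and the figure of merit is computed from the fitted means and widths. The feature values are synthetic placeholders, and the Student's t-mixture variant used in the study for robustness to pileup outliers is not implemented here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# stand-in pulse-shape-discrimination feature (e.g., tail-to-total charge ratio)
gamma_pulses   = rng.normal(0.18, 0.02, 4000)
neutron_pulses = rng.normal(0.30, 0.03, 1000)
psd = np.concatenate([gamma_pulses, neutron_pulses]).reshape(-1, 1)

# EM fit of a two-component Gaussian mixture; predict_proba gives per-pulse posteriors
gmm = GaussianMixture(n_components=2, random_state=0).fit(psd)
posteriors = gmm.predict_proba(psd)

# figure of merit: separation of the two peaks over the sum of their FWHMs
mu = gmm.means_.ravel()
fwhm = 2.355 * np.sqrt(gmm.covariances_.ravel())
fom = abs(mu[0] - mu[1]) / fwhm.sum()
print(f"FOM = {fom:.2f}")
```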

  4. MUSIC-Expected maximization gaussian mixture methodology for clustering and detection of task-related neuronal firing rates.

    PubMed

    Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A

    2017-01-15

    Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Robust statistical reconstruction for charged particle tomography

    DOEpatents

    Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W

    2013-10-08

    Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data to determine the probability distribution of charged particle scattering using a statistical multiple scattering model, and to determine a substantially maximum likelihood estimate of object volume scattering density using a maximum likelihood/expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence of and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.

  6. Algorithmic detectability threshold of the stochastic block model

    NASA Astrophysics Data System (ADS)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  7. Why Contextual Preference Reversals Maximize Expected Value

    PubMed Central

    2016-01-01

    Contextual preference reversals occur when a preference for one option over another is reversed by the addition of further options. It has been argued that the occurrence of preference reversals in human behavior shows that people violate the axioms of rational choice and that people are not, therefore, expected value maximizers. In contrast, we demonstrate that if a person is only able to make noisy calculations of expected value and noisy observations of the ordinal relations among option features, then the expected value maximizing choice is influenced by the addition of new options and does give rise to apparent preference reversals. We explore the implications of expected value maximizing choice, conditioned on noisy observations, for a range of contextual preference reversal types—including attraction, compromise, similarity, and phantom effects. These preference reversal types have played a key role in the development of models of human choice. We conclude that experiments demonstrating contextual preference reversals are not evidence for irrationality. They are, however, a consequence of expected value maximization given noisy observations. PMID:27337391

  8. A segmentation/clustering model for the analysis of array CGH data.

    PubMed

    Picard, F; Robin, S; Lebarbier, E; Daudin, J-J

    2007-09-01

    Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.

  9. Compression of strings with approximate repeats.

    PubMed

    Allison, L; Edgoose, T; Dix, T I

    1998-01-01

    We describe a model for strings of characters that is loosely based on the Lempel Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n²) time and a few iterations are typically sufficient. O(n²) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.

  10. Inferential precision in single-case time-series data streams: how well does the em procedure perform when missing observations occur in autocorrelated data?

    PubMed

    Smith, Justin D; Borckardt, Jeffrey J; Nash, Michael R

    2012-09-01

    The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Mainly, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization Algorithm, more commonly known as the EM Procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. Copyright © 2011. Published by Elsevier Ltd.
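
    A minimal sketch of EM-style imputation under a multivariate normal model is given below. It uses the common regression-imputation simplification (the conditional-covariance correction to the sufficient statistics in the full EM is omitted) and synthetic bivariate data, so it illustrates the general idea rather than the exact procedure evaluated in the study.

```python
import numpy as np

def em_impute(X, n_iter=100):
    """Iteratively fill np.nan entries under a multivariate normal model."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])   # crude initial fill
    for _ in range(n_iter):
        mu = X.mean(axis=0)                           # M-step (simplified)
        sigma = np.cov(X, rowvar=False)
        for i in range(X.shape[0]):                   # E-step: conditional means
            m, o = miss[i], ~miss[i]
            if not m.any() or not o.any():
                continue
            s_oo = sigma[np.ix_(o, o)]
            s_mo = sigma[np.ix_(m, o)]
            X[i, m] = mu[m] + s_mo @ np.linalg.solve(s_oo, X[i, o] - mu[o])
    return X, X.mean(axis=0), np.cov(X, rowvar=False)

rng = np.random.default_rng(3)
data = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=200)
holes = rng.random(data.shape) < 0.2                  # ~20% missing at random
data_missing = np.where(holes, np.nan, data)
completed, mu_hat, sigma_hat = em_impute(data_missing)
print(np.round(sigma_hat, 2))
```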

  11. Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.

    PubMed

    Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman

    2010-08-07

    We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posterior reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.

  12. Mismatch removal via coherent spatial relations

    NASA Astrophysics Data System (ADS)

    Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen

    2014-07-01

    We propose a method for removing mismatches from the given putative point correspondences in image pairs based on "coherent spatial relations." Under the Bayesian framework, we formulate our approach as a maximum likelihood problem and solve a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating it as being either an inlier or an outlier, and alternatively estimates the inlier set and recovers the coherent spatial relation. It can handle not only the case of image pairs with rigid motions but also the case of image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and thin-plate spline as models for rigid and nonrigid cases, respectively. The mismatches could be successfully removed via the coherent spatial relations after the EM algorithm converges. The quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods, it is not affected by low initial correct match percentages, and is robust to most geometric transformations including a large viewing angle, image rotation, and affine transformation.
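
    The inlier/outlier structure of such an EM formulation can be shown with a much simpler analogue: robust line fitting in which residuals follow a mixture of a Gaussian (inliers) and a uniform density (outliers). The sketch below is only that analogue, with made-up data and parameters; the paper's actual spatial models (two-view geometry and thin-plate splines) are not implemented.

```python
import numpy as np

def em_inlier_line_fit(x, y, n_iter=50, outlier_range=100.0):
    """EM for a line fit where each point is latently an inlier or an outlier."""
    w = np.full(len(x), 0.5)                 # initial inlier responsibilities
    for _ in range(n_iter):
        # M-step: weighted least squares, noise variance, mixing proportion
        A = np.column_stack([x, np.ones_like(x)])
        W = np.diag(w)
        a, b = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        r2 = (y - (a * x + b)) ** 2
        sigma2 = (w @ r2) / w.sum()
        gamma = w.mean()
        # E-step: posterior probability that each point is an inlier
        p_in = gamma * np.exp(-0.5 * r2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        p_out = (1 - gamma) / outlier_range
        w = p_in / (p_in + p_out)
    return a, b, w

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.3, 200)
y[:40] = rng.uniform(-50, 50, 40)            # 20% gross outliers ("mismatches")
a, b, w = em_inlier_line_fit(x, y)
print(round(a, 2), round(b, 2), int((w[:40] > 0.5).sum()), "outliers kept")
```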

  13. Using a latent variable model with non-constant factor loadings to examine PM2.5 constituents related to secondary inorganic aerosols.

    PubMed

    Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N

    2016-04-01

    Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM 2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model that includes non-constant factor loadings that change over time and space using P-spline penalized with the generalized cross-validation (GCV) criterion. The model is implemented using the Expectation-Maximization (EM) algorithm and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.

  14. Gretchen Ohlhausen | NREL

    Science.gov Websites

    School of Mines studying Mechanical Engineering and Computer Science, expected to graduate in 2019 lithium-ion and lithium sulfur batteries. Education B.S. Mechanical Engineering, Colorado School of Mines Gretchen Ohlhausen Photo of Gretchen Ohlhausen Gretchen Ohlhausen Undergraduate III-Mechanical

  15. Transportation Options | Climate Neutral Research Campuses | NREL

    Science.gov Websites

    Transportation Options Transportation Options Transportation to, from, and within a research campus from business travel often enlarge the footprint more than expected. To understand options for climate

  16. Frederick Reines and the Neutrino

    Science.gov Websites

    the Detection of the Free Neutrino, DOE Technical Report, August 1953 The Free Antineutrino Absorption Cross Section. Part I. Measurement of the Free Antineutrino Absorption Cross Section. Part II. Expected

  17. Constrained Fisher Scoring for a Mixture of Factor Analyzers

    DTIC Science & Technology

    2016-09-01

    ...expectation-maximization algorithm with similar computational requirements. Lastly, we demonstrate the efficacy of the proposed method for learning a... Report contents: 3.6 Relationship with Expectation-Maximization; 4. Simulation Examples; 4.1 Synthetic MFA Example; 4.2 Manifold Learning Example.

  18. Biceps brachii muscle oxygenation in electrical muscle stimulation.

    PubMed

    Muthalib, Makii; Jubeau, Marc; Millet, Guillaume Y; Maffiuletti, Nicola A; Ferrari, Marco; Nosaka, Kazunori

    2010-09-01

    The purpose of this study was to compare between electrical muscle stimulation (EMS) and maximal voluntary (VOL) isometric contractions of the elbow flexors for changes in biceps brachii muscle oxygenation (tissue oxygenation index, TOI) and haemodynamics (total haemoglobin volume, tHb = oxygenated-Hb + deoxygenated-Hb) determined by near-infrared spectroscopy (NIRS). The biceps brachii muscle of 10 healthy men (23-39 years) was electrically stimulated at high frequency (75 Hz) via surface electrodes to evoke 50 intermittent (4-s contraction, 15-s relaxation) isometric contractions at maximum tolerated current level (EMS session). The contralateral arm performed 50 intermittent (4-s contraction, 15-s relaxation) maximal voluntary isometric contractions (VOL session) in a counterbalanced order separated by 2-3 weeks. Results indicated that although the torque produced during EMS was approximately 50% of VOL (P<0.05), there was no significant difference in the changes in TOI amplitude or TOI slope between EMS and VOL over the 50 contractions. However, the TOI amplitude divided by peak torque was approximately 50% lower for EMS than VOL (P<0.05), which indicates EMS was less efficient than VOL. This seems likely because of the difference in the muscles involved in the force production between conditions. Mean decrease in tHb amplitude during the contraction phases was significantly (P<0.05) greater for EMS than VOL from the 10th contraction onwards, suggesting that the muscle blood volume was lower in EMS than VOL. It is concluded that local oxygen demand of the biceps brachii sampled by NIRS is similar between VOL and EMS.

  19. Torque decrease during submaximal evoked contractions of the quadriceps muscle is linked not only to muscle fatigue.

    PubMed

    Matkowski, Boris; Lepers, Romuald; Martin, Alain

    2015-05-01

    The aim of this study was to analyze the neuromuscular mechanisms involved in the torque decrease induced by submaximal electromyostimulation (EMS) of the quadriceps muscle. It was hypothesized that torque decrease after EMS would reflect the fatigability of the activated motor units (MUs), but also a reduction in the number of MUs recruited as a result of changes in axonal excitability threshold. Two experiments were performed on 20 men to analyze 1) the supramaximal twitch superimposed and evoked at rest during EMS (Experiment 1, n = 9) and 2) the twitch response and torque-frequency relation of the MUs activated by EMS (Experiment 2, n = 11). Torque loss was assessed by 15 EMS-evoked contractions (50 Hz; 6 s on/6 s off), elicited at a constant intensity that evoked 20% of the maximal voluntary contraction (MVC) torque. The same stimulation intensity delivered over the muscles was used to induce the torque-frequency relation and the single electrical pulse evoked after each EMS contraction (Experiment 2). In Experiment 1, supramaximal twitch was induced by femoral nerve stimulation. Torque decreased by ~60% during EMS-evoked contractions and by only ~18% during MVCs. This was accompanied by a rightward shift of the torque-frequency relation of MUs activated and an increase of the ratio between the superimposed and posttetanic maximal twitch evoked during EMS contraction. These findings suggest that the torque decrease observed during submaximal EMS-evoked contractions involved muscular mechanisms but also a reduction in the number of MUs recruited due to changes in axonal excitability. Copyright © 2015 the American Physiological Society.

  20. Exploring expectation effects in EMDR: does prior treatment knowledge affect the degrading effects of eye movements on memories?

    PubMed Central

    Littel, Marianne; van Schie, Kevin; van den Hout, Marcel A.

    2017-01-01

    ABSTRACT Background: Eye movement desensitization and reprocessing (EMDR) is an effective psychological treatment for posttraumatic stress disorder. Recalling a memory while simultaneously making eye movements (EM) decreases a memory’s vividness and/or emotionality. It has been argued that non-specific factors, such as treatment expectancy and experimental demand, may contribute to the EMDR’s effectiveness. Objective: The present study was designed to test whether expectations about the working mechanism of EMDR would alter the memory attenuating effects of EM. Two experiments were conducted. In Experiment 1, we examined the effects of pre-existing (non-manipulated) knowledge of EMDR in participants with and without prior knowledge. In Experiment 2, we experimentally manipulated prior knowledge by providing participants without prior knowledge with correct or incorrect information about EMDR’s working mechanism. Method: Participants in both experiments recalled two aversive, autobiographical memories during brief sets of EM (Recall+EM) or keeping eyes stationary (Recall Only). Before and after the intervention, participants scored their memories on vividness and emotionality. A Bayesian approach was used to compare two competing hypotheses on the effects of (existing/given) prior knowledge: (1) Prior (correct) knowledge increases the effects of Recall+EM vs. Recall Only, vs. (2) prior knowledge does not affect the effects of Recall+EM. Results: Recall+EM caused greater reductions in memory vividness and emotionality than Recall Only in all groups, including the incorrect information group. In Experiment 1, both hypotheses were supported by the data: prior knowledge boosted the effects of EM, but only modestly. In Experiment 2, the second hypothesis was clearly supported over the first: providing knowledge of the underlying mechanism of EMDR did not alter the effects of EM. Conclusions: Recall+EM appears to be quite robust against the effects of prior expectations. As Recall+EM is the core component of EMDR, expectancy effects probably contribute little to the effectiveness of EMDR treatment. PMID:29038685

  1. PEER Transportation Research Program | PEER Transportation Research Program

    Science.gov Websites

    methodologies, integrating fundamental knowledge, enabling technologies, and systems. We further expect that the Bayesian Framework for Performance Assessment and Risk Management of Transportation Systems subject to Earthquakes Directivity Modeling for NGA West2 Ground Motion Studies for Transportation Systems Performance

  2. A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.

    PubMed

    Lloyd, Chris J

    2008-09-01

    We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016 accomplishments and primary areas of focus for the Department of Energy's (DOE's) Office of Environmental Management and EM sites are presented. For DOE EM, these include Focusing on the Field, Teaming with Cleanup Partners, Developing New Technology, and Maximizing Cleanup Dollars. Major 2016 achievements are highlighted for EM, Richland Operations Office, Office of River Protection, Savannah River Site, Oak Ridge, Idaho, Waste Isolation Pilot Plant, Los Alamos, Portsmouth, Paducah, West Valley Demonstration Project, and the Nevada National Security Site.

  4. Multimodal Event Detection in Twitter Hashtag Networks

    DOE PAGES

    Yilmaz, Yasin; Hero, Alfred O.

    2016-07-01

    In this study, event detection in a multimodal Twitter dataset is considered. We treat the hashtags in the dataset as instances with two modes: text and geolocation features. The text feature consists of a bag-of-words representation. The geolocation feature consists of geotags (i.e., geographical coordinates) of the tweets. Fusing the multimodal data we aim to detect, in terms of topic and geolocation, the interesting events and the associated hashtags. To this end, a generative latent variable model is assumed, and a generalized expectation-maximization (EM) algorithm is derived to learn the model parameters. The proposed method is computationally efficient, and lends itself to big datasets. Lastly, experimental results on a Twitter dataset from August 2014 show the efficacy of the proposed method.

  5. Multiresolution texture models for brain tumor segmentation in MRI.

    PubMed

    Iftekharuddin, Khan M; Ahmed, Shaheen; Hossen, Jakir

    2011-01-01

    In this study we discuss different types of texture features such as Fractal Dimension (FD) and Multifractional Brownian Motion (mBm) for estimating random structures and varying appearance of brain tissues and tumors in magnetic resonance images (MRI). We use different selection techniques, including Kullback-Leibler Divergence (KLD), for ranking different texture and intensity features. We then exploit graph cut, self-organizing maps (SOM) and expectation maximization (EM) techniques to fuse selected features for brain tumor segmentation in multimodality T1, T2, and FLAIR MRI. We use different similarity metrics to evaluate quality and robustness of these selected features for tumor segmentation in MRI for real pediatric patients. We also demonstrate a non-patient-specific automated tumor prediction scheme by using improved AdaBoost classification based on these image features.

  6. Sampling-based ensemble segmentation against inter-operator variability

    NASA Astrophysics Data System (ADS)

    Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew

    2011-03-01

    Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated by a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p<.001).
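
    As a point of reference for the combination step, the sketch below implements only the simplest of the three combiners, majority voting over a stack of perturbed binary segmentations; the averaging and EM-based fusion variants, and the data themselves, are illustrative stand-ins rather than material from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# stack of binary segmentations from perturbed runs: (n_runs, H, W)
truth = np.zeros((64, 64), dtype=bool)
truth[20:44, 20:44] = True                              # synthetic "tumor"
runs = np.array([truth ^ (rng.random(truth.shape) < 0.05) for _ in range(9)])

# majority voting: a voxel is foreground if more than half of the runs say so
consensus = runs.mean(axis=0) > 0.5

agreement = (consensus == truth).mean()
print(f"consensus agrees with ground truth on {agreement:.1%} of voxels")
```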

  7. PREMIX: PRivacy-preserving EstiMation of Individual admiXture.

    PubMed

    Chen, Feng; Dow, Michelle; Ding, Sijie; Lu, Yao; Jiang, Xiaoqian; Tang, Hua; Wang, Shuang

    2016-01-01

    In this paper we propose a framework, PRivacy-preserving EstiMation of Individual admiXture (PREMIX), using Intel Software Guard Extensions (SGX). SGX is a suite of software and hardware architectures that enables efficient and secure computation over confidential data. PREMIX enables multiple sites to securely collaborate on estimating individual admixture within a secure enclave inside Intel SGX. We implemented a feature selection module to identify the most discriminative Single Nucleotide Polymorphisms (SNPs) based on informativeness, and an Expectation Maximization (EM)-based Maximum Likelihood estimator to identify the individual admixture. Experimental results based on both simulation and 1000 Genomes data demonstrated the efficiency and accuracy of the proposed framework. PREMIX ensures a high level of security as all operations on sensitive genomic data are conducted within a secure enclave using SGX.

  8. Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models

    NASA Astrophysics Data System (ADS)

    Chu, A.

    2014-12-01

    Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the Maximum-Likelihood Estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data modeling purposes, using the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial shapes that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues. My program uses a robust method to preset a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. The Java language is used for the software's core computing part, and an optional interface to the statistical package R is provided.

  9. Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors

    NASA Astrophysics Data System (ADS)

    Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin

    2014-03-01

    One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example in the case where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilize an alternative prior derived from a patch-driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.

  10. Comparison of missing value imputation methods in time series: the case of Turkish meteorological data

    NASA Astrophysics Data System (ADS)

    Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci

    2013-04-01

    This study compares several imputation methods for completing the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, the simple arithmetic average, the normal ratio (NR), and NR weighted with correlations are the simple ones, whereas the multilayer perceptron neural network and a multiple imputation strategy using Monte Carlo Markov Chain sampling based on expectation-maximization (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performance. Based on detailed graphical and quantitative analyses, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series across the different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm to impute missing values before conducting statistical analyses of meteorological data will decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results for meteorological time series.

  11. Largest Generator Validation Yet at the National Wind Technology Center |

    Science.gov Websites

    are many fewer moving parts that can increase maintenance. The 5-MW dynamometer will simulate the is expected to increase by 8% from 2010 to 2020." But despite the fact that the increase in

  12. Cynthia Szydlek | NREL

    Science.gov Websites

    Cynthia Szydlek Photo of Cynthia Szydlek Cynthia Szydlek NWTC Training Coordinator/Project Support increased safety expectations and comply with comprehensive training requirements. She maintains the NWTC's Environmental, Health, and Safety (EHS) training and safety management systems and ensures all critical on-site

  13. Concentrating Solar Power Projects - Extresol-2 | Concentrating Solar Power

    Science.gov Websites

    Sesmero (Badajoz) Owner(s): ACS/Cobra Group (100%) Technology: Parabolic trough Turbine Capacity: Net : 158,000 MWh/yr (Expected/Planned) Contact(s): Manuel Cortes; Ana Salazar Company: ACS/Cobra Group Break Project Type: Commercial Participants Developer(s): ACS/Cobra Group Owner(s) (%): ACS/Cobra Group (100

  14. Concentrating Solar Power Projects - Extresol-3 | Concentrating Solar Power

    Science.gov Websites

    Sesmero (Badajoz) Owner(s): ACS/Cobra Group (100%) Technology: Parabolic trough Turbine Capacity: Net : 158,000 MWh/yr (Expected/Planned) Contact(s): Manuel Cortes; Ana Salazar Company: ACS/Cobra Group Break years Project Type: Commercial Participants Developer(s): ACS/Cobra Group Owner(s) (%): ACS/Cobra Group

  15. Blood detection in wireless capsule endoscopy using expectation maximization clustering

    NASA Astrophysics Data System (ADS)

    Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.

    2006-03-01

    Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy could be used to visualize up to the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. During an approximately 8-hour course, over 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours to analyze a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called the Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination. It was reported that the sensitivity and the specificity of SBI were about 72% and 85%, respectively. To address this problem, we propose a technique to detect the bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves 92% sensitivity and 98% specificity.
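
    A toy version of such EM-based bleeding detection might cluster per-pixel color features with a two-component Gaussian mixture and flag the cluster whose mean is most red-dominant, as sketched below. The pixel values, the cluster-labeling rule, and the component count are illustrative assumptions, not the authors' actual feature set or decision rule.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)

# synthetic stand-in for per-pixel (R, G, B) features from one WCE frame
normal_tissue = rng.normal([180, 120, 100], 10, size=(5000, 3))
bleeding      = rng.normal([150,  30,  30], 10, size=(500, 3))
pixels = np.vstack([normal_tissue, bleeding])

# EM clustering of the pixel features into two groups
gmm = GaussianMixture(n_components=2, random_state=0).fit(pixels)
labels = gmm.predict(pixels)

# call the cluster with the largest red-to-green ratio the "bleeding" cluster
red_green_ratio = gmm.means_[:, 0] / gmm.means_[:, 1]
bleeding_cluster = int(np.argmax(red_green_ratio))
suspected = labels == bleeding_cluster
print(f"{suspected.sum()} of {len(pixels)} pixels flagged as suspected blood")
```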

  16. Biomass and Solar Technologies Lauded | News | NREL

    Science.gov Websites

    4 » Biomass and Solar Technologies Lauded News Release: Biomass and Solar Technologies Lauded July security and reduce our reliance on foreign sources of oil." The Enzymatic Hydrolysis of Biomass Cellulose to Sugars technology is expected to allow a wide range of biomass resources to be used to produce

  17. Fermilab Today | Results for the Frontiers | 2015

    Science.gov Websites

    : Subatomic gryphons Sept. 24, 2015 CDF: More than expected Sept. 22, 2015 CDMS: The lightness of dark matter CDF: Happy hunting grounds April 3, 2015 PICO: Seeing dark matter March 27, 2015 CMS: Rule of three the weak and the charmed July 24, 2015 DES: Cosmic shear cosmology with the Dark Energy Survey July 17

  18. TH-CD-206-01: Expectation-Maximization Algorithm-Based Tissue Mixture Quantification for Perfusion MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, H; Xing, L; Liang, Z

    Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebrospinal fluid - inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled three tissue type distributions on perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective to initialize tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions, which is valuable for the early diagnosis of certain brain diseases, e.g., multiple sclerosis.

  19. Partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    PubMed Central

    Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discover LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences can result from the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
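
    The EM update underlying this kind of partial volume correction is the Richardson-Lucy deconvolution step. The sketch below applies it in one dimension with a fixed Gaussian PSF to a synthetic heterogeneous activity profile; the spatially varying PSF model and the correction-matrix stopping rule described in the paper are not reproduced.

```python
import numpy as np

def gaussian_kernel(width, sigma):
    x = np.arange(width) - width // 2
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def richardson_lucy_1d(blurred, psf, n_iter=200, eps=1e-12):
    """EM (Richardson-Lucy) deconvolution in 1-D with a fixed, normalized PSF."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same") + eps
        est *= np.convolve(blurred / conv, psf_flip, mode="same")
    return est

# toy heterogeneous "tumor": a hot sub-volume next to a cooler one, blurred by the PSF
true = np.zeros(200)
true[60:90] = 4.0
true[90:140] = 1.5
psf = gaussian_kernel(31, sigma=5.0)
observed = np.convolve(true, psf, mode="same")

recovered = richardson_lucy_1d(observed, psf)
print(round(observed[60:90].max(), 2), "->", round(recovered[60:90].max(), 2))
```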

  20. Center for Science and Technology Policy Research

    Science.gov Websites

    Expect Surprise: Hurricanes Harvey, Irma, Maria, and Beyond Ten Essentials for Action-Oriented and Second Goldstein Bruce Goldstein Ten Essentials for Action-Oriented and Second Order Energy Transitions

  1. Smart Grid, Smart Inverters for a Smart Energy Future | State, Local, and

    Science.gov Websites

    , legislation which defines the state's interconnection standards and permits the interconnection of smart the cost and benefits of advanced inverter enabling legislation. Expect conversations concerning

  2. NOAA releases final report of Sandy service assessment

    Science.gov Websites

    released a report on the National Weather Service's performance during hurricane/post tropical cyclone Sandy. The report, Hurricane/Post Tropical Cyclone Sandy Service Assessment, reaffirms that the National warnings for dangerous storms like Sandy, even when they are expected to become post-tropical cyclones by

  3. Defense Threat Reduction Agency > Careers > Onboarding > Sponsor Program

    Science.gov Websites

    critical role in the Onboarding Program. In addition to the traditional supervisory roles and explain expectations to ensure a smooth transition, and help you become successful in your new role. While will quickly become productive and effective in your new role. Reasonable Accommodations DTRA provides

  4. News | Fermilab news

    Science.gov Websites

    rundown on what to expect to come out of neutrino research in the coming years. Fermilab is America's his expertise in government and education to work supporting the LBNF/DUNE project. Five (more Committee visits Fermilab May 17, 2018 A five-member bipartisan delegation toured the laboratory, met a

  5. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  6. Statistical modeling, detection, and segmentation of stains in digitized fabric images

    NASA Astrophysics Data System (ADS)

    Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.

    2007-02-01

    This paper describes a novel and automated system, based on a computer vision approach, for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision theoretic problem, where the null hypothesis corresponds to the absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first order GMM and a second order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup and grape juice. The decision theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on this set of images.
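
    A compact stand-in for the detection step might compare a one-component and a two-component Gaussian mixture fitted by EM and choose between them with BIC, a criterion closely related to MDL, then segment using the posterior of the darker component, as sketched below on synthetic pixel data. The modified EM algorithm and the exact MDL statistic of the paper are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)

def stain_test(pixel_features):
    """H0: one-component GMM (no stain) vs. H1: two components (stain), by BIC."""
    g1 = GaussianMixture(n_components=1, random_state=0).fit(pixel_features)
    g2 = GaussianMixture(n_components=2, random_state=0).fit(pixel_features)
    return g2.bic(pixel_features) < g1.bic(pixel_features), g2

# clean fabric: one homogeneous color population; stained fabric adds a darker one
clean = rng.normal([200, 200, 190], 5, size=(3000, 3))
stained = np.vstack([clean, rng.normal([120, 90, 60], 8, size=(400, 3))])

for name, img in [("clean", clean), ("stained", stained)]:
    detected, gmm = stain_test(img)
    print(name, "-> stain detected:", detected)
    if detected:
        stain_comp = int(np.argmin(gmm.means_[:, 0]))         # darker component
        posterior = gmm.predict_proba(img)[:, stain_comp]     # probability map
        print("  pixels labeled as stain:", int((posterior > 0.5).sum()))
```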

  7. Inverse Ising problem in continuous time: A latent variable approach

    NASA Astrophysics Data System (ADS)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.

  8. Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution

    PubMed Central

    Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin

    2016-01-01

    The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied for numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution with noise. Such visual data cannot be fed directly into advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using an expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM and visual perception. PMID:26927114

  9. Probabilistic Hazards Outlook

    Science.gov Websites

    Home Site Map News Organization Search: Go www.nws.noaa.gov Search the CPC Go Download KML Day 3-7 . See static maps below this for the most up to date graphics. Categorical Outlooks Day 3-7 Day 8-14 EDT May 25 2018 Synopsis: The summer season is expected to move in quickly for much of the contiguous

  10. NOAA: Strong El Niño sets the stage for 2015-2016 winter weather

    Science.gov Websites

    El Niño, among the strongest on record, is expected to influence weather and climate patterns this NOAA HOME WEATHER OCEANS FISHERIES CHARTING SATELLITES CLIMATE RESEARCH COASTS CAREERS National Temperature. Temperature - U.S. Winter Outlook: 2015-2016 (Credit: NOAA) Forecasters at NOAA's Climate

  11. Alternative Fuels Data Center: Pennsylvania's Ethanol Corridor Project

    Science.gov Websites

    fuel to help lessen our dependence on imported foreign petroleum. " Seth Obetz, AMERIgreen Greater imported foreign petroleum." GPCC expects more stations will add their names to the list. When gas

  12. Concentrating Solar Power Projects - Aurora Solar Energy Project |

    Science.gov Websites

    development Start Year: 2020 Do you have more information, corrections, or comments? Background Technology : 495,000 MWh/yr (Expected) Contact(s): Webmaster Solar Key References: Fact sheet Break Ground: 2018 Start

  13. Concentrating Solar Power Projects - Dhursar | Concentrating Solar Power |

    Science.gov Websites

    : 125.0 MW Status: Operational Start Year: 2014 Do you have more information, corrections, or comments Electricity Generation: 280,000 MWh/yr (Expected) Contact(s): Webmaster Solar Start Production: November 11

  14. NREL Funding Reductions

    Science.gov Websites

    Energy Laboratory (NREL) announced today that it will further reduce its work force as a result of million. Recent indications, however, are that NREL's funding will be lowered by an additional $27 million employees. NREL Director Charles F. Gay said the additional funding cuts are a result of lower than expected

  15. Revolving Loan Funds | Climate Neutral Research Campuses | NREL

    Science.gov Websites

    sometimes interest-free) loan used as capital for research campus projects expected to yield a certain State University's $3 million energy sustainability loan fund issues interest-free loans for campuses as interest-free loans to departments for sustainability projects. Within five years, a project repays its

  16. Products and Services Notice - Naval Oceanography Portal

    Science.gov Websites

    Tropical Cyclone Formation Alert, Northwest Pacific Ocean Issued as required when tropical cyclone PGTW Tropical Cyclone Formation Alert, North Indian Ocean Issued as required when TC formation is , Southwest Pacific Ocean Issued as required when TC formation is expected in 12-24 hours WTPS31-35 PGTW

  17. News | News

    Science.gov Websites

    you the rundown on what to expect to come out of neutrino research in the coming years. Simone supporting the LBNF/DUNE project. Five (more) fascinating facts about DUNE May 17, 2018 Engineering the of the program for members and staff of the House Science Committee. Photo: Reidar Hahn A five-member

  18. Alternative Fuels Data Center: Installing B20 Equipment

    Science.gov Websites

    operations to share the fueling site with you. Secure Permits, Adhere to State Requirements The contractor is storage tanks. The contractor will register storage tanks with the state environmental agency, which must the contractor and client to ensure the completed project meets expectations. Maps & Data U.S

  19. NOAA expects below-normal Central Pacific hurricane season

    Science.gov Websites

    Hurricane Preparedness Week El Niño/Southern Oscillation (ENSO) Diagnostic Discussion FEMA Media Contact based upon the continuation of neutral El Niño - Southern Oscillation conditions. The Central Pacific

  20. Concentrating Solar Power Projects - Copiapó | Concentrating Solar Power |

    Science.gov Websites

    MW Status: Under development Start Year: 2019 Do you have more information, corrections, or comments Generation: 1,800,000 MWh/yr (Expected) Contact(s): Webmaster Solar Company: Solar Reserve Start Production

  1. Classification Comparisons Between Compact Polarimetric and Quad-Pol SAR Imagery

    NASA Astrophysics Data System (ADS)

    Souissi, Boularbah; Doulgeris, Anthony P.; Eltoft, Torbjørn

    2015-04-01

    Recent interest in dual-pol SAR systems has led to a novel approach, the so-called compact polarimetric (CP) imaging mode, which attempts to reconstruct fully polarimetric information based on a few simple assumptions. In this work, the CP image is simulated from the full quad-pol (QP) image. We present here the initial comparison of polarimetric information content between the QP and CP imaging modes. The analysis of multi-look polarimetric covariance matrix data uses an automated statistical clustering method based upon the expectation maximization (EM) algorithm for finite mixture modeling, using the complex Wishart probability density function. Our results showed some differences in characteristics between the QP and CP modes. The classification is demonstrated using E-SAR and Radarsat-2 polarimetric SAR images acquired over DLR Oberpfaffenhofen in Germany and Algiers in Algeria, respectively.

  2. Electrically-induced muscle fatigue affects feedforward mechanisms of control.

    PubMed

    Monjo, F; Forestier, N

    2015-08-01

    To investigate the effects of focal muscle fatigue induced by electromyostimulation (EMS) on Anticipatory Postural Adjustments (APAs) during arm flexions performed at maximal velocity. Fifteen healthy subjects performed self-paced arm flexions at maximal velocity before and after the completion of fatiguing electromyostimulation programs involving the medial and anterior deltoids and aiming to degrade movement peak acceleration. APA timing and magnitude were measured using surface electromyography. Following muscle fatigue, despite a lower mechanical disturbance evidenced by significantly decreased peak accelerations (-12%, p<.001), APAs remained unchanged as compared to control trials (p>.11 for all analyses). The fatigue signals evoked by externally-generated contractions seem to be gated by the Central Nervous System and result in postural strategy changes which aim to increase the postural safety margin. EMS is widely used in rehabilitation and training programs for its neuromuscular function-related benefits. However, from a motor control viewpoint, the present results show that the use of EMS can lead to acute inaccuracies in predictive motor control. We propose that clinicians should investigate the chronic and global effects of EMS on motor control. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  3. U.S. Department of Defense Official Website

    Science.gov Websites

    DefenseLink.mil Aug. 04, 2015 War on Terror Transformation News Products Press Resources Images Websites Contact presidential election expected to be a major stabilizing effort in a lynchpin country in the war on terror

  4. Noise-enhanced convolutional neural networks.

    PubMed

    Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart

    2016-06-01

    Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Compensation of missing wedge effects with sequential statistical reconstruction in electron tomography.

    PubMed

    Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla

    2014-01-01

    Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, projections cannot be acquired over the full tilt angle range with biological samples in electron microscopy. ET image reconstruction can be considered an ill-posed problem because of this missing information. This results in artifacts, seen as the loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated with a number of gold particles. An ellipsoid fitting based method was developed to compute the quantitative measures of elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates the sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with widely-used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information, producing isotropic resolution. Furthermore, this method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.

  6. Impacts of Maximizing Tendencies on Experience-Based Decisions.

    PubMed

    Rim, Hye Bin

    2017-06-01

    Previous research on risky decisions has suggested that people tend to make different choices depending on whether they acquire the information from personally repeated experiences or from statistical summary descriptions. This phenomenon, called the description-experience gap, was expected to be moderated by individual differences in maximizing tendencies, the desire to maximize decision outcomes. Specifically, it was hypothesized that maximizers' willingness to engage in extensive information searching would lead them to make experience-based decisions as payoff distributions were given explicitly. A total of 262 participants completed four decision problems. Results showed that maximizers, compared to non-maximizers, drew more samples before making a choice but reported lower confidence levels on both the accuracy of knowledge gained from experiences and the likelihood of satisfactory outcomes. Additionally, maximizers exhibited smaller description-experience gaps than non-maximizers, as expected. The implications of the findings and unanswered questions for future research were discussed.

  7. A generative probabilistic model and discriminative extensions for brain lesion segmentation – with application to tumor and stroke

    PubMed Central

    Menze, Bjoern H.; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-André; Székely, Gabor; Ayache, Nicholas; Golland, Polina

    2016-01-01

    We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as “tumor core” or “fluid-filled structure”, but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation. PMID:26599702

  8. A Generative Probabilistic Model and Discriminative Extensions for Brain Lesion Segmentation--With Application to Tumor and Stroke.

    PubMed

    Menze, Bjoern H; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-Andre; Szekely, Gabor; Ayache, Nicholas; Golland, Polina

    2016-04-01

    We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation.

  9. Automated tissue classification of pediatric brains from magnetic resonance images using age-specific atlases

    NASA Astrophysics Data System (ADS)

    Metzger, Andrew; Benavides, Amanda; Nopoulos, Peg; Magnotta, Vincent

    2016-03-01

    The goal of this project was to develop two age appropriate atlases (neonatal and one year old) that account for the rapid growth and maturational changes that occur during early development. Tissue maps from this age group were initially created by manually correcting the resulting tissue maps after applying an expectation maximization (EM) algorithm and an adult atlas to pediatric subjects. The EM algorithm classified each voxel into one of ten possible tissue types including several subcortical structures. This was followed by a novel level set segmentation designed to improve differentiation between distal cortical gray matter and white matter. To minimize the required manual corrections, the adult atlas was registered to the pediatric scans using high-dimensional, symmetric image normalization (SyN) registration. The subject images were then mapped to an age specific atlas space, again using SyN registration, and the resulting transformation applied to the manually corrected tissue maps. The individual maps were averaged in the age specific atlas space and blurred to generate the age appropriate anatomical priors. The resulting anatomical priors were then used by the EM algorithm to re-segment the initial training set as well as an independent testing set. The results from the adult and age-specific anatomical priors were compared to the manually corrected results. The age appropriate atlas provided superior results as compared to the adult atlas. The image analysis pipeline used in this work was built using the open source software package BRAINSTools.
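
    The core atlas-weighted EM classification step can be sketched as follows; registration, the level set refinement and the subcortical classes are omitted, and the function name and array shapes are illustrative assumptions rather than part of the BRAINSTools pipeline.

        import numpy as np

        def atlas_em(intensity, atlas_prior, n_iter=50, eps=1e-12):
            """EM tissue classification with per-voxel atlas priors.

            intensity:   (V,) flattened voxel intensities
            atlas_prior: (V, K) prior probability of each of K tissue classes per voxel
            """
            # Initialize class means and variances from the atlas priors alone
            mu = (atlas_prior * intensity[:, None]).sum(0) / (atlas_prior.sum(0) + eps)
            var = np.full(atlas_prior.shape[1], intensity.var() + eps)
            for _ in range(n_iter):
                # E-step: posterior proportional to atlas prior times Gaussian intensity likelihood
                like = np.exp(-0.5 * (intensity[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
                post = atlas_prior * like
                post /= post.sum(1, keepdims=True) + eps
                # M-step: re-estimate class means and variances from the soft assignments
                w = post.sum(0) + eps
                mu = (post * intensity[:, None]).sum(0) / w
                var = (post * (intensity[:, None] - mu) ** 2).sum(0) / w
            return post.argmax(1), mu, var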

  10. Making adjustments to event annotations for improved biological event extraction.

    PubMed

    Baek, Seung-Cheol; Park, Jong C

    2016-09-16

    Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe that there is ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. 'transcriptional'), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting the possibility that a statistical learning algorithm misses an opportunity for generalizing from such event triggers. We anticipate that adjustments to the span of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique, which consults the gold-standard event trigger annotations in a form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner and on the test corpus by a narrow margin. The analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they could be small in number.

  11. NOAA predicts near-normal or below-normal 2014 Atlantic hurricane season

    Science.gov Websites

    Related link: Atlantic Basin Hurricane Season Outlook Discussion El Niño/Southern Oscillation (ENSO predicts near-normal or below-normal 2014 Atlantic hurricane season El Niño expected to develop and . The main driver of this year's outlook is the anticipated development of El Niño this summer. El NiÃ

  12. Golden Rays - July 2016 | Solar Research | NREL

    Science.gov Websites

    . See the video or read the NREL news release. Must Reads Side-by-Side Comparison of CPV Module and installations across the country, and the next million systems are expected to be installed during the next 2

  13. Concentrating Solar Power Projects - Thai Solar Energy 1 | Concentrating

    Science.gov Websites

    : Parabolic trough Turbine Capacity: Net: 5.0 MW Gross: 5.0 MW Status: Operational Start Year: 2012 Do you (Expected) Contact(s): Sören Hempel; Yuvaraj Pandian Company: Solarlite GmbH Start Production: January 25

  14. What variables affect public perceptions for EMS meeting general community needs?

    PubMed

    Blau, Gary; Hochner, Arthur; Portwood, James

    2012-01-01

    In the fall of 2010, a phone survey of 928 respondents examined two research questions: does the general public perceive Emergency Medical Services (EMS) as meeting their community needs? And what factors or correlates help to explain EMS meeting community needs? To maximize geographical representation across the contiguous United States, a clustered stratified sampling strategy was used based upon zip codes across the 48 states. Results showed strong support among the sample for the perception that EMS was meeting general community needs. Seventeen percent of the variance in EMS meeting community needs was collectively explained by the demographic and perceptual variables in the regression model. Of the correlates tested, the strongest relationship was found between greater admiration for EMS professionals and higher perception of EMS meeting community needs. Study limitations included sampling households with only landline (no cell) phones, using a simulated emergency situation, and not collecting gender data.

  15. Fast registration and reconstruction of aliased low-resolution frames by use of a modified maximum-likelihood approach.

    PubMed

    Alam, M S; Bognar, J G; Cain, S; Yasuda, B J

    1998-03-10

    During the process of microscanning, a controlled vibrating mirror is typically used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) maximum-likelihood approach can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications that use the currently available high-performance processors. In that approach, the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step instead. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to reduce significantly the computational burden when compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.

  16. About Coast Guard Maritime Commons « Coast Guard Maritime Commons

    Science.gov Websites

    occasional post by our senior leaders (including the Commandant of the Coast Guard). External Link Disclaimer retains the discretion to determine which comments it will post and which it will not. We expect all contributors to be respectful. We will not post comments that contain personal attacks of any kind; refer to

  17. Investigation of probabilistic principal component analysis compared to proper orthogonal decomposition methods for basis extraction and missing data estimation

    NASA Astrophysics Data System (ADS)

    Lee, Kyunghoon

    To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. After all, a norm reflecting a curve-fitting method is found to more significantly affect estimation error reduction than a basis for two example test data sets: one is absent of data only at a single snapshot and the other misses data across all the snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. 
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit close agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data over an entire data set. (Abstract shortened by UMI.)
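
    For reference, the closed-form EM updates for probabilistic PCA (following Tipping and Bishop) can be sketched in a few lines of NumPy. This is an illustrative re-implementation under standard complete-data PPCA assumptions, not the thesis's EM-PCA code, and all names are illustrative.

        import numpy as np

        def em_ppca(X, q, n_iter=200, seed=0):
            """EM for probabilistic PCA: X is (N, D), q is the latent dimension."""
            rng = np.random.default_rng(seed)
            N, D = X.shape
            mu = X.mean(0)
            Xc = X - mu
            W = rng.normal(size=(D, q))
            sigma2 = 1.0
            for _ in range(n_iter):
                # E-step: sufficient statistics of the latent variables
                M = W.T @ W + sigma2 * np.eye(q)            # (q, q)
                Minv = np.linalg.inv(M)
                Ez = Xc @ W @ Minv                          # (N, q), posterior means E[z_n]
                Ezz = N * sigma2 * Minv + Ez.T @ Ez         # sum_n E[z_n z_n^T]
                # M-step: closed-form updates for the loading matrix and noise variance
                W_new = (Xc.T @ Ez) @ np.linalg.inv(Ezz)
                sigma2 = (np.sum(Xc**2)
                          - 2 * np.sum(Ez * (Xc @ W_new))
                          + np.trace(Ezz @ W_new.T @ W_new)) / (N * D)
                W = W_new
            return W, sigma2, mu

    For incomplete data, the E-step would additionally take expectations over the missing entries, which is what distinguishes the EM-PCA studied above from this complete-data sketch.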

  18. Multimodal Speaker Diarization.

    PubMed

    Noulas, A; Englebienne, G; Krose, B J A

    2012-01-01

    We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.

  19. Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data

    NASA Astrophysics Data System (ADS)

    Li, Lan; Chen, Erxue; Li, Zengyuan

    2013-01-01

    This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the component probabilities. The mixture model makes it possible to represent heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) on the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities provide the prior probability estimates of each class and the weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.

  20. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally, the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
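
    The conventional ML-EM baseline mentioned above has a compact multiplicative update. The sketch below is a generic dense-matrix illustration with assumed names; it is not the kernelized reconstruction, which additionally parameterizes the image as a kernel matrix times coefficients.

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            """Maximum-likelihood EM reconstruction for Poisson emission data.

            A: (n_bins, n_voxels) nonnegative system matrix; y: (n_bins,) measured counts.
            """
            x = np.ones(A.shape[1])                 # uniform initial image
            sens = A.sum(axis=0) + eps              # sensitivity image, A^T 1
            for _ in range(n_iter):
                proj = A @ x + eps                  # forward projection
                x *= (A.T @ (y / proj)) / sens      # multiplicative EM update
            return x

        # Toy usage: simulate y as Poisson counts of A @ x_true, then call x_rec = mlem(A, y)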

  1. Comparison of methods for H*(10) calculation from measured LaBr3(Ce) detector spectra.

    PubMed

    Vargas, A; Cornejo, N; Camp, A

    2018-07-01

    The Universitat Politecnica de Catalunya (UPC) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) have evaluated methods based on stripping, conversion coefficients and Maximum Likelihood Estimation using Expectation Maximization (ML-EM) in calculating the H*(10) rates from photon pulse-height spectra acquired with a spectrometric LaBr3(Ce) (1.5″ × 1.5″) detector. There is good agreement between the results of the different H*(10) rate calculation methods using the spectra measured at the UPC secondary standard calibration laboratory in Barcelona. From the outdoor study at ESMERALDA station in Madrid, it can be concluded that the analysed methods provide results quite similar to those obtained with the reference RSS ionization chamber. In addition, the spectrometric detectors can also facilitate radionuclide identification. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems

    NASA Astrophysics Data System (ADS)

    Wu, Sau-Hsuan; Kuo, C.-C. Jay

    2002-11-01

    The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels and to avoid the phase ambiguity that comes with second-order statistics approaches, a sliding-window scheme using the expectation maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and the channel memory. The situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when channel gains attenuate close to zero.

  3. Multiclass feature selection for improved pediatric brain tumor segmentation

    NASA Astrophysics Data System (ADS)

    Ahmed, Shaheen; Iftekharuddin, Khan M.

    2012-03-01

    In our previous work, we showed that fractal-based texture features are effective in detection, segmentation and classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. We exploited an information theoretic approach, Kullback-Leibler Divergence (KLD), for feature selection and ranking of different texture features. We further combined the feature selection technique with a segmentation method, Expectation Maximization (EM), for segmentation of tumor (T) and non-tumor (NT) tissues. In this work, we extend the two-class KLD technique to the multiclass case to effectively select the best features for brain tumor (T), cyst (C) and non-tumor (NT). We further obtain segmentation robustness for each tissue type by computing Bayes' posterior probabilities and the corresponding number of pixels for each tissue segment in MRI patient images. We evaluate the improved tumor segmentation robustness using different similarity metrics for 5 patients in T1, T2 and FLAIR modalities.

  4. Optimized multiple linear mappings for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized in the literature as an effective way to perform example learning-based single image super-resolution (SR). In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experimental results carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.

  5. NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT

    NASA Astrophysics Data System (ADS)

    Sohlberg, A.; Watabe, H.; Iida, H.

    2008-07-01

    Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate the MC-based scatter compensation using coarse grid and intermittent scatter modelling. The acceleration methods were compared to an un-accelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake and clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered subset expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that the coarse grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.

  6. Sparse Bayesian learning for DOA estimation with mutual coupling.

    PubMed

    Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi

    2015-10-16

    Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
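
    For orientation, the standard EM updates of sparse Bayesian learning (without the mutual-coupling extension, the Student's t hierarchy, or the SVD step described above) can be sketched as a real-valued toy version; the function name and initialization are illustrative assumptions.

        import numpy as np

        def sbl_em(Phi, y, n_iter=200, eps=1e-10):
            """EM for sparse Bayesian learning: y = Phi w + noise, with per-weight precisions alpha."""
            N, M = Phi.shape
            alpha = np.ones(M)                      # precision (inverse variance) of each weight
            sigma2 = 0.1 * np.var(y)                # noise variance
            for _ in range(n_iter):
                # E-step: the weight posterior is Gaussian with covariance Sigma and mean mu
                Sigma = np.linalg.inv(np.diag(alpha) + Phi.T @ Phi / sigma2)
                mu = Sigma @ Phi.T @ y / sigma2
                # M-step: hyperparameter updates; weights with large alpha shrink toward zero
                alpha = 1.0 / (mu**2 + np.diag(Sigma) + eps)
                resid = y - Phi @ mu
                sigma2 = (resid @ resid + np.trace(Phi @ Sigma @ Phi.T)) / N
            return mu, alpha, sigma2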

  7. Mixture models with entropy regularization for community detection in networks

    NASA Astrophysics Data System (ADS)

    Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang

    2018-04-01

    Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy-regularized mixture model (called EMM), which is capable of simultaneously inferring the number of communities and identifying the network structure contained in a network. In the model, by minimizing the entropy of the mixing coefficients of NMM within an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. An empirical study on both synthetic networks and real networks has shown that the proposed model EMM is superior to state-of-the-art methods.

  8. Statistical inference approach to structural reconstruction of complex networks from binary time series

    NASA Astrophysics Data System (ADS)

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.

  9. Statistical inference approach to structural reconstruction of complex networks from binary time series.

    PubMed

    Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng

    2018-02-01

    Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.

  10. [Comparison of different methods in dealing with HIV viral load data with diversified missing value mechanism on HIV positive MSM].

    PubMed

    Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y

    2017-11-10

    Objective: To compare the results of different methods for handling HIV viral load (VL) data under different missing-value mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data with different missing-value mechanisms from HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood estimation using the Expectation and Maximization algorithm (EM), a regression method, mean imputation, deletion, and Markov Chain Monte Carlo (MCMC) were each used to handle the missing data. The results of the different methods were compared in terms of distribution characteristics, accuracy and precision. Results: The HIV VL data could not be transformed into a normal distribution. All methods performed well on data that were Missing Completely at Random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed databases obtained with the different methods were all close to the original one. EM, the regression method, mean imputation, and deletion under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can serve as a reference for estimating the mean HIV VL among the investigated population.
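
    This kind of comparison can be illustrated in a few lines with scikit-learn, contrasting mean imputation with an EM-style iterative imputer on simulated log-scale VL values missing completely at random; the variable names, simulated distribution and missingness rate are illustrative assumptions, not the study's data or its SPSS/MCMC procedures.

        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import SimpleImputer, IterativeImputer

        rng = np.random.default_rng(0)
        n = 1000
        cd4 = rng.normal(500, 150, n)
        log_vl = 8 - 0.004 * cd4 + rng.normal(0, 1, n)     # correlated covariate and log viral load
        X = np.column_stack([cd4, log_vl])

        X_mcar = X.copy()
        X_mcar[rng.random(n) < 0.2, 1] = np.nan            # 20% of VL values missing completely at random

        for name, imp in [("mean", SimpleImputer(strategy="mean")),
                          ("iterative (EM-like)", IterativeImputer(random_state=0))]:
            filled = imp.fit_transform(X_mcar)
            print(name, "imputed VL mean:", filled[:, 1].mean().round(3),
                  "true mean:", X[:, 1].mean().round(3))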

  11. A feasibility study on estimation of tissue mixture contributions in 3D arterial spin labeling sequence

    NASA Astrophysics Data System (ADS)

    Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing

    2017-03-01

    Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimates, the contribution of each tissue type to the mixture in each voxel is needed. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including registration errors and errors inherent to the ASL and structural acquisitions themselves. Therefore, estimating the mixture percentages directly from the ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and that the tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Considering the sensitivity of MAP-EM to its initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrated that the GM and WM pattern across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.

  12. Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.

    PubMed

    Shireman, Emilie; Steinley, Douglas; Brusco, Michael J

    2017-02-01

    Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
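
    The effect of starting values can be reproduced in a few lines with scikit-learn by comparing k-means and purely random initializations over repeated restarts; the simulated data and restart count are illustrative assumptions, and only two of the five strategies compared in the article are shown.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0, 1, (200, 2)),
                       rng.normal(3, 1, (200, 2)),
                       rng.normal([0, 6], 1, (200, 2))])   # three simulated Gaussian components

        for init in ("kmeans", "random"):
            fits = [GaussianMixture(n_components=3, init_params=init,
                                    n_init=1, random_state=s).fit(X)
                    for s in range(20)]                    # 20 independent restarts per strategy
            lb = [f.lower_bound_ for f in fits]            # per-restart final lower bound
            print(init, "best lower bound:", round(max(lb), 3),
                  "distinct local solutions:", len(np.unique(np.round(lb, 3))))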

  13. Dynamic versus isometric electromechanical delay in non-fatigued and fatigued muscle: A combined electromyographic, mechanomyographic, and force approach.

    PubMed

    Smith, Cory M; Housh, Terry J; Hill, Ethan C; Johnson, Glen O; Schmidt, Richard J

    2017-04-01

    This study used a combined electromyographic, mechanomyographic, and force approach to identify electromechanical delay (EMD) from the onsets of the electromyographic to force signals (EMD E-F), onsets of the electromyographic to mechanomyographic signals (EMD E-M), and onsets of mechanomyographic to force signals (EMD M-F). The purposes of the current study were to examine: (1) the differences in EMD E-F, EMD E-M, and EMD M-F from the vastus lateralis during maximal, voluntary dynamic (1 repetition maximum [1-RM]) and isometric (maximal voluntary isometric contraction [MVIC]) muscle actions; and (2) the effects of fatigue on EMD E-F, EMD M-F, and EMD E-M. Ten men performed pretest and posttest 1-RM and MVIC leg extension muscle actions. The fatiguing workbout consisted of 70% 1-RM dynamic constant external resistance leg extension muscle actions to failure. The results indicated that there were no significant differences between 1-RM and MVIC EMD E-F, EMD E-M, or EMD M-F. There were, however, significant fatigue-induced increases in EMD E-F (94% and 63%), EMD E-M (107%), and EMD M-F (63%) for both the 1-RM and MVIC measurements. Therefore, these findings demonstrated the effects of fatigue on EMD measures and supported comparisons among studies which examined dynamic or isometric EMD measures from the vastus lateralis using a combined electromyographic, mechanomyographic, and force approach. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Fragment assignment in the cloud with eXpress-D

    PubMed Central

    2013-01-01

    Background: Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results: We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available and can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions: The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
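
    The underlying E- and M-steps of probabilistic fragment assignment are simple; the serial NumPy sketch below illustrates them on a toy compatibility matrix and assumes each fragment maps to at least one transcript. It does not reproduce eXpress-D's Spark distribution, transcript-length normalization, or data structures.

        import numpy as np

        def em_fragment_assignment(compat, n_iter=100):
            """compat[f, t] = 1 if fragment f maps (possibly ambiguously) to transcript t.

            Returns relative abundances and soft fragment assignments.
            """
            n_frag, n_tx = compat.shape
            abund = np.full(n_tx, 1.0 / n_tx)
            for _ in range(n_iter):
                # E-step: assign each fragment to its compatible transcripts in proportion to abundance
                resp = compat * abund
                resp /= resp.sum(axis=1, keepdims=True)
                # M-step: abundances are the expected fraction of fragments per transcript
                abund = resp.sum(axis=0) / n_frag
            return abund, resp

        # Toy example: 3 fragments, 2 transcripts; fragment 2 maps ambiguously to both
        compat = np.array([[1, 0], [0, 1], [1, 1]], dtype=float)
        print(em_fragment_assignment(compat)[0])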

  15. Active inference and epistemic value.

    PubMed

    Friston, Karl; Rigoli, Francesco; Ognibene, Dimitri; Mathys, Christoph; Fitzgerald, Thomas; Pezzulo, Giovanni

    2015-01-01

    We offer a formal treatment of choice behavior based on the premise that agents minimize the expected free energy of future outcomes. Crucially, the negative free energy or quality of a policy can be decomposed into extrinsic and epistemic (or intrinsic) value. Minimizing expected free energy is therefore equivalent to maximizing extrinsic value or expected utility (defined in terms of prior preferences or goals), while maximizing information gain or intrinsic value (or reducing uncertainty about the causes of valuable outcomes). The resulting scheme resolves the exploration-exploitation dilemma: Epistemic value is maximized until there is no further information gain, after which exploitation is assured through maximization of extrinsic value. This is formally consistent with the Infomax principle, generalizing formulations of active vision based upon salience (Bayesian surprise) and optimal decisions based on expected utility and risk-sensitive (Kullback-Leibler) control. Furthermore, as with previous active inference formulations of discrete (Markovian) problems, ad hoc softmax parameters become the expected (Bayes-optimal) precision of beliefs about, or confidence in, policies. This article focuses on the basic theory, illustrating the ideas with simulations. A key aspect of these simulations is the similarity between precision updates and dopaminergic discharges observed in conditioning paradigms.

  16. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation.

    PubMed

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-02-07

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated the practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients while exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and therefore improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that for the same computation time, the proposed subsetization gives rise to further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches enable further acceleration and improved convergence of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
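
    The generic ordered-subsets EM idea the paper builds on can be sketched as follows, with interleaved subsets of projection rows standing in for the paper's interleaved TOF-bin subsets; the function name, subset layout and dense-matrix form are illustrative assumptions.

        import numpy as np

        def osem(A, y, n_subsets=8, n_iter=4, eps=1e-12):
            """Ordered-subsets EM: one multiplicative update per subset of projection rows."""
            n_bins, n_vox = A.shape
            x = np.ones(n_vox)
            subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]  # interleaved rows
            for _ in range(n_iter):
                for rows in subsets:
                    As, ys = A[rows], y[rows]
                    x *= (As.T @ (ys / (As @ x + eps))) / (As.sum(axis=0) + eps)   # subset EM update
            return x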

  17. Accelerated time-of-flight (TOF) PET image reconstruction using TOF bin subsetization and TOF weighting matrix pre-computation

    NASA Astrophysics Data System (ADS)

    Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib

    2016-02-01

    Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn results in improved signal-to-noise ratio and convergence rate. In this work, we propose a novel approach to further improve the convergence of the expectation maximization (EM)-based TOF PET image reconstruction algorithm through subsetization of emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we elaborated a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in the pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data were partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. The comparison of on-the-fly and pre-computed TOF projections showed that the pre-computation of the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom, and clinical studies. Simulations demonstrated that as the number of TOF subsets is increased, the convergence rate of the MLEM and OSEM algorithms is improved. It was also found that, for the same computation time, the proposed subsetization yields further gains in convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, a higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches make it possible to further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.

  18. EM Adaptive LASSO—A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes

    PubMed Central

    Mallick, Himel; Tiwari, Hemant K.

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062
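
    As a minimal illustration of the EM machinery underlying such zero-inflated models (not the adaptive-LASSO method of the paper, which adds SNP covariates, data-adaptive penalties, and coordinate descent), the sketch below fits an intercept-only zero-inflated Poisson by EM: the E-step computes the posterior probability that each observed zero is a structural zero, and the M-step updates the mixing proportion and the Poisson rate in closed form. The function name and simulation settings are assumptions made for the example.

```python
import numpy as np

def zip_em(y, n_iter=200, tol=1e-8):
    """EM for an intercept-only zero-inflated Poisson: y_i ~ pi*delta_0 + (1-pi)*Poisson(lam).
    The E-step computes the posterior probability that an observed zero is a structural zero;
    the M-step updates pi and lam from those weights. (Covariates and the adaptive-LASSO
    penalty of the paper are omitted for brevity.)"""
    y = np.asarray(y, float)
    pi, lam = 0.5, max(y.mean(), 1e-3)
    for _ in range(n_iter):
        # E-step: w_i = P(structural zero | y_i); nonzero counts get w_i = 0
        p0 = np.exp(-lam)                              # Poisson probability of a sampling zero
        w = np.where(y == 0, pi / (pi + (1 - pi) * p0), 0.0)
        # M-step: closed-form updates for pi and lam
        pi_new = w.mean()
        lam_new = ((1 - w) * y).sum() / np.maximum((1 - w).sum(), 1e-12)
        if abs(pi_new - pi) + abs(lam_new - lam) < tol:
            pi, lam = pi_new, lam_new
            break
        pi, lam = pi_new, lam_new
    return pi, lam

rng = np.random.default_rng(1)
z = rng.random(5000) < 0.3                   # 30% structural zeros
y = np.where(z, 0, rng.poisson(2.5, 5000))
print(zip_em(y))                             # should recover roughly (0.3, 2.5)
```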

  19. EM Adaptive LASSO-A Multilocus Modeling Strategy for Detecting SNPs Associated with Zero-inflated Count Phenotypes.

    PubMed

    Mallick, Himel; Tiwari, Hemant K

    2016-01-01

    Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice.

  20. Electromagnetic Monitoring of Hydraulic Fracturing: Relationship to Permeability, Seismicity, and Stress

    NASA Astrophysics Data System (ADS)

    Thiel, Stephan

    2017-09-01

    Hydraulic fracking is a geoengineering application designed to enhance subsurface permeability to maximize fluid and gas flow. Fracking is commonly used in enhanced geothermal systems (EGS), tight shale gas, and coal seam gas (CSG) plays and in CO_2 storage scenarios. Common monitoring methods include microseismic techniques, which map with high resolution the small earthquakes associated with fracture opening at reservoir depth. Recently, electromagnetic (EM) methods have been employed in the field to provide an alternative means of directly detecting fluids as they are pumped into the ground. Surface magnetotelluric (MT) measurements across EGS, derived from time-lapse MT deployments, show subtle yet detectable changes during fracking. The changes are directional and are predominantly aligned with the current stress field, which dictates the preferential fracture orientation, as supported by microseismic monitoring of frack-related earthquakes. Modeling studies prior to injection are crucial for survey design and for assessing the feasibility of monitoring fracks. In particular, knowledge of sediment thickness plays a fundamental role in resolving subtle changes. Numerical forward modeling studies clearly favor some form of downhole measurement to enhance sensitivity; however, this has yet to be conclusively demonstrated in the field. Nevertheless, real surface-based monitoring examples do not necessarily replicate the magnitude of change expected from forward modeling, and in some cases from EGS and CSG systems the observed changes are larger than expected. It appears that the injected fluid volume alone cannot account for the surface change in resistivity; the connectedness of the pore space is also significantly, and nonlinearly, enhanced. Recent numerical studies emphasize the importance of the percolation threshold of the fracture network for both electrical resistivity and permeability, which may play an important role in accounting for temporal changes in surface EM measurements during hydraulic fracking.

  1. Multi-ray-based system matrix generation for 3D PET reconstruction

    NASA Astrophysics Data System (ADS)

    Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi

    2008-12-01

    Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.

  2. Frequency-sensitive competitive learning for scalable balanced clustering on high-dimensional hyperspheres.

    PubMed

    Banerjee, Arindam; Ghosh, Joydeep

    2004-05-01

    Competitive learning mechanisms for clustering, in general, suffer from poor performance for very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact, it can be considered as a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration of all three algorithms is linear in the number of data points and in the number of clusters. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques. Index Terms: Balanced clustering, expectation maximization (EM), frequency-sensitive competitive learning (FSCL), high-dimensional clustering, kmeans, normalized data, scalable clustering, streaming data, text clustering.
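
    A bare-bones spherical k-means sketch is given below for orientation: inputs and centroids are kept on the unit hypersphere, and assignment uses cosine similarity. The frequency-sensitive weighting that produces balanced clusters in the paper is deliberately omitted, and all names and sizes are illustrative assumptions.

```python
import numpy as np

def spkmeans(X, k, n_iter=50, seed=0):
    """Minimal spherical k-means: rows of X and all centroids are kept on the unit
    hypersphere, and points are assigned to the centroid with the largest cosine
    similarity (dot product). The frequency-sensitive variant is not implemented."""
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), k, replace=False)]        # initial unit-norm centroids
    for _ in range(n_iter):
        labels = np.argmax(X @ C.T, axis=1)            # cosine-similarity assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                C[j] = m / np.linalg.norm(m)            # re-normalise the mean direction
    return labels, C

# Tiny demo on random "document" vectors (illustrative data only)
rng = np.random.default_rng(2)
X = rng.random((200, 1000))
labels, C = spkmeans(X, k=5)
print(np.bincount(labels))                              # cluster sizes
```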

  3. Simultaneous 99mtc/111in spect reconstruction using accelerated convolution-based forced detection monte carlo

    NASA Astrophysics Data System (ADS)

    Karamat, Muhammad I.; Farncombe, Troy H.

    2015-10-01

    Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of object background compared to reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study, the evaluation is based on the quality of the reconstructed images and the activity estimated using Sim-OSEM. In order to quantitate the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.

  4. Alternative Fuels Data Center: ASTM Biodiesel Specifications

    Science.gov Websites

    Website excerpt (table fragments): specifications can be purchased from ASTM International; the tables list Property and Test Method entries for Grade No.1-B S15, Grade No.1-B S500, and Grade No.2-B S15, and for blends B6 to B20 at S15, S500, and S5000 sulfur levels, for the intended use and expected ambient temperatures. Test Methods D 4539 and D 6371 may be useful.

  5. Functioning in the Real World: Using Storytelling to Improve Validity in the Assessment of Executive Functions.

    PubMed

    Annotti, Lee A; Teglasi, Hedwig

    2017-01-01

    Real-world contexts differ in the clarity of expectations for desired responses, as do assessment procedures, ranging along a continuum from maximal conditions that provide well-defined expectations to typical conditions that provide ill-defined expectations. Executive functions guide effective social interactions, but relations between them have not been studied with measures that are matched in the clarity of response expectations. In predicting teacher-rated social competence (SC) from kindergarteners' performance on tasks of executive functions (EFs), we found better model-data fit indexes when both measures were similar in the clarity of response expectations for the child. The maximal EF measure, the Developmental Neuropsychological Assessment, presents well-defined response expectations, and the typical EF measure, 5 scales from the Thematic Apperception Test (TAT), presents ill-defined response expectations (i.e., Abstraction, Perceptual Integration, Cognitive-Experiential Integration, and Associative Thinking). To assess SC under maximal and typical conditions, we used 2 teacher-rated questionnaires, with items, respectively, that emphasize well-defined and ill-defined expectations: the Behavior Rating Inventory: Behavioral Regulation Index and the Social Skills Improvement System: Social Competence Scale. Findings suggest that matching clarity of expectations improves generalization across measures and highlight the usefulness of the TAT to measure EF.

  6. Energy management system optimization for on-site facility staff - a case history of the New York State Office of Mental Health

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bagdon, M.J.; Martin, P.J.

    1997-06-01

    In 1994, Novus Engineering and EME Group began a project for the New York State Office of Mental Health (OMH) to maximize the use and benefit of energy management systems (EMS) installed at various large psychiatric hospitals throughout New York State. The project, which was funded and managed by the Dormitory Authority of the State of New York (DASNY), had three major objectives: (1) Maximize Energy Savings - Novus staff quickly learned that EMS systems as set up by contractors are far from optimal for generating energy savings. This part of the program revealed numerous opportunities for increased energy savings, such as: fine tuning proportional/integral/derivative (PID) loops to eliminate valve and damper hunting; adjusting temperature reset schedules to reduce energy consumption and provide more uniform temperature conditions throughout the facilities; and modifying equipment schedules. (2) Develop Monitoring Protocols - Large EMS systems are so complex that they require a systematic approach to daily, monthly and seasonal monitoring of building system conditions in order to locate system problems before they turn into trouble calls or equipment failures. In order to assist local facility staff in their monitoring efforts, Novus prepared user-friendly handbooks on each EMS. These included monitoring protocols tailored to each facility. (3) Provide Staff Training - When a new EMS is installed at a facility, it is frequently the maintenance staff's first exposure to a complex computerized system. Without proper training in what to look for, staff use of the EMS is generally very limited. With proper training, staff can be taught to take a pro-active approach to identify and solve problems before they get out of hand. The staff then realize that the EMS is a powerful preventative maintenance tool that can be used to make their work more effective and efficient. Case histories are presented.

  7. Estimates of cost-effectiveness of prehospital continuous positive airway pressure in the management of acute pulmonary edema.

    PubMed

    Hubble, Michael W; Richards, Michael E; Wilfong, Denise A

    2008-01-01

    To estimate the cost-effectiveness of continuous positive airway pressure (CPAP) in managing prehospital acute pulmonary edema in an urban EMS system. Using estimates from published reports on prehospital and emergency department CPAP, a cost-effectiveness model of implementing CPAP in a typical urban EMS system was derived from the societal perspective as well as the perspective of the implementing EMS system. To assess the robustness of the model, a series of univariate and multivariate sensitivity analyses was performed on the input variables. The cost of consumables, equipment, and training yielded a total cost of $89 per CPAP application. The theoretical system would be expected to use CPAP 4 times per 1000 EMS patients and is expected to save 0.75 additional lives per 1000 EMS patients at a cost of $490 per life saved. CPAP is also expected to result in approximately one less intubation per 6 CPAP applications and reduce hospitalization costs by $4075 per year for each CPAP application. Through sensitivity analyses the model was verified to be robust across a wide range of input variable assumptions. Previous studies have demonstrated the clinical effectiveness of CPAP in the management of acute pulmonary edema. Through a theoretical analysis which modeled the costs and clinical benefits of implementing CPAP in an urban EMS system, prehospital CPAP appears to be a cost-effective treatment.

  8. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    PubMed

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.

  9. [Imputation methods for missing data in educational diagnostic evaluation].

    PubMed

    Fernández-Alonso, Rubén; Suárez-Álvarez, Javier; Muñiz, José

    2012-02-01

    In the diagnostic evaluation of educational systems, self-reports are commonly used to collect data, both cognitive and orectic. For various reasons, in these self-reports, some of the students' data are frequently missing. The main goal of this research is to compare the performance of different imputation methods for missing data in the context of the evaluation of educational systems. On an empirical database of 5,000 subjects, 72 conditions were simulated: three levels of missing data, three types of loss mechanisms, and eight methods of imputation. The levels of missing data were 5%, 10%, and 20%. The loss mechanisms were: missing completely at random, moderately conditioned, and strongly conditioned. The eight imputation methods used were: listwise deletion, replacement by the mean of the scale, by the item mean, the subject mean, the corrected subject mean, multiple regression, and the Expectation-Maximization (EM) algorithm, with and without auxiliary variables. The results indicate that the recovery of the data is more accurate when using an appropriate combination of different methods of recovering lost data. When a case is incomplete, the mean of the subject works very well, whereas for completely lost data, multiple imputation with the EM algorithm is recommended. The use of this combination is especially recommended when data loss is greater and its loss mechanism is more conditioned. Lastly, the results are discussed, and some future lines of research are analyzed.
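
    For readers unfamiliar with EM-based imputation, the following sketch shows the textbook EM algorithm for a multivariate normal model with values missing at random: the E-step fills missing entries with their conditional means (and adds the conditional covariance to the second-moment statistics), and the M-step re-estimates the mean and covariance. It is a generic illustration, not the exact procedure compared in the study, and the simulated data are assumptions.

```python
import numpy as np

def em_mvn_impute(X, n_iter=50):
    """EM for a multivariate normal with data missing at random (NaN entries).
    Returns the estimated mean, covariance, and the imputed data matrix."""
    X = np.array(X, float)
    n, d = X.shape
    miss = np.isnan(X)
    mu = np.nanmean(X, axis=0)
    X_fill = np.where(miss, mu, X)
    Sigma = np.cov(X_fill, rowvar=False)
    for _ in range(n_iter):
        S = np.zeros((d, d))                       # accumulator for E[x x^T]
        for i in range(n):
            m, o = miss[i], ~miss[i]
            x = X_fill[i].copy()
            C_i = np.zeros((d, d))
            if m.any() and o.any():
                Soo = Sigma[np.ix_(o, o)]
                Smo = Sigma[np.ix_(m, o)]
                x[m] = mu[m] + Smo @ np.linalg.solve(Soo, X[i, o] - mu[o])
                C_i[np.ix_(m, m)] = Sigma[np.ix_(m, m)] - Smo @ np.linalg.solve(Soo, Smo.T)
            elif m.all():
                x[:] = mu
                C_i = Sigma.copy()
            X_fill[i] = x
            S += np.outer(x, x) + C_i
        mu = X_fill.mean(axis=0)
        Sigma = S / n - np.outer(mu, mu)
    return mu, Sigma, X_fill

rng = np.random.default_rng(3)
X = rng.multivariate_normal([0, 1, 2], [[1, .5, .2], [.5, 1, .3], [.2, .3, 1]], 500)
X[rng.random(X.shape) < 0.1] = np.nan          # 10% missing completely at random
mu, Sigma, Ximp = em_mvn_impute(X)
print(np.round(mu, 2))
```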

  10. An open source multivariate framework for n-tissue segmentation with evaluation on public data.

    PubMed

    Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C

    2011-12-01

    We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
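
    As a rough illustration of the intensity-mixture core of such EM segmentation (without the spatial priors, prior label maps, or MRF modeling that Atropos provides), the sketch below fits a three-class Gaussian mixture to synthetic voxel intensities with scikit-learn and assigns each voxel its most probable class. The class means and standard deviations are made-up values, not real tissue statistics.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Minimal intensity-only EM segmentation (no MRF smoothing, no spatial priors):
# model voxel intensities of a synthetic "image" as a 3-component Gaussian mixture
# and label each voxel with its most probable class.
rng = np.random.default_rng(4)
class_a = rng.normal(30, 5, 2000)      # illustrative class means/SDs only
class_b = rng.normal(80, 8, 5000)
class_c = rng.normal(120, 6, 3000)
intensities = np.concatenate([class_a, class_b, class_c]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(intensities)          # EM fit, then hard class assignment
print("estimated class means:", np.sort(gmm.means_.ravel()).round(1))
```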

  11. An Open Source Multivariate Framework for n-Tissue Segmentation with Evaluation on Public Data

    PubMed Central

    Tustison, Nicholas J.; Wu, Jue; Cook, Philip A.; Gee, James C.

    2012-01-01

    We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs (http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool. PMID:21373993

  12. Distant Supervision with Transductive Learning for Adverse Drug Reaction Identification from Electronic Medical Records

    PubMed Central

    Ikeda, Mitsuru

    2017-01-01

    Information extraction and knowledge discovery regarding adverse drug reactions (ADRs) from large-scale clinical texts are very useful and much-needed processes. Two major difficulties of this task are the lack of domain experts for labeling examples and the intractable processing of unstructured clinical texts. Even though most previous work has addressed these issues by applying semisupervised learning to the former and a word-based approach to the latter, these approaches face complexity in acquiring initial labeled data and ignore the structured sequences of natural language. In this study, we propose automatic data labeling by distant supervision, where knowledge bases are exploited to assign an entity-level relation label to each drug-event pair in the texts, and we then use patterns to characterize the ADR relation. Multiple-instance learning with an expectation-maximization method is employed to estimate the model parameters. The method applies transductive learning to iteratively reassign the probabilities of unknown drug-event pairs at training time. In experiments with 50,998 discharge summaries, we evaluate our method by varying a large number of parameters, that is, pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. Based on these evaluations, our proposed method outperforms the word-based feature for NB-EM (iEM), MILR, and TSVM, with F1-score improvements of 11.3%, 9.3%, and 6.5%, respectively. PMID:29090077

  13. The taxonomy statistic uncovers novel clinical patterns in a population of ischemic stroke patients.

    PubMed

    Tukiendorf, Andrzej; Kaźmierski, Radosław; Michalak, Sławomir

    2013-01-01

    In this paper, we describe a simple taxonomic approach for clinical data mining elaborated by Marczewski and Steinhaus (M-S), whose performance equals the advanced statistical methodology known as the expectation-maximization (E-M) algorithm. We tested these two methods on a cohort of ischemic stroke patients. The comparison of both methods revealed strong agreement. Direct agreement between M-S and E-M classifications reached 83%, while Cohen's coefficient of agreement was κ = 0.766 (P < 0.0001). The statistical analysis conducted and the outcomes obtained in this paper revealed novel clinical patterns in ischemic stroke patients. The aim of the study was to evaluate the clinical usefulness of Marczewski-Steinhaus' taxonomic approach as a tool for the detection of novel patterns of data in ischemic stroke patients and the prediction of disease outcome. In terms of identifying fairly frequent types of stroke patients from their age, National Institutes of Health Stroke Scale (NIHSS) score, and diabetes mellitus (DM) status, that is, when dealing with rough characteristics of patients, four particular types of patients are recognized that cannot be identified by means of routine clinical methods. Following the obtained taxonomical outcomes, a strong correlation between health status at the moment of admission to the emergency department (ED) and the subsequent recovery of patients is established. Moreover, popularization and simplification of the ideas of advanced mathematicians may provide an unconventional explorative platform for clinical problems.

  14. Time-of-flight PET image reconstruction using origin ensembles.

    PubMed

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-07

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
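
    For context, the sketch below shows the list-mode EM baseline against which OE-type methods are typically compared (the OE algorithm itself, which resamples event origins, is not reproduced here). Under an assumed uniform detection sensitivity, list-mode ML-EM reduces to the classic mixture-weight EM update: each detected event is softly assigned to voxels according to its TOF-weighted kernel, and voxel intensities are re-estimated from those assignments. All names and the 1-D geometry are illustrative assumptions.

```python
import numpy as np

def listmode_em(A, n_iter=50):
    """List-mode EM under an assumed uniform detection sensitivity. A[e, j] is the
    probability density that detected event e originated in voxel j (e.g. a TOF-weighted
    kernel along its line of response). E-step: soft assignment of events to voxels;
    M-step: voxel intensities become the average responsibilities."""
    n_events, n_vox = A.shape
    x = np.full(n_vox, 1.0 / n_vox)
    for _ in range(n_iter):
        resp = A * x                                   # unnormalised responsibilities
        resp /= np.clip(resp.sum(axis=1, keepdims=True), 1e-12, None)
        x = resp.sum(axis=0) / n_events                # M-step: average responsibility
    return x

# Tiny 1-D demo: events whose TOF kernels are Gaussian bumps along the "image"
rng = np.random.default_rng(5)
n_vox, n_events = 64, 4000
x_true = np.zeros(n_vox); x_true[25:35] = 1.0
origins = rng.choice(n_vox, size=n_events, p=x_true / x_true.sum())
vox = np.arange(n_vox)
A = np.exp(-0.5 * ((vox[None, :] - origins[:, None]) / 4.0) ** 2)
x_rec = listmode_em(A)
print("mass recovered inside the true region:", x_rec[25:35].sum().round(3))
```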

  15. Time-of-flight PET image reconstruction using origin ensembles

    NASA Astrophysics Data System (ADS)

    Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven

    2015-03-01

    The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.

  16. Differentiation of uric acid versus non-uric acid kidney stones in the presence of iodine using dual-energy CT

    NASA Astrophysics Data System (ADS)

    Wang, J.; Qu, M.; Leng, S.; McCollough, C. H.

    2010-04-01

    In this study, the feasibility of differentiating uric acid from non-uric acid kidney stones in the presence of iodinated contrast material was evaluated using dual-energy CT (DECT). Iodine subtraction was accomplished with a commercial three-material decomposition algorithm to create a virtual non-contrast (VNC) image set. VNC images were then used to segment stone regions from the tissue background. The DE ratio of each stone was calculated from the CT images acquired at two different energies with DECT, using the stone map generated from the VNC images. The performance of DE ratio-based stone differentiation was evaluated at five different iodine concentrations (21, 42, 63, 84 and 105 mg/ml). The DE ratio of stones in iodine solution was found to be larger than that obtained in non-iodine cases. This is mainly caused by the partial volume effect around the boundary between the stone and the iodine solution. The overestimation of the DE ratio leads to substantial overlap between different stone types. To address the partial volume effect, an expectation-maximization (EM) approach was implemented to estimate the contributions of iodine and stone within each image pixel in their mixture area. The DE ratio of each stone was corrected to maximally remove the influence of the iodine solutions. The separation of uric-acid and non-uric-acid stones was improved in the presence of iodine solution.

  17. Reduced isothermal feature set for long wave infrared (LWIR) face recognition

    NASA Astrophysics Data System (ADS)

    Donoso, Ramiro; San Martín, Cesar; Hermosilla, Gabriel

    2017-06-01

    In this paper, we introduce a new concept in the thermal face recognition area: isothermal features. This consists of a feature vector built from a thermal signature that depends on the emission of the person's skin and its temperature. A thermal signature is the appearance of the face to infrared sensors and is unique to each person. The infrared face is decomposed into isothermal regions that present the thermal features of the face. Each isothermal region is modeled as circles whose centers represent pixels of the image, and the feature vector is composed of the maximum radius of the circles in each isothermal region. This feature vector corresponds to the thermal signature of a person. The face recognition process is built using a modification of the Expectation Maximization (EM) algorithm in conjunction with a probabilistic index proposed for the classification process. Results obtained using an infrared database are compared with typical state-of-the-art techniques, showing better performance, especially in uncontrolled acquisition scenarios.

  18. How to Deal with Interval-Censored Data Practically while Assessing the Progression-Free Survival: A Step-by-Step Guide Using SAS and R Software.

    PubMed

    Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn

    2016-12-01

    We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: one allowing for nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation-Maximization Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis when the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputation procedures, based on either a uniform distribution or the nonparametric maximum likelihood estimate (NPMLE) of the distribution. Clin Cancer Res; 22(23); 5629-35. ©2016 American Association for Cancer Research.
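
    The nonparametric estimate referred to above rests on a self-consistency (EM) step for interval-censored event times; a minimal Python sketch of that step is shown below (the ICM acceleration used by the EM-ICM algorithm, and the SAS/R procedures of the paper, are not reproduced). The grid of support points and the toy scan schedule are assumptions made for the example.

```python
import numpy as np

def turnbull_em(L, R, grid, n_iter=500):
    """Self-consistency (EM) estimate of the event-time distribution from interval-censored
    data: each subject is known only to progress within (L_i, R_i] (R_i = inf if right-censored).
    grid holds candidate support points; p[j] is the estimated probability mass at grid[j]."""
    L, R = np.asarray(L, float), np.asarray(R, float)
    # alpha[i, j] = 1 if grid[j] lies inside subject i's censoring interval
    alpha = (grid[None, :] > L[:, None]) & (grid[None, :] <= R[:, None])
    p = np.full(len(grid), 1.0 / len(grid))
    for _ in range(n_iter):
        num = alpha * p                                   # E-step: expected mass per subject
        num /= np.clip(num.sum(axis=1, keepdims=True), 1e-12, None)
        p = num.mean(axis=0)                              # M-step: average over subjects
    return p

# Example: progression detected between successive scans at months 3, 6, 9, 12
L = np.array([0, 3, 3, 6, 9, 0, 6, 9])
R = np.array([3, 6, 9, 9, 12, 6, np.inf, np.inf])          # inf = right-censored
grid = np.array([1.5, 4.5, 7.5, 10.5, 15.0])               # one point per interval + tail
p = turnbull_em(L, R, grid)
print("estimated mass per support point:", p.round(3))
print("estimated survival just after each point:", (1 - p.cumsum()).round(3))
```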

  19. A Joint Model for Longitudinal Measurements and Survival Data in the Presence of Multiple Failure Types

    PubMed Central

    Elashoff, Robert M.; Li, Gang; Li, Ning

    2009-01-01

    Summary In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for the scleroderma lung disease. PMID:18162112

  20. Automated Identification of the Heart Wall Throughout the Entire Cardiac Cycle Using Optimal Cardiac Phase for Extracted Features

    NASA Astrophysics Data System (ADS)

    Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi

    2011-07-01

    In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.

  1. Near-Earth Asteroid (NEA) Scout

    NASA Technical Reports Server (NTRS)

    McNutt, Leslie; Johnson, Les; Kahn, Peter; Castillo-Rogez, Julie; Frick, Andreas

    2014-01-01

    Near-Earth asteroids (NEAs) are the most easily accessible bodies in the solar system, and detections of NEAs are expected to grow exponentially in the near future, offering increasing target opportunities. As NASA continues to refine its plans to possibly explore these small worlds with human explorers, initial reconnaissance with comparatively inexpensive robotic precursors is necessary. Obtaining and analyzing relevant data about these bodies via robotic precursors before committing a crew to visit a NEA will significantly minimize crew and mission risk, as well as maximize exploration return potential. The Marshall Space Flight Center (MSFC) and Jet Propulsion Laboratory (JPL) are jointly examining a potential mission concept, tentatively called 'NEAScout,' utilizing a low-cost platform such as CubeSat in response to the current needs for affordable missions with exploration science value. The NEAScout mission concept would be treated as a secondary payload on the Space Launch System (SLS) Exploration Mission 1 (EM-1), the first planned flight of the SLS and the second un-crewed test flight of the Orion Multi-Purpose Crew Vehicle (MPCV).

  2. Near-Earth Asteroid Scout

    NASA Technical Reports Server (NTRS)

    McNutt, Leslie; Johnson, Les; Clardy, Dennon; Castillo-Rogez, Julie; Frick, Andreas; Jones, Laura

    2014-01-01

    Near-Earth Asteroids (NEAs) are easily accessible objects in Earth's vicinity. Detections of NEAs are expected to grow in the near future, offering increasing target opportunities. As NASA continues to refine its plans to possibly explore these small worlds with human explorers, initial reconnaissance with comparatively inexpensive robotic precursors is necessary. Obtaining and analyzing relevant data about these bodies via robotic precursors before committing a crew to visit a NEA will significantly minimize crew and mission risk, as well as maximize exploration return potential. The Marshall Space Flight Center (MSFC) and Jet Propulsion Laboratory (JPL) are jointly examining a mission concept, tentatively called 'NEA Scout,' utilizing a low-cost CubeSat platform in response to the current needs for affordable missions with exploration science value. The NEA Scout mission concept would be a secondary payload on the Space Launch System (SLS) Exploration Mission 1 (EM-1), the first planned flight of the SLS and the second un-crewed test flight of the Orion Multi-Purpose Crew Vehicle (MPCV).

  3. Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.

    PubMed

    Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji

    2016-09-01

    Extracting nonlinear dynamics from time-series data is an essential inverse problem in the natural sciences. We propose a Bayesian statistical framework for extracting the nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions involving the conjugation of multiple phases, and their dynamics are intrinsically nonlinear because of surface-area effects between the different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to the partial-observation problem, in order to simultaneously estimate the time course of the hidden variables and the kinetic parameters underlying the dynamics. The proposed belief propagation method is performed using a sequential Monte Carlo algorithm in order to estimate the nonlinear dynamical system. Using our proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of solid reactants and products, were successfully estimated only from the observable temporal changes in the concentration of the dissolved intermediate product.

  4. A Zero- and K-Inflated Mixture Model for Health Questionnaire Data

    PubMed Central

    Finkelman, Matthew D.; Green, Jennifer Greif; Gruber, Michael J.; Zaslavsky, Alan M.

    2011-01-01

    In psychiatric assessment, Item Response Theory (IRT) is a popular tool to formalize the relation between the severity of a disorder and associated responses to questionnaire items. Practitioners of IRT sometimes make the assumption of normally distributed severities within a population; while convenient, this assumption is often violated when measuring psychiatric disorders. Specifically, there may be a sizable group of respondents whose answers place them at an extreme of the latent trait spectrum. In this article, a zero- and K-inflated mixture model is developed to account for the presence of such respondents. The model is fitted using an expectation-maximization (E-M) algorithm to estimate the percentage of the population at each end of the continuum, concurrently analyzing the remaining “graded component” via IRT. A method to perform factor analysis for only the graded component is introduced. In assessments of oppositional defiant disorder and conduct disorder, the zero- and K-inflated model exhibited better fit than the standard IRT model. PMID:21365673

  5. Half-blind remote sensing image restoration with partly unknown degradation

    NASA Astrophysics Data System (ADS)

    Xie, Meihua; Yan, Fengxia

    2017-01-01

    The problem of image restoration has been extensively studied for its practical importance and theoretical interest. This paper mainly discusses the problem of image restoration with a partly unknown kernel. In this model, the form of the degradation kernel is known but its parameters are unknown, so the parameters of the Gaussian kernel and the true image must be estimated simultaneously. For this new problem, a total variation restoration model is proposed and an alternating-direction iteration algorithm is designed. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) are used to measure the performance of the method. Numerical results show that the kernel parameters can be estimated accurately, and that the new method achieves both much higher PSNR and much higher SSIM than the expectation maximization (EM) method in many cases. In addition, the accuracy of the estimation is not sensitive to noise. Furthermore, even when the support of the kernel is unknown, the method can still be used to obtain accurate estimates.

  6. Forecasting continuously increasing life expectancy: what implications?

    PubMed

    Le Bourg, Eric

    2012-04-01

    It has been proposed that life expectancy could linearly increase in the next decades and that median longevity of the youngest birth cohorts could reach 105 years or more. These forecasts have been criticized but it seems that their implications for future maximal lifespan (i.e. the lifespan of the last survivors) have not been considered. These implications make these forecasts untenable and it is less risky to hypothesize that life expectancy and maximal lifespan will reach an asymptotic limit in some decades from now. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Expected Annual Emergency Miles per Ambulance: An Indicator for Measuring Availability of Emergency Medical Services Resources

    ERIC Educational Resources Information Center

    Patterson, P. Daniel; Probst, Janice C.; Moore, Charity G.

    2006-01-01

    Context: To ensure equitable access to prehospital care, as recommended by the Rural and Frontier Emergency Medical Services (EMS) Agenda for the Future, policymakers will need a uniform measure of EMS infrastructure. Purpose and Methods: This paper proposes a county-level indicator of EMS resource availability that takes into consideration…

  8. The efficacy and safety of whole-body electromyostimulation in applying to human body: based from graded exercise test.

    PubMed

    Jee, Yong-Seok

    2018-02-01

    Recently, whole-body electromyostimulation (WB-EMS) has upgraded its functions and capabilities and has overcome the limitations and inconveniences of past systems. Although the efficacy and safety of EMS have been examined in some studies, specific guidelines for applying WB-EMS are lacking. The aim was to determine the efficacy and safety of applying WB-EMS in healthy men to improve cardiopulmonary and psychophysiological variables. Sixty-four participants were randomly assigned to a control group (without electrical stimuli) or a WB-EMS group after a 6-week baseline period. The control group (n=33; female, 15; male, 18) wore the WB-EMS suit as much as the WB-EMS group (n=31; female, 15; male, 16). There were no abnormal changes in the cardiopulmonary variables (heart rate, systolic blood pressure [SBP], diastolic blood pressure, and oxygen uptake) during or after the graded exercise test (GXT) in either group. There was a significant decrease in SBP and an increase in oxygen uptake from stages 3 to 5 of the GXT in the WB-EMS group. The psychophysiological factors for the WB-EMS group, which consisted of soreness, anxiety, fatigability, and sleeplessness, were significantly decreased after the experiment. The application of WB-EMS in healthy young men did not negatively affect the cardiopulmonary and psychophysiological factors. Rather, the application of WB-EMS improved SBP and oxygen uptake in the submaximal and maximal stages of the GXT. This study also confirmed that 6 weeks of WB-EMS training can improve psychophysiological factors.

  9. Hierarchical trie packet classification algorithm based on expectation-maximization clustering.

    PubMed

    Bi, Xia-An; Zhao, Junxia

    2017-01-01

    With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing approaches, research on packet classification algorithms based on hierarchical tries has become an important branch of packet classification research because of their wide practical use. Although the hierarchical trie helps save a large amount of storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, the paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, it uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, it builds a hierarchical trie based on the results of the expectation-maximization clustering. Finally, it conducts simulation and real-environment experiments to compare the performance of our algorithm with other typical algorithms and analyzes the experimental results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm.
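
    To make the clustering stage concrete, the following hand-rolled sketch runs EM for a k-component Gaussian mixture on 2-D points, which is the kind of expectation-maximization clustering the rules are mapped through. The rule-to-point mapping and the trie construction are not reproduced, and spherical covariances are assumed for brevity.

```python
import numpy as np

def gmm_em_2d(X, k, n_iter=100, seed=0):
    """Hand-rolled EM for a k-component Gaussian mixture in 2-D (spherical covariances).
    Returns mixture weights, means, variances, and hard cluster labels."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, k, replace=False)]
    var = np.full(k, X.var())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        log_r = np.log(w) - 0.5 * d2 / var - (d / 2) * np.log(2 * np.pi * var)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and spherical variances
        nk = r.sum(axis=0) + 1e-12
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        var = np.maximum((r * d2).sum(axis=0) / (d * nk), 1e-6)
    return w, mu, var, r.argmax(axis=1)

rng = np.random.default_rng(6)
X = np.vstack([rng.normal([0, 0], 0.5, (300, 2)),
               rng.normal([4, 4], 0.8, (300, 2)),
               rng.normal([0, 5], 0.6, (300, 2))])
w, mu, var, labels = gmm_em_2d(X, k=3)
print(np.round(mu, 2))
```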

  10. Mapping of electrical muscle stimulation using MRI

    NASA Technical Reports Server (NTRS)

    Adams, Gregory R.; Harris, Robert T.; Woodard, Daniel; Dudley, Gary A.

    1993-01-01

    The pattern of muscle contractile activity elicited by electromyostimulation (EMS) was mapped and compared to the contractile-activity pattern produced by voluntary effort. This was done by examining the patterns and the extent of contrast shift, as indicated by T2 values, in magnetic resonance (MR) images after isometric activity of the left m. quadriceps of human subjects was elicited by EMS (1-sec train of 500-microsec sine wave pulses at 50 Hz) or voluntary effort. The results suggest that, whereas EMS stimulates the same fibers repeatedly, thereby increasing the metabolic demand and T2 values, the voluntary efforts are performed by more diffuse asynchronous activation of skeletal muscle even at forces up to 75 percent of maximal to maintain performance.

  11. Volume versus value maximization illustrated for Douglas-fir with thinning

    Treesearch

    Kurt H. Riitters; J. Douglas Brodie; Chiang Kao

    1982-01-01

    Economic and physical criteria for selecting even-aged rotation lengths are reviewed with examples of their optimizations. To demonstrate the trade-off between physical volume, economic return, and stand diameter, examples of thinning regimes for maximizing volume, forest rent, and soil expectation are compared with an example of maximizing volume without thinning. The...

  12. Noise-enhanced clustering and competitive learning algorithms.

    PubMed

    Osoba, Osonde; Kosko, Bart

    2013-01-01

    Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
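
    A minimal sketch of the idea, under the assumption that a simple decaying noise schedule is acceptable for illustration: Lloyd's k-means with optional annealed noise injected into the centroid update. Whether the noise actually accelerates convergence depends on the data and the schedule; the code below only shows where the noise enters and does not reproduce the paper's analysis.

```python
import numpy as np

def kmeans(X, k, n_iter=30, noise_scale=0.0, seed=0):
    """Lloyd's k-means with optional annealed noise added to the centroid update
    (illustrative sketch of a noise-injected variant, not the paper's algorithm)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for t in range(n_iter):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                C[j] = members.mean(axis=0)
        C += rng.normal(0.0, noise_scale / (t + 1), C.shape)   # decaying noise injection
    return C, labels

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(m, 0.7, (200, 2)) for m in ([0, 0], [5, 0], [0, 5])])
C_plain, _ = kmeans(X, 3, noise_scale=0.0)
C_noisy, _ = kmeans(X, 3, noise_scale=0.5)
print(np.round(np.sort(C_plain, axis=0), 2))
print(np.round(np.sort(C_noisy, axis=0), 2))
```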

  13. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    PubMed

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy while requiring a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in reduced complexity for these stages. To improve the computational time, a novel parallel architecture was employed to exploit the parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing, and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with speed-ups of over nine and three times over PCA and Parallel PCA, respectively.
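
    The EM-for-PCA iteration that such approaches build on can be sketched in a few lines (serial version only; the parallel PEM-PCA architecture and the face-recognition pipeline are not reproduced). The E-step computes latent coordinates for the current basis and the M-step refits the basis, avoiding an explicit covariance matrix and eigendecomposition; the data sizes below are assumptions.

```python
import numpy as np

def em_pca(X, n_components, n_iter=100, seed=0):
    """EM iteration for the principal subspace (Roweis-style EM-PCA).
    X: (n_samples, n_features). Returns an orthonormal basis of the learned subspace."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)                  # centre the data
    d = X.shape[1]
    W = rng.normal(size=(d, n_components))   # initial basis of the latent subspace
    for _ in range(n_iter):
        # E-step: latent coordinates Z given the current basis W
        Z = np.linalg.solve(W.T @ W, W.T @ Xc.T)          # (n_components, n_samples)
        # M-step: new basis given the latent coordinates
        W = Xc.T @ Z.T @ np.linalg.inv(Z @ Z.T)
    Q, _ = np.linalg.qr(W)                   # orthonormalise for comparison with PCA
    return Q

rng = np.random.default_rng(8)
X = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 20)) + 0.05 * rng.normal(size=(500, 20))
Q = em_pca(X, n_components=2)
# Compare with SVD-based PCA: the two subspaces should (nearly) coincide
U, S, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
print("cosines of principal angles:", np.round(np.linalg.svd(Q.T @ Vt[:2].T)[1], 4))
```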

  14. Matroshka AstroRad Radiation Experiment (MARE) on the Deep Space Gateway

    NASA Astrophysics Data System (ADS)

    Gaza, R.; Hussein, H.; Murrow, D.; Hopkins, J.; Waterman, G.; Milstein, O.; Berger, T.; Przybyla, B.; Aeckerlein, J.; Marsalek, K.; Matthiae, D.; Rutczynska, A.

    2018-02-01

    The Matroshka AstroRad Radiation Experiment is a science payload on the Orion EM-1 flight. A research platform derived from MARE is proposed for the Deep Space Gateway. Feedback is invited on desired Deep Space Gateway design features that would maximize its science potential.

  15. Electromagnetic field interactions with the human body: Observed effects and theories

    NASA Technical Reports Server (NTRS)

    Raines, J. K.

    1981-01-01

    The effects of nonionizing electromagnetic (EM) field interactions with the human body were reported and human-related studies were collected. Nonionizing EM fields are linked to cancer in humans in three different ways: cause, means of detection, and effective treatment. Both adverse and benign effects are expected from nonionizing EM fields, and much more knowledge is necessary to properly categorize and qualify EM field characteristics. It is concluded that knowledge of the boundary between categories, largely dependent on field intensity, is vital to the proper future use of EM radiation for any purpose and to the protection of the individual from hazard.

  16. SPECT reconstruction using DCT-induced tight framelet regularization

    NASA Astrophysics Data System (ADS)

    Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej

    2015-03-01

    Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in Emission Computed Tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be used effectively as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF). The mean square error (MSE) was also smaller compared with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT wavelet frame regularizer shows promise for SPECT image reconstruction using the PAPA method.

  17. Mining patterns in persistent surveillance systems with smart query and visual analytics

    NASA Astrophysics Data System (ADS)

    Habibi, Mohammad S.; Shirkhodaie, Amir

    2013-05-01

    In Persistent Surveillance Systems (PSS) the ability to detect and characterize events geospatially helps in taking pre-emptive steps to counter an adversary's actions. The interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need to identify and offset these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require a method for their filtration before being processed further. In this paper, we have introduced an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from sensor-generated semantic annotated messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a Temporal-Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) for Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. In this paper, we present a new visual analytic tool for testing and evaluating group activities detected under this control scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
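
    The dynamic time warping step used to match time-varying spatiotemporal sequences can be sketched as below; this minimal implementation assumes simple Euclidean frame distances and is only meant to illustrate the alignment idea, not the authors' Temporal-DTW/GMM pipeline.

        import numpy as np

        def dtw_distance(a, b):
            """Classic dynamic time warping between two feature sequences (n x d and m x d)."""
            n, m = len(a), len(b)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        # Example: a sequence compared with a time-compressed copy of itself
        t = np.linspace(0, 2 * np.pi, 60)
        seq1 = np.column_stack([np.sin(t), np.cos(t)])
        seq2 = seq1[::2]                       # same pattern played twice as fast
        print(dtw_distance(seq1, seq2))        # small relative to an unrelated sequence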

  18. Development of the Animal Management and Husbandry Online Placement Tool.

    PubMed

    Bates, Lucy; Crowther, Emma; Bell, Catriona; Kinnison, Tierney; Baillie, Sarah

    2013-01-01

    The workplace provides veterinary students with opportunities to develop a range of skills, making workplace learning an important part of veterinary education in many countries. Good preparation for work placements is vital to maximize learning; to this end, our group has developed a series of three computer-aided learning (CAL) packages to support students. The third of this series is the Animal Management and Husbandry Online Placement Tool (AMH OPT). Students need a sound knowledge of animal husbandry and the ability to handle the common domestic species. However, teaching these skills at university is not always practical and requires considerable resources. In the UK, the Royal College of Veterinary Surgeons (RCVS) requires students to complete 12 weeks of pre-clinical animal management and husbandry work placements or extramural studies (EMS). The aims are for students to improve their animal handling skills and awareness of husbandry systems, develop communication skills, and understand their future clients' needs. The AMH OPT is divided into several sections: Preparation, What to Expect, Working with People, Professionalism, Tips, and Frequently Asked Questions. Three stakeholder groups (university EMS coordinators, placement providers, and students) were consulted initially to guide the content and design and later to evaluate previews. Feedback from stakeholders was used in an iterative design process, resulting in a program that aims to facilitate student preparation, optimize the learning opportunities, and improve the experience for both students and placement providers. The CAL is available online and is open-access worldwide to support students during veterinary school.

  19. Fusing Continuous-Valued Medical Labels Using a Bayesian Model.

    PubMed

    Zhu, Tingting; Dunkley, Nic; Behar, Joachim; Clifton, David A; Clifford, Gari D

    2015-12-01

    With the rapid increase in the volume of time series medical data available through wearable devices, there is a need to employ automated algorithms to label data. Examples of labels include interventions, changes in activity (e.g. sleep) and changes in physiology (e.g. arrhythmias). However, automated algorithms tend to be unreliable, resulting in lower quality care. Expert annotations are scarce, expensive, and prone to significant inter- and intra-observer variance. To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimate of the aggregated label while accurately inferring the precision and bias of each algorithm. The BCLA was applied to QT interval (pro-arrhythmic indicator) estimation from the electrocardiogram using labels from the 2006 PhysioNet/Computing in Cardiology Challenge database. It was compared to mean and median voting as well as a previously proposed Expectation Maximization (EM) label aggregation approach. While accurately predicting each labelling algorithm's bias and precision, the root-mean-square error of the BCLA was 11.78 ± 0.63 ms, significantly outperforming the best Challenge entry (15.37 ± 2.13 ms) as well as the EM, mean, and median voting strategies (14.76 ± 0.52, 17.61 ± 0.55, and 14.43 ± 0.57 ms respectively, with p < 0.0001). The BCLA could therefore provide accurate estimation for medical continuous-valued label tasks in an unsupervised manner, even when the ground truth is not available.

  20. Man-Portable Simultaneous Magnetometer and EM System (MSEMS)

    DTIC Science & Technology

    2008-12-01

    ...expensive fluxgate magnetometers. This is because the interleaving hardware is expecting a Larmor signal as input; it performs period counting of the ... Larmor signal between EM61 pulses to convert the frequency-based Larmor signal into nanotesla. A fluxgate magnetometer does not employ the resonance ... (Final report, ESTCP Project MM-0414, December 2008, Robert Siegel.)

  1. Hierarchical trie packet classification algorithm based on expectation-maximization clustering

    PubMed Central

    Bi, Xia-an; Zhao, Junxia

    2017-01-01

    With the development of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among the existing algorithms, research on packet classification based on the hierarchical trie has become an important branch because of its wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses a formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, this paper builds a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts both simulation and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the experimental results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low trie-update efficiency, which greatly improves the performance of the algorithm. PMID:28704476
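
    A minimal sketch of the rule-clustering step, assuming the rules have already been mapped to points in a two-dimensional space as the abstract describes: fit a Gaussian mixture by expectation-maximization and read off one cluster label per rule, after which a separate trie could be built per cluster. The point mapping and the cluster count below are placeholders, not the HTEMC specification.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        # Hypothetical rules already mapped to 2-D points (e.g. scaled prefix midpoints)
        rng = np.random.default_rng(0)
        rule_points = np.vstack([rng.normal([0.2, 0.8], 0.05, (100, 2)),
                                 rng.normal([0.7, 0.3], 0.05, (100, 2))])

        # EM clustering of the rule set; n_components is a tuning choice, not prescribed by the paper
        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        cluster_ids = gmm.fit_predict(rule_points)

        # One hierarchical trie would then be built for each cluster of rules
        for c in np.unique(cluster_ids):
            print(f"cluster {c}: {np.sum(cluster_ids == c)} rules")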

  2. Maximizing carbon storage in the Appalachians: A method for considering the risk of disturbance events

    Treesearch

    Michael R. Vanderberg; Kevin Boston; John Bailey

    2011-01-01

    Accounting for the probability of loss due to disturbance events can influence the prediction of carbon flux over a planning horizon, and can affect the determination of optimal silvicultural regimes to maximize terrestrial carbon storage. A preliminary model that includes forest disturbance-related carbon loss was developed to maximize expected values of carbon stocks...

  3. Cerebral cortex activation mapping upon electrical muscle stimulation by 32-channel time-domain functional near-infrared spectroscopy.

    PubMed

    Re, Rebecca; Muthalib, Makii; Contini, Davide; Zucchelli, Lucia; Torricelli, Alessandro; Spinelli, Lorenzo; Caffini, Matteo; Ferrari, Marco; Quaresima, Valentina; Perrey, Stephane; Kerr, Graham

    2013-01-01

    The application of different EMS current thresholds on muscle activates not only the muscle but also peripheral sensory axons that send proprioceptive and pain signals to the cerebral cortex. A 32-channel time-domain fNIRS instrument was employed to map regional cortical activities under varied EMS current intensities applied on the right wrist extensor muscle. Eight healthy volunteers underwent four EMS at different current thresholds based on their individual maximal tolerated intensity (MTI), i.e., 10 % < 50 % < 100 % < over 100 % MTI. Time courses of the absolute oxygenated and deoxygenated hemoglobin concentrations primarily over the bilateral sensorimotor cortical (SMC) regions were extrapolated, and cortical activation maps were determined by general linear model using the NIRS-SPM software. The stimulation-induced wrist extension paradigm significantly increased activation of the contralateral SMC region according to the EMS intensities, while the ipsilateral SMC region showed no significant changes. This could be due in part to a nociceptive response to the higher EMS current intensities and result also from increased sensorimotor integration in these cortical regions.

  4. Expectation maximization-based likelihood inference for flexible cure rate models with Weibull lifetimes.

    PubMed

    Balakrishnan, Narayanaswamy; Pal, Suvra

    2016-08-01

    Recently, a flexible cure rate survival model has been developed by assuming the number of competing causes of the event of interest to follow the Conway-Maxwell-Poisson distribution. This model includes some of the well-known cure rate models discussed in the literature as special cases. Data obtained from cancer clinical trials are often right censored, and the expectation maximization algorithm can be used in this case to efficiently estimate the model parameters based on right censored data. In this paper, we consider the competing cause scenario and, assuming the time-to-event to follow the Weibull distribution, we derive the necessary steps of the expectation maximization algorithm for estimating the parameters of different cure rate survival models. The standard errors of the maximum likelihood estimates are obtained by inverting the observed information matrix. The method of inference developed here is examined by means of an extensive Monte Carlo simulation study. Finally, we illustrate the proposed methodology with real data on cancer recurrence. © The Author(s) 2013.

  5. Very Slow Search and Reach: Failure to Maximize Expected Gain in an Eye-Hand Coordination Task

    PubMed Central

    Zhang, Hang; Morvan, Camille; Etezad-Heydari, Louis-Alexandre; Maloney, Laurence T.

    2012-01-01

    We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt. PMID:23071430

  6. Mixture class recovery in GMM under varying degrees of class separation: frequentist versus Bayesian estimation.

    PubMed

    Depaoli, Sarah

    2013-06-01

    Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial-knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained though the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  7. Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.

    PubMed

    Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús

    2008-10-01

    Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in of Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays. The algorithms for protein and domain interaction inference include maximum likelihood estimation (MLE) using expectation maximization (EM), the set cover approach maximum specificity set cover (MSSC), and the sum-product algorithm (SPA). After accepting an input set of proteins with Uniprot ID/Accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and to be simple to use. An example of inference in a signaling network of the myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website: http://cytoprophet.cse.nd.edu.

  8. Acceleration of the direct reconstruction of linear parametric images using nested algorithms.

    PubMed

    Wang, Guobao; Qi, Jinyi

    2010-03-07

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  9. A closed-form solution to tensor voting: theory and applications.

    PubMed

    Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard

    2012-08-01

    We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed the structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require the random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation of its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. Extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.

  10. Locally adaptive MR intensity models and MRF-based segmentation of multiple sclerosis lesions

    NASA Astrophysics Data System (ADS)

    Galimzianova, Alfiia; Lesjak, Žiga; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga

    2015-03-01

    Neuroimaging biomarkers are an important paraclinical tool used to characterize a number of neurological diseases, however, their extraction requires accurate and reliable segmentation of normal and pathological brain structures. For MR images of healthy brains the intensity models of normal-appearing brain tissue (NABT) in combination with Markov random field (MRF) models are known to give reliable and smooth NABT segmentation. However, the presence of pathology, MR intensity bias and natural tissue-dependent intensity variability altogether represent difficult challenges for a reliable estimation of NABT intensity model based on MR images. In this paper, we propose a novel method for segmentation of normal and pathological structures in brain MR images of multiple sclerosis (MS) patients that is based on locally-adaptive NABT model, a robust method for the estimation of model parameters and a MRF-based segmentation framework. Experiments on multi-sequence brain MR images of 27 MS patients show that, compared to whole-brain model and compared to the widely used Expectation-Maximization Segmentation (EMS) method, the locally-adaptive NABT model increases the accuracy of MS lesion segmentation.

  11. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany

    PubMed Central

    Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun

    2017-01-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD) and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only gives more accurate, or at least comparable, estimates but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498

  12. Expectation maximization for hard X-ray count modulation profiles

    NASA Astrophysics Data System (ADS)

    Benvenuto, F.; Schwartz, R.; Piana, M.; Massone, A. M.

    2013-07-01

    Context. This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Aims: Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized to analyze count modulation profiles in solar hard X-ray imaging based on rotating modulation collimators. Methods: The algorithm described in this paper solves the maximum likelihood problem iteratively and encodes a positivity constraint into the iterative optimization scheme. The result is therefore a classical expectation maximization method this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, at the same time, a very satisfactory Cash-statistic (C-statistic). Results: The method is applied to both reproduce synthetic flaring configurations and reconstruct images from experimental data corresponding to three real events. In this second case, the performance of expectation maximization, when compared to Pixon image reconstruction, shows a comparable accuracy and a notably reduced computational burden; when compared to CLEAN, shows a better fidelity with respect to the measurements with a comparable computational effectiveness. Conclusions: If optimally stopped, expectation maximization represents a very reliable method for image reconstruction in the RHESSI context when count modulation profiles are used as input data.
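
    For reference, the positivity-preserving iteration referred to here is the standard ML-EM update for Poisson data; writing the expected count modulation profile as A x for a system matrix A and measured counts y, one EM step takes the multiplicative form (notation mine, not taken from the paper)

        x_j^{(k+1)} = \frac{x_j^{(k)}}{\sum_i A_{ij}} \sum_i A_{ij} \, \frac{y_i}{\left(A x^{(k)}\right)_i},

    which keeps the iterates non-negative; stopping the iteration early then acts as the regularization discussed in the abstract.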

  13. Statistical iterative reconstruction for streak artefact reduction when using multidetector CT to image the dento-alveolar structures.

    PubMed

    Dong, J; Hayakawa, Y; Kober, C

    2014-01-01

    When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, images with weak artefacts were reconstructed using the projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization algorithm was applied. Next, the ordered subset-expectation maximization algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphic processing unit was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment. The general-purpose graphic processing unit provided high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction, and the alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphic processing unit, achieved fast artefact correction.
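
    To make the update concrete, a toy ordered-subset EM loop for a linear Poisson model is sketched below; the system matrix, subset count and data are placeholders, and the sketch ignores CT-specific physics (polychromatic beam, log transform) entirely, so it only illustrates the ML-EM/OS-EM mechanics mentioned above.

        import numpy as np

        def osem(A, y, n_subsets=4, n_iters=10, eps=1e-12):
            """Ordered-subset EM for y ~ Poisson(A x); the rows of A are split into subsets."""
            n_rays, n_pix = A.shape
            x = np.ones(n_pix)
            subsets = np.array_split(np.arange(n_rays), n_subsets)
            for _ in range(n_iters):
                for idx in subsets:
                    As, ys = A[idx], y[idx]
                    ratio = ys / (As @ x + eps)                         # measured / forward-projected
                    x = x * (As.T @ ratio) / (As.T.sum(axis=1) + eps)   # multiplicative EM update
            return x

        # Tiny synthetic example
        rng = np.random.default_rng(0)
        A = rng.uniform(0, 1, (64, 16))
        x_true = rng.uniform(0.5, 2.0, 16)
        y = rng.poisson(A @ x_true)
        print(np.round(osem(A, y), 2))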

  14. NASA EM Followup of LIGO-Virgo Candidate Events

    NASA Technical Reports Server (NTRS)

    Blackburn, Lindy L.

    2011-01-01

    We present a strategy for a follow-up of LIGO-Virgo candidate events using offline survey data from several NASA high-energy photon instruments aboard RXTE, Swift, and Fermi. Time and sky-location information provided by the GW trigger allows for a targeted search for prompt and afterglow EM signals. In doing so, we expect to be sensitive to signals which are too weak to be publicly reported as astrophysical EM events.

  15. Differential correlation for sequencing data.

    PubMed

    Siska, Charlotte; Kechris, Katerina

    2017-01-19

    Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data is in the form of counts, often modeled with a negative binomial distribution making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation, and faster implementation, using a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and improved ability to identify experimentally validated breast cancer miRNA. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data. Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.
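
    A simplified sketch of the underlying idea, assuming Fisher-z-transformed Spearman correlations computed per feature pair in two conditions and a three-component Gaussian mixture fitted by EM to the correlation differences; the actual Discordant package models a joint mixture over both groups, so this is only a rough approximation of one ingredient.

        import numpy as np
        from scipy.stats import spearmanr
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        n_pairs, n_samples = 300, 40

        # Hypothetical count-like data: per-pair Spearman correlations in two conditions
        z_diff = np.empty(n_pairs)
        for p in range(n_pairs):
            x1, y1 = rng.poisson(10, n_samples), rng.poisson(10, n_samples)
            x2, y2 = rng.poisson(10, n_samples), rng.poisson(10, n_samples)
            r1, r2 = spearmanr(x1, y1)[0], spearmanr(x2, y2)[0]
            z_diff[p] = np.arctanh(r1) - np.arctanh(r2)   # Fisher z-transform of each correlation

        # EM fit of a mixture: a null component plus components capturing differential correlation
        gmm = GaussianMixture(n_components=3, random_state=0).fit(z_diff.reshape(-1, 1))
        posterior = gmm.predict_proba(z_diff.reshape(-1, 1))
        print(posterior[:5].round(3))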

  16. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    PubMed

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.

  17. Field use of maximal sprint speed by collared lizards (Crotaphytus collaris): compensation and sexual selection.

    PubMed

    Husak, Jerry F; Fox, Stanley F

    2006-09-01

    To understand how selection acts on performance capacity, the ecological role of the performance trait being measured must be determined. Knowing if and when an animal uses maximal performance capacity may give insight into what specific selective pressures may be acting on performance, because individuals are expected to use close to maximal capacity only in contexts important to survival or reproductive success. Furthermore, if an ecological context is important, poor performers are expected to compensate behaviorally. To understand the relative roles of natural and sexual selection on maximal sprint speed capacity we measured maximal sprint speed of collared lizards (Crotaphytus collaris) in the laboratory and field-realized sprint speed for the same individuals in three different contexts (foraging, escaping a predator, and responding to a rival intruder). Females used closer to maximal speed while escaping predators than in the other contexts. Adult males, on the other hand, used closer to maximal speed while responding to an unfamiliar male intruder tethered within their territory. Sprint speeds during foraging attempts were far below maximal capacity for all lizards. Yearlings appeared to compensate for having lower absolute maximal capacity by using a greater percentage of their maximal capacity while foraging and escaping predators than did adults of either sex. We also found evidence for compensation within age and sex classes, where slower individuals used a greater percentage of their maximal capacity than faster individuals. However, this was true only while foraging and escaping predators and not while responding to a rival. Collared lizards appeared to choose microhabitats near refugia such that maximal speed was not necessary to escape predators. Although natural selection for predator avoidance cannot be ruled out as a selective force acting on locomotor performance in collared lizards, intrasexual selection for territory maintenance may be more important for territorial males.

  18. A new automatic algorithm for quantification of myocardial infarction imaged by late gadolinium enhancement cardiovascular magnetic resonance: experimental validation and comparison to expert delineations in multi-center, multi-vendor patient data.

    PubMed

    Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar

    2016-05-04

    Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in-vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in PSIR images (n = 49). The EWA algorithm was validated experimentally and in patient data with a low bias in both IR and PSIR LGE images. Thus, the use of EM and a weighted intensity as in the EWA algorithm, may serve as a clinical standard for the quantification of myocardial infarction in LGE CMR images. CHILL-MI: NCT01379261 . NCT01374321 .
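
    The core of the EWA idea, an EM-derived intensity threshold combined with a weighted summation to approximate partial-volume contributions, might look roughly like the sketch below; the two-class mixture, the linear weighting and the omission of the a priori term are simplifications of mine, not the published algorithm.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def infarct_fraction(myocardial_intensities, seed=0):
            """EM separates remote and enhanced intensity classes; voxels then contribute
            a fractional weight between 0 (remote) and 1 (core infarct)."""
            x = myocardial_intensities.reshape(-1, 1)
            gmm = GaussianMixture(n_components=2, random_state=seed).fit(x)
            lo, hi = np.sort(gmm.means_.ravel())
            w = np.clip((x.ravel() - lo) / (hi - lo), 0.0, 1.0)   # partial-volume style weighting
            return w.sum() / len(w)

        # Synthetic LGE-like intensities: 70% remote, 30% enhanced myocardium
        rng = np.random.default_rng(1)
        voxels = np.concatenate([rng.normal(300, 40, 700), rng.normal(800, 60, 300)])
        print(f"infarct size ~ {100 * infarct_fraction(voxels):.1f}% of the myocardium")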

  19. Application of distance-dependent resolution compensation and post-reconstruction filtering for myocardial SPECT

    NASA Astrophysics Data System (ADS)

    Hutton, Brian F.; Lau, Yiu H.

    1998-06-01

    Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for data, including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset size 2 and 4, with/without 3D compensation for detector response (CDR). Also post-reconstruction filtering (PRF) was performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with cutoff frequency greater than improved noise with no reduction in recovery coefficient for myocardium but the effect was less when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
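
    The post-reconstruction filtering step can be illustrated with a generic frequency-domain Butterworth low-pass applied to a reconstructed volume; the cutoff convention and the synthetic volume below are stand-ins, not the exact filter parameters of the study.

        import numpy as np

        def butterworth_3d(volume, cutoff=0.2, order=5):
            """Order-n Butterworth low-pass in 3D frequency space.
            cutoff is in cycles per voxel (the Nyquist frequency is 0.5)."""
            freqs = [np.fft.fftfreq(n) for n in volume.shape]
            fx, fy, fz = np.meshgrid(*freqs, indexing="ij")
            radius = np.sqrt(fx**2 + fy**2 + fz**2)
            H = 1.0 / (1.0 + (radius / cutoff) ** (2 * order))   # Butterworth transfer function
            return np.real(np.fft.ifftn(np.fft.fftn(volume) * H))

        vol = np.random.default_rng(0).normal(size=(32, 32, 32))
        smoothed = butterworth_3d(vol, cutoff=0.2)
        print(vol.std(), smoothed.std())   # the filtered volume has less high-frequency content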

  20. GLISTR: Glioma Image Segmentation and Registration

    PubMed Central

    Pohl, Kilian M.; Bilello, Michel; Cirillo, Luigi; Biros, George; Melhem, Elias R.; Davatzikos, Christos

    2015-01-01

    We present a generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels. The proposed method is based on the expectation maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the original atlas into one with tumor and edema adapted to best match a given set of patient's images. The modified atlas is registered into the patient space and utilized for estimating the posterior probabilities of various tissue labels. EM iteratively refines the estimates of the posterior probabilities of tissue labels, the deformation field and the tumor growth model parameters. Hence, in addition to segmentation, the proposed method results in atlas registration and a low-dimensional description of the patient scans through estimation of tumor model parameters. We validate the method by automatically segmenting 10 MR scans and comparing the results to those produced by clinical experts and two state-of-the-art methods. The resulting segmentations of tumor and edema outperform the results of the reference methods, and achieve an accuracy similar to that of a second human rater. We additionally apply the method to 122 patient scans and report the estimated tumor model parameters and their relations with segmentation and registration results. Based on the results from this patient population, we construct a statistical atlas of the glioma by inverting the estimated deformation fields to warp the tumor segmentations of the patient scans into a common space. PMID:22907965

  1. Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat

    PubMed Central

    Casanova, Joaquin J.; O'Shaughnessy, Susan A.; Evett, Steven R.; Rush, Charles M.

    2014-01-01

    Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications. PMID:25251410
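
    The segmentation step, clustering image pixels into soil and vegetation classes by EM and then summarizing vegetation hue, might look roughly like the sketch below; the colour-space handling and the two-class assumption are mine, not the instrument's actual processing chain.

        import numpy as np
        from sklearn.mixture import GaussianMixture
        import colorsys

        def vegetation_hue(rgb_image, seed=0):
            """Cluster pixels into two classes with EM (soil vs vegetation) and
            return the mean hue (degrees) of the class closest to green."""
            pixels = rgb_image.reshape(-1, 3).astype(float) / 255.0
            labels = GaussianMixture(n_components=2, random_state=seed).fit_predict(pixels)
            hues = np.array([colorsys.rgb_to_hsv(*p)[0] * 360.0 for p in pixels])
            mean_hue = [hues[labels == c].mean() for c in (0, 1)]
            veg = int(np.argmin([abs(h - 120.0) for h in mean_hue]))   # assume vegetation is greener
            return mean_hue[veg]

        # Synthetic 16x16 image: brownish soil pixels plus greenish canopy pixels
        rng = np.random.default_rng(0)
        soil = rng.integers(90, 140, (128, 3)); soil[:, 2] //= 2
        veg = np.column_stack([rng.integers(40, 80, 128), rng.integers(120, 180, 128), rng.integers(40, 80, 128)])
        img = np.vstack([soil, veg]).reshape(16, 16, 3).astype(np.uint8)
        print(round(vegetation_hue(img), 1))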

  2. Fast estimation of diffusion tensors under Rician noise by the EM algorithm.

    PubMed

    Liu, Jia; Gasbarra, Dario; Railavo, Juha

    2016-01-15

    Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomic, structural and orientational information about fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we are able to transform a non-linear regression problem into the generalized linear modeling framework, reducing the computational cost dramatically. The Fisher-scoring method is used to achieve fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data over a wide range of b-amplitudes up to 14,000 s/mm(2). Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying the priors. We describe how numerically close the estimators of model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Optimizing convergence rates of alternating minimization reconstruction algorithms for real-time explosive detection applications

    NASA Astrophysics Data System (ADS)

    Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.

    2016-05-01

    X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.

  4. Confronting Diversity in the Community College Classroom: Six Maxims for Good Teaching.

    ERIC Educational Resources Information Center

    Gillett-Karam, Rosemary

    1992-01-01

    Emphasizes the leadership role of community college faculty in developing critical teaching strategies focusing attention on the needs of women and minorities. Describes six maxims of teaching excellence: engaging students' desire to learn, increasing opportunities, eliminating obstacles, empowering students through high expectations, offering…

  5. Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.

    ERIC Educational Resources Information Center

    Goetschel, Roy; Voxman, William

    1987-01-01

    Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)

  6. Finite element analysis of time-independent superconductivity. Ph.D. Thesis Final Report

    NASA Technical Reports Server (NTRS)

    Schuler, James J.

    1993-01-01

    The development of electromagnetic (EM) finite elements based upon a generalized four-potential variational principle is presented. The use of the four-potential variational principle allows for downstream coupling of EM fields with the thermal, mechanical, and quantum effects exhibited by superconducting materials. The use of variational methods to model an EM system allows for a greater range of applications than just the superconducting problem. The four-potential variational principle can be used to solve a broader range of EM problems than any of the currently available formulations. It also reduces the number of independent variables from six to four while easily dealing with conductor/insulator interfaces. This methodology was applied to a range of EM field problems. Results from all these problems predict EM quantities exceptionally well and are consistent with the expected physical behavior.

  7. REDUCING FUNGICIDE USAGE FOR POTATO PRODUCTION BY UNRAVELING TUBER AND FOLIAGE DEFENSE MECHANISMS AGAINST THE LATE BLIGHT PATHOGEN PHYTOPHTHORA INFESTANS

    EPA Science Inventory

    1. More than 215 million RNA sequences from potato tubers infected with P. infestans were generated. This represents a >1,300-fold increase in data generation relative to our previous expectation of 160,000 sequence reads. This increase was achieved by capi...

    2. The E-Step of the MGROUP EM Algorithm. Program Statistics Research Technical Report No. 93-37.

      ERIC Educational Resources Information Center

      Thomas, Neal

      Mislevy (1984, 1985) introduced an EM algorithm for estimating the parameters of a latent distribution model that is used extensively by the National Assessment of Educational Progress. Second order asymptotic corrections are derived and applied along with more common first order asymptotic corrections to approximate the expectations required by…

    3. The development and implementation of a Hospital Emergency Response Team (HERT) for out-of-hospital surgical care.

      PubMed

      Scott, Christopher; Putnam, Brant; Bricker, Scott; Schneider, Laura; Raby, Stephanie; Koenig, William; Gausche-Hill, Marianne

      2012-06-01

      Over the past two decades, Los Angeles County has implemented a Hospital Emergency Response Team (HERT) to provide on-scene, advanced surgical care of injured patients as an element of the local Emergency Medical Services (EMS) system. Since 2008, the primary responsibility of the team has been to perform surgical procedures in the austere field setting when prolonged extrication is anticipated. Following the maxim of "life over limb," the team is equipped to provide rapid amputation of an entrapped extremity as well as other procedures and medical care, such as anxiolytics and advanced pain control. This report describes the development and implementation of a local EMS system HERT.

    4. Investigation of dynamic SPECT measurements of the arterial input function in human subjects using simulation, phantom and human studies

      NASA Astrophysics Data System (ADS)

      Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.

      2012-01-01

      Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon emission computed tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from a database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.

    5. The Self in Decision Making and Decision Implementation.

      ERIC Educational Resources Information Center

      Beach, Lee Roy; Mitchell, Terence R.

      Since the early 1950's the principal prescriptive model in the psychological study of decision making has been maximization of Subjective Expected Utility (SEU). This SEU maximization has come to be regarded as a description of how people go about making decisions. However, while observed decision processes sometimes resemble the SEU model,…

    6. The impact of temperature changes on vector-borne disease transmission: Culicoides midges and bluetongue virus.

      PubMed

      Brand, Samuel P C; Keeling, Matt J

      2017-03-01

      It is a long recognized fact that climatic variations, especially temperature, affect the life history of biting insects. This is particularly important when considering vector-borne diseases, especially in temperate regions where climatic fluctuations are large. In general, it has been found that most biological processes occur at a faster rate at higher temperatures, although not all processes change in the same manner. This differential response to temperature, often considered as a trade-off between onward transmission and vector life expectancy, leads to the total transmission potential of an infected vector being maximized at intermediate temperatures. Here we go beyond the concept of a static optimal temperature, and mathematically model how realistic temperature variation impacts transmission dynamics. We use bluetongue virus (BTV), under UK temperatures and transmitted by Culicoides midges, as a well-studied example where temperature fluctuations play a major role. We first consider an optimal temperature profile that maximizes transmission, and show that this is characterized by a warm day to maximize biting followed by cooler weather to maximize vector life expectancy. This understanding can then be related to recorded representative temperature patterns for England, the UK region which has experienced BTV cases, allowing us to infer historical transmissibility of BTV, as well as using forecasts of climate change to predict future transmissibility. Our results show that when BTV first invaded northern Europe in 2006 the cumulative transmission intensity was higher than any point in the last 50 years, although with climate change such high risks are the expected norm by 2050. Such predictions would indicate that regular BTV epizootics should be expected in the UK in the future. © 2017 The Author(s).

    7. On the role of budget sufficiency, cost efficiency, and uncertainty in species management

      USGS Publications Warehouse

      van der Burg, Max Post; Bly, Bartholomew B.; Vercauteren, Tammy; Grand, James B.; Tyre, Andrew J.

      2014-01-01

      Many conservation planning frameworks rely on the assumption that one should prioritize locations for management actions based on the highest predicted conservation value (i.e., abundance, occupancy). This strategy may underperform relative to the expected outcome if one is working with a limited budget or the predicted responses are uncertain. Yet, cost and tolerance to uncertainty rarely become part of species management plans. We used field data and predictive models to simulate a decision problem involving western burrowing owls (Athene cunicularia hypugaea) using prairie dog colonies (Cynomys ludovicianus) in western Nebraska. We considered 2 species management strategies: one maximized abundance and the other maximized abundance in a cost-efficient way. We then used heuristic decision algorithms to compare the 2 strategies in terms of how well they met a hypothetical conservation objective. Finally, we performed an info-gap decision analysis to determine how these strategies performed under different budget constraints and uncertainty about owl response. Our results suggested that when budgets were sufficient to manage all sites, the maximizing strategy was optimal and suggested investing more in expensive actions. This pattern persisted for restricted budgets up to approximately 50% of the sufficient budget. Below this budget, the cost-efficient strategy was optimal and suggested investing in cheaper actions. When uncertainty in the expected responses was introduced, the strategy that maximized abundance remained robust under a sufficient budget. Reducing the budget induced a slight trade-off between expected performance and robustness, which suggested that the most robust strategy depended both on one's budget and tolerance to uncertainty. Our results suggest that wildlife managers should explicitly account for budget limitations and be realistic about their expected levels of performance.

    8. Draft Genome Sequence of Lactobacillus crispatus EM-LC1, an Isolate with Antimicrobial Activity Cultured from an Elderly Subject

      PubMed Central

      Power, Susan E.; Harris, Hugh M. B.; Bottacini, Francesca; Ross, R. Paul; O’Toole, Paul W.

      2013-01-01

      Here we report the 1.86-Mb draft genome sequence of Lactobacillus crispatus EM-LC1, a fecal isolate with antimicrobial activity. This genome sequence is expected to provide insights into the antimicrobial activity of L. crispatus and improve our knowledge of its potential probiotic traits. PMID:24356836

    9. Temporal variations in magnetic signals generated by the piezomagnetic effect for dislocation sources in a uniform medium

      NASA Astrophysics Data System (ADS)

      Yamazaki, Ken'ichi

      2016-07-01

      Fault ruptures in the Earth's crust generate both elastic and electromagnetic (EM) waves. If the corresponding EM signals can be observed, then earthquakes could be detected before the first seismic waves arrive. In this study, I consider the piezomagnetic effect as a mechanism that converts elastic waves to EM energy, and I derive analytical formulas for the conversion process. The situation considered in this study is a whole-space model, in which elastic and EM properties are uniform and isotropic. In this situation, the governing equations of the elastic and EM fields, combined with the piezomagnetic constitutive law, can be solved analytically in the time domain by ignoring the displacement current term. Using the derived formulas, numerical examples are investigated, and the corresponding characteristics of the expected magnetic signals are resolved. I show that temporal variations in the magnetic field depend strongly on the electrical conductivity of the medium, meaning that precise detection of signals generated by the piezomagnetic effect is generally difficult. Expected amplitudes of piezomagnetic signals are estimated to be no larger than 0.3 nT for earthquakes with a moment magnitude of ≥7.0 at a source distance of 25 km; however, this conclusion may not extend to the detection of real earthquakes, because piezomagnetic stress sensitivity is currently poorly constrained.

    10. DOE Office of Scientific and Technical Information (OSTI.GOV)

      The purpose of the computer program is to generate system matrices that model the data acquisition process in dynamic single photon emission computed tomography (SPECT). The application is the reconstruction of dynamic data from projection measurements that provide the time evolution of activity uptake and washout in an organ of interest. The measurement of the time activity in the blood and organ tissue provides time-activity curves (TACs) that are used to estimate kinetic parameters. The program provides a correct model of the in vivo spatial and temporal distribution of radioactivity in organs. The model accounts for the attenuation of the internally emitted radioactivity, accounts for the varying point response of the collimators, and correctly models the time variation of the activity in the organs. One important application of the software is measuring the arterial input function (AIF) in a dynamic SPECT study in which the data are acquired with a slow camera rotation. Measurement of the AIF is essential to deriving quantitative estimates of regional myocardial blood flow using kinetic models. A study was performed to evaluate whether a slowly rotating SPECT system could provide accurate AIFs for myocardial perfusion imaging (MPI). Methods: Dynamic cardiac SPECT was first performed in human subjects at rest using a Philips Precedence SPECT/CT scanner. Dynamic measurements of Tc-99m-tetrofosmin in the myocardium were obtained using an infusion time of 2 minutes. Blood input, myocardium tissue and liver TACs were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. Results: The spatiotemporal 4D ML-EM reconstructions gave more accurate reconstructions than did standard frame-by-frame 3D ML-EM reconstructions. From additional computer simulations and phantom studies, it was determined that a 1 minute infusion with a SPECT system rotation speed providing 180 degrees of projection data every 54 s can produce measurements of blood pool and myocardial TACs. This has important application in the calculation of coronary flow reserve using rest/stress dynamic cardiac SPECT. The system matrices are used in maximum-likelihood and maximum a posteriori formulations in estimation theory, where iterative algorithms (conjugate gradient, expectation maximization, or maximum a posteriori probability algorithms) determine the solution that maximizes a likelihood or a posteriori probability function.
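
      The record above describes system matrices that feed maximum-likelihood reconstruction. The sketch below shows the standard multiplicative ML-EM update, x_(k+1) = x_k * A^T(y / (A x_k)) / (A^T 1), on a small dense toy system matrix; the matrix size, counts and iteration count are illustrative assumptions, not the program's actual matrices.

        # Minimal ML-EM sketch on a toy system matrix A (projection bins x voxels).
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((200, 64))            # toy system matrix
        x_true = rng.random(64)              # "true" activity image
        y = rng.poisson(A @ x_true)          # Poisson projection counts

        x = np.ones(64)                      # strictly positive initial image
        sens = A.T @ np.ones(A.shape[0])     # sensitivity image A^T 1
        for _ in range(50):
            ratio = y / np.maximum(A @ x, 1e-12)   # measured / estimated projections
            x *= (A.T @ ratio) / sens              # multiplicative ML-EM update

        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))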

    11. Interval-based reconstruction for uncertainty quantification in PET

      NASA Astrophysics Data System (ADS)

      Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

      2018-02-01

      A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

    12. The benefits of social influence in optimized cultural markets.

      PubMed

      Abeliuk, Andrés; Berbeglia, Gerardo; Cebrian, Manuel; Van Hentenryck, Pascal

      2015-01-01

      Social influence has been shown to create significant unpredictability in cultural markets, providing one potential explanation of why experts routinely fail at predicting the commercial success of cultural products. As a result, social influence is often presented in a negative light. Here, we show the benefits of social influence for cultural markets. We present a policy that uses product quality, appeal, position bias and social influence to maximize expected profits in the market. Our computational experiments show that our profit-maximizing policy leverages social influence to produce significant performance benefits for the market, while our theoretical analysis proves that our policy outperforms in expectation any policy not displaying social signals. Our results contrast with earlier work, which focused on showing the unpredictability and inequalities created by social influence. We show for the first time that, under our policy, dynamically showing consumers positive social signals increases the expected profit of the seller in cultural markets. We also show that, in reasonable settings, our profit-maximizing policy does not introduce significant unpredictability and identifies "blockbusters". Overall, these results shed new light on the nature of social influence and how it can be leveraged for the benefit of the market.

    13. Optimal Investment Under Transaction Costs: A Threshold Rebalanced Portfolio Approach

      NASA Astrophysics Data System (ADS)

      Tunc, Sait; Donmez, Mehmet Ali; Kozat, Suleyman Serdar

      2013-06-01

      We study optimal investment in a financial market having a finite number of assets from a signal processing perspective. We investigate how an investor should distribute capital over these assets and when he should reallocate the distribution of the funds over these assets to maximize the cumulative wealth over any investment period. In particular, we introduce a portfolio selection algorithm that maximizes the expected cumulative wealth in i.i.d. two-asset discrete-time markets where the market levies proportional transaction costs in buying and selling stocks. We achieve this using "threshold rebalanced portfolios", where trading occurs only if the portfolio breaches certain thresholds. Under the assumption that the relative price sequences have log-normal distribution from the Black-Scholes model, we evaluate the expected wealth under proportional transaction costs and find the threshold rebalanced portfolio that achieves the maximal expected cumulative wealth over any investment period. Our derivations can be readily extended to markets having more than two stocks, where these extensions are pointed out in the paper. As predicted from our derivations, we significantly improve the achieved wealth over portfolio selection algorithms from the literature on historical data sets.
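
      As a rough illustration of the threshold rebalancing idea, the sketch below simulates a two-asset market with log-normal price relatives and rebalances back to a target weight only when the drifted weight breaches a threshold, paying a proportional transaction cost on the traded fraction. The return parameters, threshold and cost rate are illustrative assumptions, not the paper's derived optimum.

        # Minimal threshold-rebalanced portfolio sketch (two assets, proportional costs).
        import numpy as np

        rng = np.random.default_rng(1)
        T, b, eps, c = 1000, 0.5, 0.1, 0.002            # periods, target weight, threshold, cost rate
        x = np.exp(rng.normal(0.0003, 0.01, (T, 2)))    # log-normal price relatives for both assets

        wealth, w = 1.0, b                               # wealth and current weight in asset 0
        for rel in x:
            growth = w * rel[0] + (1.0 - w) * rel[1]     # market move over one period
            wealth *= growth
            w = w * rel[0] / growth                      # weight after the move (drift)
            if abs(w - b) > eps:                         # rebalance only when the threshold is breached
                wealth -= wealth * c * abs(w - b)        # proportional cost on the traded fraction
                w = b

        print("final wealth:", wealth)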

    14. Matching Pupils and Teachers to Maximize Expected Outcomes.

      ERIC Educational Resources Information Center

      Ward, Joe H., Jr.; And Others

      To achieve a good teacher-pupil match, it is necessary (1) to predict the learning outcomes that will result when each student is instructed by each teacher, (2) to use the predicted performance to compute an Optimality Index for each teacher-pupil combination to indicate the quality of each combination toward maximizing learning for all students,…

  1. Single molecule analysis of B cell receptor motion during signaling activation

    NASA Astrophysics Data System (ADS)

    Rey Suarez, Ivan; Koo, Peter; Zhou, Shu; Wheatley, Brittany; Song, Wenxia; Mochrie, Simon; Upadhyaya, Arpita

    B cells are an essential part of the adaptive immune system. They patrol the body for signs of infection in the form of antigen on the surface of antigen-presenting cells. B cell receptor (BCR) binding to antigen induces a signaling cascade that leads to B cell activation and spreading. During activation, BCRs form signaling microclusters that later coalesce as the cell contracts. We have studied the dynamics of BCRs on activated murine primary B cells using single-particle tracking. The tracks are analyzed using perturbation expectation-maximization (pEM), a systems-level analysis, which allows identification of different short-time diffusive states from single-molecule tracks. We identified four dominant diffusive states, two of which correspond to BCRs interacting with signaling molecules. For wild-type cells, the number of BCRs in signaling states increases as the cell spreads and then decreases during cell contraction. In contrast, cells lacking the actin regulatory protein N-WASP are unable to contract, and BCRs remain in the signaling states for longer times. These observations indicate that actin cytoskeleton dynamics modulate BCR diffusion and clustering. Our results provide novel information regarding the timescale of interaction between BCRs and signaling molecules.
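
    As a loose illustration of separating short-time diffusive states, the sketch below runs EM on a two-component exponential mixture of squared single-step displacements (for 2D Brownian motion these have mean 4*D*dt), splitting "slow" (interacting) from "fast" steps. The diffusion constants, frame time and sample sizes are illustrative assumptions, and the model is far simpler than pEM's systems-level analysis.

      # Minimal EM sketch for a two-state diffusion mixture of squared step sizes.
      import numpy as np

      rng = np.random.default_rng(6)
      dt = 0.02                                  # s per frame (assumed)
      d_slow, d_fast = 0.01, 0.2                 # um^2/s (assumed)
      r2 = np.concatenate([rng.exponential(4 * d_slow * dt, 3000),
                           rng.exponential(4 * d_fast * dt, 7000)])

      w = np.array([0.5, 0.5])                   # initial mixture weights
      m = np.array([0.001, 0.01])                # initial component means (= 4*D*dt)
      for _ in range(300):
          dens = (w / m) * np.exp(-r2[:, None] / m)                   # exponential densities
          resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)    # E-step responsibilities
          w = resp.mean(axis=0)                                       # M-step: weights
          m = (resp * r2[:, None]).sum(axis=0) / resp.sum(axis=0)     # M-step: means

      print("estimated D (um^2/s):", m / (4 * dt), "weights:", w)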

  2. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and to capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems, as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, a test with a simple asymptotic distribution has computational advantages over permutation-based tests for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood-based expectation-maximization test, and show that the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between the two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  3. A Unified Framework for Brain Segmentation in MR Images

    PubMed Central

    Yazdani, S.; Yusof, R.; Karimian, A.; Riazi, A. H.; Bennamoun, M.

    2015-01-01

    Brain MRI segmentation is an important issue for discovering the brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of the brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method and spatial information combined with a support vector machine (SVM). The combined method is more accurate than its individual techniques, as demonstrated through experiments on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentation results and other methods based on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method gives satisfactory results on both simulated MRI and real brain datasets. PMID:26089978

  4. Efficacy of texture, shape, and intensity features for robust posterior-fossa tumor segmentation in MRI

    NASA Astrophysics Data System (ADS)

    Ahmed, S.; Iftekharuddin, K. M.; Ogg, R. J.; Laningham, F. H.

    2009-02-01

    Our previous works suggest that fractal-based texture features are very useful for detection, segmentation and classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. In this work, we investigate and compare efficacy of our texture features such as fractal and multifractional Brownian motion (mBm), and intensity along with another useful level-set based shape feature in PF tumor segmentation. We study feature selection and ranking using Kullback-Leibler Divergence (KLD) and subsequent tumor segmentation; all in an integrated Expectation Maximization (EM) framework. We study the efficacy of all four features in both multimodality as well as disparate MRI modalities such as T1, T2 and FLAIR. Both KLD feature plots and information theoretic entropy measure suggest that mBm feature offers the maximum separation between tumor and non-tumor tissues in T1 and FLAIR MRI modalities. The same metrics show that intensity feature offers the maximum separation between tumor and non-tumor tissue in T2 MRI modality. The efficacies of these features are further validated in segmenting PF tumor using both single modality and multimodality MRI for six pediatric patients with over 520 real MR images.

  5. Application of hidden Markov models to biological data mining: a case study

    NASA Astrophysics Data System (ADS)

    Yin, Michael M.; Wang, Jason T.

    2000-04-01

    In this paper we present an example of biological data mining: the detection of splicing junction acceptors in eukaryotic genes. Identification or prediction of transcribed sequences from within genomic DNA has been a major rate-limiting step in the pursuit of genes. Programs currently available are far from being powerful enough to elucidate the gene structure completely. Here we develop a hidden Markov model (HMM) to represent the degeneracy features of splicing junction acceptor sites in eukaryotic genes. The HMM system is fully trained using an expectation maximization (EM) algorithm and the system performance is evaluated using the 10-way cross-validation method. Experimental results show that our HMM system can correctly classify more than 94% of the candidate sequences (including true and false acceptor sites) into right categories. About 90% of the true acceptor sites and 96% of the false acceptor sites in the test data are classified correctly. These results are very promising considering that only the local information in DNA is used. The proposed model will be a very important component of an effective and accurate gene structure detection system currently being developed in our lab.

  6. Multiple imputation of rainfall missing data in the Iberian Mediterranean context

    NASA Astrophysics Data System (ADS)

    Miró, Juan Javier; Caselles, Vicente; Estrela, María José

    2017-11-01

    Given the increasing need for complete rainfall data networks, diverse methods have been proposed in recent years for filling gaps in observed precipitation series, progressively more advanced than traditional approaches to the problem. The present study validated 10 methods (6 linear, 2 non-linear and 2 hybrid) that allow multiple imputation, i.e., filling the missing data of multiple incomplete series at the same time in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (east Iberian Peninsula), an area characterized by high spatial irregularity and difficult rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and a quantile-mapping adjustment as a post-processing technique. The results showed in general a better performance for the non-linear and hybrid methods, with the non-linear PCA (NLPCA) method considerably outperforming the Self Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but far behind NLPCA. Applying EOF filtering as post-processing of NLPCA (hybrid approach) yielded the best results.
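
    For intuition on the gap-filling step, the sketch below runs the un-regularized core of an EM-style imputation for jointly Gaussian station series (RegEM adds ridge regularization, and a full EM also carries a conditional-covariance correction that is omitted here). The data, gap pattern and iteration count are illustrative assumptions.

      # Minimal EM-style gap filling for multivariate Gaussian series (illustrative only).
      import numpy as np

      rng = np.random.default_rng(7)
      n, p = 400, 5
      L = rng.normal(size=(p, p))
      X_full = rng.normal(size=(n, p)) @ L.T            # correlated "station" series
      X = X_full.copy()
      mask = rng.random((n, p)) < 0.1                   # ~10% missing values
      X[mask] = np.nan

      X_imp = np.where(mask, np.nanmean(X, axis=0), X)  # start from column means
      for _ in range(50):
          mu = X_imp.mean(axis=0)
          S = np.cov(X_imp, rowvar=False)
          for i in range(n):
              miss = mask[i]
              if not miss.any():
                  continue
              obs = ~miss
              # conditional mean of the missing entries given the observed ones
              coef = S[np.ix_(miss, obs)] @ np.linalg.pinv(S[np.ix_(obs, obs)])
              X_imp[i, miss] = mu[miss] + coef @ (X_imp[i, obs] - mu[obs])

      print("imputation RMSE:", np.sqrt(np.mean((X_imp[mask] - X_full[mask]) ** 2)))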

  7. Improving the Accuracy and Training Speed of Motor Imagery Brain-Computer Interfaces Using Wavelet-Based Combined Feature Vectors and Gaussian Mixture Model-Supervectors.

    PubMed

    Lee, David; Park, Sang-Hoon; Lee, Sang-Goog

    2017-10-07

    In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
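
    To make the GMM-supervector idea concrete, the sketch below trains a small diagonal-covariance GMM with EM, computes posterior responsibilities for one trial, re-estimates the component means from those responsibilities (a simplified adaptation without a relevance factor), and stacks them into a supervector that could feed an SVM. The feature dimensions, component count and data are illustrative assumptions, not the paper's configuration.

      # Minimal GMM-supervector sketch with scikit-learn's EM-based GaussianMixture.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      background = rng.normal(size=(500, 8))        # pooled feature frames (toy stand-in)

      ubm = GaussianMixture(n_components=4, covariance_type="diag",
                            max_iter=200, random_state=0)
      ubm.fit(background)                           # EM training of the background model

      trial = rng.normal(size=(40, 8))              # feature frames of a single trial (toy)
      resp = ubm.predict_proba(trial)               # posterior responsibilities, shape (40, 4)
      adapted_means = (resp.T @ trial) / resp.sum(axis=0)[:, None]   # per-component mean update
      supervector = adapted_means.ravel()           # concatenated means -> SVM input feature
      print(supervector.shape)                      # (32,)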

  8. Bayesian inference on multiscale models for poisson intensity estimation: applications to photon-limited image denoising.

    PubMed

    Lefkimmiatis, Stamatios; Maragos, Petros; Papandreou, George

    2009-08-01

    We present an improved statistical model for analyzing Poisson processes, with applications to photon-limited imaging. We build on previous work, adopting a multiscale representation of the Poisson process in which the ratios of the underlying Poisson intensities (rates) in adjacent scales are modeled as mixtures of conjugate parametric distributions. Our main contributions include: 1) a rigorous and robust regularized expectation-maximization (EM) algorithm for maximum-likelihood estimation of the rate-ratio density parameters directly from the noisy observed Poisson data (counts); 2) extension of the method to work under a multiscale hidden Markov tree model (HMT) which couples the mixture label assignments in consecutive scales, thus modeling interscale coefficient dependencies in the vicinity of image edges; 3) exploration of a 2-D recursive quad-tree image representation, involving Dirichlet-mixture rate-ratio densities, instead of the conventional separable binary-tree image representation involving beta-mixture rate-ratio densities; and 4) a novel multiscale image representation, which we term Poisson-Haar decomposition, that better models the image edge structure, thus yielding improved performance. Experimental results on standard images with artificially simulated Poisson noise and on real photon-limited images demonstrate the effectiveness of the proposed techniques.

  9. Simultaneous Mean and Covariance Correction Filter for Orbit Estimation.

    PubMed

    Wang, Xiaoxu; Pan, Quan; Ding, Zhengtao; Ma, Zhengya

    2018-05-05

    This paper proposes a novel filtering design, from the viewpoint of identification rather than the conventional nonlinear estimation schemes (NESs), to improve the performance of orbit state estimation for a space target. First, a nonlinear perturbation is viewed or modeled as an unknown input (UI) coupled with the orbit state, to avoid the intractable nonlinear perturbation integral (INPI) required by NESs. Then, a simultaneous mean and covariance correction filter (SMCCF), based on a two-stage expectation maximization (EM) framework, is proposed to simply and analytically fit or identify the first two moments (FTM) of the perturbation (viewed as UI), instead of directly computing the INPI as in NESs. Orbit estimation performance is greatly improved by utilizing the fitted UI-FTM to simultaneously correct the state estimation and its covariance. Third, depending on whether enough information is mined, SMCCF should outperform existing NESs or the standard identification algorithms (which view the UI as a constant independent of the state and only utilize the identified UI mean to correct the state estimation, regardless of its covariance), since it further incorporates the useful covariance information in addition to the mean of the UI. Finally, our simulations demonstrate the superior performance of SMCCF via an orbit estimation example.

  10. Maximum Likelihood Estimations and EM Algorithms with Length-biased Data

    PubMed Central

    Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu

    2012-01-01

    Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840

  11. Variable selection for zero-inflated and overdispersed data with application to health care demand in Germany.

    PubMed

    Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun

    2015-09-01

    In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
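
    As a toy companion to the penalized EM described above, the sketch below runs a bare-bones EM for a zero-inflated Poisson model, a simpler cousin of ZINB without covariates or penalties. The simulated counts, starting values and iteration count are illustrative assumptions.

      # Minimal EM sketch for a zero-inflated Poisson mixture (no covariates, no penalty).
      import numpy as np
      from scipy.stats import poisson

      rng = np.random.default_rng(4)
      n, true_pi, true_lam = 2000, 0.3, 2.5
      y = np.where(rng.random(n) < true_pi, 0, rng.poisson(true_lam, n))   # simulated counts

      pi, lam = 0.5, 1.0                                   # initial guesses
      for _ in range(200):
          # E-step: probability that an observed zero is a structural zero
          p_zero = pi / (pi + (1.0 - pi) * poisson.pmf(0, lam))
          z = np.where(y == 0, p_zero, 0.0)
          # M-step: update mixing weight and Poisson mean from expected memberships
          pi = z.mean()
          lam = ((1.0 - z) * y).sum() / (1.0 - z).sum()

      print(f"estimated pi = {pi:.3f}, lambda = {lam:.3f}")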

  12. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    NASA Astrophysics Data System (ADS)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects virtual information world with real world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals for video surveillance. This paper presents performance comparison of histogram thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT based on salient features of datasets. The thresholding algorithms Otsu, Kapur, Rosin and classification methods k-means, EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient) and JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium to fast moving objects. However, it reflected degraded performance for small object size with minor changes. Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environment with small object size producing slow change, no shadowing and scarce illumination changes.
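
    To illustrate the histogram-thresholding family of change detection methods compared above, the sketch below computes an Otsu threshold on a synthetic frame-difference image by maximizing the between-class variance. The synthetic difference values are an illustrative stand-in for real surveillance frames.

      # Minimal Otsu-threshold sketch for a frame-difference "image".
      import numpy as np

      rng = np.random.default_rng(3)
      diff = np.concatenate([rng.normal(20, 5, 9000),      # unchanged background pixels
                             rng.normal(120, 10, 1000)])   # changed pixels
      hist, edges = np.histogram(diff, bins=256)
      p = hist / hist.sum()
      centers = 0.5 * (edges[:-1] + edges[1:])

      best_t, best_var = centers[0], -1.0
      for k in range(1, 256):
          w0, w1 = p[:k].sum(), p[k:].sum()
          if w0 == 0.0 or w1 == 0.0:
              continue
          mu0 = (p[:k] * centers[:k]).sum() / w0
          mu1 = (p[k:] * centers[k:]).sum() / w1
          between = w0 * w1 * (mu0 - mu1) ** 2             # between-class variance
          if between > best_var:
              best_var, best_t = between, centers[k]

      print(f"Otsu threshold ~ {best_t:.1f}, changed pixels: {(diff > best_t).sum()}")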

  13. Estimating the Propagation of Interdependent Cascading Outages with Multi-Type Branching Processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Junjian; Ju, Wenyun; Sun, Kai

    In this paper, the multi-type branching process is applied to describe the statistics and interdependencies of line outages, the load shed, and isolated buses. The offspring mean matrix of the multi-type branching process is estimated by the Expectation Maximization (EM) algorithm and can quantify the extent of outage propagation. The joint distribution of two types of outages is estimated by the multi-type branching process via the Lagrange-Good inversion. The proposed model is tested with data generated by the AC OPA cascading simulations on the IEEE 118-bus system. The largest eigenvalues of the offspring mean matrix indicate that the system is closer to criticality when considering the interdependence of different types of outages. Compared with empirically estimating the joint distribution of the total outages, a good estimate is obtained by using the multi-type branching process with a much smaller number of cascades, thus greatly improving the efficiency. It is shown that the multi-type branching process can effectively predict the distribution of the load shed and isolated buses and their conditional largest possible total outages even when there are no data for them.
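
    The criticality check mentioned above reduces to an eigenvalue computation on the offspring mean matrix: the cascade is subcritical when the largest eigenvalue is below one. The matrix entries below are illustrative stand-ins, not the estimates from the paper.

      # Minimal criticality check for a multi-type branching process.
      import numpy as np

      # rows/columns: line outages, load shed, isolated buses (toy numbers)
      offspring_mean = np.array([[0.60, 0.10, 0.05],
                                 [0.20, 0.40, 0.10],
                                 [0.10, 0.10, 0.30]])

      rho = np.max(np.abs(np.linalg.eigvals(offspring_mean)))
      print(f"largest eigenvalue = {rho:.3f} ->",
            "subcritical" if rho < 1.0 else "critical or supercritical")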

  14. Mapping of quantitative trait loci using the skew-normal distribution.

    PubMed

    Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos

    2007-11-01

    In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.

  15. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    PubMed

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.

  16. Ecological neighborhoods as a framework for umbrella species selection

    USGS Publications Warehouse

    Stuber, Erica F.; Fontaine, Joseph J.

    2018-01-01

    Umbrella species are typically chosen because they are expected to confer protection for other species assumed to have similar ecological requirements. Despite its popularity and substantial history, the value of the umbrella species concept has come into question because umbrella species chosen using heuristic methods, such as body or home range size, are not acting as adequate proxies for the metrics of interest: species richness or population abundance in a multi-species community for which protection is sought. How species associate with habitat across ecological scales has important implications for understanding population size and species richness, and therefore may be a better proxy for choosing an umbrella species. We determined the spatial scales of ecological neighborhoods important for predicting abundance of 8 potential umbrella species breeding in Nebraska using Bayesian latent indicator scale selection in N-mixture models accounting for imperfect detection. We compare the conservation value measured as collective avian abundance under different umbrella species selected following commonly used criteria and selected based on identifying spatial land cover characteristics within ecological neighborhoods that maximize collective abundance. Using traditional criteria to select an umbrella species resulted in sub-maximal expected collective abundance in 86% of cases compared to selecting an umbrella species based on land cover characteristics that maximized collective abundance directly. We conclude that directly assessing the expected quantitative outcomes, rather than ecological proxies, is likely the most efficient method to maximize the potential for conservation success under the umbrella species concept.

  17. 2D evaluation of spectral LIBS data derived from heterogeneous materials using cluster algorithm

    NASA Astrophysics Data System (ADS)

    Gottlieb, C.; Millar, S.; Grothe, S.; Wilsch, G.

    2017-08-01

    Laser-induced Breakdown Spectroscopy (LIBS) is capable of providing spatially resolved element maps of the chemical composition of a sample. The evaluation of heterogeneous materials is often a challenging task, especially in the case of phase boundaries. In order to determine information about a certain phase of a material, a method that offers an objective evaluation is necessary. This paper will introduce a cluster algorithm for heterogeneous building materials (concrete) to separate the spectral information of non-relevant aggregates and cement matrix. In civil engineering, information about the quantitative ingress of harmful species like Cl-, Na+ and SO42- is of great interest in the evaluation of the remaining lifetime of structures (Millar et al., 2015; Wilsch et al., 2005). These species trigger different damage processes such as the alkali-silica reaction (ASR) or the chloride-induced corrosion of the reinforcement. Therefore, a discrimination between the different phases, mainly cement matrix and aggregates, is highly important (Weritz et al., 2006). For the 2D evaluation, the expectation-maximization algorithm (EM algorithm; Ester and Sander, 2000) has been tested for the application presented in this work. The method has been introduced and different figures of merit have been presented according to recommendations given in Haddad et al. (2014). Advantages of this method will be highlighted. After phase separation, non-relevant information can be excluded and only the desired phase displayed. Using a set of samples with known and unknown composition, the EM-clustering method has been validated according to Gustavo González and Ángeles Herrador (2007).

  18. Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.

    PubMed

    Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan

    2013-01-01

    This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.

  19. Automated detection of pulmonary embolism (PE) in computed tomographic pulmonary angiographic (CTPA) images: multiscale hierarchical expectation-maximization segmentation of vessels and PEs

    NASA Astrophysics Data System (ADS)

    Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Cascade, Philip N.; Sahiner, Berkman; Wei, Jun; Ge, Jun; Kazerooni, Ella A.

    2007-03-01

    CT pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided detection (CAD) system to assist radiologists in PE detection in CTPA images. 3D multiscale filters, in combination with a newly designed response function derived from the eigenvalues of Hessian matrices, are used to enhance vascular structures, including vessel bifurcations, and to suppress non-vessel structures such as the lymphoid tissues surrounding the vessels. A hierarchical EM estimation is then used to segment the vessels by extracting the high-response voxels at each scale. The segmented vessels are pre-screened for suspicious PE areas using a second adaptive multiscale EM estimation. A rule-based false positive (FP) reduction method was designed to identify the true PEs based on the features of PEs and vessels. 43 CTPA scans were used as an independent test set to evaluate the performance of PE detection. Experienced chest radiologists identified the PE locations, which were used as the "gold standard". 435 PEs were identified in the artery branches, of which 172 and 263 were subsegmental and proximal to the subsegmental, respectively. The computer-detected volume was considered a true positive (TP) when it overlapped with 10% or more of the gold standard PE volume. Our preliminary test results show that, at an average of 33 and 24 FPs/case, the sensitivities of our PE detection method were 81% and 78%, respectively, for proximal PEs, and 79% and 73%, respectively, for subsegmental PEs. The study demonstrates the feasibility of the automated method to identify PEs accurately on CTPA images. Further study is underway to improve the sensitivity and reduce the FPs.

  20. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    PubMed

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
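
    To give a flavor of the low-rank trick that makes such multikernel analyses scale, the sketch below builds a Nyström-style rank-m approximation of a kernel matrix from a subset of landmark samples. The kernel choice, landmark count and toy genotype matrix are illustrative assumptions, not the fastKM implementation.

      # Minimal Nystrom low-rank kernel approximation sketch.
      import numpy as np

      rng = np.random.default_rng(5)
      G = rng.integers(0, 3, size=(1000, 50)).astype(float)   # toy genotype matrix (n x p)

      def linear_kernel(A, B):
          return A @ B.T / A.shape[1]

      m = 100                                                 # number of landmark samples (<< n)
      idx = rng.choice(G.shape[0], m, replace=False)
      C = linear_kernel(G, G[idx])                            # n x m block
      W = linear_kernel(G[idx], G[idx])                       # m x m block
      K_approx = C @ np.linalg.pinv(W) @ C.T                  # rank-m approximation of K

      K_exact = linear_kernel(G, G)
      err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
      print(f"relative approximation error: {err:.2e}")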

  2. Methods for a longitudinal quantitative outcome with a multivariate Gaussian distribution multi-dimensionally censored by therapeutic intervention.

    PubMed

    Sun, Wanjie; Larsen, Michael D; Lachin, John M

    2014-04-15

    In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. It is particularly attractive when outcomes reach a plateau after intervention due to various reasons. Methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. Methods proved to be robust to high dimensions, large amounts of censored data, low within-subject correlation, and when subjects receive non-trial intervention to treat the underlying condition only (with high Y), or for treatment in the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.

  3. Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.

    PubMed

    Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng

    2016-01-01

    In this paper, a total variation (TV) minimization strategy is proposed to overcome the problem of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) imaging reconstruction. Two types of objective function were established based on two statistical models of measured PET data, least-square (LS) TV for the Gaussian distribution and Poisson-TV for the Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. As compared with the iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM can make full use of the TV constraint and its convergence rate is faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are taken into consideration to find which models are more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed bias, variance, and the contrast recovery coefficient (CRC) and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide a high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction with the lowest bias and variance as compared to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of the high contrast of the reconstruction with the highest CRC.

  5. Searching for gamma-ray counterparts to gravitational waves from merging binary neutron stars with the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Patricelli, B.; Stamerra, A.; Razzano, M.; Pian, E.; Cella, G.

    2018-05-01

    Mergers of binary neutron star (BNS) systems are predicted to be progenitors of short gamma-ray bursts (GRBs); the definitive probe of this association came with the recent detection of gravitational waves (GWs) from a BNS merger by Advanced LIGO and Advanced Virgo (GW170817), in coincidence with the short GRB 170817A observed by Fermi-GBM and INTEGRAL. Short GRBs are also expected to emit very-high energy (VHE, > 100 GeV) photons, and VHE electromagnetic (EM) upper limits have been set with observations performed by ground-based gamma-ray detectors and during the intense EM follow-up campaign associated with GW170817/GRB 170817A. In the coming years, the searches for VHE EM counterparts will become more effective thanks to the Cherenkov Telescope Array (CTA): this instrument will be fundamental for the EM follow-up of transient GW events at VHE, owing to its unprecedented sensitivity, rapid response (a few tens of seconds) and capability to monitor large sky areas via survey-mode operation. We present a comprehensive study on the prospects for joint GW and VHE EM observations of merging BNSs with Advanced LIGO, Advanced Virgo and CTA, based on detailed simulations of the multi-messenger emission and detection. We propose a new observational strategy optimized on the prior assumptions about the EM emission. The method can be further generalized to include other electromagnetic emission models. According to this study, CTA will cover most of the region of the GW skymap for the intermediate and most energetic on-axis GRBs associated with the GW event. We estimate the expected joint GW and VHE EM detection rates and find that this rate ranges from 0.08 up to 0.5 events per year for the most energetic EM sources.

  6. [Erythromycin restores oxidative stress-induced corticosteroid responsiveness of human THP-1 cells by up-regulating the expression of histone deacetylase 2].

    PubMed

    Zhang, Yang; He, Zhiyi; Sun, Xuejiao; Li, Zhanhua; Zhao, Lin; Mao, Congzheng; Huang, Dongmei; Zhang, Jianquan; Zhong, Xiaoning

    2014-04-01

    To investigate the effect of erythromycin (EM) on corticosteroid insensitivity of human THP-1 cells induced by cigarette smoke extract (CSE) and its mechanism. THP-1 cells were treated with EM followed by CSE stimulation. Histone deacetylase-2 (HDAC2) short interfering RNA (HDAC2-siRNA) was transfected into the cells using Lipofectamine 2000. Interleukin-8 (IL-8) levels in supernatants were measured by ELISA, and HDAC2 expression was determined by real-time quantitative PCR (qRT-PCR) and Western blotting. The inhibition ratio of IL-8 in the EM group was significantly higher than that in the CSE group, but lower than that in the control group (P<0.05). The half-maximal inhibitory concentration of dexamethasone (IC50-Dex) in the EM group was lower than that in the CSE group, but higher than that in the control group (P<0.05). The expression of HDAC2 protein in the EM group was higher than that in the CSE group, but lower than that in the control group (P<0.05). In addition, HDAC2 mRNA and HDAC2 protein expression was lower in the HDAC2-siRNA group than in the scrambled oligonucleotide (SC) group. EM could reverse the HDAC2 mRNA and HDAC2 protein reduction induced by HDAC2-siRNA (P<0.05). Corticosteroid sensitivity of THP-1 cells can be reduced by CSE. EM could reverse the corticosteroid insensitivity by up-regulating the expression of HDAC2 protein.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wellman, Dawn M.; Triplett, Mark B.; Freshley, Mark D.

    DOE-EM, Office of Groundwater and Soil Remediation and DOE Richland, in collaboration with the Hanford site and Pacific Northwest National Laboratory, have established the Deep Vadose Zone Applied Field Research Center (DVZ-AFRC). The DVZ-AFRC leverages DOE investments in basic science from the Office of Science, applied research from the DOE EM Office of Technology Innovation and Development, and site operations (e.g., site contractors [CH2M HILL Plateau Remediation Contractor and Washington River Protection Solutions], DOE-EM RL and ORP) in a collaborative effort to address the complex region of the deep vadose zone. Although the aim, goal, motivation, and contractual obligation of each organization are different, the integration of these activities into the framework of the DVZ-AFRC brings the resources and creativity of many to provide sites with viable alternative remedial strategies to current baseline approaches for persistent contaminants and deep vadose zone contamination. This cooperative strategy removes stovepipes, prevents duplication of effort, maximizes resources, and facilitates development of the scientific foundation needed to make sound and defensible remedial decisions that will successfully meet the target cleanup goals for one of DOE EM's most intractable problems, in a manner that is acceptable to regulators.

  8. Instruments used in the assessment of expectation toward a spine surgery: an integrative review.

    PubMed

    Nepomuceno, Eliane; Silveira, Renata Cristina de Campos Pereira; Dessotte, Carina Aparecida Marosti; Furuya, Rejane Kiyomi; Arantes, Eliana De Cássia; Cunha, Débora Cristine Prévide Teixeira da; Dantas, Rosana Aparecida Spadoti

    2016-01-01

    To identify and describe the instruments used to assess patients' expectations toward spine surgery. An integrative review was carried out in the databases PubMed, CINAHL, LILACS and PsycINFO. A total of 4,402 publications were identified, of which 25 met the selection criteria. Of the selected studies, only three used instruments with confirmed validity and reliability; five used clinical scores modified for the assessment of patients' expectations, and 17 used scales developed by the researchers without an adequate description of the method used for their development and validation. The assessment of patients' expectations has been conducted in methodologically different ways. Until the completion of this integrative review, only two valid and reliable instruments had been used, in three of the selected studies.

  9. Characteristics and Preliminary Observations of the Influence of Electromyostimulation on the Size and Function of Human Skeletal Muscle During 30 Days of Simulated Microgravity

    NASA Technical Reports Server (NTRS)

    Duvoisin, Marc R.; Convertino, Victor A; Buchanan, Paul; Gollinick, Philip D.; Dudley, Gary A.

    1989-01-01

    During 30 days (d) of bedrest, the practicality of using ElectroMyoStimulation (EMS) as a deterrent to atrophy and strength loss of lower limb musculature was examined. An EMS system was developed that provided variable but quantifiable levels of EMS, and measured torque. The dominant leg of three male subjects was stimulated twice daily in a 3-d on/1-d off cycle during bedrest. The non-dominant leg of each subject acted as a control. A stimulator, using a 0.3 ms monophasic 60 Hz pulse waveform, activated muscle tissue for 4 s. The output waveform from the stimulator was sequenced to the Knee Extensors (KE), Knee Flexors (KF), Ankle Extensors (AE), and Ankle Flexors (AF), and caused three isometric contractions of each muscle group per minute. Subject tolerance determined EMS intensity. Each muscle group received four 5-min bouts of EMS each session with a 10-min rest between bouts. EMS and torque levels for each muscle action were recorded directly on a computer. Overall average EMS intensity was 197, 197, 195, and 188 mA for the KE, KF, AF, and AE, respectively. Overall average torque development for these muscle groups was 70, 16, 12, and 27 Nm, respectively. EMS intensity doubled during the study, and average torque increased 2.5 times. Average maximum torque throughout a session reached 54% of maximal voluntary torque for the KE and 29% for the KF. Reductions in leg volume, muscle compartment size, cross-sectional area of slow and fast-twitch fibers, strength, and aerobic enzyme activities, and increased leg compliance were attenuated in the legs which received EMS during bedrest. These results indicate that similar EMS levels induce different torques among different muscle groups and that repeated exposure to EMS increases tolerance and torque development. Longer orientation periods, therefore, may enhance its effectiveness. Our preliminary data suggest that the efficacy of EMS as an effective countermeasure for muscle atrophy and strength loss during long duration space travel warrants further investigation.

  10. Balancing Data, Time, and Expectations: The Complex Decision-Making Environment of Enrollment Management

    ERIC Educational Resources Information Center

    Johnson, Adam W.

    2016-01-01

    As a growing entity within higher education organizational structures, enrollment managers (EMs) are primarily tasked with projecting, recruiting, and retaining the student population of their campuses. Enrollment managers are expected by institutional presidents as well as through industry standards to make data-driven planning decisions to reach…

  11. Balancing Data, Time, and Expectations: The Complex Decision-Making Environment of Enrollment Management

    ERIC Educational Resources Information Center

    Johnson, Adam W.

    2013-01-01

    As a growing entity within higher education organizational structures, enrollment managers (EMs) are primarily tasked with projecting, recruiting, and retaining the student population of their campuses. Enrollment managers are expected by institutional presidents as well as through industry standards to make data-driven planning decisions to reach…

  12. The performance of monotonic and new non-monotonic gradient ascent reconstruction algorithms for high-resolution neuroreceptor PET imaging.

    PubMed

    Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C

    2011-07-07

    Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
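
    For readers unfamiliar with the baseline algorithm being compared against, the sketch below shows the textbook MLEM multiplicative update for a Poisson measurement model y ~ Poisson(Ax); it is a minimal illustration with a toy system matrix, not the reconstruction code evaluated in the paper (OSEM applies the same update to subsets of the data at each sub-iteration).

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Textbook MLEM update for data y ~ Poisson(A @ x), starting from a uniform image."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                      # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                          # forward projection
        ratio = y / np.maximum(proj, eps)     # measured / estimated counts
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Toy example: 3 detector bins, 2 image pixels
A = np.array([[1.0, 0.2],
              [0.5, 0.5],
              [0.2, 1.0]])
x_true = np.array([4.0, 2.0])
y = np.random.default_rng(0).poisson(A @ x_true)
print(mlem(A, y))
```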

  13. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
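
    The primal problem described (risk minimization under budget and expected-return constraints) can be illustrated numerically, independently of the replica analysis, by solving the equality-constrained quadratic program via its KKT system; the covariance matrix, mean returns and target return below are made-up toy values.

```python
import numpy as np

def min_variance_portfolio(Sigma, mu, target_return):
    """Risk-minimizing weights under a budget (sum w = 1) and an expected-return constraint."""
    n = len(mu)
    ones = np.ones(n)
    # KKT system for the equality-constrained quadratic program
    K = np.block([
        [2 * Sigma, ones[:, None], mu[:, None]],
        [ones[None, :], np.zeros((1, 2))],
        [mu[None, :], np.zeros((1, 2))],
    ])
    rhs = np.concatenate([np.zeros(n), [1.0, target_return]])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                      # drop the Lagrange multipliers

Sigma = np.array([[0.04, 0.01],
                  [0.01, 0.09]])       # toy covariance matrix
mu = np.array([0.05, 0.10])            # toy expected returns
w = min_variance_portfolio(Sigma, mu, target_return=0.07)
print(w, w @ mu, w @ Sigma @ w)        # weights, achieved return, achieved risk
```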

  14. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimal problems, we analyze a quenched disordered system involving both of these optimization problems using the approach developed in statistical mechanical informatics and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  15. Empiric antibiotic treatment of erythema migrans-like skin lesions as a function of geography: a clinical and cost effectiveness modeling study.

    PubMed

    Lantos, Paul M; Brinkerhoff, R Jory; Wormser, Gary P; Clemen, Robert

    2013-12-01

    The skin lesion of early Lyme disease, erythema migrans (EM), is so characteristic that routine practice is to treat all such patients with antibiotics. Because other skin lesions may resemble EM, it is not known whether presumptive treatment of EM is appropriate in regions where Lyme disease is rare. We constructed a decision model to compare the cost and clinical effectiveness of three strategies for the management of EM: Treat All, Observe, and Serology as a function of the probability that an EM-like lesion is Lyme disease. Treat All was found to be the preferred strategy in regions that are endemic for Lyme disease. Where Lyme disease is rare, Observe is the preferred strategy, as presumptive treatment would be expected to produce excessive harm and increased costs. Where Lyme disease is rare, clinicians and public health officials should consider observing patients with EM-like lesions who lack travel to Lyme disease-endemic areas.
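
    A stripped-down version of such a decision model simply compares the expected cost (or harm) of each strategy as a function of the probability that an EM-like lesion is truly Lyme disease; the sketch below uses hypothetical cost and harm values purely to show the threshold structure, not the study's inputs.

```python
# Toy expected-cost comparison of two management strategies for an EM-like lesion,
# as a function of the probability p that the lesion is truly Lyme disease.
# All costs/harms below are hypothetical placeholders, not the study's inputs.

def expected_cost(p, cost_treat=50.0, harm_untreated_lyme=500.0, harm_unneeded_abx=30.0):
    treat_all = cost_treat + (1 - p) * harm_unneeded_abx   # everyone treated
    observe = p * harm_untreated_lyme                      # true Lyme cases go untreated
    return treat_all, observe

for p in (0.01, 0.1, 0.3, 0.7):
    t, o = expected_cost(p)
    best = "Treat All" if t < o else "Observe"
    print(f"p(Lyme)={p:.2f}  treat={t:6.1f}  observe={o:6.1f}  -> {best}")
```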

  16. When Does Reward Maximization Lead to Matching Law?

    PubMed Central

    Sakai, Yutaka; Fukai, Tomoki

    2008-01-01

    What kind of strategies subjects follow in various behavioral circumstances has been a central issue in decision making. In particular, which behavioral strategy, maximizing or matching, is more fundamental to animal's decision behavior has been a matter of debate. Here, we prove that any algorithm to achieve the stationary condition for maximizing the average reward should lead to matching when it ignores the dependence of the expected outcome on subject's past choices. We may term this strategy of partial reward maximization “matching strategy”. Then, this strategy is applied to the case where the subject's decision system updates the information for making a decision. Such information includes subject's past actions or sensory stimuli, and the internal storage of this information is often called “state variables”. We demonstrate that the matching strategy provides an easy way to maximize reward when combined with the exploration of the state variables that correctly represent the crucial information for reward maximization. Our results reveal for the first time how a strategy to achieve matching behavior is beneficial to reward maximization, achieving a novel insight into the relationship between maximizing and matching. PMID:19030101
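
    The tension between matching and maximizing can be illustrated in the simplest static setting, where reward probabilities do not depend on choice history: probability matching then yields a lower expected reward rate than always choosing the richer option. This toy comparison only illustrates the distinction; it does not reproduce the paper's analysis of schedules whose outcomes depend on past choices.

```python
import numpy as np

# Two response options with fixed reward probabilities (independent of choice history).
p = np.array([0.7, 0.3])

# Maximizing: always choose the richer option.
reward_maximizing = p.max()

# Probability matching: allocate choices in proportion to the options' reward probabilities
# (suboptimal here precisely because the outcomes ignore choice history).
f = p / p.sum()
reward_matching = (f * p).sum()

print(f"maximizing: {reward_maximizing:.3f} rewards/trial")
print(f"matching:   {reward_matching:.3f} rewards/trial")
```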

  17. Changes of contractile responses due to simulated weightlessness in rat soleus muscle

    NASA Astrophysics Data System (ADS)

    Elkhammari, A.; Noireaud, J.; Léoty, C.

    1994-08-01

    Some contractile and electrophysiological properties of muscle fibers isolated from the slow-twitch soleus (SOL) and fast-twitch extensor digitorum longus (EDL) muscles of rats were compared with those measured in SOL muscles from suspended rats. In suspended SOL (21 days of tail suspension), membrane potential (Em), intracellular sodium activity (aiNa) and the slope of the relationship between Em and log [K]o were typical of fast-twitch muscles. The relation between the maximal amplitude of K-contractures and Em was steeper for control SOL than for EDL and suspended SOL muscles. After suspension, in SOL muscles the contractile threshold and the inactivation curves for K-contractures were shifted to more positive Em. Repriming of K-contractures was unaffected by suspension. The exposure of isolated fibers to perchlorate (ClO4-)-containing (6-40 mM) solutions resulted in a similar concentration-dependent shift to more negative Em of the activation curves for EDL and suspended SOL muscles. On exposure to a Na-free TEA solution, SOL from control and suspended rats, in contrast to EDL muscles, generated slow contractile responses. Suspended SOL showed a reduced sensitivity to the contracture-producing effect of caffeine compared to control muscles. These results suggest that the modifications observed due to suspension could be accounted for by changes in the characteristics of muscle fibers from the slow- to the fast-twitch type.

  18. Effect of Superimposed Electromyostimulation on Back Extensor Strengthening: A Pilot Study.

    PubMed

    Park, Jae Hyeon; Seo, Kwan Sik; Lee, Shi-Uk

    2016-09-01

    Park, JH, Seo, KS, and Lee, S-U. Effect of superimposed electromyostimulation on back extensor strengthening: a pilot study. J Strength Cond Res 30(9): 2470-2475, 2016-Electromyostimulation (EMS) superimposed on voluntary contraction (VC) can increase muscle strength. However, no study has examined the effect of superimposing EMS on back extensor strengthening. The purpose of this study was to determine the effect of superimposed EMS on back extensor strengthening in healthy adults. Twenty healthy men, 20-29 years of age, without low-back pain were recruited. In the EMS group, electrodes were attached to bilateral L2 and L4 paraspinal muscles. Stimulation intensity was set for maximally tolerable intensity. With VC, EMS was superimposed for 10 seconds followed by a 20-second rest period. The same protocol was used in the sham stimulation (SS) group, except that the stimulation intensity was set at the lowest intensity (5 mA). All subjects performed back extension exercise using a Swiss ball, with 10 repetitions per set, 2 sets each day, 5 times a week for 2 weeks. The primary outcome measure was the change in isokinetic strength of the back extensor using an isokinetic dynamometer. Additionally, endurance was measured using the Sorensen test. After 2 weeks of back extension exercise, the peak torque and endurance increased significantly in both groups (p ≤ 0.05). Effect size between the EMS group and the SS group was medium in strength and endurance. However, there was no statistically significant difference between 2 groups. In conclusion, 2 weeks of back extensor strengthening exercise was effective for strength and endurance. Superimposing EMS on back extensor strengthening exercise could provide an additional effect on increasing strength.

  19. Effects of combined electromyostimulation and gymnastics training in prepubertal girls.

    PubMed

    Deley, Gaëlle; Cometti, Carole; Fatnassi, Anaïs; Paizis, Christos; Babault, Nicolas

    2011-02-01

    This study investigated the effects of a 6-week combined electromyostimulation (EMS) and gymnastic training program on muscle strength and vertical jump performance of prepubertal gymnasts. Sixteen young women gymnasts (age 12.4 ± 1.2 yrs) participated in this study, with 8 in the EMS group and the remaining 8 as controls. EMS was conducted on knee extensor muscles for 20 minutes 3 times a week during the first 3 weeks and once a week during the last 3 weeks. Gymnasts from both groups underwent similar gymnastics training 5-6 times a week. Isokinetic torque of the knee extensors was determined at different eccentric and concentric angular velocities ranging from -60 to +240° per second. Jumping ability was evaluated using squat jump (SJ), counter movement jump (CMJ), reactivity test, and 3 gymnastic-specific jumps. After the first 3 weeks of EMS, maximal voluntary torque was increased (+40.0 ± 10.0%, +35.3 ± 11.8%, and +50.6 ± 7.7% for -60, +60, and +240°s⁻¹, respectively; p < 0.05), as well as SJ, reactivity test and specific jump performances (+20.9 ± 8.3%, +20.4 ± 26.2% and +14.9 ± 17.2% respectively; p < 0.05). Six weeks of EMS were necessary to improve the CMJ (+10.1 ± 10.0%, p < 0.05). Improvements in jump ability were still maintained 1 month after the end of the EMS training program. To conclude, these results first demonstrate that in prepubertal gymnasts, a 6-week EMS program, combined with the daily gymnastic training, induced significant increases both in knee extensor muscle strength and nonspecific and some specific jump performances.

  20. Species Tree Inference Using a Mixture Model.

    PubMed

    Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens

    2015-09-01

    Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either of these cases, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: A two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330) as well as with a fast parsimony-based algorithm Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic analyses using gene tree parsimony. Bioinformatics 24(13):1540-1541). Our method is competitive with PHYLDOG in terms of accuracy and runs significantly faster and our method outperforms Duptree in accuracy. The analysis constituted by MixTreEM without DLRS may also be used for selecting the target species tree, yielding a fast and yet accurate algorithm for larger data sets. MixTreEM is freely available at http://prime.scilifelab.se/mixtreem/. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  1. Work Placement in UK Undergraduate Programmes. Student Expectations and Experiences.

    ERIC Educational Resources Information Center

    Leslie, David; Richardson, Anne

    1999-01-01

    A survey of 189 pre- and 106 post-sandwich work-experience students in tourism suggested that potential benefits were not being maximized. Students needed better preparation for the work experience, especially in terms of their expectations. The work experience needed better design, and the role of industry tutors needed clarification. (SK)

  2. Career Preference among Universities' Faculty: Literature Review

    ERIC Educational Resources Information Center

    Alenzi, Faris Q.; Salem, Mohamed L.

    2007-01-01

    Why do people enter academic life? What are their expectations? How can they maximize their experience and achievements, both short- and long-term? How much should they move towards commercialization? What can they do to improve their career? How much autonomy can they reasonably expect? What are the key issues for academics and aspiring academics…

  3. Picking battles wisely: plant behaviour under competition.

    PubMed

    Novoplansky, Ariel

    2009-06-01

    Plants are limited in their ability to choose their neighbours, but they are able to orchestrate a wide spectrum of rational competitive behaviours that increase their prospects to prevail under various ecological settings. Through the perception of neighbours, plants are able to anticipate probable competitive interactions and modify their competitive behaviours to maximize their long-term gains. Specifically, plants can minimize competitive encounters by avoiding their neighbours; maximize their competitive effects by aggressively confronting their neighbours; or tolerate the competitive effects of their neighbours. However, the adaptive values of these non-mutually exclusive options are expected to depend strongly on the plants' evolutionary background and to change dynamically according to their past development, and relative sizes and vigour. Additionally, the magnitude of competitive responsiveness is expected to be positively correlated with the reliability of the environmental information regarding the expected competitive interactions and the expected time left for further plastic modifications. Concurrent competition over external and internal resources and morphogenetic signals may enable some plants to increase their efficiency and external competitive performance by discriminately allocating limited resources to their more promising organs at the expense of failing or less successful organs.

  4. Rack 'em, pack 'em and stack 'em: challenges and opportunities in teaching large classes in higher education.

    PubMed

    Kumar, Saravana

    2013-01-01

    The higher education sector is undergoing tremendous change, driven by complex driving forces including financial, administrative, and organisational and stakeholder expectations. It is in this challenging environment, educators are required to maintain and improve the quality of teaching and learning outcomes while contending with increasing class sizes. Despite mixed evidence on the effectiveness of large classes on student outcomes, large classes continue to play an important part in higher education. While large classes pose numerous challenges, they also provide opportunities for innovative solutions. This paper provides an overview of these challenges and highlights opportunities for innovative solutions.

  5. Intent to treat analysis of in vitro fertilization and preimplantation genetic screening versus expectant management in patients with recurrent pregnancy loss.

    PubMed

    Murugappan, Gayathree; Shahine, Lora K; Perfetto, Candice O; Hickok, Lee R; Lathi, Ruth B

    2016-08-01

    In an intent to treat analysis, are clinical outcomes improved in recurrent pregnancy loss (RPL) patients undergoing IVF and preimplantation genetic screening (PGS) compared with patients who are expectantly managed (EM)? Among all attempts at PGS or EM among RPL patients, clinical outcomes including pregnancy rate, live birth (LB) rate and clinical miscarriage (CM) rate were similar. The standard of care for management of patients with RPL is EM. Due to the prevalence of aneuploidy in CM, PGS has been proposed as an alternate strategy for reducing CM rates and improving LB rates. Retrospective cohort study of 300 RPL patients treated between 2009 and 2014. Among two academic fertility centers, 112 RPL patients desired PGS and 188 patients chose EM. Main outcomes measured were pregnancy rate and LB per attempt and CM rate per pregnancy. One attempt was defined as an IVF cycle followed by a fresh embryo transfer or a frozen embryo transfer (PGS group) and 6 months trying to conceive (EM group). In the IVF group, 168 retrievals were performed and 38 cycles canceled their planned PGS. Cycles in which PGS was intended but cancelled had a significantly lower LB rate (15 versus 36%, P = 0.01) and higher CM rate (50 versus 14%, P < 0.01) compared with cycles that completed PGS despite similar maternal ages. Of the 130 completed PGS cycles, 74% (n = 96) yielded at least one euploid embryo. Clinical pregnancy rate per euploid embryo transfer was 72% and LB rate per euploid embryo transfer was 57%. Among all attempts at PGS or EM, clinical outcomes were similar. Median time to pregnancy was 6.5 months in the PGS group and 3.0 months in the EM group. The largest limitation is the retrospective study design, in which patients who elected for IVF/PGS may have had different clinical prognoses than patients who elected for expectant management. In addition, the definition of one attempt at conception for PGS and EM groups was different between the groups and can introduce potential confounders. For example, it was not confirmed that patients in the EM group were trying to conceive for each month of the 6-month period. Success rates with PGS are limited by the high incidence of cycles that intend but cancel PGS or cycles that do not reach transfer. Counseling RPL patients on their treatment options should include not only success rates with PGS per euploid embryo transferred, but also LB rate per initiated PGS cycle. Furthermore, patients who express an urgency to conceive should be counseled that PGS may not accelerate time to conception. © The Author 2016. Published by Oxford University Press on behalf of the European Society of Human Reproduction and Embryology. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  6. How do we assign punishment? The impact of minimal and maximal standards on the evaluation of deviants.

    PubMed

    Kessler, Thomas; Neumann, Jörg; Mummendey, Amélie; Berthold, Anne; Schubert, Thomas; Waldzus, Sven

    2010-09-01

    To explain the determinants of negative behavior toward deviants (e.g., punishment), this article examines how people evaluate others on the basis of two types of standards: minimal and maximal. Minimal standards focus on an absolute cutoff point for appropriate behavior; accordingly, the evaluation of others varies dichotomously between acceptable or unacceptable. Maximal standards focus on the degree of deviation from that standard; accordingly, the evaluation of others varies gradually from positive to less positive. This framework leads to the prediction that violation of minimal standards should elicit punishment regardless of the degree of deviation, whereas punishment in response to violations of maximal standards should depend on the degree of deviation. Four studies assessed or manipulated the type of standard and degree of deviation displayed by a target. Results consistently showed the expected interaction between type of standard (minimal and maximal) and degree of deviation on punishment behavior.

  7. Societal preferences for distributive justice in the allocation of health care resources: a latent class discrete choice experiment.

    PubMed

    Skedgel, Chris; Wailoo, Allan; Akehurst, Ron

    2015-01-01

    Economic theory suggests that resources should be allocated in a way that produces the greatest outputs, on the grounds that maximizing output allows for a redistribution that could benefit everyone. In health care, this is known as QALY (quality-adjusted life-year) maximization. This justification for QALY maximization may not hold, though, as it is difficult to reallocate health. Therefore, the allocation of health care should be seen as a matter of distributive justice as well as efficiency. A discrete choice experiment was undertaken to test consistency with the principles of QALY maximization and to quantify the willingness to trade life-year gains for distributive justice. An empirical ethics process was used to identify attributes that appeared relevant and ethically justified: patient age, severity (decomposed into initial quality and life expectancy), final health state, duration of benefit, and distributional concerns. Only 3% of respondents maximized QALYs with every choice, but scenarios with larger aggregate QALY gains were chosen more often and a majority of respondents maximized QALYs in a majority of their choices. However, respondents also appeared willing to prioritize smaller gains to preferred groups over larger gains to less preferred groups. Marginal analyses found a statistically significant preference for younger patients and a wider distribution of gains, as well as an aversion to patients with the shortest life expectancy or a poor final health state. These results support the existence of an equity-efficiency tradeoff and suggest that well-being could be enhanced by giving priority to programs that best satisfy societal preferences. Societal preferences could be incorporated through the use of explicit equity weights, although more research is required before such weights can be used in priority setting. © The Author(s) 2014.

  8. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    PubMed

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low-contrast microcalcifications, the FBP reduced detectability due to its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. The higher contrast and lower homogeneity in terms of texture analysis were shown in FBP algorithm than in other algorithms. The patient images using the EM algorithm resulted in high visibility of low-contrast mass with clear border. In this study, we compared three reconstruction algorithms by using various kinds of breast phantoms and patient cases. Future work using these algorithms and considering the type of the breast and the acquisition techniques used (e.g., angular range, dose distribution) should include the use of actual patients or patient-like phantoms to increase the potential for practical applications.
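
    The in-plane metric used above, the contrast-to-noise ratio, is commonly computed as the difference between mean signal and mean background divided by the background standard deviation; the sketch below applies one such definition to a synthetic slice, with ROI choices that are illustrative rather than those of the study.

```python
import numpy as np

def cnr(image, signal_mask, background_mask):
    """Contrast-to-noise ratio: (mean signal - mean background) / std of background."""
    signal = image[signal_mask].mean()
    bkg = image[background_mask]
    return (signal - bkg.mean()) / bkg.std()

# Toy reconstructed slice: a bright disc on a noisy background
rng = np.random.default_rng(1)
img = rng.normal(100.0, 5.0, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 6 ** 2
img[disc] += 20.0

bkg = (yy - 32) ** 2 + (xx - 32) ** 2 >= 12 ** 2   # background excludes a margin around the disc
print(f"CNR = {cnr(img, disc, bkg):.2f}")
```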

  9. 77 FR 50728 - International Mail Rates

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-08-22

    ...-tiered rate structure for Inbound Expedited Services to the EMS Cooperative's expectation that all members will participate in the Pay- for-Performance Plan, and (2) a 2012 listing of countries indicating...

  10. A deep convolutional neural network approach to single-particle recognition in cryo-electron microscopy.

    PubMed

    Zhu, Yanan; Ouyang, Qi; Mao, Youdong

    2017-07-21

    Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits an improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of un-wanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates an improved performance, objectivity and accuracy. Application of this novel method is expected to free the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
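
    As a rough illustration of the kind of model involved (and explicitly not the DeepEM architecture, which has eight layers and its own training pipeline), the sketch below defines a small convolutional classifier that maps grayscale micrograph patches to particle/non-particle logits.

```python
import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    """Toy CNN for classifying cryo-EM patches as particle vs non-particle.
    Illustrative stand-in only; not the DeepEM architecture."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ParticleCNN()
patches = torch.randn(8, 1, 64, 64)   # 8 grayscale 64x64 patches
logits = model(patches)
print(logits.shape)                    # torch.Size([8, 2])
```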

  11. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply to all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously-proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
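
    The decision rule underlying the MEU criterion is compact: given class posteriors P(h|x) and a utility matrix U[d,h], choose the decision d that maximizes the expected utility. The sketch below uses a placeholder identity utility matrix (a special case consistent with the equal error utility assumption) purely for illustration.

```python
import numpy as np

# Utility matrix U[d, h]: utility of deciding class d when the true class is h.
# Placeholder values; under the equal error utility assumption the off-diagonal
# entries within each column (same true class) are equal.
U = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

def decide(posteriors, U):
    """Pick the decision maximizing the expected utility sum_h U[d, h] * P(h|x)."""
    return int(np.argmax(U @ posteriors))

post = np.array([0.2, 0.5, 0.3])   # P(h|x) for the three hypotheses
print(decide(post, U))              # -> 1
```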

  12. Classification of longitudinal data through a semiparametric mixed-effects model based on lasso-type estimators.

    PubMed

    Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian

    2015-06-01

    We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, which is a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM) in which finite dimensional (fixed effects and variance components) and infinite dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can make the assumption that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is the proposal of a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In this latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application to real data are performed. © 2015, The International Biometric Society.

  13. Genetic diversity of the HLA-G coding region in Amerindian populations from the Brazilian Amazon: a possible role of natural selection.

    PubMed

    Mendes-Junior, C T; Castelli, E C; Meyer, D; Simões, A L; Donadi, E A

    2013-12-01

    HLA-G has an important role in the modulation of the maternal immune system during pregnancy, and evidence that balancing selection acts in the promoter and 3'UTR regions has been previously reported. To determine whether selection acts on the HLA-G coding region in the Amazon Rainforest, exons 2, 3 and 4 were analyzed in a sample of 142 Amerindians from nine villages of five isolated tribes that inhabit the Central Amazon. Six previously described single-nucleotide polymorphisms (SNPs) were identified and the Expectation-Maximization (EM) and PHASE algorithms were used to computationally reconstruct SNP haplotypes (HLA-G alleles). A new HLA-G allele, which originated in Amerindian populations by a crossing-over event between two widespread HLA-G alleles, was identified in 18 individuals. Neutrality tests evidenced that natural selection has a complex part in the HLA-G coding region. Although balancing selection is the type of selection that shapes variability at a local level (Native American populations), we have also shown that purifying selection may occur on a worldwide scale. Moreover, the balancing selection does not seem to act on the coding region as strongly as it acts on the flanking regulatory regions, and such coding signature may actually reflect a hitchhiking effect.
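
    For context, EM-based haplotype reconstruction in its simplest two-SNP form only has to resolve the phase-ambiguous double heterozygotes; the sketch below implements that textbook case with made-up counts, and is not the multi-site EM/PHASE analysis performed in the study.

```python
import numpy as np

def em_haplotype_freqs(n_dh, unambiguous_counts, n_iter=200):
    """EM estimate of two-SNP haplotype frequencies (order: AB, Ab, aB, ab).

    n_dh: number of double-heterozygous individuals (phase unknown: AB/ab or Ab/aB).
    unambiguous_counts: haplotype counts contributed by all phase-certain individuals.
    """
    counts = np.asarray(unambiguous_counts, dtype=float)
    total_haps = counts.sum() + 2 * n_dh
    p = np.full(4, 0.25)                               # initial haplotype frequencies
    for _ in range(n_iter):
        # E-step: split double heterozygotes between the two consistent phase resolutions
        w_cis, w_trans = p[0] * p[3], p[1] * p[2]
        frac_cis = w_cis / (w_cis + w_trans)
        expected = counts.copy()
        expected[[0, 3]] += n_dh * frac_cis            # AB/ab resolutions
        expected[[1, 2]] += n_dh * (1 - frac_cis)      # Ab/aB resolutions
        # M-step: re-normalize expected haplotype counts to frequencies
        p = expected / total_haps
    return p

# Made-up example: 30 double heterozygotes plus phase-certain haplotype counts
print(em_haplotype_freqs(n_dh=30, unambiguous_counts=[120, 40, 35, 75]))
```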

  14. Robust decoding of selective auditory attention from MEG in a competing-speaker environment via state-space modeling

    PubMed Central

    Akram, Sahar; Presacco, Alessandro; Simon, Jonathan Z.; Shamma, Shihab A.; Babadi, Behtash

    2015-01-01

    The underlying mechanism of how the human brain solves the cocktail party problem is largely unknown. Recent neuroimaging studies, however, suggest salient temporal correlations between the auditory neural response and the attended auditory object. Using magnetoencephalography (MEG) recordings of the neural responses of human subjects, we propose a decoding approach for tracking the attentional state while subjects are selectively listening to one of the two speech streams embedded in a competing-speaker environment. We develop a biophysically-inspired state-space model to account for the modulation of the neural response with respect to the attentional state of the listener. The constructed decoder is based on a maximum a posteriori (MAP) estimate of the state parameters via the Expectation Maximization (EM) algorithm. Using only the envelope of the two speech streams as covariates, the proposed decoder enables us to track the attentional state of the listener with a temporal resolution of the order of seconds, together with statistical confidence intervals. We evaluate the performance of the proposed model using numerical simulations and experimentally measured evoked MEG responses from the human brain. Our analysis reveals considerable performance gains provided by the state-space model in terms of temporal resolution, computational complexity and decoding accuracy. PMID:26436490

  15. Multi-Topic Tracking Model for dynamic social network

    NASA Astrophysics Data System (ADS)

    Li, Yuhua; Liu, Changzheng; Zhao, Ming; Li, Ruixuan; Xiao, Hailing; Wang, Kai; Zhang, Jun

    2016-07-01

    The topic tracking problem has attracted much attention in the last decades. However, existing approaches rarely consider network structures and textual topics together. In this paper, we propose a novel statistical model based on dynamic bayesian network, namely Multi-Topic Tracking Model for Dynamic Social Network (MTTD). It takes influence phenomenon, selection phenomenon, document generative process and the evolution of textual topics into account. Specifically, in our MTTD model, Gibbs Random Field is defined to model the influence of historical status of users in the network and the interdependency between them in order to consider the influence phenomenon. To address the selection phenomenon, a stochastic block model is used to model the link generation process based on the users' interests to topics. Probabilistic Latent Semantic Analysis (PLSA) is used to describe the document generative process according to the users' interests. Finally, the dependence on the historical topic status is also considered to ensure the continuity of the topic itself in topic evolution model. Expectation Maximization (EM) algorithm is utilized to estimate parameters in the proposed MTTD model. Empirical experiments on real datasets show that the MTTD model performs better than Popular Event Tracking (PET) and Dynamic Topic Model (DTM) in generalization performance, topic interpretability performance, topic content evolution and topic popularity evolution performance.
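
    The PLSA component referenced above has a compact EM form: the E-step computes responsibilities P(z|d,w) proportional to P(z|d)P(w|z), and the M-step re-estimates both distributions from expected counts. The sketch below implements only this PLSA core on a random count matrix; it omits the network, influence and selection components of the full MTTD model.

```python
import numpy as np

def plsa(n_dw, n_topics=3, n_iter=100, seed=0, eps=1e-12):
    """Minimal PLSA fitted by EM on a document-word count matrix n_dw (docs x words)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = n_dw.shape
    p_z_d = rng.dirichlet(np.ones(n_topics), size=n_docs)   # P(z|d), shape (D, K)
    p_w_z = rng.dirichlet(np.ones(n_words), size=n_topics)  # P(w|z), shape (K, W)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]      # shape (D, W, K)
        resp = joint / (joint.sum(axis=2, keepdims=True) + eps)
        # M-step: re-estimate both distributions from expected counts
        counts = n_dw[:, :, None] * resp                     # expected counts per (d, w, z)
        p_w_z = counts.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + eps
        p_z_d = counts.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + eps
    return p_z_d, p_w_z

n_dw = np.random.default_rng(1).poisson(1.0, size=(6, 12))   # toy document-word counts
p_z_d, p_w_z = plsa(n_dw)
print(p_w_z.round(2))
```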

  16. Weakly Supervised Dictionary Learning

    NASA Astrophysics Data System (ADS)

    You, Zeyu; Raich, Raviv; Fern, Xiaoli Z.; Kim, Jinsub

    2018-05-01

    We present a probabilistic modeling and inference framework for discriminative analysis dictionary learning under a weak supervision setting. Dictionary learning approaches have been widely used for tasks such as low-level signal denoising and restoration as well as high-level classification tasks, which can be applied to audio and image analysis. Synthesis dictionary learning aims at jointly learning a dictionary and corresponding sparse coefficients to provide accurate data representation. This approach is useful for denoising and signal restoration, but may lead to sub-optimal classification performance. By contrast, analysis dictionary learning provides a transform that maps data to a sparse discriminative representation suitable for classification. We consider the problem of analysis dictionary learning for time-series data under a weak supervision setting in which signals are assigned with a global label instead of an instantaneous label signal. We propose a discriminative probabilistic model that incorporates both label information and sparsity constraints on the underlying latent instantaneous label signal using cardinality control. We present the expectation maximization (EM) procedure for maximum likelihood estimation (MLE) of the proposed model. To facilitate a computationally efficient E-step, we propose both a chain and a novel tree graph reformulation of the graphical model. The performance of the proposed model is demonstrated on both synthetic and real-world data.

  17. Comparison between the loading capacities of columns packed with partially and totally porous fine particles. What is the effective surface area available for adsorption?

    PubMed

    Gritti, Fabrice; Guiochon, Georges

    2007-12-28

    The adsorption isotherms of phenol, caffeine, insulin, and lysozyme were measured on two C(18)-bonded silica columns. The first one was packed with classical totally porous particles (3 microm Luna(2)-C(18) from Phenomenex, Torrance, CA, USA), the second one with shell particles (2.7 microm Halo-C(18) from Advanced Materials Technology, Wilmington, DE, USA). The measurements were made at room temperature (T=295+/-1K), using mainly frontal analysis (FA) and also elution by characteristic points (FACP) when necessary. The adsorption energy distributions (AEDs) were estimated by the iterative numerical expectation-maximization (EM) procedure and served to justify the choice of the best adsorption isotherm model for each compound. The best isotherm parameters were derived either from the best fit of the experimental data to a multi-Langmuir isotherm model (MLRA) or from the AED results (equilibrium constants and saturation capacities), when the convergence of the EM program was achieved. The experiments show that the loading capacity of the Luna column is more than twice that of the Halo column for low-molecular-weight compounds. This result was expected; it is in good agreement with the values of the accessible surface area of these two materials, which were calculated from the pore size volume distributions. The pore size volume distributions are validated by the excellent agreement between the calculated and measured exclusion volumes of polystyrene standards by inverse size exclusion chromatography (ISEC). In contrast, the loading capacity ratio of the two columns is 1.5 or less with insulin and lysozyme. This is due to a significant exclusion of these two proteins from the internal pore volumes of the two packing materials. This result raises the problem of the determination of the effective surface area of the packing material, particularly in the case of proteins. This area is about 40 and 30% of the total surface area for insulin and for lysozyme, respectively, based on the pore size volume distribution validated by the ISEC method. The ISEC experiments showed that the largest and the smallest mesopores have rather a cylindrical and a spherical shape, respectively, for both packing materials.

  18. Joint state and parameter estimation of the hemodynamic model by particle smoother expectation maximization method

    NASA Astrophysics Data System (ADS)

    Aslan, Serdar; Taylan Cemgil, Ali; Akın, Ata

    2016-08-01

    Objective. In this paper, we aimed for the robust estimation of the parameters and states of the hemodynamic model by using the blood oxygen level dependent signal. Approach. In the fMRI literature, there are only a few successful methods that are able to make a joint estimation of the states and parameters of the hemodynamic model. In this paper, we implemented a maximum likelihood based method called the particle smoother expectation maximization (PSEM) algorithm for the joint state and parameter estimation. Main results. Former sequential Monte Carlo methods were only reliable in the hemodynamic state estimates. They were claimed to outperform the local linearization (LL) filter and the extended Kalman filter (EKF). The PSEM algorithm is compared with the most successful method called the square-root cubature Kalman smoother (SCKS) for both state and parameter estimation. SCKS was found to be better than the dynamic expectation maximization (DEM) algorithm, which was shown to be a better estimator than EKF, LL and particle filters. Significance. PSEM was more accurate than SCKS for both the state and the parameter estimation. Hence, PSEM seems to be the most accurate method for system identification and state estimation in the hemodynamic model inversion literature. This paper does not compare its results with the Tikhonov-regularized Newton-CKF (TNF-CKF), a recent robust method which works in the filtering sense.

  19. Competitive Facility Location with Random Demands

    NASA Astrophysics Data System (ADS)

    Uno, Takeshi; Katagiri, Hideki; Kato, Kosuke

    2009-10-01

    This paper proposes a new location problem for competitive facilities, e.g. shops and stores, with uncertain demands in the plane. By representing the demands for facilities as random variables, the location problem is formulated as a stochastic programming problem, and to find its solution, three deterministic programming problems are considered: an expectation maximizing problem, a probability maximizing problem, and a satisfying level maximizing problem. After showing that an optimal solution of each can be found by solving 0-1 programming problems, a solution method is proposed based on an improved tabu search algorithm with strategic vibration. The efficiency of the solution method is shown by applying it to numerical examples of the facility location problem.

  20. Physical renormalization condition for de Sitter QED

    NASA Astrophysics Data System (ADS)

    Hayashinaka, Takahiro; Xue, She-Sheng

    2018-05-01

    We considered a new renormalization condition for the vacuum expectation values of the scalar and spinor currents induced by a homogeneous and constant electric field background in de Sitter spacetime. Following a semiclassical argument, the condition named maximal subtraction imposes the exponential suppression on the massive charged particle limit of the renormalized currents. The maximal subtraction changes the behaviors of the induced currents previously obtained by the conventional minimal subtraction scheme. The maximal subtraction is favored for a couple of physically decent predictions including the identical asymptotic behavior of the scalar and spinor currents, the removal of the IR hyperconductivity from the scalar current, and the finite current for the massless fermion.

  1. Trust regions in Kriging-based optimization with expected improvement

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
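
    The acquisition function both TRIKE and CYCLONE maximize is the standard expected improvement, which for minimization depends only on the Kriging posterior mean and standard deviation at a candidate point; the sketch below evaluates that closed form on made-up values and omits the trust-region and cyclic-pattern bookkeeping that distinguish the two methods.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best, xi=0.0):
    """EI(x) for minimization, given the GP posterior mean mu and std sigma at x."""
    sigma = np.maximum(sigma, 1e-12)          # guard against zero predictive variance
    imp = f_best - mu - xi                    # predicted improvement over the incumbent
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

# Toy posterior values at three candidate points, with incumbent best value 1.0
mu = np.array([0.8, 1.2, 0.9])
sigma = np.array([0.3, 0.05, 0.5])
print(expected_improvement(mu, sigma, f_best=1.0))
```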

  2. Faculty Mentoring Practices in Academic Emergency Medicine.

    PubMed

    Welch, Julie; Sawtelle, Stacy; Cheng, David; Perkins, Tony; Ownbey, Misha; MacNeill, Emily; Hockberger, Robert; Rusyniak, Daniel

    2017-03-01

    Mentoring is considered a fundamental component of career success and satisfaction in academic medicine. However, there is no national standard for faculty mentoring in academic emergency medicine (EM) and a paucity of literature on the subject. The objective was to conduct a descriptive study of faculty mentoring programs and practices in academic departments of EM. An electronic survey instrument was sent to 135 department chairs of EM in the United States. The survey queried faculty demographics, mentoring practices, structure, training, expectations, and outcome measures. Chi-square and Wilcoxon rank-sum tests were used to compare metrics of mentoring effectiveness (i.e., number of publications and National Institutes of Health [NIH] funding) across mentoring variables of interest. Thirty-nine of 135 departments completed the survey, with a heterogeneous mix of faculty classifications. While only 43.6% of departments had formal mentoring programs, many augmented faculty mentoring with project or skills-based mentoring (66.7%), peer mentoring (53.8%), and mentoring committees (18%). Although the majority of departments expected faculty to participate in mentoring relationships, only half offered some form of mentoring training. The mean number of faculty publications per department per year was 52.8, and 11 departments fell within the top 35 NIH-funded EM departments. There was an association between higher levels of perceived mentoring success and both higher NIH funding (p = 0.022) and higher departmental publications rates (p = 0.022). In addition, higher NIH funding was associated with mentoring relationships that were assigned (80%), self-identified (20%), or mixed (22%; p = 0.026). Our findings help to characterize the variability of faculty mentoring in EM, identify opportunities for improvement, and underscore the need to learn from other successful mentoring programs. This study can serve as a basis to share mentoring practices and stimulate conversation around strategies to improve faculty mentoring in EM. © 2016 by the Society for Academic Emergency Medicine.

  3. Network approach for decision making under risk—How do we choose among probabilistic options with the same expected value?

    PubMed Central

    Chen, Yi-Shin

    2018-01-01

    Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing ‘goal’ and ‘time’ factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight. PMID:29702665

  4. Network approach for decision making under risk-How do we choose among probabilistic options with the same expected value?

    PubMed

    Pan, Wei; Chen, Yi-Shin

    2018-01-01

    Conventional decision theory suggests that under risk, people choose option(s) by maximizing the expected utility. However, theories deal ambiguously with different options that have the same expected utility. A network approach is proposed by introducing 'goal' and 'time' factors to reduce the ambiguity in strategies for calculating the time-dependent probability of reaching a goal. As such, a mathematical foundation that explains the irrational behavior of choosing an option with a lower expected utility is revealed, which could imply that humans possess rationality in foresight.

  5. Can Monkeys Make Investments Based on Maximized Pay-off?

    PubMed Central

    Steelandt, Sophie; Dufour, Valérie; Broihanne, Marie-Hélène; Thierry, Bernard

    2011-01-01

    Animals can maximize benefits, but it is not known if they adjust their investment according to expected pay-offs. We investigated whether monkeys can use different investment strategies in an exchange task. We tested eight capuchin monkeys (Cebus apella) and thirteen macaques (Macaca fascicularis, Macaca tonkeana) in an experiment where they could adapt their investment to the food amounts proposed by two different experimenters. One, the doubling partner, returned a reward that was twice the amount given by the subject, whereas the other, the fixed partner, always returned a constant amount regardless of the amount given. To maximize pay-offs, subjects should invest a maximal amount with the first partner and a minimal amount with the second. When tested with the fixed partner only, one third of the monkeys learned to remove a maximal amount of food for immediate consumption before investing a minimal one. With both partners, most subjects failed to maximize pay-offs by using different decision rules according to each partner's quality. A single Tonkean macaque succeeded in investing a maximal amount with one experimenter and a minimal amount with the other. The fact that only one of the 21 subjects learned to maximize benefits by adapting investment according to the experimenters' quality indicates that such a task is difficult for monkeys, albeit not impossible. PMID:21423777

  6. Evidence for surprise minimization over value maximization in choice behavior

    PubMed Central

    Schwartenbeck, Philipp; FitzGerald, Thomas H. B.; Mathys, Christoph; Dolan, Ray; Kronbichler, Martin; Friston, Karl

    2015-01-01

    Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus ‘keep their options open’. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations. PMID:26564686

  7. Risk aversion and risk seeking in multicriteria forest management: a Markov decision process approach

    Treesearch

    Joseph Buongiorno; Mo Zhou; Craig Johnston

    2017-01-01

    Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value.  The other method used the certainty...

  8. Effects of intensity on muscle-specific voluntary electromechanical delay and relaxation electromechanical delay.

    PubMed

    Smith, Cory M; Housh, Terry J; Hill, Ethan C; Keller, Josh L; Johnson, Glen O; Schmidt, Richard J

    2018-06-01

    The purposes of this study were to examine: 1) the potential muscle-specific differences in voluntary electromechanical delay (EMD) and relaxation electromechanical delay (R-EMD), and 2) the effects of intensity on EMD and R-EMD during step incremental isometric muscle actions from 10 to 100% maximal voluntary isometric contraction (MVIC). EMD and R-EMD measures were calculated from the simultaneous assessments of electromyography, mechanomyography, and force production from the vastus lateralis (VL), vastus medialis (VM), and rectus femoris (RF) during step isometric muscle actions. There were no differences between the VL, VM, and RF for the voluntary EMD E-M (onset of the electromyographic signal to onset of the mechanomyographic signal), EMD M-F (onset of the mechanomyographic signal to onset of force production), or EMD E-F (onset of the electromyographic signal to onset of force production), as well as R-EMD E-M (cessation of the electromyographic signal to cessation of the mechanomyographic signal), R-EMD M-F (cessation of the mechanomyographic signal to force cessation), or R-EMD E-F (cessation of the electromyographic signal to force cessation), at any intensity. There were decreases in all EMD and R-EMD measures with increases in intensity. The relative contributions from EMD E-M and EMD M-F to EMD E-F as well as R-EMD E-M and R-EMD M-F to R-EMD E-F remained similar across all intensities. The superficial muscles of the quadriceps femoris shared similar EMD and R-EMD measurements.

  9. Biotransformation of L-tyrosine to Dopamine by a Calcium Alginate Immobilized Mutant Strain of Aspergillus oryzae.

    PubMed

    Ali, Sikander; Nawaz, Wajeeha

    2016-08-01

    The present research work is concerned with the biotransformation of L-tyrosine to dopamine (DA) by calcium alginate-entrapped conidiospores of a mutant strain of Aspergillus oryzae. Different strains of A. oryzae were isolated from soil. Out of 13 isolated strains, isolate-2 (I-2) was found to be a better DA producer. The wild-type I-2 was chemically improved by treating it with different concentrations of ethyl methanesulfonate (EMS). Among seven mutant variants, EMS-6, exhibiting maximal DA activity of 43 μg/ml, was selected. The strain was further exposed to L-cysteine HCl to make it resistant against diversion and environmental stress. The conidiospores of the selected mutant variant A. oryzae EMS-6 were entrapped in calcium alginate beads. Different parameters for immobilization were investigated. The activity was further improved from 44 to 62 μg/ml under optimized conditions (1.5 % sodium alginate, 2 ml inoculum, and 2 mm bead size). The best resistant mutant variant exhibited an over threefold increase in DA activity (62 μg/ml) compared with wild-type I-2 (21 μg/ml) in the reaction mixture. From the results presented in the study, it was observed that high titers of DA activity in vitro could effectively be achieved by EMS-induced mutagenesis of the filamentous fungus culture used.

  10. Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images

    PubMed Central

    Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.

    2012-01-01

    Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC), that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773

  11. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    NASA Astrophysics Data System (ADS)

    Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2011-06-01

    We describe a fully automated method for tissue classification, that is, the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebrospinal fluid (CSF), and for intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets—consisting of 20 and 18 volumes, respectively—provided by the Internet Brain Segmentation Repository.
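
    The full model couples a discriminative PBT, an MRF prior, and INU estimation inside EM; as a much simpler, hedged illustration of the unsupervised EM component alone, a plain one-dimensional Gaussian-mixture EM over voxel intensities (three classes standing in for GM, WM, and CSF, with no spatial prior or INU model) could look like:

        import numpy as np

        def gmm_em(intensities, n_classes=3, n_iter=50, seed=0):
            """Plain EM for a 1-D Gaussian mixture; no MRF prior or INU correction."""
            rng = np.random.default_rng(seed)
            x = np.asarray(intensities, dtype=float)
            mu = rng.choice(x, n_classes, replace=False)    # initial class means
            var = np.full(n_classes, x.var())               # initial class variances
            pi = np.full(n_classes, 1.0 / n_classes)        # mixing proportions
            for _ in range(n_iter):
                # E-step: responsibility of each class for each voxel
                dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)
                resp = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
                # M-step: re-estimate means, variances, and proportions
                nk = resp.sum(axis=0)
                mu = (resp * x[:, None]).sum(axis=0) / nk
                var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-8
                pi = nk / x.size
            return resp.argmax(axis=1), mu, var, pi         # hard labels and parameters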

  12. Performance comparison of first-order conditional estimation with interaction and Bayesian estimation methods for estimating the population parameters and its distribution from data sets with a low number of subjects.

    PubMed

    Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol

    2017-12-01

    Exploratory preclinical, as well as clinical, trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and of expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distributions from data sets having a low number of subjects. One hundred data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in their PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical data of theophylline available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter (fixed effect and random effect) estimates showed that all four methods performed equally at the lower IIV levels, while the FOCE-I method performed better than the EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
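
    For reference, the two error metrics can be computed directly from replicate estimates; the percentage-based definitions below are the conventional ones and are an assumption, since the abstract does not spell them out:

        import numpy as np

        def relative_estimation_error(estimates, true_value):
            """REE (%) of each replicate estimate of a single parameter."""
            estimates = np.asarray(estimates, dtype=float)
            return 100.0 * (estimates - true_value) / true_value

        def relative_rmse(estimates, true_value):
            """rRMSE (%) across replicate estimates of a single parameter."""
            ree = relative_estimation_error(estimates, true_value) / 100.0
            return 100.0 * np.sqrt(np.mean(ree ** 2))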

  13. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction.

    PubMed

    Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin

    2011-06-07

    We describe a fully automated method for tissue classification, that is, the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebrospinal fluid (CSF), and for intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets (consisting of 20 and 18 volumes, respectively) provided by the Internet Brain Segmentation Repository.

  14. Maximal sfermion flavour violation in super-GUTs

    DOE PAGES

    Ellis, John; Olive, Keith A.; Velasco-Sevilla, Liliana

    2016-10-20

    We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m_0 specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m_1/2, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m_1/2 and generation independent. In this case, the input scalar masses m_0 may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.

  15. The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology.

    PubMed

    Jara-Ettinger, Julian; Gweon, Hyowon; Schulz, Laura E; Tenenbaum, Joshua B

    2016-08-01

    We propose that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur. This 'naïve utility calculus' allows both children and adults to observe the behavior of others and infer their beliefs and desires, their longer-term knowledge and preferences, and even their character: who is knowledgeable or competent, who is praiseworthy or blameworthy, who is friendly, indifferent, or an enemy. We review studies providing support for the naïve utility calculus, and we show how it captures much of the rich social reasoning humans engage in from infancy. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Speeded Reaching Movements around Invisible Obstacles

    PubMed Central

    Hudson, Todd E.; Wolfe, Uta; Maloney, Laurence T.

    2012-01-01

    We analyze the problem of obstacle avoidance from a Bayesian decision-theoretic perspective using an experimental task in which reaches around a virtual obstacle were made toward targets on an upright monitor. Subjects received monetary rewards for touching the target and incurred losses for accidentally touching the intervening obstacle. The locations of target-obstacle pairs within the workspace were varied from trial to trial. We compared human performance to that of a Bayesian ideal movement planner (who chooses motor strategies maximizing expected gain) using the Dominance Test employed in Hudson et al. (2007). The ideal movement planner suffers from the same sources of noise as the human, but selects movement plans that maximize expected gain in the presence of that noise. We find good agreement between the predictions of the model and actual performance in most but not all experimental conditions. PMID:23028276
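
    The planner's criterion can be illustrated with a one-dimensional toy calculation of expected gain for a candidate aim point under Gaussian endpoint noise; the target and obstacle extents, pay-offs, and noise level below are hypothetical and are not taken from the study:

        import numpy as np
        from scipy.stats import norm

        def expected_gain(aim, target=(0.0, 1.0), obstacle=(-1.5, -0.5),
                          reward=2.5, penalty=-5.0, sigma=0.4):
            """Expected gain of aiming at 'aim' when endpoints scatter as N(aim, sigma^2)."""
            p_target = norm.cdf(target[1], aim, sigma) - norm.cdf(target[0], aim, sigma)
            p_obstacle = norm.cdf(obstacle[1], aim, sigma) - norm.cdf(obstacle[0], aim, sigma)
            return reward * p_target + penalty * p_obstacle

        aims = np.linspace(-1.0, 1.5, 251)
        best_aim = aims[np.argmax([expected_gain(a) for a in aims])]  # gain-maximizing aim point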

  17. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    PubMed

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has witnessed the increased prevalence of irregularly spaced longitudinal data in the social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
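
    The Van der Pol benchmark can be simulated directly to generate irregularly spaced observations of the kind the algorithm is designed for; the damping parameter, sampling scheme, and noise level below are illustrative and unrelated to the random-effects specification in the paper:

        import numpy as np
        from scipy.integrate import solve_ivp

        def van_der_pol(t, state, mu=1.0):
            """Van der Pol oscillator: x'' - mu * (1 - x^2) * x' + x = 0."""
            x, v = state
            return [v, mu * (1.0 - x ** 2) * v - x]

        rng = np.random.default_rng(1)
        t_obs = np.sort(rng.uniform(0.0, 20.0, size=40))          # irregularly spaced time points
        sol = solve_ivp(van_der_pol, (0.0, 20.0), y0=[2.0, 0.0], t_eval=t_obs, args=(1.0,))
        x_obs = sol.y[0] + 0.1 * rng.normal(size=t_obs.size)      # noisy observations of x(t)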

  18. An Effective Post-Filtering Framework for 3-D PET Image Denoising Based on Noise and Sensitivity Characteristics

    NASA Astrophysics Data System (ADS)

    Kim, Ji Hye; Ahn, Il Jun; Nam, Woo Hyun; Ra, Jong Beom

    2015-02-01

    Positron emission tomography (PET) images usually suffer from a noticeable amount of statistical noise. In order to reduce this noise, a post-filtering process is usually adopted. However, the performance of this approach is limited because the denoising process is mostly performed on the basis of the Gaussian random noise. It has been reported that in a PET image reconstructed by the expectation-maximization (EM), the noise variance of each voxel depends on its mean value, unlike in the case of Gaussian noise. In addition, we observe that the variance also varies with the spatial sensitivity distribution in a PET system, which reflects both the solid angle determined by a given scanner geometry and the attenuation information of a scanned object. Thus, if a post-filtering process based on the Gaussian random noise is applied to PET images without consideration of the noise characteristics along with the spatial sensitivity distribution, the spatially variant non-Gaussian noise cannot be reduced effectively. In the proposed framework, to effectively reduce the noise in PET images reconstructed by the 3-D ordinary Poisson ordered subset EM (3-D OP-OSEM), we first denormalize an image according to the sensitivity of each voxel so that the voxel mean value can represent its statistical properties reliably. Based on our observation that each noisy denormalized voxel has a linear relationship between the mean and variance, we try to convert this non-Gaussian noise image to a Gaussian noise image. We then apply a block matching 4-D algorithm that is optimized for noise reduction of the Gaussian noise image, and reconvert and renormalize the result to obtain a final denoised image. Using simulated phantom data and clinical patient data, we demonstrate that the proposed framework can effectively suppress the noise over the whole region of a PET image while minimizing degradation of the image resolution.
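
    The framework relies on scanner-specific sensitivity maps and a block-matching 4-D filter; as a loose, hedged illustration of the general idea of sensitivity denormalization followed by a variance-stabilizing transform (here a generic Anscombe-style square-root transform, which is an assumption and not necessarily the conversion used in the paper), one might write:

        import numpy as np

        def stabilize(image, sensitivity, eps=1e-8):
            """Denormalize by sensitivity, then apply a square-root variance-stabilizing transform."""
            counts_like = image * np.maximum(sensitivity, eps)   # variance roughly proportional to mean
            return 2.0 * np.sqrt(counts_like + 3.0 / 8.0)

        def destabilize(stabilized, sensitivity, eps=1e-8):
            """Algebraic inverse of stabilize(), followed by renormalization."""
            counts_like = (stabilized / 2.0) ** 2 - 3.0 / 8.0
            return counts_like / np.maximum(sensitivity, eps)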

  19. A unified EM approach to bladder wall segmentation with coupled level-set constraints

    PubMed Central

    Han, Hao; Li, Lihong; Duan, Chaijie; Zhang, Hao; Zhao, Yang; Liang, Zhengrong

    2013-01-01

    Magnetic resonance (MR) imaging-based virtual cystoscopy (VCys), as a non-invasive, safe and cost-effective technique, has shown its promising virtue for early diagnosis and recurrence management of bladder carcinoma. One primary goal of VCys is to identify bladder lesions with abnormal bladder wall thickness, and consequently a precise segmentation of the inner and outer borders of the wall is required. In this paper, we propose a unified expectation-maximization (EM) approach to the maximum-a-posteriori (MAP) solution of bladder wall segmentation, by integrating a novel adaptive Markov random field (AMRF) model and the coupled level-set (CLS) information into the prior term. The proposed approach is applied to the segmentation of T1-weighted MR images, where the wall is enhanced while the urine and surrounding soft tissues are suppressed. By introducing scale-adaptive neighborhoods as well as adaptive weights into the conventional MRF model, the AMRF model takes into account the local information more accurately. In order to mitigate the influence of image artifacts adjacent to the bladder wall and to preserve the continuity of the wall surface, we apply geometrical constraints on the wall using our previously developed CLS method. This paper not only evaluates the robustness of the presented approach against the known ground truth of simulated digital phantoms, but further compares its performance with our previous CLS approach via both volunteer and patient studies. Statistical analysis on experts’ scores of the segmented borders from both approaches demonstrates that our new scheme is more effective in extracting the bladder wall. Based on the wall thickness calibrated from the segmented single-layer borders, a three-dimensional virtual bladder model can be constructed and the wall thickness can be mapped on to the model, where the bladder lesions will be eventually detected via experts’ visualization and/or computer-aided detection. PMID:24001932

  20. PCA based clustering for brain tumor segmentation of T1w MRI images.

    PubMed

    Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay

    2017-03-01

    Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two clustering methods. The five most common PCA algorithms, namely the conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, the PPCA obtained the best results among all others. Furthermore, the EM-PCA and the PPCA assisted the K-Means algorithm in accomplishing the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
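
    As a hedged, generic illustration of the reduce-then-cluster pipeline the study compares (using scikit-learn's conventional PCA and K-Means rather than the PPCA, EM-PCA, GHA, or APEX variants evaluated in the paper), one could write:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        def pca_kmeans_segment(image, n_components=16, n_clusters=4, seed=0):
            """Treat each image line as a sample, reduce it with PCA, then cluster with K-Means."""
            rows = np.asarray(image, dtype=float)        # shape: (n_lines, n_pixels_per_line)
            reduced = PCA(n_components=n_components).fit_transform(rows)
            km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
            return km.fit_predict(reduced)               # one cluster label per image line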

  1. Fault Identification by Unsupervised Learning Algorithm

    NASA Astrophysics Data System (ADS)

    Nandan, S.; Mannu, U.

    2012-12-01

    Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover such as cities, deserts, and vegetation, or to capture the changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and an Expectation Maximization (EM) algorithm modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and by probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillion et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the fault orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and with previously mapped faults.

  2. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    NASA Astrophysics Data System (ADS)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.

  3. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions.

    PubMed

    Novosad, Philip; Reader, Andrew J

    2016-06-21

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [(18)F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [(11)C]SCH23390 data, showing promising results.

  4. Assessing the Value of Information of Geophysical Data For Groundwater Management

    NASA Astrophysics Data System (ADS)

    Trainor, W. J.; Caers, J. K.; Mukerji, T.; Auken, E.; Knight, R. J.

    2008-12-01

    Effective groundwater management requires hydrogeologic models informed by various data sources. The long-term goal of our research is to develop methodologies that quantify the value of information (VOI) of geophysical data for water managers. We present an initial sensitivity study on assessing the reliability of airborne electromagnetic (EM) data for detecting channel orientation. The reliability results are used to calculate VOI regarding decisions of artificial recharge to mitigate seawater intrusion. To demonstrate how a hydrogeologic problem can be framed in decision analysis terms, a hypothetical example is built, where water managers are considering artificial recharge to remediate seawater intrusion. Is the cost of recharge justified given the large uncertainty of subsurface heterogeneity that may interfere with successful recharge? Thus, the decision is whether recharge should be performed and, if so, where the recharge wells should be located. This decision is difficult because of the large uncertainty of the aquifer heterogeneity that influences flow. The expected value of all possible outcomes of the decision without gathering additional EM information is the prior value VPRIOR. The value of information (VOI) is calculated as the expected gain in value after including the relevant new information, or the difference between the value after a free experiment (VFE) and the value prior (VPRIOR): VOI = VFE - VPRIOR. Airborne EM has been used to detect confining clay layers and flow barriers. However, geophysical information rarely identifies the subsurface perfectly. Many challenges impact data quality and the resulting models (interpretation uncertainty). To evaluate how well airborne EM data detect the orientation of subsurface channel systems, 125 alternative binary, fluvial lithology models are generated, each categorized into one of three subsurface scenarios: northwest, southwest, and mixed channel orientation. Using rock property relations, the lithology models are converted into electrical resistivity models for EM forward modeling, to generate time-domain EM data. Noise is added to the late times of the EM data to better represent typical airborne acquisition. Inversions are performed to obtain 125 inverted resistivity images. From the images, we calculate the angle of maximum spatial correlation at every cell, and compare it with the truth - the original lithology model. These synthetic models serve as a proxy to estimate misclassification probabilities of channel orientation from actual EM data. The misclassification probabilities are then used in the VOI calculations. Results are presented demonstrating how the reliability measure and the pumping schedule can impact VOI. Lastly, reliability and VOI are calculated and compared for land-based EM data, which has different spatial sampling and resolution than airborne data.
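
    The bookkeeping behind VOI = VFE - VPRIOR can be reproduced with a small Bayesian pre-posterior calculation; the prior probabilities, reliability (misclassification) matrix, and recharge pay-offs below are entirely hypothetical:

        import numpy as np

        prior = np.array([0.4, 0.4, 0.2])                 # P(channel scenario): NW, SW, mixed
        payoff = np.array([[ 5.0, -2.0, 1.0],             # recharge layout A
                           [-2.0,  5.0, 1.0],             # recharge layout B
                           [ 0.0,  0.0, 0.0]])            # do not recharge
        v_prior = max(payoff @ prior)                      # best decision without EM data

        # Reliability: P(interpreted scenario | true scenario); columns sum to 1.
        reliability = np.array([[0.8, 0.1, 0.2],
                                [0.1, 0.8, 0.2],
                                [0.1, 0.1, 0.6]])

        v_free = 0.0
        for row in reliability:                            # each possible EM interpretation
            joint = row * prior                            # P(interpretation, scenario)
            p_msg = joint.sum()
            v_free += p_msg * max(payoff @ (joint / p_msg))  # decide after seeing the data
        voi = v_free - v_prior                             # VOI = VFE - VPRIOR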

  5. Anechoic Chamber test of the Electromagnetic Measurement System ground test unit

    NASA Astrophysics Data System (ADS)

    Stevenson, L. E.; Scott, L. D.; Oakes, E. T.

    1987-04-01

    The Electromagnetic Measurement System (EMMS) will acquire data on electromagnetic (EM) environments at key weapon locations on various aircraft certified for nuclear weapons. The high-frequency ground unit of the EMMS consists of an instrumented B61 bomb case that will measure (with current probes) the localized current density resulting from an applied EM field. For this portion of the EMMS, the first system test was performed in the Anechoic Chamber Facility at Sandia National Laboratories, Albuquerque, New Mexico. The EMMS pod was subjected to EM radiation at microwave frequencies of 1, 3, and 10 GHz. At each frequency, the EMMS pod was rotated at many positions relative to the microwave source so that the individual current probes were exposed to a direct line-of-sight illumination. The variations between the measured and calculated electric fields for the current probes with direct illumination by the EM source are within a few dB. The results obtained from the anechoic test were better than expected and verified that the high-frequency ground portion of the EMMS will accurately measure the EM environments for which it was designed.

  6. Cryo-EM in drug discovery: achievements, limitations and prospects.

    PubMed

    Renaud, Jean-Paul; Chari, Ashwin; Ciferri, Claudio; Liu, Wen-Ti; Rémigy, Hervé-William; Stark, Holger; Wiesmann, Christian

    2018-06-08

    Cryo-electron microscopy (cryo-EM) of non-crystalline single particles is a biophysical technique that can be used to determine the structure of biological macromolecules and assemblies. Historically, its potential for application in drug discovery has been heavily limited by two issues: the minimum size of the structures it can be used to study and the resolution of the images. However, recent technological advances - including the development of direct electron detectors and more effective computational image analysis techniques - are revolutionizing the utility of cryo-EM, leading to a burst of high-resolution structures of large macromolecular assemblies. These advances have raised hopes that single-particle cryo-EM might soon become an important tool for drug discovery, particularly if they could enable structural determination for 'intractable' targets that are still not accessible to X-ray crystallographic analysis. This article describes the recent advances in the field and critically assesses their relevance for drug discovery as well as discussing at what stages of the drug discovery pipeline cryo-EM can be useful today and what to expect in the near future.

  7. Study of spectroscopic properties of nanosized particles of core-shell morphology

    NASA Astrophysics Data System (ADS)

    Bzhalava, T. N.; Kervalishvili, P. J.

    2018-03-01

    A method for studying the spectroscopic properties of nanosized particles and estimating the resonance wavelength range is proposed, for the purpose of determining specific and unique “spectral” signatures for sensing and identification of nanobioparticles and viruses. The goal of the proposed methodology is the elaboration of relevant models of viruses and the estimation of the spectral response to the interaction of an electromagnetic (EM) field with a viral nanoparticle. A core-shell physical model is used as a first approximation of the shape and structure of the virion. The theoretical solution of EM wave scattering by a single spherical virus-like particle (VLP) is applied to determine the EM fields in the core, the shell, and the surrounding medium of the VLP, as well as the scattering and absorption characteristics. Numerical results obtained by computer simulation for the estimation of the EM “spectra” of bacteriophage T7 demonstrate the strong dependence of the spectroscopic characteristics on the core-shell-related electric and geometric parameters of the VLP in the resonance wavelength range. The expected spectral response is observable in far-field characterizations. The obtained analytical EM field expressions and the modelling technique, in complement with experimental spectroscopic methods, should provide virus spectral signatures, which are important in the characterization of bioparticles.

  8. DESGW: Optical Follow-up of BBH LIGO-Virgo Events with DECam

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Butler, Robert E.; Soares-Santos, M.; Annis, J.

    2017-12-14

    The DESGW program is a collaboration between members of the Dark Energy Survey, the wider astronomical community, and the LIGO-Virgo Collaboration to search for optical counterparts of gravitational wave events, such as those expected from binary neutron star mergers or neutron star-black hole mergers. While binary black hole (BBH) events are not expected to produce an electromagnetic (EM) signature, emission is certainly not impossible. The DESGW program has performed follow-up observations of four BBH events detected by LIGO in order to search for any possible EM counterpart. Failure to find such counterparts is still relevant in that it produces limits on optical emission from such events. This is a review of follow-up results from O1 BBH events and a discussion of the status of ongoing uniform re-analysis of all BBH events that DESGW has followed up to date.

  9. Optimization of Multiple Related Negotiation through Multi-Negotiation Network

    NASA Astrophysics Data System (ADS)

    Ren, Fenghui; Zhang, Minjie; Miao, Chunyan; Shen, Zhiqi

    In this paper, a Multi-Negotiation Network (MNN) and a Multi-Negotiation Influence Diagram (MNID) are proposed to optimally handle Multiple Related Negotiations (MRN) in a multi-agent system. Most popular, state-of-the-art approaches perform MRN sequentially. However, a sequential procedure may not optimally execute MRN in terms of maximizing the global outcome, and may even lead to unnecessary losses in some situations. The motivation of this research is to use an MNN to handle MRN concurrently so as to maximize the expected utility of MRN. Firstly, both the joint success rate and the joint utility, considering all related negotiations, are dynamically calculated based on an MNN. Secondly, by employing an MNID, an agent's possible decision on each related negotiation is reflected by the value of expected utility. Lastly, by comparing the expected utilities of all possible policies for conducting MRN, an optimal policy is generated to optimize the global outcome of MRN. The experimental results indicate that the proposed approach can improve the global outcome of MRN in a successful end scenario, and avoid unnecessary losses in an unsuccessful end scenario.

  10. Deployable reflector antenna performance optimization using automated surface correction and array-feed compensation

    NASA Technical Reports Server (NTRS)

    Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.

    1992-01-01

    Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.

  11. Network clustering and community detection using modulus of families of loops.

    PubMed

    Shakeri, Heman; Poggi-Corradini, Pietro; Albin, Nathan; Scoglio, Caterina

    2017-01-01

    We study the structure of loops in networks using the notion of modulus of loop families. We introduce an alternate measure of network clustering by quantifying the richness of families of (simple) loops. Modulus tries to minimize the expected overlap among loops by spreading the expected link usage optimally. We propose weighting networks using these expected link usages to improve classical community detection algorithms. We show that the proposed method enhances the performance of certain algorithms, such as spectral partitioning and modularity maximization heuristics, on standard benchmarks.
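
    As a hedged sketch of the proposed weighting step (feeding generic per-edge weights, standing in for the modulus-derived expected link usages, into a standard modularity-maximization routine from NetworkX), one might write:

        import networkx as nx
        from networkx.algorithms.community import greedy_modularity_communities

        G = nx.karate_club_graph()                         # standard benchmark graph
        for u, v in G.edges():
            # Placeholder weight; in the paper this would be the expected link
            # usage obtained from the modulus of the loop family, which is not
            # computed here.
            G[u][v]["usage"] = 1.0 / (1 + abs(G.degree(u) - G.degree(v)))

        communities = greedy_modularity_communities(G, weight="usage")
        print([sorted(c) for c in communities])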

  12. Optimal Resource Allocation in Library Systems

    ERIC Educational Resources Information Center

    Rouse, William B.

    1975-01-01

    Queueing theory is used to model processes as either waiting or balking processes. The optimal allocation of resources to these processes is defined as that which maximizes the expected value of the decision-maker's utility function. (Author)

  13. The Dynamics of Crime and Punishment

    NASA Astrophysics Data System (ADS)

    Hausken, Kjell; Moxnes, John F.

    This article analyzes crime development, which is one of the largest threats in today's world, frequently referred to as the war on crime. The criminal commits crimes in his free time (when not in jail) according to a non-stationary Poisson process which accounts for fluctuations. Expected values and variances for crime development are determined. The deterrent effect of imprisonment follows from the amount of time in imprisonment. Each criminal maximizes expected utility defined as expected benefit (from crime) minus expected cost (imprisonment). A first-order differential equation of the criminal's utility-maximizing response to the given punishment policy is then developed. The analysis shows that if imprisonment is absent, criminal activity grows substantially. All else being equal, any equilibrium is unstable (labile), implying growth of criminal activity, unless imprisonment increases sufficiently as a function of criminal activity. This dynamic approach or perspective is quite interesting and has to our knowledge not been presented earlier. The empirical data material for crime intensity and imprisonment for Norway, England and Wales, and the US supports the model. Future crime development is shown to depend strongly on the societally chosen imprisonment policy. The model is intended as a valuable tool for policy makers who can envision arbitrarily sophisticated imprisonment functions and foresee the impact they have on crime development.
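
    The non-stationary Poisson process assumed for crime events can be simulated by thinning; a minimal sketch (with an arbitrary sinusoidal intensity standing in for the model's crime intensity) is:

        import numpy as np

        def simulate_nonstationary_poisson(rate_fn, rate_max, t_end, seed=0):
            """Lewis-Shedler thinning: event times for a time-varying intensity rate_fn(t) <= rate_max."""
            rng = np.random.default_rng(seed)
            times, t = [], 0.0
            while True:
                t += rng.exponential(1.0 / rate_max)          # candidate from a homogeneous process
                if t > t_end:
                    return np.array(times)
                if rng.uniform() < rate_fn(t) / rate_max:     # accept with probability rate/rate_max
                    times.append(t)

        # Example: intensity fluctuating around two events per unit time.
        crime_times = simulate_nonstationary_poisson(lambda t: 2.0 + np.sin(t), rate_max=3.0, t_end=100.0)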

  14. Acceptable regret in medical decision making.

    PubMed

    Djulbegovic, B; Hozo, I; Schwartz, A; McMasters, K M

    1999-09-01

    When faced with medical decisions involving uncertain outcomes, the principles of decision theory hold that we should select the option with the highest expected utility to maximize health over time. Whether a decision proves right or wrong can be learned only in retrospect, when it may become apparent that another course of action would have been preferable. This realization may bring a sense of loss, or regret. When anticipated regret is compelling, a decision maker may choose to violate expected utility theory to avoid regret. We formulate a concept of acceptable regret in medical decision making that explicitly introduces the patient's attitude toward loss of health due to a mistaken decision into decision making. In most cases, minimizing expected regret results in the same decision as maximizing expected utility. However, when acceptable regret is taken into consideration, the threshold probability below which we can comfortably withhold treatment is a function only of the net benefit of the treatment, and the threshold probability above which we can comfortably administer the treatment depends only on the magnitude of the risks associated with the therapy. By considering acceptable regret, we develop new conceptual relations that can help decide whether treatment should be withheld or administered, especially when the diagnosis is uncertain. This may be particularly beneficial in deciding what constitutes futile medical care.
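
    For comparison, the classical expected-utility treatment threshold (not the regret-adjusted thresholds derived in this paper) reduces to a one-line calculation; the benefit and harm values below are hypothetical:

        def treatment_threshold(benefit, harm):
            """Classical threshold: treat when P(disease) exceeds harm / (harm + benefit)."""
            return harm / (harm + benefit)

        # Hypothetical example: benefit of treating true disease = 10, harm of treating a healthy patient = 2.
        p_star = treatment_threshold(benefit=10.0, harm=2.0)   # 0.167: treat above, withhold below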

  15. Speed of Gravitational Waves from Strongly Lensed Gravitational Waves and Electromagnetic Signals.

    PubMed

    Fan, Xi-Long; Liao, Kai; Biesiada, Marek; Piórkowska-Kurpas, Aleksandra; Zhu, Zong-Hong

    2017-03-03

    We propose a new model-independent measurement strategy for the propagation speed of gravitational waves (GWs) based on strongly lensed GWs and their electromagnetic (EM) counterparts. This can be done in two ways: by comparing arrival times of GWs and their EM counterparts and by comparing the time delays between images seen in GWs and their EM counterparts. The lensed GW-EM event is perhaps the best way to identify an EM counterpart. Conceptually, this method does not rely on any specific theory of massive gravitons or modified gravity. Its differential setting (i.e., measuring the difference between time delays in GW and EM domains) makes it robust against lens modeling details (photons and GWs travel in the same lensing potential) and against internal time delays between GW and EM emission acts. It requires, however, that the theory of gravity is metric and predicts gravitational lensing similar to general relativity. We expect that such a test will become possible in the era of third-generation gravitational-wave detectors, when about 10 lensed GW events would be observed each year. The power of this method is mainly limited by the timing accuracy of the EM counterpart, which for kilonovae is around 10^{4}  s. This uncertainty can be suppressed by a factor of ∼10^{10}, if strongly lensed transients of much shorter duration associated with the GW event can be identified. Candidates for such short transients include short γ-ray bursts and fast radio bursts.

  16. Feasibility of energy medicine in a community teaching hospital: an exploratory case series.

    PubMed

    Dufresne, Francois; Simmons, Bonnie; Vlachostergios, Panagiotis J; Fleischner, Zachary; Joudeh, Ramsey; Blakeway, Jill; Julliard, Kell

    2015-06-01

    Energy medicine (EM) derives from the theory that a subtle biologic energy can be influenced for therapeutic effect. EM practitioners may be trained within a specific tradition or work solo. Few studies have investigated the feasibility of solo-practitioner EM in hospitals. This study investigated the feasibility of EM as provided by a solo practitioner in inpatient and emergent settings. Feasibility study, including a prospective case series. Inpatient units and emergency department. To investigate the feasibility of EM, acceptability, demand, implementation, and practicality were assessed. Short-term clinical changes were documented by treating physicians. Patients, employees, and family members were enrolled in the study only if study physicians expected no or slow improvement in specific symptoms. Those with secondary gains or who could not communicate perception of symptom change were excluded. EM was found to have acceptability and demand, and implementation was smooth because study procedures dovetailed with conventional clinical practice. Practicality was acceptable within the study but was low upon further application of EM because of cost of program administration. Twenty-four of 32 patients requested relief from pain. Of 50 reports of pain, 5 (10%) showed no improvement; 4 (8%), slight improvement; 3 (6%), moderate improvement; and 38 (76%), marked improvement. Twenty-one patients had issues other than pain. Of 29 non-pain-related problems, 3 (10%) showed no, 2 (7%) showed slight, 1 (4%) showed moderate, and 23 (79%) showed marked improvement. Changes during EM sessions were usually immediate. This study successfully implemented EM provided by a solo practitioner in inpatient and emergent hospital settings and found that acceptability and demand justified its presence. Most patients experienced marked, immediate improvement of symptoms associated with their chief complaint. Substantial practicality issues must be addressed to implement EM clinically in a hospital, however.

  17. Variation in ectomycorrhizal fungal communities associated with Oreomunnea mexicana (Juglandaceae) in a Neotropical montane forest.

    PubMed

    Corrales, Adriana; Arnold, A Elizabeth; Ferrer, Astrid; Turner, Benjamin L; Dalling, James W

    2016-01-01

    Neotropical montane forests are often dominated by ectomycorrhizal (EM) tree species, yet the diversity of their EM fungal communities remains poorly explored. In lower montane forests in western Panama, the EM tree species Oreomunnea mexicana (Juglandaceae) forms locally dense populations in forest otherwise characterized by trees that form arbuscular mycorrhizal (AM) associations. The objective of this study was to compare the composition of EM fungal communities associated with Oreomunnea adults, saplings, and seedlings across sites differing in soil fertility and the amount and seasonality of rainfall. Analysis of fungal nrITS DNA (nuclear ribosomal internal transcribed spacers) revealed 115 EM fungal taxa from 234 EM root tips collected from adults, saplings, and seedlings in four sites. EM fungal communities were equally species-rich and diverse across Oreomunnea developmental stages and sites, regardless of soil conditions or rainfall patterns. However, ordination analysis revealed high compositional turnover between low and high fertility/rainfall sites located ca. 6 km apart. The EM fungal community was dominated by Russula (ca. 36 taxa). Cortinarius, represented by 14 species and previously reported to extract nitrogen from organic sources under low nitrogen availability, was found only in low fertility/high rainfall sites. Phylogenetic diversity analyses of Russula revealed greater evolutionary distance among taxa found on sites with contrasting fertility and rainfall than was expected by chance, suggesting that environmental differences among sites may be important in structuring EM fungal communities. More research is needed to evaluate whether EM fungal taxa associated with Oreomunnea form mycorrhizal networks that might account for local dominance of this tree species in otherwise diverse forest communities.

  18. Maximization, learning, and economic behavior

    PubMed Central

    Erev, Ido; Roth, Alvin E.

    2014-01-01

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design. PMID:25024182

  19. Maximization, learning, and economic behavior.

    PubMed

    Erev, Ido; Roth, Alvin E

    2014-07-22

    The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.

  20. Phenomenology of maximal and near-maximal lepton mixing

    NASA Astrophysics Data System (ADS)

    Gonzalez-Garcia, M. C.; Peña-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

    The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ɛ ≡ 1 - 2 sin²θ_ex and quantify the present experimental status for |ɛ| < 0.3. We show that both probabilities and observables depend on ɛ quadratically when effects are due to vacuum oscillations and depend on ɛ linearly if matter effects dominate. The most important information on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 2×10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 2×10⁻⁷ eV², the full interval |ɛ| < 0.3 is allowed within ~4σ (99.995% CL). We suggest ways to measure ɛ in future experiments. The observable that is most sensitive to ɛ is the rate [NC]/[CC] in combination with the day-night asymmetry in the SNO detector. With theoretical and statistical uncertainties, the expected accuracy after 5 years is Δɛ ~ 0.07. We also discuss the effects of maximal and near-maximal ν_e mixing in atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.

  1. Assessing park-and-ride impacts.

    DOT National Transportation Integrated Search

    2010-06-01

    Efficient transportation systems are vital to quality-of-life and mobility issues, and an effective park-and-ride (P&R) network can help maximize system performance. Properly placed P&R facilities are expected to result in fewer calls to increase...

  2. Three faces of node importance in network epidemiology: Exact results for small graphs

    NASA Astrophysics Data System (ADS)

    Holme, Petter

    2017-12-01

    We investigate three aspects of the importance of nodes with respect to susceptible-infectious-removed (SIR) disease dynamics: influence maximization (the expected outbreak size given a set of seed nodes), the effect of vaccination (how much deleting nodes would reduce the expected outbreak size), and sentinel surveillance (how early an outbreak could be detected with sensors at a set of nodes). We calculate the exact expressions of these quantities, as functions of the SIR parameters, for all connected graphs of three to seven nodes. We obtain the smallest graphs where the optimal node sets are not overlapping. We find that (i) node separation is more important than centrality for more than one active node, (ii) vaccination and influence maximization are the most different aspects of importance, and (iii) the three aspects are more similar when the infection rate is low.
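
    The expected outbreak size for a given seed set can also be estimated by straightforward Monte Carlo simulation when exact enumeration is not needed. The Python sketch below simulates a Markovian SIR process (unit recovery rate, infection rate beta) on a small hand-coded path graph; the graph, seed choices, and trial count are illustrative assumptions, not taken from the paper, which derives these quantities exactly.

```python
import math
import random

def expected_outbreak_size(adj, seeds, beta, n_trials=20000, seed=1):
    """Monte Carlo estimate of the mean final SIR outbreak size for a seed set."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        infected = set(seeds)
        frontier = list(seeds)
        while frontier:
            u = frontier.pop()
            tau = rng.expovariate(1.0)  # infectious period (unit recovery rate)
            for v in adj[u]:
                # u infects v before recovering with probability 1 - exp(-beta * tau)
                if v not in infected and rng.random() < 1.0 - math.exp(-beta * tau):
                    infected.add(v)
                    frontier.append(v)
        total += len(infected)
    return total / n_trials

# 5-node path graph 0-1-2-3-4: compare a central seed with an end seed.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(expected_outbreak_size(adj, {2}, beta=1.0))
print(expected_outbreak_size(adj, {0}, beta=1.0))
```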

  3. Global biomass production potentials exceed expected future demand without the need for cropland expansion

    PubMed Central

    Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro

    2015-01-01

    Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute 39% increase in estimated global production potentials to increasing cropping intensities and 30% to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification. PMID:26558436

  4. Global biomass production potentials exceed expected future demand without the need for cropland expansion.

    PubMed

    Mauser, Wolfram; Klepper, Gernot; Zabel, Florian; Delzeit, Ruth; Hank, Tobias; Putzenlechner, Birgitta; Calzadilla, Alvaro

    2015-11-12

    Global biomass demand is expected to roughly double between 2005 and 2050. Current studies suggest that agricultural intensification through optimally managed crops on today's cropland alone is insufficient to satisfy future demand. In practice though, improving crop growth management through better technology and knowledge almost inevitably goes along with (1) improving farm management with increased cropping intensity and more annual harvests where feasible and (2) an economically more efficient spatial allocation of crops which maximizes farmers' profit. By explicitly considering these two factors we show that, without expansion of cropland, today's global biomass potentials substantially exceed previous estimates and even 2050s' demands. We attribute 39% increase in estimated global production potentials to increasing cropping intensities and 30% to the spatial reallocation of crops to their profit-maximizing locations. The additional potentials would make cropland expansion redundant. Their geographic distribution points at possible hotspots for future intensification.

  5. Expected Power-Utility Maximization Under Incomplete Information and with Cox-Process Observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fujimoto, Kazufumi, E-mail: m_fuji@kvj.biglobe.ne.jp; Nagai, Hideo, E-mail: nagai@sigmath.es.osaka-u.ac.jp; Runggaldier, Wolfgang J., E-mail: runggal@math.unipd.it

    2013-02-15

    We consider the problem of maximization of expected terminal power utility (risk sensitive criterion). The underlying market model is a regime-switching diffusion model where the regime is determined by an unobservable factor process forming a finite state Markov process. The main novelty is due to the fact that prices are observed and the portfolio is rebalanced only at random times corresponding to a Cox process where the intensity is driven by the unobserved Markovian factor process as well. This leads to a more realistic modeling for many practical situations, like in markets with liquidity restrictions; on the other hand, it considerably complicates the problem to the point that traditional methodologies cannot be directly applied. The approach presented here is specific to the power-utility. For log-utilities a different approach is presented in Fujimoto et al. (Preprint, 2012).

  6. Royal Darwinian Demons: Enforced Changes in Reproductive Efforts Do Not Affect the Life Expectancy of Ant Queens.

    PubMed

    Schrempf, Alexandra; Giehr, Julia; Röhrl, Ramona; Steigleder, Sarah; Heinze, Jürgen

    2017-04-01

    One of the central tenets of life-history theory is that organisms cannot simultaneously maximize all fitness components. This results in the fundamental trade-off between reproduction and life span known from numerous animals, including humans. Social insects are a well-known exception to this rule: reproductive queens outlive nonreproductive workers. Here, we take a step forward and show that under identical social and environmental conditions the fecundity-longevity trade-off is absent also within the queen caste. A change in reproduction did not alter life expectancy, and even a strong enforced increase in reproductive efforts did not reduce residual life span. Generally, egg-laying rate and life span were positively correlated. Queens of perennial social insects thus seem to maximize at the same time two fitness parameters that are normally negatively correlated. Even though they are not immortal, they best approach a hypothetical "Darwinian demon" in the animal kingdom.

  7. WFIRST: Exoplanet Target Selection and Scheduling with Greedy Optimization

    NASA Astrophysics Data System (ADS)

    Keithly, Dean; Garrett, Daniel; Delacroix, Christian; Savransky, Dmitry

    2018-01-01

    We present target selection and scheduling algorithms for missions with direct imaging of exoplanets, and the Wide Field Infrared Survey Telescope (WFIRST) in particular, which will be equipped with a coronagraphic instrument (CGI). Optimal scheduling of CGI targets can maximize the expected value of directly imaged exoplanets (completeness). Using target completeness as a reward metric and integration time plus overhead time as a cost metric, we can maximize the sum completeness for a mission with a fixed duration. We optimize over these metrics to create a list of target stars using a greedy optimization algorithm based on altruistic yield optimization (AYO) under ideal conditions. We simulate full missions using EXOSIMS by observing targets in this list for their predetermined integration times. In this poster, we report the theoretical maximum sum completeness, mean number of detected exoplanets from Monte Carlo simulations, and the ideal expected value of the simulated missions.
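
    A minimal sketch of the kind of reward/cost greedy selection described above: targets are ranked by completeness per unit of total (integration plus overhead) time and chosen until a fixed time budget is exhausted. The target names, completeness values, and times below are made-up placeholders, and the real AYO-based scheduler is considerably more involved.

```python
# Greedy target selection under a fixed mission-time budget (toy numbers).
def greedy_schedule(targets, time_budget):
    """targets: list of (name, completeness, integration_time, overhead_time)."""
    remaining = time_budget
    chosen = []
    pool = list(targets)
    while pool:
        # Pick the affordable target with the best completeness per unit of total time.
        affordable = [t for t in pool if t[2] + t[3] <= remaining]
        if not affordable:
            break
        best = max(affordable, key=lambda t: t[1] / (t[2] + t[3]))
        chosen.append(best[0])
        remaining -= best[2] + best[3]
        pool.remove(best)
    return chosen

targets = [("HIP 1", 0.12, 5.0, 1.0), ("HIP 2", 0.20, 12.0, 1.0),
           ("HIP 3", 0.05, 1.5, 1.0), ("HIP 4", 0.15, 8.0, 1.0)]
print(greedy_schedule(targets, time_budget=20.0))
```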

  8. An EM-based semi-parametric mixture model approach to the regression analysis of competing-risks data.

    PubMed

    Ng, S K; McLachlan, G J

    2003-04-15

    We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
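
    As a toy illustration of the EM machinery underlying such mixture analyses (not the paper's semi-parametric ECM procedure, which handles censoring, covariates, and unspecified baseline hazards), the following Python sketch fits a two-component parametric mixture of exponential failure times with closed-form E- and M-steps.

```python
# Toy EM for a two-component parametric mixture of exponential failure times.
import numpy as np

rng = np.random.default_rng(0)
t = np.concatenate([rng.exponential(1.0, 300), rng.exponential(5.0, 200)])

pi, lam1, lam2 = 0.5, 1.5, 0.3        # initial guesses
for _ in range(200):
    # E-step: posterior probability that each failure time came from component 1
    d1 = pi * lam1 * np.exp(-lam1 * t)
    d2 = (1 - pi) * lam2 * np.exp(-lam2 * t)
    w = d1 / (d1 + d2)
    # M-step: closed-form updates of the mixing proportion and rate parameters
    pi = w.mean()
    lam1 = w.sum() / (w * t).sum()
    lam2 = (1 - w).sum() / ((1 - w) * t).sum()

# Should recover roughly a 0.6 mixing weight and component means near 1 and 5.
print(round(pi, 3), round(1 / lam1, 3), round(1 / lam2, 3))
```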

  9. A Markov model for blind image separation by a mean-field EM algorithm.

    PubMed

    Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele

    2006-02-01

    This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework, where any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have been proved to be very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images, in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space variant. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated, as well.

  10. Recognizing patterns of visual field loss using unsupervised machine learning

    NASA Astrophysics Data System (ADS)

    Yousefi, Siamak; Goldbaum, Michael H.; Zangwill, Linda M.; Medeiros, Felipe A.; Bowd, Christopher

    2014-03-01

    Glaucoma is a potentially blinding optic neuropathy that results in a decrease in visual sensitivity. Visual field abnormalities (decreased visual sensitivity on psychophysical tests) are the primary means of glaucoma diagnosis. One form of visual field testing is Frequency Doubling Technology (FDT), which tests sensitivity at 52 points within the visual field. Like other psychophysical tests used in clinical practice, FDT results yield specific patterns of defect indicative of the disease. We used a Gaussian mixture model with expectation maximization (GEM; EM is used to estimate the model parameters) to automatically separate FDT data into clusters of normal and abnormal eyes. Principal component analysis (PCA) was used to decompose each cluster into different axes (patterns). FDT measurements were obtained from 1,190 eyes with normal FDT results and 786 eyes with abnormal (i.e., glaucomatous) FDT results, recruited from a university-based, longitudinal, multi-center, clinical study on glaucoma. The GEM input was the 52-point FDT threshold sensitivities for all eyes. The optimal GEM model separated the FDT fields into 3 clusters. Cluster 1 contained 94% normal fields (94% specificity), and clusters 2 and 3 combined contained 77% abnormal fields (77% sensitivity). For clusters 1, 2, and 3, the optimal numbers of PCA-identified axes were 2, 2, and 5, respectively. GEM with PCA successfully separated FDT fields from healthy and glaucoma eyes and identified familiar glaucomatous patterns of loss.
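
    A rough scikit-learn sketch of the GEM-plus-PCA pipeline described above, run on synthetic stand-in data (the array shapes mimic 52-point FDT fields, but the values, cluster count, and number of axes per cluster are placeholders rather than results of the study's model selection).

```python
# GMM clustering fitted by EM, followed by per-cluster PCA (synthetic data).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(30, 2, (200, 52)),          # "normal"-like fields
               rng.normal(22, 5, (100, 52))])         # "abnormal"-like fields

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(X)                           # EM estimates the mixture parameters

for k in range(gmm.n_components):
    cluster = X[labels == k]
    pca = PCA(n_components=2).fit(cluster)            # axes (patterns) within the cluster
    print(k, cluster.shape[0], pca.explained_variance_ratio_.round(2))
```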

  11. Evaluation of hierarchical models for integrative genomic analyses.

    PubMed

    Denis, Marie; Tadesse, Mahlet G

    2016-03-01

    Advances in high-throughput technologies have led to the acquisition of various types of -omic data on the same biological samples. Each data type gives independent and complementary information that can explain the biological mechanisms of interest. While several studies performing independent analyses of each dataset have led to significant results, a better understanding of complex biological mechanisms requires an integrative analysis of different sources of data. Flexible modeling approaches, based on penalized likelihood methods and expectation-maximization (EM) algorithms, are studied and tested under various biological relationship scenarios between the different molecular features and their effects on a clinical outcome. The models are applied to genomic datasets from two cancer types in the Cancer Genome Atlas project: glioblastoma multiforme and ovarian serous cystadenocarcinoma. The integrative models lead to improved model fit and predictive performance. They also provide a better understanding of the biological mechanisms underlying patients' survival. Source code implementing the integrative models is freely available at https://github.com/mgt000/IntegrativeAnalysis along with example datasets and sample R script applying the models to these data. The TCGA datasets used for analysis are publicly available at https://tcga-data.nci.nih.gov/tcga/tcgaDownload.jsp. Contact: marie.denis@cirad.fr or mgt26@georgetown.edu. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  12. Speech Enhancement Using Gaussian Scale Mixture Models

    PubMed Central

    Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.

    2011-01-01

    This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM, and Bayesian inference was used to compute the posterior signal distribution. Because exact inference of this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139

  13. Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification

    PubMed Central

    Hou, Le; Samaras, Dimitris; Kurc, Tahsin M.; Gao, Yi; Davis, James E.; Saltz, Joel H.

    2016-01-01

    Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN. PMID:27795661

  14. A study on real-time low-quality content detection on Twitter from the users' perspective.

    PubMed

    Chen, Weiling; Yeo, Chai Kiat; Lau, Chiew Tong; Lee, Bu Sung

    2017-01-01

    Detection techniques of malicious content such as spam and phishing on Online Social Networks (OSN) are common with little attention paid to other types of low-quality content which actually impacts users' content browsing experience most. The aim of our work is to detect low-quality content from the users' perspective in real time. To define low-quality content comprehensibly, Expectation Maximization (EM) algorithm is first used to coarsely classify low-quality tweets into four categories. Based on this preliminary study, a survey is carefully designed to gather users' opinions on different categories of low-quality content. Both direct and indirect features including newly proposed features are identified to characterize all types of low-quality content. We then further combine word level analysis with the identified features and build a keyword blacklist dictionary to improve the detection performance. We manually label an extensive Twitter dataset of 100,000 tweets and perform low-quality content detection in real time based on the characterized significant features and word level analysis. The results of our research show that our method has a high accuracy of 0.9711 and a good F1 of 0.8379 based on a random forest classifier with real time performance in the detection of low-quality content in tweets. Our work therefore achieves a positive impact in improving user experience in browsing social media content.
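
    A schematic Python version of the two-stage idea above: an EM-fitted Gaussian mixture to coarsely group tweets into categories, followed by a supervised random forest trained on labeled feature vectors. The feature columns, labels, and data are illustrative placeholders, not the paper's feature set or dataset.

```python
# Stage 1: unsupervised EM grouping; Stage 2: supervised random-forest detection.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# toy feature columns, e.g. URL count, hashtag count, follower ratio, blacklist hits
X = rng.random((2000, 4))
y = (X[:, 3] + 0.3 * X[:, 0] + rng.normal(0, 0.1, 2000) > 0.8).astype(int)  # toy labels

# Coarse EM clustering into four categories (mirroring the survey-design step above)
categories = GaussianMixture(n_components=4, random_state=0).fit_predict(X)

# Supervised detection on the manually labeled portion of the data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```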

  15. Segmentation of brain volume based on 3D region growing by integrating intensity and edge for image-guided surgery

    NASA Astrophysics Data System (ADS)

    Tsagaan, Baigalmaa; Abe, Keiichi; Goto, Masahiro; Yamamoto, Seiji; Terakawa, Susumu

    2006-03-01

    This paper presents a segmentation method of brain tissues from MR images, invented for our image-guided neurosurgery system under development. Our goal is to segment brain tissues for creating biomechanical model. The proposed segmentation method is based on 3-D region growing and outperforms conventional approaches by stepwise usage of intensity similarities between voxels in conjunction with edge information. Since the intensity and the edge information are complementary to each other in the region-based segmentation, we use them twice by performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel being considered is examined to constrain the region growing. The expanded region of the first extraction result is then used as the domain for the next processing. The intensity and the edge information of the current voxel only are utilized in the final extraction. Before segmentation, the intensity parameters of the brain tissues as well as partial volume effect are estimated by using expectation-maximization (EM) algorithm in order to provide an accurate data interpretation into the extraction. We tested the proposed method on T1-weighted MR images of brain and evaluated the segmentation effectiveness comparing the results with ground truths. Also, the generated meshes from the segmented brain volume by using mesh generating software are shown in this paper.
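
    For the EM-based estimation of tissue intensity parameters mentioned above, a hand-rolled one-dimensional EM on voxel intensities looks roughly like the following (class means, variances, and proportions are synthetic, and the paper's implementation additionally models partial volume effects).

```python
# 1-D Gaussian-mixture EM on voxel intensities (synthetic stand-in data).
import numpy as np

rng = np.random.default_rng(0)
intensity = np.concatenate([rng.normal(40, 5, 4000),    # e.g. CSF-like voxels
                            rng.normal(90, 8, 6000),    # e.g. gray-matter-like voxels
                            rng.normal(130, 7, 5000)])  # e.g. white-matter-like voxels

K = 3
mu = np.array([30.0, 80.0, 150.0])
sigma = np.array([10.0, 10.0, 10.0])
w = np.ones(K) / K
for _ in range(100):
    # E-step: responsibilities of each tissue class for each voxel
    dens = w * np.exp(-0.5 * ((intensity[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form updates of weights, means, and standard deviations
    nk = r.sum(axis=0)
    w = nk / nk.sum()
    mu = (r * intensity[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (intensity[:, None] - mu) ** 2).sum(axis=0) / nk)

print(mu.round(1), sigma.round(1), w.round(2))
```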

  16. A comparison of algorithms for inference and learning in probabilistic graphical models.

    PubMed

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.

  17. Methods to assess an exercise intervention trial based on 3-level functional data.

    PubMed

    Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J

    2015-10-01

    Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
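
    Casting smooth mean curves into a mixed-effects framework can be illustrated, in a much-simplified form, with a random-intercept model whose mean is a treatment-specific B-spline curve. The sketch below uses statsmodels on simulated long-format activity data; it omits the nested week/day levels, the functional principal components, and the ECME fitting described in the paper, so it is only a loose analogue.

```python
# Simplified mixed-effects fit: spline mean curve by treatment + subject random intercept.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for s in range(30):
    treat = s % 2
    b = rng.normal(0, 0.5)                       # subject-level random effect
    t = np.linspace(0, 1, 40)
    mean = 1.0 + 0.5 * treat * t + 0.3 * np.sin(2 * np.pi * t)
    y = mean + b + rng.normal(0, 0.2, t.size)
    rows += [(s, treat, ti, yi) for ti, yi in zip(t, y)]
df = pd.DataFrame(rows, columns=["subject", "treatment", "time", "activity"])

fit = smf.mixedlm("activity ~ treatment * bs(time, df=4)", df, groups="subject").fit()
print(fit.summary())
```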

  18. A comparison of continuous- and discrete- time three-state models for rodent tumorigenicity experiments.

    PubMed Central

    Lindsey, J C; Ryan, L M

    1994-01-01

    The three-state illness-death model provides a useful way to characterize data from a rodent tumorigenicity experiment. Most parametrizations proposed recently in the literature assume discrete time for the death process and either discrete or continuous time for the tumor onset process. We compare these approaches with a third alternative that uses a piecewise continuous model on the hazards for tumor onset and death. All three models assume proportional hazards to characterize tumor lethality and the effect of dose on tumor onset and death rate. All of the models can easily be fitted using an Expectation Maximization (EM) algorithm. The piecewise continuous model is particularly appealing in this context because the complete data likelihood corresponds to a standard piecewise exponential model with tumor presence as a time-varying covariate. It can be shown analytically that differences between the parameter estimates given by each model are explained by varying assumptions about when tumor onsets, deaths, and sacrifices occur within intervals. The mixed-time model is seen to be an extension of the grouped data proportional hazards model [Mutat. Res. 24:267-278 (1981)]. We argue that the continuous-time model is preferable to the discrete- and mixed-time models because it gives reasonable estimates with relatively few intervals while still making full use of the available information. Data from the ED01 experiment illustrate the results. PMID:8187731

  19. A study on real-time low-quality content detection on Twitter from the users’ perspective

    PubMed Central

    Chen, Weiling; Yeo, Chai Kiat; Lau, Chiew Tong; Lee, Bu Sung

    2017-01-01

    Detection techniques of malicious content such as spam and phishing on Online Social Networks (OSN) are common with little attention paid to other types of low-quality content which actually impacts users’ content browsing experience most. The aim of our work is to detect low-quality content from the users’ perspective in real time. To define low-quality content comprehensibly, Expectation Maximization (EM) algorithm is first used to coarsely classify low-quality tweets into four categories. Based on this preliminary study, a survey is carefully designed to gather users’ opinions on different categories of low-quality content. Both direct and indirect features including newly proposed features are identified to characterize all types of low-quality content. We then further combine word level analysis with the identified features and build a keyword blacklist dictionary to improve the detection performance. We manually label an extensive Twitter dataset of 100,000 tweets and perform low-quality content detection in real time based on the characterized significant features and word level analysis. The results of our research show that our method has a high accuracy of 0.9711 and a good F1 of 0.8379 based on a random forest classifier with real time performance in the detection of low-quality content in tweets. Our work therefore achieves a positive impact in improving user experience in browsing social media content. PMID:28793347

  20. Supersymmetric electric-magnetic duality in D = 3+3 and D = 5+5 dimensions as foundation of self-dual supersymmetric Yang-Mills theory

    NASA Astrophysics Data System (ADS)

    Nishino, Hitoshi; Rajpoot, Subhash

    2016-05-01

    We present electric-magnetic (EM)-duality formulations for non-Abelian gauge groups with N = 1 supersymmetry in D = 3+3 and D = 5+5 space-time dimensions. We show that these systems generate self-dual N = 1 supersymmetric Yang-Mills (SDSYM) theory in D = 2+2. For an N = 2 supersymmetric EM-dual system in D = 3+3, we have the Yang-Mills multiplet (A_μ^I, λ_A^I) and a Hodge-dual multiplet (B_μνρ^I, χ_A^I), with auxiliary tensors C_μνρσ^I and K_μν. Here, I is the adjoint index, while A labels the doublet of Sp(1). The EM-duality conditions are F_μν^I = (1/4!) ɛ_μνρστλ G^ρστλ I, with the superpartner duality condition λ_A^I = -χ_A^I. Upon appropriate dimensional reduction, this system generates SDSYM in D = 2+2. This system is further generalized to D = 5+5 with the EM-duality condition F_μν^I = (1/8!) ɛ_μνρ₁⋯ρ₈ G^ρ₁⋯ρ₈ I, with its superpartner condition λ^I = -χ^I. Upon appropriate dimensional reduction, this theory also generates SDSYM in D = 2+2. As long as we maintain Lorentz covariance, D = 5+5 seems to be the maximal space-time dimension that generates SDSYM in D = 2+2. Namely, the EM-dual system in D = 5+5 serves as the Master Theory of all supersymmetric integrable models in dimensions 1 ≤ D ≤ 3.

  1. Long-Term Outcomes of Elagolix in Women With Endometriosis: Results From Two Extension Studies.

    PubMed

    Surrey, Eric; Taylor, Hugh S; Giudice, Linda; Lessey, Bruce A; Abrao, Mauricio S; Archer, David F; Diamond, Michael P; Johnson, Neil P; Watts, Nelson B; Gallagher, J Chris; Simon, James A; Carr, Bruce R; Dmowski, W Paul; Leyland, Nicholas; Singh, Sukhbir S; Rechberger, Tomasz; Agarwal, Sanjay K; Duan, W Rachel; Schwefel, Brittany; Thomas, James W; Peloso, Paul M; Ng, Juki; Soliman, Ahmed M; Chwalisz, Kristof

    2018-06-06

    To evaluate the efficacy and safety of elagolix, an oral, nonpeptide gonadotropin-releasing hormone antagonist, over 12 months in women with endometriosis-associated pain. Elaris Endometriosis (EM)-III and -IV were extension studies that evaluated an additional 6 months of treatment after two 6-month, double-blind, placebo-controlled phase 3 trials (12 continuous treatment months) with two elagolix doses (150 mg once daily and 200 mg twice daily). Coprimary efficacy endpoints were the proportion of responders (clinically meaningful pain reduction and stable or decreased rescue analgesic use) based on average monthly dysmenorrhea and nonmenstrual pelvic pain scores. Safety assessments included adverse events, clinical laboratory tests, and endometrial and bone mineral density assessments. The power of Elaris EM-III and -IV was based on the comparison to placebo in Elaris EM-I and -II with an expected 25% dropout rate. Between December 28, 2012, and October 31, 2014 (Elaris EM-III), and between May 27, 2014, and January 6, 2016 (Elaris EM-IV), 569 participants were enrolled. After 12 months of treatment, Elaris EM-III responder rates for dysmenorrhea were 52.1% at 150 mg once daily (Elaris EM-IV=50.8%) and 78.1% at 200 mg twice daily (Elaris EM-IV=75.9%). Elaris EM-III nonmenstrual pelvic pain responder rates were 67.8% at 150 mg once daily (Elaris EM-IV=66.4%) and 69.1% at 200 mg twice daily (Elaris EM-IV=67.2%). After 12 months of treatment, Elaris EM-III dyspareunia responder rates were 45.2% at 150 mg once daily (Elaris EM-IV=45.9%) and 60.0% at 200 mg twice daily (Elaris EM-IV=58.1%). Hot flush was the most common adverse event. Decreases from baseline in bone mineral density and increases from baseline in lipids were observed after 12 months of treatment. There were no adverse endometrial findings. Long-term elagolix treatment provided sustained reductions in dysmenorrhea, nonmenstrual pelvic pain, and dyspareunia. The safety was consistent with reduced estrogen levels and no new safety concerns were associated with long-term elagolix use. ClinicalTrials.gov, NCT01760954 and NCT02143713.

  2. Maximizing investments in work zone safety in Oregon : final report.

    DOT National Transportation Integrated Search

    2011-05-01

    Due to the federal stimulus program and the 2009 Jobs and Transportation Act, the Oregon Department of Transportation (ODOT) anticipates that a large increase in highway construction will occur. There is the expectation that, since transportation saf...

  3. Leadership Strategies.

    ERIC Educational Resources Information Center

    Lashway, Larry

    1997-01-01

    Principals today are expected to maximize their schools' performances with limited resources while also adopting educational innovations. This synopsis reviews five recent publications that offer some important insights about the nature of principals' leadership strategies: (1) "Leadership Styles and Strategies" (Larry Lashway); (2) "Facilitative…

  4. Three-dimensional ordered-subset expectation maximization iterative protocol for evaluation of left ventricular volumes and function by quantitative gated SPECT: a dynamic phantom study.

    PubMed

    Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca

    2010-03-01

    The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (< or =10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
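
    The ordered-subset EM update itself is compact: the measured projections are split into subsets and the multiplicative MLEM correction is applied subset by subset. The numpy sketch below uses a random toy system matrix and synthetic Poisson data purely for illustration; it is not a model of the G-SPECT geometry or the QGS software.

```python
# Toy OSEM reconstruction: subset-by-subset multiplicative MLEM updates.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_proj = 64, 256
A = rng.random((n_proj, n_pix))                 # toy system (projection) matrix
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true * 50) / 50.0         # noisy projection data

def osem(y, A, n_subsets=8, n_iter=10):
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            ratio = ys / np.clip(As @ x, 1e-12, None)
            x *= (As.T @ ratio) / np.clip(As.T @ np.ones(len(idx)), 1e-12, None)
    return x

x_hat = osem(y, A, n_subsets=8, n_iter=10)
print(np.corrcoef(x_hat, x_true)[0, 1])
```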

  5. Correlative 3D superresolution fluorescence and electron microscopy reveal the relationship of mitochondrial nucleoids to membranes

    PubMed Central

    Kopek, Benjamin G.; Shtengel, Gleb; Xu, C. Shan; Clayton, David A.; Hess, Harald F.

    2012-01-01

    Microscopic images of specific proteins in their cellular context yield important insights into biological processes and cellular architecture. The advent of superresolution optical microscopy techniques provides the possibility to augment EM with nanometer-resolution fluorescence microscopy to access the precise location of proteins in the context of cellular ultrastructure. Unfortunately, efforts to combine superresolution fluorescence and EM have been stymied by the divergent and incompatible sample preparation protocols of the two methods. Here, we describe a protocol that preserves both the delicate photoactivatable fluorescent protein labels essential for superresolution microscopy and the fine ultrastructural context of EM. This preparation enables direct 3D imaging in 500- to 750-nm sections with interferometric photoactivatable localization microscopy followed by scanning EM images generated by focused ion beam ablation. We use this process to “colorize” detailed EM images of the mitochondrion with the position of labeled proteins. The approach presented here has provided a new level of definition of the in vivo nature of organization of mitochondrial nucleoids, and we expect this straightforward method to be applicable to many other biological questions that can be answered by direct imaging. PMID:22474357

  6. Combining Gravitational Wave Events with their Electromagnetic Counterparts: A Realistic Joint False-Alarm Rate

    NASA Astrophysics Data System (ADS)

    Ackley, Kendall; Eikenberry, Stephen; Klimenko, Sergey; LIGO Team

    2017-01-01

    We present a false-alarm rate for a joint detection of gravitational wave (GW) events and associated electromagnetic (EM) counterparts for Advanced LIGO and Virgo (LV) observations during the first years of operation. Using simulated GW events and their reconstructed probability skymaps, we tile over the error regions using sets of archival wide-field telescope survey images and recover the number of astrophysical transients to be expected during LV-EM followup. With the known GW event injection coordinates, we inject artificial electromagnetic (EM) sources at that site based on theoretical and observational models on a one-to-one basis. We calculate the EM false-alarm probability using an unsupervised machine learning algorithm based on shapelet analysis, which has been shown to be a strong discriminator between astrophysical transients and image artifacts while reducing the set of transients to be manually vetted by five orders of magnitude. We also show the performance of our method in context with other machine-learned transient classification and reduction algorithms, showing comparability without the need for a large set of training data, opening the possibility for next-generation telescopes to take advantage of this pipeline for LV-EM followup missions.

  7. On the Achievable Throughput Over TVWS Sensor Networks

    PubMed Central

    Caleffi, Marcello; Cacciapuoti, Angela Sara

    2016-01-01

    In this letter, we study the throughput achievable by an unlicensed sensor network operating over TV white space spectrum in the presence of coexistence interference. Through the letter, we first analytically derive the achievable throughput as a function of the channel ordering. Then, we show that the problem of deriving the maximum expected throughput through exhaustive search is computationally unfeasible. Finally, we derive a computationally efficient algorithm characterized by polynomial-time complexity to compute the channel set maximizing the expected throughput and, stemming from this, we derive a closed-form expression of the maximum expected throughput. Numerical simulations validate the theoretical analysis. PMID:27043565

  8. Factors associated with emergency medical services scope of practice for acute cardiovascular events.

    PubMed

    Williams, Ishmael; Valderrama, Amy L; Bolton, Patricia; Greek, April; Greer, Sophia; Patterson, Davis G; Zhang, Zefeng

    2012-01-01

    To examine prehospital emergency medical services (EMS) scope of practice for acute cardiovascular events and characteristics that may affect scope of practice; and to describe variations in EMS scope of practice for these events and the characteristics associated with that variability. In 2008, we conducted a telephone survey of 1,939 eligible EMS providers in nine states to measure EMS agency characteristics, medical director involvement, and 18 interventions authorized for prehospital care of acute cardiovascular events by three levels of emergency medical technician (EMT) personnel. A total of 1,292 providers responded to the survey, for a response rate of 67%. EMS scope of practice interventions varied by EMT personnel level, with the proportion of authorized interventions increasing as expected from EMT-Basic to EMT-Paramedic. Seven of eight statistically significant associations indicated that EMS agencies in urban settings were less likely to authorize interventions (odds ratios <0.7) for any level of EMS personnel. Based on the subset of six statistically significant associations, fire department-based EMS agencies were two to three times more likely to authorize interventions for EMT-Intermediate personnel. Volunteer EMS agencies were more than twice as likely as nonvolunteer agencies to authorize interventions for EMT-Basic and EMT-Intermediate personnel but were less likely to authorize any one of the 11 interventions for EMT-Paramedics. Greater medical director involvement was associated with greater likelihood of authorization of seven of the 18 interventions for EMT-Basic and EMT-Paramedic personnel but had no association with EMT-Intermediate personnel. We noted statistically significant variations in scope of practice by rural vs. urban setting, medical director involvement, and type of EMS service (fire department-based/non-fire department-based; volunteer/paid). These variations highlight local differences in the composition and capacity of EMS providers and offer important information for the transition towards the implementation of a national scope of practice model.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han, Bong-Gyoon; Watson, Zoe; Kang, Hannah

    We describe a rapid and convenient method of growing streptavidin (SA) monolayer crystals directly on holey-carbon EM grids. As expected, these SA monolayer crystals retain their biotin-binding function and crystalline order through a cycle of embedding in trehalose and, later, its removal. This fact allows one to prepare, and store for later use, EM grids on which SA monolayer crystals serve as an affinity substrate for preparing specimens of biological macromolecules. In addition, we report that coating the lipid-tail side of trehalose-embedded monolayer crystals with evaporated carbon appears to improve the consistency with which well-ordered, single crystals are observed to span over entire, 2 μm holes of the support films. Randomly biotinylated 70S ribosomes are used as a test specimen to show that these support films can be used to obtain a high-resolution cryo-EM structure.

  10. Electrical muscle stimulation in thromboprophylaxis: review and a derived hypothesis about thrombogenesis-the 4th factor.

    PubMed

    Stefanou, Christos

    2016-01-01

    Electrical muscle stimulation (EMS) is an FDA-approved thromboprophylactic method. Thrombus pathogenesis is considered to depend on factors related to components of the vessel wall, the velocity of blood, and blood consistency, collectively known as Virchow's triad. The evidence supporting the thromboprophylactic effects of EMS is reviewed. An emphasis is placed on the fact that EMS has demonstrated, in certain circumstances, an efficacy rate that cannot be fully explained by Virchow's triad; also that, in reviewing relevant evidence and the theorized pathophysiological mechanisms, several findings collectively point to a potentially missed factor. Remarkably, venous thromboembolic disease (VTE) is far more common in the lower versus the upper extremities even when the blood velocities equalize; EMS had synergistic effects with intermittent compressive devices, despite their presumed identical mechanism of action; sleep is not thrombogenic; non-peroperative EMS is meaningful only if applied ≥5 times daily; neural insult increases VTEs more than the degree expected by the hypomobility-related blood stasis; etc. These phenomena imply the presence of a 4th thrombogenetic factor: neural supply to the veins provides direct antithrombotic effects, by inducing periodic vessel diameter changes and/or by neuro-humoral, chemically acting factors. EMS may stimulate or substitute the 4th factor. This evidence-based hypothesis is analyzed. A novel pathophysiologic mechanism of thrombogenesis is supported; and, based on this, the role of EMS in thromboprophylaxis is expanded. Exploration of this mechanism may provide new targets for intervention.

  11. The Role of BRCA1 Domains and Motifs in Tumor Suppression

    DTIC Science & Technology

    2009-08-01

    parental as negative. Lanes 3 and 4: HCC1937 tet cells infected with the plentiBRCA1WT plasmid, respectively before and after addition of tetracycline. ... histone H3 (mitotic marker) and analyzed them by flow cytometry. As expected, HeLa cells presented an 80% decrease in the mitotic cell population 1 h ... analyze spindle assembly checkpoint for each case. In conclusion, we are on track to complete the tasks in the time proposed. Our expectations are

  12. Donor selection criteria for liver transplantation in Argentina: are current standards too rigorous?

    PubMed

    Dirchwolf, Melisa; Ruf, Andrés E; Biggins, Scott W; Bisigniano, Liliana; Hansen Krogh, Daniela; Villamil, Federico G

    2015-02-01

    Organ shortage is the major limitation for the growth of deceased donor liver transplant worldwide. One strategy to ameliorate this problem is to maximize the liver utilization rate. To assess predictors of liver utilization in Argentina. The national database was used to analyze transplant activity in 2010. Donor, recipient, and transplant variables were evaluated as predictors of graft utilization, of the number of rejected donor offers before grafting, and of the occurrence of primary nonfunction (PNF) or early post-transplant mortality (EM). Of the 582 deceased donors, 293 (50.3%) were recovered for liver transplant. Variables associated with the nonrecovery of the liver were age ≥46 years, umbilical perimeter ≥92 cm, organ procurement outside Gran Buenos Aires, AST ≥42 U/l and ALT ≥29 U/l. The median number of rejected offers before grafting was 4, and in 71 patients (25%), there were ≥13. The only independent predictor for the occurrence of PNF (3.4%) or EM (5.2%) was the recipient's emergency status. During 2010 in Argentina, the liver was recovered in only half of donors. The low incidence of PNF and EM and the characteristics of the nonrecovered liver donors suggest that organ acceptance criteria should be less rigorous. © 2014 Steunstichting ESOT.

  13. Leverage front-line expertise to maximize trauma prevention efforts.

    PubMed

    2012-06-01

    The trauma prevention program at Geisinger Wyoming Valley (GWV) Medical Center in Wilkes-Barre, PA, has enlisted the assistance of an experienced paramedic and ED tech to spend part of his time targeting prevention education toward populations that have been experiencing high rates of traumatic injuries. While community outreach has long been a priority for the trauma prevention program, the new position is enabling GWV to boost the magnitude of its prevention efforts, and to reach out to referring facilities as well. Program administrators say a similar outreach effort aimed at EMS providers has strengthened relationships and helped to improve trauma care at the facility. The new trauma injury prevention outreach coordinator has focused his first efforts on fall prevention and curbing motor vehicle accidents among very young and very mature driving populations. Data from GWV's trauma registry suggest that its fall prevention efforts are having an effect. The incidence of falls among patients over the age of 65 is down by about 10% at the facility since it began targeting education at the community's senior population. Administrators say a monthly lecture series aimed at the prehospital community has gone a long way toward nurturing ties with EMS providers. Called "EMS Night Out," the series covers a range of topics, but the most popular programs involve case reviews.

  14. Image-derived and arterial blood sampled input functions for quantitative PET imaging of the angiotensin II subtype 1 receptor in the kidney

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feng, Tao; Tsui, Benjamin M. W.; Li, Xin

    Purpose: The radioligand 11C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of 11C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling and test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [11C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM-based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP-based image reconstruction method. However, the OS-EM-based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartmental kinetic model, the average difference between the estimated model parameters from the ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of 11C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
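
    Once an input function is available (image-derived or arterial), the two-tissue compartment model can be fitted to a tissue time-activity curve by nonlinear least squares. The scipy sketch below uses a synthetic input function, synthetic rate constants, and uniform sampling times as placeholders; it illustrates the model structure, not the study's actual fitting pipeline.

```python
# Two-tissue compartment model fit to a synthetic time-activity curve.
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

t = np.linspace(0, 90, 91)                             # minutes
cp = 10.0 * t * np.exp(-t / 3.0)                       # toy arterial input function

def tissue_curve(t, K1, k2, k3, k4):
    def rhs(c, ti):
        c1, c2 = c
        cpi = np.interp(ti, t, cp)
        return [K1 * cpi - (k2 + k3) * c1 + k4 * c2, k3 * c1 - k4 * c2]
    c = odeint(rhs, [0.0, 0.0], t)
    return c.sum(axis=1)                               # total tissue concentration C1 + C2

true = (0.4, 0.3, 0.05, 0.02)
y = tissue_curve(t, *true) + np.random.default_rng(0).normal(0, 0.05, t.size)
popt, _ = curve_fit(tissue_curve, t, y, p0=(0.2, 0.2, 0.1, 0.05), bounds=(0, 2))
print(popt.round(3))
```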

  15. Expecting ankle tilts and wearing an ankle brace influence joint control in an imitated ankle sprain mechanism during walking.

    PubMed

    Gehring, Dominic; Wissler, Sabrina; Lohrer, Heinz; Nauck, Tanja; Gollhofer, Albert

    2014-03-01

    A thorough understanding of the functional aspects of ankle joint control is essential to developing effective injury prevention. It is of special interest to understand how neuromuscular control mechanisms and mechanical constraints stabilize the ankle joint. Therefore, the aim of the present study was to determine how expecting ankle tilts and the application of an ankle brace influence ankle joint control when imitating the ankle sprain mechanism during walking. Ankle kinematics and muscle activity were assessed in 17 healthy men. During gait, rapid perturbations were applied using a trapdoor (tilting with 24° inversion and 15° plantarflexion). The subjects either knew that a perturbation would definitely occur (expected tilts) or there was only the possibility that a perturbation would occur (potential tilts). Both conditions were conducted with and without a semi-rigid ankle brace. Expecting perturbations led to an increased ankle eversion at foot contact, which was mediated by an altered muscle preactivation pattern. Moreover, the maximal inversion angle (-7%) and velocity (-4%), as well as the reactive muscle response, were significantly reduced when the perturbation was expected. While wearing an ankle brace influenced neither muscle preactivation nor ankle kinematics before ground contact, it significantly reduced the maximal ankle inversion angle (-14%) and velocity (-11%) as well as reactive neuromuscular responses. The present findings reveal that expecting ankle inversion modifies neuromuscular joint control prior to landing. Although such motor control strategies are weaker in their magnitude compared with braces, they seem to assist ankle joint stabilization in a close-to-injury situation. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Shifting baselines in the Ems Dollard estuary: A comparison across three decades reveals changing benthic communities

    NASA Astrophysics Data System (ADS)

    Compton, Tanya J.; Holthuijsen, Sander; Mulder, Maarten; van Arkel, Maarten; Schaars, Loran Kleine; Koolhaas, Anita; Dekinga, Anne; ten Horn, Job; Luttikhuizen, Pieternella C.; van der Meer, Jaap; Piersma, Theunis; van der Veer, Henk W.

    2017-09-01

    At a time when there is a growing discussion about the natural state of estuaries, a comparison of macrozoobenthos communities from two surveys conducted 30 years apart in the Ems Dollard estuary, in the eastern Wadden Sea, The Netherlands, provides a unique opportunity to compare changes over time. As expected, our comparison revealed a gradient in species composition from land (the Dollard) to sea (the Outer Ems) at both points in time, with brackish species in the Dollard and more marine species in the Outer Ems (Wadden Sea). Total richness increased over time; however, this mainly reflected the immigration of new species and sampling differences. In the Dollard, total biomass declined over time, most likely reflecting de-eutrophication in this area. Strikingly, at the meeting point between the sea and the brackish Dollard, i.e. the Inner Ems, the community composition changed from one dominated by bivalves (1970s) to one dominated by worms (since 2009). This change involved a reduction in total biomass, mainly of Mya arenaria, and immigration of polychaete worms (Marenzelleria viridis and Alitta succinea). In the Outer Ems, an increase in total biomass was observed, associated with the recent successful recruitment of Cerastoderma edule. This comparison highlights that historical data provide useful insights at large spatial scales. However, a full understanding of the complex dynamics of estuaries requires an analysis of continuous long-term monitoring series.

  17. Enhanced Energy Localization in Hyperthermia Treatment Based on Hybrid Electromagnetic and Ultrasonic System: Proof of Concept with Numerical Simulations.

    PubMed

    Nizam-Uddin, N; Elshafiey, Ibrahim

    2017-01-01

    This paper proposes a hybrid hyperthermia treatment system, utilizing two noninvasive modalities for treating brain tumors. The proposed system depends on focusing electromagnetic (EM) and ultrasound (US) energies. The EM hyperthermia subsystem enhances energy localization by incorporating a multichannel wideband setting and coherent-phased-array technique. A genetic algorithm based optimization tool is developed to enhance the specific absorption rate (SAR) distribution by reducing hotspots and maximizing energy deposition at tumor regions. The treatment performance is also enhanced by augmenting an ultrasonic subsystem to allow focused energy deposition into deep tumors. The therapeutic faculty of ultrasonic energy is assessed by examining the control of mechanical alignment of transducer array elements. A time reversal (TR) approach is then investigated to address challenges in energy focus in both subsystems. Simulation results of the synergetic effect of both modalities assuming a simplified model of human head phantom demonstrate the feasibility of the proposed hybrid technique as a noninvasive tool for thermal treatment of brain tumors.

  18. Enhanced Energy Localization in Hyperthermia Treatment Based on Hybrid Electromagnetic and Ultrasonic System: Proof of Concept with Numerical Simulations

    PubMed Central

    Elshafiey, Ibrahim

    2017-01-01

    This paper proposes a hybrid hyperthermia treatment system, utilizing two noninvasive modalities for treating brain tumors. The proposed system depends on focusing electromagnetic (EM) and ultrasound (US) energies. The EM hyperthermia subsystem enhances energy localization by incorporating a multichannel wideband setting and coherent-phased-array technique. A genetic algorithm based optimization tool is developed to enhance the specific absorption rate (SAR) distribution by reducing hotspots and maximizing energy deposition at tumor regions. The treatment performance is also enhanced by augmenting an ultrasonic subsystem to allow focused energy deposition into deep tumors. The therapeutic faculty of ultrasonic energy is assessed by examining the control of mechanical alignment of transducer array elements. A time reversal (TR) approach is then investigated to address challenges in energy focus in both subsystems. Simulation results of the synergetic effect of both modalities assuming a simplified model of human head phantom demonstrate the feasibility of the proposed hybrid technique as a noninvasive tool for thermal treatment of brain tumors. PMID:28840125

  19. Feasibility of Energy Medicine in a Community Teaching Hospital: An Exploratory Case Series

    PubMed Central

    Dufresne, Francois; Simmons, Bonnie; Vlachostergios, Panagiotis J.; Fleischner, Zachary; Joudeh, Ramsey; Blakeway, Jill

    2015-01-01

    Abstract Background: Energy medicine (EM) derives from the theory that a subtle biologic energy can be influenced for therapeutic effect. EM practitioners may be trained within a specific tradition or work solo. Few studies have investigated the feasibility of solo-practitioner EM in hospitals. Objective: This study investigated the feasibility of EM as provided by a solo practitioner in inpatient and emergent settings. Design: Feasibility study, including a prospective case series. Settings: Inpatient units and emergency department. Outcome measures: To investigate the feasibility of EM, acceptability, demand, implementation, and practicality were assessed. Short-term clinical changes were documented by treating physicians. Participants: Patients, employees, and family members were enrolled in the study only if study physicians expected no or slow improvement in specific symptoms. Those with secondary gains or who could not communicate perception of symptom change were excluded. Results: EM was found to have acceptability and demand, and implementation was smooth because study procedures dovetailed with conventional clinical practice. Practicality was acceptable within the study but was low upon further application of EM because of cost of program administration. Twenty-four of 32 patients requested relief from pain. Of 50 reports of pain, 5 (10%) showed no improvement; 4 (8%), slight improvement; 3 (6%), moderate improvement; and 38 (76%), marked improvement. Twenty-one patients had issues other than pain. Of 29 non–pain-related problems, 3 (10%) showed no, 2 (7%) showed slight, 1 (4%) showed moderate, and 23 (79%) showed marked improvement. Changes during EM sessions were usually immediate. Conclusions: This study successfully implemented EM provided by a solo practitioner in inpatient and emergent hospital settings and found that acceptability and demand justified its presence. Most patients experienced marked, immediate improvement of symptoms associated with their chief complaint. Substantial practicality issues must be addressed to implement EM clinically in a hospital, however. PMID:26035025

  20. Numerical and Experimental Investigation on the Attenuation of Electromagnetic Waves in Unmagnetized Plasmas Using Inductively Coupled Plasma Actuator

    NASA Astrophysics Data System (ADS)

    Lin, Min; Xu, Haojun; Wei, Xiaolong; Liang, Hua; Song, Huimin; Sun, Quan; Zhang, Yanhua

    2015-10-01

    The attenuation of electromagnetic (EM) waves in unmagnetized plasma generated by an inductively coupled plasma (ICP) actuator has been investigated both theoretically and experimentally. A numerical study is conducted to investigate the propagation of EM waves in multilayer plasma structures that cover a square flat plate. Experimentally, an ICP actuator with dimensions of 20 cm×20 cm×4 cm is designed to produce a steady plasma slab. The attenuation of EM waves in the plasma generated by the ICP actuator is measured by a reflectivity arch test method at incident wave frequencies of 2.3 GHz and 10.1 GHz. A comparative analysis of the calculated and measured results at these frequencies is presented, which shows that the experiment agrees well with the theory. As expected, the plasma slab generated by the ICP actuator can effectively attenuate the EM waves, which may have great potential for application in aircraft stealth. Supported by the National Natural Science Foundation of China (Nos. 51276197, 11472306 and 11402301)
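
    As a rough companion to the numerical study described above, the sketch below estimates the single-pass attenuation of a plane wave through a uniform, collisional, unmagnetized plasma slab using the Drude permittivity. The multilayer structure and the reflectivity-arch geometry of the experiment are not modeled, and the electron density, collision frequency, and thickness in the example are hypothetical.

    ```python
    import numpy as np

    e = 1.602e-19        # electron charge [C]
    me = 9.109e-31       # electron mass [kg]
    eps0 = 8.854e-12     # vacuum permittivity [F/m]
    c = 2.998e8          # speed of light [m/s]

    def attenuation_db(freq_hz, n_e, nu, thickness):
        """Single-pass attenuation [dB] of a plane EM wave through a uniform,
        collisional, unmagnetized plasma slab (Drude model).

        freq_hz   : wave frequency [Hz]
        n_e       : electron number density [m^-3]
        nu        : electron-neutral collision frequency [s^-1]
        thickness : slab thickness [m]
        """
        w = 2 * np.pi * freq_hz
        wp2 = n_e * e**2 / (eps0 * me)            # plasma frequency squared
        eps_r = 1 - wp2 / (w * (w - 1j * nu))     # complex relative permittivity
        k = (w / c) * np.sqrt(eps_r)              # complex wavenumber
        alpha = -k.imag                           # field attenuation constant [Np/m]
        return 8.686 * alpha * thickness          # convert nepers to dB

    # Hypothetical example: 10.1 GHz wave, n_e = 1e17 m^-3, nu = 5e9 s^-1, 4 cm slab
    print(attenuation_db(10.1e9, 1e17, 5e9, 0.04))
    ```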

  1. Expecting the unexpected: A mixed methods study of violence to EMS responders in an urban fire department.

    PubMed

    Taylor, Jennifer A; Barnes, Brittany; Davis, Andrea L; Wright, Jasmine; Widman, Shannon; LeVasseur, Michael

    2016-02-01

    Struck-by injuries were observed at higher rates among female than male responders in an urban fire department. This disparity was investigated while gaining a grounded understanding of EMS responders' experiences of patient-initiated violence. A convergent parallel mixed methods design was employed. Using a linked injury dataset, patient-initiated violence estimates were calculated comparing genders. Semi-structured interviews and a focus group were conducted with injured EMS responders. Paramedics had significantly higher odds of patient-initiated violence injuries than firefighters (OR 14.4, 95%CI: 9.2-22.2, P < 0.001). Females reported increased odds of patient-initiated violence injuries compared to males (OR = 6.25, 95%CI 3.8-10.2), but this relationship was entirely mediated through occupation (AOR = 1.64, 95%CI 0.94-2.85). Qualitative data illuminated the impact of patient-initiated violence and highlighted important organizational opportunities for intervention. Mixed methods greatly enhanced the assessment and prevention of patient-initiated violence against EMS responders. © 2016 The Authors. American Journal of Industrial Medicine Published by Wiley Periodicals, Inc.

  2. Valuing empathy and emotional intelligence in health leadership: a study of empathy, leadership behaviour and outcome effectiveness.

    PubMed

    Skinner, C; Spurgeon, P

    2005-02-01

    This article examines the relationship between health managers' self-assessed empathy, their leadership behaviours as rated by their staff, and staff's personal ratings on a range of work satisfaction and related outcome measures. Empathy was conceived of as four distinct but related individual dispositions, namely empathic concern (EC), perspective taking (PT), personal distress (PD) and empathic matching (EM). Results showed three empathy scales (EC, PT and EM) were, as postulated, positively related to transformational behaviour (inspiring followers to achieve more than expected). The same three measures, also as expected, showed no relationship to transactional behaviour (motivating followers to achieve expected results) and were negatively associated with laissez-faire leadership (an absence of leadership style). Relationships between empathy scales and outcome measures were selective and moderate in size. Strongest empathy association was evident between the PT scale and most outcome measures. Conversely, the extra effort outcome appeared most sensitive to the range of empathy scales. Where significant relationships did exist between empathy and outcome, leadership behaviour was in all cases a perfect mediator. Whilst not denying the smaller dispositional effects on leadership outcomes, leadership behaviour itself, rather than individual traits such as empathy, appear to be major influencing factors in leadership effectiveness.

  3. A History of the Environmental Management Advisory Board: 20 Years of Service and Partnership - 13219

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ellis, Kristen; Schmitt, Elizabeth

    2013-07-01

    The Environmental Management Advisory Board (EMAB or Board) was chartered under the Federal Advisory Committee Act (FACA) in 1992 to provide the Assistant Secretary for Environmental Management (EM) with independent and external advice, information, and recommendations on corporate issues relating to accelerated site clean-up and risk reduction throughout the EM complex. Over the course of the past 20 years, the composition and focus of the Board have varied widely to address the changing needs of the program. EMAB began as the Environmental Restoration and Waste Management Advisory Committee, formed to provide advice on an EM Programmatic Environmental Impact Statement. In 1994, the Board was restructured to function more as an executive-level, limited-member advisory board whose membership provides the insight of leading industry experts and the viewpoints of representatives from critical stakeholder constituencies. Throughout the 20 years of its existence, EMAB has covered a wide variety of topics and produced nearly 200 recommendations. These recommendations have resulted in several policy changes and improvements within EM. Most recently, EMAB has been credited for its contribution to the EM Energy Park Initiative, forerunner of the DOE Asset Revitalization Initiative; creation of the EM Offices of Communications and External Affairs; improvement of acquisition and project management strategies and culture; and several recommendations related to the Waste Treatment Plant and the tank waste programs at Hanford and the Savannah River Site. The wealth of experience and knowledge the Assistant Secretary can leverage through utilization of the Board continues to support fulfillment of EM's mission. In commemoration of EMAB's 20th anniversary, this paper will provide further context for the evolution of the Board, the role FACA plays in its administration, and a look at the members' current objectives and EM's expectations for the future. (authors)

  4. Constraining parameters of white-dwarf binaries using gravitational-wave and electromagnetic observations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shah, Sweta; Nelemans, Gijs, E-mail: s.shah@astro.ru.nl

    The space-based gravitational wave (GW) detector, the evolved Laser Interferometer Space Antenna (eLISA), is expected to observe millions of compact Galactic binaries that populate our Milky Way. GW measurements obtained from the eLISA detector are in many cases complementary to possible electromagnetic (EM) data. In our previous papers, we have shown that the EM data can significantly enhance our knowledge of the astrophysically relevant GW parameters of Galactic binaries, such as the amplitude and inclination. This is possible due to the presence of some strong correlations between GW parameters that are measurable by both EM and GW observations, for example, the inclination and sky position. In this paper, we quantify the constraints on the physical parameters of the white-dwarf binaries, i.e., the individual masses, chirp mass, and the distance to the source, that can be obtained by combining the full set of EM measurements such as the inclination, radial velocities, distances, and/or individual masses with the GW measurements. We find the following 2σ fractional uncertainties in the parameters of interest. The EM observations of distance constrain the chirp mass to ∼15%-25%, whereas EM data of a single-lined spectroscopic binary constrain the secondary mass and the distance with factors of two to ∼40%. The single-line spectroscopic data complemented with distance constrain the secondary mass to ∼25%-30%. Finally, EM data on a double-lined spectroscopic binary constrain the distance to ∼30%. All of these constraints depend on the inclination and the signal strength of the binary systems. We also find that the EM information on distance and/or the radial velocity is the most useful in improving the estimate of the secondary mass, inclination, and/or distance.

  5. Factors influencing medical students' choice of emergency medicine as a career specialty-a descriptive study of Saudi medical students.

    PubMed

    Alkhaneen, Hadeel; Alhusain, Faisal; Alshahri, Khalid; Al Jerian, Nawfal

    2018-03-07

    Choosing a medical specialty is a poorly understood process. Although studies conducted around the world have attempted to identify the factors that affect medical students' choice of specialty, data are scarce on the factors that influence the choice of specialty of Saudi Arabian medical students, in particular those planning a career in emergency medicine (EM). In this study, we investigated whether Saudi medical students choosing EM are influenced by different factors than those choosing other specialties. A cross-sectional survey was conducted at King Saud bin Abdulaziz University for Health Sciences (KSAUHS), Riyadh, Saudi Arabia. The questionnaire was distributed among all undergraduate and postgraduate medical students of both sexes in the second and third phases (57% were males and 43% were females). A total of 436 students answered the questionnaire, a response rate of 53.4%. The EM group was most influenced by hospital orientation and lifestyle and least influenced by social orientation and the prestige conferred by their specialty. Unlike the controllable lifestyle (CL) and primary care (PC) groups, the EM group reported less influence of social orientation on their career choice. When compared with students primarily interested in the surgical subspecialties (SS), the EM group was less likely to report prestige as an important influence. Moreover, students interested in SS reported a lesser influence of medical lifestyle in comparison with the EM group. When compared with the CL group, the EM group reported more interest in medical lifestyle. We found that students primarily interested in EM had different values and career expectations than other specialty groups. The trends in specialty choice should be appraised to meet future needs.

  6. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    PubMed

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ.
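
    As a small illustration of the two cue types contrasted above, the sketch below computes them for a choice between a certain amount and a simple gamble. The exact strategy metric used in the study is not reproduced here, and the example numbers are hypothetical.

    ```python
    def choice_information(certain_amount, gamble_outcome, win_probability):
        """Return the two cue types described above for a choice between a
        certain amount and a gamble paying `gamble_outcome` with probability
        `win_probability` (and nothing otherwise).
        """
        expected_value = win_probability * gamble_outcome
        maximizing_info = expected_value / certain_amount   # ratio of expected values
        satisficing_info = win_probability                  # probability of winning
        return maximizing_info, satisficing_info

    # Hypothetical example: $20 for sure vs. a 40% chance of $60
    print(choice_information(20.0, 60.0, 0.40))   # -> (1.2, 0.4)
    ```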

  7. Long shelf-life streptavidin support-films suitable for electron microscopy of biological macromolecules

    DOE PAGES

    Han, Bong-Gyoon; Watson, Zoe; Kang, Hannah; ...

    2016-06-15

    We describe a rapid and convenient method of growing streptavidin (SA) monolayer crystals directly on holey-carbon EM grids. As expected, these SA monolayer crystals retain their biotin-binding function and crystalline order through a cycle of embedding in trehalose and, later, its removal. This fact allows one to prepare, and store for later use, EM grids on which SA monolayer crystals serve as an affinity substrate for preparing specimens of biological macromolecules. In addition, we report that coating the lipid-tail side of trehalose-embedded monolayer crystals with evaporated carbon appears to improve the consistency with which well-ordered, single crystals are observed to span entire 2 μm holes of the support films. Randomly biotinylated 70S ribosomes are used as a test specimen to show that these support films can be used to obtain a high-resolution cryo-EM structure.

  8. EMS Provider assessment of vehicle damage compared with assessment by a professional crash reconstructionist.

    PubMed

    Lerner, E Brooke; Cushman, Jeremy T; Blatt, Alan; Lawrence, Richard D; Shah, Manish N; Swor, Robert A; Brasel, Karen; Jurkovich, Gregory J

    2011-01-01

    To determine the accuracy of emergency medical services (EMS) provider assessments of motor vehicle damage when compared with measurements made by a professional crash reconstructionist. EMS providers caring for adult patients injured during a motor vehicle crash and transported to the regional trauma center in a midsized community were interviewed upon emergency department arrival. The interview collected provider estimates of crash mechanism of injury. For crashes that met a preset severity threshold, the vehicle's owner was asked to consent to having a crash reconstructionist assess the vehicle. The assessment included measuring intrusion and external automobile deformity. Vehicle damage was used to calculate change in velocity. Paired t-test, correlation, and kappa were used to compare EMS estimates and investigator-derived values. Ninety-one vehicles were enrolled; of these, 58 were inspected and 33 were excluded because the vehicle was not accessible. Six vehicles had multiple patients. Therefore, a total of 68 EMS estimates were compared with the inspection findings. Patients were 46% male, 28% were admitted to hospital, and 1% died. The mean EMS-estimated deformity was 18 inches and the mean measured deformity was 14 inches. The mean EMS-estimated intrusion was 5 inches and the mean measured intrusion was 4 inches. The EMS providers and the reconstructionist had 68% agreement for determination of external automobile deformity (kappa 0.26) and 88% agreement for determination of intrusion (kappa 0.27) when the 1999 American College of Surgeons Field Triage Decision Scheme criteria were applied. The mean (± standard deviation) EMS-estimated speed prior to the crash was 48 ± 13 mph and the mean reconstructionist-estimated change in velocity was 18 ± 12 mph (correlation -0.45). The EMS providers determined that 19 vehicles had rolled over, whereas the investigator identified 18 (kappa 0.96). In 55 cases, EMS and the investigator agreed on seat belt use; for the remaining 13 cases, there was disagreement (five) or the investigator was unable to make a determination (eight) (kappa 0.40). This study found that EMS providers are good at estimating rollover. Vehicle intrusion, deformity, and seat belt use appear to be more difficult for EMS to estimate, with only fair agreement with the crash reconstructionist. As expected, the EMS provider -estimated speed prior to the crash does not appear to be a reasonable proxy for change in velocity.

  9. Engaging Older Adult Volunteers in National Service

    ERIC Educational Resources Information Center

    McBride, Amanda Moore; Greenfield, Jennifer C.; Morrow-Howell, Nancy; Lee, Yung Soo; McCrary, Stacey

    2012-01-01

    Volunteer-based programs are increasingly designed as interventions to affect the volunteers and the beneficiaries of the volunteers' activities. To achieve the intended impacts for both, programs need to leverage the volunteers' engagement by meeting their expectations, retaining them, and maximizing their perceptions of benefits. Programmatic…

  10. Merton's problem for an investor with a benchmark in a Barndorff-Nielsen and Shephard market.

    PubMed

    Lennartsson, Jan; Lindberg, Carl

    2015-01-01

    To try to outperform an externally given benchmark with known weights is the most common equity mandate in the financial industry. For quantitative investors, this task is predominantly approached by optimizing their portfolios consecutively over short time horizons with one-period models. We seek in this paper to provide a theoretical justification for this practice when the underlying market is of Barndorff-Nielsen and Shephard type. This is done by verifying that an investor who seeks to maximize her expected terminal exponential utility of wealth in excess of her benchmark will in fact use an optimal portfolio equivalent to the one-period Markowitz mean-variance problem in continuum under the corresponding Black-Scholes market. Further, we can represent the solution to the optimization problem in Feynman-Kac form. Hence, the problem, and its solution, is analogous to Merton's classical portfolio problem, with the main difference that Merton maximizes expected utility of terminal wealth, not wealth in excess of a benchmark.

  11. Bilevel formulation of a policy design problem considering multiple objectives and incomplete preferences

    NASA Astrophysics Data System (ADS)

    Hawthorne, Bryant; Panchal, Jitesh H.

    2014-07-01

    A bilevel optimization formulation of policy design problems considering multiple objectives and incomplete preferences of the stakeholders is presented. The formulation is presented for Feed-in-Tariff (FIT) policy design for decentralized energy infrastructure. The upper-level problem is the policy designer's problem and the lower-level problem is a Nash equilibrium problem resulting from market interactions. The policy designer has two objectives: maximizing the quantity of energy generated and minimizing policy cost. The stakeholders decide on quantities while maximizing net present value and minimizing capital investment. The Nash equilibrium problem in the presence of incomplete preferences is formulated as a stochastic linear complementarity problem and solved using expected value formulation, expected residual minimization formulation, and the Monte Carlo technique. The primary contributions in this article are the mathematical formulation of the FIT policy, the extension of computational policy design problems to multiple objectives, and the consideration of incomplete preferences of stakeholders for policy design problems.

  12. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

    The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce this ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method can avoid predefining the PSR and provide a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method with regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
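
    The abstract does not give the update equations, so below is a minimal toy sketch of the general idea behind a "filtered MLEM": the standard multiplicative MLEM update followed by a smoothing filter at each iteration. The system matrix stands in for the SPN-based photon propagation model, and the Gaussian filter and all parameters are assumptions rather than the authors' formulation.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def filtered_mlem(A, y, n_iter=100, sigma=1.0):
        """Toy filtered-MLEM reconstruction for a linear forward model y = A x.

        A     : (m, n) non-negative system matrix (stand-in for the light
                propagation model)
        y     : (m,) non-negative measurements
        sigma : width of the smoothing filter applied after each update
        """
        m, n = A.shape
        x = np.ones(n)                       # flat, positive initial guess
        sens = A.T @ np.ones(m) + 1e-12      # sensitivity image A^T 1
        for _ in range(n_iter):
            ratio = y / (A @ x + 1e-12)      # measured / predicted
            x = x / sens * (A.T @ ratio)     # multiplicative MLEM update
            x = gaussian_filter1d(x, sigma)  # filtering step ("fMLEM"-style)
        return x
    ```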

  13. Visual attention distracter insertion for improved EEG rapid serial visual presentation (RSVP) target stimuli detection

    NASA Astrophysics Data System (ADS)

    Khosla, Deepak; Huber, David J.; Martin, Kevin

    2017-05-01

    This paper describes a technique in which we improve upon the prior performance of the Rapid Serial Visual Presentation (RSVP) EEG paradigm for image classification through the insertion of visual attention distracters and overall sequence reordering based upon the expected ratio of rare to common "events" in the environment and operational context. Inserting distracter images maintains the ratio of common events to rare events at an ideal level, maximizing the rare event detection via P300 EEG response to the RSVP stimuli. The method has two steps: first, we compute the optimal number of distracters needed for an RSVP stimulus sequence based on the desired sequence length and expected number of targets and insert the distracters into the RSVP sequence, and then we reorder the RSVP sequence to maximize P300 detection. We show that by reducing the ratio of target events to nontarget events using this method, we can allow RSVP sequences with more targets without sacrificing area under the ROC curve (Az).
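
    The abstract gives the rare-to-common ratio logic but not an explicit formula. A minimal sketch of the first step, computing how many distracters to insert, might look like the following; the function name and the ~10% rare-event fraction used as a default are assumptions, not values from the paper.

    ```python
    import math

    def distracters_needed(sequence_len, n_targets, target_rare_fraction=0.1):
        """Number of distracter (non-target) images to insert so that targets make
        up at most `target_rare_fraction` of the final RSVP sequence.

        sequence_len         : number of images currently in the sequence
        n_targets            : expected number of target ("rare") images in it
        target_rare_fraction : desired maximum fraction of rare events (keeping
                               targets rare preserves a strong P300 oddball response)
        """
        needed_total = math.ceil(n_targets / target_rare_fraction)
        return max(0, needed_total - sequence_len)

    # Hypothetical example: a 100-image sequence expected to contain 25 targets
    print(distracters_needed(100, 25, 0.10))   # -> 150 distracters to insert
    ```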

  14. Choosing Fitness-Enhancing Innovations Can Be Detrimental under Fluctuating Environments

    PubMed Central

    Xue, Julian Z.; Costopoulos, Andre; Guichard, Frederic

    2011-01-01

    The ability to predict the consequences of one's behavior in a particular environment is a mechanism for adaptation. In the absence of any cost to this activity, we might expect agents to choose behaviors that maximize their fitness, an example of directed innovation. This is in contrast to blind mutation, where the probability of becoming a new genotype is independent of the fitness of the new genotypes. Here, we show that under environments punctuated by rapid reversals, a system with both genetic and cultural inheritance should not always maximize fitness through directed innovation. This is because populations highly accurate at selecting the fittest innovations tend to over-fit the environment during its stable phase, to the point that a rapid environmental reversal can cause extinction. A less accurate population, on the other hand, can track long term trends in environmental change, keeping closer to the time-average of the environment. We use both analytical and agent-based models to explore when this mechanism is expected to occur. PMID:22125601

  15. Design of a Miniature Pulse Tube Cryocooler for Space Applications

    NASA Astrophysics Data System (ADS)

    Trollier, T.; Ravex, A.; Charles, I.; Duband, L.; Mullié, J.; Bruins, P.; Benschop, T.; Linder, M.

    2004-06-01

    An Engineering Model (EM) of a Miniature Pulse Tube Cooler (MPTC) has been designed and manufactured. The expected performance of the MPTC was a 1240 mW heat lift at 80 K with a 288 K ambient temperature and 40 W rms maximum input power to the compressor motors. The EM is a U-shaped configuration operated with an inertance tube. The design and optimisation of the compressor and the Pulse Tube cold finger are described. The thermal performance test results are presented and discussed as well. This work is performed within a Technological Research Project (TRP) funded by ESA (Contract 14896/00/NL/PA).

  16. Using return on investment to maximize conservation effectiveness in Argentine grasslands.

    PubMed

    Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James

    2010-12-07

    The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land ("minimize cost"), maximizing conservation benefit regardless of cost ("maximize benefit"), and maximizing conservation benefit per dollar ("return on investment"). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy.

  17. DARE Mission Design: Low RFI Observations from a Low-Altitude Frozen Lunar Orbit

    NASA Technical Reports Server (NTRS)

    Plice, Laura; Galal, Ken; Burns, Jack O.

    2017-01-01

    The Dark Ages Radio Explorer (DARE) seeks to study the cosmic Dark Ages approximately 80 to 420 million years after the Big Bang. Observations require truly quiet radio conditions, shielded from Sun and Earth electromagnetic (EM) emissions, on the far side of the Moon. DARE's science orbit is a frozen orbit with respect to lunar gravitational perturbations. The altitude and orientation of the orbit remain nearly fixed indefinitely, maximizing science time without the need for maintenance. DARE's observation targets avoid the galactic center and enable investigation of the universe's first stars and galaxies.

  18. Local Influence Analysis of Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Tang, Nian-Sheng

    2004-01-01

    By regarding the latent random vectors as hypothetical missing data and based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm, we investigate assessment of local influence of various perturbation schemes in a nonlinear structural equation model. The basic building blocks of local influence analysis…

  19. Intracellular chloride regulation in amphibian dorsal root ganglion neurones studied with ion-selective microelectrodes.

    PubMed Central

    Alvarez-Leefmans, F J; Gamiño, S M; Giraldez, F; Noguerón, I

    1988-01-01

    1. Intracellular Cl- activity (aiCl) and membrane potential (Em) were measured in frog dorsal root ganglion neurones (DRG neurones) using double-barrelled Cl- -selective microelectrodes. In standard Ringer solution buffered with HEPES (5 mM), equilibrated with air or 100% O2, the resting membrane potential was -57.7 +/- 1.0 mV and aiCl was 23.6 +/- 1.0 mM (n = 53). The value of aiCl was 2.6 times the activity expected for an equilibrium distribution and the difference between Em and ECl was 25 mV. 2. Removal of external Cl- led to a reversible fall in aiCl. Initial rates of decay and recovery of aiCl were 4.1 and 3.3 mM min-1, respectively. During the recovery of aiCl following return to standard Ringer solution, most of the movement of Cl- occurred against the driving force for a passive distribution. Changes in aiCl were not associated with changes in Em. Chloride fluxes estimated from initial rates of change in aiCl when external Cl- was removed were too high to be accounted for by electrodiffusion. 3. The intracellular accumulation of Cl- was dependent on the extracellular Cl- activity (aoCl). The relationship between aiCl and aoCl had a sigmoidal shape with a half-maximal activation of about 50 mM-external Cl-. 4. The steady-state aiCl depended on the simultaneous presence of extracellular Na+ and K+. Similarly, the active reaccumulation of Cl- after intracellular Cl- depletion was abolished in the absence of either Na+ or K+ in the bathing solution. 5. The reaccumulation of Cl- was inhibited by furosemide (0.5-1 x 10(-3) M) or bumetanide (10(-5) M). The decrease in aiCl observed in Cl- -free solutions was also inhibited by bumetanide. 6. Cell volume changes were calculated from the observed changes in aiCl. Cells were estimated to shrink in Cl- -free solutions to about 75% their initial volume, at an initial rate of 6% min-1. 7. The present results provide direct evidence for the active accumulation of Cl- in DRG neurones. The mechanism of Cl- transport is electrically silent, dependent on the simultaneous presence of external Cl-, Na+ and K+ and inhibited by loop diuretics. It is suggested that a Na+:K+:Cl- co-transport system mediates the active transport of Cl- across the cell membrane of DRG neurones. PMID:3254412
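
    A short worked check of the numbers quoted above, using the Nernst relation for Cl- (z = -1). The extracellular Cl- activity is not given in the abstract, so a value of roughly 85 mM (plausible for frog Ringer solution) is assumed here; with that assumption the reported ~25 mV gap between Em and ECl and the ~2.6-fold excess over the passive distribution both follow approximately.

    ```python
    import numpy as np

    RT_F = 25.4          # RT/F in mV at ~22 degrees C
    a_i_cl = 23.6        # measured intracellular Cl- activity [mM]
    a_o_cl = 85.0        # assumed extracellular Cl- activity [mM] (hypothetical)
    Em = -57.7           # resting membrane potential [mV]

    # Nernst potential for Cl- (z = -1): E_Cl = (RT/F) * ln(a_i / a_o)
    E_cl = RT_F * np.log(a_i_cl / a_o_cl)

    # Intracellular activity expected if Cl- were passively distributed at Em
    a_i_passive = a_o_cl * np.exp(Em / RT_F)

    print(f"E_Cl ~ {E_cl:.1f} mV, |Em - E_Cl| ~ {abs(Em - E_cl):.1f} mV")
    print(f"measured a_iCl is {a_i_cl / a_i_passive:.1f}x the passive value")
    ```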

  20. Dose impact in radiographic lung injury following lung SBRT: Statistical analysis and geometric interpretation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Victoria; Kishan, Amar U.; Cao, Minsong

    2014-03-15

    Purpose: To demonstrate a new method of evaluating dose response of treatment-induced lung radiographic injury post-SBRT (stereotactic body radiotherapy) treatment and the discovery of bimodal dose behavior within clinically identified injury volumes. Methods: Follow-up CT scans at 3, 6, and 12 months were acquired from 24 patients treated with SBRT for stage-1 primary lung cancers or oligometastatic lesions. Injury regions in these scans were propagated to the planning CT coordinates by performing deformable registration of the follow-ups to the planning CTs. A bimodal behavior was repeatedly observed from the probability distribution for dose values within the deformed injury regions. Based on a mixture-Gaussian assumption, an Expectation-Maximization (EM) algorithm was used to obtain characteristic parameters for this distribution. Geometric analysis was performed to interpret such parameters and infer the critical dose level that is potentially inductive of post-SBRT lung injury. Results: The Gaussian mixture obtained from the EM algorithm closely approximates the empirical dose histogram within the injury volume with good consistency. The average Kullback-Leibler divergence values between the empirical differential dose volume histogram and the EM-obtained Gaussian mixture distribution were calculated to be 0.069, 0.063, and 0.092 for the 3, 6, and 12 month follow-up groups, respectively. The lower Gaussian component was located at approximately 70% prescription dose (35 Gy) for all three follow-up time points. The higher Gaussian component, contributed by the dose received by the planning target volume, was located at around 107% of the prescription dose. Geometrical analysis suggests the mean of the lower Gaussian component, located at 35 Gy, as a possible indicator for a critical dose that induces lung injury after SBRT. Conclusions: An innovative and improved method for analyzing the correspondence between lung radiographic injury and SBRT treatment dose has been demonstrated. Bimodal behavior was observed in the dose distribution of lung injury after SBRT. Novel statistical and geometrical analysis has shown that the systematically quantified low-dose peak at approximately 35 Gy, or 70% prescription dose, is a good indication of a critical dose for injury. The determined critical dose of 35 Gy resembles the critical dose volume limit of 30 Gy for ipsilateral bronchus in RTOG 0618 and results from previous studies. The authors seek to further extend this improved analysis method to a larger cohort to better understand the interpatient variation in radiographic lung injury dose response post-SBRT.
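
    For readers unfamiliar with the mixture-Gaussian step, the sketch below fits a two-component Gaussian mixture to voxel doses with a plain EM loop, the same kind of procedure the abstract describes. The initialization, iteration count, and the example dose values (a low-dose peak near 35 Gy and a high-dose peak near 107% of a 50 Gy prescription) are hypothetical and not taken from the study.

    ```python
    import numpy as np

    def fit_two_gaussian_em(dose, n_iter=200):
        """Fit a 2-component Gaussian mixture to voxel doses with plain EM.

        dose : 1-D array of dose values (Gy) sampled inside the injury region
        Returns (weights, means, stds), lower-dose component first.
        """
        mu = np.percentile(dose, [25, 75]).astype(float)   # crude initialization
        sd = np.full(2, dose.std() / 2 + 1e-6)
        w = np.array([0.5, 0.5])
        for _ in range(n_iter):
            # E-step: responsibility of each component for each voxel
            pdf = np.exp(-0.5 * ((dose[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
            resp = w * pdf
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: update weights, means, and standard deviations
            nk = resp.sum(axis=0)
            w = nk / len(dose)
            mu = (resp * dose[:, None]).sum(axis=0) / nk
            sd = np.sqrt((resp * (dose[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        order = np.argsort(mu)
        return w[order], mu[order], sd[order]

    # Hypothetical example: low-dose peak near 35 Gy, high-dose peak near 53 Gy
    rng = np.random.default_rng(0)
    dose = np.concatenate([rng.normal(35, 5, 4000), rng.normal(53, 4, 2000)])
    print(fit_two_gaussian_em(dose))
    ```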

  1. Expectant management compared with physical examination-indicated cerclage (EM-PEC) in selected women with a dilated cervix at 14(0/7)-25(6/7) weeks: results from the EM-PEC international cohort study.

    PubMed

    Pereira, Leonardo; Cotter, Amanda; Gómez, Ricardo; Berghella, Vincenzo; Prasertcharoensuk, Witoon; Rasanen, Juha; Chaithongwongwatthana, Surasith; Mittal, Suneeta; Daly, Sean; Airoldi, Jim; Tolosa, Jorge E

    2007-11-01

    The objective of the study was to compare pregnancy outcomes in selected women with a dilated cervix who underwent expectant management or physical examination-indicated cerclage. This was a historical cohort study conducted by the Global Network for Perinatal and Reproductive Health. Women between 14(0/7) and 25(6/7) weeks' gestation with a dilated cervix were identified at 10 centers by ultrasound or digital examination. Primary outcome was time from presentation until delivery (weeks). Secondary outcomes were neonatal survival, birthweight greater than 1500 g and preterm birth less than 28 weeks. Multivariate regression was used to assess the likelihood of neonatal outcomes and control for confounders. Of 225 women, 152 received a physical examination-indicated cerclage, and 73 were managed expectantly without cerclage. Cervical dilation, gestational age at presentation, and antenatal steroid use differed between groups. In the adjusted analyses, cerclage was associated with longer interval from presentation until delivery, improved neonatal survival, birthweight greater than 1500 g and preterm birth less than 28 weeks, compared with expectant management. Similar results were obtained in the analyses limited to women dilated between 2 and 4 cm (n = 122). In this study, the largest cohort reported to date, physical examination-indicated cerclage appears to prolong gestation and improve neonatal survival, compared with expectant management in selected women with cervical dilation between 14(0/7) and 25(6/7) weeks. A randomized, controlled trial should be conducted to determine whether these potential benefits outweigh the risks of cerclage placement in this population.

  2. To drain or not to drain? Predictors of tube thoracostomy insertion and outcomes associated with drainage of traumatic hemothoraces.

    PubMed

    Wells, Bryan J; Roberts, Derek J; Grondin, Sean; Navsaria, Pradeep H; Kirkpatrick, Andrew W; Dunham, Michael B; Ball, Chad G

    2015-09-01

    Historical data suggests that many traumatic hemothoraces (HTX) can be managed expectantly without tube thoracostomy (TT) drainage. The purpose of this study was to identify predictors of TT, including whether the quantity of pleural blood predicted tube placement, and to evaluate outcomes associated with TT versus expected management (EM) of traumatic HTXs. A retrospective cohort study of all trauma patients with HTXs and an Injury Severity Score (ISS) ≥12 managed at a level I trauma centre between April 1, 2005 and December 31, 2012 was completed. Mixed-effects models with a subject-specific random intercept were used to identify independent risk factors for TT. Logistic and log-linear regression were used to compute odds ratios (ORs) for mortality and empyema and percent increases in length of hospital and intensive care unit stay between patients managed with TT versus EM, respectively. A total of 635 patients with 749 HTXs were included in the study. Overall, 491 (66%) HTXs were drained while 258 (34%) were managed expectantly. Independent predictors of TT placement included concomitant ipsilateral flail chest [OR 3.03; 95% confidence interval (CI) 1.04-8.80; p=0.04] or pneumothorax (OR 6.19; 95% CI 1.79-21.5; p<0.01) and the size of the HTX (OR per 10cc increase 1.12; 95% CI 1.04-1.21; p<0.01). Although the adjusted odds of mortality were not significantly different between groups (OR 3.99; 95% CI 0.87-18.30; p=0.08), TT was associated with a 47.14% (95% CI, 25.57-69.71%; p<0.01) adjusted increase in hospital length of stay. Empyemas (n=29) only occurred among TT patients. Expectant management of traumatic HTX was associated with a shorter length of hospital stay, no empyemas, and no increase in mortality. Although EM of smaller HTXs may be safe, these findings must be confirmed by a large multi-centre cohort study and randomized controlled trials before they are used to guide practice. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. On the Teaching of Portfolio Theory.

    ERIC Educational Resources Information Center

    Biederman, Daniel K.

    1992-01-01

    Demonstrates how a simple portfolio problem expressed explicitly as an expected utility maximization problem can be used to instruct students in portfolio theory. Discusses risk aversion, decision making under uncertainty, and the limitations of the traditional mean variance approach. Suggests students may develop a greater appreciation of general…

  4. TIME SHARING WITH AN EXPLICIT PRIORITY QUEUING DISCIPLINE.

    DTIC Science & Technology

    exponentially distributed service times and an ordered priority queue. Each new arrival buys a position in this queue by offering a non-negative bribe to the...parameters is investigated through numerical examples. Finally, to maximize the expected revenue per unit time accruing from bribes , an optimization

  5. Program Monitoring: Problems and Cases.

    ERIC Educational Resources Information Center

    Lundin, Edward; Welty, Gordon

    Designed as the major component of a comprehensive model of educational management, a behavioral model of decision making is presented that approximates the synoptic model of neoclassical economic theory. The synoptic model defines all possible alternatives and provides a basis for choosing that alternative which maximizes expected utility. The…

  6. A Bayesian Approach to Interactive Retrieval

    ERIC Educational Resources Information Center

    Tague, Jean M.

    1973-01-01

    A probabilistic model for interactive retrieval is presented. Bayesian statistical decision theory principles are applied: use of prior and sample information about the relationship of document descriptions to query relevance; maximization of expected value of a utility function, to the problem of optimally restructuring search strategies in an…

  7. Creating an Agent Based Framework to Maximize Information Utility

    DTIC Science & Technology

    2008-03-01

    information utility may be a qualitative description of the information, where one would expect the adjectives low value, fair value, high value. For...operations. Information in this category may have a fair value rating. Finally, many seemingly unrelated events, such as reports of snipers in buildings

  8. Robust radio interferometric calibration using the t-distribution

    NASA Astrophysics Data System (ADS)

    Kazemi, S.; Yatawatta, S.

    2013-10-01

    A major stage of radio interferometric data processing is calibration, or the estimation of systematic errors in the data and the correction for such errors. A stochastic error (noise) model is assumed, and in most cases, this underlying model is assumed to be Gaussian. However, outliers in the data due to interference or due to errors in the sky model would have adverse effects on processing based on a Gaussian noise model. Most of the shortcomings of calibration, such as the loss in flux or coherence and the appearance of spurious sources, can be attributed to deviations from the assumed noise model. In this paper, we propose to improve the robustness of calibration by using a noise model based on Student's t-distribution. Student's t-noise is a special case of Gaussian noise when the variance is unknown. Unlike in Gaussian-noise-model-based calibration, traditional least-squares minimization does not directly extend to the case of a Student's t-noise model. Therefore, we use a variant of the expectation-maximization algorithm, called the expectation-conditional maximization either (ECME) algorithm, when we have a Student's t-noise model, and use the Levenberg-Marquardt algorithm in the maximization step. We give simulation results to show the robustness of the proposed calibration method as opposed to traditional Gaussian-noise-model-based calibration, especially in preserving the flux of weaker sources that are not included in the calibration model.
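
    To illustrate the down-weighting that a Student's t noise model induces in the E-step, the sketch below solves a linear least-squares problem with EM-style iterative reweighting. The actual calibration problem above is nonlinear and uses the ECME algorithm with Levenberg-Marquardt in the maximization step; this simplified linear version, with hypothetical names and defaults, only shows how large residuals are suppressed relative to a Gaussian fit.

    ```python
    import numpy as np

    def t_robust_least_squares(A, y, nu=4.0, n_iter=50):
        """Robust linear least squares under a Student's t noise model via an
        EM / iteratively-reweighted scheme (a simplified stand-in for the
        nonlinear ECME calibration described above).

        A : (m, n) design matrix,  y : (m,) data,  nu : degrees of freedom
        """
        x = np.linalg.lstsq(A, y, rcond=None)[0]      # Gaussian solution as a start
        sigma2 = np.mean((y - A @ x) ** 2) + 1e-12
        for _ in range(n_iter):
            r = y - A @ x
            w = (nu + 1) / (nu + r ** 2 / sigma2)     # E-step: outliers get small weights
            Aw = A * w[:, None]
            x = np.linalg.solve(A.T @ Aw, Aw.T @ y)   # M-step: weighted normal equations
            sigma2 = np.sum(w * r ** 2) / len(y) + 1e-12
        return x
    ```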

  9. Can differences in breast cancer utilities explain disparities in breast cancer care?

    PubMed

    Schleinitz, Mark D; DePalo, Dina; Blume, Jeffrey; Stein, Michael

    2006-12-01

    Black, older, and less affluent women are less likely to receive adjuvant breast cancer therapy than their counterparts. Whereas preference contributes to disparities in other health care scenarios, it is unclear if preference explains differential rates of breast cancer care. To ascertain utilities from women of diverse backgrounds for the different stages of, and treatments for, breast cancer and to determine whether a treatment decision modeled from utilities is associated with socio-demographic characteristics. A stratified sample (by age and race) of 156 English-speaking women over 25 years old not currently undergoing breast cancer treatment. We assessed utilities using standard gamble for 5 breast cancer stages, and time-tradeoff for 3 therapeutic modalities. We incorporated each subject's utilities into a Markov model to determine whether her quality-adjusted life expectancy would be maximized with chemotherapy for a hypothetical, current diagnosis of stage II breast cancer. We used logistic regression to determine whether socio-demographic variables were associated with this optimal strategy. Median utilities for the 8 health states were: stage I disease, 0.91 (interquartile range 0.50 to 1.00); stage II, 0.75 (0.26 to 0.99); stage III, 0.51 (0.25 to 0.94); stage IV (estrogen receptor positive), 0.36 (0 to 0.75); stage IV (estrogen receptor negative), 0.40 (0 to 0.79); chemotherapy 0.50 (0 to 0.92); hormonal therapy 0.58 (0 to 1); and radiation therapy 0.83 (0.10 to 1). Utilities for early stage disease and treatment modalities, but not metastatic disease, varied with socio-demographic characteristics. One hundred and twenty-two of 156 subjects had utilities that maximized quality-adjusted life expectancy given stage II breast cancer with chemotherapy. Age over 50, black race, and low household income were associated with at least 5-fold lower odds of maximizing quality-adjusted life expectancy with chemotherapy, whereas women who were married or had a significant other were 4-fold more likely to maximize quality-adjusted life expectancy with chemotherapy. Differences in utility for breast cancer health states may partially explain the lower rate of adjuvant therapy for black, older, and less affluent women. Further work must clarify whether these differences result from health preference alone or reflect women's perceptions of sources of disparity, such as access to care, poor communication with providers, limitations in health knowledge or in obtaining social and workplace support during therapy.
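
    To show how elicited utilities enter such a Markov model, here is a toy cohort calculation of quality-adjusted life expectancy with and without chemotherapy. The three-state structure, transition probabilities, and cycle count are entirely hypothetical and are not the study's model; in particular, a fuller model would also apply the chemotherapy utility during treatment cycles.

    ```python
    import numpy as np

    def qale(P, utilities, start=0, n_cycles=40):
        """Quality-adjusted life expectancy (utility-weighted years) of a cohort
        Markov model with yearly cycles.

        P         : (k, k) row-stochastic transition matrix (last state = death)
        utilities : (k,) per-cycle utility of each state (death = 0)
        """
        dist = np.zeros(len(utilities))
        dist[start] = 1.0
        total = 0.0
        for _ in range(n_cycles):
            total += dist @ utilities     # utility accrued this cycle
            dist = dist @ P               # advance the cohort one cycle
        return total

    # Hypothetical 3-state model: stage II disease, metastatic disease, dead
    u = np.array([0.75, 0.38, 0.0])                 # utilities, e.g. from standard gamble
    P_chemo = np.array([[0.92, 0.05, 0.03],
                        [0.00, 0.75, 0.25],
                        [0.00, 0.00, 1.00]])
    P_no_chemo = np.array([[0.85, 0.10, 0.05],
                           [0.00, 0.75, 0.25],
                           [0.00, 0.00, 1.00]])
    print(qale(P_chemo, u), qale(P_no_chemo, u))
    ```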

  10. EMS Provider Assessment of Vehicle Damage Compared to a Professional Crash Reconstructionist

    PubMed Central

    Lerner, E. Brooke; Cushman, Jeremy T.; Blatt, Alan; Lawrence, Richard; Shah, Manish N.; Swor, Robert; Brasel, Karen; Jurkovich, Gregory J.

    2011-01-01

    Objective To determine the accuracy of EMS provider assessments of motor vehicle damage, when compared to measurements made by a professional crash reconstructionist. Methods EMS providers caring for adult patients injured during a motor vehicle crash and transported to the regional trauma center in a midsized community were interviewed upon ED arrival. The interview collected provider estimates of crash mechanism of injury. For crashes that met a preset severity threshold, the vehicle’s owner was asked to consent to having a crash reconstructionist assess their vehicle. The assessment included measuring intrusion and external auto deformity. Vehicle damage was used to calculate change in velocity. Paired t-test and correlation were used to compare EMS estimates and investigator derived values. Results 91 vehicles were enrolled; of these 58 were inspected and 33 were excluded because the vehicle was not accessible. 6 vehicles had multiple patients. Therefore, a total of 68 EMS estimates were compared to the inspection findings. Patients were 46% male, 28% admitted to hospital, and 1% died. Mean EMS estimated deformity was 18” and mean measured was 14”. Mean EMS estimated intrusion was 5” and mean measured was 4”. EMS providers and the reconstructionist had 67% agreement for determination of external auto deformity (kappa 0.26), and 88% agreement for determination of intrusion (kappa 0.27) when the 1999 Field Triage Decision Scheme Criteria were applied. Mean EMS estimated speed prior to the crash was 48 mph±13 and mean reconstructionist estimated change in velocity was 18 mph±12 (correlation -0.45). EMS determined that 19 vehicles had rolled over while the investigator identified 18 (kappa 0.96). In 55 cases EMS and the investigator agreed on seatbelt use, for the remaining 13 cases there was disagreement (5) or the investigator was unable to make a determination (8) (kappa 0.40). Conclusions This study found that EMS providers are good at estimating rollover. Vehicle intrusion, deformity, and seatbelt use appear to be more difficult to estimate with only fair agreement with the crash reconstructionist. As expected, the EMS provider estimated speed prior to the crash does not appear to be a reasonable proxy for change in velocity. PMID:21815732

  11. MC EMiNEM maps the interaction landscape of the Mediator.

    PubMed

    Niederberger, Theresa; Etzold, Stefanie; Lidschreiber, Michael; Maier, Kerstin C; Martin, Dietmar E; Fröhlich, Holger; Cramer, Patrick; Tresch, Achim

    2012-01-01

    The Mediator is a highly conserved, large multiprotein complex that is involved essentially in the regulation of eukaryotic mRNA transcription. It acts as a general transcription factor by integrating regulatory signals from gene-specific activators or repressors to the RNA Polymerase II. The internal network of interactions between Mediator subunits that conveys these signals is largely unknown. Here, we introduce MC EMiNEM, a novel method for the retrieval of functional dependencies between proteins that have pleiotropic effects on mRNA transcription. MC EMiNEM is based on Nested Effects Models (NEMs), a class of probabilistic graphical models that extends the idea of hierarchical clustering. It combines mode-hopping Monte Carlo (MC) sampling with an Expectation-Maximization (EM) algorithm for NEMs to increase sensitivity compared to existing methods. A meta-analysis of four Mediator perturbation studies in Saccharomyces cerevisiae, three of which are unpublished, provides new insight into the Mediator signaling network. In addition to the known modular organization of the Mediator subunits, MC EMiNEM reveals a hierarchical ordering of its internal information flow, which is putatively transmitted through structural changes within the complex. We identify the N-terminus of Med7 as a peripheral entity, entailing only local structural changes upon perturbation, while the C-terminus of Med7 and Med19 appear to play a central role. MC EMiNEM associates Mediator subunits to most directly affected genes, which, in conjunction with gene set enrichment analysis, allows us to construct an interaction map of Mediator subunits and transcription factors.

  12. Numerical simulations of imaging satellites with optical interferometry

    NASA Astrophysics Data System (ADS)

    Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong

    2015-08-01

    Optical interferometry imaging systems, which are composed of multiple sub-apertures, are a type of sensor that can break through the single-aperture limit and achieve high-resolution imaging. The technique can be used to precisely measure the shapes, sizes, and positions of astronomical objects and satellites, and it can also support space exploration, space debris monitoring, and satellite surveillance. A Fizeau-type optical aperture synthesis telescope has the advantages of short baselines, a common mount, and multiple sub-apertures, so it is feasible for instantaneous direct imaging through focal-plane combination. Since 2002, researchers at Shanghai Astronomical Observatory have studied optical interferometry techniques. For the array configuration, two optimized layouts have been proposed in place of the symmetrical circular distribution: an asymmetrical circular distribution and a Y-type distribution. On this basis, two kinds of structure were proposed for the Fizeau interferometric telescope: a Y-type telescope with independent sub-apertures, and a segmented-mirror telescope with a common secondary mirror. In this paper we describe the interferometric telescope and image acquisition, and then focus on simulations of image restoration for the Y-type and segmented-mirror telescopes. The Richardson-Lucy (RL) method, the Wiener filter, and the Ordered Subsets Expectation Maximization (OS-EM) method are studied, and the influence of different stopping rules is analyzed. At the end of the paper, we present reconstruction results for images of several satellites.
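
    As a point of reference for the restoration methods named above, the Richardson-Lucy iteration is itself the EM estimate for an image blurred by a known point spread function under Poisson noise, and OS-EM accelerates it by cycling over ordered subsets of the data. The sketch below is a minimal, non-ordered-subsets version, assuming 2-D NumPy arrays and a known `psf`; it is illustrative only, not the authors' implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=50, eps=1e-12):
    """Richardson-Lucy deconvolution: the EM update for Poisson-noise imaging."""
    blurred = np.asarray(blurred, dtype=float)
    psf = np.asarray(psf, dtype=float)
    psf_mirror = psf[::-1, ::-1]                   # flipped PSF for the back-projection step
    estimate = np.full(blurred.shape, blurred.mean())
    for _ in range(n_iter):
        predicted = fftconvolve(estimate, psf, mode="same")       # forward model
        ratio = blurred / np.maximum(predicted, eps)              # data / prediction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")   # multiplicative EM update
    return estimate
```

    The multiplicative update keeps the estimate non-negative, which is one reason RL-type iterations are popular for photon-limited imaging; a stopping rule (fixed iteration count or a residual criterion) acts as the regularizer.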

  13. Predictive coarse-graining

    NASA Astrophysics Data System (ADS)

    Schöberl, Markus; Zabaras, Nicholas; Koutsourelakis, Phaedon-Stelios

    2017-03-01

    We propose a data-driven, coarse-graining formulation in the context of equilibrium statistical mechanics. In contrast to existing techniques which are based on a fine-to-coarse map, we adopt the opposite strategy by prescribing a probabilistic coarse-to-fine map. This corresponds to a directed probabilistic model where the coarse variables play the role of latent generators of the fine scale (all-atom) data. From an information-theoretic perspective, the framework proposed provides an improvement upon the relative entropy method [1] and is capable of quantifying the uncertainty due to the information loss that unavoidably takes place during the coarse-graining process. Furthermore, it can be readily extended to a fully Bayesian model where various sources of uncertainties are reflected in the posterior of the model parameters. The latter can be used to produce not only point estimates of fine-scale reconstructions or macroscopic observables, but more importantly, predictive posterior distributions on these quantities. Predictive posterior distributions reflect the confidence of the model as a function of the amount of data and the level of coarse-graining. The issues of model complexity and model selection are seamlessly addressed by employing a hierarchical prior that favors the discovery of sparse solutions, revealing the most prominent features in the coarse-grained model. A flexible and parallelizable Monte Carlo - Expectation-Maximization (MC-EM) scheme is proposed for carrying out inference and learning tasks. A comparative assessment of the proposed methodology is presented for a lattice spin system and the SPC/E water model.

  14. Deep soil carbon dynamics are driven more by soil type than by climate: a worldwide meta-analysis of radiocarbon profiles.

    PubMed

    Mathieu, Jordane A; Hatté, Christine; Balesdent, Jérôme; Parent, Éric

    2015-11-01

    The response of soil carbon dynamics to climate and land-use change will affect both the future climate and the quality of ecosystems. Deep soil carbon (>20 cm) is the primary component of the soil carbon pool, but the dynamics of deep soil carbon remain poorly understood. Therefore, radiocarbon activity (Δ14C), which is a function of the age of carbon, may help to understand the rates of soil carbon biodegradation and stabilization. We analyzed the published 14C contents in 122 profiles of mineral soil that were well distributed in most of the large world biomes, except for the boreal zone. With a multivariate extension of a linear mixed-effects model whose inference was based on the parallel combination of two algorithms, the expectation-maximization (EM) and the Metropolis-Hastings algorithms, we expressed soil Δ14C profiles as a four-parameter function of depth. The four-parameter model produced insightful predictions of soil Δ14C as dependent on depth, soil type, climate, vegetation, land-use and date of sampling (R2=0.68). Further analysis with the model showed that the age of topsoil carbon was primarily affected by climate and cultivation. By contrast, the age of deep soil carbon was affected more by soil taxa than by climate and thus illustrated the strong dependence of soil carbon dynamics on other pedologic traits such as clay content and mineralogy. © 2015 John Wiley & Sons Ltd.
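
    For readers unfamiliar with the second of the two algorithms combined here, a generic random-walk Metropolis-Hastings update looks like the sketch below; the `log_post` callable and the Gaussian proposal are placeholders, not the mixed-effects posterior actually sampled in the study.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=5000, step=0.1, seed=0):
    """Random-walk Metropolis-Hastings sampler for a log-posterior `log_post`."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    current_lp = log_post(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.shape)
        proposal_lp = log_post(proposal)
        # Accept with probability min(1, posterior ratio); the proposal is symmetric
        if np.log(rng.random()) < proposal_lp - current_lp:
            theta, current_lp = proposal, proposal_lp
        samples.append(theta.copy())
    return np.array(samples)
```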

  15. A novel essential domain perspective for exploring gene essentiality.

    PubMed

    Lu, Yao; Lu, Yulan; Deng, Jingyuan; Peng, Hai; Lu, Hui; Lu, Long Jason

    2015-09-15

    Genes with indispensable functions are identified as essential; however, the traditional gene-level studies of essentiality have several limitations. In this study, we characterized gene essentiality from a new perspective of protein domains, the independent structural or functional units of a polypeptide chain. To identify such essential domains, we have developed an Expectation-Maximization (EM) algorithm-based Essential Domain Prediction (EDP) Model. With simulated datasets, the model provided convergent results given different initial values and offered accurate predictions even with noise. We then applied the EDP model to six microbial species and predicted 1879 domains to be essential in at least one species, ranging from 10% to 23% in each species. The predicted essential domains were more conserved than either non-essential domains or essential genes. Comparing essential domains in prokaryotes and eukaryotes revealed an evolutionary distance consistent with that inferred from ribosomal RNA. When utilizing these essential domains to reproduce the annotation of essential genes, we obtained accurate results, suggesting that protein domains are more basic units of gene essentiality. Furthermore, we presented several examples to illustrate how the combination of essential and non-essential domains can lead to genes with divergent essentiality. In summary, we have described the first systematic analysis of gene essentiality at the level of domains. huilu.bioinfo@gmail.com or Long.Lu@cchmc.org Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  16. Inferring the most probable maps of underground utilities using Bayesian mapping model

    NASA Astrophysics Data System (ADS)

    Bilal, Muhammad; Khan, Wasiq; Muggleton, Jennifer; Rustighi, Emiliano; Jenks, Hugo; Pennock, Steve R.; Atkins, Phil R.; Cohn, Anthony

    2018-03-01

    Mapping the Underworld (MTU), a major initiative in the UK, addresses the social, environmental and economic consequences arising from the inability to locate buried underground utilities (such as pipes and cables) by developing a multi-sensor mobile device. The aim of the MTU device is to locate different types of buried assets in real time using automated data processing techniques and statutory records. The statutory records, though typically inaccurate and incomplete, provide useful prior information on what is buried under the ground and where. However, integrating information from multiple sensors (raw data) with these qualitative maps, and visualizing the result, is challenging and requires robust machine learning/data fusion approaches. In this paper, an approach for the automated creation of revised maps was developed as a Bayesian mapping model that integrates the knowledge extracted from raw sensor data with the available statutory records. Statutory records were combined with hypotheses from the sensors to form an initial estimate of what might be found underground and roughly where. The maps were (re)constructed using automated image segmentation techniques for hypothesis extraction and Bayesian classification techniques for segment-manhole connections. The model, consisting of an image segmentation algorithm and Bayesian classification techniques (segment recognition and an expectation-maximization (EM) algorithm), performed robustly on simulated as well as real sites in predicting linear/non-linear segments and constructing refined 2D/3D maps.

  17. Power calculations for likelihood ratio tests for offspring genotype risks, maternal effects, and parent-of-origin (POO) effects in the presence of missing parental genotypes when unaffected siblings are available.

    PubMed

    Rampersaud, E; Morris, R W; Weinberg, C R; Speer, M C; Martin, E R

    2007-01-01

    Genotype-based likelihood-ratio tests (LRT) of association that examine maternal and parent-of-origin effects have been previously developed in the framework of log-linear and conditional logistic regression models. In the situation where parental genotypes are missing, the expectation-maximization (EM) algorithm has been incorporated in the log-linear approach to allow incomplete triads to contribute to the LRT. We present an extension to this model which we call the Combined_LRT that incorporates additional information from the genotypes of unaffected siblings to improve assignment of incompletely typed families to mating type categories, thereby improving inference of missing parental data. Using simulations involving a realistic array of family structures, we demonstrate the validity of the Combined_LRT under the null hypothesis of no association and provide power comparisons under varying levels of missing data and using sibling genotype data. We demonstrate the improved power of the Combined_LRT compared with the family-based association test (FBAT), another widely used association test. Lastly, we apply the Combined_LRT to a candidate gene analysis in Autism families, some of which have missing parental genotypes. We conclude that the proposed log-linear model will be an important tool for future candidate gene studies, for many complex diseases where unaffected siblings can often be ascertained and where epigenetic factors such as imprinting may play a role in disease etiology.

  18. Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data

    PubMed Central

    Hallac, David; Vare, Sagar; Boyd, Stephen; Leskovec, Jure

    2018-01-01

    Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. PMID:29770257
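
    To make the alternating structure concrete, the sketch below mimics TICC's two subproblems in a deliberately simplified form: the assignment step is a dynamic program with a switching penalty (encouraging temporally contiguous clusters), while the model-update step fits a plain Gaussian per cluster instead of the sparse block-Toeplitz inverse covariance that TICC estimates with ADMM. All names are illustrative, not the authors' code.

```python
import numpy as np
from scipy.stats import multivariate_normal

def ticc_like_clustering(X, k, beta=10.0, n_iter=20, seed=0):
    """Simplified TICC-style alternating minimization on an (n, d) time series X.

    E-step: assign each time point to a cluster via a Viterbi-style dynamic
    program with switching penalty `beta`.
    M-step: refit each cluster's Gaussian (the real TICC fits a sparse
    block-Toeplitz inverse covariance with ADMM instead).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    labels = rng.integers(k, size=n)
    for _ in range(n_iter):
        # M-step (simplified): per-cluster mean and covariance
        mus, covs = [], []
        for j in range(k):
            pts = X[labels == j] if np.any(labels == j) else X
            mus.append(pts.mean(axis=0))
            covs.append(np.cov(pts, rowvar=False) + 1e-6 * np.eye(d))
        # E-step: negative log-likelihood of every point under every cluster
        nll = np.column_stack([-multivariate_normal.logpdf(X, mus[j], covs[j])
                               for j in range(k)])
        # Dynamic program over time with a penalty for switching clusters
        cost = nll.copy()
        back = np.zeros((n, k), dtype=int)
        for t in range(1, n):
            prev = cost[t - 1][None, :] + beta * (1 - np.eye(k))
            back[t] = prev.argmin(axis=1)
            cost[t] += prev.min(axis=1)
        new_labels = np.empty(n, dtype=int)
        new_labels[-1] = cost[-1].argmin()
        for t in range(n - 2, -1, -1):
            new_labels[t] = back[t + 1, new_labels[t + 1]]
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels
```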

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gritti, Fabrice; Guiochon, Georges A

    The adsorption isotherms of phenol, caffeine, propranolol chloride, and amitriptyline chloride were measured on three new brands of C{sub 18}-bonded silica that have been designed to be more resistant than conventional C{sub 18}-bonded silica at high pHs (>8). These columns were the 4 {micro}m Bidentate Cogent-C{sub 18} (Microsolv Technology Corporation, Long Branch, NJ, USA), the 3.5 {micro}m Zorbax Extend-C{sub 18} (Agilent Technologies, Palo Alto, CA, USA), and the 5 {micro}m XTerra-C{sub 18} (Waters, Milford, MA, USA). The originality of these adsorbents is due to their surface chemistry, which protects them from rapid hydrolysis or dissolution at extreme pH conditions. Their adsorption properties were compared to those of the 3 {micro}m Luna-C{sub 18} (Phenomenex, Torrance, CA), which is a more conventional monofunctional material. The adsorption data were acquired by frontal analysis (FA) and the adsorption energy distributions (AEDs) of all systems studied were calculated by the expectation-maximization (EM) method. The experimental results show that neither a simple surface protection (Extend-C{sub 18}) nor the elimination of most of the silanol groups (Cogent-C{sub 18}) is sufficient to avoid a peak tailing of the basic compounds at pH 8 that is of thermodynamic origin. The incorporation of organic moieties in the silica matrix, which was achieved in XTerra-C{sub 18}, the first generation of hybrid methyl/silica material, reduces the silanol activity and is more successful in reducing this peak tailing.

  20. Robust multiperson detection and tracking for mobile service and social robots.

    PubMed

    Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou

    2012-10-01

    This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.

  1. Optimal joint detection and estimation that maximizes ROC-type curves

    PubMed Central

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K.

    2017-01-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation. PMID:27093544

  2. Optimal Joint Detection and Estimation That Maximizes ROC-Type Curves.

    PubMed

    Wunderlich, Adam; Goossens, Bart; Abbey, Craig K

    2016-09-01

    Combined detection-estimation tasks are frequently encountered in medical imaging. Optimal methods for joint detection and estimation are of interest because they provide upper bounds on observer performance, and can potentially be utilized for imaging system optimization, evaluation of observer efficiency, and development of image formation algorithms. We present a unified Bayesian framework for decision rules that maximize receiver operating characteristic (ROC)-type summary curves, including ROC, localization ROC (LROC), estimation ROC (EROC), free-response ROC (FROC), alternative free-response ROC (AFROC), and exponentially-transformed FROC (EFROC) curves, succinctly summarizing previous results. The approach relies on an interpretation of ROC-type summary curves as plots of an expected utility versus an expected disutility (or penalty) for signal-present decisions. We propose a general utility structure that is flexible enough to encompass many ROC variants and yet sufficiently constrained to allow derivation of a linear expected utility equation that is similar to that for simple binary detection. We illustrate our theory with an example comparing decision strategies for joint detection-estimation of a known signal with unknown amplitude. In addition, building on insights from our utility framework, we propose new ROC-type summary curves and associated optimal decision rules for joint detection-estimation tasks with an unknown, potentially-multiple, number of signals in each observation.

  3. Optimal threshold estimator of a prognostic marker by maximizing a time-dependent expected utility function for a patient-centered stratified medicine.

    PubMed

    Dantan, Etienne; Foucher, Yohann; Lorent, Marine; Giral, Magali; Tessier, Philippe

    2018-06-01

    Defining thresholds of prognostic markers is essential for stratified medicine. Such thresholds are mostly estimated from purely statistical measures, regardless of patient preferences, potentially leading to unacceptable medical decisions. Quality-Adjusted Life-Years are a widely used preference-based measure of health outcomes. We develop a time-dependent Quality-Adjusted Life-Years-based expected utility function for censored data that should be maximized to estimate an optimal threshold. We performed a simulation study to compare the thresholds estimated by the proposed expected utility approach and by purely statistical estimators. Two applications illustrate the usefulness of the proposed methodology, which was implemented in the R package ROCt (www.divat.fr). First, by reanalysing data from a randomized clinical trial comparing the efficacy of prednisone vs. placebo in patients with chronic liver cirrhosis, we demonstrate the utility of treating patients with a prothrombin level higher than 89%. Second, we reanalyze data from an observational cohort of kidney transplant recipients and conclude that the Kidney Transplant Failure Score is not useful for adapting the frequency of clinical visits. Applying such a patient-centered methodology may improve the future transfer of novel prognostic scoring systems or markers into clinical practice.

  4. Mother doesn't always know best: Maternal wormlion choice of oviposition habitat does not match larval habitat choice.

    PubMed

    Adar, Shay; Dor, Roi

    2018-02-01

    Habitat choice is an important decision that influences animals' fitness. Insect larvae are less mobile than the adults. Consequently, the contribution of the maternal choice of habitat to the survival and development of the offspring is considered to be crucial. According to the "preference-performance hypothesis", ovipositing females are expected to choose habitats that will maximize the performance of their offspring. We tested this hypothesis in wormlions (Diptera: Vermileonidae), which are small sand-dwelling insects that dig pit-traps in sandy patches and ambush small arthropods. Larvae prefer relatively deep and obstacle-free sand, and here we tested the habitat preference of the ovipositing female. In contrast to our expectation, ovipositing females showed no clear preference for either a deep sand or obstacle-free habitat, in contrast to the larval choice. This suboptimal female choice led to smaller pits being constructed later by the larvae, which may reduce prey capture success of the larvae. We offer several explanations for this apparently suboptimal female behavior, related either to maximizing maternal rather than offspring fitness, or to constraints on the female's behavior. Female's ovipositing habitat choice may have weaker negative consequences than expected for the offspring, as larvae can partially correct suboptimal maternal choice. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. High dimensional land cover inference using remotely sensed modis data

    NASA Astrophysics Data System (ADS)

    Glanz, Hunter S.

    Image segmentation persists as a major statistical problem, with the volume and complexity of data expanding alongside new technologies. Land cover classification, one of the most studied problems in Remote Sensing, provides an important example of image segmentation whose needs transcend the choice of a particular classification method. That is, the challenges associated with land cover classification pervade the analysis process from data pre-processing to estimation of a final land cover map. Many of the same challenges also plague the task of land cover change detection. Multispectral, multitemporal data with inherent spatial relationships have hardly received adequate treatment due to the large size of the data and the presence of missing values. In this work we propose a novel, concerted application of methods which provide a unified way to estimate model parameters, impute missing data, reduce dimensionality, classify land cover, and detect land cover changes. This comprehensive analysis adopts a Bayesian approach which incorporates prior knowledge to improve the interpretability, efficiency, and versatility of land cover classification and change detection. We explore a parsimonious, parametric model that allows for a natural application of principal components analysis to isolate important spectral characteristics while preserving temporal information. Moreover, it allows us to impute missing data and estimate parameters via expectation-maximization (EM). A significant byproduct of our framework includes a suite of training data assessment tools. To classify land cover, we employ a spanning tree approximation to a lattice Potts prior to incorporate spatial relationships in a judicious way and more efficiently access the posterior distribution of pixel labels. We then achieve exact inference of the labels via the centroid estimator. To detect land cover changes, we develop a new EM algorithm based on the same parametric model. We perform simulation studies to validate our models and methods, and conduct an extensive continental scale case study using MODIS data. The results show that we successfully classify land cover and recover the spatial patterns present in large scale data. Application of our change point method to an area in the Amazon successfully identifies the progression of deforestation through portions of the region.
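
    The imputation-plus-estimation step mentioned above can be illustrated with the textbook EM recursion for a multivariate Gaussian observed with missing entries; the sketch below is a generic reference implementation, not the parsimonious spatio-temporal model developed in the dissertation, and all names are hypothetical.

```python
import numpy as np

def em_gaussian_impute(X, n_iter=50):
    """EM for a multivariate Gaussian with missing values (NaN) in an (n, d) array X."""
    X = np.array(X, dtype=float)
    n, d = X.shape
    miss = np.isnan(X)
    # Initialize with column means and the covariance of mean-filled data
    mu = np.nanmean(X, axis=0)
    X_fill = np.where(miss, mu, X)
    sigma = np.cov(X_fill, rowvar=False) + 1e-6 * np.eye(d)
    for _ in range(n_iter):
        C = np.zeros((d, d))                      # accumulates conditional covariances
        for i in range(n):
            m, o = miss[i], ~miss[i]
            if m.any():
                # E-step: conditional mean/covariance of missing block given observed block
                Soo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
                Smo = sigma[np.ix_(m, o)]
                X_fill[i, m] = mu[m] + Smo @ Soo_inv @ (X[i, o] - mu[o])
                C[np.ix_(m, m)] += sigma[np.ix_(m, m)] - Smo @ Soo_inv @ Smo.T
        # M-step: re-estimate parameters from the completed data
        mu = X_fill.mean(axis=0)
        diff = X_fill - mu
        sigma = (diff.T @ diff + C) / n + 1e-6 * np.eye(d)
    return X_fill, mu, sigma
```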

  6. Memetic algorithms for de novo motif-finding in biomedical sequences.

    PubMed

    Bi, Chengpeng

    2012-09-01

    The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple-sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with several similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++ and exhaustively tested with various simulated and real biological sequences. In the simulations, MaMotif is the most time-efficient of the algorithms compared: it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with, or is superior to, the other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, precisely find the highly conserved helix motif in the transmembrane protein sequences, and correctly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient but also performs excellently compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Bayesian deconvolution of [corrected] fMRI data using bilinear dynamical systems.

    PubMed

    Makni, Salima; Beckmann, Christian; Smith, Steve; Woolrich, Mark

    2008-10-01

    In Penny et al. [Penny, W., Ghahramani, Z., Friston, K.J. 2005. Bilinear dynamical systems. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360(1457) 983-993], a particular case of the Linear Dynamical Systems (LDSs) was used to model the dynamic behavior of the BOLD response in functional MRI. This state-space model, called bilinear dynamical system (BDS), is used to deconvolve the fMRI time series in order to estimate the neuronal response induced by the different stimuli of the experimental paradigm. The BDS model parameters are estimated using an expectation-maximization (EM) algorithm proposed by Ghahramani and Hinton [Ghahramani, Z., Hinton, G.E. 1996. Parameter Estimation for Linear Dynamical Systems. Technical Report, Department of Computer Science, University of Toronto]. In this paper we introduce modifications to the BDS model in order to explicitly model the spatial variations of the haemodynamic response function (HRF) in the brain using a non-parametric approach. While in Penny et al. [Penny, W., Ghahramani, Z., Friston, K.J. 2005. Bilinear dynamical systems. Philos. Trans. R. Soc. Lond. B Biol. Sci. 360(1457) 983-993] the relationship between neuronal activation and fMRI signals is formulated as a first-order convolution with a kernel expansion using basis functions (typically two or three), in this paper, we argue in favor of a spatially adaptive GLM in which a local non-parametric estimation of the HRF is performed. Furthermore, in order to overcome the overfitting problem typically associated with simple EM estimates, we propose a full Variational Bayes (VB) solution to infer the BDS model parameters. We demonstrate the usefulness of our model which is able to estimate both the neuronal activity and the haemodynamic response function in every voxel of the brain. We first examine the behavior of this approach when applied to simulated data with different temporal and noise features. As an example we will show how this method can be used to improve interpretability of estimates from an independent component analysis (ICA) analysis of fMRI data. We finally demonstrate its use on real fMRI data in one slice of the brain.

  8. Reliability analysis based on the losses from failures.

    PubMed

    Todinov, M T

    2006-04-01

    The conventional reliability analysis is based on the premise that increasing the reliability of a system will decrease the losses from failures. On the basis of counterexamples, it is demonstrated that this is valid only if all failures are associated with the same losses. In case of failures associated with different losses, a system with larger reliability is not necessarily characterized by smaller losses from failures. Consequently, a theoretical framework and models are proposed for a reliability analysis, linking reliability and the losses from failures. Equations related to the distributions of the potential losses from failure have been derived. It is argued that the classical risk equation only estimates the average value of the potential losses from failure and does not provide insight into the variability associated with the potential losses. Equations have also been derived for determining the potential and the expected losses from failures for nonrepairable and repairable systems with components arranged in series, with arbitrary life distributions. The equations are also valid for systems/components with multiple mutually exclusive failure modes. The expected losses given failure is a linear combination of the expected losses from failure associated with the separate failure modes scaled by the conditional probabilities with which the failure modes initiate failure. On this basis, an efficient method for simplifying complex reliability block diagrams has been developed. Branches of components arranged in series whose failures are mutually exclusive can be reduced to single components with equivalent hazard rate, downtime, and expected costs associated with intervention and repair. A model for estimating the expected losses from early-life failures has also been developed. For a specified time interval, the expected losses from early-life failures are a sum of the products of the expected number of failures in the specified time intervals covering the early-life failures region and the expected losses given failure characterizing the corresponding time intervals. For complex systems whose components are not logically arranged in series, discrete simulation algorithms and software have been created for determining the losses from failures in terms of expected lost production time, cost of intervention, and cost of replacement. Different system topologies are assessed to determine the effect of modifications of the system topology on the expected losses from failures. It is argued that the reliability allocation in a production system should be done to maximize the profit/value associated with the system. Consequently, a method for setting reliability requirements and reliability allocation maximizing the profit by minimizing the total cost has been developed. Reliability allocation that maximizes the profit in case of a system consisting of blocks arranged in series is achieved by determining for each block individually the reliabilities of the components in the block that minimize the sum of the capital, operation costs, and the expected losses from failures. A Monte Carlo simulation based net present value (NPV) cash-flow model has also been proposed, which has significant advantages to cash-flow models based on the expected value of the losses from failures per time interval. Unlike these models, the proposed model has the capability to reveal the variation of the NPV due to different number of failures occurring during a specified time interval (e.g., during one year). 
The model also permits tracking the impact of the distribution pattern of failure occurrences and the time dependence of the losses from failures.

  9. Allocating dissipation across a molecular machine cycle to maximize flux

    PubMed Central

    Brown, Aidan I.; Sivak, David A.

    2017-01-01

    Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that—in contrast to previous findings—the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both “irreversible” and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation. PMID:29073016

  10. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    PubMed

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model by using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first one is the maximization of methane percentage with single output. The second one is the maximization of biogas production with single output. The last one is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of input variables and their corresponding maximum output values are found out for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
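
    As a sketch of the optimization half of such a prediction-plus-optimization pipeline, a basic particle swarm search over box-constrained inputs might look like the following. The `surrogate` objective is a hypothetical stand-in for the trained multi-layer perceptron; the function names, parameter values, and input ranges are illustrative, not those of the study.

```python
import numpy as np

def pso_maximize(f, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization of f over box constraints `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    dim = lo.size
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([f(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)          # respect operational input ranges
        vals = np.array([f(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# Example: maximize a placeholder surrogate for predicted biogas output
# (in the study this role is played by the trained neural network predictor).
surrogate = lambda x: -np.sum((x - 0.3) ** 2)
best_inputs, best_value = pso_maximize(surrogate, bounds=[(0, 1)] * 4)
```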

  11. Optimisation of the mean boat velocity in rowing.

    PubMed

    Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P

    2012-01-01

    In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.

  12. A Case for Application Oblivious Energy-Efficient MPI Runtime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venkatesh, Akshay; Vishnu, Abhinav; Hamidouche, Khaled

    Power has become the major impediment in designing large-scale high-end systems. The Message Passing Interface (MPI) is the de facto communication interface used as the back-end for designing applications, programming models, and runtimes for these systems. Slack, the time spent by an MPI process in a single MPI call, provides a potential for energy and power savings if an appropriate power reduction technique such as core-idling or Dynamic Voltage and Frequency Scaling (DVFS) can be applied without perturbing the application's execution time. Existing techniques that exploit slack for power savings assume that application behavior repeats across iterations/executions. However, an increasing use of adaptive, data-dependent workloads combined with system factors (OS noise, congestion) makes this assumption invalid. This paper proposes and implements Energy Aware MPI (EAM), an application-oblivious energy-efficient MPI runtime. EAM uses a combination of communication models of common MPI primitives (point-to-point, collective, progress, blocking/non-blocking) and online observation of slack to maximize energy efficiency. Each power lever incurs a time overhead, which must be amortized over slack to minimize degradation. When the predicted communication time exceeds a lever's overhead, the lever is used as soon as possible to maximize energy efficiency. When a mis-prediction occurs, the lever(s) are used automatically at specific intervals for amortization. We implement EAM using MVAPICH2 and evaluate it on ten applications using up to 4096 processes. Our performance evaluation on an InfiniBand cluster indicates that EAM can reduce energy consumption by 5-41% in comparison to the default approach, with negligible (less than 4% in all cases) performance loss.
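
    The slack-amortization rule described above can be caricatured in a few lines: apply a power lever only when the predicted slack exceeds that lever's overhead. The function, lever names, and numbers below are hypothetical illustrations, not EAM's actual interface.

```python
# Hypothetical slack-based lever selection (illustrative only).
def choose_power_lever(predicted_slack_us, levers):
    """Pick the highest-saving lever whose overhead fits within the predicted slack.

    `levers` is a list of (name, overhead_us, savings_watts) tuples.
    Returns None when no lever can be amortized within the predicted slack.
    """
    for name, overhead_us, _savings in sorted(levers, key=lambda l: -l[2]):
        if predicted_slack_us > overhead_us:
            return name
    return None

levers = [("core_idle", 50.0, 20.0), ("dvfs_low", 10.0, 8.0)]
print(choose_power_lever(predicted_slack_us=30.0, levers=levers))  # -> "dvfs_low"
```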

  13. Effect of Repeated Whole Blood Donations on Aerobic Capacity and Hemoglobin Mass in Moderately Trained Male Subjects: A Randomized Controlled Trial.

    PubMed

    Meurrens, Julie; Steiner, Thomas; Ponette, Jonathan; Janssen, Hans Antonius; Ramaekers, Monique; Wehrlin, Jon Peter; Vandekerckhove, Philippe; Deldicque, Louise

    2016-12-01

    The aims of the present study were to investigate the impact of three whole blood donations on endurance capacity and hematological parameters and to determine the duration to fully recover initial endurance capacity and hematological parameters after each donation. Twenty-four moderately trained subjects were randomly divided in a donation (n = 16) and a placebo (n = 8) group. Each of the three donations was interspersed by 3 months, and the recovery of endurance capacity and hematological parameters was monitored up to 1 month after donation. Maximal power output, peak oxygen consumption, and hemoglobin mass decreased (p < 0.001) up to 4 weeks after a single blood donation with a maximal decrease of 4, 10, and 7%, respectively. Hematocrit, hemoglobin concentration, ferritin, and red blood cell count (RBC), all key hematological parameters for oxygen transport, were lowered by a single donation (p < 0.001) and cumulatively further affected by the repetition of the donations (p < 0.001). The maximal decrease after a blood donation was 11% for hematocrit, 10% for hemoglobin concentration, 50% for ferritin, and 12% for RBC (p < 0.001). Maximal power output cumulatively increased in the placebo group as the maximal exercise tests were repeated (p < 0.001), which indicates positive training adaptations. This increase in maximal power output over the whole duration of the study was not observed in the donation group. Maximal, but not submaximal, endurance capacity was altered after blood donation in moderately trained people and the expected increase in capacity after multiple maximal exercise tests was not present when repeating whole blood donations.

  14. Aging Education: A Worldwide Imperative

    ERIC Educational Resources Information Center

    McGuire, Sandra L.

    2017-01-01

    Life expectancy is increasing worldwide. Unfortunately, people are generally not prepared for this long life ahead and have ageist attitudes that inhibit maximizing the "longevity dividend" they have been given. Aging education can prepare people for life's later years and combat ageism. It can reimage aging as a time of continued…

  15. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  16. The Probabilistic Nature of Preferential Choice

    ERIC Educational Resources Information Center

    Rieskamp, Jorg

    2008-01-01

    Previous research has developed a variety of theories explaining when and why people's decisions under risk deviate from the standard economic view of expected utility maximization. These theories are limited in their predictive accuracy in that they do not explain the probabilistic nature of preferential choice, that is, why an individual makes…

  17. Relevance of a Managerial Decision-Model to Educational Administration.

    ERIC Educational Resources Information Center

    Lundin, Edward.; Welty, Gordon

    The rational model of classical economic theory assumes that the decision maker has complete information on alternatives and consequences, and that he chooses the alternative that maximizes expected utility. This model does not allow for constraints placed on the decision maker resulting from lack of information, organizational pressures,…

  18. India's growing participation in global clinical trials.

    PubMed

    Gupta, Yogendra K; Padhy, Biswa M

    2011-06-01

    Lower operational costs, recent regulatory reforms and several logistic advantages make India an attractive destination for conducting clinical trials. Efforts for maintaining stringent ethical standards and the launch of Pharmacovigilance Program of India are expected to maximize the potential of the country for clinical research. Copyright © 2011. Published by Elsevier Ltd.

  19. Optimization Techniques for College Financial Aid Managers

    ERIC Educational Resources Information Center

    Bosshardt, Donald I.; Lichtenstein, Larry; Palumbo, George; Zaporowski, Mark P.

    2010-01-01

    In the context of a theoretical model of expected profit maximization, this paper shows how historic institutional data can be used to assist enrollment managers in determining the level of financial aid for students with varying demographic and quality characteristics. Optimal tuition pricing in conjunction with empirical estimation of…

  20. An Investigation of the Nontechnical Skills Required to Maximize the Safety and Productivity of U.S. Navy Divers

    DTIC Science & Technology

    2005-04-01

    experience. The critical incident interview uses recollection of a specific incident as its starting point and employs a semistructured interview format...context assessment, expectancies, and judgments. The four sweeps in the critical incident interview include: Sweep 1 - Prompting the interviewee to

  1. Common Core in the Real World

    ERIC Educational Resources Information Center

    Hess, Frederick M.; McShane, Michael Q.

    2013-01-01

    There are at least four key places where the Common Core intersects with current efforts to improve education in the United States--testing, professional development, expectations, and accountability. Understanding them can help educators, parents, and policymakers maximize the chance that the Common Core is helpful to these efforts and, perhaps…

  2. Designing Contributing Student Pedagogies to Promote Students' Intrinsic Motivation to Learn

    ERIC Educational Resources Information Center

    Herman, Geoffrey L.

    2012-01-01

    In order to maximize the effectiveness of our pedagogies, we must understand how our pedagogies align with prevailing theories of cognition and motivation and design our pedagogies according to this understanding. When implementing Contributing Student Pedagogies (CSPs), students are expected to make meaningful contributions to the learning of…

  3. Charter School Discipline: Examples of Policies and School Climate Efforts from the Field

    ERIC Educational Resources Information Center

    Kern, Nora; Kim, Suzie

    2016-01-01

    Students need a safe and supportive school environment to maximize their academic and social-emotional learning potential. A school's discipline policies and practices directly impact school climate and student achievement. Together, discipline policies and positive school climate efforts can reinforce behavioral expectations and ensure student…

  4. In the queue for coronary artery bypass grafting: patients' perceptions of risk and 'maximal acceptable waiting time'.

    PubMed

    Llewellyn-Thomas, H; Thiel, E; Paterson, M; Naylor, D

    1999-04-01

    To elicit patients' maximal acceptable waiting times (MAWT) for non-urgent coronary artery bypass grafting (CABG), and to determine if MAWT is related to prior expectations of waiting times, symptom burden, expected relief, or perceived risks of myocardial infarction while waiting. Seventy-two patients on an elective CABG waiting list chose between two hypothetical but plausible options: a 1-month wait with 2% risk of surgical mortality, and a 6-month wait with 1% risk of surgical mortality. Waiting time in the 6-month option was varied up if respondents chose the 6-month/lower risk option, and down if they chose the 1-month/higher risk option, until the MAWT switch point was reached. Patients also reported their expected waiting time, perceived risks of myocardial infarction while waiting, current function, expected functional improvement and the value of that improvement. Only 17 (24%) patients chose the 6-month/1% risk option, while 55 (76%) chose the 1-month/2% risk option. The median MAWT was 2 months; scores ranged from 1 to 12 months (with two outliers). Many perceived high cumulative risks of myocardial infarction if waiting for 1 (upper quartile, > or = 1.45%) or 6 (upper quartile, > or = 10%) months. However, MAWT scores were related only to expected waiting time (r = 0.47; P < 0.0001). Most patients reject waiting 6 months for elective CABG, even if offered along with a halving in surgical mortality (from 2% to 1%). Intolerance for further delay seems to be determined primarily by patients' attachment to their scheduled surgical dates. Many also have severely inflated perceptions of their risk of myocardial infarction in the queue. These results suggest a need for interventions to modify patients' inaccurate risk perceptions, particularly if a scheduled surgical date must be deferred.

  5. Robust generative asymmetric GMM for brain MR image segmentation.

    PubMed

    Ji, Zexuan; Xia, Yong; Zheng, Yuhui

    2017-11-01

    Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied over the last decades. However, most GMM-based segmentation methods suffer from limited accuracy due to the influence of noise and intensity inhomogeneity in brain MR images. To further improve the accuracy of brain MR image segmentation, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit within-cluster and between-cluster neighborhood priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation-maximization algorithm is derived to iteratively maximize an approximation of the data log-likelihood, simultaneously correcting the intensity inhomogeneity and segmenting the brain MR image. To demonstrate the performance of the proposed algorithm, we first applied it to a synthetic brain MR image to illustrate the intermediate results and the estimated distribution. The next group of experiments is carried out on clinical 3T brain MR images that contain severe intensity inhomogeneity and noise. We then quantitatively compare our algorithm to state-of-the-art segmentation approaches using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. In this paper, we propose the RGAGMM algorithm, which simply and efficiently incorporates spatial constraints into an EM framework to simultaneously segment brain MR images and estimate the intensity inhomogeneity. The proposed algorithm is flexible enough to fit the data shapes, simultaneously overcomes the influence of noise and intensity inhomogeneity, and improves segmentation accuracy by over 5% compared with several state-of-the-art algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.
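
    For orientation, the sketch below shows the plain EM recursion for a one-dimensional Gaussian mixture over voxel intensities, which is the baseline that RGAGMM extends with an asymmetric distribution, spatial pseudo-likelihood terms, and bias-field estimation; none of those extensions are modeled here, and all names are illustrative.

```python
import numpy as np

def gmm_em(x, k, n_iter=100, tol=1e-6):
    """Plain EM for a 1-D Gaussian mixture (e.g., voxel intensities) with k components."""
    x = np.asarray(x, dtype=float).ravel()
    n = x.size
    # Initialize means from quantiles, equal weights, common variance
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    prev_ll = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each component for each voxel
        dens = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        total = np.maximum(dens.sum(axis=1, keepdims=True), 1e-300)
        resp = dens / total
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0) + 1e-12
        w = nk / n
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-8
        ll = np.log(total).sum()
        if ll - prev_ll < tol:
            break
        prev_ll = ll
    labels = resp.argmax(axis=1)          # hard segmentation by maximum responsibility
    return labels, mu, var, w
```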

  6. Impact of chronobiology on neuropathic pain treatment.

    PubMed

    Gilron, Ian

    2016-01-01

    Inflammatory pain exhibits circadian rhythmicity. Recently, a distinct diurnal pattern has been described for peripheral neuropathic conditions. This diurnal variation has several implications: advancing understanding of chronobiology may facilitate identification of new and improved treatments; developing pain-contingent strategies that maximize treatment at times of the day associated with highest pain intensity may provide optimal pain relief as well as minimize treatment-related adverse effects (e.g., daytime cognitive dysfunction); and consideration of the impact of chronobiology on pain measurement may lead to improvements in analgesic study design that will maximize assay sensitivity of clinical trials. Recent and ongoing chronobiology studies are thus expected to advance knowledge and treatment of neuropathic pain.

  7. Case-Deletion Diagnostics for Nonlinear Structural Equation Models

    ERIC Educational Resources Information Center

    Lee, Sik-Yum; Lu, Bin

    2003-01-01

    In this article, a case-deletion procedure is proposed to detect influential observations in a nonlinear structural equation model. The key idea is to develop the diagnostic measures based on the conditional expectation of the complete-data log-likelihood function in the EM algorithm. An one-step pseudo approximation is proposed to reduce the…

  8. Feedback-Driven Mode Rotation Control by Electro-Magnetic Torque

    NASA Astrophysics Data System (ADS)

    Okabayashi, M.; Strait, E. J.; Garofalo, A. M.; La Haye, R. J.; in, Y.; Hanson, J. M.; Shiraki, D.; Volpe, F.

    2013-10-01

    The recent experimental discovery of feedback-driven mode rotation control, supported by modeling, opens new approaches for avoiding locked tearing modes that otherwise lead to disruptions. The approach applies electromagnetic (EM) torque using 3D fields, routinely maximized through a simple feedback system. In DIII-D, it is observed that a feedback-applied radial field can be synchronized in phase with the poloidal field component of a large-amplitude tearing mode, producing the maximum EM torque input. The mode frequency can be maintained in the 10 Hz to 100 Hz range in a well-controlled manner, sustaining the discharges. Presently, the ITER internal coils designed for edge localized mode (ELM) control can be varied at only a few Hz, which is nevertheless well below the inverse wall time constant. Hence, the ELM control system could in principle be used for this feedback-driven mode control in various ways. For instance, locking of MHD modes could be avoided during an emergency controlled shutdown, when hundreds of megajoules of EM stored energy must be dissipated. Feedback could also be useful for minimizing mechanical resonances during disruption events by forcing the MHD frequency away from dangerous ranges. Work supported by the US DOE under DE-AC02-09CH11466, DE-FC-02-04ER54698, DE-FG02-08ER85195, and DE-FG02-04ER54761.

  9. Evaluation of thin discontinuities in planar conducting materials using the diffraction of electromagnetic field

    NASA Astrophysics Data System (ADS)

    Savin, A.; Novy, F.; Fintova, S.; Steigmann, R.

    2017-08-01

    The current state of nondestructive evaluation calls for new electromagnetic (EM) methods with high spatial resolution and increased sensitivity. To achieve high performance, the working frequencies must lie in the radiofrequency or microwave range. At these frequencies, plasmon polaritons can appear at the dielectric/conductor interface, propagating between conductive regions as evanescent waves. To exploit the evanescent waves that can appear even when the slit width is much smaller than the wavelength of the incident EM wave, a sensor incorporating a metamaterial (MM) is used. The diffraction of the EM field at the edge of a long, thin discontinuity located beneath the inspected surface of a conductive plate has been studied using geometrical optics principles. A sensor of this type, whose reception coils are shielded by a conductive screen with a circular aperture placed in front of the reception coil of the emission-reception sensor, has been developed; it conveys the information needed to obtain a magnified image of the inspected conductive structures. This work presents a sensor using a conical Swiss-roll MM that allows the propagation of evanescent waves, so that the electromagnetic images are magnified. The test method can be successfully applied in a variety of applications of major importance, such as defect/damage detection in materials used in automotive and aviation technologies, and applying it improves the achievable spatial resolution.

  10. Operation of MRO's High Resolution Imaging Science Experiment (HiRISE): Maximizing Science Participation

    NASA Technical Reports Server (NTRS)

    Eliason, E.; Hansen, C. J.; McEwen, A.; Delamere, W. A.; Bridges, N.; Grant, J.; Gulich, V.; Herkenhoff, K.; Keszthelyi, L.; Kirk, R.

    2003-01-01

    Science return from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) will be optimized by maximizing science participation in the experiment. MRO is expected to arrive at Mars in March 2006, and the primary science phase begins near the end of 2006 after aerobraking (6 months) and a transition phase. The primary science phase lasts for almost 2 Earth years, followed by a 2-year relay phase in which science observations by MRO are expected to continue. We expect to acquire approx. 10,000 images with HiRISE over the course of MRO's two earth-year mission. HiRISE can acquire images with a ground sampling dimension of as little as 30 cm (from a typical altitude of 300 km), in up to 3 colors, and many targets will be re-imaged for stereo. With such high spatial resolution, the percent coverage of Mars will be very limited in spite of the relatively high data rate of MRO (approx. 10x greater than MGS or Odyssey). We expect to cover approx. 1% of Mars at approx. 1m/pixel or better, approx. 0.1% at full resolution, and approx. 0.05% in color or in stereo. Therefore, the placement of each HiRISE image must be carefully considered in order to maximize the scientific return from MRO. We believe that every observation should be the result of a mini research project based on pre-existing datasets. During operations, we will need a large database of carefully researched 'suggested' observations to select from. The HiRISE team is dedicated to involving the broad Mars community in creating this database, to the fullest degree that is both practical and legal. The philosophy of the team and the design of the ground data system are geared to enabling community involvement. A key aspect of this is that image data will be made available to the planetary community for science analysis as quickly as possible to encourage feedback and new ideas for targets.

  11. Interindividual variation in thermal sensitivity of maximal sprint speed, thermal behavior, and resting metabolic rate in a lizard.

    PubMed

    Artacho, Paulina; Jouanneau, Isabelle; Le Galliard, Jean-François

    2013-01-01

    Studies of the relationship of performance and behavioral traits with environmental factors have tended to neglect interindividual variation even though quantification of this variation is fundamental to understanding how phenotypic traits can evolve. In ectotherms, functional integration of locomotor performance, thermal behavior, and energy metabolism is of special interest because of the potential for coadaptation among these traits. For this reason, we analyzed interindividual variation, covariation, and repeatability of the thermal sensitivity of maximal sprint speed, preferred body temperature, thermal precision, and resting metabolic rate measured in ca. 200 common lizards (Zootoca vivipara) that varied by sex, age, and body size. We found significant interindividual variation in selected body temperatures and in the thermal performance curve of maximal sprint speed for both the intercept (expected trait value at the average temperature) and the slope (measure of thermal sensitivity). Interindividual differences in maximal sprint speed across temperatures, preferred body temperature, and thermal precision were significantly repeatable. A positive relationship existed between preferred body temperature and thermal precision, implying that individuals selecting higher temperatures were more precise. The resting metabolic rate was highly variable but was not related to thermal sensitivity of maximal sprint speed or thermal behavior. Thus, locomotor performance, thermal behavior, and energy metabolism were not directly functionally linked in the common lizard.
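
    (Illustrative note: the repeatability statistic used in studies like this one is commonly estimated from one-way ANOVA variance components. The Python sketch below uses invented data, 50 hypothetical individuals with 3 trials each, purely to show the calculation; it does not reproduce the study's analysis.)

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical repeated sprint-speed data: 50 individuals, 3 trials each (invented values).
        n_ind, n_rep = 50, 3
        ind_effect = rng.normal(0.0, 0.30, n_ind)                      # between-individual differences
        speeds = 1.5 + ind_effect[:, None] + rng.normal(0.0, 0.20, (n_ind, n_rep))

        # One-way ANOVA variance components with individual as the grouping factor.
        grand_mean = speeds.mean()
        ms_between = n_rep * ((speeds.mean(axis=1) - grand_mean) ** 2).sum() / (n_ind - 1)
        ms_within = ((speeds - speeds.mean(axis=1, keepdims=True)) ** 2).sum() / (n_ind * (n_rep - 1))

        var_between = (ms_between - ms_within) / n_rep
        repeatability = var_between / (var_between + ms_within)
        print(f"estimated repeatability ≈ {repeatability:.2f} (simulated true value ≈ 0.69)")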

  12. Using return on investment to maximize conservation effectiveness in Argentine grasslands

    PubMed Central

    Murdoch, William; Ranganathan, Jai; Polasky, Stephen; Regetz, James

    2010-01-01

    The rapid global loss of natural habitats and biodiversity, and limited resources, place a premium on maximizing the expected benefits of conservation actions. The scarcity of information on the fine-grained distribution of species of conservation concern, on risks of loss, and on costs of conservation actions, especially in developing countries, makes efficient conservation difficult. The distribution of ecosystem types (unique ecological communities) is typically better known than species and arguably better represents the entirety of biodiversity than do well-known taxa, so we use conserving the diversity of ecosystem types as our conservation goal. We define conservation benefit to include risk of conversion, spatial effects that reward clumping of habitat, and diminishing returns to investment in any one ecosystem type. Using Argentine grasslands as an example, we compare three strategies: protecting the cheapest land (“minimize cost”), maximizing conservation benefit regardless of cost (“maximize benefit”), and maximizing conservation benefit per dollar (“return on investment”). We first show that the widely endorsed goal of saving some percentage (typically 10%) of a country or habitat type, although it may inspire conservation, is a poor operational goal. It either leads to the accumulation of areas with low conservation benefit or requires infeasibly large sums of money, and it distracts from the real problem: maximizing conservation benefit given limited resources. Second, given realistic budgets, return on investment is superior to the other conservation strategies. Surprisingly, however, over a wide range of budgets, minimizing cost provides more conservation benefit than does the maximize-benefit strategy. PMID:21098281
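
    (Illustrative note: the three acquisition strategies can be made concrete with a toy greedy-selection example. All parcel benefits, costs, and the budget below are invented; with these particular numbers the outcome mirrors the paper's qualitative finding that return on investment does best and that minimizing cost can beat maximizing benefit under a tight budget.)

        # Toy comparison of the three acquisition strategies on hypothetical parcels.
        # Each parcel: (name, conservation benefit, cost in $M). All values are invented.
        parcels = [("A", 10, 10), ("B", 8, 4), ("C", 6, 3), ("D", 4, 3), ("E", 2, 1)]
        budget = 10.0

        def select(parcels, budget, key):
            chosen, spent, benefit = [], 0.0, 0.0
            for name, b, c in sorted(parcels, key=key):
                if spent + c <= budget:          # greedily take parcels while the budget allows
                    chosen.append(name)
                    spent += c
                    benefit += b
            return chosen, benefit

        strategies = {
            "minimize cost": lambda p: p[2],                 # cheapest land first
            "maximize benefit": lambda p: -p[1],             # highest benefit first, cost ignored
            "return on investment": lambda p: -p[1] / p[2],  # highest benefit per dollar first
        }
        for label, key in strategies.items():
            chosen, benefit = select(parcels, budget, key)
            print(f"{label:21s}: picks {chosen}, total benefit = {benefit:.0f}")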

  13. Optimization of detectors for the ILC

    NASA Astrophysics Data System (ADS)

    Suehara, Taikan; ILD Group; SID Group

    2016-04-01

    The International Linear Collider (ILC) is a next-generation e+e- linear collider intended to explore the Higgs boson, beyond-Standard-Model particles, the top quark, and electroweak physics with great precision. We are optimizing the two detector concepts, the International Large Detector (ILD) and the Silicon Detector (SiD), to maximize the physics reach expected at the ILC with reasonable detector cost and good reliability. Optimization studies of the vertex detectors, main trackers, and calorimeters are underway. We aim to conclude the optimization and establish the final designs within a few years, so as to complete the detector TDRs and proposals in response to an expected “green sign” for the ILC project.

  14. Eye movement during recall reduces objective memory performance: An extended replication.

    PubMed

    Leer, Arne; Engelhard, Iris M; Lenaert, Bert; Struyf, Dieter; Vervliet, Bram; Hermans, Dirk

    2017-05-01

    Eye Movement Desensitization and Reprocessing (EMDR) therapy for posttraumatic stress disorder involves making eye movements (EMs) during recall of a traumatic image. Experimental studies have shown that the dual task decreases self-reported memory vividness and emotionality. However valuable, these data are prone to demand effects and little can be inferred about the mechanism(s) underlying the observed effects. The current research aimed to fill this lacuna by providing two objective tests of memory performance. Experiment I involved a stimulus discrimination task. Findings were that EM during stimulus recall not only reduces self-reported memory vividness, but also slows down reaction time in a task that requires participants to discriminate the stimulus from perceptually similar stimuli. Experiment II involved a fear conditioning paradigm. It was shown that EM during recall of a threatening stimulus intensifies fearful responding to a perceptually similar yet non-threat-related stimulus, as evidenced by increases in danger expectancies and skin conductance responses. The latter result was not corroborated by startle EMG data. Together, the findings suggest that the EM manipulation renders stimulus attributes less accessible for future recall. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Searching for high-energy gamma-ray counterparts to gravitational-wave sources with Fermi-LAT: A needle in a haystack

    DOE PAGES

    Vianello, G.; Omodei, N.; Chiang, J.; ...

    2017-05-20

    At least a fraction of gravitational-wave (GW) progenitors are expected to emit an electromagnetic (EM) signal in the form of a short gamma-ray burst (sGRB). Discovering such a transient EM counterpart is challenging because the LIGO/VIRGO localization region is much larger (several hundreds of square degrees) than the field of view of X-ray, optical, and radio telescopes. The Fermi Large Area Telescope (LAT) has a wide field of view (~2.4 sr) and detects ~2–3 sGRBs per year above 100 MeV. It can detect them not only during the short prompt phase, but also during their long-lasting high-energy afterglow phase. If other wide-field, high-energy instruments such as Fermi-GBM, Swift-BAT, or INTEGRAL-ISGRI cannot detect or localize with enough precision an EM counterpart during the prompt phase, the LAT can potentially pinpoint it with ≲10 arcmin accuracy during the afterglow phase. This routinely happens with gamma-ray bursts. Moreover, the LAT will cover the entire localization region within hours of any triggers during normal operations, allowing the γ-ray flux of any EM counterpart to be measured or constrained. As a result, we illustrate two new ad hoc methods to search for EM counterparts with the LAT and their application to the GW candidate LVT151012.
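
    (Illustrative note: the numbers quoted above imply some simple geometry. The Python sketch below assumes a 300 deg^2 localization region to stand in for "several hundreds of square degrees"; the 2.4 sr field of view is taken from the abstract.)

        import math

        # Hypothetical geometry behind the numbers quoted in the abstract.
        lat_fov_sr = 2.4                           # LAT field of view (from the abstract)
        full_sky_sr = 4 * math.pi                  # ~12.57 sr

        loc_region_deg2 = 300.0                    # assumed "several hundred" deg^2 localization
        loc_region_sr = loc_region_deg2 * (math.pi / 180.0) ** 2

        print(f"Instantaneous sky fraction seen by the LAT ≈ {lat_fov_sr / full_sky_sr:.1%}")
        print(f"A {loc_region_deg2:.0f} deg^2 localization region ≈ {loc_region_sr:.3f} sr, "
              f"or ≈ {loc_region_sr / lat_fov_sr:.1%} of a single LAT field of view")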

  16. Searching for high-energy gamma-ray counterparts to gravitational-wave sources with Fermi-LAT: A needle in a haystack

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vianello, G.; Omodei, N.; Chiang, J.

    At least a fraction of gravitational-wave (GW) progenitors are expected to emit an electromagnetic (EM) signal in the form of a short gamma-ray burst (sGRB). Discovering such a transient EM counterpart is challenging because the LIGO/VIRGO localization region is much larger (several hundreds of square degrees) than the field of view of X-ray, optical, and radio telescopes. The Fermi Large Area Telescope (LAT) has a wide field of view (~2.4 sr) and detects ~2–3 sGRBs per year above 100 MeV. It can detect them not only during the short prompt phase, but also during their long-lasting high-energy afterglow phase. If other wide-field, high-energy instruments such as Fermi-GBM, Swift-BAT, or INTEGRAL-ISGRI cannot detect or localize with enough precision an EM counterpart during the prompt phase, the LAT can potentially pinpoint it with ≲10 arcmin accuracy during the afterglow phase. This routinely happens with gamma-ray bursts. Moreover, the LAT will cover the entire localization region within hours of any triggers during normal operations, allowing the γ-ray flux of any EM counterpart to be measured or constrained. As a result, we illustrate two new ad hoc methods to search for EM counterparts with the LAT and their application to the GW candidate LVT151012.

  17. K+-induced alterations in airway muscle responsiveness to electrical field stimulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murlas, C.; Ehring, G.; Suszkiw, J.

    1986-07-01

    We investigated possible pre- and postsynaptic effects of K+-induced depolarization on ferret tracheal smooth muscle (TSM) responsiveness to cholinergic stimulation. To assess electromechanical activity, cell membrane potential (Em) and tension (Tm) were simultaneously recorded in buffer containing 6, 12, 18, or 24 mM K+ before and after electrical field stimulation (EFS) or exogenous acetylcholine (ACh). In 6 mM K+, Em was -58.1 +/- 1.0 mV (mean +/- SE). In 12 mM K+, Em was depolarized to -52.3 +/- 0.9 mV, basal Tm did not change, and both excitatory junctional potentials and contractile responses to EFS at short stimulus duration were larger than in 6 mM K+. No such potentiation occurred at a higher K+, although resting Em and Tm increased progressively above 12 mM K+. The sensitivity of ferret TSM to exogenous ACh appeared unaffected by K+. To determine whether the hyperresponsiveness in 12 mM K+ was due, in part, to augmented ACh release from intramural airway nerves, experiments were done using TSM preparations incubated with (3H)choline to measure (3H)ACh release at rest and during EFS. Although resting (3H)ACh release increased progressively in higher K+, release evoked by EFS was maximal in 12 mM K+ and declined in higher concentrations. We conclude that small elevations in the extracellular K+ concentration augment responsiveness of the airways, by increasing the release of ACh both at rest and during EFS from intramural cholinergic nerve terminals. Larger increases in K+ appear to be inhibitory, possibly due to voltage-dependent effects that occur both pre- and postsynaptically.
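
    (Illustrative note: the depolarization with rising extracellular K+ follows from the Nernst relation for potassium. The sketch below assumes an intracellular K+ concentration of 150 mM and a temperature of 310 K; the measured Em values above are less negative than these equilibrium potentials because other ionic conductances also contribute.)

        import math

        # Minimal Nernst-potential sketch for K+ (assumed values, for orientation only).
        R, F = 8.314, 96485.0          # J/(mol*K), C/mol
        T = 310.0                      # assumed temperature, K
        k_in = 150.0                   # assumed intracellular K+ concentration, mM

        for k_out in (6, 12, 18, 24):  # extracellular K+ concentrations used in the study, mM
            e_k = 1000.0 * (R * T / F) * math.log(k_out / k_in)   # equilibrium potential, mV
            print(f"[K+]o = {k_out:2d} mM -> E_K ≈ {e_k:6.1f} mV")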

  18. Entanglement distribution in star network based on spin chain in diamond

    NASA Astrophysics Data System (ADS)

    Zhu, Yuan-Ming; Ma, Lei

    2018-06-01

    After the star network of spins was proposed, generating entanglement directly through spin interactions between distant parties became possible. We propose an architecture involving coupled spin chains based on nitrogen-vacancy centers and nitrogen defect spins to expand the star network. The numerical analysis shows that the maximally achievable entanglement Em decays exponentially with the length M of the spin chains and with spin noise. The entanglement capability of this configuration under the effects of disorder and spin loss is also studied. Moreover, it is shown that with this kind of architecture, a star network of spins is feasible for measuring magnetic-field gradients.

  19. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    PubMed Central

    Kurnianingsih, Yoanna A.; Sim, Sam K. Y.; Chee, Michael W. L.; Mullette-Gillman, O’Dhaniel A.

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty, risk, and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61–80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for losses through changes in both individual preferences and the strategies individuals employ. PMID:26029092
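
    (Illustrative note: the two information types can be shown on a single hypothetical gamble-versus-certain choice. All dollar amounts and probabilities below are invented for illustration and are not the task parameters used in the study.)

        # Hypothetical gamble-versus-certain choice illustrating the two information types.
        # All values are invented and do not reproduce the study's task parameters.
        certain_amount = 20.0        # sure payoff
        win_amount = 50.0            # gamble payoff if it wins
        p_win = 0.5                  # probability the gamble pays off

        expected_value_gamble = p_win * win_amount
        maximizing_info = expected_value_gamble / certain_amount   # ratio of expected values
        satisficing_info = p_win                                   # probability of winning

        print(f"maximizing information (EV ratio) = {maximizing_info:.2f}")
        print(f"satisficing information (P(win))  = {satisficing_info:.2f}")
        # A pure value maximizer takes the gamble whenever the EV ratio exceeds 1,
        # however unlikely the win; a satisficer weights the win probability instead.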

  20. Reliability and cost: A sensitivity analysis

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1991-01-01

    In the design phase of a system, how a design engineer or manager chooses between a subsystem with .990 reliability and a more costly subsystem with .995 reliability is examined, along with the justification of the increased cost. High reliability is not necessarily an end in itself but may be desirable in order to reduce the expected cost due to subsystem failure. However, this may not be the wisest use of funds, since the expected cost due to subsystem failure is not the only cost involved; the subsystem itself may be very costly. Neither the cost of the subsystem nor the expected cost due to subsystem failure should be considered separately; rather, the total of the two costs, i.e., the cost of the subsystem plus the expected cost due to subsystem failure, should be minimized.
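
    (Illustrative note: the total-cost argument can be made concrete with a small numerical sketch. Only the two reliabilities come from the abstract; the subsystem prices and the cost of a failure are assumed values.)

        # Hypothetical total-cost comparison of the two subsystems mentioned above.
        # Only the reliabilities come from the abstract; all dollar figures are assumed.
        cost_of_failure = 5_000_000.0            # assumed cost incurred if the subsystem fails

        options = {
            "subsystem A (R = 0.990)": {"reliability": 0.990, "price": 100_000.0},
            "subsystem B (R = 0.995)": {"reliability": 0.995, "price": 160_000.0},
        }
        for name, opt in options.items():
            expected_failure_cost = (1.0 - opt["reliability"]) * cost_of_failure
            total = opt["price"] + expected_failure_cost
            print(f"{name}: price {opt['price']:,.0f} + expected failure cost "
                  f"{expected_failure_cost:,.0f} = total {total:,.0f}")
        # With these assumed numbers the cheaper subsystem minimizes total cost
        # (150,000 vs 185,000); a larger failure cost would flip the choice.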
