ERIC Educational Resources Information Center
Enders, Craig K.; Peugh, James L.
2004-01-01
Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
Approximate, computationally efficient online learning in Bayesian spiking neurons.
Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André
2014-03-01
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.
Application and performance of an ML-EM algorithm in NEXT
NASA Astrophysics Data System (ADS)
Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.
2017-08-01
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
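The ML-EM update at the heart of this kind of reconstruction is standard and compact; below is a minimal sketch assuming a known system matrix A, non-negative measured counts y, and a flat starting image. Variable names and the tiny synthetic example are illustrative only and are not taken from the NEXT software.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic ML-EM for emission tomography.

    A : (n_detectors, n_voxels) system matrix, A[i, j] = probability that
        an emission in voxel j is recorded in detector element i.
    y : (n_detectors,) measured counts.
    Returns the estimated emission image x of length n_voxels.
    """
    n_det, n_vox = A.shape
    sens = A.sum(axis=0)                 # sensitivity image, sum_i A_ij
    x = np.ones(n_vox)                   # flat, non-negative start
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, eps)
        x = x / np.maximum(sens, eps) * (A.T @ ratio)   # multiplicative update
    return x

# Tiny synthetic example: 2 voxels seen by 3 detector elements.
A = np.array([[0.8, 0.1],
              [0.1, 0.8],
              [0.1, 0.1]])
x_true = np.array([100.0, 30.0])
y = np.random.default_rng(0).poisson(A @ x_true)
print(mlem(A, y))
```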
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
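For reference, the ordered-subsets variant discussed above simply cycles the same multiplicative ML-EM update over disjoint subsets of the projection data, so each pass through the data applies several partial updates. A schematic dense-matrix sketch follows; the subset choice (interleaved detector rows) and variable names are simplifying assumptions, not the implementation evaluated in the paper.

```python
import numpy as np

def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
    """Ordered-subsets EM: one ML-EM-style update per data subset."""
    n_det, n_vox = A.shape
    # Interleaved subsets of detector rows (one common choice).
    subsets = [np.arange(s, n_det, n_subsets) for s in range(n_subsets)]
    x = np.ones(n_vox)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            sens_s = As.sum(axis=0)                 # subset sensitivity
            ratio = ys / np.maximum(As @ x, eps)
            x = x / np.maximum(sens_s, eps) * (As.T @ ratio)
    return x
```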
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and can fail to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
ERIC Educational Resources Information Center
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper addresses the tendency of the expectation-maximization (EM) algorithm to locate a local rather than a global maximum when estimating hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm, CEL-EM, that combines a constraint-based evolutionary algorithm (EA) with EM for HMM estimation in automatic speech recognition (ASR). The novelty of CEL-EM is that it applies to constraint-based models with many constraints and large numbers of parameters, such as the HMM, which are normally estimated with EM. Two constraint-based versions of CEL-EM with different fusion strategies are proposed for better HMM estimation in ASR. The first uses a traditional constraint-handling mechanism of the EA; the other transforms the constrained optimization problem into an unconstrained one using Lagrange multipliers. The fusion strategies follow a staged approach in which EM is invoked periodically after the EA has run for a specified period, so that the global sampling capability of the EA is preserved in the hybrid algorithm. A variable initialization approach (VIA) based on variable segmentation is also proposed to provide a better initialization for the EA within CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a strong EM baseline (VIA-EM, constructed by applying the VIA to EM).
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.
Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.
Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias
2016-12-01
Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in the clinical practice.
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper essentially focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which is a superimposition of phase measurements from multiple sources, into separated groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we propose to adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are iteratively and alternatively executed to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also realize an optimal estimation. Directional ambiguity is also addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and performance comparison. The verification of the proposed method is demonstrated with simulations. PMID:29617323
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Youngrok
2013-05-15
Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, while not completely unlabeled there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We particularly propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well to select the best proposed algorithm for each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed a superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images
Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali
2015-01-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing the fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method where projection data were generated from reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, small region of interest (ROI) setting and reverse processing were applied for improving performance. Both algorithms reduced artifacts instead of slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance.
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step in most applications of airborne LiDAR point clouds. Although many filtering algorithms have been proposed in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The algorithm assumes that the point cloud is generated by a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a Gaussian mixture. Expectation-maximization (EM) is applied to realize this separation: EM computes maximum likelihood estimates of the mixture parameters, and with these estimates the likelihood of each point belonging to ground or object can be evaluated. After several iterations, each point is labelled with the component of larger likelihood. Intensity information is also used to refine the filtering results obtained with the EM method. The proposed algorithm was tested on two different datasets used in practice, and the results show that it filters non-ground points effectively. For a quantitative evaluation, the dataset provided by the ISPRS was adopted; the proposed algorithm achieves a 4.48% total error, lower than that of most of the eight classical filtering algorithms reported by the ISPRS.
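The separation step described above amounts to fitting a two-component Gaussian mixture by EM and labelling each point with the more likely component. The following toy sketch assumes the only feature is a per-point height value; the paper's actual formulation (and its use of intensity) is not reproduced here.

```python
import numpy as np

def em_two_gaussians(z, n_iter=100, tol=1e-6):
    """EM for a two-component 1-D Gaussian mixture, e.g. ground vs. object heights."""
    pi = np.array([0.5, 0.5])
    mu = np.quantile(z, [0.25, 0.75])          # crude initialisation
    var = np.array([z.var(), z.var()])
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point.
        dens = np.exp(-0.5 * (z[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        num = pi * dens
        resp = num / num.sum(axis=1, keepdims=True)
        # M-step: update weights, means and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(z)
        mu = (resp * z[:, None]).sum(axis=0) / nk
        var = (resp * (z[:, None] - mu) ** 2).sum(axis=0) / nk
        ll = np.log(num.sum(axis=1)).sum()
        if ll - ll_old < tol:
            break
        ll_old = ll
    labels = resp.argmax(axis=1)               # hard ground / non-ground labels
    return labels, pi, mu, var

# Synthetic heights: a tight low "ground" component and a broader, higher "object" one.
rng = np.random.default_rng(1)
z = np.concatenate([rng.normal(0.0, 0.2, 800), rng.normal(5.0, 1.5, 200)])
labels, pi, mu, var = em_two_gaussians(z)
print(pi, mu)
```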
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
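A minimal sketch of the K-means versus EM (Gaussian mixture) comparison using scikit-learn is given below. The stand-in feature matrix is an assumption; the wine-quality attributes used in the paper are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in feature matrix; in the paper this would be the red-wine attributes.
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(4, 1, (100, 3))])

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

# Fraction of points on which the two hard clusterings agree (up to label swap).
agree = (km_labels == em_labels).mean()
print("agreement:", max(agree, 1 - agree))
```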
Application of the EM algorithm to radiographic images.
Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J
1992-01-01
The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
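For a Poisson imaging model with a known blur, EM-based restoration of this kind reduces to the familiar Richardson-Lucy multiplicative update. The sketch below illustrates that update under those assumptions; the paper's exact imaging model, PSF and processing parameters are not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    """EM (Richardson-Lucy) restoration for a Poisson imaging model with a known PSF."""
    psf = psf / psf.sum()                      # normalised blur kernel
    psf_flip = psf[::-1, ::-1]                 # adjoint (flipped) kernel
    estimate = np.full(observed.shape, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, eps)
        estimate = estimate * fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# Tiny demo: blur a bright square with a Gaussian PSF, add Poisson noise, restore.
rng = np.random.default_rng(0)
img = np.zeros((64, 64)); img[24:40, 24:40] = 50.0
g = np.exp(-0.5 * (np.arange(-7, 8) ** 2) / 4.0)
psf = np.outer(g, g)
blurred = np.clip(fftconvolve(img, psf / psf.sum(), mode="same"), 0, None)
noisy = rng.poisson(blurred).astype(float)
restored = richardson_lucy(noisy, psf, n_iter=25)
```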
Approximated mutual information training for speech recognition using myoelectric signals.
Guo, Hua J; Chan, A D C
2006-01-01
A new training algorithm called the approximated maximum mutual information (AMMI) is proposed to improve the accuracy of myoelectric speech recognition using hidden Markov models (HMMs). Previous studies have demonstrated that automatic speech recognition can be performed using myoelectric signals from articulatory muscles of the face. Classification of facial myoelectric signals can be performed using HMMs that are trained using the maximum likelihood (ML) algorithm; however, this algorithm maximizes the likelihood of the observations in the training sequence, which is not directly associated with optimal classification accuracy. The AMMI training algorithm attempts to maximize the mutual information, thereby training the HMMs to optimize their parameters for discrimination. Our results show that AMMI training consistently reduces the error rates compared to those obtained with ML training, increasing the accuracy by approximately 3% on average.
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem, with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels that were located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed using the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations are performed for each algorithm. However, using the proposed nested approach convergence is significantly accelerated enabling reconstruction using far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data to determine the probability distribution of charged particle scattering using a statistical multiple scattering model and determine a substantially maximum likelihood estimate of object volume scattering density using an expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence of and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles or cargo. The method can be implemented using a computer program which is executable on a computer.
Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui
2013-12-01
In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
Deterministic annealing for density estimation by multivariate normal mixtures
NASA Astrophysics Data System (ADS)
Kloppenburg, Martin; Tavan, Paul
1997-03-01
An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
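Deterministic annealing modifies the E-step by tempering the responsibilities with an inverse temperature that is gradually raised to 1 (plain EM), which smooths the likelihood surface early on. The sketch below shows that generic idea for a 1-D Gaussian mixture; it is not the authors' soft-constraint formulation, and the annealing schedule and initialisation are assumptions.

```python
import numpy as np

def daem_1d(x, k=3, betas=(0.2, 0.4, 0.7, 1.0), iters_per_beta=50):
    """Deterministic-annealing EM for a 1-D Gaussian mixture.

    Responsibilities come from tempered weights (pi_k * N(x | mu_k, var_k))**beta,
    with beta annealed up to 1, at which point the updates are ordinary EM.
    """
    rng = np.random.default_rng(0)
    pi = np.full(k, 1.0 / k)
    mu = rng.choice(x, k, replace=False)
    var = np.full(k, x.var())
    for beta in betas:
        for _ in range(iters_per_beta):
            dens = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            w = (pi * dens) ** beta                      # tempered E-step
            resp = w / w.sum(axis=1, keepdims=True)
            nk = resp.sum(axis=0)
            pi = nk / len(x)
            mu = (resp * x[:, None]).sum(axis=0) / nk
            var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-4, 1, 300), rng.normal(0, 1, 300), rng.normal(5, 1, 300)])
print(daem_1d(x))
```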
More Effective Distributed ML via a Stale Synchronous Parallel Parameter Server
Ho, Qirong; Cipar, James; Cui, Henggang; Kim, Jin Kyu; Lee, Seunghak; Gibbons, Phillip B.; Gibson, Garth A.; Ganger, Gregory R.; Xing, Eric P.
2014-01-01
We propose a parameter server system for distributed ML, which follows a Stale Synchronous Parallel (SSP) model of computation that maximizes the time computational workers spend doing useful work on ML algorithms, while still providing correctness guarantees. The parameter server provides an easy-to-use shared interface for read/write access to an ML model’s values (parameters and variables), and the SSP model allows distributed workers to read older, stale versions of these values from a local cache, instead of waiting to get them from a central storage. This significantly increases the proportion of time workers spend computing, as opposed to waiting. Furthermore, the SSP model ensures ML algorithm correctness by limiting the maximum age of the stale values. We provide a proof of correctness under SSP, as well as empirical results demonstrating that the SSP model achieves faster algorithm convergence on several different ML problems, compared to fully-synchronous and asynchronous schemes. PMID:25400488
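The core of the SSP model is the bounded-staleness rule: a worker may keep computing against cached parameter values as long as the slowest worker is at most a fixed number of clock ticks behind it. The single-process sketch below illustrates only that rule; the class and method names are hypothetical and do not reflect the actual parameter-server API.

```python
# Conceptual, single-process sketch of the Stale Synchronous Parallel (SSP) rule.
class SSPClock:
    def __init__(self, n_workers, staleness):
        self.staleness = staleness
        self.clocks = [0] * n_workers      # per-worker iteration counters

    def tick(self, worker):
        self.clocks[worker] += 1           # worker finished one iteration

    def can_use_cache(self, worker):
        # Cached (possibly stale) reads are allowed only while the staleness
        # bound holds; otherwise the worker must wait for fresh values.
        return self.clocks[worker] - min(self.clocks) <= self.staleness

clock = SSPClock(n_workers=3, staleness=2)
clock.tick(0); clock.tick(0); clock.tick(0)
print(clock.can_use_cache(0))   # False: worker 0 is 3 ticks ahead of the slowest
```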
Robust multiperson detection and tracking for mobile service and social robots.
Li, Liyuan; Yan, Shuicheng; Yu, Xinguo; Tan, Yeow Kee; Li, Haizhou
2012-10-01
This paper proposes an efficient system which integrates multiple vision models for robust multiperson detection and tracking for mobile service and social robots in public environments. The core technique is a novel maximum likelihood (ML)-based algorithm which combines the multimodel detections in mean-shift tracking. First, a likelihood probability which integrates detections and similarity to local appearance is defined. Then, an expectation-maximization (EM)-like mean-shift algorithm is derived under the ML framework. In each iteration, the E-step estimates the associations to the detections, and the M-step locates the new position according to the ML criterion. To be robust to the complex crowded scenarios for multiperson tracking, an improved sequential strategy to perform the mean-shift tracking is proposed. Under this strategy, human objects are tracked sequentially according to their priority order. To balance the efficiency and robustness for real-time performance, at each stage, the first two objects from the list of the priority order are tested, and the one with the higher score is selected. The proposed method has been successfully implemented on real-world service and social robots. The vision system integrates stereo-based and histograms-of-oriented-gradients-based human detections, occlusion reasoning, and sequential mean-shift tracking. Various examples to show the advantages and robustness of the proposed system for multiperson tracking from mobile robots are presented. Quantitative evaluations on the performance of multiperson tracking are also performed. Experimental results indicate that significant improvements have been achieved by using the proposed method.
NASA Astrophysics Data System (ADS)
Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.
2010-04-01
Electrical impedance tomography (EIT) has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of lungs of the patients suffering from diseases like pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to the patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters due to inaccurate selection of model parameters are overcome using the EM algorithm. Numerical experiments using chest shaped geometry are carried out with the proposed method and the performance is compared with the extended Kalman filter (EKF). Results show superior performance of EM in estimation of the lung boundary.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat fast tune to the EM folk song provided by the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results on our simulation experiments show improved performance in both fewer numbers of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual per cent returns and in the presence of economic indicators.
Ali, Sikander; Nawaz, Wajeeha
2016-08-01
The present research work is concerned with the biotransformation of L-tyrosine to dopamine (DA) by calcium alginate entrapped conidiospores of a mutant strain of Aspergillus oryzae. Different strains of A. oryzae were isolated from soil. Out of 13 isolated strains, isolate-2 (I-2) was found to be a better DA producer. The wild-type I-2 was chemically improved by treating it with different concentrations of ethyl methyl sulfonate (EMS). Among seven mutant variants, EMS-6 exhibiting maximal DA activity of 43 μg/ml was selected. The strain was further exposed with L-cysteine HCl to make it resistant against diversion and environmental stress. The conidiospores of the selected mutant variant A. oryzae EMS-6 strain were entrapped in calcium alginate beads. Different parameters for immobilization were investigated. The activity was further improved from 44 to 62 μg/ml under optimized conditions (1.5 % sodium alginate, 2 ml inoculum, and 2 mm bead size). The best resistant mutant variant exhibited an over threefold increase in DA activity (62 μg/ml) compared with wild-type I-2 (21 μg/ml) in the reaction mixture. From the results presented in the study, it was observed that high titers of DA activity in vitro could effectively be achieved by the EMS-induced mutagenesis of filamentous fungus culture used.
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm, known as the Evidential EM (E2M) algorithm. Eventually, both simulation and real experiments are shown to illustrate the advantage of using the EM in the case without uncertainty and the E2M in the case of uncertain measurement.
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) through the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation, mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of the mixed-Weibull distributions have multiple local maxima; therefore, the algorithm should start at several initial guesses of the parameter set.
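A simplified two-component sketch of this idea follows: only the component labels are treated as missing, censored observations enter through the Weibull survival function, and the weighted M-step is solved numerically. This is a didactic reconstruction under those assumptions, not the algorithm of the paper (which also handles postmortem data and up to five subpopulations).

```python
import numpy as np
from scipy.optimize import minimize

def weibull_logpdf(t, k, lam):
    return np.log(k / lam) + (k - 1) * np.log(t / lam) - (t / lam) ** k

def weibull_logsf(t, k, lam):            # log survival function
    return -(t / lam) ** k

def em_mixed_weibull(t, delta, n_iter=200):
    """EM for a 2-component Weibull mixture with right censoring.

    t : observed times; delta : 1 = failure observed, 0 = right-censored.
    """
    pi = np.array([0.5, 0.5])
    shapes = np.array([1.0, 1.5])
    scales = np.quantile(t, [0.4, 0.8])
    for _ in range(n_iter):
        # E-step: responsibilities from the density (events) or survival (censored).
        logp = np.empty((len(t), 2))
        for j in range(2):
            logp[:, j] = np.where(delta == 1,
                                  weibull_logpdf(t, shapes[j], scales[j]),
                                  weibull_logsf(t, shapes[j], scales[j]))
        logw = np.log(pi) + logp
        logw -= logw.max(axis=1, keepdims=True)          # numerical stabilisation
        w = np.exp(logw)
        resp = w / w.sum(axis=1, keepdims=True)
        # M-step: mixing proportions, then weighted censored Weibull MLE per component.
        pi = resp.mean(axis=0)
        for j in range(2):
            def nll(params, r=resp[:, j]):
                k, lam = np.exp(params)                  # keep parameters positive
                ll = np.where(delta == 1, weibull_logpdf(t, k, lam),
                              weibull_logsf(t, k, lam))
                return -(r * ll).sum()
            res = minimize(nll, np.log([shapes[j], scales[j]]), method="Nelder-Mead")
            shapes[j], scales[j] = np.exp(res.x)
    return pi, shapes, scales
```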
Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Dehmeshki, Jamshid
2014-04-01
Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multiresolution algorithm is proposed, together with scaled versions using Gaussian filtering and wavelet analysis, that extends the expectation maximization (EM) algorithm. The method is found to be less sensitive to noise and to produce more accurate segmentations than traditional EM. The algorithm was applied to 20 CT data sets of the human brain and compared with other works; the segmentation results are more promising than those of the compared methods and were verified by doctors.
ERIC Educational Resources Information Center
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is unpractical because it is time consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation- Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated and only one Gaussian is used by tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
2010-01-01
Background The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However a full Bayesian analysis is often not feasible due to the large computational time involved. Results This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. Conclusions emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time. PMID:20969788
DOE Office of Scientific and Technical Information (OSTI.GOV)
The purpose of the computer program is to generate system matrices that model the data acquisition process in dynamic single photon emission computed tomography (SPECT). The application is the reconstruction of dynamic data from projection measurements that provide the time evolution of activity uptake and washout in an organ of interest. The measurement of the time activity in the blood and organ tissue provides time-activity curves (TACs) that are used to estimate kinetic parameters. The program provides a correct model of the in vivo spatial and temporal distribution of radioactivity in organs. The model accounts for the attenuation of the internally emitting radioactivity, accounts for the varying point response of the collimators, and correctly models the time variation of the activity in the organs. One important application where the software is being used is measuring the arterial input function (AIF) in a dynamic SPECT study where the data are acquired with a slow camera rotation. Measurement of the AIF is essential to deriving quantitative estimates of regional myocardial blood flow using kinetic models. A study was performed to evaluate whether a slowly rotating SPECT system could provide accurate AIFs for myocardial perfusion imaging (MPI). Methods: Dynamic cardiac SPECT was first performed in human subjects at rest using a Philips Precedence SPECT/CT scanner. Dynamic measurements of Tc-99m-tetrofosmin in the myocardium were obtained using an infusion time of 2 minutes. Blood input, myocardium tissue and liver TACs were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. Results: The spatiotemporal 4D ML-EM reconstructions gave more accurate reconstructions than did standard frame-by-frame 3D ML-EM reconstructions. From additional computer simulations and phantom studies, it was determined that a 1 minute infusion with a SPECT system rotation speed providing 180 degrees of projection data every 54 s can produce measurements of blood pool and myocardial TACs. This has important application in the calculation of coronary flow reserve using rest/stress dynamic cardiac SPECT. The system matrices are used in maximum likelihood and maximum a posteriori formulations in estimation theory, where iterative algorithms (conjugate gradient, expectation maximization, or maximum a posteriori probability algorithms) determine the solution that maximizes a likelihood or a posteriori probability function.
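For reference, the core multiplicative ML-EM update used in this kind of emission reconstruction can be sketched as follows; the toy system matrix here is an assumption standing in for the full SPECT model (attenuation, collimator response, and time variation are not modeled).

```python
# Hedged sketch of the ML-EM update for emission tomography:
#   x_{k+1} = x_k / (A^T 1) * A^T( y / (A x_k) )
# A is a toy nonnegative system matrix, not the program's SPECT model.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 64, 128
A = rng.random((n_bins, n_pix))          # toy system matrix
x_true = rng.gamma(2.0, 1.0, n_pix)      # true activity distribution
y = rng.poisson(A @ x_true)              # Poisson projection data

x = np.ones(n_pix)                       # initial estimate
sens = A.T @ np.ones(n_bins)             # sensitivity image A^T 1
for _ in range(100):
    ratio = y / np.maximum(A @ x, 1e-12) # measured / forward-projected
    x *= (A.T @ ratio) / sens            # multiplicative ML-EM update
print(float(np.corrcoef(x, x_true)[0, 1]))  # correlation with truth improves with iterations
```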
Semi-supervised Learning for Phenotyping Tasks.
Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K
2015-01-01
Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
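The weighting idea can be illustrated with a minimal sketch of semi-supervised EM for a two-class Gaussian model in which unlabeled examples enter the M-step with weight lam < 1; this is a generic illustration under assumed synthetic data, not the authors' phenotyping system or feature set.

```python
# Minimal sketch of weighted ("augmented") semi-supervised EM: labeled data
# contribute with weight 1, unlabeled data with weight lam < 1.
import numpy as np

def semisup_em(xl, yl, xu, lam=0.1, n_iter=30):
    mu = np.array([xl[yl == c].mean() for c in (0, 1)])
    var = xl.var()                       # shared variance kept fixed for brevity
    pi = np.array([np.mean(yl == c) for c in (0, 1)])
    for _ in range(n_iter):
        # E-step: class responsibilities for the unlabeled data
        ll = -0.5 * (xu[:, None] - mu)**2 / var + np.log(pi)
        r = np.exp(ll - ll.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        # M-step: labeled counts get weight 1, unlabeled counts weight lam
        for c in (0, 1):
            w_lab = (yl == c).astype(float)
            num = w_lab.sum() + lam * r[:, c].sum()
            mu[c] = (w_lab @ xl + lam * r[:, c] @ xu) / num
            pi[c] = num
        pi /= pi.sum()
    return mu, pi

rng = np.random.default_rng(0)
xl = np.r_[rng.normal(0, 1, 20), rng.normal(3, 1, 20)]   # few labeled examples
yl = np.r_[np.zeros(20), np.ones(20)].astype(int)
xu = np.r_[rng.normal(0, 1, 500), rng.normal(3, 1, 500)] # plentiful unlabeled data
print(semisup_em(xl, yl, xu))
```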
A general probabilistic model for group independent component analysis and its estimation methods
Guo, Ying
2012-01-01
Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first is an exact EM algorithm, which provides an exact E-step and an explicit noniterative M-step. The second is a variational approximation EM algorithm, which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789
NASA Astrophysics Data System (ADS)
Lalush, D. S.; Tsui, B. M. W.
1998-06-01
We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
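The structural difference from plain ML-EM is the subset loop: each multiplicative update uses only one block of projection bins. The sketch below shows this with a toy system matrix and random subset partition, which are assumptions for illustration; detector response and scatter modeling are omitted.

```python
# Hedged sketch of ordered-subsets EM (OS-EM): the ML-EM update is applied to
# one subset of projection bins at a time, giving the speed-up discussed above.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins, n_subsets = 64, 120, 6
A = rng.random((n_bins, n_pix))          # toy nonnegative system matrix
x_true = rng.gamma(2.0, 1.0, n_pix)
y = rng.poisson(A @ x_true)              # Poisson projection data

subsets = np.array_split(np.arange(n_bins), n_subsets)
x = np.ones(n_pix)
for _ in range(5):                       # full passes through the data
    for s in subsets:                    # one multiplicative update per subset
        As, ys = A[s], y[s]
        x *= (As.T @ (ys / np.maximum(As @ x, 1e-12))) / (As.T @ np.ones(s.size))
print(float(np.corrcoef(x, x_true)[0, 1]))
```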
Fast estimation of diffusion tensors under Rician noise by the EM algorithm.
Liu, Jia; Gasbarra, Dario; Railavo, Juha
2016-01-15
Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomical, structural and orientational information about fibers in the human brain. Spectral data from the displacement distribution of water molecules located in the brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After the Fourier inversion, the noise distribution is Gaussian in both real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we are able to transform a non-linear regression problem into the generalized linear modeling framework, reducing the computational cost dramatically. The Fisher-scoring method is used to achieve fast convergence of the tensor parameters. The new method is implemented and applied using both synthetic and real data in a wide range of b-amplitudes up to 14,000 s/mm². Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying the priors. We describe how numerically close the estimators of the model parameters obtained through MLE and MAP estimation are. Copyright © 2015 Elsevier B.V. All rights reserved.
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
Ceriani, Luca; Ruberto, Teresa; Delaloye, Angelika Bischof; Prior, John O; Giovanella, Luca
2010-03-01
The purposes of this study were to characterize the performance of a 3-dimensional (3D) ordered-subset expectation maximization (OSEM) algorithm in the quantification of left ventricular (LV) function with (99m)Tc-labeled agent gated SPECT (G-SPECT), the QGS program, and a beating-heart phantom and to optimize the reconstruction parameters for clinical applications. A G-SPECT image of a dynamic heart phantom simulating the beating left ventricle was acquired. The exact volumes of the phantom were known and were as follows: end-diastolic volume (EDV) of 112 mL, end-systolic volume (ESV) of 37 mL, and stroke volume (SV) of 75 mL; these volumes produced an LV ejection fraction (LVEF) of 67%. Tomographic reconstructions were obtained after 10-20 iterations (I) with 4, 8, and 16 subsets (S) at full width at half maximum (FWHM) gaussian postprocessing filter cutoff values of 8-15 mm. The QGS program was used for quantitative measurements. Measured values ranged from 72 to 92 mL for EDV, from 18 to 32 mL for ESV, and from 54 to 63 mL for SV, and the calculated LVEF ranged from 65% to 76%. Overall, the combination of 10 I, 8 S, and a cutoff filter value of 10 mm produced the most accurate results. The plot of the measures with respect to the expectation maximization-equivalent iterations (I x S product) revealed a bell-shaped curve for the LV volumes and a reverse distribution for the LVEF, with the best results in the intermediate range. In particular, FWHM cutoff values exceeding 10 mm affected the estimation of the LV volumes. The QGS program is able to correctly calculate the LVEF when used in association with an optimized 3D OSEM algorithm (8 S, 10 I, and FWHM of 10 mm) but underestimates the LV volumes. However, various combinations of technical parameters, including a limited range of I and S (80-160 expectation maximization-equivalent iterations) and low cutoff values (< or =10 mm) for the gaussian postprocessing filter, produced results with similar accuracies and without clinically relevant differences in the LV volumes and the estimated LVEF.
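The phantom's nominal values quoted above are internally consistent, as a quick arithmetic check shows.

```python
# Quick check of the phantom's nominal values: LVEF = (EDV - ESV) / EDV.
edv, esv = 112.0, 37.0          # mL
sv = edv - esv                  # stroke volume, 75 mL
lvef = 100.0 * sv / edv
print(sv, round(lvef))          # 75.0 67
```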
Fu, J C; Chen, C C; Chai, J W; Wong, S T C; Li, I C
2010-06-01
We propose an automatic hybrid image segmentation model that integrates the statistical expectation maximization (EM) model and the spatial pulse coupled neural network (PCNN) for brain magnetic resonance imaging (MRI) segmentation. In addition, an adaptive mechanism is developed to fine-tune the PCNN parameters. The EM model serves two functions: evaluation of the PCNN image segmentation and adaptive adjustment of the PCNN parameters for optimal segmentation. To evaluate the performance of the adaptive EM-PCNN, we use it to segment MR brain images into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). The performance of the adaptive EM-PCNN is compared with that of the non-adaptive EM-PCNN, EM, and Bias Corrected Fuzzy C-Means (BCFCM) algorithms. The result is four sets of boundaries for the GM and the brain parenchyma (GM+WM), the two regions of most interest in medical research and clinical applications. Each set of boundaries is compared with the gold standard to evaluate the segmentation performance. The adaptive EM-PCNN significantly outperforms the non-adaptive EM-PCNN, EM, and BCFCM algorithms in gray matter segmentation. In brain parenchyma segmentation, the adaptive EM-PCNN significantly outperforms the BCFCM only. However, the adaptive EM-PCNN is better than the non-adaptive EM-PCNN and EM on average. We conclude that of the three approaches, the adaptive EM-PCNN yields the best results for gray matter and brain parenchyma segmentation. Copyright 2009 Elsevier Ltd. All rights reserved.
Alam, M S; Bognar, J G; Cain, S; Yasuda, B J
1998-03-10
During the process of microscanning, a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented for real-time applications on currently available high-performance processors. In that algorithm, the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to reduce the computational burden significantly when compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
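As a generic illustration of one-step shift estimation (not the authors' cost-function registration), the sketch below recovers an integer frame shift by phase correlation; the synthetic reference frame and noise level are assumptions.

```python
# Illustrative one-step shift estimation by phase correlation (numpy only).
# A single cross-power-spectrum peak gives the translational shift, which
# could then feed a simplified EM superresolution step.
import numpy as np

def estimate_shift(ref, img):
    F = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))   # cross-power spectrum
    corr = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the symmetric range
    dy = dy - corr.shape[0] if dy > corr.shape[0] // 2 else dy
    dx = dx - corr.shape[1] if dx > corr.shape[1] // 2 else dx
    return dy, dx

rng = np.random.default_rng(0)
ref = rng.random((64, 64))
shifted = np.roll(ref, (3, -5), axis=(0, 1)) + 0.05 * rng.random((64, 64))
print(estimate_shift(ref, shifted))   # approximately (3, -5)
```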
Counting malaria parasites with a two-stage EM-based algorithm using crowdsourced data.
Cabrera-Bean, Margarita; Pages-Zamora, Alba; Diaz-Vilor, Carles; Postigo-Camps, Maria; Cuadrado-Sanchez, Daniel; Luengo-Oroz, Miguel Angel
2017-07-01
Worldwide malaria eradication is currently one of the WHO's main global goals. In this work, we focus on the use of human-machine interaction strategies for low-cost, fast and reliable malaria diagnosis based on a crowdsourced approach. The technical problem addressed consists of detecting spots in images even under very harsh conditions, when positive objects are very similar to some artifacts. The clicks or tags delivered by several annotators labeling an image are modeled as a robust finite mixture, and techniques based on the Expectation-Maximization (EM) algorithm are proposed for accurately counting malaria parasites on thick blood smears obtained by Giemsa-stained microscopy. This approach outperforms other traditional methods, as shown through experiments with real data.
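A toy version of the aggregation idea can be sketched with a plain Gaussian mixture over annotator click coordinates, choosing the number of components by BIC; this is a generic GMM sketch under assumed synthetic clicks, not the robust two-stage model of the paper.

```python
# Toy illustration: annotator clicks around each true parasite are modeled as
# a Gaussian mixture, and the number of spots is chosen by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
true_spots = rng.uniform(0, 100, size=(4, 2))          # 4 parasites in a 100x100 field
clicks = np.vstack([s + rng.normal(0, 1.5, size=(6, 2)) for s in true_spots])

best = min(
    (GaussianMixture(k, random_state=0).fit(clicks) for k in range(1, 8)),
    key=lambda m: m.bic(clicks),
)
print(best.n_components, np.round(best.means_, 1))      # estimated count and spot centers
```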
Results on the neutron energy distribution measurements at the RECH-1 Chilean nuclear reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilera, P., E-mail: paguilera87@gmail.com; Romero-Barrientos, J.; Universidad de Chile, Dpto. de Física, Facultad de Ciencias, Las Palmeras 3425, Nuñoa, Santiago
2016-07-07
Neutron activation experiments have been performed at the RECH-1 Chilean Nuclear Reactor to measure its neutron flux energy distribution. Samples of pure elements were activated to obtain the saturation activities for each reaction. Using γ-ray spectroscopy, we identified and measured the activity of the reaction product nuclei, obtaining the saturation activities of 20 reactions. GEANT4 and MCNP were used to compute the self-shielding factors to correct the cross section for each element. With the Expectation-Maximization (EM) algorithm we were able to unfold the neutron flux energy distribution at the dry tube position, near the RECH-1 core. In this work, we present the unfolding results obtained with the EM algorithm.
Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo
2018-06-01
Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolved photon counting detector can help enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, which can lead to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for the reconstruction of CESBCT images acquired using an energy-resolving photon counting detector, and its performance is investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with the FBP-based projection-based weighting imaging method. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows a significant improvement in the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of them. Nevertheless, the intensity histograms of white matter and gray matter are not symmetric and exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF) in which the components are modeled using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: firstly, it is more robust to outliers; secondly, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
Sparse-view proton computed tomography using modulated proton beams.
Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong
2015-02-01
Proton imaging that uses a modulated proton beam and an intensity detector allows relatively fast image acquisition compared to the imaging approach based on a trajectory-tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The geometric straight-ray model assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. To improve the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Objects of higher electron density were reconstructed more accurately than those of lower density; the bone, for example, was reconstructed within 1% error. EM-based algorithms produced increased image noise and RMSE as the number of iterations reached about 20, while the POCS-based algorithms showed a monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the region of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied to proton CT image reconstruction. Although the images still need to be improved for practical application to treatment planning, proton CT imaging using modulated beams in sparse-view sampling has demonstrated its feasibility.
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, J.; Qu, M.; Leng, S.; McCollough, C. H.
2010-04-01
In this study, the feasibility of differentiating uric acid from non-uric acid kidney stones in the presence of iodinated contrast material was evaluated using dual-energy CT (DECT). Iodine subtraction was accomplished with a commercial three-material decomposition algorithm to create a virtual non-contrast (VNC) image set. VNC images were then used to segment stone regions from the tissue background. The DE ratio of each stone was calculated using the CT images acquired at two different energies with DECT, using the stone map generated from the VNC images. The performance of DE ratio-based stone differentiation was evaluated at five different iodine concentrations (21, 42, 63, 84 and 105 mg/ml). The DE ratio of stones in iodine solution was found to be larger than that obtained in non-iodine cases. This is mainly caused by the partial volume effect around the boundary between the stone and the iodine solution. The overestimation of the DE ratio leads to substantial overlap between different stone types. To address the partial volume effect, an expectation-maximization (EM) approach was implemented to estimate the contribution of iodine and stone within each image pixel in their mixture area. The DE ratio of each stone was corrected to maximally remove the influence of the iodine solution. The separation of uric-acid and non-uric-acid stones was improved in the presence of iodine solution.
Making adjustments to event annotations for improved biological event extraction.
Baek, Seung-Cheol; Park, Jong C
2016-09-16
Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe that there is ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. 'transcriptional'), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting the possibility that a statistical learning algorithm misses an opportunity for generalizing from such event triggers. We anticipate that adjustments to the span of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique, which consults the gold-standard event trigger annotations in a form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner and on the test corpus by a narrow margin. The analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they could be small in number.
NASA Astrophysics Data System (ADS)
Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.
2012-01-01
Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
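Fitting the one-compartment perfusion model to blood-input and tissue TACs can be sketched as a least-squares problem on a coarse time grid; the gamma-like input function, parameter values and noise level below are synthetic placeholders, not the study's data.

```python
# Hedged sketch: fit a one-compartment model
#   C_T(t) = K1 * integral_0^t Ca(s) exp(-k2 (t - s)) ds
# to a tissue TAC given a blood-input TAC, estimating K1 and k2.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 10, 0.25)                   # minutes
dt = t[1] - t[0]
ca = (t / 0.8) * np.exp(1 - t / 0.8)         # toy arterial input function

def tissue_model(t, K1, k2):
    # discrete convolution of the input with an exponential impulse response
    return K1 * dt * np.convolve(ca, np.exp(-k2 * t))[: t.size]

rng = np.random.default_rng(0)
ct = tissue_model(t, 0.8, 0.3) + rng.normal(0, 0.01, t.size)  # simulated tissue TAC
(K1, k2), _ = curve_fit(tissue_model, t, ct, p0=[0.5, 0.5])
print(round(K1, 3), round(k2, 3))            # close to the simulated 0.8 and 0.3
```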
Algorithmic detectability threshold of the stochastic block model
NASA Astrophysics Data System (ADS)
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
NASA Astrophysics Data System (ADS)
Uchida, Y.; Takada, E.; Fujisaki, A.; Kikuchi, T.; Ogawa, K.; Isobe, M.
2017-08-01
A method to stochastically discriminate neutron and γ-ray signals measured with a stilbene organic scintillator is proposed. Each pulse signal was stochastically categorized into two groups: neutron and γ-ray. In previous work, the Expectation Maximization (EM) algorithm was used with the assumption that the measured data followed a Gaussian mixture distribution. It was shown that probabilistic discrimination between these groups is possible. Moreover, by setting the initial parameters for the Gaussian mixture distribution with a k-means algorithm, the possibility of automatic discrimination was demonstrated. In this study, the Student's t-mixture distribution was used as a probabilistic distribution with the EM algorithm to improve the robustness against the effect of outliers caused by pileup of the signals. To validate the proposed method, the figures of merit (FOMs) were compared for the EM algorithm assuming a t-mixture distribution and a Gaussian mixture distribution. The t-mixture distribution resulted in an improvement of the FOMs compared with the Gaussian mixture distribution. The proposed data processing technique is a promising tool not only for neutron and γ-ray discrimination in fusion experiments but also in other fields, for example, homeland security, cancer therapy with high energy particles, nuclear reactor decommissioning, pattern recognition, and so on.
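A minimal Gaussian-mixture version of the discrimination and its figure of merit can be sketched as below; the 1D pulse-shape feature and its two populations are synthetic assumptions, and the paper's Student's t mixture would replace the Gaussian components to resist pile-up outliers.

```python
# Minimal sketch: two-component Gaussian-mixture EM on a 1D pulse-shape
# feature (e.g., tail-to-total charge ratio) and the resulting figure of
# merit, FOM = |mu1 - mu2| / (FWHM1 + FWHM2).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
feature = np.r_[rng.normal(0.12, 0.02, 5000),    # gamma-like pulses
                rng.normal(0.25, 0.03, 1500)]    # neutron-like pulses

gmm = GaussianMixture(2, random_state=0).fit(feature.reshape(-1, 1))
mu = gmm.means_.ravel()
sigma = np.sqrt(gmm.covariances_.ravel())
fom = abs(mu[0] - mu[1]) / (2.355 * (sigma[0] + sigma[1]))  # FWHM = 2.355 sigma
print(round(fom, 2))
```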
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-01-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10–20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991
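For context, the indirect (post-reconstruction) standard Patlak step reduces to a linear fit over late frames: C_T(t)/Cp(t) is linear in the "stretched time" integral(Cp)/Cp(t), with slope Ki and intercept V. The sketch below uses synthetic curves, not the study's data, and does not reproduce the 4D nested reconstruction.

```python
# Hedged sketch of indirect standard Patlak analysis on synthetic TACs.
import numpy as np

t = np.linspace(0.5, 60, 40)                         # minutes
cp = 10 * np.exp(-0.15 * t) + 1.0                    # toy plasma input function
Ki_true, V_true = 0.05, 0.6
int_cp = np.cumsum(cp) * (t[1] - t[0])               # running integral of Cp
ct = Ki_true * int_cp + V_true * cp                  # irreversible-uptake tissue curve

x = int_cp / cp                                       # Patlak abscissa
y = ct / cp                                           # Patlak ordinate
late = t > 20                                         # use only late frames
Ki, V = np.polyfit(x[late], y[late], 1)               # slope = Ki, intercept = V
print(round(Ki, 3), round(V, 3))                      # recovers 0.05 and 0.6
```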
Identification of the focal plane wavefront control system using E-M algorithm
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy; Vanderbei, Robert
2017-09-01
In a typical focal plane wavefront control (FPWC) system, such as the adaptive optics system of NASA's WFIRST mission, the efficient controllers and estimators in use are usually model-based. As a result, the modeling accuracy of the system influences the ultimate performance of the control and estimation. Currently, a linear state space model is used and calculated based on lab measurements using Fourier optics. Although the physical model is clearly defined, it is usually biased due to incorrect distance measurements, imperfect diagnoses of the optical aberrations, and our lack of knowledge of the deformable mirrors (actuator gains and influence functions). In this paper, we present a new approach for measuring/estimating the linear state space model of a FPWC system using the expectation-maximization (E-M) algorithm. Simulation and lab results in Princeton's High Contrast Imaging Lab (HCIL) show that the E-M algorithm handles both amplitude and phase errors well and accurately recovers the system. Using the recovered state space model, the controller creates dark holes more quickly. The final accuracy of the model depends on the amount of data used for learning.
PEG Enhancement for EM1 and EM2+ Missions
NASA Technical Reports Server (NTRS)
Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt
2018-01-01
NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The next evolution of SLS, the Block-1B Exploration Mission 2 (EM-2), is currently being designed. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm. Due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS), certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions. In order to accommodate mission design for EM-2 and beyond, PEG has been significantly improved since its use on the Space Shuttle program. The current version of PEG has the ability to switch to different targets during Core Stage (CS) or EUS flight, and can automatically reconfigure for a single Engine Out (EO) scenario, loss of communication with the Launch Abort System (LAS), and Inertial Navigation System (INS) failure. The Thrust Factor (TF) algorithm uses measured state information in addition to a priori parameters, providing PEG with an improved estimate of propulsion information. This provides robustness against unknown or undetected engine failures. A loft parameter input allows LAS jettison while maximizing payload mass. The current PEG algorithm is now able to handle various classes of missions with burn arcs much longer than were seen in the shuttle program. These missions include targeting a circular LEO orbit with a low-thrust, long-burn-duration upper stage, targeting a highly eccentric Trans-Lunar Injection (TLI) orbit, targeting a disposal orbit using the low-thrust Reaction Control System (RCS), and targeting a hyperbolic orbit. This paper will describe the design and implementation of the TF algorithm, the strategy to handle EO in various flight regimes, algorithms to cover off-nominal conditions, and other enhancements to the Block-1 PEG algorithm. This paper illustrates challenges posed by the Block-1B vehicle, and results show that the improved PEG algorithm is suitable for use on the SLS Block-1B vehicle as part of the Guidance, Navigation, and Control System.
Estimation of mating system parameters in plant populations using marker loci with null alleles.
Ross, H A
1986-06-01
An Expectation-Maximization (EM) algorithm procedure is presented that extends the method of Cheliak et al. (1983) for maximum-likelihood estimation of mating system parameters in mixed mating system models. The extension permits the estimation of the rate of self-fertilization (s) and allele frequencies (Pi) in outcrossing pollen at marker loci having recessive null alleles. The algorithm makes use of maternal and filial genotypic arrays obtained by the electrophoretic analysis of cohorts of progeny. The genotypes of maternal plants must be known. Explicit equations are given for cases when the genotype of the maternal gamete inherited by a seed can (gymnosperms) or cannot (angiosperms) be determined. The procedure can accommodate any number of codominant alleles, but only one recessive null allele at each locus. An example, using actual data from Pinus banksiana, is presented to illustrate the application of this EM algorithm to the estimation of mating system parameters using marker loci having both codominant and recessive alleles.
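The null-allele ambiguity that motivates the EM treatment can be illustrated with a simplified gene-counting sketch at a single locus with two codominant alleles and one recessive null allele; the phenotype counts are invented for illustration, and this covers only allele-frequency estimation, not the full mixed-mating-system estimator described above.

```python
# Simplified EM "gene counting" at a locus with codominant alleles A, B and a
# recessive null allele n: a single-band phenotype is an ambiguous mixture of
# the homozygote (e.g., AA) and the null heterozygote (An).
import numpy as np

obs = {"A": 120, "B": 80, "AB": 60, "blank": 10}     # hypothetical phenotype counts
N = sum(obs.values())

pA, pB, pn = 0.4, 0.3, 0.3                           # initial allele frequencies
for _ in range(100):
    # E-step: split ambiguous single-band phenotypes into homozygote/null-heterozygote
    eAA = obs["A"] * pA**2 / (pA**2 + 2 * pA * pn)
    eAn = obs["A"] - eAA
    eBB = obs["B"] * pB**2 / (pB**2 + 2 * pB * pn)
    eBn = obs["B"] - eBB
    # M-step: allele counting over 2N genes
    pA = (2 * eAA + eAn + obs["AB"]) / (2 * N)
    pB = (2 * eBB + eBn + obs["AB"]) / (2 * N)
    pn = (2 * obs["blank"] + eAn + eBn) / (2 * N)
print(round(pA, 3), round(pB, 3), round(pn, 3))      # estimated frequencies sum to 1
```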
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
Inverse Ising problem in continuous time: A latent variable approach
NASA Astrophysics Data System (ADS)
Donner, Christian; Opper, Manfred
2017-12-01
We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low-contrast microcalcifications, the FBP reduced detectability due to its increased noise. The EM algorithm yielded high conspicuity for both microcalcifications and masses and yielded better ASFs in terms of the full width at half maximum. In terms of texture analysis, the FBP algorithm showed higher contrast and lower homogeneity than the other algorithms. The patient images reconstructed using the EM algorithm showed high visibility of low-contrast masses with clear borders. In this study, we compared three reconstruction algorithms by using various kinds of breast phantoms and patient cases. Future work using these algorithms and considering the type of the breast and the acquisition techniques used (e.g., angular range, dose distribution) should include actual patients or patient-like phantoms to increase the potential for practical applications.
A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
Compression of strings with approximate repeats.
Allison, L; Edgoose, T; Dix, T I
1998-01-01
We describe a model for strings of characters that is loosely based on the Lempel Ziv model with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model because this gives the probability of the data under the model. The model has a small number of parameters and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n²) time and a few iterations are typically sufficient. O(n²) complexity is impractical for strings of more than a few tens of thousands of characters and a faster approximation algorithm is also given. The model is further extended to include approximate reverse complementary repeats when analyzing DNA strings. Tests include the recovery of parameter estimates from known sources and applications to real DNA strings.
Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.
Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman
2010-08-07
We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posterior reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N
2016-04-01
Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model that includes non-constant factor loadings that change over time and space, using P-splines penalized with the generalized cross-validation (GCV) criterion. The model is implemented using the Expectation-Maximization (EM) algorithm, and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.
NASA Astrophysics Data System (ADS)
Tarnopolski, Mariusz
2018-01-01
The Chirikov standard map and the 2D Froeschlé map are investigated. A few thousand values of the Hurst exponent (HE) and the maximal Lyapunov exponent (mLE) are plotted in a mixed space of the nonlinear parameter versus the initial condition. Both characteristic exponents reveal remarkably similar structures in this space. A tight correlation between the HEs and mLEs is found, with Spearman rank ρ = 0.83 and ρ = 0.75 for the Chirikov and 2D Froeschlé maps, respectively. Based on this relation, a machine learning (ML) procedure, using the nearest neighbor algorithm, is performed to reproduce the HE distribution from the mLE distribution alone. A few thousand HE and mLE values from the mixed spaces were used for training, and then, using 2-2.4 × 10⁵ mLEs, the HEs were retrieved. The ML procedure allowed the structure of the mixed spaces to be reproduced in great detail.
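The nearest-neighbor step can be sketched as a simple regression from mLE to HE; the monotone synthetic relation used below is an assumption standing in for the map scans, not the actual exponent data.

```python
# Sketch of the nearest-neighbor step: learn a map from the maximal Lyapunov
# exponent (mLE) to the Hurst exponent (HE) and predict HE for new mLE values.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
mle_train = rng.uniform(0, 1, 3000)                              # training mLEs
he_train = 0.5 + 0.4 * np.tanh(5 * (mle_train - 0.3)) \
           + rng.normal(0, 0.02, 3000)                           # synthetic HE relation

knn = KNeighborsRegressor(n_neighbors=5).fit(mle_train.reshape(-1, 1), he_train)
mle_new = rng.uniform(0, 1, 200000)                              # large set of new mLEs
he_pred = knn.predict(mle_new.reshape(-1, 1))
print(he_pred[:5].round(3))
```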
NASA Astrophysics Data System (ADS)
Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah
2017-08-01
In regression analysis, missing covariate data have been a common problem. Many researchers use ad hoc methods to overcome this problem due to the ease of implementation. However, these methods require assumptions about the data that rarely hold in practice. Model-based methods such as Maximum Likelihood (ML) using the expectation maximization (EM) algorithm and Multiple Imputation (MI) are more promising when dealing with difficulties caused by missing data. At the same time, inappropriate methods of missing value imputation can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of missing data concepts to assist researchers in selecting appropriate missing data imputation methods. A simulation study was performed to assess the effects of different missing data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution and the dependent variable was generated as a combination of the explanatory variables. Missing values in the covariates were simulated under a missing at random (MAR) mechanism. Four levels of missingness (10%, 20%, 30% and 40%) were imposed. ML and MI techniques available within SAS software were investigated. A linear regression model was fitted and the model performance measures, MSE and R-squared, were obtained. Results of the analysis showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Both methods were unable to handle levels of missingness greater than 30%.
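The simulation design described above can be mimicked with generic tools. The sketch below is not the SAS ML/MI analysis from the study: it generates multivariate normal covariates, imposes MAR missingness at the four stated rates, and compares complete-case regression with an iterative (multiple-imputation-style) imputer; the MAR mechanism and model coefficients are assumptions.

```python
# Toy MAR simulation: complete-case analysis vs iterative imputation.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(3)
n = 2000
X = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=n)
y = 1.0 + 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

for rate in (0.1, 0.2, 0.3, 0.4):
    Xm = X.copy()
    # MAR: probability that X1 is missing depends only on the observed X0
    p = rate * 2 * (X[:, 0] > np.quantile(X[:, 0], 0.5))
    Xm[rng.random(n) < p, 1] = np.nan

    cc = ~np.isnan(Xm[:, 1])                               # complete-case analysis
    r2_cc = LinearRegression().fit(Xm[cc], y[cc]).score(Xm[cc], y[cc])

    Xi = IterativeImputer(random_state=0).fit_transform(Xm)  # imputation-based analysis
    r2_mi = LinearRegression().fit(Xi, y).score(Xi, y)
    print(f"missing {rate:.0%}: R2 complete-case={r2_cc:.3f}, imputed={r2_mi:.3f}")
```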
Item-Based Top-N Recommendation Algorithms
2003-01-20
Only fragments of this abstract survive: the report addresses item-based top-N recommendation, noting that recommending to a basket of items, as many e-commerce sites must, cannot take advantage of pre-computed user-to-user similarities, and it tabulates statistics for several benchmark datasets (ccard, ecommerce, em, ml); the table itself is not recoverable here.
Mino, H
2007-01-01
This paper addresses the estimation of the parameters, namely the impulse response (IR) functions, of the linear time-invariant systems generating intensity processes in shot-noise-driven doubly stochastic Poisson processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
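One of the acceleration ideas cited above, ordered subsets, is easy to sketch. The toy example below (random geometry and sizes, not the aviation-security system or the AM algorithm in the paper) cycles EM-style multiplicative updates over subsets of the projection data, which typically speeds early convergence roughly in proportion to the number of subsets.

```python
# Minimal ordered-subsets EM (OSEM) sketch on a random toy geometry.
import numpy as np

rng = np.random.default_rng(4)
n_pix, n_bins, n_subsets = 100, 240, 8

A = rng.random((n_bins, n_pix))                 # toy system matrix
x_true = rng.random(n_pix)
y = rng.poisson(A @ x_true * 50)                # simulated counts (scaled for statistics)

x = np.ones(n_pix)
subsets = np.array_split(rng.permutation(n_bins), n_subsets)
for it in range(5):                             # each full pass applies n_subsets updates
    for s in subsets:
        As, ys = A[s], y[s]
        ratio = ys / np.clip(As @ x, 1e-12, None)
        x *= (As.T @ ratio) / np.clip(As.sum(axis=0), 1e-12, None)

print("relative error:", np.linalg.norm(x / 50 - x_true) / np.linalg.norm(x_true))
```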
Alignment of cryo-EM movies of individual particles by optimization of image translations.
Rubinstein, John L; Brubaker, Marcus A
2015-11-01
Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual <1 MDa protein particle trajectories to be estimated, but requires rolling averages to be calculated from frames and fits linear trajectories for particles. Here we describe an algorithm that allows for individual <1 MDa particle images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsui, B.M.W.; Frey, E.C.; Lalush, D.S.
1996-12-31
We investigated methods to accurately reconstruct 180° truncated TCT and SPECT projection data obtained from a right-angle dual-camera SPECT system for myocardial SPECT with attenuation compensation. The 180° data reconstruction methods would permit substantial savings in transmission data acquisition time. Simulation data from the 3D MCAT phantom and clinical data from large patients were used in the evaluation study. Different transmission reconstruction methods, including the FBP, transmission ML-EM, transmission ML-SA, and BIT algorithms with and without using the body contour as support, were used in the TCT image reconstructions. The accuracy of both the TCT and attenuation compensated SPECT images was evaluated for different degrees of truncation and noise levels. We found that using the FBP reconstructed TCT images resulted in higher count density in the left ventricular (LV) wall of the attenuation compensated SPECT images. The LV wall count densities obtained using the iteratively reconstructed TCT images with and without support were similar to each other and were more accurate than those obtained using the FBP. However, the TCT images obtained with support show fewer image artifacts than those without support. Among the iterative reconstruction algorithms, the ML-SA algorithm provides the most accurate reconstruction but is the slowest. The BIT algorithm is the fastest but shows the most image artifacts. We conclude that accurate attenuation compensated images can be obtained with truncated 180° data from large patients using a right-angle dual-camera SPECT system.
Statistical modeling, detection, and segmentation of stains in digitized fabric images
NASA Astrophysics Data System (ADS)
Gururajan, Arunkumar; Sari-Sarraf, Hamed; Hequet, Eric F.
2007-02-01
This paper will describe a novel and automated system based on a computer vision approach for objective evaluation of stain release on cotton fabrics. Digitized color images of the stained fabrics are obtained, and the pixel values in the color and intensity planes of these images are probabilistically modeled as a Gaussian Mixture Model (GMM). Stain detection is posed as a decision-theoretic problem, where the null hypothesis corresponds to the absence of a stain. The null hypothesis and the alternate hypothesis mathematically translate into a first-order GMM and a second-order GMM, respectively. The parameters of the GMM are estimated using a modified Expectation-Maximization (EM) algorithm. Minimum Description Length (MDL) is then used as the test statistic to decide the verity of the null hypothesis. The stain is then segmented by a decision rule based on the probability map generated by the EM algorithm. The proposed approach was tested on a dataset of 48 fabric images soiled with stains of ketchup, corn oil, mustard, Ragu sauce, Revlon makeup and grape juice. The decision-theoretic part of the algorithm produced a correct detection rate (true positive) of 93% and a false alarm rate of 5% on this set of images.
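The detection step described above can be imitated with standard tools. The sketch below fits one- and two-component Gaussian mixtures to synthetic pixel intensities and uses BIC, a common MDL-style criterion, in place of the paper's MDL statistic to decide whether a stain component is present; the intensity values and the 0.5 posterior cut-off are assumptions.

```python
# One- vs two-component mixture test for stain presence, then segmentation
# from the posterior probability map. Data are synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
background = rng.normal(0.7, 0.05, size=(4000, 1))        # clean-fabric pixels
stain = rng.normal(0.45, 0.08, size=(600, 1))             # darker stained pixels
pixels = np.vstack([background, stain])

gmm1 = GaussianMixture(n_components=1, random_state=0).fit(pixels)
gmm2 = GaussianMixture(n_components=2, random_state=0).fit(pixels)

if gmm2.bic(pixels) < gmm1.bic(pixels):                    # reject the "no stain" hypothesis
    post = gmm2.predict_proba(pixels)                      # probability map for segmentation
    stain_comp = int(np.argmin(gmm2.means_.ravel()))       # darker component treated as stain
    mask = post[:, stain_comp] > 0.5
    print("stain detected, segmented pixels:", int(mask.sum()))
else:
    print("no stain detected")
```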
Detection of delamination defects in CFRP materials using ultrasonic signal processing.
Benammar, Abdessalem; Drai, Redouane; Guessoum, Abderrezak
2008-12-01
In this paper, signal processing techniques are tested for their ability to resolve echoes associated with delaminations in carbon fiber-reinforced polymer multi-layered composite materials (CFRP) detected by ultrasonic methods. These methods include split spectrum processing (SSP) and the expectation-maximization (EM) algorithm. A simulation study on defect detection was performed, and results were validated experimentally on CFRP with and without delamination defects taken from aircraft. A comparison of the methods' ability to resolve echoes is made.
Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there is missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
NASA Astrophysics Data System (ADS)
Metzger, Andrew; Benavides, Amanda; Nopoulos, Peg; Magnotta, Vincent
2016-03-01
The goal of this project was to develop two age-appropriate atlases (neonatal and one year old) that account for the rapid growth and maturational changes that occur during early development. Tissue maps from this age group were initially created by manually correcting the resulting tissue maps after applying an expectation maximization (EM) algorithm and an adult atlas to pediatric subjects. The EM algorithm classified each voxel into one of ten possible tissue types including several subcortical structures. This was followed by a novel level set segmentation designed to improve differentiation between distal cortical gray matter and white matter. To minimize the required manual corrections, the adult atlas was registered to the pediatric scans using high-dimensional, symmetric image normalization (SyN) registration. The subject images were then mapped to an age-specific atlas space, again using SyN registration, and the resulting transformation applied to the manually corrected tissue maps. The individual maps were averaged in the age-specific atlas space and blurred to generate the age-appropriate anatomical priors. The resulting anatomical priors were then used by the EM algorithm to re-segment the initial training set as well as an independent testing set. The results from the adult and age-specific anatomical priors were compared to the manually corrected results. The age-appropriate atlas provided superior results as compared to the adult atlas. The image analysis pipeline used in this work was built using the open source software package BRAINSTools.
Estimation for general birth-death processes
Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.
2013-01-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
Muthalib, Makii; Jubeau, Marc; Millet, Guillaume Y; Maffiuletti, Nicola A; Nosaka, Kazunori
2009-09-01
This study compared voluntary (VOL) and electrically evoked isometric contractions by muscle stimulation (EMS) for changes in biceps brachii muscle oxygenation (tissue oxygenation index, DeltaTOI) and total haemoglobin concentration (DeltatHb = oxygenated haemoglobin + deoxygenated haemoglobin) determined by near-infrared spectroscopy. Twelve men performed EMS with one arm followed 24 h later by VOL with the contralateral arm, consisting of 30 repeated (1-s contraction, 1-s relaxation) isometric contractions at 30% of maximal voluntary contraction (MVC) for the first 60 s, and maximal intensity contractions thereafter (MVC for VOL and maximal tolerable current at 30 Hz for EMS) until MVC decreased approximately 30% of pre-exercise MVC. During the 30 contractions at 30% MVC, DeltaTOI decrease was significantly (P < 0.05) greater and DeltatHb was significantly (P < 0.05) lower for EMS than VOL, suggesting that the metabolic demand for oxygen in EMS is greater than VOL at the same torque level. However, during maximal intensity contractions, although EMS torque (approximately 40% of VOL) was significantly (P < 0.05) lower than VOL, DeltaTOI was similar and tHb was significantly (P < 0.05) lower for EMS than VOL towards the end, without significant differences between the two sessions in the recovery period. It is concluded that the oxygen demand of the activated biceps brachii muscle in EMS is comparable to VOL at maximal intensity.
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
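The integration-point comparison described above can be illustrated in a few lines. The sketch below (a smooth, toy stand-in response along a single ray, not the YAP-(S)PET II crystal model) compares Gauss-Legendre and equally spaced trapezoidal approximations of one line integral, i.e. one system-matrix entry, against a dense reference.

```python
# Gauss-Legendre vs trapezoidal approximation of a single ray integral.
import numpy as np

def response(s):                      # assumed smooth response along the ray, s in [0, 1]
    return np.exp(-((s - 0.4) ** 2) / 0.02) + 0.3 * s

def trapezoid(f, s):                  # simple trapezoidal rule
    return np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0

# reference value from a very dense trapezoidal rule
s_ref = np.linspace(0, 1, 20001)
ref = trapezoid(response(s_ref), s_ref)

for n in (3, 5, 9):
    xg, wg = np.polynomial.legendre.leggauss(n)            # nodes/weights on [-1, 1]
    gauss = 0.5 * np.sum(wg * response(0.5 * (xg + 1.0)))  # mapped to [0, 1]
    s = np.linspace(0, 1, n)                               # same number of equally spaced points
    trap = trapezoid(response(s), s)
    print(f"n={n}: Gauss err={abs(gauss - ref):.2e}, trapezoid err={abs(trap - ref):.2e}")
```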
Wang, Huiya; Feng, Jun; Wang, Hongyu
2017-07-20
Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis for early stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification with each group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*1-FPR are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
Wang, Jiexin; Uchibe, Eiji; Doya, Kenji
2017-01-01
EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters by maximizing this lower bound, which makes gradient calculation and learning-rate tuning unnecessary. Previous algorithms such as Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration implemented mechanisms to discard useless low-return episodes either implicitly or using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean and multiples of the standard deviation above the mean. The simulation results on the benchmark tasks of pendulum swing-up and cart-pole balancing, and standing up and balancing of a two-wheeled smartphone robot, showed improved performance. We further implemented the adaptive baseline with the mean in our two-wheeled smartphone robot hardware to test its performance in the standing up and balancing task, and in a view-based approaching task. Our results showed that with the adaptive baseline the method outperformed the previous algorithms and achieved faster and more precise behaviors at a higher success rate. PMID:28167910
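The adaptive-baseline idea above can be sketched compactly: sample parameter-space explorations, drop episodes whose return falls below the baseline, and take a reward-weighted (EM-style) average of the survivors. The toy quadratic task, Gaussian exploration and exploration-decay schedule below are assumptions, not the robot experiments of the paper.

```python
# Reward-weighted parameter updates with an adaptive (mean) baseline.
import numpy as np

rng = np.random.default_rng(6)
target = np.array([1.5, -0.5])                               # optimum of the toy task
theta, sigma = np.zeros(2), 1.0

for iteration in range(60):
    samples = theta + sigma * rng.normal(size=(50, 2))       # exploration in parameter space
    returns = -np.sum((samples - target) ** 2, axis=1)       # episodic returns

    baseline = returns.mean()                                # adaptive baseline; could be mean + k*SD
    keep = returns >= baseline                               # discard low-return episodes
    w = returns[keep] - returns[keep].min() + 1e-8           # nonnegative weights

    theta = np.average(samples[keep], axis=0, weights=w)     # EM-style weighted mean update
    sigma = max(0.05, 0.95 * sigma)                          # simple exploration decay (assumption)

print("estimated theta:", theta, "target:", target)
```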
What does fault tolerant Deep Learning need from MPI?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amatya, Vinay C.; Vishnu, Abhinav; Siegel, Charles M.
Deep Learning (DL) algorithms have become the de facto Machine Learning (ML) algorithms for large scale data analysis. DL algorithms are computationally expensive -- even distributed DL implementations which use MPI require days of training (model learning) time on commonly studied datasets. Long running DL applications become susceptible to faults -- requiring development of a fault tolerant system infrastructure, in addition to fault tolerant DL algorithms. This raises an important question: What is needed from MPI for designing fault tolerant DL implementations? In this paper, we address this problem for permanent faults. We motivate the need for a fault tolerant MPI specification by an in-depth consideration of recent innovations in DL algorithms and their properties, which drive the need for specific fault tolerance features. We present an in-depth discussion on the suitability of different parallelism types (model, data and hybrid); a need (or lack thereof) for check-pointing of any critical data structures; and most importantly, consideration for several fault tolerance proposals (user-level fault mitigation (ULFM), Reinit) in MPI and their applicability to fault tolerant DL implementations. We leverage a distributed memory implementation of Caffe, currently available under the Machine Learning Toolkit for Extreme Scale (MaTEx). We implement our approaches by extending MaTEx-Caffe to use a ULFM-based implementation. Our evaluation using the ImageNet dataset and AlexNet neural network topology demonstrates the effectiveness of the proposed fault tolerant DL implementation using OpenMPI-based ULFM.
NASA Astrophysics Data System (ADS)
Lee, Kyunghoon
To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis attempts to qualitatively and quantitatively scrutinize the EM-PCA alongside both POD and gappy POD using high-dimensional simulation data. In pursuing qualitative investigations, the theoretical relationship between POD and PPCA is transparent such that the factor-loading MLE of PPCA, evaluated by the EM-PCA, pertains to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is nebulous because they distinctively approximate missing data due to their antithetical formulation perspectives: gappy POD solves a least-squares problem whereas the EM-PCA relies on the expectation of the observation probability model. To juxtapose both gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research delves into the ramifications of the different bases and norms that will eventually characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. After all, a norm reflecting a curve-fitting method is found to more significantly affect estimation error reduction than a basis for two example test data sets: one is absent of data only at a single snapshot and the other misses data across all the snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis quantitatively found that the number of data missing snapshots predetermines whether the EM-PCA or gappy POD outperforms the other because of the computational cost of a coefficient evaluation, resulting from a norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Last, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data. 
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model exhibit considerably good agreement with those directly obtained by NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically-excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by factors of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for an incomplete data set containing missing data over an entire data set. (Abstract shortened by UMI.)
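For readers unfamiliar with the EM-PCA iteration compared against POD and gappy POD above, the sketch below shows the basic alternating E/M updates on a complete synthetic snapshot matrix and checks the recovered subspace against the SVD (POD) subspace; the sizes are arbitrary and the missing-data and gappy-POD variants are not reproduced.

```python
# Basic EM-PCA (alternating E/M) iteration on a synthetic snapshot matrix.
import numpy as np

rng = np.random.default_rng(7)
n_dim, n_snapshots, n_modes = 200, 40, 3

# synthetic snapshot matrix with low-rank structure plus noise
U = np.linalg.qr(rng.normal(size=(n_dim, n_modes)))[0]
Y = U @ rng.normal(size=(n_modes, n_snapshots)) * 5 + 0.1 * rng.normal(size=(n_dim, n_snapshots))
Y -= Y.mean(axis=1, keepdims=True)

W = rng.normal(size=(n_dim, n_modes))            # random initial basis
for _ in range(100):
    X = np.linalg.solve(W.T @ W, W.T @ Y)        # E-step: latent coordinates
    W = Y @ X.T @ np.linalg.inv(X @ X.T)         # M-step: update the basis

# compare the EM-PCA subspace with the POD (SVD) subspace
W_orth = np.linalg.qr(W)[0]
U_pod = np.linalg.svd(Y, full_matrices=False)[0][:, :n_modes]
print("subspace alignment (1 = identical):",
      np.linalg.norm(U_pod.T @ W_orth) / np.sqrt(n_modes))
```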
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, H; Xing, L; Liang, Z
Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebrospinal fluid - inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled three tissue type distributions on perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective to initialize tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions, which is valuable for the early diagnosis of certain brain diseases, e.g. multiple sclerosis.
Mismatch removal via coherent spatial relations
NASA Astrophysics Data System (ADS)
Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen
2014-07-01
We propose a method for removing mismatches from the given putative point correspondences in image pairs based on "coherent spatial relations." Under the Bayesian framework, we formulate our approach as a maximum likelihood problem and solve a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating it as being either an inlier or an outlier, and alternatively estimates the inlier set and recovers the coherent spatial relation. It can handle not only the case of image pairs with rigid motions but also the case of image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and thin-plate spline as models for rigid and nonrigid cases, respectively. The mismatches could be successfully removed via the coherent spatial relations after the EM algorithm converges. The quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods, it is not affected by low initial correct match percentages, and is robust to most geometric transformations including a large viewing angle, image rotation, and affine transformation.
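A stripped-down version of the EM scheme sketched above, with an affine map standing in for the two-view or thin-plate spline relations and with fixed mixing proportions, is given below; the point sets, noise level and outlier model are assumptions for illustration only.

```python
# EM for inlier/outlier labelling of point matches: inliers follow an unknown
# affine map plus Gaussian noise, outliers follow a broad uniform model.
import numpy as np

rng = np.random.default_rng(8)
n, n_out = 150, 50
A_true, t_true = np.array([[1.1, 0.1], [-0.05, 0.95]]), np.array([5.0, -3.0])
src = rng.uniform(0, 100, size=(n, 2))
dst = src @ A_true.T + t_true + rng.normal(scale=1.0, size=(n, 2))
dst[:n_out] = rng.uniform(0, 100, size=(n_out, 2))          # corrupt the first 50 matches

gamma, sigma2, outlier_density = np.full(n, 0.5), 25.0, 1.0 / (100 * 100)
X = np.hstack([src, np.ones((n, 1))])                        # homogeneous source points
for _ in range(30):
    # M-step: weighted least squares for the affine parameters and noise variance
    Wd = gamma[:, None]
    theta = np.linalg.solve(X.T @ (Wd * X), X.T @ (Wd * dst))
    resid = dst - X @ theta
    r2 = np.sum(resid ** 2, axis=1)
    sigma2 = np.sum(gamma * r2) / (2 * np.sum(gamma) + 1e-12)
    # E-step: posterior probability of each correspondence being an inlier
    p_in = np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2)
    gamma = p_in * 0.7 / (p_in * 0.7 + outlier_density * 0.3)  # fixed mixing weights (assumption)

print("estimated inliers:", int((gamma > 0.5).sum()), "of", n - n_out, "true inliers")
```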
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
A unified framework for group independent component analysis for multi-subject fMRI data
Guo, Ying; Pagnoni, Giuseppe
2008-01-01
Independent component analysis (ICA) is becoming increasingly popular for analyzing functional magnetic resonance imaging (fMRI) data. While ICA has been successfully applied to single-subject analysis, the extension of ICA to group inferences is not straightforward and remains an active topic of research. Current group ICA models, such as the GIFT (Calhoun et al., 2001) and tensor PICA (Beckmann and Smith, 2005), make different assumptions about the underlying structure of the group spatio-temporal processes and are thus estimated using algorithms tailored for the assumed structure, potentially leading to diverging results. To our knowledge, there are currently no methods for assessing the validity of different model structures in real fMRI data and selecting the most appropriate one among various choices. In this paper, we propose a unified framework for estimating and comparing group ICA models with varying spatio-temporal structures. We consider a class of group ICA models that can accommodate different group structures and include existing models, such as the GIFT and tensor PICA, as special cases. We propose a maximum likelihood (ML) approach with a modified Expectation-Maximization (EM) algorithm for the estimation of the proposed class of models. Likelihood ratio tests (LRT) are presented to compare between different group ICA models. The LRT can be used to perform model comparison and selection, to assess the goodness-of-fit of a model in a particular data set, and to test group differences in the fMRI signal time courses between subject subgroups. Simulation studies are conducted to evaluate the performance of the proposed method under varying structures of group spatio-temporal processes. We illustrate our group ICA method using data from an fMRI study that investigates changes in neural processing associated with the regular practice of Zen meditation. PMID:18650105
Dinamarca, M Alejandro; Ibacache-Quiroga, C; Baeza, P; Galvez, S; Villarroel, M; Olivero, P; Ojeda, J
2010-04-01
The immobilization of Pseudomonas stutzeri using adsorption on different inorganic supports was studied in relation to the number of adsorbed cells, metabolic activity and biodesulfurization (BDS). Electrophoretic migration (EM) measurements and the Tetrazolium (TTC) method were used to evaluate adsorption and metabolic activity. Results indicate that maximal immobilization was obtained with an initial load of 14 x 10(8) cells mL(-1) for Al and Sep, whereas Ti requires 20 x 10(8) cells mL(-1). The highest interaction was observed in the P. stutzeri/Si and P. stutzeri/Sep biocatalysts. The IEP values and metabolic activities indicate that P. stutzeri changes the surface of the supports and maintains metabolic activity. A direct relation between BDS activity and the adsorption capacity of the bacterial cells was observed at the adsorption/desorption equilibrium level. The biomodification of inorganic supports by the adsorption process increases the bioavailability of sulphur substrates for bacterial cells, improving BDS activity. Copyright 2009 Elsevier Ltd. All rights reserved.
van Rosendael, Alexander R; Maliakal, Gabriel; Kolli, Kranthi K; Beecy, Ashley; Al'Aref, Subhi J; Dwivedi, Aeshita; Singh, Gurpreet; Panday, Mohit; Kumar, Amit; Ma, Xiaoyue; Achenbach, Stephan; Al-Mallah, Mouaz H; Andreini, Daniele; Bax, Jeroen J; Berman, Daniel S; Budoff, Matthew J; Cademartiri, Filippo; Callister, Tracy Q; Chang, Hyuk-Jae; Chinnaiyan, Kavitha; Chow, Benjamin J W; Cury, Ricardo C; DeLago, Augustin; Feuchtner, Gudrun; Hadamitzky, Martin; Hausleiter, Joerg; Kaufmann, Philipp A; Kim, Yong-Jin; Leipsic, Jonathon A; Maffei, Erica; Marques, Hugo; Pontone, Gianluca; Raff, Gilbert L; Rubinshtein, Ronen; Shaw, Leslee J; Villines, Todd C; Gransar, Heidi; Lu, Yao; Jones, Erica C; Peña, Jessica M; Lin, Fay Y; Min, James K
Machine learning (ML) is a field in computer science that has been demonstrated to effectively integrate clinical and imaging data for the creation of prognostic scores. The current study investigated whether an ML score, incorporating only the 16-segment coronary tree information derived from coronary computed tomography angiography (CCTA), provides enhanced risk stratification compared with current CCTA-based risk scores. From the multi-center CONFIRM registry, patients were included with complete CCTA risk score information and ≥3 year follow-up for myocardial infarction and death (primary endpoint). Patients with prior coronary artery disease were excluded. Conventional CCTA risk scores (conventional CCTA approach, segment involvement score, Duke prognostic index, segment stenosis score, and the Leaman risk score) and a score created using ML were compared for the area under the receiver operating characteristic curve (AUC). Only 16-segment-based coronary stenosis (0%, 1-24%, 25-49%, 50-69%, 70-99% and 100%) and composition (calcified, mixed and non-calcified plaque) were provided to the ML model. A boosted ensemble algorithm (extreme gradient boosting; XGBoost) was used, and the entire data set was randomly split into a training set (80%) and a testing set (20%). First, tuned hyperparameters were used to generate a trained model from the training data set (80% of data). Second, the performance of this trained model was independently tested on the unseen test set (20% of data). In total, 8844 patients (mean age 58.0 ± 11.5 years, 57.7% male) were included. During a mean follow-up time of 4.6 ± 1.5 years, 609 events occurred (6.9%). No CAD was observed in 48.7% (3.5% event), non-obstructive CAD in 31.8% (6.8% event), and obstructive CAD in 19.5% (15.6% event). Discrimination of events as expressed by AUC was significantly better for the ML-based approach (0.771) vs the other scores (ranging from 0.685 to 0.701), P < 0.001. Net reclassification improvement analysis showed that the improved risk stratification was the result of down-classification of risk among patients that did not experience events (non-events). A risk score created by an ML-based algorithm that utilizes standard 16-segment coronary stenosis and composition information derived from detailed CCTA reading has greater prognostic accuracy than current CCTA-integrated risk scores. These findings indicate that an ML-based algorithm can improve the integration of CCTA-derived plaque information to improve risk stratification. Published by Elsevier Inc.
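The train/test protocol described above is straightforward to sketch. The example below is not the CONFIRM analysis: the per-segment stenosis and composition encoding, the synthetic outcome labels and the XGBoost hyperparameters are all assumptions, but the 80/20 split, gradient-boosted classifier and AUC evaluation follow the workflow the abstract describes.

```python
# 80/20 split + XGBoost on synthetic per-segment CCTA-style features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(9)
n_patients, n_segments = 8844, 16
stenosis = rng.integers(0, 6, size=(n_patients, n_segments))      # 0..5 stenosis category per segment
composition = rng.integers(0, 4, size=(n_patients, n_segments))   # 0=none, 1=calcified, 2=mixed, 3=non-calcified
X = np.hstack([stenosis, composition])

risk = stenosis.sum(axis=1) / 40.0                                 # synthetic risk driver
y = (rng.random(n_patients) < 0.02 + 0.05 * risk).astype(int)      # synthetic events, roughly 7%

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)
model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                      subsample=0.8, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```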
Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar
2016-05-04
Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in-vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in PSIR images (n = 49). The EWA algorithm was validated experimentally and in patient data with a low bias in both IR and PSIR LGE images. Thus, the use of EM and a weighted intensity as in the EWA algorithm, may serve as a clinical standard for the quantification of myocardial infarction in LGE CMR images. CHILL-MI: NCT01379261 . NCT01374321 .
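The two ingredients named in the EWA acronym that are easiest to sketch are the EM-derived intensity threshold and the weighted (partial-volume) summation. The toy example below is not the validated EWA implementation; the synthetic intensities, the two-component mixture and the linear weighting formula are assumptions.

```python
# EM-derived intensity model plus a fractional (partial-volume) summation
# over myocardial pixel intensities. Synthetic data only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(10)
remote = rng.normal(300, 30, 1500)                 # remote (healthy) myocardium intensities
infarct = rng.normal(700, 60, 400)                 # hyperenhanced infarct intensities
myocardium = np.concatenate([remote, infarct])

gmm = GaussianMixture(n_components=2, random_state=0).fit(myocardium[:, None])
mu_remote, mu_core = np.sort(gmm.means_.ravel())   # EM estimates of the two class means

# fractional weight per pixel, clipped to [0, 1] (illustrative weighting, not EWA's)
w = np.clip((myocardium - mu_remote) / (mu_core - mu_remote), 0.0, 1.0)
infarct_fraction = w.sum() / myocardium.size
print(f"infarct size: {100 * infarct_fraction:.1f} %LVM-equivalent (toy data)")
```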
Body composition of university students by hydrostatic weighing and skinfold measurement.
Jürimäe, T; Jagomägi, G; Lepp, T
1992-12-01
The body composition of 124 male and 70 female Tartu University students was measured by three different methods: hydrostatic weighing by maximal expiration, hydrostatic weighing by maximal inspiration and subcutaneous fat thickness measurements. Our results show that the proposed body density measuring method by maximal expiration is simple, reliable and applicable not only in indoor swimming pools but in field conditions as well. The second new hydrostatic weighing apparatus is more comfortable for the subjects where the body density is measured at maximal inspiration. The mean body density of males was somewhat higher when measured by the maximal inspiration (1.066 ± 0.012 g.ml-1) than when measured by the maximal expiration (1.063 ± 0.009 g.ml-1, p < 0.05). For females, on the contrary, the maximal expiration method (1.044 ± 0.010 g.ml-1) yielded a higher body density value than when measured by the maximal inspiration (1.040 ± 0.011 g.ml-1, p > 0.05). The body fat percentage measured by skinfold thickness correlated significantly with the body fat percentage calculated by body density by maximal expiration (males r = 0.420, females r = 0.531) and inspiration (males r = 0.507, females r = 0.663). We must conclude that the presented two methods of measuring body density offer new possibilities for densitometric analysis without the need for expensive laboratory equipment.
Fusing Continuous-Valued Medical Labels Using a Bayesian Model.
Zhu, Tingting; Dunkley, Nic; Behar, Joachim; Clifton, David A; Clifford, Gari D
2015-12-01
With the rapid increase in the volume of time series medical data available through wearable devices, there is a need to employ automated algorithms to label data. Examples of labels include interventions, changes in activity (e.g. sleep) and changes in physiology (e.g. arrhythmias). However, automated algorithms tend to be unreliable, resulting in lower quality care. Expert annotations are scarce, expensive, and prone to significant inter- and intra-observer variance. To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimation of label aggregation while accurately inferring the precision and bias of each algorithm. The BCLA was applied to QT interval (pro-arrhythmic indicator) estimation from the electrocardiogram using labels from the 2006 PhysioNet/Computing in Cardiology Challenge database. It was compared to the mean, median, and a previously proposed Expectation Maximization (EM) label aggregation approaches. While accurately predicting each labelling algorithm's bias and precision, the root-mean-square error of the BCLA was 11.78 ± 0.63 ms, significantly outperforming the best Challenge entry (15.37 ± 2.13 ms) as well as the EM, mean, and median voting strategies (14.76 ± 0.52, 17.61 ± 0.55, and 14.43 ± 0.57 ms respectively with p < 0.0001). The BCLA could therefore provide accurate estimation for medical continuous-valued label tasks in an unsupervised manner even when the ground truth is not available.
Text Classification for Intelligent Portfolio Management
2002-05-01
Only fragments of this abstract survive: the report concerns text classification for intelligent portfolio management, surveying approaches such as nearest neighbor classification, naive Bayes with Expectation Maximization (EM), and Winnow with active learning; in particular, active learning is used to actively select documents for labeling, after which EM assigns labels to the remaining documents.
Bridging a translational gap: using machine learning to improve the prediction of PTSD.
Karstoft, Karen-Inge; Galatzer-Levy, Isaac R; Statnikov, Alexander; Li, Zhiguo; Shalev, Arieh Y
2015-03-16
Predicting Posttraumatic Stress Disorder (PTSD) is a pre-requisite for targeted prevention. Current research has identified group-level risk-indicators, many of which (e.g., head trauma, receiving opiates) concern but a subset of survivors. Identifying interchangeable sets of risk indicators may increase the efficiency of early risk assessment. The study goal is to use supervised machine learning (ML) to uncover interchangeable, maximally predictive combinations of early risk indicators. Data variables (features) reflecting event characteristics, emergency department (ED) records and early symptoms were collected in 957 trauma survivors within ten days of ED admission, and used to predict PTSD symptom trajectories during the following fifteen months. A Target Information Equivalence Algorithm (TIE*) identified all minimal sets of features (Markov Boundaries; MBs) that maximized the prediction of a non-remitting PTSD symptom trajectory when integrated in a support vector machine (SVM). The predictive accuracy of each set of predictors was evaluated in a repeated 10-fold cross-validation and expressed as average area under the Receiver Operating Characteristics curve (AUC) for all validation trials. The average number of MBs per cross validation was 800. MBs' mean AUC was 0.75 (95% range: 0.67-0.80). The average number of features per MB was 18 (range: 12-32) with 13 features present in over 75% of the sets. Our findings support the hypothesized existence of multiple and interchangeable sets of risk indicators that equally and exhaustively predict non-remitting PTSD. ML's ability to increase prediction versatility is a promising step towards developing algorithmic, knowledge-based, personalized prediction of post-traumatic psychopathology.
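As a rough illustration of the evaluation protocol described above (not the TIE* search itself), the sketch below scores one hypothetical candidate feature set with a linear SVM under repeated 10-fold cross-validation, reporting the AUC; the data and the feature subset are synthetic placeholders.

```python
# Repeated 10-fold cross-validated AUC for one candidate feature subset.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(11)
n, n_features = 957, 40
X = rng.normal(size=(n, n_features))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=2.0, size=n) > 1.0).astype(int)

candidate_mb = list(range(18))                      # one hypothetical "Markov boundary" of 18 features
clf = SVC(kernel="linear", C=1.0)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
aucs = cross_val_score(clf, X[:, candidate_mb], y, cv=cv, scoring="roc_auc")
print(f"mean AUC {aucs.mean():.3f} (range {aucs.min():.2f}-{aucs.max():.2f})")
```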
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
Memetic algorithms for de novo motif-finding in biomedical sequences.
Bi, Chengpeng
2012-09-01
The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++, and exhaustively tested with various simulated and real biological sequences. In the simulations, MaMotif is the most time-efficient algorithm compared with the others; that is, it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with, or is superior to, the other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, and precisely find the highly conserved helix motif in the transmembrane protein sequences, as well as correctly detect the palindromic segments in the primary microRNA sequences. The memetic motif-finding algorithm is effectively designed and implemented, and its applications demonstrate that it is not only time-efficient but also exhibits excellent performance compared with other popular algorithms. Copyright © 2012 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method for chaotic time series based on the Gaussian process mixture (GPM) model is proposed, which employs a divide-and-conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and can thus fit the chaotic time series more precisely. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on the Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality predictions, but also prediction confidence intervals. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of the Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study
NASA Astrophysics Data System (ADS)
Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad
2018-01-01
The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in the VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms using quantitative measures for video surveillance in the VIoT, based on salient features of the datasets. The thresholding algorithms Otsu, Kapur, and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time, and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium to fast moving objects. However, its performance degraded for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing, and scarce illumination changes.
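The MATLAB implementations compared in the paper are not reproduced here; the following is a minimal, hedged illustration of histogram-thresholding change detection, applying Otsu's method to an absolute frame-difference image (the skimage call and the differencing scheme are illustrative choices, not the paper's code).

import numpy as np
from skimage.filters import threshold_otsu

def change_mask_otsu(frame_t0, frame_t1):
    """Binary change mask from two co-registered grayscale frames
    using Otsu thresholding of the absolute difference image."""
    diff = np.abs(frame_t1.astype(float) - frame_t0.astype(float))
    t = threshold_otsu(diff)          # histogram-based global threshold
    return diff > t                   # True where the scene changed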
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution and noisy, and such visual data cannot be directly delivered to advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior, and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit and implicit priors by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM, and visual perception. PMID:26927114
NASA Technical Reports Server (NTRS)
Bauer, Fabrice; Jones, Michael; Shiota, Takahiro; Firstenberg, Michael S.; Qin, Jian Xin; Tsujino, Hiroyuki; Kim, Yong Jin; Sitges, Marta; Cardon, Lisa A.; Zetts, Arthur D.;
2002-01-01
OBJECTIVE: The goal of this study was to analyze left ventricular outflow tract systolic acceleration (LVOT(Acc)) during alterations in left ventricular (LV) contractility and LV filling. BACKGROUND: Most indexes described to quantify LV systolic function, such as LV ejection fraction and cardiac output, are dependent on loading conditions. METHODS: In 18 sheep (4 normal, 6 with aortic regurgitation, and 8 with old myocardial infarction), blood flow velocities through the LVOT were recorded using conventional pulsed Doppler. The LVOT(Acc) was calculated as the aortic peak velocity divided by the time to peak flow; LVOT(Acc) was compared with LV maximal elastance (E(m)) acquired by conductance catheter under different loading conditions, including volume and pressure overload during an acute coronary occlusion (n = 10). In addition, a clinically validated lumped-parameter numerical model of the cardiovascular system was used to support our findings. RESULTS: Left ventricular E(m) and LVOT(Acc) decreased during ischemia (1.67 +/- 0.67 mm Hg.ml(-1) before vs. 0.93 +/- 0.41 mm Hg.ml(-1) during acute coronary occlusion [p < 0.05] and 7.9 +/- 3.1 m.s(-2) before vs. 4.4 +/- 1.0 m.s(-2) during coronary occlusion [p < 0.05], respectively). Left ventricular outflow tract systolic acceleration showed a strong linear correlation with LV E(m) (y = 3.84x + 1.87, r = 0.85, p < 0.001). Similar findings were obtained with the numerical modeling, which demonstrated a strong correlation between predicted and actual LV E(m) (predicted = 0.98 [actual] -0.01, r = 0.86). By analysis of variance, there was no statistically significant difference in LVOT(Acc) under different loading conditions. CONCLUSIONS: For a variety of hemodynamic conditions, LVOT(Acc) was linearly related to the LV contractility index LV E(m) and was independent of loading conditions. These findings were consistent with numerical modeling. Thus, this Doppler index may serve as a good noninvasive index of LV contractility.
Multi-period project portfolio selection under risk considerations and stochastic income
NASA Astrophysics Data System (ADS)
Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood
2018-02-01
This paper deals with the multi-period project portfolio selection problem, in which the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented. The algorithm is based on the genetic algorithm (GA), a prominent method for this type of problem, and is enhanced by a new solution representation and well-chosen operators. It is also hybridized with a local search mechanism to obtain better solutions in a shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of several prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness, and computation time. Finally, the proposed algorithm is combined with PSO to improve the computing time considerably.
A priori motion models for four-dimensional reconstruction in gated cardiac SPECT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lalush, D.S.; Tsui, B.M.W.; Cui, Lin
1996-12-31
We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies.
Finite-Difference Algorithm for Simulating 3D Electromagnetic Wavefields in Conductive Media
NASA Astrophysics Data System (ADS)
Aldridge, D. F.; Bartel, L. C.; Knox, H. A.
2013-12-01
Electromagnetic (EM) wavefields are routinely used in geophysical exploration for detection and characterization of subsurface geological formations of economic interest. Recorded EM signals depend strongly on the current conductivity of geologic media. Hence, they are particularly useful for inferring fluid content of saturated porous bodies. In order to enhance understanding of field-recorded data, we are developing a numerical algorithm for simulating three-dimensional (3D) EM wave propagation and diffusion in heterogeneous conductive materials. Maxwell's equations are combined with isotropic constitutive relations to obtain a set of six, coupled, first-order partial differential equations governing the electric and magnetic vectors. An advantage of this system is that it does not contain spatial derivatives of the three medium parameters electric permittivity, magnetic permeability, and current conductivity. Numerical solution methodology consists of explicit, time-domain finite-differencing on a 3D staggered rectangular grid. Temporal and spatial FD operators have order 2 and N, where N is user-selectable. We use an artificially-large electric permittivity to maximize the FD timestep, and thus reduce execution time. For the low frequencies typically used in geophysical exploration, accuracy is not unduly compromised. Grid boundary reflections are mitigated via convolutional perfectly matched layers (C-PMLs) imposed at the six grid flanks. A shared-memory-parallel code implementation via OpenMP directives enables rapid algorithm execution on a multi-thread computational platform. Good agreement is obtained in comparisons of numerically-generated data with reference solutions. EM wavefields are sourced via point current density and magnetic dipole vectors. Spatially-extended inductive sources (current carrying wire loops) are under development. We are particularly interested in accurate representation of high-conductivity sub-grid-scale features that are common in industrial environments (borehole casing, pipes, railroad tracks). Present efforts are oriented toward calculating the EM responses of these objects via a First Born Approximation approach. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the US Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
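The 3D staggered-grid solver itself is not reproduced here; as a hedged illustration of an explicit time-domain finite-difference scheme in a conductive medium, the sketch below implements the standard 1D Yee update for Ez and Hy with an electrical-conductivity loss term. The grid size, permittivity, and conductivity values are placeholders.

import numpy as np

def fdtd_1d(nx=400, nt=1000, dx=1.0, eps_r=8.0, sigma=1e-3):
    """Minimal 1D staggered-grid FDTD update for Ez/Hy in a conductive medium."""
    eps0, mu0 = 8.854e-12, 4e-7 * np.pi
    eps = eps_r * eps0
    c = 1.0 / np.sqrt(eps * mu0)
    dt = 0.5 * dx / c                      # satisfies the Courant condition
    ca = (1 - sigma * dt / (2 * eps)) / (1 + sigma * dt / (2 * eps))
    cb = (dt / (eps * dx)) / (1 + sigma * dt / (2 * eps))
    ez = np.zeros(nx)
    hy = np.zeros(nx - 1)                  # staggered half a cell from Ez
    for n in range(nt):
        hy += (dt / (mu0 * dx)) * np.diff(ez)           # update H from curl E
        ez[1:-1] = ca * ez[1:-1] + cb * np.diff(hy)     # lossy E update from curl H
        ez[nx // 2] += np.exp(-((n - 60) / 20.0) ** 2)  # soft Gaussian source
    return ez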
Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model
Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos
2011-01-01
This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the results by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert-provided ground truth. PMID:21995070
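The full joint registration-segmentation framework is beyond a short sketch; the fragment below illustrates only the EM tissue-posterior piece, in which a (tumor-modified) probabilistic atlas supplies spatially varying priors and each tissue class is modelled as a Gaussian. Shapes and initialization are illustrative assumptions.

import numpy as np

def em_atlas_segmentation(intensities, atlas_priors, n_iter=30, eps=1e-12):
    """EM for Gaussian tissue classes with spatially varying atlas priors.

    intensities  : (n_voxels,) image intensities
    atlas_priors : (n_voxels, n_classes) prior tissue probabilities (rows sum to 1)
    Returns posterior tissue probabilities of the same shape.
    """
    n_vox, n_cls = atlas_priors.shape
    mu = np.linspace(intensities.min(), intensities.max(), n_cls)
    var = np.full(n_cls, intensities.var())
    for _ in range(n_iter):
        # E-step: posterior proportional to atlas prior times Gaussian likelihood
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = atlas_priors * lik
        post /= post.sum(axis=1, keepdims=True) + eps
        # M-step: update class means and variances from soft assignments
        w = post.sum(axis=0) + eps
        mu = (post * intensities[:, None]).sum(axis=0) / w
        var = (post * (intensities[:, None] - mu) ** 2).sum(axis=0) / w + eps
    return post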
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC), that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
Blind deconvolution of 2-D and 3-D fluorescent micrographs
NASA Astrophysics Data System (ADS)
Krishnamurthi, Vijaykumar; Liu, Yi-Hwa; Holmes, Timothy J.; Roysam, Badrinath; Turner, James N.
1992-06-01
This paper presents recent results of our reconstructions of 3-D data from Drosophila chromosomes, as well as our simulations with a refined version of the algorithm used for those reconstructions. It is well known that calibrating the point spread function (PSF) of a fluorescence microscope is a tedious process and in most cases involves esoteric techniques. The problem is further compounded in confocal microscopy, where the measured intensities are usually low. A number of techniques have been developed to solve this problem, all of which are methods of blind deconvolution, so called because the measured PSF is not required for the deconvolution of degraded images from any optical system. Our own efforts in this area involve the maximum likelihood (ML) method, whose numerical solution is obtained by the expectation maximization (EM) algorithm. Based on the reasonable early results obtained during our simulations with 2-D phantoms, we carried out experiments with real 3-D data and found that the blind deconvolution method using the ML approach gave reasonable reconstructions. We next tried to perform reconstructions using some 2-D data, but found that the results were not encouraging. We surmised that the poor reconstructions were primarily due to the large dark-current values in the input data. This, coupled with the fact that we are likely to encounter similar data with considerable dark current from a confocal microscope, prompted us to look into ways of constraining the solution of the PSF. We observed that in the 2-D case the reconstructed PSF has a tendency to retain values larger than those of the theoretical PSF in regions away from the center (outside what we consider to be its region of support). This observation motivated us to apply an upper-bound constraint on the PSF in these regions. Furthermore, we constrain the solution of the PSF to be a bandlimited function, as is the case in the true situation. We have derived two separate approaches for implementing the constraint. One approach involves the mathematical rigor of Lagrange multipliers and is discussed in another paper. The second approach involves an adaptation of the Gerchberg-Saxton algorithm, which ensures bandlimitedness and non-negativity of the PSF. Although the latter approach is mathematically less rigorous than the former, we currently favor it because it has a simpler computer implementation and smaller memory requirements. The next section briefly describes the theory and derivation of these constraint equations using Lagrange multipliers.
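A compact, hedged sketch of the scheme described above: alternating Richardson-Lucy (ML-EM) updates for the object and the PSF, with a support mask, non-negativity, and a Gerchberg-Saxton-style band-limit projection constraining the PSF. It follows the general recipe in the abstract, not the authors' exact implementation; the support radius and band radius are placeholders.

import numpy as np
from scipy.signal import fftconvolve

def blind_rl(image, support_radius=8, band_radius=0.3, n_outer=20, n_inner=5):
    """Alternating Richardson-Lucy (ML-EM) updates for the object and the PSF.
    The PSF lives on the full image grid; a support mask, non-negativity and
    a band-limit projection constrain it."""
    eps = 1e-12
    ny, nx = image.shape
    obj = np.full(image.shape, image.mean(), dtype=float)

    # support mask: PSF allowed only within a small central disc
    yy, xx = np.mgrid[:ny, :nx]
    support = ((yy - ny // 2) ** 2 + (xx - nx // 2) ** 2) <= support_radius ** 2
    psf = support / support.sum()

    # frequency mask for the band-limit (Gerchberg-Saxton-style) projection
    fy = np.fft.fftfreq(ny)[:, None]
    fx = np.fft.fftfreq(nx)[None, :]
    band = (fy ** 2 + fx ** 2) <= band_radius ** 2

    def flip(a):
        return a[::-1, ::-1]

    for _ in range(n_outer):
        for _ in range(n_inner):                      # RL update of the object
            est = fftconvolve(obj, psf, mode="same") + eps
            obj *= fftconvolve(image / est, flip(psf), mode="same")
        for _ in range(n_inner):                      # RL update of the PSF
            est = fftconvolve(obj, psf, mode="same") + eps
            psf *= fftconvolve(image / est, flip(obj), mode="same")
            psf = np.real(np.fft.ifft2(np.fft.fft2(psf) * band))  # band-limit
            psf = np.clip(psf * support, 0, None)     # support + non-negativity
            psf /= psf.sum() + eps                    # keep unit total
    return obj, psf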
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
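A naive illustration of the augmentation idea (not the paper's parallel algorithm): start from a spanning forest, which is trivially chordal, and repeatedly add edges of the original graph that preserve chordality until no more can be added. networkx's is_chordal test makes this a few lines, at the cost of many redundant chordality checks.

import networkx as nx

def maximal_chordal_subgraph(G):
    """Repeatedly augment a spanning forest of G into a maximal chordal subgraph.
    Illustrative only: one chordality check per candidate edge per pass."""
    H = nx.Graph()
    H.add_nodes_from(G.nodes())
    H.add_edges_from(nx.minimum_spanning_edges(G, data=False))  # a forest is chordal
    changed = True
    while changed:                      # keep augmenting until no edge can be added
        changed = False
        for u, v in G.edges():
            if H.has_edge(u, v):
                continue
            H.add_edge(u, v)
            if nx.is_chordal(H):
                changed = True
            else:
                H.remove_edge(u, v)     # edge would create a chordless cycle
    return H

# Example: on a 4-cycle the result is a 3-edge path, since the fourth edge
# would create a chordless 4-cycle.
print(sorted(maximal_chordal_subgraph(nx.cycle_graph(4)).edges()))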
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, X.D.; Tsui, B.M.W.; Gregoriou, G.K.
The goal of the investigation was to study the effectiveness of the corrective reconstruction methods in cardiac SPECT using a realistic phantom and to qualitatively and quantitatively evaluate the reconstructed images using bull's-eye plots. A 3D mathematical phantom which realistically models the anatomical structures of the cardiac-torso region of patients was used. The phantom allows simulation of both the attenuation distribution and the uptake of radiopharmaceuticals in different organs. Also, the phantom can be easily modified to simulate different genders and variations in patient anatomy. Two-dimensional projection data were generated from the phantom and included the effects of attenuation and detector response blurring. The reconstruction methods used in the study included the conventional filtered backprojection (FBP) with no attenuation compensation, and the first-order Chang algorithm, an iterative filtered backprojection algorithm (IFBP), the weighted least square conjugate gradient algorithm, and the ML-EM algorithm with non-uniform attenuation compensation. The transaxial reconstructed images were rearranged into short-axis slices from which bull's-eye plots of the count density distribution in the myocardium were generated.
NASA Astrophysics Data System (ADS)
Kargoll, Boris; Omidalizarandi, Mohammad; Loth, Ina; Paffenholz, Jens-André; Alkhatib, Hamza
2018-03-01
In this paper, we investigate a linear regression time series model of possibly outlier-afflicted observations and autocorrelated random deviations. This colored noise is represented by a covariance-stationary autoregressive (AR) process, in which the independent error components follow a scaled (Student's) t-distribution. This error model allows for the stochastic modeling of multiple outliers and for an adaptive robust maximum likelihood (ML) estimation of the unknown regression and AR coefficients, the scale parameter, and the degree of freedom of the t-distribution. This approach is meant to be an extension of known estimators, which tend to focus only on the regression model, or on the AR error model, or on normally distributed errors. For the purpose of ML estimation, we derive an expectation conditional maximization either (ECME) algorithm, which leads to an easy-to-implement version of iteratively reweighted least squares. The estimation performance of the algorithm is evaluated via Monte Carlo simulations for a Fourier as well as a spline model in connection with AR colored noise models of different orders and with three different sampling distributions generating the white noise components. We apply the algorithm to a vibration dataset recorded by a high-accuracy, single-axis accelerometer, focusing on the evaluation of the estimated AR colored noise model.
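The AR filtering and the estimation of the degree of freedom are omitted here; the sketch below shows only the iteratively reweighted least-squares core for a linear regression with scaled t-distributed errors, where the E-step weight of each observation is w_i = (nu + 1) / (nu + r_i^2 / sigma^2). The fixed nu and the convergence settings are illustrative assumptions.

import numpy as np

def irls_t_regression(X, y, nu=4.0, n_iter=50, tol=1e-8):
    """IRLS for y = X b + e with scaled Student-t errors (fixed d.o.f. nu).
    Downweights outliers via the EM weights of the t-distribution's
    normal/gamma scale-mixture representation."""
    n, p = X.shape
    b = np.linalg.lstsq(X, y, rcond=None)[0]          # ordinary least squares start
    sigma2 = np.var(y - X @ b)
    for _ in range(n_iter):
        r = y - X @ b
        w = (nu + 1.0) / (nu + r ** 2 / sigma2)       # E-step: latent scale weights
        Xw = X * w[:, None]
        b_new = np.linalg.solve(X.T @ Xw, Xw.T @ y)   # CM-step: weighted LS
        sigma2 = np.sum(w * (y - X @ b_new) ** 2) / n # CM-step: scale update
        if np.max(np.abs(b_new - b)) < tol:
            b = b_new
            break
        b = b_new
    return b, sigma2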
Zhang, ZhiZhuo; Chang, Cheng Wei; Hugo, Willy; Cheung, Edwin; Sung, Wing-Kin
2013-03-01
Although de novo motifs can be discovered by mining over-represented sequence patterns, this approach misses some real motifs and generates many false positives. To improve accuracy, one solution is to consider additional binding features (i.e., position preference and sequence rank preference), information that is usually required from the user. This article presents a de novo motif discovery algorithm called SEME (sampling with expectation maximization for motif elicitation), which uses a pure probabilistic mixture model to model the motif's binding features and uses expectation maximization (EM) algorithms to simultaneously learn the sequence motif, position preference, and sequence rank preference without asking for any prior knowledge from the user. SEME is both efficient and accurate thanks to two important techniques: variable motif length extension and importance sampling. Using 75 large-scale synthetic datasets, 32 metazoan compendium benchmark datasets, and 164 chromatin immunoprecipitation sequencing (ChIP-Seq) libraries, we demonstrate the superior performance of SEME over existing programs in finding transcription factor (TF) binding sites. SEME is further applied to the more difficult problem of finding co-regulated TF (coTF) motifs in 15 ChIP-Seq libraries. It identified significantly more correct coTF motifs and, at the same time, predicted coTF motifs that match the known motifs better. Finally, we show that the learned position and sequence rank preferences of each coTF reveal potential interaction mechanisms between the primary TF and the coTF within these sites. Some of these findings were further validated by ChIP-Seq experiments of the coTFs. The application is available online.
Detection of the power lines in UAV remote sensed images using spectral-spatial methods.
Bhola, Rishav; Krishna, Nandigam Hari; Ramesh, K N; Senthilnath, J; Anand, Gautham
2018-01-15
In this paper, detection of power lines in images acquired by Unmanned Aerial Vehicle (UAV) based remote sensing is carried out using spectral-spatial methods. Spectral clustering was performed using Kmeans and the Expectation Maximization (EM) algorithm to classify the pixels into power-line and non-power-line classes. The spectral clustering methods used in this study are parametric in nature, so the Davies-Bouldin index (DBI) is used to automate the choice of the number of clusters. The UAV remote sensed image is clustered into the number of clusters determined by DBI, and the k-clustered image is then merged into 2 clusters (power lines and non-power lines). Further, spatial segmentation was performed using morphological and geometric operations to eliminate the non-power-line regions. In this study, UAV images acquired at different altitudes and angles were analyzed to validate the robustness of the proposed method. It was observed that EM with spatial segmentation (EM-Seg) performed better than Kmeans with spatial segmentation (Kmeans-Seg) on most of the UAV images. Copyright © 2017 Elsevier Ltd. All rights reserved.
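A hedged scikit-learn illustration of the cluster-count selection step: fit K-means for a range of k and keep the k with the lowest Davies-Bouldin index. The feature layout (one row of channel values per pixel) and the k range are assumptions, not the paper's pipeline.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def cluster_with_dbi(pixels, k_range=range(2, 9), random_state=0):
    """Choose the number of clusters by minimizing the Davies-Bouldin index
    and return the best k with its pixel labels."""
    best_k, best_labels, best_dbi = None, None, np.inf
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=random_state).fit_predict(pixels)
        dbi = davies_bouldin_score(pixels, labels)   # lower is better
        if dbi < best_dbi:
            best_k, best_labels, best_dbi = k, labels, dbi
    return best_k, best_labels

# pixels: (n_pixels, n_channels) array of values from the UAV image (assumed layout)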
Cytoprophet: a Cytoscape plug-in for protein and domain interaction networks inference.
Morcos, Faruck; Lamanna, Charles; Sikora, Marcin; Izaguirre, Jesús
2008-10-01
Cytoprophet is a software tool that allows prediction and visualization of protein and domain interaction networks. It is implemented as a plug-in for Cytoscape, an open source software framework for analysis and visualization of molecular networks. Cytoprophet implements three algorithms that predict new potential physical interactions using the domain composition of proteins and experimental assays: maximum likelihood estimation (MLE) using expectation maximization (EM), the set-cover approach maximum specificity set cover (MSSC), and the sum-product algorithm (SPA). After accepting an input set of proteins with UniProt IDs/accession numbers and a selected prediction algorithm, Cytoprophet draws a network of potential interactions with probability scores and GO distances as edge attributes. A network of domain interactions between the domains of the initial protein list can also be generated. Cytoprophet was designed to take advantage of the visual capabilities of Cytoscape and to be simple to use. An example of inference in a signaling network of the myxobacterium Myxococcus xanthus is presented and available at Cytoprophet's website: http://cytoprophet.cse.nd.edu.
Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun
2017-01-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only provides more accurate or at least comparable estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze health care demand in Germany using the open-source R package mpath. PMID:26059498
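As a small concrete piece of the EM scheme, the E-step responsibility for the zero-inflation component can be written in closed form; pi_i, mu_i, and theta below are assumed to come from the current penalized logistic and negative binomial fits, and the parameterization NB(0; mu, theta) = (theta/(theta+mu))^theta is the usual NB2 form.

import numpy as np

def structural_zero_posterior(y, pi, mu, theta):
    """E-step of a zero-inflated negative binomial model.
    For y_i = 0: P(structural zero | y_i) = pi_i / (pi_i + (1 - pi_i) * NB(0; mu_i, theta)).
    Nonzero counts cannot be structural zeros, so their posterior is 0."""
    nb_zero = (theta / (theta + mu)) ** theta       # NB probability of observing 0
    return np.where(y == 0, pi / (pi + (1 - pi) * nb_zero), 0.0)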
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
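One way to build the anatomical kernel matrix (a hedged sketch, not necessarily the authors' construction): connect each voxel to its k nearest neighbors in MR feature space with Gaussian weights and row-normalize, so the PET image is represented as x = K @ alpha and alpha is estimated with the usual ML-EM update. The neighborhood size and kernel width are placeholders.

import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

def anatomical_kernel_matrix(mr_features, n_neighbors=48, sigma=1.0):
    """Sparse kernel matrix K built from anatomical (e.g. MR patch) features:
    each voxel is linked to its k nearest neighbors in feature space with
    Gaussian weights, and rows are normalized."""
    n_vox = mr_features.shape[0]
    nn = NearestNeighbors(n_neighbors=n_neighbors).fit(mr_features)
    dist, idx = nn.kneighbors(mr_features)           # includes the voxel itself
    w = np.exp(-dist ** 2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)                # row-normalize the weights
    rows = np.repeat(np.arange(n_vox), n_neighbors)
    return csr_matrix((w.ravel(), (rows, idx.ravel())), shape=(n_vox, n_vox))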
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
Azad, Ariful; Buluç, Aydın
2016-05-16
We describe parallel algorithms for computing maximal cardinality matching in a bipartite graph on distributed-memory systems. Unlike traditional algorithms that match one vertex at a time, our algorithms process many unmatched vertices simultaneously using a matrix-algebraic formulation of maximal matching. This generic matrix-algebraic framework is used to develop three efficient maximal matching algorithms with minimal changes. The newly developed algorithms have two benefits over existing graph-based algorithms. First, unlike existing parallel algorithms, the cardinality of the matching obtained by the new algorithms stays constant with increasing processor counts, which is important for predictable and reproducible performance. Second, relying on bulk-synchronous matrix operations, these algorithms expose a higher degree of parallelism on distributed-memory platforms than existing graph-based algorithms. We report high-performance implementations of three maximal matching algorithms using hybrid OpenMP-MPI and evaluate the performance of these algorithms using more than 35 real and randomly generated graphs. On real instances, our algorithms achieve up to 200× speedup on 2048 cores of a Cray XC30 supercomputer. Even higher speedups are obtained on larger synthetically generated graphs, where our algorithms show good scaling on up to 16,384 cores.
Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm
Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong
2016-01-01
In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis. PMID:27959895
Janet, Jon Paul; Chan, Lydia; Kulik, Heather J
2018-03-01
Machine learning (ML) has emerged as a powerful complement to simulation for materials discovery by reducing time for evaluation of energies and properties at accuracy competitive with first-principles methods. We use genetic algorithm (GA) optimization to discover unconventional spin-crossover complexes in combination with efficient scoring from an artificial neural network (ANN) that predicts spin-state splitting of inorganic complexes. We explore a compound space of over 5600 candidate materials derived from eight metal/oxidation state combinations and a 32-ligand pool. We introduce a strategy for error-aware ML-driven discovery by limiting how far the GA travels away from the nearest ANN training points while maximizing property (i.e., spin-splitting) fitness, leading to discovery of 80% of the leads from full chemical space enumeration. Over a 51-complex subset, average unsigned errors (4.5 kcal/mol) are close to the ANN's baseline 3 kcal/mol error. By obtaining leads from the trained ANN within seconds rather than days from a DFT-driven GA, this strategy demonstrates the power of ML for accelerating inorganic material discovery.
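A minimal sketch of the error-aware scoring idea: maximize the ANN-predicted spin-splitting while penalizing candidates that lie farther from the nearest ANN training point than a trust radius. The descriptor space, Euclidean metric, threshold, and penalty are illustrative assumptions rather than the authors' exact criterion.

import numpy as np

def error_aware_fitness(candidate_descriptor, ann_predict, train_descriptors,
                        trust_radius=1.0, penalty=100.0):
    """Fitness = predicted spin-state splitting, penalized when the candidate
    lies farther than trust_radius from the nearest ANN training point."""
    d_min = np.min(np.linalg.norm(train_descriptors - candidate_descriptor, axis=1))
    score = ann_predict(candidate_descriptor)     # ANN-predicted property
    if d_min > trust_radius:                      # outside the model's trusted region
        score -= penalty * (d_min - trust_radius)
    return score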
PCA based clustering for brain tumor segmentation of T1w MRI images.
Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay
2017-03-01
Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two clustering methods. Five of the most common PCA algorithms, namely conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality ahead of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, PPCA obtained the best results among all methods. Furthermore, EM-PCA and PPCA helped the K-Means algorithm achieve the best clustering performance in the majority of cases, and achieved significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
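A generic scikit-learn pipeline in the spirit of the comparison, pairing conventional PCA with K-Means; the other PCA variants and the Fuzzy C-Means branch are not shown, and clustering non-overlapping patches (rather than whole images or pixels) is an illustrative choice.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def pca_kmeans_segment(image, n_components=10, n_clusters=4, patch=8):
    """Cluster non-overlapping patches of a 2D MR slice after PCA reduction."""
    h, w = image.shape
    patches = (image[:h - h % patch, :w - w % patch]
               .reshape(h // patch, patch, w // patch, patch)
               .swapaxes(1, 2)
               .reshape(-1, patch * patch))          # (n_patches, patch*patch)
    reduced = PCA(n_components=n_components).fit_transform(patches)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(reduced)
    return labels.reshape(h // patch, w // patch)    # coarse label map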
Multi-pinhole SPECT Imaging with Silicon Strip Detectors
Peterson, Todd E.; Shokouhi, Sepideh; Furenlid, Lars R.; Wilson, Donald W.
2010-01-01
Silicon double-sided strip detectors offer outstanding intrinsic spatial resolution with reasonable detection efficiency for iodine-125 emissions. This spatial resolution allows for multiple-pinhole imaging at low magnification, minimizing the problem of multiplexing. We have conducted imaging studies using a prototype system that utilizes a detector of 300-micrometer thickness and 50-micrometer strip pitch together with a 23-pinhole collimator. These studies include an investigation of the synthetic-collimator imaging approach, which combines multiple-pinhole projections acquired at multiple magnifications to obtain tomographic reconstructions from limited-angle data using the ML-EM algorithm. Sub-millimeter spatial resolution was obtained, demonstrating the basic validity of this approach. PMID:20953300
Shouval, Roni; Labopin, Myriam; Unger, Ron; Giebel, Sebastian; Ciceri, Fabio; Schmid, Christoph; Esteve, Jordi; Baron, Frederic; Gorin, Norbert Claude; Savani, Bipin; Shimoni, Avichai; Mohty, Mohamad; Nagler, Arnon
2016-01-01
Models for prediction of allogeneic hematopoietic stem cell transplantation (HSCT) related mortality partially account for transplant risk. Improving predictive accuracy requires understanding of prediction-limiting factors, such as the statistical methodology used, the number and quality of features collected, or simply the population size. Using an in-silico approach (i.e., iterative computerized simulations) based on machine learning (ML) algorithms, we set out to analyze these factors. A cohort of 25,923 adult acute leukemia patients from the European Society for Blood and Marrow Transplantation (EBMT) registry was analyzed. The predictive objective was non-relapse mortality (NRM) 100 days following HSCT. Thousands of prediction models were developed under varying conditions: increasing sample size, specific subpopulations, and an increasing number of variables, which were selected and ranked by separate feature selection algorithms. Depending on the algorithm, predictive performance plateaued at a population size of 6,611-8,814 patients, reaching a maximal area under the receiver operator characteristic curve (AUC) of 0.67. AUCs of models developed on specific subpopulations ranged from 0.59 to 0.67 for patients in second complete remission and receiving reduced intensity conditioning, respectively. Only 3-5 variables were necessary to achieve near maximal AUCs. The top 3 ranking variables, shared by all algorithms, were disease stage, donor type, and conditioning regimen. Our findings empirically demonstrate that, with regard to NRM prediction, few variables "carry the weight" and traditional HSCT data has been "worn out". "Breaking through" the predictive boundaries will likely require additional types of inputs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, L; Dong, P; Larson, D
Purpose: To investigate a new modulated beam orientation optimization (MBOO) approach maximizing treatment planning quality for the state-of-the-art flattening filter free (FFF) beam that has enabled rapid treatments of multiple brain targets. Methods: MBOO selects and optimizes a large number of intensity-modulated beams (400 or more) from all accessible beam angles surrounding a patient's skull. The optimization algorithm was implemented on a standalone system that interfaced with the 3D Dicom images and structure sets. A standard published data set that consisted of 1 to 12 metastatic brain tumor combinations was selected for MBOO planning. The planning results from various coplanar and non-coplanar configurations via MBOO were then compared with the results obtained from a clinical volume modulated arc therapy (VMAT) delivery system (Truebeam RapidArc, Varian Oncology). Results: When planning a small number of targets (n<4), MBOO produced results equivalent to non-coplanar multi-arc VMAT planning in terms of target volume coverage and normal tissue sparing. For example, the 12-Gy and 4-Gy normal brain volumes for the 3-target plans differed by less than 1 mL (3.0 mL vs 3.8 mL; and 35.2 mL vs 36.3 mL, respectively) for MBOO versus VMAT. However, when planning a larger number of targets (n≥4), MBOO significantly reduced the dose to the normal brain as compared to VMAT, though the target volume coverage was equivalent. For example, the 12-Gy and 4-Gy normal brain volumes for the 12-target plans were 10.8 mL vs. 18.0 mL and 217.9 mL vs. 390.0 mL, respectively, for the non-coplanar MBOO versus the non-coplanar VMAT treatment plans, yielding a reduction in volume of more than 60% for the case. Conclusion: MBOO is a unique approach for maximizing normal tissue sparing when treating a large number (n≥4) of brain tumors with FFF linear accelerators. Dr Ma and Dr Sahgal are currently on the board of the international society of stereotactic radiosurgery. Dr Sahgal has received support for educational presentations from Elekta company.
Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.
Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich
2016-01-01
We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based upon the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class probabilities. The mixture model makes it possible to handle heterogeneous thematic classes that are not well fitted by a unimodal Wishart distribution. To make the computation fast and robust, we use the recently proposed generalized gamma distribution (GΓD) for the single-polarization intensity data to form the initial partition. We then use the Wishart probability density function of the corresponding sample covariance matrix to calculate the posterior class probabilities for each pixel. The posterior class probabilities are used as the prior probability estimates of each class and as weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
Joint channel estimation and multi-user detection for multipath fading channels in DS-CDMA systems
NASA Astrophysics Data System (ADS)
Wu, Sau-Hsuan; Kuo, C.-C. Jay
2002-11-01
The technique of joint blind channel estimation and multiple access interference (MAI) suppression for an asynchronous code-division multiple-access (CDMA) system is investigated in this research. To identify and track dispersive time-varying fading channels and to avoid the phase ambiguity that comes with second-order-statistics approaches, a sliding-window scheme using the expectation maximization (EM) algorithm is proposed. The complexity of joint channel equalization and symbol detection for all users increases exponentially with system loading and channel memory, and the situation is exacerbated if strong inter-symbol interference (ISI) exists. To reduce the complexity and the number of samples required for channel estimation, a blind multiuser detector is developed. Together with multi-stage interference cancellation using soft outputs provided by this detector, our algorithm can track fading channels with no phase ambiguity even when channel gains attenuate close to zero.
Robust Multimodal Dictionary Learning
Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in the training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about the image representation. In this paper, we propose a probabilistic model that accounts for image areas that correspond poorly between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data, and show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
Phase Diversity and Polarization Augmented Techniques for Active Imaging
2007-03-01
NASA Astrophysics Data System (ADS)
Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng
2018-02-01
Complex networks hosting binary-state dynamics arise in a variety of contexts. Despite previous work, fully reconstructing the network structure from observed binary data remains challenging. We articulate a statistical-inference-based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical-inference-based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data-based reverse engineering of complex networked systems.
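The following is only an illustration of the claim that estimated probabilities for actual and nonexistent links separate into two distinguishable groups, not the paper's EM derivation; the probability estimates here are synthetic and a two-component Gaussian mixture fitted by EM does the separation.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical per-candidate-neighbor probability estimates for one node:
# nonexistent links cluster near zero, actual links cluster near a higher value.
p_nolink = rng.normal(0.02, 0.01, size=80).clip(0, 1)
p_link = rng.normal(0.35, 0.05, size=20).clip(0, 1)
p_hat = np.concatenate([p_nolink, p_link]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(p_hat)
labels = gmm.predict(p_hat)
link_cluster = int(np.argmax(gmm.means_.ravel()))   # higher-mean component = actual links
neighbors = np.flatnonzero(labels == link_cluster)
print("inferred neighbor indices:", neighbors)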
Examining the effect of initialization strategies on the performance of Gaussian mixture modeling.
Shireman, Emilie; Steinley, Douglas; Brusco, Michael J
2017-02-01
Mixture modeling is a popular technique for identifying unobserved subpopulations (e.g., components) within a data set, with Gaussian (normal) mixture modeling being the form most widely used. Generally, the parameters of these Gaussian mixtures cannot be estimated in closed form, so estimates are typically obtained via an iterative process. The most common estimation procedure is maximum likelihood via the expectation-maximization (EM) algorithm. Like many approaches for identifying subpopulations, finite mixture modeling can suffer from locally optimal solutions, and the final parameter estimates are dependent on the initial starting values of the EM algorithm. Initial values have been shown to significantly impact the quality of the solution, and researchers have proposed several approaches for selecting the set of starting values. Five techniques for obtaining starting values that are implemented in popular software packages are compared. Their performances are assessed in terms of the following four measures: (1) the ability to find the best observed solution, (2) settling on a solution that classifies observations correctly, (3) the number of local solutions found by each technique, and (4) the speed at which the start values are obtained. On the basis of these results, a set of recommendations is provided to the user.
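As a scaled-down illustration of how starting values affect EM-fitted Gaussian mixtures, the sketch below compares two initialization strategies exposed by scikit-learn (k-means initialization versus multiple random starts) on simulated data; the paper itself compares five strategies across software packages, which are not reproduced here.

import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

X, y = make_blobs(n_samples=600, centers=4, cluster_std=2.5, random_state=3)

for init, n_init in [("kmeans", 1), ("random", 10)]:
    gm = GaussianMixture(n_components=4, init_params=init,
                         n_init=n_init, random_state=0).fit(X)
    print(f"{init:7s} n_init={n_init:2d}  "
          f"log-likelihood={gm.score(X) * len(X):10.1f}  "
          f"ARI={adjusted_rand_score(y, gm.predict(X)):.3f}")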
Robust generative asymmetric GMM for brain MR image segmentation.
Ji, Zexuan; Xia, Yong; Zheng, Yuhui
2017-11-01
Accurate segmentation of brain tissues from magnetic resonance (MR) images based on unsupervised statistical models such as the Gaussian mixture model (GMM) has been widely studied in recent decades. However, most GMM-based segmentation methods suffer from limited accuracy due to the influence of noise and intensity inhomogeneity in brain MR images. To further improve segmentation accuracy, this paper presents a Robust Generative Asymmetric GMM (RGAGMM) for simultaneous brain MR image segmentation and intensity inhomogeneity correction. First, we develop an asymmetric distribution to fit the data shapes, and thus construct a spatially constrained asymmetric model. Then, we incorporate two pseudo-likelihood quantities and bias field estimation into the model's log-likelihood, aiming to exploit within-cluster and between-cluster neighborhood priors and to alleviate the impact of intensity inhomogeneity, respectively. Finally, an expectation maximization algorithm is derived to iteratively maximize an approximation of the data log-likelihood function, so that the algorithm corrects the intensity inhomogeneity and segments the brain MR images simultaneously. To demonstrate the performance of the proposed algorithm, we first apply it to a synthetic brain MR image to show intermediate results and the estimated distribution. A second group of experiments is carried out on clinical 3T brain MR images that contain severe intensity inhomogeneity and noise. We then quantitatively compare our algorithm with state-of-the-art segmentation approaches using the Dice coefficient (DC) on benchmark images obtained from IBSR and BrainWeb with different levels of noise and intensity inhomogeneity. The comparison results on various brain MR images demonstrate the superior performance of the proposed algorithm in dealing with noise and intensity inhomogeneity. In summary, the proposed RGAGMM algorithm simply and efficiently incorporates spatial constraints into an EM framework to simultaneously segment brain MR images and estimate the intensity inhomogeneity. The algorithm is flexible enough to fit the data shapes, can simultaneously overcome the influence of noise and intensity inhomogeneity, and improves segmentation accuracy by more than 5% compared with several state-of-the-art algorithms. Copyright © 2017 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Cai, Li; Lee, Taehun
2009-01-01
We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…
Mehranian, Abolfazl; Kotasidis, Fotis; Zaidi, Habib
2016-02-07
Time-of-flight (TOF) positron emission tomography (PET) technology has recently regained popularity in clinical PET studies for improving image quality and lesion detectability. Using TOF information, the spatial location of annihilation events is confined to a small number of image voxels along each line of response, thereby reducing the cross-dependencies of image voxels, which in turn improves the signal-to-noise ratio and the convergence rate. In this work, we propose a novel approach to further improve the convergence of expectation maximization (EM)-based TOF PET image reconstruction through subsetization of the emission data over TOF bins as well as azimuthal bins. Given the prevalence of TOF PET, we describe a practical and efficient implementation of TOF PET image reconstruction through the pre-computation of TOF weighting coefficients, exploiting the same in-plane and axial symmetries used in pre-computation of the geometric system matrix. In the proposed subsetization approach, TOF PET data are partitioned into a number of interleaved TOF subsets, with the aim of reducing the spatial coupling of TOF bins and thereby improving the convergence of the standard maximum likelihood expectation maximization (MLEM) and ordered subsets EM (OSEM) algorithms. A comparison of on-the-fly and pre-computed TOF projections showed that pre-computing the TOF weighting coefficients can considerably reduce the computation time of TOF PET image reconstruction. The convergence rate and bias-variance performance of the proposed TOF subsetization scheme were evaluated using simulated, experimental phantom, and clinical studies. Simulations demonstrated that as the number of TOF subsets increases, the convergence rate of the MLEM and OSEM algorithms improves. It was also found that, for the same computation time, the proposed subsetization yields further convergence. The bias-variance analysis of the experimental NEMA phantom and a clinical FDG-PET study also revealed that, for the same noise level, higher contrast recovery can be obtained by increasing the number of TOF subsets. It can be concluded that the proposed TOF weighting matrix pre-computation and subsetization approaches further accelerate and improve the convergence properties of the OSEM and MLEM algorithms, thus opening new avenues for accelerated TOF PET image reconstruction.
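A toy sketch of the underlying MLEM/OSEM mechanics is given below, with interleaved row subsets of a small dense system matrix standing in for the TOF/azimuthal-bin subsetization; this is not the authors' implementation, and the random system matrix and counts are purely illustrative.

import numpy as np

rng = np.random.default_rng(0)
n_bins, n_vox = 240, 64
A = rng.random((n_bins, n_vox))          # system matrix (all entries positive)
x_true = rng.random(n_vox)
y = rng.poisson(A @ x_true * 50) / 50.0  # noisy measured data

def osem(A, y, n_subsets=1, n_iter=20):
    x = np.ones(A.shape[1])
    # Interleaved subsets of measurement rows (n_subsets = 1 gives plain MLEM).
    subsets = [np.arange(s, len(y), n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:             # one EM update per subset
            As, ys = A[rows], y[rows]
            x *= As.T @ (ys / (As @ x + 1e-12)) / As.sum(axis=0)
    return x

for m in (1, 4, 8):
    err = np.linalg.norm(osem(A, y, n_subsets=m) - x_true) / np.linalg.norm(x_true)
    print(f"subsets={m}: relative error after 20 iterations = {err:.3f}")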
2D evaluation of spectral LIBS data derived from heterogeneous materials using cluster algorithm
NASA Astrophysics Data System (ADS)
Gottlieb, C.; Millar, S.; Grothe, S.; Wilsch, G.
2017-08-01
Laser-induced breakdown spectroscopy (LIBS) is capable of providing spatially resolved element maps of the chemical composition of a sample. The evaluation of heterogeneous materials is often a challenging task, especially in the case of phase boundaries, so a method that offers an objective evaluation of the information belonging to a certain phase is needed. This paper introduces a clustering algorithm for heterogeneous building materials (concrete) that separates the spectral information of non-relevant aggregates from that of the cement matrix. In civil engineering, information about the quantitative ingress of harmful species such as Cl-, Na+ and SO42- is of great interest for evaluating the remaining lifetime of structures (Millar et al., 2015; Wilsch et al., 2005). These species trigger different damage processes such as the alkali-silica reaction (ASR) or chloride-induced corrosion of the reinforcement. Therefore, discrimination between the different phases, mainly cement matrix and aggregates, is highly important (Weritz et al., 2006). For the 2D evaluation, the expectation-maximization algorithm (EM algorithm; Ester and Sander, 2000) has been tested for the application presented in this work. The method is introduced, different figures of merit are presented according to the recommendations given in Haddad et al. (2014), and its advantages are highlighted. After phase separation, non-relevant information can be excluded and only the phase of interest displayed. Using a set of samples with known and unknown composition, the EM clustering method has been validated following Gustavo González and Ángeles Herrador (2007).
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
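The computational core of the approach, replacing a large nuisance-effect kernel matrix with a low-rank factor so downstream solves scale with the rank rather than the sample size, can be illustrated as below; the RBF kernel, the synthetic genotype matrix, and all names are assumptions for the sketch and not taken from the fastKM software.

import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(2)
n, p, r = 500, 20, 30
G = rng.normal(size=(n, p))              # illustrative genetic-variant matrix
K = rbf_kernel(G)                        # n x n nuisance kernel matrix

evals, evecs = np.linalg.eigh(K)         # eigenvalues in ascending order
top = slice(n - r, n)
F = evecs[:, top] * np.sqrt(np.maximum(evals[top], 0.0))   # n x r factor, K ~ F F^T
K_approx = F @ F.T

rel_err = np.linalg.norm(K - K_approx) / np.linalg.norm(K)
print(f"rank-{r} approximation, relative Frobenius error = {rel_err:.3f}")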
Taylor, Lee; Lee, Ben J; Gibson, Oliver R; Midgley, Adrian W; Watt, Peter; Mauger, Alexis; Castle, Paul
2016-01-01
This study determined the effectiveness of antioxidant supplementation on high-intensity exercise-heat stress. Six males completed a high-intensity running protocol twice in temperate conditions (TEMP; 20.4°C) and twice in hot conditions (HOT; 34.7°C). Trials were completed following 7 days of supplementation with 70 ml·day(-1) of effective microorganism-X (EM-X; TEMPEMX or HOTEMX) or placebo (TEMPPLA or HOTPLA). Plasma extracellular Hsp72 (eHsp72) and superoxide dismutase (SOD) were measured by ELISA. eHsp72 and SOD increased pre-post exercise (p < 0.001), with greater eHsp72 increases (p < 0.001) observed in HOT (+1.5 ng·ml(-1)) compared to TEMP (+0.8 ng·ml(-1)). EM-X did not influence eHsp72 (p > 0.05). Greater (p < 0.001) SOD increases were observed in HOT (+0.22 U·ml(-1)) versus TEMP (+0.10 U·ml(-1)), with SOD reduced in HOTEMX versus HOTPLA (p = 0.001). Physiological and perceptual responses were all greater (p < 0.001) in HOT versus TEMP conditions, with no difference following EM-X (p > 0.05). EM-X supplementation attenuated the SOD increases following HOT, supporting its potential application as an ergogenic aid to ameliorate oxidative stress.
Robust EM Continual Reassessment Method in Oncology Dose Finding
Yuan, Ying; Yin, Guosheng
2012-01-01
The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcomes research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties, including standard error formulae, are provided. A simulation study shows that the new algorithm not only has more accurate, or at least comparable, estimation but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Banerjee, Arindam; Ghosh, Joydeep
2004-05-01
Competitive learning mechanisms for clustering generally suffer from poor performance on very high-dimensional (>1000) data because of "curse of dimensionality" effects. In applications such as document clustering, it is customary to normalize the high-dimensional input vectors to unit length, and it is sometimes also desirable to obtain balanced clusters, i.e., clusters of comparable sizes. The spherical kmeans (spkmeans) algorithm, which normalizes the cluster centers as well as the inputs, has been successfully used to cluster normalized text documents in 2000+ dimensional space. Unfortunately, like regular kmeans and its soft expectation-maximization-based version, spkmeans tends to generate extremely imbalanced clusters in high-dimensional spaces when the desired number of clusters is large (tens or more). This paper first shows that the spkmeans algorithm can be derived from a certain maximum likelihood formulation using a mixture of von Mises-Fisher distributions as the generative model, and in fact it can be considered a batch-mode version of (normalized) competitive learning. The proposed generative model is then adapted in a principled way to yield three frequency-sensitive competitive learning variants that are applicable to static data and produce high-quality, well-balanced clusters for high-dimensional data. Like kmeans, each iteration is linear in the number of data points and in the number of clusters for all three algorithms. A frequency-sensitive algorithm to cluster streaming data is also proposed. Experimental results on clustering of high-dimensional text data sets are provided to show the effectiveness and applicability of the proposed techniques. Index terms: balanced clustering, expectation maximization (EM), frequency-sensitive competitive learning (FSCL), high-dimensional clustering, kmeans, normalized data, scalable clustering, streaming data, text clustering.
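A minimal spherical k-means sketch follows: unit-normalize the inputs, assign by cosine similarity, and re-normalize the updated centroids. The frequency-sensitive variants described above would additionally penalize large clusters; that refinement is omitted, and the random "document" matrix is only a placeholder.

import numpy as np

def spkmeans(X, k, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)          # project inputs to the unit sphere
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmax(X @ centers.T, axis=1)             # cosine-similarity assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                c = members.sum(axis=0)
                centers[j] = c / np.linalg.norm(c)            # re-normalize the centroid
    return labels, centers

X = np.abs(np.random.default_rng(1).normal(size=(300, 2000)))  # stand-in for normalized documents
labels, centers = spkmeans(X, k=5)
print("cluster sizes:", np.bincount(labels))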
ERIC Educational Resources Information Center
Adachi, Kohei
2013-01-01
Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…
Smith, Justin D.; Borckardt, Jeffrey J.; Nash, Michael R.
2013-01-01
The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Specifically, to what extent do missing observations compromise inference when observations are not independent? Do the available missing-data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization algorithm, more commonly known as the EM procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. PMID:22697454
Nizam-Uddin, N; Elshafiey, Ibrahim
2017-01-01
This paper proposes a hybrid hyperthermia treatment system that utilizes two noninvasive modalities for treating brain tumors. The proposed system depends on focusing electromagnetic (EM) and ultrasound (US) energies. The EM hyperthermia subsystem enhances energy localization by incorporating a multichannel wideband setting and a coherent phased-array technique. A genetic-algorithm-based optimization tool is developed to enhance the specific absorption rate (SAR) distribution by reducing hotspots and maximizing energy deposition at tumor regions. Treatment performance is further enhanced by augmenting an ultrasonic subsystem to allow focused energy deposition into deep tumors. The therapeutic capability of the ultrasonic energy is assessed by examining the control of the mechanical alignment of the transducer array elements. A time reversal (TR) approach is then investigated to address challenges in energy focusing in both subsystems. Simulation results of the synergetic effect of both modalities, assuming a simplified human head phantom, demonstrate the feasibility of the proposed hybrid technique as a noninvasive tool for thermal treatment of brain tumors.
Test of 3D CT reconstructions by EM + TV algorithm from undersampled data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da
2013-05-06
Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging involves ionizing radiation exposure of patients; therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation Based Model for CT Reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections than the usual filtered back projection (FBP) technique, and thus could significantly reduce the overall radiation dose in CT. This work reports the results of an independent numerical simulation for cone-beam CT geometry with alternative virtual phantoms. As in the original report, 3D CT images of 128 x 128 x 128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of the slowness of code execution, even on a Core i7 CPU.
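A 2-D toy sketch of the EM+TV idea follows: alternate an MLEM update (data fidelity) with a total-variation denoising step (regularization). scikit-image's Chambolle TV solver and a random undersampled "projection" matrix stand in for the cone-beam model of the original work; the weight and iteration counts are arbitrary.

import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)
side = 16
x_true = np.zeros((side, side)); x_true[4:12, 5:11] = 1.0     # piecewise-constant phantom
A = rng.random((200, side * side))       # undersampled linear measurements of 256 unknowns
y = A @ x_true.ravel()

x = np.ones(side * side)
for _ in range(30):
    x *= A.T @ (y / (A @ x + 1e-12)) / A.sum(axis=0)                      # EM (MLEM) step
    x = denoise_tv_chambolle(x.reshape(side, side), weight=0.05).ravel()  # TV step
    x = np.maximum(x, 1e-12)             # keep the iterate positive for the next EM step

err = np.linalg.norm(x - x_true.ravel()) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")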
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain, in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimum Kullback-Leibler (KL) divergence criterion. The frequency-domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral-domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, gain and noise spectrum adaptation are implemented using the expectation-maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
Shahriyari, Leili
2017-11-03
One of the main challenges in machine learning (ML) is choosing an appropriate normalization method. Here, we examine the effect of various normalization methods on the analysis of FPKM upper quartile (FPKM-UQ) RNA sequencing data sets. We collect the HTSeq-FPKM-UQ files of patients with colon adenocarcinoma from the TCGA-COAD project. We compare the three most common normalization methods: scaling, standardizing using the z-score, and vector normalization to unit length, by visualizing the normalized data set and evaluating the performance of 12 supervised learning algorithms on it. Additionally, for each of these normalization methods, we use two different normalization strategies: normalizing samples (files) or normalizing features (genes). Regardless of the normalization method, a support vector machine (SVM) model with the radial basis function kernel had the maximum accuracy (78%) in predicting the vital status of the patients. However, the fitting time of the SVM depended on the normalization method, and it reached its minimum fitting time when files were normalized to unit length. Furthermore, among all 12 learning algorithms and 6 normalization techniques, the Bernoulli naive Bayes model after standardizing files had the best performance in terms of maximizing accuracy as well as minimizing fitting time. We also investigated the effect of dimensionality reduction methods on the performance of the supervised ML algorithms. Reducing the dimension of the data set did not increase the maximum accuracy of 78%; however, it led to the discovery of 7SK RNA gene expression as a predictor of survival in patients with colon adenocarcinoma with an accuracy of 78%. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
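A scaled-down sketch of this kind of comparison is given below: three normalization methods applied either per feature ("genes") or per sample ("files"), each evaluated with an RBF-kernel SVM. The expression matrix is synthetic, not TCGA-COAD data, and the helper functions are illustrative.

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

def scale01(M, axis):       # min-max scaling along the chosen axis
    lo, hi = M.min(axis=axis, keepdims=True), M.max(axis=axis, keepdims=True)
    return (M - lo) / (hi - lo + 1e-12)

def zscore(M, axis):        # standardization along the chosen axis
    return (M - M.mean(axis=axis, keepdims=True)) / (M.std(axis=axis, keepdims=True) + 1e-12)

def unit_norm(M, axis):     # normalization to unit Euclidean length
    return M / (np.linalg.norm(M, axis=axis, keepdims=True) + 1e-12)

X, y = make_classification(n_samples=300, n_features=200, n_informative=20, random_state=0)

for name, fn in [("scaling", scale01), ("z-score", zscore), ("unit-norm", unit_norm)]:
    for label, axis in [("genes (features)", 0), ("files (samples)", 1)]:
        acc = cross_val_score(SVC(kernel="rbf"), fn(X, axis), y, cv=5).mean()
        print(f"{name:9s} over {label:16s}: CV accuracy = {acc:.3f}")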
Uncovering the overlapping community structure of complex networks by maximal cliques
NASA Astrophysics Data System (ADS)
Li, Junqiu; Wang, Xingyuan; Cui, Yaozu
2014-12-01
In this paper, a unique algorithm is proposed to detect overlapping communities in unweighted and weighted networks with considerable accuracy. The notions of maximal clique, overlapping vertex, bridge vertex and isolated vertex are introduced. First, all the maximal cliques are extracted by an algorithm based on depth-first and breadth-first searching. Then two maximal cliques can be merged into a larger sub-graph according to some given rules. In addition, the proposed algorithm successfully finds overlapping vertices and bridge vertices between communities. Experimental results using real-world network data show that the performance of the proposed algorithm is satisfactory.
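A rough sketch of the clique-merging idea is shown below using networkx: enumerate maximal cliques, merge cliques that overlap strongly into communities, and report vertices shared by more than one community. The shared-vertex merge threshold is a simplified stand-in for the paper's rules, and the karate-club graph is just a convenient test network.

import networkx as nx

G = nx.karate_club_graph()
cliques = [set(c) for c in nx.find_cliques(G) if len(c) >= 3]   # all maximal cliques of size >= 3

communities = []
for c in sorted(cliques, key=len, reverse=True):
    for comm in communities:
        if len(c & comm) >= len(c) / 2:   # merge when the clique shares at least half its vertices
            comm |= c
            break
    else:
        communities.append(set(c))

overlap = set()
for i, a in enumerate(communities):
    for b in communities[i + 1:]:
        overlap |= a & b                  # vertices belonging to more than one community

print(f"{len(communities)} communities; overlapping vertices: {sorted(overlap)}")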
Deep neural network and noise classification-based speech enhancement
NASA Astrophysics Data System (ADS)
Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei
2017-07-01
In this paper, a speech enhancement method using noise classification and Deep Neural Network (DNN) was proposed. Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames. DNN was used to model the relationship between noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. GMM was trained with mel-frequency cepstrum coefficients (MFCC) and the parameters were estimated with an iterative expectation-maximization (EM) algorithm. Noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method could achieve better objective speech quality and smaller distortion under stationary and non-stationary conditions.
Enhancement of plant metabolite fingerprinting by machine learning.
Scott, Ian M; Vermeer, Cornelia P; Liakata, Maria; Corol, Delia I; Ward, Jane L; Lin, Wanchang; Johnson, Helen E; Whitehead, Lynne; Kular, Baldeep; Baker, John M; Walsh, Sean; Dave, Anuja; Larson, Tony R; Graham, Ian A; Wang, Trevor L; King, Ross D; Draper, John; Beale, Michael H
2010-08-01
Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by (1)H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, (1)H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. Accessible software for use of ML in plant physiology is highlighted.
Differential correlation for sequencing data.
Siska, Charlotte; Kechris, Katerina
2017-01-19
Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data is in the form of counts, often modeled with a negative binomial distribution making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation, and faster implementation, using a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and improved ability to identify experimentally validated breast cancer miRNA. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data. Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicolae, A; Department of Physics, Ryerson University, Toronto, ON; Lu, L
Purpose: A novel, automated algorithm for permanent prostate brachytherapy (PPB) treatment planning has been developed. The novel approach uses machine learning (ML), a form of artificial intelligence, to substantially decrease planning time while retaining the clinical intuition of plans created by radiation oncologists. This study compares the ML algorithm against expert-planned PPB plans to evaluate the equivalency of dosimetric and clinical plan quality. Methods: Plan features were computed from historical high-quality PPB treatments (N = 100) and stored in a relational database (RDB). The ML algorithm matched new PPB features to a highly similar case in the RDB; this initial plan configuration was then further optimized using a stochastic search algorithm. PPB pre-plans (N = 30) generated using the ML algorithm were compared to plan variants created by an expert dosimetrist (RT) and a radiation oncologist (MD). Planning time and pre-plan dosimetry were evaluated using a one-way Student's t-test and ANOVA, respectively (significance level = 0.05). Clinical implant quality was evaluated by expert PPB radiation oncologists as part of a qualitative study. Results: Average planning time was 0.44 ± 0.42 min compared to 17.88 ± 8.76 min for the ML algorithm and the RT, respectively, a significant advantage [t(9), p = 0.01]. A post-hoc ANOVA [F(2,87) = 6.59, p = 0.002] using Tukey-Kramer criteria showed a significantly lower mean prostate V150% for the ML plans (52.9%) compared to the RT (57.3%) and MD (56.2%) plans. Preliminary qualitative study results indicate comparable clinical implant quality between RT and ML plans, with a trend towards preference for ML plans. Conclusion: PPB pre-treatment plans highly comparable to those of an expert radiation oncologist can be created using a novel ML planning model. The use of an ML-based planning approach is expected to translate into improved PPB accessibility and plan uniformity.
Fault Identification by Unsupervised Learning Algorithm
NASA Astrophysics Data System (ADS)
Nandan, S.; Mannu, U.
2012-12-01
Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover, such as cities, deserts and vegetation, and cannot capture changes in fault patterns with depth. Furthermore, it is difficult to estimate the structure of faults that do not generate any surface rupture; many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For a better seismic risk evaluation, it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from the three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the expectation maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events, and examine the difference between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to methodologies developed by Ouillion et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering, and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions: while the Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and with previously mapped faults.
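The sketch below illustrates the probabilistic (EM) variant of this idea on synthetic hypocenters: cluster 3-D event locations with a Gaussian mixture fitted by EM, then recover each cluster's plane orientation as the direction of smallest variance. It omits the filtering of isolated events and the validation against focal mechanisms; the simulated fault geometries are arbitrary.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def fault_plane(normal, n=300):
    normal = np.asarray(normal, float); normal /= np.linalg.norm(normal)
    u = np.cross(normal, [0.0, 0.0, 1.0]); u /= np.linalg.norm(u)   # in-plane direction
    v = np.cross(normal, u)                                          # second in-plane direction
    pts = rng.normal(size=(n, 2)) @ np.vstack([u, v]) * 5.0
    return pts + rng.normal(scale=0.2, size=(n, 3))                  # add location error

X = np.vstack([fault_plane([1, 1, 0]), fault_plane([1, -2, 0.5]) + [20, 0, 0]])
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)   # soft EM assignment

for k in range(2):
    pts = X[labels == k] - X[labels == k].mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    print(f"cluster {k}: estimated plane normal = {np.round(vt[-1], 2)}")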
Lovely, Lyndsay M; Chishti, Yasmin Z; Woodland, Jennifer L; Lalonde, Donald H
2018-05-01
Many surgeons and emergentologists use non-ultrasound-guided wrist nerve blocks. There is little evidence to guide the ideal volume of local anesthetic or how long we should wait after injection before performing pain-free procedures. This pilot study examined time to maximal anesthesia to painful needle stick in 14 volunteer participants receiving bilateral wrist blocks of 6 versus 11 mL of local anesthetic. One surgeon performed all 14 bilateral wrist median nerve blocks in participants who remained blinded until after bandages were applied to their wrists; no one could see which wrist received the larger 11-mL injection versus the 6-mL block. Blinded sensory assessors then measured perceived maximal numbness time and numbness to needle stick pain in the fingertips of the median nerve distribution. Failure to achieve a complete median nerve block occurred in seven of fourteen 6-mL wrist blocks versus only one of fourteen 11-mL blocks. Perceived maximal numbness occurred roughly 40 minutes after injection, but actual numbness to painful needle stick took around 100 minutes. Incomplete median nerve numbness occurred with both 6- and 11-mL non-ultrasound-guided blocks at the wrist. In those with complete blocks, it took a surprisingly long time of 100 minutes for maximal anesthesia to painful needle stick stimuli to develop in the fingertips of the median nerve distribution. Non-ultrasound-guided median nerve blocks at the wrist as described in this article lack reliability and take too long to work.
Heterogeneous responses of human limbs to infused adrenergic agonists: a gravitational effect?
NASA Technical Reports Server (NTRS)
Pawelczyk, James A.; Levine, Benjamin D.
2002-01-01
Unlike quadrupeds, the legs of humans are regularly exposed to elevated pressures relative to the arms. We hypothesized that this "dependent hypertension" would be associated with altered adrenergic responsiveness. Isoproterenol (0.75-24 ng x 100 ml limb volume-1 x min-1) and phenylephrine (0.025-0.8 microg x 100 ml limb volume-1 x min-1) were infused incrementally in the brachial and femoral arteries of 12 normal volunteers; changes in limb blood flow were quantified by using strain-gauge plethysmography. Compared with the forearm, baseline calf vascular resistance was greater (38.8 +/- 2.5 vs. 26.9 +/- 2.0 mmHg x 100 ml x min x ml-1; P < 0.001) and maximal conductance was lower (46.1 +/- 11.9 vs. 59.4 +/- 13.4 ml x ml-1 x min-1 x mmHg-1; P < 0.03). Vascular conductance did not differ between the two limbs during isoproterenol infusions, whereas decreases in vascular conductance were greater in the calf than the forearm during phenylephrine infusions (P < 0.001). With responses normalized to maximal conductance, the half-maximal response for phenylephrine was significantly less for the calf than the forearm (P < 0.001), whereas the half-maximal response for isoproterenol did not differ between limbs. We conclude that alpha1- but not beta-adrenergic-receptor responsiveness in human limbs is nonuniform. The relatively greater response to alpha1-adrenergic-receptor stimulation in the calf may represent an adaptive mechanism that limits blood pooling and capillary filtration in the legs during standing.
Challenges in the Verification of Reinforcement Learning Algorithms
NASA Technical Reports Server (NTRS)
Van Wesel, Perry; Goodloe, Alwyn E.
2017-01-01
Machine learning (ML) is increasingly being applied to a wide array of domains from search engines to autonomous vehicles. These algorithms, however, are notoriously complex and hard to verify. This work looks at the assumptions underlying machine learning algorithms as well as some of the challenges in trying to verify ML algorithms. Furthermore, we focus on the specific challenges of verifying reinforcement learning algorithms. These are highlighted using a specific example. Ultimately, we do not offer a solution to the complex problem of ML verification, but point out possible approaches for verification and interesting research opportunities.
NASA Astrophysics Data System (ADS)
Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas
2016-03-01
Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
McFalls, E O; Duncker, D J; Sassen, L M; Gho, B C; Verdouw, P D
1991-12-01
The effect of nifedipine (0.5, 1.0, and 2.0 micrograms/kg/min), metoprolol (0.1, 0.5, and 1.0 mg/kg), the beta 1-selective adrenoceptor partial agonist epanolol (10, 50, and 200 micrograms/kg), or equivalent volumes of isotonic saline (n = 6, in each group), on coronary blood flow capacity were studied in anesthetized swine. Intracoronary bolus injections of adenosine (20 micrograms/kg/0.2 ml) were administered without and during three levels of coronary stenosis, prior to and following each dose of drug, to obtain maximal coronary blood flows at different perfusion pressures in the autoregulatory range. Coronary perfusion pressures were varied by partial inflation of a balloon around the left anterior descending coronary artery. Special care was taken that the stenoses not lead to myocardial ischemia. Three indices of coronary blood flow capacity were used: absolute coronary flow reserve (ACFR, the ratio of maximal to resting coronary blood flow), the slope and the extrapolated pressure at zero flow (Pzf) of the pressure-maximal coronary flow (PMCF) relationship, and relative coronary flow reserve (RCFR, the ratio of maximal coronary blood flow with a stenosis to maximal coronary blood flow without a stenosis) at two of the three levels of stenosis. Nifedipine decreased ACFR from 4.5 +/- 1.9 to 1.9 +/- 0.3 (mean +/- SD; p less than 0.05), reflecting in part the increase in resting coronary blood flow. The nifedipine-induced changes in maximal coronary blood flow were not only due to a drop in perfusion pressure, as the slope of the PMCF relationship decreased from 2.27 +/- 0.49 ml/(min.mm Hg) to 1.54 +/- 0.51 ml/(min.mm Hg) (p less than 0.05), and Pzf decreased from 30 +/- 4 mm Hg to 20 +/- 7 mm Hg (p less than 0.05). Consequently, calculated maximal coronary blood flow was attenuated from 114 +/- 31 ml/min to 93 +/- 37 ml/min at 80 mm Hg, but was enhanced from 23 +/- 13 to 37 +/- 24 ml/min at 40 mm Hg coronary perfusion pressure. In concert with the change in the PMCF relationship, RCFR at equivalent severe stenosis increased from 0.33 +/- 0.06 to 0.47 +/- 0.10 (p less than 0.05). No changes were observed with metoprolol, epanolol, or saline. The effect of nifedipine on the PMCF relationship not only provides a mechanism for the drug's antiischemic action, but should also be considered in the interpretation of coronary flow reserve measurements in patients on nifedipine treatment.
[Imputation methods for missing data in educational diagnostic evaluation].
Fernández-Alonso, Rubén; Suárez-Álvarez, Javier; Muñiz, José
2012-02-01
In the diagnostic evaluation of educational systems, self-reports are commonly used to collect data, both cognitive and orectic. For various reasons, in these self-reports, some of the students' data are frequently missing. The main goal of this research is to compare the performance of different imputation methods for missing data in the context of the evaluation of educational systems. On an empirical database of 5,000 subjects, 72 conditions were simulated: three levels of missing data, three types of loss mechanisms, and eight methods of imputation. The levels of missing data were 5%, 10%, and 20%. The loss mechanisms were set at: Missing completely at random, moderately conditioned, and strongly conditioned. The eight imputation methods used were: listwise deletion, replacement by the mean of the scale, by the item mean, the subject mean, the corrected subject mean, multiple regression, and Expectation-Maximization (EM) algorithm, with and without auxiliary variables. The results indicate that the recovery of the data is more accurate when using an appropriate combination of different methods of recovering lost data. When a case is incomplete, the mean of the subject works very well, whereas for completely lost data, multiple imputation with the EM algorithm is recommended. The use of this combination is especially recommended when data loss is greater and its loss mechanism is more conditioned. Lastly, the results are discussed, and some future lines of research are analyzed.
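For the EM-based approach recommended above, a compact sketch is given below: EM estimation of the mean and covariance of a multivariate normal with missing values (conditional-mean fill-in plus the conditional-covariance correction in the E-step, re-estimation in the M-step), followed by imputing each missing entry with its conditional mean. The simulated data and missingness rate are illustrative and much simpler than the study's design.

import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 4
A = rng.normal(size=(d, d))
X_full = rng.normal(size=(n, d)) @ A.T                 # correlated complete data
X = X_full.copy()
X[rng.random((n, d)) < 0.2] = np.nan                   # 20% missing completely at random

mu = np.nanmean(X, axis=0)
sigma = np.cov(np.where(np.isnan(X), mu, X), rowvar=False)

for _ in range(50):                                    # EM iterations
    X_hat = np.where(np.isnan(X), mu, X)
    C_extra = np.zeros((d, d))
    for i in range(n):
        m = np.isnan(X[i]); o = ~m
        if not m.any():
            continue
        if not o.any():                                # fully missing row: use the marginal
            X_hat[i] = mu; C_extra += sigma
            continue
        Soo_inv = np.linalg.inv(sigma[np.ix_(o, o)])
        Smo = sigma[np.ix_(m, o)]
        X_hat[i, m] = mu[m] + Smo @ Soo_inv @ (X[i, o] - mu[o])          # conditional mean
        C_extra[np.ix_(m, m)] += sigma[np.ix_(m, m)] - Smo @ Soo_inv @ Smo.T
    mu = X_hat.mean(axis=0)                            # M-step
    sigma = np.cov(X_hat, rowvar=False, bias=True) + C_extra / n

rmse = np.sqrt(np.mean((X_hat - X_full)[np.isnan(X)] ** 2))
print(f"imputation RMSE on missing entries: {rmse:.3f}")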
An open source multivariate framework for n-tissue segmentation with evaluation on public data.
Avants, Brian B; Tustison, Nicholas J; Wu, Jue; Cook, Philip A; Gee, James C
2011-12-01
We introduce Atropos, an ITK-based multivariate n-class open source segmentation algorithm distributed with ANTs ( http://www.picsl.upenn.edu/ANTs). The Bayesian formulation of the segmentation problem is solved using the Expectation Maximization (EM) algorithm with the modeling of the class intensities based on either parametric or non-parametric finite mixtures. Atropos is capable of incorporating spatial prior probability maps (sparse), prior label maps and/or Markov Random Field (MRF) modeling. Atropos has also been efficiently implemented to handle large quantities of possible labelings (in the experimental section, we use up to 69 classes) with a minimal memory footprint. This work describes the technical and implementation aspects of Atropos and evaluates its performance on two different ground-truth datasets. First, we use the BrainWeb dataset from Montreal Neurological Institute to evaluate three-tissue segmentation performance via (1) K-means segmentation without use of template data; (2) MRF segmentation with initialization by prior probability maps derived from a group template; (3) Prior-based segmentation with use of spatial prior probability maps derived from a group template. We also evaluate Atropos performance by using spatial priors to drive a 69-class EM segmentation problem derived from the Hammers atlas from University College London. These evaluation studies, combined with illustrative examples that exercise Atropos options, demonstrate both performance and wide applicability of this new platform-independent open source segmentation tool.
ERIC Educational Resources Information Center
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two estimation algorithms of new usage, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two consolidated algorithms in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2017-01-15
Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.
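As a rough sketch of the clustering stage described above (not the authors' implementation), the code below fits Gaussian mixtures by EM to per-neuron feature vectors and chooses the number of clusters by BIC; the feature matrix standing in for MUSIC-derived features is a hypothetical placeholder.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical feature matrix: one row per recorded neuron,
# columns are features taken from a MUSIC-style pseudospectrum.
features = rng.normal(size=(120, 6))

best_k, best_bic, best_model = None, np.inf, None
for k in range(2, 8):                      # choose the number of groups by BIC
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          n_init=5, random_state=0).fit(features)
    bic = gmm.bic(features)
    if bic < best_bic:
        best_k, best_bic, best_model = k, bic, gmm

labels = best_model.predict(features)      # cluster label per neuron
print(f"selected {best_k} clusters", np.bincount(labels))
```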
Bernhardt, Paul W.; Zhang, Daowen; Wang, Huixia Judy
2014-01-01
Joint modeling techniques have become a popular strategy for studying the association between a response and one or more longitudinal covariates. Motivated by the GenIMS study, where it is of interest to model the event of survival using censored longitudinal biomarkers, a joint model is proposed for describing the relationship between a binary outcome and multiple longitudinal covariates subject to detection limits. A fast, approximate EM algorithm is developed that reduces the dimension of integration in the E-step of the algorithm to one, regardless of the number of random effects in the joint model. Numerical studies demonstrate that the proposed approximate EM algorithm leads to satisfactory parameter and variance estimates in situations with and without censoring on the longitudinal covariates. The approximate EM algorithm is applied to analyze the GenIMS data set. PMID:25598564
NASA Astrophysics Data System (ADS)
Wei, Chengying; Xiong, Cuilian; Liu, Huanlin
2017-12-01
A maximal multicast stream algorithm based on network coding (NC) can improve throughput in wavelength-division multiplexing (WDM) networks, but it still falls well short of the theoretical maximum network throughput, and existing multicast stream algorithms do not determine the stream distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize optical multicast throughput with NC and to determine the multicast stream distribution through hybrid chromosome construction for multicast with a single source and multiple destinations. The proposed hybrid chromosomes combine binary chromosomes, which encode the optical multicast routing, with integer chromosomes, which encode the multicast stream distribution. A fitness function is designed to guarantee that each destination receives the maximum number of decodable multicast streams. The simulation results show that the proposed method substantially outperforms typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.
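The hybrid-chromosome idea can be sketched roughly as follows. This is a generic genetic-algorithm skeleton, not the authors' algorithm: the network size, stream limits, and especially the fitness function are placeholders (a real fitness would check, per destination, how many network-coded streams are actually decodable over the routed light-paths).

```python
import numpy as np

rng = np.random.default_rng(0)
N_LINKS, N_DESTS, MAX_STREAMS = 24, 3, 4     # hypothetical WDM network size

def random_individual():
    routing = rng.integers(0, 2, size=N_LINKS)                 # binary chromosome: routing
    streams = rng.integers(1, MAX_STREAMS + 1, size=N_DESTS)   # integer chromosome: streams
    return routing, streams

def fitness(ind):
    routing, streams = ind
    # Placeholder objective: reward assigned streams, lightly penalize used links.
    return streams.sum() - 0.1 * routing.sum()

def crossover(a, b):
    (ra, sa), (rb, sb) = a, b
    cut = rng.integers(1, N_LINKS)
    return (np.concatenate([ra[:cut], rb[cut:]]),
            np.where(rng.random(N_DESTS) < 0.5, sa, sb))

def mutate(ind, p=0.05):
    r, s = ind
    r = np.where(rng.random(N_LINKS) < p, 1 - r, r)            # bit flips on routing
    s = s.copy()
    if rng.random() < p:                                       # resample one stream count
        s[rng.integers(N_DESTS)] = rng.integers(1, MAX_STREAMS + 1)
    return r, s

pop = [random_individual() for _ in range(40)]
for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:20]
    children = []
    while len(children) < 20:
        i, j = rng.integers(0, len(elite), size=2)
        children.append(mutate(crossover(elite[i], elite[j])))
    pop = elite + children

print("best fitness:", fitness(max(pop, key=fitness)))
```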
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
Extracting nonlinear dynamics from time-series data is an essential inverse problem in the natural sciences. We propose a Bayesian statistical framework for extracting the nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions involving the conjugation of multiple phases, and their dynamics are intrinsically nonlinear owing to surface-area effects between the different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to the partial-observation problem in order to simultaneously estimate the time course of the hidden variables and the kinetic parameters underlying the dynamics. The belief propagation step is carried out with a sequential Monte Carlo algorithm so that the nonlinear dynamical system can be estimated. Using the proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of the solid reactants and products, can be estimated from the observable temporal changes in the concentration of the dissolved intermediate product alone.
Toward automated assessment of health Web page quality using the DISCERN instrument.
Allam, Ahmed; Schulz, Peter J; Krauthammer, Michael
2017-05-01
As the Internet becomes the number one destination for obtaining health-related information, there is an increasing need to identify health Web pages that convey an accurate and current view of medical knowledge. In response, the research community has created multicriteria instruments for reliably assessing online medical information quality. One such instrument is DISCERN, which measures health Web page quality by assessing an array of features. In order to scale up use of the instrument, there is interest in automating the quality evaluation process by building machine learning (ML)-based DISCERN Web page classifiers. The paper addresses 2 key issues that are essential before constructing automated DISCERN classifiers: (1) generation of a robust DISCERN training corpus useful for training classification algorithms, and (2) assessment of the usefulness of the current DISCERN scoring schema as a metric for evaluating the performance of these algorithms. Using DISCERN, 272 Web pages discussing treatment options in breast cancer, arthritis, and depression were evaluated and rated by trained coders. First, different consensus models were compared to obtain a robust aggregated rating among the coders, suitable for a DISCERN ML training corpus. Second, a new DISCERN scoring criterion was proposed (features-based score) as an ML performance metric that is more reflective of the score distribution across different DISCERN quality criteria. First, we found that a probabilistic consensus model applied to the DISCERN instrument was robust against noise (random ratings) and superior to other approaches for building a training corpus. Second, we found that the established DISCERN scoring schema (overall score) is ill-suited to measure ML performance for automated classifiers. Use of a probabilistic consensus model is advantageous for building a training corpus for the DISCERN instrument, and use of a features-based score is an appropriate ML metric for automated DISCERN classifiers. The code for the probabilistic consensus model is available at https://bitbucket.org/A_2/em_dawid/ . © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com
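The linked repository name suggests a Dawid-Skene-style model; below is a minimal, generic sketch of that kind of EM consensus estimator for discrete ratings. It is not the authors' code, and the toy rating matrix and three-point scale are hypothetical.

```python
import numpy as np

def dawid_skene(ratings, n_classes, n_iter=100):
    """EM consensus for an items x coders matrix of discrete ratings in 0..K-1."""
    n_items, n_coders = ratings.shape
    # Initialize item-class posteriors from per-item vote proportions.
    T = np.zeros((n_items, n_classes))
    for k in range(n_classes):
        T[:, k] = (ratings == k).mean(axis=1)
    T += 1e-6
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and one confusion matrix per coder.
        prior = T.mean(axis=0)
        conf = np.zeros((n_coders, n_classes, n_classes))
        for c in range(n_coders):
            for l in range(n_classes):
                conf[c, :, l] = T[ratings[:, c] == l].sum(axis=0)
        conf += 1e-6
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: posterior over the true class of each item.
        logT = np.log(prior)[None, :].repeat(n_items, axis=0)
        for c in range(n_coders):
            logT += np.log(conf[c][:, ratings[:, c]]).T
        T = np.exp(logT - logT.max(axis=1, keepdims=True))
        T /= T.sum(axis=1, keepdims=True)
    return T, prior, conf

# Hypothetical example: 6 items rated on a 3-point scale by 4 coders.
R = np.array([[2, 2, 1, 2], [0, 0, 0, 1], [1, 1, 1, 1],
              [2, 1, 2, 2], [0, 1, 0, 0], [1, 2, 1, 1]])
posterior, prior, confusion = dawid_skene(R, n_classes=3)
print(posterior.argmax(axis=1))   # consensus label per item
```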
NASA Astrophysics Data System (ADS)
Chen, Siyue; Leung, Henry; Dondo, Maxwell
2014-05-01
As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
Chen, Po-Hao; Zafar, Hanna; Galperin-Aizenberg, Maya; Cook, Tessa
2018-04-01
A significant volume of medical data remains unstructured. Natural language processing (NLP) and machine learning (ML) techniques have been shown to extract insights from radiology reports successfully. However, the codependent effects of NLP and ML in this context have not been well studied. Between April 1, 2015 and November 1, 2016, 9418 cross-sectional abdomen/pelvis CT and MR examinations containing our internal structured reporting element for cancer were separated into four categories: Progression, Stable Disease, Improvement, or No Cancer. We combined each of three NLP techniques with five ML algorithms to predict the assigned label using the unstructured report text and compared the performance of each combination. The three NLP algorithms included term frequency-inverse document frequency (TF-IDF), term frequency weighting (TF), and 16-bit feature hashing. The ML algorithms included logistic regression (LR), random decision forest (RDF), one-vs-all support vector machine (SVM), one-vs-all Bayes point machine (BPM), and fully connected neural network (NN). The best-performing NLP model consisted of tokenized unigrams and bigrams with TF-IDF. Increasing N-gram length yielded little to no added benefit for most ML algorithms. With all parameters optimized, SVM had the best performance on the test dataset, with an average accuracy of 90.6% and an F score of 0.813. The interplay between ML and NLP algorithms and their effect on interpretation accuracy is complex. The best accuracy is achieved when both algorithms are optimized concurrently.
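A minimal sketch of the best-performing combination reported above (TF-IDF over unigrams and bigrams feeding a linear SVM) might look as follows; the report snippets and labels are placeholders, and scikit-learn's LinearSVC stands in for the one-vs-all SVM used in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical stand-ins for report text and the four structured labels.
reports = ["interval increase in hepatic metastases ...",
           "stable appearance of the pancreatic mass ...",
           "decreased size of peritoneal implants ...",
           "no evidence of malignancy ..."] * 25
labels = ["Progression", "Stable Disease", "Improvement", "No Cancer"] * 25

clf = Pipeline([
    # Tokenized unigrams and bigrams weighted by TF-IDF, as in the best model above.
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    # LinearSVC trains one-vs-rest linear SVMs; a stand-in for the study's SVM.
    ("svm", LinearSVC(C=1.0)),
])
print(cross_val_score(clf, reports, labels, cv=5).mean())
```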
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Foulley, Jean-Louis; Van Dyk, David A
2000-01-01
This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression models. PMID:14736399
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing approaches, packet classification algorithms based on the hierarchical trie have become an important research branch because of their wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). First, the packet classification problem is formalized by mapping the rules and data packets into a two-dimensional space. Second, the expectation-maximization algorithm is used to cluster the rules according to their aggregate characteristics, forming diversified clusters. Third, a hierarchical trie is built from the results of the expectation-maximization clustering. Finally, simulation experiments and real-environment experiments are conducted to compare the performance of the proposed algorithm with that of other typical algorithms, and the results are analyzed. The hierarchical trie structure in the proposed algorithm not only adopts trie path compression to eliminate backtracking but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm.
Comparison of methods for H*(10) calculation from measured LaBr3(Ce) detector spectra.
Vargas, A; Cornejo, N; Camp, A
2018-07-01
The Universitat Politecnica de Catalunya (UPC) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) have evaluated methods based on stripping, conversion coefficients and Maximum Likelihood Estimation using Expectation Maximization (ML-EM) in calculating the H*(10) rates from photon pulse-height spectra acquired with a spectrometric LaBr3(Ce) (1.5″ × 1.5″) detector. There is a good agreement between results of the different H*(10) rate calculation methods using the spectra measured at the UPC secondary standard calibration laboratory in Barcelona. From the outdoor study at ESMERALDA station in Madrid, it can be concluded that the analysed methods provide results quite similar to those obtained with the reference RSS ionization chamber. In addition, the spectrometric detectors can also facilitate radionuclide identification. Copyright © 2018 Elsevier Ltd. All rights reserved.
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
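A toy illustration of the general idea (injecting small, annealed noise into the centroid updates of Lloyd's k-means and comparing iteration counts) is sketched below. It is not the authors' formulation, and whether the noise actually helps depends on the data and the noise schedule.

```python
import numpy as np

def kmeans(X, k, noise_scale=0.0, max_iter=200, tol=1e-4, seed=0):
    """Lloyd's k-means with optional annealed Gaussian noise in the centroid update."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for it in range(max_iter):
        dist = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        shift = np.abs(new - centroids).max()
        # Annealed noise: decays with the iteration count so the fixed point is preserved.
        centroids = new + rng.normal(scale=noise_scale / (it + 1), size=new.shape)
        if shift < tol:
            return centroids, labels, it + 1
    return centroids, labels, max_iter

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.6, size=(200, 2)) for m in ([0, 0], [3, 3], [0, 4])])
for s in (0.0, 0.2):
    _, _, iters = kmeans(X, k=3, noise_scale=s)
    print(f"noise_scale={s}: stopped after {iters} iterations")
```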
Araujo, Pedro Henrique Xavier Nabuco de; Terra, Ricardo Mingarini; Santos, Thiago da Silva; Chate, Rodrigo Caruso; Paiva, Antonio Fernando Lins de; Pêgo-Fernandes, Paulo Manuel
2017-01-01
To evaluate the role of intrapleural positioning of a pleural catheter in early lung expansion and pleurodesis success in patients with recurrent malignant pleural effusion (RMPE). This was a retrospective study nested into a larger prospective cohort study including patients with RMPE recruited from a tertiary university teaching hospital between June of 2009 and September of 2014. The patients underwent pleural catheter insertion followed by bedside pleurodesis. Chest CT scans were performed twice: immediately before pleurodesis (iCT) and 30 days after pleurodesis (CT30). Catheter positioning was categorized based on iCT scans as posterolateral, anterior, fissural, and subpulmonary. We used the pleural volume on iCT scans to estimate early lung expansion and the difference between the pleural volumes on CT30 and iCT scans to evaluate radiological success of pleurodesis. Clinical pleurodesis success was defined as no need for any other pleural procedure. Of the 131 eligible patients from the original study, 85 were included in this nested study (64 women; mean age: 60.74 years). Catheter tip positioning was subpulmonary in 35 patients (41%), anterior in 23 (27%), posterolateral in 17 (20%), and fissural in 10 (12%). No significant differences were found among the groups regarding early lung expansion (median residual pleural cavity = 377 mL; interquartile range: 171-722 mL; p = 0.645), radiological success of pleurodesis (median volume = 33 mL; interquartile range: -225 to 257 mL; p = 0.923), and clinical success of pleurodesis (85.8%; p = 0.676). Our results suggest that the position of the tip of the pleural catheter influences neither early lung expansion nor bedside pleurodesis success in patients with RMPE.
Enhancement of Plant Metabolite Fingerprinting by Machine Learning
Scott, Ian M.; Vermeer, Cornelia P.; Liakata, Maria; Corol, Delia I.; Ward, Jane L.; Lin, Wanchang; Johnson, Helen E.; Whitehead, Lynne; Kular, Baldeep; Baker, John M.; Walsh, Sean; Dave, Anuja; Larson, Tony R.; Graham, Ian A.; Wang, Trevor L.; King, Ross D.; Draper, John; Beale, Michael H.
2010-01-01
Metabolite fingerprinting of Arabidopsis (Arabidopsis thaliana) mutants with known or predicted metabolic lesions was performed by 1H-nuclear magnetic resonance, Fourier transform infrared, and flow injection electrospray-mass spectrometry. Fingerprinting enabled processing of five times more plants than conventional chromatographic profiling and was competitive for discriminating mutants, other than those affected in only low-abundance metabolites. Despite their rapidity and complexity, fingerprints yielded metabolomic insights (e.g. that effects of single lesions were usually not confined to individual pathways). Among fingerprint techniques, 1H-nuclear magnetic resonance discriminated the most mutant phenotypes from the wild type and Fourier transform infrared discriminated the fewest. To maximize information from fingerprints, data analysis was crucial. One-third of distinctive phenotypes might have been overlooked had data models been confined to principal component analysis score plots. Among several methods tested, machine learning (ML) algorithms, namely support vector machine or random forest (RF) classifiers, were unsurpassed for phenotype discrimination. Support vector machines were often the best performing classifiers, but RFs yielded some particularly informative measures. First, RFs estimated margins between mutant phenotypes, whose relations could then be visualized by Sammon mapping or hierarchical clustering. Second, RFs provided importance scores for the features within fingerprints that discriminated mutants. These scores correlated with analysis of variance F values (as did Kruskal-Wallis tests, true- and false-positive measures, mutual information, and the Relief feature selection algorithm). ML classifiers, as models trained on one data set to predict another, were ideal for focused metabolomic queries, such as the distinctiveness and consistency of mutant phenotypes. Accessible software for use of ML in plant physiology is highlighted. PMID:20566707
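The reported agreement between random-forest importance scores and ANOVA F values can be illustrated with a small sketch like the one below; the synthetic fingerprint matrix is a stand-in for real spectral data.

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif

# Hypothetical stand-in for a fingerprint matrix: rows = plants, columns = spectral bins.
X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
f_vals, _ = f_classif(X, y)                       # per-feature ANOVA F values

rho, _ = spearmanr(rf.feature_importances_, f_vals)
print(f"rank correlation between RF importances and ANOVA F values: {rho:.2f}")
```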
AUC-Maximizing Ensembles through Metalearning.
LeDell, Erin; van der Laan, Mark J; Petersen, Maya
2016-05-01
Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
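A minimal sketch of the underlying idea (choosing ensemble weights by handing the cross-validated AUC to a general-purpose nonlinear optimizer) is shown below; it is not the Super Learner implementation discussed above, and the base learners, data, and Nelder-Mead choice are placeholders.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Hypothetical imbalanced binary problem.
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
base = [LogisticRegression(max_iter=1000), RandomForestClassifier(random_state=0), GaussianNB()]

# Cross-validated predictions of each base learner ("level-one" data).
Z = np.column_stack([cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
                     for m in base])

def neg_auc(w):
    w = np.abs(w) / np.abs(w).sum()            # project onto the weight simplex
    return -roc_auc_score(y, Z @ w)

res = minimize(neg_auc, x0=np.ones(Z.shape[1]) / Z.shape[1], method="Nelder-Mead")
w = np.abs(res.x) / np.abs(res.x).sum()
print("ensemble weights:", np.round(w, 3), "CV AUC:", round(-res.fun, 4))
```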
Influence of simulated weightlessness on maximal oxygen uptake of untrained rats
NASA Technical Reports Server (NTRS)
Overton, J. Michael; Tipton, Charles M.
1987-01-01
The purpose of this study was to determine the effect of hindlimb suspension on maximal oxygen uptake of rodents. Male Sprague-Dawley rats were assigned to head-down (HD) suspension, horizontal (HOZ) suspension, or cage (C) control for 6-9 days. Rats were tested for maximal oxygen uptake before and after surgical instrumentation (Doppler flow probes, carotid and jugular cannulae), and after suspension. Body weight was significantly decreased after suspension in both HD and HOZ groups, but was significantly increased in the C group. Absolute maximal O2 uptake (ml/min) was not different in the C group. However, because of their increased weight, relative maximal O2 uptake (ml/min per kg) was significantly reduced. In contrast, both relative and absolute maximal O2 uptake were significantly lower, following suspension, for the HD and HOZ groups. These preliminary results support the use of hindlimb suspension as an effective model to study the mechanism(s) of cardiovascular deconditioning.
A Model and Simple Iterative Algorithm for Redundancy Analysis.
ERIC Educational Resources Information Center
Fornell, Claes; And Others
1988-01-01
This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)
Lee, David; Park, Sang-Hoon; Lee, Sang-Goog
2017-10-07
In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
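A rough sketch of the GMM-supervector step might look as follows: a universal background model is fitted by EM to pooled frames, each trial's component means are adapted toward its own frames, and the stacked means feed an SVM. The feature sequences, relevance factor, and component count are hypothetical, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical wavelet/PCA feature sequences: one (frames x dims) array per trial.
trials = [rng.normal(size=(80, 6)) + (i % 2) * 0.3 for i in range(60)]
labels = np.array([i % 2 for i in range(60)])      # two motor-imagery classes

# 1) Universal background model trained by EM on the pooled frames.
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(np.vstack(trials))

def supervector(frames, relevance=16.0):
    """Relevance-MAP adaptation of the UBM means, stacked into one vector."""
    post = ubm.predict_proba(frames)               # frame-level responsibilities
    n_k = post.sum(axis=0)                         # soft counts per component
    e_k = (post.T @ frames) / np.maximum(n_k[:, None], 1e-8)
    alpha = (n_k / (n_k + relevance))[:, None]
    adapted = alpha * e_k + (1.0 - alpha) * ubm.means_
    return adapted.ravel()

# 2) One supervector per trial, then a linear SVM on a simple train/test split.
X = np.vstack([supervector(t) for t in trials])
clf = SVC(kernel="linear").fit(X[:40], labels[:40])
print("held-out accuracy:", clf.score(X[40:], labels[40:]))
```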
Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A
2009-06-01
In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.
Smith, Justin D; Borckardt, Jeffrey J; Nash, Michael R
2012-09-01
The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Mainly, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization Algorithm, more commonly known as the EM Procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. Copyright © 2011. Published by Elsevier Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nicolae, Alexandru; Department of Medical Physics, Odette Cancer Center, Sunnybrook Health Sciences Centre, Toronto, Ontario; Morton, Gerard
Purpose: This work presents the application of a machine learning (ML) algorithm to automatically generate high-quality, prostate low-dose-rate (LDR) brachytherapy treatment plans. The ML algorithm can mimic characteristics of preoperative treatment plans deemed clinically acceptable by brachytherapists. The planning efficiency, dosimetry, and quality (as assessed by experts) of preoperative plans generated with an ML planning approach was retrospectively evaluated in this study. Methods and Materials: Preimplantation and postimplantation treatment plans were extracted from 100 high-quality LDR treatments and stored within a training database. The ML training algorithm matches similar features from a new LDR case to those within the training database to rapidly obtain an initial seed distribution; plans were then further fine-tuned using stochastic optimization. Preimplantation treatment plans generated by the ML algorithm were compared with brachytherapist (BT) treatment plans in terms of planning time (Wilcoxon rank sum, α = 0.05) and dosimetry (1-way analysis of variance, α = 0.05). Qualitative preimplantation plan quality was evaluated by expert LDR radiation oncologists using a Likert scale questionnaire. Results: The average planning time for the ML approach was 0.84 ± 0.57 minutes, compared with 17.88 ± 8.76 minutes for the expert planner (P=.020). Preimplantation plans were dosimetrically equivalent to the BT plans; the average prostate V150% was 4% lower for ML plans (P=.002), although the difference was not clinically significant. Respondents ranked the ML-generated plans as equivalent to expert BT treatment plans in terms of target coverage, normal tissue avoidance, implant confidence, and the need for plan modifications. Respondents had difficulty differentiating between plans generated by a human or those generated by the ML algorithm. Conclusions: Prostate LDR preimplantation treatment plans that have equivalent quality to plans created by brachytherapists can be rapidly generated using ML. The adoption of ML in the brachytherapy workflow is expected to improve LDR treatment plan uniformity while reducing planning time and resources.
Shiota, T; Jones, M; Yamada, I; Heinrich, R S; Ishii, M; Sinclair, B; Holcomb, S; Yoganathan, A P; Sahn, D J
1996-02-01
The aim of the present study was to evaluate dynamic changes in aortic regurgitant (AR) orifice area with the use of calibrated electromagnetic (EM) flowmeters and to validate a color Doppler flow convergence (FC) method for evaluating effective AR orifice area and regurgitant volume. In 6 sheep, 8 to 20 weeks after surgically induced AR, 22 hemodynamically different states were studied. Instantaneous regurgitant flow rates were obtained by aortic and pulmonary EM flowmeters balanced against each other. Instantaneous AR orifice areas were determined by dividing these actual AR flow rates by the corresponding continuous wave velocities (over 25 to 40 points during each diastole) matched for each steady state. Echo studies were performed to obtain maximal aliasing distances of the FC in a low range (0.20 to 0.32 m/s) and a high range (0.70 to 0.89 m/s) of aliasing velocities; the corresponding maximal AR flow rates were calculated using the hemispheric flow convergence assumption for the FC isovelocity surface. AR orifice areas were derived by dividing the maximal flow rates by the maximal continuous wave Doppler velocities. AR orifice sizes obtained with the use of EM flowmeters showed little change during diastole. Maximal and time-averaged AR orifice areas during diastole obtained by EM flowmeters ranged from 0.06 to 0.44 cm2 (mean, 0.24 +/- 0.11 cm2) and from 0.05 to 0.43 cm2 (mean, 0.21 +/- 0.06 cm2), respectively. Maximal AR orifice areas by FC using low aliasing velocities overestimated reference EM orifice areas; however, at high AV, FC predicted the reference areas more reliably (0.25 +/- 0.16 cm2, r = .82, difference = 0.04 +/- 0.07 cm2). The product of the maximal orifice area obtained by the FC method using high AV and the velocity time integral of the regurgitant orifice velocity showed good agreement with regurgitant volumes per beat (r = .81, difference = 0.9 +/- 7.9 mL/beat). This study, using strictly quantified AR volume, demonstrated little change in AR orifice size during diastole. When high aliasing velocities are chosen, the FC method can be useful for determining effective AR orifice size and regurgitant volume.
Li, Si; Zhang, Jiahan; Krol, Andrzej; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin; Lipson, Edward; Feiglin, David; Xu, Yuesheng
2015-01-01
Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean square errors (MSEs), and report the convergence speed and computation time. Results: HOTV-PAPA yields the best signal-to-noise ratio, followed by TV-PAPA and TV-OSL/GPF-EM. The local spatial resolution of HOTV-PAPA is somewhat worse than that of TV-PAPA and TV-OSL. Images reconstructed using HOTV-PAPA have the lowest local noise power spectrum (LNPS) amplitudes, followed by TV-PAPA, TV-OSL, and GPF-EM. The LNPS peak of GPF-EM is shifted toward higher spatial frequencies than those for the three other methods. The PAPA-type methods exhibit much lower ensemble noise, ensemble voxel variance, and image roughness. HOTV-PAPA performs best in these categories. Whereas images reconstructed using both TV-PAPA and TV-OSL are degraded by severe staircase artifacts; HOTV-PAPA substantially reduces such artifacts. It also converges faster than the other three methods and exhibits the lowest overall reconstruction error level, as measured by MSE. Conclusions: For high-noise simulated SPECT data, HOTV-PAPA outperforms TV-PAPA, GPF-EM, and TV-OSL in terms of hot lesion detectability, noise suppression, MSE, and computational efficiency. Unlike TV-PAPA and TV-OSL, HOTV-PAPA does not create sizable staircase artifacts. Moreover, HOTV-PAPA effectively suppresses noise, with only limited loss of local spatial resolution. Of the four methods, HOTV-PAPA shows the best lesion detectability, thanks to its superior noise suppression. HOTV-PAPA shows promise for clinically useful reconstructions of low-dose SPECT data. PMID:26233214
Sun, Wanjie; Larsen, Michael D; Lachin, John M
2014-04-15
In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. It is particularly attractive when outcomes reach a plateau after intervention due to various reasons. Methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. Methods proved to be robust to high dimensions, large amounts of censored data, low within-subject correlation, and when subjects receive non-trial intervention to treat the underlying condition only (with high Y), or for treatment in the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.
Cui, Zaixu; Gong, Gaolang
2018-06-02
Individualized behavioral/cognitive prediction using machine learning (ML) regression approaches is becoming increasingly applied. The specific ML regression algorithm and sample size are two key factors that non-trivially influence prediction accuracies. However, the effects of the ML regression algorithm and sample size on individualized behavioral/cognitive prediction performance have not been comprehensively assessed. To address this issue, the present study included six commonly used ML regression algorithms: ordinary least squares (OLS) regression, least absolute shrinkage and selection operator (LASSO) regression, ridge regression, elastic-net regression, linear support vector regression (LSVR), and relevance vector regression (RVR), to perform specific behavioral/cognitive predictions based on different sample sizes. Specifically, the publicly available resting-state functional MRI (rs-fMRI) dataset from the Human Connectome Project (HCP) was used, and whole-brain resting-state functional connectivity (rsFC) or rsFC strength (rsFCS) were extracted as prediction features. Twenty-five sample sizes (ranged from 20 to 700) were studied by sub-sampling from the entire HCP cohort. The analyses showed that rsFC-based LASSO regression performed remarkably worse than the other algorithms, and rsFCS-based OLS regression performed markedly worse than the other algorithms. Regardless of the algorithm and feature type, both the prediction accuracy and its stability exponentially increased with increasing sample size. The specific patterns of the observed algorithm and sample size effects were well replicated in the prediction using re-testing fMRI data, data processed by different imaging preprocessing schemes, and different behavioral/cognitive scores, thus indicating excellent robustness/generalization of the effects. The current findings provide critical insight into how the selected ML regression algorithm and sample size influence individualized predictions of behavior/cognition and offer important guidance for choosing the ML regression algorithm or sample size in relevant investigations. Copyright © 2018 Elsevier Inc. All rights reserved.
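The kind of algorithm-by-sample-size comparison described above can be sketched as follows with scikit-learn regressors (RVR is omitted because it is not in scikit-learn); the synthetic features standing in for rsFC data and the chosen sub-sample sizes are placeholders.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Lasso, Ridge, ElasticNet
from sklearn.svm import LinearSVR
from sklearn.model_selection import cross_val_predict

# Hypothetical stand-in for rsFC features and one behavioral score.
X, y = make_regression(n_samples=700, n_features=300, n_informative=30,
                       noise=20.0, random_state=0)

models = {"OLS": LinearRegression(), "LASSO": Lasso(alpha=1.0),
          "ridge": Ridge(alpha=1.0), "elastic-net": ElasticNet(alpha=1.0),
          "LSVR": LinearSVR(max_iter=10000)}

rng = np.random.default_rng(0)
for n in (50, 200, 700):                          # sub-sample sizes
    idx = rng.choice(700, size=n, replace=False)
    for name, model in models.items():
        pred = cross_val_predict(model, X[idx], y[idx], cv=5)
        r = np.corrcoef(pred, y[idx])[0, 1]       # prediction accuracy as correlation
        print(f"n={n:4d} {name:12s} r={r:.2f}")
```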
Quantum speedup in solving the maximal-clique problem
NASA Astrophysics Data System (ADS)
Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang
2018-03-01
The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to O(√(2^n)) and O(n^2), respectively. With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.
2011-01-01
Background Autism spectrum disorders (ASD) are associated with complications of pregnancy that implicate fetal hypoxia (FH); the excess of ASD among males is poorly understood. We tested the hypothesis that risk of ASD is related to fetal hypoxia and investigated whether this effect is greater among males. Methods Provincial delivery records (PDR) identified the cohort of all 218,890 singleton live births in the province of Alberta, Canada, between 01-01-98 and 12-31-04. These were followed up for ASD via ICD-9 diagnostic codes assigned by physician billing until 03-31-08. Maternal and obstetric risk factors, including FH determined from blood tests of acidity (pH), were extracted from PDR. The binary FH status was missing in approximately half of subjects. Assuming that characteristics of mothers and pregnancies would be correlated with FH, we used an Expectation-Maximization (EM) algorithm to estimate the FH-ASD association, allowing for both missing-at-random (MAR) and specific not-missing-at-random (NMAR) mechanisms. Results Data indicated that there was excess risk of ASD among males who were hypoxic at birth, not materially affected by adjustment for potential confounding due to birth year and socio-economic status: OR 1.13, 95%CI: 0.96, 1.33 (MAR assumption). Limiting analysis to full-term males, the adjusted OR under specific NMAR assumptions spanned 95%CI of 1.0 to 1.6. Conclusion Our results are consistent with a weak effect of fetal hypoxia on risk of ASD among males. The EM algorithm is an efficient and flexible tool for modeling missing data in the studied setting. PMID:21208442
Hormonal responses to resistance exercise during different menstrual cycle states.
Nakamura, Yuki; Aizawa, Katsuji; Imai, Tomoko; Kono, Ichiro; Mesaki, Noboru
2011-06-01
To investigate the effect of menstrual cycle states on ovarian and anabolic hormonal responses to acute resistance exercise in young women. Eight healthy women (eumenorrhea; EM) and eight women with menstrual disorders including oligomenorrhea and amenorrhea (OAM) participated in this study. The EM group performed acute resistance exercises during the early follicular (EF) and midluteal (ML) phases, and the OAM group performed the same exercises. All subjects performed three sets each of lat pull-downs, leg curls, bench presses, leg extensions, and squats at 75%-80% of one-repetition maximum with a 1-min rest between sets. Blood samples were obtained before exercise, immediately after, 30 min after, and 60 min after the exercise. In the EM group, resting serum levels of estradiol and progesterone in the ML phase were higher than those in the EF phase and higher than those in the OAM group. Serum estradiol and progesterone in the ML phase increased after the exercise but did not change in the EF phase or in the OAM group. In contrast, resting levels of testosterone in the OAM group were higher than those in both the ML and EF phases of the EM group. After the exercise, serum growth hormone increased in both the ML and EF phases but did not change in the OAM group. The responses of anabolic hormones to acute resistance exercise are different among the menstrual cycle states in young women. Women with menstrual disturbances with low estradiol and progesterone serum levels have an attenuated anabolic hormone response to acute resistance exercise, suggesting that menstrual disorders accompanying low ovarian hormone levels may affect exercise-induced change in anabolic hormones in women.
NASA Astrophysics Data System (ADS)
Roche-Lima, Abiel; Thulasiram, Ruppa K.
2012-02-01
Finite automata in which each transition is augmented with an output label in addition to the familiar input label are known as finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on computing conditional probabilities, using techniques such as pair-database creation, normalization (maximum-likelihood normalization), and parameter optimization (expectation-maximization, EM). These techniques are intrinsically costly to compute, all the more so when applied to bioinformatics, where the databases are large. In this work, we describe a parallel implementation of an algorithm that learns conditional transducers using these techniques. The algorithm is oriented toward bioinformatics applications such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were run with the parallel and sequential algorithms on Westgrid (specifically, on the Breeze cluster). The results show that the parallel algorithm is scalable: compared with the sequential version, execution times are reduced considerably as the data size parameter is increased. A further experiment varied the precision parameter; in this case, the parallel algorithm again yielded smaller execution times. Finally, the number of threads used to run the parallel algorithm on the Breeze cluster was varied. In this last experiment, the speedup increased considerably with more threads, but it levelled off for 16 or more threads.
DIY Solar Market Analysis Webinar Series: PVWatts® | State, Local, and
Materials Discovery | Photovoltaic Research | NREL
Materials Discovery | Materials Science | NREL
Staff | Computational Science | NREL
Power Systems Design and Studies | Grid Modernization | NREL
Machine Learning and Computer Vision System for Phenotype Data Acquisition and Analysis in Plants.
Navarro, Pedro J; Pérez, Fernando; Weiss, Julia; Egea-Cortines, Marcos
2016-05-05
Phenomics is a technology-driven approach with a promising future for obtaining unbiased data on biological systems. Image acquisition is relatively simple; however, data handling and analysis are not as developed as the sampling capacities. We present a system based on machine learning (ML) algorithms and computer vision intended to solve automatic phenotype data analysis in plant material. We developed a growth chamber able to accommodate species of various sizes. Night image acquisition requires near-infrared lighting. For the ML process, we tested three different algorithms: k-nearest neighbour (kNN), Naive Bayes Classifier (NBC), and Support Vector Machine (SVM). Each ML algorithm was executed with different kernel functions and trained with raw data and two types of data normalisation. Different metrics were computed to determine the optimal configuration of the machine learning algorithms. We obtained a performance of 99.31% with kNN for RGB images and 99.34% with SVM for NIR images. Our results show that ML techniques can speed up phenomic data analysis. Furthermore, both RGB and NIR images can be segmented successfully but may require different ML algorithms for segmentation.
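A minimal sketch of the classification step (illustrative only, not the authors' pipeline): training kNN and SVM pixel classifiers with scikit-learn on labelled colour features. The data below are randomly generated stand-ins.

```python
# Train kNN and SVM classifiers to separate plant vs. background pixels.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# X: (n_pixels, 3) RGB (or NIR-derived) features, y: 0 = background, 1 = plant
X = np.random.rand(2000, 3)              # stand-in data for the sketch
y = (X[:, 1] > X[:, 0]).astype(int)      # toy label: "greener" pixels are plant

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10, gamma="scale"))

for name, clf in [("kNN", knn), ("SVM", svm)]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```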
Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models
NASA Astrophysics Data System (ADS)
Chu, A.
2014-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software that implements two of the ETAS models described in Ogata (1998). To find the maximum-likelihood estimates (MLEs), the software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the expectation-maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data-modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial regions that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions lead to similar issues. My program uses a robust method that presets a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model-fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. The Java language is used for the software's core computing part, and an optional interface to the statistical package R is provided.
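For reference, the ETAS conditional intensity that such software evaluates can be sketched as below. The Gaussian spatial kernel and the parameter names (mu, K0, alpha, c, p, d) are common choices and are assumptions here, not necessarily the exact parameterization used in the software.

```python
# Hedged sketch of an ETAS conditional intensity with a homogeneous background.
import numpy as np

def etas_intensity(t, x, y, events, mu, K0, alpha, c, p, d, m0=3.0):
    """Conditional intensity lambda(t, x, y) given past events (t_i, x_i, y_i, m_i)."""
    lam = mu                                                     # background rate
    for ti, xi, yi, mi in events:
        if ti >= t:
            continue
        productivity = K0 * np.exp(alpha * (mi - m0))            # kappa(m_i)
        temporal = (p - 1) / c * (1 + (t - ti) / c) ** (-p)      # Omori-Utsu density
        r2 = (x - xi) ** 2 + (y - yi) ** 2
        spatial = np.exp(-r2 / (2 * d ** 2)) / (2 * np.pi * d ** 2)  # Gaussian kernel
        lam += productivity * temporal * spatial
    return lam
```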
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in emission computed tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and with a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity, compared with images reconstructed using the expectation maximization (EM) algorithm with Gaussian post filter (GPF), and the mean square error (MSE) was smaller than with EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm of the non-decimated DCT wavelet frame transform is a promising regularizer for SPECT image reconstruction with the PAPA method.
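A hedged sketch of the regularization term, simplifying the non-decimated DCT wavelet frame to an ordinary 2D DCT for brevity; the weight is an illustrative parameter.

```python
# Illustrative l1-norm-of-DCT-coefficients penalty for a penalized-likelihood objective.
import numpy as np
from scipy.fft import dctn

def dct_l1_penalty(image, weight=0.01):
    """Penalty added to the negative log-likelihood in a PL reconstruction."""
    coeffs = dctn(image, norm="ortho")        # 2D DCT of the transaxial slice
    return weight * np.abs(coeffs).sum()
```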
Kim, Eun Young; Magnotta, Vincent A; Liu, Dawei; Johnson, Hans J
2014-09-01
Machine learning (ML)-based segmentation methods are a common technique in the medical image processing field. Although numerous research groups have investigated ML-based segmentation frameworks, aspects of performance variability related to the choice of two key components, the ML algorithm and the intensity normalization, remain unanswered. This investigation reveals that the choice of those elements plays a major part in determining segmentation accuracy and generalizability. The approach used in this study aims to evaluate the relative benefits of the two elements within a subcortical MRI segmentation framework. Experiments were conducted to contrast eight machine-learning algorithm configurations and 11 normalization strategies for our brain MR segmentation framework. For the intensity normalization, a Stable Atlas-based Mapped Prior (STAMP) was utilized to take better account of contrast along boundaries of structures. Comparing eight machine learning algorithms on down-sampled segmentation MR data, a significant improvement was obtained using ensemble-based ML algorithms (i.e., random forest) or ANN algorithms. Further investigation of these two algorithms also revealed that the random forest results provided exceptionally good agreement with manual delineations by experts. Additional experiments showed that STAMP-based intensity normalization also improved the robustness of segmentation for multicenter data sets. The constructed framework obtained good multicenter reliability and was successfully applied to a large multicenter MR data set (n>3000). Fewer than 10% of the automated segmentations were recommended for minimal expert intervention. These results demonstrate the feasibility of using ML-based segmentation tools for processing large amounts of multicenter MR images. We demonstrated dramatically different segmentation accuracy profiles according to the choice of ML algorithm and intensity normalization. Copyright © 2014 Elsevier Inc. All rights reserved.
High-Performance Algorithms and Complex Fluids | Computational Science |
Multimodal Event Detection in Twitter Hashtag Networks
Yilmaz, Yasin; Hero, Alfred O.
2016-07-01
In this study, event detection in a multimodal Twitter dataset is considered. We treat the hashtags in the dataset as instances with two modes: text and geolocation features. The text feature consists of a bag-of-words representation. The geolocation feature consists of geotags (i.e., geographical coordinates) of the tweets. Fusing the multimodal data, we aim to detect, in terms of topic and geolocation, the interesting events and the associated hashtags. To this end, a generative latent variable model is assumed, and a generalized expectation-maximization (EM) algorithm is derived to learn the model parameters. The proposed method is computationally efficient and lends itself to big datasets. Lastly, experimental results on a Twitter dataset from August 2014 show the efficacy of the proposed method.
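A compact sketch of the kind of generalized EM described above, under the simplifying assumptions that each latent event has a multinomial distribution over words and an isotropic Gaussian over geotags; this is an illustration, not the paper's exact model.

```python
# EM for a joint text + geolocation mixture over hashtags.
import numpy as np

def em_text_geo_mixture(W, G, K, n_iter=50, seed=0):
    """W: (N, V) bag-of-words counts per hashtag; G: (N, 2) mean geotag per hashtag."""
    rng = np.random.default_rng(seed)
    N, V = W.shape
    pi = np.full(K, 1.0 / K)                    # event priors
    beta = rng.dirichlet(np.ones(V), size=K)    # per-event word distributions
    mu = G[rng.choice(N, K, replace=False)]     # per-event geo means
    var = np.full(K, 1.0)                       # isotropic geo variances
    for _ in range(n_iter):
        # E-step: log responsibilities combining text and geo likelihoods
        log_r = np.log(pi)[None, :] + W @ np.log(beta).T      # multinomial term
        d2 = ((G[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        log_r = log_r - d2 / (2 * var)[None, :] - np.log(2 * np.pi * var)[None, :]
        log_r -= log_r.max(axis=1, keepdims=True)
        R = np.exp(log_r)
        R /= R.sum(axis=1, keepdims=True)
        # M-step
        Nk = R.sum(axis=0) + 1e-12
        pi = Nk / Nk.sum()
        beta = (R.T @ W) + 1e-3                 # small smoothing
        beta /= beta.sum(axis=1, keepdims=True)
        mu = (R.T @ G) / Nk[:, None]
        var = np.array([(R[:, k] * ((G - mu[k]) ** 2).sum(1)).sum() / (2 * Nk[k])
                        for k in range(K)]) + 1e-6
    return pi, beta, mu, var, R
```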
Sampling-based ensemble segmentation against inter-operator variability
NASA Astrophysics Data System (ADS)
Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew
2011-03-01
Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing the user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging, and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated in a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the base Otsu thresholding method applied individually (p<.001).
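Two of the three combination rules (majority voting and averaging) can be sketched as follows; the EM-based fusion is omitted here for brevity.

```python
# Fuse a set of binary segmentations produced from perturbed initializations.
import numpy as np

def fuse_segmentations(segs, method="vote"):
    """segs: (n_runs, H, W[, D]) binary masks from perturbed runs."""
    segs = np.asarray(segs, dtype=float)
    if method == "vote":                 # majority voting
        return (segs.sum(axis=0) > segs.shape[0] / 2).astype(np.uint8)
    if method == "average":              # average the soft masks, then threshold
        return (segs.mean(axis=0) >= 0.5).astype(np.uint8)
    raise ValueError(method)
```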
Leighton, B; Challiss, R A; Lozeman, F J; Newsholme, E A
1987-01-01
1. Rats were treated with dexamethasone for 4 days before measurement of the rates of lactate formation [which is an index of hexose transport; see Challiss, Lozeman, Leighton & Newsholme (1986) Biochem. J. 233, 377-381] and glycogen synthesis in response to various concentrations of insulin in isolated incubated soleus and extensor digitorum longus muscle preparations. 2. The concentration of insulin required to stimulate these processes half-maximally in soleus and extensor digitorum longus muscles isolated from control rats was about 100 μunits/ml. 3. Dexamethasone increases the concentration of insulin required to stimulate glycolysis half-maximally in soleus and extensor digitorum longus preparations to 250 and 300 μunits/ml respectively. The respective insulin concentrations necessary to stimulate glycogen synthesis half-maximally were about 430 and 370 μunits/ml for soleus and extensor digitorum longus muscle preparations isolated from steroid-treated rats. 5. Dexamethasone treatment did not change the amount of insulin bound to soleus muscle. PMID:3318810
Felt-Bersma, R J; Sloots, C E; Poen, A C; Cuesta, M A; Meuwissen, S G
2000-12-01
The clinical impact of rectal compliance and sensitivity measurement is not clear. The aim of this study was to measure the rectal compliance in different patient groups compared with controls and to establish the clinical effect of rectal compliance. Anorectal function tests were performed in 974 consecutive patients (284 men). Normal values were obtained from 24 controls. Rectal compliance measurement was performed by filling a latex rectal balloon with water at a rate of 60 ml per minute. Volume and intraballoon pressure were measured. Volume and pressure at three sensitivity thresholds were recorded for analysis: first sensation, urge, and maximal toleration. At maximal toleration, the rectal compliance (volume/pressure) was calculated. Proctoscopy, anal manometry, anal mucosal sensitivity, and anal endosonography were also performed as part of our anorectal function tests. No effect of age or gender was observed in either controls or patients. Patients with fecal incontinence had a higher volume at first sensation and a higher pressure at maximal toleration (P = 0.03), the presence of a sphincter defect or low or normal anal pressures made no difference. Patients with constipation had a larger volume at first sensation and urge (P < 0.0001 and P < 0.01). Patients with a rectocele had a larger volume at first sensation (P = 0.004). Patients with rectal prolapse did not differ from controls; after rectopexy, rectal compliance decreased (P < 0.0003). Patients with inflammatory bowel disease had a lower rectal compliance, most pronounced in active proctitis (P = 0.003). Patients with ileoanal pouches also had a lower compliance (P < 0.0001). In the 17 patients where a maximal toleration volume < 60 ml was found, 11 had complaints of fecal incontinence, and 6 had a stoma. In 31 patients a maximal toleration volume between 60 and 100 ml was found; 12 patients had complaints of fecal incontinence, and 6 had a stoma. Proctitis or pouchitis was the main cause for a small compliance. All 29 patients who had a maximal toleration volume > 500 ml had complaints of constipation. No correlation between rectal and anal mucosal sensitivity was found. Rectal compliance measurement with a latex balloon is easily feasible. In this series of 974 patients, some patient groups showed an abnormal rectal visceral sensitivity and compliance, but there was an overlap with controls. Rectal compliance measurement gave a good clinical impression about the contribution of the rectum to the anorectal problem. Patients with proctitis and pouchitis had the smallest rectal compliance. A maximal toleration volume < 60 ml always led to fecal incontinence, and stomas should be considered for such patients. A maximal toleration volume > 500 ml was only seen in constipated patients, and therapy should be given to prevent further damage to the pelvic floor. Values close to or within the normal range rule out the rectum as an important factor in the anorectal problem of the patient.
Spectral unmixing of urban land cover using a generic library approach
NASA Astrophysics Data System (ADS)
Degerickx, Jeroen; Lordache, Marian-Daniel; Okujeni, Akpona; Hermy, Martin; van der Linden, Sebastian; Somers, Ben
2016-10-01
Remote sensing based land cover classification in urban areas generally requires the use of subpixel classification algorithms to take into account the high spatial heterogeneity. These spectral unmixing techniques often rely on spectral libraries, i.e. collections of pure material spectra (endmembers, EM), which ideally cover the large EM variability typically present in urban scenes. Despite the advent of several (semi-)automated EM detection algorithms, the collection of such image-specific libraries remains a tedious and time-consuming task. As an alternative, we suggest the use of a generic urban EM library, containing material spectra under varying conditions, acquired from different locations and sensors. This approach requires an efficient EM selection technique, capable of selecting only those spectra relevant for a specific image. In this paper, we evaluate and compare the potential of different existing library pruning algorithms (Iterative Endmember Selection and MUSIC) using simulated hyperspectral (APEX) data of the Brussels metropolitan area. In addition, we develop a new hybrid EM selection method which is shown to be highly efficient in dealing with both image-specific and generic libraries, subsequently yielding more robust land cover classification results compared to existing methods. Future research will include further optimization of the proposed algorithm and additional tests on both simulated and real hyperspectral data.
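Once a pruned subset of library endmembers has been selected, the per-pixel unmixing step can be approximated as below: non-negative least squares followed by abundance normalization. The exact inversion used in the paper may differ; this is a hedged sketch.

```python
# Approximate fully constrained unmixing of one pixel against a pruned EM subset.
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(spectrum, endmembers):
    """spectrum: (bands,); endmembers: (bands, n_selected_EMs) pruned library."""
    abundances, _ = nnls(endmembers, spectrum)      # non-negativity constraint
    s = abundances.sum()
    return abundances / s if s > 0 else abundances  # approximate sum-to-one
```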
Maximum likelihood positioning algorithm for high-resolution PET scanners
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross-Weege, Nicolas, E-mail: nicolas.gross-weege@pmi.rwth-aachen.de, E-mail: schulz@pmi.rwth-aachen.de; Schug, David; Hallen, Patrick
2016-06-15
Purpose: In high-resolution positron emission tomography (PET), light-sharing elements are incorporated into typical detector stacks to read out scintillator arrays in which one scintillator element (crystal) is smaller than the size of the readout channel. In order to identify the hit crystal by means of the measured light distribution, a positioning algorithm is required. One commonly applied positioning algorithm uses the center of gravity (COG) of the measured light distribution. The COG algorithm is limited in spatial resolution by noise and intercrystal Compton scatter. The purpose of this work is to develop a positioning algorithm which overcomes this limitation. Methods: The authors present a maximum likelihood (ML) algorithm which compares a set of expected light distributions given by probability density functions (PDFs) with the measured light distribution. Instead of modeling the PDFs analytically, the PDFs of the proposed ML algorithm are generated from measured data assuming a single-gamma-interaction model. The algorithm was evaluated with a hot-rod phantom measurement acquired with the preclinical HYPERION II D PET scanner. In order to assess the performance with respect to sensitivity, energy resolution, and image quality, the ML algorithm was compared to a COG algorithm which calculates the COG from a restricted set of channels. The authors studied the energy resolution of the ML and COG algorithms for incomplete light distributions (missing channel information caused by detector dead time). Furthermore, the authors investigated the effects of a filter based on the likelihood values on sensitivity, energy resolution, and image quality. Results: A sensitivity gain of up to 19% was demonstrated in comparison to the COG algorithm for the selected operation parameters. Energy resolution and image quality were on a similar level for both algorithms. Additionally, the authors demonstrated that the performance of the ML algorithm is less prone to missing channel information. A likelihood filter visually improved the image quality, i.e., the peak-to-valley ratio increased by up to a factor of 3 for 2-mm-diameter phantom rods while rejecting 87% of the coincidences. A relative improvement of the energy resolution of up to 12.8% was also measured while rejecting 91% of the coincidences. Conclusions: The developed ML algorithm increases the sensitivity by correctly handling missing channel information without influencing energy resolution or image quality. Furthermore, the authors showed that energy resolution and image quality can be improved substantially by rejecting events that do not comply well with the single-gamma-interaction model, such as Compton-scattered events.
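An illustrative sketch of the ML positioning rule, assuming Poisson-distributed channel counts and per-crystal expected light distributions estimated from measured data; dead channels are simply skipped, and the resulting log-likelihood can feed a likelihood filter of the kind mentioned above.

```python
# Pick the crystal whose expected light distribution best explains the measurement.
import numpy as np

def ml_position(measured, expected, alive):
    """measured: (channels,); expected: (crystals, channels); alive: boolean mask."""
    q = measured[alive]
    best_crystal, best_ll = -1, -np.inf
    for k, mu in enumerate(expected):
        m = np.clip(mu[alive], 1e-9, None)
        ll = np.sum(q * np.log(m) - m)        # Poisson log-likelihood up to a constant
        if ll > best_ll:
            best_crystal, best_ll = k, ll
    return best_crystal, best_ll              # best_ll can drive a likelihood filter
```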
An Online Algorithm for Maximizing Submodular Functions
2007-12-20
Streeter, Matthew; Golovin, Daniel
CMU-CS-07-171, School of Computer Science
... dynamics of the social network are known. In theory, our online algorithms could be used to adapt a marketing campaign to unknown or time-varying social ...
Detecting community structure via the maximal sub-graphs and belonging degrees in complex networks
NASA Astrophysics Data System (ADS)
Cui, Yaozu; Wang, Xingyuan; Eustace, Justine
2014-12-01
Community structure is a common phenomenon in complex networks, and it has been shown that communities in complex networks often overlap each other. In this paper we propose a new algorithm to detect overlapping community structure in complex networks. To identify the overlapping community structure, our algorithm first extracts fully connected sub-graphs, which are maximal sub-graphs of the original network. Two maximal sub-graphs that share key pair-vertices can then be merged into a new, larger sub-graph using belonging degree functions. Furthermore, we extend the modularity function to evaluate the proposed algorithm. In addition, overlapping nodes between communities are found successfully. Finally, we compare the modularity and computational complexity of the proposed algorithm with those of some existing algorithms. The experimental results show that the proposed algorithm gives satisfactory results.
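A rough sketch of the first two stages (maximal-clique extraction and belonging-degree merging) using networkx; the specific merge criterion below is an assumption for illustration, not the paper's exact belonging-degree function.

```python
# Extract maximal cliques and greedily merge overlapping ones into communities.
import networkx as nx

def overlapping_communities(G, min_shared=2, threshold=0.5):
    cliques = [set(c) for c in nx.find_cliques(G) if len(c) >= 3]
    merged = True
    while merged:
        merged = False
        for i in range(len(cliques)):
            for j in range(i + 1, len(cliques)):
                shared = cliques[i] & cliques[j]
                # belonging degree (assumed): fraction of the smaller clique shared
                belonging = len(shared) / min(len(cliques[i]), len(cliques[j]))
                if len(shared) >= min_shared and belonging >= threshold:
                    cliques[i] = cliques[i] | cliques[j]
                    del cliques[j]
                    merged = True
                    break
            if merged:
                break
    return cliques
```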
Needs, Pains, and Motivations in Autonomous Agents.
Starzyk, Janusz A; Graham, James; Puzio, Leszek
This paper presents the development of a motivated learning (ML) agent with symbolic I/O. Our earlier work on the ML agent was enhanced, giving it autonomy for interaction with other agents. Specifically, we equipped the agent with drives and pains that establish its motivations to learn how to respond to desired and undesired events and create related abstract goals. The purpose of this paper is to explore the autonomous development of motivations and memory in agents within a simulated environment. The ML agent has been implemented in a virtual environment created within the NeoAxis game engine. Additionally, to illustrate the benefits of an ML-based agent, we compared the performance of our algorithm against various reinforcement learning (RL) algorithms in a dynamic test scenario, and demonstrated that our ML agent learns better than any of the tested RL agents.
A closed-form solution to tensor voting: theory and applications.
Wu, Tai-Pang; Yeung, Sai-Kit; Jia, Jiaya; Tang, Chi-Keung; Medioni, Gérard
2012-08-01
We prove a closed-form solution to tensor voting (CFTV): Given a point set in any dimensions, our closed-form solution provides an exact, continuous, and efficient algorithm for computing a structure-aware tensor that simultaneously achieves salient structure detection and outlier attenuation. Using CFTV, we prove the convergence of tensor voting on a Markov random field (MRF), thus termed as MRFTV, where the structure-aware tensor at each input site reaches a stationary state upon convergence in structure propagation. We then embed structure-aware tensor into expectation maximization (EM) for optimizing a single linear structure to achieve efficient and robust parameter estimation. Specifically, our EMTV algorithm optimizes both the tensor and fitting parameters and does not require random sampling consensus typically used in existing robust statistical techniques. We performed quantitative evaluation on its accuracy and robustness, showing that EMTV performs better than the original TV and other state-of-the-art techniques in fundamental matrix estimation for multiview stereo matching. The extensions of CFTV and EMTV for extracting multiple and nonlinear structures are underway.
Commowick, Olivier; Warfield, Simon K
2010-01-01
In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time-consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation is decreased. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures, or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading expert performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
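A compact sketch of a binary STAPLE-style EM with a MAP (beta-prior) update on the performance parameters, along the lines described above; the prior strength (a, b) and the handling of missing delineations via NaN entries are illustrative assumptions.

```python
# MAP-STAPLE sketch: D has shape (n_voxels, n_raters), entries in {0, 1},
# with NaN where a rater did not delineate that structure.
import numpy as np

def map_staple(D, prior=0.5, a=5.0, b=1.5, n_iter=50):
    n_vox, n_raters = D.shape
    p = np.full(n_raters, 0.9)   # sensitivities
    q = np.full(n_raters, 0.9)   # specificities
    for _ in range(n_iter):
        # E-step: posterior probability W_i that voxel i is truly foreground
        log_f = np.log(prior) * np.ones(n_vox)
        log_g = np.log(1 - prior) * np.ones(n_vox)
        for j in range(n_raters):
            seen = ~np.isnan(D[:, j])
            dj = D[seen, j]
            log_f[seen] += np.where(dj == 1, np.log(p[j]), np.log(1 - p[j]))
            log_g[seen] += np.where(dj == 0, np.log(q[j]), np.log(1 - q[j]))
        W = 1.0 / (1.0 + np.exp(log_g - log_f))
        # M-step: MAP update with Beta(a, b) pseudo-counts on p_j and q_j
        for j in range(n_raters):
            seen = ~np.isnan(D[:, j])
            w, dj = W[seen], D[seen, j]
            p[j] = (np.sum(w * (dj == 1)) + a - 1) / (np.sum(w) + a + b - 2)
            q[j] = (np.sum((1 - w) * (dj == 0)) + a - 1) / (np.sum(1 - w) + a + b - 2)
    return W, p, q
```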
Solar Accuracy to the 3/10000 Degree - Continuum Magazine | NREL
Maximal clique enumeration with data-parallel primitives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lessley, Brenton; Perciano, Talita; Mathai, Manish
The enumeration of all maximal cliques in an undirected graph is a fundamental problem arising in several research areas. We consider maximal clique enumeration on shared-memory, multi-core architectures and introduce an approach consisting entirely of data-parallel operations, in an effort to achieve efficient and portable performance across different architectures. We study the performance of the algorithm via experiments varying over benchmark graphs and architectures. Overall, we observe that our algorithm achieves up to a 33-times speedup and a 9-times speedup over state-of-the-art distributed and serial algorithms, respectively, for graphs with higher ratios of maximal cliques to total cliques. Further, we attain additional speedups on a GPU architecture, demonstrating the portable performance of our data-parallel design.
NASA Astrophysics Data System (ADS)
Yang, Chun-Chieh; Kim, Moon S.; Chuang, Yung-Kun; Lee, Hoyoung
2013-05-01
This paper reports the development of a multispectral algorithm, using the line-scan hyperspectral imaging system, to detect fecal contamination on leafy greens. Fresh bovine feces were applied to the surfaces of washed loose baby spinach leaves. A hyperspectral line-scan imaging system was used to acquire hyperspectral fluorescence images of the contaminated leaves. Hyperspectral image analysis resulted in the selection of the 666 nm and 688 nm wavebands for a multispectral algorithm to rapidly detect feces on leafy greens, by use of the ratio of fluorescence intensities measured at those two wavebands (666 nm over 688 nm). The algorithm successfully distinguished most of the lowly diluted fecal spots (0.05 g feces/ml water and 0.025 g feces/ml water) and some of the highly diluted spots (0.0125 g feces/ml water and 0.00625 g feces/ml water) from the clean spinach leaves. The results showed the potential of the multispectral algorithm with line-scan imaging system for application to automated food processing lines for food safety inspection of leafy green vegetables.
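The detection rule reduces to a per-pixel band ratio compared against a threshold; a minimal sketch follows, where the threshold value is illustrative rather than the calibrated one.

```python
# Flag pixels whose fluorescence ratio F(666 nm) / F(688 nm) exceeds a threshold.
import numpy as np

def detect_feces(band_666, band_688, threshold=1.05, eps=1e-6):
    """band_666, band_688: co-registered fluorescence images at the two wavebands."""
    ratio = band_666 / (band_688 + eps)
    return ratio > threshold        # boolean mask of suspected fecal contamination
```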
Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors
NASA Astrophysics Data System (ADS)
Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin
2014-03-01
One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation- Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example in the case where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilize an alternative prior derived from a patch driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods to complete the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria including accuracy, robustness, precision, and efficiency for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations comprise the simple ones, whereas a multilayer perceptron type neural network and a multiple imputation strategy based on expectation-maximization with Markov chain Monte Carlo (EM-MCMC) are the computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performances. Based on the detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series with respect to different missingness periods, considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results in meteorological time series.
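As an example of the simpler estimators compared above, the normal ratio (NR) method imputes a missing value as an average of neighbouring stations weighted by the ratio of long-term station means.

```python
# Normal ratio imputation for one missing observation at a target station.
import numpy as np

def normal_ratio_impute(target_mean, neighbour_means, neighbour_values):
    """neighbour_values: observations at the same time step from nearby stations."""
    neighbour_means = np.asarray(neighbour_means, dtype=float)
    neighbour_values = np.asarray(neighbour_values, dtype=float)
    weights = target_mean / neighbour_means          # NR weights
    return np.sum(weights * neighbour_values) / len(neighbour_values)
```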
Microgrids | Grid Modernization | NREL
Species Tree Inference Using a Mixture Model.
Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens
2015-09-01
Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either of these cases, one of the main confounding phenomena is the discordance between a species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: A two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330) as well as with a fast parsimony-based algorithm Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic analyses using gene tree parsimony. Bioinformatics 24(13):1540-1541). Our method is competitive with PHYLDOG in terms of accuracy and runs significantly faster and our method outperforms Duptree in accuracy. The analysis constituted by MixTreEM without DLRS may also be used for selecting the target species tree, yielding a fast and yet accurate algorithm for larger data sets. MixTreEM is freely available at http://prime.scilifelab.se/mixtreem/. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Receiver Diversity Combining Using Evolutionary Algorithms in Rayleigh Fading Channel
Akbari, Mohsen; Manesh, Mohsen Riahi
2014-01-01
In diversity combining at the receiver, the output signal-to-noise ratio (SNR) is often maximized by using the maximal ratio combining (MRC) provided that the channel is perfectly estimated at the receiver. However, channel estimation is rarely perfect in practice, which results in deteriorating the system performance. In this paper, an imperialistic competitive algorithm (ICA) is proposed and compared with two other evolutionary based algorithms, namely, particle swarm optimization (PSO) and genetic algorithm (GA), for diversity combining of signals travelling across the imperfect channels. The proposed algorithm adjusts the combiner weights of the received signal components in such a way that maximizes the SNR and minimizes the bit error rate (BER). The results indicate that the proposed method eliminates the need of channel estimation and can outperform the conventional diversity combining methods. PMID:25045725
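For reference, the MRC baseline against which the evolutionary combiners are compared can be sketched as follows, assuming perfect channel estimates.

```python
# Maximal ratio combining: weights are the channel conjugates (scaled by noise power).
import numpy as np

def mrc_combine(received, h, noise_var=1.0):
    """received: (branches,) received samples; h: (branches,) channel estimates."""
    weights = np.conj(h) / noise_var          # MRC weights
    combined = np.sum(weights * received)
    snr = np.sum(np.abs(h) ** 2) / noise_var  # post-combining SNR for a unit-power symbol
    return combined, snr
```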
Dynamic Task Optimization in Remote Diabetes Monitoring Systems.
Suh, Myung-Kyung; Woodbridge, Jonathan; Moin, Tannaz; Lan, Mars; Alshurafa, Nabil; Samy, Lauren; Mortazavi, Bobak; Ghasemzadeh, Hassan; Bui, Alex; Ahmadi, Sheila; Sarrafzadeh, Majid
2012-09-01
Diabetes is the seventh leading cause of death in the United States, but careful symptom monitoring can prevent adverse events. A real-time patient monitoring and feedback system is one of the solutions to help patients with diabetes and their healthcare professionals monitor health-related measurements and provide dynamic feedback. However, data-driven methods to dynamically prioritize and generate tasks are not well investigated in the domain of remote health monitoring. This paper presents a wireless health project (WANDA) that leverages sensor technology and wireless communication to monitor the health status of patients with diabetes. The WANDA dynamic task management function applies data analytics in real time to discretize continuous features, applying data clustering and association rule mining techniques to manage a sliding window size dynamically and to prioritize required user tasks. The developed algorithm minimizes the number of daily action items required by patients with diabetes using association rules that satisfy minimum support, confidence, and conditional probability thresholds. Each of these tasks maximizes information gain, thereby improving the overall level of patient adherence and satisfaction. Experimental results from applying EM-based clustering and Apriori algorithms show that the developed algorithm can predict further events with higher confidence levels and reduce the number of user tasks by up to 76.19%.
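A hedged sketch of the rule-selection idea, restricted to pairwise rules with illustrative thresholds; the actual system also uses EM-based clustering and the Apriori algorithm over a dynamically sized sliding window.

```python
# Keep pairwise association rules whose support and confidence exceed minimum thresholds.
from itertools import combinations

def mine_pairwise_rules(records, min_support=0.3, min_confidence=0.7):
    """records: list of sets of discretized items from the sliding window."""
    n = len(records)
    items = sorted({item for rec in records for item in rec})
    rules = []
    for a, b in combinations(items, 2):
        support_ab = sum(1 for rec in records if a in rec and b in rec) / n
        support_a = sum(1 for rec in records if a in rec) / n
        if support_a == 0:
            continue
        confidence = support_ab / support_a
        if support_ab >= min_support and confidence >= min_confidence:
            rules.append((a, b, support_ab, confidence))   # rule a -> b
    return rules   # high-confidence rules let redundant daily tasks be dropped
```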
Improved Understanding of In Situ Chemical Oxidation. Technical Objective 2: Soil Reactivity
2009-05-01
each reaction tube. The reagents (sulfuric acid/potassium dichromate/mercury sulfate) were prepared following standard methods (APHA, 1998). When ... reaction tubes. A volume (~20 mL) of potassium dichromate digestion solution (potassium dichromate (BDH Laboratories), sulfuric acid (EM Science) ... and mercury sulphate (EM Science)) and a sulfuric acid reagent solution (sulfuric acid (EM Science) and silver sulphate (Alfa Aesar)) were added to
Bone, Daniel; Bishop, Somer; Black, Matthew P.; Goodwin, Matthew S.; Lord, Catherine; Narayanan, Shrikanth S.
2016-01-01
Background: Machine learning (ML) provides novel opportunities for human behavior research and clinical translation, yet its application can have noted pitfalls (Bone et al., 2015). In this work, we fastidiously utilize ML to derive autism spectrum disorder (ASD) instrument algorithms in an attempt to improve upon widely-used ASD screening and diagnostic tools. Methods: The data consisted of Autism Diagnostic Interview-Revised (ADI-R) and Social Responsiveness Scale (SRS) scores for 1,264 verbal individuals with ASD and 462 verbal individuals with non-ASD developmental or psychiatric disorders (DD), split at age 10. Algorithms were created via a robust ML classifier, support vector machine (SVM), while targeting best-estimate clinical diagnosis of ASD vs. non-ASD. Parameter settings were tuned in multiple levels of cross-validation. Results: The created algorithms were more effective (higher performing) than current algorithms, were tunable (sensitivity and specificity can be differentially weighted), and were more efficient (achieving near-peak performance with five or fewer codes). Results from ML-based fusion of ADI-R and SRS are reported. We present a screener algorithm for below (above) age 10 that reached 89.2% (86.7%) sensitivity and 59.0% (53.4%) specificity with only five behavioral codes. Conclusions: ML is useful for creating robust, customizable instrument algorithms. In a unique dataset comprised of controls with other difficulties, our findings highlight limitations of current caregiver-report instruments and indicate possible avenues for improving ASD screening and diagnostic tools. PMID:27090613
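An illustrative sketch (not the published algorithm) of deriving a sparse, tunable screener with an SVM: select a small number of item codes and tune the margin parameter by cross-validation. The data below are random stand-ins.

```python
# Tune a five-code linear-SVM screener with cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# X: (n_children, n_item_codes) instrument codes; y: 1 = ASD, 0 = non-ASD
X, y = np.random.rand(300, 40), np.random.randint(0, 2, 300)   # stand-in data

pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=5)),       # keep only five codes
    ("svm", SVC(kernel="linear", class_weight="balanced")),
])
grid = GridSearchCV(pipe, {"svm__C": [0.1, 1, 10]},
                    cv=StratifiedKFold(5), scoring="balanced_accuracy")
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```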
Bottom-up GGM algorithm for constructing multiple layered hierarchical gene regulatory networks
USDA-ARS?s Scientific Manuscript database
Multilayered hierarchical gene regulatory networks (ML-hGRNs) are very important for understanding genetics regulation of biological pathways. However, there are currently no computational algorithms available for directly building ML-hGRNs that regulate biological pathways. A bottom-up graphic Gaus...
Classification Comparisons Between Compact Polarimetric and Quad-Pol SAR Imagery
NASA Astrophysics Data System (ADS)
Souissi, Boularbah; Doulgeris, Anthony P.; Eltoft, Torbjørn
2015-04-01
Recent interest in dual-pol SAR systems has led to a novel approach, the so-called compact polarimetric (CP) imaging mode, which attempts to reconstruct fully polarimetric information based on a few simple assumptions. In this work, the CP image is simulated from the full quad-pol (QP) image. We present here an initial comparison of the polarimetric information content of the QP and CP imaging modes. The analysis of multi-look polarimetric covariance matrix data uses an automated statistical clustering method based upon the expectation-maximization (EM) algorithm for finite mixture modeling, using the complex Wishart probability density function. Our results show that there are some different characteristics between the QP and CP modes. The classification is demonstrated using an E-SAR and a Radarsat-2 polarimetric SAR image acquired over DLR Oberpfaffenhofen in Germany and Algiers in Algeria, respectively.
NASA Astrophysics Data System (ADS)
Hutton, Brian F.; Lau, Yiu H.
1998-06-01
Compensation for distance-dependent resolution can be directly incorporated in maximum likelihood reconstruction. Our objective was to examine the effectiveness of this compensation using either the standard expectation maximization (EM) algorithm or an accelerated algorithm based on the use of ordered subsets (OSEM). We also investigated the application of post-reconstruction filtering in combination with resolution compensation. Using the MCAT phantom, projections were simulated for data including attenuation and distance-dependent resolution. Projection data were reconstructed using conventional EM and OSEM with subset sizes 2 and 4, with/without 3D compensation for detector response (CDR). Post-reconstruction filtering (PRF) was also performed using a 3D Butterworth filter of order 5 with various cutoff frequencies (0.2-). Image quality and reconstruction accuracy were improved when CDR was included. Image noise was lower with CDR for a given iteration number. PRF with a cutoff frequency above a threshold improved noise with no reduction in the recovery coefficient for myocardium, but the effect was smaller when CDR was incorporated in the reconstruction. CDR alone provided better results than use of PRF without CDR. Results suggest that using CDR without PRF, and stopping at a small number of iterations, may provide sufficiently good results for myocardial SPECT. Similar behaviour was demonstrated for OSEM.
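For reference, the basic EM (MLEM) update used in this comparison can be sketched as follows; detector-response compensation would enter through the system matrix, and OSEM applies the same update to ordered subsets of the projection rows.

```python
# MLEM reconstruction: multiplicative update of the activity estimate.
import numpy as np

def mlem(A, proj, n_iter=20):
    """A: (n_bins, n_voxels) system matrix; proj: measured projections (n_bins,)."""
    x = np.ones(A.shape[1])                 # uniform initial estimate
    sens = A.T @ np.ones(A.shape[0])        # sensitivity image (column sums)
    for _ in range(n_iter):
        expected = A @ x
        ratio = proj / np.clip(expected, 1e-12, None)
        x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
    return x
```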
Development of a Wireless Computer Vision Instrument to Detect Biotic Stress in Wheat
Casanova, Joaquin J.; O'Shaughnessy, Susan A.; Evett, Steven R.; Rush, Charles M.
2014-01-01
Knowledge of crop abiotic and biotic stress is important for optimal irrigation management. While spectral reflectance and infrared thermometry provide a means to quantify crop stress remotely, these measurements can be cumbersome. Computer vision offers an inexpensive way to remotely detect crop stress independent of vegetation cover. This paper presents a technique using computer vision to detect disease stress in wheat. Digital images of differentially stressed wheat were segmented into soil and vegetation pixels using expectation maximization (EM). In the first season, the algorithm to segment vegetation from soil and distinguish between healthy and stressed wheat was developed and tested using digital images taken in the field and later processed on a desktop computer. In the second season, a wireless camera with near real-time computer vision capabilities was tested in conjunction with the conventional camera and desktop computer. For wheat irrigated at different levels and inoculated with wheat streak mosaic virus (WSMV), vegetation hue determined by the EM algorithm showed significant effects from irrigation level and infection. Unstressed wheat had a higher hue (118.32) than stressed wheat (111.34). In the second season, the hue and cover measured by the wireless computer vision sensor showed significant effects from infection (p = 0.0014), as did the conventional camera (p < 0.0001). Vegetation hue obtained through a wireless computer vision system in this study is a viable option for determining biotic crop stress in irrigation scheduling. Such a low-cost system could be suitable for use in the field in automated irrigation scheduling applications. PMID:25251410
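A minimal sketch of the segmentation idea: fit a two-component Gaussian mixture (EM) to pixel hue and treat the higher-hue component as vegetation; the component-selection rule here is an assumption for illustration.

```python
# Segment vegetation from soil by EM clustering of pixel hue, then report mean hue.
import numpy as np
import colorsys
from sklearn.mixture import GaussianMixture

def vegetation_hue(rgb_image):
    """rgb_image: (H, W, 3) array with values in [0, 1]."""
    pixels = rgb_image.reshape(-1, 3)
    hue = np.array([colorsys.rgb_to_hsv(*p)[0] * 360 for p in pixels])  # degrees
    gmm = GaussianMixture(n_components=2, random_state=0).fit(hue.reshape(-1, 1))
    labels = gmm.predict(hue.reshape(-1, 1))
    veg = labels == np.argmax(gmm.means_.ravel())   # assume higher-hue cluster is plant
    return hue[veg].mean(), veg.mean()              # mean vegetation hue, cover fraction
```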
A Genetic-Based Scheduling Algorithm to Minimize the Makespan of the Grid Applications
NASA Astrophysics Data System (ADS)
Entezari-Maleki, Reza; Movaghar, Ali
Task scheduling algorithms in grid environments strive to maximize the overall throughput of the grid. In order to maximize the throughput of grid environments, the makespan of the grid tasks should be minimized. In this paper, a new task scheduling algorithm is proposed to assign tasks to grid resources with the goal of minimizing the total makespan of the tasks. The algorithm uses a genetic approach to find a suitable assignment within the grid resources. The experimental results obtained from applying the proposed algorithm to schedule independent tasks within grid environments demonstrate the applicability of the algorithm in achieving schedules with comparatively lower makespan than other well-known scheduling algorithms such as Min-min, Max-min, RASA, and Sufferage.
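A compact sketch of a genetic assignment of independent tasks to resources that minimizes makespan; the chromosome encoding, operators, and rates below are illustrative choices, not the paper's exact configuration.

```python
# Genetic algorithm: chromosomes map each task to a resource; fitness is makespan.
import numpy as np

def ga_schedule(exec_time, pop=50, gens=200, pm=0.1, seed=0):
    """exec_time[i, j] = time of task i on resource j; returns best assignment.
    `pop` is assumed even so the elite half is exactly replaced by offspring."""
    rng = np.random.default_rng(seed)
    n_tasks, n_res = exec_time.shape
    P = rng.integers(0, n_res, size=(pop, n_tasks))           # initial population

    def makespan(ind):
        loads = np.zeros(n_res)
        for t, r in enumerate(ind):
            loads[r] += exec_time[t, r]
        return loads.max()

    for _ in range(gens):
        fit = np.array([makespan(ind) for ind in P])
        P = P[np.argsort(fit)]                                # elitist sort
        children = []
        while len(children) < pop // 2:
            a, b = P[rng.integers(0, pop // 2, size=2)]       # parents from best half
            cut = rng.integers(1, n_tasks)
            child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
            mask = rng.random(n_tasks) < pm                   # mutation
            child[mask] = rng.integers(0, n_res, size=mask.sum())
            children.append(child)
        P[pop // 2:] = np.array(children)
    fit = np.array([makespan(ind) for ind in P])
    return P[np.argmin(fit)], fit.min()
```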
Nonlinear Semi-Supervised Metric Learning Via Multiple Kernels and Local Topology.
Li, Xin; Bai, Yanqin; Peng, Yaxin; Du, Shaoyi; Ying, Shihui
2018-03-01
Changing the metric on the data may change the data distribution, hence a good distance metric can promote the performance of a learning algorithm. In this paper, we address the semi-supervised distance metric learning (ML) problem to obtain the best nonlinear metric for the data. First, we describe the nonlinear metric by a multiple kernel representation. By this approach, we project the data into a high-dimensional space where the data can be well represented by linear ML. Then, we reformulate linear ML as a minimization problem on the positive definite matrix group. Finally, we develop a two-step algorithm for solving this model and design an intrinsic steepest descent algorithm to learn the positive definite metric matrix. Experimental results validate that our proposed method is effective and outperforms several state-of-the-art ML methods.
Quantum machine learning: a classical perspective
NASA Astrophysics Data System (ADS)
Ciliberto, Carlo; Herbster, Mark; Ialongo, Alessandro Davide; Pontil, Massimiliano; Rocchetto, Andrea; Severini, Simone; Wossnig, Leonard
2018-01-01
Recently, increased computational power and data availability, as well as algorithmic advances, have led machine learning (ML) techniques to impressive results in regression, classification, data generation and reinforcement learning tasks. Despite these successes, the proximity to the physical limits of chip fabrication alongside the increasing size of datasets is motivating a growing number of researchers to explore the possibility of harnessing the power of quantum computation to speed up classical ML algorithms. Here we review the literature in quantum ML and discuss perspectives for a mixed readership of classical ML and quantum computation experts. Particular emphasis will be placed on clarifying the limitations of quantum algorithms, how they compare with their best classical counterparts and why quantum resources are expected to provide advantages for learning problems. Learning in the presence of noise and certain computationally hard problems in ML are identified as promising directions for the field. Practical questions, such as how to upload classical data into quantum form, will also be addressed.
Advanced Fast 3-D Electromagnetic Solver for Microwave Tomography Imaging.
Simonov, Nikolai; Kim, Bo-Ra; Lee, Kwang-Jae; Jeon, Soon-Ik; Son, Seong-Ho
2017-10-01
This paper describes a fast forward electromagnetic solver (FFS) for the image reconstruction algorithm of our microwave tomography system. Our apparatus is a preclinical prototype of a biomedical imaging system, designed for the purpose of early breast cancer detection. It operates in the 3-6-GHz frequency band using a circular array of probe antennas immersed in a matching liquid; it produces image reconstructions of the permittivity and conductivity profiles of the breast under examination. Our reconstruction algorithm solves the electromagnetic (EM) inverse problem and takes into account the real EM properties of the probe antenna array as well as the influence of the patient's body and that of the upper metal screen sheet. This FFS algorithm is much faster than conventional EM simulation solvers. For comparison, on the same PC, the CST solver takes ~45 min, while the FFS takes ~1 s of effective simulation time for the same EM model of a numerical breast phantom.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP reduced the time required to compute EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
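The alternating structure described above can be illustrated on a toy, fully irreversible network as follows. The integer-programming step is replaced by a brute-force minimal hitting set, and the LP vertex serves as a proxy for an elementary mode under a sum-to-one normalization, so this is a sketch of the idea, not the authors' AILP implementation; the network and all names are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def find_flux_mode(S, deleted):
    """LP step: a nonzero steady-state flux (S v = 0, v >= 0, sum v = 1) with the
    given reactions forced to zero. A basic (vertex) solution is used as a proxy
    for an elementary mode in this toy, fully irreversible setting."""
    n = S.shape[1]
    bounds = [(0, 0) if j in deleted else (0, None) for j in range(n)]
    A_eq = np.vstack([S, np.ones(n)])
    b_eq = np.append(np.zeros(S.shape[0]), 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs-ds")
    return res.x if res.success else None

def minimal_hitting_set(supports, n):
    """Stand-in for the IP step: smallest reaction set intersecting the support
    of every mode found so far (brute force, fine for a toy network)."""
    for k in range(1, n + 1):
        for cand in combinations(range(n), k):
            if all(set(cand) & s for s in supports):
                return set(cand)
    return None

def ailp_like(S, max_modes=5, tol=1e-9):
    modes, supports, deleted = [], [], set()
    while len(modes) < max_modes:
        v = find_flux_mode(S, deleted)
        if v is None:        # infeasible: 'deleted' plays the role of a minimal cut set
            break
        modes.append(v)
        supports.append({j for j in range(len(v)) if v[j] > tol})
        deleted = minimal_hitting_set(supports, S.shape[1])
    return modes

# toy network: one metabolite produced by r0, r1 and consumed by r2, r3
S = np.array([[1.0, 1.0, -1.0, -1.0]])
print([m.round(3) for m in ailp_like(S)])
```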
Measurement and Structural Model Class Separation in Mixture CFA: ML/EM versus MCMC
ERIC Educational Resources Information Center
Depaoli, Sarah
2012-01-01
Parameter recovery was assessed within mixture confirmatory factor analysis across multiple estimator conditions under different simulated levels of mixture class separation. Mixture class separation was defined in the measurement model (through factor loadings) and the structural model (through factor variances). Maximum likelihood (ML) via the…
Biceps brachii muscle oxygenation in electrical muscle stimulation.
Muthalib, Makii; Jubeau, Marc; Millet, Guillaume Y; Maffiuletti, Nicola A; Ferrari, Marco; Nosaka, Kazunori
2010-09-01
The purpose of this study was to compare electrical muscle stimulation (EMS) and maximal voluntary (VOL) isometric contractions of the elbow flexors in terms of changes in biceps brachii muscle oxygenation (tissue oxygenation index, TOI) and haemodynamics (total haemoglobin volume, tHb = oxygenated-Hb + deoxygenated-Hb) determined by near-infrared spectroscopy (NIRS). The biceps brachii muscle of 10 healthy men (23-39 years) was electrically stimulated at high frequency (75 Hz) via surface electrodes to evoke 50 intermittent (4-s contraction, 15-s relaxation) isometric contractions at maximum tolerated current level (EMS session). The contralateral arm performed 50 intermittent (4-s contraction, 15-s relaxation) maximal voluntary isometric contractions (VOL session) in a counterbalanced order separated by 2-3 weeks. Results indicated that although the torque produced during EMS was approximately 50% of VOL (P<0.05), there was no significant difference in the changes in TOI amplitude or TOI slope between EMS and VOL over the 50 contractions. However, the TOI amplitude divided by peak torque was approximately 50% lower for EMS than VOL (P<0.05), which indicates EMS was less efficient than VOL. This is likely due to the difference in the muscles involved in force production between the two conditions. Mean decrease in tHb amplitude during the contraction phases was significantly (P<0.05) greater for EMS than VOL from the 10th contraction onwards, suggesting that the muscle blood volume was lower in EMS than VOL. It is concluded that local oxygen demand of the biceps brachii sampled by NIRS is similar between VOL and EMS.
Matkowski, Boris; Lepers, Romuald; Martin, Alain
2015-05-01
The aim of this study was to analyze the neuromuscular mechanisms involved in the torque decrease induced by submaximal electromyostimulation (EMS) of the quadriceps muscle. It was hypothesized that torque decrease after EMS would reflect the fatigability of the activated motor units (MUs), but also a reduction in the number of MUs recruited as a result of changes in axonal excitability threshold. Two experiments were performed on 20 men to analyze 1) the supramaximal twitch superimposed and evoked at rest during EMS (Experiment 1, n = 9) and 2) the twitch response and torque-frequency relation of the MUs activated by EMS (Experiment 2, n = 11). Torque loss was assessed by 15 EMS-evoked contractions (50 Hz; 6 s on/6 s off), elicited at a constant intensity that evoked 20% of the maximal voluntary contraction (MVC) torque. The same stimulation intensity delivered over the muscles was used to induce the torque-frequency relation and the single electrical pulse evoked after each EMS contraction (Experiment 2). In Experiment 1, supramaximal twitch was induced by femoral nerve stimulation. Torque decreased by ~60% during EMS-evoked contractions and by only ~18% during MVCs. This was accompanied by a rightward shift of the torque-frequency relation of MUs activated and an increase of the ratio between the superimposed and posttetanic maximal twitch evoked during EMS contraction. These findings suggest that the torque decrease observed during submaximal EMS-evoked contractions involved muscular mechanisms but also a reduction in the number of MUs recruited due to changes in axonal excitability. Copyright © 2015 the American Physiological Society.
ERIC Educational Resources Information Center
Chen, Ping
2017-01-01
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
Menze, Bjoern H.; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-André; Székely, Gabor; Ayache, Nicholas; Golland, Polina
2016-01-01
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as “tumor core” or “fluid-filled structure”, but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation. PMID:26599702
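For orientation, the "EM segmenter" that this model generalizes can be reduced to the minimal sketch below: an atlas-weighted Gaussian mixture with closed-form EM updates per tissue class. The lesion latent atlas and channel-wise extensions of the paper are not included, and the toy data and names are assumptions.

```python
import numpy as np

def em_segment(intensities, atlas_prior, n_iter=50):
    """Minimal EM for an atlas-weighted Gaussian mixture.
    intensities: (V,) voxel intensities; atlas_prior: (V, K) per-voxel class
    priors (rows sum to 1). Returns posteriors, class means and variances."""
    V, K = atlas_prior.shape
    mu = np.linspace(intensities.min(), intensities.max(), K)
    var = np.full(K, intensities.var())
    for _ in range(n_iter):
        # E-step: posterior label probabilities given the current Gaussians
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        post = atlas_prior * lik
        post /= post.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of class means and variances
        w = post.sum(axis=0)
        mu = (post * intensities[:, None]).sum(axis=0) / w
        var = (post * (intensities[:, None] - mu) ** 2).sum(axis=0) / w
    return post, mu, var

# toy 1-D example: two tissue classes with a flat atlas prior
rng = np.random.default_rng(0)
vox = np.concatenate([rng.normal(30, 5, 500), rng.normal(70, 5, 500)])
prior = np.full((1000, 2), 0.5)
post, mu, var = em_segment(vox, prior)
```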
The E-Step of the MGROUP EM Algorithm. Program Statistics Research Technical Report No. 93-37.
ERIC Educational Resources Information Center
Thomas, Neal
Mislevy (1984, 1985) introduced an EM algorithm for estimating the parameters of a latent distribution model that is used extensively by the National Assessment of Educational Progress. Second order asymptotic corrections are derived and applied along with more common first order asymptotic corrections to approximate the expectations required by…
A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.
Lloyd, Chris J
2008-09-01
We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
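As a rough illustration of the maximization idea (not Lloyd's exact E+M construction, which first plugs in an estimate of the nuisance parameter), the sketch below computes an exact unconditional p-value for paired binary data maximized over the discordance probability, using |B - D/2| as a simple two-sided discrepancy statistic; all names and the example counts are assumptions.

```python
import numpy as np
from scipy.stats import binom

def maximized_mcnemar_pvalue(b, c, n, grid=200):
    """Exact unconditional p-value for matched binary pairs, maximized over the
    nuisance discordance probability psi. Under H0, D ~ Bin(n, psi) discordant
    pairs split as B | D=d ~ Bin(d, 1/2)."""
    t_obs = abs(b - (b + c) / 2.0)
    best = 0.0
    for psi in np.linspace(1e-6, 1 - 1e-6, grid):
        p = 0.0
        for d in range(n + 1):
            pd = binom.pmf(d, n, psi)
            if pd < 1e-12:
                continue
            bs = np.arange(d + 1)
            tail = binom.pmf(bs, d, 0.5)[np.abs(bs - d / 2.0) >= t_obs].sum()
            p += pd * tail
        best = max(best, p)
    return best

# e.g. 12 vs 4 discordant pairs out of 50 matched pairs (hypothetical counts)
print(maximized_mcnemar_pvalue(b=12, c=4, n=50))
```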
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016 accomplishments and primary areas of focus for the Department of Energy's (DOE's) Office of Environmental Management and EM sites are presented. For DOE EM, these include Focusing on the Field, Teaming with Cleanup Partners, Developing New Technology, and Maximizing Cleanup Dollars. Major 2016 achievements are highlighted for EM, Richland Operations Office, Office of River Protection, Savannah River Site, Oak Ridge, Idaho, Waste Isolation Pilot Plant, Los Alamos, Portsmouth, Paducah, West Valley Demonstration Project, and the Nevada National Security Site.
Guindon, Stéphane; Dufayard, Jean-François; Lefort, Vincent; Anisimova, Maria; Hordijk, Wim; Gascuel, Olivier
2010-05-01
PhyML is a phylogeny software based on the maximum-likelihood principle. Early PhyML versions used a fast algorithm performing nearest neighbor interchanges to improve a reasonable starting tree topology. Since the original publication (Guindon S., Gascuel O. 2003. A simple, fast and accurate algorithm to estimate large phylogenies by maximum likelihood. Syst. Biol. 52:696-704), PhyML has been widely used (>2500 citations in ISI Web of Science) because of its simplicity and a fair compromise between accuracy and speed. In the meantime, research around PhyML has continued, and this article describes the new algorithms and methods implemented in the program. First, we introduce a new algorithm to search the tree space with user-defined intensity using subtree pruning and regrafting topological moves. The parsimony criterion is used here to filter out the least promising topology modifications with respect to the likelihood function. The analysis of a large collection of real nucleotide and amino acid data sets of various sizes demonstrates the good performance of this method. Second, we describe a new test to assess the support of the data for internal branches of a phylogeny. This approach extends the recently proposed approximate likelihood-ratio test and relies on a nonparametric, Shimodaira-Hasegawa-like procedure. A detailed analysis of real alignments sheds light on the links between this new approach and the more classical nonparametric bootstrap method. Overall, our tests show that the last version (3.0) of PhyML is fast, accurate, stable, and ready to use. A Web server and binary files are available from http://www.atgc-montpellier.fr/phyml/.
3D forward modeling and response analysis for marine CSEMs towed by two ships
NASA Astrophysics Data System (ADS)
Zhang, Bo; Yin, Chang-Chun; Liu, Yun-He; Ren, Xiu-Yan; Qi, Yan-Fu; Cai, Jing
2018-03-01
A dual-ship-towed marine electromagnetic (EM) system is a new marine exploration technology recently being developed in China. Compared with traditional marine EM systems, the new system tows the transmitters and receivers using two ships, rendering it unnecessary to position EM receivers at the seafloor in advance. This makes the system more flexible, allowing for different configurations (e.g., in-line, broadside, and azimuthal and concentric scanning) that can produce more detailed underwater structural information. We develop a three-dimensional goal-oriented adaptive forward modeling method for the new marine EM system and analyze the responses for four survey configurations. Ocean-bottom topography has a strong effect on the marine EM responses; thus, we develop a forward modeling algorithm based on the finite-element method and unstructured grids. To satisfy the requirements for modeling the moving transmitters of a dual-ship-towed EM system, we use a single mesh for each of the transmitter locations. This mitigates the mesh complexity by refining the grids near the transmitters and minimizes the computational cost. To generate a rational mesh while maintaining accuracy for a single transmitter, we develop a goal-oriented adaptive method with separate mesh refinements for areas around the transmitting source and those far away. To test the modeling algorithm and its accuracy, we compare the EM responses calculated by the proposed algorithm with semi-analytical results and with results from published sources. Furthermore, by analyzing the EM responses for four survey configurations, we confirm that, compared with traditional marine EM systems with only an in-line array, a dual-ship-towed marine system can collect more data.
Deng, Wenping; Zhang, Kui; Busov, Victor; Wei, Hairong
2017-01-01
Present knowledge indicates that a multilayered hierarchical gene regulatory network (ML-hGRN) often operates above a biological pathway. Although the ML-hGRN is very important for understanding how a pathway is regulated, there is almost no computational algorithm for directly constructing ML-hGRNs. A backward elimination random forest (BWERF) algorithm was developed for constructing the ML-hGRN operating above a biological pathway. For each pathway gene, BWERF used a random forest model to calculate the importance values of all transcription factors (TFs) for this pathway gene recursively, with a portion (e.g., 1/10) of the least important TFs excluded in each round of modeling; the importance values of the remaining TFs were updated and ranked in each round until only one TF remained in the list. After that, the importance values of a TF to all pathway genes were aggregated and fitted to a Gaussian mixture model to determine TF retention for the regulatory layer immediately above the pathway layer. The TFs acquired at this second layer were then set as the new bottom layer to infer the next upper layer, and this process was repeated until an ML-hGRN with the expected number of layers was obtained. BWERF improved the accuracy of constructing ML-hGRNs because it used backward elimination to exclude noise genes and aggregated the individual importance values when determining TF retention. We validated BWERF by using it to construct ML-hGRNs operating above the mouse pluripotency maintenance pathway and the Arabidopsis lignocellulosic pathway. Compared to GENIE3, BWERF showed an improvement in recognizing authentic TFs regulating a pathway. Compared to the bottom-up Gaussian graphical model algorithm we previously developed for constructing ML-hGRNs, BWERF constructs ML-hGRNs with significantly fewer edges, which enables biologists to choose the implicit edges for experimental validation.
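A minimal sketch of the per-gene backward-elimination loop described above, using scikit-learn random forests, is given below. The drop fraction, toy data and function names are illustrative, and the Gaussian-mixture aggregation and layer-building steps of BWERF are not included.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def backward_elimination_importances(tf_expr, gene_expr, drop_frac=0.1, seed=0):
    """For one pathway gene: repeatedly fit a random forest of the gene on the
    remaining TFs, record importances, and drop the least important fraction
    until a single TF remains. Returns the last recorded importance of each TF."""
    n_tf = tf_expr.shape[1]
    remaining = list(range(n_tf))
    final_importance = np.zeros(n_tf)
    while len(remaining) > 1:
        rf = RandomForestRegressor(n_estimators=200, random_state=seed)
        rf.fit(tf_expr[:, remaining], gene_expr)
        imp = rf.feature_importances_
        for idx, tf in enumerate(remaining):
            final_importance[tf] = imp[idx]
        n_drop = max(1, int(len(remaining) * drop_frac))
        order = np.argsort(imp)                 # ascending: least important first
        remaining = [remaining[i] for i in order[n_drop:]]
    return final_importance

# toy data: 100 samples, 30 candidate TFs, gene driven mainly by TF 0 and TF 3
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 30))
y = 2 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.normal(size=100)
print(np.argsort(backward_elimination_importances(X, y))[-5:])  # TFs 0 and 3 should rank near the top
```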
Touw, D J; Vinks, A A; Neef, C
1997-06-01
The availability of personal computer programs for individualizing drug dosage regimens has stimulated interest in modelling population pharmacokinetics. Data from 82 adolescent and adult patients with cystic fibrosis (CF) who were treated with intravenous tobramycin because of an exacerbation of their pulmonary infection were analysed with a non-parametric expectation maximization (NPEM) algorithm. This algorithm estimates the entire discrete joint probability density of the pharmacokinetic parameters. It also provides traditional parametric statistics such as the means, standard deviations, medians, covariances and correlations among the various parameters, as well as graphical 2- and 3-dimensional representations of the marginal densities of the parameters investigated. Several models for intravenous tobramycin in adolescent and adult patients with CF were compared. Covariates were total body weight (for the volume of distribution) and creatinine clearance (for the total body clearance and elimination rate). Because of a lack of data on patients with poor renal function, restricted models with the non-renal clearance and the non-renal elimination rate constant fixed at literature values of 0.15 L/h and 0.01 h-1 were also included. In this population, intravenous tobramycin could be best described by a median (+/-dispersion factor) volume of distribution per unit of total body weight of 0.28 +/- 0.05 L/kg, an elimination rate constant of 0.25 +/- 0.10 h-1 and an elimination rate constant per unit of creatinine clearance of 0.0008 +/- 0.0009 h-1/(ml/min/1.73 m2). Analysis of populations of increasing size showed that, using a restricted model with the non-renal elimination rate constant fixed at 0.01 h-1, a model based on a population of only 10 to 20 patients contained parameter values similar to those of the entire population, whereas with the full model a larger population (at least 40 patients) was needed.
NASA Astrophysics Data System (ADS)
Yeh, Cheng-Ta; Lin, Yi-Kuei; Yang, Jo-Yun
2018-07-01
Network reliability is an important performance index for many real-life systems, such as electric power systems, computer systems and transportation systems. These systems can be modelled as stochastic-flow networks (SFNs) composed of arcs and nodes. Most system supervisors pursue network reliability maximization by finding the optimal multi-state resource assignment, in which one resource is assigned to each arc. However, a disaster may cause correlated failures of the assigned resources, affecting the network reliability. This article focuses on determining the optimal resource assignment with maximal network reliability for SFNs. To solve the problem, this study proposes a hybrid algorithm integrating the genetic algorithm and tabu search to determine the optimal assignment, called the hybrid GA-TS algorithm (HGTA), and integrates minimal paths, the recursive sum of disjoint products and the correlated binomial distribution to calculate network reliability. Several practical numerical experiments are conducted to demonstrate that HGTA has better computational quality than several popular soft computing algorithms.
3D reconstruction of synapses with deep learning based on EM Images
NASA Astrophysics Data System (ADS)
Xiao, Chi; Rao, Qiang; Zhang, Dandan; Chen, Xi; Han, Hua; Xie, Qiwei
2017-03-01
Recently, owing to the rapid development of high-resolution electron microscopy (EM), image stacks delivered by EM can be used to analyze a variety of components that are critical to understanding brain function. Since the study of synapses is essential in neurobiology and can be carried out on EM stacks, automated routines for the reconstruction of synapses from EM images can become a very useful tool for analyzing large volumes of brain tissue and for understanding the mechanisms of the brain. In this article, we propose a novel automated method for 3D reconstruction of synapses from Automated Tape-collecting Ultramicrotome Scanning Electron Microscopy (ATUM-SEM) data using deep learning. Unlike other reconstruction algorithms, which employ a classifier to segment synaptic clefts directly, we combine a deep learning method with a segmentation algorithm to obtain synaptic clefts and to improve the accuracy of reconstruction. The proposed method contains five parts: (1) using a modified Moving Least Squares (MLS) deformation algorithm and Scale Invariant Feature Transform (SIFT) features to register adjacent sections, (2) adopting the Faster Region Convolutional Neural Network (Faster R-CNN) algorithm to detect synapses, (3) utilizing a screening method that takes context cues of synapses into consideration to reduce the false positive rate, (4) combining a practical morphology algorithm with a suitable fitting function to segment synaptic clefts and optimize their shape, and (5) applying the FIJI plugin to show the final 3D visualization of synapses. Experimental results on ATUM-SEM images demonstrate the effectiveness of the proposed method.
Dextrose 10% in the treatment of out-of-hospital hypoglycemia.
Kiefer, Matthew V; Gene Hern, H; Alter, Harrison J; Barger, Joseph B
2014-04-01
Prehospital first responders historically have treated hypoglycemia in the field with an IV bolus of 50 mL of 50% dextrose solution (D50). The California Contra Costa County Emergency Medical Services (EMS) system recently adopted a protocol of IV 10% dextrose solution (D10), due to frequent shortages and relatively high cost of D50. The feasibility, safety, and efficacy of this approach are reported using the experience of this EMS system. Over the course of 18 weeks, paramedics treated 239 hypoglycemic patients with D10 and recorded patient demographics and clinical outcomes. Of these, 203 patients were treated with 100 mL of D10 initially upon EMS arrival, and full data on response to treatment was available on 164 of the 203 patients. The 164 patients' capillary glucose response to initial infusion of 100 mL of D10 was calculated and a linear regression line fit between elapsed time and difference between initial and repeat glucose values. Feasibility, safety, and the need for repeat glucose infusions were examined. The study cohort included 102 men and 62 women with a median age of 68 years. The median initial field blood glucose was 38 mg/dL, with a subsequent blood glucose median of 98 mg/dL. The median time to second glucose testing was eight minutes after beginning the 100 mL D10 infusion. Of 164 patients, 29 (18%) required an additional dose of IV D10 solution due to persistent or recurrent hypoglycemia, and one patient required a third dose. There were no reported adverse events or deaths related to D10 administration. Linear regression analysis of elapsed time and difference between initial and repeat glucose values showed near-zero correlation. In addition to practical reasons of cost and availability, theoretical risks of using 50 mL of D50 in the out-of-hospital setting include extravasation injury, direct toxic effects of hypertonic dextrose, and potential neurotoxic effects of hyperglycemia. The results of one local EMS system over an 18-week period demonstrate the feasibility, safety, and efficacy of using 100 mL of D10 as an alternative. Additionally, the linear regression line of repeat glucose measurements suggests that there may be little or no short-term decay in blood glucose values after D10 administration.
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology applications, and epidemiological, genetic and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared to the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semiparametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
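For readers unfamiliar with the setting, the basic relation assumed in length-biased analyses (background only, not the authors' full censored-data likelihood) is the size-weighted density below.

```latex
% If the underlying failure time T has density f(t) with mean
% \mu = \int_0^\infty t\, f(t)\, dt, a length-biased observation is drawn from
f_{LB}(t) \;=\; \frac{t\, f(t)}{\mu}, \qquad t > 0,
% so naive estimation of f from length-biased samples over-represents long
% durations unless this weighting is corrected for.
```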
Wognum, S; Heethuis, S E; Rosario, T; Hoogeman, M S; Bel, A
2014-07-01
The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Five excised porcine bladders with a grid of 30-40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100-400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. The authors found good structure accuracy without dependency on bladder volume difference for all but one algorithm, and with the best result for the structure-based algorithm. Spatial accuracy as assessed from marker errors was disappointing for all algorithms, especially for large volume differences, implying that the deformations described by the registration did not represent anatomically correct deformations. The structure-based algorithm performed the best in terms of marker error for the large volume difference (100-400 ml). In general, for the small volume difference (100-150 ml) the algorithms performed relatively similarly. The structure-based algorithm exhibited the best balance in performance between small and large volume differences, and among the intensity-based algorithms, the algorithm implemented in VelocityAI exhibited the best balance. Validation of multiple DIR algorithms on a novel physiological bladder phantom revealed that the structure accuracy was good for most algorithms, but that the spatial accuracy as assessed from markers was low for all algorithms, especially for large deformations. Hence, many of the available algorithms exhibit sufficient accuracy for contour propagation purposes, but possibly not for accurate dose accumulation.
Multimodal Speaker Diarization.
Noulas, A; Englebienne, G; Krose, B J A
2012-01-01
We present a novel probabilistic framework that fuses information coming from the audio and video modality to perform speaker diarization. The proposed framework is a Dynamic Bayesian Network (DBN) that is an extension of a factorial Hidden Markov Model (fHMM) and models the people appearing in an audiovisual recording as multimodal entities that generate observations in the audio stream, the video stream, and the joint audiovisual space. The framework is very robust to different contexts, makes no assumptions about the location of the recording equipment, and does not require labeled training data as it acquires the model parameters using the Expectation Maximization (EM) algorithm. We apply the proposed model to two meeting videos and a news broadcast video, all of which come from publicly available data sets. The results acquired in speaker diarization are in favor of the proposed multimodal framework, which outperforms the single modality analysis results and improves over the state-of-the-art audio-based speaker diarization.
Optimized multiple linear mappings for single image super-resolution
NASA Astrophysics Data System (ADS)
Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo
2017-12-01
Learning piecewise linear regressions has been recognized in the literature as an effective approach to example-based single-image super-resolution (SR). In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the reconstruction error as the metric. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
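A minimal sketch of the EM-style refinement described above, alternating between fitting one linear mapping per cluster and reassigning patches by reconstruction error, is given below; the k-means initialization, ridge regressors and toy patch data are assumptions standing in for the paper's MLM subdictionaries.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

def fit_multiple_linear_mappings(X_lr, X_hr, n_maps=8, n_iter=10, seed=0):
    """Alternate between (1) fitting one linear mapping per cluster of LR patches
    and (2) reassigning each patch to the mapping with the smallest
    HR-reconstruction error -- an EM-style refinement of piecewise regression."""
    labels = KMeans(n_clusters=n_maps, random_state=seed, n_init=10).fit_predict(X_lr)
    for _ in range(n_iter):
        regs = []
        for k in range(n_maps):
            mask = labels == k
            reg = Ridge(alpha=1e-2)
            if mask.sum() >= 2:
                reg.fit(X_lr[mask], X_hr[mask])
            else:                                # keep empty clusters usable
                reg.fit(X_lr, X_hr)
            regs.append(reg)
        errors = np.stack([((r.predict(X_lr) - X_hr) ** 2).sum(axis=1) for r in regs], axis=1)
        labels = errors.argmin(axis=1)           # reassign by reconstruction error
    return regs, labels

# toy patches: 500 LR patches of dim 16 mapped to HR patches of dim 64
rng = np.random.default_rng(0)
X_lr = rng.normal(size=(500, 16))
X_hr = np.tanh(X_lr) @ rng.normal(size=(16, 64))
regs, labels = fit_multiple_linear_mappings(X_lr, X_hr)
```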
Sparse Bayesian learning for DOA estimation with mutual coupling.
Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi
2015-10-16
Sparse Bayesian learning (SBL) has given renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
ERIC Educational Resources Information Center
Tian, Wei; Cai, Li; Thissen, David; Xin, Tao
2013-01-01
In item response theory (IRT) modeling, the item parameter error covariance matrix plays a critical role in statistical inference procedures. When item parameters are estimated using the EM algorithm, the parameter error covariance matrix is not an automatic by-product of item calibration. Cai proposed the use of Supplemented EM algorithm for…
2017-05-30
A view of the mobile launcher (ML) taken from the "eyebrow" of the nearby Vehicle Assembly Building at NASA's Kennedy Space Center in Florida. The ML tower will be equipped with a number of lines, called umbilicals, that will connect to the Space Launch System rocket and Orion spacecraft for Exploration Mission-1 (EM-1). The Orion Service Module Umbilical and Core Stage Forward Skirt Umbilical were recently installed on the ML. The Ground Systems Development and Operations Program is overseeing installation of the umbilicals.
Methods to assess an exercise intervention trial based on 3-level functional data.
Li, Haocheng; Kozey Keadle, Sarah; Staudenmayer, John; Assaad, Houssein; Huang, Jianhua Z; Carroll, Raymond J
2015-10-01
Motivated by data recording the effects of an exercise intervention on subjects' physical activity over time, we develop a model to assess the effects of a treatment when the data are functional with 3 levels (subjects, weeks and days in our application) and possibly incomplete. We develop a model with 3-level mean structure effects, all stratified by treatment and subject random effects, including a general subject effect and nested effects for the 3 levels. The mean and random structures are specified as smooth curves measured at various time points. The association structure of the 3-level data is induced through the random curves, which are summarized using a few important principal components. We use penalized splines to model the mean curves and the principal component curves, and cast the proposed model into a mixed effects model framework for model fitting, prediction and inference. We develop an algorithm to fit the model iteratively with the Expectation/Conditional Maximization Either (ECME) version of the EM algorithm and eigenvalue decompositions. Selection of the number of principal components and handling incomplete data issues are incorporated into the algorithm. The performance of the Wald-type hypothesis test is also discussed. The method is applied to the physical activity data and evaluated empirically by a simulation study. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Lung Function in Rescue Workers at the World Trade Center after 7 Years
Aldrich, Thomas K.; Gustave, Jackson; Hall, Charles B.; Cohen, Hillel W.; Webber, Mayris P.; Zeig-Owens, Rachel; Cosenza, Kaitlyn; Christodoulou, Vasilios; Glass, Lara; Al-Othman, Fairouz; Weiden, Michael D.; Kelly, Kerry J.; Prezant, David J.
2016-01-01
Background The terrorist attacks on the World Trade Center on September 11, 2001, exposed thousands of Fire Department of New York City (FDNY) rescue workers to dust, leading to substantial declines in lung function in the first year. We sought to determine the longer-term effects of exposure. Methods Using linear mixed models, we analyzed the forced expiratory volume in 1 second (FEV1) of both active and retired FDNY rescue workers on the basis of spirometry routinely performed at intervals of 12 to 18 months from March 12, 2000, to September 11, 2008. Results Of the 13,954 FDNY workers who were present at the World Trade Center between September 11, 2001, and September 24, 2001, a total of 12,781 (91.6%) participated in this study, contributing 61,746 quality-screened spirometric measurements. The median follow-up was 6.1 years for firefighters and 6.4 years for emergency-medical-services (EMS) workers. In the first year, the mean FEV1 decreased significantly for all workers, more for firefighters who had never smoked (a reduction of 439 ml; 95% confidence interval [CI], 408 to 471) than for EMS workers who had never smoked (a reduction of 267 ml; 95% CI, 263 to 271) (P<0.001 for both comparisons). There was little or no recovery in FEV1 during the subsequent 6 years, with a mean annualized reduction in FEV1 of 25 ml per year for firefighters and 40 ml per year for EMS workers. The proportion of workers who had never smoked and who had an FEV1 below the lower limit of the normal range increased during the first year, from 3% to 18% for firefighters and from 12% to 22% for EMS workers, stabilizing at about 13% for firefighters and 22% for EMS workers during the subsequent 6 years. Conclusions Exposure to World Trade Center dust led to large declines in FEV1 for FDNY rescue workers during the first year. Overall, these declines were persistent, without recovery over the next 6 years, leaving a substantial proportion of workers with abnormal lung function. PMID:20375403
A robust multilevel simultaneous eigenvalue solver
NASA Technical Reports Server (NTRS)
Costiner, Sorin; Taasan, Shlomo
1993-01-01
Multilevel (ML) algorithms for eigenvalue problems are often faced with several types of difficulties such as: the mixing of approximated eigenvectors by the solution process, the approximation of incomplete clusters of eigenvectors, the poor representation of solution on coarse levels, and the existence of close or equal eigenvalues. Algorithms that do not treat appropriately these difficulties usually fail, or their performance degrades when facing them. These issues motivated the development of a robust adaptive ML algorithm which treats these difficulties, for the calculation of a few eigenvectors and their corresponding eigenvalues. The main techniques used in the new algorithm include: the adaptive completion and separation of the relevant clusters on different levels, the simultaneous treatment of solutions within each cluster, and the robustness tests which monitor the algorithm's efficiency and convergence. The eigenvectors' separation efficiency is based on a new ML projection technique generalizing the Rayleigh-Ritz projection, combined with a technique, the backrotations. These separation techniques, when combined with an FMG formulation, in many cases lead to algorithms of O(qN) complexity, for q eigenvectors of size N on the finest level. Previously developed ML algorithms are less focused on the mentioned difficulties. Moreover, algorithms which employ fine level separation techniques are of O(q²N) complexity and usually do not overcome all these difficulties. Computational examples are presented where Schrödinger-type eigenvalue problems in 2-D and 3-D, having equal and closely clustered eigenvalues, are solved with the efficiency of the Poisson multigrid solver. A second order approximation is obtained in O(qN) work, where the total computational work is equivalent to only a few fine level relaxations per eigenvector.
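As background for the separation step mentioned above, here is a minimal numpy sketch of a Rayleigh-Ritz projection applied to a cluster of approximate eigenvectors, combined with a simple shifted-inverse smoother. It illustrates the projection idea only, not the multilevel algorithm with backrotations described in the abstract; the toy Laplacian and all names are illustrative.

```python
import numpy as np

def rayleigh_ritz(A, V):
    """Given approximate eigenvectors as columns of V, orthonormalize them,
    project A onto their span, and return improved (Ritz) values and vectors.
    This is the separation step applied per cluster."""
    Q, _ = np.linalg.qr(V)                  # orthonormal basis of span(V)
    H = Q.T @ A @ Q                         # small projected matrix
    ritz_vals, Y = np.linalg.eigh(H)        # exact eigenpairs of the projection
    return ritz_vals, Q @ Y                 # Ritz values and vectors in full space

# toy: separate the 3 lowest modes of a 1-D discrete Laplacian
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
V = np.random.default_rng(0).normal(size=(n, 3))
for _ in range(20):
    V = np.linalg.solve(A + 0.01 * np.eye(n), V)   # shifted inverse iteration as a smoother
    vals, V = rayleigh_ritz(A, V)
print(vals)                                        # approximates the 3 smallest eigenvalues
```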
Evaluation of mathematical algorithms for automatic patient alignment in radiosurgery.
Williams, Kenneth M; Schulte, Reinhard W; Schubert, Keith E; Wroe, Andrew J
2015-06-01
Image registration techniques based on anatomical features can serve to automate patient alignment for intracranial radiosurgery procedures in an effort to improve the accuracy and efficiency of the alignment process as well as potentially eliminate the need for implanted fiducial markers. To explore this option, four two-dimensional (2D) image registration algorithms were analyzed: the phase correlation technique, mutual information (MI) maximization, enhanced correlation coefficient (ECC) maximization, and the iterative closest point (ICP) algorithm. Digitally reconstructed radiographs from the treatment planning computed tomography scan of a human skull were used as the reference images, while orthogonal digital x-ray images taken in the treatment room were used as the captured images to be aligned. The accuracy of aligning the skull with each algorithm was compared to the alignment of the currently practiced procedure, which is based on a manual process of selecting common landmarks, including implanted fiducials and anatomical skull features. Of the four algorithms, three (phase correlation, MI maximization, and ECC maximization) demonstrated clinically adequate (ie, comparable to the standard alignment technique) translational accuracy and improvements in speed compared to the interactive, user-guided technique; however, the ICP algorithm failed to give clinically acceptable results. The results of this work suggest that a combination of different algorithms may provide the best registration results. This research serves as the initial groundwork for the translation of automated, anatomy-based 2D algorithms into a real-world system for 2D-to-2D image registration and alignment for intracranial radiosurgery. This may obviate the need for invasive implantation of fiducial markers into the skull and may improve treatment room efficiency and accuracy. © The Author(s) 2014.
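Of the four techniques compared above, phase correlation is the simplest to illustrate in isolation; the sketch below is a generic numpy implementation (not the clinical pipeline or its DRR/x-ray data) that recovers an integer 2D translation from the peak of the normalized cross-power spectrum.

```python
import numpy as np

def phase_correlation_shift(ref, moving, eps=1e-12):
    """Estimate the integer (dy, dx) by which `moving` is circularly shifted
    relative to `ref`, via the peak of the normalized cross-power spectrum."""
    F_ref = np.fft.fft2(ref)
    F_mov = np.fft.fft2(moving)
    cross = np.conj(F_ref) * F_mov
    corr = np.fft.ifft2(cross / (np.abs(cross) + eps)).real
    peak = np.array(np.unravel_index(np.argmax(corr), corr.shape))
    shape = np.array(corr.shape)
    # wrap shifts larger than half the image size to negative offsets
    peak = np.where(peak > shape // 2, peak - shape, peak)
    return tuple(int(s) for s in peak)

# toy test: shift an image by (5, -3) and recover it
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
moved = np.roll(img, shift=(5, -3), axis=(0, 1))
print(phase_correlation_shift(img, moved))   # expected (5, -3)
```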
Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU.
Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan
2013-01-01
This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screen tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis.
Mallick, Himel; Tiwari, Hemant K.
2016-01-01
Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts as compared to what is expected based on standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain enormous number of zeroes due to the presence of excessive zero counts in majority of patients. Most existing statistical methods assume that the count phenotypes follow one of these four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we have investigated the performance of several state-of-the-art approaches for handling zero-inflated count data along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062
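As background for the zero-inflation handling discussed above, here is a minimal EM sketch for an intercept-only zero-inflated Poisson model, with a latent indicator marking structural zeros. It shows the E- and M-steps only; there are no covariates, no adaptive LASSO penalty and no coordinate descent, so it is not the authors' penalized procedure, and the toy data are assumed.

```python
import numpy as np

def zip_em(y, n_iter=200):
    """EM for an intercept-only zero-inflated Poisson: latent z_i = 1 marks a
    structural zero. Returns the mixing proportion pi and Poisson rate lam."""
    y = np.asarray(y, dtype=float)
    pi, lam = 0.5, max(y.mean(), 1e-3)
    for _ in range(n_iter):
        # E-step: posterior probability that each observed zero is structural
        p0 = pi / (pi + (1 - pi) * np.exp(-lam))
        z = np.where(y == 0, p0, 0.0)
        # M-step: closed-form updates
        pi = z.mean()
        lam = ((1 - z) * y).sum() / (1 - z).sum()
    return pi, lam

# toy data: 30% structural zeros, Poisson(2.5) otherwise
rng = np.random.default_rng(0)
n = 2000
structural = rng.random(n) < 0.3
y = np.where(structural, 0, rng.poisson(2.5, size=n))
print(zip_em(y))   # roughly (0.3, 2.5)
```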
Increased plasma O2 solubility improves O2 uptake of in situ dog muscle working maximally.
Hogan, M C; Willford, D C; Keipert, P E; Faithfull, N S; Wagner, P D
1992-12-01
A perfluorocarbon emulsion [formulation containing 90% wt/vol perflubron (perfluorooctylbromide); Alliance Pharmaceutical] was used to increase O2 solubility in the plasma compartment during hyperoxic low hemoglobin concentration ([Hb]) perfusion of a maximally working dog muscle in situ. Our hypothesis was that the increased plasma O2 solubility would increase the muscle O2 diffusing capacity (DO2) by augmenting the capillary surface area in contact with high [O2]. Oxygen uptake (VO2) was measured in isolated in situ canine gastrocnemius (n = 4) while working for 6 min at a maximal stimulation rate of 1 Hz (isometric tetanic contractions) on three to four separate occasions for each muscle. On each occasion, the last 4 min of the 6-min work period was split into 2 min of a control treatment (only emulsifying agent mixed into blood) and 2 min of perflubron treatment (6 g/kg body wt), reversing the order for each subsequent work bout. Before contractions, the [Hb] of the dog was decreased to 8-9 g/100 ml and arterial PO2 was increased to 500-600 Torr by having the dog breathe 100% O2 to maximize the effect of the perflubron. Muscle blood flow was held constant between the two experimental conditions. Plasma O2 solubility was almost doubled to 0.005 ml O2 x 100 ml blood-1 x Torr-1 by the addition of the perflubron. Muscle O2 delivery and maximal VO2 were significantly improved (at the same blood flow and [Hb]) by 11 and 12.6%, respectively (P < 0.05), during the perflubron treatment compared with the control. O2 extraction by the muscle remained the same between the two treatments, as did the estimate of DO2.(ABSTRACT TRUNCATED AT 250 WORDS)
Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y
2017-11-10
Objective: To compare the results of different methods for handling missing values in HIV viral load (VL) data. Methods: We used SPSS 17.0 to simulate complete data and data with missing values under different missing-value mechanisms, based on HIV viral load data collected from MSM in 16 cities in China in 2013. Maximum likelihood estimation using the expectation-maximization algorithm (EM), a regression method, mean imputation, listwise deletion, and Markov chain Monte Carlo (MCMC) were used to impute the missing data. The results of the different methods were compared with respect to distribution characteristics, accuracy, and precision. Results: The HIV VL data could not be transformed to a normal distribution. All methods performed well for data missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed datasets obtained with the different methods were all close to that of the original data. EM, the regression method, mean imputation, and deletion under-estimated VL, whereas MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can serve as a reference for estimating the mean HIV VL in the investigated population.
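For a concrete sense of how such imputation strategies differ, the sketch below contrasts mean imputation with a regression-style iterative imputation on simulated MCAR data using scikit-learn; the variables, missingness rate and the use of IterativeImputer as a stand-in for the regression method are assumptions, not the SPSS procedures used in the study.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

rng = np.random.default_rng(0)
n = 1000
# hypothetical covariate and correlated log10 viral load, for illustration only
x = rng.normal(2.5, 1.0, n)
log_vl = 1.5 * x + rng.normal(0, 0.5, n)
data = np.column_stack([x, log_vl])

# MCAR: delete 20% of the viral-load values completely at random
incomplete = data.copy()
incomplete[rng.random(n) < 0.2, 1] = np.nan

mean_filled = SimpleImputer(strategy="mean").fit_transform(incomplete)
reg_filled = IterativeImputer(random_state=0).fit_transform(incomplete)

for name, filled in [("mean", mean_filled), ("regression", reg_filled)]:
    print(name, "imputed mean:", round(filled[:, 1].mean(), 3),
          "vs complete-data mean:", round(data[:, 1].mean(), 3))
```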
NASA Astrophysics Data System (ADS)
Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing
2017-03-01
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to its relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain an accurate CBF estimate, the contribution of each tissue type in the mixture is needed. In current ASL studies, this is generally obtained by registering the ASL image to a structural image. This approach yields the probability of each tissue type within each voxel, but it also introduces errors, including errors of the registration algorithm and imaging errors in the ASL and structural acquisitions. Therefore, estimation of the mixture percentages directly from ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and that the tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Because MAP-EM is sensitive to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrate that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
Leukocyte Recognition Using EM-Algorithm
NASA Astrophysics Data System (ADS)
Colunga, Mario Chirinos; Siordia, Oscar Sánchez; Maybank, Stephen J.
This document describes a method for classifying images of blood cells. Three different classes of cells are used: Band Neutrophils, Eosinophils, and Lymphocytes. The image pattern is projected down to a lower-dimensional subspace using PCA; the probability density function for each class is modeled with a Gaussian mixture fitted by the EM algorithm. A new cell image is classified using the maximum a posteriori decision rule.
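A minimal sketch of this pipeline on synthetic data is shown below: PCA projection, one Gaussian mixture per class fitted by EM, and a MAP decision rule. Real inputs would be blood-cell image patterns; random class-specific blobs stand in for them here, and the dimensions and component counts are arbitrary.

```python
# Minimal sketch of the described pipeline on synthetic data: PCA projection,
# one Gaussian mixture per class (fitted by EM), and MAP classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
classes = ["band_neutrophil", "eosinophil", "lymphocyte"]

# Synthetic "image patterns": 3 classes, 100 samples each, 64-dimensional vectors.
X = np.vstack([rng.normal(loc=3.0 * i, scale=1.0, size=(100, 64)) for i in range(3)])
y = np.repeat(np.arange(3), 100)

pca = PCA(n_components=10).fit(X)
Z = pca.transform(X)

# One GMM per class (densities p(z | class)), plus empirical class priors.
gmms = [GaussianMixture(n_components=2, random_state=0).fit(Z[y == c]) for c in range(3)]
log_priors = np.log(np.bincount(y) / len(y))

def classify(x_new):
    """MAP rule: argmax_c [ log p(z | c) + log P(c) ] in the PCA subspace."""
    z = pca.transform(x_new.reshape(1, -1))
    log_post = [g.score_samples(z)[0] + log_priors[c] for c, g in enumerate(gmms)]
    return classes[int(np.argmax(log_post))]

print(classify(X[0]), classify(X[150]), classify(X[250]))
```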
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual isotope SPECT acquisition applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered subset-expectation maximization (OS-EM) algorithm, named simultaneous ordered subset-expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of object background compared with alternative reconstruction methods implementing other scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of reconstructed images and activity estimated using Sim-OSEM. In order to quantitate the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
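A toy sketch of the underlying update is given below: an ML-EM iteration in which a known additive contamination term (standing in for the Monte Carlo down-scatter estimate) is added to the forward projection. The system matrix, scatter level, and counts are random placeholders, and neither the Monte Carlo forward projector nor the ordered-subsets acceleration of Sim-OSEM is modeled.

```python
# Toy ML-EM update with a known additive contamination term, in the spirit of
# the crosstalk compensation described above (no Monte Carlo forward projection,
# no ordered subsets; the system matrix and data are random placeholders).
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_proj = 32, 64

A = rng.random((n_proj, n_pix))            # toy system (projection) matrix
x_true = rng.gamma(2.0, 1.0, n_pix)        # true activity distribution
scatter = 0.1 * rng.random(n_proj)         # estimated down-scatter from the other isotope
y = rng.poisson(A @ x_true + scatter)      # measured counts in this energy window

x = np.ones(n_pix)                         # uniform initial estimate
sensitivity = A.T @ np.ones(n_proj)
for _ in range(50):
    expected = A @ x + scatter             # forward projection plus contamination
    x *= (A.T @ (y / np.maximum(expected, 1e-12))) / sensitivity

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```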
Creating a consistent dark-target aerosol optical depth record from MODIS and VIIRS
NASA Astrophysics Data System (ADS)
Levy, R. C.; Mattoo, S.; Munchak, L. A.; Patadia, F.; Holz, R.
2014-12-01
To answer fundamental questions about our changing climate, we must quantify how aerosols are changing over time. This is a global question that requires regional characterization, because in some places aerosols are increasing and in others they are decreasing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, the creation of an aerosol climate data record (CDR) requires consistent multi-decadal data. With the Visible Infrared Imaging Radiometer Suite (VIIRS) aboard Suomi-NPP, there is potential to continue the MODIS aerosol time series. Yet, since the operational VIIRS aerosol product is produced by a different algorithm, it is not suitable for continuing MODIS to create an aerosol CDR. Therefore, we have applied the MODIS Dark-target (DT) algorithm to VIIRS observations, taking into account the slight differences in wavelengths, resolutions and geometries between the two sensors. More specifically, we applied the MODIS DT algorithm to a dataset known as the Intermediate File Format (IFF), created by the University of Wisconsin. The IFF is produced for both MODIS and VIIRS, with the idea that a single (MODIS-like or ML) algorithm can be run on either dataset, which can in turn be compared to the MODIS Collection 6 (M6) retrieval that is run on standard MODIS data. After minimizing or characterizing remaining differences between ML on MODIS-IFF (or ML-M) and M6, we have performed an apples-to-apples comparison between ML-M and ML on VIIRS IFF (ML-V). Examples of these comparisons include time series of monthly global means, monthly and seasonal global maps at 1° resolution, and collocations as compared to AERONET. We concentrate on the overlapping period January 2012 through June 2014, and discuss some of the remaining discrepancies between the ML-V and ML-M datasets.
NASA Astrophysics Data System (ADS)
Weber, James Daniel
1999-11-01
This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is demonstrated to be useful in visualizing bus-based and transmission line-based quantities.
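A toy stand-in for the equilibrium search described above is sketched below: each participant in turn maximizes its own welfare given the others' strategies, and the iteration stops when no one changes, i.e., at a Nash equilibrium. A Cournot-style duopoly on a discrete quantity grid replaces the OPF-based spot-market model, and all demand and cost parameters are hypothetical.

```python
# Toy illustration of iterated individual-welfare maximization converging to a
# Nash equilibrium. A Cournot-style duopoly on a discrete strategy grid stands
# in for the dissertation's OPF-based spot-market model; all numbers are made up.
import numpy as np

quantities = np.linspace(0.0, 50.0, 501)    # discrete strategy set (MW)
a, b = 100.0, 1.0                           # inverse demand: price = a - b * total output
costs = [20.0, 30.0]                        # marginal costs of the two participants

def profit(q_i, q_other, c_i):
    price = a - b * (q_i + q_other)
    return (price - c_i) * q_i

def best_response(q_other, c_i):
    return quantities[np.argmax(profit(quantities, q_other, c_i))]

q = [10.0, 10.0]                            # arbitrary initial bids
for it in range(100):
    new_q = [best_response(q[1], costs[0]), best_response(q[0], costs[1])]
    if new_q == q:                          # no participant wants to deviate: Nash equilibrium
        break
    q = new_q

print(f"converged after {it} rounds to q = {q}")
# Analytic Cournot equilibrium for comparison: q_i = (a - 2*c_i + c_j) / (3*b)
print("analytic:", [(a - 2 * costs[0] + costs[1]) / (3 * b),
                    (a - 2 * costs[1] + costs[0]) / (3 * b)])
```

As the dissertation notes, such equilibria need not exist or be unique in general; this toy market happens to have a unique one.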
NASA Astrophysics Data System (ADS)
Job, Joshua; Wang, Zhihui; Rønnow, Troels; Troyer, Matthias; Lidar, Daniel
2014-03-01
We report on experimental work benchmarking the performance of the D-Wave Two programmable annealer on its native Ising problem, and a comparison to available classical algorithms. In this talk we will focus on the comparison with an algorithm originally proposed and implemented by Alex Selby. This algorithm uses dynamic programming to repeatedly optimize over randomly selected maximal induced trees of the problem graph starting from a random initial state. If one is looking for a quantum advantage over classical algorithms, one should compare to classical algorithms which are designed and optimized to maximally take advantage of the structure of the type of problem one is using for the comparison. In that light, this classical algorithm should serve as a good gauge for any potential quantum speedup for the D-Wave Two.
Lindholm, E; Brevinge, H; Bergh, C H; Körner, U; Lundholm, K
2003-08-01
The purpose of this study was to evaluate to what extent self-reported health related quality of life (HRQL), assessed by the Swedish standard version of the Medical Outcome Study Short-Form 36 (SF-36), is related to measured exercise capacity and metabolic efficiency in a cohort of healthy subjects from the Gothenburg area of Sweden. Individuals were invited to take part in the evaluation where HRQL was compared with the maximal power output expressed in Watts assessed during a standardized treadmill test with incremental work loads. Whole body respiratory gas exchanges (CO2/O2) were simultaneously measured. Estimate of metabolic efficiency was derived from oxygen uptake per Watt produced (ml O2/min/W) near maximal work. The health status profile in the current population largely agreed with normative data from an age- and gender-matched reference group, although some measured scores were slightly better than reference scores. Males and females had a similar relationship between energy cost (ml O2/min) for production of maximal work (W), while the regressions for maximal exercise power and age were significantly different between males and females (p < 0.01). The overall metabolic efficiency was the same in individuals between 40 and 74 years of age (10.4 +/- 0.07 ml O2/min/ Watt). Maximal exercise power was only related to the SF-36 subscale physical functioning (PF), but unrelated to other physical subscales such as role limitations due to physical problems, good general health and vitality. There was also a discrepancy between measured maximal power and PF in many subjects, particularly in males who experienced either intact or severely reduced PF. Our results demonstrate that simultaneous measurements of self-reported and objective measures of PF should add a more integrated view for evaluation of therapeutic effectiveness, since the overall correlation was poor between objective and subjective scores among individuals.
2013-01-01
...intelligently selecting waveform parameters using adaptive algorithms. The adaptive algorithms optimize the waveform parameters based on (1) the EM...the environment. Subject terms: cognitive radar, adaptive sensing, spectrum sensing, multi-objective optimization, genetic algorithms, machine... [Figure captions: detection and classification block diagram; genetic algorithm block diagram.]
Efficient sequential and parallel algorithms for finding edit distance based motifs.
Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar
2016-08-18
Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable, and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l, d and n biological strings, find all strings of length l that appear in each input string with at most d errors of the types substitution, insertion and deletion. One popular technique to solve the problem is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string and output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood that are at a distance of exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared-memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and takes an estimated 3 days to solve instances up to (16,3). Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), and (13,4), our algorithms are much faster. Our parallel algorithm achieves more than 600% scaling performance while using 16 threads. Our algorithms have pushed up the state of the art of EMS solvers, and we believe that the techniques introduced in this paper are also applicable to other motif search problems such as Planted Motif Search (PMS) and Simple Motif Search (SMS).
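For orientation, the problem statement can be made concrete with a tiny brute-force reference implementation (exponential in l, so only usable for sanity checks on toy inputs; it bears no relation to the paper's efficient neighborhood exploration or radix-sort based merging). The example strings and (l,d) values are arbitrary.

```python
# Tiny brute-force reference for the (l,d) edit-distance motif search problem.
from itertools import product

def min_edit_distance_to_substring(pattern, text):
    """Semi-global alignment: minimum edit distance between `pattern` and any
    substring of `text` (start and end positions in the text are free)."""
    m = len(pattern)
    prev = [0] * (len(text) + 1)          # row 0: empty pattern matches anywhere at cost 0
    for i in range(1, m + 1):
        curr = [i] + [0] * len(text)
        for j in range(1, len(text) + 1):
            cost = 0 if pattern[i - 1] == text[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # pattern character deleted
                          curr[j - 1] + 1,    # character inserted into pattern
                          prev[j - 1] + cost) # match / substitution
        prev = curr
    return min(prev)                      # free end position in the text

def ems_brute_force(strings, l, d, alphabet="ACGT"):
    motifs = []
    for candidate in product(alphabet, repeat=l):
        motif = "".join(candidate)
        if all(min_edit_distance_to_substring(motif, s) <= d for s in strings):
            motifs.append(motif)
    return motifs

strings = ["ACGTACGT", "ACCTACGA", "TACGTTTT"]
print(ems_brute_force(strings, l=5, d=1)[:10])
```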
Left ventricular oxygen extraction during submaximal and maximal exertion in ponies.
Manohar, M
1988-01-01
1. Left ventricular (LV) myocardial O2 extraction was studied in five healthy ponies which had catheters implanted in the great cardiac vein and main pulmonary artery 15-30 days before the study. The abdominal aorta was percutaneously catheterized to sample arterial blood. 2. In addition, phasic LV and aortic pressures, LV dP/dtmax and rate-pressure product were also studied; dP/dtmax is the maximal rate of rise of the left ventricular pressure during the isovolumic phase, and is considered an index of myocardial contractility. Measurements were made at rest (control) and during adenosine infusion (3 mumol kg-1 min-1) at rest, moderate exercise (heart rate 169 +/- 10 beats min-1), heavy exercise (heart rate 198 +/- 7 beats min-1), maximal exercise (heart rate 232 +/- 7 beats min-1), and adenosine infusion (3 mumol kg-1 min-1) during maximal exercise (heart rate 230 +/- 6 beats min-1). 3. In resting ponies, LV arterial to coronary venous O2 content difference (delta LVa-v O2) was 8.9 +/- 0.5 ml dl-1 and O2 extraction was 59.9 +/- 2.2%. Adenosine infusion at rest decreased delta LVa-v O2 and O2 extraction precipitously (2.6 ml dl-1 and 14.3 +/- 1.7%, respectively), thereby indicating superfluous LV myocardial perfusion. 4. Moderate, heavy and maximal exercise increased delta LVa-v O2 to 185, 194 and 218% of its control value and O2 extraction rose to 71 +/- 2, 75 +/- 1.5 and 78 +/- 0.9%, respectively. The widening of the delta LVa-v O2 gradient was due to the increased arterial O2 content during exercise. 5. Combining these observations with equine myocardial perfusion, the LV O2 consumption was calculated to be 7.8, 47.9 and 103.6 ml min-1 100 g-1 at rest, moderate and maximal exercise. In order to achieve the 13.4-fold increase in LV O2 consumption, the LV perfusion rose only 6-fold; the rest being met by widening the delta LVa-v O2. 6. Adenosine infusion during maximal exercise decreased delta LVa-v O2 and O2 extraction (10.7 +/- 1 ml dl-1 and 45%, respectively; P less than 0.0001). This indicated that coronary vasodilator capacity was not being completely expended in maximally exercising ponies. It is concluded that coronary circulation is unlikely to be a limiting factor to further exertion in ponies. Organ/tissue perfusion studies in exercising ponies have demonstrated that of all working muscles, the left ventricular (LV) myocardium received the highest level of blood flow.(ABSTRACT TRUNCATED AT 400 WORDS) PMID:3150987
A maximally stable extremal region based scene text localization method
NASA Astrophysics Data System (ADS)
Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei
2015-07-01
Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered using the properties of the fitted ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal-region projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves relatively higher precision and recall rates than the latest published algorithms.
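A rough sketch of the first stage only is shown below, using OpenCV's MSER detector plus a simple geometric filter; the ellipse-fitting and character-distribution filters and the projection-merging step of the paper are not reproduced. The input image path and the filter thresholds are placeholders.

```python
# Rough sketch: MSER extraction plus simple geometric filtering of character
# candidates. "scene.jpg" and the thresholds below are hypothetical.
import cv2

image = cv2.imread("scene.jpg")                 # hypothetical input image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

candidates = []
for (x, y, w, h) in bboxes:
    aspect = w / float(h)
    # Keep regions whose size and aspect ratio are plausible for characters.
    if 0.1 < aspect < 3.0 and 8 < h < gray.shape[0] // 2:
        candidates.append((x, y, w, h))

for (x, y, w, h) in candidates:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("character_candidates.jpg", image)
print(f"{len(bboxes)} MSER regions, {len(candidates)} kept as character candidates")
```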
Viewing Angle Classification of Cryo-Electron Microscopy Images Using Eigenvectors
Singer, A.; Zhao, Z.; Shkolnisky, Y.; Hadani, R.
2012-01-01
The cryo-electron microscopy (cryo-EM) reconstruction problem is to find the three-dimensional structure of a macromolecule given noisy versions of its two-dimensional projection images at unknown random directions. We introduce a new algorithm for identifying noisy cryo-EM images of nearby viewing angles. This identification is an important first step in three-dimensional structure determination of macromolecules from cryo-EM, because once identified, these images can be rotationally aligned and averaged to produce “class averages” of better quality. The main advantage of our algorithm is its extreme robustness to noise. The algorithm is also very efficient in terms of running time and memory requirements, because it is based on the computation of the top few eigenvectors of a specially designed sparse Hermitian matrix. These advantages are demonstrated in numerous numerical experiments. PMID:22506089
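The computational core described above, extracting the top few eigenvectors of a large sparse Hermitian matrix, is sketched below with scipy. The construction of the paper's specific class-averaging matrix from cryo-EM images is not shown; a random sparse Hermitian matrix stands in for it.

```python
# Sketch of the computational core only: top eigenvectors of a large sparse
# Hermitian matrix. A random Hermitian matrix replaces the paper's specially
# designed matrix built from cryo-EM images.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(4)
n, density = 2000, 0.002

# Random sparse complex matrix, symmetrized to make it Hermitian: H = (M + M^H) / 2.
M = sp.random(n, n, density=density, random_state=42, dtype=np.float64)
M = M + 1j * sp.random(n, n, density=density, random_state=7, dtype=np.float64)
H = (M + M.conj().T) * 0.5

# Top few eigenpairs (largest algebraic eigenvalues) without densifying the matrix.
eigvals, eigvecs = eigsh(H, k=5, which="LA")
print("leading eigenvalues:", np.round(eigvals, 4))
print("eigenvector matrix shape:", eigvecs.shape)
```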
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wognum, S., E-mail: s.wognum@gmail.com; Heethuis, S. E.; Bel, A.
2014-07-15
Purpose: The spatial accuracy of deformable image registration (DIR) is important in the implementation of image guided adaptive radiotherapy techniques for cancer in the pelvic region. Validation of algorithms is best performed on phantoms with fiducial markers undergoing controlled large deformations. Excised porcine bladders, exhibiting similar filling and voiding behavior as human bladders, provide such an environment. The aim of this study was to determine the spatial accuracy of different DIR algorithms on CT images of ex vivo porcine bladders with radiopaque fiducial markers applied to the outer surface, for a range of bladder volumes, using various accuracy metrics. Methods: Five excised porcine bladders with a grid of 30–40 radiopaque fiducial markers attached to the outer wall were suspended inside a water-filled phantom. The bladder was filled with a controlled amount of water with added contrast medium for a range of filling volumes (100–400 ml in steps of 50 ml) using a luer lock syringe, and CT scans were acquired at each filling volume. DIR was performed for each data set, with the 100 ml bladder as the reference image. Six intensity-based algorithms (optical flow or demons-based) implemented in the MATLAB platform DIRART, a b-spline algorithm implemented in the commercial software package VelocityAI, and a structure-based algorithm (Symmetric Thin Plate Spline Robust Point Matching) were validated, using adequate parameter settings according to values previously published. The resulting deformation vector field from each registration was applied to the contoured bladder structures and to the marker coordinates for spatial error calculation. The quality of the algorithms was assessed by comparing the different error metrics across the different algorithms, and by comparing the effect of deformation magnitude (bladder volume difference) per algorithm, using the Independent Samples Kruskal-Wallis test. Results: The authors found good structure accuracy without dependency on bladder volume difference for all but one algorithm, and with the best result for the structure-based algorithm. Spatial accuracy as assessed from marker errors was disappointing for all algorithms, especially for large volume differences, implying that the deformations described by the registration did not represent anatomically correct deformations. The structure-based algorithm performed the best in terms of marker error for the large volume difference (100–400 ml). In general, for the small volume difference (100–150 ml) the algorithms performed relatively similarly. The structure-based algorithm exhibited the best balance in performance between small and large volume differences, and among the intensity-based algorithms, the algorithm implemented in VelocityAI exhibited the best balance. Conclusions: Validation of multiple DIR algorithms on a novel physiological bladder phantom revealed that the structure accuracy was good for most algorithms, but that the spatial accuracy as assessed from markers was low for all algorithms, especially for large deformations. Hence, many of the available algorithms exhibit sufficient accuracy for contour propagation purposes, but possibly not for accurate dose accumulation.
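A hedged sketch of the marker-based accuracy metric is given below: marker coordinates are propagated through a deformation vector field (DVF) sampled by trilinear interpolation and compared against their known positions in the target image. The DVF, voxel spacing, and marker sets are synthetic placeholders, not the study's data or its specific error definitions.

```python
# Hedged sketch of a marker-based DIR accuracy check with a synthetic DVF.
import numpy as np
from scipy.ndimage import map_coordinates

rng = np.random.default_rng(5)
shape = (64, 64, 64)                            # voxel grid of the reference image
dvf = rng.normal(0.0, 0.5, size=(3, *shape))    # DVF in voxel units, components (dz, dy, dx)

markers_ref = rng.uniform(10, 50, size=(30, 3))            # markers in the reference image (voxels)
markers_target = markers_ref + rng.normal(0, 2, (30, 3))   # "true" positions after deformation

# Sample the DVF at each marker position (order=1 gives trilinear interpolation).
coords = markers_ref.T                          # shape (3, n_markers)
displacement = np.stack([map_coordinates(dvf[c], coords, order=1) for c in range(3)], axis=1)
markers_propagated = markers_ref + displacement

voxel_size = np.array([2.0, 1.0, 1.0])          # mm, hypothetical
errors_mm = np.linalg.norm((markers_propagated - markers_target) * voxel_size, axis=1)
print(f"marker error: mean {errors_mm.mean():.2f} mm, max {errors_mm.max():.2f} mm")
```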
Multiway spectral community detection in networks
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Newman, M. E. J.
2015-11-01
One of the most widely used methods for community detection in networks is the maximization of the quality function known as modularity. Of the many maximization techniques that have been used in this context, some of the most conceptually attractive are the spectral methods, which are based on the eigenvectors of the modularity matrix. Spectral algorithms have, however, been limited, by and large, to the division of networks into only two or three communities, with divisions into more than three being achieved by repeated two-way division. Here we present a spectral algorithm that can directly divide a network into any number of communities. The algorithm makes use of a mapping from modularity maximization to a vector partitioning problem, combined with a fast heuristic for vector partitioning. We compare the performance of this spectral algorithm with previous approaches and find it to give superior results, particularly in cases where community sizes are unbalanced. We also give demonstrative applications of the algorithm to two real-world networks and find that it produces results in good agreement with expectations for the networks studied.
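The spectral ingredients can be sketched compactly: build the modularity matrix, embed the nodes using its leading eigenvectors, and split the embedding into k groups. Plain k-means is used below as a stand-in for the paper's vector-partitioning heuristic, and the benchmark graph and number of communities are arbitrary choices.

```python
# Minimal sketch: modularity-matrix spectral embedding followed by clustering.
# K-means stands in for the paper's vector-partitioning heuristic.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

G = nx.karate_club_graph()                      # small benchmark network
A = nx.to_numpy_array(G)
k_deg = A.sum(axis=1)
two_m = k_deg.sum()
B = A - np.outer(k_deg, k_deg) / two_m          # modularity matrix

n_communities = 3
eigvals, eigvecs = np.linalg.eigh(B)            # eigenvalues in ascending order
U = eigvecs[:, -(n_communities - 1):]           # leading (k - 1) eigenvectors

labels = KMeans(n_clusters=n_communities, n_init=10, random_state=0).fit_predict(U)

communities = [set(int(i) for i in np.flatnonzero(labels == c)) for c in range(n_communities)]
print("modularity of the division:",
      round(nx.algorithms.community.modularity(G, communities), 3))
```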
A New Algorithm to Optimize Maximal Information Coefficient
Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thereby removes the maximal-grid-size restriction of the original ApproxMaxMI algorithm. Computational experiments show that ChiMIC maintains the same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noisy functional relationships, ChiMIC reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships better, and the statistical power of MIC calculated by ChiMIC is higher than that calculated by ApproxMaxMI. Moreover, the computational cost of ChiMIC is much less than that of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
Toward Intelligent Machine Learning Algorithms
1988-05-01
Machine learning is recognized as a tool for improving the performance of many kinds of systems, yet most machine learning systems themselves are not...directed systems, and with the addition of a knowledge store for organizing and maintaining knowledge to assist learning, a learning machine learning (L-ML) algorithm is possible. The necessary components of L-ML systems are presented along with several case descriptions of existing machine learning systems.
Hip and Wrist Accelerometer Algorithms for Free-Living Behavior Classification.
Ellis, Katherine; Kerr, Jacqueline; Godbole, Suneeta; Staudenmayer, John; Lanckriet, Gert
2016-05-01
Accelerometers are a valuable tool for objective measurement of physical activity (PA). Wrist-worn devices may improve compliance over standard hip placement, but more research is needed to evaluate their validity for measuring PA in free-living settings. Traditional cut-point methods for accelerometers can be inaccurate and need testing in free living with wrist-worn devices. In this study, we developed and tested the performance of machine learning (ML) algorithms for classifying PA types from both hip and wrist accelerometer data. Forty overweight or obese women (mean age = 55.2 ± 15.3 yr; BMI = 32.0 ± 3.7) wore two ActiGraph GT3X+ accelerometers (right hip, nondominant wrist; ActiGraph, Pensacola, FL) for seven free-living days. Wearable cameras captured ground truth activity labels. A classifier consisting of a random forest and hidden Markov model classified the accelerometer data into four activities (sitting, standing, walking/running, and riding in a vehicle). Free-living wrist and hip ML classifiers were compared with each other, with traditional accelerometer cut points, and with an algorithm developed in a laboratory setting. The ML classifier obtained average values of 89.4% and 84.6% balanced accuracy over the four activities using the hip and wrist accelerometer, respectively. In our data set with average values of 28.4 min of walking or running per day, the ML classifier predicted average values of 28.5 and 24.5 min of walking or running using the hip and wrist accelerometer, respectively. Intensity-based cut points and the laboratory algorithm significantly underestimated walking minutes. Our results demonstrate the superior performance of our PA-type classification algorithm, particularly in comparison with traditional cut points. Although the hip algorithm performed better, additional compliance achieved with wrist devices might justify using a slightly lower performing algorithm.
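A hedged sketch of the classification idea, on synthetic data, is shown below: window-level summary features feed a random forest that predicts one of the four activity types, scored with balanced accuracy. The hidden Markov model smoothing and the camera-derived ground truth used in the study are not reproduced, and all signal parameters are invented.

```python
# Hedged sketch of accelerometer activity classification on synthetic windows.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(6)
activities = ["sitting", "standing", "walking/running", "vehicle"]

def synthetic_windows(label, n, fs=30, seconds=60):
    """Fake tri-axial acceleration windows with activity-dependent variability."""
    scale = {0: 0.02, 1: 0.05, 2: 0.6, 3: 0.15}[label]
    return rng.normal(0.0, scale, size=(n, fs * seconds, 3)) + np.array([0.0, 0.0, 1.0])

X_feat, y = [], []
for label in range(4):
    for win in synthetic_windows(label, 150):
        # Simple per-window features: mean and std per axis plus magnitude statistics.
        mag = np.linalg.norm(win, axis=1)
        X_feat.append(np.concatenate([win.mean(0), win.std(0), [mag.mean(), mag.std()]]))
        y.append(label)
X_feat, y = np.array(X_feat), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X_feat, y, test_size=0.3, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("balanced accuracy:", round(balanced_accuracy_score(y_te, clf.predict(X_te)), 3))
```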
Fragment assignment in the cloud with eXpress-D
2013-01-01
Background: Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results: We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters–“the cloud”. We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available and can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions: The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems–such as new frameworks like Spark–for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
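The batch EM at the heart of fragment assignment can be sketched in plain Python (no Spark, and no fragment- or transcript-length model): each ambiguous fragment is softly assigned to its compatible transcripts in proportion to the current abundance estimates (E-step), and abundances are re-estimated from those soft counts (M-step). The compatibility lists below are hypothetical.

```python
# Plain-Python sketch of batch EM for fragment assignment (no Spark, no length model).
import numpy as np

# Hypothetical compatibility lists: fragment i maps ambiguously to these transcript indices.
fragments = [[0], [0, 1], [1, 2], [0, 2], [2], [1], [0, 1, 2], [2, 3], [3], [1, 3]]
n_transcripts = 4

abundance = np.full(n_transcripts, 1.0 / n_transcripts)   # uniform starting point
for _ in range(100):
    counts = np.zeros(n_transcripts)
    for compat in fragments:                               # E-step: responsibilities
        w = abundance[compat]
        counts[compat] += w / w.sum()
    new_abundance = counts / len(fragments)                # M-step: normalize soft counts
    if np.max(np.abs(new_abundance - abundance)) < 1e-8:
        break
    abundance = new_abundance

print("estimated relative abundances:", np.round(abundance, 3))
```

In the distributed setting described above, the E-step sums are what get spread across the cluster; each iteration reduces the per-fragment soft counts back into a global abundance vector.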
Electrically-induced muscle fatigue affects feedforward mechanisms of control.
Monjo, F; Forestier, N
2015-08-01
To investigate the effects of focal muscle fatigue induced by electromyostimulation (EMS) on Anticipatory Postural Adjustments (APAs) during arm flexions performed at maximal velocity. Fifteen healthy subjects performed self-paced arm flexions at maximal velocity before and after the completion of fatiguing electromyostimulation programs involving the medial and anterior deltoids and aiming to degrade movement peak acceleration. APA timing and magnitude were measured using surface electromyography. Following muscle fatigue, despite a lower mechanical disturbance evidenced by significant decreased peak accelerations (-12%, p<.001), APAs remained unchanged as compared to control trials (p>.11 for all analyses). The fatigue signals evoked by externally-generated contractions seem to be gated by the Central Nervous System and result in postural strategy changes which aim to increase the postural safety margin. EMS is widely used in rehabilitation and training programs for its neuromuscular function-related benefits. However and from a motor control viewpoint, the present results show that the use of EMS can lead to acute inaccuracies in predictive motor control. We propose that clinicians should investigate the chronic and global effects of EMS on motor control. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to combinatorial explosion of EMs in complex networks. It is often, however, that only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which significantly deteriorates as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab, and is provided as supplementary information . hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2017. This work is written by US Government employees and are in the public domain in the US.
NASA Astrophysics Data System (ADS)
Wei, Jun; Jiang, Guo-Qing; Liu, Xin
2017-09-01
This study proposed three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Different from traditional data assimilation approaches, which provide prescribed initial/boundary conditions, our proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and the ocean in the future time. Two of these algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative technique involving machine learning (ML-based). The algorithms are then implemented into a Weather Research and Forecasting model for the simulation of typhoons to assess their effectiveness, and the results show significant improvement in simulated storm intensities when ocean cooling feedback is included. The TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus obtains a synoptic and relatively smooth sea surface temperature cooling. The TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network, consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure in terms of its amplitude and position. Sensitivity analysis indicated that the typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron sizes, the ML-based algorithm appears to be more efficient in forecasting the typhoon-induced ocean cooling and in predicting typhoon intensity than those algorithms based on linear regression methods.
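Only the flavor of the ML-based idea is sketched below: a small feed-forward network maps storm and ocean predictors to an SST cooling amplitude. The choice of predictors, the synthetic data, and the network size are hypothetical placeholders, not the configuration used in the study.

```python
# Hedged sketch: a small neural network regressing SST cooling on storm/ocean
# predictors. Predictors, data, and architecture are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000

wind_speed = rng.uniform(15, 70, n)           # m/s, typhoon 10-m wind
translation = rng.uniform(1, 10, n)           # m/s, storm translation speed
mixed_layer = rng.uniform(10, 120, n)         # m, pre-storm mixed layer depth
pre_sst = rng.uniform(26, 31, n)              # degC, pre-storm SST

# Synthetic "truth": stronger/slower storms over shallow mixed layers cool the surface more.
cooling = (0.04 * wind_speed - 0.15 * translation - 0.015 * mixed_layer
           + 0.1 * (pre_sst - 28.0) + rng.normal(0, 0.3, n))
cooling = np.clip(cooling, 0.0, None)

X = np.column_stack([wind_speed, translation, mixed_layer, pre_sst])
X_tr, X_te, y_tr, y_te = train_test_split(X, cooling, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out samples:", round(model.score(X_te, y_te), 3))
```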
Farrell, Colm; Hayes, Siobhan; Relan, Anurag; van Amersfoort, Edwin S; Pijpstra, Rienk; Hack, C Erik
2013-12-01
To characterize the pharmacokinetics (PK) of recombinant human C1 inhibitor (rhC1INH) in healthy volunteers and hereditary angioedema (HAE) patients. Plasma levels of C1INH following 294 administrations of rhC1INH in 133 subjects were fitted using nonlinear mixed-effects modelling. The model was used to simulate maximal C1INH levels for the proposed dosing scheme. A one-compartment model with Michaelis-Menten elimination kinetics described the data. Baseline C1INH levels were 0.901 [95% confidence interval (CI): 0.839-0.968] and 0.176 U ml(-1) (95% CI: 0.154-0.200) in healthy volunteers and HAE patients, respectively. The volume of distribution of rhC1INH was 2.86 l (95% CI: 2.68-3.03). The maximal rate of elimination and the concentration corresponding to half this maximal rate were 1.63 U ml(-1) h(-1) (95% CI: 1.41-1.88) and 1.60 U ml(-1) (95% CI: 1.14-2.24), respectively, for healthy volunteers and symptomatic HAE patients. The maximal elimination rate was 36% lower in asymptomatic HAE patients. Peak C1INH levels did not change upon repeated administration of rhC1INH. Bodyweight was found to be an important predictor of the volume of distribution. Simulations of the proposed dosing scheme predicted peak C1INH concentrations above the lower level of the normal range (0.7 U ml(-1)) for at least 94% of all patients. The population PK model for C1INH supports a dosing scheme on a 50 U kg(-1) basis up to 84 kg, with a fixed dose of 4200 U above 84 kg. The PK of rhC1INH following repeat administration are consistent with the PK following the first administration. © 2013 The British Pharmacological Society.
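The reported one-compartment model with Michaelis-Menten elimination can be simulated directly from the quoted population estimates (Vd = 2.86 l, Vmax = 1.63 U/ml/h, Km = 1.60 U/ml, HAE baseline 0.176 U/ml). The 70 kg body weight and the simple assumption that elimination acts only on drug-derived C1INH above baseline are mine, not part of the published model.

```python
# Illustrative simulation of the reported one-compartment Michaelis-Menten PK model.
import numpy as np
from scipy.integrate import solve_ivp

Vd_ml = 2.86 * 1000          # volume of distribution, ml
Vmax = 1.63                  # U/ml/h, maximal elimination rate
Km = 1.60                    # U/ml
baseline = 0.176             # U/ml, endogenous C1INH in HAE patients

weight_kg = 70.0             # hypothetical patient
dose_U = 50.0 * weight_kg    # 50 U/kg dosing scheme
c0 = baseline + dose_U / Vd_ml

def dcdt(t, c):
    # Assumption: Michaelis-Menten elimination acts on C1INH above baseline only.
    excess = max(c[0] - baseline, 0.0)
    return [-Vmax * excess / (Km + excess)]

sol = solve_ivp(dcdt, (0.0, 24.0), [c0], max_step=0.1, dense_output=True)
print("peak C1INH (U/ml):", round(c0, 3))
for ti in np.linspace(0, 24, 9):
    print(f"t = {ti:4.1f} h  C = {sol.sol(ti)[0]:.3f} U/ml")
```

With these numbers the predicted peak (about 1.4 U/ml for a 70 kg patient) sits comfortably above the 0.7 U/ml lower limit of the normal range quoted in the abstract.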
Maximal muscular vascular conductances during whole body upright exercise in humans
Calbet, J A L; Jensen-Urstad, M; van Hall, G; Holmberg, H -C; Rosdahl, H; Saltin, B
2004-01-01
That muscular blood flow may reach 2.5 l kg−1 min−1 in the quadriceps muscle has led to the suggestion that muscular vascular conductance must be restrained during whole body exercise to avoid hypotension. The main aim of this study was to determine the maximal arm and leg muscle vascular conductances (VC) during leg and arm exercise, to find out if the maximal muscular vasodilatory response is restrained during maximal combined arm and leg exercise. Six Swedish elite cross-country skiers, age (mean ± s.e.m.) 24 ± 2 years, height 180 ± 2 cm, weight 74 ± 2 kg, and maximal oxygen uptake (V̇O2,max) 5.1 ± 0.1 l min−1 participated in the study. Femoral and subclavian vein blood flows, intra-arterial blood pressure, cardiac output, as well as blood gases in the femoral and subclavian vein, right atrium and femoral artery were determined during skiing (roller skis) at ∼76% of V̇O2,max and at V̇O2,max with different techniques: diagonal stride (combined arm and leg exercise), double poling (predominantly arm exercise) and leg skiing (predominantly leg exercise). During submaximal exercise cardiac output (26–27 l min−1), mean blood pressure (MAP) (∼87 mmHg), systemic VC, systemic oxygen delivery and pulmonary V̇O2 (∼4 l min−1) attained similar values regardless of exercise mode. The distribution of cardiac output was modified depending on the musculature engaged in the exercise. There was a close relationship between VC and V̇O2 in arms (r = 0.99, P < 0.001) and legs (r = 0.98, P < 0.05). Peak arm VC (63.7 ± 5.6 ml min−1 mmHg−1) was attained during double poling, while peak leg VC was reached at maximal exercise with the diagonal technique (109.8 ± 11.5 ml min−1 mmHg−1) when arm VC was 38.8 ± 5.7 ml min−1 mmHg−1. If during maximal exercise arms and legs had been vasodilated to the observed maximal levels then mean arterial pressure would have dropped at least to 75–77 mmHg in our experimental conditions. It is concluded that skeletal muscle vascular conductance is restrained during whole body exercise in the upright position to avoid hypotension. PMID:15121799
Conformal Electromagnetic Particle in Cell: A Review
Meierbachtol, Collin S.; Greenwood, Andrew D.; Verboncoeur, John P.; ...
2015-10-26
We review conformal (or body-fitted) electromagnetic particle-in-cell (EM-PIC) numerical solution schemes. Included is a chronological history of relevant particle physics algorithms often employed in these conformal simulations. We also provide brief mathematical descriptions of particle-tracking algorithms and current weighting schemes, along with a brief summary of major time-dependent electromagnetic solution methods. Several research areas are also highlighted for recommended future development of new conformal EM-PIC methods.
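As a concrete example of the particle-tracking step referenced in such codes, a generic (non-conformal, non-relativistic) Boris push is sketched below; mesh conformality, current weighting, and the field solve discussed in the review are not represented, and the field values are arbitrary.

```python
# Generic non-relativistic Boris particle push (not specific to conformal EM-PIC).
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """Advance one particle by dt in given E and B fields (SI units)."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                  # first half electric acceleration
    t = qmdt2 * B                            # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)  # magnetic rotation
    v_new = v_plus + qmdt2 * E               # second half electric acceleration
    return x + v_new * dt, v_new

# Example: electron gyrating in a uniform magnetic field along z.
q, m = -1.602e-19, 9.109e-31
E = np.zeros(3)
B = np.array([0.0, 0.0, 0.01])                   # tesla
x, v = np.zeros(3), np.array([1.0e6, 0.0, 0.0])  # m, m/s

dt = 1.0e-11
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q, m, dt)
print("relative change in speed:", abs(np.linalg.norm(v) - 1.0e6) / 1.0e6)
```

The push conserves particle speed in a pure magnetic field to round-off, which is one reason it remains the default pusher in most EM-PIC codes.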
Optimisation of Combined Cycle Gas Turbine Power Plant in Intraday Market: Riga CHP-2 Example
NASA Astrophysics Data System (ADS)
Ivanova, P.; Grebesh, E.; Linkevics, O.
2018-02-01
In this research, the influence on the generation portfolio of optimising a combined cycle gas turbine unit according to the previously developed EM & OM approach for use in the intraday market is evaluated. The portfolio consists of two combined cycle gas turbine units. The introduced evaluation algorithm preserves the power and heat balance before and after application of the EM & OM approach by adjusting the generation profiles of the units. The aim of this algorithm is profit maximisation for the generation portfolio. The evaluation algorithm is implemented in the multi-paradigm numerical computing environment MATLAB using Riga CHP-2 as an example. The results show that the use of the EM & OM approach in the intraday market can be profitable or unprofitable, depending on the initial state of the generation units in the intraday market and on the composition of the generation portfolio.
Real-time topic-aware influence maximization using preprocessing.
Chen, Wei; Lin, Tian; Yang, Cheng
2016-01-01
Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on a certain influence diffusion model is maximized. Topic-aware influence diffusion models have recently been proposed to address the issue that the influence between a pair of users is often topic-dependent and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread with reasonable preprocessing effort.
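For reference, the baseline that such preprocessing aims to avoid repeating is the classic greedy seed selection under the independent cascade model with Monte Carlo spread estimation, sketched below on a random graph. Topic mixtures and the paper's preprocessing step are omitted, and the graph, propagation probability, and simulation counts are arbitrary.

```python
# Baseline greedy influence maximization under the independent cascade model.
import random
import networkx as nx

def simulate_ic(G, seeds, p=0.1):
    """One independent-cascade simulation; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        new_frontier = []
        for u in frontier:
            for v in G.successors(u):
                if v not in active and random.random() < p:
                    active.add(v)
                    new_frontier.append(v)
        frontier = new_frontier
    return len(active)

def greedy_influence_max(G, k, p=0.1, n_sim=200):
    seeds, best_spread = [], 0.0
    for _ in range(k):
        best_node, best_spread = None, -1.0
        for v in G.nodes():
            if v in seeds:
                continue
            spread = sum(simulate_ic(G, seeds + [v], p) for _ in range(n_sim)) / n_sim
            if spread > best_spread:
                best_node, best_spread = v, spread
        seeds.append(best_node)
    return seeds, best_spread

random.seed(0)
G = nx.gnp_random_graph(200, 0.03, seed=1, directed=True)
seeds, spread = greedy_influence_max(G, k=3)
print("selected seeds:", seeds, "estimated spread:", round(spread, 1))
```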
Schram, Ben; Hing, Wayne; Climstein, Mike
2016-01-01
Stand-up paddle boarding (SUP) is a rapidly growing sport and recreational activity for which only anecdotal evidence exists on its proposed health, fitness, and injury-rehabilitation benefits. 10 internationally and nationally ranked elite SUP athletes. Participants were assessed for their maximal aerobic power on an ergometer in a laboratory and compared with other water-based athletes. Field-based assessments were subsequently performed using a portable gas-analysis system, and a correlation between the 2 measures was performed. Maximal aerobic power (relative) was significantly higher (P = .037) when measured in the field with a portable gas-analysis system (45.48 ± 6.96 mL · kg(-1) · min(-1)) than with laboratory-based metabolic-cart measurements (43.20 ± 6.67 mL · kg(-1) · min(-1)). There was a strong, positive correlation (r = .907) between laboratory and field maximal aerobic power results. Significantly higher (P = .000) measures of SUP paddling speed were found in the field than with the laboratory ergometer (+42.39%). There were no significant differences in maximal heart rate between the laboratory and field settings (P = .576). The results demonstrate the maximal aerobic power representative of internationally and nationally ranked SUP athletes and show that SUP athletes can be assessed for maximal aerobic power in the laboratory with high correlation to field-based measures. The field-based portable gas-analysis unit has a tendency to consistently measure higher oxygen consumption. Elite SUP athletes display aerobic power outputs similar to those of other upper-limb-dominant elite water-based athletes (surfing, dragon-boat racing, and canoeing).
NASA Astrophysics Data System (ADS)
Levy, R. C.; Munchak, L. A.; Mattoo, S.; Patadia, F.; Remer, L. A.; Holz, R. E.
2015-10-01
To answer fundamental questions about aerosols in our changing climate, we must quantify both the current state of aerosols and how they are changing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, this period is still too short to create an aerosol climate data record (CDR). The Visible Infrared Imaging Radiometer Suite (VIIRS) was launched on the Suomi-NPP satellite in late 2011, with additional copies planned for future satellites. Can the MODIS aerosol data record be continued with VIIRS to create a consistent CDR? When compared to ground-based AERONET data, the VIIRS Environmental Data Record (V_EDR) has similar validation statistics as the MODIS Collection 6 (M_C6) product. However, the V_EDR and M_C6 are offset in regards to global AOD magnitudes, and tend to provide different maps of 0.55 μm AOD and 0.55/0.86 μm-based Ångström Exponent (AE). One reason is that the retrieval algorithms are different. Using the Intermediate File Format (IFF) for both MODIS and VIIRS data, we have tested whether we can apply a single MODIS-like (ML) dark-target algorithm on both sensors that leads to product convergence. Except for catering the radiative transfer and aerosol lookup tables to each sensor's specific wavelength bands, the ML algorithm is the same for both. We run the ML algorithm on both sensors between March 2012 and May 2014, and compare monthly mean AOD time series with each other and with M_C6 and V_EDR products. Focusing on the March-April-May (MAM) 2013 period, we compared additional statistics that include global and gridded 1° × 1° AOD and AE, histograms, sampling frequencies, and collocations with ground-based AERONET. Over land, use of the ML algorithm clearly reduces the differences between the MODIS and VIIRS-based AOD. However, although global offsets are near zero, some regional biases remain, especially in cloud fields and over brighter surface targets. Over ocean, use of the ML algorithm actually increases the offset between VIIRS and MODIS-based AOD (to ~ 0.025), while reducing the differences between AE. We characterize algorithm retrievability through statistics of retrieval fraction. In spite of differences between retrieved AOD magnitudes, the ML algorithm will lead to similar decisions about "whether to retrieve" on each sensor. Finally, we discuss how issues of calibration, as well as instrument spatial resolution may be contributing to the statistics and the ability to create a consistent MODIS → VIIRS aerosol CDR.
NASA Astrophysics Data System (ADS)
Levy, R. C.; Munchak, L. A.; Mattoo, S.; Patadia, F.; Remer, L. A.; Holz, R. E.
2015-07-01
To answer fundamental questions about aerosols in our changing climate, we must quantify both the current state of aerosols and how they are changing. Although NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors have provided quantitative information about global aerosol optical depth (AOD) for more than a decade, this period is still too short to create an aerosol climate data record (CDR). The Visible Infrared Imaging Radiometer Suite (VIIRS) was launched on the Suomi-NPP satellite in late 2011, with additional copies planned for future satellites. Can the MODIS aerosol data record be continued with VIIRS to create a consistent CDR? When compared to ground-based AERONET data, the VIIRS Environmental Data Record (V_EDR) has similar validation statistics as the MODIS Collection 6 (M_C6) product. However, the V_EDR and M_C6 are offset in regards to global AOD magnitudes, and tend to provide different maps of 0.55 μm AOD and 0.55/0.86 μm-based Ångström Exponent (AE). One reason is that the retrieval algorithms are different. Using the Intermediate File Format (IFF) for both MODIS and VIIRS data, we have tested whether we can apply a single MODIS-like (ML) dark-target algorithm on both sensors that leads to product convergence. Except for catering the radiative transfer and aerosol lookup tables to each sensor's specific wavelength bands, the ML algorithm is the same for both. We run the ML algorithm on both sensors between March 2012 and May 2014, and compare monthly mean AOD time series with each other and with M_C6 and V_EDR products. Focusing on the March-April-May (MAM) 2013 period, we compared additional statistics that include global and gridded 1° × 1° AOD and AE, histograms, sampling frequencies, and collocations with ground-based AERONET. Over land, use of the ML algorithm clearly reduces the differences between the MODIS and VIIRS-based AOD. However, although global offsets are near zero, some regional biases remain, especially in cloud fields and over brighter surface targets. Over ocean, use of the ML algorithm actually increases the offset between VIIRS and MODIS-based AOD (to ∼ 0.025), while reducing the differences between AE. We characterize algorithm retrievability through statistics of retrieval fraction. In spite of differences between retrieved AOD magnitudes, the ML algorithm will lead to similar decisions about "whether to retrieve" on each sensor. Finally, we discuss how issues of calibration, as well as instrument spatial resolution may be contributing to the statistics and the ability to create a consistent MODIS → VIIRS aerosol CDR.
What variables affect public perceptions for EMS meeting general community needs?
Blau, Gary; Hochner, Arthur; Portwood, James
2012-01-01
In the fall, 2010, a phone survey of 928 respondents examined two research questions: does the general public perceive Emergency Medical Services (EMS) as meeting their community needs? And what factors or correlates help to explain EMS meeting community needs? To maximize geographical representation across the contiguous United States, a clustered stratified sampling strategy was used based upon zip codes across the 48 states. Results showed strong support by the sample for perceiving that EMS was meeting their general community needs. 17 percent of the variance in EMS meeting community needs was collectively explained by the demographic and perceptual variables in the regression model. Of the correlates tested, the strongest relationship was found between greater admiration for EMS professionals and higher perception of EMS meeting community needs. Study limitations included sampling households with only landline (no cell) phones, using a simulated emergency situation, and not collecting gender data.
Machine learning algorithms for the prediction of hERG and CYP450 binding in drug development.
Klon, Anthony E
2010-07-01
The cost of developing new drugs is estimated at approximately $1 billion; the withdrawal of a marketed compound due to toxicity can result in serious financial loss for a pharmaceutical company. There has been a greater interest in the development of in silico tools that can identify compounds with metabolic liabilities before they are brought to market. The two largest classes of machine learning (ML) models, which will be discussed in this review, have been developed to predict binding to the human ether-a-go-go related gene (hERG) ion channel protein and the various CYP isoforms. Being able to identify potentially toxic compounds before they are made would greatly reduce the number of compound failures and the costs associated with drug development. This review summarizes the state of modeling hERG and CYP binding towards this goal since 2003 using ML algorithms. A wide variety of ML algorithms that are comparable in their overall performance are available. These ML methods may be applied regularly in discovery projects to flag compounds with potential metabolic liabilities.
Dose-response algorithms for water-borne Pseudomonas aeruginosa folliculitis.
Roser, D J; Van Den Akker, B; Boase, S; Haas, C N; Ashbolt, N J; Rice, S A
2015-05-01
We developed two dose-response algorithms for P. aeruginosa pool folliculitis using bacterial and lesion density estimates associated with undetectable, significant, and almost certain folliculitis. Literature data were fitted to Furumoto & Mickey's equations, developed for plant epidermis-invading pathogens: N_l = A·ln(1 + B·C) (log-linear model) and P_inf = 1 − exp(−r_C·C) (exponential model), where A = 2.51644 × 10^7 lesions/m^2, B = 2.28011 × 10^-11 per c.f.u./ml of P. aeruginosa, C = pathogen density (c.f.u./ml), N_l = folliculitis lesions/m^2, P_inf = probability of infection, and r_C = 4.3 × 10^-7 per c.f.u./ml of P. aeruginosa. Outbreak data indicate these algorithms apply to exposure durations of 41 ± 25 min. Typical water quality benchmarks (≈10^-2 c.f.u./ml) appear conservative but still useful, as the literature indicates that repeated detection likely implies unstable control barriers and the potential for bacterial blooms. In future, culture-based outbreak testing should be supplemented with quantitative polymerase chain reaction and organic carbon assays, and with quantification of folliculitis aetiology, to better understand P. aeruginosa risks.
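A direct transcription of the two fitted models, assuming the parameter values quoted above, is given below; it is useful only as a calculator for the stated equations, and the example pathogen densities are illustrative.

```python
# Calculator for the two quoted dose-response models (parameter values as stated above).
import numpy as np

A = 2.51644e7      # lesions/m^2
B = 2.28011e-11    # per (c.f.u./ml) of P. aeruginosa
r_C = 4.3e-7       # per (c.f.u./ml) of P. aeruginosa

def lesion_density(C):
    """Log-linear model: expected folliculitis lesions per m^2 of skin."""
    return A * np.log(1.0 + B * C)

def infection_probability(C):
    """Exponential model: probability of infection at pathogen density C."""
    return 1.0 - np.exp(-r_C * C)

for C in [1e-2, 1e2, 1e4, 1e6]:   # c.f.u./ml, from the benchmark level up to bloom levels
    print(f"C = {C:8.0e} c.f.u./ml  N_l = {lesion_density(C):10.3e} lesions/m^2  "
          f"P_inf = {infection_probability(C):.3e}")
```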
Min, Hua; Mobahi, Hedyeh; Irvin, Katherine; Avramovic, Sanja; Wojtusiak, Janusz
2017-09-16
Bio-ontologies are becoming increasingly important in knowledge representation and in the machine learning (ML) fields. This paper presents a ML approach that incorporates bio-ontologies and its application to the SEER-MHOS dataset to discover patterns of patient characteristics that impact the ability to perform activities of daily living (ADLs). Bio-ontologies are used to provide computable knowledge for ML methods to "understand" biomedical data. This retrospective study included 723 cancer patients from the SEER-MHOS dataset. Two ML methods were applied to create predictive models for ADL disabilities for the first year after a patient's cancer diagnosis. The first method is a standard rule learning algorithm; the second is that same algorithm additionally equipped with methods for reasoning with ontologies. The models showed that a patient's race, ethnicity, smoking preference, treatment plan and tumor characteristics including histology, staging, cancer site, and morphology were predictors for ADL performance levels one year after cancer diagnosis. The ontology-guided ML method was more accurate at predicting ADL performance levels (P < 0.1) than methods without ontologies. This study demonstrated that bio-ontologies can be harnessed to provide medical knowledge for ML algorithms. The presented method demonstrates that encoding specific types of hierarchical relationships to guide rule learning is possible, and can be extended to other types of semantic relationships present in biomedical ontologies. The ontology-guided ML method achieved better performance than the method without ontologies. The presented method can also be used to promote the effectiveness and efficiency of ML in healthcare, in which use of background knowledge and consistency with existing clinical expertise is critical.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokhrel, D; Sood, S; Badkul, R
Purpose: To compare dose distributions calculated using PB-hete vs. XVMC algorithms for SRT treatments of cavernous sinus tumors. Methods: Using PB-hete SRT, five patients with cavernous sinus tumors received the prescription dose of 25 Gy in 5 fractions for planning target volume PTV(V100%)=95%. Gross tumor volume (GTV) and organs at risk (OARs) were delineated on T1/T2 MRI-CT-fused images. PTV (range 2.1–84.3 cc, mean=21.7 cc) was generated using a 5 mm uniform margin around GTV. PB-hete SRT plans included a combination of non-coplanar conformal arcs/static beams delivered by Novalis-TX consisting of HD-MLCs and a 6MV-SRS (1000 MU/min) beam. Plans were re-optimized using the XVMC algorithm with identical beam geometry and MLC positions. Comparison of plan-specific PTV(V99%), maximal, mean, isocenter doses, and total monitor units (MUs) was evaluated. Maximal dose to OARs such as brainstem, optic pathway, spinal cord, and lenses as well as normal tissue volume receiving 12 Gy (V12) were compared between the two algorithms. All analysis was performed using two-tailed paired t-tests with an upper-bound p-value of <0.05. Results: Using either algorithm, no dosimetrically significant differences in PTV coverage (PTV V99%, maximal, mean, isocenter doses) and total number of MUs were observed (all p-values >0.05, mean ratios within 2%). However, maximal doses to optic chiasm and nerves were significantly under-predicted using PB-hete (p=0.04). Maximal brainstem, spinal cord, lens dose and V12 were all comparable between the two algorithms, with the exception of one patient with the largest PTV who exhibited 11% higher V12 with XVMC. Conclusion: Unlike lung tumors, XVMC and PB-hete treatment plans provided similar PTV coverage for cavernous sinus tumors. The majority of OAR doses were comparable between the two algorithms, except for small structures such as the optic chiasm/nerves, which could potentially receive higher doses when using the XVMC algorithm. Special attention may need to be paid on a case-by-case basis when planning for sinus SRT based on tumor size and location relative to OARs, particularly the optic apparatus.
Efficient data communication protocols for wireless networks
NASA Astrophysics Data System (ADS)
Zeydan, Engin
In this dissertation, efficient decentralized algorithms are investigated for cost minimization problems in wireless networks. For wireless sensor networks, we investigate both the reduction in energy consumption and throughput maximization problems separately using multi-hop data aggregation for correlated data in wireless sensor networks. The proposed algorithms exploit data redundancy using a game theoretic framework. For energy minimization, routes are chosen to minimize the total energy expended by the network using best response dynamics to local data. The cost function used in routing takes into account distance, interference and in-network data aggregation. The proposed energy-efficient correlation-aware routing algorithm significantly reduces the energy consumption in the network and converges in a finite number of steps iteratively. For throughput maximization, we consider both the interference distribution across the network and the correlation between forwarded data when establishing routes. Nodes along each route are chosen to minimize the interference impact in their neighborhood and to maximize the in-network data aggregation. The resulting network topology maximizes the global network throughput and the algorithm is guaranteed to converge within a finite number of steps using best response dynamics. For multiple-antenna wireless ad-hoc networks, we present distributed cooperative and regret-matching based learning schemes for the joint transmit beamformer and power level selection problem for nodes operating in a multi-user interference environment. Total network transmit power is minimized while ensuring a constant received signal-to-interference-and-noise ratio at each receiver. In the cooperative and regret-matching based power minimization algorithms, transmit beamformers are selected from a predefined codebook to minimize the total power. By selecting transmit beamformers judiciously and performing power adaptation, the cooperative algorithm is shown to converge to a pure strategy Nash equilibrium with high probability throughout the iterations in the interference-impaired network. On the other hand, the regret-matching learning algorithm is noncooperative and requires a minimal amount of overhead. The proposed cooperative and regret-matching based distributed algorithms are also compared with centralized solutions through simulation results.
Hand volume estimates based on a geometric algorithm in comparison to water displacement.
Mayrovitz, H N; Sims, N; Hill, C J; Hernandez, T; Greenshner, A; Diep, H
2006-06-01
Assessing changes in upper extremity limb volume during lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although arm volumes may be determined by tape measure, the suitability of circumference measurements to estimate hand volumes is questionable because of the deviation in circularity of hand shape. Our aim was to develop an alternative measurement procedure and algorithm for routine use to estimate hand volumes. A caliper was used to measure hand width and depth in 33 subjects (66 hands) and volumes (VE) were calculated using an elliptical frustum model. Using regression analysis and limits of agreement (LOA), VE was compared to volumes determined by water displacement (VW), to volumes calculated from tape-measure determined circumferences (VC), and to a trapezoidal model (VT). VW and VE (mean +/- SD) were similar (363 +/- 98 vs. 362 +/-100 ml) and highly correlated; VE = 1.01VW -3.1 ml, r=0.986, p<0.001, with LOA of +/- 33.5 ml and +/- 9.9 %. In contrast, VC (480 +/- 138 ml) and VT (432 +/- 122 ml) significantly overestimated volume (p<0.0001). These results indicate that the elliptical algorithm can be a useful alternative to water displacement when hand volumes are needed and the water displacement method is contra-indicated, impractical to implement, too time consuming or not available.
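The elliptical frustum model lends itself to a short worked example: each hand segment's volume follows from the elliptical cross-sectional areas at its two ends. The caliper readings and segment height below are made-up values for illustration only.

```python
import math

def ellipse_area(width_cm, depth_cm):
    """Cross-sectional area of an ellipse from caliper width and depth (cm^2)."""
    return math.pi * width_cm * depth_cm / 4.0

def frustum_volume(a1, a2, h):
    """Volume of a frustum with elliptical end areas a1, a2 and height h (cm^3 == ml)."""
    return h / 3.0 * (a1 + a2 + math.sqrt(a1 * a2))

# Hypothetical caliper measurements (width, depth) in cm at equally spaced hand levels
levels = [(8.5, 2.6), (8.9, 2.9), (8.2, 3.1), (6.8, 3.4)]
segment_height = 2.0  # cm between measurement levels (assumed)

areas = [ellipse_area(w, d) for w, d in levels]
volume_ml = sum(frustum_volume(a1, a2, segment_height) for a1, a2 in zip(areas, areas[1:]))
print(f"estimated hand volume: {volume_ml:.0f} ml")
```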
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Luquette, Richard J.; Sanner, Robert M.
2003-01-01
Precision Formation Flying is an enabling technology for a variety of proposed space-based observatories, including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), the associated MAXIM pathfinder mission, and the Stellar Imager. An essential element of the technology is the control algorithm. This paper discusses the development of a nonlinear, six-degree of freedom (6DOF) control algorithm for maintaining the relative position and attitude of a spacecraft within a formation. The translation dynamics are based on the equations of motion for the restricted three body problem. The control law guarantees that the tracking error converges to zero, based on a Lyapunov analysis. The simulation, modelled after the MAXIM Pathfinder mission, maintains the relative position and attitude of a Follower spacecraft with respect to a Leader spacecraft, stationed near the L2 libration point in the Sun-Earth system.
Optimisation of the mean boat velocity in rowing.
Rauter, G; Baumgartner, L; Denoth, J; Riener, R; Wolf, P
2012-01-01
In rowing, motor learning may be facilitated by augmented feedback that displays the ratio between actual mean boat velocity and maximal achievable mean boat velocity. To provide this ratio, the aim of this work was to develop and evaluate an algorithm calculating an individual maximal mean boat velocity. The algorithm optimised the horizontal oar movement under constraints such as the individual range of the horizontal oar displacement, individual timing of catch and release and an individual power-angle relation. Immersion and turning of the oar were simplified, and the seat movement of a professional rower was implemented. The feasibility of the algorithm, and of the associated ratio between actual boat velocity and optimised boat velocity, was confirmed by a study on four subjects: as expected, advanced rowing skills resulted in higher ratios, and the maximal mean boat velocity depended on the range of the horizontal oar displacement.
Helgerud, J
1994-01-01
Sex differences in running economy (gross oxygen cost of running, CR), maximal oxygen uptake (VO2max), anaerobic threshold (Th(an)), percentage utilization of aerobic power (% VO2max), and Th(an) during running were investigated. There were six men and six women aged 20-30 years with a performance time of 2 h 40 min over the marathon distance. The VO2max, Th(an), and CR were measured during controlled running on a treadmill at 1 degree and 3 degrees gradient. From each subject's recorded time of running in the marathon, the average speed (vM) was calculated and maintained during the treadmill running for 11 min. The VO2max was inversely related to body mass (mb), there were no sex differences, and the mean values of the reduced exponent were 0.65 for women and 0.81 for men. These results indicate that for running the unit ml.kg-0.75.min-1 is convenient when comparing individuals with different mb. The VO2max was about 10% (23 ml.kg-0.75.min-1) higher in the men than in the women. The women had on the average 10-12 ml.kg-0.75.min-1 lower VO2 than the men when running at comparable velocities. Disregarding sex, the mean value of CR was 0.211 (SEM 0.005) ml.kg-1.m-1 (resting included), and was independent of treadmill speed. No sex differences in Th(an) expressed as % VO2max or percentage maximal heart rate were found, but Th(an) expressed as VO2 in ml.kg-0.75.min-1 was significantly higher in the men compared to the women. The percentage utilization of fcmax and concentration of blood lactate at vM was higher for the female runners.(ABSTRACT TRUNCATED AT 250 WORDS)
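A small arithmetic example of the allometric scaling discussed above, converting an absolute VO2max into the conventional ml.kg-1.min-1 and the proposed ml.kg-0.75.min-1 units; the body mass and VO2max values are illustrative.

```python
# Convert an absolute VO2max to the two mass-scaled units discussed above (illustrative numbers).
body_mass_kg = 70.0
vo2max_l_per_min = 4.2          # hypothetical absolute VO2max

vo2_ml_min = vo2max_l_per_min * 1000.0
per_kg     = vo2_ml_min / body_mass_kg           # conventional ml.kg^-1.min^-1
per_kg_075 = vo2_ml_min / body_mass_kg ** 0.75   # allometrically scaled ml.kg^-0.75.min^-1

print(f"{per_kg:.1f} ml/kg/min   vs   {per_kg_075:.1f} ml/kg^0.75/min")
```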
Making the Right Choice: Optimizing rt-PA and eptifibatide lysis, an in vitro study
Shaw, George J.; Meunier, Jason M.; Lindsell, Christopher J.; Pancioli, Arthur M.; Holland, Christy K.
2010-01-01
Introduction Recombinant tissue plasminogen activator (rt-PA) is the only FDA approved lytic therapy for acute ischemic stroke. However, there can be complications such as intra-cerebral hemorrhage. This has led to interest in adjuncts such as GP IIb-IIIa inhibitors. However, there is little data on combined therapies. Here, we measure clot lysis for rt-PA and eptifibatide in an in vitro human clot model, and determine the drug concentrations maximizing lysis. A pharmacokinetic model is used to compare drug concentrations expected in clinical trials with those used here. The hypothesis is that there is a range of rt-PA and eptifibatide concentrations that maximize in vitro clot lysis. Materials and Methods Whole blood clots were made from blood obtained from 28 volunteers, after appropriate institutional approval. Sample clots were exposed to rt-PA and eptifibatide in human fresh-frozen plasma; rt-PA concentrations were 0, 0.5, 1, and 3.15 μg/ml, and eptifibatide concentrations were 0, 0.63, 1.05, 1.26 and 2.31 μg/ml. All exposures were for 30 minutes at 37 C. Clot width was measured using a microscopic imaging technique and mean fractional clot loss (FCL) at 30 minutes was used to determine lytic efficacy. On average, 28 clots (range: 6-148) from 6 subjects (3-24) were used in each group. Results and Conclusions FCL for control clots was 14% (95% Confidence Interval: 13-15%). FCL was 58% (55-61%) for clots exposed to both drugs at all concentrations, except those at an rt-PA concentration of 3.15 μg/ml, and eptifibatide concentrations of 1.26 μg/ml (Epf) or 2.31 μg/ml. Here, FCL was 43% (36-51) and 35% (32-38) respectively. FCL is maximized at moderate rt-PA and eptifibatide concentration; these values may approximate the average concentrations used in some rt-PA and eptifibatide treatments. PMID:20813398
Networked Microgrids for Self-healing Power Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
This paper proposes a transformative architecture for the normal operation and self-healing of networked microgrids (MGs). MGs can support and interchange electricity with each other in the proposed infrastructure. The networked MGs are connected by a physical common bus and a designed two-layer cyber communication network. The lower layer is within each MG, where the energy management system (EMS) schedules the MG operation; the upper layer links a number of EMSs for global optimization and communication. In the normal operation mode, the objective is to schedule dispatchable distributed generators (DGs), energy storage systems (ESs), and controllable loads to minimize the operation costs and maximize the supply adequacy of each MG. When a generation deficiency or fault happens in an MG, the model switches to the self-healing mode and the local generation capacities of other MGs can be used to support the on-emergency portion of the system. A consensus algorithm is used to distribute portions of the desired power support to each individual MG in a decentralized way. The allocated portion corresponds to each MG's local power exchange target, which is used by its EMS to perform the optimal schedule. The resultant aggregated power output of the networked MGs is then used to provide the requested power support. Test cases demonstrate the effectiveness of the proposed methodology.
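One simple way to realize the decentralized allocation step is a ratio-consensus iteration in which each MG ends up contributing in proportion to its spare capacity. The sketch below assumes a small ring network with Metropolis mixing weights and invented capacity/deficit numbers; it illustrates the consensus idea, not the paper's exact algorithm.

```python
import numpy as np

# Ring of 4 microgrids; Metropolis weights give a doubly stochastic mixing matrix.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
deg = A.sum(axis=1)
W = np.zeros_like(A)
for i in range(4):
    for j in range(4):
        if A[i, j]:
            W[i, j] = 1.0 / (1 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()

spare_capacity = np.array([2.0, 0.5, 1.5, 1.0])   # MW each MG can contribute (hypothetical)
deficit = 1.8                                      # MW requested by the on-emergency MG (index 0)

x = spare_capacity.copy()                 # converges to the average spare capacity
y = np.array([deficit, 0.0, 0.0, 0.0])    # converges to deficit / n
for _ in range(200):
    x, y = W @ x, W @ y

allocation = spare_capacity * y / x       # = spare_capacity_i * deficit / total_capacity
print(allocation, allocation.sum())       # shares sum to the requested 1.8 MW
```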
The taxonomy statistic uncovers novel clinical patterns in a population of ischemic stroke patients.
Tukiendorf, Andrzej; Kaźmierski, Radosław; Michalak, Sławomir
2013-01-01
In this paper, we describe a simple taxonomic approach for clinical data mining elaborated by Marczewski and Steinhaus (M-S), whose performance equals the advanced statistical methodology known as the expectation-maximization (E-M) algorithm. We tested these two methods on a cohort of ischemic stroke patients. The comparison of both methods revealed strong agreement. Direct agreement between the M-S and E-M classifications reached 83%, while Cohen's coefficient of agreement was κ = 0.766 (P < 0.0001). The statistical analysis conducted and the outcomes obtained in this paper revealed novel clinical patterns in ischemic stroke patients. The aim of the study was to evaluate the clinical usefulness of the Marczewski-Steinhaus taxonomic approach as a tool for the detection of novel patterns of data in ischemic stroke patients and the prediction of disease outcome. In terms of the identification of fairly frequent types of stroke patients using their age, National Institutes of Health Stroke Scale (NIHSS) score, and diabetes mellitus (DM) status, when dealing with rough characteristics of patients, four particular types of patients are recognized that cannot be identified by means of routine clinical methods. Following the obtained taxonomical outcomes, a strong correlation between health status at the moment of admission to the emergency department (ED) and the subsequent recovery of patients is established. Moreover, popularization and simplification of the ideas of advanced mathematicians may provide an unconventional explorative platform for clinical problems.
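Cohen's coefficient of agreement reported above can be computed directly from the cross-tabulation of the two classifications; the sketch below uses an invented 4x4 agreement table, not the study's data.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square agreement matrix between two classifications."""
    confusion = np.asarray(confusion, float)
    n = confusion.sum()
    p_observed = np.trace(confusion) / n
    p_expected = (confusion.sum(axis=0) * confusion.sum(axis=1)).sum() / n ** 2
    return (p_observed - p_expected) / (1.0 - p_expected)

# Hypothetical cross-tabulation of M-S vs E-M cluster labels (counts are illustrative)
table = [[40,  3,  2,  1],
         [ 4, 35,  3,  2],
         [ 2,  3, 30,  4],
         [ 1,  2,  3, 25]]
print(f"kappa = {cohens_kappa(table):.3f}")
```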
Effects of electrical muscle stimulation on oxygen consumption.
Hayter, Tina L; Coombes, Jeff S; Knez, Wade L; Brancato, Tania L
2005-02-01
Electrical muscle stimulation (EMS) devices are being marketed as weight/fat loss devices throughout the world. Commercially available stimulators have the ability to evoke muscle contractions that may affect caloric expenditure while the device is being used. The aim of this study was to test the effects of two different EMS devices (Abtronic and Feminique) on oxygen consumption at rest. Subjects arrived for testing after an overnight fast, had the devices fitted, and were then positioned supine with expired air measured to determine oxygen consumption. After a 10-minute acclimation period, oxygen consumption was measured for 20 minutes with the device switched off (resting) and then 20 minutes with the device switched on (stimulated). There were no significant differences (p > 0.05) in oxygen consumption between the resting and stimulated periods with either the Abtronic (mean +/- SD; resting, 3.40 +/- 0.44; stimulated, 3.45 +/- 0.53 ml of O(2).kg(-1).min(-1)) or the Feminique (resting, 3.73 +/- 0.45; stimulated, 3.75 +/- 0.46 ml of O(2).kg(-1).min(-1)). In summary, the EMS devices tested had no effect on oxygen consumption during muscle stimulation.
Muhlestein, Whitney E; Akagi, Dallin S; Kallos, Justiss A; Morone, Peter J; Weaver, Kyle D; Thompson, Reid C; Chambless, Lola B
2018-04-01
Objective Machine learning (ML) algorithms are powerful tools for predicting patient outcomes. This study pilots a novel approach to algorithm selection and model creation using prediction of discharge disposition following meningioma resection as a proof of concept. Materials and Methods A diversity of ML algorithms were trained on a single-institution database of meningioma patients to predict discharge disposition. Algorithms were ranked by predictive power and top performers were combined to create an ensemble model. The final ensemble was internally validated on never-before-seen data to demonstrate generalizability. The predictive power of the ensemble was compared with a logistic regression. Further analyses were performed to identify how important variables impact the ensemble. Results Our ensemble model predicted disposition significantly better than a logistic regression (area under the curve of 0.78 and 0.71, respectively, p = 0.01). Tumor size, presentation at the emergency department, body mass index, convexity location, and preoperative motor deficit most strongly influence the model, though the independent impact of individual variables is nuanced. Conclusion Using a novel ML technique, we built a guided ML ensemble model that predicts discharge destination following meningioma resection with greater predictive power than a logistic regression, and that provides greater clinical insight than a univariate analysis. These techniques can be extended to predict many other patient outcomes of interest.
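A hedged sketch of the guided-ensemble idea: rank several scikit-learn classifiers by cross-validated AUC, combine the top performers in a soft-voting ensemble, and compare against a plain logistic regression. The synthetic data and the particular candidate algorithms are assumptions for illustration, not the study's pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the clinical table (outcome = non-home discharge)
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "gbm": GradientBoostingClassifier(random_state=0),
    "rf": RandomForestClassifier(n_estimators=200, random_state=0),
    "knn": KNeighborsClassifier(),
    "logit": LogisticRegression(max_iter=1000),
}
# Rank candidate algorithms by cross-validated AUC on the training block
auc = {name: cross_val_score(m, X_tr, y_tr, cv=5, scoring="roc_auc").mean()
       for name, m in candidates.items()}
top = sorted(auc, key=auc.get, reverse=True)[:3]

ensemble = VotingClassifier([(n, candidates[n]) for n in top], voting="soft").fit(X_tr, y_tr)
baseline = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("ensemble AUC:", roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1]))
print("logistic AUC:", roc_auc_score(y_te, baseline.predict_proba(X_te)[:, 1]))
```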
Processing of Cryo-EM Movie Data.
Ripstein, Z A; Rubinstein, J L
2016-01-01
Direct detector device (DDD) cameras dramatically enhance the capabilities of electron cryomicroscopy (cryo-EM) due to their improved detective quantum efficiency (DQE) relative to other detectors. DDDs use semiconductor technology that allows micrographs to be recorded as movies rather than integrated individual exposures. Movies from DDDs improve cryo-EM in another, more surprising, way. DDD movies revealed beam-induced specimen movement as a major source of image degradation and provide a way to partially correct the problem by aligning frames or regions of frames to account for this specimen movement. In this chapter, we use a self-consistent mathematical notation to explain, compare, and contrast several of the most popular existing algorithms for computationally correcting specimen movement in DDD movies. We conclude by discussing future developments in algorithms for processing DDD movies that would extend the capabilities of cryo-EM even further. © 2016 Elsevier Inc. All rights reserved.
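The core of most DDD motion-correction schemes is estimating inter-frame shifts; a minimal version can be written with FFT cross-correlation. The toy movie below (a drifting noisy disc) and the integer-pixel alignment are simplifications of the algorithms compared in the chapter.

```python
import numpy as np

def estimate_shift(ref, frame):
    """Integer-pixel shift to apply (via np.roll) to `frame` to register it onto `ref`,
    estimated from the peak of the FFT cross-correlation."""
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))).real
    peak = np.unravel_index(np.argmax(xcorr), xcorr.shape)
    # Wrap shifts larger than half the image size to negative values
    return [p if p <= s // 2 else p - s for p, s in zip(peak, xcorr.shape)]

def align_movie(frames):
    """Shift every frame onto the first one and return the motion-corrected average."""
    ref = frames[0]
    aligned = [ref]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        aligned.append(np.roll(f, shift=(dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# Toy movie: a noisy disc drifting by one pixel per frame
yy, xx = np.mgrid[:64, :64]
rng = np.random.default_rng(1)
frames = [((yy - 32 - k) ** 2 + (xx - 32 - k) ** 2 < 80).astype(float)
          + rng.normal(0, 0.3, (64, 64)) for k in range(8)]
average = align_movie(frames)
```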
Zhu, Fei; Liu, Quan; Fu, Yuchen; Shen, Bairong
2014-01-01
The segmentation of structures in electron microscopy (EM) images is very important for neurobiological research. Low-resolution neuronal EM images are noisy and offer few features for segmentation, so conventional approaches to identifying neuron structures in EM images are generally unsuccessful. We therefore present a multi-scale fused structure boundary detection algorithm in this study. In the algorithm, we first generate an EM image Gaussian pyramid; then, at each level of the pyramid, we apply the Laplacian of Gaussian (LoG) function to obtain structure boundaries; finally, we assemble the detected boundaries with a fusion algorithm to obtain a combined neuron structure image. Since the obtained neuron structures usually have gaps, we put forward a reinforcement learning-based boundary amendment method to connect the gaps in the detected boundaries. We use a SARSA(λ)-based curve traveling and amendment approach derived from reinforcement learning to repair the incomplete curves. Using this algorithm, a moving point starts from one end of the incomplete curve and walks through the image, with decisions supervised by the approximated curve model, with the aim of minimizing the connection cost until the gap is closed. Our approach provided stable and efficient structure segmentation. Test results using 30 EM images from ISBI 2012 indicated that both of our approaches, i.e., with or without boundary amendment, performed better than six conventional boundary detection approaches. In particular, after amendment, the Rand error and warping error, which are the most important performance measurements for structure segmentation, were reduced to very low values. The comparison with the benchmark method of ISBI 2012 and recently developed methods also indicates that our method performs better for the accurate identification of substructures in EM images and is therefore useful for identifying imaging features related to brain diseases.
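A minimal sketch of the multi-scale LoG stage (Gaussian pyramid, per-level Laplacian-of-Gaussian response, upsampling and fusion by averaging); the reinforcement-learning amendment step is not shown, and the toy image and parameter choices are assumptions.

```python
import numpy as np
from scipy import ndimage

def multiscale_log_boundaries(image, levels=3, sigma=1.5):
    """Multi-scale boundary map: LoG response at each level of a Gaussian pyramid,
    upsampled back to full size and fused by averaging."""
    fused = np.zeros_like(image, dtype=float)
    current = image.astype(float)
    for level in range(levels):
        log = np.abs(ndimage.gaussian_laplace(current, sigma=sigma))
        # Upsample the response back to the original resolution before fusing
        factor = (image.shape[0] / current.shape[0], image.shape[1] / current.shape[1])
        fused += ndimage.zoom(log, factor, order=1)
        # Next pyramid level: smooth then downsample by 2
        current = ndimage.gaussian_filter(current, sigma=1.0)[::2, ::2]
    return fused / levels

# Toy EM-like image: a bright "membrane" ring plus noise
yy, xx = np.mgrid[:128, :128]
ring = np.exp(-((np.hypot(yy - 64, xx - 64) - 40) ** 2) / 8.0)
image = ring + np.random.default_rng(0).normal(0, 0.1, ring.shape)
boundary_map = multiscale_log_boundaries(image)
```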
Applications of modern statistical methods to analysis of data in physical science
NASA Astrophysics Data System (ADS)
Wicker, James Eric
Modern methods of statistical and computational analysis offer solutions to dilemmas confronting researchers in physical science. Although the ideas behind modern statistical and computational analysis methods were originally introduced in the 1970s, most scientists still rely on methods written during the early era of computing. These researchers, who analyze increasingly voluminous and multivariate data sets, need modern analysis methods to extract the best results from their studies. The first section of this work showcases applications of modern linear regression. Since the 1960s, many researchers in spectroscopy have used classical stepwise regression techniques to derive molecular constants. However, problems with thresholds of entry and exit for model variables plague this analysis method. Other criticisms of this kind of stepwise procedure include its inefficient searching method, the order in which variables enter or leave the model, and problems with overfitting data. We implement an information scoring technique that overcomes the assumptions inherent in the stepwise regression process to calculate molecular model parameters. We believe that this kind of information-based model evaluation can be applied to more general analysis situations in physical science. The second section proposes new methods of multivariate cluster analysis. The K-means algorithm and the EM algorithm, introduced in the 1960s and 1970s respectively, formed the basis of multivariate cluster analysis methodology for many years. However, shortcomings of these methods include strong dependence on initial seed values and inaccurate results when the data seriously depart from hypersphericity. We propose new cluster analysis methods based on genetic algorithms that overcome the strong dependence on initial seed values. In addition, we propose a generalization of the Genetic K-means algorithm which can accurately identify clusters with complex hyperellipsoidal covariance structures. We then use this new algorithm in a genetic-algorithm-based Expectation-Maximization (GEM) process that can accurately calculate parameters describing complex clusters in a mixture model routine. Using the accuracy of this GEM algorithm, we assign information scores to cluster calculations in order to best identify the number of mixture components in a multivariate data set. We showcase how these algorithms can be used to process multivariate data from astronomical observations.
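As a simplified stand-in for the genetic EM procedure, the sketch below fits full-covariance Gaussian mixtures from many random restarts and several component counts and keeps the model with the best information score (BIC); the restarts play the role of the GA's population search, and the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two clusters, one strongly elongated (hyperellipsoidal)
a = rng.multivariate_normal([0, 0], [[4.0, 3.5], [3.5, 4.0]], 300)
b = rng.multivariate_normal([8, 0], [[1.0, 0.0], [0.0, 1.0]], 300)
X = np.vstack([a, b])

best = None
for k in range(1, 6):                      # candidate numbers of mixture components
    for seed in range(10):                 # many restarts stand in for the GA's population search
        gm = GaussianMixture(n_components=k, covariance_type="full",
                             random_state=seed).fit(X)
        score = gm.bic(X)                  # information score used to pick the model
        if best is None or score < best[0]:
            best = (score, k, gm)

print("selected number of components:", best[1])
```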
Cho, Joo-Youn; Yu, Kyung-Sang; Jang, In-Jin; Yang, Byung-Hwan; Shin, Sang-Goo; Yim, Dong-Seok
2002-04-01
The pharmacokinetics of omeprazole and its metabolites in healthy subjects were evaluated to determine whether a single dose of moclobemide inhibited CYP2C19 activity. Sixteen volunteers, of whom eight were extensive metabolizers (EM) and eight were poor metabolizers (PM) for CYP2C19, participated in two studies. Venous blood samples were collected for 24 h after oral ingestion of 40 mg omeprazole with or without coadministration of 300 mg moclobemide. The pharmacokinetic changes in omeprazole, omeprazole sulphone, and 5-hydroxyomeprazole concentrations were assessed to test for an interaction between omeprazole and moclobemide. The coadministration of moclobemide in EMs approximately doubled the mean AUC (from 1834 to 3760 ng ml(-1) h) and Cmax (from 987 to 1649 ng ml(-1)) of omeprazole, and increased the AUC of omeprazole sulphone without changing the AUC ratio of omeprazole to omeprazole sulphone. Moclobemide coadministration also more than doubled the AUC ratio of omeprazole to 5-hydroxyomeprazole (from 2.5 to 5.3) in EMs. There was a significant decrease in the Cmax and AUC of 5-hydroxyomeprazole in PMs, but no significant changes were seen in the omeprazole and omeprazole sulphone AUCs. A single dose of moclobemide resulted in significant suppression of CYP2C19 activity in EMs. We conclude that physicians prescribing moclobemide should pay attention to its pharmacokinetic interactions even on the first day of coadministration with CYP2C19 substrates.
Reduced isothermal feature set for long wave infrared (LWIR) face recognition
NASA Astrophysics Data System (ADS)
Donoso, Ramiro; San Martín, Cesar; Hermosilla, Gabriel
2017-06-01
In this paper, we introduce a new concept in the thermal face recognition area: isothermal features. These consist of a feature vector built from a thermal signature that depends on the emission of the person's skin and its temperature. A thermal signature is the appearance of the face to infrared sensors and is unique to each person. The infrared face is decomposed into isothermal regions that capture the thermal features of the face. Each isothermal region is modeled as a set of circles, each centered at a pixel of the image, and the feature vector is composed of the maximum radius of the circles in each isothermal region. This feature vector corresponds to the thermal signature of a person. The face recognition process is built using a modification of the Expectation Maximization (EM) algorithm in conjunction with a proposed probabilistic index for the classification process. Results obtained using an infrared database are compared with typical state-of-the-art techniques, showing better performance, especially in uncontrolled acquisition scenarios.
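One way to compute a maximum-radius feature per isothermal band is via the Euclidean distance transform of the band's mask (its maximum equals the radius of the largest inscribed circle). The toy thermal face and band edges below are assumptions; this is not the paper's exact recognition pipeline.

```python
import numpy as np
from scipy import ndimage

def thermal_signature(face_temp, band_edges):
    """Feature vector: for each isothermal band, the largest radius of a circle that
    fits inside the band (maximum of the Euclidean distance transform of its mask)."""
    features = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        mask = (face_temp >= lo) & (face_temp < hi)
        radius = ndimage.distance_transform_edt(mask).max() if mask.any() else 0.0
        features.append(radius)
    return np.array(features)

# Toy thermal face: warm center (nose/cheeks) cooling towards the periphery
yy, xx = np.mgrid[:96, :96]
face_temp = 36.0 - 0.04 * np.hypot(yy - 48, xx - 48)
bands = np.arange(33.0, 36.5, 0.5)          # 0.5 degC isothermal bands (assumed)
print(thermal_signature(face_temp, bands))
```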
Dugué, Audrey Emmanuelle; Pulido, Marina; Chabaud, Sylvie; Belin, Lisa; Gal, Jocelyn
2016-12-01
We describe how to estimate progression-free survival while dealing with interval-censored data in the setting of clinical trials in oncology. Three procedures with SAS and R statistical software are described: first, a nonparametric maximum likelihood estimation of the survival curve using the EM-ICM (Expectation and Maximization-Iterative Convex Minorant) algorithm as described by Wellner and Zhan in 1997; second, a sensitivity analysis procedure in which the progression time is assigned (i) at the midpoint, (ii) at the upper limit (reflecting the standard analysis, in which the progression time is assigned at the first radiologic exam showing progressive disease), or (iii) at the lower limit of the censoring interval; and finally, two multiple imputation approaches, considering either a uniform distribution or the nonparametric maximum likelihood estimate (NPMLE). Clin Cancer Res; 22(23); 5629-35. ©2016 AACR. ©2016 American Association for Cancer Research.
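The sensitivity-analysis procedure (assigning the progression time to the midpoint, upper, or lower limit of the censoring interval) can be sketched in a few lines with a plain Kaplan-Meier estimator; the intervals below are invented, and the hand-rolled estimator stands in for the SAS/R procedures described in the paper.

```python
import numpy as np

def kaplan_meier(times, events):
    """Plain Kaplan-Meier estimator (times in months; events: 1=progression, 0=censored)."""
    order = np.argsort(times)
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    at_risk, surv, curve = len(times), 1.0, []
    for t, e in zip(times, events):
        if e:
            surv *= 1.0 - 1.0 / at_risk
        curve.append((t, surv))
        at_risk -= 1
    return curve

# Hypothetical censoring intervals (L, R]: last progression-free exam at L, progression seen at R
lower = np.array([2.0, 4.0, 6.0, 3.0, 8.0])
upper = np.array([4.0, 6.0, 9.0, 5.0, 12.0])
events = np.ones_like(lower, dtype=int)

for label, assigned in [("midpoint", (lower + upper) / 2),
                        ("upper (standard analysis)", upper),
                        ("lower", lower)]:
    print(label, kaplan_meier(assigned, events))
```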
Elashoff, Robert M.; Li, Gang; Li, Ning
2009-01-01
Summary In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for the scleroderma lung disease. PMID:18162112
NASA Astrophysics Data System (ADS)
Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi
2011-07-01
In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.
A Zero- and K-Inflated Mixture Model for Health Questionnaire Data
Finkelman, Matthew D.; Green, Jennifer Greif; Gruber, Michael J.; Zaslavsky, Alan M.
2011-01-01
In psychiatric assessment, Item Response Theory (IRT) is a popular tool to formalize the relation between the severity of a disorder and associated responses to questionnaire items. Practitioners of IRT sometimes make the assumption of normally distributed severities within a population; while convenient, this assumption is often violated when measuring psychiatric disorders. Specifically, there may be a sizable group of respondents whose answers place them at an extreme of the latent trait spectrum. In this article, a zero- and K-inflated mixture model is developed to account for the presence of such respondents. The model is fitted using an expectation-maximization (E-M) algorithm to estimate the percentage of the population at each end of the continuum, concurrently analyzing the remaining “graded component” via IRT. A method to perform factor analysis for only the graded component is introduced. In assessments of oppositional defiant disorder and conduct disorder, the zero- and K-inflated model exhibited better fit than the standard IRT model. PMID:21365673
Half-blind remote sensing image restoration with partly unknown degradation
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
The problem of image restoration has been extensively studied for its practical importance and theoretical interest. This paper mainly discusses the problem of image restoration with a partly unknown kernel. In this model, the functional form of the degradation kernel is known, but its parameters are not. We must therefore estimate the parameters of the Gaussian kernel and the true image simultaneously. For this new problem, a total variation restoration model is proposed and an alternating direction iteration algorithm is designed. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) are used to measure the performance of the method. Numerical results show that the parameters of the kernel can be estimated accurately, and the new method achieves both much higher PSNR and much higher SSIM than the expectation maximization (EM) method in many cases. In addition, the accuracy of the estimation is not sensitive to noise. Furthermore, even when the support of the kernel is unknown, the method can still be used to obtain an accurate estimate.
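The kernel-parameter half of the alternating scheme can be illustrated on its own: given a current image estimate, the Gaussian width is fitted by least squares against the observation. The square test image, noise level, and use of scipy's bounded scalar minimizer are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
current_estimate = np.zeros((96, 96))
current_estimate[30:66, 30:66] = 1.0            # stand-in for the current TV image estimate
sigma_true = 2.3
observed = gaussian_filter(current_estimate, sigma_true) + rng.normal(0, 0.01, current_estimate.shape)

# Kernel-parameter step: fit the Gaussian width that best maps the current image
# estimate onto the observation (least squares in sigma).
def misfit(sigma):
    return np.sum((gaussian_filter(current_estimate, sigma) - observed) ** 2)

result = minimize_scalar(misfit, bounds=(0.1, 10.0), method="bounded")
print(f"estimated sigma = {result.x:.2f} (true {sigma_true})")
```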
Jaber, A; Whitworth, J M; Corbett, I P; Al-Baqshi, B; Kanaa, M D; Meechan, J G
2010-11-01
To compare the efficacy of 2% lidocaine and 4% articaine both with 1:100,000 adrenaline in anaesthetising the pulps of mandibular incisors. Thirty-one healthy adult volunteers received the following local anaesthetic regimens adjacent to a mandibular central incisor: 1) buccal infiltration of 1.8 mL lidocaine plus dummy lingual injection (LB), 2) buccal plus lingual infiltrations of 0.9 mL lidocaine (LBL), 3) buccal infiltration of 1.8 mL articaine plus dummy lingual injection (AB), 4) buccal plus lingual infiltrations of 0.9 mL articaine (ABL). Pulp sensitivities of the central incisor and contralateral lateral incisor were assessed electronically. Anaesthetic efficacy was determined by two methods: 1) Recording the number of episodes with no responses to maximal electronic pulp tester stimulation during the course of the study period, 2) recording the number of volunteers with no response to maximal pulp tester stimulation within 15 min and maintained for 45 min (defined as sustained anaesthesia). Data were analysed by McNemar, chi-square, Mann-Whitney and paired t-tests. For both test teeth, the number of episodes of no sensation on maximal stimulation was significantly greater after articaine than lidocaine for both techniques. The split buccal plus lingual dose was more effective than the buccal injection alone for both solutions (p <0.001). 4% articaine was more effective than 2% lidocaine when comparing sustained anaesthesia in both teeth for each technique (p <0.001), however, there was no difference in sustained anaesthesia between techniques for either tooth or solution. 4% articaine was more effective than 2% lidocaine (both with 1:100,000 adrenaline) in anaesthetising the pulps of lower incisor teeth after buccal or buccal plus lingual infiltrations.
Iterative Code-Aided ML Phase Estimation and Phase Ambiguity Resolution
NASA Astrophysics Data System (ADS)
Wymeersch, Henk; Moeneclaey, Marc
2005-12-01
As many coded systems operate at very low signal-to-noise ratios, synchronization becomes a very difficult task. In many cases, conventional algorithms will either require long training sequences or result in large BER degradations. By exploiting code properties, these problems can be avoided. In this contribution, we present several iterative maximum-likelihood (ML) algorithms for joint carrier phase estimation and ambiguity resolution. These algorithms operate on coded signals by accepting soft information from the MAP decoder. Issues of convergence and initialization are addressed in detail. Simulation results are presented for turbo codes, and are compared to performance results of conventional algorithms. Performance comparisons are carried out in terms of BER performance and mean square estimation error (MSEE). We show that the proposed algorithm reduces the MSEE and, more importantly, the BER degradation. Additionally, phase ambiguity resolution can be performed without resorting to a pilot sequence, thus improving the spectral efficiency.
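For a known (or soft-estimated) symbol sequence, the ML carrier phase estimate has a familiar closed form, shown below; in the code-aided algorithms the soft symbols would come from the MAP decoder, whereas this sketch substitutes the transmitted QPSK symbols as perfect soft information (the data-aided bound).

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = 0.4                            # radians
symbols = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=500) / np.sqrt(2)  # QPSK
noise = 0.1 * (rng.normal(size=500) + 1j * rng.normal(size=500))
received = symbols * np.exp(1j * true_phase) + noise

# Soft symbol estimates would come from the MAP decoder; here the transmitted
# symbols stand in for perfect soft information.
soft_symbols = symbols

# ML phase estimate: angle of the correlation between received samples and symbol estimates
phase_estimate = np.angle(np.sum(received * np.conj(soft_symbols)))
print(f"estimated phase = {phase_estimate:.3f} rad (true {true_phase})")
```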
Mendonca, Goncalo V; Teixeira, Micael S; Heffernan, Kevin S; Fernhall, Bo
2013-06-01
Ingestion of water attenuates the chronotropic response to submaximal exercise. However, it is not known whether this effect is equally manifested during dynamic exercise below and above the ventilatory threshold (VT). We explored the effects of water ingestion on the heart rate response to an incremental cycle-ergometer protocol. In a randomized fashion, 19 healthy adults (10 men and nine women, age 20.9 ± 1.8 years) ingested 50 and 500 ml of water before completing a cycle-ergometer protocol on two separate days. The heart rate and oxygen uptake ( ) responses to water ingestion were analysed both at rest and during exercise performed below and above the VT. The effects of water intake on brachial blood pressure were measured only at rest. Resting mean arterial pressure increased and resting heart rate decreased, but only after 500 ml of water (P < 0.05). Compared with that seen after 50 ml of water, the 500 ml volume elicited an overall decrease in submaximal heart rate (P < 0.05). In contrast, drinking 500 ml of water did not affect submaximal . The participants' maximal heart rate, maximal and VT were similar between conditions. Our results therefore indicate that, owing to its effects on submaximal heart rate over a broad spectrum of intensities, the drinking of water should be recognized as a potential confounder in cardiovascular exercise studies. However, by showing no differences between conditions for submaximal , they also suggest that the magnitude of heart rate reduction after drinking 500 ml of water may be of minimal physiological significance for exercise cardiorespiratory capacity.
Interval-based reconstruction for uncertainty quantification in PET
NASA Astrophysics Data System (ADS)
Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis
2018-02-01
A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.
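For reference, a classical (non-interval) ML-EM reconstruction update, of which NIBEM is an interval-valued extension, looks like the following numpy sketch with a tiny invented system matrix.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Classical ML-EM: iteratively update the activity x so that the forward
    projection A @ x matches the measured counts y (all entries non-negative)."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)                 # A^T 1
    for _ in range(n_iter):
        projection = A @ x
        ratio = y / np.maximum(projection, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sensitivity, 1e-12)
    return x

# Tiny toy system: 4 detector bins viewing 3 voxels (values are illustrative)
A = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.2, 0.8],
              [0.1, 0.1, 0.2]])
true_activity = np.array([5.0, 1.0, 3.0])
y = A @ true_activity                          # noise-free measurements for illustration
print(mlem(A, y))                              # approaches true_activity
```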
Smith, Cory M; Housh, Terry J; Hill, Ethan C; Johnson, Glen O; Schmidt, Richard J
2017-04-01
This study used a combined electromyographic, mechanomyographic, and force approach to identify electromechanical delay (EMD) from the onsets of the electromyographic to force signals (EMD E-F), the onsets of the electromyographic to mechanomyographic signals (EMD E-M), and the onsets of the mechanomyographic to force signals (EMD M-F). The purposes of the current study were to examine: (1) the differences in EMD E-F, EMD E-M, and EMD M-F from the vastus lateralis during maximal voluntary dynamic (1 repetition maximum [1-RM]) and isometric (maximal voluntary isometric contraction [MVIC]) muscle actions; and (2) the effects of fatigue on EMD E-F, EMD M-F, and EMD E-M. Ten men performed pretest and posttest 1-RM and MVIC leg extension muscle actions. The fatiguing workbout consisted of 70% 1-RM dynamic constant external resistance leg extension muscle actions to failure. The results indicated that there were no significant differences between 1-RM and MVIC EMD E-F, EMD E-M, or EMD M-F. There were, however, significant fatigue-induced increases in EMD E-F (94% and 63%), EMD E-M (107%), and EMD M-F (63%) for both the 1-RM and MVIC measurements. Therefore, these findings demonstrated the effects of fatigue on EMD measures and supported comparisons among studies that examined dynamic or isometric EMD measures from the vastus lateralis using a combined electromyographic, mechanomyographic, and force approach. Copyright © 2017 Elsevier Ltd. All rights reserved.
Separation of pulsar signals from noise using supervised machine learning algorithms
NASA Astrophysics Data System (ADS)
Bethapudi, S.; Desai, S.
2018-04-01
We evaluate the performance of four different machine learning (ML) algorithms: an Artificial Neural Network Multi-Layer Perceptron (ANN MLP), Adaboost, a Gradient Boosting Classifier (GBC), and XGBoost, for the separation of pulsars from radio frequency interference (RFI) and other sources of noise, using a dataset obtained from the post-processing of a pulsar search pipeline. This dataset was previously used for the cross-validation of the SPINN-based machine learning engine, obtained from the reprocessing of the HTRU-S survey data (Morello et al., 2014). We have used the Synthetic Minority Over-sampling Technique (SMOTE) to deal with the high class imbalance in the dataset. We report a variety of quality scores from all four of these algorithms on both the non-SMOTE and SMOTE datasets. For all the above ML methods, we report high accuracy and G-mean for both the non-SMOTE and SMOTE cases. We study the feature importances using Adaboost, GBC, and XGBoost, and also from the minimum Redundancy Maximum Relevance approach, to report an algorithm-agnostic feature ranking. From these methods, we find the signal-to-noise ratio of the folded profile to be the best feature. We find that all the ML algorithms report FPRs about an order of magnitude lower than the corresponding FPRs obtained in Morello et al. (2014), for the same recall value.
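A hedged sketch of the SMOTE-plus-classifier workflow on a synthetic imbalanced dataset: oversample only the training split, then report recall and G-mean for a few of the classifier families named above (XGBoost is omitted to keep dependencies minimal). The imbalanced-learn and scikit-learn APIs used here are standard, but the data and settings are placeholders.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Imbalanced synthetic stand-in for the pulsar-candidate features (1 = pulsar)
X, y = make_classification(n_samples=4000, n_features=8, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training set so the test set keeps the natural imbalance
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

for clf in (MLPClassifier(max_iter=500), AdaBoostClassifier(), GradientBoostingClassifier()):
    clf.fit(X_bal, y_bal)
    pred = clf.predict(X_te)
    recall = recall_score(y_te, pred)                       # pulsar recall
    specificity = recall_score(y_te, pred, pos_label=0)     # 1 - false positive rate
    gmean = np.sqrt(recall * specificity)
    print(type(clf).__name__, f"recall={recall:.2f}  G-mean={gmean:.2f}")
```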
Photohemolytic potency of tetracyclines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bjellerup, M.; Ljunggren, B.
1985-04-01
Hemolysis induced by long-wave ultraviolet radiation (UVA) and 8 different commercial tetracycline derivatives was studied in a model using human red blood cells. Demethylchlortetracycline and doxycycline were shown to have pronounced hemolytic properties, causing 88% and 85% hemolysis, respectively, at a concentration of 50 micrograms/ml and 72 J/cm2 of UVA. Tetracycline, oxytetracycline, and chlortetracycline caused maximally 18% hemolysis at 200 micrograms/ml, and lymecycline only 7% at 100 micrograms/ml. Methacycline showed an intermediate hemolytic effect of 36% at 200 micrograms/ml. Minocycline had no hemolytic effect whatsoever. These experimental data correlate very well with clinical reports and comparative phototoxicity trials in humans. Photohemolysis may thus be of value for predicting tetracycline phototoxicity.
Beam-induced motion correction for sub-megadalton cryo-EM particles.
Scheres, Sjors Hw
2014-08-13
In electron cryo-microscopy (cryo-EM), the electron beam that is used for imaging also causes the sample to move. This motion blurs the images and limits the resolution attainable by single-particle analysis. In a previous Research article (Bai et al., 2013) we showed that correcting for this motion by processing movies from fast direct-electron detectors allowed structure determination to near-atomic resolution from 35,000 ribosome particles. In this Research advance article, we show that an improved movie processing algorithm is applicable to a much wider range of specimens. The new algorithm estimates straight movement tracks by considering multiple particles that are close to each other in the field of view, and models the fall-off of high-resolution information content by radiation damage in a dose-dependent manner. Application of the new algorithm to four data sets illustrates its potential for significantly improving cryo-EM structures, even for particles that are smaller than 200 kDa. Copyright © 2014, Scheres.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aldridge, David F.
2016-07-06
Program EMRECORD is a utility program designed to facilitate introduction of a 3D electromagnetic (EM) data acquisition configuration (or a “source-receiver recording geometry”) into EM forward modeling algorithms EMHOLE and FDEM. A precise description of the locations (in 3D space), orientations, types, and amplitudes/sensitivities, of all sources and receivers is an essential ingredient for forward modeling of EM wavefields.
Sasaya, Tenta; Sunaguchi, Naoki; Thet-Lwin, Thet-; Hyodo, Kazuyuki; Zeniya, Tsutomu; Takeda, Tohoru; Yuasa, Tetsuya
2017-01-01
We propose a pinhole-based fluorescent x-ray computed tomography (p-FXCT) system with a 2-D detector and volumetric beam that can suppress the quality deterioration caused by scatter components. In the corresponding p-FXCT technique, projections are acquired at individual incident energies just above and below the K-edge of the imaged trace element; then, reconstruction is performed based on the two sets of projections using a maximum likelihood expectation maximization algorithm that incorporates the scatter components. We constructed a p-FXCT imaging system and performed a preliminary experiment using a physical phantom and an I imaging agent. The proposed dual-energy p-FXCT improved the contrast-to-noise ratio by a factor of more than 2.5 compared to that attainable using mono-energetic p-FXCT for a 0.3 mg/ml I solution. We also imaged an excised rat’s liver infused with a Ba contrast agent to demonstrate the feasibility of imaging a biological sample. PMID:28272496
Wisloff, U; Castagna, C; Helgerud, J; Jones, R; Hoff, J
2004-01-01
Background: A high level of strength is inherent in elite soccer play, but the relation between maximal strength and sprint and jumping performance has not been studied thoroughly. Objective: To determine whether maximal strength correlates with sprint and vertical jump height in elite male soccer players. Methods: Seventeen international male soccer players (mean (SD) age 25.8 (2.9) years, height 177.3 (4.1) cm, weight 76.5 (7.6) kg, and maximal oxygen uptake 65.7 (4.3) ml/kg/min) were tested for maximal strength in half squats and sprinting ability (0–30 m and 10 m shuttle run sprint) and vertical jumping height. Result: There was a strong correlation between maximal strength in half squats and sprint performance and jumping height. Conclusions: Maximal strength in half squats determines sprint performance and jumping height in high level soccer players. High squat strength did not imply reduced maximal oxygen consumption. Elite soccer players should focus on maximal strength training, with emphasis on maximal mobilisation of concentric movements, which may improve their sprinting and jumping performance. PMID:15155427
Ostojic, Sergej M; Stojanovic, Marko D; Calleja-Gonzalez, Julio
2011-04-30
The main aim of the study was to investigate whether different levels of aerobic power influence heart rate (HR) responses during the first minute of recovery following maximal exercise in athletes. Thirty-two young male soccer players were recruited for the study during the final week of their training prior to the competition. Following the maximal exercise on a treadmill, the participants were placed supine for 60 s of HR recording. The time between exercise cessation and the recovery HR measurement was kept as short as possible. At the end of exercise (i.e., the start of recovery), HR was similar in both groups. At both 10 s and 20 s of the recovery period, the players characterized by high aerobic power (> 60 ml/kg/min) revealed significantly lower HR compared to their sub-elite counterparts (< 50 ml/kg/min; P < 0.05). No differences between the groups were found at later stages of the analyzed post-exercise HR. The data suggest that athletes characterized by high aerobic capacity could be better adapted to maximal exercise, with faster HR recovery immediately following an exercise test. These results generally suggest that aerobic power, along with autonomic modulation, might have played a role in the ultra-short-term cardiovascular responses to all-out exercise.
NASA Astrophysics Data System (ADS)
Li, Yan; Wu, Mingwei; Du, Xinwei; Xu, Zhuoran; Gurusamy, Mohan; Yu, Changyuan; Kam, Pooi-Yuen
2018-02-01
A novel soft-decision-aided maximum likelihood (SDA-ML) carrier phase estimation method and its simplified version, the decision-aided and soft-decision-aided maximum likelihood (DA-SDA-ML) methods are tested in a nonlinear phase noise-dominant channel. The numerical performance results show that both the SDA-ML and DA-SDA-ML methods outperform the conventional DA-ML in systems with constant-amplitude modulation formats. In addition, modified algorithms based on constellation partitioning are proposed. With partitioning, the modified SDA-ML and DA-SDA-ML are shown to be useful for compensating the nonlinear phase noise in multi-level modulation systems.
Hravnak, Marilyn; Chen, Lujie; Dubrawski, Artur; Bose, Eliezer; Clermont, Gilles; Pinsky, Michael R
2016-12-01
Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits its use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby "cleaning" such data for future modeling. 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data [heart rate (HR), respiratory rate (RR), peripheral arterial oxygen saturation (SpO2) at 1/20 Hz, and noninvasive oscillometric blood pressure (BP)]. Periods during which the data crossed stability thresholds defined VS event epochs. Data were divided into Block 1, the ML training/cross-validation set, and Block 2, the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (tenfold cross-validation) gave area under the curve (AUC) scores of 0.97 RR, 0.91 BP and 0.76 SpO2. Performance when applied to Block 2 test data was AUC 0.94 RR, 0.84 BP and 0.72 SpO2. ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact, and could support data mining for future model building.
NASA Astrophysics Data System (ADS)
Singh, Ranjan Kumar; Rinawa, Moti Lal
2018-04-01
The residual stresses arising in fiber-reinforced laminates during curing in closed molds lead to dimensional changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature, and thickness play an important role in it. In the present work, the lay-up and stacking sequence are optimized to maximize flexural stiffness and minimize the springback angle. Search algorithms are employed to obtain the best sequence through repair strategies such as swapping. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed, which is an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups. A computer code implementing the above schemes is developed in MATLAB. Strategies for multi-objective optimization using search algorithms are also suggested and tested.
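The swap-repair search over stacking sequences can be illustrated with a simple hill-climbing sketch; the bending-stiffness objective below is a crude surrogate (ply alignment weighted by distance-cubed from the mid-plane), not the laminate theory used in the paper, and Python stands in for the MATLAB code.

```python
import math
import random

random.seed(0)
plies = [0, 45, -45, 90, 90, -45, 45, 0]          # candidate ply angles (degrees), fixed multiset

def flexural_proxy(seq):
    """Surrogate for bending stiffness D11: plies aligned with the load (cos^2 theta)
    weighted by their distance-cubed contribution from the mid-plane."""
    n = len(seq)
    score = 0.0
    for k, theta in enumerate(seq):
        z0, z1 = k - n / 2, k + 1 - n / 2          # ply boundaries (in ply-thickness units)
        score += math.cos(math.radians(theta)) ** 2 * (z1 ** 3 - z0 ** 3) / 3.0
    return score

def swap_search(seq, iters=2000):
    """Permutation search with swap repair moves: accept a swap only if it improves the objective."""
    best, best_score = list(seq), flexural_proxy(seq)
    for _ in range(iters):
        i, j = random.sample(range(len(best)), 2)
        cand = list(best)
        cand[i], cand[j] = cand[j], cand[i]
        s = flexural_proxy(cand)
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

print(swap_search(plies))                          # 0-degree plies migrate to the outer surfaces
```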
Perez-Riverol, Yasset; Xu, Qing-Wei; Wang, Rui; Uszkoreit, Julian; Griss, Johannes; Sanchez, Aniel; Reisinger, Florian; Csordas, Attila; Ternent, Tobias; Del-Toro, Noemi; Dianes, Jose A; Eisenacher, Martin; Hermjakob, Henning; Vizcaíno, Juan Antonio
2016-01-01
The original PRIDE Inspector tool was developed as an open source standalone tool to enable the visualization and validation of mass-spectrometry (MS)-based proteomics data before data submission, or of data already publicly available in the Proteomics Identifications (PRIDE) database. The initial implementation of the tool focused on visualizing PRIDE data by supporting the PRIDE XML format and direct access to private (password protected) and public experiments in PRIDE. The ProteomeXchange (PX) Consortium has been set up to enable a better integration of existing public proteomics repositories, maximizing its benefit to the scientific community through the implementation of standard submission and dissemination pipelines. Within the Consortium, PRIDE is focused on supporting submissions of tandem MS data. The increasing use and popularity of the new Proteomics Standards Initiative (PSI) data standards such as mzIdentML and mzTab, and the diversity of workflows supported by the PX resources, prompted us to design and implement a new suite of algorithms and libraries that would build upon the success of the original PRIDE Inspector and would enable users to visualize and validate PX "complete" submissions. The PRIDE Inspector Toolsuite supports the handling and visualization of different experimental output files, ranging from spectra (mzML, mzXML, and the most popular peak list formats) and peptide and protein identification results (mzIdentML, PRIDE XML, mzTab) to quantification data (mzTab, PRIDE XML), using a modular and extensible set of open-source, cross-platform libraries. We believe that the PRIDE Inspector Toolsuite represents a milestone in the visualization and quality assessment of proteomics data. It is freely available at http://github.com/PRIDE-Toolsuite/. © 2016 by The American Society for Biochemistry and Molecular Biology, Inc.
NASA Technical Reports Server (NTRS)
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid: a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for finding the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. The larger values of this function will then appear with higher probability. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to constrained maximization, with applications to integer programming and the TSP (Traveling Salesman Problem).
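The core idea, treating the positive objective as an unnormalized probability density so that larger values are sampled more often, can be sketched with simple rejection sampling; the one-dimensional objective and bound below are invented for illustration and carry none of the quantum-classical dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Positive function to maximize on [0, 10] (illustrative, multi-modal)."""
    return 2.0 + np.sin(3 * x) + 0.5 * np.cos(7 * x) + 0.1 * x

# Treat f as an unnormalized probability density and draw samples from it by
# rejection sampling; regions of larger f are visited with higher probability.
f_bound = 4.5                                   # an upper bound on f over [0, 10]
samples = []
while len(samples) < 5000:
    x = rng.uniform(0.0, 10.0)
    if rng.uniform(0.0, f_bound) < f(x):
        samples.append(x)

samples = np.array(samples)
x_best = samples[np.argmax(f(samples))]         # best sample approximates the global maximizer
print(f"approximate maximizer x = {x_best:.3f}, f(x) = {f(x_best):.3f}")
```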
Yiou, E; Deroche, T; Do, M C; Woodman, T
2011-04-01
During leg flexion from erect posture, postural stability is organized in advance during "anticipatory postural adjustments" (APA). During these APA, inertial forces are generated that propel the centre of gravity (CoG) laterally towards stance leg side. This study examined how fear of falling (FoF) may influence this anticipatory postural control of medio-lateral (ML) stability. Ten young healthy participants performed a series of leg flexions at maximal velocity from low and high surface heights (6 and 66 cm above ground, respectively). In this latter condition with increased FoF, stance foot was placed at the lateral edge of the support surface to induce maximal postural threat. Results showed that the amplitude of ML inertial forces generated during APA decreased with FoF; this decrease was compensated by an increase in APA duration so that the CoG position at time of swing foot-off was located further towards stance leg side. With these changes in ML APA, the CoG was propelled in the same final (unipodal) position above stance foot as in condition with low FoF. These results contrast with those obtained in the literature during quiet standing which showed that FoF did not have any influence on the ML component of postural control. It is proposed that ML APA are modified with increased FoF, in such a way that the risk of a sideway fall induced by the large CoG motion is attenuated.
Oliveira, Ricardo B; Myers, Jonathan; Araújo, Claudio Gil S; Abella, Joshua; Mandic, Sandra; Froelicher, Victor
2009-06-01
Maximal oxygen pulse (O2 pulse) mirrors the stroke volume response to exercise and should therefore be a strong predictor of mortality; however, limited and conflicting data are available on this issue. Nine hundred forty-eight participants, classified as those with cardiopulmonary disease (CPD) and those without (non-CPD), underwent cardiopulmonary exercise testing (CPX) for clinical reasons between 1993 and 2003. The ability of maximal O2 pulse and maximal oxygen uptake (peak VO2) to predict mortality was investigated using proportional hazards and Akaike information criterion analyses. All-cause mortality was the endpoint. Over a mean follow-up of 6.3 ± 3.2 years, there were 126 deaths. Maximal O2 pulse, expressed either in absolute terms or relative to age-predicted values, and peak VO2 were significant and independent predictors of mortality in those with and without CPD (P < 0.04). Akaike information criterion analysis revealed that the model including both maximal O2 pulse and peak VO2 had the highest accuracy for predicting mortality. The optimal cut-points for O2 pulse and peak VO2 (<12 vs. ≥12 ml/beat and <16 vs. ≥16 ml/(kg·min), respectively) were established by the area under the receiver-operating-characteristic curve. The relative risks of mortality were 3.4 and 2.2 (CPD and non-CPD, respectively) among participants with both maximal O2 pulse and peak VO2 responses below these cut-points compared with participants with both responses above these cut-points. These results indicate that maximal O2 pulse is a significant predictor of mortality in patients with and without CPD. The addition of absolute and relative O2 pulse data provides complementary information for risk-stratifying heterogeneous participants referred for CPX and should be routinely included in the CPX report.
Lannin, Timothy B; Thege, Fredrik I; Kirby, Brian J
2016-10-01
Advances in rare cell capture technology have made possible the interrogation of circulating tumor cells (CTCs) captured from whole patient blood. However, locating captured cells in the device by manual counting bottlenecks data processing by being tedious (hours per sample) and compromises the results by being inconsistent and prone to user bias. Some recent work has been done to automate the cell location and classification process to address these problems, employing image processing and machine learning (ML) algorithms to locate and classify cells in fluorescent microscope images. However, the type of machine learning method used is a part of the design space that has not been thoroughly explored. Thus, we have trained four ML algorithms on three different datasets. The trained ML algorithms locate and classify thousands of possible cells in a few minutes rather than a few hours, representing an order of magnitude increase in processing speed. Furthermore, some algorithms have a significantly (P < 0.05) higher area under the receiver operating characteristic curve than do other algorithms. Additionally, significant (P < 0.05) losses to performance occur when training on cell lines and testing on CTCs (and vice versa), indicating the need to train on a system that is representative of future unlabeled data. Optimal algorithm selection depends on the peculiarities of the individual dataset, indicating the need for a careful comparison and optimization of algorithms for individual image classification tasks. © 2016 International Society for Advancement of Cytometry.
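As a rough, hedged illustration of this kind of algorithm comparison, the sketch below cross-validates several common classifiers and reports their mean area under the ROC curve on synthetic features. The feature set, labels, and classifier settings are invented for the example and are not the paper's datasets or models.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Hypothetical cell-image features (e.g., intensity, size, shape descriptors)
# with binary labels (cell / not a cell); stands in for real CTC image data.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=5,
                           random_state=0)

classifiers = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.3f}")
```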
Song, Lei; Gao, Jungang; Wang, Sheng; Hu, Huasi; Guo, Youmin
2017-01-01
Estimation of the volume of pleural effusion is an important clinical issue. Existing methods cannot assess it accurately when there is a large volume of liquid in the pleural cavity and/or the patient has another disease (e.g. pneumonia). To help address this issue, the objective of this study was to develop and test a novel algorithm using B-splines and a local clustering level set method jointly, namely BLL. The BLL algorithm was applied to a dataset of 27 pleural effusions detected on chest CT examinations of 18 adult patients with free pleural effusion. Study results showed that the average volumes of pleural effusion computed using the BLL algorithm and assessed manually by the physicians were 586 ± 339 ml and 604 ± 352 ml, respectively. For the same patient, the volume of the pleural effusion segmented semi-automatically was 101.8% ± 4.6% of that segmented manually. Dice similarity was 0.917 ± 0.031. The study demonstrated the feasibility of applying the new BLL algorithm to accurately measure the volume of pleural effusion.
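For readers unfamiliar with the evaluation metrics quoted above, the short sketch below shows how a Dice similarity coefficient and a volume in millilitres can be computed from binary segmentation masks. It is a generic utility, not the BLL segmentation algorithm itself, and the voxel spacing is a made-up example value.

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks
    (e.g., an automatic and a manual effusion segmentation)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def volume_ml(mask, spacing_mm=(0.7, 0.7, 5.0)):
    """Segmented volume in ml, given the CT voxel spacing in mm (hypothetical)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return np.asarray(mask, dtype=bool).sum() * voxel_mm3 / 1000.0
```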
Lin, Yunyue; Wu, Qishi; Cai, Xiaoshan; ...
2010-01-01
Data transmission from sensor nodes to a base station or a sink node often incurs significant energy consumption, which critically affects network lifetime. We generalize and solve the problem of deploying multiple base stations to maximize network lifetime in terms of two different metrics under one-hop and multihop communication models. In the one-hop communication model, the sensors far away from base stations always deplete their energy much faster than others. We propose an optimal solution and a heuristic approach based on the minimal enclosing circle algorithm to deploy a base station at the geometric center of each cluster. In the multihop communication model, both base station location and the data routing mechanism need to be considered in maximizing network lifetime. We propose an iterative algorithm based on rigorous mathematical derivations and use linear programming to compute the optimal routing paths for data transmission. Simulation results show the distinguished performance of the proposed deployment algorithms in maximizing network lifetime.
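For the one-hop case, the sketch below illustrates one simple way a minimal-enclosing-circle style heuristic could place a base station near the centre of a sensor cluster, so that the maximum transmission distance (and hence the worst-case per-node energy) is kept small. It uses a Badoiu-Clarkson style iteration and invented sensor coordinates; it is not the paper's optimal solution or its exact heuristic.

```python
import numpy as np

def approx_enclosing_circle_center(points, n_iter=1000):
    """Approximate the centre of the minimum enclosing circle by repeatedly
    stepping a shrinking fraction of the way toward the farthest point."""
    pts = np.asarray(points, dtype=float)
    c = pts.mean(axis=0)                               # start at the centroid
    for i in range(1, n_iter + 1):
        farthest = pts[np.argmax(np.linalg.norm(pts - c, axis=1))]
        c = c + (farthest - c) / (i + 1)               # shrinking step
    radius = np.linalg.norm(pts - c, axis=1).max()
    return c, radius

rng = np.random.default_rng(0)
cluster = rng.uniform(0, 100, size=(50, 2))            # hypothetical sensor positions
center, r = approx_enclosing_circle_center(cluster)
print("base station location:", center, "max link distance:", r)
```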
Using Machine Learning for Advanced Anomaly Detection and Classification
NASA Astrophysics Data System (ADS)
Lane, B.; Poole, M.; Camp, M.; Murray-Krezan, J.
2016-09-01
Machine Learning (ML) techniques have successfully been used in a wide variety of applications to automatically detect and potentially classify changes in activity, or a series of activities, by utilizing large amounts of data, sometimes even seemingly unrelated data. The amount of data being collected, processed, and stored in the Space Situational Awareness (SSA) domain has grown at an exponential rate and is now better suited for ML. This paper describes the development of advanced algorithms to deliver significant improvements in the characterization of deep space objects and indication and warning (I&W) using a global network of telescopes that are collecting photometric data on a multitude of space-based objects. The Phase II Air Force Research Laboratory (AFRL) Small Business Innovative Research (SBIR) project Autonomous Characterization Algorithms for Change Detection and Characterization (ACDC), contracted to ExoAnalytic Solutions Inc., is providing the ability to detect and identify photometric signature changes due to potential space object changes (e.g. stability, tumble rate, aspect ratio) and to correlate observed changes to potential behavioral changes using a variety of techniques, including supervised learning. Furthermore, these algorithms run in real time on data being collected and processed by the ExoAnalytic Space Operations Center (EspOC), providing timely alerts and warnings while dynamically creating collection requirements for the EspOC for the algorithms that generate higher fidelity I&W. This paper will discuss the recently implemented ACDC algorithms, including the general design approach and results to date. The usage of supervised algorithms, such as Support Vector Machines, Neural Networks, and k-Nearest Neighbors, and unsupervised algorithms, for example k-means, Principal Component Analysis, and Hierarchical Clustering, and the implementations of these algorithms are explored. Results of applying these algorithms to EspOC data, both in off-line "pattern of life" analysis and on-line in real time as data are collected, will be presented. Finally, future work in applying ML for SSA will be discussed.
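As a hedged sketch of the supervised side of such a pipeline, the example below trains two of the named classifier families (an SVM and k-nearest neighbors) on synthetic photometric-style features. All features, labels, and settings are invented; this is not the ACDC implementation or EspOC data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical photometric features (brightness statistics, periodicity
# measures, etc.) with labels 0 = stable object, 1 = changed behaviour.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("k-nearest neighbors", KNeighborsClassifier(n_neighbors=5))]:
    model = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(model.score(X_te, y_te), 3))
```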
General form of a cooperative gradual maximal covering location problem
NASA Astrophysics Data System (ADS)
Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh
2018-07-01
Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation problem (CMCLAP) is developed. In addition, both cooperative and gradual covering concepts are applied to the maximal covering location problem simultaneously (CGMCLP). We then develop an integrated form of the cooperative gradual maximal covering location problem, called the general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model on similar data sets shows that the proposed model can cover more demand and acts more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.
Distributed-Memory Fast Maximal Independent Set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew
The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing the necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem, all designed for shared-memory machines and analyzed using the PRAM model; these algorithms do not have direct, efficient distributed-memory implementations. In this paper, we extend two of Luby's seminal MIS algorithms, "Luby(A)" and "Luby(B)," to distributed-memory execution, and we evaluate their performance. We compare our results with the "Filtered MIS" implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
Improving Search Algorithms by Using Intelligent Coordinates
NASA Technical Reports Server (NTRS)
Wolpert, David H.; Tumer, Kagan; Bandari, Esfandiar
2004-01-01
We consider algorithms that maximize a global function G in a distributed manner, using a different adaptive computational agent to set each variable of the underlying space. Each agent η is self-interested; it sets its variable to maximize its own function g_η. Three factors govern such a distributed algorithm's performance, related to exploration/exploitation, game theory, and machine learning. We demonstrate how to exploit all three factors by modifying a search algorithm's exploration stage: rather than random exploration, each coordinate of the search space is now controlled by a separate machine-learning-based player engaged in a noncooperative game. Experiments demonstrate that this modification improves simulated annealing (SA) by up to an order of magnitude for bin packing and for a model of an economic process run over an underlying network. These experiments also reveal interesting small-world phenomena.
In vivo interaction of ketoconazole and sucralfate in healthy volunteers.
Carver, P L; Berardi, R R; Knapp, M J; Rider, J M; Kauffman, C A; Bradley, S F; Atassi, M
1994-01-01
Absorption of ketoconazole is impaired in subjects with an increased gastric pH due to administration of antacids, H2-receptor antagonists, proton pump inhibitors, or the presence of hypochlorhydria. Sucralfate could provide an attractive alternative in patients receiving ketoconazole who require therapy for acid-peptic disorders. Twelve healthy human volunteers were administered a single 400-mg oral dose of ketoconazole in each of three randomized treatment phases. In phase A, ketoconazole was administered orally with 240 ml of water. In phase B, ketoconazole and sucralfate (1.0 g) were administered simultaneously with 240 ml of water. In phase C, ketoconazole was administered with 240 ml of water 2 h after administration of sucralfate (1.0 g) orally with 240 ml of water. A 680-mg oral dose of glutamic acid hydrochloride was administered 10 min prior to and with each dose of ketoconazole, sucralfate, or ketoconazole plus sucralfate. Simultaneous administration of ketoconazole and sucralfate led to a significant reduction in the area under the concentration-time curve and maximal concentration of ketoconazole in serum (78.12 ± 12.20 versus 59.32 ± 13.61 µg·h/ml and 12.34 ± 3.07 versus 8.92 ± 2.57 µg/ml, respectively; P < 0.05). When ketoconazole was administered 2 h after sucralfate, the observed ketoconazole area under the concentration-time curve was not significantly decreased compared with that of ketoconazole alone. The time to maximal concentrations in serum and the ketoconazole elimination rate constant were not significantly different in any of the three treatment phases. In patients receiving concurrent administration of ketoconazole and sucralfate, doses should be separated by at least 2 h. PMID:7910724
Effect of leg exercise training on vascular volumes during 30 days of 6 degrees head-down bed rest
NASA Technical Reports Server (NTRS)
Greenleaf, J. E.; Vernikos, J.; Wade, C. E.; Barnes, P. R.
1992-01-01
Plasma and red cell volumes, body density, and water balance were measured in 19 men (32-42 yr) confined to bed rest (BR). One group (n = 5) had no exercise training (NOE), another performed near-maximal variable-intensity isotonic exercise for 60 min/day (ITE; n = 7), and the third performed near-maximal intermittent isokinetic exercise for 60 min/day (IKE; n = 7). Caloric intake was 2,678-2,840 kcal/day; mean body weight (n = 19) decreased by 0.58 ± 0.35 (SE) kg during BR due to a negative fluid balance (diuresis) on day 1. Mean energy costs for the NOE, IKE, and ITE regimens were 83 (3.6 ± 0.2 ml O2·min-1·kg-1), 214 (8.9 ± 0.5 ml·min-1·kg-1), and 446 kcal/h (18.8 ± 1.6 ml·min-1·kg-1), respectively. Body densities within groups and mean urine volumes (1,752-1,846 ml/day) between groups were unchanged during BR. Resting changes in plasma volume (ml/kg) after BR were -1.5 ± 2.3% (NS) in ITE, -14.7 ± 2.8% (P < 0.05) in NOE, and -16.8 ± 2.9% (P < 0.05) in IKE, and mean water balances during BR were +295, -106, and +169 ml/24 h, respectively. Changes in red cell volume followed changes in plasma volume. The significant chronic decreases in plasma volume in the IKE and NOE groups and its maintenance in the ITE group could not be accounted for by water balance or by responses of the plasma osmotic, protein, vasopressin, or aldosterone concentrations or plasma renin activity. There was close coupling between resting plasma volume and plasma protein and osmotic content. (ABSTRACT TRUNCATED AT 250 WORDS)
Dong, J; Hayakawa, Y; Kober, C
2014-01-01
When metallic prosthetic appliances and dental fillings are present in the oral cavity, metal-induced streak artefacts are unavoidable in CT images. The aim of this study was to develop a method for artefact reduction using statistical reconstruction on multidetector-row CT images. Adjacent CT images often depict similar anatomical structures. Therefore, reconstruction of images with weak artefacts was first attempted using the projection data of an artefact-free image in a neighbouring thin slice. Images with moderate and strong artefacts were then processed in sequence by successive iterative restoration, where the projection data were generated from the adjacent reconstructed slice. First, the basic maximum likelihood-expectation maximization (ML-EM) algorithm was applied. Next, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Alternatively, a small region of interest was designated. Finally, a general-purpose graphics processing unit (GPU) was applied in both situations. The algorithms reduced the metal-induced streak artefacts on multidetector-row CT images when the sequential processing method was applied. The ordered subset-expectation maximization and the small region of interest reduced the processing duration without apparent detriment, and the general-purpose graphics processing unit provided high performance. In summary, a statistical reconstruction method was applied for streak artefact reduction, and the alternative algorithms applied were effective. Both software and hardware tools, such as ordered subset-expectation maximization, a small region of interest and a general-purpose graphics processing unit, achieved fast artefact correction.
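For readers unfamiliar with the two reconstruction algorithms named above, the sketch below shows the textbook ML-EM multiplicative update and its ordered-subset variant for a Poisson measurement model y ~ Poisson(Ax). The system matrix A is a placeholder; the paper's multidetector-row CT geometry, sequential slice processing and GPU implementation are not reproduced here.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Basic ML-EM reconstruction for y ~ Poisson(A @ x), with A a generic
    (hypothetical) system matrix mapping image pixels to projection bins."""
    x = np.ones(A.shape[1])                 # non-negative initial image
    sens = A.sum(axis=0) + eps              # sensitivity image (column sums)
    for _ in range(n_iter):
        ratio = y / (A @ x + eps)           # measured / forward-projected
        x *= (A.T @ ratio) / sens           # multiplicative EM update
    return x

def osem(A, y, n_subsets=4, n_iter=10, eps=1e-12):
    """Ordered-subset EM: the same update applied over row subsets in turn,
    which typically accelerates convergence per full pass over the data."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            x *= (As.T @ (ys / (As @ x + eps))) / (As.sum(axis=0) + eps)
    return x
```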
Evidential analysis of difference images for change detection of multitemporal remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Yin; Peng, Lijuan; Cremers, Armin B.
2018-03-01
In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence theory based EM method (EEM) which incorporates spatial contextual information into EM by iteratively fusing the belief assignments of neighboring pixels into the central pixel. Second, an evidential labeling method in the sense of maximizing the a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of the difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
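The baseline the article improves on, EM estimation of a mixture model for the difference image, can be sketched as a standard two-component Gaussian mixture fit, as below. The evidential fusion of neighbouring belief assignments (EEM) and the MAP labeling step are not implemented here; the initial values are simple heuristics chosen for the example.

```python
import numpy as np

def em_two_gaussians(diff_image, n_iter=100):
    """Standard (non-spatial) EM for a 2-component Gaussian mixture fitted to
    difference-image values: one class for 'unchanged', one for 'changed'."""
    d = np.asarray(diff_image, dtype=float).ravel()
    w = np.array([0.5, 0.5])                           # mixing weights
    mu = np.array([d.mean() - d.std(), d.mean() + d.std()])
    var = np.array([d.var(), d.var()])
    for _ in range(n_iter):
        # E-step: responsibilities of each class for each pixel
        lik = np.exp(-0.5 * (d[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = w * lik
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = resp.sum(axis=0)
        w = nk / d.size
        mu = (resp * d[:, None]).sum(axis=0) / nk
        var = (resp * (d[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```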
Cosmic muon induced EM showers in NOvA
Yadav, Nitin; Duyang, Hongyue; Shanahan, Peter; ...
2016-11-15
Here, the NuMI Off-Axis νe Appearance (NOvA) experiment is a νe appearance neutrino oscillation experiment at Fermilab. It identifies the νe signal from the electromagnetic (EM) showers induced by electrons in the final state of neutrino interactions. Cosmic-muon-induced EM showers, dominated by bremsstrahlung, are abundant in the NOvA far detector. We use the cosmic muon-removal technique to obtain a pure EM shower sample from bremsstrahlung muons in data. We also use EM showers from cosmic muons decaying in flight, which are highly pure EM showers. The large cosmic-EM sample can be used, as a data-driven method, to characterize the EM shower signature and provides valuable checks of the simulation, reconstruction, particle identification algorithm, and calibration across the NOvA detector.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagdon, M.J.; Martin, P.J.
1997-06-01
In 1994, Novus Engineering and EME Group began a project for the New York State Office of Mental Health (OMH) to maximize the use and benefit of energy management systems (EMS) installed at various large psychiatric hospitals throughout New York State. The project, which was funded and managed by the Dormitory Authority of the State of New York (DASNY), had three major objectives: (1) Maximize Energy Savings - Novus staff quickly learned that EMS systems as set up by contractors are far from optimal for generating energy savings. This part of the program revealed numerous opportunities for increased energy savings, such as: fine tuning proportional/integral/derivative (PID) loops to eliminate valve and damper hunting; adjusting temperature reset schedules to reduce energy consumption and provide more uniform temperature conditions throughout the facilities; and modifying equipment schedules. (2) Develop Monitoring Protocols - Large EMS systems are so complex that they require a systematic approach to daily, monthly and seasonal monitoring of building system conditions in order to locate system problems before they turn into trouble calls or equipment failures. To assist local facility staff in their monitoring efforts, Novus prepared user-friendly handbooks on each EMS, including monitoring protocols tailored to each facility. (3) Provide Staff Training - When a new EMS is installed at a facility, it is frequently the maintenance staff's first exposure to a complex computerized system. Without proper training in what to look for, staff use of the EMS is generally very limited. With proper training, staff can be taught to take a pro-active approach to identify and solve problems before they get out of hand. The staff then realize that the EMS is a powerful preventative maintenance tool that can be used to make their work more effective and efficient. Case histories are presented.
Sasaki, Satoshi; Comber, Alexis J; Suzuki, Hiroshi; Brunsdon, Chris
2010-01-28
Ambulance response time is a crucial factor in patient survival. The number of emergency cases (EMS cases) requiring an ambulance is increasing due to changes in population demographics, which is lengthening ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020 to 2050 by correlating current EMS cases with demographic factors at the level of the census area and predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations. Future EMS demand was predicted to increase by 2030 using the model (R2 = 0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising the locations of ambulance stations reduced the average response time by 57 seconds. Current and predicted future EMS demand at the modelled locations were calculated and compared. The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case 'demand' over census areas allows the data to be correlated to population characteristics and optimal 'supply' locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making.
Can endurance training improve physical capacity and quality of life in young Fontan patients?
Hedlund, Eva R; Lundell, Bo; Söderström, Liselott; Sjöberg, Gunnar
2018-03-01
Children after Fontan palliation have reduced exercise capacity and quality of life. Our aim was to study whether endurance training could improve physical capacity and quality of life in Fontan patients. Fontan patients (n=30) and healthy age- and gender-matched control subjects (n=25) performed a 6-minute walk test at submaximal capacity and a maximal cycle ergometer test. Quality of life was assessed with the Pediatric Quality of Life Inventory Version 4.0 questionnaires for children and parents. All tests were repeated after a 12-week endurance training programme and after 1 year. Patients had decreased submaximal and maximal exercise capacity (maximal oxygen uptake 35.0±5.1 ml/min per kg versus 43.7±8.4 ml/min per kg, p<0.001) and reported a lower quality of life score (70.9±9.9 versus 85.7±8.0, p<0.001) than controls. After training, patients improved their submaximal exercise capacity in the 6-minute walk test (from 590.7±65.5 m to 611.8±70.9 m, p<0.05) and reported a higher quality of life (p<0.01), but did not improve maximal exercise capacity. At follow-up, submaximal exercise capacity had increased further and the improved quality of life was sustained. The controls improved their maximal exercise capacity (p<0.05), but not submaximal exercise capacity or quality of life, after training. At follow-up, their improvement in maximal exercise capacity was sustained. We believe that an individualised endurance training programme improves submaximal exercise capacity and quality of life in Fontan patients, and the effect on quality of life appears to be long-lasting.
NOS3 gene polymorphisms and exercise hemodynamics in postmenopausal women.
Hand, B D; McCole, S D; Brown, M D; Park, J J; Ferrell, R E; Huberty, A; Douglass, L W; Hagberg, J M
2006-12-01
We tested whether the G894T and T-786C NOS3 polymorphisms were associated with exercise cardiovascular (CV) hemodynamics in sedentary, physically active, and endurance-trained postmenopausal women. CV hemodynamic parameters, including heart rate (HR), systolic (SBP) and diastolic (DBP) blood pressures, cardiac output (Q) as determined by acetylene rebreathing, stroke volume (SV), arteriovenous oxygen difference (a-vO2 diff), and total peripheral resistance (TPR), were measured during submaximal (40, 60, 80%) and maximal (approximately 100% VO2max) exercise. NOS3 G894T genotype was not significantly associated, either independently or interactively with habitual physical activity (PA) level, with SBP, Q, TPR, or a-vO2 diff during submaximal or maximal exercise. However, NOS3 894T non-carriers had a higher submaximal exercise HR than NOS3 894T allele carriers (120 ± 2 vs. 112 ± 2 beats/min, p = 0.007). NOS3 894T allele carriers had a higher SV than 894T non-carriers (78 ± 2 vs. 72 ± 2 ml/beat, p = 0.03) during submaximal exercise. NOS3 894T non-carriers also had a higher maximal exercise HR averaged across habitual PA groups than T allele carrier women (165 ± 2 vs. 158 ± 2 beats/min, p = 0.04). NOS3 894T allele carriers also tended to have a higher SV during maximal exercise than 894T non-carriers (70 ± 2 vs. 64 ± 2 ml/beat, p = 0.08). NOS3 T-786C genotype was not significantly associated, either independently or interactively, with any of the CV hemodynamic measures during submaximal or maximal exercise. These results suggest an association of NOS3 G894T genotype with submaximal and maximal exercise CV hemodynamic responses, especially HR, in postmenopausal women.
Dozois, Adeline; Hampton, Lorrie; Kingston, Carlene W; Lambert, Gwen; Porcelli, Thomas J; Sorenson, Denise; Templin, Megan; VonCannon, Shellie; Asimos, Andrew W
2017-12-01
The recently proposed American Heart Association/American Stroke Association EMS triage algorithm endorses routing patients with suspected large vessel occlusion (LVO) acute ischemic strokes directly to endovascular centers based on a stroke severity score. The predictive value of this algorithm for identifying LVO is dependent on the overall prevalence of LVO acute ischemic stroke in the EMS population screened for stroke, which has not been reported. We performed a cross-sectional study of patients transported by our county's EMS agency who were dispatched as a possible stroke or had a primary impression of stroke by paramedics. We determined the prevalence of LVO by reviewing medical record imaging reports based on a priori specified criteria. We enrolled 2402 patients, of whom 777 (32.3%) had an acute stroke-related diagnosis. Among 485 patients with acute ischemic stroke, 24.1% (n=117) had an LVO, which represented only 4.87% (95% confidence interval, 4.05%-5.81%) of the total EMS population screened for stroke. Overall, the prevalence of LVO acute ischemic stroke in our EMS population screened for stroke was low. This is an important consideration for any EMS stroke severity-based triage protocol and should be considered in predicting the rates of overtriage to endovascular stroke centers. © 2017 American Heart Association, Inc.
Natural and Artificial Intelligence in Neurosurgery: A Systematic Review.
Senders, Joeky T; Arnaout, Omar; Karhade, Aditya V; Dasenbrock, Hormuzdiyar H; Gormley, William B; Broekman, Marike L; Smith, Timothy R
2017-09-07
Machine learning (ML) is a domain of artificial intelligence that allows computer algorithms to learn from experience without being explicitly programmed. Our aim was to summarize neurosurgical applications of ML where it has been compared to clinical expertise, here referred to as "natural intelligence." A systematic search was performed in the PubMed and Embase databases as of August 2016 to review all studies comparing the performance of various ML approaches with that of clinical experts in the neurosurgical literature. Twenty-three studies were identified that used ML algorithms for diagnosis, presurgical planning, or outcome prediction in neurosurgical patients. Compared to clinical experts, ML models demonstrated a median absolute improvement in accuracy and area under the receiver operating characteristic curve of 13% (interquartile range 4-21%) and 0.14 (interquartile range 0.07-0.21), respectively. In 29 (58%) of the 50 outcome measures for which a P-value was provided or calculated, ML models outperformed clinical experts (P < .05). In 18 of 50 (36%), no difference was seen between ML and expert performance (P > .05), while in 3 of 50 (6%) clinical experts outperformed ML models (P < .05). All 4 studies that compared clinicians assisted by ML models with clinicians alone demonstrated better performance in the first group. We conclude that ML models have the potential to augment the decision-making capacity of clinicians in neurosurgical applications; however, significant hurdles remain associated with creating, validating, and deploying ML models in the clinical setting. Shifting from a human-versus-machine to a human-and-machine paradigm could be essential to overcome these hurdles. Published by Oxford University Press on behalf of Congress of Neurological Surgeons 2017.
Casu, Sebastian; Häske, David
2016-06-01
Delayed antibiotic treatment for patients with severe sepsis and septic shock decreases the probability of survival. In this survey, medical directors of different emergency medical services (EMS) in Germany were asked whether they are prepared for pre-hospital sepsis therapy with antibiotics or dedicated algorithms, in order to evaluate how well the individual rescue districts are prepared for the treatment of patients with this infectious disease. The objective of the survey was to obtain a general picture of the current status of the EMS with respect to rapid antibiotic treatment for sepsis. A total of 166 medical directors were invited via an electronic cover letter to complete a short survey on behalf of the different rescue service districts in Germany. Of the rescue districts, 25.6% (n = 20) stated that they keep antibiotics on EMS vehicles. In addition, 2.6% carry blood cultures on the vehicles. The most common antibiotic is ceftriaxone (a third-generation cephalosporin). In total, 8 (10.3%) rescue districts use an algorithm for patients with sepsis, severe sepsis or septic shock. Although the German EMS is an emergency physician-based rescue system, special provisions in the form of antibiotics on emergency physician vehicles are largely missing. At the same time, only 10.3% of the rescue districts use a special algorithm for sepsis therapy. Sepsis, severe sepsis and septic shock do not appear to be prioritized as highly in the pre-hospital setting as these deadly diseases should be.
Foot volume estimates based on a geometric algorithm in comparison to water displacement.
Mayrovitz, H N; Sims, N; Litwin, B; Pfister, S
2005-03-01
Assessing lower extremity limb volume and its change during and after lymphedema therapy is important for determining treatment efficacy and documenting outcomes. Although leg volumes may be determined by tape measure and other methods, there is no metric method to routinely assess foot volumes, and exclusion of foot volumes can under- or overestimate therapeutic progress. Our aim was to develop and test a metric measurement procedure and algorithm that practicing therapists can use to estimate foot volumes. The method uses a caliper and ruler to measure foot dimensions at standardized locations and calculates foot volume (VM) with a mathematical algorithm. VM was compared to volumes measured by water displacement (Vw) in 30 subjects (60 feet) using regression analysis and limits of agreement (LOA). Vw and VM (mean ± sd) were similar (857 ± 150 ml vs. 859 ± 154 ml) and were highly correlated (VM = 1.00Vw + 1.67 ml, r = 0.965, p < 0.001). The LOA for absolute volume differences and percentages were ±79.6 ml and ±9.28%, respectively. These results indicate that this metric method can be a useful alternative to water displacement when foot volumes are needed but the water displacement method is contraindicated, impractical to implement, too time-consuming, or unavailable.
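For reference, the limits-of-agreement statistic quoted above can be computed from paired volume measurements as in the minimal sketch below (Bland-Altman style: bias plus or minus 1.96 standard deviations of the differences); the variable names are illustrative.

```python
import numpy as np

def limits_of_agreement(v_method, v_reference):
    """Bias and 95% limits of agreement between two paired volume series,
    e.g. algorithmic foot volumes (VM) vs. water displacement (Vw), in ml."""
    diff = np.asarray(v_method, float) - np.asarray(v_reference, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)
```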
A synergistic method for vibration suppression of an elevator mechatronic system
NASA Astrophysics Data System (ADS)
Knezevic, Bojan Z.; Blanusa, Branko; Marcetic, Darko P.
2017-10-01
Modern elevators are complex mechatronic systems that have to satisfy high performance requirements in precision, safety and ride comfort. Each elevator mechatronic system (EMS) contains a mechanical subsystem characterized by its resonant frequency. In order to achieve high performance of the whole system, the control part of the EMS inevitably excites these resonant modes, causing vibration. This paper proposes a synergistic solution based on jerk control and an upgrade of the speed controller with a band-stop filter to restore the ride comfort and speed control lost to vibration. The band-stop filter eliminates the resonant component from the speed controller spectrum, and jerk control keeps the speed controller operating in a linear mode while increasing ride comfort. An original method for band-stop filter tuning based on the Goertzel algorithm and the Kiefer search algorithm is proposed in this paper. In order to generate the speed reference trajectory, which can be defined by different shapes and amplitudes of jerk, a unique generalized model is proposed. The proposed algorithm is integrated into the power drive control algorithm and implemented on a digital signal processor. Experimental verification on a scaled-down prototype of the EMS showed that only the synergistic effect of controlling jerk and filtering the reference torque can completely eliminate vibration.
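The Goertzel algorithm mentioned above evaluates the power of a single frequency component without computing a full FFT, which is what makes it attractive for locating the resonant component when tuning the band-stop filter. The sketch below is only an illustration of that building block; the paper's combined Goertzel/Kiefer-search tuning procedure and the drive implementation are not reproduced, and the signal is synthetic.

```python
import numpy as np

def goertzel_power(x, f_target, fs):
    """Power of the frequency component of x nearest to f_target (Hz),
    computed with the Goertzel recurrence."""
    n = len(x)
    k = int(round(f_target * n / fs))            # nearest DFT bin
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev = s_prev2 = 0.0
    for sample in x:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# Synthetic speed-error signal: a 7 Hz resonant oscillation buried in noise.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
speed_error = (0.2 * np.sin(2 * np.pi * 7.0 * t)
               + 0.05 * np.random.default_rng(0).normal(size=t.size))
# Scanning candidate frequencies and taking the maximum locates the notch frequency.
print(max(range(2, 30), key=lambda f: goertzel_power(speed_error, f, fs)))
```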
Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong
2017-01-01
Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization. PMID:28786986
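To make the classification task concrete, the toy sketch below applies k-means, one of the traditional baselines named above, to noisy copies of a few synthetic 2D templates and then forms class averages, which is where the SNR gain of classification comes from. It is not the GTM/manifold-learning algorithm of the paper, and the images are artificial.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_classes, n_per_class, size = 3, 200, 16

# A few distinct "projection" templates and many noisy copies of each.
templates = rng.normal(size=(n_classes, size * size))
labels_true = np.repeat(np.arange(n_classes), n_per_class)
images = templates[labels_true] + 2.0 * rng.normal(size=(labels_true.size, size * size))

# Reference-free classification with k-means (a traditional baseline).
labels_pred = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(images)

# Averaging the noisy members of each cluster boosts the SNR of the class average.
class_averages = np.stack([images[labels_pred == k].mean(axis=0) for k in range(n_classes)])
print(class_averages.shape)   # (3, 256)
```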
A Markov model for blind image separation by a mean-field EM algorithm.
Tonazzini, Anna; Bedini, Luigi; Salerno, Emanuele
2006-02-01
This paper deals with blind separation of images from noisy linear mixtures with unknown coefficients, formulated as a Bayesian estimation problem. This is a flexible framework in which any kind of prior knowledge about the source images and the mixing matrix can be accounted for. In particular, we describe local correlation within the individual images through the use of Markov random field (MRF) image models. These are naturally suited to express the joint pdf of the sources in a factorized form, so that the statistical independence requirements of most independent component analysis approaches to blind source separation are retained. Our model also includes edge variables to preserve intensity discontinuities. MRF models have proved very efficient in many visual reconstruction problems, such as blind image restoration, and allow separation and edge detection to be performed simultaneously. We propose an expectation-maximization algorithm with the mean-field approximation to derive a procedure for estimating the mixing matrix, the sources, and their edge maps. We tested this procedure on both synthetic and real images in the fully blind case (i.e., no prior information on mixing is exploited) and found that a source model accounting for local autocorrelation is able to increase robustness against noise, even space-variant noise. Furthermore, when the model closely fits the source characteristics, independence is no longer a strict requirement, and cross-correlated sources can be separated as well.
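A much simpler, non-Bayesian baseline for the same task is plain independent component analysis, which the passage above contrasts with the MRF-based model. The sketch below separates two noisy linear image mixtures with FastICA; it ignores the MRF source model, the edge variables and the mean-field EM procedure, and the sources and mixing matrix are synthetic.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
h, w = 64, 64
s1 = np.kron(rng.integers(0, 2, (8, 8)), np.ones((8, 8))).ravel()  # blocky source image
s2 = np.sin(np.linspace(0, 8 * np.pi, h * w))                      # smooth source image
S = np.vstack([s1, s2])                                            # true sources, shape (2, h*w)

A = np.array([[0.8, 0.3], [0.4, 0.7]])                             # unknown mixing matrix
X = A @ S + 0.05 * rng.normal(size=S.shape)                        # two noisy mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X.T).T       # estimated sources, up to scale and permutation
print(S_est.shape, ica.mixing_.shape)  # (2, 4096) (2, 2)
```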
Speech Enhancement Using Gaussian Scale Mixture Models
Hao, Jiucang; Lee, Te-Won; Sejnowski, Terrence J.
2011-01-01
This paper presents a novel probabilistic approach to speech enhancement. Instead of a deterministic logarithmic relationship, we assume a probabilistic relationship between the frequency coefficients and the log-spectra. The speech model in the log-spectral domain is a Gaussian mixture model (GMM). The frequency coefficients obey a zero-mean Gaussian whose covariance equals the exponential of the log-spectra. This results in a Gaussian scale mixture model (GSMM) for the speech signal in the frequency domain, since the log-spectra can be regarded as scaling factors. The probabilistic relation between frequency coefficients and log-spectra allows these to be treated as two random variables, both to be estimated from the noisy signals. Expectation-maximization (EM) was used to train the GSMM, and Bayesian inference was used to compute the posterior signal distribution. Because exact inference in this full probabilistic model is computationally intractable, we developed two approaches to enhance the efficiency: the Laplace method and a variational approximation. The proposed methods were applied to enhance speech corrupted by Gaussian noise and speech-shaped noise (SSN). For both approximations, signals reconstructed from the estimated frequency coefficients provided a higher signal-to-noise ratio (SNR), and those reconstructed from the estimated log-spectra produced a lower word recognition error rate because the log-spectra fit the inputs to the recognizer better. Our algorithms effectively reduced the SSN, which algorithms based on spectral analysis were not able to suppress. PMID:21359139
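The generative structure described above (a GMM over log-spectra, with each frequency coefficient drawn as a zero-mean Gaussian whose variance is the exponential of the log-spectrum) can be sketched as follows. The GMM parameters here are arbitrary placeholders, and neither the EM training nor the Laplace/variational enhancement steps are shown.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins, n_components = 129, 4                        # frequency bins, GMM components

# Placeholder GMM over log-spectra: weights, means, diagonal variances.
weights = np.full(n_components, 1.0 / n_components)
means = rng.normal(loc=-2.0, scale=1.0, size=(n_components, n_bins))
variances = np.full((n_components, n_bins), 0.5)

def sample_frame():
    """Draw one frame from the Gaussian scale mixture model (GSMM)."""
    k = rng.choice(n_components, p=weights)                  # pick a mixture component
    log_spec = rng.normal(means[k], np.sqrt(variances[k]))   # log-spectrum of the frame
    coeffs = rng.normal(0.0, np.sqrt(np.exp(log_spec)))      # frequency coefficients
    return log_spec, coeffs

log_spec, coeffs = sample_frame()
print(log_spec.shape, coeffs.shape)
```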
Simultaneous maximum a posteriori longitudinal PET image reconstruction
NASA Astrophysics Data System (ADS)
Ellis, Sam; Reader, Andrew J.
2017-09-01
Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
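A minimal sketch of the one-step-late MAP idea with a joint longitudinal penalty is given below, using a simple quadratic coupling between two reconstructions in place of the paper's voxel-wise penalty and an arbitrary shared system matrix A. It is an assumption-laden illustration, not the authors' MAP-SLR implementation.

```python
import numpy as np

def map_slr_osl(A, y1, y2, beta=0.1, n_iter=50, eps=1e-12):
    """One-step-late MAP-EM for two longitudinal Poisson datasets y1, y2 with
    a quadratic penalty on voxel-wise differences between the two images."""
    x1 = np.ones(A.shape[1])
    x2 = np.ones(A.shape[1])
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        b1 = A.T @ (y1 / (A @ x1 + eps))       # EM back-projection, scan 1
        b2 = A.T @ (y2 / (A @ x2 + eps))       # EM back-projection, scan 2
        # one-step-late denominators: the penalty gradient evaluated at the
        # current estimates couples the two reconstructions
        d1 = np.maximum(sens + beta * (x1 - x2), eps)
        d2 = np.maximum(sens + beta * (x2 - x1), eps)
        x1, x2 = x1 * b1 / d1, x2 * b2 / d2
    return x1, x2
```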
Pal, Suvra; Balakrishnan, Narayanaswamy
2018-05-01
In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model, assuming the lifetimes follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of the cure rate. Finally, we analyze a well-known melanoma data set with the model and the inferential method developed here.
Cutaneous microdialysis as a novel means of continuously stimulating eccrine sweat glands in vivo.
Morgan, Caroline J; Friedmann, Peter S; Church, Martin K; Clough, Geraldine F
2006-06-01
Previous studies of the pharmacological regulation of sweat gland function in humans have administered agonists or antagonists systemically, by local intradermal injection or by iontophoresis. This has not allowed prolonged or steady-state activation of sweat glands to be examined. In this study, we used the technique of dermal microdialysis to administer pharmacological agents singly and in combination for up to 5 hours. Muscarinic stimulation with pilocarpine nitrate (50 µg/ml to 1.66 mg/ml) produced a sigmoid dose-response curve, with maximal sweating, measured as transepidermal water loss (mean 70 g/m²/h), after 15 minutes. This was sustained at steady-state levels (55 g/m²/h) until perfusion stopped. Perfusion with atropine (0.003 mg/ml) reduced sweating below baseline and blocked pilocarpine-induced sweating completely. Noradrenaline (0.005 mg/ml) induced much lower sweat rates than pilocarpine (56.8 ± 1.62 vs 8.2 ± 1.2 g/m²/h for pilocarpine and noradrenaline, respectively; P < 0.001), and this was unaffected by co-administration of atropine. This method has made it possible to show that sweat glands are capable of sustaining near-maximal activity for at least 5 hours. The method has future application in the investigation of conditions with disordered sweat gland activity.
Human neutrophil peptides in vaginitis/cervicitis of different etiology.
Wiechuła, Barbara E; Friedek, Daniela A; Ekiel, Alicja M; Romanik, Małgorzata K; Martirosian, Gayane
2007-01-01
Development of female genito-urinary infections depends on many factors, such as immune system activity, virulence of the microorganism and production of factors inhibiting the growth of microorganisms. Taking into account the possibility of relapses or severe complications, it is very important to appropriately diagnose and treat infections. Because of the recently observed increase in microbial resistance to antibiotics, researchers are looking for alternatives. In our study we evaluated and compared the concentration of human neutrophil peptides (HNP 1-3) in cervico-vaginal lavages (CVL) obtained from women with vaginitis/cervicitis. Swabs from the posterior vaginal fornix and from the endocervical canal, as well as CVL samples, were obtained from 32 patients with vaginitis/cervicitis and 29 healthy women (control group). Supernatants of CVL were used for determination of HNP concentration by ELISA. The difference between concentrations of HNP 1-3 in the study and control groups was statistically significant (p = 0.018). The maximal concentrations were determined in patients with mixed infections (28.41 ng/ml) and Group B Streptococci (GBS) (28.06 ng/ml), and the minimal concentrations in cases of C. trachomatis (mean concentrations did not differ from those in the control group: 16.93 ng/ml and 16.39 ng/ml, respectively). The maximal correlation was determined for the control and study groups with isolation of GBS (r = 0.79), and a very high negative correlation for the GBS - C. trachomatis group (r = -0.98).
Constrained Fisher Scoring for a Mixture of Factor Analyzers
2016-09-01
The report develops a constrained Fisher scoring method for learning a mixture of factor analyzers (MFA), with computational requirements similar to those of the expectation-maximization algorithm, and discusses its relationship with expectation-maximization. The efficacy of the proposed method is demonstrated through simulation examples, including a synthetic MFA example and a manifold learning example.
Comparison of OPC job prioritization schemes to generate data for mask manufacturing
NASA Astrophysics Data System (ADS)
Lewis, Travis; Veeraraghavan, Vijay; Jantzen, Kenneth; Kim, Stephen; Park, Minyoung; Russell, Gordon; Simmons, Mark
2015-03-01
Delivering mask-ready OPC-corrected data to the mask shop on time is critical for a foundry to meet the cycle time commitment for a new product. With current OPC compute resource sharing technology, different job scheduling algorithms are possible, such as priority-based resource allocation and fair-share resource allocation. In order to maximize computer cluster efficiency, minimize the cost of data processing and deliver data on schedule, the trade-offs of each scheduling algorithm need to be understood. Using actual production jobs, each scheduling algorithm will be tested in a production tape-out environment and judged on its ability to deliver data on schedule, and the trade-offs associated with each method will be analyzed. It is now possible to introduce advanced scheduling algorithms into the OPC data processing environment to meet the goals of on-time delivery of mask-ready OPC data while maximizing efficiency and reducing cost.
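The trade-off between the two allocation policies can be made concrete with a toy simulation like the one below, in which a fixed pool of CPUs is handed out each interval either strictly by priority or in equal fair shares. The job names, priorities, and workloads are invented, and real tape-out schedulers are far more sophisticated.

```python
def simulate(jobs, total_cpus, policy="priority", dt=1.0):
    """Return finish times (hours) for each job under a toy allocation policy.

    jobs: dict name -> {"priority": int (higher = more urgent),
                        "work": remaining CPU-hours}.
    """
    remaining = {name: spec["work"] for name, spec in jobs.items()}
    finish, t = {}, 0.0
    while remaining:
        active = list(remaining)
        if policy == "priority":
            # all CPUs go to the highest-priority unfinished job
            top = max(active, key=lambda n: jobs[n]["priority"])
            shares = {top: total_cpus}
        else:
            # fair share: split the pool equally among unfinished jobs
            shares = {n: total_cpus / len(active) for n in active}
        t += dt
        for name, cpus in shares.items():
            remaining[name] -= cpus * dt
            if remaining[name] <= 0:
                finish[name] = t
                del remaining[name]
    return finish

jobs = {"hot_lot": {"priority": 9, "work": 400},
        "standard_A": {"priority": 5, "work": 300},
        "standard_B": {"priority": 5, "work": 300}}
for policy in ("priority", "fair"):
    print(policy, simulate(jobs, total_cpus=100, policy=policy))
```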
Ertani, Andrea; Schiavon, Michela; Nardi, Serenella
2017-01-01
An alfalfa-based protein hydrolysate (EM) has been tested in tomato (Solanum lycopersicon L.) plants at two different concentrations (0.1 and 1 mL L-1) to get insight on its efficacy as biostimulant in this species and to unravel possible metabolic targets and molecular mechanisms that may shed light on its mode of action. EM was efficient in promoting the fresh biomass and content in chlorophyll and soluble sugars of tomato plants, especially when it was applied at the concentration of 1 mL L-1. This effect on plant productivity was likely related to the EM-dependent up-regulation of genes identified via microarray and involved in primary carbon and nitrogen metabolism, photosynthesis, nutrient uptake and developmental processes. EM also up-regulated a number of genes implied in the secondary metabolism that leads to the synthesis of compounds (phenols and terpenes) functioning in plant development and interaction with the environment. Concomitantly, phenol content was enhanced in EM-treated plants. Several new genes have been identified in tomato as potential targets of EM action, like those involved in detoxification processes from reactive oxygen species and xenobiotic (particularly glutathione/ascorbate cycle-related and ABC transporters), and defense against abiotic and biotic stress. The model hypothesized is that elicitors present in the EM formulation like auxins, phenolics, and amino acids, may trigger a signal transduction pathway via modulation of the intracellular levels of the hormones ethylene, jasmonic acid and abscissic acid, which then further prompt the activation of a cascade events requiring the presence and activity of many kinases and transcription factors to activate stress-related genes. The genes identified suggest these kinases and transcription factors as players involved in a complex crosstalk between biotic and abiotic stress signaling pathways. We conclude that EM acts as a biostimulant in tomato due to its capacity to stimulate plant productivity and up-regulate stress-related responses. Its use in agricultural practices may reduce the need of inorganic fertilizers and pesticides, thereby reducing the environmental impact of productive agriculture. PMID:28725232
In vitro and in vivo investigations on the antitumour activity of Chelidonium majus.
Capistrano I, Rica; Wouters, An; Lardon, Filip; Gravekamp, Claudia; Apers, Sandra; Pieters, Luc
2015-12-15
Chelidonium majus L. (Papaveraceae) (greater celandine) is a medicinal herb that is widespread in Europe, and antitumoural activity has been reported for C. majus extracts. The aim of this study was to investigate the antitumour activity of a C. majus extract in vitro and in vivo. Cytotoxic effects of C. majus extracts were evaluated on human cancer cell lines, i.e. PANC-1 (pancreas cancer), HT-29 (colon cancer), MDA-MB-231 (breast cancer), PC-EM005 and PC-EM002 (primary endometrium cancer cells), and PANC02 (murine pancreatic adenocarcinoma cells). A preliminary in vivo study was performed to evaluate the effect of a defatted C. majus extract and Ukrain(TM) in a highly metastatic murine pancreatic model. Chelidonium majus L. herb containing 1.26% (dry weight) of total alkaloids, expressed as chelidonine, was used to prepare an 80% ethanolic extract (CM2). This crude extract was then defatted with n-hexane, resulting in a defatted C. majus extract (CM2B). Cytotoxic effects of the two extracts (CM2 and CM2B) were evaluated on human and murine cell lines in vitro, and CM2B and Ukrain(TM) were evaluated in a highly metastatic murine pancreatic model. Four main benzylisoquinoline alkaloids were identified in CM2B, i.e. chelidonine, sanguinarine, chelerythrine and protopine, using HPLC-UV. CM2 showed high cytotoxic activity against PANC-1 (IC50, 20.7 µg/ml) and HT-29 (IC50, 20.6 µg/ml), and moderate cytotoxic activity against MDA-MB-231 (IC50, 73.9 µg/ml). Both CM2 and CM2B showed moderate to high cytotoxic activity against the PANC02 cell line (IC50, 34.4 and 36.0 µg/ml). Little to no cytotoxic effect was observed on the primary endometrium cancer cells PC-EM005 and PC-EM002 or on normal 3T3 fibroblast cells when treated with CM2B. Significantly fewer metastases were counted in mice treated with 1.2 mg/kg CM2B, but not with 3.6 mg/kg Ukrain(TM), compared to the control group. The extract, however, did not affect the weight of the primary tumours. Copyright © 2015 Elsevier GmbH. All rights reserved.
EM Bias-Correction for Ice Thickness and Surface Roughness Retrievals over Rough Deformed Sea Ice
NASA Astrophysics Data System (ADS)
Li, L.; Gaiser, P. W.; Allard, R.; Posey, P. G.; Hebert, D. A.; Richter-Menge, J.; Polashenski, C. M.
2016-12-01
Very rough, ridged sea ice accounts for a significant percentage of the total ice area and an even larger percentage of the total ice volume. Commonly used radar altimeter surface detection techniques are empirical in nature and work well only over level/smooth sea ice. Rough sea ice surfaces can modify the return waveforms, resulting in significant electromagnetic (EM) bias in the estimated surface elevations and thus large errors in the ice thickness retrievals. To understand and quantify such sea ice surface roughness effects, a combined EM rough surface and volume scattering model was developed to simulate radar returns from the rough sea ice `layer cake' structure. A waveform matching technique was also developed to fit observed waveforms to a physically based waveform model and subsequently correct the roughness-induced EM bias in the estimated freeboard. This new EM Bias Corrected (EMBC) algorithm was able to better retrieve surface elevations and to estimate the surface roughness parameter simultaneously. In situ data from multi-instrument airborne and ground campaigns were used to validate the ice thickness and surface roughness retrievals. For the surface roughness retrievals, we applied this EMBC algorithm to coincident LiDAR/radar measurements collected during a CryoSat-2 under-flight by the NASA IceBridge missions. Results show that not only does the waveform model fit the measured radar waveform very well, but the roughness parameters derived independently from the LiDAR and radar data also agree very well for both level and deformed sea ice. For sea ice thickness retrievals, validation based on in situ data from the coordinated CRREL/NRL field campaign demonstrates that the physically based EMBC algorithm performs fundamentally better than the empirical algorithm over very rough deformed sea ice, suggesting that sea ice surface roughness effects can be modeled and corrected based solely on the radar return waveforms.
Interactive machine learning for health informatics: when do we need the human-in-the-loop?
Holzinger, Andreas
2016-06-01
Machine learning (ML) is the fastest growing field in computer science, and health informatics poses some of its greatest challenges. The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. Most ML researchers concentrate on automatic machine learning (aML), where great advances have been made, for example, in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches greatly benefit from big data with many training sets. However, in the health domain we are sometimes confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in reinforcement learning, preference learning, and active learning. The term iML is not yet in wide use, so we define it as "algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human." This "human-in-the-loop" can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization of health data, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem can be greatly reduced in complexity through the input and assistance of a human agent involved in the learning phase.
A study of active learning methods for named entity recognition in clinical text.
Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua
2015-12-01
Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build due to the requirement of domain experts in annotation. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task to identify concepts of medical problems, treatments, and lab tests from clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge, which contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three categories: uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves that plot performance of the NER model against the estimated annotation cost (based on the number of sentences or words in the training set) were generated to evaluate the different active learning methods and passive learning, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best uncertainty-sampling method could save 66% of annotations in sentences compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve 0.80 in F-measure, in comparison to random sampling, the best uncertainty-based method saved 42% of annotations in words, whereas the best diversity-based method reduced annotation effort by only 7%. In the simulated setting, AL methods, particularly uncertainty-sampling based approaches, seemed to significantly reduce annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
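To make the uncertainty-based selection concrete, the following minimal sketch (not the authors' implementation) scores each unlabeled sentence by the confidence of a sequence labeler and queries the least confident ones; the model interface predict_marginals is a hypothetical stand-in for any NER tagger that returns per-token label probabilities.

# Minimal sketch of least-confidence uncertainty sampling for an AL loop.
# model.predict_marginals is a hypothetical interface, not a real library call.

def sentence_confidence(token_marginals):
    """Confidence of a sentence = product of the best per-token marginals."""
    conf = 1.0
    for token_probs in token_marginals:          # one dict of label -> prob per token
        conf *= max(token_probs.values())
    return conf

def select_batch(model, unlabeled_sentences, batch_size=20):
    """Pick the sentences the current model is least confident about."""
    scored = [(sentence_confidence(model.predict_marginals(s)), s)
              for s in unlabeled_sentences]
    scored.sort(key=lambda pair: pair[0])        # lowest confidence first
    return [s for _, s in scored[:batch_size]]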
Generation of Referring Expressions: Assessing the Incremental Algorithm
ERIC Educational Resources Information Center
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard
2012-01-01
A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
NASA Astrophysics Data System (ADS)
Ackley, Kendall; Eikenberry, Stephen; Klimenko, Sergey; LIGO Team
2017-01-01
We present a false-alarm rate for a joint detection of gravitational wave (GW) events and associated electromagnetic (EM) counterparts for Advanced LIGO and Virgo (LV) observations during the first years of operation. Using simulated GW events and their reconstructed probability skymaps, we tile over the error regions using sets of archival wide-field telescope survey images and recover the number of astrophysical transients to be expected during LV-EM follow-up. With the known GW event injection coordinates, we inject artificial EM sources at that site, based on theoretical and observational models, on a one-to-one basis. We calculate the EM false-alarm probability using an unsupervised machine learning algorithm based on shapelet analysis, which has been shown to be a strong discriminator between astrophysical transients and image artifacts while reducing the set of transients to be manually vetted by five orders of magnitude. We also compare the performance of our method with other machine-learning transient classification and reduction algorithms, showing comparable results without the need for a large training set, which opens the possibility for next-generation telescopes to take advantage of this pipeline for LV-EM follow-up missions.
Burns, Ryan D; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Shultz, Barry B; Saint-Maurice, Pedro F; Welk, Gregory J; Mahar, Matthew T
2016-01-01
A popular algorithm to predict VO2Peak from the one-mile run/walk test (1MRW) includes body mass index (BMI), which presents practical issues in school settings. The purpose of this study was to develop an aerobic capacity model from the 1MRW in adolescents that is independent of BMI. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years. The 1MRW was administered on an outdoor track, and a laboratory VO2Peak test was conducted using a maximal treadmill protocol. Multiple linear regression was employed to develop the prediction model. Results yielded the following algorithm: VO2Peak = 7.34 × (1MRW speed in m·s-1) + 0.23 × (age × sex) + 17.75. The New Model displayed a multiple correlation of R = 0.81 and a prediction error (standard error of the estimate) of 4.78 ml·kg-1·min-1 against measured VO2Peak, with good criterion-referenced (CR) agreement with FITNESSGRAM's Healthy Fitness Zone (Kappa = 0.62; percentage agreement = 84.4%; Φ = 0.62). The New Model was validated using k-fold cross-validation and showed homoscedastic residuals across the range of predicted scores. The omission of BMI did not compromise the accuracy of the model. In conclusion, the New Model displayed good predictive accuracy and good CR agreement with measured VO2Peak in adolescents aged 13-16 years.
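A small sketch of evaluating the reported regression follows; note that the abstract does not state how sex is coded, so the 1 = male / 0 = female coding below is an assumption to be checked against the original paper.

def predict_vo2peak(speed_m_per_s, age_years, sex):
    """New Model from the abstract: VO2Peak in ml·kg-1·min-1.
    sex coding assumed to be 1 = male, 0 = female (not stated in the abstract)."""
    return 7.34 * speed_m_per_s + 0.23 * (age_years * sex) + 17.75

# Example: a 15-year-old (assumed male) covering one mile in 8 minutes
speed = 1609.34 / (8 * 60)                 # one mile in metres / seconds elapsed
print(round(predict_vo2peak(speed, 15, 1), 1))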
Harmony search algorithm: application to the redundancy optimization problem
NASA Astrophysics Data System (ADS)
Nahas, Nabil; Thien-My, Dao
2010-09-01
The redundancy optimization problem is a well-known NP-hard problem which involves the selection of elements and redundancy levels to maximize system performance, given different system-level constraints. This article presents an efficient algorithm based on the harmony search algorithm (HSA) to solve this optimization problem. The HSA is a recent nature-inspired algorithm which mimics the improvisation process of music players. Two kinds of problems are considered in testing the proposed algorithm. The first is limited to the binary series-parallel system, where the problem consists of selecting elements and redundancy levels to maximize the system reliability given various system-level constraints; the second concerns multi-state series-parallel systems with performance levels ranging from perfect operation to complete failure, in which identical redundant elements are included in order to achieve a desirable level of availability. Numerical results for test problems from previous research are reported and compared. The results show that the HSA can provide very good solutions when compared to those obtained through other approaches.
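A generic, minimal harmony search sketch for a discrete maximization problem is shown below; the memory size, HMCR and PAR values are illustrative defaults, and the objective is expected to return a heavily penalized value for configurations violating the system-level constraints. This is a sketch of the general HSA idea, not the authors' implementation.

import random

def harmony_search(objective, n_vars, choices, iters=5000,
                   hms=20, hmcr=0.9, par=0.3):
    """Maximize objective(x) over x in choices^n_vars.
    objective should return a penalized (very low) value for infeasible x."""
    memory = [[random.choice(choices) for _ in range(n_vars)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iters):
        new = []
        for j in range(n_vars):
            if random.random() < hmcr:                     # memory consideration
                value = random.choice(memory)[j]
                if random.random() < par:                  # pitch adjustment to a neighbour
                    k = choices.index(value) + random.choice((-1, 1))
                    value = choices[min(max(k, 0), len(choices) - 1)]
            else:                                          # random selection
                value = random.choice(choices)
            new.append(value)
        worst = scores.index(min(scores))
        score = objective(new)
        if score > scores[worst]:                          # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = scores.index(max(scores))
    return memory[best], scores[best]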
Optimization of laminated stacking sequence for buckling load maximization by genetic algorithm
NASA Technical Reports Server (NTRS)
Le Riche, Rodolphe; Haftka, Raphael T.
1992-01-01
The use of a genetic algorithm to optimize the stacking sequence of a composite laminate for buckling load maximization is studied. Various genetic parameters including the population size, the probability of mutation, and the probability of crossover are optimized by numerical experiments. A new genetic operator - permutation - is proposed and shown to be effective in reducing the cost of the genetic search. Results are obtained for a graphite-epoxy plate, first when only the buckling load is considered, and then when constraints on ply contiguity and strain failure are added. The influence on the genetic search of the penalty parameter enforcing the contiguity constraint is studied. The advantage of the genetic algorithm in producing several near-optimal designs is discussed.
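The abstract does not give the operator's exact definition, but one plausible reading of a permutation operator on a stacking sequence is a swap of two ply positions, which reorders plies (and thus changes the buckling load) without changing the ply-angle counts; the sketch below encodes that reading and is an assumption, not the paper's operator.

import random

def permutation_operator(stacking, rate=0.1):
    """Swap two ply positions in a stacking sequence, e.g. [0, 45, -45, 90, ...],
    with probability `rate`; ply-angle counts are left unchanged."""
    child = list(stacking)
    if random.random() < rate and len(child) > 1:
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

print(permutation_operator([0, 45, -45, 90, 90, -45, 45, 0], rate=1.0))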
Gastin, P B; Lawson, D L
1994-01-01
A supramaximal variable resistance test over varying time intervals was evaluated as an instrument for the assessment of a number of anaerobic parameters, including the accumulated oxygen deficit (AOD). Eight active men [age 22 (SEM 1) years; peak oxygen uptake 53.1 (SEM 2.1) ml x kg-1 x min-1] completed three randomly ordered all-out sprints of 45-, 60- and 90-s duration. Two incremental pretests consisting of three 5-min stages at power outputs of 45, 135, 225 W and 90, 180, 270 W were performed to establish individual efficiency relationships [r = 0.996 (SEE 1.1) ml x kg-1 x min-1]. These relationships were used to estimate energy demand (millilitres per kilogram of oxygen equivalents in 15-s time intervals) during the supramaximal tests. The AOD values for the 45-s [47.6 (SEM 1.5) ml x kg-1], 60-s [49.0 (SEM 1.8) ml x kg-1] and 90-s [49.6 (SEM 1.7) ml x kg-1] tests differed significantly only between the 45- and 90-s tests. Evaluation of the 90-s test indicated that maximal or near-maximal (98%) anaerobic energy release was achieved in 60 s, with the AOD beginning to plateau after this time. No significant differences among tests were found for peak power, time to peak power and peak pedalling rate. Differences in mean power, total work and relative power decrement were related to the length of the test. (ABSTRACT TRUNCATED AT 250 WORDS)
Recursive Fact-finding: A Streaming Approach to Truth Estimation in Crowdsourcing Applications
2013-07-01
[Only fragments of this record are available: observations are reported over the course of the campaign, lending themselves to the abstraction of a data stream arriving from the community of sources; a figure captioned "Recursive EM Algorithm Convergence" illustrates the convergence of the recursive EM algorithm; and the related-work discussion covers social sensing, also referred to as human-centric sensing, and review systems where different sources offer reviews on products (or brands, companies) they have experienced and customers are affected by those reviews.]
NASA Astrophysics Data System (ADS)
Pasquato, Mario; Chung, Chul
2016-05-01
Context. Machine learning (ML) solves problems by learning patterns from data with limited or no human guidance. In astronomy, ML is mainly applied to large observational datasets, e.g. for morphological galaxy classification. Aims: We apply ML to gravitational N-body simulations of star clusters that are either formed by merging two progenitors or evolved in isolation, planning to later identify globular clusters (GCs) that may have a history of merging from observational data. Methods: We create mock observations from simulated GCs, from which we measure a set of parameters (also called features in the machine-learning field). After carrying out dimensionality reduction on the feature space, the resulting datapoints are fed into various classification algorithms. Using repeated random subsampling validation, we check whether the groups identified by the algorithms correspond to the underlying physical distinction between mergers and monolithically evolved simulations. Results: The three algorithms we considered (C5.0 trees, k-nearest neighbour, and support-vector machines) all achieve a test misclassification rate of about 10% without parameter tuning, with support-vector machines slightly outperforming the others. The first principal component of feature space correlates with cluster concentration. If we exclude it from the regression, the performance of the algorithms is only slightly reduced.
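A generic scikit-learn sketch of the described workflow (dimensionality reduction followed by classification, evaluated with repeated random subsampling) is given below; the feature matrix is random placeholder data, not the measured cluster features, and the component count is illustrative.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import ShuffleSplit, cross_val_score

# Placeholder stand-in for the measured features: rows = mock observations,
# columns = structural parameters; y = 1 for merger, 0 for monolithic evolution.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)

model = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
cv = ShuffleSplit(n_splits=50, test_size=0.3, random_state=0)   # repeated random subsampling
scores = cross_val_score(model, X, y, cv=cv)
print("misclassification rate: %.2f" % (1.0 - scores.mean()))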
Blood detection in wireless capsule endoscopy using expectation maximization clustering
NASA Astrophysics Data System (ADS)
Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.
2006-03-01
Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy can be used to visualize the stomach, duodenum, colon, and terminal ileum, but no method existed to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. Over an approximately 8-hour course, more than 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called the Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination: the sensitivity and specificity of SBI have been reported to be about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves 92% sensitivity and 98% specificity.
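A minimal sketch of EM-based pixel clustering for bleeding detection is shown below, using scikit-learn's GaussianMixture (which is fitted by EM); the choice of raw RGB features, two clusters, and the "most red-dominant cluster" rule are simplifying assumptions, not the authors' exact pipeline.

import numpy as np
from sklearn.mixture import GaussianMixture

def bleeding_mask(frame_rgb, n_clusters=2):
    """Cluster pixel colours with an EM-fitted Gaussian mixture and return a mask
    for the cluster whose mean is most red-dominant (a crude proxy for blood)."""
    h, w, _ = frame_rgb.shape
    pixels = frame_rgb.reshape(-1, 3).astype(float)
    gmm = GaussianMixture(n_components=n_clusters, random_state=0).fit(pixels)
    labels = gmm.predict(pixels)
    redness = gmm.means_[:, 0] - gmm.means_[:, 1:].mean(axis=1)   # R minus mean(G, B)
    blood_cluster = int(np.argmax(redness))
    return (labels == blood_cluster).reshape(h, w)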
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Srivastava, Ashok N.
2009-01-01
This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment, such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications in bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably correct, i.e. it converges to the same result as a similar centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.
Alves, Fernanda Germano; Filho, Francisco Maugeri; de Medeiros Burkert, Janaína Fernandes; Kalil, Susana Juliano
2010-03-01
In this work, the effects of agitation and aeration on the maximization of beta-galactosidase production from Kluyveromyces marxianus CCT 7082 were investigated simultaneously, in relation to the volumetric enzyme activity and the productivity, together with an analysis of lactose consumption and of glucose and galactose production during the process. Agitation and aeration effects were studied in a 2 L batch stirred reactor. A central composite design (2^2 trials plus three central points) was carried out. Agitation speed varied from 200 to 500 rpm and aeration rate from 0.5 to 1.5 vvm. This study showed that the volumetric enzyme production was strongly influenced by mixing conditions, while aeration was less significant. Linear models for activity and productivity as functions of agitation and aeration were obtained. The most favorable condition was 500 rpm and 1.5 vvm, which led to the best production of 17 U mL-1 for enzymatic activity, 1.2 U mL-1 h-1 for productivity in 14 h of process, a cellular concentration of 11 mg mL-1, and a volumetric oxygen transfer coefficient of 167.2 h-1.
NASA Astrophysics Data System (ADS)
Liu, Guangbin; Yang, Hongsheng; Liu, Shilin
2010-07-01
Under laboratory conditions, the effects of rearing temperature and stocking density on the hatching of fertilized eggs and the growth of auricularia larvae of Apostichopus japonicus were examined. Data series such as larval length and density, metamorphosis time, and survival rate of the larvae were recorded. The statistics showed that, for A. japonicus, survival rate (from fertilized egg to late auricularia) decreased significantly with increasing rearing temperature (P<0.05). The specific growth rate (SGR) also differed significantly among temperatures (P<0.05) from day 1, and the maximal SGR was found on day 9 at 24°C (159.26±3.28). This study clearly indicated that at low temperature (<24°C) the metamorphosis rate was remarkably higher than at higher temperature (>26°C). Hatching rate differed significantly between the 0.2-5 ind./ml groups and the 20-50 ind./ml groups. Larvae reared at higher density reached a smaller maximal length and needed a longer time to complete metamorphosis. This study suggested that 21°C and 0.4 ind./ml can be used as the most suitable rearing temperature and stocking density for large-scale artificial breeding of A. japonicus larvae.
Body composition and Vo2max of exceptional weight-trained athletes.
Fahey, T D; Akka, L; Rolph, R
1975-10-01
The maximal oxygen uptake and body composition of 30 exceptional athletes who had trained extensively with weights were measured. The sample included 3 world record holders, 8 other world class athletes, and 19 national class competitors. The sports represented were shot-putting, discus throwing, body building, power lifting, wrestling, and olympic lifting. Vo2max, as determined on a bicycle ergometer by the open-circuit method, was 4.6 +/- 0.7 l·min-1 (mean +/- SD) (48.8 +/- 7 ml·kg-1·min-1; 56.4 +/- 8.6 ml·(kg LBW)-1·min-1). The mean maximal heart rate was 185.3 +/- 11.6 beats·min-1. The subjects attained a work rate of 1,728.2 +/- 223 kpm·min-1 on a continuous progressive bicycle ergometer test and had mean maximal ventilations of 152.5 +/- 27.7 l·min-1 BTPS. Body composition was determined by densitometry. Body weight averaged 96.0 +/- 14.9 kg, with a mean percent fat of 13.8 +/- 4.5. The results of this study indicate that exceptional weight-trained athletes are within the normal college-age population range in body fat and of somewhat higher physical working capacity.
Koh, Eun S; Chung, Sun G; Kim, Tae Uk; Kim, Hee Chan
2012-12-01
To investigate whether capsule-preserving hydraulic distension with saline solution and corticosteroid for adhesive capsulitis induces biomechanical alterations in glenohumeral joint capsules along with clinical improvements. A case series. University outpatient clinic of physical medicine and rehabilitation. Eighteen patients with unilateral adhesive capsulitis. INTERVENTION AND MAIN OUTCOME MEASUREMENTS: Three hydraulic distensions with saline solution and corticosteroid were performed at 1-month intervals. To avoid rupturing the capsules, all distensions were monitored by using real-time pressure-volume curves. Stiffness, maximal volume capacity, and pressure at the maximal volume capacity of the capsule were measured at each intervention. Clinical parameters, such as pain and range of motion, were recorded before, 3 days after, and 1 month after each distension. Stiffness decreased (47.6 ± 27.1 mm Hg/mL to 31.7 ± 18.4 mm Hg/mL to 24.2 ± 14.0 mm Hg/mL, mean ± SD) and maximal volume capacity increased (18.8 ± 7.3 mL to 20.5 ± 7.5 mL to 24.2 ± 7.0 mL, mean ± SD) significantly (P = .001 for both) with each repeated hydraulic distension. Pressure at the maximal volume capacity tended to decrease, but the decrements were not statistically significant (P = .662). The clinical parameters were significantly improved throughout and 1 month after the 3 repeated procedures (P < .05 for all). Capsule-preserving hydraulic distension changed the biomechanical properties of the glenohumeral joint capsule, lessening the stiffness and enlarging the volume capacity. These alterations were accompanied by improved range of motion and relief of pain. Repeated capsule-preserving hydraulic distension with saline solution and corticosteroid would be useful to treat adhesive capsulitis and to evaluate the treatment results. Copyright © 2012 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
Using a genetic algorithm to optimize a water-monitoring network for accuracy and cost effectiveness
NASA Astrophysics Data System (ADS)
Julich, R. J.
2004-05-01
The purpose of this project is to determine the optimal spatial distribution of water-monitoring wells that maximizes important data collection while minimizing the cost of managing the network. We have employed a genetic algorithm (GA) towards this goal. The GA uses a simple fitness measure with two parts: the first part awards a maximal score to those combinations of hydraulic head observations whose net uncertainty is closest to the value representing all observations present, thereby maximizing accuracy; the second part applies a penalty function to minimize the number of observations, thereby minimizing the overall cost of the monitoring network. We used the linear statistical inference equation to calculate standard deviations on predictions from a numerical model generated for the 501-observation Death Valley Regional Flow System as the basis for our uncertainty calculations. We have organized the results to address the following three questions: 1) what is the optimal design strategy for a genetic algorithm in this problem domain; 2) how consistent are the solutions over several optimization runs; and 3) how do these results compare to what is known about the conceptual hydrogeology? Our results indicate that genetic algorithms are a more efficient and robust method for solving this class of optimization problems than traditional optimization approaches have been.
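A sketch of the two-part fitness only is given below, under the assumption that a candidate network is encoded as a 0/1 mask over the candidate observations; the uncertainty(mask) callable stands in for the linear-statistical-inference calculation used in the study, and the penalty weight is illustrative.

def fitness(mask, uncertainty, full_uncertainty, penalty_weight=0.01):
    """Two-part fitness for a candidate monitoring network.

    mask: list of 0/1 flags, one per candidate observation well.
    uncertainty(mask): net prediction uncertainty for that subset (placeholder
    for the linear statistical inference calculation).
    The first term rewards subsets whose uncertainty stays close to the value
    obtained with all observations present; the second penalizes network size."""
    accuracy_term = -abs(uncertainty(mask) - full_uncertainty)
    cost_term = -penalty_weight * sum(mask)
    return accuracy_term + cost_term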
Dipnall, Joanna F; Pasco, Julie A; Berk, Michael; Williams, Lana J; Dodd, Seetal; Jacka, Felice N; Meyer, Denny
2016-01-01
Depression is commonly comorbid with many other somatic diseases and symptoms. Identification of individuals in clusters with comorbid symptoms may reveal new pathophysiological mechanisms and treatment targets. The aim of this research was to combine machine-learning (ML) algorithms with traditional regression techniques, utilising self-reported medical symptoms, to identify and describe clusters of individuals with increased rates of depression from a large cross-sectional community-based population epidemiological study. A multi-staged methodology utilising ML and traditional statistical techniques was performed using the community-based population National Health and Nutrition Examination Study (2009-2010) (N = 3,922). A self-organised mapping (SOM) ML algorithm, combined with hierarchical clustering, was applied to create participant clusters based on 68 medical symptoms. Binary logistic regression, controlling for sociodemographic confounders, was then used to identify the key clusters of participants with higher levels of depression (PHQ-9≥10, n = 377). Finally, a Multiple Additive Regression Tree boosted ML algorithm was run to identify the important medical symptoms for each key cluster within 17 broad categories: heart, liver, thyroid, respiratory, diabetes, arthritis, fractures and osteoporosis, skeletal pain, blood pressure, blood transfusion, cholesterol, vision, hearing, psoriasis, weight, bowels and urinary. Five clusters of participants, based on medical symptoms, were identified to have significantly increased rates of depression compared to the cluster with the lowest rate: odds ratios ranged from 2.24 (95% CI 1.56, 3.24) to 6.33 (95% CI 1.67, 24.02). The ML boosted regression algorithm identified three key medical condition categories as being significantly more common in these clusters: bowel, pain and urinary symptoms. Bowel-related symptoms were found to dominate the relative importance of symptoms within the five key clusters. This methodology shows promise for the identification of conditions in general populations and supports the current focus on the potential importance of bowel symptoms and the gut in mental health research.
Maximum likelihood positioning and energy correction for scintillation detectors
NASA Astrophysics Data System (ADS)
Lerche, Christoph W.; Salomon, André; Goldschmidt, Benjamin; Lodomez, Sarah; Weissler, Björn; Solf, Torsten
2016-02-01
An algorithm for determining the crystal pixel and the gamma-ray energy with scintillation detectors for PET is presented. The algorithm uses maximum-likelihood (ML) estimation and is therefore inherently robust to missing data caused by defective or paralysed photodetector pixels. We tested the algorithm on a highly integrated MRI-compatible small-animal PET insert. The scintillation detector blocks of the PET gantry were built with the newly developed digital Silicon Photomultiplier (SiPM) technology from Philips Digital Photon Counting and LYSO pixel arrays with a pitch of 1 mm and a length of 12 mm. Light sharing was used to read out the scintillation light from the 30×30 scintillator pixel array with an 8×8 SiPM array. For the performance evaluation of the proposed algorithm, we measured the scanner's spatial resolution, energy resolution, singles and prompt count rate performance, and image noise. These values were compared to the corresponding values obtained with Center of Gravity (CoG) based positioning methods for different scintillation light trigger thresholds and also for different energy windows. While all positioning algorithms showed similar spatial resolution, a clear advantage of the ML method was observed when comparing the PET scanner's overall single and prompt detection efficiency, image noise, and energy resolution to the CoG-based methods. Furthermore, ML positioning reduces the dependence of image quality on scanner configuration parameters and was the only method that allowed achieving the highest energy resolution, count rate performance and spatial resolution at the same time.
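A conceptual comparison of the two positioning approaches follows, assuming per-crystal expected light patterns (templates) and a Poisson count model; the detector's actual light response model and calibration are not specified in the abstract, so this is only an illustrative sketch. Masking defective SiPM pixels out of the likelihood is what makes an ML approach tolerant of missing data.

import numpy as np

def cog_position(counts, xy):
    """Centre of gravity of the SiPM pixel counts; xy has shape (n_pixels, 2)."""
    w = counts / counts.sum()
    return w @ xy

def ml_crystal(counts, templates, active):
    """Pick the crystal whose expected light pattern best explains the counts
    under an assumed Poisson model, using only active (non-defective) pixels."""
    best, best_ll = None, -np.inf
    for crystal, expected in templates.items():
        mu = np.clip(expected[active], 1e-9, None)
        ll = np.sum(counts[active] * np.log(mu) - mu)   # Poisson log-likelihood (constant term dropped)
        if ll > best_ll:
            best, best_ll = crystal, ll
    return best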
Gunderson, Shana M.; Hayes, Robert A.; Quinn, John P.; Danziger, Larry H.
2004-01-01
ABT-492 is a novel quinolone with potent activity against gram-positive, gram-negative, and atypical pathogens, making this compound an ideal candidate for the treatment of community-acquired pneumonia. We therefore compared the in vitro pharmacodynamic activity of ABT-492 to that of levofloxacin, an antibiotic commonly used for the treatment of pneumonia, through MIC determination and time-kill kinetic analysis. ABT-492 demonstrated potent activity against penicillin-sensitive, penicillin-resistant, and levofloxacin-resistant Streptococcus pneumoniae strains (MICs ranging from 0.0078 to 0.125 μg/ml); β-lactamase-positive and β-lactamase-negative Haemophilus influenzae strains (MICs ranging from 0.000313 to 0.00125 μg/ml); and β-lactamase-positive and β-lactamase-negative Moraxella catarrhalis strains (MICs ranging from 0.001 to 0.0025 μg/ml), with MICs being much lower than those of levofloxacin. Both ABT-492 and levofloxacin demonstrated concentration-dependent bactericidal activities in time-kill kinetics studies at four and eight times the MIC with 10 of 12 bacterial isolates exposed to ABT-492 and with 12 of 12 bacterial isolates exposed to levofloxacin. Sigmoidal maximal-effect models support concentration-dependent bactericidal activity. The model predicts that 50% of maximal activity can be achieved with concentrations ranging from one to two times the MIC for both ABT-492 and levofloxacin and that near-maximal activity (90% effective concentration) can be achieved at concentrations ranging from two to five times the MIC for ABT-492 and one to six times the MIC for levofloxacin. PMID:14693540
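The sigmoidal maximal-effect (Emax) relationship referred to in the abstract has the standard form sketched below; the EC50, Hill coefficient and MIC values in the example are illustrative, not the paper's fitted parameters.

def sigmoid_emax(conc, emax, ec50, hill=1.0):
    """Sigmoidal maximal-effect (Emax) model: effect as a function of drug
    concentration; at conc == ec50 the effect is half of emax."""
    return emax * conc**hill / (ec50**hill + conc**hill)

# Illustrative only: with an assumed EC50 of 1.5x MIC and Hill coefficient of 2,
# the fraction of maximal effect at 1x, 2x, 4x and 8x MIC is printed below.
mic = 0.125
for multiple in (1, 2, 4, 8):
    print(multiple, round(sigmoid_emax(multiple * mic, 1.0, 1.5 * mic, hill=2.0), 2))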
Walter, James S; Posluszny, Joseph; Dieter, Raymond; Dieter, Robert S; Sayers, Scott; Iamsakul, Kiratipath; Staunton, Christine; Thomas, Donald; Rabbat, Mark; Singh, Sanjay
2018-05-01
To optimize maximal respiratory responses with surface stimulation over abdominal and upper thorax muscles using a 12-Channel Neuroprosthetic Platform. Following instrumentation, six anesthetized adult canines were hyperventilated sufficiently to produce respiratory apnea. Six abdominal tests optimized electrode arrangements and stimulation parameters using bipolar sets of 4.5 cm square electrodes. Tests in the upper thorax optimized electrode locations, and forelimb movement was limited to slight-to-moderate. During combined muscle stimulation tests, upper thoracic stimulation was followed immediately by abdominal stimulation. Finally, a model of glottal closure for cough was conducted with the goal of increasing peak expiratory flow. Optimized stimulation of abdominal muscles included three sets of bilateral surface electrodes located 4.5 cm dorsal to the lateral line and from the 8th intercostal space to caudal to the 13th rib, 80 or 100 mA current, and 50 Hz stimulation frequency. The maximal expired volume was 343 ± 23 ml (n=3). Optimized upper thorax stimulation included a single bilateral set of electrodes located over the 2nd interspace, 60 to 80 mA, and 50 Hz. The maximal inspired volume was 304 ± 54 ml (n=4). Sequential stimulation of the two muscle groups increased the volume to 600 ± 152 ml (n=2), and the glottal closure maneuver increased the flow. Studies in an adult canine model identified optimal surface stimulation methods for upper thorax and abdominal muscles to induce volumes sufficient for ventilation and cough. Further study with this neuroprosthetic platform is warranted.
Quantification of pericardial effusions by echocardiography and computed tomography.
Leibowitz, David; Perlman, Gidon; Planer, David; Gilon, Dan; Berman, Philip; Bogot, Naama
2011-01-15
Echocardiography is a well-accepted tool for the diagnosis and quantification of pericardial effusion (PEff). Given the increasing use of computed tomographic (CT) scanning, more PEffs are being initially diagnosed by computed tomography. No study has compared quantification of PEff by computed tomography and echocardiography. The objective of this study was to assess the accuracy of quantification of PEff by 2-dimensional echocardiography and computed tomography compared to the amount of pericardial fluid drained at pericardiocentesis. We retrospectively reviewed an institutional database to identify patients who underwent chest computed tomography and echocardiography before percutaneous pericardiocentesis with documentation of the amount of fluid withdrawn. Digital 2-dimensional echocardiographic and CT images were retrieved, and PEff volume was quantified by applying the formula for the volume of a prolate ellipse, π × 4/3 × (maximal long-axis dimension/2) × (maximal transverse dimension/2) × (maximal anteroposterior dimension/2), to the pericardial sac and to the heart. Nineteen patients meeting the study qualifications were entered into the study. The amount of PEff drained was 200 to 1,700 ml (mean 674 ± 340). Echocardiographically calculated effusion volume correlated relatively well with the drained PEff volume (r = 0.73, p <0.001, mean difference -41 ± 225 ml). There was only moderate correlation between CT volume quantification and the actual volume drained (r = 0.4, p = 0.004, mean difference 158 ± 379 ml). In conclusion, echocardiography appears to be a more accurate imaging technique than computed tomography in the quantitative assessment of nonloculated PEffs and should remain the primary imaging modality in these patients. Copyright © 2011 Elsevier Inc. All rights reserved.
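A direct transcription of the quoted prolate ellipse formula follows, with the effusion estimated as the pericardial sac volume minus the cardiac volume as described; the dimensions in the example are illustrative (in cm, giving volumes in ml).

import math

def prolate_ellipse_volume(long_axis, transverse, anteroposterior):
    """Volume from the three maximal dimensions using the formula quoted in the
    abstract: pi * 4/3 * (L/2) * (T/2) * (AP/2)."""
    return math.pi * 4.0 / 3.0 * (long_axis / 2) * (transverse / 2) * (anteroposterior / 2)

# Effusion volume = pericardial sac volume minus heart volume (illustrative numbers)
pericardial_sac = prolate_ellipse_volume(14.0, 12.0, 11.0)
heart = prolate_ellipse_volume(11.0, 9.0, 8.0)
print(round(pericardial_sac - heart))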
The gene normalization task in BioCreative III.
Lu, Zhiyong; Kao, Hung-Yu; Wei, Chih-Hsuan; Huang, Minlie; Liu, Jingchen; Kuo, Cheng-Ju; Hsu, Chun-Nan; Tsai, Richard Tzong-Han; Dai, Hong-Jie; Okazaki, Naoaki; Cho, Han-Cheol; Gerner, Martin; Solt, Illes; Agarwal, Shashank; Liu, Feifan; Vishnyakova, Dina; Ruch, Patrick; Romacker, Martin; Rinaldi, Fabio; Bhattacharya, Sanmitra; Srinivasan, Padmini; Liu, Hongfang; Torii, Manabu; Matos, Sergio; Campos, David; Verspoor, Karin; Livingston, Kevin M; Wilbur, W John
2011-10-03
We report the Gene Normalization (GN) challenge in BioCreative III where participating teams were asked to return a ranked list of identifiers of the genes detected in full-text articles. For training, 32 fully and 500 partially annotated articles were prepared. A total of 507 articles were selected as the test set. Due to the high annotation cost, it was not feasible to obtain gold-standard human annotations for all test articles. Instead, we developed an Expectation Maximization (EM) algorithm approach for choosing a small number of test articles for manual annotation that were most capable of differentiating team performance. Moreover, the same algorithm was subsequently used for inferring ground truth based solely on team submissions. We report team performance on both gold standard and inferred ground truth using a newly proposed metric called Threshold Average Precision (TAP-k). We received a total of 37 runs from 14 different teams for the task. When evaluated using the gold-standard annotations of the 50 articles, the highest TAP-k scores were 0.3297 (k=5), 0.3538 (k=10), and 0.3535 (k=20), respectively. Higher TAP-k scores of 0.4916 (k=5, 10, 20) were observed when evaluated using the inferred ground truth over the full test set. When combining team results using machine learning, the best composite system achieved TAP-k scores of 0.3707 (k=5), 0.4311 (k=10), and 0.4477 (k=20) on the gold standard, representing improvements of 12.4%, 21.8%, and 26.6% over the best team results, respectively. By using full text and being species non-specific, the GN task in BioCreative III has moved closer to a real literature curation task than similar tasks in the past and presents additional challenges for the text mining community, as revealed in the overall team results. By evaluating teams using the gold standard, we show that the EM algorithm allows team submissions to be differentiated while keeping the manual annotation effort feasible. Using the inferred ground truth we show measures of comparative performance between teams. Finally, by comparing team rankings on gold standard vs. inferred ground truth, we further demonstrate that the inferred ground truth is as effective as the gold standard for detecting good team performance.
Depaoli, Sarah
2013-06-01
Growth mixture modeling (GMM) is a technique designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar the growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight into the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained through the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).
Three validation metrics for automated probabilistic image segmentation of brain tumours
Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.
2005-01-01
The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined segmentation accuracy based on three two-sample validation metrics against an estimated composite latent gold standard, which was derived from several experts' manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performance of these validation metrics was also investigated via Monte Carlo simulation. Extensions incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
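A minimal sketch of one of the three metrics, the Dice similarity coefficient, and of choosing the threshold that maximizes it for a probabilistic segmentation, is shown below; the beta-mixture modelling and the other two metrics are omitted.

import numpy as np

def dice(binary_seg, reference):
    """Dice similarity coefficient between two binary masks."""
    intersection = np.logical_and(binary_seg, reference).sum()
    return 2.0 * intersection / (binary_seg.sum() + reference.sum())

def best_threshold(prob_map, reference, thresholds=np.linspace(0.01, 0.99, 99)):
    """Pick the probability threshold that maximizes Dice against the reference."""
    scores = [dice(prob_map >= t, reference) for t in thresholds]
    k = int(np.argmax(scores))
    return thresholds[k], scores[k]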
One-way quantum computing in superconducting circuits
NASA Astrophysics Data System (ADS)
Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.
2018-03-01
We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented on the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.
Orthogonalizing EM: A design-based least squares algorithm.
Xiong, Shifeng; Dai, Bin; Huling, Jared; Qian, Peter Z G
We introduce an efficient iterative algorithm, intended for various least squares problems, based on a design of experiments perspective. The algorithm, called orthogonalizing EM (OEM), works for ordinary least squares and can be easily extended to penalized least squares. The main idea of the procedure is to orthogonalize a design matrix by adding new rows and then solve the original problem by embedding the augmented design in a missing data framework. We establish several attractive theoretical properties concerning OEM. For ordinary least squares with a singular regression matrix, an OEM sequence converges to the Moore-Penrose generalized inverse-based least squares estimator. For ordinary and penalized least squares with various penalties, it converges to a point having grouping coherence for fully aliased regression matrices. Convergence and the convergence rate of the algorithm are examined. Finally, we demonstrate that OEM is highly efficient for large-scale least squares and penalized least squares problems, and is considerably faster than competing methods when n is much larger than p. Supplementary materials for this article are available online.
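A sketch of an OEM-style iteration for ordinary least squares, based on the abstract's description, is given below: augmenting the design so that the augmented Gram matrix is a multiple of the identity reduces the EM update to the simple step shown, with d taken at least as large as the largest eigenvalue of X'X. The choice of d and the fixed iteration count are simplifications, not the paper's exact pseudocode.

import numpy as np

def oem_ols(X, y, n_iter=2000):
    """OEM-style iteration for ordinary least squares (sketch).
    With a singular X'X and a zero start, the iterates tend toward the
    minimum-norm (Moore-Penrose) least squares solution."""
    d = np.linalg.eigvalsh(X.T @ X).max()       # any d >= largest eigenvalue works
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = beta + X.T @ (y - X @ beta) / d  # imputed-data EM step collapses to this update
    return beta

# Quick check against numpy's least squares solver
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=50)
print(np.allclose(oem_ols(X, y), np.linalg.lstsq(X, y, rcond=None)[0], atol=1e-6))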
Caldarelli, G; Troiano, G; Rosadini, D; Nante, N
2017-01-01
The available laboratory tests for the differential diagnosis of prostate cancer are total PSA, free PSA, and the free/total PSA ratio. In Italy, most doctors tend to request both total and free PSA for their patients even in cases where the total PSA does not justify the further request of free PSA, with a consequent increase in costs for the National Health System. The aim of our study was to predict the savings in euros (due to reagents) and the reduction in free PSA tests that would result from applying the "PSA Reflex" algorithm. We calculated the number of total PSA and free PSA tests performed in 2014 in the Hospital of Grosseto and, simulating the application of the "PSA Reflex" algorithm in the same year, we calculated the decrease in the number of free PSA requests and estimated the savings in reagents obtained from this reduction. In 2014, 25,955 total PSA tests were performed in the Hospital of Grosseto: 3,631 (14%) were greater than 10 ng/ml; 7,686 (29.6%) were between 2 and 10 ng/ml; 14,638 (56.4%) were lower than 2 ng/ml. A total of 16,904 free PSA tests were performed. Simulating the use of the "PSA Reflex" algorithm, free PSA tests would be performed only in cases with total PSA values between 2 and 10 ng/ml, with a saving of 54.5% of free PSA tests and of 8,971 euros for reagents alone. Our study showed that the "PSA Reflex" algorithm is a valid alternative that reduces costs. The estimated intralaboratory savings due to reagents may seem modest; however, they are accompanied by additional savings in the other diagnostic processes for prostate cancer.
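The reflex rule itself is a one-line condition; the sketch below encodes it with the thresholds given in the abstract and reproduces the reported 54.5% reduction from the 2014 Grosseto figures.

def needs_free_psa(total_psa_ng_ml):
    """PSA Reflex rule from the abstract: add the free PSA test only when
    total PSA falls in the 2-10 ng/ml range."""
    return 2.0 <= total_psa_ng_ml <= 10.0

# 2014 Grosseto figures: 16,904 free PSA tests performed versus the 7,686
# samples in the 2-10 ng/ml range that would trigger the reflex test.
performed, reflex = 16904, 7686
print(round(100 * (performed - reflex) / performed, 1), "% fewer free PSA tests")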
Extracellular space preservation aids the connectomic analysis of neural circuits.
Pallotto, Marta; Watkins, Paul V; Fubara, Boma; Singer, Joshua H; Briggman, Kevin L
2015-12-09
Dense connectomic mapping of neuronal circuits is limited by the time and effort required to analyze 3D electron microscopy (EM) datasets. Algorithms designed to automate image segmentation suffer from substantial error rates and require significant manual error correction. Any improvement in segmentation error rates would therefore directly reduce the time required to analyze 3D EM data. We explored preserving extracellular space (ECS) during chemical tissue fixation to improve the ability to segment neurites and to identify synaptic contacts. ECS preserved tissue is easier to segment using machine learning algorithms, leading to significantly reduced error rates. In addition, we observed that electrical synapses are readily identified in ECS preserved tissue. Finally, we determined that antibodies penetrate deep into ECS preserved tissue with only minimal permeabilization, thereby enabling correlated light microscopy (LM) and EM studies. We conclude that preservation of ECS benefits multiple aspects of the connectomic analysis of neural circuits.
Deep learning and model predictive control for self-tuning mode-locked lasers
NASA Astrophysics Data System (ADS)
Baumeister, Thomas; Brunton, Steven L.; Nathan Kutz, J.
2018-03-01
Self-tuning optical systems are of growing importance in technological applications such as mode-locked fiber lasers. Such self-tuning paradigms require intelligent algorithms capable of inferring approximate models of the underlying physics and discovering appropriate control laws in order to maintain robust performance for a given objective. In this work, we demonstrate the first integration of a deep learning (DL) architecture with model predictive control (MPC) in order to self-tune a mode-locked fiber laser. Not only can our DL-MPC algorithmic architecture approximate the unknown fiber birefringence, it also builds a dynamical model of the laser and an appropriate control law for maintaining robust, high-energy pulses despite a stochastically drifting birefringence. We demonstrate the effectiveness of this method on a fiber laser which is mode-locked by nonlinear polarization rotation. The method advocated can be broadly applied to a variety of optical systems that require robust controllers.
A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.
Catarinucci, Luca; Tarricone, Luciano
2009-01-01
The finite difference time domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems, among them those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that demanding memory and central processing unit requirements have to be satisfied. To better handle the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.
Arase, Shuntaro; Horie, Kanta; Kato, Takashi; Noda, Akira; Mito, Yasuhiro; Takahashi, Masatoshi; Yanagisawa, Toshinobu
2016-10-21
The multivariate curve resolution-alternating least squares (MCR-ALS) method was investigated for its potential to accelerate pharmaceutical research and development. The fast and efficient separation of complex mixtures consisting of multiple components, including impurities as well as major drug substances, remains a challenging application for liquid chromatography in the field of pharmaceutical analysis. In this paper we propose an integrated analysis algorithm that operates on the data matrix generated by HPLC coupled with a photo-diode array detector (HPLC-PDA); it consists of a multivariate curve resolution method using an expectation maximization (EM) algorithm with a bidirectional exponentially modified Gaussian (BEMG) model function as a constraint for the chromatograms, applied to the numerous PDA spectra aligned along the time axis. The algorithm provided less than ±1.0% error between true and separated peak area values at a resolution (Rs) of 0.6, using simulation data for a three-component mixture with an elution order of a/b/c and spectral similarities at the peak apex of (a/b) = 0.8410, (b/c) = 0.9123 and (a/c) = 0.9809. This software concept provides fast and robust separation analysis even when method development efforts fail to achieve complete separation of the target peaks. Additionally, this approach is potentially applicable to peak deconvolution, allowing quantitative analysis of co-eluted compounds having exactly the same molecular weight. This is complementary to the use of LC-MS to perform quantitative analysis on co-eluted compounds using selected ions to differentiate the proportion of response attributable to each compound. Copyright © 2016 Elsevier B.V. All rights reserved.
Onton, Julie A; Kang, Dae Y; Coleman, Todd P
2016-01-01
Brain activity during sleep is a powerful marker of overall health, but sleep lab testing is prohibitively expensive and only indicated for major sleep disorders. This report demonstrates that mobile 2-channel in-home electroencephalogram (EEG) recording devices provided sufficient information to detect and visualize sleep EEG. Displaying whole-night sleep EEG in a spectral display allowed quick assessment of general sleep stability, cycle lengths, stage lengths, dominant frequencies and other indices of sleep quality. When spectral data were visualized down to 0.1 Hz, a differentiation emerged between slow-wave sleep with dominant frequency between 0.1-1 Hz and slow-wave sleep with dominant frequency between 1-3 Hz, but rarely both. Thus, we present here the new designations Hi and Lo Deep sleep, according to the frequency range with dominant power. Simultaneously recorded electrodermal activity (EDA) was primarily associated with Lo Deep and very rarely with Hi Deep or any other stage. Therefore, Hi and Lo Deep sleep appear to be physiologically distinct states that may serve unique functions during sleep. We developed an algorithm to classify five stages (Awake, Light, Hi Deep, Lo Deep and rapid eye movement (REM)) using a Hidden Markov Model (HMM), with model fitting by the expectation-maximization (EM) algorithm and estimation of the most likely sleep state sequence by the Viterbi algorithm. The resulting automatically generated sleep hypnogram can help clinicians interpret the spectral display and help researchers computationally quantify sleep stages across participants. In conclusion, this study demonstrates the feasibility of in-home sleep EEG collection, a rapid and informative sleep report format, and novel deep sleep designations accounting for spectral and physiological differences.
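A minimal sketch of the HMM stage-classification step using the hmmlearn package is given below; hmmlearn's GaussianHMM is fitted by EM (Baum-Welch) and decoded with the Viterbi algorithm. The random feature matrix is a placeholder for the authors' spectral features, and the mapping of the five hidden states onto Awake/Light/Hi Deep/Lo Deep/REM would still have to be assigned afterwards.

import numpy as np
from hmmlearn.hmm import GaussianHMM

# X: one row per epoch, columns = spectral features from the 2-channel EEG
# (placeholder random data here; real features would come from the spectrogram).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))

# Five hidden states intended to map onto Awake, Light, Hi Deep, Lo Deep and REM.
model = GaussianHMM(n_components=5, covariance_type="full", n_iter=100)
model.fit(X)                      # parameters estimated by EM (Baum-Welch)
hypnogram = model.predict(X)      # most likely state sequence via Viterbi decoding
print(hypnogram[:20])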
Rezazadeh, Maryam; Yamini, Yadollah; Seidi, Shahram; Tahmasebi, Elham; Rezaei, Fatemeh
2014-04-09
Electromembrane surrounded solid phase microextraction (EM-SPME) of acidic herbicides was studied for the first time. To investigate the capability of this new microextraction technique for acidic targets, chlorophenoxy acid (CPA) herbicides were quantified in plant tissue. 1-Octanol was sustained in the pores of the wall of a hollow fiber and served as the supported liquid membrane (SLM). Other EM-SPME-related parameters, including extraction time, applied voltage, and the pH of the sample solution and of the acceptor phase, were optimized using experimental design. A 20 min extraction was needed to reach the highest extraction efficiency, during which the analytes migrated from a 24 mL alkaline sample solution across the organic liquid membrane into the aqueous acceptor phase under a 50 V electric field and were finally adsorbed on a carbonaceous anode. In addition to the high sample cleanup, which made the proposed method appropriate for analysis of acidic compounds in a complicated matrix (plant tissue), 4.8% of 2-methyl-4-chlorophenoxyacetic acid (MCPA) and 0.6% of 2,4-dichlorophenoxyacetic acid (2,4-D) were adsorbed on the anode, resulting in suitable detection limits (less than 5 ng/mL) and acceptable repeatability and reproducibility (intra- and interassay precision were in the ranges of 5.2-8.5% and 8.8-12.0%, respectively). Linearity of the method was evaluated within the ranges of 1.0-500.0 and 10.0-500.0 ng/mL for MCPA and 2,4-D, respectively, and coefficients of determination greater than 0.9958 were obtained. The optimal EM-SPME conditions were employed for analysis of CPAs in whole wheat tissue.
Fuzzy multinomial logistic regression analysis: A multi-objective programming approach
NASA Astrophysics Data System (ADS)
Abdalla, Hesham A.; El-Sayed, Amany A.; Hamed, Ramadan
2017-05-01
Parameter estimation for multinomial logistic regression is usually based on maximizing the likelihood function. For large, well-balanced datasets, maximum likelihood (ML) estimation is a satisfactory approach. Unfortunately, ML can fail completely, or at least produce poor results in terms of estimated probabilities and confidence intervals of parameters, especially for small datasets. In this study, a new approach based on fuzzy concepts is proposed to estimate the parameters of multinomial logistic regression. The study assumes that the parameters of multinomial logistic regression are fuzzy. Based on the extension principle stated by Zadeh and on Bárdossy's proposition, a multi-objective programming approach is suggested to estimate these fuzzy parameters. A simulation study is used to evaluate the performance of the new approach versus the ML approach. Results show that the new proposed model outperforms ML for small datasets.
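As a point of reference for the ML baseline discussed above, the sketch below fits an essentially unpenalized multinomial logistic regression with scikit-learn on hypothetical data; on very small or separable datasets such estimates can become unstable, which is the situation the fuzzy approach targets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))            # small hypothetical dataset
y = rng.integers(0, 3, size=30)         # three outcome categories

# A very large C makes the (ridge-penalized) fit approximate plain maximum likelihood
ml_fit = LogisticRegression(C=1e6, max_iter=5000).fit(X, y)
print(ml_fit.coef_)                     # one coefficient row per outcome category
```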
Appropriate interpretation of aerobic capacity: allometric scaling in adult and young soccer players
Chamari, K; Moussa-Chamari, I; Boussaidi, L; Hachana, Y; Kaouech, F; Wisloff, U
2005-01-01
Objective: To compare aerobic capacity of young and adult elite soccer players using appropriate scaling procedures. Methods: Twenty four male adult (mean (SD) age 24 (2) years, weight 75.7 (7.2) kg, VO2MAX 66.6 (5.2) ml/lbm/min, where lbm is lean body mass in kg) and 21 youth (14 (0.4) years, 60.2 (7.3) kg, 66.5 (5.9) ml/lbm/min) elite soccer players took part in the study. Allometric equations were used to determine the relation between maximal and submaximal oxygen cost of running (running economy) and body mass. Results: Maximal and submaximal oxygen uptake increased in proportion to body mass raised to the power of 0.72 (0.04) and 0.60 (0.06) respectively. The VO2MAX of adult players was similar to that of the youth players when expressed in direct proportion to body mass—that is, ml/kg/min—but 5% higher (p<0.05) when expressed using appropriate procedures for scaling. Conversely, compared with seniors, youth players had 13% higher (p<0.001) energy cost of running—that is, poorer running economy—when expressed as ml/kg/min but not when expressed according to the scaling procedures. Conclusions: Compared with the youth soccer players, VO2MAX in the seniors was underestimated and running economy overestimated when expressed traditionally as ml/lbm/min. The study clearly shows the pitfalls in previous studies when aerobic capacity was evaluated in subjects with different body mass. It further shows that the use of scaling procedures can affect the evaluation of, and the resultant training programme to improve, aerobic capacity. PMID:15665205
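The allometric exponent reported above can be estimated by ordinary least squares on log-transformed data; the sketch below shows the idea on hypothetical mass and VO2max values, not the study's data.

```python
import numpy as np

# Hypothetical body mass (kg) and absolute VO2max (L/min) for a few players
mass = np.array([60.2, 64.8, 70.1, 75.7, 81.3])
vo2max = np.array([3.9, 4.2, 4.5, 4.9, 5.1])

# Allometric model: VO2max = a * mass**b  <=>  log(VO2max) = log(a) + b*log(mass)
b, log_a = np.polyfit(np.log(mass), np.log(vo2max), 1)
print(f"estimated scaling exponent b = {b:.2f}")

# Express VO2max relative to mass**b (ml/kg^b/min) instead of the traditional ml/kg/min
vo2_scaled = 1000 * vo2max / mass**b
```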
Carlsten, Chris; Dimich-Ward, Helen; Ferguson, Alexander; Becker, Allan; Dybuncio, Anne; Chan-Yeung, Moira
2011-02-01
The operating characteristics of PC(20) values used as cut-offs to define airway hyperresponsiveness, as it informs the diagnosis of asthma in children, are poorly understood. We examined data from a unique cohort to address this question and to determine the sensitivity and specificity of incremental PC(20) cut-offs for allergist-diagnosed asthma. Airway reactivity at age 7 was assessed in children within a birth cohort at high risk for asthma; PC(20) for methacholine was determined by standard technique including interpolation. The diagnosis of asthma was made by the pediatric allergist without knowledge of the methacholine challenge results. Sensitivity and specificity were calculated using a cross-tabulation of asthma diagnosis with incremental PC(20) cut-off values, from 1.0 to 8.0 mg/ml, and plotted as receiver operating characteristic (ROC) curves. The "optimal" cut-off was defined as the PC(20) conferring the maximal value of sensitivity plus specificity, while the "balanced" cut-off was defined as the PC(20) at which sensitivity and specificity were most nearly equal. 70/348 children (20.1%) were diagnosed with asthma. The optimal and balanced PC(20) cut-offs, both for all children and for females alone, were respectively 3 mg/ml (sensitivity 80.0%, specificity 49.1%) and 2 mg/ml (sensitivity 63.1%, specificity 64.7%). For males alone, the "optimal" and "balanced" PC(20) cut-offs were both 2 mg/ml. For this cohort of 7-year-olds at high risk for asthma, methacholine challenge testing using a cut-off value of PC(20) 3 mg/ml conferred the maximal sum of specificity plus sensitivity. For contexts in which higher sensitivity or specificity is desired, other cut-offs may be preferred. Copyright © 2011 Wiley-Liss, Inc.
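The "optimal" and "balanced" cut-off definitions used above translate directly into code; the sketch below computes both from vectors of PC20 values and diagnoses (hypothetical arrays, not the cohort data).

```python
import numpy as np

def pc20_cutoffs(pc20, has_asthma, cutoffs):
    """Sensitivity/specificity of the test 'PC20 <= cutoff' for each candidate
    cut-off, plus the 'optimal' (max sens+spec) and 'balanced' (sens ~= spec) ones."""
    pc20 = np.asarray(pc20, dtype=float)
    has_asthma = np.asarray(has_asthma, dtype=bool)
    table = []
    for c in cutoffs:
        positive = pc20 <= c
        sens = positive[has_asthma].mean()        # fraction of asthmatics flagged
        spec = (~positive[~has_asthma]).mean()    # fraction of non-asthmatics cleared
        table.append((c, sens, spec))
    optimal = max(table, key=lambda row: row[1] + row[2])
    balanced = min(table, key=lambda row: abs(row[1] - row[2]))
    return table, optimal, balanced
```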
[Scintigraphic study of gastric emptying of rehydration drinks in athletes].
Martínez Gonzálvez, A B; Nuño de la Rosa Y Pozuelo, J A; Sánchez Gascón, F; Villegas García, J A; Mulero Aniorte, F; Contreras Gutiérrez, J C
2005-01-01
This study aimed to evaluate how rehydration beverage ingestion influences gastric emptying rate at rest and during cycle ergometer exercise at 70% of maximal oxygen consumption (VO2max). Twenty-six well-trained cyclists performed a preliminary maximal test until exhaustion to evaluate their VO2max, and two submaximal exercise tests at 70% of their mode-specific VO2max, with each test separated by one week. During the two submaximal tests, cyclists consumed 200 ml of a 99mTc-DTPA-labeled rehydration beverage (A or B), and scintigraphic determinations were performed at rest. Afterwards, exercise was performed for 60 minutes with an intake of 200 ml every 15 minutes, with serial gastric scintigraphic determinations. The chemical composition of drink A differs from that of drink B in that A contains a smaller carbohydrate load (10.3 g/100 ml versus 15.2 g/100 ml for B), together with proteins in the form of milk serum and antioxidants in the form of fruit juice; both contain ions and vitamins. At rest, the gastric count number was significantly reduced (p < 0.001) from 0 to 25 minutes for both the A and B beverages. At the end of exercise (60 min), there was greater gastric retention for beverage B than for beverage A, this difference being statistically significant (p < 0.031). Beverage A, a commercial rehydration drink containing protein and antioxidants from fruit juice, has a faster gastric emptying rate than sport beverage B.
Jee, Yong-Seok
2018-02-01
Recently, whole-body electromyostimulation (WB-EMS) has upgraded its functions and capabilities and has overcome limitations and inconveniences of past systems. Although the efficacy and safety of EMS have been examined in some studies, specific guidelines for applying WB-EMS are lacking. This study aimed to determine the efficacy and safety of applying WB-EMS in healthy men to improve cardiopulmonary and psychophysiological variables. Sixty-four participants were randomly assigned to a control group (without electrical stimuli) or a WB-EMS group after a 6-week baseline period. The control group (n=33; female, 15; male, 18) wore the WB-EMS suit as much as the WB-EMS group (n=31; female, 15; male, 16). There were no abnormal changes in the cardiopulmonary variables (heart rate, systolic blood pressure [SBP], diastolic blood pressure, and oxygen uptake) during or after the graded exercise test (GXT) in either group. There was a significant decrease in SBP and an increase in oxygen uptake from stages 3 to 5 of the GXT in the WB-EMS group. The psychophysiological factors for the WB-EMS group, which consisted of soreness, anxiety, fatigability, and sleeplessness, were significantly decreased after the experiment. The application of WB-EMS in healthy young men did not negatively affect the cardiopulmonary and psychophysiological factors. Rather, the application of WB-EMS improved SBP and oxygen uptake in submaximal and maximal stages of the GXT. This study also confirmed that 6 weeks of WB-EMS training can improve psychophysiological factors.
Maximum Likelihood Estimation of Nonlinear Structural Equation Models.
ERIC Educational Resources Information Center
Lee, Sik-Yum; Zhu, Hong-Tu
2002-01-01
Developed an EM type algorithm for maximum likelihood estimation of a general nonlinear structural equation model in which the E-step is completed by a Metropolis-Hastings algorithm. Illustrated the methodology with results from a simulation study and two real examples using data from previous studies. (SLD)
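The key computational idea in this record is completing the E-step by Monte Carlo sampling; below is a generic random-walk Metropolis-Hastings sketch for drawing latent variables given a user-supplied log-posterior, not the authors' exact sampler or model.

```python
import numpy as np

def random_walk_mh(log_post, z0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings sampler for latent variables z,
    as would be used inside a Monte Carlo E-step."""
    rng = np.random.default_rng(seed)
    z = np.asarray(z0, dtype=float)
    lp = log_post(z)
    draws = []
    for _ in range(n_samples):
        proposal = z + step * rng.standard_normal(z.shape)
        lp_prop = log_post(proposal)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            z, lp = proposal, lp_prop
        draws.append(z.copy())
    return np.array(draws)

# Example: sample from a standard bivariate normal "posterior"
draws = random_walk_mh(lambda z: -0.5 * np.sum(z**2), np.zeros(2), 2000)
```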
Wireless, relative-motion computer input device
Holzrichter, John F.; Rosenbury, Erwin T.
2004-05-18
The present invention provides a system for controlling a computer display in a workspace using an input unit/output unit. A train of EM waves is sent out to flood the workspace. EM waves are reflected from the input unit/output unit. A relative-distance-moved information signal is created using the EM waves that are reflected from the input unit/output unit. Algorithms are used to convert the relative-distance-moved information signal to a display signal. The computer display is controlled in response to the display signal.
Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation
NASA Astrophysics Data System (ADS)
Kim, Sunwoo
This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.
Biclustering Protein Complex Interactions with a Biclique Finding Algorithm
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ding, Chris; Zhang, Anne Ya; Holbrook, Stephen
2006-12-01
Biclustering has many applications in text mining, web clickstream mining, and bioinformatics. When data entries are binary, the tightest biclusters become bicliques. We propose a flexible and highly efficient algorithm to compute bicliques. We first generalize the Motzkin-Straus formalism for computing the maximal clique from an L1 constraint to an Lp constraint, which enables us to provide a generalized Motzkin-Straus formalism for computing maximal-edge bicliques. By adjusting parameters, the algorithm can favor biclusters with more rows and fewer columns, or vice versa, thus increasing the flexibility of the targeted biclusters. We then propose an algorithm to solve the generalized Motzkin-Straus optimization problem. The algorithm is provably convergent and has a computational complexity of O(|E|), where |E| is the number of edges. It relies on a matrix-vector multiplication and runs efficiently on most current computer architectures. Using this algorithm, we bicluster the yeast protein complex interaction network. We find that biclustering protein complexes at the protein level does not clearly reflect the functional linkage among protein complexes in many cases, while biclustering at the protein domain level can reveal many underlying linkages. We show several new biologically significant results.
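For intuition about the objects being computed, the brute-force sketch below enumerates maximal bicliques of a small binary matrix by closing each column subset; it is exponential in the number of columns and is only an illustration, not the generalized Motzkin-Straus algorithm of the paper.

```python
from itertools import combinations
import numpy as np

def maximal_bicliques(B):
    """Enumerate maximal bicliques (row set, column set) of a binary matrix B.
    Brute force over column subsets; practical only for tiny matrices."""
    n_rows, n_cols = B.shape
    found = set()
    for k in range(1, n_cols + 1):
        for cols in combinations(range(n_cols), k):
            rows = np.where(B[:, cols].all(axis=1))[0]
            if rows.size == 0:
                continue
            # Close the column set: keep every column that is 1 for all selected rows
            full_cols = np.where(B[rows, :].all(axis=0))[0]
            found.add((frozenset(rows.tolist()), frozenset(full_cols.tolist())))
    return found

B = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]])
for rows, cols in maximal_bicliques(B):
    print(sorted(rows), sorted(cols))
```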
McCole, Steve D; Shuldiner, Alan R; Brown, Michael D; Moore, Geoffrey E; Ferrell, Robert E; Wilund, Kenneth R; Huberty, Andrea; Douglass, Larry W; Hagberg, James M
2004-02-01
We sought to determine whether common genetic variations at the beta2 (beta2-AR, Gln27Glu) and beta3 (beta3-AR, Trp64Arg) adrenergic receptor gene loci were associated with cardiovascular (CV) hemodynamics during maximal and submaximal exercise. CV hemodynamics were assessed in 62 healthy postmenopausal women (20 sedentary, 22 physically active, and 20 endurance athletes) during treadmill exercise at 40, 60, 80, and 100% maximal O2 uptake using acetylene rebreathing to quantify cardiac output. The beta2-AR genotype and habitual physical activity (PA) levels interacted to significantly associate with arteriovenous O2 difference (a-vDO2) during submaximal exercise (P = 0.05), with the highest submaximal exercise a-vDO2 in sedentary women homozygous for the beta2-AR Gln allele and no genotype-dependent differences in submaximal exercise a-vDO2 in physically active and athletic women. The beta2-AR genotype also was independently associated with a-vDO2 during submaximal (P = 0.004) and approximately 100% maximal O2 uptake exercise (P = 0.006), with a 1.2-2 ml/100 ml greater a-vDO2 in the Gln/Gln than in the Glu/Glu genotype women. The beta3-AR genotype, independently or interacting with habitual PA levels, was not significantly associated with any CV hemodynamic variables during submaximal or maximal exercise. Thus it appears that the beta2-AR genotype, both independently and interacting with habitual PA levels, is significantly associated with a-vDO2 during exercise in postmenopausal women, whereas the beta3-AR genotype does not appear to be associated with any maximal or submaximal exercise CV hemodynamic responses in postmenopausal women.
Tabe-Bordbar, Shayan; Marashi, Sayed-Amir
2013-12-01
Elementary modes (EMs) are steady-state metabolic flux vectors with minimal set of active reactions. Each EM corresponds to a metabolic pathway. Therefore, studying EMs is helpful for analyzing the production of biotechnologically important metabolites. However, memory requirements for computing EMs may hamper their applicability as, in most genome-scale metabolic models, no EM can be computed due to running out of memory. In this study, we present a method for computing randomly sampled EMs. In this approach, a network reduction algorithm is used for EM computation, which is based on flux balance-based methods. We show that this approach can be used to recover the EMs in the medium- and genome-scale metabolic network models, while the EMs are sampled in an unbiased way. The applicability of such results is shown by computing “estimated” control-effective flux values in Escherichia coli metabolic network.
NASA Astrophysics Data System (ADS)
Koirala, Nischal; Setser, Randolph M.; Bullen, Jennifer; McLennan, Gordon
2017-03-01
Blood flow rate is a critical parameter for diagnosing dialysis access function during fistulography, where a flow rate of 600 ml/min in an arteriovenous graft or 400-500 ml/min in an arteriovenous fistula is considered the clinical threshold for a fully functioning access. In this study, a flow rate computational model for calculating intra-access flow to evaluate dialysis access patency was developed and validated in an in vitro setup using digital subtraction angiography. Flow rates were computed by tracking the bolus through two regions of interest using cross-correlation (XCOR) and mean arrival time (MAT) algorithms, and correlated against an in-line transonic flow meter measurement. The mean difference (mean ± standard deviation) between XCOR and in-line flow measurements for the in vitro setup at 3, 6, 7.5, and 10 frames/s was 118±63, 37±59, 31±31, and 46±57 ml/min, respectively, while for the MAT method it was 86±56, 57±72, 35±85, and 19±129 ml/min, respectively. The results of this investigation will be helpful for selecting candidate algorithms as the blood flow computational tool is developed for clinical application.
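A minimal version of the XCOR idea above is to cross-correlate the time-intensity curves from the two regions of interest and convert the peak lag to a transit time; the sketch below assumes uniformly sampled, already-extracted curves, and the volume value used to turn transit time into a flow rate is hypothetical.

```python
import numpy as np

def transit_time(curve_roi1, curve_roi2, frame_interval):
    """Bolus transit time (s) between two ROI time-intensity curves via
    normalized cross-correlation; frame_interval is seconds per frame."""
    a = (curve_roi1 - curve_roi1.mean()) / curve_roi1.std()
    b = (curve_roi2 - curve_roi2.mean()) / curve_roi2.std()
    xcorr = np.correlate(b, a, mode="full")
    lag_frames = np.argmax(xcorr) - (len(a) - 1)   # frames by which ROI2 lags ROI1
    return lag_frames * frame_interval

# Flow rate follows from the vessel volume between the ROIs (hypothetical value):
# flow_ml_per_min = 60 * volume_between_rois_ml / transit_time(c1, c2, 1 / 7.5)
volume_between_rois_ml = 1.2
```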
Adaptive early detection ML/PDA estimator for LO targets with EO sensors
NASA Astrophysics Data System (ADS)
Chummun, Muhammad R.; Kirubarajan, Thiagalingam; Bar-Shalom, Yaakov
2000-07-01
The batch Maximum Likelihood Estimator combined with Probabilistic Data Association (ML-PDA) has been shown to be effective in acquiring low observable (LO) - low SNR - non-maneuvering targets in the presence of heavy clutter. Here, the ML-PDA estimator with signal strength or amplitude information (AI) is applied in a sliding-window fashion to detect high-speed targets in heavy clutter using electro-optical (EO) sensors. The initial time and the length of the sliding window are adjusted adaptively according to the information content of the received measurements. A track validation scheme via hypothesis testing is developed to confirm the estimated track, that is, the presence of a target, in each window. The sliding-window ML-PDA approach, together with track validation, enables early detection by rejecting noninformative scans, target reacquisition in case of temporary target disappearance, and the handling of targets with speeds evolving over time. The proposed algorithm is shown to detect a target hidden in as many as 600 false alarms per scan 10 frames earlier than the Multiple Hypothesis Tracking (MHT) algorithm.
NASA Astrophysics Data System (ADS)
Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.
2014-12-01
As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
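To make the benchmark concrete, the sketch below simulates one common parameterization of the shallow-lake dynamics underlying this problem (nonlinear phosphorus recycling, linear outflow, and stochastic natural inflows); the specific constants and the lognormal inflow parameters are illustrative assumptions, not necessarily the values used in the study.

```python
import numpy as np

def simulate_lake(decisions, b=0.42, q=2.0, inflow_median=0.03, inflow_sigma=0.3, seed=0):
    """Phosphorus concentration X_t under anthropogenic loading a_t (decisions),
    natural recycling X^q / (1 + X^q), outflow rate b, and stochastic natural inflow."""
    rng = np.random.default_rng(seed)
    X = 0.0
    trajectory = []
    for a_t in decisions:
        natural = rng.lognormal(mean=np.log(inflow_median), sigma=inflow_sigma)
        X = X + a_t + natural + X**q / (1.0 + X**q) - b * X
        trajectory.append(X)
    return np.array(trajectory)

# Example: constant low loading over a 100-year horizon
trajectory = simulate_lake(np.full(100, 0.02))
```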
Incremental k-core decomposition: Algorithms and evaluation
Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-Silva, Gabriela; ...
2016-02-01
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
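For orientation, the non-incremental baseline that these algorithms improve upon is the classic peeling procedure, sketched below for an undirected graph given as an adjacency dictionary (this is the static decomposition, not the paper's incremental algorithms).

```python
def core_numbers(adj):
    """Static k-core decomposition by repeated peeling.
    adj: dict mapping each node to the set of its neighbours."""
    degree = {v: len(nbrs) for v, nbrs in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        to_remove = [v for v in remaining if degree[v] <= k]
        if not to_remove:
            k += 1                      # no vertex left at this level, raise k
            continue
        while to_remove:
            v = to_remove.pop()
            if v not in remaining:
                continue
            core[v] = k                 # vertex peeled at level k has core number k
            remaining.discard(v)
            for u in adj[v]:
                if u in remaining:
                    degree[u] -= 1
                    if degree[u] <= k:
                        to_remove.append(u)
    return core

# Example: a triangle with a pendant vertex
print(core_numbers({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}))
```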
Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting
Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.
2017-01-01
This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119
Mapping of electrical muscle stimulation using MRI
NASA Technical Reports Server (NTRS)
Adams, Gregory R.; Harris, Robert T.; Woodard, Daniel; Dudley, Gary A.
1993-01-01
The pattern of muscle contractile activity elicited by electromyostimulation (EMS) was mapped and compared to the contractile-activity pattern produced by voluntary effort. This was done by examining the patterns and the extent of contrast shift, as indicated by T2 values, in magnetic resonance (MR) images after isometric activity of the left m. quadriceps of human subjects was elicited by EMS (1-sec train of 500-microsec sine wave pulses at 50 Hz) or voluntary effort. The results suggest that, whereas EMS stimulates the same fibers repeatedly, thereby increasing the metabolic demand and T2 values, voluntary efforts are performed by more diffuse, asynchronous activation of skeletal muscle, even at forces up to 75 percent of maximal, in order to maintain performance.
Effect of lower limb compression on blood flow and performance in elite wheelchair rugby athletes
Vaile, Joanna; Stefanovic, Brad; Askew, Christopher D.
2016-01-01
Objective To investigate the effects of compression socks worn during exercise on performance and physiological responses in elite wheelchair rugby athletes. Design In a non-blinded randomized crossover design, participants completed two exercise trials (4 × 8 min bouts of submaximal exercise, each finishing with a timed maximal sprint) separated by 24 hr, with or without compression socks. Setting National Sports Training Centre, Queensland, Australia. Participants Ten national representative male wheelchair rugby athletes with cervical spinal cord injuries volunteered to participate. Interventions Participants wore medical-grade compression socks on both legs during the exercise task (COMP); during the control trial no compression was worn (CON). Outcome Measures The efficacy of the compression socks was determined by assessments of limb blood flow, core body temperature, heart rate, and ratings of perceived exertion, perceived thermal strain, and physical performance. Results While no significant differences between conditions were observed for maximal sprint time, average lap time was better maintained in COMP compared to CON (P<0.05). Lower limb blood flow increased from pre- to post-exercise by the same magnitude in both conditions (COMP: 2.51 ± 2.34; CON: 2.20 ± 1.85 ml/100 ml/min), whereas there was a greater increase in upper limb blood flow from pre- to post-exercise in COMP (10.77 ± 8.24 ml/100 ml/min) compared to CON (6.21 ± 5.73 ml/100 ml/min; P < 0.05). Conclusion These findings indicate that compression socks worn during exercise are an effective intervention for maintaining submaximal performance during wheelchair exercise, and this performance benefit may be associated with an augmentation of upper limb blood flow. PMID:25582434
Silva, Adão; Gameiro, Atílio
2014-01-01
We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem and its main goal is to create a set of quasiorthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be easily computed in such a way that a null space projection power can be approximated efficiently. Based on the derived metrics we present a mathematical model that describes the dynamics of the user selection process which renders the user selection problem into an integer linear program. Numerical results show that our approach is highly efficient to form groups of quasiorthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion of the optimum user selection sum rate (90%) for a moderate number of active users. PMID:24574928
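A simplified version of the grouping idea above is a greedy selection that, at each step, picks the unselected user whose channel has the largest component orthogonal to the channels already chosen; the sketch below uses this projection-norm metric as a stand-in for the paper's channel metrics and integer-programming formulation.

```python
import numpy as np

def greedy_quasi_orthogonal_selection(H, n_select):
    """Greedy semi-orthogonal user selection sketch.
    H: (n_users, n_tx) complex channel matrix, one row per user."""
    n_users, n_tx = H.shape
    selected = []
    Q = np.zeros((0, n_tx), dtype=complex)   # orthonormal basis of selected channels
    for _ in range(min(n_select, n_users, n_tx)):
        best_u, best_res = None, None
        for u in range(n_users):
            if u in selected:
                continue
            h = H[u]
            proj = Q.T @ (Q.conj() @ h) if Q.size else np.zeros(n_tx, dtype=complex)
            res = h - proj                   # component orthogonal to selected users
            if best_u is None or np.linalg.norm(res) > np.linalg.norm(best_res):
                best_u, best_res = u, res
        selected.append(best_u)
        Q = np.vstack([Q, best_res / np.linalg.norm(best_res)])
    return selected

# Example: pick 4 quasi-orthogonal users out of 20 random Rayleigh channels
H = (np.random.randn(20, 4) + 1j * np.random.randn(20, 4)) / np.sqrt(2)
print(greedy_quasi_orthogonal_selection(H, 4))
```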
Distance-Based Phylogenetic Methods Around a Polytomy.
Davidson, Ruth; Sullivant, Seth
2014-01-01
Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
Paul, Prokash; Bhattacharyya, Debangsu; Turton, Richard; ...
2017-06-06
Here, a novel sensor network design (SND) algorithm is developed for maximizing process efficiency while minimizing sensor network cost for a nonlinear dynamic process with an estimator-based control system. The multiobjective optimization problem is solved following a lexicographic approach where the process efficiency is maximized first, followed by minimization of the sensor network cost. The partial net present value, which combines the capital cost due to the sensor network and the operating cost due to deviation from the optimal efficiency, is proposed as an alternative objective. The unscented Kalman filter is considered as the nonlinear estimator. The large-scale combinatorial optimization problem is solved using a genetic algorithm. The developed SND algorithm is applied to an acid gas removal (AGR) unit as part of an integrated gasification combined cycle (IGCC) power plant with CO2 capture. Due to the computational expense, a reduced order nonlinear model of the AGR process is identified and parallel computation is performed during implementation.
Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal
2012-01-01
Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of those sequences that maximize likelihood under the Jukes-Cantor model is uninformative in the worst possible sense. For all inputs, all trees optimize the likelihood score. Second, we show that a greedy heuristic that uses GTR+Gamma ML to optimize the alignment and the tree can produce very poor alignments and trees. Therefore, the excellent performance of SATé-II and SATé-I is not because ML is used as an optimization criterion for choosing the best tree/alignment pair but rather due to the particular divide-and-conquer realignment techniques employed.
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are becoming increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO), and expectation maximization (EM). The method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
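The PSO stage described above can be illustrated with a minimal particle swarm minimizing a generic cost function; for source estimation this cost would be the mismatch between ANN-predicted and observed concentrations. The swarm parameters below are conventional defaults, not those of the paper.

```python
import numpy as np

def pso_minimize(cost, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over a box-bounded parameter space.
    bounds: array of shape (dim, 2) with lower/upper limits per parameter."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = bounds.shape[0]
    pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])
        costs = np.array([cost(p) for p in pos])
        improved = costs < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, pbest_cost.min()

# Example: recover a 2-D "source location" that minimizes squared error to (1.0, 2.0)
best, best_cost = pso_minimize(lambda p: np.sum((p - np.array([1.0, 2.0]))**2),
                               bounds=[(-5, 5), (-5, 5)])
```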
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu Xiaoying; Ho, Shirley; Trac, Hy
We investigate machine learning (ML) techniques for predicting the number of galaxies (N_gal) that occupy a halo, given the halo's properties. These types of mappings are crucial for constructing the mock galaxy catalogs necessary for analyses of large-scale structure. The ML techniques proposed here distinguish themselves from traditional halo occupation distribution (HOD) modeling as they do not assume a prescribed relationship between halo properties and N_gal. In addition, our ML approaches are only dependent on parent halo properties (like HOD methods), which are advantageous over subhalo-based approaches as identifying subhalos correctly is difficult. We test two algorithms: support vector machines (SVM) and k-nearest-neighbor (kNN) regression. We take galaxies and halos from the Millennium simulation and predict N_gal by training our algorithms on the following six halo properties: number of particles, M_200, σ_v, v_max, half-mass radius, and spin. For Millennium, our predicted N_gal values have a mean-squared error (MSE) of ~0.16 for both SVM and kNN. Our predictions match the overall distribution of halos reasonably well and the galaxy correlation function at large scales to ~5%-10%. In addition, we demonstrate a feature selection algorithm to isolate the halo parameters that are most predictive, a useful technique for understanding the mapping between halo properties and N_gal. Lastly, we investigate these ML-based approaches in making mock catalogs for different galaxy subpopulations (e.g., blue, red, high M_star, low M_star). Given its non-parametric nature as well as its powerful predictive and feature selection capabilities, ML offers an interesting alternative for creating mock catalogs.
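A minimal version of the kNN mapping described above, trained on hypothetical halo-catalog columns (the feature list mirrors the six properties named, but the data here are synthetic placeholders, not Millennium outputs):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(1)
n_halos = 5000
# Hypothetical standardized halo features: log particle count, log M_200,
# sigma_v, v_max, half-mass radius, spin
X = rng.normal(size=(n_halos, 6))
# Placeholder target: occupation number loosely increasing with the "mass" column
n_gal = np.round(np.exp(0.8 * X[:, 1] + 0.1 * rng.normal(size=n_halos))).clip(0)

knn = KNeighborsRegressor(n_neighbors=10).fit(X[:4000], n_gal[:4000])
pred = knn.predict(X[4000:])
mse = np.mean((pred - n_gal[4000:]) ** 2)   # analogous to the MSE quoted above
```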
Foot side detection from lower lumbar spine acceleration.
Ben Mansour, Khaireddine; Rezzoug, Nasser; Gorce, Philippe
2015-09-01
The purpose of this paper is to present a reliable algorithm to discriminate between left/right foot contact using an accelerometer located over the lower lumbar spine. With the given accelerometer frame orientation, the side detection algorithm, based on the sign of the derivative of the sinusoidal shape obtained from the filtered mediolateral (ML) acceleration, showed 100% correct side detection for all subjects at all walking velocities. From the obtained results, it is concluded that in healthy subjects, the side of subsequent foot contact can be reliably obtained from the ML acceleration pattern of the lower lumbar spine. Copyright © 2015. Published by Elsevier B.V.
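A sketch of the side-detection rule described above: low-pass filter the mediolateral acceleration and read the sign of its derivative at each detected foot contact. The cut-off frequency and the sign-to-side mapping depend on sensor placement and orientation, so both are assumptions here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def foot_side_at_contacts(ml_acc, contact_idx, fs, cutoff_hz=2.0):
    """Classify each foot contact as left or right from the sign of the derivative
    of the low-pass-filtered mediolateral (ML) acceleration at that instant."""
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    ml_smooth = filtfilt(b, a, ml_acc)
    slope = np.gradient(ml_smooth) * fs            # derivative in signal units per second
    # Sign convention (assumed): positive slope -> right contact, negative -> left
    return ["right" if slope[i] > 0 else "left" for i in contact_idx]
```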
Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan
2018-05-01
Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training material) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified five most relevant APMs adopted by the algorithm during predicting. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had significant association with the "ground truth" in surgery time (p < 0.001, r = 0.73), LOS (p = 0.05, r = 0.52), and Foley duration (p < 0.001, r = 0.45). The five most relevant APMs, adopted by the RF-50 algorithm in predicting, were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and training.
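As an illustration of the modeling step (not the authors' exact pipeline or feature set), a random forest with 50 trees can be trained to classify length of stay from a matrix of automated performance metrics; the arrays below are hypothetical placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
apm = rng.normal(size=(78, 25))             # hypothetical APMs for 78 RARP cases
extended_los = rng.integers(0, 2, size=78)  # 0: LOS <= 2 days, 1: LOS > 2 days

rf50 = RandomForestClassifier(n_estimators=50, random_state=0)
accuracy = cross_val_score(rf50, apm, extended_los, cv=5).mean()

# Feature relevance, analogous to identifying the most relevant APMs
rf50.fit(apm, extended_los)
top5 = np.argsort(rf50.feature_importances_)[-5:]
```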
pyOpenMS: a Python-based interface to the OpenMS mass-spectrometry algorithm library.
Röst, Hannes L; Schmitt, Uwe; Aebersold, Ruedi; Malmström, Lars
2014-01-01
pyOpenMS is an open-source, Python-based interface to the C++ OpenMS library, providing facile access to a feature-rich, open-source algorithm library for MS-based proteomics analysis. It contains Python bindings that allow raw access to the data structures and algorithms implemented in OpenMS, specifically those for file access (mzXML, mzML, TraML, mzIdentML among others), basic signal processing (smoothing, filtering, de-isotoping, and peak-picking) and complex data analysis (including label-free, SILAC, iTRAQ, and SWATH analysis tools). pyOpenMS thus allows fast prototyping and efficient workflow development in a fully interactive manner (using the interactive Python interpreter) and is also ideally suited for researchers not proficient in C++. In addition, our code to wrap a complex C++ library is completely open-source, allowing other projects to create similar bindings with ease. The pyOpenMS framework is freely available at https://pypi.python.org/pypi/pyopenms while the autowrap tool to create Cython code automatically is available at https://pypi.python.org/pypi/autowrap (both released under the 3-clause BSD licence). © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
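A minimal usage sketch of the raw-data access described above; the file name is a placeholder, and Gaussian smoothing is shown as one of the basic signal-processing steps the library exposes (assuming the GaussFilter.filterExperiment API).

```python
import pyopenms as oms

exp = oms.MSExperiment()
oms.MzMLFile().load("example.mzML", exp)     # placeholder file name

# Basic signal processing: Gaussian smoothing of every spectrum, in place
oms.GaussFilter().filterExperiment(exp)

# Raw access to the peak data of each spectrum
for spectrum in exp.getSpectra():
    mz, intensity = spectrum.get_peaks()
    print(spectrum.getRT(), mz.size)
```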