Deterministic quantum annealing expectation-maximization algorithm
NASA Astrophysics Data System (ADS)
Miyahara, Hideyuki; Tsumura, Koji; Sughiyama, Yuki
2017-11-01
Maximum likelihood estimation (MLE) is one of the most important methods in machine learning, and the expectation-maximization (EM) algorithm is often used to obtain maximum likelihood estimates. However, EM depends heavily on its initial configuration and often fails to find the global optimum. On the other hand, in the field of physics, quantum annealing (QA) was proposed as a novel optimization approach. Motivated by QA, we propose a quantum annealing extension of EM, which we call the deterministic quantum annealing expectation-maximization (DQAEM) algorithm. We also discuss its advantage in terms of the path integral formulation. Furthermore, by employing numerical simulations, we illustrate how DQAEM works in MLE and show that DQAEM moderates the problem of local optima in EM.
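To make the local-optima issue concrete, here is a minimal sketch (not the authors' DQAEM code) of plain EM for a one-dimensional two-component Gaussian mixture; running it from several random initializations shows how the attained log-likelihood depends on the starting point, which is the weakness DQAEM is designed to moderate. The toy data and all names are illustrative.

```python
import numpy as np

def em_gmm(x, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=2)                      # random initial means
    sigma = np.full(2, x.std())
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: responsibilities under the current parameters
        dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        r = pi * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form maximization of the expected log-likelihood
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-9
        pi = n_k / len(x)
    dens = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return np.log((pi * dens).sum(axis=1)).sum(), mu  # log-likelihood, means

rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 0.5, 200)])
for seed in range(3):     # different restarts can converge to different optima
    print(em_gmm(x, seed=seed))
```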
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
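For reference, the EM baseline the paper compares against is the classic multiplicative ML-EM update for Poisson (emission) data. A minimal sketch with a toy system matrix, not the authors' clinical setup, is:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((60, 25))          # toy system matrix (detector bins x pixels)
x_true = rng.random(25) * 10      # toy activity image
y = rng.poisson(A @ x_true)       # Poisson projection data

x = np.ones(25)                   # strictly positive initialization
sens = A.T @ np.ones(len(y))      # sensitivity image A^T 1
for _ in range(100):
    proj = A @ x                              # forward projection
    ratio = y / np.maximum(proj, 1e-12)       # data / estimated projections
    x *= (A.T @ ratio) / sens                 # multiplicative ML-EM update
```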
Shiota, T; Jones, M; Yamada, I; Heinrich, R S; Ishii, M; Sinclair, B; Holcomb, S; Yoganathan, A P; Sahn, D J
1996-02-01
The aim of the present study was to evaluate dynamic changes in aortic regurgitant (AR) orifice area with the use of calibrated electromagnetic (EM) flowmeters and to validate a color Doppler flow convergence (FC) method for evaluating effective AR orifice area and regurgitant volume. In 6 sheep, 8 to 20 weeks after surgically induced AR, 22 hemodynamically different states were studied. Instantaneous regurgitant flow rates were obtained by aortic and pulmonary EM flowmeters balanced against each other. Instantaneous AR orifice areas were determined by dividing these actual AR flow rates by the corresponding continuous wave velocities (over 25 to 40 points during each diastole) matched for each steady state. Echo studies were performed to obtain maximal aliasing distances of the FC in a low range (0.20 to 0.32 m/s) and a high range (0.70 to 0.89 m/s) of aliasing velocities (AV); the corresponding maximal AR flow rates were calculated using the hemispheric flow convergence assumption for the FC isovelocity surface. AR orifice areas were derived by dividing the maximal flow rates by the maximal continuous wave Doppler velocities. AR orifice sizes obtained with the use of EM flowmeters showed little change during diastole. Maximal and time-averaged AR orifice areas during diastole obtained by EM flowmeters ranged from 0.06 to 0.44 cm2 (mean, 0.24 +/- 0.11 cm2) and from 0.05 to 0.43 cm2 (mean, 0.21 +/- 0.06 cm2), respectively. Maximal AR orifice areas by FC using low AV overestimated reference EM orifice areas; however, at high AV, FC predicted the reference areas more reliably (0.25 +/- 0.16 cm2, r = .82, difference = 0.04 +/- 0.07 cm2). The product of the maximal orifice area obtained by the FC method using high AV and the velocity time integral of the regurgitant orifice velocity showed good agreement with regurgitant volumes per beat (r = .81, difference = 0.9 +/- 7.9 mL/beat). This study, using strictly quantified AR volume, demonstrated little change in AR orifice size during diastole. When high aliasing velocities are chosen, the FC method can be useful for determining effective AR orifice size and regurgitant volume.
ERIC Educational Resources Information Center
Enders, Craig K.; Peugh, James L.
2004-01-01
Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
ERIC Educational Resources Information Center
Chen, Ping
2017-01-01
Calibration of new items online has been an important topic in item replenishment for multidimensional computerized adaptive testing (MCAT). Several online calibration methods have been proposed for MCAT, such as multidimensional "one expectation-maximization (EM) cycle" (M-OEM) and multidimensional "multiple EM cycles"…
A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.
Lloyd, Chris J
2008-09-01
We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
Evidential analysis of difference images for change detection of multitemporal remote sensing images
NASA Astrophysics Data System (ADS)
Chen, Yin; Peng, Lijuan; Cremers, Armin B.
2018-03-01
In this article, we develop two methods for unsupervised change detection in multitemporal remote sensing images based on Dempster-Shafer theory of evidence (DST). In most unsupervised change detection methods, the probability distribution of the difference image is assumed to be characterized by mixture models, whose parameters are estimated by the expectation maximization (EM) method. However, the main drawback of the EM method is that it does not consider spatial contextual information, which may entail rather noisy detection results with numerous spurious alarms. To remedy this, we first develop an evidence theory based EM method (EEM) which incorporates spatial contextual information into EM by iteratively fusing the belief assignments of neighboring pixels into the central pixel. Second, an evidential labeling method in the sense of maximizing a posteriori probability (MAP) is proposed in order to further enhance the detection result. It first uses the parameters estimated by EEM to initialize the class labels of a difference image. Then it iteratively fuses class conditional information and spatial contextual information, and updates labels and class parameters. Finally it converges to a fixed state which gives the detection result. A simulated image set and two real remote sensing data sets are used to evaluate the two evidential change detection methods. Experimental results show that the new evidential methods are comparable to other prevalent methods in terms of total error rate.
Muthalib, Makii; Jubeau, Marc; Millet, Guillaume Y; Maffiuletti, Nicola A; Nosaka, Kazunori
2009-09-01
This study compared voluntary (VOL) and electrically evoked isometric contractions by muscle stimulation (EMS) for changes in biceps brachii muscle oxygenation (tissue oxygenation index, DeltaTOI) and total haemoglobin concentration (DeltatHb = oxygenated haemoglobin + deoxygenated haemoglobin) determined by near-infrared spectroscopy. Twelve men performed EMS with one arm followed 24 h later by VOL with the contralateral arm, consisting of 30 repeated (1-s contraction, 1-s relaxation) isometric contractions at 30% of maximal voluntary contraction (MVC) for the first 60 s, and maximal intensity contractions thereafter (MVC for VOL and maximal tolerable current at 30 Hz for EMS) until MVC decreased by approximately 30% of pre-exercise MVC. During the 30 contractions at 30% MVC, DeltaTOI decrease was significantly (P < 0.05) greater and DeltatHb was significantly (P < 0.05) lower for EMS than VOL, suggesting that the metabolic demand for oxygen in EMS is greater than in VOL at the same torque level. However, during maximal intensity contractions, although EMS torque (approximately 40% of VOL) was significantly (P < 0.05) lower than VOL, DeltaTOI was similar and tHb was significantly (P < 0.05) lower for EMS than VOL towards the end, without significant differences between the two sessions in the recovery period. It is concluded that the oxygen demand of the activated biceps brachii muscle in EMS is comparable to VOL at maximal intensity.
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
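A minimal sketch of the two clustering algorithms the paper compares, using scikit-learn in place of the authors' (unspecified) tooling; the two-dimensional data here are synthetic stand-ins for the red-wine features:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1.0, (100, 2)), rng.normal(4, 1.5, (100, 2))])

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)
# GaussianMixture fits full covariances via EM with soft assignments, so its
# clusters can differ from the equal-variance, hard-assignment K-means ones.
```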
NASA Astrophysics Data System (ADS)
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm rests on the assumption that the point cloud can be seen as a mixture of Gaussian models, so that separating ground points from non-ground points can be recast as separating the components of a Gaussian mixture model. Expectation-maximization (EM) is applied to realize the separation: EM computes maximum likelihood estimates of the mixture parameters, from which the likelihood of each point belonging to ground or objects can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired by the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. For quantitative evaluation, this paper adopted the dataset provided by the ISPRS; the proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
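A minimal sketch of the core idea under simplifying assumptions (one-dimensional point heights, no intensity refinement): fit a two-component Gaussian mixture with EM and label each point with its more likely component. The heights are synthetic stand-ins for real LiDAR returns.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
z = np.concatenate([rng.normal(101.0, 0.3, 5000),    # toy ground returns
                    rng.normal(106.0, 2.0, 1500)])   # toy object returns

gmm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
labels = gmm.predict(z.reshape(-1, 1))               # hard labels after EM
ground_comp = np.argmin(gmm.means_.ravel())          # lower-mean component = ground
is_ground = labels == ground_comp
```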
NASA Astrophysics Data System (ADS)
Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang
2018-04-01
Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source are increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide high efficiency and accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO), and expectation maximization (EM). The method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied to estimate the source parameters, which effectively accelerates convergence. The method is verified on the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia
2013-02-01
The objective of this study was to reduce metal-induced streak artifacts on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Small region of interest (ROI) setting and reverse processing were also applied to improve performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriment. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction, and the OS-EM algorithm and small ROI setting improved the performance.
Text Classification for Intelligent Portfolio Management
2002-05-01
years including nearest neighbor classification [15], naive Bayes with EM (Expectation Maximization) [11] [13], Winnow with active learning [10... Active Learning and Expectation Maximization (EM). In particular, active learning is used to actively select documents for labeling, then EM assigns...generalization with active learning. Machine Learning, 15(2):201-221, 1994. [3] I. Dagan and P. Engelson. Committee-based sampling for training
Deterministic annealing for density estimation by multivariate normal mixtures
NASA Astrophysics Data System (ADS)
Kloppenburg, Martin; Tavan, Paul
1997-03-01
An approach to maximum-likelihood density estimation by mixtures of multivariate normal distributions for large high-dimensional data sets is presented. Conventionally that problem is tackled by notoriously unstable expectation-maximization (EM) algorithms. We remove these instabilities by the introduction of soft constraints, enabling deterministic annealing. Our developments are motivated by the proof that algorithmically stable fuzzy clustering methods that are derived from statistical physics analogs are special cases of EM procedures.
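A minimal sketch of deterministic-annealing EM in the spirit of the paper (not the authors' implementation): the E-step responsibilities are computed at an inverse temperature beta that is annealed toward 1, so early iterations see a smoothed likelihood surface with fewer local optima. The schedule and the one-dimensional mixture are illustrative.

```python
import numpy as np

def daem_gmm(x, betas=(0.1, 0.3, 0.6, 1.0), iters_per_beta=50, seed=0):
    rng = np.random.default_rng(seed)
    mu, sigma, pi = rng.choice(x, 2), np.full(2, x.std()), np.full(2, 0.5)
    for beta in betas:                                 # annealing schedule
        for _ in range(iters_per_beta):
            # annealed E-step: responsibilities raised to the power beta
            logp = np.log(pi) - np.log(sigma) - 0.5 * ((x[:, None] - mu) / sigma) ** 2
            r = np.exp(beta * (logp - logp.max(axis=1, keepdims=True)))
            r /= r.sum(axis=1, keepdims=True)
            # standard M-step
            n_k = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / n_k
            sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-9
            pi = n_k / len(x)
    return mu, sigma, pi
```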
A Probability Based Framework for Testing the Missing Data Mechanism
ERIC Educational Resources Information Center
Lin, Johnny Cheng-Han
2013-01-01
Many methods exist for imputing missing data but fewer methods have been proposed to test the missing data mechanism. Little (1988) introduced a multivariate chi-square test for the missing completely at random data mechanism (MCAR) that compares observed means for each pattern with expectation-maximization (EM) estimated means. As an alternative,…
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely, the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed to a non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing.
ERIC Educational Resources Information Center
Köse, Alper
2014-01-01
The primary objective of this study was to examine the effect of missing data on goodness-of-fit statistics in confirmatory factor analysis (CFA). To this aim, four missing data handling methods (listwise deletion, full information maximum likelihood, regression imputation, and expectation maximization (EM) imputation) were examined in terms of…
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
Flexible mini gamma camera reconstructions of extended sources using step and shoot and list mode.
Gardiazabal, José; Matthies, Philipp; Vogel, Jakob; Frisch, Benjamin; Navab, Nassir; Ziegler, Sibylle; Lasser, Tobias
2016-12-01
Hand- and robot-guided mini gamma cameras have been introduced for the acquisition of single-photon emission computed tomography (SPECT) images. Less cumbersome than whole-body scanners, they allow for a fast acquisition of the radioactivity distribution, for example, to differentiate cancerous from hormonally hyperactive lesions inside the thyroid. This work compares acquisition protocols and reconstruction algorithms in an attempt to identify the most suitable approach for fast acquisition and efficient image reconstruction, suitable for localization of extended sources, such as lesions inside the thyroid. Our setup consists of a mini gamma camera with precise tracking information provided by a robotic arm, which also provides reproducible positioning for our experiments. Based on a realistic phantom of the thyroid including hot and cold nodules as well as background radioactivity, the authors compare "step and shoot" (SAS) and continuous data (CD) acquisition protocols in combination with two different statistical reconstruction methods: maximum-likelihood expectation-maximization (ML-EM) for time-integrated count values and list-mode expectation-maximization (LM-EM) for individually detected gamma rays. In addition, the authors simulate lower uptake values by statistically subsampling the experimental data in order to study the behavior of their approach without changing other aspects of the acquired data. All compared methods yield suitable results, resolving the hot nodules and the cold nodule from the background. However, the CD acquisition is twice as fast as the SAS acquisition, while yielding better coverage of the thyroid phantom, resulting in qualitatively more accurate reconstructions of the isthmus between the lobes. For CD acquisitions, the LM-EM reconstruction method is preferable, as it yields comparable image quality to ML-EM at significantly higher speeds, on average by an order of magnitude. This work identifies CD acquisition protocols combined with LM-EM reconstruction as a prime candidate for the wider introduction of SPECT imaging with flexible mini gamma cameras in the clinical practice.
SubspaceEM: A Fast Maximum-a-posteriori Algorithm for Cryo-EM Single Particle Reconstruction
Dvornek, Nicha C.; Sigworth, Fred J.; Tagare, Hemant D.
2015-01-01
Single particle reconstruction methods based on the maximum-likelihood principle and the expectation-maximization (E–M) algorithm are popular because of their ability to produce high resolution structures. However, these algorithms are computationally very expensive, requiring a network of computational servers. To overcome this computational bottleneck, we propose a new mathematical framework for accelerating maximum-likelihood reconstructions. The speedup is by orders of magnitude and the proposed algorithm produces similar quality reconstructions compared to the standard maximum-likelihood formulation. Our approach uses subspace approximations of the cryo-electron microscopy (cryo-EM) data and projection images, greatly reducing the number of image transformations and comparisons that are computed. Experiments using simulated and actual cryo-EM data show that speedup in overall execution time compared to traditional maximum-likelihood reconstruction reaches factors of over 300. PMID:25839831
Varying-energy CT imaging method based on EM-TV
NASA Astrophysics Data System (ADS)
Chen, Ping; Han, Yan
2016-11-01
For complicated structural components with wide x-ray attenuation ranges, conventional fixed-energy computed tomography (CT) imaging cannot obtain all the structural information. This limitation results in a shortage of CT information because the effective thickness of the components along the direction of x-ray penetration exceeds the limit of the dynamic range of the x-ray imaging system. To address this problem, a varying-energy x-ray CT imaging method is proposed. In this new method, the tube voltage is adjusted several times in small fixed increments. Fusion of grey-level consistency and logarithmic demodulation is then applied to obtain complete, lower-noise projections with a high dynamic range (HDR). In addition, to address the noise suppression problem of the analytical method, EM-TV (expectation maximization-total variation) iterative reconstruction is used; in the iteration process, the reconstruction result obtained at one x-ray energy is used as the initial condition of the next iteration. An accompanying experiment demonstrates that this EM-TV reconstruction can also extend the dynamic range of x-ray imaging systems and provides higher reconstruction quality than the fusion reconstruction method.
NASA Astrophysics Data System (ADS)
Floberg, J. M.; Holden, J. E.
2013-02-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, that combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
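A minimal sketch of the two STEM stages under simplifying assumptions: a 4-D Gaussian filter followed by a few expectation-maximization (Richardson-Lucy) deconvolution iterations that treat the same Gaussian as the blur kernel. The (t, x, y, z) frames are toy Poisson data, not real PET, and the sigmas and iteration count are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def stem_filter(frames, sigma=(1.0, 1.5, 1.5, 1.5), em_iters=5):
    blurred = gaussian_filter(frames, sigma)       # stage 1: 4-D Gaussian smoothing
    est = np.maximum(blurred, 1e-12)
    for _ in range(em_iters):                      # stage 2: EM (Richardson-Lucy)
        reblurred = np.maximum(gaussian_filter(est, sigma), 1e-12)
        est *= gaussian_filter(blurred / reblurred, sigma)  # symmetric PSF
    return est

frames = np.random.default_rng(0).poisson(5.0, (6, 16, 16, 16)).astype(float)
denoised = stem_filter(frames)
```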
Optimal Clustering in Graphs with Weighted Edges: A Unified Approach to the Threshold Problem.
ERIC Educational Resources Information Center
Goetschel, Roy; Voxman, William
1987-01-01
Relations on a finite set V are viewed as weighted graphs. Using the language of graph theory, two methods of partitioning V are examined: selecting threshold values and applying them to a maximal weighted spanning forest, and using a parametric linear program to obtain a most adhesive partition. (Author/EM)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kagie, Matthew J.; Lanterman, Aaron D.
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
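A minimal sketch of this kind of censored-data EM under the simplifying assumption of zero background (the paper additionally handles additive background counts): saturated bins are replaced in the E-step by their conditional means given saturation, after which the amplitude update is closed-form. The pulse shape, amplitude, and saturation level are all illustrative.

```python
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(0)
s = np.exp(-0.5 * ((np.arange(50) - 25) / 6.0) ** 2)  # known pulse shape
a_true, cap = 40.0, 30                                # amplitude, saturation level
y = np.minimum(rng.poisson(a_true * s), cap)          # right-censored counts
censored = y >= cap

a = 10.0                                              # initial amplitude guess
for _ in range(100):
    m = a * s
    # E-step: E[Y | Y >= cap] = m * P(Y >= cap-1) / P(Y >= cap) for Poisson(m)
    ey = y.astype(float).copy()
    ey[censored] = (m[censored] * poisson.sf(cap - 2, m[censored])
                    / poisson.sf(cap - 1, m[censored]))
    # M-step: closed-form amplitude update (zero-background assumption)
    a = ey.sum() / s.sum()
```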
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.
NASA Astrophysics Data System (ADS)
Khambampati, A. K.; Rashid, A.; Kim, B. S.; Liu, Dong; Kim, S.; Kim, K. Y.
2010-04-01
EIT has been used for the dynamic estimation of organ boundaries. One specific application in this context is the estimation of lung boundaries during pulmonary circulation. This would help track the size and shape of the lungs of patients suffering from diseases like pulmonary edema and acute respiratory failure (ARF). The dynamic boundary estimation of the lungs can also be utilized to set and control the air volume and pressure delivered to patients during artificial ventilation. In this paper, the expectation-maximization (EM) algorithm is used as an inverse algorithm to estimate the non-stationary lung boundary. The uncertainties caused in Kalman-type filters by inaccurate selection of model parameters are overcome using the EM algorithm. Numerical experiments using a chest-shaped geometry are carried out with the proposed method, and the performance is compared with the extended Kalman filter (EKF). Results show superior performance of EM in estimation of the lung boundary.
Semi-supervised Learning for Phenotyping Tasks.
Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K
2015-01-01
Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor, and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of manual chart review required.
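A minimal sketch of the downweighting idea on a one-dimensional Gaussian class-conditional model (the paper's phenotyping models and features are richer): labeled examples keep fixed hard labels, while unlabeled responsibilities enter the M-step scaled by a weighting factor lam. All names and data are illustrative.

```python
import numpy as np

def semisup_em(x_l, y_l, x_u, lam=0.1, n_iter=50):
    # x_l, y_l: labeled values and integer labels (0/1); x_u: unlabeled values
    mu = np.array([x_l[y_l == k].mean() for k in (0, 1)])
    sd = np.array([x_l[y_l == k].std() + 1e-6 for k in (0, 1)])
    pi = np.array([(y_l == k).mean() for k in (0, 1)])
    r_l = np.eye(2)[y_l]                            # labeled: fixed hard labels
    for _ in range(n_iter):
        # E-step on unlabeled data only
        d = pi * np.exp(-0.5 * ((x_u[:, None] - mu) / sd) ** 2) / sd
        r_u = d / d.sum(axis=1, keepdims=True)
        # M-step: unlabeled responsibilities enter with weight lam
        x = np.concatenate([x_l, x_u])
        r = np.vstack([r_l, lam * r_u])
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k) + 1e-6
        pi = n_k / n_k.sum()
    return mu, sd, pi

rng = np.random.default_rng(0)
x_l = np.array([-2.0, -1.5, 2.0, 2.5])              # a few labeled examples
y_l = np.array([0, 0, 1, 1])
x_u = np.concatenate([rng.normal(-2, 1, 500), rng.normal(2, 1, 500)])
print(semisup_em(x_l, y_l, x_u, lam=0.1))
```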
Detection of the power lines in UAV remote sensed images using spectral-spatial methods.
Bhola, Rishav; Krishna, Nandigam Hari; Ramesh, K N; Senthilnath, J; Anand, Gautham
2018-01-15
In this paper, detection of power lines in images acquired by Unmanned Aerial Vehicle (UAV) based remote sensing is carried out using spectral-spatial methods. Spectral clustering was performed using the K-means and expectation maximization (EM) algorithms to classify the pixels into power lines and non-power lines. The spectral clustering methods used in this study are parametric in nature; to automate the choice of the number of clusters, the Davies-Bouldin index (DBI) is used. The UAV remote sensed image is clustered into the number of clusters determined by DBI, and the k-clustered image is then merged into 2 clusters (power lines and non-power lines). Further, spatial segmentation was performed using morphological and geometric operations to eliminate the non-power-line regions. In this study, UAV images acquired at different altitudes and angles were analyzed to validate the robustness of the proposed method. It was observed that EM with spatial segmentation (EM-Seg) performed better than K-means with spatial segmentation (Kmeans-Seg) on most of the UAV images.
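A minimal sketch of the DBI-based automation step, with scikit-learn standing in for the authors' implementation and random pixels standing in for the UAV imagery: sweep candidate k, score each clustering with the Davies-Bouldin index, and keep the k with the lowest (best) score.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

pixels = np.random.default_rng(0).random((2000, 3))   # toy RGB pixel features

scores = {}
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    scores[k] = davies_bouldin_score(pixels, labels)
best_k = min(scores, key=scores.get)                  # lowest DBI wins
```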
Detection of delamination defects in CFRP materials using ultrasonic signal processing.
Benammar, Abdessalem; Drai, Redouane; Guessoum, Abderrezak
2008-12-01
In this paper, signal processing techniques are tested for their ability to resolve echoes associated with delaminations in carbon fiber-reinforced polymer multi-layered composite materials (CFRP) detected by ultrasonic methods. These methods include split spectrum processing (SSP) and the expectation-maximization (EM) algorithm. A simulation study on defect detection was performed, and the results were validated experimentally on CFRP, taken from aircraft, with and without delamination defects. Comparisons of the methods' ability to resolve echoes are made.
The EM Method in a Probabilistic Wavelet-Based MRI Denoising.
Martin-Fernandez, Marcos; Villullas, Sergio
2015-01-01
Human body heat emission and other external causes can interfere with magnetic resonance image acquisition and produce noise. In this kind of image, the noise, when no signal is present, is Rayleigh distributed, and its wavelet coefficients can be approximately modeled by a Gaussian distribution. Noiseless magnetic resonance images can be modeled by a Laplacian distribution in the wavelet domain. This paper proposes a new magnetic resonance image denoising method that exploits these properties. The method performs shrinkage of wavelet coefficients based on the conditional probability of their being noise or detail. The parameters involved in this filtering approach are calculated by means of the expectation maximization (EM) method, which avoids the need for an estimator of the noise variance. The efficiency of the proposed filter is studied and compared with other important filtering techniques, such as Nowak's, Donoho-Johnstone's, Awate-Whitaker's, and nonlocal means filters, in different 2D and 3D images. PMID:26089959
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable for estimation of the constraint-based models with many constraints and large numbers of parameters (which use EM) like HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM has been plugged with the EA periodically after the execution of EA for a specific period of time to maintain the global sampling capabilities of EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).
Improved Correction of Atmospheric Pressure Data Obtained by Smartphones through Machine Learning
Kim, Yong-Hyuk; Ha, Ji-Hun; Kim, Na-Young; Im, Hyo-Hyuc; Sim, Sangjin; Choi, Reno K. Y.
2016-01-01
A correction method using machine learning aims to improve the conventional linear regression (LR) based method for correction of atmospheric pressure data obtained by smartphones. The method proposed in this study conducts clustering and regression analysis with time-domain classification. Data obtained using smartphones in Gyeonggi-do, one of the most populous provinces in South Korea surrounding Seoul, with an area of 10,000 km2, from July 2014 through December 2014, were classified with respect to time of day (daytime or nighttime), day of the week (weekday or weekend), and the user's mobility, prior to expectation-maximization (EM) clustering. Subsequently, the results were analyzed for comparison by applying machine learning methods such as multilayer perceptron (MLP) and support vector regression (SVR). The results showed a mean absolute error (MAE) 26% lower on average when regression analysis was performed through EM clustering compared to that obtained without EM clustering. Among the machine learning methods, the MAE for SVR was around 31% lower than for LR and about 19% lower than for MLP. It is concluded that pressure data from smartphones are as good as those from the national automatic weather station (AWS) network. PMID:27524999
ERIC Educational Resources Information Center
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
NASA Astrophysics Data System (ADS)
Zhou, Ya-Tong; Fan, Yu; Chen, Zi-Yi; Sun, Jian-Cheng
2017-05-01
The contribution of this work is twofold: (1) a multimodality prediction method of chaotic time series with the Gaussian process mixture (GPM) model is proposed, which employs a divide and conquer strategy. It automatically divides the chaotic time series into multiple modalities with different extrinsic patterns and intrinsic characteristics, and thus can more precisely fit the chaotic time series. (2) An effective sparse hard-cut expectation maximization (SHC-EM) learning algorithm for the GPM model is proposed to improve the prediction performance. SHC-EM replaces a large learning sample set with fewer pseudo inputs, accelerating model learning based on these pseudo inputs. Experiments on Lorenz and Chua time series demonstrate that the proposed method yields not only accurate multimodality prediction, but also the prediction confidence interval. SHC-EM outperforms traditional variational learning in terms of both prediction accuracy and speed. In addition, SHC-EM is more robust and less susceptible to noise than variational learning. Supported by the National Natural Science Foundation of China under Grant No 60972106, the China Postdoctoral Science Foundation under Grant No 2014M561053, the Humanity and Social Science Foundation of Ministry of Education of China under Grant No 15YJA630108, and the Hebei Province Natural Science Foundation under Grant No E2016202341.
Robust EM Continual Reassessment Method in Oncology Dose Finding
Yuan, Ying; Yin, Guosheng
2012-01-01
The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092
NASA Astrophysics Data System (ADS)
Yozgatligil, Ceylan; Aslan, Sipan; Iyigun, Cem; Batmaz, Inci
2013-04-01
This study aims to compare several imputation methods to complete the missing values of spatio-temporal meteorological time series. To this end, six imputation methods are assessed with respect to various criteria, including accuracy, robustness, precision, and efficiency, for artificially created missing data in monthly total precipitation and mean temperature series obtained from the Turkish State Meteorological Service. Of these methods, simple arithmetic average, normal ratio (NR), and NR weighted with correlations are the simple ones, whereas multilayer perceptron type neural networks and the multiple imputation strategy adopted by Monte Carlo Markov Chain based on expectation-maximization (EM-MCMC) are computationally intensive ones. In addition, we propose a modification of the EM-MCMC method. Besides using a conventional accuracy measure based on squared errors, we also suggest the correlation dimension (CD) technique of nonlinear dynamic time series analysis, which takes spatio-temporal dependencies into account, for evaluating imputation performances. Based on the detailed graphical and quantitative analysis, it can be said that although the computational methods, particularly the EM-MCMC method, are computationally inefficient, they seem favorable for imputation of meteorological time series with respect to different missingness periods considering both measures and both series studied. To conclude, using the EM-MCMC algorithm for imputing missing values before conducting any statistical analyses of meteorological data will definitely decrease the amount of uncertainty and give more robust results. Moreover, the CD measure can be suggested for the performance evaluation of missing data imputation, particularly with computational methods, since it gives more precise results in meteorological time series.
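For intuition about likelihood-based imputation, here is a minimal sketch of plain EM for a bivariate normal with values missing in one variable. It is far simpler than the EM-MCMC multiple-imputation strategy the paper evaluates (no MCMC, and a single imputation rather than multiple), and the data and missingness pattern are illustrative.

```python
import numpy as np

def em_impute(X, n_iter=100):
    # X: (n, 2) array; np.nan marks missing entries in column 1 only
    miss = np.isnan(X[:, 1])
    mu = np.nanmean(X, axis=0)
    S = np.diag([np.nanvar(X[:, 0]), np.nanvar(X[:, 1])])
    for _ in range(n_iter):
        # E-step: conditional mean of missing x2 given observed x1
        Xf = X.copy()
        Xf[miss, 1] = mu[1] + S[0, 1] / S[0, 0] * (X[miss, 0] - mu[0])
        cvar = S[1, 1] - S[0, 1] ** 2 / S[0, 0]      # conditional variance
        # M-step: refit mean and covariance, adding the conditional-variance
        # contribution of the imputed entries
        mu = Xf.mean(axis=0)
        D = Xf - mu
        S = D.T @ D / len(Xf)
        S[1, 1] += cvar * miss.mean()
    return mu, S, Xf

rng = np.random.default_rng(0)
X = rng.multivariate_normal([10.0, 20.0], [[4.0, 3.0], [3.0, 9.0]], 500)
X[rng.random(500) < 0.3, 1] = np.nan                 # 30% missing in column 1
mu_hat, S_hat, X_completed = em_impute(X)
```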
Generalized expectation-maximization segmentation of brain MR images
NASA Astrophysics Data System (ADS)
Devalkeneer, Arnaud A.; Robe, Pierre A.; Verly, Jacques G.; Phillips, Christophe L. M.
2006-03-01
Manual segmentation of medical images is impractical because it is time-consuming, not reproducible, and prone to human error. It is also very difficult to take into account the 3D nature of the images. Thus, semi- or fully-automatic methods are of great interest. Current segmentation algorithms based on an Expectation-Maximization (EM) procedure present some limitations. The algorithm by Ashburner et al., 2005, does not allow multichannel inputs, e.g. two MR images of different contrast, and does not use spatial constraints between adjacent voxels, e.g. Markov random field (MRF) constraints. The solution of Van Leemput et al., 1999, employs a simplified model (mixture coefficients are not estimated, and only one Gaussian is used per tissue class, with three for the image background). We have thus implemented an algorithm that combines the features of these two approaches: multichannel inputs, intensity bias correction, a multi-Gaussian histogram model, and Markov random field (MRF) constraints. Our proposed method classifies tissues in three iterative main stages by way of a Generalized-EM (GEM) algorithm: (1) estimation of the Gaussian parameters modeling the histogram of the images, (2) correction of image intensity non-uniformity, and (3) modification of prior classification knowledge by MRF techniques. The goal of the GEM algorithm is to maximize the log-likelihood across the classes and voxels. Our segmentation algorithm was validated on synthetic data (with the Dice metric criterion) and real data (by a neurosurgeon) and compared to the original algorithms by Ashburner et al. and Van Leemput et al. Our combined approach leads to more robust and accurate segmentation.
NASA Astrophysics Data System (ADS)
Dabiri, Mohammad Taghi; Sadough, Seyed Mohammad Sajad
2018-04-01
In free-space optical (FSO) links, atmospheric turbulence leads to scintillation in the received signal. Due to its ease of implementation, intensity modulation with direct detection (IM/DD) based on ON-OFF keying (OOK) is a popular signaling scheme in these systems. Over a turbulence channel, to detect OOK symbols in a blind way, i.e., without sending pilot symbols, an expectation-maximization (EM)-based detection method was recently proposed in the FSO literature. However, the performance of EM-based detection methods depends strongly on the length of the observation interval (Ls). To choose the optimum value of Ls at the target bit error rates (BERs) of FSO communications, which are commonly lower than 10^-9, Monte-Carlo simulations would be very cumbersome and require a very long processing time. To facilitate performance evaluation, in this letter we derive analytic expressions for the BER and outage probability. Numerical results validate the accuracy of our derived analytic expressions. Our results may serve to evaluate the optimum value of Ls without resorting to time-consuming Monte-Carlo simulations.
Counting malaria parasites with a two-stage EM-based algorithm using crowdsourced data.
Cabrera-Bean, Margarita; Pages-Zamora, Alba; Diaz-Vilor, Carles; Postigo-Camps, Maria; Cuadrado-Sanchez, Daniel; Luengo-Oroz, Miguel Angel
2017-07-01
Worldwide malaria eradication is currently one of the WHO's main global goals. In this work, we focus on the use of human-machine interaction strategies for low-cost, fast, and reliable malaria diagnosis based on a crowdsourced approach. The technical problem addressed consists in detecting spots in images even under very harsh conditions, when positive objects are very similar to some artifacts. The clicks or tags delivered by several annotators labeling an image are modeled as a robust finite mixture, and techniques based on the Expectation-Maximization (EM) algorithm are proposed for accurately counting malaria parasites on thick blood smears obtained by microscopic Giemsa-stained techniques. This approach outperforms other traditional methods, as shown through experiments with real data.
Biceps brachii muscle oxygenation in electrical muscle stimulation.
Muthalib, Makii; Jubeau, Marc; Millet, Guillaume Y; Maffiuletti, Nicola A; Ferrari, Marco; Nosaka, Kazunori
2010-09-01
The purpose of this study was to compare between electrical muscle stimulation (EMS) and maximal voluntary (VOL) isometric contractions of the elbow flexors for changes in biceps brachii muscle oxygenation (tissue oxygenation index, TOI) and haemodynamics (total haemoglobin volume, tHb = oxygenated-Hb + deoxygenated-Hb) determined by near-infrared spectroscopy (NIRS). The biceps brachii muscle of 10 healthy men (23-39 years) was electrically stimulated at high frequency (75 Hz) via surface electrodes to evoke 50 intermittent (4-s contraction, 15-s relaxation) isometric contractions at maximum tolerated current level (EMS session). The contralateral arm performed 50 intermittent (4-s contraction, 15-s relaxation) maximal voluntary isometric contractions (VOL session) in a counterbalanced order separated by 2-3 weeks. Results indicated that although the torque produced during EMS was approximately 50% of VOL (P<0.05), there was no significant difference in the changes in TOI amplitude or TOI slope between EMS and VOL over the 50 contractions. However, the TOI amplitude divided by peak torque was approximately 50% lower for EMS than VOL (P<0.05), which indicates EMS was less efficient than VOL. This seems likely because of the difference in the muscles involved in the force production between conditions. Mean decrease in tHb amplitude during the contraction phases was significantly (P<0.05) greater for EMS than VOL from the 10th contraction onwards, suggesting that the muscle blood volume was lower in EMS than VOL. It is concluded that local oxygen demand of the biceps brachii sampled by NIRS is similar between VOL and EMS.
Matkowski, Boris; Lepers, Romuald; Martin, Alain
2015-05-01
The aim of this study was to analyze the neuromuscular mechanisms involved in the torque decrease induced by submaximal electromyostimulation (EMS) of the quadriceps muscle. It was hypothesized that torque decrease after EMS would reflect the fatigability of the activated motor units (MUs), but also a reduction in the number of MUs recruited as a result of changes in axonal excitability threshold. Two experiments were performed on 20 men to analyze 1) the supramaximal twitch superimposed and evoked at rest during EMS (Experiment 1, n = 9) and 2) the twitch response and torque-frequency relation of the MUs activated by EMS (Experiment 2, n = 11). Torque loss was assessed by 15 EMS-evoked contractions (50 Hz; 6 s on/6 s off), elicited at a constant intensity that evoked 20% of the maximal voluntary contraction (MVC) torque. The same stimulation intensity delivered over the muscles was used to induce the torque-frequency relation and the single electrical pulse evoked after each EMS contraction (Experiment 2). In Experiment 1, supramaximal twitch was induced by femoral nerve stimulation. Torque decreased by ~60% during EMS-evoked contractions and by only ~18% during MVCs. This was accompanied by a rightward shift of the torque-frequency relation of MUs activated and an increase of the ratio between the superimposed and posttetanic maximal twitch evoked during EMS contraction. These findings suggest that the torque decrease observed during submaximal EMS-evoked contractions involved muscular mechanisms but also a reduction in the number of MUs recruited due to changes in axonal excitability.
An EM Algorithm for Maximum Likelihood Estimation of Process Factor Analysis Models
ERIC Educational Resources Information Center
Lee, Taehun
2010-01-01
In this dissertation, an Expectation-Maximization (EM) algorithm is developed and implemented to obtain maximum likelihood estimates of the parameters and the associated standard error estimates characterizing temporal flows for the latent variable time series following stationary vector ARMA processes, as well as the parameters defining the…
Paavolainen, Lassi; Acar, Erman; Tuna, Uygar; Peltonen, Sari; Moriya, Toshio; Soonsawad, Pan; Marjomäki, Varpu; Cheng, R Holland; Ruotsalainen, Ulla
2014-01-01
Electron tomography (ET) of biological samples is used to study the organization and the structure of the whole cell and subcellular complexes in great detail. However, with biological samples, projections cannot be acquired over the full tilt-angle range in electron microscopy. ET image reconstruction can be considered an ill-posed problem because of this missing information. This results in artifacts, seen as a loss of three-dimensional (3D) resolution in the reconstructed images. The goal of this study was to achieve isotropic resolution with a statistical reconstruction method, sequential maximum a posteriori expectation maximization (sMAP-EM), using no prior morphological knowledge about the specimen. The missing-wedge effects on sMAP-EM were examined with a synthetic cell phantom to assess the effects of noise. An experimental dataset of a multivesicular body was evaluated with a number of gold particles. An ellipsoid-fitting-based method was developed to realize the quantitative measures elongation and contrast in an automated, objective, and reliable way. The method statistically evaluates sub-volumes containing gold particles randomly located in various parts of the whole volume, thus giving information about the robustness of the volume reconstruction. The quantitative results were also compared with reconstructions made with the widely used weighted backprojection and simultaneous iterative reconstruction technique methods. The results showed that the proposed sMAP-EM method significantly suppresses the effects of the missing information, producing isotropic resolution. Furthermore, this method improves the contrast ratio, enhancing the applicability of further automatic and semi-automatic analysis. These improvements in ET reconstruction by sMAP-EM enable analysis of subcellular structures with higher three-dimensional resolution and contrast than conventional methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2016 accomplishments and primary areas of focus for the Department of Energy's (DOE's) Office of Environmental Management (EM) and EM sites are presented. For DOE EM, these include Focusing on the Field, Teaming with Cleanup Partners, Developing New Technology, and Maximizing Cleanup Dollars. Major 2016 achievements are highlighted for EM, the Richland Operations Office, the Office of River Protection, the Savannah River Site, Oak Ridge, Idaho, the Waste Isolation Pilot Plant, Los Alamos, Portsmouth, Paducah, the West Valley Demonstration Project, and the Nevada National Security Site.
Wang, Jiexin; Uchibe, Eiji; Doya, Kenji
2017-01-01
EM-based policy search methods estimate a lower bound of the expected return from the histories of episodes and iteratively update the policy parameters by maximizing this lower bound, which makes gradient calculation and learning rate tuning unnecessary. Previous algorithms such as Policy learning by Weighting Exploration with the Returns, Fitness Expectation Maximization, and EM-based Policy Hyperparameter Exploration implemented mechanisms to discard useless low-return episodes either implicitly or using a fixed baseline determined by the experimenter. In this paper, we propose an adaptive baseline method to discard worse samples from the reward history and examine different baselines, including the mean and multiples of the standard deviation above the mean. The simulation results on the benchmark tasks of pendulum swing-up and cart-pole balancing, and of standing up and balancing of a two-wheeled smartphone robot, showed improved performance. We further implemented the adaptive baseline with the mean in our two-wheeled smartphone robot hardware to test its performance in the standing-up-and-balancing task and a view-based approaching task. Our results showed that with the adaptive baseline, the method outperformed the previous algorithms and achieved faster and more precise behaviors at a higher success rate. PMID:28167910
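As an illustration of the adaptive-baseline idea described above, here is a minimal numpy sketch (all names hypothetical, not the authors' code): episodes whose returns fall below mean + k*SD of the current batch are discarded, and the survivors update the policy parameters by reward-weighted averaging, in the spirit of EM-based policy search.

    import numpy as np

    def em_policy_update(thetas, returns, k=0.0):
        """One reward-weighted EM-style update with an adaptive baseline.

        thetas  : (n_episodes, n_params) parameters sampled per episode
        returns : (n_episodes,) episodic returns
        k       : baseline = mean + k * std; episodes below it are discarded
        """
        baseline = returns.mean() + k * returns.std()
        keep = returns > baseline
        if not keep.any():                          # guard: keep the best episode
            keep = returns == returns.max()
        w = np.maximum(returns[keep] - baseline, 1e-12)   # positive weights only
        return (w[:, None] * thetas[keep]).sum(0) / w.sum()

Setting k=0 reproduces a mean baseline; larger k keeps only the clearly above-average episodes, which is the trade-off the paper examines.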
Directly Reconstructing Principal Components of Heterogeneous Particles from Cryo-EM Images
Tagare, Hemant D.; Kucukelbir, Alp; Sigworth, Fred J.; Wang, Hongwei; Rao, Murali
2015-01-01
Structural heterogeneity of particles can be investigated by their three-dimensional principal components. This paper addresses the question of whether, and with what algorithm, the three-dimensional principal components can be directly recovered from cryo-EM images. The first part of the paper extends the Fourier slice theorem to covariance functions showing that the three-dimensional covariance, and hence the principal components, of a heterogeneous particle can indeed be recovered from two-dimensional cryo-EM images. The second part of the paper proposes a practical algorithm for reconstructing the principal components directly from cryo-EM images without the intermediate step of calculating covariances. This algorithm is based on maximizing the (posterior) likelihood using the Expectation-Maximization algorithm. The last part of the paper applies this algorithm to simulated data and to two real cryo-EM data sets: a data set of the 70S ribosome with and without Elongation Factor-G (EF-G), and a data set of the influenza virus RNA dependent RNA Polymerase (RdRP). The first principal component of the 70S ribosome data set reveals the expected conformational changes of the ribosome as the EF-G binds and unbinds. The first principal component of the RdRP data set reveals a conformational change in the two dimers of the RdRP. PMID:26049077
Jiang, Z; Dou, Z; Song, W L; Xu, J; Wu, Z Y
2017-11-10
Objective: To compare the results of different methods for handling missing values in HIV viral load (VL) data under different missingness mechanisms. Methods: We used SPSS 17.0 to simulate complete and missing data, under different missing-value mechanisms, from HIV VL data collected from MSM in 16 cities in China in 2013. Maximum likelihood estimation using the expectation-maximization (EM) algorithm, the regression method, mean imputation, the deletion method, and Markov chain Monte Carlo (MCMC) were used to fill in the missing data. The results of the different methods were compared with respect to distribution characteristics, accuracy, and precision. Results: The HIV VL data could not be transformed to a normal distribution. All methods performed well on data that were missing completely at random (MCAR). For the other types of missing data, the regression and MCMC methods preserved the main characteristics of the original data. The means of the imputed data sets produced by all methods were close to that of the original data. The EM, regression, mean imputation, and deletion methods under-estimated VL, while MCMC overestimated it. Conclusion: MCMC can be used as the main imputation method for missing HIV viral load data. The imputed data can serve as a reference for estimating mean HIV VL in the investigated population.
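For readers who want to experiment with the simpler of these strategies, the following Python sketch contrasts deletion, mean imputation, and a regression-style imputation on simulated, skewed VL-like data (all variable names hypothetical; scikit-learn's IterativeImputer stands in for the regression method, and the exact SPSS EM and MCMC procedures are not reproduced here):

    import numpy as np
    import pandas as pd
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer, SimpleImputer

    rng = np.random.default_rng(0)
    x = rng.normal(size=500)                                    # a covariate
    vl = np.exp(3 + 0.8 * x + rng.normal(scale=0.5, size=500))  # skewed, VL-like
    df = pd.DataFrame({"x": x, "vl": vl})
    mcar = df.copy()
    mcar.loc[rng.random(500) < 0.2, "vl"] = np.nan              # 20% MCAR

    deletion = mcar.dropna()["vl"]                              # listwise deletion
    mean_imp = SimpleImputer(strategy="mean").fit_transform(mcar)[:, 1]
    regr_imp = IterativeImputer(random_state=0).fit_transform(mcar)[:, 1]

    print("true mean:", round(df["vl"].mean(), 1))
    for name, v in [("deletion", deletion), ("mean", mean_imp),
                    ("regression", regr_imp)]:
        print(name, round(float(np.mean(v)), 1))

Under MCAR all three recover the mean well, as the abstract reports; the differences between methods appear once the missingness depends on the data.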
Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo
2018-06-01
Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolved photon counting detector can help enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, possibly leading to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for reconstruction of CESBCT images acquired using an energy-resolving photon counting detector, and its performance was investigated in terms of contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with FBP based on projection-based weighting imaging. When compared with energy-integrating imaging using the MAP-EM algorithm, projection-based weighting imaging using the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows significant improvement in the CNR of CESBCT images compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
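The CNR percentages quoted above are ratios of the standard form below; a minimal sketch assuming an image array and boolean ROI/background masks (the paper's exact ROI choices are not reproduced):

    import numpy as np

    def cnr(img, roi_mask, bg_mask):
        """Contrast-to-noise ratio: |mean(ROI) - mean(background)| / std(background)."""
        return abs(img[roi_mask].mean() - img[bg_mask].mean()) / img[bg_mask].std()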
Mino, H
2007-01-01
This paper addresses estimating the impulse response (IR) functions of the linear time-invariant systems that generate the intensity processes of shot-noise-driven doubly stochastic Poisson processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
Application and performance of an ML-EM algorithm in NEXT
NASA Astrophysics Data System (ADS)
Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.
2017-08-01
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
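The core ML-EM iteration referred to above is the standard multiplicative update for Poisson-distributed data, sketched here for a generic, hypothetical dense system matrix A and measured signals y (the NEXT-specific bi-dimensional implementation is not reproduced):

    import numpy as np

    def ml_em(A, y, n_iter=100):
        """Canonical ML-EM for y ~ Poisson(A x): the multiplicative update
        x <- x * A^T(y / Ax) / A^T 1, which preserves non-negativity."""
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])              # sensitivity (column sums)
        for _ in range(n_iter):
            ratio = y / np.clip(A @ x, 1e-12, None)   # measured / forward model
            x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
        return x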
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Youngrok
2013-05-15
Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent a survival time distribution on a heterogeneous patient group by the proportions of each class as well as by the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome in estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only over the other proposed EM algorithms but also over conventional supervised, unsupervised and semi-supervised learning algorithms.
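The shared ingredient of such partially labeled EM variants is an E-step whose responsibilities are confined to the classes consistent with each sample's imprecise label. A minimal one-dimensional Gaussian-mixture sketch of that idea (hypothetical; the four variants above differ in their missing-label mechanisms, which are not modeled here):

    import numpy as np
    from scipy.stats import norm

    def e_step_partial(x, mu, sigma, pi, allowed):
        """E-step with partial labels.

        x             : (n,) observations
        mu, sigma, pi : (K,) component means, SDs, and mixing proportions
        allowed       : (n, K) boolean mask of classes consistent with each
                        sample's label; an all-True row means 'unlabeled'
        """
        dens = pi * norm.pdf(x[:, None], mu, sigma)   # (n, K) weighted densities
        dens = np.where(allowed, dens, 0.0)           # zero out excluded classes
        return dens / (dens.sum(axis=1, keepdims=True) + 1e-300)

A fully labeled sample has a one-hot `allowed` row and contributes its class with certainty; an unlabeled sample reduces to the ordinary EM responsibility.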
Electrically-induced muscle fatigue affects feedforward mechanisms of control.
Monjo, F; Forestier, N
2015-08-01
To investigate the effects of focal muscle fatigue induced by electromyostimulation (EMS) on anticipatory postural adjustments (APAs) during arm flexions performed at maximal velocity. Fifteen healthy subjects performed self-paced arm flexions at maximal velocity before and after completing fatiguing electromyostimulation programs involving the medial and anterior deltoids and aiming to degrade movement peak acceleration. APA timing and magnitude were measured using surface electromyography. Following muscle fatigue, despite a lower mechanical disturbance evidenced by significantly decreased peak accelerations (-12%, p<.001), APAs remained unchanged compared with control trials (p>.11 for all analyses). The fatigue signals evoked by externally generated contractions seem to be gated by the central nervous system and result in postural strategy changes that aim to increase the postural safety margin. EMS is widely used in rehabilitation and training programs for its neuromuscular function-related benefits. However, from a motor control viewpoint, the present results show that the use of EMS can lead to acute inaccuracies in predictive motor control. We propose that clinicians should investigate the chronic and global effects of EMS on motor control. Copyright © 2014 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Castillo-Barnes, Diego; Peis, Ignacio; Martínez-Murcia, Francisco J.; Segovia, Fermín; Illán, Ignacio A.; Górriz, Juan M.; Ramírez, Javier; Salas-Gonzalez, Diego
2017-01-01
A wide range of segmentation approaches assumes that intensity histograms extracted from magnetic resonance images (MRI) have a distribution for each brain tissue that can be modeled by a Gaussian distribution or a mixture of Gaussians. Nevertheless, the intensity histograms of white matter and gray matter are not symmetric and exhibit heavy tails. In this work, we present a hidden Markov random field model with expectation maximization (EM-HMRF), modeling the components using the α-stable distribution. The proposed model is a generalization of the widely used EM-HMRF algorithm with Gaussian distributions. We test the α-stable EM-HMRF model on synthetic data and brain MRI data. The proposed methodology presents two main advantages: first, it is more robust to outliers; second, we obtain results similar to those of the Gaussian model when the Gaussian assumption holds. This approach is able to model the spatial dependence between neighboring voxels in tomographic brain MRI. PMID:29209194
A segmentation/clustering model for the analysis of array CGH data.
Picard, F; Robin, S; Lebarbier, E; Daudin, J-J
2007-09-01
Microarray-CGH (comparative genomic hybridization) experiments are used to detect and map chromosomal imbalances. A CGH profile can be viewed as a succession of segments that represent homogeneous regions in the genome whose representative sequences share the same relative copy number on average. Segmentation methods constitute a natural framework for the analysis, but they do not provide a biological status for the detected segments. We propose a new model for this segmentation/clustering problem, combining a segmentation model with a mixture model. We present a new hybrid algorithm called dynamic programming-expectation maximization (DP-EM) to estimate the parameters of the model by maximum likelihood. This algorithm combines DP and the EM algorithm. We also propose a model selection heuristic to select the number of clusters and the number of segments. An example of our procedure is presented, based on publicly available data sets. We compare our method to segmentation methods and to hidden Markov models, and we show that the new segmentation/clustering model is a promising alternative that can be applied in the more general context of signal processing.
Direct 4D reconstruction of parametric images incorporating anato-functional joint entropy.
Tang, Jing; Kuwabara, Hiroto; Wong, Dean F; Rahmim, Arman
2010-08-07
We developed an anatomy-guided 4D closed-form algorithm to directly reconstruct parametric images from projection data for (nearly) irreversible tracers. Conventional methods consist of individually reconstructing 2D/3D PET data, followed by graphical analysis on the sequence of reconstructed image frames. The proposed direct reconstruction approach maintains the simplicity and accuracy of the expectation-maximization (EM) algorithm by extending the system matrix to include the relation between the parametric images and the measured data. A closed-form solution was achieved using a different hidden complete-data formulation within the EM framework. Furthermore, the proposed method was extended to maximum a posteriori reconstruction via incorporation of MR image information, taking the joint entropy between MR and parametric PET features as the prior. Using realistic simulated noisy [(11)C]-naltrindole PET and MR brain images/data, the quantitative performance of the proposed methods was investigated. Significant improvements in terms of noise versus bias performance were demonstrated when performing direct parametric reconstruction, and additionally upon extending the algorithm to its Bayesian counterpart using the MR-PET joint entropy measure.
Mismatch removal via coherent spatial relations
NASA Astrophysics Data System (ADS)
Chen, Jun; Ma, Jiayi; Yang, Changcai; Tian, Jinwen
2014-07-01
We propose a method for removing mismatches from given putative point correspondences in image pairs based on "coherent spatial relations." Under the Bayesian framework, we formulate our approach as a maximum likelihood problem and solve for a coherent spatial relation between the putative point correspondences using an expectation-maximization (EM) algorithm. Our approach associates each point correspondence with a latent variable indicating it as being either an inlier or an outlier, and alternately estimates the inlier set and recovers the coherent spatial relation. It can handle not only the case of image pairs with rigid motions but also the case of image pairs with nonrigid motions. To parameterize the coherent spatial relation, we choose two-view geometry and thin-plate splines as models for the rigid and nonrigid cases, respectively. The mismatches can be successfully removed via the coherent spatial relations after the EM algorithm converges. The quantitative results on various experimental data demonstrate that our method outperforms many state-of-the-art methods; it is not affected by low initial correct match percentages and is robust to most geometric transformations, including a large viewing angle, image rotation, and affine transformation.
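A classic instance of this latent inlier/outlier formulation, for illustration only: isotropic Gaussian inlier residuals plus a uniform outlier component (the paper's two-view-geometry and thin-plate-spline parameterizations of the spatial relation are not included in this sketch):

    import numpy as np

    def inlier_em(residuals, area=1.0, n_iter=50):
        """EM over latent inlier/outlier labels for 2-D match residuals.

        residuals : (n, 2) residual vectors after fitting the spatial relation
        area      : support of the uniform outlier density
        Returns per-match inlier probabilities and the inlier noise variance.
        """
        r2 = (residuals ** 2).sum(axis=1)         # squared residual norms
        gamma, var = 0.9, r2.mean()               # initial mixing weight, variance
        for _ in range(n_iter):
            g = gamma * np.exp(-r2 / (2 * var)) / (2 * np.pi * var)
            p = g / (g + (1 - gamma) / area)      # E-step: inlier responsibility
            var = (p * r2).sum() / (2 * p.sum())  # M-step (2-D Gaussian)
            gamma = p.mean()
        return p, var

In the full method these E- and M-steps alternate with re-estimating the spatial relation itself from the currently weighted inliers.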
NASA Astrophysics Data System (ADS)
Savin, A.; Novy, F.; Fintova, S.; Steigmann, R.
2017-08-01
The current state of nondestructive evaluation techniques calls for the development of new electromagnetic (EM) methods offering high spatial resolution and increased sensitivity. To achieve high performance, the working frequencies must be either radiofrequencies or microwaves. At these frequencies, plasmon polaritons can appear at the dielectric/conductor interface, propagating between conductive regions as evanescent waves. In order to exploit the evanescent waves that can appear even when the slit width is much smaller than the wavelength of the incident EM wave, a metamaterial (MM) sensor is used. The diffraction of the EM field at the edge of a long thin discontinuity placed under the inspected surface of a conductive plate has been studied using geometrical optics principles. A sensor of this type, with the reception coil shielded by a conductive screen with a circular aperture placed in front of the reception coil of the emission-reception sensor, has been developed; it transports the information needed to obtain a magnified image of the inspected conductive structures. This work presents a sensor using conical Swiss-roll MMs that allows the propagation of evanescent waves and magnifies the electromagnetic images. The test method can be applied successfully in a variety of applications of major importance, such as defect/damage detection in materials used in automotive and aviation technologies. Applying this testing method, the spatial resolution can be improved.
What variables affect public perceptions for EMS meeting general community needs?
Blau, Gary; Hochner, Arthur; Portwood, James
2012-01-01
In the fall of 2010, a phone survey of 928 respondents examined two research questions: does the general public perceive Emergency Medical Services (EMS) as meeting their community needs, and what factors or correlates help explain EMS meeting community needs? To maximize geographical representation across the contiguous United States, a clustered stratified sampling strategy was used based upon zip codes across the 48 states. Results showed strong support by the sample for perceiving that EMS was meeting their general community needs. Seventeen percent of the variance in EMS meeting community needs was collectively explained by the demographic and perceptual variables in the regression model. Of the correlates tested, the strongest relationship was found between greater admiration for EMS professionals and higher perception of EMS meeting community needs. Study limitations included sampling households with only landline (no cell) phones, using a simulated emergency situation, and not collecting gender data.
2010-01-01
Background: The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However, a full Bayesian analysis is often not feasible due to the large computational time involved. Results: This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP, along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. Conclusions: emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time. PMID:20969788
NASA Astrophysics Data System (ADS)
Uchida, Y.; Takada, E.; Fujisaki, A.; Kikuchi, T.; Ogawa, K.; Isobe, M.
2017-08-01
A method to stochastically discriminate neutron and γ-ray signals measured with a stilbene organic scintillator is proposed. Each pulse signal was stochastically categorized into two groups: neutron and γ-ray. In previous work, the Expectation Maximization (EM) algorithm was used with the assumption that the measured data followed a Gaussian mixture distribution. It was shown that probabilistic discrimination between these groups is possible. Moreover, by setting the initial parameters for the Gaussian mixture distribution with a k-means algorithm, the possibility of automatic discrimination was demonstrated. In this study, the Student's t-mixture distribution was used as a probabilistic distribution with the EM algorithm to improve the robustness against the effect of outliers caused by pileup of the signals. To validate the proposed method, the figures of merit (FOMs) were compared for the EM algorithm assuming a t-mixture distribution and a Gaussian mixture distribution. The t-mixture distribution resulted in an improvement of the FOMs compared with the Gaussian mixture distribution. The proposed data processing technique is a promising tool not only for neutron and γ-ray discrimination in fusion experiments but also in other fields, for example, homeland security, cancer therapy with high energy particles, nuclear reactor decommissioning, pattern recognition, and so on.
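A minimal sketch of the k-means-initialized mixture EM described above, using scikit-learn (which provides the Gaussian case only; the Student's t-mixture EM of this work is not available there, and the feature names are hypothetical):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    def discriminate(features):
        """k-means seeds the mixture EM, enabling automatic discrimination.

        features : (n_pulses, 2) pulse-shape features, e.g. total charge
                   versus tail-charge ratio, one row per detected pulse.
        Returns (n_pulses, 2) posterior probabilities for the two groups.
        """
        km = KMeans(n_clusters=2, n_init=10).fit(features)
        gmm = GaussianMixture(n_components=2, means_init=km.cluster_centers_)
        return gmm.fit(features).predict_proba(features)

The posterior probabilities, rather than a hard cut on a discrimination parameter, are what make the categorization stochastic; the figure of merit can then be computed from the two fitted components.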
Joint Prior Learning for Visual Sensor Network Noisy Image Super-Resolution
Yue, Bo; Wang, Shuang; Liang, Xuefeng; Jiao, Licheng; Xu, Caijin
2016-01-01
The visual sensor network (VSN), a new type of wireless sensor network composed of low-cost wireless camera nodes, is being applied to numerous complex visual analyses in wild environments, such as visual surveillance, object recognition, etc. However, the captured images/videos are often low resolution and noisy, and such visual data cannot be fed directly into advanced visual analysis. In this paper, we propose a joint-prior image super-resolution (JPISR) method using the expectation maximization (EM) algorithm to improve VSN image quality. Unlike conventional methods that only focus on upscaling images, JPISR alternately solves upscaling mapping and denoising in the E-step and M-step. To meet the requirement of the M-step, we introduce a novel non-local group-sparsity image filtering method to learn the explicit prior and induce the geometric duality between images to learn the implicit prior. The EM algorithm inherently combines the explicit prior and implicit prior by joint learning. Moreover, JPISR does not rely on large external datasets for training, which is much more practical in a VSN. Extensive experiments show that JPISR outperforms five state-of-the-art methods in terms of PSNR, SSIM, and visual perception. PMID:26927114
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-01-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit Ki bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source Software for Tomographic Image Reconstruction (STIR) platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D vs. the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were observed for gPatlak vs. sPatlak. Finally, validation on clinical WB dynamic data demonstrated the clinical feasibility and superior Ki CNR performance for the proposed 4D framework compared to indirect Patlak and SUV imaging. PMID:27383991
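For reference, the standard and generalized Patlak models discussed above are commonly written as follows (notation assumed here: C(t) tissue activity, C_p(t) plasma input, K_i the influx rate, V the distribution volume, and k_loss the uptake-reversibility rate added by gPatlak; this is the textbook form, not a transcription from the paper):

    C(t) = K_i \int_0^t C_p(\tau)\, \mathrm{d}\tau + V\, C_p(t)                               (sPatlak)

    C(t) = K_i \int_0^t e^{-k_{loss}(t-\tau)}\, C_p(\tau)\, \mathrm{d}\tau + V\, C_p(t)       (gPatlak)

The sPatlak model is linear in its parameters, which is why K_i can be estimated as a slope; the extra exponential term in gPatlak is what makes the model non-linear and more noise-sensitive, as noted above.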
Joint Segmentation and Deformable Registration of Brain Scans Guided by a Tumor Growth Model
Gooya, Ali; Pohl, Kilian M.; Bilello, Michel; Biros, George; Davatzikos, Christos
2011-01-01
This paper presents an approach for joint segmentation and deformable registration of brain scans of glioma patients to a normal atlas. The proposed method is based on the Expectation Maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the normal atlas into one with a tumor and edema. The modified atlas is registered into the patient space and utilized for the posterior probability estimation of various tissue labels. EM iteratively refines the estimates of the registration parameters, the posterior probabilities of tissue labels and the tumor growth model parameters. We have applied this approach to 10 glioma scans acquired with four Magnetic Resonance (MR) modalities (T1, T1-CE, T2 and FLAIR) and validated the result by comparing them to manual segmentations by clinical experts. The resulting segmentations look promising and quantitatively match well with the expert provided ground truth. PMID:21995070
Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images
Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.
2012-01-01
Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC) that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
Sparse-view proton computed tomography using modulated proton beams.
Lee, Jiseoc; Kim, Changhwan; Min, Byungjun; Kwak, Jungwon; Park, Seyjoon; Lee, Se Byeong; Park, Sungyong; Cho, Seungryong
2015-02-01
Proton imaging that uses a modulated proton beam and an intensity detector allows relatively fast image acquisition compared with imaging approaches based on a trajectory tracking detector. In addition, it requires a relatively simple implementation in conventional proton therapy equipment. The model of a geometric straight ray assumed in conventional computed tomography (CT) image reconstruction is, however, challenged by multiple Coulomb scattering and energy straggling in proton imaging. Radiation dose to the patient is another important issue that has to be taken care of for practical applications. In this work, the authors have investigated iterative image reconstructions after a deconvolution of the sparsely view-sampled data to address these issues in proton CT. Proton projection images were acquired using the modulated proton beams and EBT2 film as an intensity detector. Four electron-density cylinders representing normal soft tissues and bone were used as the imaged object and scanned at 40 views equally separated over 360°. Digitized film images were converted to water-equivalent thickness by use of an empirically derived conversion curve. For improving the image quality, a deconvolution-based image deblurring with an empirically acquired point spread function was employed. The authors implemented iterative image reconstruction algorithms such as adaptive steepest descent-projection onto convex sets (ASD-POCS), superiorization method-projection onto convex sets (SM-POCS), superiorization method-expectation maximization (SM-EM), and expectation maximization-total variation minimization (EM-TV). Performance of the four image reconstruction algorithms was analyzed and compared quantitatively via contrast-to-noise ratio (CNR) and root-mean-square error (RMSE). Objects of higher electron density were reconstructed more accurately than those of lower density; the bone, for example, was reconstructed within 1% error. EM-based algorithms produced increased image noise and RMSE as the iteration count reached about 20, while the POCS-based algorithms showed monotonic convergence with iterations. The ASD-POCS algorithm outperformed the others in terms of CNR, RMSE, and the accuracy of the reconstructed relative stopping power in the region of lung and soft tissues. The four iterative algorithms, i.e., ASD-POCS, SM-POCS, SM-EM, and EM-TV, have been developed and applied for proton CT image reconstruction. Although the images still need to be improved for practical application to treatment planning, proton CT imaging using modulated beams in sparse-view sampling has demonstrated its feasibility.
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
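A compact sketch of the kernelized ML-EM update underlying this family of methods, written for hypothetical dense matrices (real implementations use sparse projectors, and the kernel matrix K is built from anatomical patch features, e.g. a Gaussian kernel over MR neighborhoods):

    import numpy as np

    def kernel_em(P, K, y, n_iter=60):
        """Kernelized ML-EM: the image is x = K a, so EM updates the
        coefficients a against the composite system matrix P K.

        P : (n_bins, n_voxels) system matrix      (hypothetical, dense here)
        K : (n_voxels, n_voxels) anatomical kernel matrix
        y : (n_bins,) measured counts
        """
        a = np.ones(K.shape[1])
        sens = K.T @ (P.T @ np.ones(P.shape[0]))          # (PK)^T 1
        for _ in range(n_iter):
            fwd = P @ (K @ a)                             # forward projection
            a *= (K.T @ (P.T @ (y / np.clip(fwd, 1e-12, None)))) \
                 / np.clip(sens, 1e-12, None)
        return K @ a                                      # final image

Because the anatomy enters only through K, the update keeps the plain ML form, which is what makes the approach amenable to ordered subsets as noted above.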
Estimation for general birth-death processes
Crawford, Forrest W.; Minin, Vladimir N.; Suchard, Marc A.
2013-01-01
Birth-death processes (BDPs) are continuous-time Markov chains that track the number of “particles” in a system over time. While widely used in population biology, genetics and ecology, statistical inference of the instantaneous particle birth and death rates remains largely limited to restrictive linear BDPs in which per-particle birth and death rates are constant. Researchers often observe the number of particles at discrete times, necessitating data augmentation procedures such as expectation-maximization (EM) to find maximum likelihood estimates. For BDPs on finite state-spaces, there are powerful matrix methods for computing the conditional expectations needed for the E-step of the EM algorithm. For BDPs on infinite state-spaces, closed-form solutions for the E-step are available for some linear models, but most previous work has resorted to time-consuming simulation. Remarkably, we show that the E-step conditional expectations can be expressed as convolutions of computable transition probabilities for any general BDP with arbitrary rates. This important observation, along with a convenient continued fraction representation of the Laplace transforms of the transition probabilities, allows for novel and efficient computation of the conditional expectations for all BDPs, eliminating the need for truncation of the state-space or costly simulation. We use this insight to derive EM algorithms that yield maximum likelihood estimation for general BDPs characterized by various rate models, including generalized linear models. We show that our Laplace convolution technique outperforms competing methods when they are available and demonstrate a technique to accelerate EM algorithm convergence. We validate our approach using synthetic data and then apply our methods to cancer cell growth and estimation of mutation parameters in microsatellite evolution. PMID:25328261
Smith, Cory M; Housh, Terry J; Hill, Ethan C; Johnson, Glen O; Schmidt, Richard J
2017-04-01
This study used a combined electromyographic, mechanomyographic, and force approach to identify electromechanical delay (EMD) from the onsets of the electromyographic to force signals (EMD_E-F), the onsets of the electromyographic to mechanomyographic signals (EMD_E-M), and the onsets of the mechanomyographic to force signals (EMD_M-F). The purposes of the current study were to examine: (1) the differences in EMD_E-F, EMD_E-M, and EMD_M-F from the vastus lateralis during maximal, voluntary dynamic (1 repetition maximum [1-RM]) and isometric (maximal voluntary isometric contraction [MVIC]) muscle actions; and (2) the effects of fatigue on EMD_E-F, EMD_M-F, and EMD_E-M. Ten men performed pretest and posttest 1-RM and MVIC leg extension muscle actions. The fatiguing workbout consisted of 70% 1-RM dynamic constant external resistance leg extension muscle actions to failure. The results indicated that there were no significant differences between 1-RM and MVIC EMD_E-F, EMD_E-M, or EMD_M-F. There were, however, significant fatigue-induced increases in EMD_E-F (94% and 63%), EMD_E-M (107%), and EMD_M-F (63%) for both the 1-RM and MVIC measurements. Therefore, these findings demonstrated the effects of fatigue on EMD measures and supported comparisons among studies that examined dynamic or isometric EMD measures from the vastus lateralis using a combined electromyographic, mechanomyographic, and force approach. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Wei-Chen; Maitra, Ranjan
2011-01-01
We propose a model-based approach for clustering time series regression data in an unsupervised machine learning framework to identify groups under the assumption that each mixture component follows a Gaussian autoregressive regression model of order p. Given the number of groups, the traditional maximum likelihood approach of estimating the parameters using the expectation-maximization (EM) algorithm can be employed, although it is computationally demanding. The somewhat faster tune of the Alternating Expectation Conditional Maximization (AECM) algorithm can alleviate the problem to some extent. In this article, we develop an alternative partial expectation conditional maximization algorithm (APECM) that uses an additional data augmentation storage step to efficiently implement AECM for finite mixture models. Results from our simulation experiments show improved performance in terms of both the number of iterations and computation time. The methodology is applied to the problem of clustering mutual funds data on the basis of their average annual percent returns and in the presence of economic indicators.
NASA Astrophysics Data System (ADS)
Ma, Chuang; Chen, Han-Shuang; Lai, Ying-Cheng; Zhang, Hai-Feng
2018-02-01
Complex networks hosting binary-state dynamics arise in a variety of contexts. In spite of previous works, to fully reconstruct the network structure from observed binary data remains challenging. We articulate a statistical inference based approach to this problem. In particular, exploiting the expectation-maximization (EM) algorithm, we develop a method to ascertain the neighbors of any node in the network based solely on binary data, thereby recovering the full topology of the network. A key ingredient of our method is the maximum-likelihood estimation of the probabilities associated with actual or nonexistent links, and we show that the EM algorithm can distinguish the two kinds of probability values without any ambiguity, insofar as the length of the available binary time series is reasonably long. Our method does not require any a priori knowledge of the detailed dynamical processes, is parameter-free, and is capable of accurate reconstruction even in the presence of noise. We demonstrate the method using combinations of distinct types of binary dynamical processes and network topologies, and provide a physical understanding of the underlying reconstruction mechanism. Our statistical inference based reconstruction method contributes an additional piece to the rapidly expanding "toolbox" of data based reverse engineering of complex networked systems.
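The final separation step described above, distinguishing the probability values associated with actual links from those of nonexistent ones, can be illustrated with a two-component mixture fit to per-link estimates. The maximum-likelihood estimation of those values from the binary time series is the paper's contribution and is not reproduced; this sketch (hypothetical names) shows only the unambiguous two-group split:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def split_links(scores):
        """Classify candidate links as 'actual' vs 'nonexistent'.

        scores : (n_candidate_links,) per-link probability estimates from
                 the inference step; EM on a two-component mixture separates
                 the two well-separated groups without a hand-picked cutoff.
        """
        s = scores.reshape(-1, 1)
        gmm = GaussianMixture(n_components=2).fit(s)
        hi = np.argmax(gmm.means_.ravel())        # component with larger mean
        return gmm.predict(s) == hi               # True = inferred actual link

Applying this per node and taking the union of inferred neighbors recovers the full topology, as described above.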
Deep neural network and noise classification-based speech enhancement
NASA Astrophysics Data System (ADS)
Shi, Wenhua; Zhang, Xiongwei; Zou, Xia; Han, Wei
2017-07-01
In this paper, a speech enhancement method using noise classification and a deep neural network (DNN) is proposed. A Gaussian mixture model (GMM) was employed to determine the noise type in speech-absent frames, and a DNN was used to model the relationship between the noisy observation and clean speech. Once the noise type was determined, the corresponding DNN model was applied to enhance the noisy speech. The GMM was trained on mel-frequency cepstral coefficients (MFCC), with its parameters estimated by an iterative expectation-maximization (EM) algorithm. The noise type was updated by spectrum entropy-based voice activity detection (VAD). Experimental results demonstrate that the proposed method achieves better objective speech quality and smaller distortion under both stationary and non-stationary conditions.
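A minimal sketch of the GMM-based noise-type selection stage (scikit-learn; MFCC extraction, the VAD, and the noise-specific DNNs themselves are assumed to exist elsewhere, and all names are hypothetical):

    from sklearn.mixture import GaussianMixture

    # Training: fit one GMM per noise type from labeled MFCC arrays,
    # e.g. train = {"babble": feats1, "factory": feats2, ...},
    # where each feats is shaped (n_frames, n_mfcc).
    # gmms = {name: GaussianMixture(n_components=8).fit(feats)
    #         for name, feats in train.items()}

    def classify_noise(mfcc_frames, gmms):
        """Pick the noise type whose GMM gives the highest average
        log-likelihood over the speech-absent frames."""
        return max(gmms, key=lambda name: gmms[name].score(mfcc_frames))

The returned label then selects which noise-matched DNN is applied to enhance the subsequent noisy frames.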
Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N
2016-04-01
Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model with non-constant factor loadings that change over time and space, modeled by P-splines penalized with the generalized cross-validation (GCV) criterion. The model is implemented using the expectation-maximization (EM) algorithm, and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.
Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui
2013-12-01
In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.
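A scalar analogue of this identification problem can be sketched as EM for a first-order autoregressive state observed in noise, with the E-step computed by a Kalman filter and RTS smoother (a generic sketch under that assumed model, not the authors' implementation):

    import numpy as np

    def ar1_em(y, n_iter=30):
        """EM for x_t = a*x_{t-1} + w_t (var q), y_t = x_t + v_t (var r)."""
        n = len(y)
        a, q, r = 0.5, np.var(y) / 2, np.var(y) / 2
        for _ in range(n_iter):
            # E-step: forward Kalman filter
            xf, pf = np.empty(n), np.empty(n)     # filtered mean / variance
            xp, pp = np.empty(n), np.empty(n)     # one-step predictions
            xp[0], pp[0] = 0.0, np.var(y)         # vague prior on x_0
            for t in range(n):
                if t > 0:
                    xp[t] = a * xf[t - 1]
                    pp[t] = a * a * pf[t - 1] + q
                k = pp[t] / (pp[t] + r)           # Kalman gain
                xf[t] = xp[t] + k * (y[t] - xp[t])
                pf[t] = (1 - k) * pp[t]
            # E-step: backward RTS smoother, with lag-one covariances
            xs, ps, cs = xf.copy(), pf.copy(), np.zeros(n)
            for t in range(n - 2, -1, -1):
                j = a * pf[t] / pp[t + 1]
                xs[t] = xf[t] + j * (xs[t + 1] - xp[t + 1])
                ps[t] = pf[t] + j * j * (ps[t + 1] - pp[t + 1])
                cs[t + 1] = j * ps[t + 1]         # Cov(x_{t+1}, x_t | y)
            # M-step: closed-form parameter updates from smoothed moments
            exx = ps + xs ** 2                    # E[x_t^2 | y]
            ex1 = cs[1:] + xs[1:] * xs[:-1]       # E[x_t x_{t-1} | y]
            a = ex1.sum() / exx[:-1].sum()
            q = (exx[1:] - 2 * a * ex1 + a * a * exx[:-1]).mean()
            r = ((y - xs) ** 2 + ps).mean()
        return a, q, r

The smoothed states xs play the role of the "actual signal intensities" recovered alongside the model parameters and noise intensity.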
Spatially adapted augmentation of age-specific atlas-based segmentation using patch-based priors
NASA Astrophysics Data System (ADS)
Liu, Mengyuan; Seshamani, Sharmishtaa; Harrylock, Lisa; Kitsch, Averi; Miller, Steven; Chau, Van; Poskitt, Kenneth; Rousseau, Francois; Studholme, Colin
2014-03-01
One of the most common approaches to MRI brain tissue segmentation is to employ an atlas prior to initialize an Expectation-Maximization (EM) image labeling scheme using a statistical model of MRI intensities. This prior is commonly derived from a set of manually segmented training data from the population of interest. However, in cases where subject anatomy varies significantly from the prior anatomical average model (for example in the case where extreme developmental abnormalities or brain injuries occur), the prior tissue map does not provide adequate information about the observed MRI intensities to ensure the EM algorithm converges to an anatomically accurate labeling of the MRI. In this paper, we present a novel approach for automatic segmentation of such cases. This approach augments the atlas-based EM segmentation by exploring methods to build a hybrid tissue segmentation scheme that seeks to learn where an atlas prior fails (due to inadequate representation of anatomical variation in the statistical atlas) and utilize an alternative prior derived from a patch driven search of the atlas data. We describe a framework for incorporating this patch-based augmentation of EM (PBAEM) into a 4D age-specific atlas-based segmentation of developing brain anatomy. The proposed approach was evaluated on a set of MRI brain scans of premature neonates with ages ranging from 27.29 to 46.43 gestational weeks (GWs). Results indicated superior performance compared to the conventional atlas-based segmentation method, providing improved segmentation accuracy for gray matter, white matter, ventricles and sulcal CSF regions.
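For context, a minimal sketch of the conventional atlas-prior EM labeling that the PBAEM approach augments (hypothetical inputs: one intensity feature per voxel and K tissue classes):

    import numpy as np
    from scipy.stats import norm

    def atlas_em(intensities, atlas_prior, n_iter=20):
        """EM tissue labeling with a spatially varying atlas prior.

        intensities : (n_voxels,) MRI intensities
        atlas_prior : (n_voxels, K) per-voxel tissue probabilities from the atlas
        """
        K = atlas_prior.shape[1]
        mu = np.quantile(intensities, (np.arange(K) + 0.5) / K)   # crude init
        sd = np.full(K, intensities.std())
        for _ in range(n_iter):
            lik = norm.pdf(intensities[:, None], mu, sd)   # (n_voxels, K)
            post = atlas_prior * lik                       # prior enters E-step
            post /= post.sum(1, keepdims=True) + 1e-300
            w = post.sum(0)                                # M-step: class stats
            mu = (post * intensities[:, None]).sum(0) / w
            sd = np.sqrt((post * (intensities[:, None] - mu) ** 2).sum(0) / w)
        return post.argmax(1)                              # hard tissue labels

The failure mode motivating the paper is visible here: where atlas_prior poorly matches the subject, the posterior is dragged toward the wrong class regardless of the intensity evidence, which is what the patch-based alternative prior is designed to correct.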
NASA Technical Reports Server (NTRS)
Schroeder, Lyle C.; Bailey, M. C.; Mitchell, John L.
1992-01-01
Methods for increasing the electromagnetic (EM) performance of reflectors with rough surfaces were tested and evaluated. First, one quadrant of the 15-meter hoop-column antenna was retrofitted with computer-driven and controlled motors to allow automated adjustment of the reflector surface. The surface errors, measured with metric photogrammetry, were used in a previously verified computer code to calculate control motor adjustments. With this system, a rough antenna surface (rms of approximately 0.180 inch) was corrected in two iterations to approximately the structural surface smoothness limit of 0.060 inch rms. The antenna pattern and gain improved significantly as a result of these surface adjustments. The EM performance was evaluated with a computer program for distorted reflector antennas which had been previously verified with experimental data. Next, the effects of the surface distortions were compensated for in computer simulations by superimposing excitation from an array feed to maximize antenna performance relative to an undistorted reflector. Results showed that a 61-element array could produce EM performance improvements equal to surface adjustments. When both mechanical surface adjustment and feed compensation techniques were applied, the equivalent operating frequency increased from approximately 6 to 18 GHz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bagdon, M.J.; Martin, P.J.
1997-06-01
In 1994, Novus Engineering and EME Group began a project for the New York State Office of Mental Health (OMH) to maximize the use and benefit of energy management systems (EMS) installed at various large psychiatric hospitals throughout New York State. The project, which was funded and managed by the Dormitory Authority of the State of New York (DASNY), had three major objectives: (1) Maximize Energy Savings - Novus staff quickly learned that EMS systems as set up by contractors are far from optimal for generating energy savings. This part of the program revealed numerous opportunities for increased energy savings, such as: fine tuning proportional/integral/derivative (PID) loops to eliminate valve and damper hunting; adjusting temperature reset schedules to reduce energy consumption and provide more uniform temperature conditions throughout the facilities; and modifying equipment schedules. (2) Develop Monitoring Protocols - Large EMS systems are so complex that they require a systematic approach to daily, monthly and seasonal monitoring of building system conditions in order to locate system problems before they turn into trouble calls or equipment failures. To assist local facility staff in their monitoring efforts, Novus prepared user-friendly handbooks on each EMS, including monitoring protocols tailored to each facility. (3) Provide Staff Training - When a new EMS is installed at a facility, it is frequently the maintenance staff's first exposure to a complex computerized system. Without proper training in what to look for, staff use of the EMS is generally very limited. With proper training, staff can be taught to take a pro-active approach and identify and solve problems before they get out of hand. The staff then realize that the EMS is a powerful preventive maintenance tool that can make their work more effective and efficient. Case histories are presented.
NASA Astrophysics Data System (ADS)
Wang, Xun; Quost, Benjamin; Chazot, Jean-Daniel; Antoni, Jérôme
2016-01-01
This paper considers the problem of identifying multiple sound sources from acoustical measurements obtained by an array of microphones. The problem is solved via maximum likelihood. In particular, an expectation-maximization (EM) approach is used to estimate the sound source locations and strengths, the pressure measured by a microphone being interpreted as a mixture of latent signals emitted by the sources. This work also considers two kinds of uncertainties pervading the sound propagation and measurement process: uncertain microphone locations and an uncertain wavenumber. These uncertainties are transposed to the data in the belief functions framework. Then, the source locations and strengths can be estimated using a variant of the EM algorithm known as the Evidential EM (E2M) algorithm. Finally, both simulations and real experiments illustrate the advantage of using EM in the case without uncertainty and E2M in the case of uncertain measurements.
Application of the EM algorithm to radiographic images.
Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J
1992-01-01
The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior when compared to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
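In the restoration setting without reconstruction, the EM iteration for Poisson data reduces to the well-known Richardson-Lucy update. A minimal sketch of that generic iteration, assuming a known blur kernel (not the paper's exact processing chain):

```python
# Sketch of the EM (Richardson-Lucy) restoration iteration for a blurred,
# Poisson-noisy radiograph with a known point spread function.
import numpy as np
from scipy.signal import convolve2d

def em_restore(observed, psf, n_iter=50):
    """Iteratively sharpen `observed` (2D array) given the blur `psf`."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]          # adjoint of the blur operator
    for _ in range(n_iter):
        blurred = convolve2d(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= convolve2d(ratio, psf_flipped, mode="same")
    return estimate
```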
Robust statistical reconstruction for charged particle tomography
Schultz, Larry Joe; Klimenko, Alexei Vasilievich; Fraser, Andrew Mcleod; Morris, Christopher; Orum, John Christopher; Borozdin, Konstantin N; Sossong, Michael James; Hengartner, Nicolas W
2013-10-08
Systems and methods for charged particle detection are described, including statistical reconstruction of object volume scattering density profiles from charged particle tomographic data. The probability distribution of charged particle scattering is determined using a statistical multiple-scattering model, and a substantially maximum likelihood estimate of the object volume scattering density is computed with a maximum likelihood/expectation maximization (ML/EM) algorithm to reconstruct the object volume scattering density. The presence and/or type of object occupying the volume of interest can be identified from the reconstructed volume scattering density profile. The charged particle tomographic data can be cosmic ray muon tomographic data from a muon tracker for scanning packages, containers, vehicles, or cargo. The method can be implemented using a computer program which is executable on a computer.
Fu, J C; Chen, C C; Chai, J W; Wong, S T C; Li, I C
2010-06-01
We propose an automatic hybrid image segmentation model that integrates the statistical expectation maximization (EM) model and the spatial pulse coupled neural network (PCNN) for brain magnetic resonance imaging (MRI) segmentation. In addition, an adaptive mechanism is developed to fine-tune the PCNN parameters. The EM model serves two functions: evaluation of the PCNN image segmentation and adaptive adjustment of the PCNN parameters for optimal segmentation. To evaluate the performance of the adaptive EM-PCNN, we use it to segment MR brain images into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). The performance of the adaptive EM-PCNN is compared with that of the non-adaptive EM-PCNN, EM, and Bias Corrected Fuzzy C-Means (BCFCM) algorithms. The result is four sets of boundaries for the GM and the brain parenchyma (GM+WM), the two regions of most interest in medical research and clinical applications. Each set of boundaries is compared with the gold standard to evaluate the segmentation performance. The adaptive EM-PCNN significantly outperforms the non-adaptive EM-PCNN, EM, and BCFCM algorithms in gray matter segmentation. In brain parenchyma segmentation, the adaptive EM-PCNN significantly outperforms the BCFCM only. However, the adaptive EM-PCNN is better than the non-adaptive EM-PCNN and EM on average. We conclude that of the three approaches, the adaptive EM-PCNN yields the best results for gray matter and brain parenchyma segmentation. Copyright 2009 Elsevier Ltd. All rights reserved.
[Imputation methods for missing data in educational diagnostic evaluation].
Fernández-Alonso, Rubén; Suárez-Álvarez, Javier; Muñiz, José
2012-02-01
In the diagnostic evaluation of educational systems, self-reports are commonly used to collect data, both cognitive and orectic. For various reasons, in these self-reports, some of the students' data are frequently missing. The main goal of this research is to compare the performance of different imputation methods for missing data in the context of the evaluation of educational systems. On an empirical database of 5,000 subjects, 72 conditions were simulated: three levels of missing data, three types of loss mechanisms, and eight methods of imputation. The levels of missing data were 5%, 10%, and 20%. The loss mechanisms were set at: Missing completely at random, moderately conditioned, and strongly conditioned. The eight imputation methods used were: listwise deletion, replacement by the mean of the scale, by the item mean, the subject mean, the corrected subject mean, multiple regression, and Expectation-Maximization (EM) algorithm, with and without auxiliary variables. The results indicate that the recovery of the data is more accurate when using an appropriate combination of different methods of recovering lost data. When a case is incomplete, the mean of the subject works very well, whereas for completely lost data, multiple imputation with the EM algorithm is recommended. The use of this combination is especially recommended when data loss is greater and its loss mechanism is more conditioned. Lastly, the results are discussed, and some future lines of research are analyzed.
Fragment assignment in the cloud with eXpress-D
2013-01-01
Background Probabilistic assignment of ambiguously mapped fragments produced by high-throughput sequencing experiments has been demonstrated to greatly improve accuracy in the analysis of RNA-Seq and ChIP-Seq, and is an essential step in many other sequence census experiments. A maximum likelihood method using the expectation-maximization (EM) algorithm for optimization is commonly used to solve this problem. However, batch EM-based approaches do not scale well with the size of sequencing datasets, which have been increasing dramatically over the past few years. Thus, current approaches to fragment assignment rely on heuristics or approximations for tractability. Results We present an implementation of a distributed EM solution to the fragment assignment problem using Spark, a data analytics framework that can scale by leveraging compute clusters within datacenters ("the cloud"). We demonstrate that our implementation easily scales to billions of sequenced fragments, while providing the exact maximum likelihood assignment of ambiguous fragments. The accuracy of the method is shown to be an improvement over the most widely used tools available, and it can be run in a constant amount of time when cluster resources are scaled linearly with the amount of input data. Conclusions The cloud offers one solution for the difficulties faced in the analysis of massive high-throughput sequencing data, which continue to grow rapidly. Researchers in bioinformatics must follow developments in distributed systems, such as new frameworks like Spark, for ways to port existing methods to the cloud and help them scale to the datasets of the future. Our software, eXpress-D, is freely available at: http://github.com/adarob/express-d. PMID:24314033
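The batch EM iteration referred to above has a compact form: each ambiguous fragment is softly distributed over its compatible transcripts in proportion to the current abundance estimates, which are then re-estimated from the expected counts. A toy single-machine sketch of that iteration (eXpress-D distributes this computation with Spark; the names here are illustrative):

```python
import numpy as np

def assign_fragments(compat, n_iter=100):
    """compat: (F, T) 0/1 matrix; compat[f, t] = 1 if fragment f is
    compatible with transcript t (every row assumed to have at least one 1).
    Returns transcript abundance estimates that sum to 1."""
    F, T = compat.shape
    abundance = np.full(T, 1.0 / T)
    for _ in range(n_iter):
        # E-step: distribute each fragment over its compatible transcripts
        weights = compat * abundance
        weights /= weights.sum(axis=1, keepdims=True)
        # M-step: abundances proportional to expected fragment counts
        abundance = weights.sum(axis=0) / F
    return abundance
```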
Estimation of mating system parameters in plant populations using marker loci with null alleles.
Ross, H A
1986-06-01
An Expectation-Maximization (EM) algorithm procedure is presented that extends the method of Cheliak et al. (1983) for maximum-likelihood estimation of mating system parameters in mixed mating system models. The extension permits the estimation of the rate of self-fertilization (s) and allele frequencies (Pi) at loci in outcrossing pollen, at marker loci having recessive null alleles. The algorithm makes use of maternal and filial genotypic arrays obtained by the electrophoretic analysis of cohorts of progeny. The genotypes of maternal plants must be known. Explicit equations are given for cases when the genotype of the maternal gamete inherited by a seed can (gymnosperms) or cannot (angiosperms) be determined. The procedure can accommodate any number of codominant alleles, but only one recessive null allele at each locus. An example, using actual data from Pinus banksiana, is presented to illustrate the application of this EM algorithm to the estimation of mating system parameters using marker loci having both codominant and recessive alleles.
Test of 3D CT reconstructions by EM + TV algorithm from undersampled data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evseev, Ivan; Ahmann, Francielle; Silva, Hamilton P. da
2013-05-06
Computerized tomography (CT) plays an important role in medical imaging for diagnosis and therapy. However, CT imaging is connected with ionizing radiation exposure of patients. Therefore, dose reduction is an essential issue in CT. In 2011, the Expectation Maximization and Total Variation Based Model for CT Reconstruction (EM+TV) was proposed. This method can reconstruct a better image using fewer CT projections in comparison with the usual filtered back projection (FBP) technique. Thus, it could significantly reduce the overall dose of radiation in CT. This work reports the results of an independent numerical simulation for cone beam CT geometry with alternative virtual phantoms. As in the original report, the 3D CT images of 128 × 128 × 128 virtual phantoms were reconstructed. It was not possible to implement phantoms with larger dimensions because of slow code execution, even on a Core i7 CPU.
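As a rough illustration of the EM+TV idea, the 1D toy below alternates a standard MLEM update with one explicit descent step on a total-variation penalty. The system matrix, problem size, and step size are arbitrary assumptions for the sketch, not the cone-beam setup of the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_proj = 64, 40                     # undersampled: fewer projections than pixels
A = rng.random((n_proj, n_pix))            # toy nonnegative forward projector
x_true = np.zeros(n_pix)
x_true[20:40] = 1.0                        # simple piecewise-constant "phantom"
y = A @ x_true                             # noiseless projection data

x = np.ones(n_pix)                         # positive initial estimate
sens = A.T @ np.ones(n_proj)               # sensitivity image A^T 1
step = 1e-3                                # TV descent step (arbitrary)
for _ in range(200):
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens   # MLEM update
    g = np.sign(np.diff(x))                # subgradient of sum |x[i+1] - x[i]|
    x[:-1] += step * g                     # one explicit TV descent step
    x[1:] -= step * g
    x = np.maximum(x, 0.0)                 # keep the estimate nonnegative
```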
NASA Astrophysics Data System (ADS)
Chen, Siyue; Leung, Henry; Dondo, Maxwell
2014-05-01
As computer network security threats increase, many organizations implement multiple Network Intrusion Detection Systems (NIDS) to maximize the likelihood of intrusion detection and provide a comprehensive understanding of intrusion activities. However, NIDS trigger a massive number of alerts on a daily basis. This can be overwhelming for computer network security analysts since it is a slow and tedious process to manually analyse each alert produced. Thus, automated and intelligent clustering of alerts is important to reveal the structural correlation of events by grouping alerts with common features. As the nature of computer network attacks, and therefore alerts, is not known in advance, unsupervised alert clustering is a promising approach to achieve this goal. We propose a joint optimization technique for feature selection and clustering to aggregate similar alerts and to reduce the number of alerts that analysts have to handle individually. More precisely, each identified feature is assigned a binary value, which reflects the feature's saliency. This value is treated as a hidden variable and incorporated into a likelihood function for clustering. Since computing the optimal solution of the likelihood function directly is analytically intractable, we use the Expectation-Maximisation (EM) algorithm to iteratively update the hidden variable and use it to maximize the expected likelihood. Our empirical results, using a labelled Defense Advanced Research Projects Agency (DARPA) 2000 reference dataset, show that the proposed method gives better results than the EM clustering without feature selection in terms of the clustering accuracy.
NASA Astrophysics Data System (ADS)
Lee, Kyunghoon
To evaluate the maximum likelihood estimates (MLEs) of probabilistic principal component analysis (PPCA) parameters such as a factor-loading, PPCA can invoke an expectation-maximization (EM) algorithm, yielding an EM algorithm for PPCA (EM-PCA). In order to examine the benefits of the EM-PCA for aerospace engineering applications, this thesis qualitatively and quantitatively scrutinizes the EM-PCA alongside both proper orthogonal decomposition (POD) and gappy POD using high-dimensional simulation data. The qualitative investigation makes the theoretical relationship between POD and PPCA transparent: the factor-loading MLE of PPCA, evaluated by the EM-PCA, corresponds to an orthogonal basis obtained by POD. By contrast, the analytical connection between gappy POD and the EM-PCA is less clear because they approximate missing data differently, owing to their opposing formulation perspectives: gappy POD solves a least-squares problem, whereas the EM-PCA relies on the expectation of the observation probability model. To compare gappy POD and the EM-PCA, this research proposes a unifying least-squares perspective that embraces the two disparate algorithms within a generalized least-squares framework. As a result, the unifying perspective reveals that both methods address similar least-squares problems; however, their formulations contain dissimilar bases and norms. Furthermore, this research examines how the different bases and norms characterize the traits of both methods. To this end, two hybrid algorithms of gappy POD and the EM-PCA are devised and compared to the original algorithms for a qualitative illustration of the different basis and norm effects. Ultimately, the norm, which reflects the curve-fitting method, is found to affect estimation error reduction more significantly than the basis for two example test data sets: one missing data at only a single snapshot and the other missing data across all snapshots. From a numerical performance aspect, the EM-PCA is computationally less efficient than POD for intact data since it suffers from slow convergence inherited from the EM algorithm. For incomplete data, this thesis finds quantitatively that the number of data-missing snapshots determines whether the EM-PCA or gappy POD outperforms the other, because the computational cost of the coefficient evaluation depends on the norm selection. For instance, gappy POD demands laborious computational effort in proportion to the number of data-missing snapshots as a consequence of the gappy norm. In contrast, the computational cost of the EM-PCA is invariant to the number of data-missing snapshots thanks to the L2 norm. In general, the higher the number of data-missing snapshots, the wider the gap between the computational cost of gappy POD and the EM-PCA. Based on the numerical experiments reported in this thesis, the following criterion is recommended regarding the selection between gappy POD and the EM-PCA for computational efficiency: gappy POD for an incomplete data set containing a few data-missing snapshots, and the EM-PCA for an incomplete data set involving multiple data-missing snapshots. Finally, the EM-PCA is applied to two aerospace applications in comparison to gappy POD as a proof of concept: one with an emphasis on basis extraction and the other with a focus on missing data reconstruction for a given incomplete data set with scattered missing data.
The first application exploits the EM-PCA to efficiently construct reduced-order models of engine deck responses obtained by the numerical propulsion system simulation (NPSS), some of whose results are absent due to failed analyses caused by numerical instability. Model-prediction tests validate that engine performance metrics estimated by the reduced-order NPSS model show good agreement with those obtained directly from NPSS. Similarly, the second application illustrates that the EM-PCA is significantly more cost effective than gappy POD at repairing spurious PIV measurements obtained from acoustically excited, bluff-body jet flow experiments. The EM-PCA reduces computational cost by a factor of 8 to 19 compared to gappy POD while generating the same restoration results as those evaluated by gappy POD. All in all, through comprehensive theoretical and numerical investigation, this research establishes that the EM-PCA is an efficient alternative to gappy POD for incomplete data sets with missing data spread across the entire set. (Abstract shortened by UMI.)
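The EM-PCA iteration discussed above is compact in its zero-noise (EM-for-PCA) limit. A minimal sketch, assuming mean-centered snapshot data (illustrative only, not the thesis code):

```python
import numpy as np

def em_pca(X, k, n_iter=100):
    """X: (D, N) centered snapshot matrix; returns an orthonormal (D, k)
    basis whose span converges to the leading k-dimensional POD subspace."""
    D, N = X.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((D, k))                # random initial loadings
    for _ in range(n_iter):
        Z = np.linalg.solve(W.T @ W, W.T @ X)      # E-step: latent coordinates
        W = X @ Z.T @ np.linalg.inv(Z @ Z.T)       # M-step: update loadings
    Q, _ = np.linalg.qr(W)                         # orthonormalize the basis
    return Q
```

Each iteration costs O(DNk), which is why the method pays off only when D is large or data are incomplete; for intact data a direct SVD-based POD is usually faster, consistent with the thesis findings.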
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed between the proposed nested algorithm and the single iteration normally performed, when the optimal number of iterations is used for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer iterations for small regions). Nevertheless, the optimal number of nested image-based EM iterations is difficult to define and should be selected according to the given application.
Parameter Estimation of Multiple Frequency-Hopping Signals with Two Sensors
Pan, Jin; Ma, Boyuan
2018-01-01
This paper focuses on parameter estimation of multiple wideband emitting sources with time-varying frequencies, such as two-dimensional (2-D) direction of arrival (DOA) and signal sorting, with a low-cost circular synthetic array (CSA) consisting of only two rotating sensors. Our basic idea is to decompose the received data, which are a superposition of phase measurements from multiple sources, into separate groups and separately estimate the DOA associated with each source. Motivated by joint parameter estimation, we adopt the expectation maximization (EM) algorithm in this paper; our method involves two steps, namely, the expectation step (E-step) and the maximization step (M-step). In the E-step, the correspondence of each signal with its emitting source is found. Then, in the M-step, the maximum-likelihood (ML) estimates of the DOA parameters are obtained. These two steps are executed iteratively and alternately to jointly determine the DOAs and sort multiple signals. Closed-form DOA estimation formulae are developed by ML estimation based on phase data, which also yield optimal estimates. Directional ambiguity is addressed by another ML estimation method based on received complex responses. The Cramer-Rao lower bound is derived for understanding the estimation accuracy and for performance comparison. The proposed method is verified with simulations. PMID:29617323
Comparison of methods for H*(10) calculation from measured LaBr3(Ce) detector spectra.
Vargas, A; Cornejo, N; Camp, A
2018-07-01
The Universitat Politecnica de Catalunya (UPC) and the Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT) have evaluated methods based on stripping, conversion coefficients, and Maximum Likelihood Estimation using Expectation Maximization (ML-EM) for calculating H*(10) rates from photon pulse-height spectra acquired with a spectrometric LaBr3(Ce) (1.5″ × 1.5″) detector. There is good agreement among the results of the different H*(10) rate calculation methods using the spectra measured at the UPC secondary standard calibration laboratory in Barcelona. From the outdoor study at the ESMERALDA station in Madrid, it can be concluded that the analysed methods provide results quite similar to those obtained with the reference RSS ionization chamber. In addition, the spectrometric detectors can also facilitate radionuclide identification. Copyright © 2018 Elsevier Ltd. All rights reserved.
Optimized multiple linear mappings for single image super-resolution
NASA Astrophysics Data System (ADS)
Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo
2017-12-01
Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experimental results carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
Menze, Bjoern H.; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-André; Székely, Gabor; Ayache, Nicholas; Golland, Polina
2016-01-01
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as “tumor core” or “fluid-filled structure”, but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation. PMID:26599702
Menze, Bjoern H; Van Leemput, Koen; Lashkari, Danial; Riklin-Raviv, Tammy; Geremia, Ezequiel; Alberts, Esther; Gruber, Philipp; Wegener, Susanne; Weber, Marc-Andre; Szekely, Gabor; Ayache, Nicholas; Golland, Polina
2016-04-01
We introduce a generative probabilistic model for segmentation of brain lesions in multi-dimensional images that generalizes the EM segmenter, a common approach for modelling brain images using Gaussian mixtures and a probabilistic tissue atlas that employs expectation-maximization (EM) to estimate the label map for a new image. Our model augments the probabilistic atlas of the healthy tissues with a latent atlas of the lesion. We derive an estimation algorithm with closed-form EM update equations. The method extracts a latent atlas prior distribution and the lesion posterior distributions jointly from the image data. It delineates lesion areas individually in each channel, allowing for differences in lesion appearance across modalities, an important feature of many brain tumor imaging sequences. We also propose discriminative model extensions to map the output of the generative model to arbitrary labels with semantic and biological meaning, such as "tumor core" or "fluid-filled structure", but without a one-to-one correspondence to the hypo- or hyper-intense lesion areas identified by the generative model. We test the approach in two image sets: the publicly available BRATS set of glioma patient scans, and multimodal brain images of patients with acute and subacute ischemic stroke. We find the generative model that has been designed for tumor lesions to generalize well to stroke images, and the generative-discriminative model to be one of the top ranking methods in the BRATS evaluation.
Ikeda, Mitsuru
2017-01-01
Information extraction and knowledge discovery regarding adverse drug reactions (ADRs) from large-scale clinical texts are highly useful and much-needed tasks. Two major difficulties of this task are the lack of domain experts for labeling examples and the intractable processing of unstructured clinical texts. Most previous works have addressed these issues by applying semisupervised learning for the former and word-based approaches for the latter, but they face the complexity of acquiring initial labeled data and ignore the structured sequences of natural language. In this study, we propose automatic data labeling by distant supervision, where knowledge bases are exploited to assign an entity-level relation label to each drug-event pair in texts; we then use patterns to characterize ADR relations. A multiple-instance learning with expectation-maximization method is employed to estimate the model parameters. The method applies transductive learning to iteratively reassign the probabilities of unknown drug-event pairs during training. In experiments with 50,998 discharge summaries, we evaluate our method by varying a large number of parameters: pattern types, pattern-weighting models, and initial and iterative weightings of relations for unlabeled data. Based on these evaluations, our proposed method outperforms the word-based feature for NB-EM (iEM), MILR, and TSVM, with F1-score improvements of 11.3%, 9.3%, and 6.5%, respectively. PMID:29090077
Jee, Yong-Seok
2018-02-01
Recently, whole-body electromyostimulation (WB-EMS) has upgraded its functions and capabilities and has overcome limitations and inconveniences of past systems. Although the efficacy and safety of EMS have been examined in some studies, specific guidelines for applying WB-EMS are lacking. This study aimed to determine the efficacy and safety of applying WB-EMS in healthy men to improve cardiopulmonary and psychophysiological variables. Sixty-four participants were randomly assigned to a control group (without electrical stimuli) or a WB-EMS group after a 6-week baseline period. The control group (n=33; female, 15; male, 18) wore the WB-EMS suit as much as the WB-EMS group (n=31; female, 15; male, 16). There were no abnormal changes in the cardiopulmonary variables (heart rate, systolic blood pressure [SBP], diastolic blood pressure, and oxygen uptake) during or after the graded exercise test (GXT) in either group. There was a significant decrease in SBP and an increase in oxygen uptake from stages 3 to 5 of the GXT in the WB-EMS group. The psychophysiological factors for the WB-EMS group, which consisted of soreness, anxiety, fatigability, and sleeplessness, were significantly decreased after the experiment. The application of WB-EMS in healthy young men did not negatively affect cardiopulmonary and psychophysiological factors. Rather, it improved SBP and oxygen uptake in the submaximal and maximal stages of the GXT. This study also confirmed that 6 weeks of WB-EMS training can improve psychophysiological factors.
Multimodal Event Detection in Twitter Hashtag Networks
Yilmaz, Yasin; Hero, Alfred O.
2016-07-01
In this study, event detection in a multimodal Twitter dataset is considered. We treat the hashtags in the dataset as instances with two modes: text and geolocation features. The text feature consists of a bag-of-words representation. The geolocation feature consists of geotags (i.e., geographical coordinates) of the tweets. Fusing the multimodal data, we aim to detect, in terms of topic and geolocation, the interesting events and the associated hashtags. To this end, a generative latent variable model is assumed, and a generalized expectation-maximization (EM) algorithm is derived to learn the model parameters. The proposed method is computationally efficient and lends itself to big datasets. Lastly, experimental results on a Twitter dataset from August 2014 show the efficacy of the proposed method.
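A hedged sketch of an EM iteration for such a two-mode mixture, assuming multinomial text counts and an isotropic Gaussian geolocation mode per latent event; this is a plausible simplification for illustration, not necessarily the paper's exact generative model:

```python
import numpy as np

def multimodal_em(counts, geo, K, n_iter=50):
    """counts: (N, V) word counts per hashtag; geo: (N, 2) mean coordinates;
    K: number of latent events. Returns a hard event label per hashtag."""
    N, V = counts.shape
    rng = np.random.default_rng(1)
    pi = np.full(K, 1.0 / K)                         # event priors
    theta = rng.dirichlet(np.ones(V), size=K)        # per-event word distributions
    mu = geo[rng.choice(N, K, replace=False)]        # per-event geo centers
    var = np.full(K, geo.var())                      # isotropic geo variances
    for _ in range(n_iter):
        # E-step: log-responsibilities combine both modalities
        log_r = (np.log(pi)
                 + counts @ np.log(theta).T
                 - ((geo[:, None, :] - mu) ** 2).sum(-1) / (2 * var)
                 - np.log(2 * np.pi * var))
        log_r -= log_r.max(axis=1, keepdims=True)    # stabilize before exp
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update priors, word distributions, and geo parameters
        nk = r.sum(axis=0)
        pi = nk / N
        theta = (r.T @ counts) + 1e-6                # small smoothing term
        theta /= theta.sum(axis=1, keepdims=True)
        mu = (r.T @ geo) / nk[:, None]
        var = (r * ((geo[:, None, :] - mu) ** 2).sum(-1)).sum(0) / (2 * nk)
    return r.argmax(axis=1)
```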
Sampling-based ensemble segmentation against inter-operator variability
NASA Astrophysics Data System (ADS)
Huo, Jing; Okada, Kazunori; Pope, Whitney; Brown, Matthew
2011-03-01
Inconsistency and a lack of reproducibility are commonly associated with semi-automated segmentation methods. In this study, we developed an ensemble approach to improve reproducibility and applied it to glioblastoma multiforme (GBM) brain tumor segmentation on T1-weighted contrast-enhanced MR volumes. The proposed approach combines sampling-based simulations and ensemble segmentation into a single framework; it generates a set of segmentations by perturbing user initialization and user-specified internal parameters, then fuses the set of segmentations into a single consensus result. Three combination algorithms were applied: majority voting, averaging, and expectation-maximization (EM). The reproducibility of the proposed framework was evaluated in a controlled experiment on 16 tumor cases from a multicenter drug trial. The ensemble framework had significantly better reproducibility than the individual base Otsu thresholding method (p<.001).
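The two simpler combination rules are easy to state concretely. A minimal sketch of majority voting and averaging over a stack of perturbed binary segmentations (the EM-based fusion additionally estimates per-segmentation reliabilities, which is omitted here):

```python
import numpy as np

def majority_vote(segs):
    """segs: (R, H, W) binary segmentations from R perturbed runs."""
    return (segs.mean(axis=0) >= 0.5).astype(np.uint8)

def average_fusion(segs, threshold=0.5):
    prob = segs.mean(axis=0)              # per-pixel agreement in [0, 1]
    return prob, (prob >= threshold).astype(np.uint8)

# Example: three noisy perturbations of a circular "tumor" mask
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
truth = (yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2
segs = np.stack([truth ^ (rng.random(truth.shape) < 0.05) for _ in range(3)])
consensus = majority_vote(segs)
```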
Wang, Huiya; Feng, Jun; Wang, Hongyu
2017-07-20
Detection of clustered microcalcifications (MCs) in mammograms plays an essential role in computer-aided diagnosis of early-stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification, with each group of samples drawn from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the resulting Accuracy, True Positive Rate (TPR), False Positive Rate (FPR), and EVL = TPR × (1 − FPR) are 0.82, 0.78, 0.14, and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into serial simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
Differential correlation for sequencing data.
Siska, Charlotte; Kechris, Katerina
2017-01-19
Several methods have been developed to identify differential correlation (DC) between pairs of molecular features from -omics studies. Most DC methods have only been tested with microarrays and other platforms producing continuous and Gaussian-like data. Sequencing data is in the form of counts, often modeled with a negative binomial distribution, making it difficult to apply standard correlation metrics. We have developed an R package for identifying DC called Discordant, which uses mixture models for correlations between features and the Expectation Maximization (EM) algorithm for fitting parameters of the mixture model. Several correlation metrics for sequencing data are provided and tested using simulations. Other extensions in the Discordant package include additional modeling for different types of differential correlation, and a faster implementation that uses a subsampling routine to reduce run-time and address the assumption of independence between molecular feature pairs. With simulations and breast cancer miRNA-Seq and RNA-Seq data, we find that Spearman's correlation has the best performance among the tested correlation methods for identifying differential correlation. Application of Spearman's correlation in the Discordant method demonstrated the most power in ROC curves and sensitivity/specificity plots, and improved ability to identify experimentally validated breast cancer miRNA. We also considered including additional types of differential correlation, which showed a slight reduction in power due to the additional parameters that need to be estimated, but more versatility in applications. Finally, subsampling within the EM algorithm considerably decreased run-time with negligible effect on performance. A new method and R package called Discordant is presented for identifying differential correlation with sequencing data. Based on comparisons with different correlation metrics, this study suggests Spearman's correlation is appropriate for sequencing data, but other correlation metrics are available to the user depending on the application and data type. The Discordant method can also be extended to investigate additional DC types, and subsampling with the EM algorithm is now available for reduced run-time. These extensions to the R package make Discordant more robust and versatile for multiple -omics studies.
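The front end of such a workflow, Spearman correlations per feature pair in two conditions, Fisher z-transformed so a mixture model fit by EM can classify the pairs, might be sketched as follows (function names are illustrative, not the Discordant API):

```python
import numpy as np
from scipy.stats import spearmanr

def fisher_z(r):
    """Variance-stabilizing transform so correlations are Gaussian-like."""
    return np.arctanh(np.clip(r, -0.999, 0.999))

def dc_scores(x_a, y_a, x_b, y_b):
    """z-transformed Spearman correlation of feature pair (x, y) in
    condition a and condition b; these score pairs feed the EM-fitted
    mixture model that labels differentially correlated pairs."""
    r_a = spearmanr(x_a, y_a).correlation
    r_b = spearmanr(x_b, y_b).correlation
    return fisher_z(r_a), fisher_z(r_b)
```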
Mapping of electrical muscle stimulation using MRI
NASA Technical Reports Server (NTRS)
Adams, Gregory R.; Harris, Robert T.; Woodard, Daniel; Dudley, Gary A.
1993-01-01
The pattern of muscle contractile activity elicited by electromyostimulation (EMS) was mapped and compared to the contractile-activity pattern produced by voluntary effort. This was done by examining the patterns and the extent of contrast shift, as indicated by T2 values, in magnetic resonance (MR) images after isometric activity of the left m. quadriceps of human subjects was elicited by EMS (1-sec train of 500-microsec sine wave pulses at 50 Hz) or voluntary effort. The results suggest that, whereas EMS stimulates the same fibers repeatedly, thereby increasing the metabolic demand and T2 values, voluntary efforts are performed by more diffuse asynchronous activation of skeletal muscle, even at forces up to 75 percent of maximal, to maintain performance.
Automatic CT Brain Image Segmentation Using Two Level Multiresolution Mixture Model of EM
NASA Astrophysics Data System (ADS)
Jiji, G. Wiselin; Dehmeshki, Jamshid
2014-04-01
Tissue classification in computed tomography (CT) brain images is an important issue in the analysis of several brain dementias. A combination of different approaches for the segmentation of brain images is presented in this paper. A multiresolution algorithm is proposed, along with scaled versions using Gaussian filtering and wavelet analysis, that extends the expectation maximization (EM) algorithm. It is found to be less sensitive to noise and to produce more accurate image segmentation than traditional EM. Moreover, the algorithm has been applied to 20 sets of CT scans of the human brain and compared with other works. The segmentation results show that the proposed work achieves more promising results, and the results have been reviewed by doctors.
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects, which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated that similar results could be reached using both methods, but large differences result from the arbitrary selection of SINV-PVC parameters. The presented SV-PVC method was performed without user intervention, requiring only a tumor mask as input. Research involving PET-imaged tumor heterogeneity should include correcting for partial volume effects to improve the quantitative accuracy of results. PMID:20009194
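One building block of this pipeline, fitting a Gaussian to a measured point-source profile by least squares and then modeling the fitted width as a function of radial position, might look like the following sketch (the 1D reduction, linear width model, and all numbers are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_psf_width(profile):
    """Least-squares Gaussian fit to a 1D slice through a point-source image."""
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.max(), float(profile.argmax()), 2.0]   # rough initial guess
    (amp, mu, sigma), _ = curve_fit(gaussian, x, profile, p0=p0)
    return abs(sigma)

# A continuous width model over several source positions (illustrative values)
radii = np.array([0.0, 5.0, 10.0, 15.0])            # radial positions
widths = np.array([2.1, 2.3, 2.6, 3.0])             # fitted sigmas at each radius
slope, intercept = np.polyfit(radii, widths, 1)      # sigma(r) ~ slope*r + intercept
```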
Robust Multimodal Dictionary Learning
Cao, Tian; Jojic, Vladimir; Modla, Shannon; Powell, Debbie; Czymmek, Kirk; Niethammer, Marc
2014-01-01
We propose a robust multimodal dictionary learning method for multimodal images. Joint dictionary learning for both modalities may be impaired by lack of correspondence between image modalities in training data, for example due to areas of low quality in one of the modalities. Dictionaries learned with such non-corresponding data will induce uncertainty about image representation. In this paper, we propose a probabilistic model that accounts for image areas that are poorly corresponding between the image modalities. We cast the problem of learning a dictionary in the presence of problematic image patches as a likelihood maximization problem and solve it with a variant of the EM algorithm. Our algorithm iterates identification of poorly corresponding patches and refinements of the dictionary. We tested our method on synthetic and real data. We show improvements in image prediction quality and alignment accuracy when using the method for multimodal image registration. PMID:24505674
Sparse Bayesian learning for DOA estimation with mutual coupling.
Dai, Jisheng; Hu, Nan; Xu, Weichao; Chang, Chunqi
2015-10-16
Sparse Bayesian learning (SBL) has brought renewed interest to the problem of direction-of-arrival (DOA) estimation. It is generally assumed that the measurement matrix in SBL is precisely known. Unfortunately, this assumption may be invalid in practice due to the imperfect manifold caused by unknown or misspecified mutual coupling. This paper describes a modified SBL method for joint estimation of DOAs and mutual coupling coefficients with uniform linear arrays (ULAs). Unlike the existing method that only uses stationary priors, our new approach utilizes a hierarchical form of the Student t prior to enforce the sparsity of the unknown signal more heavily. We also provide a distinct Bayesian inference for the expectation-maximization (EM) algorithm, which can update the mutual coupling coefficients more efficiently. Another difference is that our method uses an additional singular value decomposition (SVD) to reduce the computational complexity of the signal reconstruction process and the sensitivity to the measurement noise.
Matroshka AstroRad Radiation Experiment (MARE) on the Deep Space Gateway
NASA Astrophysics Data System (ADS)
Gaza, R.; Hussein, H.; Murrow, D.; Hopkins, J.; Waterman, G.; Milstein, O.; Berger, T.; Przybyla, B.; Aeckerlein, J.; Marsalek, K.; Matthiae, D.; Rutczynska, A.
2018-02-01
The Matroshka AstroRad Radiation Experiment is a science payload on the Orion EM-1 flight. A research platform derived from MARE is proposed for the Deep Space Gateway. Feedback is invited on desired Deep Space Gateway design features to maximize its science potential.
Sun, Wanjie; Larsen, Michael D; Lachin, John M
2014-04-15
In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as a one-step, two-step, or full-iteration algorithm. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. They are particularly attractive when outcomes reach a plateau after intervention for various reasons. The methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. The methods proved robust to high dimensions, large amounts of censored data, and low within-subject correlation, as well as to settings where subjects receive non-trial intervention to treat the underlying condition only (with high Y), or treatment in the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). Copyright © 2013 John Wiley & Sons, Ltd.
The taxonomy statistic uncovers novel clinical patterns in a population of ischemic stroke patients.
Tukiendorf, Andrzej; Kaźmierski, Radosław; Michalak, Sławomir
2013-01-01
In this paper, we describe a simple taxonomic approach for clinical data mining elaborated by Marczewski and Steinhaus (M-S), whose performance equals that of the advanced statistical methodology known as the expectation-maximization (E-M) algorithm. We tested these two methods on a cohort of ischemic stroke patients. The comparison of both methods revealed strong agreement. Direct agreement between M-S and E-M classifications reached 83%, while Cohen's coefficient of agreement was κ = 0.766 (P < 0.0001). The statistical analysis conducted and the outcomes obtained in this paper revealed novel clinical patterns in ischemic stroke patients. The aim of the study was to evaluate the clinical usefulness of Marczewski-Steinhaus' taxonomic approach as a tool for the detection of novel patterns of data in ischemic stroke patients and the prediction of disease outcome. Using rough characteristics of patients, namely age, National Institutes of Health Stroke Scale (NIHSS) score, and diabetes mellitus (DM) status, four fairly frequent types of stroke patients are recognized that cannot be identified by means of routine clinical methods. Following the obtained taxonomic outcomes, a strong correlation between health status at the moment of admission to the emergency department (ED) and the subsequent recovery of patients is established. Moreover, popularization and simplification of the ideas of advanced mathematicians may provide an unconventional explorative platform for clinical problems.
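The agreement statistics quoted above can be reproduced from a cross-classification table of the two methods' labels. A short sketch of observed agreement and Cohen's kappa (the example table is hypothetical):

```python
import numpy as np

def cohens_kappa(table):
    """table[i, j]: patients assigned to class i by M-S and class j by E-M."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    p_obs = np.trace(table) / n                               # raw agreement
    p_exp = (table.sum(axis=1) @ table.sum(axis=0)) / n ** 2  # chance agreement
    return p_obs, (p_obs - p_exp) / (1.0 - p_exp)

# e.g. a hypothetical 2x2 cross-classification of the two clusterings
p_obs, kappa = cohens_kappa([[40, 7], [6, 25]])
```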
Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.
ERIC Educational Resources Information Center
Wang, Yuh-Yin Wu; Schafer, William D.
This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…
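For reference, the EM estimator for the two-component univariate Gaussian mixture studied here takes only a few lines (the NW and MD estimators are omitted). A minimal sketch:

```python
import numpy as np

def em_two_component(x, n_iter=200):
    """EM for the mixture p*N(mu1, s1) + (1-p)*N(mu2, s2) on a 1D sample x."""
    p, mu1, mu2 = 0.5, x.min(), x.max()    # crude but serviceable starting values
    s1 = s2 = x.var()
    for _ in range(n_iter):
        # E-step: posterior probability that each point came from component 1
        d1 = p * np.exp(-0.5 * (x - mu1) ** 2 / s1) / np.sqrt(s1)
        d2 = (1 - p) * np.exp(-0.5 * (x - mu2) ** 2 / s2) / np.sqrt(s2)
        r = d1 / (d1 + d2)
        # M-step: weighted moment updates
        p = r.mean()
        mu1, mu2 = (r * x).sum() / r.sum(), ((1 - r) * x).sum() / (1 - r).sum()
        s1 = (r * (x - mu1) ** 2).sum() / r.sum()
        s2 = ((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum()
    return p, (mu1, s1), (mu2, s2)
```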
EM in high-dimensional spaces.
Draper, Bruce A; Elliott, Daniel L; Hayes, Jeremy; Baek, Kyungim
2005-06-01
This paper considers fitting a mixture of Gaussians model to high-dimensional data in scenarios where there are fewer data samples than feature dimensions. Issues that arise when using principal component analysis (PCA) to represent Gaussian distributions inside Expectation-Maximization (EM) are addressed, and a practical algorithm results. Unlike other algorithms that have been proposed, this algorithm does not try to compress the data to fit low-dimensional models. Instead, it models Gaussian distributions in the (N - 1)-dimensional space spanned by the N data samples. We are able to show that this algorithm converges on data sets where low-dimensional techniques do not.
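A sketch of the core idea, modeling the Gaussians in the subspace spanned by the centered samples instead of compressing to a low-dimensional model first. The scikit-learn mixture fit is used purely for illustration; its default covariance regularization stands in for the paper's specific handling of rank deficiency.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm_in_span(X, n_components=2):
    """X: (N, D) data with N << D. Work in the span of the centered samples."""
    Xc = X - X.mean(axis=0)
    # Orthonormal basis for the subspace spanned by the data (rank <= N-1)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[s > 1e-10]                 # (r, D) basis vectors
    coords = Xc @ basis.T                 # exact coordinates; nothing discarded
    gmm = GaussianMixture(n_components=n_components, covariance_type="full")
    return gmm.fit(coords), basis
```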
Re, Rebecca; Muthalib, Makii; Contini, Davide; Zucchelli, Lucia; Torricelli, Alessandro; Spinelli, Lorenzo; Caffini, Matteo; Ferrari, Marco; Quaresima, Valentina; Perrey, Stephane; Kerr, Graham
2013-01-01
The application of different EMS current thresholds to a muscle activates not only the muscle but also peripheral sensory axons that send proprioceptive and pain signals to the cerebral cortex. A 32-channel time-domain fNIRS instrument was employed to map regional cortical activity under varied EMS current intensities applied to the right wrist extensor muscle. Eight healthy volunteers underwent four EMS sessions at different current thresholds based on their individual maximal tolerated intensity (MTI), i.e., 10% < 50% < 100% < over 100% MTI. Time courses of the absolute oxygenated and deoxygenated hemoglobin concentrations, primarily over the bilateral sensorimotor cortical (SMC) regions, were extracted, and cortical activation maps were determined by a general linear model using the NIRS-SPM software. The stimulation-induced wrist extension paradigm significantly increased activation of the contralateral SMC region according to the EMS intensity, while the ipsilateral SMC region showed no significant changes. This could be due in part to a nociceptive response to the higher EMS current intensities and may also result from increased sensorimotor integration in these cortical regions.
GLISTR: Glioma Image Segmentation and Registration
Pohl, Kilian M.; Bilello, Michel; Cirillo, Luigi; Biros, George; Melhem, Elias R.; Davatzikos, Christos
2015-01-01
We present a generative approach for simultaneously registering a probabilistic atlas of a healthy population to brain magnetic resonance (MR) scans showing glioma and segmenting the scans into tumor as well as healthy tissue labels. The proposed method is based on the expectation maximization (EM) algorithm that incorporates a glioma growth model for atlas seeding, a process which modifies the original atlas into one with tumor and edema adapted to best match a given patient's images. The modified atlas is registered into the patient space and utilized for estimating the posterior probabilities of various tissue labels. EM iteratively refines the estimates of the posterior probabilities of tissue labels, the deformation field, and the tumor growth model parameters. Hence, in addition to segmentation, the proposed method results in atlas registration and a low-dimensional description of the patient scans through estimation of tumor model parameters. We validate the method by automatically segmenting 10 MR scans and comparing the results to those produced by clinical experts and two state-of-the-art methods. The resulting segmentations of tumor and edema outperform the results of the reference methods and achieve accuracy similar to that of a second human rater. We additionally apply the method to 122 patient scans and report the estimated tumor model parameters and their relations with segmentation and registration results. Based on the results from this patient population, we construct a statistical atlas of the glioma by inverting the estimated deformation fields to warp the tumor segmentations of patient scans into a common space. PMID:22907965
Ensuring the effectiveness of community-wide emergency cardiac care.
Becker, L B; Pepe, P E
1993-02-01
To improve emergency cardiac care (ECC) on the national or international level, we must translate to the rest of our communities the successes found in cities with high survival rates. In recent years, important developments have evolved in our understanding of the treatment and evaluation of cardiac arrest. Some of the most important of these developments include 1) recognition of the chain of survival, which is necessary to achieve high survival rates; 2) widespread acceptance that survival rates must be assessed routinely to ensure continuous quality improvements in the emergency medical services (EMS) system; and 3) development of improved methods for performing survival rate studies that will maximize the effectiveness of information gathering and analysis. While each community should determine how to optimize their own ECC services, some general guidelines are useful. Successful treatment of cardiac arrest starts in the community with prevention and education, including early recognition of the signs and symptoms of cardiovascular ischemia. Obtaining 911 service (and preferably enhanced 911) should be a top priority for all communities. EMS dispatchers should dispatch the unit to the scene in less than one minute, provide critical information to the responders regarding the type of emergency, and offer the caller telephone-assisted CPR instructions. The EMS first-responders should strive to arrive at the patient's side in less than four minutes, be able to immediately defibrillate if necessary, and begin basic CPR. An excellent strategy to accomplish this is to equip and train all fire-fighting units in the operation of automatic external defibrillators and dispatch them as a first-responder team. To manage the cardiac arrest patient, a minimum of two rescuers trained in advanced cardiac life support plus two or more rescuers trained in basic life support are needed. Furthermore, an EMS system is not complete without on-going evaluation. Therefore, the 1992 National Conference on CPR and ECC strongly endorses the position that all ECC systems assess their survival rates through an ongoing quality improvement process and that all members of the chain of providers should be represented in the outcome assessment team. We still have much to discover regarding optimal techniques of CPR, methods for data collection, and optimal structure of an EMS system. Research in these areas will provide the foundation for future changes in EMS systems development.
NASA Astrophysics Data System (ADS)
Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi
2011-07-01
In most methods for evaluation of cardiac function based on echocardiography, the heart wall is currently identified manually by an operator. However, this task is very time-consuming and suffers from inter- and intraobserver variability. The present paper proposes a method that uses multiple features of ultrasonic echo signals for automated identification of the heart wall region throughout an entire cardiac cycle. In addition, the optimal cardiac phase to select a frame of interest, i.e., the frame for the initiation of tracking, was determined. The heart wall region at the frame of interest in this cardiac phase was identified by the expectation-maximization (EM) algorithm, and heart wall regions in the following frames were identified by tracking each point classified in the initial frame as the heart wall region using the phased tracking method. The results for two subjects indicate the feasibility of the proposed method in the longitudinal axis view of the heart.
Half-blind remote sensing image restoration with partly unknown degradation
NASA Astrophysics Data System (ADS)
Xie, Meihua; Yan, Fengxia
2017-01-01
The problem of image restoration has been extensively studied for its practical importance and theoretical interest. This paper addresses image restoration with a partly unknown kernel: the functional form of the degradation kernel is known, but its parameters are not. Under this model, the parameters of the Gaussian kernel and the real image must be estimated simultaneously. For this new problem, a total variation restoration model is proposed and an alternating-direction iterative algorithm is designed. Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) are used to assess the performance of the method. Numerical results show that the kernel parameters can be estimated accurately, and the new method achieves both much higher PSNR and much higher SSIM than the expectation maximization (EM) method in many cases. In addition, the accuracy of the estimation is not sensitive to noise. Furthermore, even when the support of the kernel is unknown, the method still yields accurate estimates.
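A minimal sketch of the "partly unknown kernel" idea, assuming a Gaussian blur parameterized by its width: the image update uses a smooth Tikhonov-on-gradient penalty as a stand-in for the paper's total variation term, and the kernel width is refined by a 1-D grid search; the paper's actual alternating-direction algorithm differs.

```python
# Sketch under stated assumptions: alternate a regularized image update
# (Tikhonov smoothness instead of TV, for brevity) with a 1-D grid search
# over the Gaussian kernel width sigma.
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def restore_half_blind(y, sigmas=np.linspace(0.5, 3.0, 26),
                       lam=0.01, n_outer=5, n_inner=30, step=0.5):
    """y: blurred, noisy image (float array). Returns estimate and sigma."""
    x = y.copy()
    sigma = sigmas[len(sigmas) // 2]              # initial kernel-width guess
    for _ in range(n_outer):
        for _ in range(n_inner):                  # image update, sigma fixed
            resid = gaussian_filter(x, sigma) - y
            # gradient of 0.5||K x - y||^2 + 0.5 lam ||grad x||^2
            x -= step * (gaussian_filter(resid, sigma) - lam * laplace(x))
        # kernel update: pick the sigma that minimizes the data misfit
        sigma = min(sigmas, key=lambda s: ((gaussian_filter(x, s) - y) ** 2).sum())
    return x, sigma
```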
The mean field theory in EM procedures for blind Markov random field image restoration.
Zhang, J
1993-01-01
A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled model that provides continuity constraints (inside regions of smooth gray tones) and discontinuity constraints (at region boundaries) for the restoration problem, which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are more visually pleasing.
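A minimal mean-field sketch for Gaussian-MRF restoration: each pixel's posterior mean is repeatedly refreshed from the noisy observation and the current means of its neighbors. The paper's coupled MRF adds line processes for discontinuities and estimates the blur within EM; both are omitted here.

```python
# Minimal mean-field sketch, assuming Gaussian noise and a quadratic
# 4-neighbor smoothness coupling (no line process, no blur estimation).
import numpy as np

def mean_field_restore(y, beta=4.0, sigma2=0.01, n_iter=50):
    x = y.copy()
    for _ in range(n_iter):
        nb = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
              np.roll(x, 1, 1) + np.roll(x, -1, 1))   # neighbor mean field
        x = (y / sigma2 + beta * nb) / (1.0 / sigma2 + 4.0 * beta)
    return x
```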
Pradhan, Sudeep; Song, Byungjeong; Lee, Jaeyeon; Chae, Jung-Woo; Kim, Kyung Im; Back, Hyun-Moon; Han, Nayoung; Kwon, Kwang-Il; Yun, Hwi-Yeol
2017-12-01
Exploratory preclinical as well as clinical trials may involve a small number of patients, making it difficult to calculate and analyze the pharmacokinetic (PK) parameters, especially if the PK parameters show very high inter-individual variability (IIV). In this study, the performance of the classical first-order conditional estimation with interaction (FOCE-I) method and expectation maximization (EM)-based Markov chain Monte Carlo Bayesian (BAYES) estimation methods was compared for estimating the population parameters and their distributions from data sets having a low number of subjects. One hundred data sets were simulated with eight sampling points for each subject and with six different levels of IIV (5%, 10%, 20%, 30%, 50%, and 80%) in the PK parameter distribution. A stochastic simulation and estimation (SSE) study was performed to simultaneously simulate data sets and estimate the parameters using four different methods: FOCE-I only, BAYES(C) (FOCE-I and BAYES composite method), BAYES(F) (BAYES with all true initial parameters and fixed ω²), and BAYES only. Relative root mean squared error (rRMSE) and relative estimation error (REE) were used to analyze the differences between true and estimated values. A case study was performed with clinical theophylline data available in the NONMEM distribution media. NONMEM software assisted by Pirana, PsN, and Xpose was used to estimate population PK parameters, and the R program was used to analyze and plot the results. The rRMSE and REE values of all parameter estimates (fixed effect and random effect) showed that all four methods performed equally well at the lower IIV levels, while the FOCE-I method performed better than the other EM-based methods at higher IIV levels (greater than 30%). In general, estimates of random-effect parameters showed significant bias and imprecision, irrespective of the estimation method used and the level of IIV. Similar performance of the estimation methods was observed with the theophylline dataset. The classical FOCE-I method appeared to estimate the PK parameters more reliably than the BAYES method when using a simple model and data containing only a few subjects. EM-based estimation methods can be considered for adapting to the specific needs of a modeling project at later steps of modeling.
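For reference, a sketch of the two comparison metrics under their commonly used definitions (assumed, not quoted from the paper): rRMSE summarizes accuracy across replicate data sets, and REE gives one signed percentage error per replicate.

```python
# Sketch of the comparison metrics; definitions assumed, not quoted.
import numpy as np

def rrmse(true, est):
    """Relative root mean squared error (%) over replicate estimates."""
    return np.sqrt(np.mean((est - true) ** 2)) / abs(true) * 100.0

def ree(true, est):
    """Relative estimation error (%): one signed value per replicate."""
    return (est - true) / true * 100.0

est_cl = np.array([4.8, 5.3, 5.1, 4.6])   # hypothetical clearance estimates
print(rrmse(5.0, est_cl), ree(5.0, est_cl))
```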
Generalized Wishart Mixtures for Unsupervised Classification of PolSAR Data
NASA Astrophysics Data System (ADS)
Li, Lan; Chen, Erxue; Li, Zengyuan
2013-01-01
This paper presents an unsupervised clustering algorithm based on the expectation maximization (EM) algorithm for finite mixture modelling, using the complex Wishart probability density function (PDF) for the class-conditional probabilities. The mixture model makes it possible to handle heterogeneous thematic classes that are poorly fitted by a unimodal Wishart distribution. To make the computation fast and robust, the recently proposed generalized gamma distribution (GΓD) is fitted to the single-polarization intensity data to form the initial partition. The Wishart probability density function for the corresponding sample covariance matrix is then used to calculate the posterior class probabilities for each pixel. The posterior class probabilities provide the prior probability estimates of each class and the weights for all class parameter updates. The proposed method is evaluated and compared with the Wishart H-Alpha-A classification. Preliminary results show that the proposed method has better performance.
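A sketch of the Wishart-mixture E- and M-steps under the standard complex Wishart density (terms depending only on the data cancel in the posterior and are dropped); the GΓD-based initialization is not shown.

```python
# Sketch of one Wishart-mixture EM step. C: (N, d, d) sample covariance
# matrices, L: number of looks; data-only factors of the Wishart density
# are dropped because they cancel in the posterior.
import numpy as np

def wishart_em_step(C, Sigma, priors, L):
    N, K = C.shape[0], len(priors)
    logpost = np.empty((N, K))
    for k in range(K):
        inv_k = np.linalg.inv(Sigma[k])
        _, logdet_k = np.linalg.slogdet(Sigma[k])
        tr = np.trace(inv_k @ C, axis1=1, axis2=2).real
        logpost[:, k] = np.log(priors[k]) - L * (logdet_k + tr)
    logpost -= logpost.max(axis=1, keepdims=True)     # stabilized softmax
    post = np.exp(logpost)
    post /= post.sum(axis=1, keepdims=True)
    # M-step: update mixing proportions and class covariance centers
    Nk = post.sum(axis=0)
    Sigma_new = np.einsum('nk,nij->kij', post, C) / Nk[:, None, None]
    return post, Nk / N, Sigma_new
```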
NOTE: Acceleration of Monte Carlo-based scatter compensation for cardiac SPECT
NASA Astrophysics Data System (ADS)
Sohlberg, A.; Watabe, H.; Iida, H.
2008-07-01
Single photon emission computed tomography (SPECT) images are degraded by photon scatter, making scatter compensation essential for accurate reconstruction. Reconstruction-based scatter compensation with Monte Carlo (MC) modelling of scatter shows promise for accurate scatter correction, but it is normally hampered by long computation times. The aim of this work was to accelerate MC-based scatter compensation using coarse-grid and intermittent scatter modelling. The acceleration methods were compared to an unaccelerated implementation using MC-simulated projection data of the mathematical cardiac torso (MCAT) phantom modelling 99mTc uptake, as well as clinical myocardial perfusion studies. The results showed that, when combined, the acceleration methods reduced the reconstruction time for 10 ordered-subsets expectation maximization (OS-EM) iterations from 56 to 11 min without a significant reduction in image quality, indicating that coarse-grid and intermittent scatter modelling are suitable for MC-based scatter compensation in cardiac SPECT.
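For orientation, a minimal OS-EM sketch with a dense system matrix standing in for the projector; in the paper, the scatter model enters through the MC-computed forward projection.

```python
# Minimal OS-EM sketch with a dense system matrix A (bins x voxels);
# in scatter-compensated reconstruction, A @ x would be replaced by the
# MC-modelled forward projection.
import numpy as np

def osem(A, y, n_subsets=10, n_iter=10):
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As = A[s]
            ratio = y[s] / np.maximum(As @ x, 1e-12)     # measured/estimated
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```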
Time-of-flight PET image reconstruction using origin ensembles.
Wülker, Christian; Sitek, Arkadiusz; Prevrhal, Sven
2015-03-07
The origin ensemble (OE) algorithm is a novel statistical method for minimum-mean-square-error (MMSE) reconstruction of emission tomography data. This method allows one to perform reconstruction entirely in the image domain, i.e. without the use of forward and backprojection operations. We have investigated the OE algorithm in the context of list-mode (LM) time-of-flight (TOF) PET reconstruction. In this paper, we provide a general introduction to MMSE reconstruction, and a statistically rigorous derivation of the OE algorithm. We show how to efficiently incorporate TOF information into the reconstruction process, and how to correct for random coincidences and scattered events. To examine the feasibility of LM-TOF MMSE reconstruction with the OE algorithm, we applied MMSE-OE and standard maximum-likelihood expectation-maximization (ML-EM) reconstruction to LM-TOF phantom data with a count number typically registered in clinical PET examinations. We analyzed the convergence behavior of the OE algorithm, and compared reconstruction time and image quality to that of the EM algorithm. In summary, during the reconstruction process, MMSE-OE contrast recovery (CRV) remained approximately the same, while background variability (BV) gradually decreased with an increasing number of OE iterations. The final MMSE-OE images exhibited lower BV and a slightly lower CRV than the corresponding ML-EM images. The reconstruction time of the OE algorithm was approximately 1.3 times longer. At the same time, the OE algorithm can inherently provide a comprehensive statistical characterization of the acquired data. This characterization can be utilized for further data processing, e.g. in kinetic analysis and image registration, making the OE algorithm a promising approach in a variety of applications.
SPECT reconstruction using DCT-induced tight framelet regularization
NASA Astrophysics Data System (ADS)
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing. Yet, to our knowledge, they have never been directly incorporated into the objective function in emission computed tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution can be used effectively as the regularization term for penalized-likelihood (PL) reconstruction, where the regularizer enforces image smoothness. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The Preconditioned Alternating Projection Algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post-filter (GPF), and the mean square error (MSE) was also smaller than for EM-GPF. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT wavelet-frame regularizer shows promise for SPECT image reconstruction with the PAPA method.
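A small sketch of the regularizer itself: the ℓ1-norm of 2-D DCT coefficients of the current image estimate, here with an orthonormal (decimated) DCT for brevity rather than the paper's non-decimated decomposition; `beta` is a hypothetical penalty weight.

```python
# Sketch of the penalty term only: l1-norm of the 2-D DCT coefficients
# of the current estimate; beta is a hypothetical penalty weight.
import numpy as np
from scipy.fft import dctn

def dct_l1(img, beta=0.05):
    return beta * np.abs(dctn(img, norm='ortho')).sum()

print(dct_l1(np.random.rand(64, 64)))
```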
Low Dose PET Image Reconstruction with Total Variation Using Alternating Direction Method.
Yu, Xingjian; Wang, Chenye; Hu, Hongjie; Liu, Huafeng
2016-01-01
In this paper, a total variation (TV) minimization strategy is proposed to overcome the problems of sparse spatial resolution and large amounts of noise in low dose positron emission tomography (PET) image reconstruction. Two types of objective function were established based on two statistical models of the measured PET data: least-squares TV (LS-TV) for a Gaussian distribution and Poisson-TV for a Poisson distribution. To efficiently obtain high quality reconstructed images, the alternating direction method (ADM) is used to solve these objective functions. Compared with iterative shrinkage/thresholding (IST) based algorithms, the proposed ADM makes full use of the TV constraint and converges faster. The performance of the proposed approach is validated through comparisons with the expectation-maximization (EM) method using synthetic and experimental biological data. In the comparisons, the results of both LS-TV and Poisson-TV are considered to determine which model is more suitable for PET imaging, in particular low-dose PET. To evaluate the results quantitatively, we computed the bias, variance, and contrast recovery coefficient (CRC), and drew profiles of the reconstructed images produced by the different methods. The results show that both Poisson-TV and LS-TV can provide high visual quality at a low dose level. The bias and variance of the proposed LS-TV and Poisson-TV methods are 20% to 74% less at all counting levels than those of the EM method. Poisson-TV gives the best performance in terms of high-accuracy reconstruction, with the lowest bias and variance relative to the ground truth (14.3% less bias and 21.9% less variance). In contrast, LS-TV gives the best performance in terms of high contrast, with the highest CRC.
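A sketch of the reported figures of merit under their standard definitions (assumed): bias and variance over replicate reconstructions, and the contrast recovery coefficient (CRC) from lesion and background masks.

```python
# Sketch of the evaluation metrics under standard definitions (assumed).
import numpy as np

def bias_variance(recons, truth):
    """recons: (R, V) replicate reconstructions; truth: (V,) ground truth."""
    bias = np.abs(recons.mean(axis=0) - truth).sum() / truth.sum()
    variance = recons.var(axis=0).mean()
    return bias, variance

def crc(recon, lesion_mask, bkg_mask, true_contrast):
    """Contrast recovery coefficient from lesion/background masks."""
    measured = recon[lesion_mask].mean() / recon[bkg_mask].mean() - 1.0
    return measured / (true_contrast - 1.0)
```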
Species Tree Inference Using a Mixture Model.
Ullah, Ikram; Parviainen, Pekka; Lagergren, Jens
2015-09-01
Species tree reconstruction has been a subject of substantial research due to its central role across biology and medicine. A species tree is often reconstructed using a set of gene trees or by directly using sequence data. In either case, one of the main confounding phenomena is the discordance between the species tree and a gene tree due to evolutionary events such as duplications and losses. Probabilistic methods can resolve the discordance by coestimating gene trees and the species tree, but this approach poses a scalability problem for larger data sets. We present MixTreEM-DLRS: a two-phase approach for reconstructing a species tree in the presence of gene duplications and losses. In the first phase, MixTreEM, a novel structural expectation maximization algorithm based on a mixture model, is used to reconstruct a set of candidate species trees, given sequence data for monocopy gene families from the genomes under study. In the second phase, PrIME-DLRS, a method based on the DLRS model (Åkerborg O, Sennblad B, Arvestad L, Lagergren J. 2009. Simultaneous Bayesian gene tree reconstruction and reconciliation analysis. Proc Natl Acad Sci U S A. 106(14):5714-5719), is used for selecting the best species tree. PrIME-DLRS can handle multicopy gene families, since DLRS, apart from modeling sequence evolution, models gene duplication and loss using a gene evolution model (Arvestad L, Lagergren J, Sennblad B. 2009. The gene evolution model and computing its associated probabilities. J ACM. 56(2):1-44). We evaluate MixTreEM-DLRS using synthetic and biological data, and compare its performance with a recent genome-scale species tree reconstruction method, PHYLDOG (Boussau B, Szöllősi GJ, Duret L, Gouy M, Tannier E, Daubin V. 2013. Genome-scale coestimation of species and gene trees. Genome Res. 23(2):323-330), as well as with a fast parsimony-based algorithm, Duptree (Wehe A, Bansal MS, Burleigh JG, Eulenstein O. 2008. Duptree: a program for large-scale phylogenetic analyses using gene tree parsimony. Bioinformatics 24(13):1540-1541). Our method is competitive with PHYLDOG in accuracy while running significantly faster, and it outperforms Duptree in accuracy. MixTreEM alone, without DLRS, may also be used to select the target species tree, yielding a fast yet accurate algorithm for larger data sets. MixTreEM is freely available at http://prime.scilifelab.se/mixtreem/.
Li, Si; Zhang, Jiahan; Krol, Andrzej; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin; Lipson, Edward; Feiglin, David; Xu, Yuesheng
2015-01-01
Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with a total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders: one contains a lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL, arriving at the same penalty-weight value for both. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution against the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean square errors (MSEs), and report the convergence speed and computation time. Results: HOTV-PAPA yields the best signal-to-noise ratio, followed by TV-PAPA and TV-OSL/GPF-EM. The local spatial resolution of HOTV-PAPA is somewhat worse than that of TV-PAPA and TV-OSL. Images reconstructed using HOTV-PAPA have the lowest local noise power spectrum (LNPS) amplitudes, followed by TV-PAPA, TV-OSL, and GPF-EM. The LNPS peak of GPF-EM is shifted toward higher spatial frequencies than those of the three other methods. The PAPA-type methods exhibit much lower ensemble noise, ensemble voxel variance, and image roughness, with HOTV-PAPA performing best in these categories. Whereas images reconstructed using both TV-PAPA and TV-OSL are degraded by severe staircase artifacts, HOTV-PAPA substantially reduces such artifacts. It also converges faster than the other three methods and exhibits the lowest overall reconstruction error level, as measured by MSE. Conclusions: For high-noise simulated SPECT data, HOTV-PAPA outperforms TV-PAPA, GPF-EM, and TV-OSL in terms of hot lesion detectability, noise suppression, MSE, and computational efficiency. Unlike TV-PAPA and TV-OSL, HOTV-PAPA does not create sizable staircase artifacts. Moreover, HOTV-PAPA effectively suppresses noise, with only limited loss of local spatial resolution. Of the four methods, HOTV-PAPA shows the best lesion detectability, thanks to its superior noise suppression. HOTV-PAPA shows promise for clinically useful reconstructions of low-dose SPECT data. PMID:26233214
Bayesian inversion analysis of nonlinear dynamics in surface heterogeneous reactions.
Omori, Toshiaki; Kuwatani, Tatsu; Okamoto, Atsushi; Hukushima, Koji
2016-09-01
It is essential to extract nonlinear dynamics from time-series data as an inverse problem in the natural sciences. We propose a Bayesian statistical framework for extracting the nonlinear dynamics of surface heterogeneous reactions from sparse and noisy observable data. Surface heterogeneous reactions are chemical reactions involving the conjugation of multiple phases, and their dynamics are intrinsically nonlinear owing to the effect of the surface area between the different phases. We adapt a belief propagation method and an expectation-maximization (EM) algorithm to the partial-observation problem, in order to simultaneously estimate the time course of the hidden variables and the kinetic parameters underlying the dynamics. The belief propagation step is performed with a sequential Monte Carlo algorithm in order to estimate the nonlinear dynamical system. Using the proposed method, we show that the rate constants of dissolution and precipitation reactions, which are typical examples of surface heterogeneous reactions, as well as the temporal changes of solid reactants and products, can be estimated successfully from the observable temporal changes in the concentration of the dissolved intermediate product alone.
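The sequential Monte Carlo step can be illustrated by a minimal bootstrap particle filter; the dynamics function and noise levels below are placeholders, not the paper's reaction model.

```python
# Minimal bootstrap particle filter; f, the noise levels, and the logistic
# dynamics in the usage example are placeholders for the reaction kinetics.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, f, n_particles=500, proc_sd=0.1, obs_sd=0.2):
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in obs:
        particles = f(particles) + rng.normal(0, proc_sd, n_particles)  # predict
        w = np.exp(-0.5 * ((y - particles) / obs_sd) ** 2)              # weight
        w /= w.sum()
        particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
        means.append(particles.mean())
    return np.array(means)

# usage: recover a noisily observed logistic-growth trajectory
f = lambda x: x + 0.5 * x * (1 - x)
x, xs = 0.1, []
for _ in range(50):
    x = f(x)
    xs.append(x)
est = particle_filter(np.array(xs) + rng.normal(0, 0.2, 50), f)
```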
Fast estimation of diffusion tensors under Rician noise by the EM algorithm.
Liu, Jia; Gasbarra, Dario; Railavo, Juha
2016-01-15
Diffusion tensor imaging (DTI) is widely used to characterize, in vivo, the white matter of the central nervous system (CNS). This biological tissue contains much anatomical, structural, and orientational information about the fibers in the human brain. Spectral data from the displacement distribution of water molecules in brain tissue are collected by a magnetic resonance scanner and acquired in the Fourier domain. After Fourier inversion, the noise distribution is Gaussian in both the real and imaginary parts and, as a consequence, the recorded magnitude data are corrupted by Rician noise. Statistical estimation of diffusion leads to a non-linear regression problem. In this paper, we present a fast computational method for maximum likelihood estimation (MLE) of diffusivities under the Rician noise model based on the expectation maximization (EM) algorithm. By using data augmentation, we transform the non-linear regression problem into the generalized linear modeling framework, dramatically reducing the computational cost. The Fisher-scoring method is used to achieve fast convergence of the tensor parameter. The new method is implemented and applied using both synthetic and real data over a wide range of b-amplitudes up to 14,000 s/mm². Higher accuracy and precision of the Rician estimates are achieved compared with other log-normal based methods. In addition, we extend the maximum likelihood (ML) framework to maximum a posteriori (MAP) estimation in DTI under the aforementioned scheme by specifying priors. We describe how numerically close the estimators of the model parameters obtained through MLE and MAP estimation are.
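A sketch of the Rician-noise E-step in its simplest setting, estimating a single underlying amplitude from magnitude samples with known noise level via the well-known Bessel-ratio fixed point; the paper embeds this in a generalized-linear-model framework with Fisher scoring for the full tensor.

```python
# Sketch of the Rician EM/fixed-point update in its simplest setting:
# one amplitude nu from magnitude data m, known noise sigma. Uses
# exponentially scaled Bessel functions for numerical stability.
import numpy as np
from scipy.special import ive

def rician_amplitude(m, sigma, n_iter=50):
    nu = max(m.mean(), 1e-6)
    for _ in range(n_iter):
        a = nu * m / sigma ** 2
        r = ive(1, a) / ive(0, a)     # I1(a)/I0(a), computed stably
        nu = np.mean(m * r)           # expectation step, then mean update
    return nu
```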
Maximum likelihood estimates, from censored data, for mixed-Weibull distributions
NASA Astrophysics Data System (ADS)
Jiang, Siyuan; Kececioglu, Dimitri
1992-06-01
A new algorithm for estimating the parameters of mixed-Weibull distributions from censored data is presented. The algorithm follows the principle of maximum likelihood estimation (MLE) via the expectation-maximization (EM) algorithm, and it is derived for both postmortem and nonpostmortem time-to-failure data. It is concluded that the concept of the EM algorithm is easy to understand and apply (only elementary statistics and calculus are required). The log-likelihood function cannot decrease after an EM sequence; this important feature was observed in all of the numerical calculations. The MLEs of the nonpostmortem data were obtained successfully for mixed-Weibull distributions with up to 14 parameters in a 5-subpopulation mixed-Weibull distribution. Numerical examples indicate that some of the log-likelihood functions of mixed-Weibull distributions have multiple local maxima; the algorithm should therefore be started from several initial guesses of the parameter set.
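A simplified sketch of the EM scheme for a two-subpopulation mixed-Weibull fit on uncensored failure times (the paper additionally handles censored and postmortem data); the weighted M-step is solved numerically in log-parameters.

```python
# Simplified two-subpopulation mixed-Weibull EM on uncensored data;
# the weighted M-step is solved numerically in log-parameters.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import weibull_min

def weibull_mix_em(t, n_iter=30):
    w = np.array([0.5, 0.5])                                  # mixing weights
    params = [(1.0, np.median(t)), (2.0, 2 * np.median(t))]   # (shape, scale)
    for _ in range(n_iter):
        # E-step: responsibility of each subpopulation for each failure
        dens = np.array([w[k] * weibull_min.pdf(t, c, scale=s)
                         for k, (c, s) in enumerate(params)])
        resp = dens / (dens.sum(axis=0) + 1e-300)
        # M-step: mixing weights and weighted Weibull MLE per subpopulation
        w = resp.mean(axis=1)
        for k in range(2):
            nll = lambda p: -(resp[k] * weibull_min.logpdf(
                t, np.exp(p[0]), scale=np.exp(p[1]))).sum()
            params[k] = tuple(np.exp(minimize(nll, np.log(params[k]),
                                              method='Nelder-Mead').x))
    return w, params
```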
Time series modeling by a regression approach based on a latent process.
Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice
2009-01-01
Time series are used in many domains, including finance, engineering, economics, and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process that allows smooth or abrupt switching among different polynomial regression models. The model parameters are estimated by the maximum likelihood method, carried out by a dedicated Expectation Maximization (EM) algorithm. The M-step of the EM algorithm uses a multi-class Iterative Reweighted Least Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated and real-world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a hidden Markov regression model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.
Smart dual-mode fluorescent gold nanoparticle agents.
Kang, Kyung A; Wang, Jianting
2014-01-01
Fluorophore-mediated molecular sensing is one of the most popular and important techniques in biomedical studies. As in any sensing technique, the two most important factors are sensitivity and specificity. Since the fluorescence of a fluorophore is emitted as its electrons return from the excited to the ground state, a tool that can locally manipulate the electron state can be useful for maximizing sensitivity and specificity. A good candidate for this purpose is nanosized metal particles, which can form an electromagnetic (EM) field at a sufficiently strong level upon receiving light at a wavelength that matches the excitation wavelength of the fluorophore to be used. Several metal nanoparticle types can generate a sufficiently strong EM field for this purpose; nevertheless, for biomedical studies, which require minimal toxicity, gold nanoparticles (GNPs) are known to be the most suitable. In this article, various methods for fluorescence alteration using GNPs, which can be beneficially utilized for biomarker-specific, highly sensitive molecular sensing and imaging, are discussed.
Gctf: Real-time CTF determination and correction
Zhang, Kai
2016-01-01
Accurate estimation of the contrast transfer function (CTF) is critical for a near-atomic resolution cryo electron microscopy (cryoEM) reconstruction. Here, a GPU-accelerated computer program, Gctf, for accurate and robust real-time CTF determination is presented. The main target of Gctf is to maximize the cross-correlation of a simulated CTF with the logarithmic amplitude spectra (LAS) of observed micrographs after background subtraction. Novel approaches in Gctf improve both speed and accuracy. In addition to GPU acceleration (e.g. 10-50x), a fast '1-dimensional search plus 2-dimensional refinement (1S2R)' procedure further speeds up Gctf. Based on the global CTF determination, the local defocus for each particle and for single frames of movies is accurately refined, which improves the CTF parameters of all particles for subsequent image processing. A novel diagnosis method using equiphase averaging (EPA) and self-consistency verification procedures has also been implemented in the program for practical use, especially for near-atomic reconstruction. Gctf is an independent program and its outputs can be easily imported into other cryoEM software such as Relion (Scheres, 2012) and Frealign (Grigorieff, 2007). Results from several representative datasets are shown and discussed in this paper. PMID:26592709
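The "1-dimensional search" stage can be sketched as a defocus grid search that maximizes the normalized cross-correlation between a simulated CTF² profile and a rotationally averaged, background-subtracted spectrum; the constants and the observed profile below are placeholders (Gctf itself adds astigmatism, phase shift, and 2-D refinement).

```python
# Placeholder constants (300 kV, Cs = 2.7 mm, 10% amplitude contrast);
# a real fit adds astigmatism, phase shift, and background modeling.
import numpy as np

lam, Cs, A = 0.0197, 2.7e7, 0.1          # wavelength and Cs in Angstrom units

def ctf2(freq, defocus):                 # freq in 1/A, defocus in A
    gamma = np.pi * lam * defocus * freq ** 2 - 0.5 * np.pi * Cs * lam ** 3 * freq ** 4
    return (np.sqrt(1 - A ** 2) * np.sin(gamma) + A * np.cos(gamma)) ** 2

def search_defocus(freq, spectrum, grid=np.arange(5e3, 5e4, 100.0)):
    spec = (spectrum - spectrum.mean()) / spectrum.std()
    def score(dz):
        m = ctf2(freq, dz)
        return ((m - m.mean()) / m.std() * spec).mean()   # normalized x-corr
    return max(grid, key=score)

freq = np.linspace(1 / 50.0, 1 / 3.0, 400)
obs = ctf2(freq, 18000.0) + 0.05 * np.random.rand(400)    # synthetic profile
print(search_defocus(freq, obs))                          # expect ~18000 A
```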
NASA Astrophysics Data System (ADS)
Commer, M.; Kowalsky, M. B.; Dafflon, B.; Wu, Y.; Hubbard, S. S.
2013-12-01
Geologic carbon sequestration is being evaluated as a means to mitigate the effects of greenhouse gas emissions. Efforts are underway to identify adequate reservoirs and to evaluate the behavior of injected CO2 over time; time-lapse geophysical methods are considered effective tools for these purposes. Pilot studies have shown that the invasion of CO2 into a background pore fluid can alter the electrical resistivity, with increases from CO2 in the super-critical or gaseous phase, and decreases from CO2 dissolved in groundwater (especially when calcite dissolution is occurring). Because of their sensitivity to resistivity changes, electrical and electromagnetic (EM) methods have been used in such studies for indirectly assessing CO2 saturation changes. While the electrical resistance tomography (ERT) method is a well-established technique for both crosswell and surface applications, its usefulness is limited by the relatively low-resolution information it provides. Controlled-source EM methods, including both frequency-domain and time-domain (transient EM) methods, can offer improved resolution. We report on three studies that aim to maximize the information content of electrical and electromagnetic measurements in inverse modeling applications that target the monitoring of resistivity changes due to CO2 migration and/or leakage. The first study considers a three-dimensional crosswell data set collected at an analogue site used for investigating CO2 distribution and geochemical reactivity within a shallow formation. We invert both resistance and phase data using a gradient-weighting method for descent-based inversion algorithms. This method essentially steers the search direction in the model space using low-cost non-linear conjugate gradient methods towards the more computationally expensive Gauss-Newton direction. The second study involves ERT data that were collected at the SECARB Cranfield site near Natchez, Mississippi, at depths exceeding 3000 m. We employ a ratio data inversion scheme, where the time-lapse input data are given by the measured ERT data normalized by their baseline values. We investigate whether three-dimensional time-lapse inversions yield improved results compared to two-dimensional results that were previously reported. Finally, we present a synthetic study that investigates a novel time-domain controlled-source EM method that has the potential for exploiting the resolution properties of vertically oriented source antennas while avoiding their logistical difficulties. A vertical source is replaced by an array of multiple horizontal dipoles arranged in a circle such that all dipoles have a common endpoint in the center. Overall, this study presents significant advances in developing adequate geophysical techniques to monitor CO2 migration and/or potential leaks in geological reservoirs.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835
Scott, Christopher; Putnam, Brant; Bricker, Scott; Schneider, Laura; Raby, Stephanie; Koenig, William; Gausche-Hill, Marianne
2012-06-01
Over the past two decades, Los Angeles County has implemented a Hospital Emergency Response Team (HERT) to provide on-scene, advanced surgical care of injured patients as an element of the local Emergency Medical Services (EMS) system. Since 2008, the primary responsibility of the team has been to perform surgical procedures in the austere field setting when prolonged extrication is anticipated. Following the maxim of "life over limb," the team is equipped to provide rapid amputation of an entrapped extremity as well as other procedures and medical care, such as anxiolytics and advanced pain control. This report describes the development and implementation of a local EMS system HERT.
Approximate, computationally efficient online learning in Bayesian spiking neurons.
Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André
2014-03-01
Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.
Multiple imputation of rainfall missing data in the Iberian Mediterranean context
NASA Astrophysics Data System (ADS)
Miró, Juan Javier; Caselles, Vicente; Estrela, María José
2017-11-01
Given the increasing need for complete rainfall data networks, diverse methods have been proposed in recent years for filling gaps in observed precipitation series, progressively more advanced than traditional approaches. The present study validates 10 methods (6 linear, 2 non-linear, and 2 hybrid) that allow multiple imputation, i.e., filling missing data simultaneously for multiple incomplete series in a dense network of neighboring stations. These were applied to daily and monthly rainfall in two sectors of the Júcar River Basin Authority (eastern Iberian Peninsula), an area characterized by high spatial irregularity and difficult rainfall estimation. A classification of precipitation according to its genetic origin was applied as pre-processing, and quantile-mapping adjustment as a post-processing technique. The results showed generally better performance for the non-linear and hybrid methods, with non-linear PCA (NLPCA) considerably outperforming the Self-Organizing Maps (SOM) method among the non-linear approaches. Among the linear methods, the Regularized Expectation Maximization method (RegEM) was the best, but fell far short of NLPCA. Applying EOF filtering as post-processing of NLPCA (the hybrid approach) yielded the best results.
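The quantile-mapping post-processing step can be sketched as an empirical distribution match between the imputed and observed series; the synthetic data below are placeholders.

```python
# Sketch of quantile-mapping adjustment: map imputed values through the
# imputed-to-observed empirical quantile curves. Synthetic placeholders.
import numpy as np

def quantile_map(imputed, observed, n_q=100):
    q = np.linspace(0.0, 1.0, n_q)
    return np.interp(imputed, np.quantile(imputed, q), np.quantile(observed, q))

obs = np.random.gamma(0.6, 8.0, 2000)            # skewed daily rainfall
imp = 0.8 * obs + np.random.normal(0, 1, 2000)   # biased imputations
adjusted = quantile_map(imp, obs)
```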
Frequency-domain multiscale quantum mechanics/electromagnetics simulation method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Lingyi; Yin, Zhenyu; Yam, ChiYung, E-mail: yamcy@yangtze.hku.hk, E-mail: ghc@everest.hku.hk
A frequency-domain quantum mechanics and electromagnetics (QM/EM) method is developed. Compared with the time-domain QM/EM method [Meng et al., J. Chem. Theory Comput. 8, 1190–1199 (2012)], the newly developed frequency-domain QM/EM method can effectively capture the dynamic properties of electronic devices over a broader range of operating frequencies. The system is divided into QM and EM regions and solved in a self-consistent manner via updating the boundary conditions at the QM and EM interface. The calculated potential distributions and current densities at the interface are taken as the boundary conditions for the QM and EM calculations, respectively, which facilitates the information exchange between the QM and EM calculations and ensures that the potential, charge, and current distributions are continuous across the QM/EM interface. Via Fourier transformation, the dynamic admittance calculated from the time-domain and frequency-domain QM/EM methods is compared for a carbon nanotube based molecular device.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, H; Xing, L; Liang, Z
Purpose: To investigate the feasibility of estimating tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm to both T1- and T2-weighted structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebrospinal fluid - inside each image voxel. (4) The distributions of the three tissue types, or tissue mixture, across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled three tissue-type distributions on the perfusion image data and generates perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective in initializing tissue mixtures from the T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated from the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions, which is valuable for the early diagnosis of certain brain diseases, e.g., multiple sclerosis.
Angelis, G I; Reader, A J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [(11)C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG, for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice for many applications, especially given that in practice convergence is often not desired for algorithms seeking ML estimates.
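For orientation, a minimal dense-matrix sketch of the ML-EM update and its ordered-subsets (OSEM) variant discussed above, assuming NumPy; the system matrix A, sinogram y, and the subset split are toy assumptions, not a clinical implementation:

    import numpy as np

    def mlem(A, y, n_iter=50):
        # Multiplicative ML-EM update: x <- x * A^T(y / Ax) / A^T 1
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])              # sensitivity image
        for _ in range(n_iter):
            ratio = y / np.maximum(A @ x, 1e-12)      # measured / estimated
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x

    def osem(A, y, n_subsets=4, n_iter=10):
        # Same multiplicative update applied subset-by-subset; faster per
        # pass over the data, but it can cycle rather than converge exactly
        x = np.ones(A.shape[1])
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        for _ in range(n_iter):
            for rows in subsets:
                sens = A[rows].T @ np.ones(len(rows))
                ratio = y[rows] / np.maximum(A[rows] @ x, 1e-12)
                x *= (A[rows].T @ ratio) / np.maximum(sens, 1e-12)
        return x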
ERIC Educational Resources Information Center
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
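As a rough illustration of the parallel-E idea (not the report's code; a toy one-dimensional Gaussian mixture, with all names assumed), the E step is split across worker processes and the per-chunk responsibilities are stacked back together:

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    def responsibilities(chunk, mu, var, pi):
        # E step for one chunk of observations: posterior class weights
        dens = pi * np.exp(-0.5 * (chunk[:, None] - mu) ** 2 / var) / np.sqrt(var)
        return dens / dens.sum(axis=1, keepdims=True)

    def parallel_e_step(x, mu, var, pi, n_workers=4):
        # Split the data, score each chunk in its own process, reassemble
        # (run under "if __name__ == '__main__':" on platforms that spawn)
        chunks = np.array_split(x, n_workers)
        with ProcessPoolExecutor(n_workers) as ex:
            parts = ex.map(responsibilities, chunks, [mu] * n_workers,
                           [var] * n_workers, [pi] * n_workers)
        return np.vstack(list(parts))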
Computational methods for constructing protein structure models from 3D electron microscopy maps.
Esquivel-Rodríguez, Juan; Kihara, Daisuke
2013-10-01
Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3 Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.
2D evaluation of spectral LIBS data derived from heterogeneous materials using cluster algorithm
NASA Astrophysics Data System (ADS)
Gottlieb, C.; Millar, S.; Grothe, S.; Wilsch, G.
2017-08-01
Laser-induced Breakdown Spectroscopy (LIBS) is capable of providing spatially resolved element maps of the chemical composition of a sample. The evaluation of heterogeneous materials is often a challenging task, especially in the case of phase boundaries. In order to determine information about a certain phase of a material, a method that offers an objective evaluation is needed. This paper introduces a cluster algorithm for heterogeneous building materials (concrete) to separate the spectral information of non-relevant aggregates from that of the cement matrix. In civil engineering, information about the quantitative ingress of harmful species like Cl-, Na+ and SO42- is of great interest in evaluating the remaining lifetime of structures (Millar et al., 2015; Wilsch et al., 2005). These species trigger different damage processes such as the alkali-silica reaction (ASR) or the chloride-induced corrosion of the reinforcement. Therefore, discrimination between the different phases, mainly cement matrix and aggregates, is highly important (Weritz et al., 2006). For the 2D evaluation, the expectation-maximization algorithm (EM algorithm; Ester and Sander, 2000) has been tested for the application presented in this work. The method is introduced and different figures of merit are presented according to the recommendations given in Haddad et al. (2014), and the advantages of the method are highlighted. After phase separation, non-relevant information can be excluded and only the phase of interest displayed. Using a set of samples with known and unknown composition, the EM-clustering method has been validated following Gustavo González and Ángeles Herrador (2007).
Mallick, Himel; Tiwari, Hemant K.
2016-01-01
Count data are increasingly ubiquitous in genetic association studies, where it is possible to observe excess zero counts relative to what is expected under standard assumptions. For instance, in rheumatology, data are usually collected in multiple joints within a person or multiple sub-regions of a joint, and it is not uncommon that the phenotypes contain an enormous number of zeroes due to excessive zero counts in the majority of patients. Most existing statistical methods assume that the count phenotypes follow one of four distributions with appropriate dispersion-handling mechanisms: Poisson, Zero-inflated Poisson (ZIP), Negative Binomial, and Zero-inflated Negative Binomial (ZINB). However, little is known about their implications in genetic association studies. Also, there is a relative paucity of literature on their usefulness with respect to model misspecification and variable selection. In this article, we investigate the performance of several state-of-the-art approaches for handling zero-inflated count data, along with a novel penalized regression approach with an adaptive LASSO penalty, by simulating data under a variety of disease models and linkage disequilibrium patterns. By taking into account data-adaptive weights in the estimation procedure, the proposed method provides greater flexibility in multi-SNP modeling of zero-inflated count phenotypes. A fast coordinate descent algorithm nested within an EM (expectation-maximization) algorithm is implemented for estimating the model parameters and conducting variable selection simultaneously. Results show that the proposed method has optimal performance in the presence of multicollinearity, as measured by both prediction accuracy and empirical power, which is especially apparent as the sample size increases. Moreover, the Type I error rates become more or less uncontrollable for the competing methods when a model is misspecified, a phenomenon routinely encountered in practice. PMID:27066062
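For intuition, a minimal sketch of the E step inside such a nested EM (NumPy; zip_e_step, pi for the structural-zero probability, and lam for the Poisson mean are illustrative names): the latent weight is the posterior probability that an observed zero is a structural zero of the ZIP model.

    import numpy as np

    def zip_e_step(y, pi, lam):
        # Total probability of a zero under the ZIP mixture
        p_zero = pi + (1.0 - pi) * np.exp(-lam)
        # Posterior weight of the structural-zero component (zero for y > 0)
        return np.where(y == 0, pi / p_zero, 0.0)

The M step would then refit the weighted, adaptively penalized regressions by coordinate descent.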
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
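A minimal sketch of the core computational trick, a low-rank eigen-approximation of a kernel matrix (NumPy; illustrative, not the fastKM software itself):

    import numpy as np

    def low_rank_kernel(K, rank):
        # Top-`rank` eigenpairs give K ~= U @ np.diag(s) @ U.T, so later
        # solves scale with the rank rather than the full sample size
        s, U = np.linalg.eigh(K)
        top = np.argsort(s)[::-1][:rank]
        return U[:, top], s[top]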
Zhang, Yang; He, Zhiyi; Sun, Xuejiao; Li, Zhanhua; Zhao, Lin; Mao, Congzheng; Huang, Dongmei; Zhang, Jianquan; Zhong, Xiaoning
2014-04-01
To investigate the effect of erythromycin (EM) on corticosteroid insensitivity of human THP-1 cells induced by cigarette smoke extract (CSE) and its mechanism. THP-1 cells were treated with EM followed by CSE stimulation. Histone deacetylase-2 (HDAC2) short interference RNA (HDAC2-siRNA) was transfected into the cells using Lipofectamine(TM) 2000. Interleukin-8 (IL-8) level in supernatants was measured by ELISA and HDAC2 expression was determined by real-time quantitative PCR (qRT-PCR) and Western blotting. The inhibition ratio of IL-8 in the EM group was significantly higher than that in the CSE group, but lower than that in the control group (P<0.05). The half-maximal inhibitory concentration of dexamethasone (IC50-Dex) in the EM group was lower than that in the CSE group, but higher than that in the control group (P<0.05). The expression of HDAC2 protein in the EM group was higher than that in the CSE group, but lower than that in the control group (P<0.05). Besides, HDAC2 mRNA and HDAC2 protein expressions were lower in the HDAC2-siRNA group than in the scrambled oligonucleotide (SC) group. EM could reverse HDAC2 mRNA and HDAC2 protein reduction induced by HDAC2-siRNA (P<0.05). Corticosteroid sensitivity of THP-1 cells could be reduced by CSE. EM could reverse the corticosteroid insensitivity by up-regulating the expression of HDAC2 protein.
Results on the neutron energy distribution measurements at the RECH-1 Chilean nuclear reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilera, P., E-mail: paguilera87@gmail.com; Romero-Barrientos, J.; Universidad de Chile, Dpto. de Física, Facultad de Ciencias, Las Palmeras 3425, Nuñoa, Santiago
2016-07-07
Neutron activation experiments have been performed at the RECH-1 Chilean Nuclear Reactor to measure its neutron flux energy distribution. Samples of pure elements were activated to obtain the saturation activities for each reaction. Using gamma-ray spectroscopy we identified and measured the activity of the reaction product nuclei, obtaining the saturation activities of 20 reactions. GEANT4 and MCNP were used to compute the self-shielding factor to correct the cross section for each element. With the Expectation-Maximization (EM) algorithm we were able to unfold the neutron flux energy distribution at the dry tube position, near the RECH-1 core. In this work, we present the unfolding results using the EM algorithm.
NASA Astrophysics Data System (ADS)
Wang, J.; Feng, B.
2016-12-01
Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are recognized as among the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection. The high spatial and spectral heterogeneity of the urban environment makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. The study tested one manual and two semi-automatic EM selection strategies. The manual and the first semi-automatic methods have been widely used in EM selection. The second semi-automatic EM selection method is rather new and had previously been proposed only for moderate-spatial-resolution satellite imagery. The manual method visually selected the EM candidates from eight landcover types in the original image. The first semi-automatic method chose the EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangle shape of the HI scatter plot in the n-dimension visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the triangle vertices. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), yielding three spectral libraries that were used to classify the test image. Spectral angle mapper was applied, and accuracy reports for the classification results were generated. The overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. The V-I-S EM selection method performs best in this study, which demonstrates the value of V-I-S EM selection not only for moderate-spatial-resolution satellite images but also for increasingly accessible high-spatial-resolution airborne images. This semi-automatic EM selection method can be adopted for a wide range of remote sensing images and provide ISA maps for hydrologic analysis.
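A minimal sketch of the spectral angle mapper (SAM) score used in the final classification step (NumPy; names are assumptions):

    import numpy as np

    def spectral_angle(pixel, endmember):
        # Angle between spectra; a smaller angle means a closer match
        cos = pixel @ endmember / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
        return np.arccos(np.clip(cos, -1.0, 1.0))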
Multi-ray-based system matrix generation for 3D PET reconstruction
NASA Astrophysics Data System (ADS)
Moehrs, Sascha; Defrise, Michel; Belcari, Nicola; DelGuerra, Alberto; Bartoli, Antonietta; Fabbri, Serena; Zanetti, Gianluigi
2008-12-01
Iterative image reconstruction algorithms for positron emission tomography (PET) require a sophisticated system matrix (model) of the scanner. Our aim is to set up such a model offline for the YAP-(S)PET II small animal imaging tomograph in order to use it subsequently with standard ML-EM (maximum-likelihood expectation maximization) and OSEM (ordered subset expectation maximization) for fully three-dimensional image reconstruction. In general, the system model can be obtained analytically, via measurements or via Monte Carlo simulations. In this paper, we present the multi-ray method, which can be considered as a hybrid method to set up the system model offline. It incorporates accurate analytical (geometric) considerations as well as crystal depth and crystal scatter effects. At the same time, it has the potential to model seamlessly other physical aspects such as the positron range. The proposed method is based on multiple rays which are traced from/to the detector crystals through the image volume. Such a ray-tracing approach itself is not new; however, we derive a novel mathematical formulation of the approach and investigate the positioning of the integration (ray-end) points. First, we study single system matrix entries and show that the positioning and weighting of the ray-end points according to Gaussian integration give better results compared to equally spaced integration points (trapezoidal integration), especially if only a small number of integration points (rays) are used. Additionally, we show that, for a given variance of the single matrix entries, the number of rays (events) required to calculate the whole matrix is a factor of 20 larger when using a pure Monte-Carlo-based method. Finally, we analyse the quality of the model by reconstructing phantom data from the YAP-(S)PET II scanner.
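To illustrate the integration-point comparison (a toy sketch under assumed names, not the YAP-(S)PET II system model), Gauss-Legendre placement and weighting of the ray-end points versus equally spaced trapezoidal points along a ray:

    import numpy as np

    def line_integral_gauss(f, a, b, n):
        # n-point Gauss-Legendre nodes/weights mapped from [-1, 1] to [a, b]
        x, w = np.polynomial.legendre.leggauss(n)
        t = 0.5 * (b - a) * x + 0.5 * (b + a)
        return 0.5 * (b - a) * np.sum(w * f(t))

    def line_integral_trapezoid(f, a, b, n):
        # Equally spaced points with trapezoidal weights, for comparison
        t = np.linspace(a, b, n)
        y = f(t)
        return (b - a) / (n - 1) * (y.sum() - 0.5 * (y[0] + y[-1]))

    # Toy crystal-response integrand: with only a few points the Gauss rule
    # is typically much closer to the exact value than the trapezoidal rule
    f = lambda t: np.exp(-((t - 0.5) ** 2) / 0.02)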
Elashoff, Robert M.; Li, Gang; Li, Ning
2009-01-01
In this article we study a joint model for longitudinal measurements and competing risks survival data. Our joint model provides a flexible approach to handle possible nonignorable missing data in the longitudinal measurements due to dropout. It is also an extension of previous joint models with a single failure type, offering a possible way to model informatively censored events as a competing risk. Our model consists of a linear mixed effects submodel for the longitudinal outcome and a proportional cause-specific hazards frailty submodel (Prentice et al., 1978, Biometrics 34, 541-554) for the competing risks survival data, linked together by some latent random effects. We propose to obtain the maximum likelihood estimates of the parameters by an expectation maximization (EM) algorithm and estimate their standard errors using a profile likelihood method. The developed method works well in our simulation studies and is applied to a clinical trial for the scleroderma lung disease. PMID:18162112
Zhang, Mei; Zhang, Yong; Ren, Siqi; Zhang, Zunjian; Wang, Yongren; Song, Rui
2018-06-06
A method for monitoring l-asparagine (ASN) depletion in patients' serum using reversed-phase high-performance liquid chromatography with precolumn o-phthalaldehyde and ethanethiol (ET) derivatization is described. In order to improve the signal and stability of the analytes, several important factors including the precipitant reagent, derivatization conditions and detection wavelengths were optimized. The recovery of the analytes in the biological matrix was highest when 4% sulfosalicylic acid (1:1, v/v) was used as the precipitant reagent. Optimal fluorescence detection parameters were determined as λex = 340 nm and λem = 444 nm for maximal signal. The signal of the analytes was highest when the reagent ET and borate buffer of pH 9.9 were used in the derivatization solution, and the corresponding derivative products were stable for up to 19 h. The validated method has been successfully applied to monitor ASN depletion and l-aspartic acid, l-glutamine and l-glutamic acid levels in pediatric patients during l-asparaginase therapy.
Mining patterns in persistent surveillance systems with smart query and visual analytics
NASA Astrophysics Data System (ADS)
Habibi, Mohammad S.; Shirkhodaie, Amir
2013-05-01
In Persistent Surveillance Systems (PSS) the ability to detect and characterize events geospatially helps take pre-emptive steps to counter an adversary's actions. An interactive Visual Analytics (VA) model offers a platform for pattern investigation and reasoning to comprehend and/or predict such occurrences. The need for identifying and offsetting these threats requires collecting information from diverse sources, which brings with it increasingly abstract data. These abstract semantic data have a degree of inherent uncertainty and imprecision, and require a method for their filtration before being processed further. In this paper, we introduce an approach based on the Vector Space Modeling (VSM) technique for classification of spatiotemporal sequential patterns of group activities. The feature vectors consist of an array of attributes extracted from sensor-generated semantic annotated messages. To facilitate proper similarity matching and detection of time-varying spatiotemporal patterns, a temporal Dynamic Time Warping (DTW) method with a Gaussian Mixture Model (GMM) fitted by Expectation Maximization (EM) is introduced. DTW is intended for detection of event patterns from neighborhood-proximity semantic frames derived from an established ontology. GMM with EM, on the other hand, is employed as a Bayesian probabilistic model to estimate the probability of events associated with a detected spatiotemporal pattern. In this paper, we present a new visual analytic tool for testing and evaluating group activities detected under this scheme. Experimental results demonstrate the effectiveness of the proposed approach for discovery and matching of subsequences within the sequentially generated pattern space of our experiments.
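A minimal sketch of the classic DTW recurrence that underlies such similarity matching (pure Python/NumPy; illustrative, not the authors' implementation):

    import numpy as np

    def dtw(a, b):
        # Dynamic-time-warping distance between two 1-D sequences
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # Extend the cheapest of the three admissible warp steps
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]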
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wellman, Dawn M.; Triplett, Mark B.; Freshley, Mark D.
DOE-EM, Office of Groundwater and Soil Remediation and DOE Richland, in collaboration with the Hanford site and Pacific Northwest National Laboratory, have established the Deep Vadose Zone Applied Field Research Center (DVZ-AFRC). The DVZ-AFRC leverages DOE investments in basic science from the Office of Science, applied research from the DOE EM Office of Technology Innovation and Development, and site operations (e.g., site contractors [CH2M HILL Plateau Remediation Contractor and Washington River Protection Solutions], DOE-EM RL and ORP) in a collaborative effort to address the complex region of the deep vadose zone. Although the aim, goal, motivation, and contractual obligation of each organization is different, the integration of these activities into the framework of the DVZ-AFRC brings the resources and creativity of many to provide sites with viable alternative remedial strategies to current baseline approaches for persistent contaminants and deep vadose zone contamination. This cooperative strategy removes stovepipes, prevents duplication of effort, maximizes resources, and facilitates development of the scientific foundation needed to make sound and defensible remedial decisions that will successfully meet the target cleanup goals for one of DOE EM's most intractable problems, in a manner that is acceptable to regulators.
EM-navigated catheter placement for gynecologic brachytherapy: an accuracy study
NASA Astrophysics Data System (ADS)
Mehrtash, Alireza; Damato, Antonio; Pernelle, Guillaume; Barber, Lauren; Farhat, Nabgha; Viswanathan, Akila; Cormack, Robert; Kapur, Tina
2014-03-01
Gynecologic malignancies, including cervical, endometrial, ovarian, vaginal and vulvar cancers, cause significant mortality in women worldwide. The standard care for many primary and recurrent gynecologic cancers consists of chemoradiation followed by brachytherapy. In high dose rate (HDR) brachytherapy, intracavitary applicators and/or interstitial needles are placed directly inside the cancerous tissue so as to provide catheters to deliver high doses of radiation. Although technology for the navigation of catheters and needles is well developed for procedures such as prostate biopsy, brain biopsy, and cardiac ablation, it is notably lacking for gynecologic HDR brachytherapy. Using a benchtop study that closely mimics the clinical interstitial gynecologic brachytherapy procedure, we developed a method for evaluating the accuracy of image-guided catheter placement. Future bedside translation of this technology offers the potential benefit of maximizing tumor coverage during catheter placement while avoiding damage to adjacent organs, for example, the bladder, rectum and bowel. In the study, two independent experiments were performed on a phantom model to evaluate the targeting accuracy of an electromagnetic (EM) tracking system. The procedure was carried out using a laptop computer (2.1 GHz Intel Core i7, 8 GB RAM, Windows 7 64-bit), an EM Aurora tracking system with a 1.3 mm diameter 6 DOF sensor, and 6F (2 mm) brachytherapy catheters inserted through a Syed-Neblett applicator. The 3D Slicer and PLUS open source software were used to develop the system. The mean targeting error was less than 2.9 mm, which is comparable to the targeting errors of commercial clinical navigation systems.
Access to hyperacute stroke services across Canadian provinces: a geospatial analysis
Eswaradass, Prasanna Venkatesan; Swartz, Richard H.; Rosen, Jamey; Hill, Michael D.; Lindsay, M. Patrice
2017-01-01
Background: Canada's vast geography creates challenges for ensuring prompt transport to hospital of patients who have had a stroke. We sought to determine the proportion of people across various Canadian provinces for whom hyperacute stroke services are accessible within evidence-based time targets. Methods: We calculated, for the 8 provinces with available data, drive-time polygons on a map of Canada that delineated the area around stroke centres and emergency medical services (EMS) base centres to which one can drive in 3.5-6 hours. We calculated the proportional area of each forward sortation area (first 3 digits of the postal code) contained within a drive-time polygon. We applied this ratio to the 2011 Canadian census population of the forward sortation area to estimate the population that can reach a stroke centre in a designated time. Results: A total of 47.1%-96.4% of Canadians live within a 4.5-hour drive to a stroke centre via road EMS, and 53.3%-96.8% live within a 6-hour drive. Assuming a total travel time of 5 hours by EMS from base centre to patient and patient to hospital, 84.7%-99.8% of the population has access to a current or proposed endovascular thrombectomy site. Interpretation: Most Canadians live within 6 hours' road access to a stroke centre. Geospatial mapping could be used to inform decisions for additional sites and identify gaps in service accessibility. Coordinated systems of care and ambulance bypass agreements must continue to evolve to ensure maximal access to time-sensitive emergency stroke services. PMID:28615192
Classification Comparisons Between Compact Polarimetric and Quad-Pol SAR Imagery
NASA Astrophysics Data System (ADS)
Souissi, Boularbah; Doulgeris, Anthony P.; Eltoft, Torbjørn
2015-04-01
Recent interest in dual-pol SAR systems has led to a novel approach, the so-called compact polarimetric (CP) imaging mode, which attempts to reconstruct fully polarimetric information based on a few simple assumptions. In this work, the CP image is simulated from the full quad-pol (QP) image. We present here an initial comparison of the polarimetric information content of the QP and CP imaging modes. The analysis of multi-look polarimetric covariance matrix data uses an automated statistical clustering method based upon the expectation maximization (EM) algorithm for finite mixture modeling, using the complex Wishart probability density function. Our results show that there are some different characteristics between the QP and CP modes. The classification is demonstrated using E-SAR and Radarsat-2 polarimetric SAR images acquired over DLR Oberpfaffenhofen, Germany, and Algiers, Algeria, respectively.
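For reference, a sketch of the complex-Wishart dissimilarity that drives such clustering (NumPy; C is a pixel's sample covariance matrix and Sigma a cluster center, both assumed names; terms constant across classes are dropped):

    import numpy as np

    def wishart_distance(C, Sigma):
        # ln|Sigma| + tr(Sigma^{-1} C): lower means C fits the class better
        _, logdet = np.linalg.slogdet(Sigma)
        return logdet + np.trace(np.linalg.solve(Sigma, C)).real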
A Unified Framework for Brain Segmentation in MR Images
Yazdani, S.; Yusof, R.; Karimian, A.; Riazi, A. H.; Bennamoun, M.
2015-01-01
Brain MRI segmentation is an important issue for discovering brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of the brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method with spatial information and a support vector machine (SVM). The combined method yields more accurate results than its individual techniques, as demonstrated through experiments on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentations and other methods on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method performs satisfactorily on both simulated MRI and real brain datasets. PMID:26089978
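A minimal sketch of the statistical EM core such hybrid pipelines build on, a one-dimensional three-class Gaussian mixture over voxel intensities (NumPy; all names are illustrative):

    import numpy as np

    def gmm_em(x, k=3, n_iter=100):
        # Crude but serviceable initialization from intensity quantiles
        mu = np.quantile(x, np.linspace(0.2, 0.8, k))
        var = np.full(k, x.var())
        pi = np.full(k, 1.0 / k)
        for _ in range(n_iter):
            # E step: responsibilities of each class for each voxel
            r = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(var)
            r /= r.sum(axis=1, keepdims=True)
            # M step: re-estimate the class parameters
            nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
            pi = nk / len(x)
        return mu, var, pi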
NASA Technical Reports Server (NTRS)
Duvoisin, Marc R.; Convertino, Victor A; Buchanan, Paul; Gollinick, Philip D.; Dudley, Gary A.
1989-01-01
During 30 days (d) of bedrest, the practicality of using ElectroMyoStimulation (EMS) as a deterrent to atrophy and strength loss of lower limb musculature was examined. An EMS system was developed that provided variable but quantifiable levels of EMS, and measured torque. The dominant leg of three male subjects was stimulated twice daily in a 3-d on/1-d off cycle during bedrest. The non-dominant leg of each subject acted as a control. A stimulator, using a 0.3 ms monophasic 60 Hz pulse waveform, activated muscle tissue for 4 s. The output waveform from the stimulator was sequenced to the Knee Extensors (KE), Knee Flexors (KF), Ankle Extensors (AE), and Ankle Flexors (AF), and caused three isometric contractions of each muscle group per minute. Subject tolerance determined EMS intensity. Each muscle group received four 5-min bouts of EMS each session with a 10-min rest between bouts. EMS and torque levels for each muscle action were recorded directly on a computer. Overall average EMS intensity was 197, 197, 195, and 188 mA for the KE, KF, AF, and AE, respectively. Overall average torque development for these muscle groups was 70, 16, 12, and 27 Nm, respectively. EMS intensity doubled during the study, and average torque increased 2.5 times. Average maximum torque throughout a session reached 54% of maximal voluntary torque for the KE and 29% for the KF. Reductions in leg volume, muscle compartment size, cross-sectional area of slow- and fast-twitch fibers, strength, and aerobic enzyme activities, and increased leg compliance were attenuated in the legs which received EMS during bedrest. These results indicate that similar EMS levels induce different torques among different muscle groups and that repeated exposure to EMS increases tolerance and torque development. Longer orientation periods, therefore, may enhance its effectiveness. Our preliminary data suggest that the efficacy of EMS as an effective countermeasure for muscle atrophy and strength loss during long-duration space travel warrants further investigation.
Blood detection in wireless capsule endoscopy using expectation maximization clustering
NASA Astrophysics Data System (ADS)
Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.
2006-03-01
Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy can be used to visualize the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. During an approximately 8-hour course, over 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called the Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination. It was reported that the sensitivity and the specificity of SBI were about 72% and 85%, respectively. To address this problem, we propose a technique to detect the bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves 92% sensitivity and 98% specificity.
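For clarity on the figures of merit quoted above, a tiny illustrative helper computing them from confusion counts:

    def sensitivity_specificity(tp, fn, tn, fp):
        # Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)
        return tp / (tp + fn), tn / (tn + fp)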
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-07
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
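A rough structural sketch of the nesting (NumPy; dense toy matrices, with a plain least-squares refit standing in for the paper's weighted sub-iterations; every name is an assumption): each outer pass performs one EM-style image update per time frame, then the cheap inner step refits the linear kinetic parameters.

    import numpy as np

    def nested_parametric_recon(A, Y, B, n_outer=20):
        # A: system matrix, Y: sinograms (one column per frame),
        # B: temporal basis so that frame images satisfy X ~= theta @ B.T
        n_pix = A.shape[1]
        theta = np.ones((n_pix, B.shape[1]))
        sens = np.maximum(A.T @ np.ones(A.shape[0]), 1e-12)
        for _ in range(n_outer):
            X = theta @ B.T                          # frames from kinetics
            for t in range(Y.shape[1]):              # EM-style image update
                ratio = Y[:, t] / np.maximum(A @ X[:, t], 1e-12)
                X[:, t] *= (A.T @ ratio) / sens
            # Inner step: linear kinetic refit, far cheaper than projection
            theta = np.linalg.lstsq(B, X.T, rcond=None)[0].T
        return theta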
Locally adaptive MR intensity models and MRF-based segmentation of multiple sclerosis lesions
NASA Astrophysics Data System (ADS)
Galimzianova, Alfiia; Lesjak, Žiga; Likar, Boštjan; Pernuš, Franjo; Špiclin, Žiga
2015-03-01
Neuroimaging biomarkers are an important paraclinical tool used to characterize a number of neurological diseases; however, their extraction requires accurate and reliable segmentation of normal and pathological brain structures. For MR images of healthy brains, intensity models of normal-appearing brain tissue (NABT) in combination with Markov random field (MRF) models are known to give reliable and smooth NABT segmentation. However, the presence of pathology, MR intensity bias and natural tissue-dependent intensity variability together pose difficult challenges for reliable estimation of the NABT intensity model from MR images. In this paper, we propose a novel method for segmentation of normal and pathological structures in brain MR images of multiple sclerosis (MS) patients that is based on a locally adaptive NABT model, a robust method for the estimation of model parameters and an MRF-based segmentation framework. Experiments on multi-sequence brain MR images of 27 MS patients show that, compared to a whole-brain model and compared to the widely used Expectation-Maximization Segmentation (EMS) method, the locally adaptive NABT model increases the accuracy of MS lesion segmentation.
NASA Astrophysics Data System (ADS)
Liu, Yang; Pu, Huangsheng; Zhang, Xi; Li, Baojuan; Liang, Zhengrong; Lu, Hongbing
2017-03-01
Arterial spin labeling (ASL) provides a noninvasive measurement of cerebral blood flow (CBF). Due to relatively low spatial resolution, the accuracy of CBF measurement is affected by the partial volume (PV) effect. To obtain accurate CBF estimation, an estimate of the contribution of each tissue type in the mixture is desirable. In current ASL studies, this is generally obtained by registering the ASL data to a structural image. This approach yields the probability of each tissue type inside each voxel, but it also introduces errors, including registration-algorithm error and imaging errors in the acquisition of the ASL and structural images. Therefore, estimation of the mixture percentages directly from ASL data is greatly needed. Under the assumption that the ASL signal follows a Gaussian distribution and that tissue types are independent, a maximum a posteriori expectation-maximization (MAP-EM) approach was formulated to estimate the contribution of each tissue type to the observed perfusion signal at each voxel. Given the sensitivity of MAP-EM to initialization, an approximately accurate initialization was obtained using a 3D fuzzy c-means method. Our preliminary results demonstrate that the GM and WM patterns across the perfusion image can be sufficiently visualized by the voxel-wise tissue mixtures, which may be promising for the diagnosis of various brain diseases.
NASA Technical Reports Server (NTRS)
Bauer, Fabrice; Jones, Michael; Shiota, Takahiro; Firstenberg, Michael S.; Qin, Jian Xin; Tsujino, Hiroyuki; Kim, Yong Jin; Sitges, Marta; Cardon, Lisa A.; Zetts, Arthur D.;
2002-01-01
OBJECTIVE: The goal of this study was to analyze left ventricular outflow tract systolic acceleration (LVOT(Acc)) during alterations in left ventricular (LV) contractility and LV filling. BACKGROUND: Most indexes described to quantify LV systolic function, such as LV ejection fraction and cardiac output, are dependent on loading conditions. METHODS: In 18 sheep (4 normal, 6 with aortic regurgitation, and 8 with old myocardial infarction), blood flow velocities through the LVOT were recorded using conventional pulsed Doppler. The LVOT(Acc) was calculated as the aortic peak velocity divided by the time to peak flow; LVOT(Acc) was compared with LV maximal elastance (E(m)) acquired by conductance catheter under different loading conditions, including volume and pressure overload during an acute coronary occlusion (n = 10). In addition, a clinically validated lumped-parameter numerical model of the cardiovascular system was used to support our findings. RESULTS: Left ventricular E(m) and LVOT(Acc) decreased during ischemia (1.67 +/- 0.67 mm Hg.ml(-1) before vs. 0.93 +/- 0.41 mm Hg.ml(-1) during acute coronary occlusion [p < 0.05] and 7.9 +/- 3.1 m.s(-2) before vs. 4.4 +/- 1.0 m.s(-2) during coronary occlusion [p < 0.05], respectively). Left ventricular outflow tract systolic acceleration showed a strong linear correlation with LV E(m) (y = 3.84x + 1.87, r = 0.85, p < 0.001). Similar findings were obtained with the numerical modeling, which demonstrated a strong correlation between predicted and actual LV E(m) (predicted = 0.98 [actual] -0.01, r = 0.86). By analysis of variance, there was no statistically significant difference in LVOT(Acc) under different loading conditions. CONCLUSIONS: For a variety of hemodynamic conditions, LVOT(Acc) was linearly related to the LV contractility index LV E(m) and was independent of loading conditions. These findings were consistent with numerical modeling. Thus, this Doppler index may serve as a good noninvasive index of LV contractility.
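The index itself is a one-line computation; a toy illustration (the numbers are hypothetical, merely within the range reported above):

    def lvot_acceleration(peak_velocity_m_per_s, time_to_peak_s):
        # LVOT(Acc) = aortic peak velocity / time to peak flow, in m/s^2
        return peak_velocity_m_per_s / time_to_peak_s

    # e.g. a 1.2 m/s peak reached 0.15 s after flow onset gives 8.0 m/s^2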
Changes of contractile responses due to simulated weightlessness in rat soleus muscle
NASA Astrophysics Data System (ADS)
Elkhammari, A.; Noireaud, J.; Léoty, C.
1994-08-01
Some contractile and electrophysiological properties of muscle fibers isolated from the slow-twitch soleus (SOL) and fast-twitch extensor digitorum longus (EDL) muscles of rats were compared with those measured in SOL muscles from suspended rats. In suspended SOL (21 days of tail suspension), membrane potential (Em), intracellular sodium activity (aiNa) and the slope of the relationship between Em and log [K]o were typical of fast-twitch muscles. The relation between the maximal amplitude of K-contractures and Em was steeper for control SOL than for EDL and suspended SOL muscles. After suspension, in SOL muscles the contractile threshold and the inactivation curves for K-contractures were shifted to more positive Em. Repriming of K-contractures was unaffected by suspension. The exposure of isolated fibers to perchlorate (ClO4-)-containing (6-40 mM) solutions resulted in a similar concentration-dependent shift to more negative Em of the activation curves for EDL and suspended SOL muscles. On exposure to a Na-free TEA solution, SOL muscles from control and suspended rats, in contrast to EDL muscles, generated slow contractile responses. Suspended SOL showed a reduced sensitivity to the contracture-producing effect of caffeine compared to control muscles. These results suggest that the modifications observed after suspension can be accounted for by changes in the characteristics of muscle fibers from the slow- to the fast-twitch type.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feng, Tao; Tsui, Benjamin M. W.; Li, Xin
Purpose: The radioligand {sup 11}C-KR31173 has been introduced for positron emission tomography (PET) imaging of the angiotensin II subtype 1 receptor in the kidney in vivo. To study the biokinetics of {sup 11}C-KR31173 with a compartmental model, the input function is needed. Collection and analysis of arterial blood samples are the established approach to obtain the input function, but they are not feasible in patients with renal diseases. The goal of this study was to develop a quantitative technique that can provide an accurate image-derived input function (ID-IF) to replace the conventional invasive arterial sampling and to test the method in pigs with the goal of translation into human studies. Methods: The experimental animals were injected with [{sup 11}C]KR31173 and scanned up to 90 min with dynamic PET. Arterial blood samples were collected for the artery-derived input function (AD-IF) and used as a gold standard for the ID-IF. Before PET, magnetic resonance angiography of the kidneys was obtained to provide the anatomical information required for derivation of the recovery coefficients in the abdominal aorta, a requirement for partial volume correction of the ID-IF. Different image reconstruction methods, filtered back projection (FBP) and ordered subset expectation maximization (OS-EM), were investigated for the best trade-off between bias and variance of the ID-IF. The effects of kidney uptake on the quantitative accuracy of the ID-IF were also studied. Biological variables such as red blood cell binding and radioligand metabolism were also taken into consideration. A single blood sample was used for calibration in the later phase of the input function. Results: In the first 2 min after injection, the OS-EM based ID-IF was found to be biased, and the bias was found to be induced by the kidney uptake. No such bias was found with the FBP based image reconstruction method. However, the OS-EM based image reconstruction was found to reduce variance in the subsequent phase of the ID-IF. The combined use of FBP and OS-EM resulted in reduced bias and noise. After performing all the necessary corrections, the areas under the curves (AUCs) of the ID-IF were close to those of the AD-IF (average AUC ratio = 1 ± 0.08) during the early phase. When applied in a two-tissue-compartmental kinetic model, the average difference between the estimated model parameters from ID-IF and AD-IF was 10%, which was within the error of the estimation method. Conclusions: The bias of radioligand concentration in the aorta from the OS-EM image reconstruction is significantly affected by radioligand uptake in the adjacent kidney and cannot be neglected for quantitative evaluation. With careful calibrations and corrections, the ID-IF derived from quantitative dynamic PET images can be used as the input function of the compartmental model to quantify the renal kinetics of {sup 11}C-KR31173 in experimental animals, and the authors intend to evaluate this method in future human studies.
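The two corrections described above (partial volume correction via a recovery coefficient, plus one-point calibration against a late blood sample) reduce to a few lines. A hedged sketch, not the authors' code; the recovery coefficient is treated here as a given scalar, whereas in the study it was derived from MR angiography of the aorta:

```python
import numpy as np

def corrected_id_if(aorta_tac, t, recovery_coeff, sample_time, sample_activity):
    """Partial-volume-correct an aortic TAC and rescale it to one blood sample."""
    pvc = np.asarray(aorta_tac, dtype=float) / recovery_coeff   # undo PV losses
    scale = sample_activity / np.interp(sample_time, t, pvc)    # one-point calibration
    return pvc * scale
```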
NASA Astrophysics Data System (ADS)
Winant, Celeste D.; Aparici, Carina Mari; Zelnik, Yuval R.; Reutter, Bryan W.; Sitek, Arkadiusz; Bacharach, Stephen L.; Gullberg, Grant T.
2012-01-01
Computer simulations, a phantom study and a human study were performed to determine whether a slowly rotating single-photon computed emission tomography (SPECT) system could provide accurate arterial input functions for quantification of myocardial perfusion imaging using kinetic models. The errors induced by data inconsistency associated with imaging with slow camera rotation during tracer injection were evaluated with an approach called SPECT/P (dynamic SPECT from positron emission tomography (PET)) and SPECT/D (dynamic SPECT from database of SPECT phantom projections). SPECT/P simulated SPECT-like dynamic projections using reprojections of reconstructed dynamic 94Tc-methoxyisobutylisonitrile (94Tc-MIBI) PET images acquired in three human subjects (1 min infusion). This approach was used to evaluate the accuracy of estimating myocardial wash-in rate parameters K1 for rotation speeds providing 180° of projection data every 27 or 54 s. Blood input and myocardium tissue time-activity curves (TACs) were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. For the second method (SPECT/D), an anthropomorphic cardiac torso phantom was used to create real SPECT dynamic projection data of a tracer distribution derived from 94Tc-MIBI PET scans in the blood pool, myocardium, liver and background. This method introduced attenuation, collimation and scatter into the modeling of dynamic SPECT projections. Both approaches were used to evaluate the accuracy of estimating myocardial wash-in parameters for rotation speeds providing 180° of projection data every 27 and 54 s. Dynamic cardiac SPECT was also performed in a human subject at rest using a hybrid SPECT/CT scanner. Dynamic measurements of 99mTc-tetrofosmin in the myocardium were obtained using an infusion time of 2 min. Blood input, myocardium tissue and liver TACs were estimated using the same spatiotemporal splines. The spatiotemporal maximum-likelihood expectation-maximization (4D ML-EM) reconstructions gave more accurate reconstructions than did standard frame-by-frame static 3D ML-EM reconstructions. The SPECT/P results showed that 4D ML-EM reconstruction gave higher and more accurate estimates of K1 than did 3D ML-EM, yielding anywhere from a 44% underestimation to 24% overestimation for the three patients. The SPECT/D results showed that 4D ML-EM reconstruction gave an overestimation of 28% and 3D ML-EM gave an underestimation of 1% for K1. For the patient study the 4D ML-EM reconstruction provided continuous images as a function of time of the concentration in both ventricular cavities and myocardium during the 2 min infusion. It is demonstrated that a 2 min infusion with a two-headed SPECT system rotating 180° every 54 s can produce measurements of blood pool and myocardial TACs, though the SPECT simulation studies showed that one must sample at least every 30 s to capture a 1 min infusion input function.
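The wash-in estimation step can be illustrated compactly. A hedged sketch of a one-compartment fit (an assumed standard form, C_t(t) = K1 * integral of C_b(tau) * exp(-k2*(t - tau)) dtau, evaluated by discrete convolution; the spatiotemporal spline estimation of the TACs themselves is omitted):

```python
import numpy as np
from scipy.optimize import curve_fit

def tissue_model(t, K1, k2, blood_tac):
    """One-compartment tissue curve on a uniform time grid (discrete convolution)."""
    dt = t[1] - t[0]
    return K1 * np.convolve(blood_tac, np.exp(-k2 * t))[: len(t)] * dt

def fit_k1(t, blood_tac, tissue_tac):
    f = lambda tt, K1, k2: tissue_model(tt, K1, k2, blood_tac)
    (K1, k2), _ = curve_fit(f, t, tissue_tac, p0=(0.5, 0.1), bounds=(0, np.inf))
    return K1, k2

# Synthetic check with a crude 1-min infusion-shaped input.
t = np.linspace(0, 10, 200)                          # minutes
blood = np.where(t < 1, t, np.exp(-0.3 * (t - 1)))   # illustrative input function
tissue = tissue_model(t, 0.8, 0.2, blood)
print(fit_k1(t, blood, tissue))                      # recovers ~ (0.8, 0.2)
```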
Effect of Superimposed Electromyostimulation on Back Extensor Strengthening: A Pilot Study.
Park, Jae Hyeon; Seo, Kwan Sik; Lee, Shi-Uk
2016-09-01
Park, JH, Seo, KS, and Lee, S-U. Effect of superimposed electromyostimulation on back extensor strengthening: a pilot study. J Strength Cond Res 30(9): 2470-2475, 2016-Electromyostimulation (EMS) superimposed on voluntary contraction (VC) can increase muscle strength. However, no study has examined the effect of superimposing EMS on back extensor strengthening. The purpose of this study was to determine the effect of superimposed EMS on back extensor strengthening in healthy adults. Twenty healthy men, 20-29 years of age, without low-back pain were recruited. In the EMS group, electrodes were attached to bilateral L2 and L4 paraspinal muscles. Stimulation intensity was set at the maximally tolerable intensity. With VC, EMS was superimposed for 10 seconds followed by a 20-second rest period. The same protocol was used in the sham stimulation (SS) group, except that the stimulation intensity was set at the lowest intensity (5 mA). All subjects performed back extension exercise using a Swiss ball, with 10 repetitions per set, 2 sets each day, 5 times a week for 2 weeks. The primary outcome measure was the change in isokinetic strength of the back extensor using an isokinetic dynamometer. Additionally, endurance was measured using the Sorensen test. After 2 weeks of back extension exercise, the peak torque and endurance increased significantly in both groups (p ≤ 0.05). Effect sizes between the EMS and SS groups were medium for both strength and endurance; however, the difference between the 2 groups was not statistically significant. In conclusion, 2 weeks of back extensor strengthening exercise was effective for strength and endurance. Superimposing EMS on back extensor strengthening exercise could provide an additional effect on increasing strength.
Effects of combined electromyostimulation and gymnastics training in prepubertal girls.
Deley, Gaëlle; Cometti, Carole; Fatnassi, Anaïs; Paizis, Christos; Babault, Nicolas
2011-02-01
This study investigated the effects of a 6-week combined electromyostimulation (EMS) and gymnastic training program on muscle strength and vertical jump performance of prepubertal gymnasts. Sixteen young female gymnasts (age 12.4 ± 1.2 yrs) participated in this study, with 8 in the EMS group and the remaining 8 as controls. EMS was conducted on knee extensor muscles for 20 minutes 3 times a week during the first 3 weeks and once a week during the last 3 weeks. Gymnasts from both groups underwent similar gymnastics training 5-6 times a week. Isokinetic torque of the knee extensors was determined at different eccentric and concentric angular velocities ranging from -60 to +240° per second. Jumping ability was evaluated using squat jump (SJ), countermovement jump (CMJ), reactivity test, and 3 gymnastic-specific jumps. After the first 3 weeks of EMS, maximal voluntary torque was increased (+40.0 ± 10.0%, +35.3 ± 11.8%, and +50.6 ± 7.7% for -60, +60, and +240°s⁻¹, respectively; p < 0.05), as were SJ, reactivity test and specific jump performances (+20.9 ± 8.3%, +20.4 ± 26.2%, and +14.9 ± 17.2%, respectively; p < 0.05). Six weeks of EMS were necessary to improve the CMJ (+10.1 ± 10.0%, p < 0.05). Improvements in jump ability were still maintained 1 month after the end of the EMS training program. To conclude, these results demonstrate for the first time that in prepubertal gymnasts, a 6-week EMS program, combined with the daily gymnastic training, induced significant increases both in knee extensor muscle strength and in nonspecific and some specific jump performances.
Navigating 3D electron microscopy maps with EM-SURFER.
Esquivel-Rodríguez, Juan; Xiong, Yi; Han, Xusi; Guang, Shuomeng; Christoffer, Charles; Kihara, Daisuke
2015-05-30
The Electron Microscopy DataBank (EMDB) is growing rapidly, accumulating biological structural data obtained mainly by electron microscopy and tomography, which are emerging techniques for determining large biomolecular complex and subcellular structures. Together with the Protein Data Bank (PDB), EMDB is becoming a fundamental resource of the tertiary structures of biological macromolecules. To take full advantage of this indispensable resource, the ability to search the database by structural similarity is essential. However, unlike high-resolution structures stored in PDB, methods for comparing low-resolution electron microscopy (EM) density maps in EMDB are not well established. We developed a computational method for efficiently searching low-resolution EM maps. The method uses a compact fingerprint representation of EM maps based on the 3D Zernike descriptor, which is derived from a mathematical series expansion for EM maps that are considered as 3D functions. The method is implemented in a web server named EM-SURFER, which allows users to search against the entire EMDB in real-time. EM-SURFER compares the global shapes of EM maps. Examples of search results from different types of query structures are discussed. We developed EM-SURFER, which retrieves structurally relevant matches for query EM maps from EMDB within seconds. The unique capability of EM-SURFER to detect 3D shape similarity of low-resolution EM maps should prove invaluable in structural biology.
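Once every map is reduced to a fixed-length fingerprint, real-time search is essentially a nearest-neighbour query. A hedged sketch of that final step only (computing the 3D Zernike descriptor itself is the involved part and is omitted; the 121-dimensional length is an assumption typical of order-20 descriptors, not a figure taken from the paper):

```python
import numpy as np

def rank_by_similarity(query_desc, db_descs):
    """Return database indices sorted by Euclidean distance to the query."""
    d = np.linalg.norm(db_descs - query_desc, axis=1)
    return np.argsort(d)                  # closest fingerprints first

db = np.random.rand(1000, 121)            # stand-in for precomputed fingerprints
query = np.random.rand(121)
print(rank_by_similarity(query, db)[:5])  # top-5 matches
```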
Sricharoen, Pungkava; Yuksen, Chaiyaporn; Sittichanbuncha, Yuwares; Sawanyawisuth, Kittisak
2015-01-01
There are different teaching methods, such as traditional lectures, bedside teaching, and workshops, for clinical medical clerkships. Each method has advantages and disadvantages in different situations. Emergency Medicine (EM) focuses on emergency medical conditions and deals with several emergency procedures. This study aimed to compare traditional teaching methods with teaching methods involving workshops in the EM setting for medical students. Fifth-year medical students (academic year of 2010) at Ramathibodi Hospital, Faculty of Medicine, Mahidol University, Bangkok, Thailand participated in the study. Half of the students received traditional teaching, including lectures and bedside teaching, while the other half received traditional teaching plus three workshops, namely, an airway workshop, a trauma workshop, and an emergency medical services workshop. Student evaluations at the end of the clerkship were recorded. The evaluation form included overall satisfaction, satisfaction in overall teaching methods, and satisfaction in each teaching method. During the academic year 2010, there were 189 students who attended the EM rotation. Of those, 77 students (40.74%) were in the traditional EM curriculum, while 112 students were in the new EM curriculum. The average satisfaction score for teaching methods was higher in the new EM curriculum group than in the traditional EM curriculum group (4.54 versus 4.07, P-value <0.001). The top three highest average satisfaction scores in the new EM curriculum group were for the trauma workshop, bedside teaching, and the emergency medical services workshop. The mean (standard deviation) satisfaction scores of those three teaching methods were 4.70 (0.50), 4.63 (0.58), and 4.60 (0.55), respectively. Teaching EM with workshops improved student satisfaction in EM education for medical students.
Detectability of Wellbore CO2 Leakage using the Magnetotelluric Method
NASA Astrophysics Data System (ADS)
Yang, X.; Buscheck, T. A.; Mansoor, K.; Carroll, S.
2016-12-01
We assessed the effectiveness of the magnetotelluric (MT) method in detecting CO2 and brine leakage through a wellbore, which penetrates a CO2 storage reservoir, into overlying aquifers, 0 to 1720 m in depth, in support of the USDOE National Risk Assessment Partnership (NRAP) monitoring program. Synthetic datasets based on the Kimberlina site in the southern San Joaquin Basin, California were created using CO2 storage reservoir models, wellbore leakage models, and groundwater/geochemical models of the overlying aquifers. The species concentrations simulated with the groundwater/geochemical models were converted into bulk electrical conductivity (EC) distributions as the MT model input. Brine and CO2 leakage into the overlying aquifers increases ion concentrations, and thus results in an EC increase, which may be detected by the MT method. Our objective was to estimate and maximize the probability of leakage detection using the MT method. The MT method is an electromagnetic geophysical technique that images the subsurface EC distribution by measuring natural electric and magnetic fields in the frequency range from 0.01 Hz to 1 kHz with sensors on the ground surface. The ModEM software was used to predict electromagnetic responses from brine and CO2 leakage and to invert synthetic MT data for recovery of subsurface conductivity distribution. We are in the process of building 1000 simulations for ranges of permeability, leakage flux, and hydraulic gradient to study leakage detectability and to develop an optimization method to answer when, where and how an MT monitoring system should be deployed to maximize the probability of leakage detection. This work was sponsored by the USDOE Fossil Energy, National Energy Technology Laboratory, managed by Traci Rodosta and Andrea McNemar. This work was performed under the auspices of the USDOE by LLNL under contract DE-AC52-07NA27344. LLNL IM release number is LLNL-ABS-699276.
Current federal regulations require monitoring for fecal coliforms or Salmonella in biosolids destined for land application. Methods used for the analysis of fecal coliforms and Salmonella were reviewed and a standard protocol was developed. The protocols were then...
NASA Astrophysics Data System (ADS)
Zhou, Chuan; Chan, Heang-Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Patel, Smita; Cascade, Philip N.; Sahiner, Berkman; Wei, Jun; Ge, Jun; Kazerooni, Ella A.
2007-03-01
CT pulmonary angiography (CTPA) has been reported to be an effective means for clinical diagnosis of pulmonary embolism (PE). We are developing a computer-aided detection (CAD) system to assist radiologists in PE detection in CTPA images. 3D multiscale filters, in combination with a newly designed response function derived from the eigenvalues of Hessian matrices, are used to enhance vascular structures including the vessel bifurcations and to suppress non-vessel structures such as the lymphoid tissues surrounding the vessels. A hierarchical EM estimation is then used to segment the vessels by extracting the high response voxels at each scale. The segmented vessels are pre-screened for suspicious PE areas using a second adaptive multiscale EM estimation. A rule-based false positive (FP) reduction method was designed to identify the true PEs based on the features of PE and vessels. 43 CTPA scans were used as an independent test set to evaluate the performance of PE detection. Experienced chest radiologists identified the PE locations, which were used as the "gold standard". 435 PEs were identified in the artery branches, of which 172 and 263 were subsegmental and proximal to the subsegmental, respectively. The computer-detected volume was considered true positive (TP) when it overlapped with 10% or more of the gold standard PE volume. Our preliminary test results show that, at an average of 33 and 24 FPs/case, the sensitivities of our PE detection method were 81% and 78%, respectively, for proximal PEs, and 79% and 73%, respectively, for subsegmental PEs. The study demonstrates the feasibility of accurate automated PE detection on CTPA images. Further study is underway to improve the sensitivity and reduce the FPs.
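The paper's response function is its own design; as a stand-in, here is a hedged sketch of the classic Frangi-style vesselness it generalizes, computed from Hessian eigenvalues at a single scale (bright tubes give one near-zero and two strongly negative eigenvalues):

```python
import numpy as np
from scipy import ndimage

def vesselness_3d(vol, sigma, alpha=0.5, beta=0.5, c=50.0):
    """Single-scale Frangi-style tubeness for bright vessels (illustrative only)."""
    H = np.empty(vol.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]; order[i] += 1; order[j] += 1
            H[..., i, j] = sigma**2 * ndimage.gaussian_filter(vol, sigma, order=order)
    lam = np.linalg.eigvalsh(H)                                   # per-voxel eigenvalues
    lam = np.take_along_axis(lam, np.argsort(np.abs(lam), axis=-1), axis=-1)
    l1, l2, l3 = lam[..., 0], lam[..., 1], lam[..., 2]            # |l1| <= |l2| <= |l3|
    Ra = np.abs(l2) / (np.abs(l3) + 1e-12)                        # line vs plate
    Rb = np.abs(l1) / np.sqrt(np.abs(l2 * l3) + 1e-12)            # blob-ness
    S = np.sqrt(l1**2 + l2**2 + l3**2)                            # structure strength
    v = ((1 - np.exp(-Ra**2 / (2 * alpha**2)))
         * np.exp(-Rb**2 / (2 * beta**2))
         * (1 - np.exp(-S**2 / (2 * c**2))))
    v[(l2 > 0) | (l3 > 0)] = 0.0                                  # keep bright tubes only
    return v
```

A multiscale version would take the voxel-wise maximum of this response over a range of sigma values.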
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, Gregory F.
2013-08-01
This article discusses the paper "Experimental Design for Engineering Dimensional Analysis" by Albrecht et al. (2013, Technometrics). That paper provides an overview of engineering dimensional analysis (DA) for use in developing DA models. The paper proposes methods for generating model-robust experimental designs to support fitting DA models. The specific approach is to develop a design that maximizes the efficiency of a specified empirical model (EM) in the original independent variables, subject to a minimum efficiency for a DA model expressed in terms of dimensionless groups (DGs). This discussion article raises several issues and makes recommendations regarding the proposed approach. Also, the concept of spurious correlation is raised and discussed. Spurious correlation results from the response DG being calculated using several independent variables that are also used to calculate predictor DGs in the DA model.
Multiclass feature selection for improved pediatric brain tumor segmentation
NASA Astrophysics Data System (ADS)
Ahmed, Shaheen; Iftekharuddin, Khan M.
2012-03-01
In our previous work, we showed that fractal-based texture features are effective in detection, segmentation and classification of posterior-fossa (PF) pediatric brain tumor in multimodality MRI. We exploited an information theoretic approach such as Kullback-Leibler Divergence (KLD) for feature selection and ranking different texture features. We further incorporated the feature selection technique with a segmentation method such as Expectation Maximization (EM) for segmentation of tumor (T) and non-tumor (NT) tissues. In this work, we extend the two-class KLD technique to multiclass for effectively selecting the best features for brain tumor (T), cyst (C) and non-tumor (NT). We further obtain segmentation robustness for each tissue type by computing Bayes posterior probabilities and the corresponding number of pixels for each tissue segment in MRI patient images. We evaluate improved tumor segmentation robustness using different similarity metrics for 5 patients in T1, T2 and FLAIR modalities.
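One plausible reading of the multiclass extension (not necessarily the authors' exact formulation) is to score each feature by the summed pairwise symmetric KL divergence between its class-conditional histograms; a hedged sketch:

```python
import numpy as np
from itertools import combinations

def kld(p, q, eps=1e-10):
    """KL divergence between two discrete distributions (smoothed)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def rank_features(X, y, bins=32):
    """Rank columns of X by summed pairwise symmetric KLD across classes in y."""
    scores = []
    for f in range(X.shape[1]):
        lo, hi = X[:, f].min(), X[:, f].max()
        hists = {c: np.histogram(X[y == c, f], bins=bins, range=(lo, hi))[0]
                 for c in np.unique(y)}
        hists = {c: h / h.sum() for c, h in hists.items()}
        scores.append(sum(kld(hists[a], hists[b]) + kld(hists[b], hists[a])
                          for a, b in combinations(hists, 2)))
    return np.argsort(scores)[::-1]       # most discriminative features first
```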
Mixture models with entropy regularization for community detection in networks
NASA Astrophysics Data System (ADS)
Chang, Zhenhai; Yin, Xianjun; Jia, Caiyan; Wang, Xiaoyang
2018-04-01
Community detection is a key exploratory tool in network analysis and has received much attention in recent years. NMM (Newman's mixture model) is one of the best models for exploring a range of network structures including community structure, bipartite and core-periphery structures, etc. However, NMM needs to know the number of communities in advance. Therefore, in this study, we have proposed an entropy regularized mixture model (called EMM), which is capable of inferring the number of communities and identifying the network structure contained in a network simultaneously. In the model, by minimizing the entropy of the mixing coefficients of NMM using an EM (expectation-maximization) solution, small clusters containing little information can be discarded step by step. The empirical study on both synthetic networks and real networks has shown that the proposed model EMM is superior to the state-of-the-art methods.
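The component-shedding step can be sketched compactly. This is a simplified stand-in for the paper's entropy-regularized update, not its exact rule: after each E-step, each mixing coefficient is shrunk by a penalty and renormalized, so an over-specified mixture annihilates weak clusters and the number of communities need not be fixed in advance:

```python
import numpy as np

def prune_mixing(resp, kappa):
    """resp: (n_nodes, K) E-step responsibilities; kappa: pruning strength."""
    Nk = resp.sum(axis=0)                 # effective size of each cluster
    pi = np.maximum(Nk - kappa, 0.0)      # shrink; weak clusters hit zero
    keep = pi > 0
    return pi[keep] / pi[keep].sum(), keep
```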
Spatial control of chemical processes on nanostructures through nano-localized water heating.
Jack, Calum; Karimullah, Affar S; Tullius, Ryan; Khorashad, Larousse Khosravi; Rodier, Marion; Fitzpatrick, Brian; Barron, Laurence D; Gadegaard, Nikolaj; Lapthorn, Adrian J; Rotello, Vincent M; Cooke, Graeme; Govorov, Alexander O; Kadodwala, Malcolm
2016-03-10
Optimal performance of nanophotonic devices, including sensors and solar cells, requires maximizing the interaction between light and matter. This efficiency is optimized when active moieties are localized in areas where electromagnetic (EM) fields are confined. Confinement of matter in these 'hotspots' has previously been accomplished through inefficient 'top-down' methods. Here we report a rapid 'bottom-up' approach to functionalize selective regions of plasmonic nanostructures that uses nano-localized heating of the surrounding water induced by pulsed laser irradiation. This localized heating is exploited in a chemical protection/deprotection strategy to allow selective regions of a nanostructure to be chemically modified. As an exemplar, we use the strategy to enhance the biosensing capabilities of a chiral plasmonic substrate. This novel spatially selective functionalization strategy provides new opportunities for efficient high-throughput control of chemistry on the nanoscale over macroscopic areas for device fabrication.
A Generalized Fast Frequency Sweep Algorithm for Coupled Circuit-EM Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rockway, J D; Champagne, N J; Sharpe, R M
2004-01-14
Frequency domain techniques are popular for analyzing electromagnetics (EM) and coupled circuit-EM problems. These techniques, such as the method of moments (MoM) and the finite element method (FEM), are used to determine the response of the EM portion of the problem at a single frequency. Since only one frequency is solved at a time, it may take a long time to calculate the parameters for wideband devices. In this paper, a fast frequency sweep based on the Asymptotic Wave Expansion (AWE) method is developed and applied to generalized mixed circuit-EM problems. The AWE method, which was originally developed for lumped-load circuit simulations, has recently been shown to be effective at quasi-static and low frequency full-wave simulations. Here it is applied to a full-wave MoM solver, capable of solving for metals, dielectrics, and coupled circuit-EM problems.
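For intuition, a hedged sketch of the moment-generation idea behind AWE, under the simplifying assumption that the system matrix is affine in the frequency variable, A(s) = G + sC (full MoM/FEM matrices require expansion about a center frequency, and a production code would convert the moments into a Pade approximant rather than sum the raw Taylor series):

```python
import numpy as np

def awe_moments(G, C, b, n_moments):
    """Taylor moments of x(s) solving (G + s*C) x = b about s = 0."""
    Ginv = np.linalg.inv(G)               # one factorization reused for all moments
    m = [Ginv @ b]
    for _ in range(1, n_moments):
        m.append(-Ginv @ (C @ m[-1]))     # m_k = -G^{-1} C m_{k-1}
    return m

def sweep(G, C, b, s_values, n_moments=8):
    """Evaluate the truncated series at each s (e.g., s = 1j*2*pi*f)."""
    m = awe_moments(G, C, b, n_moments)
    return [sum(mk * s**k for k, mk in enumerate(m)) for s in s_values]
```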
The sleeping beauty kissed awake: new methods in electron microscopy to study cellular membranes.
Chlanda, Petr; Krijnse Locker, Jacomine
2017-03-07
Electron microscopy (EM) for biological samples, developed in the 1940s-1950s, changed our conception of the architecture of eukaryotic cells. It was followed by a period where EM applied to cell biology had seemingly fallen asleep, even though new methods with important implications for modern EM were developed. Among these was the discovery that samples can be preserved by chemical fixation and, most importantly, by rapid freezing without the formation of crystalline ice, giving birth to the world of cryo-EM. The past 15-20 years have been hallmarked by a tremendous interest in EM, driven by important technological advances. Cryo-EM, in particular, is now capable of revealing structures of proteins at near-atomic resolution owing to improved sample preparation methods, microscopes and cameras. In this review, we focus on the challenges associated with the imaging of membranes by EM and give examples from the field of host-pathogen interactions, in particular of virus-infected cells. Despite the advantages of imaging membranes under native conditions in cryo-EM, conventional EM will remain an important complementary method, in particular if large volumes need to be imaged. © 2017 The Author(s); published by Portland Press Limited on behalf of the Biochemical Society.
A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.
Morin, Jean-Benoît; Belli, Alain
2004-01-01
The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was derived from a basic postulate of linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and included simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N kg(-1) friction loads. Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power computed neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasized the usefulness of correcting power in studies of anaerobic power that do not account for inertia, and also the interest of this simple post-hoc method.
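The correction itself is straightforward to reproduce. A hedged sketch under the stated postulate (acceleration assumed to decay linearly to zero between the start of the sprint and the time of maximal velocity; names and the equivalent-mass formulation I_eq = I / r^2 are illustrative, not the authors' notation):

```python
import numpy as np

def corrected_peak_power(friction_force, I_eq, v_max, t_vmax, n=1000):
    """Peak power including the inertial load of the flywheel."""
    t = np.linspace(0.0, t_vmax, n)
    a0 = 2.0 * v_max / t_vmax                 # initial acceleration under the postulate
    a = a0 * (1.0 - t / t_vmax)               # linear decay to zero at t_vmax
    v = a0 * t * (1.0 - t / (2.0 * t_vmax))   # velocity = integral of a(t)
    return float(((friction_force + I_eq * a) * v).max())
```

Dropping the I_eq term recovers the uncorrected, friction-only power, which is what the 9.5-33.2% differences quoted above quantify.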
Speech Enhancement, Gain, and Noise Spectrum Adaptation Using Approximate Bayesian Estimation
Hao, Jiucang; Attias, Hagai; Nagarajan, Srikantan; Lee, Te-Won; Sejnowski, Terrence J.
2010-01-01
This paper presents a new approximate Bayesian estimator for enhancing a noisy speech signal. The speech model is assumed to be a Gaussian mixture model (GMM) in the log-spectral domain. This is in contrast to most current models, which work in the frequency domain. Exact signal estimation is a computationally intractable problem. We derive three approximations to enhance the efficiency of signal estimation. The Gaussian approximation transforms the log-spectral domain GMM into the frequency domain using a minimal Kullback-Leibler (KL) divergence criterion. The frequency domain Laplace method computes the maximum a posteriori (MAP) estimator for the spectral amplitude. Correspondingly, the log-spectral domain Laplace method computes the MAP estimator for the log-spectral amplitude. Further, the gain and noise spectrum adaptation are implemented using the expectation-maximization (EM) algorithm within the GMM under the Gaussian approximation. The proposed algorithms are evaluated by applying them to enhance speech corrupted by speech-shaped noise (SSN). The experimental results demonstrate that the proposed algorithms offer improved signal-to-noise ratio, lower word recognition error rate, and less spectral distortion. PMID:20428253
Wang, Zhu; Shuangge, Ma; Wang, Ching-Yun
2017-01-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider penalized maximum likelihood estimation, with penalties including the least absolute shrinkage and selection operator (LASSO), the smoothly clipped absolute deviation (SCAD), and the minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties including the standard error formulae are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze the health care demand in Germany using the open-source R package mpath. PMID:26059498
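The E-step has a simple closed form. A hedged sketch of that step only (the full algorithm alternates it with the penalized weighted negative binomial and penalized logistic fits; the size/probability parameterization of the negative binomial is an assumption):

```python
import numpy as np
from scipy.stats import nbinom

def estep_structural_zero(y, pi, r, p):
    """Posterior probability that each observed zero is a structural zero.

    y: counts; pi: per-observation zero-inflation probabilities from the
    logistic part; r, p: negative binomial size and probability parameters.
    """
    y = np.asarray(y)
    pi = np.asarray(pi, dtype=float)
    z = np.zeros_like(y, dtype=float)
    zero = (y == 0)
    nb0 = nbinom.pmf(0, r, p)                             # P(NB count = 0)
    z[zero] = pi[zero] / (pi[zero] + (1 - pi[zero]) * nb0)
    return z   # used as weights in the two penalized M-step fits
```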
Lee, David; Park, Sang-Hoon; Lee, Sang-Goog
2017-10-07
In this paper, we propose a set of wavelet-based combined feature vectors and a Gaussian mixture model (GMM)-supervector to enhance training speed and classification accuracy in motor imagery brain-computer interfaces. The proposed method is configured as follows: first, wavelet transforms are applied to extract the feature vectors for identification of motor imagery electroencephalography (EEG) and principal component analyses are used to reduce the dimensionality of the feature vectors and linearly combine them. Subsequently, the GMM universal background model is trained by the expectation-maximization (EM) algorithm to purify the training data and reduce its size. Finally, a purified and reduced GMM-supervector is used to train the support vector machine classifier. The performance of the proposed method was evaluated for three different motor imagery datasets in terms of accuracy, kappa, mutual information, and computation time, and compared with the state-of-the-art algorithms. The results from the study indicate that the proposed method achieves high accuracy with a small amount of training data compared with the state-of-the-art algorithms in motor imagery EEG classification.
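A hedged sketch of the pipeline's overall shape (the wavelet front-end is omitted, all dimensions are illustrative, and a short warm-started refit stands in for the MAP adaptation typically used to build supervectors):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def make_supervector(ubm, X_trial):
    """Adapt the UBM to one trial and stack the component means."""
    g = GaussianMixture(n_components=ubm.n_components, covariance_type="diag",
                        weights_init=ubm.weights_, means_init=ubm.means_,
                        precisions_init=ubm.precisions_, max_iter=3).fit(X_trial)
    return g.means_.ravel()

def train(trials, labels, n_pca=8, n_comp=4, seed=0):
    """trials: list of (n_samples, n_features) feature arrays, one per trial."""
    pca = PCA(n_components=n_pca).fit(np.vstack(trials))
    Z = [pca.transform(x) for x in trials]
    ubm = GaussianMixture(n_components=n_comp, covariance_type="diag",
                          random_state=seed).fit(np.vstack(Z))   # EM-trained UBM
    S = np.array([make_supervector(ubm, z) for z in Z])
    return pca, ubm, SVC(kernel="linear").fit(S, labels)
```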
Future Directions of Electromagnetic Methods for Hydrocarbon Applications
NASA Astrophysics Data System (ADS)
Strack, K. M.
2014-01-01
For hydrocarbon applications, seismic exploration is the workhorse of the industry. Only in the borehole do electromagnetic (EM) methods play a dominant role, as they are mostly used to determine oil reserves and to distinguish water from oil-bearing zones. Over the past 60 years there have been several periods of increased interest in EM. This interest grew with the success of the marine EM industry, and electromagnetics in general is now being considered for many new applications. The classic electromagnetic methods are borehole, onshore and offshore, and airborne EM methods. Airborne is covered elsewhere (see Smith, this issue). Marine EM material is readily available from service company Web sites, and here I will only mention some future technical directions that are visible. The marine EM success is being carried back to the onshore market, fueled by geothermal and unconventional hydrocarbon applications. Oil companies are listening to pro-EM arguments but are still hesitant to go through the learning exercises as early adopters. In particular, the huge business drivers of shale hydrocarbons and reservoir monitoring will bring markets many times bigger than the entire marine EM market. Additional applications include support for seismic operations, sub-salt, and sub-basalt, all areas where seismic exploration is costly and inefficient. Integration with EM will allow novel seismic methods to be applied. In the borehole, anisotropy measurements, now possible, form the missing link between surface measurements and ground truth. Three-dimensional (3D) induction measurements are readily available from several logging contractors. The trend toward logging-while-drilling measurements will continue with many more EM technologies, and efforts to control the drill bit while drilling, including looking ahead of and around the bit, are ongoing. Overall, the market for electromagnetics is increasing, and demand for EM-capable professionals will continue. The emphasis will be more on application and data integration (bottom-line value increase) and less on EM technology and modeling exercises.
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2017-01-15
Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.
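The clustering stage maps directly onto standard tooling. A hedged sketch of that stage only (the MUSIC pseudospectrum feature extraction and EWT validation are omitted; BIC-based model selection is one common way to choose the number of functional groups, not necessarily the authors'):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_neurons(features, k_range=range(2, 8), seed=0):
    """Fit EM-trained Gaussian mixtures over a range of K; keep the best by BIC."""
    models = [GaussianMixture(n_components=k, random_state=seed).fit(features)
              for k in k_range]
    best = min(models, key=lambda m: m.bic(features))
    return best.predict(features), best   # per-neuron cluster labels, chosen model
```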
Smith, Cory M; Housh, Terry J; Hill, Ethan C; Keller, Josh L; Johnson, Glen O; Schmidt, Richard J
2018-06-01
The purposes of this study were to examine: 1) the potential muscle-specific differences in voluntary electromechanical delay (EMD) and relaxation electromechanical delay (R-EMD), and 2) the effects of intensity on EMD and R-EMD during step incremental isometric muscle actions from 10 to 100% of maximal voluntary isometric contraction (MVIC). EMD and R-EMD measures were calculated from simultaneous assessments of electromyography, mechanomyography, and force production from the vastus lateralis (VL), vastus medialis (VM), and rectus femoris (RF) during step isometric muscle actions. There were no differences between the VL, VM, and RF for voluntary EMD E-M (delay between the onsets of the electromyographic and mechanomyographic signals), EMD M-F (delay between the onset of the mechanomyographic signal and force production), or EMD E-F (delay between the onset of the electromyographic signal and force production), or for R-EMD E-M (delay between the cessations of the electromyographic and mechanomyographic signals), R-EMD M-F (delay between the cessation of the mechanomyographic signal and force cessation), or R-EMD E-F (delay between the cessation of the electromyographic signal and force cessation) at any intensity. All EMD and R-EMD measures decreased with increases in intensity. The relative contributions from EMD E-M and EMD M-F to EMD E-F, as well as from R-EMD E-M and R-EMD M-F to R-EMD E-F, remained similar across all intensities. The superficial muscles of the quadriceps femoris shared similar EMD and R-EMD measurements.
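A hedged sketch of one way such onset-to-onset delays can be computed (the study's exact detection criteria are not reproduced here; the 3-SD threshold and baseline window are illustrative choices):

```python
import numpy as np

def onset_time(signal, t, baseline_n=100, k=3.0):
    """Time of the first sample whose rectified value exceeds baseline + k*SD."""
    x = np.abs(signal)
    thr = x[:baseline_n].mean() + k * x[:baseline_n].std()
    return t[np.argmax(x > thr)]

def emd_measures(t, emg, mmg, force):
    t_e, t_m, t_f = (onset_time(s, t) for s in (emg, mmg, force))
    return {"EMD E-M": t_m - t_e, "EMD M-F": t_f - t_m, "EMD E-F": t_f - t_e}
```

Cessation-based R-EMD measures follow the same pattern using the last, rather than first, threshold crossing.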
Ali, Sikander; Nawaz, Wajeeha
2016-08-01
The present research work is concerned with the biotransformation of L-tyrosine to dopamine (DA) by calcium alginate entrapped conidiospores of a mutant strain of Aspergillus oryzae. Different strains of A. oryzae were isolated from soil. Out of 13 isolated strains, isolate-2 (I-2) was found to be a better DA producer. The wild-type I-2 was chemically improved by treating it with different concentrations of ethyl methanesulfonate (EMS). Among seven mutant variants, EMS-6, exhibiting maximal DA activity of 43 μg/ml, was selected. The strain was further exposed to L-cysteine HCl to make it resistant against reversion and environmental stress. The conidiospores of the selected mutant variant A. oryzae EMS-6 strain were entrapped in calcium alginate beads. Different parameters for immobilization were investigated. The activity was further improved from 44 to 62 μg/ml under optimized conditions (1.5 % sodium alginate, 2 ml inoculum, and 2 mm bead size). The best resistant mutant variant exhibited an over threefold increase in DA activity (62 μg/ml) compared with wild-type I-2 (21 μg/ml) in the reaction mixture. From the results presented in the study, it was observed that high titers of DA activity in vitro could effectively be achieved by EMS-induced mutagenesis of a filamentous fungus culture.
Analytical Methods for Interconnection | Distributed Generation
ANALYSIS Program Lead: Kristen.Ardani@nrel.gov, 303-384-4641. Accurately and quickly defining the effects of [...] designed to accommodate voltage rises, bi-directional power flows, and other effects caused by distributed [...]
NWTC Engineer Wins Prestigious International Electrotechnical Commission
IEC TC88, the technical committee responsible for writing the international standards for wind energy, [...] levels of safety and by defining test methods that provide high-quality, reproducible test results."
High Performance Structures and Materials
advanced simulation and optimization methods that can be used during the early design stages of innovative [...] Development of Simulation Model Validation Framework for RBDO, sponsored by U.S. Army TARDEC.
Romero-Brey, Inés; Bartenschlager, Ralf
2015-01-01
As obligate intracellular parasites, viruses need to hijack their cellular hosts and reprogram their machineries in order to replicate their genomes and produce new virions. For the direct visualization of the different steps of a viral life cycle (attachment, entry, replication, assembly and egress), electron microscopy (EM) methods are extremely helpful. While conventional EM has given important information about virus-host cell interactions, the development of three-dimensional EM (3D-EM) approaches provides unprecedented insights into how viruses remodel the intracellular architecture of the host cell. In recent years, several 3D-EM methods have been developed. Here we provide a description of the main approaches and examples of innovative applications. PMID:26633469
Wind Curtailment and the Value of Transmission under a 2050 Wind Vision
dispatches each generating unit in the geographical footprint using the least-cost method based on many inputs, just as the Wind Vision study did, in a somewhat different geographical distribution due to data [...] distributed fairly well throughout the western U.S. The map shows a somewhat different story. [...]
Linux VPN Set Up | High-Performance Computing | NREL
methods to connect to NREL's HPC systems via the HPC VPN: one using a simple command line, and a second [...] UserID in place of the one in the example image. Connection name: hpcvpn. Gateway: hpcvpn.nrel.gov. [...] the hpcvpn option as seen in the following screenshot. NetworkManager will present you with [...]
Using FastX on the Peregrine System | High-Performance Computing | NREL
with full 3D hardware acceleration. The traditional method of displaying graphics applications to a remote X server (indirect rendering) supports 3D hardware acceleration, but this approach causes all of the OpenGL commands and 3D data to be sent over the network to be rendered on the client machine. With
able to represent the anatomy of a frog in a computer in 3D space in such a way that a high school [...] allows a few masks to be loaded at once and enables different masks to be exclusive. When a mask becomes [...] it cannot be drawn on. This method avoids overlapping masks when the objects being segmented are adjacent to each [...]
Melvin Schwartz and the Discovery of the Muon Neutrino
Schwartz was the co-winner of the 1988 Nobel Prize in Physics "for the neutrino beam method and the [...]" physics. He did so in 1991, returning to Brookhaven Lab as Associate Director for High Energy and Nuclear Physics. ... Melvin Schwartz was a member of the National Academy of Sciences and a fellow of the American [...]
Concentrating Solar Power Projects - La Africana | Concentrating Solar
Location: Posadas (Córdoba)
Owner(s): Ortiz/TSK/Magtel (100%)
Technology: Parabolic trough
Solar-Field Outlet Temp: 393°C
Solar-Field Temp Difference: 100°C
Power Block Turbine Capacity (Gross): 50.0 MW
Turbine Capacity (Net): 50.0 MW
Output Type: Steam Rankine
Cooling Method: Wet cooling
Thermal [...]
NREL Research Teams Win Three R&D 100 Awards
Research Teams Win Three R&D 100 Awards. Golden, Colo., Oct. 4, 2001 - Since 1982, the U.S. [...] research teams have brought that total number of awards to 31. The 2001 awards are for a solar cell that [...] method involves applying a current to the battery for five seconds to overcharge the battery slightly [...]
NASA Astrophysics Data System (ADS)
Grecu, M.; Tian, L.; Heymsfield, G. M.
2017-12-01
A major challenge in deriving accurate estimates of the physical properties of falling snow particles from single-frequency space- or airborne radar observations is that snow particles exhibit a large variety of shapes, and their electromagnetic scattering characteristics are highly dependent on these shapes. Triple-frequency (Ku-Ka-W) radar observations are expected to facilitate the derivation of more accurate snow estimates because specific snow particle shapes tend to have specific signatures in the associated two-dimensional dual-frequency-ratio (DFR) space. However, the derivation of accurate snow estimates from triple-frequency radar observations is by no means a trivial task. This is because the radar observations can be subject to non-negligible attenuation (especially at W-band when super-cooled water is present), which may significantly impact the interpretation of the information in the DFR space. Moreover, the electromagnetic scattering properties of snow particles are computationally expensive to derive, which makes the derivation of reliable parameterizations usable in estimation methodologies challenging. In this study, we formulate a two-step Expectation-Maximization (EM) methodology to derive accurate snow estimates in Extratropical Cyclones (ETCs) from triple-frequency airborne radar observations. The Expectation (E) step consists of a least-squares triple-frequency estimation procedure applied with given assumptions regarding the relationships between the density of snow particles and their sizes, while the Maximization (M) step consists of the optimization of the assumptions used in step E. The electromagnetic scattering properties of snow particles are derived using the Rayleigh-Gans approximation. The methodology is applied to triple-frequency radar observations collected during the Olympic Mountains Experiment (OLYMPEX). Results show that snowfall estimates above the freezing level in ETCs, consistent with the triple-frequency radar observations as well as with independent rainfall estimates below the freezing level, may be derived using the EM methodology formulated in the study.
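The E/M alternation described above can be illustrated with a deliberately simplified stand-in: two "channels" constrain a hidden per-profile state, and a single shared parameter plays the role of the assumed density-size relationship. Everything below (model form, numbers, names) is illustrative, not the authors' retrieval.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per profile, two "radar channels" constrain a hidden state x_i (snow
# amount); a shared parameter theta stands in for the assumed
# density-size relation that the M-step re-optimizes.
n, true_theta = 500, 2.0
x = rng.uniform(0.5, 3.0, n)
o1 = x + rng.normal(0, 0.1, n)               # channel 1 (e.g., Ku-Ka DFR)
o2 = true_theta * x + rng.normal(0, 0.1, n)  # channel 2 (e.g., Ka-W DFR)

theta = 0.5                                   # initial (wrong) assumption
for _ in range(50):
    # E-step: per-profile least-squares state under the fixed assumption
    xhat = (o1 + theta * o2) / (1.0 + theta**2)
    # M-step: re-fit the shared assumption to the retrieved states
    theta = np.sum(o2 * xhat) / np.sum(xhat**2)

print(f"recovered theta ~ {theta:.3f} (true 2.0)")
```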
Tabe-Bordbar, Shayan; Marashi, Sayed-Amir
2013-12-01
Elementary modes (EMs) are steady-state metabolic flux vectors with a minimal set of active reactions. Each EM corresponds to a metabolic pathway. Therefore, studying EMs is helpful for analyzing the production of biotechnologically important metabolites. However, memory requirements for computing EMs may hamper their applicability, as in most genome-scale metabolic models no EMs can be computed due to running out of memory. In this study, we present a method for computing randomly sampled EMs. In this approach, a network reduction algorithm based on flux balance methods is used for EM computation. We show that this approach can be used to recover EMs in medium- and genome-scale metabolic network models, while the EMs are sampled in an unbiased way. The applicability of such results is shown by computing "estimated" control-effective flux values in the Escherichia coli metabolic network.
Identification of the focal plane wavefront control system using E-M algorithm
NASA Astrophysics Data System (ADS)
Sun, He; Kasdin, N. Jeremy; Vanderbei, Robert
2017-09-01
In a typical focal plane wavefront control (FPWC) system, such as the adaptive optics system of NASA's WFIRST mission, the efficient controllers and estimators in use are usually model-based. As a result, the modeling accuracy of the system influences the ultimate performance of the control and estimation. Currently, a linear state space model is used and calculated based on lab measurements using Fourier optics. Although the physical model is clearly defined, it is usually biased due to incorrect distance measurements, imperfect diagnoses of the optical aberrations, and our lack of knowledge of the deformable mirrors (actuator gains and influence functions). In this paper, we present a new approach for measuring/estimating the linear state space model of a FPWC system using the expectation-maximization (E-M) algorithm. Simulation and lab results from Princeton's High Contrast Imaging Lab (HCIL) show that the E-M algorithm handles both amplitude and phase errors well and accurately recovers the system. Using the recovered state space model, the controller creates dark holes more quickly. The final accuracy of the model depends on the amount of data used for learning.
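The core idea, fitting a linear state-space model to measured data with EM, can be sketched with the third-party pykalman package; the toy system, dimensions, and noise levels below are assumptions rather than the HCIL model, and output-only identification recovers the matrices only up to a similarity transform.

```python
import numpy as np
from pykalman import KalmanFilter  # third-party: pip install pykalman

# Simulate measurements from a "true" linear state-space system that a
# lab model would only approximate (toy 2-state system, made-up numbers).
rng = np.random.default_rng(1)
A = np.array([[0.95, 0.1], [0.0, 0.9]])
C = np.array([[1.0, 0.5]])
x, ys = np.zeros(2), []
for _ in range(300):
    x = A @ x + rng.normal(0, 0.05, 2)
    ys.append(C @ x + rng.normal(0, 0.05, 1))
ys = np.asarray(ys)

# E-M system identification: start from a deliberately wrong model and
# let EM re-estimate the state-space matrices from the data alone.
kf = KalmanFilter(n_dim_state=2, n_dim_obs=1,
                  transition_matrices=np.eye(2))
kf = kf.em(ys, n_iter=20,
           em_vars=['transition_matrices', 'observation_matrices',
                    'transition_covariance', 'observation_covariance'])
print(kf.transition_matrices)  # refined estimate of A (up to similarity)
```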
Zhu, Yanan; Ouyang, Qi; Mao, Youdong
2017-07-21
Single-particle cryo-electron microscopy (cryo-EM) has become a mainstream tool for the structural determination of biological macromolecular complexes. However, high-resolution cryo-EM reconstruction often requires hundreds of thousands of single-particle images. Particle extraction from experimental micrographs thus can be laborious and presents a major practical bottleneck in cryo-EM structural determination. Existing computational methods for particle picking often use low-resolution templates for particle matching, making them susceptible to reference-dependent bias. It is critical to develop a highly efficient template-free method for the automatic recognition of particle images from cryo-EM micrographs. We developed a deep learning-based algorithmic framework, DeepEM, for single-particle recognition from noisy cryo-EM micrographs, enabling automated particle picking, selection and verification in an integrated fashion. The kernel of DeepEM is built upon a convolutional neural network (CNN) composed of eight layers, which can be recursively trained to be highly "knowledgeable". Our approach exhibits improved performance and accuracy when tested on the standard KLH dataset. Application of DeepEM to several challenging experimental cryo-EM datasets demonstrated its ability to avoid the selection of unwanted particles and non-particles even when true particles contain fewer features. The DeepEM methodology, derived from a deep CNN, allows automated particle extraction from raw cryo-EM micrographs in the absence of a template. It demonstrates improved performance, objectivity and accuracy. Application of this novel method is expected to free the labor involved in single-particle verification, significantly improving the efficiency of cryo-EM data processing.
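The paper describes DeepEM only as an eight-layer CNN, so the sketch below is a generic binary particle/non-particle patch classifier in PyTorch with assumed patch and layer sizes; it shows the shape of such a model, not the published architecture.

```python
import torch
import torch.nn as nn

class ParticleCNN(nn.Module):
    """Generic particle vs. non-particle patch classifier. All layer
    sizes and the 64-pixel patch size are assumptions for illustration."""
    def __init__(self, patch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (patch // 8) ** 2, 128), nn.ReLU(),
            nn.Linear(128, 2),   # scores for particle / non-particle
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = ParticleCNN()
scores = model(torch.randn(8, 1, 64, 64))  # 8 grayscale micrograph patches
print(scores.shape)                         # torch.Size([8, 2])
```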
NASA Astrophysics Data System (ADS)
Thiel, Stephan
2017-09-01
Hydraulic fracking is a geoengineering application designed to enhance subsurface permeability to maximize fluid and gas flow. Fracking is commonly used in enhanced geothermal systems (EGS), tight shale gas, and coal seam gas (CSG) plays, and in CO2 storage scenarios. Common monitoring methods include microseismics, mapping with great resolution the small earthquakes associated with fracture opening at reservoir depth. Recently, electromagnetic (EM) methods have been employed in the field to provide an alternative way of directly detecting fluids as they are pumped into the ground. Surface magnetotelluric (MT) measurements across EGS show subtle yet detectable changes during fracking, derived from time-lapse MT deployments. Changes are directional and are predominantly aligned with the current stress field, which dictates the preferential fracture orientation, supported by microseismic monitoring of frack-related earthquakes. Modeling studies prior to injection are crucial for survey design and for assessing the feasibility of monitoring fracks. In particular, knowledge of sediment thickness plays a fundamental role in resolving subtle changes. Numerical forward modeling studies clearly favor some form of downhole measurement to enhance sensitivity; however, these have yet to be conclusively demonstrated in the field. Nevertheless, real surface-based monitoring examples do not necessarily replicate the expected magnitude of change derived from forward modeling, and in some cases from EGS and CSG systems the changes are larger than expected. It appears the injected fluid volume alone cannot account for the surface change in resistivity; the connectedness of the pore space is also significantly enhanced, and the response is nonlinear. Recent numerical studies emphasize the importance of the percolation threshold of the fracture network for both electrical resistivity and permeability, which may play an important role in accounting for temporal changes in surface EM measurements during hydraulic fracking.
Novel method for evaluation of eye movements in patients with narcolepsy.
Christensen, Julie A E; Kempfner, Lykke; Leonthin, Helle L; Hvidtfelt, Mathias; Nikolic, Miki; Kornum, Birgitte Rahbek; Jennum, Poul
2017-05-01
Narcolepsy causes abnormalities in the control of wake-sleep, non-rapid-eye-movement (non-REM) sleep and REM sleep, which includes specific eye movements (EMs). In this study, we aim to evaluate EM characteristics in narcolepsy as compared to controls using an automated detector. We developed a data-driven method to detect EMs during sleep based on two EOG signals recorded as part of a polysomnography (PSG). The method was optimized using the manually scored hypnograms from 36 control subjects. The detector was applied on a clinical sample with subjects suspected for central hypersomnias. Based on PSG, multiple sleep latency test and cerebrospinal fluid hypocretin-1 measures, they were divided into clinical controls (N = 20), narcolepsy type 2 (NT2, N = 19), and narcolepsy type 1 (NT1, N = 28). We investigated the distribution of EMs across sleep stages and cycles. NT1 patients had significantly less EMs during wake, N1, and N2 sleep and more EMs during REM sleep compared to clinical controls, and significantly less EMs during wake and N1 sleep compared to NT2 patients. Furthermore, NT1 patients showed less EMs during NREM sleep in the first sleep cycle and more EMs during NREM sleep in the second sleep cycle compared to clinical controls and NT2 patients. NT1 patients show an altered distribution of EMs across sleep stages and cycles compared to NT2 patients and clinical controls, suggesting that EMs are directly or indirectly controlled by the hypocretinergic system. A data-driven EM detector may contribute to the evaluation of narcolepsy and other disorders involving the control of EMs. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wels, Michael; Zheng, Yefeng; Huber, Martin; Hornegger, Joachim; Comaniciu, Dorin
2011-06-01
We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average Dice coefficients of 0.93 ± 0.03 (WM) and 0.90 ± 0.05 (GM) on simulated mono-spectral and 0.94 ± 0.02 (WM) and 0.92 ± 0.04 (GM) on simulated multi-spectral data from the BrainWeb repository. The scores are 0.81 ± 0.09 (WM) and 0.82 ± 0.06 (GM) and 0.87 ± 0.05 (WM) and 0.83 ± 0.12 (GM) for the two collections of real-world data sets—consisting of 20 and 18 volumes, respectively—provided by the Internet Brain Segmentation Repository.
Concentrating Solar Power Projects - Lebrija 1 | Concentrating Solar Power
Turbine Capacity (Net): 50.0 MW
Turbine Capacity (Gross): 50.0 MW
Status: Operational
Start Year: 2011
[...]: Solel
Heat-Transfer Fluid Type: Therminol VP1
Solar-Field Outlet Temp: 395°C
Power Cycle Pressure: 100.0 bar
Cooling Method: [...]
NASA Astrophysics Data System (ADS)
Karamat, Muhammad I.; Farncombe, Troy H.
2015-10-01
Simultaneous multi-isotope Single Photon Emission Computed Tomography (SPECT) imaging has a number of applications in cardiac, brain, and cancer imaging. The major concern, however, is the significant crosstalk contamination due to photon scatter between the different isotopes. The current study focuses on a method of crosstalk compensation between two isotopes in simultaneous dual-isotope SPECT acquisition, applied to cancer imaging using 99mTc and 111In. We have developed an iterative image reconstruction technique that simulates the photon down-scatter from one isotope into the acquisition window of a second isotope. Our approach uses an accelerated Monte Carlo (MC) technique for the forward projection step in an iterative reconstruction algorithm. The MC-estimated scatter contamination of a radionuclide contained in a given projection view is then used to compensate for the photon contamination in the acquisition window of the other nuclide. We use a modified ordered-subset expectation maximization (OS-EM) algorithm, named simultaneous ordered-subset expectation maximization (Sim-OSEM), to perform this step. We have undertaken a number of simulation tests and phantom studies to verify this approach. The proposed reconstruction technique was also evaluated by reconstruction of experimentally acquired phantom data. Reconstruction using Sim-OSEM showed very promising results in terms of contrast recovery and uniformity of the object background compared with alternative reconstruction methods implementing alternative scatter correction schemes (i.e., triple energy window or separately acquired projection data). In this study the evaluation is based on the quality of the reconstructed images and the activity estimated using Sim-OSEM. To quantify the possible improvement in spatial resolution and signal-to-noise ratio (SNR) observed in this study, further simulation and experimental studies are required.
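The OS-EM family shares one multiplicative update: forward-project the current image, compare with the measured counts, and back-project the ratio. A minimal MLEM iteration (single subset, dense system matrix, no Monte Carlo scatter term, so heavily simplified relative to Sim-OSEM) might look like:

```python
import numpy as np

def mlem(A, counts, n_iter=200):
    """Minimal MLEM reconstruction. A is the (n_bins, n_voxels) system
    matrix, counts the measured projections. Sim-OSEM additionally splits
    projections into ordered subsets and adds a Monte Carlo scatter term
    to the forward projection; both are omitted here."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                     # back-projection of ones
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        ratio = counts / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny smoke test with a random, well-conditioned toy system
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (40, 10))
x_true = rng.uniform(1.0, 5.0, 10)
x_rec = mlem(A, A @ x_true)
print(np.round(x_rec / x_true, 2))           # ratios approach 1
```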
Body composition of university students by hydrostatic weighing and skinfold measurement.
Jürimäe, T; Jagomägi, G; Lepp, T
1992-12-01
The body composition of 124 male and 70 female Tartu University students was measured by three different methods: hydrostatic weighing at maximal expiration, hydrostatic weighing at maximal inspiration, and subcutaneous fat thickness measurements. Our results show that the proposed body density measuring method at maximal expiration is simple, reliable and applicable not only in indoor swimming pools but also in field conditions. The second new hydrostatic weighing apparatus, in which body density is measured at maximal inspiration, is more comfortable for the subjects. The mean body density of males was somewhat higher when measured at maximal inspiration (1.066 +/- 0.012 g.ml-1) than when measured at maximal expiration (1.063 +/- 0.009 g.ml-1, p < 0.05). For females, on the contrary, the maximal expiration method (1.044 +/- 0.010 g.ml-1) yielded a higher body density value than the maximal inspiration method (1.040 +/- 0.011 g.ml-1, p > 0.05). The body fat percentage measured by skinfold thickness correlated significantly with the body fat percentage calculated from body density at maximal expiration (males r = 0.420, females r = 0.531) and inspiration (males r = 0.507, females r = 0.663). We conclude that the two presented methods of measuring body density offer new possibilities for densitometric analysis without the need for expensive laboratory equipment.
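Densitometric body fat estimates such as these are conventionally obtained from body density via a two-compartment formula. Assuming the commonly used Siri (1961) equation (the abstract does not state which conversion the authors applied), the reported mean male densities translate as follows:

```python
def siri_fat_percent(density_g_ml):
    """Siri (1961) two-compartment model: %fat = 495/density - 450."""
    return 495.0 / density_g_ml - 450.0

# Mean male densities reported above
print(round(siri_fat_percent(1.066), 1))  # inspiration: ~14.4 %fat
print(round(siri_fat_percent(1.063), 1))  # expiration:  ~15.7 %fat
```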
Chitosan A was found to remove up to 3.4 log10 E. coli, a reduction of 99.96%. The removal was dose dependent; up to a point, removal of E. coli increases with increasing dose. Removal of E. coli ranged from 0% removal by a dose of 1g/L to...
Dinamarca, M Alejandro; Ibacache-Quiroga, C; Baeza, P; Galvez, S; Villarroel, M; Olivero, P; Ojeda, J
2010-04-01
The immobilization of Pseudomonas stutzeri by adsorption on different inorganic supports was studied in relation to the number of adsorbed cells, metabolic activity and biodesulfurization (BDS). Electrophoretic migration (EM) measurements and the tetrazolium chloride (TTC) method were used to evaluate adsorption and metabolic activity. Results indicate that maximal immobilization was obtained with an initial load of 14 x 10^8 cells mL^-1 for Al and Sep, whereas Ti required 20 x 10^8 cells mL^-1. The highest interaction was observed in the P. stutzeri/Si and P. stutzeri/Sep biocatalysts. The IEP values and metabolic activities indicate that P. stutzeri changes the surface of the supports and maintains metabolic activity. A direct relation between BDS activity and the adsorption capacity of the bacterial cells was observed at the adsorption/desorption equilibrium level. The biomodification of inorganic supports by the adsorption process increases the bioavailability of sulphur substrates for bacterial cells, improving BDS activity. Copyright 2009 Elsevier Ltd. All rights reserved.
A Zero- and K-Inflated Mixture Model for Health Questionnaire Data
Finkelman, Matthew D.; Green, Jennifer Greif; Gruber, Michael J.; Zaslavsky, Alan M.
2011-01-01
In psychiatric assessment, Item Response Theory (IRT) is a popular tool to formalize the relation between the severity of a disorder and associated responses to questionnaire items. Practitioners of IRT sometimes make the assumption of normally distributed severities within a population; while convenient, this assumption is often violated when measuring psychiatric disorders. Specifically, there may be a sizable group of respondents whose answers place them at an extreme of the latent trait spectrum. In this article, a zero- and K-inflated mixture model is developed to account for the presence of such respondents. The model is fitted using an expectation-maximization (E-M) algorithm to estimate the percentage of the population at each end of the continuum, concurrently analyzing the remaining “graded component” via IRT. A method to perform factor analysis for only the graded component is introduced. In assessments of oppositional defiant disorder and conduct disorder, the zero- and K-inflated model exhibited better fit than the standard IRT model. PMID:21365673
Blaauw, Duane; Ssengooba, Freddie
2018-01-01
Background: Improving the delivery of emergency obstetric and newborn care (EmONC) remains critical in addressing the direct causes of maternal mortality. United Nations (UN) agencies have promoted standard methods for evaluating the availability of EmONC facilities, although modifications have been proposed by others. This study presents an assessment of the preparedness of public health facilities to provide EmONC using these methods in one South African district with a persistently high maternal mortality ratio. Methods: Data collection took place in the final quarter of 2014. Cross-sectional surveys were conducted to classify the 7 hospitals and 8 community health centres (CHCs) in the district as either basic EmONC (BEmONC) or comprehensive EmONC (CEmONC) facilities using the UN EmONC signal functions. The required density of EmONC facilities was calculated using UN norms. We also assessed the availability of EmONC personnel, resuscitation equipment, drugs, fluids, and protocols at each facility. The workload of skilled EmONC providers at hospitals and CHCs was compared. Results: All 7 hospitals in the district were classified as CEmONC facilities, but none of the 8 CHCs performed all the signal functions required to be classified as BEmONC facilities. UN norms indicated that 25 EmONC facilities were required for the district population, 5 of which should be CEmONCs. None of the facilities had 100% of the items on the EmONC checklists. Hospital midwives attended an average of 36.4±14.3 deliveries each per month compared with only 7.9±3.2 for CHC midwives (p<0.001). Conclusions: The analysis indicated a shortfall of EmONC facilities in the district. Full EmONC services were centralised to hospitals to assure patient safety, even though national policy guidelines sanction more decentralisation to CHCs. Studies measuring EmONC availability need to consider facility opening hours, capacity and staffing in addition to the demonstrated performance of signal functions. PMID:29596431
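The facility requirement quoted above is consistent with the UN density benchmark as it is commonly stated: at least five EmONC facilities per 500,000 population, at least one of them comprehensive. Under that assumption (the abstract itself does not spell out the norm or the district population), the figures imply a catchment of roughly 2.5 million people:

```python
# UN EmONC density benchmark (as commonly stated, an assumption here):
# >= 5 EmONC facilities per 500,000 population, >= 1 comprehensive.
required_total, required_comprehensive = 25, 5
implied_population = required_total // 5 * 500_000
print(implied_population)   # 2,500,000 people
```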
Computational Software for Fitting Seismic Data to Epidemic-Type Aftershock Sequence Models
NASA Astrophysics Data System (ADS)
Chu, A.
2014-12-01
Modern earthquake catalogs are often analyzed using spatial-temporal point process models such as the epidemic-type aftershock sequence (ETAS) models of Ogata (1998). My work introduces software to implement two of the ETAS models described in Ogata (1998). To find the Maximum Likelihood Estimates (MLEs), my software provides estimates of the homogeneous background rate parameter and the temporal and spatial parameters that govern triggering effects by applying the Expectation-Maximization (EM) algorithm introduced in Veen and Schoenberg (2008). Although other computer programs exist for similar data modeling purposes, the EM algorithm has the benefits of stability and robustness (Veen and Schoenberg, 2008). Spatial regions that are very long and narrow cause difficulties in optimization convergence, and flat or multi-modal log-likelihood functions raise similar issues. My program uses a robust method of presetting a parameter to overcome this non-convergence issue. In addition to model fitting, the software is equipped with useful tools for examining model fitting results, for example, visualization of the estimated conditional intensity and estimation of the expected number of triggered aftershocks. A simulation generator is also provided, with flexible spatial shapes that may be defined by the user. This open-source software has a very simple user interface. The user may execute it on a local computer, and the program also has the potential to be hosted online. The Java language is used for the software's core computing part, and an optional interface to the statistical package R is provided.
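For orientation, Ogata's (1998) space-time ETAS model specifies a conditional intensity of the generic form lambda(t, x) = mu + sum over past events of kappa(m_i) g(t - t_i) f(x - x_i). A minimal numpy evaluation with the standard Omori-Utsu time kernel and a power-law spatial kernel is sketched below; the parameter values are illustrative, not estimates.

```python
import numpy as np

def etas_intensity(t, xy, events, mu=1e-4, K=0.05, alpha=1.0,
                   c=0.01, p=1.2, d=0.02, q=1.8, m0=3.0):
    """Space-time ETAS conditional intensity at time t, location xy.
    events: array of rows (t_i, x_i, y_i, m_i). Kernels are the common
    parametric choices; all parameter values are made up."""
    ev = events[events[:, 0] < t]
    if len(ev) == 0:
        return mu
    dt = t - ev[:, 0]
    r2 = (xy[0] - ev[:, 1])**2 + (xy[1] - ev[:, 2])**2
    productivity = K * np.exp(alpha * (ev[:, 3] - m0))
    time_kernel = (p - 1) / c * (1 + dt / c) ** (-p)          # Omori-Utsu
    space_kernel = (q - 1) / (np.pi * d) * (1 + r2 / d) ** (-q)
    return mu + np.sum(productivity * time_kernel * space_kernel)

events = np.array([[0.0, 0.0, 0.0, 5.0],
                   [1.0, 0.1, 0.0, 4.2]])    # (t, x, y, magnitude)
print(etas_intensity(2.0, (0.05, 0.0), events))
```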
Lewiss, Resa E; Chan, Wilma; Sheng, Alexander Y; Soto, Jorge; Castro, Alexandra; Meltzer, Andrew C; Cherney, Alan; Kumaravel, Manickam; Cody, Dianna; Chen, Esther H
2015-12-01
The appropriate selection and accurate interpretation of diagnostic imaging is a crucial skill for emergency practitioners. To date, the majority of the published literature and research on competency assessment comes from the subspecialty of point-of-care ultrasound. A group of radiologists, physicists, and emergency physicians convened at the 2015 Academic Emergency Medicine consensus conference to discuss and prioritize a research agenda related to education, assessment, and competency in ordering and interpreting diagnostic imaging. A set of questions was delineated for the continued development of an educational curriculum on diagnostic imaging for trainees and for competency assessment using specific assessment methods based on current best practices. The research priorities were developed through an iterative consensus-driven process using a modified nominal group technique that culminated in an in-person breakout session. The four recommendations are: 1) develop a diagnostic imaging curriculum for emergency medicine (EM) residency training; 2) develop, study, and validate tools to assess competency in diagnostic imaging interpretation; 3) evaluate the role of simulation in education, assessment, and competency measures for diagnostic imaging; and 4) study the American College of Radiology Appropriateness Criteria, an evidence-based, peer-reviewed resource for determining the use of diagnostic imaging, to maximize its value in EM. In this article, the authors review the supporting reliability and validity evidence and make specific recommendations for future research on the education, competency, and assessment of learning diagnostic imaging. © 2015 by the Society for Academic Emergency Medicine.
PCA based clustering for brain tumor segmentation of T1w MRI images.
Kaya, Irem Ersöz; Pehlivanlı, Ayça Çakmak; Sekizkardeş, Emine Gezmez; Ibrikci, Turgay
2017-03-01
Medical images are huge collections of information that are difficult to store and process, consuming extensive computing time. Therefore, reduction techniques are commonly used as a data pre-processing step to make the image data less complex, so that high-dimensional data can be identified by an appropriate low-dimensional representation. PCA is one of the most popular multivariate methods for data reduction. This paper is focused on clustering T1-weighted MRI images for brain tumor segmentation, with dimension reduction by different common Principal Component Analysis (PCA) algorithms. Our primary aim is to present a comparison between different variations of PCA algorithms on MRIs for two cluster methods. The five most common PCA algorithms, namely conventional PCA, Probabilistic Principal Component Analysis (PPCA), Expectation Maximization Based Principal Component Analysis (EM-PCA), the Generalized Hebbian Algorithm (GHA), and Adaptive Principal Component Extraction (APEX), were applied to reduce dimensionality in advance of two clustering algorithms, K-Means and Fuzzy C-Means. In the study, T1-weighted MRI images of the human brain with brain tumor were used for clustering. In addition to the original size of 512 lines and 512 pixels per line, three more sizes, 256 × 256, 128 × 128 and 64 × 64, were included in the study to examine their effect on the methods. The obtained results were compared in terms of both the reconstruction errors and the Euclidean distance errors among the clustered images containing the same number of principal components. According to the findings, PPCA obtained the best results among all the methods. Furthermore, EM-PCA and PPCA helped the K-Means algorithm accomplish the best clustering performance in the majority of cases, as well as achieving significant results with both clustering algorithms for all sizes of T1w MRI images. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
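The reduce-then-cluster pipeline itself is compact. scikit-learn ships only conventional PCA (none of PPCA, EM-PCA, GHA or APEX), so the sketch below pairs it with K-Means on synthetic pixel features of assumed size; the data and dimensions are illustrative, not the study's images.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy stand-in for an MRI slice: rows are pixels (or patches) described
# by 64 features, drawn from three synthetic clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 1.0, (700, 64)) for m in (0.0, 2.0, 4.0)])

X_red = PCA(n_components=10).fit_transform(X)   # conventional PCA step
labels = KMeans(n_clusters=3, n_init=10,
                random_state=0).fit_predict(X_red)
print(np.bincount(labels))                       # cluster sizes
```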
Fault Identification by Unsupervised Learning Algorithm
NASA Astrophysics Data System (ADS)
Nandan, S.; Mannu, U.
2012-12-01
Contemporary fault identification techniques predominantly rely on the surface expression of the fault. This biased observation is inadequate to yield detailed fault structures in areas with surface cover, such as cities, deserts and vegetation, or to capture changes in fault patterns with depth. Furthermore, it is difficult to estimate fault structures that do not generate any surface rupture. Many disastrous events have been attributed to these blind faults. Faults and earthquakes are very closely related, as earthquakes occur on faults and faults grow by accumulation of coseismic rupture. For better seismic risk evaluation, it is imperative to recognize and map these faults. We implement a novel approach to identify seismically active fault planes from a three-dimensional hypocenter distribution by making use of unsupervised learning algorithms. We employ the K-means clustering algorithm and the Expectation Maximization (EM) algorithm, modified to identify planar structures in the spatial distribution of hypocenters after filtering out isolated events. We examine the difference between the faults reconstructed by deterministic assignment in K-means and probabilistic assignment in the EM algorithm. The method is conceptually identical to the methodologies developed by Ouillon et al. (2008, 2010) and has been extensively tested on synthetic data. We determined the sensitivity of the methodology to uncertainties in hypocenter location, density of clustering and cross-cutting fault structures. The method has been applied to datasets from two contrasting regions. While Kumaon Himalaya is a convergent plate boundary, Koyna-Warna lies in the middle of the Indian Plate but has a history of triggered seismicity. The reconstructed faults were validated by examining the orientation of mapped faults and the focal mechanisms of these events determined through waveform inversion. The reconstructed faults could be used to resolve the fault plane ambiguity in focal mechanism determination and to constrain fault orientations for finite source inversions. The faults produced by the method exhibited good correlation with the fault planes obtained from focal mechanism solutions and previously mapped faults.
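A minimal version of the probabilistic branch can be built from a Gaussian mixture fitted by EM: planar hypocenter clusters appear as components whose flattest covariance direction approximates the fault normal. The synthetic faults and all numbers below are illustrative, not the authors' modified algorithm.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def fault(n, normal, offset):
    """Synthetic hypocenters on the plane x.normal = offset, plus noise."""
    pts = rng.uniform(-5, 5, (n, 3))
    normal = np.asarray(normal, float) / np.linalg.norm(normal)
    pts -= np.outer(pts @ normal - offset, normal)  # project onto plane
    return pts + rng.normal(0, 0.1, (n, 3))         # location uncertainty

X = np.vstack([fault(300, (1, 0.2, 0), 0.0),
               fault(300, (0, 1, 0.3), 2.0)])

# EM with full covariances: the eigenvector of the smallest covariance
# eigenvalue of each component estimates that fault's plane normal.
gm = GaussianMixture(n_components=2, covariance_type="full",
                     random_state=0).fit(X)
for cov in gm.covariances_:
    w, v = np.linalg.eigh(cov)
    print(np.round(v[:, 0], 2))   # estimated normal (sign is arbitrary)
```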
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Victoria; Kishan, Amar U.; Cao, Minsong
2014-03-15
Purpose: To demonstrate a new method of evaluating the dose response of treatment-induced lung radiographic injury post-SBRT (stereotactic body radiotherapy) treatment and the discovery of bimodal dose behavior within clinically identified injury volumes. Methods: Follow-up CT scans at 3, 6, and 12 months were acquired from 24 patients treated with SBRT for stage-1 primary lung cancers or oligometastatic lesions. Injury regions in these scans were propagated to the planning CT coordinates by performing deformable registration of the follow-ups to the planning CTs. A bimodal behavior was repeatedly observed from the probability distribution for dose values within the deformed injury regions. Based on a mixture-Gaussian assumption, an Expectation-Maximization (EM) algorithm was used to obtain characteristic parameters for such a distribution. Geometric analysis was performed to interpret these parameters and infer the critical dose level that is potentially inductive of post-SBRT lung injury. Results: The Gaussian mixture obtained from the EM algorithm closely approximates the empirical dose histogram within the injury volume with good consistency. The average Kullback-Leibler divergence values between the empirical differential dose volume histogram and the EM-obtained Gaussian mixture distribution were calculated to be 0.069, 0.063, and 0.092 for the 3, 6, and 12 month follow-up groups, respectively. The lower Gaussian component was located at approximately 70% of the prescription dose (35 Gy) for all three follow-up time points. The higher Gaussian component, contributed by the dose received by the planning target volume, was located at around 107% of the prescription dose. Geometrical analysis suggests the mean of the lower Gaussian component, located at 35 Gy, as a possible indicator of a critical dose that induces lung injury after SBRT. Conclusions: An innovative and improved method for analyzing the correspondence between lung radiographic injury and SBRT treatment dose has been demonstrated. Bimodal behavior was observed in the dose distribution of lung injury after SBRT. Novel statistical and geometrical analysis has shown that the systematically quantified low-dose peak at approximately 35 Gy, or 70% of the prescription dose, is a good indication of a critical dose for injury. The determined critical dose of 35 Gy resembles the critical dose volume limit of 30 Gy for the ipsilateral bronchus in RTOG 0618 and results from previous studies. The authors seek to further extend this improved analysis method to a larger cohort to better understand the interpatient variation in radiographic lung injury dose response post-SBRT.
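The central fitting step reduces to a two-component Gaussian mixture estimated by EM on dose samples from the injury volume. The sketch below simulates such a sample around the two reported modes (70% and 107% of a 50 Gy prescription) and recovers them; the sample itself is synthetic, not patient data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic dose samples inside an "injury volume": one mode near 35 Gy
# (70% of a 50 Gy prescription), one near 53.5 Gy (107%); spreads are
# made up for illustration.
rng = np.random.default_rng(0)
dose = np.concatenate([rng.normal(35.0, 4.0, 2000),
                       rng.normal(53.5, 3.0, 1500)])

gm = GaussianMixture(n_components=2, random_state=0).fit(dose.reshape(-1, 1))
means = np.sort(gm.means_.ravel())
print(np.round(means, 1))   # lower mean ~35 Gy flags the critical dose
```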
NASA Astrophysics Data System (ADS)
Nishino, Hitoshi; Rajpoot, Subhash
2016-05-01
We present electric-magnetic (EM)-duality formulations for non-Abelian gauge groups with $N=1$ supersymmetry in $D=3+3$ and $5+5$ space-time dimensions. We show that these systems generate self-dual $N=1$ supersymmetric Yang-Mills (SDSYM) theory in $D=2+2$. For an $N=2$ supersymmetric EM-dual system in $D=3+3$, we have the Yang-Mills multiplet $(A_\mu{}^I, \lambda_A{}^I)$ and a Hodge-dual multiplet $(B_{\mu\nu\rho}{}^I, \chi_A{}^I)$, with auxiliary tensors $C_{\mu\nu\rho\sigma}{}^I$ and $K_{\mu\nu}$. Here, $I$ is the adjoint index, while $A$ is for the doublet of $Sp(1)$. The EM-duality conditions are $F_{\mu\nu}{}^I = (1/4!)\,\epsilon_{\mu\nu}{}^{\rho\sigma\tau\lambda} G_{\rho\sigma\tau\lambda}{}^I$, with the superpartner duality condition $\lambda_A{}^I = -\chi_A{}^I$. Upon appropriate dimensional reduction, this system generates SDSYM in $D=2+2$. This system is further generalized to $D=5+5$ with the EM-duality condition $F_{\mu\nu}{}^I = (1/8!)\,\epsilon_{\mu\nu}{}^{\rho_1\cdots\rho_8} G_{\rho_1\cdots\rho_8}{}^I$ and the superpartner condition $\lambda^I = -\chi^I$. Upon appropriate dimensional reduction, this theory also generates SDSYM in $D=2+2$. As long as we maintain Lorentz covariance, $D=5+5$ seems to be the maximal space-time dimension that generates SDSYM in $D=2+2$. Namely, the EM-dual system in $D=5+5$ serves as the master theory of all supersymmetric integrable models in dimensions $1 \le D \le 3$.
A general probabilistic model for group independent component analysis and its estimation methods
Guo, Ying
2012-01-01
Independent component analysis (ICA) has become an important tool for analyzing data from functional magnetic resonance imaging (fMRI) studies. ICA has been successfully applied to single-subject fMRI data. The extension of ICA to group inferences in neuroimaging studies, however, is challenging due to the unavailability of a pre-specified group design matrix and the uncertainty in between-subjects variability in fMRI data. We present a general probabilistic ICA (PICA) model that can accommodate varying group structures of multi-subject spatio-temporal processes. An advantage of the proposed model is that it can flexibly model various types of group structures in different underlying neural source signals and under different experimental conditions in fMRI studies. A maximum likelihood method is used for estimating this general group ICA model. We propose two EM algorithms to obtain the ML estimates. The first is an exact EM algorithm, which provides an exact E-step and an explicit noniterative M-step. The second is a variational approximation EM algorithm, which is computationally more efficient than the exact EM. In simulation studies, we first compare the performance of the proposed general group PICA model and the existing probabilistic group ICA approach. We then compare the two proposed EM algorithms and show that the variational approximation EM achieves comparable accuracy to the exact EM with significantly less computation time. An fMRI data example is used to illustrate application of the proposed methods. PMID:21517789
Automated structure refinement of macromolecular assemblies from cryo-EM maps using Rosetta.
Wang, Ray Yu-Ruei; Song, Yifan; Barad, Benjamin A; Cheng, Yifan; Fraser, James S; DiMaio, Frank
2016-09-26
Cryo-EM has revealed the structures of many challenging yet exciting macromolecular assemblies at near-atomic resolution (3-4.5 Å), providing molecular descriptions of biological phenomena. However, at these resolutions, accurately positioning individual atoms remains challenging and error-prone. Manually refining thousands of amino acids, as is typical in a macromolecular assembly, is tedious and time-consuming. We present an automated method that can improve the atomic details in models that are manually built in near-atomic-resolution cryo-EM maps. Applying the method to three systems recently solved by cryo-EM, we are able to improve model geometry while maintaining the fit-to-density. Backbone placement errors are automatically detected and corrected, and the refinement shows a large radius of convergence. The results demonstrate that the method is amenable to structures with symmetry, of very large size, and containing RNA as well as covalently bound ligands. The method should streamline the cryo-EM structure determination process, providing accurate and unbiased atomic structure interpretation of such maps.
Berger, Moritz; Nova, Igor; Kallus, Sebastian; Ristow, Oliver; Eisenmann, Urs; Dickhaus, Hartmut; Engel, Michael; Freudlsperger, Christian; Hoffmann, Jürgen; Seeberger, Robin
2018-05-01
Reproduction of the exact preoperative position of the proximal mandible after osteotomy in orthognathic surgery is difficult to achieve. This clinical pilot study evaluated an electromagnetic (EM) navigation system for condylar positioning after high-oblique sagittal split osteotomy (HSSO). After HSSO as part of 2-jaw surgery, the position of 10 condyles was intraoperatively guided by an EM navigation system. As controls, 10 proximal segments were positioned by standard manual replacement. Accuracy was measured by pre- and postoperative cone beam computed tomography imaging. Overall, EM condyle repositioning was as accurate as manual repositioning (P > .05). When subdivided into 3 axes, significant differences could be identified (P < .05). Nevertheless, no significant, clinically relevant dislocations of the proximal segment could be shown for either the EM or the manual repositioning method (P > .05). This pilot study introduces a guided method for proximal segment positioning after HSSO by applying the intraoperative EM system. The data demonstrate the high accuracy of EM navigation, although manual replacement of the condyles could not be surpassed. However, EM navigation can avoid clinically hidden, severe malpositioning of the condyles. Copyright © 2017 Elsevier Inc. All rights reserved.
Comparison of an Atomic Model and Its Cryo-EM Image at the Central Axis of a Helix
He, Jing; Zeil, Stephanie; Hallak, Hussam; McKaig, Kele; Kovacs, Julio; Wriggers, Willy
2016-01-01
Cryo-electron microscopy (cryo-EM) is an important biophysical technique that produces three-dimensional (3D) density maps at different resolutions. Because more and more models are being produced from cryo-EM density maps, validation of the models is becoming important. We propose a method for measuring local agreement between a model and the density map using the central axis of the helix. This method was tested using 19 helices from cryo-EM density maps between 5.5 Å and 7.2 Å resolution and 94 helices from simulated density maps. This method distinguished most of the well-fitting helices, although challenges exist for shorter helices. PMID:27280059
Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G
2016-05-25
Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale-resolution electron microscopy. We and others previously applied large-scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae (1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and the neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large-area scan generator, operating in scanning transmission EM (STEM) mode. Large-scale EM images are typically ~5-50 gigapixels in size, and are best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals, as well as tissue culture (1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes that can be quantified in various cell types, including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM) (8) on the same tissue, as large surface areas previously imaged using fluorescence microscopy can subsequently be subjected to large-area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at the EM level in a tissue-wide, quantifiable manner.
Maximum Likelihood Estimations and EM Algorithms with Length-biased Data
Qin, Jing; Ning, Jing; Liu, Hao; Shen, Yu
2012-01-01
Length-biased sampling has been well recognized in economics, industrial reliability, etiology, epidemiology, genetics and cancer screening studies. Length-biased right-censored data have a unique data structure different from traditional survival data. The nonparametric and semiparametric estimation and inference methods for traditional survival data are not directly applicable to length-biased right-censored data. We propose new expectation-maximization algorithms for estimation based on full likelihoods involving infinite-dimensional parameters under three settings for length-biased data: estimating the nonparametric distribution function, estimating the nonparametric hazard function under an increasing failure rate constraint, and jointly estimating the baseline hazard function and the covariate coefficients under the Cox proportional hazards model. Extensive empirical simulation studies show that the maximum likelihood estimators perform well with moderate sample sizes and lead to more efficient estimators compared with the estimating equation approaches. The proposed estimates are also more robust to various right-censoring mechanisms. We prove the strong consistency properties of the estimators, and establish the asymptotic normality of the semi-parametric maximum likelihood estimators under the Cox model using modern empirical process theory. We apply the proposed methods to a prevalent cohort medical study. Supplemental materials are available online. PMID:22323840
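The effect of length bias, and the simplest correction, can be shown in the uncensored case: the observed density is proportional to x f(x), so weighting each observation by 1/x recovers population quantities (censoring is what requires the EM machinery above). A small numpy check, with the distribution and sample sizes chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# True population: Gamma(2, 1), mean 2. Under length-biased sampling the
# observed mean is E[X^2]/E[X] = 3; 1/x weights undo the bias.
pop = rng.gamma(2.0, 1.0, 200_000)
biased = rng.choice(pop, 5_000, p=pop / pop.sum())  # draw prob. ∝ x

w = 1.0 / biased
print(round(biased.mean(), 2))                 # biased mean, ~3
print(round(np.sum(w * biased) / w.sum(), 2))  # corrected mean, ~2
```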
Wang, Zhu; Ma, Shuangge; Wang, Ching-Yun
2015-09-01
In health services and outcome research, count outcomes are frequently encountered and often have a large proportion of zeros. The zero-inflated negative binomial (ZINB) regression model has important applications for this type of data. With many possible candidate risk factors, this paper proposes new variable selection methods for the ZINB model. We consider the maximum likelihood function plus a penalty, including the least absolute shrinkage and selection operator (LASSO), smoothly clipped absolute deviation (SCAD), and minimax concave penalty (MCP). An EM (expectation-maximization) algorithm is proposed for estimating the model parameters and conducting variable selection simultaneously. This algorithm consists of estimating penalized weighted negative binomial models and penalized logistic models via the coordinate descent algorithm. Furthermore, statistical properties, including standard error formulae, are provided. A simulation study shows that the new algorithm not only has more accurate or at least comparable estimation, but is also more robust than traditional stepwise variable selection. The proposed methods are applied to analyze health care demand in Germany using the open-source R package mpath. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Semiparametric Time-to-Event Modeling in the Presence of a Latent Progression Event
Rice, John D.; Tsodikov, Alex
2017-01-01
In cancer research, interest frequently centers on factors influencing a latent event that must precede a terminal event. In practice it is often impossible to observe the latent event precisely, making inference about this process difficult. To address this problem, we propose a joint model for the unobserved time to the latent and terminal events, with the two events linked by the baseline hazard. Covariates enter the model parametrically as linear combinations that multiply, respectively, the hazard for the latent event and the hazard for the terminal event conditional on the latent one. We derive the partial likelihood estimators for this problem assuming the latent event is observed, and propose a profile likelihood-based method for estimation when the latent event is unobserved. The baseline hazard in this case is estimated nonparametrically using the EM algorithm, which allows for closed-form Breslow-type estimators at each iteration, bringing improved computational efficiency and stability compared with maximizing the marginal likelihood directly. We present simulation studies to illustrate the finite-sample properties of the method; its use in practice is demonstrated in the analysis of a prostate cancer data set. PMID:27556886
A new numerical method for calculating extrema of received power for polarimetric SAR
Zhang, Y.; Zhang, Jiahua; Lu, Z.; Gong, W.
2009-01-01
A numerical method called cross-step iteration is proposed to calculate the maximal/minimal received power for polarized imagery based on a target's Kennaugh matrix. This method is much more efficient than the systematic method, which searches for the extrema of received power by varying the polarization ellipse angles of the receiving and transmitting polarizations. It is also more advantageous than the Schuler method, which has been adopted by the PolSARPro package, because the cross-step iteration method requires less computation time and can derive both the maximal and minimal received powers, whereas the Schuler method is designed to work out only the maximal received power. The analytical model of received-power optimization indicates that the first eigenvalue of the Kennaugh matrix is the supremum of the maximal received power. The difference between these two parameters reflects the depolarization effect of the target's backscattering, which might be useful for target discrimination. © 2009 IEEE.
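One plausible reading of the cross-step iteration (the paper's exact update is not reproduced here) is to alternate between the optimal receive state for a fixed transmit state and vice versa, each step being a closed-form alignment with the Kennaugh-transformed Stokes vector. A numpy sketch under that assumption:

```python
import numpy as np

def cross_step_power(K, n_iter=100, maximize=True):
    """Alternately optimize receive and transmit polarizations against a
    4x4 Kennaugh matrix K. Fully polarized Stokes vectors g = [1, s] with
    |s| = 1; received power P = 0.5 * g_r.K.g_t (one common convention).
    Illustrative sketch, not the paper's algorithm; like any local
    alternation, the result can depend on the starting state."""
    sign = 1.0 if maximize else -1.0
    s_t = np.array([1.0, 0.0, 0.0])
    for _ in range(n_iter):
        # best receive state for fixed transmit: align with (K g_t)[1:4]
        w = K @ np.concatenate(([1.0], s_t))
        s_r = sign * w[1:] / np.linalg.norm(w[1:])
        # best transmit state for fixed receive: align with (K^T g_r)[1:4]
        v = K.T @ np.concatenate(([1.0], s_r))
        s_t = sign * v[1:] / np.linalg.norm(v[1:])
    g_r = np.concatenate(([1.0], s_r))
    g_t = np.concatenate(([1.0], s_t))
    return 0.5 * g_r @ K @ g_t

K = np.diag([1.0, 0.6, 0.3, 0.2])       # toy Kennaugh-like matrix
print(round(cross_step_power(K), 3))     # maximal received power, 0.8
```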
Live CLEM imaging to analyze nuclear structures at high resolution.
Haraguchi, Tokuko; Osakada, Hiroko; Koujin, Takako
2015-01-01
Fluorescence microscopy (FM) and electron microscopy (EM) are powerful tools for observing molecular components in cells. FM can provide temporal information about cellular proteins and structures in living cells. EM provides nanometer resolution images of cellular structures in fixed cells. We have combined FM and EM to develop a new method of correlative light and electron microscopy (CLEM), called "Live CLEM." In this method, the dynamic behavior of specific molecules of interest is first observed in living cells using fluorescence microscopy (FM) and then cellular structures in the same cell are observed using electron microscopy (EM). Following image acquisition, FM and EM images are compared to enable the fluorescent images to be correlated with the high-resolution images of cellular structures obtained using EM. As this method enables analysis of dynamic events involving specific molecules of interest in the context of specific cellular structures at high resolution, it is useful for the study of nuclear structures including nuclear bodies. Here we describe Live CLEM that can be applied to the study of nuclear structures in mammalian cells.
Ng, S K; McLachlan, G J
2003-04-15
We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, whereby the component-baseline hazard functions are completely unspecified. Estimation is based on maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than a fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright 2003 John Wiley & Sons, Ltd.
A comparison of algorithms for inference and learning in probabilistic graphical models.
Frey, Brendan J; Jojic, Nebojsa
2005-09-01
Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.
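As a concrete instance of the simplest reviewed technique, here is a minimal EM fit of a one-dimensional two-component Gaussian mixture: the E-step computes posteriors over the hidden component and the M-step re-estimates the parameters. Purely illustrative; none of the vision models in the paper reduce to this toy case.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=50, seed=0):
    """Minimal EM for a 1-D Gaussian mixture (illustration only)."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities = posterior over the hidden component
        d = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = pi * d
        r = r / np.maximum(r.sum(axis=1, keepdims=True), 1e-300)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
        pi = n / len(x)
    return pi, mu, var

# toy usage: two overlapping components
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 0.5, 200)])
pi, mu, var = em_gmm_1d(x)
```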
Donor selection criteria for liver transplantation in Argentina: are current standards too rigorous?
Dirchwolf, Melisa; Ruf, Andrés E; Biggins, Scott W; Bisigniano, Liliana; Hansen Krogh, Daniela; Villamil, Federico G
2015-02-01
Organ shortage is the major limitation to the growth of deceased-donor liver transplantation worldwide. One strategy to ameliorate this problem is to maximize the liver utilization rate. To assess predictors of liver utilization in Argentina, the national database was used to analyze transplant activity in 2010. Donor, recipient, and transplant variables were evaluated as predictors of graft utilization, of the number of rejected donor offers before grafting, and of the occurrence of primary nonfunction (PNF) or early post-transplant mortality (EM). Of the 582 deceased donors, 293 (50.3%) were recovered for liver transplant. Variables associated with nonrecovery of the liver were age ≥46 years, umbilical perimeter ≥92 cm, organ procurement outside Gran Buenos Aires, AST ≥42 U/l, and ALT ≥29 U/l. The median number of rejected offers before grafting was 4, and in 71 patients (25%) there were ≥13. The only independent predictor of PNF (3.4%) or EM (5.2%) was the recipient's emergency status. During 2010 in Argentina, the liver was recovered from only half of donors. The low incidence of PNF and EM and the characteristics of the nonrecovered liver donors suggest that organ acceptance criteria could be less rigorous. © 2014 Steunstichting ESOT.
Leverage front-line expertise to maximize trauma prevention efforts.
2012-06-01
The trauma prevention program at Geisinger Wyoming Valley (GWV) Medical Center in Wilkes-Barre, PA, has enlisted the assistance of an experienced paramedic and ED tech to spend part of his time targeting prevention education toward populations that have been experiencing high rates of traumatic injuries. While community outreach has long been a priority for the trauma prevention program, the new position is enabling GWV to boost the magnitude of its prevention efforts, and to reach out to referring facilities as well. Program administrators say a similar outreach effort aimed at EMS providers has strengthened relationships and helped to improve trauma care at the facility. The new trauma injury prevention outreach coordinator has focused his first efforts on fall prevention and curbing motor vehicle accidents among very young and very mature driving populations. Data from GWV's trauma registry suggest that its fall prevention efforts are having an effect. The incidence of falls among patients over the age of 65 is down by about 10% at the facility since it began targeting education at the community's senior population. Administrators say a monthly lecture series aimed at the prehospital community has gone a long way toward nurturing ties with EMS providers. Called "EMS Night Out," the series covers a range of topics, but the most popular programs involve case reviews.
Fluckiger, Jacob U; Benefield, Brandon C; Bakhos, Lara; Harris, Kathleen R; Lee, Daniel C
2015-01-01
To evaluate the impact of correcting myocardial signal saturation on the accuracy of absolute myocardial blood flow (MBF) measurements. We performed 15 dual-bolus first-pass perfusion studies in 7 dogs during global coronary vasodilation and variable degrees of coronary artery stenosis. We compared microsphere MBF to MBF calculated from uncorrected and corrected MRI signal. Four correction methods were tested: two theoretical methods (Th1 and Th2) and two empirical methods (Em1 and Em2). The correlations with microsphere MBF (n = 90 segments) were: uncorrected (y = 0.47x + 1.1, r = 0.70), Th1 (y = 0.53x + 1.0, r = 0.71), Th2 (y = 0.62x + 0.86, r = 0.73), Em1 (y = 0.82x + 0.86, r = 0.77), and Em2 (y = 0.72x + 0.84, r = 0.75). None of the corrected methods differed significantly from microsphere MBF, while uncorrected MBF values were significantly lower. For the top 50% of microsphere MBF values, flows were significantly underestimated by uncorrected SI (31%), Th1 (25%), and Th2 (19%), while Em1 (1%) and Em2 (9%) were similar to microsphere MBF. Myocardial signal saturation should be corrected prior to flow modeling to avoid underestimation of MBF by MR perfusion imaging.
In vivo study of endometriosis in mice by photoacoustic microscopy.
Ding, Yichen; Zhang, Mingzhu; Lang, Jinghe; Leng, Jinhua; Ren, Qiushi; Yang, Jie; Li, Changhui
2015-01-01
Endometriosis (EM) impacts the healthcare and the quality of life of women of reproductive age. However, there is no reliable noninvasive diagnostic method for either animal studies or clinical use. In this work, a novel imaging method, photoacoustic microscopy (PAM), was employed to study EM in a mouse model. Our results demonstrated that PAM noninvasively provided high-contrast, 3D imaging of subcutaneously implanted EM tissue in the nude mouse in vivo. The statistical study also indicated that PAM had high sensitivity and specificity in the diagnosis of EM in this animal study. In addition, we discuss the potential clinical application of PAM in the diagnosis of EM. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Hamamoto, Kouta; Ueda, Shuhei; Yamamoto, Yoshimasa; Hirai, Itaru
2015-06-01
Genotyping and characterization of bacterial isolates are essential steps in the identification and control of antibiotic-resistant bacterial infections. Recently, one novel genotyping method using three genomic guided Escherichia coli markers (GIG-EM), dinG, tonB, and dipeptide permease (DPP), was reported. Because GIG-EM has not been fully evaluated using clinical isolates, we assessed this typing method with 72 E. coli collection of reference (ECOR) environmental E. coli reference strains and 63 E. coli isolates of various genetic backgrounds. In this study, we designated 768 bp of dinG, 745 bp of tonB, and 655 bp of DPP target sequences for use in the typing method. Concatenations of the processed marker sequences were used to draw GIG-EM phylogenetic trees. E. coli isolates with identical sequence types as identified by the conventional multilocus sequence typing (MLST) method were localized to the same branch of the GIG-EM phylogenetic tree. Sixteen clinical E. coli isolates were utilized as test isolates without prior characterization by conventional MLST and phylogenetic grouping before GIG-EM typing. Of these, 14 clinical isolates were assigned to a branch including only isolates of a pandemic clone, E. coli B2-ST131-O25b, and these results were confirmed by conventional typing methods. Our results suggested that the GIG-EM typing method and its application to phylogenetic trees might be useful tools for the molecular characterization and determination of the genetic relationships among E. coli isolates. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Smith, Justin D.; Borckardt, Jeffrey J.; Nash, Michael R.
2013-01-01
The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations with such autocorrelated data streams. Mainly, to what extent do missing observations compromise inference when observations are not independent? Do the available missing data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the Expectation-Maximization Algorithm, more commonly known as the EM procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. PMID:22697454
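A minimal sketch of EM-style replacement in a lag-1 autocorrelated stream: missing points are replaced by their conditional means under a fitted AR(1) model, and the lag-1 coefficient is re-estimated from the completed series. This illustrates the mechanism, not the exact procedure evaluated in the study.

```python
import numpy as np

def em_impute_ar1(y, iters=50):
    """Impute NaNs in a lag-1 autocorrelated stream (sketch of the idea).

    E-step: replace each missing y_t by its conditional mean given its
    neighbors under an AR(1) model (boundaries handled approximately);
    M-step: re-estimate the lag-1 coefficient from the completed series.
    """
    y = np.asarray(y, float).copy()
    miss = np.isnan(y)
    y[miss] = np.nanmean(y)                     # crude initialization
    for _ in range(iters):
        mu = y.mean()
        yc = y - mu
        phi = (yc[1:] @ yc[:-1]) / (yc[:-1] @ yc[:-1])    # M-step
        for t in np.where(miss)[0]:             # E-step: conditional means
            left = yc[t - 1] if t > 0 else 0.0
            right = yc[t + 1] if t < len(y) - 1 else 0.0
            y[t] = mu + phi * (left + right) / (1.0 + phi ** 2)
    return y, phi

# toy usage: AR(1) stream with 20% of points removed
rng = np.random.default_rng(0)
z = np.zeros(200)
for t in range(1, 200):
    z[t] = 0.6 * z[t - 1] + rng.normal()
y = z.copy()
y[rng.choice(200, 40, replace=False)] = np.nan
filled, phi_hat = em_impute_ar1(y)
```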
Nizam-Uddin, N; Elshafiey, Ibrahim
2017-01-01
This paper proposes a hybrid hyperthermia treatment system, utilizing two noninvasive modalities for treating brain tumors. The proposed system depends on focusing electromagnetic (EM) and ultrasound (US) energies. The EM hyperthermia subsystem enhances energy localization by incorporating a multichannel wideband setting and coherent-phased-array technique. A genetic algorithm based optimization tool is developed to enhance the specific absorption rate (SAR) distribution by reducing hotspots and maximizing energy deposition at tumor regions. The treatment performance is also enhanced by augmenting an ultrasonic subsystem to allow focused energy deposition into deep tumors. The therapeutic faculty of ultrasonic energy is assessed by examining the control of mechanical alignment of transducer array elements. A time reversal (TR) approach is then investigated to address challenges in energy focus in both subsystems. Simulation results of the synergetic effect of both modalities assuming a simplified model of human head phantom demonstrate the feasibility of the proposed hybrid technique as a noninvasive tool for thermal treatment of brain tumors.
3D forward modeling and response analysis for marine CSEMs towed by two ships
NASA Astrophysics Data System (ADS)
Zhang, Bo; Yin, Chang-Chun; Liu, Yun-He; Ren, Xiu-Yan; Qi, Yan-Fu; Cai, Jing
2018-03-01
A dual-ship-towed marine electromagnetic (EM) system is a new marine exploration technology recently being developed in China. Compared with traditional marine EM systems, the new system tows the transmitters and receivers using two ships, rendering it unnecessary to position EM receivers at the seafloor in advance. This makes the system more flexible, allowing for different configurations (e.g., in-line, broadside, and azimuthal and concentric scanning) that can produce more detailed underwater structural information. We develop a three-dimensional goal-oriented adaptive forward modeling method for the new marine EM system and analyze the responses for four survey configurations. Ocean-bottom topography has a strong effect on the marine EM responses; thus, we develop a forward modeling algorithm based on the finite-element method and unstructured grids. To satisfy the requirements for modeling the moving transmitters of a dual-ship-towed EM system, we use a single mesh for each of the transmitter locations. This mitigates the mesh complexity by refining the grids near the transmitters and minimizes the computational cost. To generate a rational mesh while maintaining accuracy for a single transmitter, we develop a goal-oriented adaptive method with separate mesh refinements for areas around the transmitting source and those far away. To test the modeling algorithm and its accuracy, we compare the EM responses calculated by the proposed algorithm with semi-analytical results and with results from published sources. Furthermore, by analyzing the EM responses for the four survey configurations, we confirm that, compared with traditional marine EM systems with only an in-line array, a dual-ship-towed marine system can collect more data.
High dimensional land cover inference using remotely sensed modis data
NASA Astrophysics Data System (ADS)
Glanz, Hunter S.
Image segmentation persists as a major statistical problem, with the volume and complexity of data expanding alongside new technologies. Land cover classification, one of the most studied problems in Remote Sensing, provides an important example of image segmentation whose needs transcend the choice of a particular classification method. That is, the challenges associated with land cover classification pervade the analysis process from data pre-processing to estimation of a final land cover map. Many of the same challenges also plague the task of land cover change detection. Multispectral, multitemporal data with inherent spatial relationships have hardly received adequate treatment due to the large size of the data and the presence of missing values. In this work we propose a novel, concerted application of methods which provide a unified way to estimate model parameters, impute missing data, reduce dimensionality, classify land cover, and detect land cover changes. This comprehensive analysis adopts a Bayesian approach which incorporates prior knowledge to improve the interpretability, efficiency, and versatility of land cover classification and change detection. We explore a parsimonious, parametric model that allows for a natural application of principal components analysis to isolate important spectral characteristics while preserving temporal information. Moreover, it allows us to impute missing data and estimate parameters via expectation-maximization (EM). A significant byproduct of our framework includes a suite of training data assessment tools. To classify land cover, we employ a spanning tree approximation to a lattice Potts prior to incorporate spatial relationships in a judicious way and more efficiently access the posterior distribution of pixel labels. We then achieve exact inference of the labels via the centroid estimator. To detect land cover changes, we develop a new EM algorithm based on the same parametric model. We perform simulation studies to validate our models and methods, and conduct an extensive continental scale case study using MODIS data. The results show that we successfully classify land cover and recover the spatial patterns present in large scale data. Application of our change point method to an area in the Amazon successfully identifies the progression of deforestation through portions of the region.
Greffier, J; Van Ngoc Ty, C; Bonniaud, G; Moliner, G; Ledermann, B; Schmutz, L; Cornillet, L; Cayla, G; Beregi, J P; Pereira, F
2017-06-01
To compare the use of dose-mapping software with Gafchromic film measurement for a simplified peak skin dose (PSD) estimation in interventional cardiology procedures. The study was conducted on a total of 40 cardiac procedures (20 complex coronary angioplasties of chronic total occlusion (CTO) and 20 coronary angiographies and coronary angioplasties (CA-PTCA)) performed between January 2014 and December 2015. The PSD measurement (PSD_Film) was obtained by placing XR-RV3 Gafchromic film under the patient's back for each procedure. The PSD (PSD_em.dose) was computed with the software em.dose©. The calculation was performed on the dose metrics collected from the private dose report of each procedure. Two calculation methods (method A: fluoroscopic kerma spread equally over the cine acquisitions; method B: fluoroscopic kerma added to the one cine air-kerma acquisition that contributes to the PSD) were used to calculate the fluoroscopic dose contribution, as fluoroscopic data were not recorded in our interventional room. Statistical analyses were carried out to compare PSD_Film and PSD_em.dose. The median (1st quartile; 3rd quartile) PSD_Film was 0.251 (0.190; 0.336) Gy for CA-PTCA and 1.453 (0.767; 2.011) Gy for CTO. The PSD_em.dose was 0.248 (0.182; 0.369) Gy for CA-PTCA and 1.601 (0.892; 2.178) Gy for CTO with method A, and 0.267 (0.223; 0.446) Gy and 1.75 (0.912; 2.584) Gy, respectively, with method B. For both methods, the correlation between PSD_Film and PSD_em.dose was strong. Across all cardiology procedures investigated, the mean deviation between PSD_Film and PSD_em.dose was 3.4 ± 21.1% for method A and 17.3 ± 23.9% for method B. Dose-mapping software is convenient for calculating peak skin dose in interventional cardiology. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
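The two allocation rules can be sketched by treating each cine acquisition as a skin-dose map and taking the PSD as the maximum of their sum; the map representation and field masks are assumptions made for illustration, not the em.dose implementation.

```python
import numpy as np

def psd_with_fluoro(cine_maps, fluoro_kerma, method="A"):
    """Peak skin dose with the unrecorded fluoroscopic kerma folded in.

    cine_maps: list of 2-D skin-dose maps (Gy), one per cine acquisition.
    Method A: spread fluoro_kerma evenly over all cine acquisitions;
    method B: add it all to the acquisition contributing most at the peak.
    """
    maps = [m.astype(float).copy() for m in cine_maps]
    if method == "A":
        for m in maps:
            m += (fluoro_kerma / len(maps)) * (m > 0)   # within each field
    else:
        total = sum(maps)
        peak = np.unravel_index(np.argmax(total), total.shape)
        idx = int(np.argmax([m[peak] for m in maps]))
        maps[idx] += fluoro_kerma * (maps[idx] > 0)
    return sum(maps).max()
```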
Millin, Michael G; Brown, Lawrence H; Schwartz, Brian
2011-01-01
With increasing demands for emergency medical services (EMS), many EMS jurisdictions are utilizing EMS provider-initiated nontransport policies as a method to offload potentially nonemergent patients from the EMS system. EMS provider determination of medical necessity, resulting in nontransport of patients, has the potential to avert unnecessary emergency department visits. However, EMS systems that utilize these policies must have additional education for the providers, a quality improvement process, and active physician oversight. In addition, EMS provider determination of nontransport for a specific situation should be supported by evidence in the peer-reviewed literature that the practice is safe. Further, EMS systems that do not utilize these programs should not be financially penalized. Payment for EMS services should be based on the prudent layperson standard. EMS systems that do utilize nontransport policies should be appropriately reimbursed, as this represents potential cost savings to the health care system.
NASA Astrophysics Data System (ADS)
Nafis, Christopher; Jensen, Vern; von Jako, Ron
2008-03-01
Electromagnetic (EM) tracking systems have been successfully used for Surgical Navigation in ENT, cranial, and spine applications for several years. Catheter sized micro EM sensors have also been used in tightly controlled cardiac mapping and pulmonary applications. EM systems have the benefit over optical navigation systems of not requiring a line-of-sight between devices. Ferrous metals or conductive materials that are transient within the EM working volume may impact tracking performance. Effective methods for detecting and reporting EM field distortions are generally well known. Distortion compensation can be achieved for objects that have a static spatial relationship to a tracking sensor. New commercially available micro EM tracking systems offer opportunities for expanded image-guided navigation procedures. It is important to know and understand how well these systems perform with different surgical tables and ancillary equipment. By their design and intended use, micro EM sensors will be located at the distal tip of tracked devices and therefore be in closer proximity to the tables. Our goal was to define a simple and portable process that could be used to estimate the EM tracker accuracy, and to vet a large number of popular general surgery and imaging tables that are used in the United States and abroad.
Feng, Xiangsong; Fu, Ziao; Kaledhonkar, Sandip; Jia, Yuan; Shah, Binita; Jin, Amy; Liu, Zheng; Sun, Ming; Chen, Bo; Grassucci, Robert A; Ren, Yukun; Jiang, Hongyuan; Frank, Joachim; Lin, Qiao
2017-04-04
We describe a spraying-plunging method for preparing cryoelectron microscopy (cryo-EM) grids with vitreous ice of controllable, highly consistent thickness using a microfluidic device. The new polydimethylsiloxane (PDMS)-based sprayer was tested with apoferritin. We demonstrate that the structure can be solved to high resolution with this method of sample preparation. Besides replacing the conventional pipetting-blotting-plunging method, one of many potential applications of the new sprayer is in time-resolved cryo-EM, as part of a PDMS-based microfluidic reaction channel to study short-lived intermediates on the timescale of 10-1,000 ms. Published by Elsevier Ltd.
Reuse of imputed data in microarray analysis increases imputation efficiency
Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su
2004-01-01
Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. This imputes the missing values sequentially, starting from the gene having the fewest missing values, and uses the imputed values for later imputations. Although it reuses imputed values, the efficiency of this new method is greatly improved in accuracy and computational complexity over the conventional KNN-based method and other methods based on maximum likelihood estimation. The performance of SKNN was particularly high relative to other imputation methods for data with high missing rates and large numbers of experiments. Application of Expectation Maximization (EM) to the SKNN method improved the accuracy, but increased computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, which is well known but had not previously been applied to microarray data, showed accuracy similarly high to the SKNN method, with a slightly higher dependency on the types of data sets. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases the efficiency of imputation. The SKNN method should be practically useful for saving the data of microarray experiments that have high numbers of missing entries. The SKNN method generates reliable imputed values which can be used for further cluster-based analysis of microarray data. PMID:15504240
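A compact sketch of the sequential idea: rows (genes) are imputed in order of increasing missingness, and each newly completed row immediately joins the pool of neighbor candidates for later rows. It assumes at least one fully observed row and Euclidean distance on the observed columns; illustrative only, not the authors' code.

```python
import numpy as np

def sknn_impute(X, k=5):
    """Sequential KNN imputation: rows with fewer missing values first;
    imputed rows immediately become neighbor candidates for later rows."""
    X = X.astype(float).copy()
    order = np.argsort(np.isnan(X).sum(axis=1))
    complete = [i for i in order if not np.isnan(X[i]).any()]  # fully observed
    for i in order:
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        cand = np.array(complete)               # assumes this is non-empty
        d = np.sqrt(((X[cand][:, obs] - X[i, obs]) ** 2).mean(axis=1))
        nn = cand[np.argsort(d)[:k]]
        X[i, miss] = X[nn][:, miss].mean(axis=0)
        complete.append(i)                      # reuse the imputed row later
    return X

# toy usage: random matrix with 10% missing, first rows kept complete
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))
X[rng.random(X.shape) < 0.1] = np.nan
X[:5] = rng.normal(size=(5, 20))
X_filled = sknn_impute(X)
```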
Correlative Stochastic Optical Reconstruction Microscopy and Electron Microscopy
Kim, Doory; Deerinck, Thomas J.; Sigal, Yaron M.; Babcock, Hazen P.; Ellisman, Mark H.; Zhuang, Xiaowei
2015-01-01
Correlative fluorescence light microscopy and electron microscopy allows the imaging of spatial distributions of specific biomolecules in the context of cellular ultrastructure. Recent development of super-resolution fluorescence microscopy allows the location of molecules to be determined with nanometer-scale spatial resolution. However, correlative super-resolution fluorescence microscopy and electron microscopy (EM) still remains challenging because the optimal specimen preparation and imaging conditions for super-resolution fluorescence microscopy and EM are often not compatible. Here, we have developed several experiment protocols for correlative stochastic optical reconstruction microscopy (STORM) and EM methods, both for un-embedded samples by applying EM-specific sample preparations after STORM imaging and for embedded and sectioned samples by optimizing the fluorescence under EM fixation, staining and embedding conditions. We demonstrated these methods using a variety of cellular targets. PMID:25874453
A multiple scales approach to maximal superintegrability
NASA Astrophysics Data System (ADS)
Gubbiotti, G.; Latini, D.
2018-07-01
In this paper we present a simple, algorithmic test to establish whether a Hamiltonian system is maximally superintegrable or not. This test is based on a very simple corollary of a theorem due to Nekhoroshev and on a perturbative technique called the multiple scales method. If the outcome is positive, this test can be used to suggest maximal superintegrability, whereas when the outcome is negative it can be used to disprove it. This method can be regarded as a finite-dimensional analog of the use of the multiple scales method as a way to produce soliton equations. We use this technique to show that the real counterpart of a mechanical system found by Jules Drach in 1935 is, in general, not maximally superintegrable. We give some hints on how this approach could be applied to classify maximally superintegrable systems by presenting a direct proof of the well-known Bertrand theorem.
DARE Mission Design: Low RFI Observations from a Low-Altitude Frozen Lunar Orbit
NASA Technical Reports Server (NTRS)
Plice, Laura; Galal, Ken; Burns, Jack O.
2017-01-01
The Dark Ages Radio Explorer (DARE) seeks to study the cosmic Dark Ages approximately 80 to 420 million years after the Big Bang. Observations require truly quiet radio conditions, shielded from Sun and Earth electromagnetic (EM) emissions, on the far side of the Moon. DARE's science orbit is a frozen orbit with respect to lunar gravitational perturbations. The altitude and orientation of the orbit remain nearly fixed indefinitely, maximizing science time without the need for maintenance. DARE's observation targets avoid the galactic center and enable investigation of the universe's first stars and galaxies.
Further evaluation of the constrained least squares electromagnetic compensation method
NASA Technical Reports Server (NTRS)
Smith, William T.
1991-01-01
Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers means for mitigation of surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15 meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.
NASA Astrophysics Data System (ADS)
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security, and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low-data-rate scanners in Positron Emission Tomography (PET) [5, 7, 10], and, more recently, to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed, based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9], and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
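The ordered-subsets acceleration mentioned above can be sketched as cycling the multiplicative ML-EM update over subsets of the projection rays. This is the generic OSEM scheme, shown for orientation; it is not the specific accelerated alternating minimization algorithm of the paper.

```python
import numpy as np

def osem(A, y, n_iter=10, n_subsets=4):
    """Ordered-subsets EM: apply the multiplicative ML-EM update using one
    subset of projection rays at a time, which accelerates convergence at
    the cost of the strict convergence guarantee discussed in the abstract."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.random.permutation(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            As, ys = A[s], y[s]
            ratio = ys / np.maximum(As @ x, 1e-12)   # measured / predicted
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x

# toy usage with a random system matrix and Poisson-noised projections
rng = np.random.default_rng(0)
A = rng.random((120, 64))
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 50) / 50.0
x_rec = osem(A, y)
```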
Cryo-EM visualization of the protein machine that replicates the chromosome
NASA Astrophysics Data System (ADS)
Li, Huilin
Structural knowledge is key to understanding biological functions. Cryo-EM is a physical method that uses transmission electron microscopy to visualize biological molecules that are frozen in vitreous ice. Due to recent advances in direct electron detectors and image processing algorithms, cryo-EM has become a high-resolution technique. The cryo-EM field is undergoing rapid expansion, and the vast majority of research institutions and research universities around the world are setting up cryo-EM research. Indeed, the method is revolutionizing structural and molecular biology. We have been using cryo-EM to study the structure and mechanism of eukaryotic chromosome replication. Despite an abundance of cartoon drawings found in review articles and biology textbooks, the structure of the eukaryotic helicase that unwinds the double-stranded DNA has been unknown. It has also been unknown how the helicase works with DNA polymerases to accomplish the feat of duplicating the genome. In my presentation, I will show how we have used cryo-EM to arrive at structures of the eukaryotic chromosome replication machinery and describe mechanistic insights we have gleaned from the structures.
Zhang, Chun-Yun; Hu, Hui-Chao; Chai, Xin-Sheng; Pan, Lei; Xiao, Xian-Ming
2014-02-07
In this paper, we present a novel method for determining the maximal amount of ethane, a minor gas species, adsorbed in a shale sample. The method is based on the time-dependent release of ethane from shale samples measured by headspace gas chromatography (HS-GC). The study includes a mathematical model for fitting the experimental data, calculating the maximal amount of gas adsorbed, and predicting results at other temperatures. The method is a more efficient alternative to the isothermal adsorption method that is in widespread use today. Copyright © 2013 Elsevier B.V. All rights reserved.
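The fitting step can be illustrated with a generic first-order release model, C(t) = C_max(1 - exp(-kt)), fitted to time-resolved HS-GC readings. The functional form and the numbers below are stand-ins; the paper's actual model is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def release_model(t, c_max, k):
    """Cumulative ethane released by time t under first-order kinetics."""
    return c_max * (1.0 - np.exp(-k * t))

# illustrative HS-GC readings (arbitrary units); not data from the paper
t = np.array([1, 2, 4, 8, 16, 32, 64], float)        # minutes
c = np.array([0.8, 1.4, 2.3, 3.2, 3.9, 4.3, 4.45])
(c_max, k), _ = curve_fit(release_model, t, c, p0=(5.0, 0.1))
print(f"maximal adsorbed amount ~ {c_max:.2f} a.u., rate ~ {k:.3f}/min")
```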
Several library independent Microbial Source Tracking methods have been developed to rapidly determine the source of fecal contamination. Thus far, none of these methods have been tested in tropical marine waters. In this study, we used a Bacteroides 16S rDNA PCR-based...
Tesija Kuna, Andrea; Dukic, Kristina; Nikolac Gabaj, Nora; Miler, Marijana; Vukasovic, Ines; Langer, Sanja; Simundic, Ana-Maria; Vrkic, Nada
2018-03-08
To compare the analytical performances of the enzymatic method (EM) and capillary electrophoresis (CE) for hemoglobin A1c (HbA1c) measurement. Imprecision, carryover, stability, linearity, method comparison, and interferences were evaluated for HbA1c via EM (Abbott Laboratories, Inc) and CE (Sebia). Both methods showed overall within-laboratory imprecision of less than 3% in International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units (<2% in National Glycohemoglobin Standardization Program [NGSP] units). Carryover effects were within acceptable criteria. The linearity of both methods proved to be excellent (R² = 0.999). Significant proportional and constant differences were found for EM compared with CE, but were not clinically relevant (<5 mmol/mol; NGSP <0.5%). At the clinically relevant HbA1c concentration, the stability observed with both methods was acceptable (bias <3%). Triglyceride levels of 8.11 mmol per L or greater were shown to interfere with EM, and fetal hemoglobin (HbF) of 10.6% or greater with CE. The enzymatic method proved to be comparable to the CE method in analytical performance; however, certain interferences can influence the measurements of each method.
Soudzilovskaia, Nadejda A; van der Heijden, Marcel G A; Cornelissen, Johannes H C; Makarov, Mikhail I; Onipchenko, Vladimir G; Maslov, Mikhail N; Akhmetzhanova, Asem A; van Bodegom, Peter M
2015-10-01
A significant fraction of carbon stored in the Earth's soil moves through arbuscular mycorrhiza (AM) and ectomycorrhiza (EM). The impacts of AM and EM on the soil carbon budget are poorly understood. We propose a method to quantify the mycorrhizal contribution to carbon cycling, explicitly accounting for the abundance of plant-associated and extraradical mycorrhizal mycelium. We discuss the need to acquire additional data to use our method, and present our new global database holding information on plant species-by-site intensity of root colonization by mycorrhizas. We demonstrate that the degree of mycorrhizal fungal colonization has globally consistent patterns across plant species. This suggests that the level of plant species-specific root colonization can be used as a plant trait. To exemplify our method, we assessed the differential impacts of AM : EM ratio and EM shrub encroachment on carbon stocks in sub-arctic tundra. AM and EM affect tundra carbon stocks at different magnitudes, and via partly distinct dominant pathways: via extraradical mycelium (both EM and AM) and via mycorrhizal impacts on above- and belowground biomass carbon (mostly AM). Our method provides a powerful tool for the quantitative assessment of mycorrhizal impact on local and global carbon cycling processes, paving the way towards an improved understanding of the role of mycorrhizas in the Earth's carbon cycle. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
A multiscale quantum mechanics/electromagnetics method for device simulations.
Yam, ChiYung; Meng, Lingyi; Zhang, Yu; Chen, GuanHua
2015-04-07
Multiscale modeling has become a popular tool for research applying to different areas including materials science, microelectronics, biology, chemistry, etc. In this tutorial review, we describe a newly developed multiscale computational method, incorporating quantum mechanics into electronic device modeling with the electromagnetic environment included through classical electrodynamics. In the quantum mechanics/electromagnetics (QM/EM) method, the regions of the system where active electron scattering processes take place are treated quantum mechanically, while the surroundings are described by Maxwell's equations and a semiclassical drift-diffusion model. The QM model and the EM model are solved, respectively, in different regions of the system in a self-consistent manner. Potential distributions and current densities at the interface between QM and EM regions are employed as the boundary conditions for the quantum mechanical and electromagnetic simulations, respectively. The method is illustrated in the simulation of several realistic systems. In the case of junctionless field-effect transistors, transfer characteristics are obtained and a good agreement between experiments and simulations is achieved. Optical properties of a tandem photovoltaic cell are studied and the simulations demonstrate that multiple QM regions are coupled through the classical EM model. Finally, the study of a carbon nanotube-based molecular device shows the accuracy and efficiency of the QM/EM method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
The purpose of the computer program is to generate system matrices that model the data-acquisition process in dynamic single photon emission computed tomography (SPECT). The application is the reconstruction of dynamic data from projection measurements that provide the time evolution of activity uptake and washout in an organ of interest. The measurement of the time activity in the blood and organ tissue provides time-activity curves (TACs) that are used to estimate kinetic parameters. The program provides a correct model of the in vivo spatial and temporal distribution of radioactivity in organs. The model accounts for the attenuation of the internally emitting radioactivity, accounts for the varying point response of the collimators, and correctly models the time variation of the activity in the organs. One important application where the software is being used is measuring the arterial input function (AIF) in a dynamic SPECT study where the data are acquired with a slow camera rotation. Measurement of the AIF is essential to deriving quantitative estimates of regional myocardial blood flow using kinetic models. A study was performed to evaluate whether a slowly rotating SPECT system could provide accurate AIFs for myocardial perfusion imaging (MPI). Methods: Dynamic cardiac SPECT was first performed in human subjects at rest using a Philips Precedence SPECT/CT scanner. Dynamic measurements of Tc-99m-tetrofosmin in the myocardium were obtained using an infusion time of 2 minutes. Blood input, myocardium tissue, and liver TACs were estimated using spatiotemporal splines. These were fit to a one-compartment perfusion model to obtain wash-in rate parameters K1. Results: The spatiotemporal 4D ML-EM reconstructions gave more accurate reconstructions than did standard frame-by-frame 3D ML-EM reconstructions. From additional computer simulations and phantom studies, it was determined that a 1-minute infusion with a SPECT system rotation speed providing 180 degrees of projection data every 54 s can produce measurements of blood-pool and myocardial TACs. This has important application in the calculation of coronary flow reserve using rest/stress dynamic cardiac SPECT. The system matrices are used in maximum-likelihood and maximum a posteriori formulations in estimation theory, where through iterative algorithms (conjugate gradient, expectation maximization, or maximum a posteriori probability algorithms) the solution is determined that maximizes a likelihood or a posteriori probability function.
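The kinetic-modeling step can be sketched with the standard one-compartment perfusion model, C_t(t) = K1 * (AIF convolved with exp(-k2*t)), fitted to a tissue TAC. The time grid, AIF shape, and noise level below are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def one_compartment(t, K1, k2, aif):
    """Tissue TAC: C_t(t) = K1 * (AIF convolved with exp(-k2 t)),
    approximated by discrete convolution on a uniform time grid."""
    dt = t[1] - t[0]
    return K1 * np.convolve(aif, np.exp(-k2 * t))[: len(t)] * dt

t = np.linspace(0.0, 10.0, 200)                 # minutes (illustrative)
aif = t * np.exp(-t / 0.8)                      # gamma-variate-like AIF
rng = np.random.default_rng(1)
tac = one_compartment(t, 0.9, 0.3, aif) + rng.normal(0, 0.01, t.size)
(K1, k2), _ = curve_fit(lambda tt, K1, k2: one_compartment(tt, K1, k2, aif),
                        t, tac, p0=(0.5, 0.1))
```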
Multi-PSF fusion in image restoration of range-gated systems
NASA Astrophysics Data System (ADS)
Wang, Canjin; Sun, Tao; Wang, Tingfeng; Miao, Xikui; Wang, Rui
2018-07-01
For the task of image restoration, accurate estimation of the degrading PSF/kernel is the premise of recovering a visually superior image. The imaging process of a range-gated imaging system in the atmosphere involves many factors, such as back scattering, background radiation, the diffraction limit, and the vibration of the platform. On one hand, due to the difficulty of constructing models for all factors, the kernels from physical-model-based methods are not strictly accurate or practical. On the other hand, there are few strong edges in the images, which brings significant errors to most image-feature-based methods. Since different methods focus on different formation factors of the kernel, their results often complement each other. Therefore, we propose an approach that combines physical models with image features. With a fusion strategy using a GCRF (Gaussian Conditional Random Fields) framework, we obtain a final kernel that is closer to the actual one. To address the problem that ground-truth images are difficult to obtain, we then propose a semi-data-driven fusion method in which different data sets are used to train the fusion parameters. Finally, a semi-blind restoration strategy based on the EM (Expectation Maximization) and RL (Richardson-Lucy) algorithms is proposed. Our method not only models how the laser propagates through the atmosphere and images on the ICCD (Intensified CCD) plane, but also quantifies other unknown degradation factors using image-based methods, revealing how multiple kernel elements interact with each other. The experimental results demonstrate that our method achieves better performance than state-of-the-art restoration approaches.
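The RL half of the proposed EM/RL strategy is the classic Richardson-Lucy iteration. A minimal sketch follows, with the fused kernel from the GCRF step supplied as the PSF; the blind-kernel updates and the EM coupling are omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Classic Richardson-Lucy deconvolution (the RL half of EM/RL)."""
    image = image.astype(float)
    est = np.full_like(image, image.mean())     # flat initial estimate
    psf_flip = psf[::-1, ::-1]                  # adjoint of the blur
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)
        est *= fftconvolve(ratio, psf_flip, mode="same")
    return est
```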
NASA Astrophysics Data System (ADS)
Hasan, Haliza; Ahmad, Sanizah; Osman, Balkish Mohd; Sapri, Shamsiah; Othman, Nadirah
2017-08-01
In regression analysis, missing covariate data is a common problem. Many researchers use ad hoc methods to overcome this problem because of their ease of implementation. However, these methods require assumptions about the data that rarely hold in practice. Model-based methods such as Maximum Likelihood (ML) using the expectation maximization (EM) algorithm and Multiple Imputation (MI) are more promising when dealing with the difficulties caused by missing data. Then again, inappropriate methods of missing-value imputation can lead to serious bias that severely affects the parameter estimates. The main objective of this study is to provide a better understanding of missing-data concepts to assist researchers in selecting appropriate missing-data imputation methods. A simulation study was performed to assess the effects of different missing-data techniques on the performance of a regression model. The covariate data were generated from an underlying multivariate normal distribution, and the dependent variable was generated as a combination of explanatory variables. Missing values in the covariates were simulated under a mechanism called missing at random (MAR). Four levels of missingness (10%, 20%, 30%, and 40%) were imposed. The ML and MI techniques available within SAS software were investigated. A linear regression model was fitted, and the model performance measures, MSE and R-squared, were obtained. Results of the analysis showed that MI is superior in handling missing data, with the highest R-squared and lowest MSE, when the percentage of missingness is less than 30%. Both methods are unable to handle levels of missingness greater than 30%.
Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2017-01-01
Purpose Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that calibration can be performed in the OR on demand. Methods We designed a mechanical tracking mount to uniquely and snugly position an EM sensor to an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration result in the OR, we integrated a tube phantom with fCalib and overlaid a virtual representation of the tube on the live video scene. Results We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggested that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, would affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s – 22.7 s). Conclusions We developed and validated a prototype for fast calibration and evaluation of EM tracked conventional (forward viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to be performed in the OR on demand. PMID:27250853
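A simplified stand-in for the extrinsic half of such a calibration: with intrinsics already known (unlike fCalib, which also recovers them from the same image), one view of the target plus the tracked sensor pose yields the lens-to-sensor transform via PnP. The names and the division of labor are assumptions made for illustration.

```python
import numpy as np
import cv2

def scope_to_sensor(obj_pts, img_pts, K, dist, T_world_sensor):
    """Lens-to-EM-sensor transform from a single view of a known target.

    obj_pts: Nx3 target points in tracker (world) coordinates (N >= 4);
    img_pts: Nx2 pixel locations; K, dist: intrinsics (assumed known here);
    T_world_sensor: 4x4 sensor pose reported by the EM tracker.
    """
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    T_cam_world = np.eye(4)
    T_cam_world[:3, :3], T_cam_world[:3, 3] = R, tvec.ravel()
    T_world_cam = np.linalg.inv(T_cam_world)    # lens pose in world frame
    return np.linalg.inv(T_world_sensor) @ T_world_cam
```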
NASA Astrophysics Data System (ADS)
Rabenok, L.; Grimalsky, V.; De La Hidalga-W., J.
2006-09-01
The report is devoted to applications of microwave therapy. 50 patients with acute purulent-inflammatory diseases of the hand were examined using our method of treatment with electromagnetic (EM) microwave fields in an outpatient clinic. We used a portable apparatus that operates in the millimeter (mm) wave range in 4 regimes. The intensity of the EM radiation was 2-10 mW/cm². A peculiarity of the method was the absence of any antibacterial medicine during the treatment. We conclude that the use of EM microwave fields appears very efficient in the complex treatment of acute purulent-inflammatory diseases of the hand in an outpatient clinic. An interpretation of the obtained results is given in terms of the resonant character of the interaction of EM radiation with molecular and cellular structures.
X-rays in the Cryo-EM Era: Structural Biology’s Dynamic Future
Shoemaker, Susannah C.; Ando, Nozomi
2018-01-01
Over the past several years, single-particle cryo-electron microscopy (cryo-EM) has emerged as a leading method for elucidating macromolecular structures at near-atomic resolution, rivaling even the established technique of X-ray crystallography. Cryo-EM is now able to probe proteins as small as hemoglobin (64 kDa), while avoiding the crystallization bottleneck entirely. The remarkable success of cryo-EM has called into question the continuing relevance of X-ray methods, particularly crystallography. To say that the future of structural biology is either cryo-EM or crystallography, however, would be misguided. Crystallography remains better suited to yield precise atomic coordinates of macromolecules under a few hundred kDa in size, while the ability to probe larger, potentially more disordered assemblies is a distinct advantage of cryo-EM. Likewise, crystallography is better equipped to provide high-resolution dynamic information as a function of time, temperature, pressure, and other perturbations, whereas cryo-EM offers increasing insight into conformational and energy landscapes, particularly as algorithms to deconvolute conformational heterogeneity become more advanced. Ultimately, the future of both techniques depends on how their individual strengths are utilized to tackle questions on the frontiers of structural biology. Structure determination is just one piece of a much larger puzzle: a central challenge of modern structural biology is to relate structural information to biological function. In this perspective, we share insight from several leaders in the field and examine the unique and complementary ways in which X-ray methods and cryo-EM can shape the future of structural biology. PMID:29227642
Cosmic muon induced EM showers in NOvA
Yadav, Nitin; Duyang, Hongyue; Shanahan, Peter; ...
2016-11-15
Here, the NuMI Off-Axis νe Appearance (NOvA) experiment is a νe appearance neutrino oscillation experiment at Fermilab. It identifies the νe signal from the electromagnetic (EM) showers induced by the electrons in the final state of neutrino interactions. Cosmic muon induced EM showers, dominated by bremsstrahlung, are abundant in the NOvA far detector. We use the cosmic muon removal technique to obtain a pure EM shower sample from bremsstrahlung muons in data. We also use EM showers from cosmic muons decaying in flight, which are highly pure EM showers. The large cosmic-EM sample can be used, as a data-driven method, to characterize the EM shower signature, and it provides valuable checks of the simulation, reconstruction, particle identification algorithm, and calibration across the NOvA detector.
Self-assembled monolayers improve protein distribution on holey carbon cryo-EM supports
Meyerson, Joel R.; Rao, Prashant; Kumar, Janesh; Chittori, Sagar; Banerjee, Soojay; Pierson, Jason; Mayer, Mark L.; Subramaniam, Sriram
2014-01-01
Poor partitioning of macromolecules into the holes of holey carbon support grids frequently limits structural determination by single particle cryo-electron microscopy (cryo-EM). Here, we present a method to deposit, on gold-coated carbon grids, a self-assembled monolayer whose surface properties can be controlled by chemical modification. We demonstrate the utility of this approach to drive partitioning of ionotropic glutamate receptors into the holes, thereby enabling 3D structural analysis using cryo-EM methods. PMID:25403871
Patch-based Convolutional Neural Network for Whole Slide Tissue Image Classification
Hou, Le; Samaras, Dimitris; Kurc, Tahsin M.; Gao, Yi; Davis, James E.; Saltz, Joel H.
2016-01-01
Convolutional Neural Networks (CNN) are state-of-the-art models for many image classification tasks. However, to recognize cancer subtypes automatically, training a CNN on gigapixel resolution Whole Slide Tissue Images (WSI) is currently computationally impossible. The differentiation of cancer subtypes is based on cellular-level visual features observed on image patch scale. Therefore, we argue that in this situation, training a patch-level classifier on image patches will perform better than or similar to an image-level classifier. The challenge becomes how to intelligently combine patch-level classification results and model the fact that not all patches will be discriminative. We propose to train a decision fusion model to aggregate patch-level predictions given by patch-level CNNs, which to the best of our knowledge has not been shown before. Furthermore, we formulate a novel Expectation-Maximization (EM) based method that automatically locates discriminative patches robustly by utilizing the spatial relationships of patches. We apply our method to the classification of glioma and non-small-cell lung carcinoma cases into subtypes. The classification accuracy of our method is similar to the inter-observer agreement between pathologists. Although it is impossible to train CNNs on WSIs, we experimentally demonstrate using a comparable non-cancer dataset of smaller images that a patch-based CNN can outperform an image-based CNN. PMID:27795661
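A schematic of the EM loop for discriminative patch selection: the M-step retrains the patch classifier on the currently retained patches, and the E-step rescores all patches and keeps those consistent with their slide's label. The train/predict callables and the omission of spatial smoothing are simplifying assumptions.

```python
import numpy as np

def em_patch_training(patches, labels, train_cnn, predict, iters=3, thresh=0.5):
    """EM-style discriminative patch selection for slide-level labels.

    patches: list of arrays, one (n_patches, ...) array per slide;
    labels: one integer label per slide; train_cnn(X, y) -> model;
    predict(model, P) -> (n_patches, n_classes) class probabilities.
    Guarding against emptying a slide and spatially smoothing the scores
    (as the paper does) are omitted for brevity.
    """
    keep = [np.ones(len(p), bool) for p in patches]
    model = None
    for _ in range(iters):
        X = np.concatenate([p[m] for p, m in zip(patches, keep)])
        y = np.concatenate([np.full(m.sum(), l) for m, l in zip(keep, labels)])
        model = train_cnn(X, y)                      # M-step: refit classifier
        for i, p in enumerate(patches):              # E-step: rescore patches
            keep[i] = predict(model, p)[:, labels[i]] > thresh
    return model, keep
```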
NASA Astrophysics Data System (ADS)
Tsagaan, Baigalmaa; Abe, Keiichi; Goto, Masahiro; Yamamoto, Seiji; Terakawa, Susumu
2006-03-01
This paper presents a method for segmenting brain tissues from MR images, developed for our image-guided neurosurgery system currently under development. Our goal is to segment brain tissues for creating a biomechanical model. The proposed segmentation method is based on 3-D region growing and outperforms conventional approaches through stepwise usage of intensity similarities between voxels in conjunction with edge information. Since the intensity and edge information are complementary to each other in region-based segmentation, we use them twice, performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel being considered is examined to constrain the region growing. The expanded region of the first extraction result is then used as the domain for the next processing step. Only the intensity and edge information of the current voxel are utilized in the final extraction. Before segmentation, the intensity parameters of the brain tissues, as well as the partial volume effect, are estimated using the expectation-maximization (EM) algorithm in order to provide an accurate data interpretation for the extraction. We tested the proposed method on T1-weighted MR images of the brain and evaluated the segmentation effectiveness by comparing the results with ground truths. The meshes generated from the segmented brain volume using mesh-generating software are also shown in this paper.
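A single-pass sketch of the region-growing core, gating acceptance jointly on intensity similarity and edge strength as the abstract describes. The coarse-to-fine staging and the EM-estimated intensity parameters are omitted.

```python
import numpy as np
from collections import deque

def region_grow(vol, seed, intensity_tol, edge_map, edge_tol):
    """3-D region growing gated by intensity similarity and edge strength."""
    out = np.zeros(vol.shape, bool)
    out[seed] = True
    q, ref = deque([seed]), vol[seed]
    steps = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while q:
        z, y, x = q.popleft()
        for dz, dy, dx in steps:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= c < s for c, s in zip(n, vol.shape)) and not out[n]:
                # accept only intensity-similar voxels away from strong edges
                if abs(vol[n] - ref) < intensity_tol and edge_map[n] < edge_tol:
                    out[n] = True
                    q.append(n)
    return out
```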
Methodology for the development of a Canadian national EMS research agenda
2011-01-01
Background Many health care disciplines use evidence-based decision making to improve patient care and system performance. While the amount and quality of emergency medical services (EMS) research in Canada has increased over the past two decades, there has not been a unified national plan to enable research, ensure efficient use of research resources, guide funding decisions and build capacity in EMS research. Other countries have used research agendas to identify barriers and opportunities in EMS research and define national research priorities. The objective of this project is to develop a national EMS research agenda for Canada that will: 1) explore what barriers to EMS research currently exist, 2) identify current strengths and opportunities that may be of benefit to advancing EMS research, 3) make recommendations to overcome barriers and capitalize on opportunities, and 4) identify national EMS research priorities. Methods/Design Paramedics, educators, EMS managers, medical directors, researchers and other key stakeholders from across Canada will be purposefully recruited to participate in this mixed methods study, which consists of three phases: 1) qualitative interviews with a selection of the study participants, who will be asked about their experience and opinions about the four study objectives, 2) a facilitated roundtable discussion, in which all participants will explore and discuss the study objectives, and 3) an online Delphi consensus survey, in which all participants will be asked to score the importance of each topic discovered during the interviews and roundtable as they relate to the study objectives. Results will be analyzed to determine the level of consensus achieved for each topic. Discussion A mixed methods approach will be used to address the four study objectives. We anticipate that the keys to success will be: 1) ensuring a representative sample of EMS stakeholders, 2) fostering an open and collaborative roundtable discussion, and 3) adhering to a predefined approach to measure consensus on each topic. Steps have been taken in the methodology to address each of these a priori concerns. PMID:21961624
Three validation metrics for automated probabilistic image segmentation of brain tumours
Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.
2005-01-01
The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
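A brief sketch of one of the validation computations described above: sweeping a decision threshold, scoring each binary segmentation with the Dice similarity coefficient against the gold standard, and picking the threshold by maximization. The beta-distributed scores echo the paper's parametric assumption; all numbers are synthetic.

```python
# Dice similarity coefficient and threshold selection by maximization.
import numpy as np

rng = np.random.default_rng(2)
truth = rng.random(10000) < 0.2                   # latent gold standard mask
prob = np.where(truth, rng.beta(4, 2, 10000),     # tumour pixels: high scores
                        rng.beta(2, 5, 10000))    # background: low scores

def dice(pred, truth):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

thresholds = np.linspace(0.05, 0.95, 19)
scores = [dice(prob >= t, truth) for t in thresholds]
best = int(np.argmax(scores))
print(f"optimal threshold {thresholds[best]:.2f}, Dice {scores[best]:.3f}")
```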
Thwala, Siphiwe Bridget Pearl; Blaauw, Duane; Ssengooba, Freddie
2018-01-01
Improving the delivery of emergency obstetric and newborn care (EmONC) remains critical in addressing direct causes of maternal mortality. United Nations (UN) agencies have promoted standard methods for evaluating the availability of EmONC facilities, although modifications have been proposed by others. This study presents an assessment of the preparedness of public health facilities to provide EmONC using these methods in one South African district with a persistently high maternal mortality ratio. Data collection took place in the final quarter of 2014. Cross-sectional surveys were conducted to classify the 7 hospitals and 8 community health centres (CHCs) in the district as either basic EmONC (BEmONC) or comprehensive EmONC (CEmONC) facilities using UN EmONC signal functions. The required density of EmONC facilities was calculated using UN norms. We also assessed the availability of EmONC personnel, resuscitation equipment, drugs, fluids, and protocols at each facility. The workload of skilled EmONC providers at hospitals and CHCs was compared. All 7 hospitals in the district were classified as CEmONC facilities, but none of the 8 CHCs performed all required signal functions to be classified as BEmONC facilities. UN norms indicated that 25 EmONC facilities were required for the district population, 5 of which should be CEmONCs. None of the facilities had 100% of items on the EmONC checklists. Hospital midwives performed an average of 36.4±14.3 deliveries each per month compared to only 7.9±3.2 for CHC midwives (p<0.001). The analysis indicated a shortfall of EmONC facilities in the district. Full EmONC services were centralised to hospitals to assure patient safety even though national policy guidelines sanction more decentralisation to CHCs. Studies measuring EmONC availability need to consider facility opening hours, capacity and staffing in addition to the demonstrated performance of signal functions.
A Case for Application Oblivious Energy-Efficient MPI Runtime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Venkatesh, Akshay; Vishnu, Abhinav; Hamidouche, Khaled
Power has become the major impediment in designing large-scale high-end systems. The Message Passing Interface (MPI) is the de facto communication interface used as the back-end for designing applications, programming models and runtimes for these systems. Slack, the time spent by an MPI process in a single MPI call, provides a potential for energy and power savings if an appropriate power-reduction technique such as core idling or Dynamic Voltage and Frequency Scaling (DVFS) can be applied without perturbing the application's execution time. Existing techniques that exploit slack for power savings assume that application behavior repeats across iterations/executions. However, the increasing use of adaptive, data-dependent workloads combined with system factors (OS noise, congestion) makes this assumption invalid. This paper proposes and implements Energy Aware MPI (EAM), an application-oblivious energy-efficient MPI runtime. EAM uses a combination of communication models of common MPI primitives (point-to-point, collective, progress, blocking/non-blocking) and an online observation of slack for maximizing energy efficiency. Each power lever incurs time overhead, which must be amortized over slack to minimize degradation. When predicted communication time exceeds a lever overhead, the lever is used as soon as possible, to maximize energy efficiency. When misprediction occurs, the lever(s) are used automatically at specific intervals for amortization. We implement EAM using MVAPICH2 and evaluate it on ten applications using up to 4096 processes. Our performance evaluation on an InfiniBand cluster indicates that EAM can reduce energy consumption by 5-41% in comparison to the default approach, with negligible (less than 4% in all cases) performance loss.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
NASA Astrophysics Data System (ADS)
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel model can also be used for effective post-reconstruction denoising, through the use of an EM-like image-space algorithm. Finally, we applied the proposed algorithm to reconstruction of real high-resolution dynamic [11C]SCH23390 data, showing promising results.
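A compact sketch of the coefficient estimation described above, assuming random stand-in matrices: with the dynamic image modelled as a linear combination of basis functions, the Poisson log-likelihood is maximized by the standard multiplicative EM (MLEM) update applied to the combined system-times-basis matrix.

```python
# EM (MLEM) update for non-negative basis coefficients in y ~ Poisson(A c),
# where A folds together the PET system matrix and the basis functions
# (A = P @ B). Sizes and matrices are random stand-ins, not a real scanner.
import numpy as np

rng = np.random.default_rng(3)
n_bins, n_vox, n_basis = 200, 100, 10
P = rng.random((n_bins, n_vox))        # system (projection) matrix
B = rng.random((n_vox, n_basis))       # spatial/temporal basis functions
c_true = rng.random(n_basis)
y = rng.poisson(P @ B @ c_true)        # simulated noisy projection data

A = P @ B
c = np.ones(n_basis)                   # non-negative initialisation
sens = A.sum(axis=0)                   # sensitivity term (A^T 1)
for _ in range(200):
    ybar = A @ c
    # Standard multiplicative EM update; preserves non-negativity.
    c *= (A.T @ (y / np.maximum(ybar, 1e-12))) / sens

print("relative error:", np.linalg.norm(c - c_true) / np.linalg.norm(c_true))
```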
Liu, Xinyang; Plishker, William; Zaki, George; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj
2016-06-01
Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with the fCalib prototype and overlaid a virtual representation of the tube on the live video scene. We compared spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between the EM tracking-based fCalib and the optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range 3.5 s-22.7 s). We developed and validated a prototype for fast calibration and evaluation of EM-tracked conventional (forward-viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand.
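For context, the conventional baseline that fCalib is compared against can be sketched with OpenCV's standard multi-pose checkerboard calibration. The folder path and board geometry below are assumptions for illustration, and the fCalib single-image procedure itself is not reproduced here.

```python
# Conventional multi-pose checkerboard calibration using OpenCV's real API.
import glob
import cv2
import numpy as np

pattern = (9, 6)                       # assumed inner corners per row/column
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 10.0

obj_points, img_points, size = [], [], None
for fname in glob.glob("calib_images/*.png"):   # hypothetical image folder
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern, None)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics and distortion from many poses; fCalib replaces this with a
# single image of a special target plus the EM-tracked sensor pose.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print("re-projection RMS error (px):", rms)
```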
Feedback-Driven Mode Rotation Control by Electro-Magnetic Torque
NASA Astrophysics Data System (ADS)
Okabayashi, M.; Strait, E. J.; Garofalo, A. M.; La Haye, R. J.; In, Y.; Hanson, J. M.; Shiraki, D.; Volpe, F.
2013-10-01
The recent experimental discovery of feedback-driven mode rotation control, supported by modeling, opens new approaches for avoiding locked tearing modes that otherwise lead to disruptions. This approach is an application of electro-magnetic (EM) torque using 3D fields, routinely maximized through a simple feedback system. In DIII-D, it is observed that a feedback-applied radial field can be synchronized in phase with the poloidal field component of a large-amplitude tearing mode, producing the maximum EM torque input. The mode frequency can be maintained in the 10 Hz to 100 Hz range in a well-controlled manner, sustaining the discharges. Presently, the ITER internal coils designed for edge localized mode (ELM) control can only be varied at a few Hz, still well below the inverse wall time constant. Hence, the ELM control system could in principle be used for this feedback-driven mode control in various ways. For instance, the locking of MHD modes can be avoided during the controlled shutdown of hundreds of megajoules of EM stored energy in case of emergency. Feedback could also be useful for minimizing mechanical resonances during disruption events by forcing the MHD frequency away from dangerous ranges. Work supported by the US DOE under DE-AC02-09CH11466, DE-FC-02-04ER54698, DE-FG02-08ER85195, and DE-FG02-04ER54761.
Saving tourists: the status of emergency medical services in California's National Parks.
Heggie, Travis W; Heggie, Tracey M
2009-01-01
Providing emergency medical services (EMS) in popular tourist destinations such as National Parks requires an understanding of the availability and demand for EMS. This study examines the EMS workload, EMS transportation methods, EMS funding, and EMS provider status in California's National Park Service units. A retrospective review of data from the 2005 Annual Emergency Medical Services Report for National Park Service (NPS) units in California. Sixteen NPS units in California reported EMS activity. EMS program funding and training costs totaled USD $1,071,022. During 2005 there were 84 reported fatalities, 910 trauma incidents, 663 non-cardiac medicals, 129 cardiac incidents, and 447 first aid incidents. Sequoia and Kings Canyon National Parks, Yosemite National Park, Golden Gate National Recreation Area, and Death Valley National Park accounted for 83% of the total EMS case workload. Ground transports accounted for 85% of all EMS transports and Emergency Medical Technicians with EMT-basic (EMT-B) training made up 76% of the total 373 EMS providers. Providing EMS for tourists can be a challenging task. As tourist endeavors increase globally and move into more remote environments, the level of EMS operations in California's NPS units can serve as a model for developing EMS operations serving tourist populations.
Alam, M S; Bognar, J G; Cain, S; Yasuda, B J
1998-03-10
During the process of microscanning, a controlled vibrating mirror typically is used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach by means of the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image that is compensated for known imager-system blur while it simultaneously estimates the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented in real time on currently available high-performance processors: the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that instead estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm has been found to significantly reduce the computational burden when compared with the original EM algorithm, thus making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
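The paper's one-step registration cost function is not reproduced here, but phase correlation is a standard one-step global shift estimator and conveys the idea; a sketch with synthetic frames:

```python
# One-step global shift estimation via phase correlation (FFT).
import numpy as np

def phase_correlation_shift(ref, shifted):
    """Return the integer (row, col) shift that aligns `shifted` to `ref`."""
    F1, F2 = np.fft.fft2(ref), np.fft.fft2(shifted)
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)    # normalised cross-power
    corr = np.fft.ifft2(cross).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map the peak location to signed shifts (FFT wrap-around).
    return [(i if i <= n // 2 else i - n) for i, n in zip(idx, corr.shape)]

rng = np.random.default_rng(4)
frame = rng.random((128, 128))
moved = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(phase_correlation_shift(moved, frame))     # -> [3, -5]
```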
Multi-period project portfolio selection under risk considerations and stochastic income
NASA Astrophysics Data System (ADS)
Tofighian, Ali Asghar; Moezzi, Hamid; Khakzar Barfuei, Morteza; Shafiee, Mahmood
2018-02-01
This paper deals with the multi-period project portfolio selection problem. In this problem, the available budget is invested in the best portfolio of projects in each period such that the net profit is maximized. We also consider more realistic assumptions to cover a wider range of applications than those reported in previous studies. A novel mathematical model is presented to solve the problem, considering risks, stochastic incomes, and the possibility of investing extra budget in each time period. Due to the complexity of the problem, an effective meta-heuristic method hybridized with a local search procedure is presented. The algorithm is based on a genetic algorithm (GA), a prominent method for this type of problem, enhanced by a new solution representation and well-chosen operators, and hybridized with a local search mechanism to obtain better solutions in a shorter time. The performance of the proposed algorithm is then compared with well-known algorithms, such as the basic genetic algorithm (GA), particle swarm optimization (PSO), and the electromagnetism-like algorithm (EM-like), by means of several prominent indicators. The computational results show the superiority of the proposed algorithm in terms of accuracy, robustness and computation time. Finally, the proposed algorithm is combined with PSO to considerably improve the computing time.
Arribas-Gil, Ana; De la Cruz, Rolando; Lebarbier, Emilie; Meza, Cristian
2015-06-01
We propose a classification method for longitudinal data. The Bayes classifier is classically used to determine a classification rule where the underlying density in each class needs to be well modeled and estimated. This work is motivated by a real dataset of hormone levels measured at the early stages of pregnancy that can be used to predict normal versus abnormal pregnancy outcomes. The proposed model, which is a semiparametric linear mixed-effects model (SLMM), is a particular case of the semiparametric nonlinear mixed-effects class of models (SNMM) in which finite-dimensional (fixed effects and variance components) and infinite-dimensional (an unknown function) parameters have to be estimated. In SNMMs, maximum likelihood estimation is performed iteratively, alternating parametric and nonparametric procedures. However, if one can assume that the random effects and the unknown function interact in a linear way, more efficient estimation methods can be used. Our contribution is the proposal of a unified estimation procedure based on a penalized EM-type algorithm. The Expectation and Maximization steps are explicit. In the latter step, the unknown function is estimated in a nonparametric fashion using a lasso-type procedure. A simulation study and an application on real data are performed. © 2015, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Sun, Qingyang; Shu, Ting; Tang, Bin; Yu, Wenxian
2018-01-01
A method is proposed to perform target deception jamming against spaceborne synthetic aperture radar. Compared with traditional jamming methods that use deception templates to cover the target or region of interest, the proposed method aims to generate a verisimilar deceptive target in various attitudes with high fidelity using electromagnetic (EM) scattering. Based on the geometrical model for target deception jamming, the EM scattering data from the deceptive target were first simulated using EM simulation software. Then, the proposed jamming frequency response (JFR) was calculated offline by further processing. Finally, the deception jamming is achieved in real time by a multiplication between the proposed JFR and the spectrum of intercepted radar signals. The practical implementation is presented. The simulation results prove the validity of the proposed method.
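A minimal signal-level sketch of the real-time step described above: the spectrum of an intercepted pulse is multiplied by a precomputed JFR and transformed back. The chirp parameters are invented, and the placeholder JFR is built from a few delayed, scaled scattering centres rather than from an EM solver.

```python
# Frequency-domain application of a jamming frequency response (JFR).
import numpy as np

fs, T = 100e6, 10e-6                      # sample rate, pulse length (assumed)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (20e6 / T) * t**2)   # intercepted LFM pulse

n = len(t)
freqs = np.fft.fftfreq(n, 1 / fs)
# Placeholder JFR: a few delayed, scaled scatterers emulating a false target.
delays_s = np.array([0.5e-6, 0.9e-6, 1.4e-6])
amps = np.array([1.0, 0.6, 0.3])
jfr = (amps[:, None]
       * np.exp(-2j * np.pi * freqs[None, :] * delays_s[:, None])).sum(axis=0)

jammed = np.fft.ifft(np.fft.fft(chirp) * jfr)    # transmitted deception signal
print(jammed.shape, np.abs(jammed).max().round(2))
```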
Understanding a reference-free impedance method using collocated piezoelectric transducers
NASA Astrophysics Data System (ADS)
Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo
2010-03-01
A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate is utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation is conducted to investigate the EMS of collocated PZT wafers in the frequency domain in the presence of damage through spectral element analysis. Then, the EMS due to mode conversion induced by damage are extracted using a signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to reference-free damage diagnosis is discussed.
Pappinen, Jukka; Laukkanen-Nevala, Päivi; Mäntyselkä, Pekka; Kurola, Jouni
2018-05-15
In Finland, hospital districts (HD) are required by law to determine the level and availability of Emergency Medical Services (EMS) for each 1-km² area (cell) within their administrative area. The cells are currently categorised into five risk categories based on the predicted number of missions. Methodological defects and insufficient instructions have led to incomparability between EMS services. The aim of this study was to describe a new, nationwide method for categorising the cells, analyse EMS response time data and describe possible differences in mission profiles between the new risk category areas. National databases of EMS missions, population and buildings were combined with an existing nationwide 1-km² hexagon-shaped cell grid. The cells were categorised into four groups, based on the Finnish Environment Institute's (FEI) national definition of urban and rural areas, population and historical EMS mission density within each cell. The EMS mission profiles of the cell categories were compared using risk ratios with confidence intervals in 12 mission groups. In total, 87.3% of the population lives and 87.5% of missions took place in core or other urban areas, which covered only 4.7% of the HDs' surface area. Trauma mission incidence per 1000 inhabitants was higher in core urban areas (42.2) than in other urban (24.2) or dispersed settlement areas (24.6). The results were similar for non-trauma missions (134.8, 93.2 and 92.2, respectively). Each cell category had a characteristic mission profile. High-energy trauma missions and cardiac problems were more common in rural and uninhabited cells, while violence, intoxication and non-specific problems dominated in urban areas. The proposed area categories and grid-based data collection appear to be a useful method for evaluating EMS demand and availability in different parts of the country for statistical purposes. Due to a similar rural/urban area definition, the method might also be usable for comparison between the Nordic countries.
Toporisic, Rebeka; Mlakar, Anita; Hvala, Jernej; Prislan, Iztok; Zupancic-Kralj, Lucija
2010-06-05
Stress stability testing and forced degradation were used to determine the stability of enalapril maleate (EM) and to find a degradation pathway for the drug. The degradation impurities formed under different stress conditions were investigated by HPLC and UPLC-MS methods. HPLC analysis showed several degradation impurities, of which several had already been determined, but on oxidation in the presence of magnesium monoperoxyphthalate (MMPP) several impurities of EM were observed that had not yet been characterized. The HPLC methods for determination of EM were validated. The linearity of the HPLC method was established in the concentration range between 0.5 and 10 microg/mL with a correlation coefficient greater than 0.99. The LOD of EM was 0.2 microg/mL and the LOQ was 0.5 microg/mL. The validated HPLC method was used to determine the degradation impurities in samples after stress stability testing and forced degradation of EM. In order to identify new degradation impurities of EM after forced degradation, UPLC-MS/MS(n) (Orbitrap) was used. It was found that the new impurities are oxidation products: (S)-1-((S)-2-((S)-1-ethoxy-4-(o,m,p-hydroxyphenyl)-1-oxobutan-2-ylamino)propanoyl)pyrrolidine-2-carboxylic acid and (2S)-1-((2S)-2-((2S)-1-ethoxy-4-hydroxy-1-oxo-4-phenylbutan-2-ylamino)propanoyl)pyrrolidine-2-carboxylic acid. (S)-2-(3-phenylpropylamino)-1-(pyrrolidin-1-yl)propan-1-one was identified as a new degradation impurity. Copyright (c) 2010. Published by Elsevier B.V.
A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing
2018-01-01
To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration to reach a comprehensive result. To verify the model, six wind turbines were chosen as the research objects. The ranking obtained by the proposed method was 4# > 6# > 1# > 5# > 2# > 3#, in complete conformity with the theoretical ranking, which indicates that the EM-PCA method is reliable and effective. The method can guide state comparisons among different units and support wind farm operational assessment.
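A short sketch of the Entropy Method half of such an EM-PCA model, assuming an invented turbine-by-indicator decision matrix: indicator weights follow from column entropies, and a weighted score yields a ranking (the PCA-based reduction is omitted).

```python
# Entropy-weight computation for a small decision matrix (illustrative).
import numpy as np

# Rows: 6 turbines; columns: performance indicators (invented numbers).
# All indicators are treated as benefit-type ("larger is better") for simplicity.
X = np.array([[0.92, 310, 5.1],
              [0.88, 295, 6.3],
              [0.85, 280, 7.0],
              [0.97, 330, 4.2],
              [0.90, 305, 5.8],
              [0.95, 325, 4.6]], dtype=float)

P = X / X.sum(axis=0)                          # column-normalised proportions
n = X.shape[0]
plogp = np.where(P > 0, P * np.log(P), 0.0)
e = -plogp.sum(axis=0) / np.log(n)             # entropy of each indicator
w = (1 - e) / (1 - e).sum()                    # entropy weights
print("indicator weights:", w.round(3))

scores = (X / X.max(axis=0)) @ w               # simple weighted score
print("turbine ranking:", np.argsort(-scores) + 1)
```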
Newgard, Craig D; Kampp, Michael; Nelson, Maria; Holmes, James F; Zive, Dana; Rea, Thomas; Bulger, Eileen M; Liao, Michael; Sherck, John; Hsia, Renee Y; Wang, N Ewen; Fleischman, Ross J; Barton, Erik D; Daya, Mohamud; Heineman, John; Kuppermann, Nathan
2012-05-01
"Emergency medical services (EMS) provider judgment" was recently added as a field triage criterion to the national guidelines, yet its predictive value and real world application remain unclear. We examine the use and independent predictive value of EMS provider judgment in identifying seriously injured persons. We analyzed a population-based retrospective cohort, supplemented by qualitative analysis, of injured children and adults evaluated and transported by 47 EMS agencies to 94 hospitals in five regions across the Western United States from 2006 to 2008. We used logistic regression models to evaluate the independent predictive value of EMS provider judgment for Injury Severity Score ≥ 16. EMS narratives were analyzed using qualitative methods to assess and compare common themes for each step in the triage algorithm, plus EMS provider judgment. 213,869 injured patients were evaluated and transported by EMS over the 3-year period, of whom 41,191 (19.3%) met at least one of the field triage criteria. EMS provider judgment was the most commonly used triage criterion (40.0% of all triage-positive patients; sole criterion in 21.4%). After accounting for other triage criteria and confounders, the adjusted odds ratio of Injury Severity Score ≥ 16 for EMS provider judgment was 1.23 (95% confidence interval, 1.03-1.47), although there was variability in predictive value across sites. Patients meeting EMS provider judgment had concerning clinical presentations qualitatively similar to those meeting mechanistic and other special considerations criteria. Among this multisite cohort of trauma patients, EMS provider judgment was the most commonly used field trauma triage criterion, independently associated with serious injury, and useful in identifying high-risk patients missed by other criteria. However, there was variability in predictive value between sites.
Jung, Halim; Jung, Sangwoo; Joo, Sunghee; Song, Changho
2016-01-01
[Purpose] The purpose of this study was to compare changes in the mobility of the pelvic floor muscle during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. [Subjects] Thirty healthy adults participated in this study (15 men and 15 women). [Methods] All participants performed a bridge exercise and abdominal curl-up during the abdominal drawing-in maneuver, maximal expiration, and pelvic floor muscle maximal contraction. Pelvic floor mobility was evaluated as the distance from the bladder base using ultrasound. [Results] According to exercise method, bridge exercise and abdominal curl-ups led to significantly different pelvic floor mobility. The pelvic floor muscle was elevated during the abdominal drawing-in maneuver and descended during maximal expiration. Finally, pelvic floor muscle mobility was greater during abdominal curl-up than during the bridge exercise. [Conclusion] According to these results, the abdominal drawing-in maneuver induced pelvic floor muscle contraction, and pelvic floor muscle contraction was greater during the abdominal curl-up than during the bridge exercise. PMID:27065532
Noise-enhanced convolutional neural networks.
Audhkhasi, Kartik; Osoba, Osonde; Kosko, Bart
2016-06-01
Injecting carefully chosen noise can speed convergence in the backpropagation training of a convolutional neural network (CNN). The Noisy CNN algorithm speeds training on average because the backpropagation algorithm is a special case of the generalized expectation-maximization (EM) algorithm and because such carefully chosen noise always speeds up the EM algorithm on average. The CNN framework gives a practical way to learn and recognize images because backpropagation scales with training data. It has only linear time complexity in the number of training samples. The Noisy CNN algorithm finds a special separating hyperplane in the network's noise space. The hyperplane arises from the likelihood-based positivity condition that noise-boosts the EM algorithm. The hyperplane cuts through a uniform-noise hypercube or Gaussian ball in the noise space depending on the type of noise used. Noise chosen from above the hyperplane speeds training on average. Noise chosen from below slows it on average. The algorithm can inject noise anywhere in the multilayered network. Adding noise to the output neurons reduced the average per-iteration training-set cross entropy by 39% on a standard MNIST image test set of handwritten digits. It also reduced the average per-iteration training-set classification error by 47%. Adding noise to the hidden layers can also reduce these performance measures. The noise benefit is most pronounced for smaller data sets because the largest EM hill-climbing gains tend to occur in the first few iterations. This noise effect can assist random sampling from large data sets because it allows a smaller random sample to give the same or better performance than a noiseless sample gives. Copyright © 2015 Elsevier Ltd. All rights reserved.
K+-induced alterations in airway muscle responsiveness to electrical field stimulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murlas, C.; Ehring, G.; Suszkiw, J.
1986-07-01
We investigated possible pre- and postsynaptic effects of K+-induced depolarization on ferret tracheal smooth muscle (TSM) responsiveness to cholinergic stimulation. To assess electromechanical activity, cell membrane potential (Em) and tension (Tm) were simultaneously recorded in buffer containing 6, 12, 18, or 24 mM K+ before and after electrical field stimulation (EFS) or exogenous acetylcholine (ACh). In 6 mM K+, Em was -58.1 ± 1.0 mV (mean ± SE). In 12 mM K+, Em was depolarized to -52.3 ± 0.9 mV, basal Tm did not change, and both excitatory junctional potentials and contractile responses to EFS at short stimulus duration were larger than in 6 mM K+. No such potentiation occurred at higher K+, although resting Em and Tm increased progressively above 12 mM K+. The sensitivity of ferret TSM to exogenous ACh appeared unaffected by K+. To determine whether the hyperresponsiveness in 12 mM K+ was due, in part, to augmented ACh release from intramural airway nerves, experiments were done using TSM preparations incubated with [3H]choline to measure [3H]ACh release at rest and during EFS. Although resting [3H]ACh release increased progressively in higher K+, release evoked by EFS was maximal in 12 mM K+ and declined at higher concentrations. We conclude that small elevations in the extracellular K+ concentration augment responsiveness of the airways by increasing the release of ACh, both at rest and during EFS, from intramural cholinergic nerve terminals. Larger increases in K+ appear to be inhibitory, possibly due to voltage-dependent effects that occur both pre- and postsynaptically.
Vitrification of mouse embryos using the thin plastic strip method
Hur, Yong Soo; Ann, Ji Young; Maeng, Ja Young; Park, Miji; Park, Jeong Hyun; Yoon, Jung; Yoon, San Hyun; Hur, Chang Young; Lee, Won Don; Lim, Jin Ho
2012-01-01
Objective The aim of this study was to compare vitrification optimization of mouse embryos using electron microscopy (EM) grid, cryotop, and thin plastic strip (TPS) containers by evaluating developmental competence and apoptosis rates. Methods Mouse embryos were obtained from superovulated mice. Mouse cleavage-stage, expanded, hatching-stage, and hatched-stage embryos were cryopreserved in EM grid, cryotop, and TPS containers by vitrification in 15% ethylene glycol, 15% dimethylsulfoxide, 10 µg/mL Ficoll, 0.65 M sucrose, and 20% serum substitute supplement (SSS) with basal medium. For the three groups in which embryos were thawed from the EM grid, cryotop, and TPS containers, the thawing solution consisted of 0.25 M sucrose, 0.125 M sucrose, and 20% SSS with basal medium, respectively. Rates of survival, re-expansion, reaching the hatched stage, and apoptosis after thawing were compared among the three groups. Results Developmental competence after thawing of vitrified expanded and hatching-stage blastocysts using the cryotop and TPS methods was significantly higher than with the EM grid (p<0.05). Also, apoptosis-positive nuclei rates after thawing of vitrified expanded blastocysts using cryotop and TPS were significantly lower than with the EM grid (p<0.05). Conclusion The TPS vitrification method has the advantages of achieving high developmental ability and effective preservation. PMID:23346525
Big data in cryoEM: automated collection, processing and accessibility of EM data.
Baldwin, Philip R; Tan, Yong Zi; Eng, Edward T; Rice, William J; Noble, Alex J; Negro, Carl J; Cianfrocco, Michael A; Potter, Clinton S; Carragher, Bridget
2018-06-01
The scope and complexity of cryogenic electron microscopy (cryoEM) data has greatly increased, and will continue to do so, due to recent and ongoing technical breakthroughs that have led to much improved resolutions for macromolecular structures solved using this method. This big data explosion includes single particle data as well as tomographic tilt series, both generally acquired as direct detector movies of ∼10-100 frames per image or per tilt-series. We provide a brief survey of the developments leading to the current status, and describe existing cryoEM pipelines, with an emphasis on the scope of data acquisition, methods for automation, and use of cloud storage and computing. Copyright © 2017 Elsevier Ltd. All rights reserved.
Compression of strings with approximate repeats.
Allison, L; Edgoose, T; Dix, T I
1998-01-01
We describe a model for strings of characters that is loosely based on the Lempel-Ziv model, with the addition that a repeated substring can be an approximate match to the original substring; this is close to the situation of DNA, for example. Typically there are many explanations for a given string under the model, some optimal and many suboptimal. Rather than commit to one optimal explanation, we sum the probabilities over all explanations under the model, because this gives the probability of the data under the model. The model has a small number of parameters, and these can be estimated from the given string by an expectation-maximization (EM) algorithm. Each iteration of the EM algorithm takes O(n²) time and a few iterations are typically sufficient. O(n²) complexity is impractical for strings of more than a few tens of thousands of characters, and a faster approximation algorithm is also given. The model is further extended to include approximate reverse-complementary repeats when analyzing DNA strings.
Smith, Justin D; Borckardt, Jeffrey J; Nash, Michael R
2012-09-01
The case-based time-series design is a viable methodology for treatment outcome research. However, the literature has not fully addressed the problem of missing observations in such autocorrelated data streams. Namely, to what extent do missing observations compromise inference when observations are not independent? Do the available missing-data replacement procedures preserve inferential integrity? Does the extent of autocorrelation matter? We use Monte Carlo simulation modeling of a single-subject intervention study to address these questions. We find power sensitivity to be within acceptable limits across four proportions of missing observations (10%, 20%, 30%, and 40%) when missing data are replaced using the expectation-maximization algorithm, more commonly known as the EM procedure (Dempster, Laird, & Rubin, 1977). This applies to data streams with lag-1 autocorrelation estimates under 0.80. As autocorrelation estimates approach 0.80, the replacement procedure yields an unacceptable power profile. The implications of these findings and directions for future research are discussed. Copyright © 2011. Published by Elsevier Ltd.
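A toy EM-style imputation for a lag-1 autocorrelated stream, loosely in the spirit of the EM procedure discussed above (the full procedure also estimates variances and handles more general models): missing points are replaced by their conditional means under a Gaussian AR(1) model, and the autocorrelation is re-estimated, alternately.

```python
# Simplified EM-style imputation for a Gaussian AR(1) series (toy example).
import numpy as np

rng = np.random.default_rng(5)
n, phi_true = 200, 0.6
x = np.zeros(n)
for t in range(1, n):                       # simulate a stationary AR(1)
    x[t] = phi_true * x[t - 1] + rng.normal(scale=np.sqrt(1 - phi_true**2))

miss = rng.random(n) < 0.2                  # 20% missing completely at random
z = np.where(miss, 0.0, x)                  # start missing values at the mean

for _ in range(20):
    # M-step: lag-1 autocorrelation from the current completed series.
    zc = z - z.mean()
    phi = (zc[:-1] * zc[1:]).sum() / (zc * zc).sum()
    # E-step: conditional mean of each missing point given its neighbours
    # (for an interior point: phi*(left+right)/(1+phi^2) under AR(1)).
    for t in np.where(miss)[0]:
        left = zc[t - 1] if t > 0 else 0.0
        right = zc[t + 1] if t < n - 1 else 0.0
        both = (t > 0) and (t < n - 1)
        zc[t] = phi * (left + right) / (1 + phi**2) if both else phi * (left + right)
    z = zc + z.mean()

print("estimated phi:", round(float(phi), 3), "true phi:", phi_true)
```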
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W.; Romberger, Jeff
The HVAC Controls Evaluation Protocol is designed to address evaluation issues for direct digital controls/energy management systems/building automation systems (DDC/EMS/BAS) that are installed to control heating, ventilation, and air-conditioning (HVAC) equipment in commercial and institutional buildings. (This chapter refers to the DDC/EMS/BAS measure as HVAC controls.) This protocol may also be applicable to industrial facilities such as clean rooms and labs, which have either significant HVAC equipment or spaces requiring special environmental conditions.
Entanglement distribution in star network based on spin chain in diamond
NASA Astrophysics Data System (ADS)
Zhu, Yuan-Ming; Ma, Lei
2018-06-01
After the star network of spins was proposed, generating entanglement directly through spin interactions between distant parties became possible. We propose an architecture which involves coupled spin chains based on nitrogen-vacancy centers and nitrogen defect spins to expand the star network. The numerical analysis shows that the maximally achievable entanglement Em decays exponentially with the spin-chain length M and with spin noise. The entanglement capability of this configuration under the effects of disorder and spin loss is also studied. Moreover, it is shown that with this kind of architecture, a star network of spins is feasible for the measurement of magnetic-field gradients.
Improving zero-training brain-computer interfaces by mixing model estimators
NASA Astrophysics Data System (ADS)
Verhoeven, T.; Hübner, D.; Tangermann, M.; Müller, K. R.; Dambre, J.; Kindermans, P. J.
2017-06-01
Objective. Brain-computer interfaces (BCI) based on event-related potentials (ERP) incorporate a decoder to classify recorded brain signals and subsequently select a control signal that drives a computer application. Standard supervised BCI decoders require a tedious calibration procedure prior to every session. Several unsupervised classification methods have been proposed that tune the decoder during actual use and as such omit this calibration. Each of these methods has its own strengths and weaknesses. Our aim is to improve overall accuracy of ERP-based BCIs without calibration. Approach. We consider two approaches for unsupervised classification of ERP signals. Learning from label proportions (LLP) was recently shown to be guaranteed to converge to a supervised decoder when enough data is available. In contrast, the formerly proposed expectation maximization (EM) based decoding for ERP-BCI does not have this guarantee. However, while this decoder has high variance due to random initialization of its parameters, it obtains a higher accuracy faster than LLP when the initialization is good. We introduce a method to optimally combine these two unsupervised decoding methods, letting one method’s strengths compensate for the weaknesses of the other and vice versa. The new method is compared to the aforementioned methods in a resimulation of an experiment with a visual speller. Main results. Analysis of the experimental results shows that the new method exceeds the performance of the previous unsupervised classification approaches in terms of ERP classification accuracy and symbol selection accuracy during the spelling experiment. Furthermore, the method shows less dependency on random initialization of model parameters and is consequently more reliable. Significance. Improving the accuracy and subsequent reliability of calibrationless BCIs makes these systems more appealing for frequent use.
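The paper's exact combination rule is not reproduced here, but the underlying idea can be sketched as inverse-variance weighting of two unbiased estimates of the same decoder weights, with all numbers synthetic:

```python
# Inverse-variance mixing of two estimators of the same decoder weights.
import numpy as np

rng = np.random.default_rng(6)
w_true = rng.normal(size=16)                      # "true" decoder weights

# Two noisy estimates, e.g., one LLP-like (low variance) and one EM-like
# (higher variance due to random initialisation); variances are invented.
var_llp, var_em = 0.05, 0.20
w_llp = w_true + rng.normal(scale=np.sqrt(var_llp), size=16)
w_em = w_true + rng.normal(scale=np.sqrt(var_em), size=16)

# Optimal convex combination for independent unbiased estimators:
# weight each inversely proportional to its variance.
alpha = var_em / (var_llp + var_em)
w_mix = alpha * w_llp + (1 - alpha) * w_em

for name, w in [("LLP", w_llp), ("EM", w_em), ("mixed", w_mix)]:
    print(f"{name:>5}: MSE = {np.mean((w - w_true) ** 2):.4f}")
```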
Estimated Financing Amount Needed for Essential Medicines in China, 2014
Xu, Wei; Xu, Zheng-Yuan; Cai, Gong-Jie; Kuo, Chiao-Yun; Li, Jing; Huang, Yi-Syuan
2016-01-01
Background: The government is currently considering establishing an independent financing system for essential medicines (EMs); however, this is still in the exploration phase. The objectives of this study were to calculate and estimate the financing amount for EMs in China in 2014 and to provide data evidence for establishing an EM financing mechanism. Methods: Two approaches were adopted in this study. First, we used a retrospective design to estimate the cost of EMs in China in 2014: we identified all 520 drugs listed in the latest national EM list (2012) and calculated the total sales amount of these drugs in 2014. The other approach involved first selecting the 109 most common diseases in China, then identifying the EMs used to treat them, and finally estimating the total cost of these drugs. Results: The two methods estimated the financing amount for EMs in China in 2014 at 17,776.44 million USD and 19,094.09 million USD, respectively. Conclusions: Comparing these two results, we concluded that the annual budget needed to provide the EMs in China would be about 20 billion USD. Our study also indicated that irrational drug use continues to plague the health system, with intravenous fluids and antibiotics being typical examples, as observed in other studies. PMID:26960376
Steganalysis feature improvement using expectation maximization
NASA Astrophysics Data System (ADS)
Rodriguez, Benjamin M.; Peterson, Gilbert L.; Agaian, Sos S.
2007-04-01
Images and data files provide an excellent opportunity for concealing illegal or clandestine material. Currently, there are over 250 different tools which embed data into an image without causing noticeable changes to the image. From a forensics perspective, when a system is confiscated or an image of a system is generated, the investigator needs a tool that can scan and accurately identify files suspected of containing malicious information. The identification process is termed the steganalysis problem, which covers both blind identification, in which only normal images are available for training, and multi-class identification, in which both clean and stego images at several embedding rates are available for training. In this paper, a clustering and classification technique (expectation maximization with mixture models) is investigated for determining whether a digital image contains hidden information. The steganalysis problem is treated both as anomaly detection and as multi-class detection. The various clusters represent clean images and stego images with embedding percentages between 1% and 10%. Based on the results, it is concluded that the EM classification technique is highly suitable for both blind detection and the multi-class problem.
Nonadditive entropy maximization is inconsistent with Bayesian updating.
Pressé, Steve
2014-11-01
The maximum entropy method, used to infer probabilistic models from data, is a special case of Bayes's model inference prescription which, in turn, is grounded in basic propositional logic. By contrast to the maximum entropy method, the compatibility of nonadditive entropy maximization with Bayes's model inference prescription has never been established. Here we demonstrate that nonadditive entropy maximization is incompatible with Bayesian updating and discuss the immediate implications of this finding. We focus our attention on special cases as illustrations.
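For contrast, the additive (Shannon) case that grounds the argument is the textbook constrained maximization whose solution is the Gibbs distribution; a worked statement under a single mean constraint:

```latex
% Shannon (additive) maximum entropy under normalization and one mean
% constraint; solving with Lagrange multipliers yields the Gibbs form.
\max_{\{p_i\}} \; -\sum_i p_i \ln p_i
\quad \text{s.t.} \quad \sum_i p_i = 1, \quad \sum_i p_i f_i = \bar f
\qquad \Longrightarrow \qquad
p_i = \frac{e^{-\lambda f_i}}{Z(\lambda)}, \quad
Z(\lambda) = \sum_i e^{-\lambda f_i}.
```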
NASA Astrophysics Data System (ADS)
Siegfried, M. R.; Key, K.
2017-12-01
Subglacial hydrologic systems in Antarctica and Greenland play a fundamental role in ice-sheet dynamics, yet critical aspects of these systems remain poorly understood due to a lack of observations. Ground-based electromagnetic (EM) geophysical methods are established for mapping groundwater in many environments, but have never been applied to imaging lakes beneath ice sheets. Here we study the feasibility of passive and active-source EM imaging for quantifying the nature of subglacial water systems beneath ice streams, with an emphasis on the interfaces between ice and basal meltwater, as well as deeper groundwater in the underlying sediments. Specifically, we look at the passive magnetotelluric method and active-source EM methods that use a large loop transmitter and receivers that measure either frequency-domain or transient soundings. We describe a suite of model studies that examine the data sensitivity as a function of ice thickness, water conductivity and hydrologic system geometry for models representative of a subglacial lake and a grounding zone estuary. We show that EM data are directly sensitive to groundwater and can image its lateral and depth extent. By combining the conductivity obtained from EM data with ice thickness and geological structure from conventional geophysical techniques such as ground-penetrating radar and active seismic methods, EM data have the potential to provide new insights on the interaction between ice, rock, and water at critical ice-sheet boundaries.
The polarization evolution of electromagnetic waves as a diagnostic method for a motional plasma
NASA Astrophysics Data System (ADS)
Shahrokhi, Alireza; Mehdian, Hassan; Hajisharifi, Kamal; Hasanbeigi, Ali
2017-12-01
The polarization evolution of electromagnetic (EM) radiation propagating through an electron beam-ion channel system is studied in the presence of a self-magnetic field. After solving the fluid-Maxwell equations to obtain the medium's dielectric tensor, the Stokes vector-Mueller matrix approach is employed to determine the polarization of the launched EM wave at any point along the propagation direction, by applying the space-dependent Mueller matrix to the initial polarization vector of the wave at the plasma-vacuum interface. Results show that the polarization evolution of the wave is periodic in space along the beam axis, with a specified polarization wavelength. Using these results, a novel diagnostic method based on the polarization evolution of EM waves is proposed to evaluate the electron beam density and velocity. Moreover, to use the mentioned plasma system as a polarizer, the fraction of the output radiation power transmitted through a motional plasma, crossed with the input polarization, is calculated. The results of the present investigation will contribute greatly to the design of a new EM amplifier with fixed polarization or an EM polarizer, as well as a new diagnostic approach for electron beam systems where the polarimetric method is employed.
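A small sketch of the Stokes vector-Mueller matrix bookkeeping described above, using an ideal retarder whose phase grows with propagation distance as a stand-in for the plasma's actual space-dependent Mueller matrix; the periodic return of the output polarization mirrors the paper's qualitative result.

```python
# Propagating a Stokes vector through a z-dependent Mueller matrix.
import numpy as np

def retarder_mueller(delta):
    """Mueller matrix of an ideal linear retarder, fast axis horizontal."""
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, c, s],
                     [0, 0, -s, c]], dtype=float)

s_in = np.array([1.0, 0.0, 1.0, 0.0])   # linear polarization at +45 degrees
for z in np.linspace(0, 2 * np.pi, 5):  # retardance delta grows linearly with z
    s_out = retarder_mueller(z) @ s_in
    print(f"delta={z:5.2f}  S = {np.round(s_out, 2)}")
```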
Accelerating Computation of DCM for ERP in MATLAB by External Function Calls to the GPU
Wang, Wei-Jen; Hsieh, I-Fan; Chen, Chun-Chuan
2013-01-01
This study aims to improve the performance of Dynamic Causal Modelling for Event Related Potentials (DCM for ERP) in MATLAB by using external function calls to a graphics processing unit (GPU). DCM for ERP is an advanced method for studying neuronal effective connectivity. DCM utilizes an iterative procedure, the expectation maximization (EM) algorithm, to find the optimal parameters given a set of observations and the underlying probability model. As the EM algorithm is computationally demanding and the analysis faces possible combinatorial explosion of models to be tested, we propose a parallel computing scheme using the GPU to achieve a fast estimation of DCM for ERP. The computation of DCM for ERP is dynamically partitioned and distributed to threads for parallel processing, according to the DCM model complexity and the hardware constraints. The performance efficiency of this hardware-dependent thread arrangement strategy was evaluated using the synthetic data. The experimental data were used to validate the accuracy of the proposed computing scheme and quantify the time saving in practice. The simulation results show that the proposed scheme can accelerate the computation by a factor of 155 for the parallel part. For experimental data, the speedup factor is about 7 per model on average, depending on the model complexity and the data. This GPU-based implementation of DCM for ERP gives qualitatively the same results as the original MATLAB implementation does at the group-level analysis. In conclusion, we believe that the proposed GPU-based implementation is very useful for users as a fast screening tool to select the most likely model and may provide implementation guidance for possible future clinical applications such as online diagnosis. PMID:23840507
ERIC Educational Resources Information Center
Patterson, P. Daniel; Probst, Janice C.; Moore, Charity G.
2006-01-01
Context: To ensure equitable access to prehospital care, as recommended by the Rural and Frontier Emergency Medical Services (EMS) Agenda for the Future, policymakers will need a uniform measure of EMS infrastructure. Purpose and Methods: This paper proposes a county-level indicator of EMS resource availability that takes into consideration…
Zeil, Stephanie; Kovacs, Julio; Wriggers, Willy; He, Jing
2017-01-01
Three-dimensional density maps of biological specimens from cryo-electron microscopy (cryo-EM) can be interpreted in the form of atomic models that are modeled into the density, or they can be compared to known atomic structures. When the central axis of a helix is detectable in a cryo-EM density map, it is possible to quantify the agreement between this central axis and a central axis calculated from the atomic model or structure. We propose a novel arc-length association method to compare the two axes reliably. This method was applied to 79 helices in simulated density maps and six case studies using cryo-EM maps at 6.4-7.7 Å resolution. The arc-length association method is then compared to three existing measures that evaluate the separation of two helical axes: a two-way distance between point sets, the length difference between two axes, and the individual amino acid detection accuracy. The results show that our proposed method sensitively distinguishes lateral and longitudinal discrepancies between the two axes, which makes the method particularly suitable for the systematic investigation of cryo-EM map-model pairs.
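A minimal numeric stand-in for the axis-comparison idea: resample both central axes at matched arc-length fractions and read off the pointwise separation. This is a simplification of the arc-length association method, assuming each axis is given as an ordered 3-D polyline; the helper names are hypothetical.

```python
import numpy as np

def resample_by_arclength(axis, n=100):
    # axis: (m, 3) ordered points along a helical central axis
    seg = np.linalg.norm(np.diff(axis, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    t = np.linspace(0.0, s[-1], n)                   # matched arc-length fractions
    return np.column_stack([np.interp(t, s, axis[:, k]) for k in range(3)])

def separation_profile(axis_a, axis_b, n=100):
    a = resample_by_arclength(axis_a, n)
    b = resample_by_arclength(axis_b, n)
    return np.linalg.norm(a - b, axis=1)  # per-point axis separation
```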
Unifying cost and information in information-theoretic competitive learning.
Kamimura, Ryotaro
2005-01-01
In this paper, we introduce costs into the framework of information maximization and try to maximize the ratio of information to its associated cost. We have shown that competitive learning is realized by maximizing mutual information between input patterns and competitive units. One shortcoming of the method is that maximizing information does not necessarily produce representations faithful to input patterns: information maximization primarily focuses on those parts of input patterns that are used to distinguish between patterns. We therefore introduce a cost, defined as the average distance between input patterns and connection weights. By minimizing the cost, the final connection weights reflect input patterns well. We applied the method to a political data analysis, a voting attitude problem, and a Wisconsin cancer problem. Experimental results confirmed that, when the cost was introduced, representations faithful to input patterns were obtained. In addition, improved generalization performance was obtained within a relatively short learning time.
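A sketch of the two quantities and their ratio, under the assumption (mine, not necessarily the paper's) that unit activations are a softmax over negative squared distances; the learning rule would then seek to increase this ratio.

```python
import numpy as np

def info_cost_ratio(X, W, beta=2.0, eps=1e-12):
    # X: (n, d) input patterns; W: (k, d) connection weights of k units
    d2 = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
    p_jx = np.exp(-beta * d2)
    p_jx /= p_jx.sum(1, keepdims=True)          # p(j | x), unit activations
    p_j = p_jx.mean(0)                          # marginal firing rates p(j)
    # Mutual information between inputs and competitive units
    info = (p_jx * np.log((p_jx + eps) / (p_j + eps))).sum(1).mean()
    # Cost: average input-to-weight distance, weighted by activation
    cost = (p_jx * np.sqrt(d2)).sum(1).mean()
    return info / cost
```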
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, DongLin, E-mail: donglinliu@stu.xidian.edu.cn; Li, XiaoPing; Xie, Kai
2015-10-15
A high-speed vehicle flying through the atmosphere between 100 and 20 km may suffer from a "communication blackout." In this paper, a low-frequency system with an on-board loop antenna to receive signals is presented as a potential blackout mitigation method. Because the plasma sheath is in the near-field region of the loop antenna, the traditional scattering matrix method, which was developed for the far-field region, may overestimate the electromagnetic (EM) wave's attenuation. To estimate the EM wave's attenuation in the near-field region, EM interference (EMI) shielding theory is introduced. Experiments are conducted, and the results verify the EMI shielding theory's effectiveness. Simulations are also conducted with different plasma parameters, and the results show that the EM wave's attenuation in the near-field region is far below that in the far-field region. The EM wave's attenuation increases with increasing electron density and decreases with increasing collision frequency. The higher the frequency, the larger the EM wave's attenuation. During the entire re-entry phase of a RAM-C module, the EM wave's attenuation is below 10 dB for EM waves with a frequency of 1 MHz and below 1 dB for EM waves with a frequency of 100 kHz. Therefore, low-frequency systems (e.g., Loran-C) may provide a way to transmit key information to high-speed vehicles even during the communication "blackout" period.
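For orientation, the far-field plane-wave loss through a uniform collisional plasma slab can be estimated from the Drude permittivity, as below. This is exactly the kind of far-field figure the paper argues overestimates near-field loop-antenna attenuation; the formula is the standard textbook one, and all numeric inputs are illustrative assumptions.

```python
import numpy as np

E, ME, EPS0, C = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8  # SI constants

def slab_attenuation_db(freq_hz, n_e, nu, thickness_m):
    # Drude permittivity of a collisional plasma (e^{jwt} convention)
    w = 2 * np.pi * freq_hz
    wp2 = n_e * E**2 / (ME * EPS0)            # plasma frequency squared
    eps_r = 1 - wp2 / (w * (w - 1j * nu))     # complex relative permittivity
    alpha = -((w / C) * np.sqrt(eps_r)).imag  # attenuation constant, Np/m
    return 8.686 * alpha * thickness_m        # one-way loss in dB

# Illustrative inputs only: 1 MHz wave, n_e = 1e17 m^-3,
# collision frequency 1 GHz, 10 cm sheath thickness
print(slab_attenuation_db(1e6, 1e17, 1e9, 0.1))
```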
Kuzu, Guray; Keskin, Ozlem; Nussinov, Ruth; Gursoy, Attila
2016-10-01
The structures of protein assemblies are important for elucidating cellular processes at the molecular level. Three-dimensional electron microscopy (3DEM) is a powerful method to identify the structures of assemblies, especially those that are challenging to study by crystallography. Here, a new approach, PRISM-EM, is reported to computationally generate plausible structural models using a procedure that combines crystallographic structures and density maps obtained from 3DEM. The predictions are validated against seven available structurally different crystallographic complexes. The models display mean deviations in the backbone of <5 Å. PRISM-EM was further tested on different benchmark sets; the accuracy was evaluated with respect to the structure of the complex, and the correlation with EM density maps and interface predictions were evaluated and compared with those obtained using other methods. PRISM-EM was then used to predict the structure of the ternary complex of the HIV-1 envelope glycoprotein trimer, the ligand CD4 and the neutralizing protein m36.
2011-01-01
Background: Autism spectrum disorders (ASD) are associated with complications of pregnancy that implicate fetal hypoxia (FH); the excess of ASD in males is poorly understood. We tested the hypothesis that risk of ASD is related to fetal hypoxia and investigated whether this effect is greater among males. Methods: Provincial delivery records (PDR) identified the cohort of all 218,890 singleton live births in the province of Alberta, Canada, between 01-01-98 and 12-31-04. These were followed up for ASD via ICD-9 diagnostic codes assigned by physician billing until 03-31-08. Maternal and obstetric risk factors, including FH determined from blood tests of acidity (pH), were extracted from the PDR. The binary FH status was missing in approximately half of subjects. Assuming that characteristics of mothers and pregnancies would be correlated with FH, we used an expectation-maximization (EM) algorithm to estimate the FH-ASD association, allowing for both missing-at-random (MAR) and specific not-missing-at-random (NMAR) mechanisms. Results: The data indicated excess risk of ASD among males who were hypoxic at birth, not materially affected by adjustment for potential confounding due to birth year and socio-economic status: OR 1.13, 95% CI: 0.96, 1.33 (MAR assumption). Limiting the analysis to full-term males, the adjusted OR under specific NMAR assumptions spanned a 95% CI of 1.0 to 1.6. Conclusion: Our results are consistent with a weak effect of fetal hypoxia on risk of ASD among males. The EM algorithm is an efficient and flexible tool for modeling missing data in this setting. PMID:21208442
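In miniature, the EM machinery described here can be shown on a 2x2 exposure-outcome table with the binary exposure missing at random: the E-step distributes the missing-exposure subjects across exposure levels using the current parameter estimates, and the M-step refits those parameters from the expected complete-data counts. This sketch omits the paper's covariates and NMAR mechanisms; `em_or` and its input counts are hypothetical.

```python
import numpy as np

def em_or(n_obs, n_miss, iters=200):
    n_obs = np.asarray(n_obs, float)     # n_obs[e, y]: complete-case counts
    n_miss = np.asarray(n_miss, float)   # n_miss[y]: exposure missing, by outcome
    pi, p = 0.5, np.array([0.5, 0.5])    # P(E=1) and P(Y=1 | E=e)
    for _ in range(iters):
        # E-step: posterior P(E=1 | Y=y) for subjects with missing exposure
        like1 = pi * np.array([1 - p[1], p[1]])          # y = 0, 1
        like0 = (1 - pi) * np.array([1 - p[0], p[0]])
        w = like1 / (like1 + like0)
        # M-step: maximize using expected complete-data counts
        n = n_obs + np.vstack([(1 - w) * n_miss, w * n_miss])
        pi = n[1].sum() / n.sum()
        p = n[:, 1] / n.sum(axis=1)
    return (p[1] / (1 - p[1])) / (p[0] / (1 - p[0]))   # odds ratio

# Hypothetical counts: rows = unexposed/exposed, columns = no-ASD/ASD
print(em_or(n_obs=[[900, 30], [80, 6]], n_miss=[950, 35]))
```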
Effectiveness evaluation of whole-body electromyostimulation as a post-exercise recovery method.
DE LA Camara, Miguel A; Pardos, Ana I; Veiga, Óscar L
2018-01-04
Whole-body electromyostimulation (WB-EMS) devices are now being used in health and sports training, although few studies have investigated their benefits. The objective of this research was to evaluate the effectiveness of WB-EMS as a post-exercise recovery method and to compare it with other methods such as active and passive recovery. The study included nine trained men (age = 21 ± 1 years, height = 1.77 ± 0.4 m, mass = 62 ± 7 kg). Three trials were performed in three different sessions, one week apart; in each trial, participants completed the same exercise protocol followed by a different recovery method. A repeated-measures design was used to assess the return to baseline of several physiological variables (lactate, heart rate, percentage of tissue hemoglobin saturation, temperature, and neuromuscular fatigue) and to evaluate the quality of recovery. The non-parametric Wilcoxon and Friedman ANOVA tests were used to examine the differences between recovery methods. The results showed no differences between methods in the physiological and psychological variables analyzed, although blood lactate concentration showed borderline statistical significance between methods (P = 0.050). Likewise, WB-EMS failed to restore baseline blood lactate concentration (P = 0.021) and percentage of tissue hemoglobin saturation (P = 0.023), in contrast to the other two methods. These findings suggest that WB-EMS is not a good recovery method, because its ability to restore several physiological and psychological parameters to baseline is not superior to that of other recovery methods such as active and passive recovery.
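Both tests named in the abstract are available directly in SciPy; a usage sketch with synthetic stand-in values (n = 9, three conditions), not the study's measurements:

```python
import numpy as np
from scipy.stats import friedmanchisquare, wilcoxon

rng = np.random.default_rng(0)
base = rng.normal(2.5, 0.3, 9)            # synthetic data for 9 participants
wb_ems, active, passive = base + 0.6, base - 0.2, base + 0.1

stat, p = friedmanchisquare(wb_ems, active, passive)  # omnibus across methods
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")
w_stat, w_p = wilcoxon(wb_ems, active)                # paired follow-up
print(f"Wilcoxon W = {w_stat:.1f}, p = {w_p:.4f}")
```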
A writer's guide to education scholarship: Qualitative education scholarship (part 2).
Chan, Teresa M; Ting, Daniel K; Hall, Andrew Koch; Murnaghan, Aleisha; Thoma, Brent; McEwen, Jill; Yarris, Lalena M
2018-03-01
Education scholarship can be conducted using a variety of methods, from quantitative experiments to qualitative studies. Qualitative methods are less commonly used in emergency medicine (EM) education research but are well-suited to explore complex educational problems and generate hypotheses. We aimed to review the literature to provide resources to guide educators who wish to conduct qualitative research in EM education. We conducted a scoping review to outline: 1) a list of journals that regularly publish qualitative educational papers; 2) an aggregate set of quality markers for qualitative educational research and scholarship; and 3) a list of quality checklists for qualitative educational research and scholarship. We found nine journals that have published more than one qualitative educational research paper in EM. From the literature, we identified 39 quality markers that were grouped into 10 themes: Initial Grounding Work (preparation, background); Goals, Problem Statement, or Question; Methods (general considerations); Sampling Techniques; Data Collection Techniques; Data Interpretation and Theory Generation; Measures to Optimize Rigour and Trustworthiness; Relevance to the Field; Evidence of Reflective Practice; Dissemination and Reporting. Lastly, five quality checklists were found for guiding educators in reporting their qualitative work. Many problems that EM educators face are well-suited to exploration using qualitative methods. The results of our scoping review provide publication venues, quality indicators, and checklists that may be useful to EM educators embarking on qualitative projects.
Speed of Gravitational Waves from Strongly Lensed Gravitational Waves and Electromagnetic Signals.
Fan, Xi-Long; Liao, Kai; Biesiada, Marek; Piórkowska-Kurpas, Aleksandra; Zhu, Zong-Hong
2017-03-03
We propose a new model-independent measurement strategy for the propagation speed of gravitational waves (GWs) based on strongly lensed GWs and their electromagnetic (EM) counterparts. This can be done in two ways: by comparing arrival times of GWs and their EM counterparts and by comparing the time delays between images seen in GWs and their EM counterparts. The lensed GW-EM event is perhaps the best way to identify an EM counterpart. Conceptually, this method does not rely on any specific theory of massive gravitons or modified gravity. Its differential setting (i.e., measuring the difference between time delays in GW and EM domains) makes it robust against lens modeling details (photons and GWs travel in the same lensing potential) and against internal time delays between GW and EM emission acts. It requires, however, that the theory of gravity is metric and predicts gravitational lensing similar to general relativity. We expect that such a test will become possible in the era of third-generation gravitational-wave detectors, when about 10 lensed GW events would be observed each year. The power of this method is mainly limited by the timing accuracy of the EM counterpart, which for kilonovae is around 10^4 s. This uncertainty can be suppressed by a factor of ~10^10 if strongly lensed transients of much shorter duration associated with the GW event can be identified. Candidates for such short transients include short γ-ray bursts and fast radio bursts.
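In schematic form (notation mine, not the paper's), both variants reduce to first-order relations. For a common emission time and propagation distance $D$, the direct arrival-time comparison gives

\[
\Delta t \equiv t_{\rm GW} - t_{\rm EM} = \frac{D}{v_{\rm GW}} - \frac{D}{c}
\;\Rightarrow\;
\delta \equiv \frac{v_{\rm GW} - c}{c} \simeq -\frac{c\,\Delta t}{D},
\]

while in the differential setting the geometric part of a lensing time delay scales inversely with propagation speed, so comparing the same image pair in the two channels gives $\delta \simeq (\Delta t_{\rm EM} - \Delta t_{\rm GW})/\Delta t_{\rm GW}$, independent of the emission times themselves.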
Wright, Kikelomo; Sonoiki, Olatunji; Ilozumba, Onaedo; Ajayi, Babatunde; Okikiolu, Olawunmi; Akinola, Oluwarotimi
2017-01-01
Globally, Nigeria is the second most unsafe country in which to be pregnant, with Lagos, its economic nerve center, having disproportionately higher maternal deaths than the national average. Emergency obstetric care (EmOC) is effective in reducing pregnancy-related morbidities and mortalities. This mixed-methods study quantitatively assessed women's satisfaction with the EmOC they received and qualitatively engaged multiple key stakeholders to better understand issues around EmOC access, availability, and utilization in Lagos. Qualitative interviews revealed that, regarding access, while the government opined that EmOC facilities have been strategically built across Lagos, women flagged difficulties with access, compounded by perceived high EmOC cost. For availability, though health workers were judged competent, they appeared insufficient in number, overworked, and felt poorly remunerated. Infrastructure was considered inadequate, and paucity of blood and blood products remained commonplace. Although pregnant women positively rated the clinical aspects of care, as confirmed by the survey, satisfaction gaps remained in the areas of service delivery, care organization, and responsiveness. These areas of discordance offer insight into opportunities for improvement, which would ensure that every woman can access and use quality EmOC that is sufficiently available. PMID:29456825
NASA Astrophysics Data System (ADS)
Metzger, Andrew; Benavides, Amanda; Nopoulos, Peg; Magnotta, Vincent
2016-03-01
The goal of this project was to develop two age-appropriate atlases (neonatal and one year old) that account for the rapid growth and maturational changes that occur during early development. Tissue maps for this age group were initially created by manually correcting the tissue maps obtained by applying an expectation maximization (EM) algorithm and an adult atlas to pediatric subjects. The EM algorithm classified each voxel into one of ten possible tissue types, including several subcortical structures. This was followed by a novel level-set segmentation designed to improve differentiation between distal cortical gray matter and white matter. To minimize the required manual corrections, the adult atlas was registered to the pediatric scans using high-dimensional, symmetric image normalization (SyN) registration. The subject images were then mapped to an age-specific atlas space, again using SyN registration, and the resulting transformation was applied to the manually corrected tissue maps. The individual maps were averaged in the age-specific atlas space and blurred to generate the age-appropriate anatomical priors. The resulting anatomical priors were then used by the EM algorithm to re-segment the initial training set as well as an independent testing set. The results from the adult and age-specific anatomical priors were compared to the manually corrected results. The age-appropriate atlas provided superior results compared to the adult atlas. The image analysis pipeline used in this work was built using the open-source software package BRAINSTools.
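The prior-weighted EM step has a compact form; below is a toy 1-D intensity version assuming Gaussian class likelihoods and per-voxel atlas priors. The actual pipeline works on multimodal 3-D data within BRAINSTools, so treat this only as a sketch of the E/M cycle described.

```python
import numpy as np

def em_segment(intensity, priors, iters=25):
    # intensity: (n_voxels,) image values; priors: (n_voxels, k) atlas priors
    n, k = priors.shape
    mu = np.linspace(intensity.min(), intensity.max(), k)   # class means
    var = np.full(k, intensity.var())                       # class variances
    for _ in range(iters):
        # E-step: atlas-prior-weighted Gaussian responsibilities
        log_g = -0.5 * ((intensity[:, None] - mu) ** 2 / var
                        + np.log(2 * np.pi * var))
        post = priors * np.exp(log_g)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: re-estimate the Gaussian parameters per tissue class
        w = post.sum(axis=0)
        mu = (post * intensity[:, None]).sum(axis=0) / w
        var = (post * (intensity[:, None] - mu) ** 2).sum(axis=0) / w
    return post.argmax(axis=1)    # hard tissue labels
```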
Making adjustments to event annotations for improved biological event extraction.
Baek, Seung-Cheol; Park, Jong C
2016-09-16
Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe that there is ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. "transcriptional"), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting that a statistical learning algorithm misses an opportunity to generalize from such event triggers. We anticipate that adjusting the spans of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique that consults the gold-standard event trigger annotations in the form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner, and on the test corpus by a narrow margin. The analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they may be small in number.
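Posterior regularization constrains the distribution computed in the E-step. In its simplest hard-constraint form (a simplification of the technique named above, with hypothetical helper names), candidate trigger spans that violate the gold-derived constraints are masked out before renormalizing:

```python
import numpy as np

def constrained_e_step(posterior, allowed):
    # posterior: (n_instances, n_candidate_spans) model posteriors
    # allowed:   boolean mask encoding constraints from gold annotations;
    #            assumes at least one span per instance remains allowed
    q = posterior * allowed
    return q / q.sum(axis=1, keepdims=True)
```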
Power maximization of a point absorber wave energy converter using improved model predictive control
NASA Astrophysics Data System (ADS)
Milani, Farideh; Moghaddam, Reihaneh Kardehi
2017-08-01
This paper considers controlling and maximizing the absorbed power of wave energy converters for irregular waves. With respect to physical constraints of the system, a model predictive control is applied. Irregular waves' behavior is predicted by Kalman filter method. Owing to the great influence of controller parameters on the absorbed power, these parameters are optimized by imperialist competitive algorithm. The results illustrate the method's efficiency in maximizing the extracted power in the presence of unknown excitation force which should be predicted by Kalman filter.
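Wave prediction here rests on a standard linear Kalman filter; one predict/update cycle looks as follows. The model matrices F, H, Q, R are generic placeholders, not the paper's wave model.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    # Predict: propagate state estimate and covariance through the model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: correct the prediction with the new measurement z
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```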
Time Domain Radar Laboratory Operating System Development and Transient EM Analysis.
1981-09-01
Methods based on the polarization of the return are used; other similar methods use amplitude and phase differences or special properties of Rayleigh-region scattering. The recoverable fragments of this OCR-damaged abstract cover optics-based inverse scattering, "exact" inverse-scattering methods, other methods, and a review of TDRL progress, and note that time is not an explicit independent variable in most methods: in the past, frequency-domain analysis has been the primary means of analyzing non-monochromatic EM signals.
Seismoelectric Effects based on Spectral-Element Method for Subsurface Fluid Characterization
NASA Astrophysics Data System (ADS)
Morency, C.
2017-12-01
Present approaches for subsurface imaging rely predominantly on seismic techniques, which alone do not capture fluid properties and related mechanisms. On the other hand, electromagnetic (EM) measurements add constraints on the fluid phase through electrical conductivity and permeability, but EM signals alone do not offer information on the solid structural properties. In recent years, there have been many efforts to combine seismic and EM data for exploration geophysics. The most popular approach is based on joint inversion of seismic and EM data treated as decoupled phenomena, thereby missing the coupled nature of seismic and EM phenomena such as seismoelectric effects. Seismoelectric effects are related to pore fluid movements with respect to the solid grains. By analyzing coupled poroelastic seismic and EM signals, one can capture pore-scale behavior and access both structural and fluid properties. Here, we model the seismoelectric response by solving the governing equations derived by Pride and Garambois (1994), which correspond to Biot's poroelastic wave equations and Maxwell's electromagnetic wave equations coupled electrokinetically. We will show that these coupled wave equations can be numerically implemented by taking advantage of viscoelastic-electromagnetic mathematical equivalences. These equations will be solved using a spectral-element method (SEM). The SEM, in contrast to finite-element methods (FEM), uses high-degree Lagrange polynomials. Not only does this allow the technique to handle complex geometries similarly to FEM, but it also retains exponential convergence and accuracy due to the use of high-degree polynomials. Finally, we will discuss how this is a first step toward fully coupled seismic-EM inversion to improve subsurface fluid characterization. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
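The defining ingredient of the SEM mentioned here, high-degree Lagrange polynomials, is anchored at Gauss-Lobatto-Legendre (GLL) nodes: the endpoints of the reference element plus the roots of the derivative of the Legendre polynomial P_n. A small sketch of how those nodes can be computed (illustrative, not from the paper):

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(n):
    # Degree-n GLL nodes on [-1, 1]: endpoints plus the roots of P_n'(x)
    c = np.zeros(n + 1)
    c[n] = 1.0                                       # Legendre series for P_n
    interior = legendre.legroots(legendre.legder(c)) # roots of P_n'
    return np.concatenate([[-1.0], np.sort(interior), [1.0]])

print(gll_nodes(4))  # 5 interpolation nodes for a 4th-degree element
```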
NASA Astrophysics Data System (ADS)
Kruglyakov, Mikhail; Kuvshinov, Alexey
2018-05-01
3-D interpretation of electromagnetic (EM) data of different origin and scale becomes a common practice worldwide. However, 3-D EM numerical simulations (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail still remains challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with corresponding speed increase and memory saving.
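The volume IE being discretized here, written in a generic form (the solver's exact operator notation may differ), is

\[
\mathbf{E}(\mathbf{r}) = \mathbf{E}^{b}(\mathbf{r})
+ \int_{V} \hat{G}(\mathbf{r},\mathbf{r}')\,\Delta\sigma(\mathbf{r}')\,\mathbf{E}(\mathbf{r}')\,dV',
\]

where $\hat{G}$ is the Green's tensor of the background model and $\Delta\sigma$ the conductivity anomaly. The HOP approach expands the field inside each cell in polynomials, $\mathbf{E}(\mathbf{r}') \approx \sum_{k} \mathbf{c}_{k} P_{k}(\mathbf{r}')$, so that fewer, higher-order unknowns $\mathbf{c}_{k}$ replace many piecewise-constant ones; the zero-order choice recovers the routinely used solvers.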
Inter-rater Agreement of End-of-shift Evaluations Based on a Single Encounter
Warrington, Steven; Beeson, Michael; Bradford, Amber
2017-01-01
Introduction: End-of-shift evaluation (ESE) forms, also known as daily encounter cards, represent a subset of encounter-based assessment forms. Encounter cards have become prevalent for formative evaluation, with some suggesting a potential for summative evaluation. Our objective was to evaluate the inter-rater agreement of ESE forms using a single scripted encounter at a conference of emergency medicine (EM) educators. Methods: Following institutional review board exemption, we created a scripted video simulating an encounter between an intern and a patient with an ankle injury. The video was shown during a lecture at the Council of EM Residency Directors' Academic Assembly, with attendees asked to evaluate the "resident" using one of eight possible ESE forms distributed at random. Descriptive statistics were used to analyze the results, with Fleiss' kappa used to evaluate inter-rater agreement. Results: Most of the 324 respondents held leadership roles in residency programs (66%), with a range of 29-47 responses per evaluation form. Few individuals (5%) felt they were experts in assessing residents based on EM milestones. Fleiss' kappa ranged from 0.157 to 0.308 and did not perform much better in two post-hoc subgroup analyses. Conclusion: The kappa values found show only slight to fair inter-rater agreement and raise concerns about the use of ESE forms in the assessment of EM residents. Despite the limitations of this study, these results and the lack of other studies on inter-rater agreement of encounter cards should prompt further study of such assessment methods. Additionally, EM educators should focus research on methods to improve the inter-rater agreement of ESE forms or on evaluating other methods of assessing EM residents. PMID:28435505
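Fleiss' kappa for multiple raters assigning items to categories has a closed form; a sketch follows, with a hypothetical ratings matrix (the classic formula assumes an equal number of raters per item).

```python
import numpy as np

def fleiss_kappa(counts):
    # counts[i, j]: number of raters placing item i into category j
    counts = np.asarray(counts, float)
    r = counts.sum(axis=1)                                # raters per item
    p_i = (counts * (counts - 1)).sum(1) / (r * (r - 1))  # per-item agreement
    p_bar = p_i.mean()                                    # observed agreement
    p_j = counts.sum(0) / counts.sum()                    # category marginals
    p_e = (p_j ** 2).sum()                                # chance agreement
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical example: 4 items on a 5-point scale, 10 raters each
print(fleiss_kappa([[0, 2, 5, 3, 0], [1, 4, 4, 1, 0],
                    [0, 1, 6, 3, 0], [2, 3, 3, 2, 0]]))
```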
NASA Astrophysics Data System (ADS)
Zhdanov, M. S.; Cuma, M.; Black, N.; Wilson, G. A.
2009-12-01
The marine controlled source electromagnetic (MCSEM) method has become widely used in offshore oil and gas exploration. Interpretation of MCSEM data is still a very challenging problem, especially if one would like to take into account the realistic 3D structure of the subsurface. The inversion of MCSEM data is complicated by the fact that the EM response of a hydrocarbon-bearing reservoir is very weak in comparison with the background EM fields generated by an electric dipole transmitter in complex geoelectrical structures formed by a conductive sea-water layer and the terranes beneath it. In this paper, we present a review of the recent developments in the area of large-scale 3D EM forward modeling and inversion. Our approach is based on using a new integral form of Maxwell's equations allowing for an inhomogeneous background conductivity, which results in a numerically effective integral representation for the 3D EM field. This representation provides an efficient tool for the solution of 3D EM inverse problems. To obtain a robust inverse model of the conductivity distribution, we apply regularization based on a focusing stabilizing functional which allows for the recovery of models with both smooth and sharp geoelectrical boundaries. The method is implemented in a fully parallel computer code, which makes it possible to run large-scale 3D inversions on grids with millions of inversion cells. This new technique can be effectively used for active EM detection and monitoring of subsurface targets.
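Focusing regularization of this kind is typically built from a minimum-support stabilizer; a representative form (not necessarily the exact functional used here) is

\[
P^{\alpha}(\sigma) = \varphi(\sigma)
+ \alpha \int_{V} \frac{(\sigma - \sigma_{\rm apr})^{2}}{(\sigma - \sigma_{\rm apr})^{2} + e^{2}}\, dv,
\]

where $\varphi$ is the data misfit, $\sigma_{\rm apr}$ an a priori conductivity model, $\alpha$ the regularization parameter, and $e$ a focusing parameter: small $e$ drives the inversion toward compact bodies with sharp boundaries, while large $e$ behaves like a smooth stabilizer.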
Preventing EMS workplace violence: A mixed-methods analysis of insights from assaulted medics.
Maguire, Brian J; O'Neill, Barbara J; O'Meara, Peter; Browne, Matthew; Dealy, Michael T
2018-05-31
To describe measures that assaulted EMS personnel believe will help prevent violence against EMS personnel. This mixed-methods study includes a thematic analysis and directed content analysis of one survey question that asked the victims of workplace violence how the incident might have been prevented. Of 1778 survey respondents, 633 reported being assaulted in the previous 12 months; 203 of them believed the incident could have been prevented, and 193 of them (95%) answered this question. Six themes were identified using Haddon's Matrix as a framework. The themes included: human factors, including specialized training related to specific populations and de-escalation techniques as well as improved situational awareness; equipment factors, such as restraint equipment and resources; and operational and environmental factors, including advanced warning systems. Persons who could have prevented the violence were identified as police, self, other professionals, partners, and dispatchers. Restraints and training were suggested as violence-prevention tools and methods. This is the first international study, from the perspective of victimized EMS personnel, to report on ways that violence could be prevented. Ambulance agencies should consider these suggestions and work with researchers to evaluate risks at the agency level and to develop, implement, and test interventions to reduce the risks of violence against EMS personnel. These teams should work together both to form an evidence base for prevention and to publish findings so that EMS medical directors, administrators, and professionals around the world can learn from each experience.
Anderson, Lorinda K
2017-01-01
Immunolocalization using either fluorescence for light microscopy (LM) or gold particles for electron microscopy (EM) has become a common tool to pinpoint proteins involved in recombination during meiotic prophase. Each method has its advantages and disadvantages. For example, LM immunofluorescence is comparatively easier and higher throughput compared to immunogold EM localization. In addition, immunofluorescence has the advantages that a faint signal can often be enhanced by longer exposure times and colocalization using two (or more) probes with different absorbance and emission spectra is straightforward. However, immunofluorescence is not useful if the object of interest does not label with an antibody probe and is below the resolution of the LM. In comparison, immunogold EM localization is higher resolution than immunofluorescent LM localization, and individual nuclear structures, such as recombination nodules, can be identified by EM regardless of whether they are labeled or not. However, immunogold localization has other disadvantages including comparatively low signal-to-noise ratios, more difficult colocalization using gold particles of different sizes, and the inability to evaluate labeling efficiency before examining the sample using EM (a more expensive and time-consuming technique than LM). Here we describe a method that takes advantage of the good points of both immunofluorescent LM and EM to analyze two classes of late recombination nodules (RNs), only one of which labels with antibodies to MLH1 protein, a marker of crossovers. The method can be used readily with other antibodies to analyze early recombination nodules or other prophase I structures.
Numerical simulations of imaging satellites with optical interferometry
NASA Astrophysics Data System (ADS)
Ding, Yuanyuan; Wang, Chaoyan; Chen, Zhendong
2015-08-01
An optical interferometric imaging system, composed of multiple sub-apertures, is a type of sensor that can break through the single-aperture limit and achieve high-resolution imaging. The technique can be used to precisely measure the shapes, sizes, and positions of astronomical objects and satellites, and it can also support space exploration, space-debris tracking, and satellite monitoring and survey. A Fizeau-type optical aperture-synthesis telescope has the advantages of short baselines, a common mount, and multiple sub-apertures, making instantaneous direct imaging through focal-plane combination feasible. Since 2002, researchers at Shanghai Astronomical Observatory have studied optical interferometry techniques. For array configurations, two optimal layouts have been proposed in place of the symmetrical circular distribution: an asymmetrical circular distribution and a Y-type distribution. On this basis, two structures were proposed for the Fizeau interferometric telescope: a Y-type telescope with independent sub-apertures, and a segmented-mirror telescope with a common secondary mirror. In this paper, we describe the interferometric telescope and image acquisition, and then focus on simulations of image restoration for the Y-type and segmented-mirror telescopes. The Richardson-Lucy (RL) method, the Wiener method, and the Ordered Subsets Expectation Maximization (OS-EM) method are studied, and the influence of different stopping rules is analyzed. Finally, we present reconstruction results for images of several satellites.
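Of the three restoration methods named, Richardson-Lucy has the most compact statement; a textbook sketch (not the authors' implementation), assuming a normalized point-spread function:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, iters=30):
    # psf is assumed to sum to 1; estimate starts flat at the mean level
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iters):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        # Multiplicative update toward consistency with the observed image
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```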
Toeplitz Inverse Covariance-Based Clustering of Multivariate Time Series Data
Hallac, David; Vare, Sagar; Boyd, Stephen; Leskovec, Jure
2018-01-01
Subsequence clustering of multivariate time series is a useful tool for discovering repeated patterns in temporal data. Once these patterns have been discovered, seemingly complicated datasets can be interpreted as a temporal sequence of only a small number of states, or clusters. For example, raw sensor data from a fitness-tracking application can be expressed as a timeline of a select few actions (i.e., walking, sitting, running). However, discovering these patterns is challenging because it requires simultaneous segmentation and clustering of the time series. Furthermore, interpreting the resulting clusters is difficult, especially when the data is high-dimensional. Here we propose a new method of model-based clustering, which we call Toeplitz Inverse Covariance-based Clustering (TICC). Each cluster in the TICC method is defined by a correlation network, or Markov random field (MRF), characterizing the interdependencies between different observations in a typical subsequence of that cluster. Based on this graphical representation, TICC simultaneously segments and clusters the time series data. We solve the TICC problem through alternating minimization, using a variation of the expectation maximization (EM) algorithm. We derive closed-form solutions to efficiently solve the two resulting subproblems in a scalable way, through dynamic programming and the alternating direction method of multipliers (ADMM), respectively. We validate our approach by comparing TICC to several state-of-the-art baselines in a series of synthetic experiments, and we then demonstrate on an automobile sensor dataset how TICC can be used to learn interpretable clusters in real-world scenarios. PMID:29770257
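The cluster-assignment subproblem that TICC solves by dynamic programming has a Viterbi-like structure: assign each window to the cluster minimizing negative log-likelihood plus a penalty beta for each switch. A sketch of that step alone, assuming the per-window negative log-likelihoods have already been computed:

```python
import numpy as np

def assign_clusters(neg_ll, beta):
    # neg_ll[t, k]: negative log-likelihood of window t under cluster k
    T, K = neg_ll.shape
    cost = neg_ll[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # Either stay in the same cluster, or pay beta to switch from the best
        stay, switch = cost, cost.min() + beta
        back[t] = np.where(stay <= switch, np.arange(K), cost.argmin())
        cost = np.minimum(stay, switch) + neg_ll[t]
    path = np.empty(T, dtype=int)       # backtrack the optimal assignment
    path[-1] = cost.argmin()
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```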
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, L; Yin, F; Cai, J
Purpose: To develop a methodology for constructing a physiologically based virtual thorax phantom from hyperpolarized (HP) gas tagging MRI for evaluating deformable image registration (DIR). Methods: Three healthy subjects were imaged at both the end-of-inhalation (EOI) and the end-of-exhalation (EOE) phases using high-resolution (2.5 mm isovoxel) 3D proton MRI, as well as a hybrid MRI which combines HP gas tagging MRI and low-resolution (4.5 mm isovoxel) proton MRI. A sparse tagging displacement vector field (tDVF) was derived from the HP gas tagging MRI by tracking the displacement of tagging grids between EOI and EOE. Using the tDVF and the high-resolution MR images, we determined the motion model of the entire thorax in two steps: 1) the DVF inside the lungs was estimated from the sparse tDVF using a novel multi-step natural neighbor interpolation method; 2) the DVF outside the lungs was estimated from the DIR between the EOI and EOE images (Velocity AI). The derived motion model was then applied to the high-resolution EOI image to create a deformed EOE image, forming the virtual phantom in which the motion model provides the ground truth of deformation. Five DIR methods were evaluated using the developed virtual phantom. Errors in DVF magnitude (Em) and angle (Ea) were determined and compared for each DIR method. Results: Among the five DIR methods, free-form deformation produced DVF results most closely resembling the ground truth (Em = 1.04 mm, Ea = 6.63°). The two DIR methods based on B-splines produced comparable results (Em = 2.04 mm, Ea = 13.66°; Em = 2.62 mm, Ea = 17.67°), and the two optical-flow methods produced the least accurate results (Em = 7.8 mm, Ea = 53.04°; Em = 4.45 mm, Ea = 31.02°). Conclusion: A methodology for constructing a physiologically based virtual thorax phantom from HP gas tagging MRI has been developed. Initial evaluation demonstrated its potential as an effective tool for robust evaluation of DIR in the lung.
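One plausible reading of the Em/Ea scores (the abstract does not spell out the exact definitions, so this is an assumption) is the mean magnitude and mean angle of the voxel-wise discrepancy between estimated and ground-truth DVFs:

```python
import numpy as np

def dvf_errors(dvf_est, dvf_true):
    # dvf_est, dvf_true: displacement fields shaped (..., 3), in mm
    diff = np.linalg.norm(dvf_est - dvf_true, axis=-1)    # magnitude error
    dot = (dvf_est * dvf_true).sum(-1)
    norms = (np.linalg.norm(dvf_est, axis=-1)
             * np.linalg.norm(dvf_true, axis=-1))
    ang = np.degrees(np.arccos(np.clip(dot / np.maximum(norms, 1e-12),
                                       -1.0, 1.0)))       # angular error
    return diff.mean(), ang.mean()                        # (Em, Ea)
```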
NASA Astrophysics Data System (ADS)
The present conference discusses topics in EM shielding effectiveness, system-level EMC, EMP effects, circuit-level EMI testing, EMI control, analysis techniques for system-level EMC, EMP protective measures, EMI test methods, electrostatic-discharge testing, printed circuit-board design for EMC, and EM environment effects. Also discussed are EMI measurement procedures, EM spectrum-management issues for the 21st century, antenna and propagation effects on EMI testing, EMI control in cables, socioeconomic aspects of EMC, systemwide EMI controls, and EM radiation and coupling.
HMM for hyperspectral spectrum representation and classification with endmember entropy vectors
NASA Astrophysics Data System (ADS)
Arabi, Samir Y. W.; Fernandes, David; Pizarro, Marco A.
2015-10-01
Hyperspectral images, owing to their good spectral resolution, are extensively used for classification, but their high number of bands demands greater bandwidth for data transmission, greater data storage capacity, and greater computational capability in processing systems. This work presents a new methodology for hyperspectral data classification that can work with a reduced number of spectral bands and achieve good results, comparable with processing methods that require all hyperspectral bands. The proposed method for hyperspectral spectra classification is based on a Hidden Markov Model (HMM) associated with each endmember (EM) of a scene and the conditional probabilities that each EM belongs to each other EM. The EM conditional probabilities are transformed into EM entropy vectors, and those vectors are used as reference vectors for the classes in the scene. The conditional probabilities of a spectrum to be classified are likewise transformed into an entropy vector, which is assigned to a class by the minimum Euclidean distance (ED) between it and the EM entropy vectors. The methodology was tested, with good results, using AVIRIS spectra of a scene with 13 EMs, considering the full 209 bands and reduced sets of 128, 64, and 32 bands. For the test area, it is shown that only 32 spectral bands can be used instead of the original 209 without significant loss in the classification process.
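A reading of the classification rule in code form, assuming the conditional probabilities arrive as vectors over the 13 EMs (the helper names are hypothetical, not from the paper):

```python
import numpy as np

def entropy_vector(cond_prob, eps=1e-12):
    # cond_prob: probabilities relating one spectrum (or EM) to each EM;
    # each term -p*log2(p) is that component's entropy contribution
    p = np.clip(cond_prob, eps, 1.0)
    return -p * np.log2(p)

def classify(spectrum_probs, em_entropy_vectors):
    # Assign the spectrum to the class with the nearest EM entropy vector
    v = entropy_vector(spectrum_probs)
    d = np.linalg.norm(em_entropy_vectors - v, axis=1)  # Euclidean distance
    return int(np.argmin(d))
```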