Sample records for alternative margin algorithm

  1. Maximum Margin Clustering of Hyperspectral Data

    NASA Astrophysics Data System (ADS)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2013-09-01

    In recent decades, large margin methods such as Support Vector Machines (SVMs) have come to be regarded as the state of the art in supervised classification of hyperspectral data. However, the results of these algorithms depend strongly on the quality and quantity of the available training data. To address the problems associated with training data, researchers have worked to extend large margin algorithms to unsupervised learning. One recently proposed algorithm is Maximum Margin Clustering (MMC), an unsupervised SVM algorithm that simultaneously estimates both the labels and the hyperplane parameters. However, the MMC optimization problem is non-convex. Most existing MMC methods rely on reformulating and relaxing this non-convex problem as a semi-definite program (SDP), which is computationally very expensive and can handle only small data sets. Moreover, most of these algorithms address two-class classification and therefore cannot be applied directly to remotely sensed data. In this paper, an MMC algorithm that solves the original non-convex problem using an alternating optimization method is used. The algorithm is also extended to multi-class classification and its performance is evaluated. The results show that the algorithm yields acceptable clustering of hyperspectral data.
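    To make the alternating-optimization idea concrete, a hedged two-class sketch in Python is given below: the cluster labels and the SVM hyperplane are updated in turn, with a crude balance constraint to avoid the trivial single-cluster solution. The function and parameter names are illustrative and this is not the authors' exact formulation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def mmc_alternating(X, n_iter=20, balance=0.2, random_state=0):
    # Initial label guess (k-means keeps the first SVM fit well posed).
    y = KMeans(n_clusters=2, n_init=10, random_state=random_state).fit_predict(X)
    y = np.where(y == 0, -1, 1)
    svm = None
    for _ in range(n_iter):
        svm = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
        scores = svm.decision_function(X)
        # Re-assign labels from the margin; force a minimum fraction of points
        # into each cluster so the non-convex problem does not collapse.
        k = max(1, int(balance * len(X)))
        order = np.argsort(scores)
        y_new = np.where(scores >= 0, 1, -1)
        y_new[order[:k]] = -1
        y_new[order[-k:]] = 1
        if np.array_equal(y_new, y):
            break
        y = y_new
    return y, svm

# Toy usage on two Gaussian blobs standing in for pixel spectra.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
labels, model = mmc_alternating(X)
```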

  2. A weighted belief-propagation algorithm for estimating volume-related properties of random polytopes

    NASA Astrophysics Data System (ADS)

    Font-Clos, Francesc; Massucci, Francesco Alessandro; Pérez Castillo, Isaac

    2012-11-01

    In this work we introduce a novel weighted message-passing algorithm based on the cavity method for estimating volume-related properties of random polytopes, properties which are relevant in various research fields ranging from metabolic networks to neural networks to compressed sensing. Rather than adopting the usual approach of approximating the real-valued cavity marginal distributions by a few parameters, we propose an algorithm that faithfully represents the entire marginal distribution. We describe several alternatives for implementing the algorithm and benchmark the theoretical findings with concrete applications to random polytopes. The results obtained with our approach are found to be in very good agreement with the estimates produced by the Hit-and-Run algorithm, which is known to produce uniform sampling.
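    For context, the Hit-and-Run benchmark mentioned above can be sketched in a few lines; the implementation below is a generic uniform polytope sampler under our own naming and is not the paper's weighted message-passing algorithm.

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples=1000, rng=None):
    # Uniformly sample the polytope {x : A x <= b}, starting from an interior point x0.
    rng = np.random.default_rng(rng)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        d = rng.normal(size=x.shape)
        d /= np.linalg.norm(d)                  # random direction
        Ad = A @ d
        slack = b - A @ x                       # nonnegative for an interior point
        with np.errstate(divide="ignore", invalid="ignore"):
            t = slack / Ad                      # step sizes that hit each facet
        t_max = np.min(t[Ad > 1e-12]) if np.any(Ad > 1e-12) else np.inf
        t_min = np.max(t[Ad < -1e-12]) if np.any(Ad < -1e-12) else -np.inf
        x = x + rng.uniform(t_min, t_max) * d   # uniform point on the feasible chord
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: the 2-D unit box written as A x <= b.
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([1, 0, 1, 0], dtype=float)
pts = hit_and_run(A, b, x0=[0.5, 0.5], n_samples=500, rng=0)
```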

  3. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    NASA Astrophysics Data System (ADS)

    Zeng, X.; Ye, M.; Wu, J.; Wang, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight that represents the plausibility of that model. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood areas to high-likelihood areas, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it is feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, to overcome the computational burden of the large number of repeated model executions required for marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.
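    A bare-bones sketch of the nested sampling estimator for the marginal likelihood is given below. The inner constrained-prior step uses naive rejection purely for clarity, whereas the point of the abstract is to replace that local step with a more efficient sampler such as DREAMzs; function and argument names are illustrative.

```python
import numpy as np

def nested_sampling_logZ(log_like, prior_sample, n_live=50, n_iter=200, rng=None):
    # log_like(theta) and prior_sample(rng) are user-supplied callables.
    rng = np.random.default_rng(rng)
    live = [prior_sample(rng) for _ in range(n_live)]
    live_logL = np.array([log_like(t) for t in live])
    logZ, i_final = -np.inf, 0
    for i in range(1, n_iter + 1):
        worst = int(np.argmin(live_logL))
        logL_star = live_logL[worst]
        # Width of the prior-volume shell between successive constrained levels.
        log_w = np.log(np.exp(-(i - 1) / n_live) - np.exp(-i / n_live))
        logZ = np.logaddexp(logZ, logL_star + log_w)
        i_final = i
        # Replace the worst live point by a prior draw with L > L*.  Naive
        # rejection is used here only for clarity; an efficient local sampler
        # (e.g. DREAMzs) would take its place in practice.
        for _ in range(100_000):
            cand = prior_sample(rng)
            cand_logL = log_like(cand)
            if cand_logL > logL_star:
                live[worst], live_logL[worst] = cand, cand_logL
                break
        else:
            break  # rejection became too expensive; stop accumulating
    # Contribution of the remaining live points over the leftover prior volume.
    rest = np.log(np.mean(np.exp(live_logL - live_logL.max()))) + live_logL.max()
    return np.logaddexp(logZ, rest - i_final / n_live)

# Toy usage: standard normal likelihood in 2-D with a uniform prior on [-5, 5]^2.
logL = lambda t: -0.5 * float(np.sum(t**2)) - np.log(2 * np.pi)
prior = lambda rng: rng.uniform(-5, 5, size=2)
print(nested_sampling_logZ(logL, prior, n_live=50, rng=0))
```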

  4. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets.

    PubMed

    Gruber, Susan; Logan, Roger W; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A

    2015-01-15

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. Copyright © 2014 John Wiley & Sons, Ltd.
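    A minimal sketch of the general idea, stacking several candidate learners with V-fold cross-validation to estimate stabilized weights for a single (point) treatment, is shown below using scikit-learn. It is not the authors' SL/EL implementation, and the longitudinal weight products used in the actual analysis are omitted.

```python
import numpy as np
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def stabilized_weights(L, A):
    # L: covariate matrix; A: 0/1 treatment indicator.
    A = np.asarray(A)
    ensemble = StackingClassifier(
        estimators=[
            ("logit", LogisticRegression(max_iter=1000)),
            ("gbm", GradientBoostingClassifier()),
            ("rf", RandomForestClassifier(n_estimators=200)),
        ],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5,  # V-fold cross-validation, in the spirit of super learning
    )
    ensemble.fit(L, A)
    p_denom = ensemble.predict_proba(L)[:, 1]      # P(A = 1 | L)
    p_num = np.full_like(p_denom, A.mean())        # marginal P(A = 1)
    sw = np.where(A == 1, p_num / p_denom, (1 - p_num) / (1 - p_denom))
    return sw

# The resulting weights would then feed a weighted outcome model
# (e.g. a weighted pooled logistic or Cox model) to fit the MSM.
```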

  5. Ensemble learning of inverse probability weights for marginal structural modeling in large observational datasets

    PubMed Central

    Gruber, Susan; Logan, Roger W.; Jarrín, Inmaculada; Monge, Susana; Hernán, Miguel A.

    2014-01-01

    Inverse probability weights used to fit marginal structural models are typically estimated using logistic regression. However, a data-adaptive procedure may be able to better exploit information available in measured covariates. By combining predictions from multiple algorithms, ensemble learning offers an alternative to logistic regression modeling to further reduce bias in estimated marginal structural model parameters. We describe the application of two ensemble learning approaches to estimating stabilized weights: super learning (SL), an ensemble machine learning approach that relies on V-fold cross validation, and an ensemble learner (EL) that creates a single partition of the data into training and validation sets. Longitudinal data from two multicenter cohort studies in Spain (CoRIS and CoRIS-MD) were analyzed to estimate the mortality hazard ratio for initiation versus no initiation of combined antiretroviral therapy among HIV positive subjects. Both ensemble approaches produced hazard ratio estimates further away from the null, and with tighter confidence intervals, than logistic regression modeling. Computation time for EL was less than half that of SL. We conclude that ensemble learning using a library of diverse candidate algorithms offers an alternative to parametric modeling of inverse probability weights when fitting marginal structural models. With large datasets, EL provides a rich search over the solution space in less time than SL with comparable results. PMID:25316152

  6. Wavelength converter placement for different RWA algorithms in wavelength-routed all-optical networks

    NASA Astrophysics Data System (ADS)

    Chu, Xiaowen; Li, Bo; Chlamtac, Imrich

    2002-07-01

    Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance of wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume that a static-routing, random-wavelength-assignment RWA algorithm is employed. However, existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that wavelength converter placement and RWA algorithms are closely related, in the sense that a well-designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under the fixed-alternate routing algorithm, we propose a heuristic placement algorithm called Minimum Blocking Probability First (MBPF). Under the least-loaded routing algorithm, we propose a heuristic converter placement algorithm called Weighted Maximum Segment Length (WMSL). The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks: the 14-node NSFNET, the 19-node EON and the 38-node CTNET. We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin, but also achieve almost the same performance as full wavelength conversion under the same RWA algorithm.

  7. A binary search approach to whole-genome data analysis.

    PubMed

    Brodsky, Leonid; Kogan, Simon; Benjacob, Eshel; Nevo, Eviatar

    2010-09-28

    A sequence analysis-oriented, binary search-like algorithm was transformed into a sensitive and accurate analysis tool for processing whole-genome data. The advantage of the algorithm over previous methods is its ability to detect the margins of both short and long genome fragments enriched in up-regulated signals with equal accuracy. The score of an enriched genome fragment reflects the difference between the actual concentration of up-regulated signals in the fragment and the chromosome signal baseline. The "divide-and-conquer"-type algorithm detects a series of non-intersecting fragments of various lengths with locally optimal scores. The procedure is applied to the detected fragments in a nested manner by recalculating the lower-than-baseline signals in the chromosome. The algorithm was applied to simulated whole-genome data, and its sensitivity and specificity were compared with those of several alternative algorithms. The algorithm was also tested on four biological tiling array datasets: Arabidopsis (i) expression and (ii) histone 3 lysine 27 trimethylation ChIP-on-chip datasets, and Saccharomyces cerevisiae (iii) spliced intron data and (iv) chromatin remodeling factor binding sites. The results demonstrate the power of the algorithm in identifying both short up-regulated fragments (such as exons and transcription factor binding sites) and long, even moderately up-regulated, zones at their precise genome margins. The algorithm generates an accurate whole-genome landscape that could be used for cross-comparison of signals across the same genome in evolutionary and general genomic studies.
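    The divide-and-conquer idea described above can be sketched as a recursive maximum-scoring-segment scan. The code below is an illustrative reconstruction under our own simplifications (a constant baseline and a summed-score criterion), not the published implementation.

```python
import numpy as np

def max_scoring_segment(x):
    # Kadane-style scan: best contiguous segment of x by summed score.
    best, best_i, best_j = -np.inf, 0, 0
    cur, cur_i = 0.0, 0
    for j, v in enumerate(x):
        if cur <= 0:
            cur, cur_i = v, j
        else:
            cur += v
        if cur > best:
            best, best_i, best_j = cur, cur_i, j
    return best, best_i, best_j + 1  # half-open interval

def detect_fragments(signal, baseline=0.0, min_score=5.0):
    # Recursively report non-intersecting enriched fragments whose summed
    # (signal - baseline) score is locally optimal.
    out = []
    def recurse(lo, hi):
        if hi - lo == 0:
            return
        score, i, j = max_scoring_segment(signal[lo:hi] - baseline)
        if score < min_score:
            return
        out.append((lo + i, lo + j, score))
        recurse(lo, lo + i)       # left flank
        recurse(lo + j, hi)       # right flank
    recurse(0, len(signal))
    return sorted(out)

rng = np.random.default_rng(0)
sig = rng.normal(0, 1, 10_000)
sig[2_000:2_050] += 2.0          # short, strongly enriched exon-like fragment
sig[6_000:7_000] += 0.3          # long, moderately enriched zone
print(detect_fragments(sig))
```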

  8. 77 FR 76318 - Self-Regulatory Organizations; ICE Clear Europe Limited; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-27

    ... Methodology is an enhancement to the SPAN for the ICE Margining algorithm employed to calculate Original... Margining algorithm employed to calculate Original Margin and was designed to optimize and improve margin... framework algorithm. The enhancement will be additionally applied to: GOA: Gas Oil 1-Month CSO; BRZ: Brent...

  9. 77 FR 76316 - Self-Regulatory Organizations; ICE Clear Europe Limited; Notice of Filing and Immediate...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-27

    ... enhancement to the SPAN for the ICE Margining algorithm employed to calculate Original Margin. All capitalized... Allocation Methodology is an enhancement to the SPAN[supreg] \\6\\ for the ICE Margining algorithm employed to... the SPAN margin calculation algorithm itself has not been changed. As of August 30, 2011, Position...

  10. 30 CFR 204.3 - What alternatives are available for marginal properties?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false What alternatives are available for marginal... MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES General Provisions § 204.3 What alternatives are available for marginal properties? If you have production from a marginal property, MMS and...

  11. Margin based ontology sparse vector learning algorithm and applied in biology science.

    PubMed

    Gao, Wei; Qudair Baig, Abdul; Ali, Haidar; Sajjad, Wasim; Reza Farahani, Mohammad

    2017-01-01

    In the field of biology, ontology applications involve large amounts of genetic information and chemical information about molecular structure, so that ontology concepts convey a great deal of information. In mathematical terms, the vector corresponding to an ontology concept therefore often has very high dimension, which places higher demands on ontology algorithms. Against this background, we consider the design of an ontology sparse vector algorithm and its application in biology. In this paper, using knowledge of marginal likelihood and marginal distributions, an optimized strategy for a margin-based ontology sparse vector learning algorithm is presented. Finally, the new algorithm is applied to the gene ontology and the plant ontology to verify its efficiency.

  12. Comparison of algorithms to generate event times conditional on time-dependent covariates.

    PubMed

    Sylvestre, Marie-Pierre; Abrahamowicz, Michal

    2008-06-30

    The Cox proportional hazards model with time-dependent covariates (TDC) is now a part of the standard statistical analysis toolbox in medical research. As new methods involving more complex modeling of time-dependent variables are developed, simulations could often be used to systematically assess the performance of these models. Yet, generating event times conditional on TDC requires well-designed and efficient algorithms. We compare two classes of such algorithms: permutational algorithms (PAs) and algorithms based on a binomial model. We also propose a modification of the PA to incorporate a rejection sampler. We performed a simulation study to assess the accuracy, stability, and speed of these algorithms in several scenarios. Both classes of algorithms generated data sets that, once analyzed, provided virtually unbiased estimates with comparable variances. In terms of computational efficiency, the PA with the rejection sampler reduced the time necessary to generate data by more than 50 per cent relative to alternative methods. The PAs also allowed more flexibility in the specification of the marginal distributions of event times and required less calibration.
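    A minimal sketch of the binomial-model class of generators mentioned above, using a discrete-time hazard driven by a time-dependent covariate, is given below. The parameter names and covariate process are illustrative, and the permutational algorithms compared in the paper are not shown.

```python
import numpy as np

def simulate_event_times(Z, beta, base_hazard, rng=None):
    # Discrete-time (binomial-model) generator: Z[i, t] is subject i's
    # time-dependent covariate at interval t; the event indicator at t is
    # Bernoulli with probability expit(logit(h0[t]) + beta * Z[i, t]).
    rng = np.random.default_rng(rng)
    n, T = Z.shape
    times = np.full(n, T, dtype=float)      # administrative censoring at T
    event = np.zeros(n, dtype=int)
    for i in range(n):
        for t in range(T):
            logit = np.log(base_hazard[t] / (1 - base_hazard[t])) + beta * Z[i, t]
            p = 1.0 / (1.0 + np.exp(-logit))
            if rng.uniform() < p:
                times[i], event[i] = t + 1, 1
                break
    return times, event

# Toy usage: an irreversible treatment switch as the time-dependent covariate.
rng = np.random.default_rng(0)
Z = (rng.uniform(size=(500, 60)) < 0.3).cumsum(axis=1).clip(0, 1)
times, event = simulate_event_times(Z, beta=0.7, base_hazard=np.full(60, 0.01), rng=1)
```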

  13. Marginal Consistency: Upper-Bounding Partition Functions over Commutative Semirings.

    PubMed

    Werner, Tomás

    2015-07-01

    Many inference tasks in pattern recognition and artificial intelligence lead to partition functions in which addition and multiplication are abstract binary operations forming a commutative semiring. By generalizing max-sum diffusion (one of the convergent message passing algorithms for approximate MAP inference in graphical models), we propose an iterative algorithm to upper bound such partition functions over commutative semirings. The iteration of the algorithm is remarkably simple: change any two factors of the partition function such that their product remains the same and their overlapping marginals become equal. In many commutative semirings, repeating this iteration for different pairs of factors converges to a fixed point at which the overlapping marginals of every pair of factors coincide. We call this state marginal consistency. During this process, an upper bound on the partition function monotonically decreases. This abstract algorithm unifies several existing algorithms, including max-sum diffusion and basic constraint propagation (or local consistency) algorithms in constraint programming. We further construct a hierarchy of marginal consistencies of increasingly higher levels and show that any such level can be enforced by adding identity factors of higher arity (order). Finally, we discuss instances of the framework for several semirings, including the distributive lattice and the max-sum and sum-product semirings.
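    The iteration described above can be illustrated in the ordinary sum-product semiring with two factors sharing one variable. The sketch below is our own minimal example, not the paper's general implementation: both factors are rescaled so their product is unchanged while their overlapping marginals become equal.

```python
import numpy as np

def enforce_marginal_consistency(f, g, n_sweeps=100, tol=1e-10):
    # f(x, y) and g(y, z) share the variable y (axis 1 of f, axis 0 of g).
    f, g = f.copy(), g.copy()
    for _ in range(n_sweeps):
        mf = f.sum(axis=0)            # marginal of f over the shared variable y
        mg = g.sum(axis=1)            # marginal of g over y
        if np.max(np.abs(mf - mg)) < tol:
            break
        r = np.sqrt(mg / mf)          # geometric-mean rescaling factor r(y)
        f *= r[np.newaxis, :]         # multiply f by r(y) ...
        g /= r[:, np.newaxis]         # ... and divide g by r(y): product unchanged
    return f, g

rng = np.random.default_rng(0)
f = rng.uniform(0.1, 1.0, size=(3, 4))   # f(x, y)
g = rng.uniform(0.1, 1.0, size=(4, 5))   # g(y, z)
f2, g2 = enforce_marginal_consistency(f, g)
assert np.allclose(f2.sum(axis=0), g2.sum(axis=1))   # overlapping marginals coincide
```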

  14. Learning to rank using user clicks and visual features for image retrieval.

    PubMed

    Yu, Jun; Tao, Dacheng; Wang, Meng; Rui, Yong

    2015-04-01

    The inconsistency between textual features and visual contents can cause poor image search results. To solve this problem, click features, which are more reliable than textual information in judging the relevance between a query and clicked images, are adopted in the image ranking model. However, existing ranking models cannot integrate visual features, which are effective in refining click-based search results. In this paper, we propose a novel ranking model based on the learning-to-rank framework. Visual features and click features are simultaneously utilized to obtain the ranking model. Specifically, the proposed approach is based on large margin structured output learning, and visual consistency is integrated with the click features through a hypergraph regularizer term. In accordance with the fast alternating linearization method, we design a novel algorithm to optimize the objective function. This algorithm alternately minimizes two different approximations of the original objective function by keeping one function unchanged and linearizing the other. We conduct experiments on a large-scale dataset collected from the Microsoft Bing image search engine, and the results demonstrate that the proposed learning-to-rank model based on visual features and user clicks outperforms state-of-the-art algorithms.

  15. Deep Marginalized Sparse Denoising Auto-Encoder for Image Denoising

    NASA Astrophysics Data System (ADS)

    Ma, Hongqiang; Ma, Shiping; Xu, Yuelei; Zhu, Mingming

    2018-01-01

    The Stacked Sparse Denoising Auto-Encoder (SSDA) has been successfully applied to image denoising. As a deep network, the SSDA, with its powerful feature learning ability, is superior to traditional image denoising algorithms. However, the algorithm has high computational complexity and a slow convergence rate during training. To address this limitation, we present an image denoising method based on a Deep Marginalized Sparse Denoising Auto-Encoder (DMSDA). The loss function of the Sparse Denoising Auto-Encoder is marginalized so that it satisfies both sparseness and marginality. The experimental results show that the proposed algorithm not only outperforms SSDA in convergence speed and training time, but also achieves better denoising performance than current state-of-the-art denoising algorithms in both subjective and objective evaluations.
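    The idea of marginalizing a denoising loss can be made concrete with the closed-form, single-layer marginalized denoising auto-encoder in the style of Chen et al. (mDA), sketched below. It is a related construction shown only for illustration and is not the DMSDA network of this paper; the function name and parameters are ours.

```python
import numpy as np

def marginalized_da(X, p=0.3, reg=1e-5):
    # X: d x n data matrix (features in rows, samples in columns).
    # The expected reconstruction loss under feature dropout with probability p
    # is minimized analytically, so no corrupted copies or gradient epochs are needed.
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])           # append a constant bias feature
    q = np.full(d + 1, 1.0 - p)
    q[-1] = 1.0                                    # the bias feature is never corrupted
    S = Xb @ Xb.T                                  # (d+1) x (d+1) scatter matrix
    Q = S * np.outer(q, q)                         # E[x_tilde x_tilde^T], off-diagonal terms
    np.fill_diagonal(Q, q * np.diag(S))            # diagonal carries q, not q^2
    P = (X @ Xb.T) * q[np.newaxis, :]              # E[x x_tilde^T], shape d x (d+1)
    W = P @ np.linalg.inv(Q + reg * np.eye(d + 1)) # closed-form reconstruction mapping
    H = np.tanh(W @ Xb)                            # nonlinear hidden representation
    return W, H

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 500))                     # e.g. 64 pixel features, 500 patches
W, H = marginalized_da(X, p=0.5)
```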

  16. Theoretical and Empirical Analysis of a Spatial EA Parallel Boosting Algorithm.

    PubMed

    Kamath, Uday; Domeniconi, Carlotta; De Jong, Kenneth

    2018-01-01

    Many real-world problems involve massive amounts of data. Under these circumstances learning algorithms often become prohibitively expensive, making scalability a pressing issue to be addressed. A common approach is to perform sampling to reduce the size of the dataset and enable efficient learning. Alternatively, one customizes learning algorithms to achieve scalability. In either case, the key challenge is to obtain algorithmic efficiency without compromising the quality of the results. In this article we discuss a meta-learning algorithm (PSBML) that combines concepts from spatially structured evolutionary algorithms (SSEAs) with concepts from ensemble and boosting methodologies to achieve the desired scalability property. We present both theoretical and empirical analyses which show that PSBML preserves a critical property of boosting, specifically, convergence to a distribution centered around the margin. We then present additional empirical analyses showing that this meta-level algorithm provides a general and effective framework that can be used in combination with a variety of learning classifiers. We perform extensive experiments to investigate the trade-off achieved between scalability and accuracy, and robustness to noise, on both synthetic and real-world data. These empirical results corroborate our theoretical analysis, and demonstrate the potential of PSBML in achieving scalability without sacrificing accuracy.

  17. An Efficient MCMC Algorithm to Sample Binary Matrices with Fixed Marginals

    ERIC Educational Resources Information Center

    Verhelst, Norman D.

    2008-01-01

    Uniform sampling of binary matrices with fixed margins is known as a difficult problem. Two classes of algorithms to sample from a distribution not too different from the uniform are studied in the literature: importance sampling and Markov chain Monte Carlo (MCMC). Existing MCMC algorithms converge slowly, require a long burn-in period and yield…
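    For reference, a minimal sketch of the classical swap-based MCMC baseline (the slow approach such articles aim to improve on) is given below; the function name and step count are illustrative, and this is not the article's algorithm.

```python
import numpy as np

def swap_mcmc(M, n_steps=10_000, rng=None):
    # Sample 0/1 matrices with the same row and column sums as M by repeated
    # 2x2 "checkerboard" swaps, each of which leaves all margins unchanged.
    rng = np.random.default_rng(rng)
    M = M.copy()
    n, m = M.shape
    for _ in range(n_steps):
        r = rng.choice(n, size=2, replace=False)
        c = rng.choice(m, size=2, replace=False)
        sub = M[np.ix_(r, c)]
        # A swap is allowed only on a checkerboard pattern [[a, b], [b, a]] with a != b.
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            M[np.ix_(r, c)] = 1 - sub
    return M

M0 = (np.random.default_rng(0).uniform(size=(20, 15)) < 0.4).astype(int)
M1 = swap_mcmc(M0, rng=1)
assert (M1.sum(axis=0) == M0.sum(axis=0)).all() and (M1.sum(axis=1) == M0.sum(axis=1)).all()
```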

  18. Deformable Dose Reconstruction to Optimize the Planning and Delivery of Liver Cancer Radiotherapy

    NASA Astrophysics Data System (ADS)

    Velec, Michael

    The precise delivery of radiation to liver cancer patients results in improved control through higher tumor doses and minimized normal tissue doses. A margin of normal tissue around the tumor must nevertheless be irradiated to account for treatment delivery uncertainties. Daily image guidance allows targeting of the liver, a surrogate for the tumor, to reduce geometric errors. However, poor direct tumor visualization, anatomical deformation and breathing motion introduce uncertainties between the planned dose, calculated on a single pre-treatment computed tomography image, and the dose that is delivered. A novel deformable image registration algorithm based on tissue biomechanics was applied to previous liver cancer patients to track targets and surrounding organs during radiotherapy. Modeling these daily anatomic variations permitted dose accumulation, thereby improving calculations of the delivered doses. The accuracy of the algorithm in tracking dose was validated using imaging from a deformable, 3-dimensional dosimeter able to optically track absorbed dose. Reconstructing the delivered dose revealed that 70% of patients had substantial deviations from the initial planned dose. An alternative image-guidance technique using respiratory-correlated imaging was simulated, which reduced both the residual tumor targeting errors and the magnitude of the delivered dose deviations. A planning and delivery strategy for liver radiotherapy was then developed that minimizes the impact of breathing motion and applies a margin to account for the impact of liver deformation during treatment. This margin is 38% smaller on average than the margin used clinically, and permitted an average dose escalation to liver tumors of 9% for the same risk of toxicity. Simulating the delivered dose with deformable dose reconstruction demonstrated that the plans with smaller margins were robust, as 90% of patients' tumors received the intended dose. This strategy can be readily implemented with widely available technologies and thus can potentially improve local control for liver cancer patients receiving radiotherapy.

  19. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  20. Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.

    2012-06-15

    In geostatistics, most stochastic algorithms for the simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimate of the multivariate probability using lower order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher order marginal probability constraints as used in multiple point statistics. The theoretical framework is developed and illustrated with an estimation and simulation example.
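    A generic two-dimensional sketch of the iterative proportion fitting step is given below (hypothetical function and variable names); the article applies the same idea to higher-order multivariate probabilities with sparse-matrix bookkeeping.

```python
import numpy as np

def ipf(p_init, row_marginal, col_marginal, n_iter=100, tol=1e-10):
    # Adjust an initial joint probability table so its row and column sums
    # match the imposed marginals.
    p = p_init.astype(float).copy()
    for _ in range(n_iter):
        p *= (row_marginal / p.sum(axis=1))[:, np.newaxis]   # match row sums
        p *= (col_marginal / p.sum(axis=0))[np.newaxis, :]   # match column sums
        if (np.abs(p.sum(axis=1) - row_marginal).max() < tol and
                np.abs(p.sum(axis=0) - col_marginal).max() < tol):
            break
    return p

p0 = np.full((3, 3), 1 / 9)            # initial estimate of a joint facies probability
rows = np.array([0.5, 0.3, 0.2])       # marginal inferred along one well
cols = np.array([0.6, 0.3, 0.1])       # marginal inferred along another well
p = ipf(p0, rows, cols)
```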

  1. Large Margin Multi-Modal Multi-Task Feature Extraction for Image Classification.

    PubMed

    Yong Luo; Yonggang Wen; Dacheng Tao; Jie Gui; Chao Xu

    2016-01-01

    The features used in many image analysis-based applications are frequently of very high dimension. Feature extraction offers several advantages in high-dimensional cases, and many recent studies have used multi-task feature extraction approaches, which often outperform single-task feature extraction approaches. However, most of these methods are limited in that they only consider data represented by a single type of feature, even though features usually represent images from multiple modalities. We, therefore, propose a novel large margin multi-modal multi-task feature extraction (LM3FE) framework for handling multi-modal features for image classification. In particular, LM3FE simultaneously learns the feature extraction matrix for each modality and the modality combination coefficients. In this way, LM3FE not only handles correlated and noisy features, but also utilizes the complementarity of different modalities to further help reduce feature redundancy in each modality. The large margin principle employed also helps to extract strongly predictive features, so that they are more suitable for prediction (e.g., classification). An alternating algorithm is developed for problem optimization, and each subproblem can be efficiently solved. Experiments on two challenging real-world image data sets demonstrate the effectiveness and superiority of the proposed method.

  2. Application of fermionic marginal constraints to hybrid quantum algorithms

    NASA Astrophysics Data System (ADS)

    Rubin, Nicholas C.; Babbush, Ryan; McClean, Jarrod

    2018-05-01

    Many quantum algorithms, including recently proposed hybrid classical/quantum algorithms, make use of restricted tomography of the quantum state that measures the reduced density matrices, or marginals, of the full state. The most straightforward approach to this algorithmic step estimates each component of the marginal independently without making use of the algebraic and geometric structure of the marginals. Within the field of quantum chemistry, this structure is termed the fermionic n-representability conditions, and is supported by a vast amount of literature on both theoretical and practical results related to their approximations. In this work, we introduce these conditions in the language of quantum computation, and utilize them to develop several techniques to accelerate and improve practical applications for quantum chemistry on quantum computers. As a general result, we demonstrate how these marginals concentrate to diagonal quantities when measured on random quantum states. We also show that one can use fermionic n-representability conditions to reduce the total number of measurements required by more than an order of magnitude for medium sized systems in chemistry. As a practical demonstration, we simulate an efficient restoration of the physicality of energy curves for the dilation of a four qubit diatomic hydrogen system in the presence of three distinct one qubit error channels, providing evidence these techniques are useful for pre-fault tolerant quantum chemistry experiments.

  3. Study of 201 non-small cell lung cancer patients given stereotactic ablative radiation therapy shows local control dependence on dose calculation algorithm.

    PubMed

    Latifi, Kujtim; Oliver, Jasmine; Baker, Ryan; Dilling, Thomas J; Stevens, Craig W; Kim, Jongphil; Yue, Binglin; Demarco, Marylou; Zhang, Geoffrey G; Moros, Eduardo G; Feygelman, Vladimir

    2014-04-01

    Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Twenty-five patients planned with PB and 4 patients planned with the CCC algorithms to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99GITV = 7.4 Gy, ΔD99PTV = 10.4 Gy, ΔV90GITV = 13.7%, ΔV90PTV = 37.6%, ΔD95PTV = 9.8 Gy, and ΔDISO = 3.4 Gy. GITV = gross internal tumor volume. Local control in patients who were planned to the same nominal dose with the PB and CCC algorithms was statistically significantly different. Possible alternative explanations are described in the report, although they are not thought likely to explain the difference. We conclude that the difference is due to relative dosimetric underdosing of tumors with the PB algorithm. Copyright © 2014 Elsevier Inc. All rights reserved.

  4. Fast computation of the multivariable stability margin for real interrelated uncertain parameters

    NASA Technical Reports Server (NTRS)

    Sideris, Athanasios; Sanchez Pena, Ricardo S.

    1988-01-01

    A novel algorithm for computing the multivariable stability margin for checking the robust stability of feedback systems with real parametric uncertainty is proposed. The method eliminates the frequency search required by an earlier algorithm by reducing it to the checking of a finite number of conditions. These conditions have a special structure, which allows a significant improvement in the speed of computation.

  5. Multiple Ordinal Regression by Maximizing the Sum of Margins

    PubMed Central

    Hamsici, Onur C.; Martinez, Aleix M.

    2016-01-01

    Human preferences are usually measured using ordinal variables. A system whose goal is to estimate the preferences of humans and their underlying decision mechanisms must learn the ordering of any given sample set. We consider the solution of this ordinal regression problem using a Support Vector Machine algorithm. Specifically, the goal is to learn a set of classifiers with common direction vectors and different biases that correctly separate the ordered classes. Current algorithms either require solving a quadratic optimization problem, which is computationally expensive, or are based on maximizing the minimum margin (i.e., a fixed-margin strategy) between a set of hyperplanes, which biases the solution toward the closest margin. Another drawback of these strategies is that they are limited to ordering the classes using a single ranking variable (e.g., perceived length). In this paper, we define a multiple ordinal regression algorithm based on maximizing the sum of the margins between every pair of consecutive classes with respect to one or more rankings (e.g., perceived length and weight). We provide derivations of an efficient, easy-to-implement iterative solution using a Sequential Minimal Optimization procedure. We demonstrate the accuracy of our solutions on several datasets. In addition, we provide a key application of our algorithms in estimating human subjects' ordinal classification of attribute associations to object categories. We show that these ordinal associations perform better than the binary ones typically employed in the literature. PMID:26529784

  6. Improving and Evaluating Nested Sampling Algorithm for Marginal Likelihood Estimation

    NASA Astrophysics Data System (ADS)

    Ye, M.; Zeng, X.; Wu, J.; Wang, D.; Liu, J.

    2016-12-01

    With the growing impacts of climate change and human activities on the water cycle, an increasing number of studies focus on the quantification of modeling uncertainty. Bayesian model averaging (BMA) provides a popular framework for quantifying conceptual model and parameter uncertainty. The ensemble prediction is generated by combining each plausible model's prediction, and each model is assigned a weight determined by the model's prior weight and its marginal likelihood. Thus, the estimation of a model's marginal likelihood is crucial for reliable and accurate BMA prediction. The nested sampling estimator (NSE) is a newly proposed method for marginal likelihood estimation. NSE searches the parameter space gradually from low-likelihood areas to high-likelihood areas, and this evolution is carried out iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm is often used for local sampling. However, M-H is not an efficient sampling algorithm for high-dimensional or complicated parameter spaces. To improve the efficiency of NSE, it is natural to incorporate the robust and efficient sampling algorithm DREAMzs into the local sampling of NSE. The comparison results demonstrate that the improved NSE increases the efficiency of marginal likelihood estimation significantly. However, both the improved and the original NSE suffer from heavy instability. In addition, the heavy computational cost of the huge number of model executions is overcome by using adaptive sparse grid surrogates.

  7. Face recognition using total margin-based adaptive fuzzy support vector machines.

    PubMed

    Liu, Yi-Hung; Chen, Yen-Ting

    2007-01-01

    This paper presents a new classifier called total margin-based adaptive fuzzy support vector machines (TAF-SVM) that deals with several problems that may occur when support vector machines (SVMs) are applied to face recognition. The proposed TAF-SVM not only solves the overfitting problem caused by outliers through fuzzification of the penalty, but also corrects the skew of the optimal separating hyperplane caused by very imbalanced data sets by using a different-cost algorithm. In addition, by introducing a total margin algorithm to replace the conventional soft margin algorithm, a lower generalization error bound can be obtained. These three functions are embodied in the traditional SVM, and the TAF-SVM is proposed and reformulated for both linear and nonlinear cases. Using two databases, the Chung Yuan Christian University (CYCU) multiview and the facial recognition technology (FERET) face databases, and using the kernel Fisher discriminant analysis (KFDA) algorithm to extract discriminating face features, experimental results show that the proposed TAF-SVM is superior to SVM in terms of face-recognition accuracy. The results also indicate that the proposed TAF-SVM achieves smaller error variances than SVM over a number of tests, so that better recognition stability can be obtained.

  8. Research of facial feature extraction based on MMC

    NASA Astrophysics Data System (ADS)

    Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun

    2017-07-01

    Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction are proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after projection. Compared with the original MMC method and principal component analysis (PCA), the proposed methods are better at reducing or eliminating the statistical correlation between features and at improving the recognition rate. Experimental results on the Olivetti Research Laboratory (ORL) face database show that the new statistically uncorrelated maximum margin criterion (SUMMC) feature extraction method is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction are revealed.
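    For reference, a minimal sketch of plain MMC feature extraction, projecting onto the leading eigenvectors of the difference between the between-class and within-class scatter matrices, is shown below; the additional uncorrelatedness and orthogonality constraints of the proposed methods are not included.

```python
import numpy as np

def mmc_projection(X, y, n_components=2):
    # Project onto the top eigenvectors of (S_b - S_w): maximize between-class
    # scatter while minimizing within-class scatter.
    classes, mean = np.unique(y), X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in classes:
        Xc = X[y == c]
        diff = (Xc.mean(axis=0) - mean)[:, None]
        Sb += len(Xc) * diff @ diff.T
        Sw += (Xc - Xc.mean(axis=0)).T @ (Xc - Xc.mean(axis=0))
    evals, evecs = np.linalg.eigh(Sb - Sw)          # symmetric matrix, so eigh
    W = evecs[:, np.argsort(evals)[::-1][:n_components]]
    return X @ W, W

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(2, 1, (50, 20))])
y = np.repeat([0, 1], 50)
Z, W = mmc_projection(X, y, n_components=2)
```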

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Latifi, Kujtim, E-mail: Kujtim.Latifi@Moffitt.org; Oliver, Jasmine; Department of Physics, University of South Florida, Tampa, Florida

    Purpose: Pencil beam (PB) and collapsed cone convolution (CCC) dose calculation algorithms differ significantly when used in the thorax. However, such differences have seldom been previously directly correlated with outcomes of lung stereotactic ablative body radiation (SABR). Methods and Materials: Data for 201 non-small cell lung cancer patients treated with SABR were analyzed retrospectively. All patients were treated with 50 Gy in 5 fractions of 10 Gy each. The radiation prescription mandated that 95% of the planning target volume (PTV) receive the prescribed dose. One hundred sixteen patients were planned with BrainLab treatment planning software (TPS) with the PB algorithm and treated on a Novalis unit. The other 85 were planned on the Pinnacle TPS with the CCC algorithm and treated on a Varian linac. Treatment planning objectives were numerically identical for both groups. The median follow-up times were 24 and 17 months for the PB and CCC groups, respectively. The primary endpoint was local/marginal control of the irradiated lesion. Gray's competing risk method was used to determine the statistical differences in local/marginal control rates between the PB and CCC groups. Results: Twenty-five patients planned with PB and 4 patients planned with the CCC algorithms to the same nominal doses experienced local recurrence. There was a statistically significant difference in recurrence rates between the PB and CCC groups (hazard ratio 3.4 [95% confidence interval: 1.18-9.83], Gray's test P=.019). The differences (Δ) between the 2 algorithms for target coverage were as follows: ΔD99GITV = 7.4 Gy, ΔD99PTV = 10.4 Gy, ΔV90GITV = 13.7%, ΔV90PTV = 37.6%, ΔD95PTV = 9.8 Gy, and ΔDISO = 3.4 Gy. GITV = gross internal tumor volume. Conclusions: Local control in patients who were planned to the same nominal dose with the PB and CCC algorithms was statistically significantly different. Possible alternative explanations are described in the report, although they are not thought likely to explain the difference. We conclude that the difference is due to relative dosimetric underdosing of tumors with the PB algorithm.

  10. Peri-operative imaging of cancer margins with reflectance confocal microscopy during Mohs micrographic surgery: feasibility of a video-mosaicing algorithm

    NASA Astrophysics Data System (ADS)

    Flores, Eileen; Yelamos, Oriol; Cordova, Miguel; Kose, Kivanc; Phillips, William; Rossi, Anthony; Nehal, Kishwer; Rajadhyaksha, Milind

    2017-02-01

    Reflectance confocal microscopy (RCM) imaging shows promise for guiding surgical treatment of skin cancers. Recent technological advancements such as the introduction of the handheld version of the reflectance confocal microscope, video acquisition and video-mosaicing have improved RCM as an emerging tool to evaluate cancer margins during routine surgical skin procedures such as Mohs micrographic surgery (MMS). Detection of residual non-melanoma skin cancer (NMSC) tumor during MMS is feasible, as demonstrated by the introduction of real-time perioperative imaging on patients in the surgical setting. Our study is currently testing the feasibility of a new mosaicing algorithm for perioperative RCM imaging of NMSC cancer margins on patients during MMS. We report progress toward imaging and image analysis on forty-five patients, who presented for MMS at the MSKCC Dermatology service. The first 10 patients were used as a training set to establish an RCM imaging algorithm, which was implemented on the remaining test set of 35 patients. RCM imaging, using 35% AlCl3 for nuclear contrast, was performed pre- and intra-operatively with the Vivascope 3000 (Caliber ID). Imaging was performed in quadrants in the wound, to simulate the Mohs surgeon's examination of pathology. Videos were taken at the epidermal and deep dermal margins. Our Mohs surgeons assessed all videos and video-mosaics for quality and correlation to histology. Overall, our RCM video-mosaicing algorithm is feasible. RCM videos and video-mosaics of the epidermal and dermal margins were found to be of clinically acceptable quality. Assessment of cancer margins was affected by type of NMSC, size and location. Among the test set of 35 patients, 83% showed acceptable imaging quality, resolution and contrast. Visualization of nuclear and cellular morphology of residual BCC/SCC tumor and normal skin features could be detected in the peripheral and deep dermal margins. We observed correlation between the RCM videos/video-mosaics and the corresponding histology in 32 lesions. Peri-operative RCM imaging shows promise for improved and faster detection of cancer margins and guiding MMS in the surgical setting.

  11. Instances selection algorithm by ensemble margin

    NASA Astrophysics Data System (ADS)

    Saidi, Meryem; Bechar, Mohammed El Amine; Settouti, Nesma; Chikh, Mohamed Amine

    2018-05-01

    The main limitation of data mining algorithms is their inability to deal with the huge amount of available data in a reasonable processing time. One solution for producing fast and accurate results is instance and feature selection. This process eliminates noisy or redundant data in order to reduce storage and computational cost without degrading performance. In this paper, a new instance selection approach called the Ensemble Margin Instance Selection (EMIS) algorithm is proposed. This approach is based on the ensemble margin. To evaluate our approach, we conducted several experiments on different real-world classification problems from the UCI Machine Learning repository. Pixel-based image segmentation is a field in which the storage requirements and computational cost of the applied model become high. To address these limitations, we conducted a study applying EMIS and other instance selection techniques to the segmentation and automatic recognition of white blood cells (nucleus and cytoplasm) in cytological images.
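    A generic sketch of margin-based instance selection is given below: a bagged ensemble is trained, each instance's ensemble margin (votes for its true class minus the strongest other class) is computed, and a subset is kept. Which end of the margin distribution to keep is a design choice; EMIS itself uses its own margin definition and selection rule, so the code is illustrative only.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def ensemble_margin_selection(X, y, keep_fraction=0.5, rng=0):
    forest = RandomForestClassifier(n_estimators=200, random_state=rng).fit(X, y)
    y_idx = np.searchsorted(forest.classes_, y)     # class indices used by the trees
    votes = np.stack([tree.predict(X) for tree in forest.estimators_])  # (n_trees, n_samples)
    margins = np.empty(len(y))
    for i in range(len(y)):
        counts = np.bincount(votes[:, i].astype(int), minlength=len(forest.classes_))
        true = counts[y_idx[i]]
        other = np.delete(counts, y_idx[i]).max() if len(counts) > 1 else 0
        margins[i] = (true - other) / votes.shape[0]
    # Keep low-margin (boundary) instances, which carry most of the class structure.
    keep = np.argsort(margins)[: int(keep_fraction * len(y))]
    return X[keep], y[keep], margins

X, y = load_iris(return_X_y=True)
X_sel, y_sel, m = ensemble_margin_selection(X, y, keep_fraction=0.4)
```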

  12. 30 CFR 204.4 - What is a marginal property under this part?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false What is a marginal property under this part... REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES General Provisions § 204.4 What is a marginal property under this part? (a) To qualify as a marginal property eligible for royalty prepayment or...

  13. Stan : A Probabilistic Programming Language

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  14. Stan : A Probabilistic Programming Language

    DOE PAGES

    Carpenter, Bob; Gelman, Andrew; Hoffman, Matthew D.; ...

    2017-01-01

    Stan is a probabilistic programming language for specifying statistical models. A Stan program imperatively defines a log probability function over parameters conditioned on specified data and constants. As of version 2.14.0, Stan provides full Bayesian inference for continuous-variable models through Markov chain Monte Carlo methods such as the No-U-Turn sampler, an adaptive form of Hamiltonian Monte Carlo sampling. Penalized maximum likelihood estimates are calculated using optimization methods such as the limited memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Stan is also a platform for computing log densities and their gradients and Hessians, which can be used in alternative algorithms such as variational Bayes, expectation propagation, and marginal inference using approximate integration. To this end, Stan is set up so that the densities, gradients, and Hessians, along with intermediate quantities of the algorithm such as acceptance probabilities, are easily accessible. Stan can also be called from the command line using the cmdstan package, through R using the rstan package, and through Python using the pystan package. All three interfaces support sampling and optimization-based inference with diagnostics and posterior analysis. rstan and pystan also provide access to log probabilities, gradients, Hessians, parameter transforms, and specialized plotting.

  15. High-Dimensional Exploratory Item Factor Analysis by a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Cai, Li

    2010-01-01

    A Metropolis-Hastings Robbins-Monro (MH-RM) algorithm for high-dimensional maximum marginal likelihood exploratory item factor analysis is proposed. The sequence of estimates from the MH-RM algorithm converges with probability one to the maximum likelihood solution. Details on the computer implementation of this algorithm are provided. The…

  16. Marginalized Student Access to Technology Education

    NASA Astrophysics Data System (ADS)

    Kurtcu, Wanda M.

    The purpose of this paper is to investigate how a teacher can disrupt an established curriculum that continues the cycle of inequitable access to science, technology, engineering, and math (STEM) curriculum by students in alternative education. For this paper, I will focus on the technology components of the STEM curriculum. Technology in the United States, if not the world economy, is developing at a rapid pace. Many areas of day-to-day living, from applying for a job to checking one's bank account online, involve a component of science and technology. The 'gap' in technology education between the 'haves and have-nots' is delineated along socio-economic lines. Marginalized students in alternative education programs use such technology for little more than remedial programs and credit recovery. This inequity widens further in alternative education programs and affects the achievement of marginalized students, who are placed in credit recovery or alternative education classes instead of participating in technology classes. For the purposes of this paper I focus on how I can decrease the inequity of student access to 21st century technology education in an alternative education program by addressing the established curriculum of the program and modifying structural barriers to marginalized students' access to a technology-focused curriculum.

  17. A Novel Automatic Detection System for ECG Arrhythmias Using Maximum Margin Clustering with Immune Evolutionary Algorithm

    PubMed Central

    Zhu, Bohui; Ding, Yongsheng; Hao, Kuangrong

    2013-01-01

    This paper presents a novel maximum margin clustering method with immune evolution (IEMMC) for the automatic diagnosis of electrocardiogram (ECG) arrhythmias. This diagnostic system consists of signal processing, feature extraction, and the IEMMC algorithm for clustering of ECG arrhythmias. First, the raw ECG signal is processed by an adaptive ECG filter based on wavelet transforms, and the waveform of the ECG signal is detected; then, features are extracted from the ECG signal to cluster different types of arrhythmias using the IEMMC algorithm. Three performance evaluation indicators are used to assess the effect of the IEMMC method for ECG arrhythmias: sensitivity, specificity, and accuracy. Compared with the K-means and iterSVR algorithms, the IEMMC algorithm performs better not only in clustering results but also in global search ability and convergence, which demonstrates its effectiveness for the detection of ECG arrhythmias. PMID:23690875

  18. An algorithm of improving speech emotional perception for hearing aid

    NASA Astrophysics Data System (ADS)

    Xi, Ji; Liang, Ruiyu; Fei, Xianju

    2017-07-01

    In this paper, a speech emotion recognition (SER) algorithm is proposed to improve the emotional perception of hearing-impaired people. The algorithm uses multiple kernel technology to overcome a drawback of SVMs: slow training speed. First, to improve the adaptive performance of the Gaussian radial basis function (RBF) kernel, the parameter determining the nonlinear mapping is optimized on the basis of kernel target alignment. The resulting kernel is then used as the basis kernel of multiple kernel learning (MKL) with a slack variable that alleviates the over-fitting problem. However, the slack variable also introduces error into the result. Therefore, a soft-margin MKL is proposed to balance the margin against the error. Moreover, an iterative algorithm is used to solve for the combination coefficients and hyper-plane equations. Experimental results show that the proposed algorithm achieves an accuracy of 90% for five emotions: happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.

  19. Bayesian Estimation of Multidimensional Item Response Models. A Comparison of Analytic and Simulation Algorithms

    ERIC Educational Resources Information Center

    Martin-Fernandez, Manuel; Revuelta, Javier

    2017-01-01

    This study compares the performance of two estimation algorithms of new usage, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two consolidated algorithms in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…

  20. Site-specific range uncertainties caused by dose calculation algorithms for proton therapy

    NASA Astrophysics Data System (ADS)

    Schuemann, J.; Dowdell, S.; Grassberger, C.; Min, C. H.; Paganetti, H.

    2014-08-01

    The purpose of this study was to assess the possibility of introducing site-specific range margins to replace current generic margins in proton therapy. Further, the goal was to study the potential of reducing margins with current analytical dose calculations methods. For this purpose we investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict the range of proton fields. Dose distributions predicted by an analytical pencil-beam algorithm were compared with those obtained using Monte Carlo (MC) simulations (TOPAS). A total of 508 passively scattered treatment fields were analyzed for seven disease sites (liver, prostate, breast, medulloblastoma-spine, medulloblastoma-whole brain, lung and head and neck). Voxel-by-voxel comparisons were performed on two-dimensional distal dose surfaces calculated by pencil-beam and MC algorithms to obtain the average range differences and root mean square deviation for each field for the distal position of the 90% dose level (R90) and the 50% dose level (R50). The average dose degradation of the distal falloff region, defined as the distance between the distal position of the 80% and 20% dose levels (R80-R20), was also analyzed. All ranges were calculated in water-equivalent distances. Considering total range uncertainties and uncertainties from dose calculation alone, we were able to deduce site-specific estimations. For liver, prostate and whole brain fields our results demonstrate that a reduction of currently used uncertainty margins is feasible even without introducing MC dose calculations. We recommend range margins of 2.8% + 1.2 mm for liver and prostate treatments and 3.1% + 1.2 mm for whole brain treatments, respectively. On the other hand, current margins seem to be insufficient for some breast, lung and head and neck patients, at least if used generically. If no case specific adjustments are applied, a generic margin of 6.3% + 1.2 mm would be needed for breast, lung and head and neck treatments. We conclude that the currently used generic range uncertainty margins in proton therapy should be redefined site specific and that complex geometries may require a field specific adjustment. Routine verifications of treatment plans using MC simulations are recommended for patients with heterogeneous geometries.
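    The quoted margin recipes are simple arithmetic; the small helper below illustrates how such a "percentage of range plus fixed term" margin would be evaluated. The function name and unit conventions are ours.

```python
def range_margin(range_cm, pct, fixed_mm):
    # Range margin of the form pct% of the beam range plus a fixed term, as in
    # the abstract (2.8% + 1.2 mm for liver/prostate, 3.1% + 1.2 mm for whole
    # brain, and a generic 6.3% + 1.2 mm for breast/lung/head-and-neck).
    return pct / 100.0 * range_cm * 10.0 + fixed_mm   # result in mm

# Example: a 12 cm beam range with the recommended liver/prostate margin.
print(range_margin(12.0, pct=2.8, fixed_mm=1.2))      # about 4.6 mm
```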

  1. Marginal Fisher analysis and its variants for human gait recognition and content- based image retrieval.

    PubMed

    Xu, Dong; Yan, Shuicheng; Tao, Dacheng; Lin, Stephen; Zhang, Hong-Jiang

    2007-11-01

    Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for human gait recognition and content-based image retrieval (CBIR). In this paper, we present extensions of our recently proposed marginal Fisher analysis (MFA) to address these problems. For human gait recognition, we first present a direct application of MFA, then inspired by recent advances in matrix and tensor-based dimensionality reduction algorithms, we present matrix-based MFA for directly handling 2-D input in the form of gray-level averaged images. For CBIR, we deal with the relevance feedback problem by extending MFA to marginal biased analysis, in which within-class compactness is characterized only by the distances between each positive sample and its neighboring positive samples. In addition, we present a new technique to acquire a direct optimal solution for MFA without resorting to objective function modification as done in many previous algorithms. We conduct comprehensive experiments on the USF HumanID gait database and the Corel image retrieval database. Experimental results demonstrate that MFA and its extensions outperform related algorithms in both applications.

  2. Object-oriented feature-tracking algorithms for SAR images of the marginal ice zone

    NASA Technical Reports Server (NTRS)

    Daida, Jason; Samadani, Ramin; Vesecky, John F.

    1990-01-01

    An unsupervised method that chooses and applies the most appropriate tracking algorithm from among different sea-ice tracking algorithms is reported. In contrast to current unsupervised methods, this method chooses and applies an algorithm by partially examining a sequential image pair to draw inferences about what was examined. Based on these inferences the reported method subsequently chooses which algorithm to apply to specific areas of the image pair where that algorithm should work best.

  3. The Short- and Long-Run Marginal Cost Curves: An Alternative Explanation.

    ERIC Educational Resources Information Center

    Boyd, Laura A.; Boyd, David W.

    1994-01-01

    Discusses issues related to short-run marginal cost and long-run marginal cost in economic theory. Asserts that few economics textbooks deal with important aspects of this concept. Includes four figures illustrating the approach suggested by the authors. (CFR)

  4. Niche harmony search algorithm for detecting complex disease associated high-order SNP combinations.

    PubMed

    Tuo, Shouheng; Zhang, Junying; Yuan, Xiguo; He, Zongzhen; Liu, Yajun; Liu, Zhaowen

    2017-09-14

    Genome-wide association studies are especially challenging for detecting high-order disease-causing models because of model diversity, the possibly low or even absent marginal effect of the model, and the extraordinary search and computation required. In this paper, we propose a niche harmony search algorithm in which joint entropy is utilized as a heuristic factor to guide the search for models with low or no marginal effect, and two computationally lightweight scores are selected to evaluate and adapt to diverse disease models. In order to obtain all possible suspected pathogenic models, the niche technique is merged with harmony search (HS), serving as a taboo region that prevents HS from being trapped in local search. From the resultant set of candidate SNP combinations, we use the G-test statistic to test for true positives. Experiments were performed on twenty typical simulated datasets, in which 12 models have marginal effects and eight have none. Our results indicate that the proposed algorithm has very high detection power for finding suspected disease models in the first stage and is superior to several typical existing approaches in both detection power and CPU runtime on all these datasets. Application to age-related macular degeneration (AMD) demonstrates that our method is promising in detecting high-order disease-causing models.
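
    The G-test used in the second stage above can be written down directly. The sketch below (not the authors' code) applies a log-likelihood-ratio test of independence to a hypothetical genotype-combination-by-phenotype contingency table; the counts are made up for illustration.

      import numpy as np
      from scipy.stats import chi2

      def g_test(table):
          """Log-likelihood-ratio (G) test of independence for a contingency table."""
          table = np.asarray(table, dtype=float)
          expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
          mask = table > 0                      # treat 0 * log(0) as 0
          g = 2.0 * np.sum(table[mask] * np.log(table[mask] / expected[mask]))
          dof = (table.shape[0] - 1) * (table.shape[1] - 1)
          return g, chi2.sf(g, dof)

      # Hypothetical counts: rows = joint genotype class of a SNP combination, columns = case/control
      counts = [[30, 10],
                [25, 25],
                [15, 35]]
      g, p = g_test(counts)
      print(f"G = {g:.2f}, p = {p:.4g}")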

  5. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    ERIC Educational Resources Information Center

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  6. Voices of the poor from the margins of Bengal: structural inequities and health.

    PubMed

    Dutta, Mohan J; Dutta, Uttaran

    2013-01-01

    In opposition to the traditional approaches to health communication that treat the subaltern sectors as passive recipients of messages of enlightenment configured in top-down interventions, the culture-centered approach foregrounds the importance of listening to subaltern communities at the margins through dialogue. We build on earlier culture-centered projects in rural communities of West Bengal, India, to develop participatory research strategies for understanding the local processes through which the structural marginalization of the poor plays out in rural Bengal. Study results point toward the marginalization of the poor both communicatively and economically, attending to the ways in which communicative marginalization lies at the heart of economic oppressions. Through locally articulated concepts of "health as shortage" and "communication as shortage," community members put forth alternative rationalities of health that highlight structural resources at the heart of health. These local articulations of shortage offer an alternative rationality for organizing health promotion efforts in the rural margins of Bengal through the foregrounding of discourses of shortage.

  7. CONORBIT: constrained optimization by radial basis function interpolation in trust regions

    DOE PAGES

    Regis, Rommel G.; Wild, Stefan M.

    2016-09-26

    Here, this paper presents CONORBIT (CONstrained Optimization by Radial Basis function Interpolation in Trust regions), a derivative-free algorithm for constrained black-box optimization where the objective and constraint functions are computationally expensive. CONORBIT employs a trust-region framework that uses interpolating radial basis function (RBF) models for the objective and constraint functions, and is an extension of the ORBIT algorithm. It uses a small margin for the RBF constraint models to facilitate the generation of feasible iterates, and extensive numerical tests confirm that such a margin is helpful in improving performance. CONORBIT is compared with other algorithms on 27 test problems, a chemical process optimization problem, and an automotive application. Numerical results show that CONORBIT performs better than COBYLA, a sequential penalty derivative-free method, an augmented Lagrangian method, a direct search method, and another RBF-based algorithm on the test problems and on the automotive application.

  8. An efficient algorithm to compute marginal posterior genotype probabilities for every member of a pedigree with loops

    PubMed Central

    2009-01-01

    Background: Marginal posterior genotype probabilities need to be computed for genetic analyses such as genetic counseling in humans and selective breeding in animal and plant species. Methods: In this paper, we describe a peeling-based, deterministic, exact algorithm to efficiently compute genotype probabilities for every member of a pedigree with loops without recourse to junction-tree methods from graph theory. The efficiency in computing the likelihood by peeling comes from storing intermediate results in multidimensional tables called cutsets. Computing marginal genotype probabilities for individual i requires recomputing the likelihood for each of the possible genotypes of individual i. This can be done efficiently by storing intermediate results in two types of cutsets called anterior and posterior cutsets and reusing these intermediate results to compute the likelihood. Examples: A small example is used to illustrate the theoretical concepts discussed in this paper, and marginal genotype probabilities are computed at a monogenic disease locus for every member in a real cattle pedigree. PMID:19958551

  9. 30 CFR 204.1 - What is the purpose of this part?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES General Provisions § 204.1 What is the purpose of this part... marginal properties. This part does not apply to production from Indian leases, even if the Indian lease is within an agreement that qualifies as a marginal property. ...

  10. Marginal abatement cost curves for NOx incorporating both controls and alternative measures

    EPA Science Inventory

    A marginal abatement cost curve (MACC) traces out the efficient marginal abatement cost level for any aggregate emissions target when a least cost approach is implemented. In order for it to represent the efficient MAC level, all abatement opportunities across all sectors and loc...

  11. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  12. Digital Timing Recovery for High Speed Optical Drives

    NASA Astrophysics Data System (ADS)

    Ko, Seok Jun; Kim, Pan Soo; Choi, Hyung Jin; Lee, Jae-Wook

    2002-03-01

    A new digital timing recovery scheme for optical drive systems is presented. In comparative simulations using digital versatile disc (DVD) patterns with marginal input conditions, the proposed algorithm improves jitter variance by a factor of four and the signal-to-noise ratio (SNR) margin by 3 dB.

  13. IRT Item Parameter Recovery with Marginal Maximum Likelihood Estimation Using Loglinear Smoothing Models

    ERIC Educational Resources Information Center

    Casabianca, Jodi M.; Lewis, Charles

    2015-01-01

    Loglinear smoothing (LLS) estimates the latent trait distribution while making fewer assumptions about its form and maintaining parsimony, thus leading to more precise item response theory (IRT) item parameter estimates than standard marginal maximum likelihood (MML). This article provides the expectation-maximization algorithm for MML estimation…

  14. Applied Time Domain Stability Margin Assessment for Nonlinear Time-Varying Systems

    NASA Technical Reports Server (NTRS)

    Kiefer, J. M.; Johnson, M. D.; Wall, J. H.; Dominguez, A.

    2016-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation. This technique was implemented by using the Stability Aerospace Vehicle Analysis Tool (SAVANT) computer simulation to evaluate the stability of the SLS system with the Adaptive Augmenting Control (AAC) active and inactive along its ascent trajectory. The gains for which the vehicle maintains apparent time-domain stability define the gain margins, and the time delay similarly defines the phase margin. This method of extracting the control stability margins from the time-domain simulation is relatively straightforward and the resultant margins can be compared to the linearized system results. The sections herein describe the techniques employed to extract the time-domain margins, compare the results between these nonlinear and the linear methods, and provide explanations for observed discrepancies. The SLS ascent trajectory was simulated with SAVANT and the classical linear stability margins were evaluated at one second intervals. The linear analysis was performed with the AAC algorithm disabled to attain baseline stability margins. At each time point, the system was linearized about the current operating point using Simulink's built-in solver. Each linearized system in time was evaluated for its rigid-body gain margin (high frequency gain margin), rigid-body phase margin, and aero gain margin (low frequency gain margin) for each control axis. Using the stability margins derived from the baseline linearization approach, the time domain derived stability margins were determined by executing time domain simulations in which axis-specific incremental gain and phase adjustments were made to the nominal system about the expected neutral stability point at specific flight times. The baseline stability margin time histories were used to shift the system gain to various values around the zero margin point such that a precise amount of expected gain margin was maintained throughout flight. When assessing the gain margins, the gain was applied starting at the time point under consideration, thereafter following the variation in the margin found in the linear analysis. When assessing the rigid-body phase margin, a constant time delay was applied to the system starting at the time point under consideration. If the baseline stability margins were correctly determined via the linear analysis, the time domain simulation results should contain unstable behavior at certain gain and phase values. Examples will be shown from repeated simulations with variable added gain and phase lag. Faithfulness of margins calculated from the linear analysis to the nonlinear system will be demonstrated.
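
    The idea of sweeping gain in a time-domain simulation until the response diverges can be caricatured in a few lines. The sketch below uses a toy unstable second-order plant under proportional-derivative feedback as a stand-in for the SAVANT simulation; the dynamics, gains, and divergence test are invented purely for illustration.

      import numpy as np

      def closed_loop_diverges(gain_mult, steps=20000, dt=0.001):
          """Simulate a toy aerodynamically unstable plant under PD feedback with an
          added loop-gain multiplier; report whether the response diverges."""
          x, v = 1.0, 0.0                        # initial attitude error and rate
          kp, kd, instability = 40.0, 8.0, 9.0   # toy controller and plant constants
          for _ in range(steps):
              u = -gain_mult * (kp * x + kd * v)   # scaled control command
              a = instability * x + u             # unstable rigid-body-like dynamics
              v += a * dt
              x += v * dt
              if abs(x) > 1e3:
                  return True
          return False

      # Reduce the loop gain until the simulated response diverges; the last stable
      # multiplier gives a (toy) time-domain gain-reduction margin.
      for g in np.arange(1.0, 0.0, -0.05):
          if closed_loop_diverges(g):
              last_stable = g + 0.05
              print(f"destabilizes near multiplier {g:.2f}; "
                    f"gain-reduction margin ~ {20 * np.log10(1 / last_stable):.1f} dB")
              break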

  15. Estimates of the atmospheric parameters of M-type stars: a machine-learning perspective

    NASA Astrophysics Data System (ADS)

    Sarro, L. M.; Ordieres-Meré, J.; Bello-García, A.; González-Marcos, A.; Solano, E.

    2018-05-01

    Estimating the atmospheric parameters of M-type stars has been a difficult task due to the lack of simple diagnostics in the stellar spectra. We aim at uncovering good sets of predictive features of stellar atmospheric parameters (Teff, log (g), [M/H]) in spectra of M-type stars. We define two types of potential features (equivalent widths and integrated flux ratios) able to explain the atmospheric physical parameters. We search the space of feature sets using a genetic algorithm that evaluates solutions by their prediction performance in the framework of the BT-Settl library of stellar spectra. Thereafter, we construct eight regression models using different machine-learning techniques and compare their performances with those obtained using the classical χ2 approach and independent component analysis (ICA) coefficients. Finally, we validate the various alternatives using two sets of real spectra from the NASA Infrared Telescope Facility (IRTF) and Dwarf Archives collections. We find that the cross-validation errors are poor measures of the performance of regression models in the context of physical parameter prediction in M-type stars. For R ˜ 2000 spectra with signal-to-noise ratios typical of the IRTF and Dwarf Archives, feature selection with genetic algorithms or alternative techniques produces only marginal advantages with respect to representation spaces that are unconstrained in wavelength (full spectrum or ICA). We make available the atmospheric parameters for the two collections of observed spectra as online material.

  16. Estimating inverse probability weights using super learner when weight-model specification is unknown in a marginal structural Cox model context.

    PubMed

    Karim, Mohammad Ehsanul; Platt, Robert W

    2017-06-15

    Correct specification of the inverse probability weighting (IPW) model is necessary for consistent inference from a marginal structural Cox model (MSCM). In practical applications, researchers are typically unaware of the true specification of the weight model. Nonetheless, IPWs are commonly estimated using parametric models, such as the main-effects logistic regression model. In practice, assumptions underlying such models may not hold and data-adaptive statistical learning methods may provide an alternative. Many candidate statistical learning approaches are available in the literature. However, the optimal approach for a given dataset is impossible to predict. Super learner (SL) has been proposed as a tool for selecting an optimal learner from a set of candidates using cross-validation. In this study, we evaluate the usefulness of an SL in estimating IPW in four different MSCM simulation scenarios, in which we varied the true weight-model specification (linear and/or additive). Our simulations show that, in the presence of weight model misspecification, with a rich and diverse set of candidate algorithms, SL can generally offer a better alternative to the commonly used statistical learning approaches in terms of MSE as well as the coverage probabilities of the estimated effect in an MSCM. The findings from the simulation studies guided the application of the MSCM in a multiple sclerosis cohort from British Columbia, Canada (1995-2008), to estimate the impact of beta-interferon treatment in delaying disability progression. Copyright © 2017 John Wiley & Sons, Ltd.
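
    A rough sketch of plugging a cross-validated ensemble into inverse probability weight estimation is given below. It uses scikit-learn's stacking classifier as a loose stand-in for the SuperLearner machinery used in this literature, with a simulated nonlinear treatment model; the candidate learners, data, and truncation bounds are all hypothetical choices, not the study's setup.

      import numpy as np
      from sklearn.ensemble import StackingClassifier, RandomForestClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(1)
      n = 2000
      X = rng.normal(size=(n, 3))
      # True treatment model is nonlinear, so a main-effects logistic model is misspecified
      p_true = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] ** 2 + X[:, 1] - 0.5)))
      A = rng.binomial(1, p_true)

      # Cross-validated ensemble of candidate learners (a stand-in for a super learner)
      sl = StackingClassifier(
          estimators=[("lr", LogisticRegression(max_iter=1000)),
                      ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                      ("tree", DecisionTreeClassifier(max_depth=4))],
          final_estimator=LogisticRegression(max_iter=1000), cv=5)
      sl.fit(X, A)

      p_hat = np.clip(sl.predict_proba(X)[:, 1], 0.01, 0.99)   # truncated propensity scores
      ipw = np.where(A == 1, 1.0 / p_hat, 1.0 / (1.0 - p_hat))  # unstabilized inverse probability weights
      print("mean IPW:", ipw.mean().round(3))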

  17. 30 CFR 204.2 - What definitions apply to this part?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES General Provisions § 204.2 What definitions apply to this... preceding the calendar year for which you take or request marginal property relief. For example, if you... equivalent production means the total of all oil and gas production for the marginal property, stated in BOE...

  18. 30 CFR 204.209 - What if a property ceases to qualify for relief obtained under this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and...) A marginal property must qualify for relief under this subpart for each calendar year based on... of your interest in a marginal property during the calendar year, your relief terminates as of the...

  19. Estimation of Contextual Effects through Nonlinear Multilevel Latent Variable Modeling with a Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Yang, Ji Seung; Cai, Li

    2014-01-01

    The main purpose of this study is to improve estimation efficiency in obtaining maximum marginal likelihood estimates of contextual effects in the framework of nonlinear multilevel latent variable model by adopting the Metropolis-Hastings Robbins-Monro algorithm (MH-RM). Results indicate that the MH-RM algorithm can produce estimates and standard…

  20. The Prognostic Value of Varying Definitions of Positive Resection Margin in Patients with Colorectal Cancer Liver Metastases.

    PubMed

    Wang, Jane; Margonis, Georgios Antonios; Amini, Neda; Andreatos, Nikolaos; Yuan, Chunhui; Damaskos, Christos; Antoniou, Efstathios; Garmpis, Nikolaos; Buettner, Stefan; Barbon, Carlotta; Deshwar, Amar; He, Jin; Burkhart, Richard; Pawlik, Timothy M; Wolfgang, Christopher L; Weiss, Matthew J

    2018-04-09

    Varying definitions of resection margin clearance are currently employed among patients with colorectal cancer liver metastases (CRLM). Specifically, a microscopically positive margin (R1) has alternatively been equated with an involved margin (margin width = 0 mm) or a margin width < 1 mm. Consequently, patients with a margin width of 0-1 mm (sub-mm) are inconsistently classified in either the R0 or R1 categories, thus obscuring the prognostic implications of sub-mm margins. Six hundred thirty-three patients who underwent resection of CRLM were identified. Both R1 definitions were alternatively employed and multivariable analysis was used to determine the predictive power of each definition, as well as the prognostic implications of a sub-mm margin. Five hundred thirty-nine (85.2%) patients had a margin width ≥ 1 mm, 42 had a sub-mm margin width, and 52 had an involved margin (0 mm). A margin width ≥ 1 mm was associated with improved survival vs. a sub-mm margin (65 vs. 36 months; P = 0.03) or an involved margin (65 vs. 33 months; P < 0.001). No significant difference in survival was detected between patients with involved vs. sub-mm margins (P = 0.31). A sub-mm margin and an involved margin were both independent predictors of worse OS (HR 1.66, 1.04-2.67; P = 0.04, and HR 2.14, 1.46-3.16; P < 0.001, respectively) in multivariable analysis. Importantly, after combining the two definitions, patients with either an involved margin or a sub-mm margin were associated with worse OS in multivariable analysis (HR 1.94, 1.41-2.65; P < 0.001). Patients with involved or sub-mm margins demonstrated a similar inferior OS vs. patients with a margin width > 1 mm. Consequently, a uniform definition of R1 as a margin width < 1 mm should perhaps be employed by future studies.

  1. A general program to compute the multivariable stability margin for systems with parametric uncertainty

    NASA Technical Reports Server (NTRS)

    Sanchez Pena, Ricardo S.; Sideris, Athanasios

    1988-01-01

    A computer program implementing an algorithm for computing the multivariable stability margin to check the robust stability of feedback systems with real parametric uncertainty is proposed. The authors present in some detail important aspects of the program. An example is presented using a lateral-directional control system.

  2. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction

    PubMed Central

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. PMID:23271835

  3. Multirate sampled-data yaw-damper and modal suppression system design

    NASA Technical Reports Server (NTRS)

    Berg, Martin C.; Mason, Gregory S.

    1990-01-01

    A multirate control law synthesis algorithm based on an infinite-time quadratic cost function was developed, along with a method for analyzing the robustness of multirate systems. A generalized multirate sampled-data control law structure (GMCLS) was introduced. A new infinite-time-based parameter optimization multirate sampled-data control law synthesis method and solution algorithm were developed. A singular-value-based method for determining gain and phase margins for multirate systems was also developed. The finite-time-based parameter optimization multirate sampled-data control law synthesis algorithm originally intended to be applied to the aircraft problem was instead demonstrated by application to a simpler problem involving the control of the tip position of a two-link robot arm. The GMCLS, the infinite-time-based parameter optimization multirate control law synthesis method and solution algorithm, and the singular-value-based method for determining gain and phase margins were all demonstrated by application to the aircraft control problem originally proposed for this project.

  4. 30 CFR 204.202 - What is the cumulative royalty reports and payments relief option?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... OF THE INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and... produced from the marginal property (not just your share of the production) is 1,000 BOE or less during the... allowances on Form MMS-2014 on the same annual basis as the royalties for your marginal property production...

  5. Flight assessment of the onboard propulsion system model for the Performance Seeking Control algorithm on an F-15 aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.; Schkolnik, Gerard S.

    1995-01-01

    Performance Seeking Control (PSC), an onboard, adaptive, real-time optimization algorithm, relies upon an onboard propulsion system model. Flight results illustrated propulsion system performance improvements as calculated by the model. These improvements were subject to uncertainty arising from modeling error. Thus to quantify uncertainty in the PSC performance improvements, modeling accuracy must be assessed. A flight test approach to verify PSC-predicted increases in thrust (FNP) and absolute levels of fan stall margin is developed and applied to flight test data. Application of the excess thrust technique shows that increases of FNP agree to within 3 percent of full-scale measurements for most conditions. Accuracy to these levels is significant because uncertainty bands may now be applied to the performance improvements provided by PSC. Assessment of PSC fan stall margin modeling accuracy was completed with analysis of in-flight stall tests. Results indicate that the model overestimates the stall margin by between 5 to 10 percent. Because PSC achieves performance gains by using available stall margin, this overestimation may represent performance improvements to be recovered with increased modeling accuracy. Assessment of thrust and stall margin modeling accuracy provides a critical piece for a comprehensive understanding of PSC's capabilities and limitations.

  6. Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers

    NASA Astrophysics Data System (ADS)

    Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen

    2017-04-01

    Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
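
    The alternating direction method of multipliers step can be illustrated on a generic ℓ1-regularized least-squares problem (the actual STAP formulation operates on complex space-time filter weights, which this sketch does not reproduce). This is a standard scaled-form ADMM outline, not the authors' code; the problem sizes and regularization parameter are arbitrary.

      import numpy as np

      def soft_threshold(v, t):
          # Elementwise soft-thresholding, the proximal operator of the l1 norm
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def admm_lasso(A, b, lam, rho=1.0, iters=200):
          """Scaled-form ADMM for  min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
          m, n = A.shape
          x = z = u = np.zeros(n)
          AtA_rho = A.T @ A + rho * np.eye(n)   # fixed across iterations
          Atb = A.T @ b
          for _ in range(iters):
              x = np.linalg.solve(AtA_rho, Atb + rho * (z - u))   # quadratic subproblem
              z = soft_threshold(x + u, lam / rho)                # sparsity subproblem
              u = u + x - z                                       # dual update
          return z

      rng = np.random.default_rng(0)
      A = rng.normal(size=(100, 50))
      x_true = np.zeros(50)
      x_true[:5] = rng.normal(size=5)              # sparse ground truth
      b = A @ x_true + 0.01 * rng.normal(size=100)
      x_hat = admm_lasso(A, b, lam=0.1)
      print("coefficients estimated as nonzero:", int(np.sum(np.abs(x_hat) > 1e-3)))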

  7. A Search Algorithm for Generating Alternative Process Plans in Flexible Manufacturing System

    NASA Astrophysics Data System (ADS)

    Tehrani, Hossein; Sugimura, Nobuhiro; Tanimizu, Yoshitaka; Iwamura, Koji

    The capabilities and complexity of manufacturing systems are increasing, driving efforts toward an integrated manufacturing environment. Availability of alternative process plans is a key factor for the integration of design, process planning and scheduling. This paper describes an algorithm for the generation of alternative process plans by extending the existing framework of process plan networks. A class diagram is introduced for generating process plans and process plan networks from the viewpoint of integrated process planning and scheduling systems. An incomplete search algorithm is developed for generating and searching the process plan networks. The benefit of this algorithm is that the whole process plan network does not have to be generated before the search algorithm starts. The algorithm is applicable to very large process plan networks and can also search wide areas of the network based on user requirements. It can generate alternative process plans and select a suitable one based on the objective functions.

  8. Estimation of Model's Marginal likelihood Using Adaptive Sparse Grid Surrogates in Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Zeng, X.

    2015-12-01

    A large number of model executions are required to obtain alternative conceptual models' predictions and their posterior probabilities in Bayesian model averaging (BMA). The posterior model probability is estimated from the model's marginal likelihood and prior probability. The heavy computational burden hinders the implementation of BMA prediction, especially for elaborate marginal likelihood estimators. To overcome this burden, an adaptive sparse grid (SG) stochastic collocation method is used to build surrogates for alternative conceptual models through a numerical experiment on a synthetic groundwater model. BMA predictions depend on model posterior weights (or marginal likelihoods), and this study also evaluated four marginal likelihood estimators, including the arithmetic mean estimator (AME), harmonic mean estimator (HME), stabilized harmonic mean estimator (SHME), and thermodynamic integration estimator (TIE). The results demonstrate that TIE is accurate in estimating conceptual models' marginal likelihoods, and BMA-TIE has better predictive performance than the other BMA predictions. TIE is also highly stable: marginal likelihoods repeatedly estimated by TIE show significantly less variability than those obtained with the other estimators. In addition, the SG surrogates are efficient in facilitating BMA predictions, especially for BMA-TIE. The number of model executions needed for building surrogates is 4.13%, 6.89%, 3.44%, and 0.43% of the model executions required by BMA-AME, BMA-HME, BMA-SHME, and BMA-TIE, respectively.
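
    For concreteness, the arithmetic mean and harmonic mean estimators mentioned above can be written down for a toy conjugate model whose marginal likelihood is known in closed form; the thermodynamic integration and sparse-grid surrogate pieces are beyond this sketch, and the observation value is arbitrary.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(0)
      y = 1.2                                              # single observation: y ~ N(theta, 1), theta ~ N(0, 1)
      true_ml = norm.pdf(y, loc=0.0, scale=np.sqrt(2.0))   # closed-form marginal likelihood: y ~ N(0, 2)

      n = 200_000
      prior_draws = rng.normal(0.0, 1.0, n)
      post_draws = rng.normal(y / 2.0, np.sqrt(0.5), n)    # exact posterior for this conjugate model

      lik_prior = norm.pdf(y, loc=prior_draws, scale=1.0)
      lik_post = norm.pdf(y, loc=post_draws, scale=1.0)

      ame = lik_prior.mean()                    # arithmetic mean estimator (prior samples)
      hme = 1.0 / np.mean(1.0 / lik_post)       # harmonic mean estimator (posterior samples)
      print(f"true = {true_ml:.4f}, AME = {ame:.4f}, HME = {hme:.4f}")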

  9. An Alternative Route to Teaching Fraction Division: Abstraction of Common Denominator Algorithm

    ERIC Educational Resources Information Center

    Zembat, Ismail Özgür

    2015-01-01

    From a curricular stand point, the traditional invert and multiply algorithm for division of fractions provides few affordances for linking to a rich understanding of fractions. On the other hand, an alternative algorithm, called common denominator algorithm, has many such affordances. The current study serves as an argument for shifting…
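
    As a concrete illustration of the common denominator algorithm discussed in this record (the example fractions are arbitrary), the sketch below rewrites both fractions over a common denominator and divides the numerators, then checks the result against invert-and-multiply.

      from fractions import Fraction

      a, b = Fraction(3, 4), Fraction(2, 3)

      # Common denominator algorithm: rewrite both over the denominator 4*3 = 12,
      # so 3/4 ÷ 2/3 becomes 9/12 ÷ 8/12, and the answer is the ratio of the numerators.
      num_a = a.numerator * b.denominator      # 9
      num_b = b.numerator * a.denominator      # 8
      result_common_denominator = Fraction(num_a, num_b)

      # Traditional invert-and-multiply, for comparison
      result_invert_multiply = a * Fraction(b.denominator, b.numerator)

      print(result_common_denominator, result_invert_multiply, a / b)   # all three agree: 9/8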

  10. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification

    PubMed Central

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs). PMID:26985826

  11. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    PubMed

    Wen, Cuihong; Zhang, Jing; Rebelo, Ana; Cheng, Fanyong

    2016-01-01

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).
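
    The margin-distribution idea behind the LDM can be illustrated by computing the margin mean and margin variance of a fixed linear classifier on toy data. The actual LDM jointly optimizes these quantities together with a hinge-style loss, which this sketch does not attempt; the hyperplane and data below are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(+1.0, 1.0, size=(50, 2)),
                     rng.normal(-1.0, 1.0, size=(50, 2))])
      y = np.hstack([np.ones(50), -np.ones(50)])

      w, b = np.array([1.0, 1.0]), 0.0                  # some candidate separating hyperplane
      margins = y * (X @ w + b) / np.linalg.norm(w)     # signed geometric margins

      # LDM-style statistics: the objective rewards a large mean margin and a small margin
      # variance, rather than only the minimum margin used by a classical SVM.
      print("margin mean:", margins.mean().round(3))
      print("margin variance:", margins.var().round(3))
      print("minimum (SVM-style) margin:", margins.min().round(3))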

  12. Fast Identification of Biological Pathways Associated with a Quantitative Trait Using Group Lasso with Overlaps

    PubMed Central

    Silver, Matt; Montana, Giovanni

    2012-01-01

    Where causal SNPs (single nucleotide polymorphisms) tend to accumulate within biological pathways, the incorporation of prior pathways information into a statistical model is expected to increase the power to detect true associations in a genetic association study. Most existing pathways-based methods rely on marginal SNP statistics and do not fully exploit the dependence patterns among SNPs within pathways. We use a sparse regression model, with SNPs grouped into pathways, to identify causal pathways associated with a quantitative trait. Notable features of our “pathways group lasso with adaptive weights” (P-GLAW) algorithm include the incorporation of all pathways in a single regression model, an adaptive pathway weighting procedure that accounts for factors biasing pathway selection, and the use of a bootstrap sampling procedure for the ranking of important pathways. P-GLAW takes account of the presence of overlapping pathways and uses a novel combination of techniques to optimise model estimation, making it fast to run, even on whole genome datasets. In a comparison study with an alternative pathways method based on univariate SNP statistics, our method demonstrates high sensitivity and specificity for the detection of important pathways, showing the greatest relative gains in performance where marginal SNP effect sizes are small. PMID:22499682
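
    The group-lasso penalty underlying P-GLAW can be sketched via block soft-thresholding, the proximal step at the core of most group-lasso solvers. The grouping of SNPs into pathways, the adaptive weights, and the overlap handling from the paper are not reproduced here; the coefficient vector and groups below are toy values.

      import numpy as np

      def group_soft_threshold(beta, groups, t):
          """Proximal operator of the group-lasso penalty: shrink each group's coefficient
          block toward zero, zeroing blocks whose Euclidean norm is below t."""
          out = np.zeros_like(beta)
          for idx in groups:
              norm = np.linalg.norm(beta[idx])
              if norm > t:
                  out[idx] = (1.0 - t / norm) * beta[idx]
          return out

      beta = np.array([0.8, -0.6, 0.05, -0.02, 1.5, 0.0])
      groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]   # e.g. SNPs grouped by pathway
      print(group_soft_threshold(beta, groups, t=0.3))
      # The second group (small coefficients) is driven exactly to zero; the other blocks shrink.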

  13. Salinity tolerance of germinating alternative oilseeds

    USDA-ARS?s Scientific Manuscript database

    Integrating oilseed crops into rotations can improve soil health benefits, nutrient retention, and pollinator provisions. Field margins represent areas where incorporation of oilseeds is feasible. However in the northern Great Plains, field margins can oftentimes be areas of saline soil, which can i...

  14. Deletion Diagnostics for Alternating Logistic Regressions

    PubMed Central

    Preisser, John S.; By, Kunthel; Perin, Jamie; Qaqish, Bahjat F.

    2013-01-01

    Deletion diagnostics are introduced for the regression analysis of clustered binary outcomes estimated with alternating logistic regressions, an implementation of generalized estimating equations (GEE) that estimates regression coefficients in a marginal mean model and in a model for the intracluster association given by the log odds ratio. The diagnostics are developed within an estimating equations framework that recasts the estimating functions for association parameters based upon conditional residuals into equivalent functions based upon marginal residuals. Extensions of earlier work on GEE diagnostics follow directly, including computational formulae for one-step deletion diagnostics that measure the influence of a cluster of observations on the estimated regression parameters and on the overall marginal mean or association model fit. The diagnostic formulae are evaluated with simulation studies and with an application concerning an assessment of factors associated with health maintenance visits in primary care medical practices. The application and the simulations demonstrate that the proposed cluster-deletion diagnostics for alternating logistic regressions are good approximations of their exact fully iterated counterparts. PMID:22777960

  15. Simultaneous and semi-alternating projection algorithms for solving split equality problems.

    PubMed

    Dong, Qiao-Li; Jiang, Dan

    2018-01-01

    In this article, we first introduce two simultaneous projection algorithms for solving the split equality problem by using a new choice of the stepsize, and then propose two semi-alternating projection algorithms. The weak convergence of the proposed algorithms is analyzed under standard conditions. As applications, we extend the results to solve the split feasibility problem. Finally, a numerical example is presented to illustrate the efficiency and advantage of the proposed algorithms.

  16. Retargeted Least Squares Regression Algorithm.

    PubMed

    Zhang, Xu-Yao; Wang, Lingfeng; Xiang, Shiming; Liu, Cheng-Lin

    2015-09-01

    This brief presents a framework of retargeted least squares regression (ReLSR) for multicategory classification. The core idea is to directly learn the regression targets from the data rather than using the traditional zero-one matrix as regression targets. The learned target matrix can guarantee a large margin constraint for the requirement of correct classification of each data point. Compared with the traditional least squares regression (LSR) and a recently proposed discriminative LSR model, ReLSR is much more accurate in measuring the classification error of the regression model. Furthermore, ReLSR is a single and compact model, hence there is no need to train two-class (binary) machines that are independent of each other. The convex optimization problem of ReLSR is solved elegantly and efficiently with an alternating procedure including regression and retargeting as substeps. The experimental evaluation over a range of databases demonstrates the validity of our method.

  17. Impact of respiratory-correlated CT sorting algorithms on the choice of margin definition for free-breathing lung radiotherapy treatments.

    PubMed

    Thengumpallil, Sheeba; Germond, Jean-François; Bourhis, Jean; Bochud, François; Moeckli, Raphaël

    2016-06-01

    To investigate the impact of Toshiba phase- and amplitude-sorting algorithms on margin strategies for free-breathing lung radiotherapy treatments in the presence of breathing variations, a 4D CT of a sphere inside a dynamic thorax phantom was acquired and reconstructed with both the phase- and amplitude-sorting algorithms. The phantom was moved by reproducing amplitude, frequency, and mixed amplitude and frequency variations. Artefact analysis was performed for the Mid-Ventilation and ITV-based strategies on the images reconstructed by the phase- and amplitude-sorting algorithms. The target volume deviation was assessed by comparing the target volume acquired during irregular motion to the volume acquired during regular motion. The amplitude-sorting algorithm shows reduced artefacts for amplitude-only variations, while the phase-sorting algorithm does so for frequency-only variations; for combined amplitude and frequency variations, both algorithms perform similarly. Most of the artefacts are blurring and incomplete structures. We found larger artefacts and volume differences for the Mid-Ventilation strategy than for the ITV strategy, resulting in a higher relative difference in the surface distortion value, ranging from 4.1% to 14.6%. The amplitude-sorting algorithm is superior to the phase-sorting algorithm in reducing motion artefacts for amplitude variations, while phase-sorting is superior for frequency variations. A proper choice of 4D CT sorting algorithm is important to reduce motion artefacts, especially if the Mid-Ventilation strategy is used. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. TH-A-19A-06: Site-Specific Comparison of Analytical and Monte Carlo Based Dose Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schuemann, J; Grassberger, C; Paganetti, H

    2014-06-15

    Purpose: To investigate the impact of complex patient geometries on the capability of analytical dose calculation algorithms to accurately predict dose distributions and to verify currently used uncertainty margins in proton therapy. Methods: Dose distributions predicted by an analytical pencil-beam algorithm were compared with Monte Carlo simulations (MCS) using TOPAS. 79 complete patient treatment plans were investigated for 7 disease sites (liver, prostate, breast, medulloblastoma spine and whole brain, lung and head and neck). A total of 508 individual passively scattered treatment fields were analyzed for field-specific properties. Comparisons based on target coverage indices (EUD, D95, D90 and D50) were performed. Range differences were estimated for the distal position of the 90% dose level (R90) and the 50% dose level (R50). Two-dimensional distal dose surfaces were calculated and the root mean square differences (RMSD), average range difference (ARD) and average distal dose degradation (ADD), the distance between the distal position of the 80% and 20% dose levels (R80-R20), were analyzed. Results: We found target coverage indices calculated by TOPAS to generally be around 1–2% lower than predicted by the analytical algorithm. Differences in R90 predicted by TOPAS and the planning system can be larger than currently applied range margins in proton therapy for small regions distal to the target volume. We estimate new site-specific range margins (R90) for analytical dose calculations considering total range uncertainties and uncertainties from dose calculation alone based on the RMSD. Our results demonstrate that a reduction of currently used uncertainty margins is feasible for liver, prostate and whole brain fields even without introducing MC dose calculations. Conclusion: Analytical dose calculation algorithms predict dose distributions within clinical limits for more homogeneous patient sites (liver, prostate, whole brain). However, we recommend treatment plan verification using Monte Carlo simulations for patients with complex geometries.

  19. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    NASA Astrophysics Data System (ADS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality.

  20. Graph embedding and extensions: a general framework for dimensionality reduction.

    PubMed

    Yan, Shuicheng; Xu, Dong; Zhang, Benyu; Zhang, Hong-Jiang; Yang, Qiang; Lin, Stephen

    2007-01-01

    Over the past few decades, a large family of algorithms - supervised or unsupervised; stemming from statistics or geometry theory - has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding to unify them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding or its linear/kernel/tensor extension of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or a penalty graph that characterizes a statistical or geometric property that should be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. By utilizing this framework as a tool, we propose a new supervised dimensionality reduction algorithm called Marginal Fisher Analysis in which the intrinsic graph characterizes the intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes the interclass separability. We show that MFA effectively overcomes the limitations of the traditional Linear Discriminant Analysis algorithm due to data distribution assumptions and available projection directions. Real face recognition experiments show the superiority of our proposed MFA in comparison to LDA, also for corresponding kernel and tensor extensions.
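
    A compact sketch of the graph-embedding view of MFA described above: build an intrinsic graph over same-class nearest neighbors and a penalty graph over nearby cross-class (marginal) pairs, then solve the resulting generalized eigenvalue problem. This is a simplified illustration, not the authors' code; the neighbor counts, regularization, and toy data are assumptions.

      import numpy as np
      from scipy.linalg import eigh

      def knn_graph(D, mask, k):
          """Symmetric adjacency connecting each point to its k nearest neighbors among allowed pairs."""
          n = D.shape[0]
          W = np.zeros((n, n))
          for i in range(n):
              d = np.where(mask[i], D[i], np.inf)
              d[i] = np.inf
              for j in np.argsort(d)[:k]:
                  if np.isfinite(d[j]):
                      W[i, j] = W[j, i] = 1.0
          return W

      def mfa_projection(X, y, k1=3, k2=5, dim=1):
          D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
          same = y[:, None] == y[None, :]
          Wi = knn_graph(D, same, k1)      # intrinsic graph: same-class neighbors (compactness)
          Wp = knn_graph(D, ~same, k2)     # penalty graph: marginal cross-class pairs (separability)
          Li = np.diag(Wi.sum(1)) - Wi
          Lp = np.diag(Wp.sum(1)) - Wp
          S_intra = X.T @ Li @ X + 1e-6 * np.eye(X.shape[1])
          S_penalty = X.T @ Lp @ X + 1e-6 * np.eye(X.shape[1])
          # Minimize intraclass compactness relative to marginal separability
          vals, vecs = eigh(S_intra, S_penalty)
          return vecs[:, :dim]

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(0.0, 1.0, (30, 4)), rng.normal(2.0, 1.0, (30, 4))])
      y = np.repeat([0, 1], 30)
      print(mfa_projection(X, y).ravel())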

  1. Convergence analysis of the alternating RGLS algorithm for the identification of the reduced complexity Volterra model.

    PubMed

    Laamiri, Imen; Khouaja, Anis; Messaoud, Hassani

    2015-03-01

    In this paper we provide a convergence analysis of the alternating RGLS (Recursive Generalized Least Squares) algorithm used for the identification of the reduced-complexity Volterra model describing stochastic non-linear systems. The reduced Volterra model used is the 3rd-order SVD-PARAFAC-Volterra model obtained using the Singular Value Decomposition (SVD) and the Parallel Factor (PARAFAC) tensor decomposition of the quadratic and cubic kernels, respectively, of the classical Volterra model. The Alternating RGLS (ARGLS) algorithm consists of executing the classical RGLS algorithm in an alternating way. The ARGLS convergence was proved using the Ordinary Differential Equation (ODE) method. It is noted that the algorithm convergence cannot be ensured when the disturbance acting on the system to be identified has specific features. The ARGLS algorithm is tested in simulations on a numerical example satisfying the determined convergence conditions. To highlight the merits of the proposed algorithm, we compare it with the classical Alternating Recursive Least Squares (ARLS) algorithm presented in the literature. The comparison is carried out on a non-linear satellite channel and a benchmark CSTR (Continuous Stirred Tank Reactor) system. Moreover, the efficiency of the proposed identification approach is demonstrated on an experimental Communicating Two Tank System (CTTS). Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Semisupervised kernel marginal Fisher analysis for face recognition.

    PubMed

    Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun

    2013-01-01

    Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  3. ECG Denoising Using Marginalized Particle Extended Kalman Filter With an Automatic Particle Weighting Strategy.

    PubMed

    Hesar, Hamed Danandeh; Mohebbi, Maryam

    2017-05-01

    In this paper, a model-based Bayesian filtering framework called the "marginalized particle-extended Kalman filter (MP-EKF) algorithm" is proposed for electrocardiogram (ECG) denoising. This algorithm does not have the extended Kalman filter (EKF) shortcoming in handling non-Gaussian nonstationary situations because of its nonlinear framework. In addition, it has less computational complexity compared with the particle filter. This filter improves ECG denoising performance by implementing a marginalized particle filter framework while reducing its computational complexity using the EKF framework. An automatic particle weighting strategy is also proposed here that controls the reliance of our framework on the acquired measurements. We evaluated the proposed filter on several normal ECGs selected from the MIT-BIH normal sinus rhythm database. To do so, artificial white Gaussian and colored noises as well as nonstationary real muscle artifact (MA) noise over a range of low SNRs from 10 to -5 dB were added to these normal ECG segments. The benchmark methods were the EKF and extended Kalman smoother (EKS) algorithms, which are the first model-based Bayesian algorithms introduced in the field of ECG denoising. From an SNR viewpoint, the experiments showed that in the presence of Gaussian white noise, the proposed framework outperforms the EKF and EKS algorithms at lower input SNRs where the measurements and state model are not reliable. Owing to its nonlinear framework and particle weighting strategy, the proposed algorithm attained better results at all input SNRs in non-Gaussian nonstationary situations (such as the presence of pink noise, brown noise, and real MA). In addition, the impact of the proposed filtering method on the distortion of diagnostic features of the ECG was investigated and compared with the EKF/EKS methods using an ECG diagnostic distortion measure called the "Multi-Scale Entropy Based Weighted Distortion Measure" or MSEWPRD. The results revealed that our proposed algorithm had the lowest MSEWPRD for all noise types at low input SNRs. Therefore, the morphology and diagnostic information of ECG signals were much better conserved compared with the EKF/EKS frameworks, especially in non-Gaussian nonstationary situations.

  4. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    Results are presented from the evaluation of the performance seeking control (PSC) optimization algorithm developed by Smith et al. (1990) for F-15 aircraft, which optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. Comparisons are presented between the load cell measurements, PSC onboard model thrust calculations, and posttest state variable model computations. Actual performance improvements using the PSC algorithm are presented for its various modes. The results of using the PSC algorithm are compared with similar test case results using the HIDEC algorithm.

  5. Performance metrics and variance partitioning reveal sources of uncertainty in species distribution models

    USGS Publications Warehouse

    Watling, James I.; Brandt, Laura A.; Bucklin, David N.; Fujisaki, Ikuko; Mazzotti, Frank J.; Romañach, Stephanie; Speroterra, Carolina

    2015-01-01

    Species distribution models (SDMs) are widely used in basic and applied ecology, making it important to understand sources and magnitudes of uncertainty in SDM performance and predictions. We analyzed SDM performance and partitioned variance among prediction maps for 15 rare vertebrate species in the southeastern USA using all possible combinations of seven potential sources of uncertainty in SDMs: algorithms, climate datasets, model domain, species presences, variable collinearity, CO2 emissions scenarios, and general circulation models. The choice of modeling algorithm was the greatest source of uncertainty in SDM performance and prediction maps, with some additional variation in performance associated with the comprehensiveness of the species presences used for modeling. Other sources of uncertainty that have received attention in the SDM literature such as variable collinearity and model domain contributed little to differences in SDM performance or predictions in this study. Predictions from different algorithms tended to be more variable at northern range margins for species with more northern distributions, which may complicate conservation planning at the leading edge of species' geographic ranges. The clear message emerging from this work is that researchers should use multiple algorithms for modeling rather than relying on predictions from a single algorithm, invest resources in compiling a comprehensive set of species presences, and explicitly evaluate uncertainty in SDM predictions at leading range margins.

  6. Self-adaptive Solution Strategies

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1984-01-01

    The development of enhancements to current-generation nonlinear finite element algorithms of the incremental Newton-Raphson type is overviewed. Work is introduced on alternative formulations that lead to improved algorithms avoiding the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.

  7. Independent Component Analysis of Textures

    NASA Technical Reports Server (NTRS)

    Manduchi, Roberto; Portilla, Javier

    2000-01-01

    A common method for texture representation is to use the marginal probability densities over the outputs of a set of multi-orientation, multi-scale filters as a description of the texture. We propose a technique, based on Independent Components Analysis, for choosing the set of filters that yield the most informative marginals, meaning that the product over the marginals most closely approximates the joint probability density function of the filter outputs. The algorithm is implemented using a steerable filter space. Experiments involving both texture classification and synthesis show that compared to Principal Components Analysis, ICA provides superior performance for modeling of natural and synthetic textures.
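
    As a loose illustration of the idea of choosing filters whose marginals are maximally informative, the following sketch applies FastICA to the joint responses of a small filter bank; the Gaussian-derivative filters and the random placeholder texture are assumptions standing in for the steerable filters and real textures used in the paper.

    ```python
    # Minimal sketch: ICA over filter-bank responses so the transformed marginals
    # are as independent as possible (Gaussian-derivative bank as a stand-in).
    import numpy as np
    from scipy import ndimage
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(1)
    texture = rng.standard_normal((128, 128))          # placeholder texture image

    # Filter bank: x- and y-derivatives at two scales
    responses = []
    for sigma in (1.0, 2.0):
        for order in ((0, 1), (1, 0)):
            responses.append(ndimage.gaussian_filter(texture, sigma, order=order).ravel())
    X = np.stack(responses, axis=1)                    # (n_pixels, n_filters)

    ica = FastICA(n_components=X.shape[1], random_state=0)
    S = ica.fit_transform(X)                           # responses with (near-)independent marginals
    print(S.shape, ica.mixing_.shape)
    ```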

  8. Tumour border configuration in colorectal cancer: proposal for an alternative scoring system based on the percentage of infiltrating margin.

    PubMed

    Karamitopoulou, Eva; Zlobec, Inti; Koelzer, Viktor Hendrik; Langer, Rupert; Dawson, Heather; Lugli, Alessandro

    2015-10-01

    Information on tumour border configuration (TBC) in colorectal cancer (CRC) is currently not included in most pathology reports, owing to lack of reproducibility and/or established evaluation systems. The aim of this study was to investigate whether an alternative scoring system based on the percentage of the infiltrating component may represent a reliable method for assessing TBC. Two hundred and fifteen CRCs with complete clinicopathological data were evaluated by two independent observers, both 'traditionally' by assigning the tumours into pushing/infiltrating/mixed categories, and alternatively by scoring the percentage of infiltrating margin. With the pushing/infiltrating/mixed pattern method, interobserver agreement (IOA) was moderate (κ = 0.58), whereas with the percentage of infiltrating margins method, IOA was excellent (intraclass correlation coefficient of 0.86). A higher percentage of infiltrating margin correlated with adverse features such as higher grade (P = 0.0025), higher pT (P = 0.0007), pN (P = 0.0001) and pM classification (P = 0.0063), high-grade tumour budding (P < 0.0001), lymphatic invasion (P < 0.0001), vascular invasion (P = 0.0032), and shorter survival (P = 0.0008), and was significantly associated with an increased probability of lymph node metastasis (P < 0.001). Information on TBC gives additional prognostic value to pathology reports on CRC. The novel proposed scoring system, by using the percentage of infiltrating margin, outperforms the 'traditional' way of reporting TBC. Additionally, it is reproducible and simple to apply, and can therefore be easily integrated into daily diagnostic practice. © 2015 John Wiley & Sons Ltd.

  9. Max-margin weight learning for medical knowledge network.

    PubMed

    Jiang, Jingchi; Xie, Jing; Zhao, Chao; Su, Jia; Guan, Yi; Yu, Qiubin

    2018-03-01

    The application of medical knowledge strongly affects the performance of intelligent diagnosis, and the method of learning the weights of medical knowledge plays a substantial role in probabilistic graphical models (PGMs). The purpose of this study is to investigate a discriminative weight-learning method based on a medical knowledge network (MKN). We propose a training model called the maximum margin medical knowledge network (M³KN), which is strictly derived for calculating the weight of medical knowledge. Using the definition of a reasonable margin, the weight learning can be transformed into a margin optimization problem. To solve the optimization problem, we adopt a sequential minimal optimization (SMO) algorithm and the clique property of a Markov network. Ultimately, M³KN not only incorporates the inference ability of PGMs but also deals with high-dimensional logic knowledge. The experimental results indicate that M³KN obtains a higher F-measure score than the maximum likelihood learning algorithm of MKN for both Chinese Electronic Medical Records (CEMRs) and Blood Examination Records (BERs). Furthermore, the proposed approach is obviously superior to some classical machine learning algorithms for medical diagnosis. To adequately manifest the importance of domain knowledge, we numerically verify that the diagnostic accuracy of M³KN is gradually improved as the number of learned CEMRs, which contain important medical knowledge, increases. Our experimental results show that the proposed method performs reliably for learning the weights of medical knowledge. M³KN outperforms other existing methods by achieving an F-measure of 0.731 for CEMRs and 0.4538 for BERs. This further illustrates that M³KN can facilitate the investigations of intelligent healthcare. Copyright © 2018 Elsevier B.V. All rights reserved.

  10. The Local Minima Problem in Hierarchical Classes Analysis: An Evaluation of a Simulated Annealing Algorithm and Various Multistart Procedures

    ERIC Educational Resources Information Center

    Ceulemans, Eva; Van Mechelen, Iven; Leenen, Iwin

    2007-01-01

    Hierarchical classes models are quasi-order retaining Boolean decomposition models for N-way N-mode binary data. To fit these models to data, rationally started alternating least squares (or, equivalently, alternating least absolute deviations) algorithms have been proposed. Extensive simulation studies showed that these algorithms succeed quite…

  11. Estimation of a Ramsay-Curve Item Response Theory Model by the Metropolis-Hastings Robbins-Monro Algorithm. CRESST Report 834

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2013-01-01

    In Ramsay curve item response theory (RC-IRT, Woods & Thissen, 2006) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's (1981) EM algorithm, which yields maximum marginal likelihood estimates. This method, however,…

  12. Estimation of a Ramsay-Curve Item Response Theory Model by the Metropolis-Hastings Robbins-Monro Algorithm

    ERIC Educational Resources Information Center

    Monroe, Scott; Cai, Li

    2014-01-01

    In Ramsay curve item response theory (RC-IRT) modeling, the shape of the latent trait distribution is estimated simultaneously with the item parameters. In its original implementation, RC-IRT is estimated via Bock and Aitkin's EM algorithm, which yields maximum marginal likelihood estimates. This method, however, does not produce the…

  13. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    NASA Astrophysics Data System (ADS)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), which is the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models against the selection of over-fitted ones by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with respect to other commonly used approaches in the literature.
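
    To make the evidence-estimation idea concrete, here is a minimal sketch of plain importance sampling with a Gaussian-mixture proposal fitted to posterior samples; the published GMIS estimator uses bridge sampling, so this simplified stand-in is only meant to show how a mixture fitted to MCMC draws can be used to estimate the marginal likelihood.

    ```python
    # Minimal sketch: marginal likelihood via importance sampling with a
    # Gaussian-mixture proposal fitted to posterior samples (simplified GMIS-like idea).
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def log_importance_evidence(posterior_samples, log_post_unnorm, n_draws=5000, seed=0):
        """posterior_samples: (n, d) MCMC draws (e.g. from DREAM).
        log_post_unnorm: function theta -> log prior + log likelihood.
        Returns an estimate of log p(y)."""
        gm = GaussianMixture(n_components=3, covariance_type="full",
                             random_state=seed).fit(posterior_samples)
        theta, _ = gm.sample(n_draws)
        log_q = gm.score_samples(theta)                        # log proposal density
        log_p = np.array([log_post_unnorm(t) for t in theta])  # unnormalized log posterior
        log_w = log_p - log_q
        m = log_w.max()
        return m + np.log(np.mean(np.exp(log_w - m)))          # log-mean-exp of weights

    # Toy check: a normalized standard normal has evidence 1, so log p(y) ~ 0
    rng = np.random.default_rng(0)
    samples = rng.standard_normal((2000, 1))
    print(log_importance_evidence(samples, lambda t: -0.5 * t[0]**2 - 0.5 * np.log(2 * np.pi)))
    ```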

  14. A method to improve visual similarity of breast masses for an interactive computer-aided diagnosis environment.

    PubMed

    Zheng, Bin; Lu, Amy; Hardesty, Lara A; Sumkin, Jules H; Hakim, Christiane M; Ganott, Marie A; Gur, David

    2006-01-01

    The purpose of this study was to develop and test a method for selecting "visually similar" regions of interest depicting breast masses from a reference library to be used in an interactive computer-aided diagnosis (CAD) environment. A reference library including 1000 malignant mass regions and 2000 benign and CAD-generated false-positive regions was established. When a suspicious mass region is identified, the scheme segments the region and searches for similar regions from the reference library using a multifeature-based k-nearest neighbor (KNN) algorithm. To improve selection of reference images, we added an interactive step. All actual masses in the reference library were subjectively rated on a scale from 1 to 9 as to their "visual margin spiculation". When an observer identifies a suspected mass region during a case interpretation, he/she first rates the margins, and the computerized search is then limited only to regions rated as having similar levels of spiculation (within +/-1 scale difference). In an observer preference study including 85 test regions, two sets of the six "similar" reference regions selected by the KNN with and without the interactive step were displayed side by side with each test region. Four radiologists and five nonclinician observers selected the more appropriate ("similar") reference set in a two-alternative forced-choice preference experiment. All four radiologists and five nonclinician observers preferred the sets of regions selected by the interactive method with an average frequency of 76.8% and 74.6%, respectively. The overall preference for the interactive method was highly significant (p < 0.001). The study demonstrated that a simple interactive approach that includes subjectively perceived ratings of one feature alone, namely a rating of margin "spiculation," could substantially improve the selection of "visually similar" reference images.
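
    The two-step selection logic lends itself to a short sketch: restrict the library to masses whose spiculation rating is within one point of the query, then rank by feature distance. The feature vectors and names below are illustrative placeholders, not the authors' CAD features.

    ```python
    # Minimal sketch of rating-constrained k-nearest-neighbor retrieval.
    import numpy as np

    def select_similar(query_feat, query_rating, ref_feats, ref_ratings, k=6):
        """ref_feats: (N, d) image features; ref_ratings: (N,) spiculation scores 1-9."""
        mask = np.abs(ref_ratings - query_rating) <= 1      # interactive constraint (+/-1)
        idx = np.flatnonzero(mask)
        d = np.linalg.norm(ref_feats[idx] - query_feat, axis=1)
        return idx[np.argsort(d)[:k]]                        # k nearest within the constraint

    rng = np.random.default_rng(2)
    feats = rng.random((3000, 8))
    ratings = rng.integers(1, 10, size=3000)
    print(select_similar(feats[0], ratings[0], feats, ratings))
    ```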

  15. On exact correlation functions of chiral ring operators in 2d N=(2, 2) SCFTs via localization

    NASA Astrophysics Data System (ADS)

    Chen, Jin

    2018-03-01

    We study the extremal correlation functions of (twisted) chiral ring operators via superlocalization in N=(2, 2) superconformal field theories (SCFTs) with central charge c ≥ 3, especially for SCFTs with Calabi-Yau geometric phases. We extend the method in arXiv:1602.05971 with mild modifications, so that it is applicable to disentangle operator mixing on S² in nilpotent (twisted) chiral rings of 2d SCFTs. With the extended algorithm and technique of localization, we compute exactly the extremal correlators in 2d N=(2, 2) (twisted) chiral rings as non-holomorphic functions of marginal parameters of the theories. Especially in the context of Calabi-Yau geometries, we give an explicit geometric interpretation to our algorithm as the Griffiths transversality with projection on the Hodge bundle over Calabi-Yau complex moduli. We also apply the method to compute extremal correlators in Kähler moduli, or say twisted chiral rings, of several interesting Calabi-Yau manifolds. In the case of complete intersections in toric varieties, we provide an alternative formalism for extremal correlators via localization onto the Higgs branch. In addition, as a spinoff we find that, from the extremal correlators of the top element in twisted chiral rings, one can extract chiral correlators in A-twisted topological theories.

  16. Development of a social-hydrological-health framework for understanding risks of occurrence of diarrheal diseases

    NASA Astrophysics Data System (ADS)

    Khan, M. R. H.; Jutla, A.; Colwell, R. R.

    2015-12-01

    Diarrheal diseases continue to pose a severe health threat in regions where sanitation facilities remain marginal and are prone to destruction. With limited efficacy of vaccines, it is important to devise alternative methods to determine environmental conditions favorable for diarrheal diseases. Several vibrios (V. cholerae, V. vulnificus, V. parahaemolyticus) have characteristic signatures that are associated with large scale climatic processes. The interactions of vibrios with humans eventually lead to outbreak of diseases. Here, using cholera as one of the signature diarrheal diseases, we present a framework coupling social, hydrological and microbiological understanding with satellite remote sensing data to predict environmental conditions associated with outbreak of disease in several regions of sub-Saharan Africa. Hydroclimatic processes, primarily precipitation and temperature, are found to be strongly associated with epidemic and episodic outbreaks of cholera. We will present an algorithm to classify regions susceptible to risk of cholera outbreaks using a profile method in five epidemic regions of Mozambique, Central African Republic, Cameroon, South Sudan and Rwanda. Conditions for occurrence of cholera were detectable at least one month in advance. Using spatial land surface temperature (LST) data from satellites along with water accessibility data and population data, the implementation of the algorithm aids in the classification of cholera risk regions.

  17. Algorithms for detecting antibodies to HIV-1: results from a rural Ugandan cohort.

    PubMed

    Nunn, A J; Biryahwaho, B; Downing, R G; van der Groen, G; Ojwiya, A; Mulder, D W

    1993-08-01

    To evaluate an algorithm using two enzyme immunoassays (EIA) for anti-HIV-1 antibodies in a rural African population and to assess alternative simplified algorithms. Sera obtained from 7895 individuals in a rural population survey were tested using an algorithm based on two different EIA systems: Recombigen HIV-1 EIA and Wellcozyme HIV-1 Recombinant. Alternative algorithms were assessed using negative or confirmed positive sera. None of the 227 sera classified as unequivocally negative by the two assays were positive by Western blot. Of 192 sera unequivocally positive by both assays, four were seronegative by Western blot. The possibility of technical error cannot be ruled out in three of these. One of the alternative algorithms assessed, which classified all borderline or discordant assay results as negative, had a specificity of 100% and a sensitivity of 98.4%. The cost of this algorithm is one-third that of the conventional algorithm. Our evaluation suggests that high specificity and sensitivity can be obtained without using Western blot and at a considerable reduction in cost.
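
    The simplified decision rule described above reduces to a few lines of logic; the following sketch is illustrative only and treats any borderline or discordant result as negative, as in the alternative algorithm evaluated.

    ```python
    # Minimal sketch of the simplified two-EIA decision rule.
    def classify(eia1, eia2):
        """eia1, eia2 in {'positive', 'negative', 'borderline'}."""
        if eia1 == "positive" and eia2 == "positive":
            return "positive"
        return "negative"   # borderline or discordant results are called negative

    print(classify("positive", "positive"))    # positive
    print(classify("positive", "borderline"))  # negative
    ```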

  18. Reducing Uncertainty in the American Community Survey through Data-Driven Regionalization

    PubMed Central

    Spielman, Seth E.; Folch, David C.

    2015-01-01

    The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold. PMID:25723176

  19. Reducing uncertainty in the american community survey through data-driven regionalization.

    PubMed

    Spielman, Seth E; Folch, David C

    2015-01-01

    The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold.
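
    To illustrate why aggregation reduces uncertainty, the sketch below greedily groups tracts until the relative margin of error of each composite falls below a threshold, using the standard root-sum-of-squares approximation for MOEs of aggregated ACS estimates. It ignores spatial contiguity and is not the published spatial-optimization heuristic; all thresholds are placeholders.

    ```python
    # Minimal sketch: greedy aggregation until MOE/estimate <= threshold.
    import numpy as np

    def greedy_regionalize(estimates, moes, max_rel_moe=0.3):
        """Return index groups whose combined MOE/estimate <= max_rel_moe."""
        order = np.argsort(estimates)          # start with the smallest (noisiest) tracts
        regions, current = [], []
        est, var = 0.0, 0.0
        for i in order:
            current.append(int(i))
            est += estimates[i]
            var += moes[i] ** 2                # aggregated MOE ~ sqrt(sum of squared MOEs)
            if est > 0 and np.sqrt(var) / est <= max_rel_moe:
                regions.append(current)
                current, est, var = [], 0.0, 0.0
        if current:                            # leftover tracts form a final region
            regions.append(current)
        return regions

    rng = np.random.default_rng(3)
    est = rng.integers(5, 200, size=20).astype(float)
    moe = est * rng.uniform(0.2, 1.2, size=20)  # many tracts with MOE near or above the estimate
    print(greedy_regionalize(est, moe))
    ```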

  20. A novel double fine guide sensor design on space telescope

    NASA Astrophysics Data System (ADS)

    Zhang, Xu-xu; Yin, Da-yi

    2018-02-01

    To get high precision attitude for a space telescope, a double marginal FOV (field of view) FGS (Fine Guide Sensor) is proposed. It is composed of two large-area APS CMOS sensors that share the same lens along the main line of sight. More star vectors can be obtained from the two FGS fields and used for high-precision attitude determination. To improve star identification speed, the inter-star angles for the small marginal FOV are computed with a vector cross product, differing from the traditional approach, and parallel processing is applied to the pyramid algorithm. The star vectors from the two sensors are then fused into an attitude estimate with the traditional QUEST algorithm. The simulation results show that the system can achieve high-accuracy three-axis attitude determination and that the scheme is feasible.

  1. Active/passive microwave sensor comparison of MIZ-ice concentration estimates. [Marginal Ice Zone (MIZ)

    NASA Technical Reports Server (NTRS)

    Burns, B. A.; Cavalieri, D. J.; Keller, M. R.

    1986-01-01

    Active and passive microwave data collected during the 1984 summer Marginal Ice Zone Experiment in the Fram Strait (MIZEX 84) are used to compare ice concentration estimates derived from synthetic aperture radar (SAR) data to those obtained from passive microwave imagery at several frequencies. The comparison is carried out to evaluate SAR performance against the more established passive microwave technique, and to investigate discrepancies in terms of how ice surface conditions, imaging geometry, and choice of algorithm parameters affect each sensor. Active and passive estimates of ice concentration agree on average to within 12%. Estimates from the multichannel passive microwave data show best agreement with the SAR estimates because the multichannel algorithm effectively accounts for the range in ice floe brightness temperatures observed in the MIZ.

  2. Space Launch System Implementation of Adaptive Augmenting Control

    NASA Technical Reports Server (NTRS)

    Wall, John H.; Orr, Jeb S.; VanZwieten, Tannen S.

    2014-01-01

    Given the complex structural dynamics, challenging ascent performance requirements, and rigorous flight certification constraints owing to its manned capability, the NASA Space Launch System (SLS) launch vehicle requires a proven thrust vector control algorithm design with highly optimized parameters to provide stable and high-performance flight. On its development path to Preliminary Design Review (PDR), the SLS flight control system has been challenged by significant vehicle flexibility, aerodynamics, and sloshing propellant. While the design has been able to meet all robust stability criteria, it has done so with little excess margin. Through significant development work, an Adaptive Augmenting Control (AAC) algorithm has been shown to extend the envelope of failures and flight anomalies the SLS control system can accommodate while maintaining a direct link to flight control stability criteria such as classical gain and phase margin. In this paper, the work performed to mature the AAC algorithm as a baseline component of the SLS flight control system is presented. The progress to date has brought the algorithm design to the PDR level of maturity. The algorithm has been extended to augment the full SLS digital 3-axis autopilot, including existing load-relief elements, and the necessary steps for integration with the production flight software prototype have been implemented. Several updates which have been made to the adaptive algorithm to increase its performance, decrease its sensitivity to expected external commands, and safeguard against limitations in the digital implementation are discussed with illustrating results. Monte Carlo simulations and selected stressing case results are also shown to demonstrate the algorithm's ability to increase the robustness of the integrated SLS flight control system.

  3. Learning Structured Classifiers with Dual Coordinate Ascent

    DTIC Science & Technology

    2010-06-01

    stochastic gradient descent (SGD) [LeCun et al., 1998], and the margin infused relaxed algorithm (MIRA) [Crammer et al., 2006]. This paper presents a...evaluate these methods on the Prague Dependency Treebank using online large-margin learning techniques (Crammer et al., 2003; McDonald et al., 2005...between two kinds of factors: hard constraint factors, which are used to rule out forbidden partial assignments by mapping them to zero potential values

  4. Mapping and Assessing Variability in the Antarctic Marginal Ice Zone, the Pack Ice and Coastal Polynyas

    NASA Astrophysics Data System (ADS)

    Stroeve, Julienne; Jenouvrier, Stephanie

    2016-04-01

    Sea ice variability within the marginal ice zone (MIZ) and polynyas plays an important role for phytoplankton productivity and krill abundance. Therefore mapping their spatial extent, seasonal and interannual variability is essential for understanding how current and future changes in these biological active regions may impact the Antarctic marine ecosystem. Knowledge of the distribution of different ice types to the total Antarctic sea ice cover may also help to shed light on the factors contributing towards recent expansion of the Antarctic ice cover in some regions and contraction in others. The long-term passive microwave satellite data record provides the longest and most consistent data record for assessing different ice types. However, estimates of the amount of MIZ, consolidated pack ice and polynyas depends strongly on what sea ice algorithm is used. This study uses two popular passive microwave sea ice algorithms, the NASA Team and Bootstrap to evaluate the distribution and variability in the MIZ, the consolidated pack ice and coastal polynyas. Results reveal the NASA Team algorithm has on average twice the MIZ and half the consolidated pack ice area as the Bootstrap algorithm. Polynya area is also larger in the NASA Team algorithm, and the timing of maximum polynya area may differ by as much as 5 months between algorithms. These differences lead to different relationships between sea ice characteristics and biological processes, as illustrated here with the breeding success of an Antarctic seabird.
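
    The classification step itself is simple once a concentration field is in hand; the sketch below uses the commonly adopted 15% and 80% concentration thresholds for open water, MIZ and consolidated pack (these thresholds are an assumption on our part, and the study's algorithm dependence enters through the concentration fields from NASA Team vs. Bootstrap, not this step).

    ```python
    # Minimal sketch: map a sea ice concentration field into ice classes.
    import numpy as np

    def classify_ice(conc):
        """conc: ice concentration in percent. Returns 0 = open water, 1 = MIZ, 2 = pack ice."""
        classes = np.zeros_like(conc, dtype=int)
        classes[(conc >= 15) & (conc < 80)] = 1
        classes[conc >= 80] = 2
        return classes

    conc = np.array([[5.0, 30.0], [75.0, 95.0]])
    print(classify_ice(conc))    # [[0 1] [1 2]]
    ```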

  5. 30 CFR 204.200 - What is the purpose of this subpart?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 204.200 What is the purpose... auditing relief for your Federal onshore or OCS lease production from a marginal property. The two types of accounting and auditing relief that you can receive under this subpart are cumulative reports and payment...

  6. Robust control with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1988-01-01

    Two important problems in the area of control systems design and analysis are discussed. The first is robust stability using the characteristic polynomial, which is treated first in characteristic polynomial coefficient space with respect to perturbations in the coefficients of the characteristic polynomial, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l² stability margin for both monic and non-monic cases. Following this, the method is extended to reveal a much larger stability region. This result has been extended to the parameter space so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by a better choice of stabilizing controller. The second problem is the lower-order stabilization problem, whose motivation is as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state space and transfer function domains, all of these methods produce unnecessarily high order controllers. In practice, stabilization is only one of many requirements to be satisfied. Therefore, if the order of a stabilizing controller is excessively high, one can normally expect to have an even higher order controller upon completion of the design, for example after inclusion of dynamic response requirements. Therefore, it is reasonable to obtain the lowest possible order stabilizing controller first and then adjust the controller to meet additional requirements. An algorithm for designing a lower order stabilizing controller is given. The algorithm does not necessarily produce the minimum order controller; however, the algorithm is theoretically logical and some simulation results show that the algorithm works in general.

  7. Automating digital leaf measurement: the tooth, the whole tooth, and nothing but the tooth.

    PubMed

    Corney, David P A; Tang, H Lilian; Clark, Jonathan Y; Hu, Yin; Jin, Jing

    2012-01-01

    Many species of plants produce leaves with distinct teeth around their margins. The presence and nature of these teeth can often help botanists to identify species. Moreover, it has long been known that more species native to colder regions have teeth than species native to warmer regions. It has therefore been suggested that fossilized remains of leaves can be used as a proxy for ancient climate reconstruction. Similar studies on living plants can help our understanding of the relationships. The required analysis of leaves typically involves considerable manual effort, which in practice limits the number of leaves that are analyzed, potentially reducing the power of the results. In this work, we describe a novel algorithm to automate the marginal tooth analysis of leaves found in digital images. We demonstrate our methods on a large set of images of whole herbarium specimens collected from Tilia trees (also known as lime, linden or basswood). We chose the genus Tilia as its constituent species have toothed leaves of varied size and shape. In a previous study we extracted c.1600 leaves automatically from a set of c.1100 images. Our new algorithm locates teeth on the margins of such leaves and extracts features such as each tooth's area, perimeter and internal angles, as well as counting them. We evaluate an implementation of our algorithm's performance against a manually analyzed subset of the images. We found that the algorithm achieves an accuracy of 85% for counting teeth and 75% for estimating tooth area. We also demonstrate that the automatically extracted features are sufficient to identify different species of Tilia using a simple linear discriminant analysis, and that the features relating to teeth are the most useful.

  8. Multivariate quantile mapping bias correction: an N-dimensional probability density function transform for climate model simulations of multiple variables

    NASA Astrophysics Data System (ADS)

    Cannon, Alex J.

    2018-01-01

    Most bias correction algorithms used in climatology, for example quantile mapping, are applied to univariate time series. They neglect the dependence between different variables. Those that are multivariate often correct only limited measures of joint dependence, such as Pearson or Spearman rank correlation. Here, an image processing technique designed to transfer colour information from one image to another—the N-dimensional probability density function transform—is adapted for use as a multivariate bias correction algorithm (MBCn) for climate model projections/predictions of multiple climate variables. MBCn is a multivariate generalization of quantile mapping that transfers all aspects of an observed continuous multivariate distribution to the corresponding multivariate distribution of variables from a climate model. When applied to climate model projections, changes in quantiles of each variable between the historical and projection period are also preserved. The MBCn algorithm is demonstrated on three case studies. First, the method is applied to an image processing example with characteristics that mimic a climate projection problem. Second, MBCn is used to correct a suite of 3-hourly surface meteorological variables from the Canadian Centre for Climate Modelling and Analysis Regional Climate Model (CanRCM4) across a North American domain. Components of the Canadian Forest Fire Weather Index (FWI) System, a complicated set of multivariate indices that characterizes the risk of wildfire, are then calculated and verified against observed values. Third, MBCn is used to correct biases in the spatial dependence structure of CanRCM4 precipitation fields. Results are compared against a univariate quantile mapping algorithm, which neglects the dependence between variables, and two multivariate bias correction algorithms, each of which corrects a different form of inter-variable correlation structure. MBCn outperforms these alternatives, often by a large margin, particularly for annual maxima of the FWI distribution and spatiotemporal autocorrelation of precipitation fields.
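
    For context, the univariate quantile mapping that MBCn generalizes can be sketched in a few lines: each model value is mapped onto the observed distribution by matching empirical quantiles, one variable at a time, with dependence between variables ignored. The gamma-distributed toy data below are an assumption for illustration.

    ```python
    # Minimal sketch of univariate empirical quantile mapping (the baseline MBCn extends).
    import numpy as np

    def quantile_map(model_hist, obs_hist, model_proj):
        """Map model_proj values through the hist-period model->obs quantile transfer function."""
        q = np.linspace(0.01, 0.99, 99)
        src = np.quantile(model_hist, q)        # model quantiles (calibration period)
        dst = np.quantile(obs_hist, q)          # observed quantiles
        return np.interp(model_proj, src, dst)  # piecewise-linear transfer, clipped at the ends

    rng = np.random.default_rng(4)
    obs = rng.gamma(2.0, 3.0, 5000)             # "observed" precipitation-like data
    mod = rng.gamma(2.0, 2.0, 5000)             # biased model analogue
    print(np.mean(obs), np.mean(quantile_map(mod, obs, mod)))   # bias largely removed
    ```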

  9. Listing triangles in expected linear time on a class of power law graphs.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nordman, Daniel J.; Wilson, Alyson G.; Phillips, Cynthia Ann

    Enumerating triangles (3-cycles) in graphs is a kernel operation for social network analysis. For example, many community detection methods depend upon finding common neighbors of two related entities. We consider Cohen's simple and elegant solution for listing triangles: give each node a 'bucket.' Place each edge into the bucket of its endpoint of lowest degree, breaking ties consistently. Each node then checks each pair of edges in its bucket, testing for the adjacency that would complete that triangle. Cohen presents an informal argument that his algorithm should run well on real graphs. We formalize this argument by providing an analysis for the expected running time on a class of random graphs, including power law graphs. We consider a rigorously defined method for generating a random simple graph, the erased configuration model (ECM). In the ECM each node draws a degree independently from a marginal degree distribution, endpoints pair randomly, and we erase self loops and multiedges. If the marginal degree distribution has a finite second moment, it follows immediately that Cohen's algorithm runs in expected linear time. Furthermore, it can still run in expected linear time even when the degree distribution has such a heavy tail that the second moment is not finite. We prove that Cohen's algorithm runs in expected linear time when the marginal degree distribution has finite 4/3 moment and no vertex has degree larger than √n. In fact we give the precise asymptotic value of the expected number of edge pairs per bucket. A finite 4/3 moment is required; if it is unbounded, then so is the number of pairs. The marginal degree distribution of a power law graph has bounded 4/3 moment when its exponent α is more than 7/3. Thus for this class of power law graphs, with degree at most √n, Cohen's algorithm runs in expected linear time. This is precisely the value of α for which the clustering coefficient tends to zero asymptotically, and it is in the range that is relevant for the degree distribution of the World-Wide Web.
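
    A minimal sketch of the bucketing idea described above: each edge goes to the bucket of its lower-degree endpoint (ties broken consistently by node id), and each node tests pairs of edges in its bucket for the closing edge. This is an illustrative reimplementation of the stated procedure, not Cohen's original code.

    ```python
    # Minimal sketch of bucket-based triangle listing.
    from itertools import combinations

    def list_triangles(edges):
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)
        deg = {u: len(nbrs) for u, nbrs in adj.items()}
        key = lambda u: (deg[u], u)                    # consistent tie-breaking
        buckets = {u: [] for u in adj}
        for u, v in edges:
            low = u if key(u) <= key(v) else v         # endpoint of lowest degree
            buckets[low].append((u, v))
        triangles = set()
        for node, bucket in buckets.items():
            for (a, b), (c, d) in combinations(bucket, 2):
                other1 = a if b == node else b
                other2 = c if d == node else d
                if other2 in adj[other1]:              # adjacency completes the triangle
                    triangles.add(tuple(sorted((node, other1, other2))))
        return triangles

    print(list_triangles([(0, 1), (1, 2), (0, 2), (2, 3)]))   # {(0, 1, 2)}
    ```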

  10. AMLSA Algorithm for Hybrid Precoding in Millimeter Wave MIMO Systems

    NASA Astrophysics Data System (ADS)

    Liu, Fulai; Sun, Zhenxing; Du, Ruiyan; Bai, Xiaoyu

    2017-10-01

    In this paper, an effective algorithm will be proposed for hybrid precoding in mmWave MIMO systems, referred to as alternating minimization algorithm with the least squares amendment (AMLSA algorithm). To be specific, for the fully-connected structure, the presented algorithm is exploited to minimize the classical objective function and obtain the hybrid precoding matrix. It introduces an orthogonal constraint to the digital precoding matrix which is amended subsequently by the least squares after obtaining its alternating minimization iterative result. Simulation results confirm that the achievable spectral efficiency of our proposed algorithm is better to some extent than that of the existing algorithm without the least squares amendment. Furthermore, the number of iterations is reduced slightly via improving the initialization procedure.

  11. Novel maximum-margin training algorithms for supervised neural networks.

    PubMed

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function, through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while overcoming the complexity involved in solving a constrained optimization problem, as is usual in SVM training. In fact, all the training methods proposed in this paper have time and space complexities O(N), while usual SVM training methods have time complexity O(N³) and space complexity O(N²), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model by using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network was named assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.

  12. Predicting Sediment Thickness on Vanished Ocean Crust Since 200 Ma

    NASA Astrophysics Data System (ADS)

    Dutkiewicz, A.; Müller, R. D.; Wang, X.; O'Callaghan, S.; Cannon, J.; Wright, N. M.

    2017-12-01

    Tracing sedimentation through time on existing and vanished seafloor is imperative for constraining long-term eustasy and for calculating volumes of subducted deep-sea sediments that contribute to global geochemical cycles. We present regression algorithms that incorporate the age of the ocean crust and the mean distance to the nearest passive margin to predict sediment thicknesses and long-term decompacted sedimentation rates since 200 Ma. The mean sediment thickness decreases from ˜220 m at 200 Ma to a minimum of ˜140 m at 130 Ma, reflecting the replacement of old Panthalassic ocean floor with young sediment-poor mid-ocean ridges, followed by an increase to ˜365 m at present-day. This increase reflects the accumulation of sediments on ageing abyssal plains proximal to passive margins, coupled with a decrease in the mean distance of any parcel of ocean crust to the nearest passive margin by over 700 km, and a doubling of the total passive margin length at present-day. Mean long-term sedimentation rates increase from ˜0.5 cm/ky at 160 Ma to over 0.8 cm/ky today, caused by enhanced terrigenous sediment influx along lengthened passive margins, superimposed by the onset of ocean-wide carbonate sedimentation. Our predictive algorithms, coupled to a plate tectonic model, provide a framework for constraining the seafloor sediment-driven eustatic sea-level component, which has grown from ˜80 to 210 m since 120 Ma. This implies a long-term sea-level rise component of 130 m, partly counteracting the contemporaneous increase in ocean basin depth due to progressive crustal ageing.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rottmann, J; Berbeco, R; Keall, P

    Purpose: To maximize normal tissue sparing for treatments requiring motion encompassing margins. Motion mitigation techniques including DMLC or couch tracking can freeze tumor motion within the treatment aperture, potentially allowing for smaller treatment margins and thus better sparing of normal tissue. To enable safe application of this concept in the clinic, we propose adapting margins dynamically in real-time during radiotherapy delivery based on personalized tumor localization confidence. To demonstrate technical feasibility we present a phantom study. Methods: We utilize a realistic anthropomorphic dynamic thorax phantom with a lung tumor model embedded close to the spine. The tumor, a 3D-printout of a patient's GTV, is moved 15mm peak-to-peak by diaphragm compression and monitored by continuous EPID imaging in real-time. Two treatment apertures are created for each beam, one representing ITV-based and the other GTV-based margin expansion. A soft tissue localization (STiL) algorithm utilizing the continuous EPID images is employed to freeze tumor motion within the treatment aperture by means of DMLC tracking. Depending on a tracking confidence measure (TCM), the treatment aperture is adjusted between the ITV and the GTV leaf. Results: We successfully demonstrate real-time personalized margin adjustment in a phantom study. We measured a system latency of about 250 ms, which we compensated for by utilizing a respiratory motion prediction algorithm (ridge regression). With prediction in place we observe tracking accuracies better than 1mm. For TCM=0 (as during startup) an ITV-based treatment aperture is chosen, for TCM=1 a GTV-based aperture, and for 0…

  14. Atmospheric Profiles, Clouds and the Evolution of Sea Ice Cover in the Beaufort and Chukchi Seas

    DTIC Science & Technology

    2014-09-30

    developed by incorporating the proposed IR sensors and ground-sky temperature difference algorithm into a tethered balloon-borne payload (Figure 3)...into the cloud base. RESULTS FROM FY 2014 • A second flight of the tethered balloon-borne IR cloud margin sensor was conducted in Colorado on... [Figure 3: Tethered balloon-borne IR sensing payload (IR cloud margin sensor). Figure 4: First successful flight validation of the IR cloud margin sensor.]

  15. Re-Engaging Marginalized Youth through Digital Music Production: Performance, Audience and Evaluation

    ERIC Educational Resources Information Center

    Brader, Andy; Luke, Allan

    2013-01-01

    This article presents two case studies of marginalized youth experimenting with digital music production in flexible education settings. The cases were drawn from a 3-year study of alternative assessment in flexible learning centres for youth who have left formal schooling in Queensland, Australia. The educational issues are framed by reference to…

  16. The Continental Margins Program in Georgia

    USGS Publications Warehouse

    Cocker, M.D.; Shapiro, E.A.

    1999-01-01

    From 1984 to 1993, the Georgia Geologic Survey (GGS) participated in the Minerals Management Service-funded Continental Margins Program. Geological and geophysical data acquisition focused on offshore stratigraphic framework studies, phosphate-bearing Miocene-age strata, distribution of heavy minerals, near-surface alternative sources of groundwater, and development of a PC-based Coastal Geographic Information System (GIS). Seven GGS publications document results of those investigations. In addition to those publications, direct benefits of the GGS's participation include an impetus to the GGS's investigations of economic minerals on the Georgia coast, establishment of a GIS that includes computer hardware and software, and seeds for additional investigations through the information and training acquired as a result of the Continental Margins Program. These additional investigations are quite varied in scope, and many were made possible because of GIS expertise gained as a result of the Continental Margins Program. Future investigations will also reap the benefits of the Continental Margins Program.

  17. Quantitative segmentation of fluorescence microscopy images of heterogeneous tissue: Approach for tuning algorithm parameters

    NASA Astrophysics Data System (ADS)

    Mueller, Jenna L.; Harmany, Zachary T.; Mito, Jeffrey K.; Kennedy, Stephanie A.; Kim, Yongbaek; Dodd, Leslie; Geradts, Joseph; Kirsch, David G.; Willett, Rebecca M.; Brown, J. Quincy; Ramanujam, Nimmi

    2013-02-01

    The combination of fluorescent contrast agents with microscopy is a powerful technique to obtain real time images of tissue histology without the need for fixing, sectioning, and staining. The potential of this technology lies in the identification of robust methods for image segmentation and quantitation, particularly in heterogeneous tissues. Our solution is to apply sparse decomposition (SD) to monochrome images of fluorescently-stained microanatomy to segment and quantify distinct tissue types. The clinical utility of our approach is demonstrated by imaging excised margins in a cohort of mice after surgical resection of a sarcoma. Representative images of excised margins were used to optimize the formulation of SD and tune parameters associated with the algorithm. Our results demonstrate that SD is a robust solution that can advance vital fluorescence microscopy as a clinically significant technology.

  18. A method for obtaining reduced-order control laws for high-order systems using optimization techniques

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.; Abel, I.

    1981-01-01

    A method of synthesizing reduced-order optimal feedback control laws for a high-order system is developed. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean-square steady-state responses and control inputs. An analogy with the linear quadratic Gaussian solution is utilized to select a set of design variables and their initial values. To improve the stability margins of the system, an input-noise adjustment procedure is used in the design algorithm. The method is applied to the synthesis of an active flutter-suppression control law for a wind tunnel model of an aeroelastic wing. The reduced-order controller is compared with the corresponding full-order controller and found to provide nearly optimal performance. The performance of the present method appeared to be superior to that of two other control law order-reduction methods. It is concluded that by using the present algorithm, nearly optimal low-order control laws with good stability margins can be synthesized.

  19. The 2009 Influenza A(H1N1) Outbreak: Selected Legal Issues

    DTIC Science & Technology

    2009-05-06

    used primarily against marginalized, nonwhite persons underscores the need for legal oversight — if only so that affected communities can be assured of...traditional or alternative settings. Potential strategies and/or guidance addressing telecommuting, alternative schedules, or modified operating hours

  20. Marginal semi-supervised sub-manifold projections with informative constraints for dimensionality reduction and recognition.

    PubMed

    Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S

    2012-12-01

    In this work, sub-manifold projections based semi-supervised dimensionality reduction (DR) problem learning from partial constrained data is discussed. Two semi-supervised DR algorithms termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP) are proposed. MS³MP in the singular case is also discussed. We also present the weighted least squares view of MS³MP. Based on specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC guided methods, exploring and selecting the informative constraints is challenging and random constraint subsets significantly affect the performance of algorithms. This paper also introduces an effective technique to select the informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms are evaluated by benchmark problems. Extensive simulations show that our algorithms can deliver promising results over some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. A Goal Seeking Strategy for Constructing Systems from Alternative Components

    NASA Technical Reports Server (NTRS)

    Valentine, Mark E.

    1999-01-01

    This paper describes a methodology to efficiently construct feasible systems and then modify feasible systems to meet successive goals by selecting from alternative components, a problem recognized to be NP-complete. The methodology provides a means to catalog and model alternative components. A presented system modeling structure is robust enough to model a wide variety of systems and provides a means to compare and evaluate alternative systems. These models act as input to a methodology for selecting alternative components to construct feasible systems and modify feasible systems to meet design goals and objectives. The presented algorithm's ability to find a restricted solution, as defined by a unique set of requirements, is demonstrated against an exhaustive search of a sample of proposed shuttle modifications. The utility of the algorithm is demonstrated by comparing results from the algorithm with results from three NASA shuttle evolution studies using their value systems and assumptions.

  2. Space Launch System Implementation of Adaptive Augmenting Control

    NASA Technical Reports Server (NTRS)

    VanZwieten, Tannen S.; Wall, John H.; Orr, Jeb S.

    2014-01-01

    Given the complex structural dynamics, challenging ascent performance requirements, and rigorous flight certification constraints owing to its manned capability, the NASA Space Launch System (SLS) launch vehicle requires a proven thrust vector control algorithm design with highly optimized parameters to robustly demonstrate stable and high performance flight. On its development path to preliminary design review (PDR), the stability of the SLS flight control system has been challenged by significant vehicle flexibility, aerodynamics, and sloshing propellant dynamics. While the design has been able to meet all robust stability criteria, it has done so with little excess margin. Through significant development work, an adaptive augmenting control (AAC) algorithm previously presented by Orr and VanZwieten, has been shown to extend the envelope of failures and flight anomalies for which the SLS control system can accommodate while maintaining a direct link to flight control stability criteria (e.g. gain & phase margin). In this paper, the work performed to mature the AAC algorithm as a baseline component of the SLS flight control system is presented. The progress to date has brought the algorithm design to the PDR level of maturity. The algorithm has been extended to augment the SLS digital 3-axis autopilot, including existing load-relief elements, and necessary steps for integration with the production flight software prototype have been implemented. Several updates to the adaptive algorithm to increase its performance, decrease its sensitivity to expected external commands, and safeguard against limitations in the digital implementation are discussed with illustrating results. Monte Carlo simulations and selected stressing case results are shown to demonstrate the algorithm's ability to increase the robustness of the integrated SLS flight control system.

  3. Tool Wear Feature Extraction Based on Hilbert Marginal Spectrum

    NASA Astrophysics Data System (ADS)

    Guan, Shan; Song, Weijie; Pang, Hongyang

    2017-09-01

    In the metal cutting process, the signal contains a wealth of tool wear state information. A tool wear signal analysis and feature extraction method based on the Hilbert marginal spectrum is proposed. Firstly, the tool wear signal was decomposed by the empirical mode decomposition algorithm and the intrinsic mode functions containing the main information were selected using the correlation coefficient and the variance contribution rate. Secondly, the Hilbert transform was performed on the main intrinsic mode functions, yielding the Hilbert time-frequency spectrum and the Hilbert marginal spectrum. Finally, amplitude-domain indices were extracted from the Hilbert marginal spectrum and used to form the recognition feature vector of the tool wear state. The research results show that the extracted features can effectively characterize the different wear states of the tool, which provides a basis for monitoring tool wear condition.
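
    To illustrate the marginal-spectrum step, the sketch below applies the Hilbert transform to intrinsic mode functions and integrates instantaneous amplitude over time per frequency bin. The sinusoidal IMFs are synthetic stand-ins for the EMD output of a real cutting-force or vibration signal.

    ```python
    # Minimal sketch: Hilbert marginal spectrum from (stand-in) IMFs.
    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0
    t = np.arange(0, 1, 1 / fs)
    imfs = [np.sin(2 * np.pi * 50 * t), 0.5 * np.sin(2 * np.pi * 120 * t)]   # stand-in IMFs

    bins = np.linspace(0, fs / 2, 256)
    marginal = np.zeros(bins.size - 1)
    for imf in imfs:
        analytic = hilbert(imf)
        amp = np.abs(analytic)                              # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)       # instantaneous frequency (Hz)
        hist, _ = np.histogram(inst_freq, bins=bins, weights=amp[:-1])
        marginal += hist                                    # accumulate amplitude over time per bin

    print(bins[np.argmax(marginal)])                        # dominant bin near 50 Hz
    ```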

  4. Orion MPCV Touchdown Detection Threshold Development and Testing

    NASA Technical Reports Server (NTRS)

    Daum, Jared; Gay, Robert

    2013-01-01

    A robust method of detecting Orion Multi-Purpose Crew Vehicle (MPCV) splashdown is necessary to ensure crew and hardware safety during descent and after touchdown. The proposed method uses a triple redundant system to inhibit Reaction Control System (RCS) thruster firings, detach parachute risers from the vehicle, and transition to the post-landing segment of the Flight Software (FSW). The vehicle crew is the prime input for touchdown detection, followed by an autonomous FSW algorithm, and finally a strictly time based backup timer. RCS thrusters must be inhibited before submersion in water to protect against possible damage due to firing these jets under water. In addition, neglecting to declare touchdown will not allow the vehicle to transition to post-landing activities such as activating the Crew Module Up-righting System (CMUS), resulting in possible loss of communication and difficult recovery. A previous AIAA paper "Assessment of an Automated Touchdown Detection Algorithm for the Orion Crew Module" concluded that a strictly Inertial Measurement Unit (IMU) based detection method using an acceleration spike algorithm had the highest safety margins and shortest detection times of other methods considered. That study utilized finite element simulations of vehicle splashdown, generated by LS-DYNA, which were expanded to a larger set of results using a Kriging surface fit. The study also used the Decelerator Systems Simulation (DSS) to generate flight dynamics during vehicle descent under parachutes. Prototype IMU and FSW MATLAB models provided the basis for initial algorithm development and testing. This paper documents an in-depth trade study, using the same dynamics data and MATLAB simulations as the earlier work, to further develop the acceleration detection method. By studying the combined effects of data rate, filtering on the rotational acceleration correction, data persistence limits and values of acceleration thresholds, an optimal configuration was determined. The lever arm calculation, which removes the centripetal acceleration caused by vehicle rotation, requires that the vehicle angular acceleration be derived from vehicle body rates, necessitating the addition of a 2nd order filter to smooth the data. It was determined that using 200 Hz data directly from the vehicle IMU outperforms the 40 Hz FSW data rate. Data persistence counter values and acceleration thresholds were balanced in order to meet desired safety and performance. The algorithm proved to exhibit ample safety margin against early detection while under parachutes, and adequate performance upon vehicle splashdown. Fall times from algorithm initiation were also studied, and a backup timer length was chosen to provide a large safety margin, yet still trigger detection before CMUS inflation. This timer serves as a backup to the primary acceleration detection method. Additionally, these parameters were tested for safety on actual flight test data, demonstrating expected safety margins.
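
    A minimal sketch of the acceleration-spike detection pattern discussed above, with a persistence counter and a backup timer; the threshold, persistence count, timer length, and data rate are illustrative placeholders, not the flight values from the trade study.

    ```python
    # Minimal sketch: acceleration-spike touchdown monitor with persistence counter and backup timer.
    import numpy as np

    def detect_touchdown(accel_mag, dt, threshold=58.8, persistence=3, backup_time=60.0):
        """accel_mag: sensed acceleration magnitude per sample (m/s^2); dt: sample period (s).
        Declares touchdown after `persistence` consecutive samples above `threshold`,
        or when the backup timer expires."""
        count = 0
        for i, a in enumerate(accel_mag):
            count = count + 1 if a > threshold else 0
            if count >= persistence:
                return i * dt, "acceleration spike"
            if i * dt >= backup_time:
                return i * dt, "backup timer"
        return None, "no detection"

    # Toy trace: quiet descent, then a splashdown spike at t = 2.0 s
    dt = 1 / 200.0                              # 200 Hz IMU data
    accel = np.full(1000, 9.81)
    accel[400:410] = 80.0
    print(detect_touchdown(accel, dt))          # detected shortly after 2.0 s
    ```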

  5. Understanding and Managing Propagation on Large Networks - Theory, Algorithms, and Models

    DTIC Science & Technology

    2012-09-01

    utility function f(x) with a diminishing marginal returns property typical of infection-control techniques (cf. [ZIM+09]). Also note the inherent...were coded in C++. We use f(x) = 0.50x and r = 6 for all our experiments. The choice of the function f(x) captures the diminishing marginal utility of...

  6. Quantifying disbond area

    NASA Astrophysics Data System (ADS)

    Lowden, D. W.

    1992-10-01

    Disbonds simulated in a composite helicopter rotor blade were profiled using eddy currents. The method is inherently accurate and reproducible. An algorithm is described for calculating disbond margin. Disbond area is estimated assuming in-service disbondments exhibit circular geometry.

  7. 17 CFR 242.301 - Requirements for alternative trading systems.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 17 Commodity and Securities Exchanges 4 2014-04-01 2014-04-01 false Requirements for alternative trading systems. 242.301 Section 242.301 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) REGULATIONS M, SHO, ATS, AC, AND NMS AND CUSTOMER MARGIN REQUIREMENTS FOR SECURITY...

  8. 17 CFR 242.301 - Requirements for alternative trading systems.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 17 Commodity and Securities Exchanges 3 2012-04-01 2012-04-01 false Requirements for alternative trading systems. 242.301 Section 242.301 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) REGULATIONS M, SHO, ATS, AC, AND NMS AND CUSTOMER MARGIN REQUIREMENTS FOR SECURITY...

  9. 17 CFR 242.301 - Requirements for alternative trading systems.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 17 Commodity and Securities Exchanges 3 2013-04-01 2013-04-01 false Requirements for alternative trading systems. 242.301 Section 242.301 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) REGULATIONS M, SHO, ATS, AC, AND NMS AND CUSTOMER MARGIN REQUIREMENTS FOR SECURITY...

  10. 17 CFR 242.301 - Requirements for alternative trading systems.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 17 Commodity and Securities Exchanges 3 2011-04-01 2011-04-01 false Requirements for alternative trading systems. 242.301 Section 242.301 Commodity and Securities Exchanges SECURITIES AND EXCHANGE COMMISSION (CONTINUED) REGULATIONS M, SHO, ATS, AC, AND NMS AND CUSTOMER MARGIN REQUIREMENTS FOR SECURITY...

  11. Controller certification: The generalized stability margin inference for a large number of MIMO controllers

    NASA Astrophysics Data System (ADS)

    Park, Jisang

    In this dissertation, we investigate MIMO stability margin inference of a large number of controllers using pre-established stability margins of a small number of nu-gap-wise adjacent controllers. The generalized stability margin and the nu-gap metric are inherently able to handle MIMO system analysis without the necessity of repeating multiple channel-by-channel SISO analyses. This research consists of three parts: (i) development of a decision support tool for inference of the stability margin, (ii) computational considerations for yielding the maximal stability margin with the minimal nu-gap metric in a less conservative manner, and (iii) experiment design for estimating the generalized stability margin with an assured error bound. A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability and a certain level of margin of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for an infinite set. In doing this, the nu-gap metric will be the main tool. We provide a theorem restricting the radius of a ball in the parameter space so that the controller can guarantee a prescribed level of stability and performance if the parameters of the controllers are contained in the ball. Computational examples are given, including one of certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations. We consider methods for the computation of the maximal MIMO stability margin b_{P̂,C}, the minimal nu-gap metric δ_ν, and the maximal difference between these two values, through the use of scaling and weighting functions. We propose simultaneous scaling selections that attempt to maximize the generalized stability margin and minimize the nu-gap. The minimization of the nu-gap by scaling involves a non-convex optimization. We modify the XY-centering algorithm to handle this non-convexity. This is done for applications in controller certification. Estimating the generalized stability margin with an accurate error bound has significant impact on controller certification. We analyze an error bound of the generalized stability margin as the infinity norm of the MIMO empirical transfer function estimate (ETFE). Input signal design to reduce the error on the estimate is also studied. We suggest running the system for a certain amount of time prior to recording of each output data set. The assured upper bound of the estimation error can be tuned by the amount of the pre-experiment.
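
    For SISO plants P1 and P2 the nu-gap reduces (when a winding-number condition is satisfied) to the largest value over frequency of |P1(jω) - P2(jω)| / sqrt((1 + |P1(jω)|²)(1 + |P2(jω)|²)). The sketch below simply evaluates that expression on a frequency grid with NumPy; the example plants and frequency range are invented, and the winding-number check is omitted, so this is only an illustration of the metric and not the dissertation's MIMO machinery.

      import numpy as np

      def nu_gap_siso(P1, P2, w):
          """Pointwise SISO nu-gap estimate on a frequency grid (winding-number check omitted).

          P1, P2 : callables returning the frequency response at s = j*w
          w      : array of frequencies [rad/s]
          """
          g1, g2 = P1(1j * w), P2(1j * w)
          kappa = np.abs(g1 - g2) / np.sqrt((1 + np.abs(g1) ** 2) * (1 + np.abs(g2) ** 2))
          return kappa.max()

      # Two illustrative first-order plants that differ only in gain.
      P_nom  = lambda s: 1.0 / (s + 1.0)
      P_pert = lambda s: 1.2 / (s + 1.0)

      w = np.logspace(-3, 3, 2000)
      print("nu-gap estimate:", nu_gap_siso(P_nom, P_pert, w))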

  12. Understanding Division of Fractions: An Alternative View

    ERIC Educational Resources Information Center

    Fredua-Kwarteng, E.; Ahia, Francis

    2006-01-01

    The purpose of this paper is to offer three alternatives to the patterns or visualizations used to justify the division-of-fractions algorithm "invert and multiply". The three main approaches, which teachers could use to justify the standard algorithm for division of fractions, are historical, similar denominators, and algebraic. The historical approach uses…

  13. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…

  14. Robust Face Detection from Still Images

    DTIC Science & Technology

    2014-01-01

    significant change in false acceptance rates. Keywords— face detection; illumination; skin color variation; Haar-like features; OpenCV I. INTRODUCTION... OpenCV and an algorithm which used histogram equalization. The test is performed against 17 subjects under 576 viewing conditions from the extended Yale...original OpenCV algorithm proved the least accurate, having a hit rate of only 75.6%. It also had the lowest FAR but only by a slight margin at 25.2

  15. Wavelength routing beyond the standard graph coloring approach

    NASA Astrophysics Data System (ADS)

    Blankenhorn, Thomas

    2004-04-01

    When lightpaths are routed in the planning stage of transparent optical networks, the textbook approach is to use algorithms that try to minimize the overall number of wavelengths used in the network. We demonstrate that this method cannot be expected to minimize actual costs when the marginal cost of installing more wavelengths is a declining function of the number of wavelengths already installed, as is frequently the case. We further demonstrate how cost optimization can theoretically be improved with algorithms based on Prim's algorithm. Finally, we test this theory with simulations on a series of actual network topologies, which confirm the theoretical analysis.

  16. Using Alternative Multiplication Algorithms to "Offload" Cognition

    ERIC Educational Resources Information Center

    Jazby, Dan; Pearn, Cath

    2015-01-01

    When viewed through a lens of embedded cognition, algorithms may enable aspects of the cognitive work of multi-digit multiplication to be "offloaded" to the environmental structure created by an algorithm. This study analyses four multiplication algorithms by viewing different algorithms as enabling cognitive work to be distributed…

  17. Advancing the LSST Operations Simulator

    NASA Astrophysics Data System (ADS)

    Saha, Abhijit; Ridgway, S. T.; Cook, K. H.; Delgado, F.; Chandrasekharan, S.; Petry, C. E.; Operations Simulator Group

    2013-01-01

    The Operations Simulator for the Large Synoptic Survey Telescope (LSST; http://lsst.org) allows the planning of LSST observations that obey explicit science driven observing specifications, patterns, schema, and priorities, while optimizing against the constraints placed by design-specific opto-mechanical system performance of the telescope facility, site specific conditions (including weather and seeing), as well as additional scheduled and unscheduled downtime. A simulation run records the characteristics of all observations (e.g., epoch, sky position, seeing, sky brightness) in a MySQL database, which can be queried for any desired purpose. Derivative information digests of the observing history database are made with an analysis package called Simulation Survey Tools for Analysis and Reporting (SSTAR). Merit functions and metrics have been designed to examine how suitable a specific simulation run is for several different science applications. This poster reports recent work which has focussed on an architectural restructuring of the code that will allow us to a) use "look-ahead" strategies that avoid cadence sequences that cannot be completed due to observing constraints; and b) examine alternate optimization strategies, so that the most efficient scheduling algorithm(s) can be identified and used: even few-percent efficiency gains will create substantive scientific opportunity. The enhanced simulator will be used to assess the feasibility of desired observing cadences, study the impact of changing science program priorities, and assist with performance margin investigations of the LSST system.

  18. Multisensor comparison of ice concentration estimates in the marginal ice zone

    NASA Technical Reports Server (NTRS)

    Burns, B. A.; Cavalieri, D. J.; Gloersen, P.; Keller, M. R.; Campbell, W. J.

    1987-01-01

    Aircraft remote sensing data collected during the 1984 summer Marginal Ice Zone Experiment in the Fram Strait are used to compare ice concentration estimates derived from synthetic aperture radar (SAR) imagery, passive microwave imagery at several frequencies, aerial photography, and spectral photometer data. The comparison is carried out not only to evaluate SAR performance against more established techniques but also to investigate how ice surface conditions, imaging geometry, and choice of algorithm parameters affect estimates made by each sensor. Active and passive microwave sensor estimates of ice concentration derived using similar algorithms show an rms difference of 13 percent. Agreement between each microwave sensor and near-simultaneous aerial photography is approximately the same (14 percent). The availability of high-resolution microwave imagery makes it possible to ascribe the discrepancies in the concentration estimates to variations in ice surface signatures in the scene.

  19. The lived experience of girl-to-girl aggression in marginalized girls.

    PubMed

    Zenz Adamshick, Pamela

    2010-04-01

    Girl-to-girl aggression is increasingly being recognized as a health problem, and the number of teenage girls involved in serious fighting is on the rise. Research on the experiences of girl-to-girl aggression in marginalized girls who are out of the mainstream because of poor relationship skills and physical aggression is notably absent, yet this group is at heightened risk for persistent violence. In this study I used the interpretive phenomenological approach to study the lived experience of girl-to-girl aggression in girls who were marginalized and attending an alternative school because of physically aggressive behavior. Data were collected over a 4-month period by means of in-depth interviews and field notes. For this population, girl-to-girl aggression provided self-protection, expressed girls' identity, and was also a means to finding attachment, connection, and friendship. These findings have multidisciplinary implications for interventions with physically aggressive girls, including mentoring programs, in-school support groups, and exploration of a paradigm shift in the use of alternative schools.

  20. Robust boosting via convex optimization

    NASA Astrophysics Data System (ADS)

    Rätsch, Gunnar

    2001-12-01

    In this work we consider statistical learning problems. A learning machine aims to extract information from a set of training examples such that it is able to predict the associated label on unseen examples. We consider the case where the resulting classification or regression rule is a combination of simple rules - also called base hypotheses. The so-called boosting algorithms iteratively find a weighted linear combination of base hypotheses that predict well on unseen data. We address the following issues: o The statistical learning theory framework for analyzing boosting methods. We study learning theoretic guarantees on the prediction performance on unseen examples. Recently, large margin classification techniques emerged as a practical result of the theory of generalization, in particular Boosting and Support Vector Machines. A large margin implies a good generalization performance. Hence, we analyze how large the margins in boosting are and find an improved algorithm that is able to generate the maximum margin solution. o How can boosting methods be related to mathematical optimization techniques? To analyze the properties of the resulting classification or regression rule, it is of high importance to understand whether and under which conditions boosting converges. We show that boosting can be used to solve large scale constrained optimization problems, whose solutions are well characterizable. To show this, we relate boosting methods to methods known from mathematical optimization, and derive convergence guarantees for a quite general family of boosting algorithms. o How to make Boosting noise robust? One of the problems of current boosting techniques is that they are sensitive to noise in the training sample. In order to make boosting robust, we transfer the soft margin idea from support vector learning to boosting. We develop theoretically motivated regularized algorithms that exhibit high noise robustness. o How to adapt boosting to regression problems? Boosting methods are originally designed for classification problems. To extend the boosting idea to regression problems, we use the previous convergence results and relations to semi-infinite programming to design boosting-like algorithms for regression problems. We show that these leveraging algorithms have desirable theoretical and practical properties. o Can boosting techniques be useful in practice? The presented theoretical results are guided by simulation results either to illustrate properties of the proposed algorithms or to show that they work well in practice. We report on successful applications in a non-intrusive power monitoring system, chaotic time series analysis and a drug discovery process. --- Note: The author is the recipient of the Michelson Prize for the best doctoral dissertation of 2001/2002, awarded by the Faculty of Mathematics and Natural Sciences of the University of Potsdam. In this work, statistical learning problems are considered. Learning machines extract information from a given set of training patterns so that they are able to predict properties of previously unseen patterns, e.g., a class membership. We consider the case in which the resulting classification or regression rule is composed of simple rules, the base hypotheses. The so-called boosting algorithms iteratively generate a weighted sum of base hypotheses that predict well on unseen patterns.
The thesis addresses the following topics: o The statistical learning theory suitable for analyzing boosting methods. We study learning-theoretic guarantees for estimating the prediction quality on unseen patterns. Recently, so-called large margin classification techniques have emerged as a practical result of this theory, in particular boosting and support vector machines. A large margin implies a high prediction quality of the decision rule. We therefore analyze how large the margins in boosting are and propose an improved algorithm that efficiently generates rules with maximum margin. o What is the connection between boosting and techniques of convex optimization? To analyze the properties of the resulting classification or regression rules, it is very important to understand whether and under which conditions iterative algorithms such as boosting converge. We show that such algorithms can be used to solve very large constrained optimization problems whose solutions can be well characterized. To this end, connections to the field of convex optimization are identified and exploited to give convergence guarantees for a large family of boosting-like algorithms. o Can boosting be made robust against measurement errors and outliers in the data? One problem of current boosting methods is their relatively high sensitivity to measurement inaccuracies and errors in the training data. To address this problem, the so-called 'soft margin' idea, already used in support vector learning, is transferred to boosting. This leads to theoretically well-motivated, regularized algorithms that exhibit a high degree of robustness. o How can the applicability of boosting be extended to regression problems? Boosting methods were originally developed for classification problems. To extend their applicability to regression problems, the previous convergence results are used and new boosting-like algorithms for regression are developed. We show that these algorithms have good theoretical and practical properties. o Is boosting applicable in practice? The theoretical results presented are accompanied by simulation results, either to illustrate particular properties of the algorithms or to show that they indeed work well in practice and can be used directly. The practical relevance of the developed methods is illustrated in the analysis of chaotic time series and by industrial applications such as a power consumption monitoring system and the development of new drugs.

  1. First-order convex feasibility algorithms for x-ray CT

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob S.; Pan, Xiaochuan

    2013-01-01

    Purpose: Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Often, however, it is impractical to achieve an accurate solution to the optimization problem of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult-to-solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution, thereby facilitating the IIR algorithm design process. Methods: An accelerated version of the Chambolle-Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization. Results: The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems. Conclusions: Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application. PMID:23464295
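
    The paper's accelerated Chambolle-Pock algorithms are not reproduced here. As a much simpler illustration of the convex-feasibility idea, the sketch below alternates exact projections onto two convex sets for a toy linear system: the hyperplanes of A x = b (a Kaczmarz/ART sweep) and the nonnegativity cone. The problem sizes, sweep count, and the assumption that the two sets intersect are all invented for the example.

      import numpy as np

      def art_nonneg(A, b, n_sweeps=200, x0=None):
          """Projection-onto-convex-sets sketch for the feasibility problem
          { x >= 0 : A x = b } (assumed nonempty): sequential hyperplane
          projections interleaved with projection onto the nonnegativity cone."""
          m, n = A.shape
          x = np.zeros(n) if x0 is None else x0.copy()
          row_norm2 = (A ** 2).sum(axis=1)
          for _ in range(n_sweeps):
              for i in range(m):
                  if row_norm2[i] > 0:
                      x -= (A[i] @ x - b[i]) / row_norm2[i] * A[i]   # hyperplane projection
              np.clip(x, 0.0, None, out=x)                            # nonnegativity projection
          return x

      # Tiny consistent toy system with a nonnegative solution.
      rng = np.random.default_rng(0)
      x_true = np.abs(rng.normal(size=20))
      A = rng.normal(size=(40, 20))
      b = A @ x_true
      x_hat = art_nonneg(A, b)
      print("max feasibility violation:", np.abs(A @ x_hat - b).max())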

  2. 2009 Carolyn Wood Sherif Award Address: Riding Trojan Horses from Symbolism to Structural Change--In Feminist Psychology, Context Matters

    ERIC Educational Resources Information Center

    Greene, Beverly

    2010-01-01

    Against the backdrop of the historical 2008 presidential election, I discuss the ways that the election of marginalized group members to public office can be used to silence the discourse on the social marginalization of group members and to remove these analyses from their appropriate context. I emphasize the need to materialize alternatives to…

  3. Copula based prediction models: an application to an aortic regurgitation study

    PubMed Central

    Kumar, Pranesh; Shoukri, Mohamed M

    2007-01-01

    Background: An important issue in prediction modeling of multivariate data is the measure of dependence structure. The use of Pearson's correlation as a dependence measure has several pitfalls and hence application of regression prediction models based on this correlation may not be an appropriate methodology. As an alternative, a copula based methodology for prediction modeling and an algorithm to simulate data are proposed. Methods: The method consists of introducing copulas as an alternative to the correlation coefficient commonly used as a measure of dependence. An algorithm based on the marginal distributions of random variables is applied to construct the Archimedean copulas. Monte Carlo simulations are carried out to replicate datasets, estimate prediction model parameters and validate them using Lin's concordance measure. Results: We have carried out a correlation-based regression analysis on data from 20 patients aged 17–82 years on pre-operative and post-operative ejection fractions after surgery and estimated the prediction model: Post-operative ejection fraction = -0.0658 + 0.8403 (Pre-operative ejection fraction); p = 0.0008; 95% confidence interval of the slope coefficient (0.3998, 1.2808). From the exploratory data analysis, it is noted that both the pre-operative and post-operative ejection fraction measurements have slight departures from symmetry and are skewed to the left. It is also noted that the measurements tend to be widely spread and have shorter tails compared to the normal distribution. Therefore predictions made from the correlation-based model corresponding to the pre-operative ejection fraction measurements in the lower range may not be accurate. Further, it is found that the best approximated marginal distributions of pre-operative and post-operative ejection fractions (using q-q plots) are gamma distributions. The copula based prediction model is estimated as: Post-operative ejection fraction = -0.0933 + 0.8907 × (Pre-operative ejection fraction); p = 0.00008; 95% confidence interval for slope coefficient (0.4810, 1.3003). Between the two models, the predicted post-operative ejection fractions in the lower range of pre-operative ejection measurements differ considerably, and the prediction errors from the copula model are smaller. To validate the copula methodology we have re-sampled with replacement fifty independent bootstrap samples and have estimated concordance statistics 0.7722 (p = 0.0224) for the copula model and 0.7237 (p = 0.0604) for the correlation model. The predicted and observed measurements are concordant for both models. The estimates of accuracy components are 0.9233 and 0.8654 for the copula and correlation models respectively. Conclusion: Copula-based prediction modeling is demonstrated to be an appropriate alternative to conventional correlation-based prediction modeling since correlation-based prediction models are not appropriate to model the dependence in populations with asymmetrical tails. The proposed copula-based prediction model has been validated using the independent bootstrap samples. PMID:17573974
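
    The paper specifies Archimedean copulas with gamma marginals but not the particular family or parameter values used. The sketch below therefore uses a Clayton copula (one common Archimedean choice) with invented parameters, drawing dependent pairs by conditional inversion and mapping them through gamma quantile functions with SciPy; it is only meant to make the simulation step concrete.

      import numpy as np
      from scipy import stats

      def sample_clayton_gamma(n, theta, gamma_pre, gamma_post, rng=None):
          """Draw n (pre, post) pairs whose dependence follows a Clayton copula
          (parameter theta > 0) and whose marginals are the given gamma laws."""
          rng = np.random.default_rng(rng)
          u = rng.uniform(size=n)
          w = rng.uniform(size=n)
          # Conditional inversion for the Clayton copula: invert C(v | u) = w.
          v = (u ** (-theta) * (w ** (-theta / (1.0 + theta)) - 1.0) + 1.0) ** (-1.0 / theta)
          pre  = gamma_pre.ppf(u)    # map uniforms through the marginal quantile functions
          post = gamma_post.ppf(v)
          return pre, post

      # Illustrative marginals (shape/scale values are made up, not the study's fits).
      gamma_pre  = stats.gamma(a=9.0, scale=0.05)
      gamma_post = stats.gamma(a=8.0, scale=0.06)
      pre, post = sample_clayton_gamma(1000, theta=2.0,
                                       gamma_pre=gamma_pre, gamma_post=gamma_post)
      print("sample Pearson correlation:", np.corrcoef(pre, post)[0, 1])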

  4. The Spleen Is an Ideal Site for Inducing Transplanted Islet Graft Expansion in Mice

    PubMed Central

    Takahashi, Hiroyuki; Kodama, Shohta

    2017-01-01

    Alternative islet transplantation sites have the potential to reduce the marginal number of islets required to ameliorate hyperglycemia in recipients with diabetes. Previously, we reported that T cell leukemia homeobox 1 (Tlx1)+ stem cells in the spleen effectively regenerated into insulin-producing cells in the pancreas of non-obese diabetic mice with end-stage disease. Thus, we investigated the spleen as a potential alternative islet transplantation site. Streptozotocin-induced diabetic C57BL/6 mice received syngeneic islets into the portal vein (PV), beneath the kidney capsule (KC), or into the spleen (SP). The marginal number of islets by PV, KC, or SP was 200, 100, and 50, respectively. Some plasma inflammatory cytokine levels in the SP group were significantly lower than those of the PV group after receiving a marginal number of islets, indicating reduced inflammation in the SP group. Insulin contents were increased 280 days after islet transplantation compared with those immediately following transplantation (p<0.05). Additionally, Tlx1-related genes, including Rrm2b and Pla2g2d, were up-regulated, which indicates that islet grafts expanded in the spleen. The spleen is an ideal candidate for an alternative islet transplantation site because of the resulting reduced inflammation and expansion of the islet graft. PMID:28135283

  5. Erosional unconformity or non-deposition? An alternative interpretation of the Eocene seismic stratigraphy offshore Wilkes Land, East Antarctica

    NASA Astrophysics Data System (ADS)

    Sauermilch, Isabel; Whittaker, Joanne; Totterdell, Jennifer; Jokat, Wilfried

    2017-04-01

    The sedimentary stratigraphy along the conjugate Australian-Antarctic continental margins provides insights into their tectonic evolution as well as changes in paleoceanographic conditions in the Southern Ocean. A comprehensive network of multichannel seismic reflection data, as well as geological information from drill cores, has been used to interpret the stratigraphic evolution of these margins. However, a number of alternative seismic interpretations exist for the Antarctic side, particularly due to sparse drill core information. A prominent high-amplitude reflector observed along the margin, extending from the continental shelf to the foot-of-slope, is at the centre of debate. Recently, two major hiatuses (from 33.6 - 47.9 Ma and 51.06 - 51.9 Ma) were recovered by the IODP drill core U1356A offshore Wilkes Land and correlated to this prominent reflector. Previous seismic stratigraphic investigations interpreted this structure as an erosional unconformity and proposed different events as a possible cause for its formation, including the first arrival of continental glaciation at the coast at about 34 Ma, an increase in spreading rate between Australia and Antarctica at about 45 Ma, and a drastic global sea level drop of 70 m at about 43 Ma. However, such large-scale erosion must consequently lead to re-deposition of a significantly large amount of sediment somewhere along the margins, but, to date, no such deposition is observed in the seismic reflection data. Here, we present an alternative seismo-stratigraphic interpretation based on correlation to the sedimentary structures along the Australian margin. We argue that the prominent unconformity was formed due to non-deposition of sediment between 47.8 and 33.6 Ma. The sedimentary units underlying this unconformity show strong similarities in structure, seismic characteristics and variation along the margin with sequences that are partly exposed at the seafloor at the foot of the Australian slope. On the Australian flank, the age of these exposed sediment sequences ranges from 65 Ma to 45 Ma. Low to no sedimentation from 45 Ma to the present day offshore Australia has been interpreted to explain the exposure of these old sediment units. We propose that non-deposition occurred along both margins from 45 Ma until large-scale glacial deposition started at 33.6 Ma along the Antarctic margin. Using our new interpretation, we create paleo-bathymetric reconstructions using the software BALPAL at 83 Ma, 65 Ma and 45 Ma. The resulting paleo-bathymetric maps provide essential information, e.g. for paleo-oceanographic and -climatic investigations in the Southern Ocean.

  6. An Alternative Retrieval Algorithm for the Ozone Mapping and Profiler Suite Limb Profiler

    DTIC Science & Technology

    2012-05-01

    behavior of aerosol extinction from the upper troposphere through the stratosphere is critical for retrieving ozone in this region. Aerosol scattering is...

  7. Hierarchical Marginal Land Assessment for Land Use Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, Shujiang; Post, Wilfred M; Wang, Dali

    2013-01-01

    Marginal land provides an alternative potential for food and bioenergy production in the face of limited land resources; however, effective assessment of marginal lands is not well addressed. Concerns over environmental risks, ecosystem services, and sustainability for marginal land have been widely raised. The objective of this study was to develop a hierarchical marginal land assessment framework for land use planning and management. We first identified major land functions linking production, environment, ecosystem services, and economics, and then classified land resources into four categories of marginal land using suitability and limitations associated with major management goals: physically marginal land, biologically marginal land, environmental-ecological marginal land, and economically marginal land. We tested this assessment framework in south-western Michigan, USA. Our results indicated that this marginal land assessment framework can be feasible for land use planning for food and bioenergy production and for balancing multiple goals of land use management. We also compared our results with marginal land assessments from the Conservation Reserve Program (CRP) and land capability classes (LCC) that are used in the US. The hierarchical assessment framework has the advantage of quantitatively reflecting land functions and multiple concerns. This provides a foundation upon which focused studies can be identified; quantifying high-resolution land functions associated with the environment and ecosystem services, as well as their criteria, is needed to improve the assessment framework.

  8. Flight-determined stability analysis of multiple-input-multiple-output control systems

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    1992-01-01

    Singular value analysis can give conservative stability margin results. Applying structure to the uncertainty can reduce this conservatism. This paper presents flight-determined stability margins for the X-29A lateral-directional, multiloop control system. These margins are compared with the predicted unscaled singular values and scaled structured singular values. The algorithm was further evaluated with flight data by changing the roll-rate-to-aileron command-feedback gain by +/- 20 percent. Minimum eigenvalues of the return difference matrix which bound the singular values are also presented. Extracting multiloop singular values from flight data and analyzing the feedback gain variations validates this technique as a measure of robustness. This analysis can be used for near-real-time flight monitoring and safety testing.
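
    The flight analysis itself is not reproduced here. As a generic illustration of the underlying idea, the sketch below computes the minimum singular value of the return difference matrix I + L(jω) over a frequency grid, which is the usual multiloop stability-margin indicator bounded by the eigenvalues mentioned above. The 2x2 loop transfer matrix and frequency range are made up for the example.

      import numpy as np

      def min_return_difference_sv(L_of_s, w):
          """Minimum singular value of I + L(jw) over a frequency grid.
          L_of_s : callable returning an (m, m) complex loop transfer matrix at s
          w      : array of frequencies [rad/s]
          """
          m = L_of_s(1j * w[0]).shape[0]
          sv_min = np.empty_like(w)
          for k, wk in enumerate(w):
              sv_min[k] = np.linalg.svd(np.eye(m) + L_of_s(1j * wk), compute_uv=False).min()
          return sv_min

      # Made-up 2x2 loop transfer matrix for illustration only.
      def L_example(s):
          return np.array([[2.0 / (s + 1.0), 0.5 / (s + 2.0)],
                           [0.2 / (s + 1.5), 1.0 / (s + 0.5)]])

      w = np.logspace(-2, 2, 400)
      sv = min_return_difference_sv(L_example, w)
      print("multiloop margin indicator (min over frequency):", sv.min())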

  9. Flight-determined stability analysis of multiple-input-multiple-output control systems

    NASA Technical Reports Server (NTRS)

    Burken, John J.

    1992-01-01

    Singular value analysis can give conservative stability margin results. Applying structure to the uncertainty can reduce this conservatism. This paper presents flight-determined stability margins for the X-29A lateral-directional, multiloop control system. These margins are compared with the predicted unscaled singular values and scaled structured singular values. The algorithm was further evaluated with flight data by changing the roll-rate-to-aileron-command-feedback gain by +/- 20 percent. Also presented are the minimum eigenvalues of the return difference matrix which bound the singular values. Extracting multiloop singular values from flight data and analyzing the feedback gain variations validates this technique as a measure of robustness. This analysis can be used for near-real-time flight monitoring and safety testing.

  10. Test and evaluation of the HIDEC engine uptrim algorithm

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented into computer simulations. Thrust improvements over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.

  11. Response of ocean ecosystems to climate warming

    NASA Astrophysics Data System (ADS)

    Sarmiento, J. L.; Slater, R.; Barber, R.; Bopp, L.; Doney, S. C.; Hirst, A. C.; Kleypas, J.; Matear, R.; Mikolajewicz, U.; Monfray, P.; Soldatov, V.; Spall, S. A.; Stouffer, R.

    2004-09-01

    We examine six different coupled climate model simulations to determine the ocean biological response to climate warming between the beginning of the industrial revolution and 2050. We use vertical velocity, maximum winter mixed layer depth, and sea ice cover to define six biomes. Climate warming leads to a contraction of the highly productive marginal sea ice biome by 42% in the Northern Hemisphere and 17% in the Southern Hemisphere, and leads to an expansion of the low productivity permanently stratified subtropical gyre biome by 4.0% in the Northern Hemisphere and 9.4% in the Southern Hemisphere. In between these, the subpolar gyre biome expands by 16% in the Northern Hemisphere and 7% in the Southern Hemisphere, and the seasonally stratified subtropical gyre contracts by 11% in both hemispheres. The low-latitude (mostly coastal) upwelling biome area changes only modestly. Vertical stratification increases, which would be expected to decrease nutrient supply everywhere, but increase the growing season length in high latitudes. We use satellite ocean color and climatological observations to develop an empirical model for predicting chlorophyll from the physical properties of the global warming simulations. Four features stand out in the response to global warming: (1) a drop in chlorophyll in the North Pacific due primarily to retreat of the marginal sea ice biome, (2) a tendency toward an increase in chlorophyll in the North Atlantic due to a complex combination of factors, (3) an increase in chlorophyll in the Southern Ocean due primarily to the retreat of and changes at the northern boundary of the marginal sea ice zone, and (4) a tendency toward a decrease in chlorophyll adjacent to the Antarctic continent due primarily to freshening within the marginal sea ice zone. We use three different primary production algorithms to estimate the response of primary production to climate warming based on our estimated chlorophyll concentrations. The three algorithms give a global increase in primary production of 0.7% at the low end to 8.1% at the high end, with very large regional differences. The main cause of both the response to warming and the variation between algorithms is the temperature sensitivity of the primary production algorithms. We also show results for the period between the industrial revolution and 2050 and 2090.

  12. Mesozoic carbonate-siliciclastic platform to basin systems of a South Tethyan margin (Egypt, East Mediterranean)

    NASA Astrophysics Data System (ADS)

    Tassy, Aurélie; Crouzy, Emmanuel; Gorini, Christian; Rubino, Jean-Loup

    2015-04-01

    The Mesozoic Egyptian margin is the southern margin of a remnant of the Neo-Tethys Ocean, at the northern boundary of the African plate. The East Mediterranean basin developed during the late Triassic-Early Jurassic rifting with a NW-SE opening direction (Frizon de Lamotte et al., 2011). During the Mesozoic, the Egyptian margin was a transform margin with a NW-SE orientation of transform faults. In the Eastern Mediterranean basin, Mesozoic margins are characterized by mixed carbonate-siliciclastic platforms where subsidence and eustasy are the main parameters controlling the facies distribution and geometries of the platform-to-basin transition. Geometries and facies of the platform-slope-basin system, today well constrained in the Levant area, were still poorly known on the Egyptian margin. Geometries and stratigraphic architecture of the Egyptian margin are revealed thanks to a regional seismic and well database provided by an industrial-academic group (GRI, Total). The objective is to understand the seismostratigraphic architecture of the platform-slope-basin system in a key area from the Western Desert to the Nile delta and the Levant margin. Mapping of the top Jurassic and top Cretaceous shows the seismic geomorphology of the margin, with the cartography of the hinge line from the Western Desert to Sinaï. During the Jurassic, the carbonate platform shows a prograding profile, a distal thickening of the external platform, non-abrupt slope profiles, and palaeovalley incisions. Since the Cretaceous, the aggrading and retrograding mixed carbonate-siliciclastic platform shows an alternation of steep NW-SE oblique segments and distally steepened segments. These structures of the platform edge are strongly controlled by the inherited Tethyan transform directions. Along the hinge line, embayments are interpreted as megaslides. The basin infilling is characterised by an alternation of chaotic seismic facies and high-amplitude reflectors onlapping the paleoslopes. MTC deposits can mobilize thick sedimentary series (up to 3500 m) as a mixed combination of debris flows, internally preserved blocks, and/or compressively deformed distal allochthonous masses. The transported material proceeded from the dismantling of the Mesozoic mixed carbonate-siliciclastic platform and can spread downslope over areas as large as 70,000 km2. According to stratigraphic correlations with global sea-level positions, platform instability would have been triggered by the gravitational collapse of the carbonate-siliciclastic platform under its own weight after successive subaerial exposures, which were able to generate karstification processes. Seismic interpretation is constrained by a detailed assessment of the Egyptian margin paleogeography supported by wells. This margin segment is briefly compared to the outcropping Apulian margin in Italy.

  13. Mapping and assessing variability in the Antarctic marginal ice zone, pack ice and coastal polynyas in two sea ice algorithms with implications on breeding success of snow petrels

    NASA Astrophysics Data System (ADS)

    Stroeve, Julienne C.; Jenouvrier, Stephanie; Campbell, G. Garrett; Barbraud, Christophe; Delord, Karine

    2016-08-01

    Sea ice variability within the marginal ice zone (MIZ) and polynyas plays an important role for phytoplankton productivity and krill abundance. Therefore, mapping their spatial extent as well as seasonal and interannual variability is essential for understanding how current and future changes in these biologically active regions may impact the Antarctic marine ecosystem. Knowledge of the distribution of MIZ, consolidated pack ice and coastal polynyas in the total Antarctic sea ice cover may also help to shed light on the factors contributing towards recent expansion of the Antarctic ice cover in some regions and contraction in others. The long-term passive microwave satellite data record provides the longest and most consistent record for assessing the proportion of the sea ice cover that is covered by each of these ice categories. However, estimates of the amount of MIZ, consolidated pack ice and polynyas depend strongly on which sea ice algorithm is used. This study uses two popular passive microwave sea ice algorithms, the NASA Team and Bootstrap, and applies the same thresholds to the sea ice concentrations to evaluate the distribution and variability in the MIZ, the consolidated pack ice and coastal polynyas. Results reveal that the seasonal cycle in the MIZ and pack ice is generally similar between both algorithms, yet the NASA Team algorithm has on average twice the MIZ and half the consolidated pack ice area as the Bootstrap algorithm. Trends also differ, with the Bootstrap algorithm suggesting statistically significant trends towards increased pack ice area and no statistically significant trends in the MIZ. The NASA Team algorithm on the other hand indicates statistically significant positive trends in the MIZ during spring. Potential coastal polynya area and amount of broken ice within the consolidated ice pack are also larger in the NASA Team algorithm. The timing of maximum polynya area may differ by as much as 5 months between algorithms. These differences lead to different relationships between sea ice characteristics and biological processes, as illustrated here with the breeding success of an Antarctic seabird.
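
    The concentration thresholds used to separate the ice classes are not stated in the abstract. The sketch below applies the commonly used cutoffs (15% ice edge, 80% consolidated pack) to a concentration field with NumPy, which is the kind of thresholding the study describes; the cutoff values, grid size, and cell area should be treated as assumptions.

      import numpy as np

      def classify_ice(conc, edge=0.15, pack=0.80):
          """Label each grid cell from a sea ice concentration field in [0, 1]:
          0 = open water, 1 = marginal ice zone (MIZ), 2 = consolidated pack ice."""
          labels = np.zeros(conc.shape, dtype=np.int8)
          labels[(conc >= edge) & (conc < pack)] = 1   # MIZ
          labels[conc >= pack] = 2                     # pack ice
          return labels

      # Toy concentration field; a real analysis would use NASA Team or Bootstrap output.
      conc = np.clip(np.random.default_rng(1).normal(0.5, 0.3, size=(100, 100)), 0.0, 1.0)
      labels = classify_ice(conc)
      cell_area_km2 = 25.0 * 25.0                      # e.g. a 25 km polar grid cell
      print("MIZ area  (km^2):", (labels == 1).sum() * cell_area_km2)
      print("pack area (km^2):", (labels == 2).sum() * cell_area_km2)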

  14. Line-drawing algorithms for parallel machines

    NASA Technical Reports Server (NTRS)

    Pang, Alex T.

    1990-01-01

    The fact that conventional line-drawing algorithms, when applied directly on parallel machines, can lead to very inefficient code is addressed. It is suggested that instead of modifying an existing algorithm for a parallel machine, a more efficient implementation can be produced by going back to the invariants in the definition. Popular line-drawing algorithms are compared with two alternatives: distance to a line (a point is on the line if it is sufficiently close to it) and intersection with a line (a point is on the line if it is an intersection point). For massively parallel single-instruction-multiple-data (SIMD) machines (with thousands of processors and up), the alternatives provide viable line-drawing algorithms. Because of the pixel-per-processor mapping, their performance is independent of the line length and orientation.
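
    A minimal NumPy sketch of the distance-to-line alternative described above: every pixel tests, in parallel (here vectorized rather than one pixel per processor), whether its center lies within a fixed distance of the ideal line. The half-pixel tolerance and grid size are illustrative choices, not values from the paper.

      import numpy as np

      def draw_line_distance(width, height, x0, y0, x1, y1, tol=0.5):
          """Rasterize the segment (x0,y0)-(x1,y1) with a per-pixel distance test.
          Each pixel is set if its center lies within `tol` of the infinite line
          and inside the segment's bounding box (the 'one pixel per processor' view)."""
          ys, xs = np.mgrid[0:height, 0:width]           # pixel-center coordinates
          a, b = y1 - y0, x0 - x1                        # line: a*x + b*y + c = 0
          c = -(a * x0 + b * y0)
          dist = np.abs(a * xs + b * ys + c) / np.hypot(a, b)
          in_box = ((xs >= min(x0, x1)) & (xs <= max(x0, x1)) &
                    (ys >= min(y0, y1)) & (ys <= max(y0, y1)))
          return (dist <= tol) & in_box

      img = draw_line_distance(16, 16, 1, 2, 14, 9)
      print("\n".join("".join("#" if v else "." for v in row) for row in img))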

  15. Which method of posttraumatic stress disorder classification best predicts psychosocial function in children with traumatic brain injury?

    PubMed

    Iselin, Greg; Le Brocque, Robyne; Kenardy, Justin; Anderson, Vicki; McKinlay, Lynne

    2010-10-01

    Controversy surrounds the classification of posttraumatic stress disorder (PTSD), particularly in children and adolescents with traumatic brain injury (TBI). In these populations, it is difficult to differentiate TBI-related organic memory loss from dissociative amnesia. Several alternative PTSD classification algorithms have been proposed for use with children. This paper investigates DSM-IV-TR and alternative PTSD classification algorithms, including and excluding the dissociative amnesia item, in terms of their ability to predict psychosocial function following pediatric TBI. A sample of 184 children aged 6-14 years were recruited following emergency department presentation and/or hospital admission for TBI. PTSD was assessed via semi-structured clinical interview (CAPS-CA) with the child at 3 months post-injury. Psychosocial function was assessed using the parent report CHQ-PF50. Two alternative classification algorithms, the PTSD-AA and 2 of 3 algorithms, reached statistical significance. While the inclusion of the dissociative amnesia item increased prevalence rates across algorithms, it generally resulted in weaker associations with psychosocial function. The PTSD-AA algorithm appears to have the strongest association with psychosocial function following TBI in children and adolescents. Removing the dissociative amnesia item from the diagnostic algorithm generally results in improved validity. Copyright 2010 Elsevier Ltd. All rights reserved.

  16. The Psychopharmacology Algorithm Project at the Harvard South Shore Program: An Algorithm for Generalized Anxiety Disorder.

    PubMed

    Abejuela, Harmony Raylen; Osser, David N

    2016-01-01

    This revision of previous algorithms for the pharmacotherapy of generalized anxiety disorder was developed by the Psychopharmacology Algorithm Project at the Harvard South Shore Program. Algorithms from 1999 and 2010 and associated references were reevaluated. Newer studies and reviews published from 2008-14 were obtained from PubMed and analyzed with a focus on their potential to justify changes in the recommendations. Exceptions to the main algorithm for special patient populations, such as women of childbearing potential, pregnant women, the elderly, and those with common medical and psychiatric comorbidities, were considered. Selective serotonin reuptake inhibitors (SSRIs) are still the basic first-line medication. Early alternatives include duloxetine, buspirone, hydroxyzine, pregabalin, or bupropion, in that order. If response is inadequate, then the second recommendation is to try a different SSRI. Additional alternatives now include benzodiazepines, venlafaxine, kava, and agomelatine. If the response to the second SSRI is unsatisfactory, then the recommendation is to try a serotonin-norepinephrine reuptake inhibitor (SNRI). Other alternatives to SSRIs and SNRIs for treatment-resistant or treatment-intolerant patients include tricyclic antidepressants, second-generation antipsychotics, and valproate. This revision of the GAD algorithm responds to issues raised by new treatments under development (such as pregabalin) and organizes the evidence systematically for practical clinical application.

  17. Intraoperative Raman Spectroscopy of Soft Tissue Sarcomas

    PubMed Central

    Nguyen, John Q.; Gowani, Zain S.; O’Connor, Maggie; Pence, Isaac J.; Nguyen, The-Quyen; Holt, Ginger E.; Schwartz, Herbert S.; Halpern, Jennifer L.; Mahadevan-Jansen, Anita

    2017-01-01

    Background and Objective Soft tissue sarcomas (STS) are a rare and heterogeneous group of malignant tumors that are often treated through surgical resection. Current intraoperative margin assessment methods are limited and highlight the need for an improved approach with respect to time and specificity. Here we investigate the potential of near-infrared Raman spectroscopy for the intraoperative differentiation of STS from surrounding normal tissue. Materials and Methods In vivo Raman measurements at 785 nm excitation were intraoperatively acquired from subjects undergoing STS resection using a probe based spectroscopy system. A multivariate classification algorithm was developed in order to automatically identify spectral features that can be used to differentiate STS from the surrounding normal muscle and fat. The classification algorithm was subsequently tested using leave-one-subject-out cross-validation. Results With the exclusion of well-differentiated liposarcomas, the algorithm was able to classify STS from the surrounding normal muscle and fat with a sensitivity and specificity of 89.5% and 96.4%, respectively. Conclusion These results suggest that single point near-infrared Raman spectroscopy could be utilized as a rapid and non-destructive surgical guidance tool for identifying abnormal tissue margins in need of further excision. PMID:27454580

  18. Intraoperative Raman spectroscopy of soft tissue sarcomas.

    PubMed

    Nguyen, John Q; Gowani, Zain S; O'Connor, Maggie; Pence, Isaac J; Nguyen, The-Quyen; Holt, Ginger E; Schwartz, Herbert S; Halpern, Jennifer L; Mahadevan-Jansen, Anita

    2016-10-01

    Soft tissue sarcomas (STS) are a rare and heterogeneous group of malignant tumors that are often treated through surgical resection. Current intraoperative margin assessment methods are limited and highlight the need for an improved approach with respect to time and specificity. Here we investigate the potential of near-infrared Raman spectroscopy for the intraoperative differentiation of STS from surrounding normal tissue. In vivo Raman measurements at 785 nm excitation were intraoperatively acquired from subjects undergoing STS resection using a probe based spectroscopy system. A multivariate classification algorithm was developed in order to automatically identify spectral features that can be used to differentiate STS from the surrounding normal muscle and fat. The classification algorithm was subsequently tested using leave-one-subject-out cross-validation. With the exclusion of well-differentiated liposarcomas, the algorithm was able to classify STS from the surrounding normal muscle and fat with a sensitivity and specificity of 89.5% and 96.4%, respectively. These results suggest that single point near-infrared Raman spectroscopy could be utilized as a rapid and non-destructive surgical guidance tool for identifying abnormal tissue margins in need of further excision. Lasers Surg. Med. 48:774-781, 2016. © 2016 Wiley Periodicals, Inc.

  19. Proletarianization of English Language Teaching: Iranian EFL Teachers and Their Alternative Role as Transformative Intellectuals

    ERIC Educational Resources Information Center

    Safari, Parvin

    2017-01-01

    In the field of English Language Teaching (ELT), attention has been shifted toward the alternative role of teachers as transformative intellectuals whereby transformation in teaching occurs from control and technical operations to criticism and intellectual reflection. This role enables teachers to focus on marginalized students' lived experiences…

  20. Robust functional regression model for marginal mean and subject-specific inferences.

    PubMed

    Cao, Chunzheng; Shi, Jian Qing; Lee, Youngjo

    2017-01-01

    We introduce flexible robust functional regression models, using various heavy-tailed processes, including a Student t-process. We propose efficient algorithms in estimating parameters for the marginal mean inferences and in predicting conditional means as well as interpolation and extrapolation for the subject-specific inferences. We develop bootstrap prediction intervals (PIs) for conditional mean curves. Numerical studies show that the proposed model provides a robust approach against data contamination or distribution misspecification, and the proposed PIs maintain the nominal confidence levels. A real data application is presented as an illustrative example.

  1. Using the Multiplicative Schwarz Alternating Algorithm (MSAA) for Solving the Large Linear System of Equations Related to Global Gravity Field Recovery up to Degree and Order 120

    NASA Astrophysics Data System (ADS)

    Safari, A.; Sharifi, M. A.; Amjadiparvar, B.

    2010-05-01

    The GRACE mission has substantiated the low-low satellite-to-satellite tracking (LL-SST) concept. The LL-SST configuration can be combined with the previously realized high-low SST concept of the CHAMP mission to provide a much higher accuracy. The line of sight (LOS) acceleration difference between the GRACE satellite pair is the most commonly used observable for mapping the global gravity field of the Earth in terms of spherical harmonic coefficients. In this paper, mathematical formulae for LOS acceleration difference observations have been derived and the corresponding linear system of equations has been set up for spherical harmonics up to degree and order 120. The total number of unknowns is 14641. Such a linear equation system can be solved with iterative solvers or direct solvers. However, the runtime of direct methods, or that of iterative solvers without a suitable preconditioner, increases tremendously. This is the reason why a more sophisticated method is needed to solve linear systems with a large number of unknowns. The multiplicative variant of the Schwarz alternating algorithm is a domain decomposition method that splits the normal matrix of the system into several smaller, overlapping submatrices. In each iteration step, the algorithm successively solves the linear systems with the matrices obtained from the splitting. This reduces both runtime and memory requirements drastically. In this paper we propose the Multiplicative Schwarz Alternating Algorithm (MSAA) for solving the large linear system of gravity field recovery. The proposed algorithm has been tested on the International Association of Geodesy (IAG)-simulated data of the GRACE mission. The achieved results indicate the validity and efficiency of the proposed algorithm in solving the linear system of equations from both accuracy and runtime points of view. Keywords: Gravity field recovery, Multiplicative Schwarz Alternating Algorithm, Low-Low Satellite-to-Satellite Tracking
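
    A minimal dense-matrix sketch of a multiplicative Schwarz sweep for a symmetric positive definite system with overlapping index blocks: each subdomain problem is solved in turn against the current global residual. The block layout, overlap, system size, and convergence test are illustrative choices, not the GRACE normal-equation setup.

      import numpy as np

      def multiplicative_schwarz(A, b, blocks, n_iter=100, tol=1e-10):
          """Solve A x = b with multiplicative Schwarz iterations.
          blocks : list of index arrays defining overlapping subdomains."""
          x = np.zeros_like(b)
          for _ in range(n_iter):
              for idx in blocks:                       # visit subdomains sequentially
                  r = b - A @ x                        # current global residual
                  # Local correction: solve the subdomain problem and update in place.
                  x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
              if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
                  break
          return x

      # Small SPD test problem with two overlapping blocks.
      rng = np.random.default_rng(0)
      M = rng.normal(size=(30, 30))
      A = M @ M.T + 30 * np.eye(30)
      b = rng.normal(size=30)
      blocks = [np.arange(0, 18), np.arange(12, 30)]   # overlap on indices 12..17
      x = multiplicative_schwarz(A, b, blocks)
      print("residual norm:", np.linalg.norm(A @ x - b))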

  2. Margins Associated with Loss of Assured Safety for Systems with Multiple Time-Dependent Failure Modes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Helton, Jon C.; Brooks, Dusty Marie; Sallaberry, Cedric Jean-Marie.

    Representations for margins associated with loss of assured safety (LOAS) for weak link (WL)/strong link (SL) systems involving multiple time-dependent failure modes are developed. The following topics are described: (i) defining properties for WLs and SLs, (ii) background on cumulative distribution functions (CDFs) for link failure time, link property value at link failure, and time at which LOAS occurs, (iii) CDFs for failure time margins defined by (time at which SL system fails) – (time at which WL system fails), (iv) CDFs for SL system property values at LOAS, (v) CDFs for WL/SL property value margins defined by (property value at which SL system fails) – (property value at which WL system fails), and (vi) CDFs for SL property value margins defined by (property value of failing SL at time of SL system failure) – (property value of this SL at time of WL system failure). Included in this presentation is a demonstration of a verification strategy based on defining and approximating the indicated margin results with (i) procedures based on formal integral representations and associated quadrature approximations and (ii) procedures based on algorithms for sampling-based approximations.
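
    The report's formal integral representations are not reproduced here. The sketch below illustrates only the sampling-based side of the verification idea: estimating the CDF of the failure-time margin (time at which the SL system fails minus time at which the WL system fails) by Monte Carlo. The failure-time distributions, link counts, and the "first link failure fails the system" rule are invented for the example, not taken from the report.

      import numpy as np

      def margin_samples(n_samples=100_000, rng=None):
          """Monte Carlo samples of the failure-time margin
          (time at which SL system fails) - (time at which WL system fails),
          assuming each system fails when its first link fails."""
          rng = np.random.default_rng(rng)
          # Hypothetical link failure-time laws (Weibull shapes/scales are made up).
          wl_times = rng.weibull(2.0, size=(n_samples, 2)) * 5.0    # two weak links
          sl_times = rng.weibull(2.0, size=(n_samples, 3)) * 9.0    # three strong links
          return sl_times.min(axis=1) - wl_times.min(axis=1)

      margins = margin_samples()
      margins.sort()
      cdf = np.arange(1, margins.size + 1) / margins.size           # empirical CDF
      print("P(margin <= 0) =", np.interp(0.0, margins, cdf))       # SL fails before WL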

  3. Effect of defuzzification method of fuzzy modeling

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    Imprecision can arise in fuzzy relational modeling as a result of fuzzification, inference and defuzzification. These three sources of imprecision are difficult to separate. We have determined through numerical studies that an important source of imprecision is the defuzzification stage. This imprecision adversely affects the quality of the model output. The most widely used defuzzification algorithm is known by the name of `center of area' (COA) or `center of gravity' (COG). In this paper, we show that this algorithm not only maps the near-limit values of the variables improperly but also introduces errors for middle-domain values of the same variables. Furthermore, the behavior of this algorithm is a function of the shape of the reference sets. We compare the COA method to the weighted average of cluster centers (WACC) procedure, in which the transformation is carried out based on the values of the cluster centers belonging to each of the reference membership functions instead of using the functions themselves. We show that this procedure is more effective and computationally much faster than the COA. The method is tested for a family of reference sets satisfying certain constraints, that is, for any support value the sum of the reference membership function values equals one and the peak values of the two marginal membership functions project to the boundaries of the universe of discourse. For all the member sets of this family of reference sets, the defuzzification errors do not grow as the linguistic variables tend toward their extreme values. In addition, the more reference sets that are defined for a certain linguistic variable, the smaller the average defuzzification error becomes. In the case of triangle-shaped reference sets, there is no defuzzification error at all. Finally, an alternative solution is provided that improves the performance of the COA method.
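
    A minimal NumPy comparison of the two defuzzification schemes discussed above: the centroid (COA/COG) of a clipped-and-aggregated output membership function versus the weighted average of the reference-set centers (WACC). The triangular reference sets satisfy the constraints mentioned in the abstract (memberships sum to one, marginal peaks at the boundaries), but the example firing strengths are invented.

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function with peak at b on support [a, c]."""
          return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                       (c - x) / (c - b + 1e-12)), 0.0)

      x = np.linspace(0.0, 1.0, 1001)                       # universe of discourse
      centers = np.array([0.0, 0.5, 1.0])                   # peaks of three reference sets
      sets = np.stack([tri(x, -0.5, 0.0, 0.5),
                       tri(x,  0.0, 0.5, 1.0),
                       tri(x,  0.5, 1.0, 1.5)])
      w = np.array([0.7, 0.3, 0.0])                         # example firing strengths

      # COA: centroid of the clipped-and-aggregated output membership function.
      aggregated = np.max(np.minimum(sets, w[:, None]), axis=0)
      coa = np.trapz(x * aggregated, x) / np.trapz(aggregated, x)

      # WACC: weighted average of the reference-set centers.
      wacc = np.dot(w, centers) / w.sum()

      print(f"COA = {coa:.4f}, WACC = {wacc:.4f}")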

  4. Staged marginal contoured and central excision technique in the surgical management of perianal Paget's disease.

    PubMed

    Möller, Mecker G; Lugo-Baruqui, Jose Alejandro; Milikowski, Clara; Salgado, Christopher J

    2014-04-01

    Extramammary Paget's disease (EMPD) is an adenocarcinoma of the apocrine glands with unknown exact prevalence and obscure etiology. It has been divided into primary EMPD and secondary EMPD, in which an internal malignancy is usually associated. Treatment for primary EMPD usually consists of wide lesion excision with negative margins. Multiple methods have been proposed to obtain free-margin status of the disease. These include visible border lesion excision, punch biopsies, and micrographic and frozen-section surgery, with different results but still high recurrence rates. The investigators propose a method consisting of a staged contoured marginal excision using "en face" permanent pathologic analysis preceding the steps of central excision of the lesion and the final reconstruction of the surgical defect. Advantages of this method include adequate margin control allowing final reconstruction and tissue preservation, while minimizing patient discomfort. The staged contoured marginal and central excision technique offers a new alternative to the armamentarium for surgical oncologists for the management of EMPD in which margin control is imperative for control of recurrence rates. Copyright © 2014 Elsevier Inc. All rights reserved.

  5. Review of LMIs, Interior Point Methods, Complexity Theory, and Robustness Analysis

    NASA Technical Reports Server (NTRS)

    Mesbahi, M.

    1996-01-01

    From end of intro: ...We would like to show that for certain problems in systems and control theory, there exist algorithms for which corresponding (xi) can be viewed as a certain measure of robustness, e.g., stability margin.

  6. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations which are suitable for incorporation into point spread function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.

  7. An Introduction to Multivariate Curve Resolution-Alternating Least Squares: Spectrophotometric Study of the Acid-Base Equilibria of 8-Hydroxyquinoline-5-Sulfonic Acid

    ERIC Educational Resources Information Center

    Rodriguez-Rodriguez, Cristina; Amigo, Jose Manuel; Coello, Jordi; Maspoch, Santiago

    2007-01-01

    A spectrophotometric study of the acid-base equilibria of 8-hydroxyquinoline-5-sulfonic acid, used to illustrate the multivariate curve resolution-alternating least squares (MCR-ALS) algorithm, is described. The algorithm provides a wealth of information and hence is of great importance for chemometrics research.

  8. Evaluation of Enhanced Risk Monitors for Use on Advanced Reactors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramuhalli, Pradeep; Veeramany, Arun; Bonebrake, Christopher A.

    This study provides an overview of the methodology for integrating time-dependent failure probabilities into nuclear power reactor risk monitors. This prototypic enhanced risk monitor (ERM) methodology was evaluated using a hypothetical probabilistic risk assessment (PRA) model, generated using a simplified design of a liquid-metal-cooled advanced reactor (AR). Component failure data from an industry compilation of failures of components similar to those in the simplified AR model were used to initialize the PRA model. Core damage frequency (CDF) over time was computed and analyzed. In addition, a study on alternative risk metrics for ARs was conducted. Risk metrics that quantify the normalized cost of repairs, replacements, or other operations and management (O&M) actions were defined and used, along with an economic model, to compute the likely economic risk of future actions such as deferred maintenance based on the anticipated change in CDF due to current component condition and future anticipated degradation. Such integration of conventional-risk metrics with alternate-risk metrics provides a convenient mechanism for assessing the impact of O&M decisions on safety and economics of the plant. It is expected that, when integrated with supervisory control algorithms, such integrated-risk monitors will provide a mechanism for real-time control decision-making that ensures safety margins are maintained while operating the plant in an economically viable manner.

  9. MT's algorithm: A new algorithm to search for the optimum set of modulation indices for simultaneous range, command, and telemetry

    NASA Technical Reports Server (NTRS)

    Nguyen, Tien Manh

    1989-01-01

    MT's algorithm was developed as an aid in the design of space telecommunications systems when utilized with simultaneous range/command/telemetry operations. This algorithm provides selection of modulation indices for: (1) suppression of undesired signals to achieve desired link performance margins and/or to allow for a specified performance degradation in the data channel (command/telemetry) due to the presence of undesired signals (interferers); and (2) optimum power division between the carrier, the range, and the data channel. A software program using this algorithm was developed for use with MathCAD software. This software program, called the MT program, provides the computation of optimum modulation indices for all possible cases that are recommended by the Consultative Committee for Space Data Systems (CCSDS) (with emphasis on the square-wave NASA/JPL ranging system).

  10. Utilization of cone-beam CT for offline evaluation of target volume coverage during prostate image-guided radiotherapy based on bony anatomy alignment.

    PubMed

    Paluska, Petr; Hanus, Josef; Sefrova, Jana; Rouskova, Lucie; Grepl, Jakub; Jansa, Jan; Kasaova, Linda; Hodek, Miroslav; Zouhar, Milan; Vosmik, Milan; Petera, Jiri

    2012-01-01

    To assess target volume coverage during prostate image-guided radiotherapy based on bony anatomy alignment and to assess the possibility of safety margin reduction. Implementation of IGRT should influence safety margins. Utilization of cone-beam CT provides current 3D anatomic information directly in the irradiation position. Such information enables reconstruction of the actual dose distribution. Seventeen prostate patients were treated with daily bony anatomy image-guidance. Cone-beam CT (CBCT) scans were acquired once a week immediately after bony anatomy alignment. After the prostate, seminal vesicles, rectum and bladder were contoured, the delivered dose distribution was reconstructed. Target dose coverage was evaluated by the proportion of the CTV encompassed by the 95% isodose. Original plans employed a 1 cm safety margin. Alternative plans assuming a smaller 7 mm margin between CTV and PTV were evaluated in the same way. Rectal and bladder volumes were compared with the initial ones. Rectal and bladder volumes irradiated with doses higher than 75 Gy, 70 Gy, 60 Gy, 50 Gy and 40 Gy were analyzed. In 12% of reconstructed plans the prostate coverage was not sufficient. The prostate underdosage was observed in 5 patients. Coverage of seminal vesicles was not satisfactory in 3% of plans. Most of the target underdosage corresponded to excessive rectal or bladder filling. Evaluation of alternative plans assuming a smaller 7 mm margin revealed 22% and 11% of plans in which prostate and seminal vesicle coverage, respectively, was compromised. These were distributed over 8 and 7 patients, respectively. Sufficient dose coverage of target volumes was not achieved for all patients. Reduction of the safety margin is therefore not acceptable. Initial rectal and bladder volumes cannot be considered representative of those during subsequent treatment.

  11. Initial Evaluations of LoC Prediction Algorithms Using the NASA Vertical Motion Simulator

    NASA Technical Reports Server (NTRS)

    Krishnakumar, Kalmanje; Stepanyan, Vahram; Barlow, Jonathan; Hardy, Gordon; Dorais, Greg; Poolla, Chaitanya; Reardon, Scott; Soloway, Donald

    2014-01-01

    Flying near the edge of the safe operating envelope is an inherently unsafe proposition. Edge of the envelope here implies that small changes or disturbances in system state or system dynamics can take the system out of the safe envelope in a short time and could result in loss-of-control events. This study evaluated approaches to predicting loss-of-control safety margins as the aircraft gets closer to the edge of the safe operating envelope. The goal of the approach is to provide the pilot aural, visual, and tactile cues focused on maintaining the pilot's control action within predicted loss-of-control boundaries. Our predictive architecture combines quantitative loss-of-control boundaries, an adaptive prediction method to estimate in real-time Markov model parameters and associated stability margins, and a real-time data-based predictive control margins estimation algorithm. The combined architecture is applied to a nonlinear transport class aircraft. Evaluations of various feedback cues using both test and commercial pilots in the NASA Ames Vertical Motion-base Simulator (VMS) were conducted in the summer of 2013. The paper presents results of this evaluation focused on effectiveness of these approaches and the cues in preventing the pilots from entering a loss-of-control event.

  12. Automated detection of breast cancer in resected specimens with fluorescence lifetime imaging

    NASA Astrophysics Data System (ADS)

    Phipps, Jennifer E.; Gorpas, Dimitris; Unger, Jakob; Darrow, Morgan; Bold, Richard J.; Marcu, Laura

    2018-01-01

    Re-excision rates for breast cancer lumpectomy procedures are currently nearly 25% due to surgeons relying on inaccurate or incomplete methods of evaluating specimen margins. The objective of this study was to determine if cancer could be automatically detected in breast specimens from mastectomy and lumpectomy procedures by a classification algorithm that incorporated parameters derived from fluorescence lifetime imaging (FLIm). This study generated a database of co-registered histologic sections and FLIm data from breast cancer specimens (N  =  20) and a support vector machine (SVM) classification algorithm able to automatically detect cancerous, fibrous, and adipose breast tissue. Classification accuracies were greater than 97% for automated detection of cancerous, fibrous, and adipose tissue from breast cancer specimens. The classification worked equally well for specimens scanned by hand or with a mechanical stage, demonstrating that the system could be used during surgery or on excised specimens. The ability of this technique to simply discriminate between cancerous and normal breast tissue, in particular to distinguish fibrous breast tissue from tumor, which is notoriously challenging for optical techniques, leads to the conclusion that FLIm has great potential to assess breast cancer margins. Identification of positive margins before waiting for complete histologic analysis could significantly reduce breast cancer re-excision rates.
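
    As a rough picture of the classification step, the sketch below trains a multi-class support vector machine on a feature matrix of FLIm-derived parameters with histology-derived labels. The feature columns, the label coding, and the random placeholder data are assumptions for illustration; the actual feature set and validation protocol are those described in the study.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      # placeholder data: one row per pixel, columns are FLIm-derived
      # parameters (e.g. average lifetimes in several spectral bands)
      rng = np.random.default_rng(0)
      X = rng.random((3000, 6))
      y = rng.integers(0, 3, 3000)        # 0 = cancer, 1 = fibrous, 2 = adipose

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))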

  13. Bayesian analysis of the flutter margin method in aeroelasticity

    DOE PAGES

    Khalil, Mohammad; Poirel, Dominique; Sarkar, Abhijit

    2016-08-27

    A Bayesian statistical framework is presented for the Zimmerman and Weissenburger flutter margin method which considers the uncertainties in aeroelastic modal parameters. The proposed methodology overcomes the limitations of the previously developed least-squares based estimation technique, which relies on the Gaussian approximation of the flutter margin probability density function (pdf). Using the measured free-decay responses at subcritical (preflutter) airspeeds, the joint non-Gaussian posterior pdf of the modal parameters is sampled using the Metropolis–Hastings (MH) Markov chain Monte Carlo (MCMC) algorithm. The posterior MCMC samples of the modal parameters are then used to obtain the flutter margin pdfs and finally the flutter speed pdf. The usefulness of the Bayesian flutter margin method is demonstrated using synthetic data generated from a two-degree-of-freedom pitch-plunge aeroelastic model. The robustness of the statistical framework is demonstrated using different sets of measurement data. In conclusion, it is shown that the probabilistic (Bayesian) approach reduces the number of test points required to provide a flutter speed estimate for a given accuracy and precision.
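
    The MCMC step can be pictured with a generic random-walk Metropolis-Hastings sampler: propose a perturbation of the modal parameters, accept or reject against the posterior, and push each retained sample through the flutter-margin relation. The toy Gaussian log-posterior and the step size below are placeholders, not the aeroelastic likelihood used in the paper.

      import numpy as np

      def metropolis_hastings(log_post, theta0, n_samples=20000, step=0.05, seed=0):
          # generic random-walk Metropolis-Hastings sampler
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          lp = log_post(theta)
          out = np.empty((n_samples, theta.size))
          for i in range(n_samples):
              prop = theta + step * rng.standard_normal(theta.size)
              lp_prop = log_post(prop)
              if np.log(rng.uniform()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              out[i] = theta
          return out

      # toy posterior over two "modal parameters" (placeholder only)
      log_post = lambda th: -0.5 * np.sum((th - np.array([1.0, 2.0])) ** 2 / 0.05)
      samples = metropolis_hastings(log_post, theta0=[0.5, 1.5])
      # each posterior sample would then be mapped through the
      # Zimmerman-Weissenburger relation to a flutter-margin sample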

  14. An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China.

    PubMed

    Zou, Hui; Zou, Zhihong; Wang, Xiaojing

    2015-11-12

    The increase in the volume and complexity of data caused by the uncertain environment is today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm has adopted a varying-weights K-means cluster algorithm to analyze water monitoring data. The varying-weights scheme was the best weighting indicator selected by a modified indicator weight self-adjustment algorithm based on K-means, which is named MIWAS-K-means. The new clustering algorithm avoids the margin of the iteration not being calculated in some cases. With the fast clustering analysis, we can identify the quality of water samples. The algorithm is applied in water quality analysis of the Haihe River (China) data obtained by the monitoring network over a period of eight years (2006-2013) with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality.
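
    The varying-weights idea amounts to letting per-indicator weights scale the distance used in the K-means assignment step. The sketch below shows that mechanism with fixed, hypothetical weights; the MIWAS step that self-adjusts the weights is not reproduced here.

      import numpy as np

      def weighted_kmeans(X, k, w, n_iter=100, seed=0):
          # K-means where each indicator (column) contributes with weight w[j]
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), k, replace=False)]
          for _ in range(n_iter):
              d2 = (((X[:, None, :] - centers[None]) ** 2) * w).sum(-1)
              labels = d2.argmin(1)
              new = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centers[j] for j in range(k)])
              if np.allclose(new, centers):
                  break
              centers = new
          return labels, centers

      rng = np.random.default_rng(1)
      X = rng.random((500, 4))              # e.g. four water-quality indicators
      w = np.array([0.4, 0.3, 0.2, 0.1])    # hypothetical indicator weights
      labels, centers = weighted_kmeans(X, k=3, w=w)
      print(np.bincount(labels, minlength=3))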

  15. Federated learning of predictive models from federated Electronic Health Records.

    PubMed

    Brisimi, Theodora S; Chen, Ruidi; Mela, Theofanie; Olshevsky, Alex; Paschalidis, Ioannis Ch; Shi, Wei

    2018-04-01

    In an era of "big data," computationally efficient and privacy-aware solutions for large-scale machine learning problems become crucial, especially in the healthcare domain, where large amounts of data are stored in different locations and owned by different entities. Past research has been focused on centralized algorithms, which assume the existence of a central data repository (database) which stores and can process the data from all participants. Such an architecture, however, can be impractical when data are not centrally located, does not scale well to very large datasets, and introduces single-point-of-failure risks which could compromise the integrity and privacy of the data. Given scores of data widely spread across hospitals/individuals, a decentralized, computationally scalable methodology is very much needed. We aim at solving a binary supervised classification problem to predict hospitalizations for cardiac events using a distributed algorithm. We seek to develop a general decentralized optimization framework enabling multiple data holders to collaborate and converge to a common predictive model, without explicitly exchanging raw data. We focus on the soft-margin l1-regularized sparse Support Vector Machine (sSVM) classifier. We develop an iterative cluster Primal Dual Splitting (cPDS) algorithm for solving the large-scale sSVM problem in a decentralized fashion. Such a distributed learning scheme is relevant for multi-institutional collaborations or peer-to-peer applications, allowing the data holders to collaborate, while keeping every participant's data private. We test cPDS on the problem of predicting hospitalizations due to heart diseases within a calendar year based on information in the patients' Electronic Health Records prior to that year. cPDS converges faster than centralized methods at the cost of some communication between agents. It also converges faster and with less communication overhead compared to an alternative distributed algorithm. In both cases, it achieves similar prediction accuracy measured by the Area Under the Receiver Operating Characteristic Curve (AUC) of the classifier. We extract important features discovered by the algorithm that are predictive of future hospitalizations, thus providing a way to interpret the classification results and inform prevention efforts. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Toward Developing an Unbiased Scoring Algorithm for "NASA" and Similar Ranking Tasks.

    ERIC Educational Resources Information Center

    Lane, Irving M.; And Others

    1981-01-01

    Presents both logical and empirical evidence to illustrate that the conventional scoring algorithm for ranking tasks significantly underestimates the initial level of group ability and that Slevin's alternative scoring algorithm significantly overestimates the initial level of ability. Presents a modification of Slevin's algorithm which authors…

  17. An analysis of the effect of defect structures on catalytic surfaces by the boundary element technique

    NASA Astrophysics Data System (ADS)

    Peirce, Anthony P.; Rabitz, Herschel

    1988-08-01

    The boundary element (BE) technique is used to analyze the effect of defects on one-dimensional chemically active surfaces. The standard BE algorithm for diffusion is modified to include the effects of bulk desorption by making use of an asymptotic expansion technique to evaluate influences near boundaries and defect sites. An explicit time evolution scheme is proposed to treat the non-linear equations associated with defect sites. The proposed BE algorithm is shown to provide an efficient and convergent algorithm for modelling localized non-linear behavior. Since it exploits the actual Green's function of the linear diffusion-desorption process that takes place on the surface, the BE algorithm is extremely stable. The BE algorithm is applied to a number of interesting physical problems in which non-linear reactions occur at localized defects. The Lotka-Volterra system is considered in which the source, sink and predator-prey interaction terms are distributed at different defect sites in the domain and in which the defects are coupled by diffusion. This example provides a stringent test of the stability of the numerical algorithm. Marginal stability oscillations are analyzed for the Prigogine-Lefever reaction that occurs on a lattice of defects. Dissipative effects are observed for large perturbations to the marginal stability state, and rapid spatial reorganization of uniformly distributed initial perturbations is seen to take place. In another series of examples the effect of defect locations on the balance between desorptive processes on chemically active surfaces is considered. The effect of dynamic pulsing at various time-scales is considered for a one species reactive trapping model. Similar competitive behavior between neighboring defects previously observed for static adsorption levels is shown to persist for dynamic loading of the surface. The analysis of a more complex three species reaction process also provides evidence of competitive behavior between neighboring defect sites. The proposed BE algorithm is shown to provide a useful technique for analyzing the effect of defect sites on chemically active surfaces.

  18. Perennial warm-season grasses for producing biofuel and enhancing soil properties: an alternative to corn residue removal

    USDA-ARS?s Scientific Manuscript database

    Removal of corn (Zea mays L.) residues at high rates for biofuel and other off-farm uses may negatively impact soil and the environment in the long term. Biomass removal from perennial warm-season grasses (WSGs) grown in marginally productive lands could be an alternative to corn residue removal as ...

  19. Rural Schools, Rural Communities: An Alternative View of the Future. Keynote Address.

    ERIC Educational Resources Information Center

    Nachtigal, Paul M.

    The urbanization and industrialization of a society based on commercial competitiveness has resulted in the marginalization of rural communities and the disempowerment of rural people. An alternative view of the future is needed, and rural schools have a part to play in creating it. Four sets of forces are driving society toward a different…

  20. Test and evaluation of the HIDEC engine uptrim algorithm. [Highly Integrated Digital Electronic Control for aircraft

    NASA Technical Reports Server (NTRS)

    Ray, R. J.; Myers, L. P.

    1986-01-01

    The highly integrated digital electronic control (HIDEC) program will demonstrate and evaluate the improvements in performance and mission effectiveness that result from integrated engine-airframe control systems. Performance improvements will result from an adaptive engine stall margin mode, a highly integrated mode that uses the airplane flight conditions and the resulting inlet distortion to continuously compute engine stall margin. When there is excessive stall margin, the engine is uptrimmed for more thrust by increasing the engine pressure ratio (EPR). The EPR uptrim logic has been evaluated and implemented in computer simulations. Thrust improvements of over 10 percent are predicted for subsonic flight conditions. The EPR uptrim was successfully demonstrated during engine ground tests. Test results verify model predictions at the conditions tested.

  1. How Cuba's Latin American School of Medicine challenges the ethics of physician migration.

    PubMed

    Huish, Robert

    2009-08-01

    This paper demonstrates a working alternative to the accepted ethics of physician migration. A dominant cosmopolitan ethics encourages upward mobility of physicians in a globalized labour force, and this ultimately advances the position of individuals rather than improving public health-care service for vulnerable communities in the global South. Cuba's Escuela Latinoamericana de Medicina (ELAM) challenges this trend as its institutional ethics furnishes graduates with appropriate skills, knowledge and service ethics to deliver quality care in marginalized areas. This paper provides an analysis of how ELAM trains physicians in community-oriented service for marginalized areas in the global South. The principal finding of this analysis is that ELAM exhibits a working alternative to the accepted ethics of physician migration, as it encourages graduates to practice in marginalized communities rather than feed the migration pipeline into the North. Arguably, ELAM serves as an important case study in how a medical school's ethics can work to bring graduates closer to the communities that are in desperate need of their skills and of their compassion.

  2. Constrained optimization by radial basis function interpolation for high-dimensional expensive black-box problems with infeasible initial points

    NASA Astrophysics Data System (ADS)

    Regis, Rommel G.

    2014-02-01

    This article develops two new algorithms for constrained expensive black-box optimization that use radial basis function surrogates for the objective and constraint functions. These algorithms are called COBRA and Extended ConstrLMSRBF and, unlike previous surrogate-based approaches, they can be used for high-dimensional problems where all initial points are infeasible. They both follow a two-phase approach where the first phase finds a feasible point while the second phase improves this feasible point. COBRA and Extended ConstrLMSRBF are compared with alternative methods on 20 test problems and on the MOPTA08 benchmark automotive problem (D.R. Jones, Presented at MOPTA 2008), which has 124 decision variables and 68 black-box inequality constraints. The alternatives include a sequential penalty derivative-free algorithm, a direct search method with kriging surrogates, and two multistart methods. Numerical results show that COBRA algorithms are competitive with Extended ConstrLMSRBF and they generally outperform the alternatives on the MOPTA08 problem and most of the test problems.

  3. Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions.

    PubMed

    Chen, Ke; Wang, Shihai

    2011-01-01

    Semi-supervised learning concerns the problem of learning in the presence of labeled and unlabeled data. Several boosting algorithms have been extended to semi-supervised learning with various strategies. To our knowledge, however, none of them takes all three semi-supervised assumptions, i.e., smoothness, cluster, and manifold assumptions, together into account during boosting learning. In this paper, we propose a novel cost functional consisting of the margin cost on labeled data and the regularization penalty on unlabeled data based on three fundamental semi-supervised assumptions. Thus, minimizing our proposed cost functional with a greedy yet stagewise functional optimization procedure leads to a generic boosting framework for semi-supervised learning. Extensive experiments demonstrate that our algorithm yields favorable results for benchmark and real-world classification tasks in comparison to state-of-the-art semi-supervised learning algorithms, including newly developed boosting algorithms. Finally, we discuss relevant issues and relate our algorithm to the previous work.

  4. Segmentation of Coronary Angiograms Using Gabor Filters and Boltzmann Univariate Marginal Distribution Algorithm

    PubMed Central

    Cervantes-Sanchez, Fernando; Hernandez-Aguirre, Arturo; Solorio-Meza, Sergio; Ornelas-Rodriguez, Manuel; Torres-Cisneros, Miguel

    2016-01-01

    This paper presents a novel method for improving the training step of the single-scale Gabor filters by using the Boltzmann univariate marginal distribution algorithm (BUMDA) in X-ray angiograms. Since the single-scale Gabor filters (SSG) are governed by three parameters, the optimal selection of the SSG parameters is highly desirable in order to maximize the detection performance of coronary arteries while reducing the computational time. To obtain the best set of parameters for the SSG, the area (Az) under the receiver operating characteristic curve is used as fitness function. Moreover, to classify vessel and nonvessel pixels from the Gabor filter response, the interclass variance thresholding method has been adopted. The experimental results using the proposed method obtained the highest detection rate with Az = 0.9502 over a training set of 40 images and Az = 0.9583 with a test set of 40 images. In addition, the experimental results of vessel segmentation provided an accuracy of 0.944 with the test set of angiograms. PMID:27738422

  5. Ensemble-marginalized Kalman filter for linear time-dependent PDEs with noisy boundary conditions: application to heat transfer in building walls

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco; Sawlan, Zaid; Scavino, Marco; Tempone, Raúl; Wood, Christopher

    2018-07-01

    In this work, we present the ensemble-marginalized Kalman filter (EnMKF), a sequential algorithm analogous to our previously proposed approach (Ruggeri et al 2017 Bayesian Anal. 12 407–33, Iglesias et al 2018 Int. J. Heat Mass Transfer 116 417–31), for estimating the state and parameters of linear parabolic partial differential equations in initial-boundary value problems when the boundary data are noisy. We apply EnMKF to infer the thermal properties of building walls and to estimate the corresponding heat flux from real and synthetic data. Compared with a modified ensemble Kalman filter (EnKF) that is not marginalized, EnMKF reduces the bias error, avoids the collapse of the ensemble without needing to add inflation, and converges to the mean field posterior using a fraction of the ensemble size required by EnKF. According to our results, the marginalization technique in EnMKF is key to performance improvement with smaller ensembles at any fixed time.

  6. RBoost: Label Noise-Robust Boosting Algorithm Based on a Nonconvex Loss Function and the Numerically Stable Base Learners.

    PubMed

    Miao, Qiguang; Cao, Ying; Xia, Ge; Gong, Maoguo; Liu, Jiachen; Song, Jianfeng

    2016-11-01

    AdaBoost has attracted much attention in the machine learning community because of its excellent performance in combining weak classifiers into strong classifiers. However, AdaBoost tends to overfit to the noisy data in many applications. Accordingly, improving the antinoise ability of AdaBoost plays an important role in many applications. AdaBoost's sensitivity to noisy data stems from the exponential loss function, which puts unrestricted penalties on the misclassified samples with very large margins. In this paper, we propose two boosting algorithms, referred to as RBoost1 and RBoost2, which are more robust to the noisy data compared with AdaBoost. RBoost1 and RBoost2 optimize a nonconvex loss function of the classification margin. Because the penalties to the misclassified samples are restricted to an amount less than one, RBoost1 and RBoost2 do not overfocus on the samples that are always misclassified by the previous base learners. Besides the loss function, at each boosting iteration, RBoost1 and RBoost2 use numerically stable ways to compute the base learners. These two improvements contribute to the robustness of the proposed algorithms to the noisy training and testing samples. Experimental results on the synthetic Gaussian data set, the UCI data sets, and a real malware behavior data set illustrate that the proposed RBoost1 and RBoost2 algorithms perform better when the training data sets contain noisy data.
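
    The contrast between the unbounded exponential penalty and a bounded margin loss can be seen numerically. The sigmoid-type loss below is an illustrative bounded, nonconvex choice in the same spirit; it is not the specific loss defined for RBoost1/RBoost2.

      import numpy as np

      margins = np.linspace(-5.0, 5.0, 11)          # y * f(x): negative means misclassified

      exp_loss = np.exp(-margins)                   # AdaBoost: penalty grows without bound
      bounded_loss = 1.0 / (1.0 + np.exp(margins))  # bounded in (0, 1), nonconvex in the margin

      for m, e, b in zip(margins, exp_loss, bounded_loss):
          print(f"margin {m:+.1f}:  exp {e:9.2f}   bounded {b:.3f}")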

  7. Automated detection of a prostate Ni-Ti stent in electronic portal images.

    PubMed

    Carl, Jesper; Nielsen, Henning; Nielsen, Jane; Lund, Bente; Larsen, Erik Hoejkjaer

    2006-12-01

    Planning target volumes (PTV) in fractionated radiotherapy still have to be outlined with wide margins to the clinical target volume due to uncertainties arising from the daily shift of the prostate position. A recently proposed new method of visualization of the prostate is based on insertion of a thermo-expandable Ni-Ti stent. The current study proposes a new algorithm for automated detection of the Ni-Ti stent in electronic portal images. The algorithm exploits the fact that the Ni-Ti stent has a cylindrical shape with a fixed diameter. The automated method uses enhancement of lines combined with a grayscale morphology operation that looks for enhanced pixels separated by a distance similar to the diameter of the stent. The images in this study are all from prostate cancer patients treated with radiotherapy in a previous study. Images of a stent inserted in a humanoid phantom demonstrated a localization accuracy of 0.4-0.7 mm, which equals the pixel size in the image. The automated detection of the stent was compared to manual detection in 71 pairs of orthogonal images taken in nine patients. The algorithm was successful in 67 of 71 pairs of images. The method is fast, has a high success rate and good accuracy, and has a potential for unsupervised localization of the prostate before radiotherapy, which would enable automated repositioning before treatment and allow for the use of very tight PTV margins.
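
    The combination of line enhancement and a diameter-aware grayscale morphology operation can be sketched with standard image-processing primitives. The top-hat filter and the two-point erosion footprint below are illustrative stand-ins for the operations described, with an assumed stent diameter in pixels; they are not the published implementation.

      import numpy as np
      from scipy import ndimage

      def stent_response(portal_image, diameter_px):
          # enhance thin, bright (line-like) structures
          enhanced = ndimage.white_tophat(portal_image, size=(1, 5))
          # grey erosion with two points one stent diameter apart: the response
          # is high only where both offsets are simultaneously bright
          footprint = np.zeros((1, diameter_px + 1), dtype=bool)
          footprint[0, 0] = footprint[0, -1] = True
          return ndimage.grey_erosion(enhanced, footprint=footprint)

      img = np.random.default_rng(0).random((256, 256))   # placeholder portal image
      response = stent_response(img, diameter_px=12)
      print(response.shape, response.max())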

  8. Estimation of marginal costs at existing waste treatment facilities.

    PubMed

    Martinez-Sanchez, Veronica; Hulgaard, Tore; Hindsgaul, Claus; Riber, Christian; Kamuk, Bettina; Astrup, Thomas F

    2016-04-01

    This investigation aims at providing an improved basis for assessing economic consequences of alternative Solid Waste Management (SWM) strategies for existing waste facilities. A bottom-up methodology was developed to determine marginal costs in existing facilities due to changes in the SWM system, based on the determination of average costs in such waste facilities as a function of key facility and waste compositional parameters. The applicability of the method was demonstrated through a case study including two existing Waste-to-Energy (WtE) facilities, one with co-generation of heat and power (CHP) and another with only power generation (Power), affected by diversion strategies of five waste fractions (fibres, plastic, metals, organics and glass), named "target fractions". The study assumed three possible responses to waste diversion in the WtE facilities: (i) biomass was added to maintain a constant thermal load, (ii) Refuse-Derived Fuel (RDF) was included to maintain a constant thermal load, or (iii) no reaction occurred, resulting in a reduced waste throughput without full utilization of the facility capacity. Results demonstrated that marginal costs of diversion from WtE were up to eleven times larger than average costs and dependent on the response in the WtE plant. Marginal costs of diversion were between 39 and 287 € Mg(-1) target fraction when biomass was added in a CHP (from 34 to 303 € Mg(-1) target fraction in the only Power case), between -2 and 300 € Mg(-1) target fraction when RDF was added in a CHP (from -2 to 294 € Mg(-1) target fraction in the only Power case) and between 40 and 303 € Mg(-1) target fraction when no reaction happened in a CHP (from 35 to 296 € Mg(-1) target fraction in the only Power case). Although average costs at WtE facilities were highly influenced by energy selling prices, marginal costs were not (provided a response was initiated at the WtE to keep the utilized thermal capacity constant). Failing to systematically address and include costs in existing waste facilities in decision-making may unintentionally lead to higher overall costs at societal level. To avoid misleading conclusions, economic assessment of alternative SWM solutions should not only consider potential costs associated with alternative treatment but also include marginal costs associated with existing facilities. Copyright © 2016 Elsevier Ltd. All rights reserved.
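
    The distinction between average and marginal costs at an existing facility reduces to a simple computation once the facility's response to diversion is specified. The figures below are invented for illustration only; the paper derives the actual values from facility-specific bottom-up cost models.

      # illustrative numbers only (not from the paper)
      baseline_throughput = 200_000.0      # Mg of waste treated per year
      baseline_net_cost   = 10_000_000.0   # EUR/yr: capital + O&M minus energy revenues

      diverted            = 10_000.0       # Mg/yr of a target fraction removed from the feed
      net_cost_after      = 10_400_000.0   # EUR/yr after diversion: fixed costs persist, energy revenue falls

      average_cost  = baseline_net_cost / baseline_throughput
      marginal_cost = (net_cost_after - baseline_net_cost) / diverted

      print(f"average cost:  {average_cost:.1f} EUR per Mg of waste treated")
      print(f"marginal cost: {marginal_cost:.1f} EUR per Mg of diverted target fraction")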

  9. Nonlinear inversion of resistivity sounding data for 1-D earth models using the Neighbourhood Algorithm

    NASA Astrophysics Data System (ADS)

    Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.

    2018-01-01

    To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minimums, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, marginal probability density function and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and makes assessing trade-offs between them possible, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods so as to provide a good base for comparative performance. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and remarkably correlate with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to the linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of the layered resistivity structure.

  10. Noninferiority trial designs for odds ratios and risk differences.

    PubMed

    Hilton, Joan F

    2010-04-30

    This study presents constrained maximum likelihood derivations of the design parameters of noninferiority trials for binary outcomes with the margin defined on the odds ratio (ψ) or risk-difference (δ) scale. The derivations show that, for trials in which the group-specific response rates are equal under the point-alternative hypothesis, the common response rate, π(N), is a fixed design parameter whose value lies between the control and experimental rates hypothesized at the point-null, {π(C), π(E)}. We show that setting π(N) equal to the value of π(C) that holds under H(0) underestimates the overall sample size requirement. Given {π(C), ψ} or {π(C), δ} and the type I and II error rates, our algorithm finds clinically meaningful design values of π(N), and the corresponding minimum asymptotic sample size, N=n(E)+n(C), and optimal allocation ratio, γ=n(E)/n(C). We find that optimal allocations are increasingly imbalanced as ψ increases, with γ(ψ)<1 and γ(δ)≈1/γ(ψ), and that ranges of allocation ratios map to the minimum sample size. The latter characteristic allows trialists to consider trade-offs between optimal allocation at a smaller N and a preferred allocation at a larger N. For designs with relatively large margins (e.g. ψ>2.5), trial results that are presented on both scales will differ in power, with more power lost if the study is designed on the risk-difference scale and reported on the odds ratio scale than vice versa. 2010 John Wiley & Sons, Ltd.
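
    For orientation, the sketch below computes a noninferiority sample size on the risk-difference scale from the usual normal-approximation formula, assuming equal response rates under the point alternative and an allocation ratio γ. This is a simplified textbook calculation, not the constrained maximum likelihood derivation developed in the paper.

      from scipy.stats import norm

      def ni_sample_size_rd(pi, delta, alpha=0.025, power=0.90, gamma=1.0):
          # pi: common response rate under the point alternative
          # delta: noninferiority margin on the risk-difference scale
          # gamma: allocation ratio n_E / n_C
          z = norm.ppf(1 - alpha) + norm.ppf(power)
          n_c = z ** 2 * pi * (1 - pi) * (1 + gamma) / (gamma * delta ** 2)
          n_e = gamma * n_c
          return n_e + n_c, n_e, n_c

      N, n_e, n_c = ni_sample_size_rd(pi=0.80, delta=0.10, gamma=1.0)
      print(f"total N ~ {N:.0f}  (n_E ~ {n_e:.0f}, n_C ~ {n_c:.0f})")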

  11. Feedbacks Between Surface Processes and Tectonics at Rifted Margins: a Numerical Approach

    NASA Astrophysics Data System (ADS)

    Andres-Martinez, M.; Perez-Gussinye, M.; Morgan, J. P.; Armitage, J. J.

    2014-12-01

    Mantle dynamics drives the rifting of the continents and consequent crustal processes shape the topography of the rifted margins. Surface processes modify the topography by eroding positive reliefs and depositing sediment in the basins. This lateral displacement of masses implies a change in the loads during rifting, affecting the architecture of the resulting margins. Furthermore, thermal insulation due to sediments could potentially have an impact on the rheologies, which have proved to be among the most influential parameters controlling the deformation style at continental margins. In order to understand the feedback between these processes we have developed a numerical geodynamic model based on MILAMIN. Our model consists of a 2D Lagrangian triangular mesh for which velocities, displacements, pressures and temperatures are calculated each time step. The model is visco-elastic and includes a free-surface stabilization algorithm, strain weakening and an erosion/sedimentation algorithm. Sediment loads and temperatures on the sediments are taken into account when solving velocities and temperatures for the whole model. Although surface processes are strongly three-dimensional, we have chosen to study a 2D section parallel to the extension as a first approach. Results show that where sedimentation occurs, strain localizes further. This is due to the extra load of the sediments exerting a gravitational force over the topography. We also observed angular unconformities in the sediments due to the rotation of crustal blocks associated with normal faults. In order to illustrate the feedbacks between surface and inner processes we will show a series of models calculated with different rheologies and extension velocities, with and without erosion/sedimentation. We will then discuss to what extent thermal insulation due to sedimentation and increased stresses due to sediment loading affect the geometry and distribution of faulting, the rheology of the lower crust and consequently margin architecture.

  12. Silvicultural Alternatives in Bottomland Hardwoods and Their Impact on Stand Quality

    Treesearch

    Harvey E. Kennedy; Robert L. Johnson

    1984-01-01

    Bottomland hardwoods occur on some 35 million acres of forest land in swamps, creek margins, river bottoms, and brown loam bluffs from Virginia to Texas. These hardwood types are very important because the wood has great value and is in demand by forest industries. This article discusses silvicultural alternatives such as site-species relationships, how hardwood timber...

  13. Including Alternative Stories in the Mainstream. How Transcultural Young People in Norway Perform Creative Cultural Resistance in and outside of School

    ERIC Educational Resources Information Center

    Dewilde, Joke; Skrefsrud, Thor-André

    2016-01-01

    The development of an inclusive pedagogy takes on new urgency in Norwegian schools as the student body has become increasingly culturally and linguistically diverse. Traditionally, the Norwegian school has been dominated by homogenising and assimilating discourses, whereas alternative voices have been situated at the margins. In response to this…

  14. Cutting blade dentitions in squaliform sharks form by modification of inherited alternate tooth ordering patterns

    PubMed Central

    Smith, Moya Meredith

    2016-01-01

    The squaliform sharks represent one of the most speciose shark clades. Many adult squaliforms have blade-like teeth, either on both jaws or restricted to the lower jaw, forming a continuous, serrated blade along the jaw margin. These teeth are replaced as a single unit and successor teeth lack the alternate arrangement present in other elasmobranchs. Micro-CT scans of embryos of squaliforms and a related outgroup (Pristiophoridae) revealed that the squaliform dentition pattern represents a highly modified version of tooth replacement seen in other clades. Teeth of Squalus embryos are arranged in an alternate pattern, with successive tooth rows containing additional teeth added proximally. Asynchronous timing of tooth production along the jaw and tooth loss prior to birth cause teeth to align in oblique sets containing teeth from subsequent rows; these become parallel to the jaw margin during ontogeny, so that adult Squalus has functional tooth rows comprising obliquely stacked teeth of consecutive developmental rows. In more strongly heterodont squaliforms, initial embryonic lower teeth develop into the oblique functional sets seen in adult Squalus, with no requirement to form, and subsequently lose, teeth arranged in an initial alternate pattern. PMID:28018617

  15. Cutting blade dentitions in squaliform sharks form by modification of inherited alternate tooth ordering patterns

    NASA Astrophysics Data System (ADS)

    Underwood, Charlie; Johanson, Zerina; Smith, Moya Meredith

    2016-11-01

    The squaliform sharks represent one of the most speciose shark clades. Many adult squaliforms have blade-like teeth, either on both jaws or restricted to the lower jaw, forming a continuous, serrated blade along the jaw margin. These teeth are replaced as a single unit and successor teeth lack the alternate arrangement present in other elasmobranchs. Micro-CT scans of embryos of squaliforms and a related outgroup (Pristiophoridae) revealed that the squaliform dentition pattern represents a highly modified version of tooth replacement seen in other clades. Teeth of Squalus embryos are arranged in an alternate pattern, with successive tooth rows containing additional teeth added proximally. Asynchronous timing of tooth production along the jaw and tooth loss prior to birth cause teeth to align in oblique sets containing teeth from subsequent rows; these become parallel to the jaw margin during ontogeny, so that adult Squalus has functional tooth rows comprising obliquely stacked teeth of consecutive developmental rows. In more strongly heterodont squaliforms, initial embryonic lower teeth develop into the oblique functional sets seen in adult Squalus, with no requirement to form, and subsequently lose, teeth arranged in an initial alternate pattern.

  16. Algorithm Development and Validation for Satellite-Derived Distributions of DOC and CDOM in the US Middle Atlantic Bight

    NASA Technical Reports Server (NTRS)

    Mannino, Antonio; Russ, Mary E.; Hooker, Stanford B.

    2007-01-01

    In coastal ocean waters, distributions of dissolved organic carbon (DOC) and chromophoric dissolved organic matter (CDOM) vary seasonally and interannually due to multiple source inputs and removal processes. We conducted several oceanographic cruises within the continental margin of the U.S. Middle Atlantic Bight (MAB) to collect field measurements in order to develop algorithms to retrieve CDOM and DOC from NASA's MODIS-Aqua and SeaWiFS satellite sensors. In order to develop empirical algorithms for CDOM and DOC, we correlated the CDOM absorption coefficient (a(sub cdom)) with in situ radiometry (remote sensing reflectance, Rrs, band ratios) and then correlated DOC to Rrs band ratios through the CDOM to DOC relationships. Our validation analyses demonstrate successful retrieval of DOC and CDOM from coastal ocean waters using the MODIS-Aqua and SeaWiFS satellite sensors with mean absolute percent differences from field measurements of < 9% for DOC, 20% for a(sub cdom)(355), 16% for a(sub cdom)(443), and 12% for the CDOM spectral slope. To our knowledge, the algorithms presented here represent the first validated algorithms for satellite retrieval of a(sub cdom), DOC, and CDOM spectral slope in the coastal ocean. The satellite-derived DOC and a(sub cdom) products demonstrate the seasonal net ecosystem production of DOC and photooxidation of CDOM from spring to fall. With accurate satellite retrievals of CDOM and DOC, we will be able to apply satellite observations to investigate interannual and decadal-scale variability in surface CDOM and DOC within continental margins and monitor impacts of climate change and anthropogenic activities on coastal ecosystems.

  17. Three-dimensional representations of salt-dome margins at four active strategic petroleum reserve sites.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rautman, Christopher Arthur; Stein, Joshua S.

    2003-01-01

    Existing paper-based site characterization models of salt domes at the four active U.S. Strategic Petroleum Reserve sites have been converted to digital format and visualized using modern computer software. The four sites are the Bayou Choctaw dome in Iberville Parish, Louisiana; the Big Hill dome in Jefferson County, Texas; the Bryan Mound dome in Brazoria County, Texas; and the West Hackberry dome in Cameron Parish, Louisiana. A new modeling algorithm has been developed to overcome limitations of many standard geological modeling software packages in order to deal with structurally overhanging salt margins that are typical of many salt domes. This algorithm, and the implementing computer program, make use of the existing interpretive modeling conducted manually using professional geological judgement and presented in two dimensions in the original site characterization reports as structure contour maps on the top of salt. The algorithm makes use of concepts of finite-element meshes of general engineering usage. Although the specific implementation of the algorithm described in this report and the resulting output files are tailored to the modeling and visualization software used to construct the figures contained herein, the algorithm itself is generic and other implementations and output formats are possible. The graphical visualizations of the salt domes at the four Strategic Petroleum Reserve sites are believed to be major improvements over the previously available two-dimensional representations of the domes via conventional geologic drawings (cross sections and contour maps). Additionally, the numerical mesh files produced by this modeling activity are available for import into and display by other software routines. The mesh data are not explicitly tabulated in this report; however, an electronic version in simple ASCII format is included on a PC-based compact disk.

  18. Margins: a status report from the Annual Meeting of the American Society of Breast Surgeons.

    PubMed

    Harness, Jay K; Giuliano, Armando E; Pockaj, Barbara A; Downs-Kelly, Erinn

    2014-10-01

    Since the emergence of breast conserving surgery (BCS) as an alternative to mastectomy in the 1980s, there has been little consensus on what constitutes acceptable margins for cases of invasive breast cancer, how best to evaluate margins in the operating room, or an understanding of the challenging process of margin assessment by pathologists. The program committee for the 15th Annual Meeting of The American Society of Breast Surgeons organized a plenary session to discuss the latest thinking and guidelines for these important issues. The SSO/ASTRO Consensus Guideline on Margins for BCS was an important focus of discussion. The SSO/ASTRO consensus panelists concluded that "no ink on tumor" is an adequate surgical margin for BCS in patients with invasive breast cancers. Intraoperative strategies to decrease the incidence of positive margins include intraoperative localization techniques (wire localization, ultrasound, radioactive seed) and intraoperative margin assessments with specimen radiography, imprint cytology, and frozen section. Studies also demonstrate the positive effect of shave margins with or without intraoperative margin assessment. The College of American Pathologists protocols for breast specimen margin evaluation consider multiple variables that can impact the proper assessment of margins. These variables include tissue fixation time, specimen orientation, cold ischemia time, leaking ink, specimen pancaking and others that surgeons need to be aware of. Determining when "enough is enough" should not only be the application of guidelines and national standards, but also a multidisciplinary discussion between breast cancer specialists about what is right for the individual patient's unique circumstances.

  19. Marginal abatement cost curves for NOx that account for ...

    EPA Pesticide Factsheets

    A marginal abatement cost curve (MACC) traces out the relationship between the quantity of pollution abated and the marginal cost of abating each additional unit. In the context of air quality management, MACCs typically are developed by sorting end-of-pipe controls by their respective cost effectiveness. Alternative measures, such as renewable electricity, energy efficiency, and fuel switching (RE/EE/FS), are not considered, as it is difficult to quantify their abatement potential. In this paper, we demonstrate the use of an energy system model to develop a MACC for nitrogen oxides (NOx) that incorporates both end-of-pipe controls and these alternative measures. We decompose the MACC by sector, and evaluate the cost-effectiveness of RE/EE/FS relative to end-of-pipe controls. RE/EE/FS are shown to produce considerable emission reductions after end-of-pipe controls have been exhausted. Furthermore, some RE/EE/FS are shown to be cost-competitive with end-of-pipe controls. This work demonstrates how the MARKAL energy system model can be used to evaluate the potential role of RE/EE/FS in achieving NOx reductions. For this particular analysis, we show that RE/EE/FS are able to increase the quantity of NOx reductions available at a particular marginal cost (ranging from $5k per ton to $40k per ton) by approximately 50%.

  20. Blind compressive sensing dynamic MRI

    PubMed Central

    Lingala, Sajan Goud; Jacob, Mathews

    2013-01-01

    We propose a novel blind compressive sensing (BCS) framework to recover dynamic magnetic resonance images from undersampled measurements. This scheme models the dynamic signal as a sparse linear combination of temporal basis functions, chosen from a large dictionary. In contrast to classical compressed sensing, the BCS scheme simultaneously estimates the dictionary and the sparse coefficients from the undersampled measurements. Apart from the sparsity of the coefficients, the key difference of the BCS scheme with current low rank methods is the non-orthogonal nature of the dictionary basis functions. Since the number of degrees of freedom of the BCS model is smaller than that of the low-rank methods, it provides improved reconstructions at high acceleration rates. We formulate the reconstruction as a constrained optimization problem; the objective function is the linear combination of a data consistency term and a sparsity promoting ℓ1 prior of the coefficients. The Frobenius norm dictionary constraint is used to avoid scale ambiguity. We introduce a simple and efficient majorize-minimize algorithm, which decouples the original criterion into three simpler sub-problems. An alternating minimization strategy is used, where we cycle through the minimization of three simpler problems. This algorithm is seen to be considerably faster than approaches that alternate between sparse coding and dictionary estimation, as well as the extension of the K-SVD dictionary learning scheme. The use of the ℓ1 penalty and Frobenius norm dictionary constraint enables the attenuation of insignificant basis functions compared to the ℓ0 norm and column norm constraint assumed in most dictionary learning algorithms; this is especially important since the number of basis functions that can be reliably estimated is restricted by the available measurements. We also observe that the proposed scheme is more robust to local minima compared to the K-SVD method, which relies on greedy sparse coding. Our phase transition experiments demonstrate that the BCS scheme provides much better recovery rates than classical Fourier-based CS schemes, while being only marginally worse than the dictionary aware setting. Since the overhead in additionally estimating the dictionary is low, this method can be very useful in dynamic MRI applications, where the signal is not sparse in known dictionaries. We demonstrate the utility of the BCS scheme in accelerating contrast enhanced dynamic data. We observe superior reconstruction performance with the BCS scheme in comparison to existing low rank and compressed sensing schemes. PMID:23542951
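
    The alternating structure (update the sparse coefficients, then the dictionary, under an ℓ1 penalty and a Frobenius-norm constraint) can be caricatured on a fully sampled toy matrix. The sketch below omits the undersampling operator and the majorize-minimize splitting used in the paper; it only shows the alternation between an ISTA-style coefficient step and a constrained dictionary step, with assumed rank, penalty weight and iteration counts.

      import numpy as np

      def soft_threshold(z, t):
          return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

      def bcs_alternating(X, r=5, lam=0.1, n_iter=100, seed=0):
          rng = np.random.default_rng(seed)
          m, n = X.shape
          V = rng.standard_normal((r, n))
          V /= np.linalg.norm(V)                 # Frobenius-norm constraint on the dictionary
          U = np.zeros((m, r))                   # sparse coefficients
          for _ in range(n_iter):
              # coefficient update: one proximal-gradient (ISTA) step on U
              step = 1.0 / (np.linalg.norm(V, 2) ** 2 + 1e-12)
              U = soft_threshold(U - step * (U @ V - X) @ V.T, step * lam)
              # dictionary update: least squares, rescaled to unit Frobenius norm
              V = np.linalg.lstsq(U, X, rcond=None)[0]
              V /= max(np.linalg.norm(V), 1e-12)
          return U, V

      X = np.random.default_rng(1).standard_normal((64, 40))
      U, V = bcs_alternating(X)
      print("relative fit error:", np.linalg.norm(U @ V - X) / np.linalg.norm(X))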

  1. Numerical simulation of aerodynamic performance of a couple multiple units high-speed train

    NASA Astrophysics Data System (ADS)

    Niu, Ji-qiang; Zhou, Dan; Liu, Tang-hong; Liang, Xi-feng

    2017-05-01

    In order to determine the effect of the coupling region on train aerodynamic performance, and how the coupling region affects the aerodynamic performance of couple multiple units trains when they run and pass each other in the open air, the entrance of two such trains into a tunnel and their passing each other in the tunnel were simulated in Fluent 14.0. The numerical algorithm employed in this study was verified against data from scaled and full-scale train tests, and the difference lies within an acceptable range. The results demonstrate that the distribution of aerodynamic forces on the train cars is altered by the coupling region; however, the coupling region has a marginal effect on the drag and lateral force on the whole train under crosswind, and the lateral force on the train cars is more sensitive to couple multiple units than the other two force coefficients. It is also determined that the coupling region increases the fluctuation of aerodynamic coefficients for each train car under crosswind. Affected by the coupling region, a positive pressure pulse was introduced into the alternating pressure produced by trains passing each other in the open air, and the amplitude of the alternating pressure was decreased by the coupling region. The amplitude of the alternating pressure on the train or on the tunnel was significantly decreased by the coupling region of the train. This phenomenon did not alter the distribution law of pressure on the train and tunnel; moreover, the effect of the coupling region on trains passing each other in the tunnel is stronger than that on a single train passing through the tunnel.

  2. Red (anthocyanic) leaf margins do not correspond to increased phenolic content in New Zealand Veronica spp.

    PubMed Central

    Hughes, Nicole M.; Smith, William K.; Gould, Kevin S.

    2010-01-01

    Background and Aims Red or purple coloration of leaf margins is common in angiosperms, and is found in approx. 25 % of New Zealand Veronica species. However, the functional significance of margin coloration is unknown. We hypothesized that anthocyanins in leaf margins correspond with increased phenolic content in leaf margins and/or the entire leaf, signalling low palatability or leaf quality to edge-feeding insects. Methods Five species of Veronica with red leaf margins, and six species without, were examined in a common garden. Phenolic content in leaf margins and interior lamina regions of juvenile and fully expanded leaves was quantified using the Folin–Ciocalteu assay. Proportions of leaf margins eaten and average lengths of continuous bites were used as a proxy for palatability. Key Results Phenolic content was consistently higher in leaf margins compared with leaf interiors in all species; however, neither leaf margins nor more interior tissues differed significantly in phenolic content with respect to margin colour. Mean phenolic content was inversely correlated with the mean length of continuous bites, suggesting effective deterrence of grazing. However, there was no difference in herbivore consumption of red and green margins, and the plant species with the longest continuous grazing patterns were both red-margined. Conclusions Red margin coloration was not an accurate indicator of total phenolic content in leaf margins or interior lamina tissue in New Zealand Veronica. Red coloration was also ineffective in deterring herbivory on the leaf margin, though studies controlling for variations in leaf structure and biochemistry (e.g. intra-specific studies) are needed before more precise conclusions can be drawn. It is also recommended that future studies focus on the relationship between anthocyanin and specific defence compounds (rather than general phenolic pools), and evaluate possible alternative functions of red margins in leaves (e.g. antioxidants, osmotic adjustment). PMID:20145003

  3. An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China

    PubMed Central

    Zou, Hui; Zou, Zhihong; Wang, Xiaojing

    2015-01-01

    Increasing volumes and complexity of data arising from uncertain environments are today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights K-means clustering scheme to analyze water monitoring data. The varying-weights scheme uses the best weighting indicator selected by a modified indicator-weight self-adjustment algorithm based on K-means, named MIWAS-K-means. The new clustering algorithm avoids cases in which the iteration margin cannot be calculated. With this fast clustering analysis, the quality of water samples can be identified. The algorithm is applied to water quality analysis of the Haihe River (China) using data obtained by the monitoring network over a period of eight years (2006–2013), with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality. PMID:26569283
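
    As a rough illustration of the varying-weights idea (not the authors' MIWAS-K-means; the indicator weights, data, and names below are hypothetical), a K-means variant in which each indicator carries a weight can be sketched as follows:

        import numpy as np

        def weighted_kmeans(X, w, k, n_iter=100, seed=0):
            """K-means where each feature (column) carries a weight w[j].

            Distances are computed in the weighted space sqrt(w) * x, so
            indicators with larger weights dominate the cluster assignment.
            """
            rng = np.random.default_rng(seed)
            Xw = X * np.sqrt(w)                               # scale features by sqrt(weight)
            centers = Xw[rng.choice(len(Xw), size=k, replace=False)]
            for _ in range(n_iter):
                # assign each sample to the nearest weighted centroid
                d = ((Xw[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
                labels = d.argmin(axis=1)
                new_centers = np.array([Xw[labels == j].mean(axis=0) if np.any(labels == j)
                                        else centers[j] for j in range(k)])
                if np.allclose(new_centers, centers):
                    break
                centers = new_centers
            return labels, centers / np.sqrt(w)               # centroids in original units

        # Hypothetical example: 4 water-quality indicators, 3 quality classes
        X = np.random.default_rng(1).normal(size=(200, 4))
        w = np.array([0.4, 0.3, 0.2, 0.1])                    # assumed indicator weights
        labels, centers = weighted_kmeans(X, w, k=3)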

  4. Losses from effluent taxes and quotas under uncertainty

    USGS Publications Warehouse

    Watson, W.D.; Ridker, R.G.

    1984-01-01

    Recent theoretical papers by Adar and Griffin (J. Environ. Econ. Manag. 3, 178-188 (1976)), Fishelson (J. Environ. Econ. Manag. 3, 189-197 (1976)), and Weitzman (Rev. Econ. Studies 41, 477-491 (1974)) show that different expected social losses arise from using effluent taxes and quotas as alternative control instruments when marginal control costs are uncertain. Key assumptions in these analyses are linear marginal cost and benefit functions and an additive error for the marginal cost function (to reflect uncertainty). In this paper, empirically derived nonlinear functions and more realistic multiplicative error terms are used to estimate expected control and damage costs and to identify (empirically) the mix of control instruments that minimizes expected losses. © 1984.

  5. A Model-Free No-arbitrage Price Bound for Variance Options

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonnans, J. Frederic, E-mail: frederic.bonnans@inria.fr; Tan Xiaolu, E-mail: xiaolu.tan@polytechnique.edu

    2013-08-01

    We suggest a numerical approximation for an optimization problem, motivated by its applications in finance to find the model-free no-arbitrage bound of variance options given the marginal distributions of the underlying asset. A first approximation restricts the computation to a bounded domain. Then we propose a gradient projection algorithm together with the finite difference scheme to solve the optimization problem. We prove the general convergence, and derive some convergence rate estimates. Finally, we give some numerical examples to test the efficiency of the algorithm.

  6. Data Assimilation in the Presence of Forecast Bias: The GEOS Moisture Analysis

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; Todling, Ricardo

    1999-01-01

    We describe the application of the unbiased sequential analysis algorithm developed by Dee and da Silva (1998) to the GEOS DAS moisture analysis. The algorithm estimates the persistent component of model error using rawinsonde observations and adjusts the first-guess moisture field accordingly. Results of two seasonal data assimilation cycles show that moisture analysis bias is almost completely eliminated in all observed regions. The improved analyses cause a sizable reduction in the 6h-forecast bias and a marginal improvement in the error standard deviations.

  7. Global Optimality of the Successive Maxbet Algorithm.

    ERIC Educational Resources Information Center

    Hanafi, Mohamed; ten Berge, Jos M. F.

    2003-01-01

    It is known that the Maxbet algorithm, which is an alternative to the method of generalized canonical correlation analysis and Procrustes analysis, may converge to local maxima. Discusses an eigenvalue criterion that is sufficient, but not necessary, for global optimality of the successive Maxbet algorithm. (SLD)

  8. A technique for accelerating the convergence of restarted GMRES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, A H; Jessup, E R; Manteuffel, T

    2004-03-09

    We have observed that the residual vectors at the end of each restart cycle of restarted GMRES often alternate direction in a cyclic fashion, thereby slowing convergence. We present a new technique for accelerating the convergence of restarted GMRES by disrupting this alternating pattern. The new algorithm resembles a full conjugate gradient method with polynomial preconditioning, and its implementation requires minimal changes to the standard restarted GMRES algorithm.
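
    For context, restarted GMRES(m) is available in standard libraries; the call below (on a hypothetical sparse test system, using plain SciPy rather than the accelerated variant proposed in the report) shows the restart length that governs the cyclic behaviour described above:

        import numpy as np
        from scipy.sparse import diags
        from scipy.sparse.linalg import gmres

        # Hypothetical nonsymmetric tridiagonal test system
        n = 500
        A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n), format="csr")
        b = np.ones(n)

        # GMRES(m): the Krylov subspace is rebuilt every `restart` iterations;
        # short restart cycles are where alternating-residual stagnation can appear.
        x, info = gmres(A, b, restart=20, maxiter=1000)
        print("converged" if info == 0 else f"info={info}",
              np.linalg.norm(A @ x - b))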

  9. Bell-Curve Based Evolutionary Optimization Algorithm

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; Laba, K.; Kincaid, R.

    1998-01-01

    The paper presents an optimization algorithm that falls in the category of genetic, or evolutionary, algorithms. While bit exchange is the basis of most Genetic Algorithms (GA) in research and applications in America, some alternatives, also in the category of evolutionary algorithms but using a direct, geometrical approach, have gained popularity in Europe and Asia. The Bell-Curve Based Evolutionary Algorithm (BCB) is in this alternative category and is distinguished by the use of a combination of n-dimensional geometry and the normal distribution, the bell curve, in the generation of the offspring. The tool for creating a child is a geometrical construct comprising a line connecting two parents and a weighted point on that line. The point that defines the child deviates from the weighted point in two directions, parallel and orthogonal to the connecting line, the deviation in each direction obeying a probabilistic distribution. Tests showed satisfactory performance of BCB. The principal advantage of BCB is its controllability via the normal distribution parameters and the geometrical construct variables.
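
    A minimal sketch of the geometric offspring construction described above; the weighting and spread parameters are assumptions, not the BCB reference implementation:

        import numpy as np

        def bcb_child(p1, p2, rng, w=0.5, sigma_par=0.1, sigma_ort=0.1):
            """Generate one child from two parent vectors, bell-curve style.

            The child starts at a weighted point on the line joining the parents,
            then deviates along that line and orthogonally to it, each deviation
            drawn from a normal (bell-curve) distribution.
            """
            line = p2 - p1
            length = np.linalg.norm(line)
            u = line / length                              # unit vector along the parent line
            base = p1 + w * line                           # weighted point on the line
            # random unit vector orthogonal to u
            r = rng.normal(size=p1.shape)
            r -= r.dot(u) * u
            v = r / np.linalg.norm(r)
            return (base
                    + rng.normal(0.0, sigma_par * length) * u    # parallel deviation
                    + rng.normal(0.0, sigma_ort * length) * v)   # orthogonal deviation

        rng = np.random.default_rng(0)
        child = bcb_child(np.array([0.0, 0.0, 1.0]), np.array([1.0, 2.0, 0.0]), rng)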

  10. Development of an Algorithm for Satellite Remote Sensing of Sea and Lake Ice

    NASA Astrophysics Data System (ADS)

    Dorofy, Peter T.

    Satellite remote sensing of snow and ice has a long history. The traditional method for many snow and ice detection algorithms has been the use of the Normalized Difference Snow Index (NDSI). This manuscript is composed of two parts. Chapter 1, Development of a Mid-Infrared Sea and Lake Ice Index (MISI) using the GOES Imager, discusses the desirability, development, and implementation of an alternative index for an ice detection algorithm, application of the algorithm to the detection of lake ice, and qualitative validation against other ice mapping products, such as the Ice Mapping System (IMS). Chapter 2, Application of Dynamic Threshold in a Lake Ice Detection Algorithm, continues with a discussion of the development of a method that considers the variable viewing and illumination geometry of observations throughout the day. The method is an alternative to Bidirectional Reflectance Distribution Function (BRDF) models. Evaluation of the performance of the algorithm is introduced by aggregating classified pixels within geometrical boundaries designated by IMS and obtaining sensitivity and specificity statistical measures.
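
    For reference, the traditional NDSI mentioned above contrasts a green band with a shortwave-infrared band; the sketch below uses generic band names and an illustrative threshold, and is not the MISI index developed in the manuscript:

        import numpy as np

        def ndsi(green, swir, eps=1e-6):
            """Normalized Difference Snow Index: snow and ice are bright in the
            green band and dark in the shortwave infrared, so NDSI is high over snow."""
            green = np.asarray(green, dtype=float)
            swir = np.asarray(swir, dtype=float)
            return (green - swir) / (green + swir + eps)

        # A typical detection rule thresholds the index (threshold value is illustrative)
        snow_mask = ndsi(green=np.array([0.6, 0.2]), swir=np.array([0.1, 0.3])) > 0.4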

  11. Using manual prostate contours to enhance deformable registration of endorectal MRI.

    PubMed

    Cheung, M R; Krishnan, K

    2012-10-01

    Endorectal MRI provides detailed images of the prostate anatomy and is useful for radiation treatment planning. Here we describe a Demons field-initialized B-spline deformable registration of prostate MRI. T2-weighted endorectal MRIs of five patients were used. The prostate and the tumor of each patient were manually contoured. The planning MRIs and their segmentations were simulated by warping the corresponding endorectal MRIs using thin plate splines (TPS). Deformable registration was initialized using the deformation field generated by the Demons algorithm to map the deformed prostate MRI to the non-deformed one. The solution was refined with B-spline registration. Volume overlap similarity was used to assess the accuracy of registration and to suggest a minimum margin to account for the registration errors. Initialization using the Demons algorithm took about 15 min on a computer with a 2.8 GHz Intel processor and 1.3 GB RAM. The refinement B-spline registration (200 iterations) took less than 5 min. Using the synthetic images as the ground truth, at zero margin the average (±S.D.) coverage was 98 (±0.4)% for the prostate and 97 (±1)% for the tumor. The average (±S.D.) treatment margin required to cover the entire prostate was 1.5 (±0.2) mm, and the average (±S.D.) treatment margin required to cover the tumor was 0.7 (±0.1) mm. We also demonstrate the challenges in registering an in vivo deformed MRI to an in vivo non-deformed MRI. We present a deformable registration scheme that can overcome large deformation. This platform is expected to be useful for prostate cancer radiation treatment planning. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  12. Providing animal health services to the poor in Northern Ghana: rethinking the role of community animal health workers?

    PubMed

    Mockshell, Jonathan; Ilukor, John; Birner, Regina

    2014-02-01

    The Community Animal Health Workers (CAHWs) system has been promoted as an alternative solution to providing animal health services in marginal areas. Yet, access to quality animal health services still remains a fundamental problem for livestock dependent communities. This paper uses the concepts of accessibility, affordability, and transaction costs to examine the perceptions of livestock keepers about the various animal health service providers. The empirical analysis is based on a survey of 120 livestock-keeping households in the Tolon-Kumbungu and Savelugu-Nanton districts in the Northern Region of Ghana. A multinomial logit model was used to determine the factors that influence households' choice of alternative animal health service providers. The results show that the government para-vets are the most preferred type of animal health service providers while CAHWs are the least preferred. Reasons for this observation include high transaction costs and low performance resulting from limited training. In areas with few or no government para-vets, farmers have resorted to self-treatment or to selling sick animals for consumption, which has undesirable health implications. These practices also result in significant financial losses for farmers. This paper finds that the CAHWs' system is insufficient for providing quality animal health services to the rural poor in marginal areas. Therefore, market-smart alternative solutions requiring strong public sector engagement to support livestock farmers in marginal areas and setting minimum training standards for animal health service providers merit policy consideration.

  13. An Efficient Augmented Lagrangian Method for Statistical X-Ray CT Image Reconstruction.

    PubMed

    Li, Jiaojiao; Niu, Shanzhou; Huang, Jing; Bian, Zhaoying; Feng, Qianjin; Yu, Gaohang; Liang, Zhengrong; Chen, Wufan; Ma, Jianhua

    2015-01-01

    Statistical iterative reconstruction (SIR) for X-ray computed tomography (CT) under the penalized weighted least-squares criteria can yield significant gains over conventional analytical reconstruction from noisy measurements. However, due to the nonlinear expression of the objective function, most existing algorithms related to SIR unavoidably suffer from a heavy computation load and slow convergence rate, especially when an edge-preserving or sparsity-based penalty or regularization is incorporated. In this work, to address the abovementioned issues of the general algorithms related to SIR, we propose an adaptive nonmonotone alternating direction algorithm in the framework of the augmented Lagrangian multiplier method, termed "ALM-ANAD". The algorithm effectively combines an alternating direction technique with an adaptive nonmonotone line search to minimize the augmented Lagrangian function at each iteration. To evaluate the present ALM-ANAD algorithm, both qualitative and quantitative studies were conducted using digital and physical phantoms. Experimental results show that the present ALM-ANAD algorithm can achieve noticeable gains over the classical nonlinear conjugate gradient algorithm and the state-of-the-art split Bregman algorithm in terms of noise reduction, contrast-to-noise ratio, convergence rate, and universal quality index metrics.

  14. Margin-maximizing feature elimination methods for linear and nonlinear kernel-based discriminant functions.

    PubMed

    Aksu, Yaman; Miller, David J; Kesidis, George; Yang, Qing X

    2010-05-01

    Feature selection for classification in high-dimensional spaces can improve generalization, reduce classifier complexity, and identify important, discriminating feature "markers." For support vector machine (SVM) classification, a widely used technique is recursive feature elimination (RFE). We demonstrate that RFE is not consistent with margin maximization, central to the SVM learning approach. We thus propose explicit margin-based feature elimination (MFE) for SVMs and demonstrate both improved margin and improved generalization, compared with RFE. Moreover, for the case of a nonlinear kernel, we show that RFE assumes that the squared weight vector 2-norm is strictly decreasing as features are eliminated. We demonstrate this is not true for the Gaussian kernel and, consequently, RFE may give poor results in this case. MFE for nonlinear kernels gives better margin and generalization. We also present an extension which achieves further margin gains, by optimizing only two degrees of freedom--the hyperplane's intercept and its squared 2-norm--with the weight vector orientation fixed. We finally introduce an extension that allows margin slackness. We compare against several alternatives, including RFE and a linear programming method that embeds feature selection within the classifier design. On high-dimensional gene microarray data sets, University of California at Irvine (UCI) repository data sets, and Alzheimer's disease brain image data, MFE methods give promising results.
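
    To make the contrast concrete, the sketch below pairs scikit-learn's standard RFE with a toy margin-based elimination loop that, at each step, drops the feature whose removal (after retraining) leaves the widest margin 2/||w||; this toy loop only illustrates the margin criterion and is not the authors' MFE implementation:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE

        X, y = make_classification(n_samples=200, n_features=20,
                                   n_informative=5, random_state=0)

        # Standard RFE: repeatedly drops the feature with the smallest |w_i|
        rfe = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=5).fit(X, y)
        print("RFE keeps:", np.flatnonzero(rfe.support_))

        def margin_elimination(X, y, n_keep):
            """Toy margin-based elimination: drop the feature whose removal
            leaves the largest margin 2/||w|| of a retrained linear SVM."""
            feats = list(range(X.shape[1]))
            while len(feats) > n_keep:
                margins = []
                for f in feats:
                    trial = [g for g in feats if g != f]
                    w = SVC(kernel="linear", C=1.0).fit(X[:, trial], y).coef_.ravel()
                    margins.append(2.0 / np.linalg.norm(w))
                feats.pop(int(np.argmax(margins)))   # removal that hurts the margin least
            return feats

        print("Margin-based loop keeps:", margin_elimination(X, y, n_keep=5))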

  15. Evaluation of the marginal fit of metal copings fabricated on three different marginal designs using conventional and accelerated casting techniques: an in vitro study.

    PubMed

    Vaidya, Sharad; Parkash, Hari; Bhargava, Akshay; Gupta, Sharad

    2014-01-01

    Abundant resources and techniques have been used for complete coverage crown fabrication. Conventional investing and casting procedures for phosphate-bonded investments require a 2- to 4-h procedure before completion. Accelerated casting techniques have been used, but may not result in castings with matching marginal accuracy. The study measured the marginal gap and determined the clinical acceptability of single cast copings invested in a phosphate-bonded investment with the use of conventional and accelerated methods. One hundred and twenty cast coping samples were fabricated using conventional and accelerated methods, with three finish lines: chamfer, shoulder, and shoulder with bevel. Sixty copings were prepared with each technique. Each coping was examined with a stereomicroscope at four predetermined sites and the marginal gap measurements were documented for each. A master chart was prepared for all the data, which were analyzed using the Statistical Package for the Social Sciences (SPSS). Evidence of marginal gap was then evaluated by t-test. Analysis of variance and post-hoc analysis were used to compare the two groups as well as to make comparisons between the three subgroups. The measurements recorded showed no statistically significant difference between the conventional and accelerated groups. Among the three marginal designs studied, shoulder with bevel showed the best marginal fit with both conventional and accelerated casting techniques. The accelerated casting technique could be a viable alternative to the time-consuming conventional casting technique. The marginal fit between the two casting techniques showed no statistical difference.

  16. 30 CFR 204.203 - What is the other relief option?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 204.203 What is the other relief option? (a) Under this relief option, you may request any type of accounting and auditing relief...

  17. Evaluation of marginal gap of Ni-Cr copings made with conventional and accelerated casting techniques.

    PubMed

    Tannamala, Pavan Kumar; Azhagarasan, Nagarasampatti Sivaprakasam; Shankar, K Chitra

    2013-01-01

    Conventional casting techniques following the manufacturers' recommendations are time consuming. Accelerated casting techniques have been reported, but their accuracy with base metal alloys has not been adequately studied. We measured the vertical marginal gap of nickel-chromium copings made by conventional and accelerated casting techniques and determined the clinical acceptability of the cast copings in this study. Experimental design: in vitro study, lab settings. Ten copings each were cast by conventional and accelerated casting techniques. All copings were identical; only their mold preparation schedules differed. Microscopic measurements were recorded at ×80 magnification perpendicular to the axial wall at four predetermined sites. The marginal gap values were evaluated by paired t test. The mean marginal gap by the conventional technique (34.02 μm) is approximately 10 μm less than that of the accelerated casting technique (44.62 μm). As the P value is less than 0.0001, there is a highly significant difference between the two techniques with regard to vertical marginal gap. The accelerated casting technique is time saving, and the marginal gap measured was within the clinically acceptable limits, so it could be an alternative to time-consuming conventional techniques.

  18. Adaptive jammer nulling in EHF communications satellites

    NASA Astrophysics Data System (ADS)

    Bhagwan, Jai; Kavanagh, Stephen; Yen, J. L.

    A preliminary investigation is reviewed concerning adaptive null steering multibeam uplink receiving system concepts for future extremely high frequency communications satellites. Primary alternatives in the design of the uplink antenna, the multibeam adaptive nulling receiver, and the processing algorithm and optimization criterion are discussed. The alternatives are phased array, lens or reflector antennas, nulling at radio frequency or an intermediate frequency, wideband versus narrowband nulling, and various adaptive nulling algorithms. A primary determinant of the hardware complexity is the receiving system architecture, which is described for the alternative antenna and nulling concepts. The final concept chosen will be influenced by the nulling performance requirements, cost, and technological readiness.

  19. The PlusCal Algorithm Language

    NASA Astrophysics Data System (ADS)

    Lamport, Leslie

    Algorithms are different from programs and should not be described with programming languages. The only simple alternative to programming languages has been pseudo-code. PlusCal is an algorithm language that can be used right now to replace pseudo-code, for both sequential and concurrent algorithms. It is based on the TLA+ specification language, and a PlusCal algorithm is automatically translated to a TLA+ specification that can be checked with the TLC model checker and reasoned about formally.

  20. Kerr Reservoir LANDSAT experiment analysis for March 1981

    NASA Technical Reports Server (NTRS)

    Lecroy, S. R. (Principal Investigator)

    1982-01-01

    LANDSAT radiance data were used in an experiment conducted on the waters of Kerr Reservoir to determine if reliable algorithms could be developed that relate water quality parameters to remotely sensed data. A mix of different types of algorithms using the LANDSAT bands was generated to provide a thorough understanding of the relationships among the data involved. Except for secchi depth, the study demonstrated that for the ranges measured, the algorithms that satisfactorily represented the data encompass a mix of linear and nonlinear forms using only one LANDSAT band. Ratioing techniques did not improve the results since the initial design of the experiment minimized the errors against which this procedure is effective. Good correlations were found for total suspended solids, iron, turbidity, and secchi depth. Marginal correlations were discovered for nitrate and tannin + lignin. Quantification maps of Kerr Reservoir are presented for many of the water quality parameters using the developed algorithms.

  1. An intelligent identification algorithm for the monoclonal picking instrument

    NASA Astrophysics Data System (ADS)

    Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun

    2017-11-01

    The traditional colony selection is mainly performed manually, which is inefficient and highly subjective. Therefore, it is important to develop an automatic monoclonal-picking instrument. The critical stage of automatic monoclonal picking and intelligent optimal selection is the intelligent identification algorithm. An auto-screening algorithm based on the Support Vector Machine (SVM) is proposed in this paper; it uses supervised learning combined with colony morphological characteristics to classify colonies accurately. Furthermore, from the basic morphological features of the colony, the system computes a series of morphological parameters step by step. Through the establishment of a maximal-margin classifier, and based on analysis of the growth trend of the colony, the selection of the monoclonal colony is carried out. The experimental results showed that the auto-screening algorithm could screen out regular colonies from the rest while meeting the requirements on the various parameters.
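
    A minimal supervised-screening sketch in the spirit of the maximal-margin classifier described above; the morphological feature names, training values, and labels are placeholders, not the instrument's actual pipeline:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Hypothetical morphological features per colony: [area, circularity, mean_intensity]
        X_train = np.array([[120, 0.92, 0.65],
                            [ 80, 0.88, 0.70],
                            [300, 0.45, 0.40],    # irregular colony
                            [ 40, 0.30, 0.35]])   # debris / irregular
        y_train = np.array([1, 1, 0, 0])          # 1 = regular colony, 0 = reject

        # Maximal-margin classifier on standardized morphological features
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)).fit(X_train, y_train)

        candidates = np.array([[110, 0.90, 0.68], [250, 0.50, 0.42]])
        print(clf.predict(candidates))            # screen out irregular colonies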

  2. Probabilistic cosmological mass mapping from weak lensing shear

    DOE PAGES

    Schneider, M. D.; Ng, K. Y.; Dawson, W. A.; ...

    2017-04-10

    Here, we infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.

  3. Probabilistic Cosmological Mass Mapping from Weak Lensing Shear

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schneider, M. D.; Dawson, W. A.; Ng, K. Y.

    2017-04-10

    We infer gravitational lensing shear and convergence fields from galaxy ellipticity catalogs under a spatial process prior for the lensing potential. We demonstrate the performance of our algorithm with simulated Gaussian-distributed cosmological lensing shear maps and a reconstruction of the mass distribution of the merging galaxy cluster Abell 781 using galaxy ellipticities measured with the Deep Lens Survey. Given interim posterior samples of lensing shear or convergence fields on the sky, we describe an algorithm to infer cosmological parameters via lens field marginalization. In the most general formulation of our algorithm we make no assumptions about weak shear or Gaussian-distributed shape noise or shears. Because we require solutions and matrix determinants of a linear system of dimension that scales with the number of galaxies, we expect our algorithm to require parallel high-performance computing resources for application to ongoing wide field lensing surveys.

  4. An intelligent fault diagnosis method of rolling bearings based on regularized kernel Marginal Fisher analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Shi, Tielin; Xuan, Jianping

    2012-05-01

    Generally, the vibration signals of faulty bearings are non-stationary and highly nonlinear under complicated operating conditions. Thus, it is a major challenge to extract optimal features that improve classification while simultaneously decreasing the feature dimension. Kernel Marginal Fisher Analysis (KMFA) is a novel supervised manifold learning algorithm for feature extraction and dimensionality reduction. In order to avoid the small-sample-size problem in KMFA, we propose regularized KMFA (RKMFA). A simple and efficient intelligent fault diagnosis method based on RKMFA is put forward and applied to fault recognition of rolling bearings. To directly extract nonlinear features from the original high-dimensional vibration signals, RKMFA constructs two graphs describing the intra-class compactness and the inter-class separability, combining a traditional manifold learning algorithm with the Fisher criterion. The optimal low-dimensional features are thus obtained for better classification and finally fed into the simplest K-nearest neighbor (KNN) classifier to recognize different fault categories of bearings. The experimental results demonstrate that the proposed approach improves the fault classification performance and outperforms the other conventional approaches.

  5. Unwinding the hairball graph: Pruning algorithms for weighted complex networks

    NASA Astrophysics Data System (ADS)

    Dianati, Navid

    2016-01-01

    Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
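
    To illustrate the flavor of the marginal likelihood filter, the sketch below scores one integer-weighted edge under a simplified binomial null in which each of the T total weight units lands on edge (i, j) with probability s_i*s_j/T^2; this simplified null and the example numbers are assumptions for illustration, not the paper's exact model:

        import numpy as np
        from scipy.stats import binom

        def edge_significance(w_ij, s_i, s_j, T):
            """One-sided p-value that edge (i, j) carries weight >= w_ij under a
            binomial null: each of T weight units hits (i, j) with prob s_i*s_j/T^2."""
            p = (s_i * s_j) / (T * T)
            return binom.sf(w_ij - 1, T, p)       # P(W >= w_ij)

        # Hypothetical airport-style example: node strengths and total traffic volume
        print(edge_significance(w_ij=50, s_i=1000, s_j=800, T=100000))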

  6. Low dose CT reconstruction via L1 norm dictionary learning using alternating minimization algorithm and balancing principle.

    PubMed

    Wu, Junfeng; Dai, Fang; Hu, Gang; Mou, Xuanqin

    2018-04-18

    Excessive radiation exposure in computed tomography (CT) scans increases the chance of developing cancer and has become a major clinical concern. Recently, statistical iterative reconstruction (SIR) with l0-norm dictionary learning regularization has been developed to reconstruct CT images from the low dose and few-view dataset in order to reduce radiation dose. Nonetheless, the sparse regularization term adopted in this approach is l0-norm, which cannot guarantee the global convergence of the proposed algorithm. To address this problem, in this study we introduced the l1-norm dictionary learning penalty into SIR framework for low dose CT image reconstruction, and developed an alternating minimization algorithm to minimize the associated objective function, which transforms CT image reconstruction problem into a sparse coding subproblem and an image updating subproblem. During the image updating process, an efficient model function approach based on balancing principle is applied to choose the regularization parameters. The proposed alternating minimization algorithm was evaluated first using real projection data of a sheep lung CT perfusion and then using numerical simulation based on sheep lung CT image and chest image. Both visual assessment and quantitative comparison using terms of root mean square error (RMSE) and structural similarity (SSIM) index demonstrated that the new image reconstruction algorithm yielded similar performance with l0-norm dictionary learning penalty and outperformed the conventional filtered backprojection (FBP) and total variation (TV) minimization algorithms.

  7. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    DOEpatents

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.

  8. Optimization Techniques for Analysis of Biological and Social Networks

    DTIC Science & Technology

    2012-03-28

    analyzing a new metaheuristic technique, variable objective search. 3. Experimentation and application: Implement the proposed algorithms, test and fine...alternative mathematical programming formulations, their theoretical analysis, the development of exact algorithms, and heuristics. Originally, clusters...systematic fashion under a unifying theoretical and algorithmic framework. Optimization, Complex Networks, Social Network Analysis, Computational

  9. 17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...

  10. 17 CFR Appendix A to Part 37 - Guidance on Compliance With Registration Criteria

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... facility should include the system's trade-matching algorithm and order entry procedures. A submission involving a trade-matching algorithm that is based on order priority factors other than on a best price/earliest time basis should include a brief explanation of the alternative algorithm. (b) A board of trade's...

  11. Design and Large-Scale Evaluation of Educational Games for Teaching Sorting Algorithms

    ERIC Educational Resources Information Center

    Battistella, Paulo Eduardo; von Wangenheim, Christiane Gresse; von Wangenheim, Aldo; Martina, Jean Everson

    2017-01-01

    The teaching of sorting algorithms is an essential topic in undergraduate computing courses. Typically the courses are taught through traditional lectures and exercises involving the implementation of the algorithms. As an alternative, this article presents the design and evaluation of three educational games for teaching Quicksort and Heapsort.…

  12. Validation of the alternating conditional estimation algorithm for estimation of flexible extensions of Cox's proportional hazards model with nonlinear constraints on the parameters.

    PubMed

    Wynant, Willy; Abrahamowicz, Michal

    2016-11-01

    Standard optimization algorithms for maximizing likelihood may not be applicable to the estimation of those flexible multivariable models that are nonlinear in their parameters. For applications where the model's structure permits separating estimation of mutually exclusive subsets of parameters into distinct steps, we propose the alternating conditional estimation (ACE) algorithm. We validate the algorithm, in simulations, for estimation of two flexible extensions of Cox's proportional hazards model where the standard maximum partial likelihood estimation does not apply, with simultaneous modeling of (1) nonlinear and time-dependent effects of continuous covariates on the hazard, and (2) nonlinear interaction and main effects of the same variable. We also apply the algorithm in real-life analyses to estimate nonlinear and time-dependent effects of prognostic factors for mortality in colon cancer. Analyses of both simulated and real-life data illustrate good statistical properties of the ACE algorithm and its ability to yield new potentially useful insights about the data structure. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
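
    As a generic illustration of the alternating-conditional idea, the toy exponential-decay objective below (an assumption for demonstration, not the flexible hazard models studied in the paper) is fit by optimizing each parameter block with the other held fixed:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 3, size=200)
        y = 2.0 * np.exp(-0.7 * x) + rng.normal(0, 0.05, size=200)   # toy data

        def loss(a, b):
            return np.sum((y - a * np.exp(-b * x)) ** 2)

        # Alternating conditional estimation: update each block with the other fixed
        a, b = 1.0, 1.0
        for _ in range(50):
            a = minimize(lambda t: loss(t[0], b), x0=[a]).x[0]        # block 1
            b_new = minimize(lambda t: loss(a, t[0]), x0=[b]).x[0]    # block 2
            if abs(b_new - b) < 1e-8:
                b = b_new
                break
            b = b_new
        print(a, b)   # should approach (2.0, 0.7)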

  13. Parameter estimation for the 4-parameter Asymmetric Exponential Power distribution by the method of L-moments using R

    USGS Publications Warehouse

    Asquith, William H.

    2014-01-01

    The implementation characteristics of two method of L-moments (MLM) algorithms for parameter estimation of the 4-parameter Asymmetric Exponential Power (AEP4) distribution are studied using the R environment for statistical computing. The objective is to validate the algorithms for general application of the AEP4 using R. An algorithm was introduced in the original study of the L-moments for the AEP4. A second or alternative algorithm is shown to have a larger L-moment-parameter domain than the original. The alternative algorithm is shown to provide reliable parameter production and recovery of L-moments from fitted parameters. A proposal is made for AEP4 implementation in conjunction with the 4-parameter Kappa distribution to create a mixed-distribution framework encompassing the joint L-skew and L-kurtosis domains. The example application provides a demonstration of pertinent algorithms with L-moment statistics and two 4-parameter distributions (AEP4 and the Generalized Lambda) for MLM fitting to a modestly asymmetric and heavy-tailed dataset using R.

  14. Passive margins getting squeezed in the mantle convection vice

    NASA Astrophysics Data System (ADS)

    Husson, Laurent; Yamato, Philippe; Becker, Thorsten; Pedoja, Kevin

    2013-04-01

    Quaternary coastal geomorphology reveals that passive margins underwent wholesale uplift at least during the glacial cycle. In addition, these not-so-passive margins often exhibit long term exhumation and tectonic inversion, which suggests that compression and tectonic shortening could be the mechanism that triggers their overall uplift. We speculate that the compression in the lithosphere gradually increased during the Cenozoic. The many mountain belts at active margins that accompany this event readily witness this increase. Less clear is how that compression increase affects passive margins. In order to address this issue, we design minimalist 2D viscous models to quantify the impact of plate collision on the stress regime. In these models, a sluggish plate is disposed on a less viscous mantle. It is driven by a "mantle conveyor belt" alternately excited by lateral shear stresses that represent a downwelling on one side, an upwelling on the other side, or both simultaneously. The lateral edges of the plate are either free or fixed, respectively representing the cases of free convergence and collision. In practice, it dramatically changes the upper boundary condition for mantle circulation and subsequently, for the stress field. The flow pattern transiently evolves almost between two end-members, starting from a situation close to a Couette flow to a pattern that looks like a Poiseuille flow with an almost null velocity at the surface (though in the models, the horizontal velocity at the surface is not strictly null, as the lithosphere deforms). In the second case, the lithosphere is highly stressed horizontally and deforms. For an equivalent bulk driving force, compression increases drastically at passive margins if upwellings are active because they push plates towards the collision. Conversely, if only downwellings are activated, compression occurs on one half of the plate and extension on the other half, because only the downwelling is pulling the plate. Thus, active upwellings underneath oceanic plates are required to explain compression at passive margins. This conclusion is corroborated by "real-Earth" 3D spherical models, wherein the flow is alternately driven by density anomalies inferred from seismic tomography (and therefore includes both downwellings at subduction zones and upwellings above the superswells) and density anomalies that correspond to subducting slabs only. While the second scenario mostly compresses the active margins of upper plates and leaves other areas at rest, the first scenario efficiently compresses passive margins where the geological record reveals their uplift, exhumation, and tectonic inversion.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Infante, Anthony A.; Infante, Dzintra; Chan, M.-C.

    We characterized chicken erythrocyte and human platelet ferritin by biochemical studies and immunofluorescence. Erythrocyte ferritin was found to be a homopolymer of H-ferritin subunits, resistant to proteinase K digestion, heat stable, and contained iron. In mature chicken erythrocytes and human platelets, ferritin was localized at the marginal band, a ring-shaped peripheral microtubule bundle, and displayed properties of bona fide microtubule-associated proteins such as tau. Red blood cell ferritin association with the marginal band was confirmed by temperature-induced disassembly-reassembly of microtubules. During erythrocyte differentiation, ferritin co-localized with coalescing microtubules during marginal band formation. In addition, ferritin was found in the nuclei of mature erythrocytes, but was not detectable in those of bone marrow erythrocyte precursors. These results suggest that ferritin has a function in marginal band formation and possibly in protection of the marginal band from damaging effects of reactive oxygen species by sequestering iron in the mature erythrocyte. Moreover, our data suggest that ferritin and syncolin, a previously identified erythrocyte microtubule-associated protein, are identical. Nuclear ferritin might contribute to transcriptional silencing or, alternatively, constitute a ferritin reservoir.

  16. Rotor Design Options for Improving XV-15 Whirl-Flutter Stability Margins

    NASA Technical Reports Server (NTRS)

    Acree, C. W., Jr.; Peyran, R. J.; Johnson, Wayne

    2004-01-01

    Rotor design changes intended to improve tiltrotor whirl-flutter stability margins were analyzed. A baseline analytical model of the XV-15 was established, and then a thinner, composite wing was designed to be representative of a high-speed tiltrotor. The rotor blade design was modified to increase the stability speed margin for the thin-wing design. Small rearward offsets of the aerodynamic-center locus with respect to the blade elastic axis created large increases in the stability boundary. The effect was strongest for offsets at the outboard part of the blade, where an offset of the aerodynamic center by 10% of tip chord improved the stability margin by over 100 knots. Forward offsets of the blade center of gravity had similar but less pronounced effects. Equivalent results were seen for swept-tip blades. Appropriate combinations of sweep and pitch stiffness completely eliminated whirl flutter within the speed range examined; alternatively, they allowed large increases in pitch-flap coupling (delta-three) for a given stability margin. A limited investigation of the rotor loads in helicopter and airplane configuration showed only minor increases in loads.

  17. Multimodal Estimation of Distribution Algorithms.

    PubMed

    Yang, Qiang; Chen, Wei-Neng; Li, Yun; Chen, C L Philip; Xu, Xiang-Min; Zhang, Jun

    2016-02-15

    Taking advantage of the ability of estimation of distribution algorithms (EDAs) to preserve high diversity, this paper proposes a multimodal EDA. Integrated with clustering strategies for crowding and speciation, two versions of this algorithm are developed, which operate at the niche level. These two algorithms are then equipped with three distinctive techniques: 1) a dynamic cluster sizing strategy; 2) an alternating use of Gaussian and Cauchy distributions to generate offspring; and 3) an adaptive local search. The dynamic cluster sizing affords a potential balance between exploration and exploitation and reduces the sensitivity to the cluster size in the niching methods. Taking advantage of the Gaussian and Cauchy distributions, we generate offspring at the niche level by alternately using these two distributions, which can also potentially offer a balance between exploration and exploitation. Further, solution accuracy is enhanced through a new local search scheme probabilistically conducted around seeds of niches, with probabilities determined self-adaptively according to the fitness values of these seeds. Extensive experiments conducted on 20 benchmark multimodal problems confirm that both algorithms can achieve competitive performance compared with several state-of-the-art multimodal algorithms, which is supported by nonparametric tests. In particular, the proposed algorithms are very promising for complex problems with many local optima.
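
    A small sketch of the alternating Gaussian/Cauchy offspring generation around a niche seed (the scale, counts, and seed values are illustrative, not the paper's parameter settings):

        import numpy as np

        def sample_offspring(seed, scale, n, rng, use_gaussian):
            """Generate offspring around a niche seed, alternating between a
            Gaussian (exploitation) and a heavier-tailed Cauchy (exploration)."""
            if use_gaussian:
                step = rng.normal(0.0, 1.0, size=(n, seed.size))
            else:
                step = rng.standard_cauchy(size=(n, seed.size))
            return seed + scale * step

        rng = np.random.default_rng(0)
        seed = np.array([1.0, -2.0])
        for gen in range(4):
            # even generations use the Gaussian, odd generations the Cauchy
            kids = sample_offspring(seed, scale=0.1, n=10, rng=rng,
                                    use_gaussian=(gen % 2 == 0))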

  18. A Note on Alternating Minimization Algorithm for the Matrix Completion Problem

    DOE PAGES

    Gamarnik, David; Misra, Sidhant

    2016-06-06

    Here, we consider the problem of reconstructing a low-rank matrix from a subset of its entries and analyze two variants of the so-called alternating minimization algorithm, which has been proposed in the past. We establish that when the underlying matrix has rank one, has positive bounded entries, and the graph underlying the revealed entries has diameter which is logarithmic in the size of the matrix, both algorithms succeed in reconstructing the matrix approximately in polynomial time starting from an arbitrary initialization. We further provide simulation results which suggest that the second variant, which is based on message passing type updates, performs significantly better.
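
    A minimal sketch of rank-one alternating minimization in the spirit discussed above, assuming noiseless positive entries and a synthetic observation mask (the function and variable names are illustrative, not the paper's two analyzed variants):

        import numpy as np

        def rank1_complete(M, mask, n_iter=200, seed=0):
            """Alternating minimization for rank-one completion: M ≈ u v^T,
            fitting only the observed entries indicated by `mask`."""
            rng = np.random.default_rng(seed)
            m, n = M.shape
            u = rng.uniform(0.5, 1.5, size=m)
            v = rng.uniform(0.5, 1.5, size=n)
            for _ in range(n_iter):
                # update each u_i by least squares over observed entries in row i
                for i in range(m):
                    obs = mask[i]
                    if obs.any():
                        u[i] = (M[i, obs] @ v[obs]) / (v[obs] @ v[obs])
                # update each v_j symmetrically over observed entries in column j
                for j in range(n):
                    obs = mask[:, j]
                    if obs.any():
                        v[j] = (M[obs, j] @ u[obs]) / (u[obs] @ u[obs])
            return u, v

        # Hypothetical positive rank-one matrix with roughly half the entries revealed
        rng = np.random.default_rng(1)
        u0, v0 = rng.uniform(1, 2, 20), rng.uniform(1, 2, 30)
        M = np.outer(u0, v0)
        mask = rng.random(M.shape) < 0.5
        u, v = rank1_complete(M, mask)
        print(np.max(np.abs(np.outer(u, v) - M)))   # small if reconstruction succeeded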

  19. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Ramakrishnan; Sukumar, Sreenivas R.; Ballard, Grey M.

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of multiple algorithms for solving the updates to the low rank factors W and H within the alternating iterations.
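
    For orientation, the serial alternating-NLS iteration that such distributed algorithms parallelize can be sketched in a few lines with SciPy's non-negative least squares solver; this single-machine toy is not the HPC-NMF implementation:

        import numpy as np
        from scipy.optimize import nnls

        def nmf_anls(A, k, n_iter=50, seed=0):
            """Plain (serial) alternating non-negative least squares for A ≈ W H."""
            rng = np.random.default_rng(seed)
            m, n = A.shape
            W = rng.random((m, k))
            H = rng.random((k, n))
            for _ in range(n_iter):
                # solve min ||A - W H|| over H >= 0, one column at a time
                H = np.column_stack([nnls(W, A[:, j])[0] for j in range(n)])
                # solve min ||A - W H|| over W >= 0, one row at a time (transpose trick)
                W = np.vstack([nnls(H.T, A[i, :])[0] for i in range(m)])
            return W, H

        A = np.abs(np.random.default_rng(1).normal(size=(40, 30)))
        W, H = nmf_anls(A, k=5)
        print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))   # relative fit error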

  20. Multi-Parent Clustering Algorithms from Stochastic Grammar Data Models

    NASA Technical Reports Server (NTRS)

    Mjoisness, Eric; Castano, Rebecca; Gray, Alexander

    1999-01-01

    We introduce a statistical data model and an associated optimization-based clustering algorithm which allows data vectors to belong to zero, one or several "parent" clusters. For each data vector the algorithm makes a discrete decision among these alternatives. Thus, a recursive version of this algorithm would place data clusters in a Directed Acyclic Graph rather than a tree. We test the algorithm with synthetic data generated according to the statistical data model. We also illustrate the algorithm using real data from large-scale gene expression assays.

  1. Integration of oncologic margins in three-dimensional virtual planning for head and neck surgery, including a validation of the software pathway.

    PubMed

    Kraeima, Joep; Schepers, Rutger H; van Ooijen, Peter M A; Steenbakkers, Roel J H M; Roodenburg, Jan L N; Witjes, Max J H

    2015-10-01

    Three-dimensional (3D) virtual planning of reconstructive surgery, after resection, is a frequently used method for improving accuracy and predictability. However, when applied to malignant cases, the planning of the oncologic resection margins is difficult owing to the limited visualisation of tumours in current 3D planning. Embedding tumour delineation on a magnetic resonance image, similar to the routinely performed radiotherapeutic contouring of tumours, is expected to provide better margin planning. A new software pathway was developed for embedding tumour delineation on magnetic resonance imaging (MRI) within the 3D virtual surgical planning. The software pathway was validated by the use of five bovine cadavers implanted with phantom tumour objects. MRI and computed tomography (CT) images were fused and the tumour was delineated using radiation oncology software. These data were converted to the 3D virtual planning software by means of a conversion algorithm. Tumour volumes and localization were determined in both software stages for comparison analysis. The approach was applied to three clinical cases. A conversion algorithm was developed to translate the tumour delineation data to the 3D virtual plan environment. The average difference in volume of the tumours was 1.7%. This study reports a validated software pathway, providing multi-modality image fusion for 3D virtual surgical planning. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  2. Color object detection using spatial-color joint probability functions.

    PubMed

    Luo, Jiebo; Crandall, David

    2006-06-01

    Object detection in unconstrained images is an important image understanding problem with many potential applications. There has been little success in creating a single algorithm that can detect arbitrary objects in unconstrained images; instead, algorithms typically must be customized for each specific object. Consequently, it typically requires a large number of exemplars (for rigid objects) or a large amount of human intuition (for nonrigid objects) to develop a robust algorithm. We present a robust algorithm designed to detect a class of compound color objects given a single model image. A compound color object is defined as having a set of multiple, particular colors arranged spatially in a particular way, including flags, logos, cartoon characters, people in uniforms, etc. Our approach is based on a particular type of spatial-color joint probability function called the color edge co-occurrence histogram. In addition, our algorithm employs perceptual color naming to handle color variation, and prescreening to limit the search scope (i.e., size and location) for the object. Experimental results demonstrated that the proposed algorithm is insensitive to object rotation, scaling, partial occlusion, and folding, outperforming a closely related algorithm based on color co-occurrence histograms by a decisive margin.

  3. Constrained Metric Learning by Permutation Inducing Isometries.

    PubMed

    Bosveld, Joel; Mahmood, Arif; Huynh, Du Q; Noakes, Lyle

    2016-01-01

    The choice of metric critically affects the performance of classification and clustering algorithms. Metric learning algorithms attempt to improve performance, by learning a more appropriate metric. Unfortunately, most of the current algorithms learn a distance function which is not invariant to rigid transformations of images. Therefore, the distances between two images and their rigidly transformed pair may differ, leading to inconsistent classification or clustering results. We propose to constrain the learned metric to be invariant to the geometry preserving transformations of images that induce permutations in the feature space. The constraint that these transformations are isometries of the metric ensures consistent results and improves accuracy. Our second contribution is a dimension reduction technique that is consistent with the isometry constraints. Our third contribution is the formulation of the isometry constrained logistic discriminant metric learning (IC-LDML) algorithm, by incorporating the isometry constraints within the objective function of the LDML algorithm. The proposed algorithm is compared with the existing techniques on the publicly available labeled faces in the wild, viewpoint-invariant pedestrian recognition, and Toy Cars data sets. The IC-LDML algorithm has outperformed existing techniques for the tasks of face recognition, person identification, and object classification by a significant margin.

  4. Treatment selection for patients with ductal carcinoma in situ (DCIS) of the breast using the University of Southern California/Van Nuys (USC/VNPI) prognostic index.

    PubMed

    Silverstein, Melvin J; Lagios, Michael D

    2015-01-01

    The University of Southern California/Van Nuys Prognostic Index (USC/VNPI) is an algorithm that quantifies five measurable prognostic factors known to be important in predicting local recurrence in conservatively treated patients with ductal carcinoma in situ (DCIS) (tumor size, margin width, nuclear grade, age, and comedonecrosis). With five times as many patients since originally developed, sufficient numbers now exist for analysis by individual scores rather than groups of scores. To achieve a local recurrence rate of less than 20% at 12 years, these data support excision alone for all patients scoring 4, 5, or 6 and patients who score 7 but have margin widths ≥3 mm. Excision plus RT achieves the less than 20% local recurrence threshold at 12 years for patients who score 7 and have margins <3 mm, patients who score 8 and have margins ≥3 mm, and for patients who score 9 and have margins ≥5 mm. Mastectomy is required for patients who score 8 and have margins <3 mm, who score 9 and have margins <5 mm and for all patients who score 10, 11, or 12 to keep the local recurrence rate less than 20% at 12 years. DCIS is a highly favorable disease. There is no difference in mortality rate regardless of which treatment is chosen. The USC/VNPI is a numeric tool that can be used to aid the treatment decision-making process. © 2015 Wiley Periodicals, Inc.
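
    Read directly off the thresholds quoted above, the decision rules can be summarized in a short function (illustrative only; not a clinical decision tool and not the published USC/VNPI software):

        def vnpi_treatment(score, margin_mm):
            """Treatment suggested by the thresholds quoted in the abstract above
            (scores range 4-12); for illustration only."""
            if score <= 6 or (score == 7 and margin_mm >= 3):
                return "excision alone"
            if (score == 7 and margin_mm < 3) or \
               (score == 8 and margin_mm >= 3) or \
               (score == 9 and margin_mm >= 5):
                return "excision plus radiation therapy"
            return "mastectomy"   # score 8 with <3 mm, 9 with <5 mm, or 10-12

        print(vnpi_treatment(score=7, margin_mm=2))   # -> excision plus radiation therapy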

  5. Low Average Sidelobe Slot Array Antennas for Radiometer Applications

    NASA Technical Reports Server (NTRS)

    Rengarajan, Sembiam; Zawardzki, Mark S.; Hodges, Richard E.

    2012-01-01

    In radiometer applications, it is required to design antennas that meet low average sidelobe levels and low average return loss over a specified frequency bandwidth. It is a challenge to meet such specifications over a frequency range when one uses resonant elements such as waveguide feed slots. In addition to their inherent narrow frequency band performance, the problem is exacerbated due to modeling errors and manufacturing tolerances. There was a need to develop a design methodology to solve the problem. An iterative design procedure was developed by starting with an array architecture, lattice spacing, aperture distribution, waveguide dimensions, etc. The array was designed using Elliott s technique with appropriate values of the total slot conductance in each radiating waveguide, and the total resistance in each feed waveguide. Subsequently, the array performance was analyzed by the full wave method of moments solution to the pertinent integral equations. Monte Carlo simulations were also carried out to account for amplitude and phase errors introduced for the aperture distribution due to modeling errors as well as manufacturing tolerances. If the design margins for the average sidelobe level and the average return loss were not adequate, array architecture, lattice spacing, aperture distribution, and waveguide dimensions were varied in subsequent iterations. Once the design margins were found to be adequate, the iteration was stopped and a good design was achieved. A symmetric array architecture was found to meet the design specification with adequate margin. The specifications were near 40 dB for angular regions beyond 30 degrees from broadside. Separable Taylor distribution with nbar=4 and 35 dB sidelobe specification was chosen for each principal plane. A non-separable distribution obtained by the genetic algorithm was found to have similar characteristics. The element spacing was obtained to provide the required beamwidth and close to a null in the E-plane end-fire direction. Because of the alternating slot offsets, grating lobes called butterfly lobes are produced in non-principal planes close to the H-plane. An attempt to reduce the influence of such grating lobes resulted in a symmetric design.

  6. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    NASA Astrophysics Data System (ADS)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is a complicated algorithm to implement and computationally intensive and there is currently no published, fast and easily-implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al, presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.

  7. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    NASA Astrophysics Data System (ADS)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, in particular, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that minimizes a cost function by means of a Gauss-Seidel-type scheme. Our algorithm also optimizes the original cost function, but unlike the original work, it is a parallel Jacobi-class algorithm with alternating minimizations. This strategy is known as the chessboard type: red pixels can be updated in parallel in the same iteration since they are independent, and black pixels can similarly be updated in parallel in the alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as CPU multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed comparative performance analysis of the developed parallel versions.
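
    A small sketch of the chessboard (red-black) update schedule on a toy Poisson-like relaxation problem; the cost function here is illustrative, not the accumulation-of-residual-maps objective:

        import numpy as np

        def redblack_smooth(phi, rhs, n_sweeps=10):
            """Chessboard (red-black) relaxation: all red pixels are mutually
            independent and can be updated in parallel, then all black pixels."""
            red = np.add.outer(np.arange(phi.shape[0]), np.arange(phi.shape[1])) % 2 == 0
            for _ in range(n_sweeps):
                for color in (red, ~red):
                    nb = np.zeros_like(phi)
                    nb[1:-1, 1:-1] = (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                      phi[1:-1, :-2] + phi[1:-1, 2:]) / 4.0
                    update = nb - rhs / 4.0           # Jacobi-style update for unit grid spacing
                    phi[1:-1, 1:-1][color[1:-1, 1:-1]] = update[1:-1, 1:-1][color[1:-1, 1:-1]]
            return phi

        phi = np.zeros((64, 64))
        rhs = np.zeros((64, 64))
        rhs[32, 32] = 1.0                             # toy source term
        phi = redblack_smooth(phi, rhs)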

  8. Economic analysis of alternative nutritional management of dual-purpose cow herds in central coastal Veracruz, Mexico.

    PubMed

    Absalón-Medina, Victor Antonio; Nicholson, Charles F; Blake, Robert W; Fox, Danny Gene; Juárez-Lagunes, Francisco I; Canudas-Lara, Eduardo G; Rueda-Maldonado, Bertha L

    2012-08-01

    Market information was combined with predicted input-output relationships in an economic analysis of alternative nutritional management for dual-purpose member herds of the Genesis farmer organization of central coastal Veracruz, Mexico. Cow productivity outcomes for typical management and alternative feeding scenarios were obtained from structured sets of simulations in a companion study of productivity limitations and potentials using the Cornell Net Carbohydrate and Protein System model (Version 6.0). Partial budgeting methods and sensitivity analysis were used to identify economically viable alternatives based on expected change in milk income over feed cost (change in revenues from milk sales less change in feed costs). Herd owners in coastal Veracruz have large economic incentives, from $584 to $1,131 in predicted net margin, to increase milk sales by up to 74% across a three-lactation cow lifetime by improving diets based on good quality grass and legume forages. This increment equals or exceeds the value of the total yield of at least one additional lactation per cow lifetime. Furthermore, marginal rates of return (change in milk income over feed costs divided by change in variable costs when alternative practices are used) of 3.3 ± 0.8 indicate clear economic incentives to remove fundamental productivity vulnerabilities due to chronic energy deficits and impeded growth of immature cows under typical management. Sensitivity analyses indicate that the economic outcomes are robust for a variety of market conditions.
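    The marginal rate of return defined in parentheses above is simple arithmetic; the sketch below works through it with hypothetical dollar figures (not the study's data) purely to make the ratio concrete.

    ```python
    # Hypothetical numbers, only to illustrate the ratio defined in the abstract:
    # marginal rate of return = change in (milk income - feed cost)
    #                           / change in variable costs.
    milk_income_baseline, milk_income_improved = 2400.0, 3900.0   # $/cow lifetime
    feed_cost_baseline, feed_cost_improved = 900.0, 1500.0
    other_variable_cost_change = 0.0                              # assumed zero here

    # Change in milk income over feed cost (IOFC).
    delta_iofc = (milk_income_improved - feed_cost_improved) - \
                 (milk_income_baseline - feed_cost_baseline)
    delta_variable = (feed_cost_improved - feed_cost_baseline) + other_variable_cost_change

    print(delta_iofc, delta_iofc / delta_variable)   # 900.0 and a marginal rate of return of 1.5
    ```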

  9. MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kannan, Ramakrishnan; Ballard, Grey; Park, Haesun

    Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for the given input matrix A, such that A≈WH. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets. The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solve alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices whose sizes span from a few hundred million to billions. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.
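    As a point of reference for the alternating-update structure, here is a minimal serial sketch using the multiplicative-update rule, one of the algorithms the framework supports. The MPI distribution, communication-avoiding layout, and the other update rules (HALS, Block Principal Pivoting) are not shown, and the matrix sizes are arbitrary.

    ```python
    import numpy as np

    def nmf_multiplicative(A, k, iters=200, eps=1e-9):
        """Serial multiplicative-update NMF: find non-negative W (m x k) and
        H (k x n) with A ~= W H by alternately updating H with W fixed and
        W with H fixed."""
        m, n = A.shape
        rng = np.random.default_rng(0)
        W = rng.random((m, k))
        H = rng.random((k, n))
        for _ in range(iters):
            H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H, W held fixed
            W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W, H held fixed
        return W, H

    A = np.abs(np.random.default_rng(1).normal(size=(50, 40)))
    W, H = nmf_multiplicative(A, k=5)
    print(np.linalg.norm(A - W @ H) / np.linalg.norm(A))   # relative reconstruction error
    ```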

  10. MPI-FAUN: An MPI-Based Framework for Alternating-Updating Nonnegative Matrix Factorization

    DOE PAGES

    Kannan, Ramakrishnan; Ballard, Grey; Park, Haesun

    2017-10-30

    Non-negative matrix factorization (NMF) is the problem of determining two non-negative low rank factors W and H, for the given input matrix A, such that A≈WH. NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient parallel algorithms to solve the problem for big data sets. The main contribution of this work is a new, high-performance parallel computational framework for a broad class of NMF algorithms that iteratively solve alternating non-negative least squares (NLS) subproblems for W and H. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). The framework is flexible and able to leverage a variety of NMF and NLS algorithms, including Multiplicative Update, Hierarchical Alternating Least Squares, and Block Principal Pivoting. Our implementation allows us to benchmark and compare different algorithms on massive dense and sparse data matrices whose sizes span from a few hundred million to billions. We demonstrate the scalability of our algorithm and compare it with baseline implementations, showing significant performance improvements. The code and the datasets used for conducting the experiments are available online.

  11. 30 CFR 204.206 - What will MMS do when it receives my request for other relief?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... THE INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing... notify you of the effective date of your accounting or auditing relief and other specifics of the relief...

  12. Gigapixel surface imaging of radical prostatectomy specimens for comprehensive detection of cancer-positive surgical margins using structured illumination microscopy

    PubMed Central

    Wang, Mei; Tulman, David B.; Sholl, Andrew B.; Kimbrell, Hillary Z.; Mandava, Sree H.; Elfer, Katherine N.; Luethy, Samuel; Maddox, Michael M.; Lai, Weil; Lee, Benjamin R.; Brown, J. Quincy

    2016-01-01

    Achieving cancer-free surgical margins in oncologic surgery is critical to reduce the need for additional adjuvant treatments and minimize tumor recurrence; however, there is a delicate balance between completeness of tumor removal and preservation of adjacent tissues critical for normal post-operative function. We sought to establish the feasibility of video-rate structured illumination microscopy (VR-SIM) of the intact removed tumor surface as a practical and non-destructive alternative to intra-operative frozen section pathology, using prostate cancer as an initial target. We present the first images of the intact human prostate surface with pathologically relevant contrast and subcellular detail, acquired from 24 radical prostatectomy specimens immediately after excision. We demonstrate that it is feasible to routinely image the full prostate circumference, generating gigapixel panorama images of the surface that are readily interpreted by pathologists. VR-SIM confirmed detection of positive surgical margins in 3 out of 4 prostates with pathology-confirmed adenocarcinoma at the circumferential surgical margin, and furthermore detected extensive residual cancer at the circumferential margin in a case post-operatively classified by histopathology as having negative surgical margins. Our results suggest that the increased surface coverage of VR-SIM could also provide added value for detection and characterization of positive surgical margins over traditional histopathology. PMID:27257084

  13. Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis.

    PubMed

    Kim, Hyunsoo; Park, Haesun

    2007-06-15

    Many practical pattern recognition problems require non-negativity constraints. For example, pixels in digital images and chemical concentrations in bioinformatics are non-negative. Sparse non-negative matrix factorizations (NMFs) are useful when the degree of sparseness in the non-negative basis matrix or the non-negative coefficient matrix in an NMF needs to be controlled in approximating high-dimensional data in a lower dimensional space. In this article, we introduce a novel formulation of sparse NMF and show how the new formulation leads to a convergent sparse NMF algorithm via alternating non-negativity-constrained least squares. We apply our sparse NMF algorithm to cancer-class discovery and gene expression data analysis and offer biological analysis of the results obtained. Our experimental results illustrate that the proposed sparse NMF algorithm often achieves better clustering performance with shorter computing time compared to other existing NMF algorithms. The software is available as supplementary material.

  14. Effects of weather on the retrieval of sea ice concentration and ice type from passive microwave data

    NASA Technical Reports Server (NTRS)

    Maslanik, J. A.

    1992-01-01

    Effects of wind, water vapor, and cloud liquid water on ice concentration and ice type calculated from passive microwave data are assessed through radiative transfer calculations and observations. These weather effects can cause overestimates in ice concentration and more substantial underestimates in multi-year ice percentage by decreasing polarization and by decreasing the gradient between frequencies. The effect of surface temperature and air temperature on the magnitudes of weather-related errors is small for ice concentration and substantial for multiyear ice percentage. The existing weather filter in the NASA Team Algorithm addresses only weather effects over open ocean; the additional use of local open-ocean tie points and an alternative weather correction for the marginal ice zone can further reduce errors due to weather. Ice concentrations calculated using 37 versus 18 GHz data show little difference in total ice covered area, but greater differences in intermediate concentration classes. Given the magnitude of weather-related errors in ice classification from passive microwave data, corrections for weather effects may be necessary to detect small trends in ice covered area and ice type for climate studies.

  15. Active flutter suppression using optimal output feedback digital controllers

    NASA Technical Reports Server (NTRS)

    1982-01-01

    A method for synthesizing digital active flutter suppression controllers using the concept of optimal output feedback is presented. A convergent algorithm is employed to determine constrained control law parameters that minimize an infinite time discrete quadratic performance index. Low order compensator dynamics are included in the control law and the compensator parameters are computed along with the output feedback gain as part of the optimization process. An input noise adjustment procedure is used to improve the stability margins of the digital active flutter controller. Sample rate variation, prefilter pole variation, control structure variation and gain scheduling are discussed. A digital control law which accommodates computation delay can stabilize the wing with reasonable rms performance and adequate stability margins.

  16. Ice Water Classification Using Statistical Distribution Based Conditional Random Fields in RADARSAT-2 Dual Polarization Imagery

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Li, F.; Zhang, S.; Hao, W.; Zhu, T.; Yuan, L.; Xiao, F.

    2017-09-01

    In this paper, the Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited for improving marginal ice-water classification. Pixel-level ice concentration is presented for the comparison of the CRF-based methods. Furthermore, in order to explore the most effective statistical distribution model to be integrated into STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on two scenes around Prydz Bay and Adélie Depression, which contain a variety of ice types during the melt season. Experimental results indicate that the proposed method can resolve the sea ice edge well in the Marginal Ice Zone (MIZ) and shows a robust distinction between ice and water.

  17. Small-target leak detection for a closed vessel via infrared image sequences

    NASA Astrophysics Data System (ADS)

    Zhao, Ling; Yang, Hongjiu

    2017-03-01

    This paper focuses on a leak diagnosis and localization method based on infrared image sequences. Problems of a high probability of false warnings and the negative effect of marginal information are addressed by the leak detection approach. An experimental model is established for leak diagnosis and localization on infrared image sequences. A differential background prediction, based on a kernel regression method, is presented to eliminate the negative effect of marginal information on the test vessel. A pipeline filter based on layered voting is designed to reduce the probability of false leak-point warnings. A combined leak diagnosis and localization algorithm is proposed based on infrared image sequences. The effectiveness and potential of the developed techniques are shown through experimental results.

  18. Bi-dimensional null model analysis of presence-absence binary matrices.

    PubMed

    Strona, Giovanni; Ulrich, Werner; Gotelli, Nicholas J

    2018-01-01

    Comparing the structure of presence/absence (i.e., binary) matrices with those of randomized counterparts is a common practice in ecology. However, differences in the randomization procedures (null models) can affect the results of the comparisons, leading matrix structural patterns to appear either "random" or not. Subjectivity in the choice of one particular null model over another makes it often advisable to compare the results obtained using several different approaches. Yet, available algorithms to randomize binary matrices differ substantially with respect to the constraints they impose on the discrepancy between observed and randomized row and column marginal totals, which complicates the interpretation of contrasting patterns. This calls for new strategies both to explore intermediate scenarios of restrictiveness in-between extreme constraint assumptions, and to properly synthesize the resulting information. Here we introduce a new modeling framework based on a flexible matrix randomization algorithm (named the "Tuning Peg" algorithm) that addresses both issues. The algorithm consists of a modified swap procedure in which the discrepancy between the row and column marginal totals of the target matrix and those of its randomized counterpart can be "tuned" in a continuous way by two parameters (controlling, respectively, row and column discrepancy). We show how combining the Tuning Peg with a wise random walk procedure makes it possible to explore the complete null space embraced by existing algorithms. This exploration allows researchers to visualize matrix structural patterns in an innovative bi-dimensional landscape of significance/effect size. We demonstrate the rationale and potential of our approach with a set of simulated and real matrices, showing how the simultaneous investigation of a comprehensive and continuous portion of the null space can be extremely informative, and possibly key to resolving longstanding debates in the analysis of ecological matrices. © 2017 The Authors. Ecology, published by Wiley Periodicals, Inc., on behalf of the Ecological Society of America.
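    To make the swap-based machinery concrete, the sketch below implements the classic fixed-fixed checkerboard swap that the Tuning Peg procedure modifies: every accepted swap preserves all row and column totals exactly. The two tuning parameters that relax this constraint in the actual Tuning Peg algorithm are not reproduced here, and the matrix dimensions and fill are arbitrary.

    ```python
    import numpy as np

    def checkerboard_swaps(M, n_attempts=20000, seed=0):
        """Fixed-fixed null model: pick two rows and two columns at random and,
        if the 2x2 submatrix is a checkerboard ([[1,0],[0,1]] or [[0,1],[1,0]]),
        flip it. Every flip leaves all row and column marginal totals unchanged;
        the Tuning Peg algorithm replaces this strict acceptance step with one
        whose marginal discrepancy is tunable (not shown here)."""
        M = M.copy()
        rng = np.random.default_rng(seed)
        rows, cols = M.shape
        for _ in range(n_attempts):
            r = rng.choice(rows, 2, replace=False)
            c = rng.choice(cols, 2, replace=False)
            sub = M[np.ix_(r, c)]
            if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
                M[np.ix_(r, c)] = 1 - sub          # flip the checkerboard
        return M

    M = (np.random.default_rng(1).random((10, 12)) < 0.4).astype(int)
    R = checkerboard_swaps(M)
    print((M.sum(0) == R.sum(0)).all(), (M.sum(1) == R.sum(1)).all())   # True True
    ```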

  19. Wavelet Monte Carlo dynamics: A new algorithm for simulating the hydrodynamics of interacting Brownian particles

    NASA Astrophysics Data System (ADS)

    Dyer, Oliver T.; Ball, Robin C.

    2017-03-01

    We develop a new algorithm for the Brownian dynamics of soft matter systems that evolves time by spatially correlated Monte Carlo moves. The algorithm uses vector wavelets as its basic moves and produces hydrodynamics in the low Reynolds number regime propagated according to the Oseen tensor. When small moves are removed, the correlations closely approximate the Rotne-Prager tensor, itself widely used to correct for deficiencies in Oseen. We also include plane wave moves to provide the longest range correlations, which we detail for both infinite and periodic systems. The computational cost of the algorithm scales competitively with the number of particles simulated, N, scaling as N ln N in homogeneous systems and as N in dilute systems. In comparisons to established lattice Boltzmann and Brownian dynamics algorithms, the wavelet method was found to be only a factor of order 1 times more expensive than the cheaper lattice Boltzmann algorithm in marginally semi-dilute simulations, while it is significantly faster than both algorithms at large N in dilute simulations. We also validate the algorithm by checking that it reproduces the correct dynamics and equilibrium properties of simple single polymer systems, as well as verifying the effect of periodicity on the mobility tensor.

  20. Enhancing the diversity of breeding invertebrates within field margins of intensively managed grassland: Effects of alternative management practices.

    PubMed

    Fritch, Rochelle A; Sheridan, Helen; Finn, John A; McCormack, Stephen; Ó hUallacháin, Daire

    2017-11-01

    Severe declines in biodiversity have been well documented for many taxonomic groups due to intensification of agricultural practices. Establishment and appropriate management of arable field margins can improve the diversity and abundance of invertebrate groups; however, there is much less research on field margins within grassland systems. Three grassland field margin treatments (fencing off the existing vegetation "fenced"; fencing with rotavation and natural regeneration "rotavated" and; fencing with rotavation and seeding "seeded") were compared to a grazed control in the adjacent intensively managed pasture. Invertebrates were sampled using emergence traps to investigate species breeding and overwintering within the margins. Using a manipulation experiment, we tested whether the removal of grazing pressure and nutrient inputs would increase the abundance and richness of breeding invertebrates within grassland field margins. We also tested whether field margin establishment treatments, with their different vegetation communities, would change the abundance and richness of breeding invertebrates in the field margins. Exclusion of grazing and nutrient inputs led to increased abundance and richness in nearly all invertebrate groups that we sampled. However, there were more complex effects of field margin establishment treatment on the abundance and richness of invertebrate taxa. Each of the three establishment treatments supported a distinct invertebrate community. The removal of grazing from grassland field margins provided a greater range of overwintering/breeding habitat for invertebrates. We demonstrate the capacity of field margin establishment to increase the abundance and richness in nearly all invertebrate groups in study plots that were located on previously more depauperate areas of intensively managed grassland. These results from grassland field margins provide evidence to support practical actions that can inform Greening (Pillar 1) and agri-environment measures (Pillar 2) of the Common Agricultural Policy (CAP). Before implementing specific management regimes, the conservation aims of agri-environment measures should be clarified by defining the target species or taxonomic groups.

  1. Effect of repeated ceramic firings on the marginal and internal adaptation of metal-ceramic restorations fabricated with different CAD-CAM technologies.

    PubMed

    Kocaağaoğlu, Hasan; Albayrak, Haydar; Kilinc, Halil Ibrahim; Gümüs, Hasan Önder

    2017-11-01

    The use of computer-aided design and computer-aided manufacturing (CAD-CAM) for metal-ceramic restorations has increased with advances in the technology. However, little is known about the marginal and internal adaptation of restorations fabricated using laser sintering (LS) and soft milling (SM). Moreover, the effects of repeated ceramic firings on the marginal and internal adaptation of metal-ceramic restorations fabricated with LS and SM are also unknown. The purpose of this in vitro study was to investigate the effects of repeated ceramic firings on the marginal and internal adaptation of metal-ceramic copings fabricated using the lost wax (LW), LS, and SM techniques. Ten LW, 10 LS, and 10 SM cobalt-chromium (Co-Cr) copings were fabricated for an artificial tooth (Frasaco GmbH). After the application of veneering ceramic (VITA VMK Master; VITA Zahnfabrik), the marginal and internal discrepancies of these copings were measured with a silicone indicator paste and a stereomicroscope at ×100 magnification after the first, second, and third clinical simulated ceramic firing cycles. Repeated measures 2-way ANOVA and the Fisher LSD post hoc test were used to evaluate differences in marginal and internal discrepancies (α=.05). Neither fabrication protocol nor repeated ceramic firings had any statistically significant effect on internal discrepancy values (P>.05). Marginal discrepancy values were also statistically unaffected by repeated ceramic firings (P>.05); however, the fabrication protocol had a significant effect on marginal discrepancy values (P<.001), with LW resulting in higher marginal discrepancy values than LS or SM (P<.05). Marginal discrepancy values did not vary between LS and SM (P>.05). All groups demonstrated clinically acceptable marginal adaptation after repeated ceramic firing cycles; however, the LS and SM groups demonstrated better marginal adaptation than that of the LW group and may be appropriate clinical alternatives to LW. Copyright © 2017 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.

  2. Advanced digital SAR processing study

    NASA Technical Reports Server (NTRS)

    Martinson, L. W.; Gaffney, B. P.; Liu, B.; Perry, R. P.; Ruvin, A.

    1982-01-01

    A highly programmable, land based, real time synthetic aperture radar (SAR) processor requiring a processed pixel rate of 2.75 MHz or more in a four look system was designed. Variations in range and azimuth compression, number of looks, range swath, range migration and SAR mode were specified. Alternative range and azimuth processing algorithms were examined in conjunction with projected integrated circuit, digital architecture, and software technologies. The advanced digital SAR processor (ADSP) employs an FFT convolver algorithm for both range and azimuth processing in a parallel architecture configuration. Algorithm performance comparisons, system design, implementation tradeoffs and the results of a supporting survey of integrated circuit and digital architecture technologies are reported. Cost tradeoffs and projections with alternate implementation plans are presented.
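    The FFT-convolver idea at the heart of the range and azimuth processing can be sketched as frequency-domain matched filtering of an echo against a reference chirp. The pulse parameters and target delays below are made up for illustration, and the sketch does not model the ADSP's corner turning, multi-look summation, or range-migration correction.

    ```python
    import numpy as np

    def fft_matched_filter(echo, reference):
        """Pulse (range) compression by FFT convolution: correlate the received
        echo with the reference chirp via frequency-domain multiplication."""
        n = len(echo) + len(reference) - 1
        nfft = 1 << (n - 1).bit_length()            # next power of two, no wrap-around
        E = np.fft.fft(echo, nfft)
        R = np.fft.fft(reference, nfft)
        return np.fft.ifft(E * np.conj(R))[:n]

    # Hypothetical linear FM chirp and an echo containing two delayed copies.
    fs, T, B = 1e6, 1e-3, 2e5                       # sample rate, pulse length, bandwidth
    t = np.arange(int(fs * T)) / fs
    chirp = np.exp(1j * np.pi * (B / T) * t ** 2)
    echo = np.zeros(4000, dtype=complex)
    echo[500:500 + len(chirp)] += chirp             # target at sample 500
    echo[1800:1800 + len(chirp)] += 0.5 * chirp     # weaker target at sample 1800
    compressed = np.abs(fft_matched_filter(echo, chirp))
    print(compressed.argmax())                      # peak at the stronger target's delay (500)
    ```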

  3. Evolutionary Multiobjective Query Workload Optimization of Cloud Data Warehouses

    PubMed Central

    Dokeroglu, Tansel; Sert, Seyyit Alper; Cinar, Muhammet Serkan

    2014-01-01

    With the advent of Cloud databases, query optimizers need to find Pareto-optimal solutions in terms of response time and monetary cost. Our novel approach minimizes both objectives by deploying alternative virtual resources and query plans making use of the virtual resource elasticity of the Cloud. We propose an exact multiobjective branch-and-bound and a robust multiobjective genetic algorithm for the optimization of distributed data warehouse query workloads on the Cloud. In order to investigate the effectiveness of our approach, we incorporate the devised algorithms into a prototype system. Finally, through several experiments that we have conducted with different workloads and virtual resource configurations, we report notable findings on alternative deployments as well as the advantages and disadvantages of the multiobjective algorithms we propose. PMID:24892048

  4. 7 CFR 650.24 - Scenic beauty (visual resource).

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... consideration of alternative management and development systems that preserve scenic beauty or improve the... resource values particularly in waste management systems; field borders, field windbreaks, wetland management, access roads, critical area treatment; design and management of ponds, stream margins, odd areas...

  5. Identity and the Role of the State.

    ERIC Educational Resources Information Center

    Harvey, Luli

    1997-01-01

    Examples of marginalized nonformal learning in Britain that is driven by a search for alternative value systems include initiatives among Kurdish refugees, Blacks seeking identity through black studies, women sharing their stories, the resurgence of Irish culture, and the green movement. (SK)

  6. Performance analysis of a diesel engine driven brushless alternator with combined AC and thyristor fed DC loads through PSPICE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayanan, S.S.Y.; Ananthakrishnan, P.; Hangari, V.U.

    1995-12-31

    A brushless alternator with damper windings in the main alternator and with combined ac and thyristor fed dc loads has been handled ab initio as a total modeling and simulation problem, for which a complete steady state performance prediction algorithm has been developed through proper application of Park's equivalent circuit approach individually to the main and exciter alternator units of the brushless alternator. Details of the problems faced during implementation of this algorithm through PSPICE for the case of a specific 125 kVA brushless alternator, as well as the methods adopted for successfully overcoming them, are then presented. Finally, a comparison of the predicted performance with that obtained experimentally for this 125 kVA unit is also provided for the cases of both thyristor fed dc load alone as well as combined ac and thyristor fed dc loads. To enable proper calculation of derating factors to be used in the design of such brushless alternators, the simulation results also include harmonic analysis of the alternator output voltage and current waveforms at the point of common connection of the ac and thyristor fed dc load, damper winding currents, main alternator field winding current, exciter alternator armature voltage, and the alternator developed torque and torque angle pulsations.

  7. Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation

    NASA Astrophysics Data System (ADS)

    Kim, Sunwoo

    This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.

  8. An application of artificial neural networks to experimental data approximation

    NASA Technical Reports Server (NTRS)

    Meade, Andrew J., Jr.

    1993-01-01

    As an initial step in the evaluation of networks, a feedforward architecture is trained to approximate experimental data by the backpropagation algorithm. Several drawbacks were detected and an alternative learning algorithm was then developed to partially address the drawbacks. This noniterative algorithm has a number of advantages over the backpropagation method and is easily implemented on existing hardware.

  9. A distributed algorithm to maintain and repair the trail networks of arboreal ants.

    PubMed

    Chandrasekhar, Arjun; Gordon, Deborah M; Navlakha, Saket

    2018-06-18

    We study how the arboreal turtle ant (Cephalotes goniodontus) solves a fundamental computing problem: maintaining a trail network and finding alternative paths to route around broken links in the network. Turtle ants form a routing backbone of foraging trails linking several nests and temporary food sources. This species travels only in the trees, so their foraging trails are constrained to lie on a natural graph formed by overlapping branches and vines in the tangled canopy. Links between branches, however, can be ephemeral, easily destroyed by wind, rain, or animal movements. Here we report a biologically feasible distributed algorithm, parameterized using field data, that can plausibly describe how turtle ants maintain the routing backbone and find alternative paths to circumvent broken links in the backbone. We validate the ability of this probabilistic algorithm to circumvent simulated breaks in synthetic and real-world networks, and we derive an analytic explanation for why certain features are crucial to improve the algorithm's success. Our proposed algorithm uses fewer computational resources than common distributed graph search algorithms, and thus may be useful in other domains, such as for swarm computing or for coordinating molecular robots.

  10. TU-AB-BRB-01: Coverage Evaluation and Probabilistic Treatment Planning as a Margin Alternative

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siebers, J.

    The accepted clinical method to accommodate targeting uncertainties inherent in fractionated external beam radiation therapy is to utilize GTV-to-CTV and CTV-to-PTV margins during the planning process to design a PTV-conformal static dose distribution on the planning image set. Ideally, margins are selected to ensure a high (e.g. >95%) target coverage probability (CP) in spite of inherent inter- and intra-fractional positional variations, tissue motions, and initial contouring uncertainties. Robust optimization techniques, also known as probabilistic treatment planning techniques, explicitly incorporate the dosimetric consequences of targeting uncertainties by including CP evaluation into the planning optimization process along with coverage-based planning objectives. The treatment planner no longer needs to use PTV and/or PRV margins; instead robust optimization utilizes probability distributions of the underlying uncertainties in conjunction with CP-evaluation for the underlying CTVs and OARs to design an optimal treated volume. This symposium will describe CP-evaluation methods as well as various robust planning techniques including use of probability-weighted dose distributions, probability-weighted objective functions, and coverage optimized planning. Methods to compute and display the effect of uncertainties on dose distributions will be presented. The use of robust planning to accommodate inter-fractional setup uncertainties, organ deformation, and contouring uncertainties will be examined as will its use to accommodate intra-fractional organ motion. Clinical examples will be used to inter-compare robust and margin-based planning, highlighting advantages of robust-plans in terms of target and normal tissue coverage. Robust-planning limitations as uncertainties approach zero and as the number of treatment fractions becomes small will be presented, as well as the factors limiting clinical implementation of robust planning. Learning Objectives: To understand robust-planning as a clinical alternative to using margin-based planning. To understand conceptual differences between uncertainty and predictable motion. To understand fundamental limitations of the PTV concept that probabilistic planning can overcome. To understand the major contributing factors to target and normal tissue coverage probability. To understand the similarities and differences of various robust planning techniques. To understand the benefits and limitations of robust planning techniques.

  11. Anthropogenic impacts on continental margins: New frontiers and engagement arena for global sustainability research and action

    NASA Astrophysics Data System (ADS)

    Liu, K. K.; Glavovic, B.; Limburg, K.; Emeis, K. C.; Thomas, H.; Kremer, H.; Avril, B.; Zhang, J.; Mulholland, M. R.; Glaser, M.; Swaney, D. P.

    2014-12-01

    There is an urgent need to design and implement transformative governance strategies that safeguard Earth's life-support systems essential for long-term human well-being. From a series of meetings of the Continental Margins Working Group co-sponsored by IMBER and LOICZ of IGBP, we conclude that the greatest urgency exists at the ocean-land interface - the continental margins or the Margin - which extends from coastlands over continental shelves and slopes bordering the deep ocean. The Margin is enduring quadruple squeeze from (i) Population growth and rising demands for resources; (ii) Ecosystem degradation and loss; (iii) Rising CO2, climate change and alteration of marine biogeochemistry and ecosystems; and (iv) Rapid and irreversible changes in social-ecological systems. Some areas of the Margin that are subject to the greatest pressures (e.g. the Arctic) are also those for which knowledge of fundamental processes remains most limited. Aside from improving our basic understanding of the nature and variability of the Margin, priority issues include: (i) investment reform to prevent lethal but profitable activities; (ii) risk reduction; and (iii) jurisdiction, equity and fiscal responsibility. However, governance deficits or mismatches are particularly pronounced at the ocean-edge of the Margin and the prevailing Law of the Sea is incapable of resolving these challenges. The "gold rush" of accelerating demands for space and resources, and variability in how this domain is regulated, move the Margin to the forefront of global sustainability research and action. We outline a research strategy in 3 engagement arenas: (a) knowledge and understanding of dynamic Margin processes; (b) development, innovation and risk at the Margin; and (c) governance for sustainability on the Margin. The goals are (1) to better understand Margin social-ecological systems, including their physical and biogeochemical components; (2) to develop practical guidance for sustainable development and use of resources; (3) to design governance regimes to stem unsustainable practices; (4) to investigate how to enable equitable sharing of costs and benefits from sustainable use of resources; and (5) to evaluate alternative research approaches and partnerships that address the challenges faced on the Margin.

  12. Accuracy and robustness evaluation in stereo matching

    NASA Astrophysics Data System (ADS)

    Nguyen, Duc M.; Hanca, Jan; Lu, Shao-Ping; Schelkens, Peter; Munteanu, Adrian

    2016-09-01

    Stereo matching has received a lot of attention from the computer vision community, thanks to its wide range of applications. Despite the large variety of algorithms that have been proposed so far, it is not trivial to select suitable algorithms for the construction of practical systems. One of the main problems is that many algorithms lack sufficient robustness when employed in various operational conditions. This problem is due to the fact that most of the proposed methods in the literature are usually tested and tuned to perform well on one specific dataset. To alleviate this problem, an extensive evaluation in terms of accuracy and robustness of state-of-the-art stereo matching algorithms is presented. Three datasets (Middlebury, KITTI, and MPEG FTV) representing different operational conditions are employed. Based on the analysis, improvements over existing algorithms have been proposed. The experimental results show that our improved versions of cross-based and cost volume filtering algorithms outperform the original versions by large margins on the Middlebury and KITTI datasets. In addition, the latter of the two proposed algorithms ranks among the best local stereo matching approaches on the KITTI benchmark. Under evaluations using specific settings for depth-image-based-rendering applications, our improved belief propagation algorithm is less complex than MPEG's FTV depth estimation reference software (DERS), while yielding similar depth estimation performance. Finally, several conclusions on stereo matching algorithms are also presented.

  13. Trigram-based algorithms for OCR result correction

    NASA Astrophysics Data System (ADS)

    Bulatov, Konstantin; Manzhikov, Temudzhin; Slavin, Oleg; Faradjev, Igor; Janiszewski, Igor

    2017-03-01

    In this paper we consider the task of improving optical character recognition (OCR) results of document fields on low-quality and average-quality images using N-gram models. Cyrillic fields of the Russian Federation internal passport are analyzed as an example. Two approaches are presented: the first is based on the hypothesis that a symbol depends on its two adjacent symbols, and the second is based on the calculation of marginal distributions and Bayesian network computation. A comparison of the algorithms and experimental results within a real document OCR system are presented; it is shown that the document field OCR accuracy can be improved by more than 6% for low-quality images.
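    The first approach (a symbol conditioned on its two neighbours) can be sketched as a simple re-ranking of per-position OCR alternatives. The toy trigram table, confidences, and characters below are invented for illustration; the paper's Bayesian-network variant and its actual language model are not shown.

    ```python
    from math import log

    def rescore_field(alternatives, trigram_logp, ocr_weight=1.0):
        """Greedy correction: for each position, re-rank the OCR alternatives by
        OCR confidence plus a trigram score conditioned on the two adjacent
        characters of the current best string. alternatives[i] is a list of
        (char, confidence) pairs; trigram_logp(left, c, right) is assumed given."""
        best = [max(alts, key=lambda a: a[1])[0] for alts in alternatives]
        for i, alts in enumerate(alternatives):
            left = best[i - 1] if i > 0 else "^"
            right = best[i + 1] if i + 1 < len(best) else "$"
            best[i] = max(alts, key=lambda a: ocr_weight * log(a[1] + 1e-9)
                                              + trigram_logp(left, a[0], right))[0]
        return "".join(best)

    # Toy trigram model and an OCR output where the middle position is ambiguous
    # between the letter "O" and the digit "0".
    def trigram_logp(left, c, right):
        table = {("C", "O", "Д"): -1.0, ("C", "0", "Д"): -6.0}
        return table.get((left, c, right), -4.0)

    print(rescore_field([[("C", 0.9)], [("0", 0.55), ("O", 0.45)], [("Д", 0.9)]],
                        trigram_logp))   # -> "COД"
    ```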

  14. Graphical Models for Ordinal Data

    PubMed Central

    Guo, Jian; Levina, Elizaveta; Michailidis, George; Zhu, Ji

    2014-01-01

    A graphical model for ordinal variables is considered, where it is assumed that the data are generated by discretizing the marginal distributions of a latent multivariate Gaussian distribution. The relationships between these ordinal variables are then described by the underlying Gaussian graphical model and can be inferred by estimating the corresponding concentration matrix. Direct estimation of the model is computationally expensive, but an approximate EM-like algorithm is developed to provide an accurate estimate of the parameters at a fraction of the computational cost. Numerical evidence based on simulation studies shows the strong performance of the algorithm, which is also illustrated on data sets on movie ratings and an educational survey. PMID:26120267
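    The generative assumption described above (ordinal data as a discretized latent Gaussian) is easy to sketch. The covariance matrix and cutpoints below are arbitrary, and the approximate EM-like estimation of the concentration matrix is not reproduced here.

    ```python
    import numpy as np

    def sample_ordinal(n, sigma, cutpoints, seed=0):
        """Draw latent multivariate Gaussian vectors with covariance `sigma`,
        then discretize each margin at the given cutpoints to obtain ordinal
        levels 0..len(cutpoints)."""
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal(np.zeros(sigma.shape[0]), sigma, size=n)
        return np.column_stack([np.digitize(z[:, j], cutpoints)
                                for j in range(z.shape[1])])

    sigma = np.array([[1.0, 0.6, 0.3],
                      [0.6, 1.0, 0.5],
                      [0.3, 0.5, 1.0]])
    X = sample_ordinal(500, sigma, cutpoints=[-0.8, 0.0, 0.8])   # 4 ordinal levels
    print(X[:5])
    ```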

  15. Description and performance analysis of a generalized optimal algorithm for aerobraking guidance

    NASA Technical Reports Server (NTRS)

    Evans, Steven W.; Dukeman, Greg A.

    1993-01-01

    A practical real-time guidance algorithm has been developed for aerobraking vehicles which nearly minimizes the maximum heating rate, the maximum structural loads, and the post-aeropass delta V requirement for orbit insertion. The algorithm is general and reusable in the sense that a minimum of assumptions is made, thus greatly reducing the number of parameters that must be determined prior to a given mission. A particularly interesting feature is that in-plane guidance performance is tuned by adjusting one mission-dependent parameter, the bank margin; similarly, the out-of-plane guidance performance is tuned by adjusting a plane controller time constant. Other features of the algorithm are simplicity, efficiency and ease of use. The algorithm assumes a trimmed vehicle with bank angle modulation as the method of trajectory control. Performance of this guidance algorithm is examined by its use in an aerobraking testbed program. The performance inquiry extends to a wide range of entry speeds covering a number of potential mission applications. Favorable results have been obtained with a minimum of development effort, and directions for improvement of performance are indicated.

  16. Effect of protective coating on microhardness of a new glass ionomer cement: Nanofilled coating versus unfilled resin.

    PubMed

    Faraji, Foad; Heshmat, Haleh; Banava, Sepideh

    2017-01-01

    EQUIA™ is a new glass ionomer (GI) system with high compressive strength, surface microhardness (MH), and fluoride release potential. This in vitro study aimed to assess the effect of aging and the type of protective coating on the MH of EQUIA™ GI cement. A total of 30 disc-shaped specimens measuring 9 mm in diameter and 2 mm in thickness were fabricated of EQUIA™ GI and divided into three groups: G-Coat nanofilled coating (a), no coating (b), and margin bond (c). The Vickers MH value of the specimens was measured before (baseline) and at 3 and 6 months after water storage. Data were analyzed using repeated measures ANOVA. Group B had significantly higher MH than the other two groups at baseline. Both G-Coat and margin bond increased the surface MH of the GI at 3 and 6 months. The MH values of the G-Coat and margin bond groups did not significantly increase or decrease between 3 and 6 months. The increase in MH was greater in the G-Coat group than in the margin bond group in the long term. Clinically, margin bond may be a suitable alternative when G-Coat is not available.

  17. Staged margin-controlled excision (SMEX) for lentigo maligna melanoma in situ.

    PubMed

    Beveridge, Julie; Taher, Muba; Zhu, Jay; Mahmood, Muhammad N; Salopek, Thomas G

    2018-06-24

    No consensus exists regarding the best surgical strategy to achieve clear surgical margins while minimizing tissue excision when definitively excising lentigo maligna melanoma in situ (LM). The staged margin-controlled excision (SMEX) technique is a modification of the spaghetti technique that allows surgeons to minimize margins and ensure complete excision of LM. Our objectives were twofold: a) to evaluate the effectiveness of SMEX for the treatment of LM and b) to detail the SMEX technique. A retrospective chart review of adult patients who underwent the SMEX technique for treatment of LM from 2011 to 2016 was conducted. Twenty-four patients were identified, with predominantly facial lesions. The mean defect size was 12.1 cm². A mean of two SMEX procedures, with an average margin of 9 mm, was required to obtain complete excision of the LM. Using SMEX, we achieved 100% clearance of LM over a median follow-up period of 18 months, with a range of 1-63 months. SMEX offers a reliable surgical excision method that ensures complete excision of LM in a cosmetically sensitive manner. The recurrence outcomes of SMEX are comparable to, if not better than, those of alternative excision techniques in the literature. © 2018 Wiley Periodicals, Inc.

  18. Potential alternative fuel sources for agricultural crops and plant components

    USDA-ARS?s Scientific Manuscript database

    The changing landscape of agricultural production is placing unprecedented demands on farmers as they face increasing global competition and greater natural resource conservation challenges. However, shrinking profit margins due to increasing input costs, particularly of fuel and fertilizer, can res...

  19. On-Line Mu Method for Robust Flutter Prediction in Expanding a Safe Flight Envelope for an Aircraft Model Under Flight Test

    NASA Technical Reports Server (NTRS)

    Lind, Richard C. (Inventor); Brenner, Martin J.

    2001-01-01

    A structured singular value (mu) analysis method of computing flutter margins assesses the robust stability of a linear aeroelastic model with uncertainty operators (Delta). Flight data are used to update the uncertainty operators to accurately account for errors in the computed model and for the observed range of aircraft dynamics of the aircraft under test caused by time-varying aircraft parameters, nonlinearities, and flight anomalies such as test nonrepeatability. This mu-based approach computes predicted flutter margins that are worst case with respect to the modeling uncertainty, for use in determining when the aircraft is approaching a flutter condition and in defining an expanded safe flight envelope that is accepted with more confidence than with traditional methods, which do not update the analysis algorithm with flight data. Introducing mu as a flutter margin parameter presents several advantages over tracking damping trends as a measure of a tendency toward instability from available flight data.

  20. Rapid Screening of Cancer Margins in Tissue with Multimodal Confocal Microscopy

    PubMed Central

    Gareau, Daniel S.; Jeon, Hana; Nehal, Kishwer S.; Rajadhyaksha, Milind

    2012-01-01

    Background Complete and accurate excision of cancer is guided by the examination of histopathology. However, preparation of histopathology is labor intensive and slow, leading to insufficient sampling of tissue and incomplete and/or inaccurate excision of margins. We demonstrate the potential utility of multimodal confocal mosaicing microscopy for rapid screening of cancer margins, directly in fresh surgical excisions, without the need for conventional embedding, sectioning or processing. Materials/Methods A multimodal confocal mosaicing microscope was developed to image basal cell carcinoma margins in surgical skin excisions, with resolution that shows nuclear detail. Multimodal contrast is provided by fluorescence for imaging nuclei and by reflectance for cellular cytoplasm and dermal collagen. Thirty-five excisions of basal cell carcinomas from Mohs surgery were imaged, and the mosaics were analyzed by comparison with the corresponding frozen pathology. Results Confocal mosaics are produced in about 9 minutes, displaying tissue in fields-of-view of 12 mm with 2X magnification. A digital staining algorithm transforms black and white contrast to purple and pink, which simulates the appearance of standard histopathology. Mosaicing enables rapid digital screening, which mimics the examination of histopathology. Conclusions Multimodal confocal mosaicing microscopy offers a technology platform to potentially enable real-time pathology at the bedside. The imaging may serve as an adjunct to conventional histopathology, to expedite screening of margins and guide surgery toward more complete and accurate excision of cancer. PMID:22721570

  1. An alternative to FASTSIM for tangential solution of the wheel-rail contact

    NASA Astrophysics Data System (ADS)

    Sichani, Matin Sh.; Enblom, Roger; Berg, Mats

    2016-06-01

    In most rail vehicle dynamics simulation packages, tangential solution of the wheel-rail contact is gained by means of Kalker's FASTSIM algorithm. While 5-25% error is expected for creep force estimation, the errors of shear stress distribution, needed for wheel-rail damage analysis, may rise above 30% due to the parabolic traction bound. Therefore, a novel algorithm named FaStrip is proposed as an alternative to FASTSIM. It is based on the strip theory which extends the two-dimensional rolling contact solution to three-dimensional contacts. To form FaStrip, the original strip theory is amended to obtain accurate estimations for any contact ellipse size and it is combined with a numerical algorithm to handle spin. The comparison between the two algorithms shows that using FaStrip improves the accuracy of the estimated shear stress distribution and the creep force estimation in all studied cases. In combined lateral creepage and spin cases, for instance, the error in force estimation reduces from 18% to less than 2%. The estimation of the slip velocities in the slip zone, needed for wear analysis, is also studied. Since FaStrip is as fast as FASTSIM, it can be an alternative for tangential solution of the wheel-rail contact in simulation packages.

  2. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683

  3. Feature Screening in Ultrahigh Dimensional Cox's Model.

    PubMed

    Yang, Guangren; Yu, Ye; Li, Runze; Buu, Anne

    Survival data with ultrahigh dimensional covariates such as genetic markers have been collected in medical studies and other fields. In this work, we propose a feature screening procedure for the Cox model with ultrahigh dimensional covariates. The proposed procedure is distinguished from the existing sure independence screening (SIS) procedures (Fan, Feng and Wu, 2010, Zhao and Li, 2012) in that the proposed procedure is based on joint likelihood of potential active predictors, and therefore is not a marginal screening procedure. The proposed procedure can effectively identify active predictors that are jointly dependent but marginally independent of the response without performing an iterative procedure. We develop a computationally effective algorithm to carry out the proposed procedure and establish the ascent property of the proposed algorithm. We further prove that the proposed procedure possesses the sure screening property. That is, with the probability tending to one, the selected variable set includes the actual active predictors. We conduct Monte Carlo simulation to evaluate the finite sample performance of the proposed procedure and further compare the proposed procedure and existing SIS procedures. The proposed methodology is also demonstrated through an empirical analysis of a real data example.

  4. Automatic measurement of voice onset time using discriminative structured prediction.

    PubMed

    Sonderegger, Morgan; Keshet, Joseph

    2012-12-01

    A discriminative large-margin algorithm for automatic measurement of voice onset time (VOT) is described, considered as a case of predicting structured output from speech. Manually labeled data are used to train a function that takes as input a speech segment of an arbitrary length containing a voiceless stop, and outputs its VOT. The function is explicitly trained to minimize the difference between predicted and manually measured VOT; it operates on a set of acoustic feature functions designed based on spectral and temporal cues used by human VOT annotators. The algorithm is applied to initial voiceless stops from four corpora, representing different types of speech. Using several evaluation methods, the algorithm's performance is near human intertranscriber reliability, and compares favorably with previous work. Furthermore, the algorithm's performance is minimally affected by training and testing on different corpora, and remains essentially constant as the amount of training data is reduced to 50-250 manually labeled examples, demonstrating the method's practical applicability to new datasets.

  5. Multi-Agent Patrolling under Uncertainty and Threats.

    PubMed

    Chen, Shaofei; Wu, Feng; Shen, Lincheng; Chen, Jing; Ramchurn, Sarvapali D

    2015-01-01

    We investigate a multi-agent patrolling problem where information is distributed alongside threats in environments with uncertainties. Specifically, the information and threat at each location are independently modelled as multi-state Markov chains, whose states are not observed until the location is visited by an agent. While agents will obtain information at a location, they may also suffer damage from the threat at that location. Therefore, the goal of the agents is to gather as much information as possible while mitigating the damage incurred. To address this challenge, we formulate the single-agent patrolling problem as a Partially Observable Markov Decision Process (POMDP) and propose a computationally efficient algorithm to solve this model. Building upon this, to compute patrols for multiple agents, the single-agent algorithm is extended for each agent with the aim of maximising its marginal contribution to the team. We empirically evaluate our algorithm on problems of multi-agent patrolling and show that it outperforms a baseline algorithm up to 44% for 10 agents and by 21% for 15 agents in large domains.

  6. Titrating versus targeting home care services to frail elderly clients: an application of agency theory and cost-benefit analysis to home care policy.

    PubMed

    Weissert, William; Chernew, Michael; Hirth, Richard

    2003-02-01

    The article summarizes the shortcomings of current home care targeting policy, provides a conceptual framework for understanding the sources of its problems, and proposes an alternative resource allocation method. Methods required for different aspects of the study included synthesis of the published literature, regression analysis of risk predictors, and comparison of actual resource allocations with simulated budgets. Problems of imperfect agency ranging from unclear goals and inappropriate incentives to lack of information about the marginal effectiveness of home care could be mitigated with an improved budgeting method that combines client selection and resource allocation. No program can produce its best outcome performance when its goals are unclear and its technology is unstandardized. Titration of care would reallocate resources to maximize marginal benefit for marginal cost.

  7. Analyzing big data with the hybrid interval regression methods.

    PubMed

    Huang, Chia-Hui; Yang, Keng-Chieh; Kao, Han-Ying

    2014-01-01

    Big data is a new trend at present, forcing significant impacts on information technologies. In big data applications, one of the most concerning issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. Recently, the smooth support vector machine (SSVM) was proposed as an alternative to the standard SVM, and it has been proved more efficient than the traditional SVM in processing large-scale data. In addition, the soft margin method is proposed to modify the excursion of the separation margin and to be effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes is difficult to determine.

  8. Analyzing Big Data with the Hybrid Interval Regression Methods

    PubMed Central

    Kao, Han-Ying

    2014-01-01

    Big data is a new trend at present, forcing significant impacts on information technologies. In big data applications, one of the most concerning issues is dealing with large-scale data sets that often require computation resources provided by public cloud services. How to analyze big data efficiently becomes a big challenge. In this paper, we combine interval regression with the smooth support vector machine (SSVM) to analyze big data. Recently, the smooth support vector machine (SSVM) was proposed as an alternative to the standard SVM, and it has been proved more efficient than the traditional SVM in processing large-scale data. In addition, the soft margin method is proposed to modify the excursion of the separation margin and to be effective in the gray zone, where the distribution of the data becomes hard to describe and the separation margin between classes is difficult to determine. PMID:25143968

  9. Time Domain Stability Margin Assessment Method

    NASA Technical Reports Server (NTRS)

    Clements, Keith

    2017-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
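    The incremental-adjustment idea can be illustrated with a toy example: simulate a simple delayed plant under unity feedback, scale the loop gain, and bisect on the scale factor until the step response stops decaying. The plant, delay, step sizes, and thresholds below are invented for illustration and have nothing to do with the SLS models or simulations.

    ```python
    import numpy as np

    def step_response_deviation(K, delay=0.2, dt=0.001, t_end=40.0):
        """Forward-Euler step response of the toy loop K*exp(-delay*s)/(s(s+1))
        under unity feedback. Returns |y - 1| sampled over time."""
        n_delay = int(delay / dt)
        buf = np.zeros(n_delay)             # circular buffer of delayed output samples
        idx, x1, x2 = 0, 0.0, 0.0
        dev = np.empty(int(t_end / dt))
        for k in range(len(dev)):
            u = K * (1.0 - buf[idx])        # error = reference (1.0) - delayed output
            x2 += dt * (-x2 + u)            # plant states: x2' = -x2 + u, x1' = x2
            x1 += dt * x2
            buf[idx] = x1
            idx = (idx + 1) % n_delay
            dev[k] = abs(x1 - 1.0)
            if dev[k] > 1e6:                # already diverged, stop early
                return dev[:k + 1]
        return dev

    def is_unstable(K):
        """Call the response unstable if it diverges or if the oscillation about
        the setpoint is still growing in the last quarter of the simulation."""
        dev = step_response_deviation(K)
        q = len(dev) // 4
        return dev[-1] > 1e6 or dev[3 * q:].max() > 1.05 * dev[2 * q:3 * q].max()

    def time_domain_gain_margin(k_lo=1.0, k_hi=15.0, tol=0.05):
        """Bisect on the loop-gain multiplier until the response just becomes
        unstable; that multiplier is the time-domain gain margin relative to K=1."""
        while k_hi - k_lo > tol:
            k_mid = 0.5 * (k_lo + k_hi)
            k_lo, k_hi = (k_lo, k_mid) if is_unstable(k_mid) else (k_mid, k_hi)
        return 0.5 * (k_lo + k_hi)

    print(time_domain_gain_margin())        # close to the classical gain margin (about 5 for this toy loop)
    ```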

  10. Time-Domain Stability Margin Assessment

    NASA Technical Reports Server (NTRS)

    Clements, Keith

    2016-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
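
    The incremental gain-ratcheting idea can be sketched on a toy loop; this is an illustrative stand-in, not the SLS simulation: the 1/(s(s+1)) plant, the pure actuation delay, the Euler integration, and the divergence test are all assumptions.

      import numpy as np
      from collections import deque

      def simulate(gain, delay_steps=50, dt=1e-3, t_end=20.0):
          """Unit-step response of a unity-feedback loop: proportional gain,
          a pure actuation delay, and a 1/(s(s+1)) plant, integrated with Euler."""
          n = int(t_end / dt)
          x1 = x2 = 0.0                          # x1 = plant output, x2 = its rate
          buf = deque([0.0] * (delay_steps + 1)) # delay line for the commanded input
          y = np.zeros(n)
          for k in range(n):
              buf.append(gain * (1.0 - x1))      # command on the current error
              u = buf.popleft()                  # command that left the delay line
              x2 += dt * (-x2 + u)               # plant dynamics: x1'' + x1' = u
              x1 += dt * x2
              y[k] = x1
          return y

      def time_domain_gain_margin(gain0=1.0, step=0.5, gain_max=50.0):
          """Ratchet the loop gain upward until the response stops settling; the
          last bounded gain over the nominal gain approximates the gain margin."""
          gain, last_stable = gain0, gain0
          while gain <= gain_max:
              y = simulate(gain)
              n = len(y)
              early = np.max(np.abs(y[n // 2: 3 * n // 4] - 1.0))
              late = np.max(np.abs(y[3 * n // 4:] - 1.0))
              if not np.all(np.isfinite(y)) or late > 1.5 * early:  # crude divergence test
                  break
              last_stable = gain
              gain += step
          return last_stable

      g = time_domain_gain_margin()
      print(f"last stable loop gain ~ {g:.1f} (gain margin ~ {20*np.log10(g):.1f} dB vs. unity nominal gain)")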

  11. What makes up marginal lands and how can it be defined and classified?

    NASA Astrophysics Data System (ADS)

    Ivanina, Vadym

    2017-04-01

    Definitions of marginal lands are often not explicit. The term "marginal" is supported neither by a precise definition nor by research determining which lands fall into this category. The terminology and methodology used to identify marginal lands vary, taking either the physical characteristics or the current land use of a site as the basic perspective. The term 'marginal' is most commonly paired with 'degraded' lands, alongside other widely used terms such as 'abandoned', 'idle', 'pasture', 'surplus agricultural land', 'Conservation Reserve Programme (CRP)', 'barren and carbon-poor land', etc. Some terms are used synonymously. The category of "marginal" lands predominantly comprises lands excluded from cultivation because growing conventional crops there is economically infeasible or physically restricted. Such sites may still have potential for alternative agricultural practices, e.g. bioenergy feedstock production. The existing categorization of marginal lands allows neither soil fertility potential to be evaluated nor the type and level of constraints on growing crops to be defined, which gives it low practical value for land use planning. A new marginal land classification therefore has to be established and developed. This classification should be built on criteria of soil biophysical properties and of ecological, environmental and climatic handicaps for growing crops, be easy to use, and be of high practical value. The SEEMLA consortium has taken steps to build such a classification, based on direct criteria that describe soil properties and constraints and define their productivity potential. Under this classification, marginal lands are divided into eleven categories: shallow rooting, low fertility, stony texture, sandy texture, clay texture, salinic, sodicic, acidic, overwet, eroded, and contaminated. The criteria underlying this classification were modified after, and adapted from, Regulation EU 1305/2013. To delineate areas of marginal land subject to climatic and economic limitations, SEEMLA established and implemented the term "area of land marginality", which has a broader scope than marginal lands alone: it includes the marginal lands themselves together with an evaluation of climatic constraints and of the economic efficiency of growing crops. This approach allows marginal land to be defined, categorized and classified by direct indicators of soil biophysical properties and ecological and environmental constraints, and provides an additional evaluation of land marginality with regard to suitability for growing crops based on climatic criteria.

  12. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts.

    PubMed

    Preisser, John S; Long, D Leann; Stamm, John W

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two data sets, one consisting of fictional dmft counts in 2 groups and the other on DMFS among schoolchildren from a randomized clinical trial comparing 3 toothpaste formulations to prevent incident dental caries, are analyzed with negative binomial hurdle, zero-inflated negative binomial, and marginalized zero-inflated negative binomial models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the randomized clinical trial were similar despite their distinctive interpretations. The choice of statistical model class should match the study's purpose, while accounting for the broad decline in children's caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. © 2017 S. Karger AG, Basel.

  13. Matching the Statistical Model to the Research Question for Dental Caries Indices with Many Zero Counts

    PubMed Central

    Preisser, John S.; Long, D. Leann; Stamm, John W.

    2017-01-01

    Marginalized zero-inflated count regression models have recently been introduced for the statistical analysis of dental caries indices and other zero-inflated count data as alternatives to traditional zero-inflated and hurdle models. Unlike the standard approaches, the marginalized models directly estimate overall exposure or treatment effects by relating covariates to the marginal mean count. This article discusses model interpretation and model class choice according to the research question being addressed in caries research. Two datasets, one consisting of fictional dmft counts in two groups and the other on DMFS among schoolchildren from a randomized clinical trial (RCT) comparing three toothpaste formulations to prevent incident dental caries, are analysed with negative binomial hurdle (NBH), zero-inflated negative binomial (ZINB), and marginalized zero-inflated negative binomial (MZINB) models. In the first example, estimates of treatment effects vary according to the type of incidence rate ratio (IRR) estimated by the model. Estimates of IRRs in the analysis of the RCT were similar despite their distinctive interpretations. Choice of statistical model class should match the study’s purpose, while accounting for the broad decline in children’s caries experience, such that dmft and DMFS indices more frequently generate zero counts. Marginalized (marginal mean) models for zero-inflated count data should be considered for direct assessment of exposure effects on the marginal mean dental caries count in the presence of high frequencies of zero counts. PMID:28291962
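
    A compact way to see the distinction drawn above, written with generic symbols rather than the paper's notation (a sketch of the usual mean structures, not a transcription of the article): in a standard ZINB model the covariates act on the mean of the latent not-always-zero class, so the overall mean depends on them only indirectly, whereas the marginalized ZINB places the regression directly on the overall (marginal) mean, making exp(beta*) an overall incidence rate ratio for the whole population.

      % Standard ZINB: zero-inflation probability psi_i plus a negative binomial count part
      \Pr(Y_i = 0) = \psi_i + (1-\psi_i)\, p_{\mathrm{NB}}(0;\mu_i,\phi), \qquad
      \log \mu_i = \mathbf{x}_i^{\top}\boldsymbol{\beta}, \qquad
      \mathrm{E}[Y_i] = (1-\psi_i)\,\mu_i .
      % Marginalized ZINB: covariates act directly on the overall (marginal) mean,
      % so exp(beta^*) is an incidence rate ratio for the whole population.
      \log \mathrm{E}[Y_i] \;=\; \mathbf{x}_i^{\top}\boldsymbol{\beta}^{*} .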

  14. Passive margins getting squeezed in the mantle convection vice

    NASA Astrophysics Data System (ADS)

    Yamato, Philippe; Husson, Laurent; Becker, Thorsten W.; Pedoja, Kevin

    2013-12-01

    Passive margins often exhibit uplift, exhumation, and tectonic inversion. We speculate that compression in the lithosphere gradually increased during the Cenozoic, as suggested by the number of mountain belts found at active margins during that period. Less clear is how that increase in compression affects passive margins. In order to address this issue, we design a 2-D viscous numerical model wherein a lithospheric plate rests above a weaker mantle. It is driven by a mantle conveyor belt, alternatively excited by a lateral downwelling on one side, an upwelling on the other side, or both simultaneously. The lateral edges of the plate are either free or fixed, representing the cases of free convergence and of collision (or slab anchoring), respectively. This distinction changes the upper mechanical boundary condition for mantle circulation and thus the stress field. Between these two regimes, the flow pattern transiently evolves from a free-slip convection mode toward a no-slip boundary condition above the upper mantle. In the second case, the lithosphere is highly stressed horizontally and deforms. For a constant total driving force, compression increases drastically at passive margins if upwellings are active. Conversely, if downwellings alone are activated, compression occurs at short distances from the trench and extension prevails elsewhere. These results are supported by Earth-like models that reveal the same pattern, in which active upwellings are required to put passive margins under compression. Our results substantiate the idea that compression at passive margins arises in response to the underlying mantle flow, which is increasingly resisted by the Cenozoic collisions.

  15. Quantum factorization of 143 on a dipolar-coupling nuclear magnetic resonance system.

    PubMed

    Xu, Nanyang; Zhu, Jing; Lu, Dawei; Zhou, Xianyi; Peng, Xinhua; Du, Jiangfeng

    2012-03-30

    Quantum algorithms could be much faster than classical ones in solving the factoring problem. Adiabatic quantum computation offers an alternative approach to Shor's algorithm. Here we report an improved adiabatic factoring algorithm and its experimental realization, used to factor the number 143 on a liquid-crystal NMR quantum processor with dipole-dipole couplings. We believe this to be the largest number factored in quantum-computation realizations, which shows the practical importance of adiabatic quantum algorithms.

  16. Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm

    NASA Astrophysics Data System (ADS)

    Wang, Qimei; Yang, Zhihong; Wang, Yong

    In this paper, an improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
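
    The alternation described above can be caricatured in a few lines. This is a simplified sketch, not the authors' procedure: the assimilation-style move, the sphere test function, and all population settings and step sizes are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      def sphere(x):
          """One of the usual benchmark objectives."""
          return float(np.sum(x ** 2))

      def hybrid_ica_nelder_mead(f, dim=10, n_pop=30, n_iter=50, seed=0):
          """Alternate a crude imperialist-competitive-style step (colonies drift
          toward the best point, plus noise) with a Nelder-Mead polish of the best."""
          rng = np.random.default_rng(seed)
          pop = rng.uniform(-5.0, 5.0, size=(n_pop, dim))
          for _ in range(n_iter):
              costs = np.apply_along_axis(f, 1, pop)
              best = pop[np.argmin(costs)].copy()
              # assimilation-like move: every colony moves part-way toward the best
              pop += 0.5 * (best - pop) + rng.normal(0.0, 0.1, size=pop.shape)
              # local refinement: Nelder-Mead simplex started from the incumbent best
              res = minimize(f, best, method="Nelder-Mead",
                             options={"maxiter": 50, "xatol": 1e-8, "fatol": 1e-8})
              pop[0] = res.x                    # re-inject the polished solution
          costs = np.apply_along_axis(f, 1, pop)
          return pop[np.argmin(costs)], float(np.min(costs))

      x_best, f_best = hybrid_ica_nelder_mead(sphere)
      print(f"best objective value after hybrid search: {f_best:.3e}")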

  17. Command and Control of Teams of Autonomous Units

    DTIC Science & Technology

    2012-06-01

    done by a hybrid genetic algorithm (GA) particle swarm optimization (PSO) algorithm called PIDGION-alternate. This training algorithm is an ANN ...human controller will recognize the behaviors as being safe and correct. As the HyperNEAT approach produces Artificial Neural Nets (ANN), we can...optimization technique that generates efficient ANN controls from simple environmental feedback. FALCONET has been tested showing that it can produce

  18. Evaluation of the influence of dominance rules for the assembly line design problem under consideration of product design alternatives

    NASA Astrophysics Data System (ADS)

    Oesterle, Jonathan; Lionel, Amodeo

    2018-06-01

    The current competitive situation increases the importance of realistically estimating product costs during the early phases of product and assembly line planning projects. In this article, several multi-objective algorithms using different dominance rules are proposed to solve the problem of selecting the most effective combination of product design and assembly line alternatives. The developed algorithms include variants of ant colony algorithms, evolutionary algorithms and imperialist competitive algorithms. The performance of each algorithm and dominance rule is analysed using five multi-objective quality indicators on fifty problem instances. The algorithms and dominance rules are ranked using a non-parametric statistical test.

  19. Algorithms for optimizing the treatment of depression: making the right decision at the right time.

    PubMed

    Adli, M; Rush, A J; Möller, H-J; Bauer, M

    2003-11-01

    Medication algorithms for the treatment of depression are designed to optimize both treatment implementation and the appropriateness of treatment strategies. Thus, they are essential tools for treating and avoiding refractory depression. Treatment algorithms are explicit treatment protocols that provide specific therapeutic pathways and decision-making tools at critical decision points throughout the treatment process. The present article provides an overview of major projects of algorithm research in the field of antidepressant therapy. The Berlin Algorithm Project and the Texas Medication Algorithm Project (TMAP) compare algorithm-guided treatments with treatment as usual. The Sequenced Treatment Alternatives to Relieve Depression Project (STAR*D) compares different treatment strategies in treatment-resistant patients.

  20. One-way quantum computing in superconducting circuits

    NASA Astrophysics Data System (ADS)

    Albarrán-Arriagada, F.; Alvarado Barrios, G.; Sanz, M.; Romero, G.; Lamata, L.; Retamal, J. C.; Solano, E.

    2018-03-01

    We propose a method for the implementation of one-way quantum computing in superconducting circuits. Measurement-based quantum computing is a universal quantum computation paradigm in which an initial cluster state provides the quantum resource, while the iteration of sequential measurements and local rotations encodes the quantum algorithm. Up to now, technical constraints have limited a scalable approach to this quantum computing alternative. The initial cluster state can be generated with available controlled-phase gates, while the quantum algorithm makes use of high-fidelity readout and coherent feedforward. With current technology, we estimate that quantum algorithms with above 20 qubits may be implemented in the path toward quantum supremacy. Moreover, we propose an alternative initial state with properties of maximal persistence and maximal connectedness, reducing the required resources of one-way quantum computing protocols.

  1. Health behaviors and mental health of students attending alternative high schools: a review of the research literature.

    PubMed

    Johnson, Karen E; Taliaferro, Lindsay A

    2012-04-01

    The purpose of this review is to describe current knowledge about health-risk behaviors and mental health among alternative high school students. Substance use, diet and/or physical activity, sexual-risk behaviors, mental health, and violence were reviewed. Students were described as marginalized youth facing significant social environmental challenges. Findings from 43 studies published from 1997-2010 suggested a high prevalence of health-risk behaviors among alternative high school students. Very few studies were conducted by nurse researchers. Suggestions for future research include addressing social environmental factors, resiliency, and emotional/mental health outcomes. Alternative high schools offer a venue to conduct research and implement nursing interventions with high-risk, yet resilient, youth. © 2011, Wiley Periodicals, Inc.

  2. SU-F-BRD-09: Is It Sufficient to Use Only Low Density Tissue-Margin to Compensate Inter-Fractionation Setup Uncertainties in Lung Treatment?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nie, K; Yue, N; Chen, T

    2014-06-15

    Purpose: In lung radiation treatment, the PTV is formed with a margin around the GTV (or CTV/ITV). Although the GTV is most likely of water-equivalent density, the PTV margin may be formed within the surrounding low-density tissue, which may lead to an unrealistic dosimetric plan. This study evaluates whether the concern about dose calculation inside a PTV with only a low-density margin is justified in lung treatment. Methods: Three SBRT cases were analyzed. The PTV in the original plan (Plan-O) was created with a 5-10 mm margin outside the ITV to incorporate setup errors and all mobility from 10 respiratory phases. Test plans were generated with the GTV shifted to the PTV edge to simulate extreme situations with maximum setup uncertainties. Two representative positions, at the very posterior-superior (Plan-PS) and anterior-inferior (Plan-AI) edges, were considered. The virtual GTV was assigned a density of 1.0 g.cm−3 and the surrounding lung, including the PTV margin, was defined as 0.25 g.cm−3. An additional plan with a 1 mm tissue margin instead of a full lung margin was created to evaluate whether a composite margin (Plan-Comp) gives a better approximation for dose calculation. All plans were generated on the average CT using the Analytical Anisotropic Algorithm with heterogeneity correction on, and all planning parameters/monitor units remained unchanged. DVH analyses were performed for comparison. Results: Despite the non-static dose distribution, the high-dose region synchronized with the tumor positions. This might be due to scatter conditions, as greater doses were absorbed in the solid tumor than in the surrounding low-density lung tissue. However, the plans still showed missing target coverage in general. A certain level of composite margin might give a better approximation for the dose calculation. Conclusion: Our exploratory results suggest that with the lung margin only, the planning dose of the PTV might overestimate the coverage of the target during treatment. The significance of this overestimation might warrant further investigation.

  3. The 2009 Influenza A(H1N1) Outbreak: Selected Legal Issues

    DTIC Science & Technology

    2009-05-21

    primarily against marginalized, nonwhite persons underscores the need for legal oversight—if only so that affected communities can be assured of the absence...settings. Potential strategies and/or guidance addressing telecommuting, alternative schedules, or modified operating hours for retail establishments

  4. Development, fabrication and test of a high purity silica heat shield

    NASA Technical Reports Server (NTRS)

    Rusert, E. L.; Drennan, D. N.; Biggs, M. S.

    1978-01-01

    A highly reflective hyperpure (25 ppm ion impurities) slip cast fused silica heat shield material developed for planetary entry probes was successfully scaled up. Process development activities for slip casting large parts included green strength improvements, casting slip preparation, aggregate casting, strength, reflectance, and subscale fabrication. Successful fabrication of a one-half scale Saturn probe (shape and size) heat shield was accomplished while maintaining the silica high purity and reflectance through the scale-up process. However, stress analysis of this original aggregate slip cast material indicated a small margin of safety (M.S. = +4%) using a factor of safety of 1.25. An alternate hyperpure material formulation to increase the strength and toughness for a greater safety margin was evaluated. The alternate material incorporates short hyperpure silica fibers into the casting slip. The best formulation evaluated has a 50% by weight fiber addition, resulting in an 80% increase in flexural strength and a 170% increase in toughness over the original aggregate slip cast material, with comparable reflectance.

  5. Costs and consequences of automated algorithms versus manual grading for the detection of referable diabetic retinopathy.

    PubMed

    Scotland, G S; McNamee, P; Fleming, A D; Goatman, K A; Philip, S; Prescott, G J; Sharp, P F; Williams, G J; Wykes, W; Leese, G P; Olson, J A

    2010-06-01

    To assess the cost-effectiveness of an improved automated grading algorithm for diabetic retinopathy against a previously described algorithm, and in comparison with manual grading. Efficacy of the alternative algorithms was assessed using a reference graded set of images from three screening centres in Scotland (1253 cases with observable/referable retinopathy and 6333 individuals with mild or no retinopathy). Screening outcomes and grading and diagnosis costs were modelled for a cohort of 180 000 people, with prevalence of referable retinopathy at 4%. Algorithm (b), which combines image quality assessment with detection algorithms for microaneurysms (MA), blot haemorrhages and exudates, was compared with a simpler algorithm (a) (using image quality assessment and MA/dot haemorrhage (DH) detection), and the current practice of manual grading. Compared with algorithm (a), algorithm (b) would identify an additional 113 cases of referable retinopathy for an incremental cost of £68 per additional case. Compared with manual grading, automated grading would be expected to identify between 54 and 123 fewer referable cases, for a grading cost saving of between £3834 and £1727 per case missed. Extrapolation modelling over a 20-year time horizon suggests manual grading would cost between £25,676 and £267,115 per additional quality-adjusted life year gained. Algorithm (b) is more cost-effective than the algorithm based on quality assessment and MA/DH detection. With respect to the value of introducing automated detection systems into screening programmes, automated grading operates within the recommended national standards in Scotland and is likely to be considered a cost-effective alternative to manual disease/no disease grading.

  6. Multi-objective optimization for conjunctive water use using coupled hydrogeological and agronomic models: a case study in Heihe mid-reach (China)

    NASA Astrophysics Data System (ADS)

    LI, Y.; Kinzelbach, W.; Pedrazzini, G.

    2017-12-01

    Groundwater is a vital water resource for buffering unexpected drought risk in agricultural production, but it is prone to unsustainable exploitation because of its open-access character and a much-underestimated marginal cost. Water resource management is a wicked problem in general, and the fact that groundwater stays hidden beneath the surface further amplifies the difficulties of managing it. China has been facing this challenge over recent decades, particularly in the north, where irrigated agriculture is concentrated even though surface water is scarcer than in the south. Farmers have therefore increasingly exploited groundwater as an alternative in order to meet China's food self-sufficiency requirements and feed fast socio-economic development. In this work, we studied the Heihe mid-reach in northern China, which represents one of the few regions suffering from symptoms of unsustainable groundwater use, such as a large drawdown of the groundwater table in some irrigation districts and soil salinization due to phreatic evaporation in others. We focus on solving a multi-objective optimization problem of conjunctive water use in order to find an alternative management scheme that fits decision makers' preferences. The methodology starts with a global sensitivity analysis to determine the most influential decision variables. A state-of-the-art multi-objective evolutionary algorithm (MOEA) is then employed to search the high-dimensional Pareto front. The aquifer system is simulated with a distributed Modflow model, which is able to capture the main phenomena of interest. Results show that the current water allocation scheme exploits the water resources inefficiently: both the depression cones in some areas and the salinization or groundwater table rise in others could be mitigated with an alternative management scheme. Under uncertain boundary conditions reflecting future climate change, the optimal solutions yield better economic productivity by reducing the opportunity cost of unexpected drought conditions.

  7. Synthesizing Equivalence Indices for the Comparative Evaluation of Technoeconomic Efficiency of Industrial Processes at the Design/Re-engineering Level

    NASA Astrophysics Data System (ADS)

    Fotilas, P.; Batzias, A. F.

    2007-12-01

    The equivalence indices synthesized for the comparative evaluation of the technoeconomic efficiency of industrial processes are of critical importance, since they serve both as (i) positive/analytic descriptors of the physicochemical nature of the process and as (ii) measures of effectiveness, especially helpful for investigating competitiveness in the industrial/energy/environmental sector of the economy. In the present work, a new algorithmic procedure has been developed which initially standardizes a real industrial process, then analyzes it as a compromise between two ideal processes, and finally synthesizes the index that can represent/reconstruct the real process as the result of a trade-off between the two ideal processes taken as parental prototypes. The same procedure performs fuzzy multicriteria ranking within a set of pre-selected industrial processes for two reasons: (a) to analyze the process most representative of the production/treatment under consideration, and (b) to use the `second best' alternative as a dialectic pole in the absence of the two ideal processes mentioned above. An implementation of this procedure is presented, concerning a facility for biological wastewater treatment with six alternatives: activated sludge through (i) continuous-flow incompletely-stirred tank reactors in series, (ii) a plug flow reactor with dispersion, (iii) an oxidation ditch, and biological processing through (iv) a trickling filter, (v) rotating contactors, and (vi) shallow ponds. The criteria used for fuzzy (to account for uncertainty) ranking are capital cost, operating cost, environmental friendliness, reliability, flexibility, and extendibility. Two complementary indices were synthesized for the (ii)-alternative, which ranked first, and their quantitative expressions were derived, covering a variety of kinetic models as well as recycle/bypass conditions. Finally, an analysis estimating the optimal values of these indices at maximum technoeconomic efficiency is presented, and the implications expected to be caused by exogenous and endogenous factors (e.g., changes in environmental standards and innovative energy savings/substitution, respectively) are discussed by means of marginal efficiency graphs.

  8. Micro-CT image reconstruction based on alternating direction augmented Lagrangian method and total variation.

    PubMed

    Gopi, Varun P; Palanisamy, P; Wahid, Khan A; Babyn, Paul; Cooper, David

    2013-01-01

    Micro-computed tomography (micro-CT) plays an important role in pre-clinical imaging. The radiation from micro-CT can result in excess radiation exposure to the specimen under test; hence the reduction of radiation from micro-CT is essential. The proposed research focused on analyzing and testing an alternating direction augmented Lagrangian (ADAL) algorithm to recover images from random projections using total variation (TV) regularization. The use of TV regularization in compressed sensing problems makes the recovered image sharper by preserving the edges or boundaries more accurately. In this work, the TV regularization problem is addressed by ADAL, which is a variant of the classic augmented Lagrangian method for structured optimization. The per-iteration computational cost of the algorithm is two fast Fourier transforms, two matrix-vector multiplications and a linear-time shrinkage operation. Comparison of experimental results indicates that the proposed algorithm is stable, efficient and competitive with existing algorithms for solving TV regularization problems. Copyright © 2013 Elsevier Ltd. All rights reserved.
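
    A minimal 1-D analogue of the alternating-direction splitting with a TV term is sketched below for intuition only; the authors' ADAL works on projection data and uses FFT-based updates, whereas here the operator, penalty weights, and iteration count are assumptions. The z-update is the linear-time shrinkage operation mentioned in the abstract.

      import numpy as np

      def soft_threshold(v, t):
          """Linear-time shrinkage operator: the z-update of the splitting."""
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def tv_denoise_admm(b, lam=1.0, rho=1.0, n_iter=200):
          """Solve min_x 0.5*||x-b||^2 + lam*||Dx||_1 (1-D total variation) by
          alternating direction: split z = Dx, then iterate x-, z- and dual updates."""
          n = len(b)
          D = np.diff(np.eye(n), axis=0)            # forward-difference operator
          A = np.eye(n) + rho * D.T @ D             # constant x-update system matrix
          x = b.copy()
          z = D @ x
          u = np.zeros(n - 1)                       # scaled dual variable
          for _ in range(n_iter):
              x = np.linalg.solve(A, b + rho * D.T @ (z - u))
              z = soft_threshold(D @ x + u, lam / rho)
              u += D @ x - z
          return x

      # toy usage: recover a piecewise-constant signal from noisy samples
      rng = np.random.default_rng(1)
      clean = np.concatenate([np.zeros(50), np.ones(50), 0.3 * np.ones(50)])
      noisy = clean + rng.normal(0.0, 0.2, clean.size)
      denoised = tv_denoise_admm(noisy, lam=1.0)
      print("MSE noisy   :", float(np.mean((noisy - clean) ** 2)))
      print("MSE denoised:", float(np.mean((denoised - clean) ** 2)))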

  9. Mapreduce is Good Enough? If All You Have is a Hammer, Throw Away Everything That's Not a Nail!

    PubMed

    Lin, Jimmy

    2013-03-01

    Hadoop is currently the large-scale data analysis "hammer" of choice, but there exist classes of algorithms that aren't "nails" in the sense that they are not particularly amenable to the MapReduce programming model. To address this, researchers have proposed MapReduce extensions or alternative programming models in which these algorithms can be elegantly expressed. This article espouses a very different position: that MapReduce is "good enough," and that instead of trying to invent screwdrivers, we should simply get rid of everything that's not a nail. To be more specific, much discussion in the literature surrounds the fact that iterative algorithms are a poor fit for MapReduce. The simple solution is to find alternative, noniterative algorithms that solve the same problem. This article captures my personal experiences as an academic researcher as well as a software engineer in a "real-world" production analytics environment. From this combined perspective, I reflect on the current state and future of "big data" research.

  10. Super-resolved Parallel MRI by Spatiotemporal Encoding

    PubMed Central

    Schmidt, Rita; Baishya, Bikash; Ben-Eliezer, Noam; Seginer, Amir; Frydman, Lucio

    2016-01-01

    Recent studies described an alternative “ultrafast” scanning method based on spatiotemporal (SPEN) principles. SPEN demonstrates numerous potential advantages over EPI-based alternatives, at no additional expense in experimental complexity. An important aspect that SPEN still needs to achieve for providing a competitive acquisition alternative entails exploiting parallel imaging algorithms, without compromising its proven capabilities. The present work introduces a combination of multi-band frequency-swept pulses simultaneously encoding multiple, partial fields-of-view; together with a new algorithm merging a Super-Resolved SPEN image reconstruction and SENSE multiple-receiving methods. The ensuing approach enables one to reduce both the excitation and acquisition times of ultrafast SPEN acquisitions by the customary acceleration factor R, without compromises in either the ensuing spatial resolution, SAR deposition, or the capability to operate in multi-slice mode. The performance of these new single-shot imaging sequences and their ancillary algorithms were explored on phantoms and human volunteers at 3T. The gains of the parallelized approach were particularly evident when dealing with heterogeneous systems subject to major T2/T2* effects, as is the case upon single-scan imaging near tissue/air interfaces. PMID:24120293

  11. XY vs X Mixer in Quantum Alternating Operator Ansatz for Optimization Problems with Constraints

    NASA Technical Reports Server (NTRS)

    Wang, Zhihui; Rubin, Nicholas; Rieffel, Eleanor G.

    2018-01-01

    Quantum Approximate Optimization Algorithm, further generalized as Quantum Alternating Operator Ansatz (QAOA), is a family of algorithms for combinatorial optimization problems. It is a leading candidate to run on emerging universal quantum computers to gain insight into quantum heuristics. In constrained optimization, penalties are often introduced so that the ground state of the cost Hamiltonian encodes the solution (a standard practice in quantum annealing). An alternative is to choose a mixing Hamiltonian such that the constraint corresponds to a constant of motion and the quantum evolution stays in the feasible subspace. Better performance of the algorithm is speculated due to a much smaller search space. We consider problems with a constant Hamming weight as the constraint. We also compare different methods of generating the generalized W-state, which serves as a natural initial state for the Hamming-weight constraint. Using graph-coloring as an example, we compare the performance of using XY model as a mixer that preserves the Hamming weight with the performance of adding a penalty term in the cost Hamiltonian.
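
    The claim that an XY mixer keeps the evolution inside a fixed-Hamming-weight subspace can be checked numerically on a few qubits. The sketch below is not the paper's implementation: it builds a ring XY mixer with dense matrices and verifies that it commutes with the total number (Hamming-weight) operator; the qubit count and ring topology are assumptions.

      import numpy as np
      from functools import reduce

      I2 = np.eye(2)
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
      N1 = np.array([[0, 0], [0, 1]], dtype=complex)   # single-qubit number operator |1><1|

      def op_on(site_ops, n):
          """Tensor single-qubit operators (dict: site -> operator) into an n-qubit operator."""
          return reduce(np.kron, [site_ops.get(k, I2) for k in range(n)])

      def xy_mixer(n):
          """Ring XY mixer: sum over nearest-neighbour edges of (X_i X_j + Y_i Y_j)/2."""
          H = np.zeros((2 ** n, 2 ** n), dtype=complex)
          for i in range(n):
              j = (i + 1) % n
              H += 0.5 * (op_on({i: X, j: X}, n) + op_on({i: Y, j: Y}, n))
          return H

      def hamming_weight_op(n):
          """Total number operator: counts the Hamming weight of a basis state."""
          return sum(op_on({i: N1}, n) for i in range(n))

      n = 4
      H_xy, W = xy_mixer(n), hamming_weight_op(n)
      comm = H_xy @ W - W @ H_xy
      print("||[H_XY, W]|| =", np.linalg.norm(comm))   # ~0: Hamming weight is conserved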

  12. An algorithm for converting a virtual-bond chain into a complete polypeptide backbone chain

    NASA Technical Reports Server (NTRS)

    Luo, N.; Shibata, M.; Rein, R.

    1991-01-01

    A systematic analysis is presented of the algorithm for converting a virtual-bond chain, defined by the coordinates of the alpha-carbons of a given protein, into a complete polypeptide backbone. An alternative algorithm, based upon the same set of geometric parameters used in the Purisima-Scheraga algorithm but with a different "linkage map" of the algorithmic procedures, is proposed. The global virtual-bond chain geometric constraints are more easily separable from the local peptide geometric and energetic constraints derived from, for example, the Ramachandran criterion, within the framework of this approach.

  13. Disease Prediction based on Functional Connectomes using a Scalable and Spatially-Informed Support Vector Machine

    PubMed Central

    Watanabe, Takanori; Kessler, Daniel; Scott, Clayton; Angstadt, Michael; Sripada, Chandra

    2014-01-01

    Substantial evidence indicates that major psychiatric disorders are associated with distributed neural dysconnectivity, leading to strong interest in using neuroimaging methods to accurately predict disorder status. In this work, we are specifically interested in a multivariate approach that uses features derived from whole-brain resting state functional connectomes. However, functional connectomes reside in a high dimensional space, which complicates model interpretation and introduces numerous statistical and computational challenges. Traditional feature selection techniques are used to reduce data dimensionality, but are blind to the spatial structure of the connectomes. We propose a regularization framework where the 6-D structure of the functional connectome (defined by pairs of points in 3-D space) is explicitly taken into account via the fused Lasso or the GraphNet regularizer. Our method only restricts the loss function to be convex and margin-based, allowing non-differentiable loss functions such as the hinge-loss to be used. Using the fused Lasso or GraphNet regularizer with the hinge-loss leads to a structured sparse support vector machine (SVM) with embedded feature selection. We introduce a novel efficient optimization algorithm based on the augmented Lagrangian and the classical alternating direction method, which can solve both fused Lasso and GraphNet regularized SVM with very little modification. We also demonstrate that the inner subproblems of the algorithm can be solved efficiently in analytic form by coupling the variable splitting strategy with a data augmentation scheme. Experiments on simulated data and resting state scans from a large schizophrenia dataset show that our proposed approach can identify predictive regions that are spatially contiguous in the 6-D “connectome space,” offering an additional layer of interpretability that could provide new insights about various disease processes. PMID:24704268

  14. Synthesis of Greedy Algorithms Using Dominance Relations

    NASA Technical Reports Server (NTRS)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
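
    For concreteness, the activity-selection example cited above admits the classic earliest-finish-time greedy. The sketch below is a generic textbook version, not the synthesized program from the paper; the dominance-style justification appears only as a comment, and the toy instance is an assumption.

      from typing import List, Tuple

      def select_activities(activities: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
          """Greedy activity selection: sort by finish time and keep every activity
          that starts no earlier than the finish of the last one kept. The greedy
          choice is backed by a dominance argument: among compatible activities,
          the one finishing earliest leaves the largest residual problem."""
          chosen = []
          last_finish = float("-inf")
          for start, finish in sorted(activities, key=lambda a: a[1]):
              if start >= last_finish:
                  chosen.append((start, finish))
                  last_finish = finish
          return chosen

      print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
                               (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]))
      # a maximum-size compatible set, e.g. [(1, 4), (5, 7), (8, 11), (12, 16)]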

  15. The causes and effects of the Alternative Motor Fuels Act

    NASA Astrophysics Data System (ADS)

    Liu, Yimin

    The corporate average fuel economy (CAFE) standard is the major policy tool for improving the fleet-average miles per gallon of automobile manufacturers in the U.S. The Alternative Motor Fuels Act (AMFA) provides special treatment in calculating the fuel economy of alternative fuel vehicles, giving manufacturers CAFE incentives to produce more alternative fuel vehicles. AMFA has as its goals an increase in the production of alternative fuel vehicles and a decrease in gasoline consumption and greenhouse gas emissions. This dissertation examines theoretically the effects of the program set up under AMFA. It finds that, under some conditions, this program may actually increase gasoline consumption and greenhouse gas emissions. The dissertation also uses hedonic techniques to examine whether AMFA has a significant effect on the implicit price of fuel economy and whether the marginal value of vehicle fuel efficiency changes over time. It estimates the change in the implicit price of miles per gallon after the introduction of alternative fuel vehicles (AFVs). Results indicate that consumers may evaluate vehicle fuel economy differently from year to year, and that since AFVs came to the market, the marginal value of fuel economy for the specific companies producing AFVs has decreased. This finding suggests that, because AMFA provides extra CAFE credit to automakers producing AFVs, those automakers can take advantage of the incentive to produce more profitable conventional vehicles and meet CAFE standards without improving fleet fuel economy. In this way, manufacturers who produce AFVs are willing to offer a lower price for fuel economy under the AMFA. Additionally, this dissertation suggests that the flexible fuel vehicles (FFVs) on the market are not significantly more expensive than comparable conventional vehicles, even though FFVs are also able to run on an alternative fuel and may cost more than conventional vehicles. In other words, consumers may not notice the difference between flexible fuel vehicles and conventional vehicles, or are not willing to pay higher prices for FFVs of the same make and model. When the U.S. House of Representatives passed the AMFA in 1987, the representatives who did not vote outnumbered those who opposed the law. This dissertation uses a bivariate probit model with sample selection to study congressmen's two-step decisions --- whether to vote and then how to vote --- on the bill. Theories of political decision-making are examined and tested with this two-stage voting procedure, which confirms that constituent economic interests, congressmen's ideology and interest groups' contributions play important roles in congressmen's decision-making on economic policies. Furthermore, it suggests that ignoring congressmen who did not vote may lead to biased conclusions or inaccurate estimates of the influence of some factors. This study also compares the results from the two-step process with the results from the sample of congressmen who voted, and calculates bounds on the marginal effect of every factor on the probability of passing the AMFA.

  16. Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography

    DTIC Science & Technology

    2017-05-01

    contraction images were analyzed visually and with three different classes of algorithms: pixel standard deviation (SD), high-pass filter and Teager Kaiser... Linear relationships and agreements between computed and visual muscle onset were calculated. The top algorithms were high-pass filtered with a 30 Hz... suggest that computer automated determination using high-pass filtering is a potential objective alternative to visual determination in human

  17. Model-based clustering for RNA-seq data.

    PubMed

    Si, Yaqing; Liu, Peng; Li, Pinghua; Brutnell, Thomas P

    2014-01-15

    RNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis. In this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization algorithm and another two stochastic versions of expectation-maximization algorithms are described. In addition, a strategy for initialization based on likelihood is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility of choosing the number of clusters. Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models. An R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org
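
    As a stripped-down illustration of model-based clustering for count profiles (not the MBCluster.Seq model, which uses negative binomial distributions and hybrid-hierarchical variants), the sketch below fits a Poisson mixture by EM; the component count, initialization, and toy data are assumptions.

      import numpy as np
      from scipy.special import gammaln, logsumexp

      def poisson_mixture_em(counts, k=3, n_iter=100, seed=0):
          """Cluster count profiles with a k-component Poisson mixture fitted by EM
          (a simplified stand-in for a negative-binomial-based model)."""
          rng = np.random.default_rng(seed)
          n, p = counts.shape
          log_pi = np.full(k, -np.log(k))                       # mixing proportions
          lam = counts[rng.choice(n, k, replace=False)] + 0.5   # component means
          for _ in range(n_iter):
              # E-step: log responsibilities under independent Poisson coordinates
              ll = (counts[:, None, :] * np.log(lam[None]) - lam[None]
                    - gammaln(counts[:, None, :] + 1)).sum(axis=2) + log_pi[None]
              resp = np.exp(ll - logsumexp(ll, axis=1, keepdims=True))
              # M-step: update mixing proportions and component means
              nk = resp.sum(axis=0) + 1e-12
              log_pi = np.log(nk / n)
              lam = (resp.T @ counts) / nk[:, None] + 1e-12
          return resp.argmax(axis=1), lam

      # toy usage: three groups of genes with distinct expression profiles
      rng = np.random.default_rng(1)
      true_means = np.array([[5., 5., 50.], [50., 5., 5.], [20., 20., 20.]])
      counts = np.vstack([rng.poisson(m, size=(40, 3)) for m in true_means])
      labels, centers = poisson_mixture_em(counts, k=3)
      print("cluster sizes:", np.bincount(labels))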

  18. Comparison of photo-matching algorithms commonly used for photographic capture-recapture studies.

    PubMed

    Matthé, Maximilian; Sannolo, Marco; Winiarski, Kristopher; Spitzen-van der Sluijs, Annemarieke; Goedbloed, Daniel; Steinfartz, Sebastian; Stachow, Ulrich

    2017-08-01

    Photographic capture-recapture is a valuable tool for obtaining demographic information on wildlife populations due to its noninvasive nature and cost-effectiveness. Recently, several computer-aided photo-matching algorithms have been developed to more efficiently match images of unique individuals in databases with thousands of images. However, the identification accuracy of these algorithms can severely bias estimates of vital rates and population size. Therefore, it is important to understand the performance and limitations of state-of-the-art photo-matching algorithms prior to implementation in capture-recapture studies involving possibly thousands of images. Here, we compared the performance of four photo-matching algorithms; Wild-ID, I3S Pattern+, APHIS, and AmphIdent using multiple amphibian databases of varying image quality. We measured the performance of each algorithm and evaluated the performance in relation to database size and the number of matching images in the database. We found that algorithm performance differed greatly by algorithm and image database, with recognition rates ranging from 100% to 22.6% when limiting the review to the 10 highest ranking images. We found that recognition rate degraded marginally with increased database size and could be improved considerably with a higher number of matching images in the database. In our study, the pixel-based algorithm of AmphIdent exhibited superior recognition rates compared to the other approaches. We recommend carefully evaluating algorithm performance prior to using it to match a complete database. By choosing a suitable matching algorithm, databases of sizes that are unfeasible to match "by eye" can be easily translated to accurate individual capture histories necessary for robust demographic estimates.

  19. 20171015 - Integrating Toxicity, Toxicokinetic, and Exposure Data for Risk-based Chemical Alternatives Assessment (ISES)

    EPA Science Inventory

    In order to predict the margin between the dose needed for adverse chemical effects and actual human exposure rates, data on hazard, exposure, and toxicokinetics are needed. In vitro methods, biomonitoring, and mathematical modeling have provided initial estimates for many extant...

  20. Glyphosate-resistant Palmer amaranth (Amaranthus palmeri) morphology, growth, and seed production in Georgia

    USDA-ARS?s Scientific Manuscript database

    Herbicide resistant Palmer amaranth has become the most economically detrimental weed of cotton in the Southeast US. With the continual marginalization of potential herbicide tools, research has expanded to include alternative means of affecting future Palmer amaranth populations by altering safe s...

  1. Glyphosate-resistant Palmer amaranth (Amaranthus palmeri) morphology, growth, and seed production in Georgia

    USDA-ARS?s Scientific Manuscript database

    Herbicide resistant Palmer amaranth has become the most economically detrimental weed of cotton in the Southeast US. With the continual marginalization of potential herbicide tools, research has expanded to include alternative means of affecting future Palmer amaranth populations by altering safe s...

  2. Coming to Do Mathematics in the Margins

    ERIC Educational Resources Information Center

    Brown, Raymond; Redmond, Trevor

    2015-01-01

    This paper explores teachers' "identity" as two teachers talk about teaching mathematics in classrooms situated within two different contexts of learning--mainstream and alternative. Employing a form of discourse analysis framed within a participation approach to learning, this paper describes teacher identity in terms of the norms and…

  3. Comparison of pencil beam–based homogeneous vs inhomogeneous target dose planning for stereotactic body radiotherapy of peripheral lung tumors through Monte Carlo–based recalculation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohtakara, Kazuhiro, E-mail: ohtakara@murakami.asahi-u.ac.jp; Hoshi, Hiroaki

    2015-10-01

    This study was conducted to ascertain whether homogeneous target dose planning is suitable for stereotactic body radiotherapy (SBRT) of peripheral lung cancer under appropriate breath-holding. For 20 peripheral lung tumors, paired dynamic conformal arc plans were generated by adjusting only the leaf margin to the planning target volume (PTV) edge so that the prescription isodose surface (IDS) encompassing exactly 95% of the PTV (PTV D95) corresponds to the 95% and 80% IDS, normalized to 100% at the PTV isocenter, under a pencil beam (PB) algorithm with radiologic path length correction. These plans were recalculated using the x-ray voxel Monte Carlo (XVMC) algorithm under otherwise identical conditions, and then compared. Lesions abutting the parietal pleura or not were defined as edge or island tumors, respectively, and the influences of the target volume and its location relative to the chest wall on the target dose were examined. The median (range) leaf margin required for the 95% and 80% plans was 3.9 mm (1.3 to 5.0) and −1.2 mm (−1.8 to 0.1), respectively. Notably, the latter was significantly negatively correlated with the PTV. In the 80% plans, the PTV D95 was slightly higher under XVMC, whereas the PTV D98 was significantly lower, irrespective of the dose calculation algorithm used. Other PTV doses and all gross tumor volume doses were significantly higher, while the lung doses outside the PTV were slightly lower. The target doses increased as a function of PTV and were significantly lower for island tumors than for edge tumors. In conclusion, inhomogeneous target dose planning using a smaller leaf margin for a larger tumor volume was deemed suitable for ensuring a more sufficient target dose while slightly reducing lung dose. In addition, more inhomogeneous target dose planning using <80% IDS (e.g., 70%) for PTV coverage would be preferable for island tumors.

  4. Essays in the California electricity reserves markets

    NASA Astrophysics Data System (ADS)

    Metaxoglou, Konstantinos

    This dissertation examines inefficiencies in the California electricity reserves markets. In Chapter 1, I use the information released during the investigation of the state's electricity crisis of 2000 and 2001 by the Federal Energy Regulatory Commission to diagnose allocative inefficiencies. Building upon the work of Wolak (2000), I calculate a lower bound for the sellers' price-cost margins using the inverse elasticities of their residual demand curves. The downward bias in my estimates stems from the fact that I don't account for the hierarchical substitutability of the reserve types. The margins averaged at least 20 percent for the two highest quality types of reserves, regulation and spinning, generating millions of dollars in transfers to a handful of sellers. I provide evidence that the deviations from marginal cost pricing were due to the markets' high concentration and a principal-agent relationship that emerged from their design. In Chapter 2, I document systematic differences between the markets' day- and hour-ahead prices. I use a high-dimensional vector moving average model to estimate the premia and conduct correct inferences. To obtain exact maximum likelihood estimates of the model, I employ the EM algorithm that I develop in Chapter 3. I uncover significant day-ahead premia, which I attribute to market design characteristics too. On the demand side, the market design established a principal-agent relationship between the markets' buyers (principal) and their supervisory authority (agent). The agent had very limited incentives to shift reserve purchases to the lower priced hour-ahead markets. On the supply side, the market design raised substantial entry barriers by precluding purely speculative trading and by introducing a complicated code of conduct that induced uncertainty about which actions were subject to regulatory scrutiny. In Chapter 3, I introduce a state-space representation for vector autoregressive moving average models that enables exact maximum likelihood estimation using the EM algorithm. Moreover, my algorithm uses only analytical expressions; it requires the Kalman filter and a fixed-interval smoother in the E step and least squares-type regression in the M step. In contrast, existing maximum likelihood estimation methods require numerical differentiation, both for univariate and multivariate models.
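
    The inverse-elasticity bound described for Chapter 1 rests on the standard Lerner condition for a seller maximizing profit against its residual demand curve; written here with generic symbols rather than the dissertation's notation, as a sketch of the underlying first-order condition:

      % Profit maximization against residual demand D_R(p) gives the Lerner
      % condition: the price-cost margin equals the inverse of the absolute
      % elasticity of residual demand at the market-clearing price.
      \frac{p - C'(q)}{p} \;=\; \frac{1}{\lvert \varepsilon_R \rvert},
      \qquad
      \varepsilon_R \;=\; \frac{\partial D_R(p)}{\partial p}\,\frac{p}{D_R(p)} .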

  5. Cenozoic Source-to-Sink of the African margin of the Equatorial Atlantic

    NASA Astrophysics Data System (ADS)

    Rouby, Delphine; Chardon, Dominique; Huyghe, Damien; Guillocheau, François; Robin, Cecile; Loparev, Artiom; Ye, Jing; Dall'Asta, Massimo; Grimaud, Jean-Louis

    2016-04-01

    The objective of the Transform Source to Sink Project (TS2P) is to link the dynamics of erosion of the West African Craton to the offshore sedimentary basins of the African margin of the Equatorial Atlantic at geological time scales. This margin, which alternates transform and oblique segments from Guinea to Nigeria, shows strong structural variability in margin width, continental geology and relief, drainage networks, and subsidence/accumulation patterns. We analyzed this system by combining onshore geology and geomorphology with offshore sub-surface data. Mapping and regional correlation of dated lateritic paleo-landscape remnants allow us to reconstruct two physiographic configurations of West Africa during the Cenozoic. We corrected those reconstructions for flexural isostasy related to the subsequent erosion. These geometries show that the present-day drainage organization stabilized at least 29 Myr ago (probably by 34 Myr), revealing the antiquity of the Senegambia, Niger and Volta catchments draining toward the Atlantic as well as of the marginal upwarp currently forming a continental divide. The drainage rearrangement that led to this organization was primarily enhanced by the topographic growth of the Hoggar swell and caused a major stratigraphic turnover along the Equatorial margin of West Africa. Elevation differences between paleo-landscape remnants give access to the spatial and temporal distribution of denudation for three time increments since 45 Myr. From this, we estimate the volumes of sediments and associated lithologies exported by the West African Craton toward different segments of the margin, taking into account the type of eroded bedrock and the successive drainage reorganizations. We compare these data to Cenozoic accumulation histories in the basins and discuss their stratigraphic expression according to the type of margin segment in which they are preserved.

  6. Bioenergy crop productivity and potential climate change mitigation from marginal lands in the United States: An ecosystem modeling perspective

    DOE PAGES

    Qin, Zhangcai; Zhuang, Qianlai; Cai, Ximing

    2014-06-16

    Growing biomass feedstocks on marginal lands is becoming an increasingly attractive option for producing biofuel as an alternative to fossil fuels. Here, we used a biogeochemical model at the ecosystem scale to estimate crop productivity and greenhouse gas (GHG) emissions from bioenergy crops grown on marginal lands in the United States. Two broadly tested cellulosic crops, switchgrass and Miscanthus, were assumed to be grown on abandoned land and on mixed crop–vegetation land with marginal productivity. Production of biomass and biofuel as well as net carbon exchange and nitrous oxide emissions were estimated in a spatially explicit manner. We found that cellulosic crops, especially Miscanthus, could produce a considerable amount of biomass, and the effective ethanol yield on these marginal lands is high. For every hectare of marginal land, switchgrass and Miscanthus could produce 1.0–2.3 kl and 2.9–6.9 kl of ethanol, respectively, depending on nitrogen fertilization rate and biofuel conversion efficiency. Nationally, both crop systems act as net GHG sources. Switchgrass has a high global warming intensity (100–390 g CO2eq per liter of ethanol), in terms of GHG emissions per unit of ethanol produced. Miscanthus, however, emits only 21–36 g CO2eq per liter of ethanol produced. To reach the mandated cellulosic ethanol target in the United States, growing Miscanthus on marginal lands could potentially save land and reduce GHG emissions in comparison to growing switchgrass. Because the ecosystem modeling is still limited by data availability and model deficiencies, further efforts should be made to classify crop-specific marginal land availability, improve model structure, and better integrate ecosystem modeling into life cycle assessment.

  7. Time is up: increasing shadow price of time in primary-care office visits.

    PubMed

    Tai-Seale, Ming; McGuire, Thomas

    2012-04-01

    A physician's own time is a scarce resource in primary care, and the physician must constantly evaluate the gain from spending more time with the current patient against moving to address the health-care needs of the next. We formulate and test two alternative hypotheses. The first hypothesis is based on the premise that with time so scarce, physicians equalize the marginal value of time across patients. The second, alternative hypothesis states that physicians allocate the same time to each patient, regardless of how much the patient benefits from the time at the margin. For our empirical work, we examine the presence of a sharply increasing subjective shadow price of time around the 'target' time using video recordings of 385 visits by elderly patients to their primary care physician. We structure the data at the 'topic' level and find evidence consistent with the alternative hypothesis. Specifically, time elapsed within a visit is a very strong determinant of the current topic being the 'last topic'. This finding implies the physician's shadow price of time is rising during the course of a visit. We consider whether dislodging a target-time mentality from physicians (and patients) might contribute to more productive primary care practice. Copyright © 2011 John Wiley & Sons, Ltd.

  8. General simulation algorithm for autocorrelated binary processes.

    PubMed

    Serinaldi, Francesco; Lombardo, Federico

    2017-02-01

    The apparent ubiquity of binary random processes in physics and many other fields has attracted considerable attention from the modeling community. However, generation of binary sequences with prescribed autocorrelation is a challenging task owing to the discrete nature of the marginal distributions, which makes the application of classical spectral techniques problematic. We show that such methods can effectively be used if we focus on the parent continuous process of beta distributed transition probabilities rather than on the target binary process. This change of paradigm results in a simulation procedure effectively embedding a spectrum-based iterative amplitude-adjusted Fourier transform method devised for continuous processes. The proposed algorithm is fully general, requires minimal assumptions, and can easily simulate binary signals with power-law and exponentially decaying autocorrelation functions corresponding, for instance, to Hurst-Kolmogorov and Markov processes. An application to rainfall intermittency shows that the proposed algorithm can also simulate surrogate data preserving the empirical autocorrelation.
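
    The spectrum-matching step referred to above can be sketched as a generic iterative amplitude-adjusted Fourier transform (IAAFT) surrogate generator. This is not the authors' full binary-process pipeline (the beta-to-binary transition-probability mapping is omitted), and the AR(1) toy target and iteration count are assumptions.

      import numpy as np

      def iaaft_surrogate(x, n_iter=200, seed=0):
          """Iterative amplitude-adjusted Fourier transform: produce a surrogate
          with (approximately) the same power spectrum and exactly the same
          marginal distribution as the target series x."""
          rng = np.random.default_rng(seed)
          sorted_x = np.sort(x)                     # target amplitude distribution
          target_amp = np.abs(np.fft.rfft(x))       # target Fourier amplitudes
          s = rng.permutation(x)                    # start from a random shuffle
          for _ in range(n_iter):
              # impose the target spectrum while keeping the current phases
              phases = np.angle(np.fft.rfft(s))
              s = np.fft.irfft(target_amp * np.exp(1j * phases), n=len(x))
              # restore the target marginal distribution by rank-ordering
              ranks = np.argsort(np.argsort(s))
              s = sorted_x[ranks]
          return s

      # toy usage: an AR(1) series stands in for the parent continuous process
      rng = np.random.default_rng(1)
      x = np.zeros(1024)
      for t in range(1, 1024):
          x[t] = 0.8 * x[t - 1] + rng.normal()
      surr = iaaft_surrogate(x)
      print("lag-1 autocorrelation, target vs surrogate:",
            float(np.corrcoef(x[:-1], x[1:])[0, 1]),
            float(np.corrcoef(surr[:-1], surr[1:])[0, 1]))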

  9. Efficient Exact Inference With Loss Augmented Objective in Structured Learning.

    PubMed

    Bauer, Alexander; Nakajima, Shinichi; Muller, Klaus-Robert

    2016-08-19

    Structural support vector machine (SVM) is an elegant approach for building complex and accurate models with structured outputs. However, its applicability relies on the availability of efficient inference algorithms--the state-of-the-art training algorithms repeatedly perform inference to compute a subgradient or to find the most violating configuration. In this paper, we propose an exact inference algorithm for maximizing nondecomposable objectives due to a special type of high-order potential having a decomposable internal structure. As an important application, our method covers the loss augmented inference, which enables the slack and margin scaling formulations of structural SVM with a variety of dissimilarity measures, e.g., Hamming loss, precision and recall, Fβ-loss, intersection over union, and many other functions that can be efficiently computed from the contingency table. We demonstrate the advantages of our approach in natural language parsing and sequence segmentation applications.

  10. Statistical Inference in Hidden Markov Models Using k-Segment Constraints

    PubMed Central

    Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher

    2016-01-01

    Hidden Markov models (HMMs) are one of the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most probable marginals using the forward–backward algorithm. In this article, we expand the amount of information we could obtain from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint in the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
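
    For orientation, the sketch below implements the standard scaled forward-backward recursions for the posterior state marginals of a discrete-observation HMM; this is the textbook routine referred to above, not the k-segment-constrained algorithms introduced in the article.

        import numpy as np

        def forward_backward(pi, A, B, obs):
            """Posterior marginals p(z_t = k | x_1:T) for an HMM with initial distribution pi,
            transition matrix A (K x K), and emission matrix B (K x n_symbols)."""
            T, K = len(obs), len(pi)
            alpha = np.zeros((T, K))
            beta = np.zeros((T, K))
            scale = np.zeros(T)
            alpha[0] = pi * B[:, obs[0]]
            scale[0] = alpha[0].sum()
            alpha[0] /= scale[0]
            for t in range(1, T):
                alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
                scale[t] = alpha[t].sum()
                alpha[t] /= scale[t]
            beta[-1] = 1.0
            for t in range(T - 2, -1, -1):
                beta[t] = (A @ (B[:, obs[t + 1]] * beta[t + 1])) / scale[t + 1]
            gamma = alpha * beta
            return gamma / gamma.sum(axis=1, keepdims=True)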

  11. Redundant interferometric calibration as a complex optimization problem

    NASA Astrophysics Data System (ADS)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
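
    A minimal sketch of the idea, assuming the usual redundant-calibration model v_pq = g_p conj(g_q) y_k (with k indexing the redundant baseline group) and using SciPy's generic Levenberg-Marquardt solver; it ignores degeneracy constraints and is not the `redundant STEFCAL' implementation.

        import numpy as np
        from scipy.optimize import least_squares

        def _unpack(x, n_ant, n_grp):
            g = x[:n_ant] + 1j * x[n_ant:2 * n_ant]
            y = x[2 * n_ant:2 * n_ant + n_grp] + 1j * x[2 * n_ant + n_grp:]
            return g, y

        def _residuals(x, pairs, groups, vis, n_ant, n_grp):
            g, y = _unpack(x, n_ant, n_grp)
            model = g[pairs[:, 0]] * np.conj(g[pairs[:, 1]]) * y[groups]
            r = vis - model
            return np.concatenate([r.real, r.imag])   # real residual vector for the solver

        def redundant_cal(pairs, groups, vis, n_ant, n_grp):
            """pairs: (n_bl, 2) int array of antenna indices; groups: (n_bl,) int array of
            redundant-baseline group indices; vis: (n_bl,) measured complex visibilities."""
            x0 = np.concatenate([np.ones(n_ant), np.zeros(n_ant),    # unity starting gains
                                 np.ones(n_grp), np.zeros(n_grp)])   # unity starting group visibilities
            sol = least_squares(_residuals, x0, method='lm',
                                args=(pairs, groups, vis, n_ant, n_grp))
            return _unpack(sol.x, n_ant, n_grp)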

  12. Enforcing dust mass conservation in 3D simulations of tightly coupled grains with the PHANTOM SPH code

    NASA Astrophysics Data System (ADS)

    Ballabio, G.; Dipierro, G.; Veronesi, B.; Lodato, G.; Hutchison, M.; Laibe, G.; Price, D. J.

    2018-06-01

    We describe a new implementation of the one-fluid method in the SPH code PHANTOM to simulate the dynamics of dust grains in gas protoplanetary discs. We revise and extend previously developed algorithms by computing the evolution of a new fluid quantity that produces a more accurate and numerically controlled evolution of the dust dynamics. Moreover, by limiting the stopping time of uncoupled grains that violate the assumptions of the terminal velocity approximation, we avoid fatal numerical errors in mass conservation. We test and validate our new algorithm by running 3D SPH simulations of a large range of disc models with tightly and marginally coupled grains.

  13. Performance seeking control (PSC) for the F-15 highly integrated digital electronic control (HIDEC) aircraft

    NASA Technical Reports Server (NTRS)

    Orme, John S.

    1995-01-01

    The performance seeking control algorithm optimizes total propulsion system performance. This adaptive, model-based optimization algorithm has been successfully flight demonstrated on two engines with differing levels of degradation. Models of the engine, nozzle, and inlet produce reliable, accurate estimates of engine performance. But, because of an observability problem, component levels of degradation cannot be accurately determined. Depending on engine-specific operating characteristics, PSC achieves various levels of performance improvement. For example, engines with more deterioration typically operate at higher turbine temperatures than less deteriorated engines. Thus, when the PSC maximum thrust mode is applied, there will be less temperature margin available to be traded for increasing thrust.

  14. Variations of mesoscale and large-scale sea ice morphology in the 1984 Marginal Ice Zone Experiment as observed by microwave remote sensing

    NASA Technical Reports Server (NTRS)

    Campbell, W. J.; Josberger, E. G.; Gloersen, P.; Johannessen, O. M.; Guest, P. S.

    1987-01-01

    The data acquired during the summer 1984 Marginal Ice Zone Experiment in the Fram Strait-Greenland Sea marginal ice zone, using airborne active and passive microwave sensors and the Nimbus 7 SMMR, were analyzed to compile a sequential description of the mesoscale and large-scale ice morphology variations during the period of June 6 - July 16, 1984. Throughout the experiment, the long ice edge between northwest Svalbard and central Greenland meandered; eddies were repeatedly formed, moved, and disappeared but the ice edge remained within a 100-km-wide zone. The ice pack behind this alternately diffuse and compact edge underwent rapid and pronounced variations in ice concentration over a 200-km-wide zone. The high-resolution ice concentration distributions obtained in the aircraft images agree well with the low-resolution distributions of SMMR images.

  15. Time Domain Stability Margin Assessment of the NASA Space Launch System GN&C Design for Exploration Mission One

    NASA Technical Reports Server (NTRS)

    Clements, Keith; Wall, John

    2017-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.

  16. Time Domain Stability Margin Assessment of the NASA Space Launch System GN&C Design for Exploration Mission One

    NASA Technical Reports Server (NTRS)

    Clements, Keith; Wall, John

    2017-01-01

    The baseline stability margins for NASA's Space Launch System (SLS) launch vehicle were generated via the classical approach of linearizing the system equations of motion and determining the gain and phase margins from the resulting frequency domain model. To improve the fidelity of the classical methods, the linear frequency domain approach can be extended by replacing static, memoryless nonlinearities with describing functions. This technique, however, does not address the time varying nature of the dynamics of a launch vehicle in flight. An alternative technique for the evaluation of the stability of the nonlinear launch vehicle dynamics along its trajectory is to incrementally adjust the gain and/or time delay in the time domain simulation until the system exhibits unstable behavior. This technique has the added benefit of providing a direct comparison between the time domain and frequency domain tools in support of simulation validation.
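
    The incremental gain-adjustment idea can be sketched on a toy system: the loop gain of a simulated, time-delayed closed loop is scaled up until the response diverges, and the last bounded scale factor is reported as the time-domain gain margin. The plant, controller, thresholds, and step sizes below are illustrative assumptions, not the SLS models.

        import numpy as np

        def simulate_peak(gain_scale, delay_steps=40, n=4000, dt=0.005):
            """Toy closed loop: lightly damped second-order plant, PD controller, pure transport delay.
            Returns the peak excursion of the controlled state over the run."""
            x = np.zeros(2)                        # [attitude, rate]
            buf = [0.0] * (delay_steps + 1)        # delayed actuator commands
            peak = 0.0
            for _ in range(n):
                err = 1.0 - x[0]                   # unit step command
                u = gain_scale * (2.0 * err - 0.8 * x[1])
                buf.append(u)
                u_delayed = buf.pop(0)
                acc = -0.5 * x[0] - 0.1 * x[1] + u_delayed
                x = x + dt * np.array([x[1], acc])
                peak = max(peak, abs(x[0]))
            return peak

        def time_domain_gain_margin(delay_steps=40, blow_up=1e3):
            """Scale the loop gain up until the simulated response exceeds a crude divergence
            threshold; the last bounded scale factor approximates the time-domain gain margin."""
            scale = 1.0
            while scale < 100.0 and simulate_peak(scale, delay_steps) < blow_up:
                scale *= 1.1
            return scale / 1.1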

  17. An improved least cost routing approach for WDM optical network without wavelength converters

    NASA Astrophysics Data System (ADS)

    Bonani, Luiz H.; Forghani-elahabad, Majid

    2016-12-01

    The routing and wavelength assignment (RWA) problem has been an attractive problem in optical networks, and consequently several algorithms have been proposed in the literature to solve it. The best-known techniques for the dynamic routing subproblem are fixed routing, fixed-alternate routing, and adaptive routing. The first leads to a high blocking probability (BP), and the last involves high computational complexity and requires considerable support from the control and management protocols. The second offers a trade-off between performance and complexity, and hence we improve upon it in this work. Considering the RWA problem in a wavelength-routed optical network with no wavelength converters, an improved technique is proposed for the routing subproblem in order to decrease the BP of the network. Based on the fixed-alternate approach, the first k shortest paths (SPs) between each node pair are determined. We then rearrange the SPs according to a newly defined cost for the links and paths. Upon the arrival of a connection request, the sorted paths are checked consecutively for an available wavelength according to the most-used technique. We implement our proposed algorithm and the least-hop fixed-alternate algorithm to show how the rearrangement of SPs contributes to a lower BP in the network. The numerical results demonstrate the efficiency of our proposed algorithm in comparison with the others, considering different numbers of available wavelengths.
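
    A rough sketch of the fixed-alternate routing flow described above, assuming NetworkX for the k-shortest-path enumeration; the path_cost argument stands in for the paper's newly defined link/path cost, and the wavelength bookkeeping is simplified, so this is an illustration rather than the authors' implementation.

        from itertools import islice
        import networkx as nx

        def candidate_paths(G, src, dst, k):
            """First k shortest paths (by hop count) between a node pair."""
            return list(islice(nx.shortest_simple_paths(G, src, dst), k))

        def most_used_assignment(G, path, n_wavelengths, usage):
            """Pick the most-used wavelength that is free on every link of the path."""
            links = list(zip(path, path[1:]))
            free = [w for w in range(n_wavelengths)
                    if all(w not in G.edges[u, v].setdefault('busy', set()) for u, v in links)]
            if not free:
                return None                            # request blocked on this path
            w = max(free, key=lambda wl: usage[wl])    # most-used heuristic
            for u, v in links:
                G.edges[u, v]['busy'].add(w)           # reserve the wavelength on each link
            usage[w] += 1
            return w

        def route_request(G, src, dst, k, n_wavelengths, usage, path_cost):
            """Fixed-alternate routing: try candidate paths in order of a custom path cost.
            usage is, e.g., a list of per-wavelength use counts: usage = [0] * n_wavelengths."""
            for path in sorted(candidate_paths(G, src, dst, k), key=path_cost):
                w = most_used_assignment(G, path, n_wavelengths, usage)
                if w is not None:
                    return path, w
            return None                                # connection request blocked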

  18. Reconstruction of extended Petri nets from time series data and its application to signal transduction and to gene regulatory networks

    PubMed Central

    2011-01-01

    Background Network inference methods reconstruct mathematical models of molecular or genetic networks directly from experimental data sets. We have previously reported a mathematical method which is exclusively data-driven, does not involve any heuristic decisions within the reconstruction process, and delivers all possible alternative minimal networks in terms of simple place/transition Petri nets that are consistent with a given discrete time series data set. Results We fundamentally extended the previously published algorithm to consider catalysis and inhibition of the reactions that occur in the underlying network. The results of the reconstruction algorithm are encoded in the form of an extended Petri net involving control arcs. This allows the consideration of processes involving mass flow and/or regulatory interactions. As a non-trivial test case, the phosphate regulatory network of enterobacteria was reconstructed using in silico-generated time-series data sets on wild-type and in silico mutants. Conclusions The new exact algorithm reconstructs extended Petri nets from time series data sets by finding all alternative minimal networks that are consistent with the data. It suggested alternative molecular mechanisms for certain reactions in the network. The algorithm is useful to combine data from wild-type and mutant cells and may potentially integrate physiological, biochemical, pharmacological, and genetic data in the form of a single model. PMID:21762503

  19. Development of a validation model for the defense meteorological satellite program's special sensor microwave imager

    NASA Technical Reports Server (NTRS)

    Swift, C. T.; Goodberlet, M. A.; Wilkerson, J. C.

    1990-01-01

    For the Defense Meteorological Satellite Program's (DMSP) Special Sensor Microwave/Imager (SSM/I), an operational wind speed algorithm was developed. The algorithm is based on the D-matrix approach, which seeks a linear relationship between measured SSM/I brightness temperatures and environmental parameters. D-matrix performance was validated by comparing algorithm-derived wind speeds with near-simultaneous and co-located measurements made by off-shore ocean buoys. Other topics include error budget modeling, alternate wind speed algorithms, and D-matrix performance with one or more inoperative SSM/I channels.
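
    The D-matrix idea, a linear map from brightness temperatures to wind speed, can be illustrated with an ordinary least-squares fit; the training data, coefficients, and channel set here are placeholders, not the operational D-matrix.

        import numpy as np

        def fit_dmatrix(tb_train, wind_train):
            """Least-squares fit of a linear (D-matrix style) map from brightness temperatures
            to wind speed; tb_train has shape (n_obs, n_channels), wind_train shape (n_obs,)."""
            X = np.column_stack([np.ones(len(tb_train)), tb_train])   # intercept + channels
            coeffs, *_ = np.linalg.lstsq(X, wind_train, rcond=None)
            return coeffs

        def retrieve_wind(coeffs, tb):
            """Apply the fitted coefficients to new brightness-temperature vectors."""
            return coeffs[0] + np.asarray(tb) @ coeffs[1:]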

  20. Status report: Data management program algorithm evaluation activity at Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.

    1977-01-01

    An algorithm evaluation activity was initiated to study the problems associated with image processing by assessing the independent and interdependent effects of registration, compression, and classification techniques on LANDSAT data for several discipline applications. The objective of the activity was to make recommendations on selected applicable image processing algorithms in terms of accuracy, cost, and timeliness or to propose alternative ways of processing the data. As a means of accomplishing this objective, an Image Coding Panel was established. The conduct of the algorithm evaluation is described.

  1. Unified algorithm of cone optics to compute solar flux on central receiver

    NASA Astrophysics Data System (ADS)

    Grigoriev, Victor; Corsi, Clotilde

    2017-06-01

    Analytical algorithms to compute the flux distribution on a central receiver are considered a faster alternative to ray tracing. They have many modifications, with HFLCAL and UNIZAR being the most recognized and verified. In this work, a generalized algorithm is presented which is valid for an arbitrary sun shape of radial symmetry. Heliostat mirrors can have a nonrectangular profile, and the effects of shading and blocking, strong defocusing, and astigmatism can be taken into account. The algorithm is suitable for parallel computing and can benefit from hardware acceleration of polygon texturing.

  2. An l1-TV algorithm for deconvolution with salt and pepper noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wohlberg, Brendt; Rodriguez, Paul

    2008-01-01

    There has recently been considerable interest in applying Total Variation with an ℓ1 data fidelity term to the denoising of images subject to salt and pepper noise, but the extension of this formulation to more general problems, such as deconvolution, has received little attention, most probably because the most efficient algorithms for ℓ1-TV denoising cannot handle more general inverse problems. We apply the Iteratively Reweighted Norm algorithm to this problem, and compare performance with an alternative algorithm based on the Mumford-Shah functional.

  3. Scheduling language and algorithm development study. Volume 1, phase 2: Design considerations for a scheduling and resource allocation system

    NASA Technical Reports Server (NTRS)

    Morrell, R. A.; Odoherty, R. J.; Ramsey, H. R.; Reynolds, C. C.; Willoughby, J. K.; Working, R. D.

    1975-01-01

    Data and analyses related to a variety of algorithms for solving typical large-scale scheduling and resource allocation problems are presented. The capabilities and deficiencies of various alternative problem solving strategies are discussed from the viewpoint of computer system design.

  4. Transactional Algorithm for Subtracting Fractions: Go Shopping

    ERIC Educational Resources Information Center

    Pinckard, James Seishin

    2009-01-01

    The purpose of this quasi-experimental research study was to examine the effects of an alternative or transactional algorithm for subtracting mixed numbers within the middle school setting. Initial data were gathered from the student achievement of four mathematics teachers at three different school sites. The results indicated students who…

  5. Educational Tool for Optimal Controller Tuning Using Evolutionary Strategies

    ERIC Educational Resources Information Center

    Carmona Morales, D.; Jimenez-Hornero, J. E.; Vazquez, F.; Morilla, F.

    2012-01-01

    In this paper, an optimal tuning tool is presented for control structures based on multivariable proportional-integral-derivative (PID) control, using genetic algorithms as an alternative to traditional optimization algorithms. From an educational point of view, this tool provides students with the necessary means to consolidate their knowledge on…
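
    As a sketch of the general idea (not the tool described in the paper), the snippet below tunes a single-loop PID controller for a toy plant with a minimal genetic algorithm; the plant dynamics, cost function, and GA settings are illustrative assumptions.

        import numpy as np

        def step_response_cost(gains, dt=0.01, n=2000):
            """Integral of squared error for a unit step on a toy plant y'' = -y' + u."""
            kp, ki, kd = gains
            y, dy, integ, prev_err, cost = 0.0, 0.0, 0.0, 1.0, 0.0
            for _ in range(n):
                err = 1.0 - y
                integ += err * dt
                deriv = (err - prev_err) / dt
                u = kp * err + ki * integ + kd * deriv
                ddy = -dy + u                      # toy plant dynamics
                dy += ddy * dt
                y += dy * dt
                prev_err = err
                cost += err * err * dt
            return cost

        def ga_tune(pop_size=30, generations=40, bounds=(0.0, 20.0)):
            """Minimal GA: tournament selection, arithmetic crossover, Gaussian mutation."""
            rng = np.random.default_rng(0)
            pop = rng.uniform(*bounds, size=(pop_size, 3))
            for _ in range(generations):
                fitness = np.array([step_response_cost(ind) for ind in pop])
                new_pop = []
                for _ in range(pop_size):
                    i, j = rng.integers(pop_size, size=2)
                    a = pop[i] if fitness[i] < fitness[j] else pop[j]   # tournament pick 1
                    i, j = rng.integers(pop_size, size=2)
                    b = pop[i] if fitness[i] < fitness[j] else pop[j]   # tournament pick 2
                    child = 0.5 * (a + b) + rng.normal(0.0, 0.5, 3)     # crossover + mutation
                    new_pop.append(np.clip(child, *bounds))
                pop = np.array(new_pop)
            return pop[np.argmin([step_response_cost(ind) for ind in pop])]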

  6. CHAM: weak signals detection through a new multivariate algorithm for process control

    NASA Astrophysics Data System (ADS)

    Bergeret, François; Soual, Carole; Le Gratiet, B.

    2016-10-01

    Derivative technologies based on core CMOS processes are very aggressive in terms of design rules and process control requirements. The process control plan is derived from Process Assumption (PA) calculations, which result in design rules based on known process variability capabilities, with enough margin to be safe not only for yield but especially for reliability. Even though process assumptions are calculated with a 4 sigma margin on known process capability, efficient and competitive designs challenge the process, especially for derivative technologies at the 40 and 28 nm nodes. For wafer fab process control, PAs are expressed as monovariate control charts (layer1 CD, layer2 CD, layer2-to-layer1 overlay, layer3 CD, etc.) with appropriate specifications and control limits, which together secure the silicon. This works well so far, but such a system is not very sensitive to weak signals coming from interactions of multiple key parameters (a high layer2 CD combined with a high layer3 CD, for example). CHAM is a software package using an advanced statistical algorithm specifically designed to detect small signals, especially when there are many parameters to control and when the parameters can interact to create yield issues. In this presentation we first present the CHAM algorithm, then a case study on critical dimensions with its results, and we conclude with future work. This partnership between Ippon and STM is part of E450LMDAP, a European project dedicated to metrology and lithography development for future technology nodes, especially 10nm.
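
    CHAM itself is proprietary, but the underlying motivation, monitoring interacting parameters jointly rather than with one chart each, can be illustrated with a generic multivariate statistic such as Hotelling's T^2. The sketch below is that generic statistic only, not the CHAM algorithm.

        import numpy as np

        def hotelling_t2(X_ref, x_new):
            """Hotelling's T^2 statistic for a new lot, given a reference sample X_ref of
            in-control lots (rows = lots, columns = monitored parameters; more rows than
            columns are needed for the covariance to be invertible)."""
            mu = X_ref.mean(axis=0)
            S_inv = np.linalg.inv(np.cov(X_ref, rowvar=False))
            d = np.asarray(x_new) - mu
            return float(d @ S_inv @ d)   # flag the lot if this exceeds a control limit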

  7. A narrow view: The conceptualization of sexual problems in human sexuality textbooks.

    PubMed

    Stelzl, Monika; Stairs, Brittany; Anstey, Hannah

    2018-02-01

    This study examined the ways in which the meaning of 'sexual problems' is constructed and defined in undergraduate human sexuality textbooks. Drawing on feminist and critical discourse frameworks, the dominant as well as the absent/marginalized discourses were identified using critical discourse analysis. Sexual difficulties were largely framed by the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders. Thus, medical discourse was privileged. Alternative conceptualizations and frameworks, such as the New View of Women's Sexual Problems, were included marginally and peripherally. We argue that current constructions of sexuality knowledge reinforce, rather than challenge, existing hegemonic discourses of sexuality.

  8. 77 FR 70213 - Capital, Margin, and Segregation Requirements for Security-Based Swap Dealers and Major Security...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-23

    ...In accordance with the Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 (``Dodd-Frank Act''), the Securities and Exchange Commission (``Commission''), pursuant to the Securities Exchange Act of 1934 (``Exchange Act''), is proposing capital and margin requirements for security-based swap dealers (``SBSDs'') and major security-based swap participants (``MSBSPs''), segregation requirements for SBSDs, and notification requirements with respect to segregation for SBSDs and MSBSPs. The Commission also is proposing to increase the minimum net capital requirements for broker-dealers permitted to use the alternative internal model-based method for computing net capital (``ANC broker-dealers'').

  9. Effect of inlet ingestion of a wing tip vortex on compressor face flow and turbojet stall margin

    NASA Technical Reports Server (NTRS)

    Mitchell, G. A.

    1975-01-01

    A two-dimensional inlet was alternately mated to a coldpipe plug assembly and a J85-GE-13 turbojet engine, and placed in a Mach 0.4 stream so as to ingest the tip vortex of a forward mounted wing. Vortex properties were measured just forward of the inlet and at the compressor face. Results show that ingestion of a wing tip vortex by a turbojet engine can cause a large reduction in engine stall margin. The loss in stall compressor pressure ratio was primarily dependent on vortex location and rotational direction and not on total-pressure distortion.

  10. Increasing Safety of a Robotic System for Inner Ear Surgery Using Probabilistic Error Modeling Near Vital Anatomy

    PubMed Central

    Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.

    2017-01-01

    Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy. PMID:29200595
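
    A minimal sketch of the margin-generation idea, assuming a Gaussian error model whose 3x3 covariance is projected onto each surface normal; the preservation probability and inputs are placeholders, and this is not the paper's full algorithm.

        import numpy as np
        from scipy.stats import norm

        def margins_from_covariance(normals, covariances, p_preserve=0.9999):
            """Variable-thickness safety margins: project each point's combined error covariance
            onto the local surface normal and take the Gaussian quantile for the desired
            probability of preserving the underlying structure."""
            normals = np.asarray(normals)          # (n, 3) unit surface normals
            covariances = np.asarray(covariances)  # (n, 3, 3) combined error covariances
            sigma_n = np.sqrt(np.einsum('ni,nij,nj->n', normals, covariances, normals))
            return norm.ppf(p_preserve) * sigma_n  # margin thickness along each normal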

  11. Sequential Monte Carlo tracking of the marginal artery by multiple cue fusion and random forest regression.

    PubMed

    Cherry, Kevin M; Peplinski, Brandon; Kim, Lauren; Wang, Shijun; Lu, Le; Zhang, Weidong; Liu, Jianfei; Wei, Zhuoshi; Summers, Ronald M

    2015-01-01

    Given the potential importance of marginal artery localization in automated registration in computed tomography colonography (CTC), we have devised a semi-automated method of marginal vessel detection employing sequential Monte Carlo tracking (also known as particle filtering tracking) by multiple cue fusion based on intensity, vesselness, organ detection, and minimum spanning tree information for poorly enhanced vessel segments. We then employed a random forest algorithm for intelligent cue fusion and decision making which achieved high sensitivity and robustness. After applying a vessel pruning procedure to the tracking results, we achieved statistically significantly improved precision compared to a baseline Hessian detection method (2.7% versus 75.2%, p<0.001). This method also showed statistically significantly improved recall rate compared to a 2-cue baseline method using fewer vessel cues (30.7% versus 67.7%, p<0.001). These results demonstrate that marginal artery localization on CTC is feasible by combining a discriminative classifier (i.e., random forest) with a sequential Monte Carlo tracking mechanism. In so doing, we present the effective application of an anatomical probability map to vessel pruning as well as a supplementary spatial coordinate system for colonic segmentation and registration when this task has been confounded by colon lumen collapse. Published by Elsevier B.V.
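
    For context, one generic sequential-Monte-Carlo (particle filter) update looks like the sketch below; the transition and likelihood functions stand in for the paper's multi-cue fusion and random-forest scoring, which are not reproduced here.

        import numpy as np

        def smc_step(particles, weights, observation, transition, likelihood, rng):
            """One particle-filter update: propagate the vessel-centre hypotheses, reweight
            them by the observation likelihood, and resample systematically."""
            particles = transition(particles, rng)               # move each hypothesis forward
            weights = weights * likelihood(observation, particles)
            weights = weights / weights.sum()
            # systematic resampling to counter weight degeneracy
            positions = (rng.random() + np.arange(len(weights))) / len(weights)
            idx = np.searchsorted(np.cumsum(weights), positions)
            return particles[idx], np.full(len(weights), 1.0 / len(weights))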

  12. Increasing safety of a robotic system for inner ear surgery using probabilistic error modeling near vital anatomy

    NASA Astrophysics Data System (ADS)

    Dillon, Neal P.; Siebold, Michael A.; Mitchell, Jason E.; Blachon, Gregoire S.; Balachandran, Ramya; Fitzpatrick, J. Michael; Webster, Robert J.

    2016-03-01

    Safe and effective planning for robotic surgery that involves cutting or ablation of tissue must consider all potential sources of error when determining how close the tool may come to vital anatomy. A pre-operative plan that does not adequately consider potential deviations from ideal system behavior may lead to patient injury. Conversely, a plan that is overly conservative may result in ineffective or incomplete performance of the task. Thus, enforcing simple, uniform-thickness safety margins around vital anatomy is insufficient in the presence of spatially varying, anisotropic error. Prior work has used registration error to determine a variable-thickness safety margin around vital structures that must be approached during mastoidectomy but ultimately preserved. In this paper, these methods are extended to incorporate image distortion and physical robot errors, including kinematic errors and deflections of the robot. These additional sources of error are discussed and stochastic models for a bone-attached robot for otologic surgery are developed. An algorithm for generating appropriate safety margins based on a desired probability of preserving the underlying anatomical structure is presented. Simulations are performed on a CT scan of a cadaver head and safety margins are calculated around several critical structures for planning of a robotic mastoidectomy.

  13. Estimating overall exposure effects for the clustered and censored outcome using random effect Tobit regression models.

    PubMed

    Wang, Wei; Griswold, Michael E

    2016-11-30

    The random effect Tobit model is a regression model that accommodates both left- and/or right-censoring and within-cluster dependence of the outcome variable. Regression coefficients of random effect Tobit models have conditional interpretations on a constructed latent dependent variable and do not provide inference of overall exposure effects on the original outcome scale. Marginalized random effects model (MREM) permits likelihood-based estimation of marginal mean parameters for the clustered data. For random effect Tobit models, we extend the MREM to marginalize over both the random effects and the normal space and boundary components of the censored response to estimate overall exposure effects at population level. We also extend the 'Average Predicted Value' method to estimate the model-predicted marginal means for each person under different exposure status in a designated reference group by integrating over the random effects and then use the calculated difference to assess the overall exposure effect. The maximum likelihood estimation is proposed utilizing a quasi-Newton optimization algorithm with Gauss-Hermite quadrature to approximate the integration of the random effects. We use these methods to carefully analyze two real datasets. Copyright © 2016 John Wiley & Sons, Ltd.
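
    A compact sketch of the marginalization step, assuming a single normal random intercept and left-censoring at zero: Gauss-Hermite quadrature integrates the random effect out of the censored-normal conditional mean. This illustrates the quadrature idea only, not the full 'Average Predicted Value' machinery.

        import numpy as np
        from scipy.stats import norm

        def tobit_cond_mean(eta, sigma_e):
            """E[max(Y*, 0)] for Y* ~ N(eta, sigma_e^2): mean of a response left-censored at zero."""
            z = eta / sigma_e
            return eta * norm.cdf(z) + sigma_e * norm.pdf(z)

        def marginal_mean_gh(x_beta, sigma_u, sigma_e, n_nodes=20):
            """Population-averaged mean of the censored response: the random intercept
            u ~ N(0, sigma_u^2) is integrated out by Gauss-Hermite quadrature."""
            nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)   # probabilists' rule
            vals = np.array([tobit_cond_mean(x_beta + sigma_u * z, sigma_e) for z in nodes])
            return np.tensordot(weights, vals, axes=1) / np.sqrt(2.0 * np.pi)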

  14. Effect of protective coating on microhardness of a new glass ionomer cement: Nanofilled coating versus unfilled resin

    PubMed Central

    Faraji, Foad; Heshmat, Haleh; Banava, Sepideh

    2017-01-01

    Background and Objectives: EQUIA™ is a new glass ionomer (GI) system with high compressive strength, surface microhardness (MH), and fluoride release potential. This in vitro study aimed to assess the effect of aging and type of protective coating on the MH of EQUIA™ GI cement. Materials and Methods: A total of 30 disc-shaped specimens measuring 9 mm in diameter and 2 mm in thickness were fabricated of EQUIA™ GI and divided into three groups: G-Coat nanofilled coating (A), no coating (B), and margin bond (C). The Vickers MH value of the specimens was measured before (baseline) and at 3 and 6 months after water storage. Data were analyzed using repeated measures ANOVA. Results: Group B had significantly higher MH than the other two groups at baseline. Both G-Coat and margin bond increased the surface MH of the GI at 3 and 6 months. The MH values of the G-Coat and margin bond groups did not significantly increase or decrease between 3 and 6 months. Conclusion: The increase in MH was greater in the G-Coat group than in the margin bond group in the long term. Clinically, margin bond may be a suitable alternative when G-Coat is not available. PMID:29259364

  15. Computer-aided marginal artery detection on computed tomographic colonography

    NASA Astrophysics Data System (ADS)

    Wei, Zhuoshi; Yao, Jianhua; Wang, Shijun; Liu, Jiamin; Summers, Ronald M.

    2012-03-01

    Computed tomographic colonography (CTC) is a minimally invasive technique for colonic polyp and cancer screening. The marginal artery of the colon, also known as the marginal artery of Drummond, is the blood vessel that connects the inferior mesenteric artery with the superior mesenteric artery. The marginal artery runs parallel to the colon for its entire length, providing the blood supply to the colon. Detecting the marginal artery may benefit computer-aided detection (CAD) of colonic polyps. It can be used to identify teniae coli based on their anatomic spatial relationship. It can also serve as an alternative marker for colon localization, in case of colon collapse and inability to directly compute the endoluminal centerline. This paper proposes an automatic method for marginal artery detection on CTC. To the best of our knowledge, this is the first work presented for this purpose. Our method includes two stages. The first stage extracts the blood vessels in the abdominal region. The eigenvalues of the Hessian matrix are used to detect line-like structures in the images. The second stage reduces the false positives from the first stage. We used two different masks to exclude false positive vessel regions. One is a dilated colon mask obtained by colon segmentation. The other is an eroded visceral fat mask obtained by fat segmentation in the abdominal region. We tested our method on a CTC dataset with 6 cases. Using ratio-of-overlap with manual labeling of the marginal artery as the standard-of-reference, our method yielded true positive, false positive, and false negative fractions of 89%, 33%, and 11%, respectively.
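
    The first-stage "line-like structure" detection can be illustrated with a generic Hessian-eigenvalue (Frangi-style) vesselness filter in 2-D; the parameter values below are placeholders and the false-positive masking stage is not included, so this is a sketch of the concept rather than the paper's pipeline.

        import numpy as np
        from scipy import ndimage

        def vesselness_2d(image, sigma=2.0, beta=0.5, c=15.0):
            """Frangi-style vesselness from the eigenvalues of the Gaussian-smoothed Hessian."""
            image = np.asarray(image, dtype=float)
            Hxx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
            Hyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
            Hxy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
            # eigenvalues of [[Hxx, Hxy], [Hxy, Hyy]] at every pixel
            tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
            l1 = 0.5 * (Hxx + Hyy + tmp)
            l2 = 0.5 * (Hxx + Hyy - tmp)
            swap = np.abs(l1) > np.abs(l2)                 # sort so that |l1| <= |l2|
            l1[swap], l2[swap] = l2[swap], l1[swap]
            rb = np.abs(l1) / (np.abs(l2) + 1e-12)         # blob-vs-line measure
            s = np.sqrt(l1 ** 2 + l2 ** 2)                 # second-order structure strength
            v = np.exp(-rb ** 2 / (2 * beta ** 2)) * (1 - np.exp(-s ** 2 / (2 * c ** 2)))
            return np.where(l2 < 0, v, 0.0)                # bright vessels on a dark background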

  16. Passive margins getting squeezed in the mantle convection vice

    NASA Astrophysics Data System (ADS)

    Yamato, Philippe; Husson, Laurent; Becker, Thorsten W.; Pedoja, Kevin

    2014-05-01

    Passive margins often exhibit uplift, exhumation, and tectonic inversion. We speculate that the compression in the lithosphere gradually increased during the Cenozoic. At the same time, the many mountain belts at active margins that accompany this event readily witness this increase. However, how that increase in compression affects passive margins remains unclear. In order to address this issue, we design a 2D viscous numerical model wherein a lithospheric plate rests above a weaker mantle. It is driven by a mantle conveyor belt, alternatively excited by a lateral downwelling on one side, an upwelling on the other side, or both simultaneously. The lateral edges of the plate are either free or fixed, representing the cases of free convergence, and collision or slab anchoring, respectively. This distinction changes the upper boundary condition for mantle circulation and, as a consequence, the stress field. Our results show that between these two regimes, the flow pattern transiently evolves from a free-slip convection mode towards a no-slip boundary condition above the upper mantle. In the second case, the lithosphere is highly stressed horizontally and deforms. For an equivalent bulk driving force, compression increases drastically at passive margins provided that upwellings are active. Conversely, if downwellings alone are activated, compression occurs at short distances from the trench and extension prevails elsewhere. These results are supported by Earth-like 3D spherical models that reveal the same pattern, where active upwellings are required to excite compression at passive margins. These results support the idea that compression at passive margins is the response to the underlying mantle flow, which is increasingly resisted by the Cenozoic collisions.

  17. IPS Empress inlays and onlays after four years--a clinical study.

    PubMed

    Krämer, N; Frankenberger, R; Pelka, M; Petschelt, A

    1999-07-01

    Ceramic inlays are used as esthetic alternatives to amalgam and other metallic materials for the restoration of badly damaged teeth. However, only limited clinical data are available regarding adhesive inlays and onlays with proximal margins located in dentine. In a prospective, controlled clinical study, the performance of IPS Empress inlays and onlays with cuspal replacements and margins below the amelocemental junction was examined. Ninety-six IPS Empress fillings were placed in 34 patients by six clinicians. The restorations were luted with four different composite systems. The dentin bonding system Syntac Classic was used in addition to the acid-etch technique. At baseline and at 6 months, one, two, and four years after placement, the restorations were assessed by two calibrated investigators using modified USPHS codes and criteria. A representative sample of the restorations was investigated by scanning electron microscopy to evaluate wear. Seven of the 96 restorations investigated had to be replaced (failure rate 7%; Kaplan-Meier). Four inlays had suffered cohesive bulk fractures and three teeth required endodontic treatment. After four years in clinical service, significant deterioration (Friedman two-way ANOVA; p < 0.05) was found to have occurred in the marginal adaptation of the remaining restorations. Seventy-nine percent of the surviving restorations exhibited marginal deficiencies, independent of the luting composite. Neither the absence of enamel margins nor cuspal replacement significantly affected the adhesion or marginal quality of the restorations. After four years, extensive IPS Empress inlays and onlays bonded with the dentin bonding system Syntac Classic were found to have a 7% failure rate, with 79% of the remaining restorations having marginal deficiencies.

  18. The effect of surface sealants with different filler content on microleakage of Class V resin composite restorations.

    PubMed

    Hepdeniz, Ozge Kam; Temel, Ugur Burak; Ugurlu, Muhittin; Koskan, Ozgur

    2016-01-01

    Microleakage is still one of the most cited reasons for failure of resin composite restorations. Alternative methods to prevent microleakage have been investigated increasingly. The aim of this study is to evaluate microleakage in Class V resin composite restorations with or without application of surface sealants with different filler content. Ninety-six cavities were prepared on the buccal and lingual surfaces with the coronal margins located in enamel and the cervical margins located in dentin. The cavities were restored with an adhesive system (Clearfil SE Bond, Kuraray, Tokyo, Japan) and a resin composite (Clearfil Majesty ES-2, Kuraray, Tokyo, Japan). Teeth were stored in distilled water for 24 h and separated into four groups according to the surface sealant (Control, Fortify, Fortify Plus, and G-Coat Plus). The teeth were thermocycled (500 cycles, 5-55 °C), immersed in basic fuchsine, sectioned, and analyzed for dye penetration using a stereomicroscope. The data were submitted to statistical analysis by Kruskal-Wallis and Bonferroni-Dunn tests. The results of the study indicated that there was minimal leakage at the enamel margins of all groups. Bonferroni-Dunn tests revealed that the Fortify and G-Coat Plus groups showed significantly less leakage than the Control group and the Fortify Plus group at dentin margins on lingual surfaces (P < 0.05). All surface sealants used in this study eliminated microleakage at enamel margins. Moreover, unfilled or nanofilled surface sealants were the most effective in decreasing the degree of marginal microleakage at dentin margins. However, the viscosity and penetrability of the sealants should also be considered for sealing ability, besides composition.

  19. Critical Civic Literacy: Knowledge at the Intersection of Career and Community

    ERIC Educational Resources Information Center

    Pollack, Seth S.

    2013-01-01

    Traditional approaches to civic engagement have been marginalized and have had little impact on the core curriculum. "Critical civic literacy" is an alternative curricular approach to civic engagement that explicitly moves departments, disciplines, and degree programs to examine issues of social responsibility and social justice from the…

  20. Afghanistan Narcotics: The Bigger Battle Toward Stabilization

    DTIC Science & Technology

    2009-04-01

    Development of economic opportunities coupled with effective governmental reform is necessary for the nation to become prosperous, stable, and secure. ...without marginalizing narcotics production and narco-trafficking. Effective security, strong governance, judicial capability...introduce the possibility of meaningful alternative livelihoods. Development of these economic opportunities coupled with effective governmental

  1. Acute Effects of an Alternative Electronic-Control-Device Waveform in Swine

    DTIC Science & Technology

    2009-03-01

    sequences in a normal individual. Bozeman [32] suggested that lethality due to ECD impact could be due to hyperkalemia related to muscle contraction...and our previous investigations [1, 2], there may be a wide margin of safety, relative to hyperkalemia, for most ECD applications. The increase in

  2. 30 CFR 204.201 - Who may obtain accounting and auditing relief?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    MINERALS REVENUE MANAGEMENT, ALTERNATIVES FOR MARGINAL PROPERTIES, Accounting and Auditing Relief, § 204.201 Who may obtain accounting and auditing relief? (a) You may obtain accounting and auditing relief under...

  3. Advancing Alternate Tools: Why Science Education Needs CRP and CRT

    ERIC Educational Resources Information Center

    Dodo Seriki, Vanessa

    2018-01-01

    Ridgeway and Yerrick's paper, "Whose banner are we waving?: exploring STEM partnerships for marginalized urban youth," unearthed the tensions that existed between a local community "expert" and a group of students and their facilitator in an afterschool program. Those of us who work with youth who are traditionally…

  4. Hostility or Indifference? The Marginalization of Homeschooling in the Education Profession

    ERIC Educational Resources Information Center

    Howell, Charles

    2013-01-01

    Reasons for neglect of homeschooling in educational research literature are explored. The ideological hostility that occasionally surfaces in policy debates is unlikely to have a major influence on mainstream researchers. An alternative explanation based on Kuhn's concept of normal science is proposed. The dominant paradigm of educational research…

  5. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators

    PubMed Central

    Bai, Xiangzhi

    2015-01-01

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion. PMID:26184229

  6. Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators.

    PubMed

    Bai, Xiangzhi

    2015-07-15

    The crucial problem of infrared and visual image fusion is how to effectively extract the image features, including the image regions and details and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through the fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated with the original infrared and visual images using the contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.
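
    A generic sketch of an opening/closing-based toggle contrast operator and the bright/dark feature maps it yields; the multi-scale construction and the fuzzy-measure weighting described in the paper are not reproduced here, and the structuring-element size is an assumption.

        import numpy as np
        from scipy import ndimage

        def toggle_features(img, size=5):
            """Bright and dark feature maps from an opening/closing-based toggle contrast operator."""
            img = np.asarray(img, dtype=float)
            opened = ndimage.grey_opening(img, size=size)
            closed = ndimage.grey_closing(img, size=size)
            # toggle: move each pixel toward whichever of opening/closing is closer to it
            use_open = (img - opened) <= (closed - img)
            toggled = np.where(use_open, opened, closed)
            bright = np.maximum(img - toggled, 0.0)   # details brighter than their surroundings
            dark = np.maximum(toggled - img, 0.0)     # details darker than their surroundings
            return bright, dark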

  7. Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane

    NASA Technical Reports Server (NTRS)

    Conners, Timothy R.

    1992-01-01

    An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.

  8. A Comparative Study of Interval Management Control Law Capabilities

    NASA Technical Reports Server (NTRS)

    Barmore, Bryan E.; Smith, Colin L.; Palmer, Susan O.; Abbott, Terence S.

    2012-01-01

    This paper presents a new tool designed to allow for rapid development and testing of different control algorithms for airborne spacing. This tool, the Interval Management Modeling and Spacing Tool (IM MAST), is a fast-time, low-fidelity tool created to model the approach of aircraft to a runway, with a focus on their interactions with each other. Errors can be induced between pairs of aircraft by varying initial positions, winds, speed profiles, and altitude profiles. Results to date show that only a few of the algorithms tested had poor behavior in the arrival and approach environment. The majority of the algorithms showed only minimal variation in performance under the test conditions. Trajectory-based algorithms showed high susceptibility to wind forecast errors, while performing marginally better than the other algorithms under other conditions. Trajectory-based algorithms have a sizable advantage, however, of being able to perform relative spacing operations between aircraft on different arrival routes and flight profiles without employing ghosting methods. This comes at the cost of substantially increased complexity, however. Additionally, it was shown that earlier initiation of relative spacing operations provided more time for corrections to be made without any significant problems in the spacing operation itself. Initiating spacing farther out, however, would require more of the aircraft to begin spacing before they merge onto a common route.

  9. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    PubMed

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.

  10. Real-time image annotation by manifold-based biased Fisher discriminant analysis

    NASA Astrophysics Data System (ADS)

    Ji, Rongrong; Yao, Hongxun; Wang, Jicheng; Sun, Xiaoshuai; Liu, Xianming

    2008-01-01

    Automatic linguistic annotation is a promising solution to bridge the semantic gap in content-based image retrieval. However, two crucial issues are not well addressed in state-of-the-art annotation algorithms: 1. the Small Sample Size (3S) problem in keyword classifier/model learning; 2. most annotation algorithms cannot be extended to real-time online use due to their low computational efficiency. This paper presents a novel Manifold-based Biased Fisher Discriminant Analysis (MBFDA) algorithm to address these two issues by transductive semantic learning and keyword filtering. To address the 3S problem, co-training based manifold learning is adopted for keyword model construction. To achieve real-time annotation, a Biased Fisher Discriminant Analysis (BFDA) based semantic feature reduction algorithm is presented for keyword confidence discrimination and semantic feature reduction. Different from all existing annotation methods, MBFDA views image annotation from a novel Eigen semantic feature (which corresponds to keywords) selection aspect. As demonstrated in experiments, our manifold-based biased Fisher discriminant analysis annotation algorithm outperforms classical and state-of-the-art annotation methods (1. K-NN expansion; 2. one-to-all SVM; 3. PWC-SVM) in both computational time and annotation accuracy by a large margin.

  11. Identification of Predictive Cis-Regulatory Elements Using a Discriminative Objective Function and a Dynamic Search Space

    PubMed Central

    Karnik, Rahul; Beer, Michael A.

    2015-01-01

    The generation of genomic binding or accessibility data from massively parallel sequencing technologies such as ChIP-seq and DNase-seq continues to accelerate. Yet state-of-the-art computational approaches for the identification of DNA binding motifs often yield motifs of weak predictive power. Here we present a novel computational algorithm called MotifSpec, designed to find predictive motifs, in contrast to over-represented sequence elements. The key distinguishing feature of this algorithm is that it uses a dynamic search space and a learned threshold to find discriminative motifs in combination with the modeling of motifs using a full PWM (position weight matrix) rather than k-mer words or regular expressions. We demonstrate that our approach finds motifs corresponding to known binding specificities in several mammalian ChIP-seq datasets, and that our PWMs classify the ChIP-seq signals with accuracy comparable to, or marginally better than motifs from the best existing algorithms. In other datasets, our algorithm identifies novel motifs where other methods fail. Finally, we apply this algorithm to detect motifs from expression datasets in C. elegans using a dynamic expression similarity metric rather than fixed expression clusters, and find novel predictive motifs. PMID:26465884

  12. Identification of Predictive Cis-Regulatory Elements Using a Discriminative Objective Function and a Dynamic Search Space.

    PubMed

    Karnik, Rahul; Beer, Michael A

    2015-01-01

    The generation of genomic binding or accessibility data from massively parallel sequencing technologies such as ChIP-seq and DNase-seq continues to accelerate. Yet state-of-the-art computational approaches for the identification of DNA binding motifs often yield motifs of weak predictive power. Here we present a novel computational algorithm called MotifSpec, designed to find predictive motifs, in contrast to over-represented sequence elements. The key distinguishing feature of this algorithm is that it uses a dynamic search space and a learned threshold to find discriminative motifs in combination with the modeling of motifs using a full PWM (position weight matrix) rather than k-mer words or regular expressions. We demonstrate that our approach finds motifs corresponding to known binding specificities in several mammalian ChIP-seq datasets, and that our PWMs classify the ChIP-seq signals with accuracy comparable to, or marginally better than motifs from the best existing algorithms. In other datasets, our algorithm identifies novel motifs where other methods fail. Finally, we apply this algorithm to detect motifs from expression datasets in C. elegans using a dynamic expression similarity metric rather than fixed expression clusters, and find novel predictive motifs.

  13. Ductal carcinoma in situ: USC/Van Nuys Prognostic Index and the impact of margin status.

    PubMed

    Silverstein, Melvin J; Buchanan, Claire

    2003-12-01

    As our knowledge of ductal carcinoma in situ (DCIS) continues to evolve, treatment decision-making has become increasingly complex and controversial for both patients and physicians. Treatment options include mastectomy and breast conservation with or without radiation therapy. Data produced from the randomized clinical trials for DCIS have provided the basis for important treatment recommendations but are not without limitations. In this article, we review our prospectively collected database consisting of 1036 patients with DCIS treated at the Van Nuys Breast Center and the USC/Norris Comprehensive Cancer Center. We review the use of the USC/Van Nuys Prognostic Index, a clinical algorithm designed to assist physicians in selection of appropriate treatments, and examine the impact of margin status as a sole predictor of local recurrence.

  14. Economic evaluation of progeny-testing and genomic selection schemes for small-sized nucleus dairy cattle breeding programs in developing countries.

    PubMed

    Kariuki, C M; Brascamp, E W; Komen, H; Kahi, A K; van Arendonk, J A M

    2017-03-01

    In developing countries, minimal and erratic performance and pedigree recording impede the implementation of large-sized breeding programs. Small-sized nucleus programs offer an alternative but rely on their economic performance for their viability. We investigated the economic performance of 2 alternative small-sized dairy nucleus programs [i.e., progeny testing (PT) and genomic selection (GS)] over a 20-yr investment period. The nucleus was made up of 453 male and 360 female animals distributed in 8 non-overlapping age classes. Each year 10 active sires and 100 elite dams were selected. Populations of commercial recorded cows (CRC) of sizes 12,592 and 25,184 were used to produce test daughters in PT or to create a reference population in GS, respectively. Economic performance was defined as gross margins, calculated as discounted revenues minus discounted costs following a single generation of selection. Revenues were calculated as cumulative discounted expressions (CDE, kg) × 0.32 (€/kg of milk) × 100,000 (size of the commercial population). Genetic superiorities, deterministically simulated using a pseudo-BLUP index, and CDE were determined using gene flow. Costs were for one generation of selection. Results show that GS schemes had higher cumulative genetic gain in the commercial cow population and higher gross margins than PT schemes. Gross margins were between 3.2- and 5.2-fold higher for GS, depending on the size of the CRC population. The increase in gross margin was mostly due to a decreased generation interval and lower running costs in GS schemes. In PT schemes many bulls are culled before selection. We therefore also compared 2 schemes in which semen was stored instead of keeping live bulls. As expected, semen storage resulted in an increase in gross margins in PT schemes, but gross margins remained lower than those of GS schemes. We conclude that implementation of small-sized GS breeding schemes can be economically viable for developing countries. The Authors. Published by the Federation of Animal Science Societies and Elsevier Inc. on behalf of the American Dairy Science Association®. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/3.0/).
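
    The revenue and gross-margin arithmetic described above can be written out directly; in the short sketch below, all inputs other than the milk price and the commercial population size are hypothetical placeholders, not values taken from the paper.

        # Illustrative gross-margin arithmetic following the structure above.
        milk_price = 0.32            # euro per kg of milk
        commercial_cows = 100_000    # size of the commercial population
        cde = 250.0                  # cumulative discounted expressions, kg per commercial cow (assumed)
        discounted_costs = 4.0e6     # discounted scheme running costs, euro (assumed)

        revenues = cde * milk_price * commercial_cows
        gross_margin = revenues - discounted_costs
        print(f"gross margin: {gross_margin:,.0f} euro")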

  15. VTOL shipboard letdown guidance system analysis

    NASA Technical Reports Server (NTRS)

    Phatak, A. V.; Karmali, M. S.

    1983-01-01

    Alternative letdown guidance strategies are examined for landing a VTOL aircraft onboard a small aviation ship under adverse environmental conditions. Off-line computer simulation of the shipboard landing task is utilized for assessing the relative merits of the proposed guidance schemes. The touchdown performance of a nominal constant rate of descent (CROD) letdown strategy serves as a benchmark for ranking the performance of the alternative letdown schemes. Analysis of ship motion time histories indicates the existence of an alternating sequence of quiescent and rough motions called lulls and swells. A real-time algorithm for lull/swell classification based upon ship motion pattern features is developed. The classification algorithm is used to command a go/no-go signal to indicate the initiation and termination of an acceptable landing window. Simulation results show that such a go/no-go pattern-based letdown guidance strategy improves touchdown performance.

  16. Balancing Vibrations at Harmonic Frequencies by Injecting Harmonic Balancing Signals into the Armature of a Linear Motor/Alternator Coupled to a Stirling Machine

    NASA Technical Reports Server (NTRS)

    Holliday, Ezekiel S. (Inventor)

    2014-01-01

    Vibrations at harmonic frequencies are reduced by injecting harmonic balancing signals into the armature of a linear motor/alternator coupled to a Stirling machine. The vibrations are sensed to provide a signal representing the mechanical vibrations. A harmonic balancing signal is generated for selected harmonics of the operating frequency by processing the sensed vibration signal with an adaptive filter algorithm for each harmonic. Reference inputs for each harmonic are applied to the adaptive filter algorithms at the frequency of the selected harmonic. The harmonic balancing signals for all of the harmonics are summed with a principal control signal. The harmonic balancing signals modify the principal electrical drive voltage and drive the motor/alternator with a drive voltage component in opposition to the vibration at each harmonic.
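
    As a hedged illustration of the adaptive-filter idea in this patent abstract, the sketch below uses a standard two-weight LMS update with sine/cosine reference inputs at one selected harmonic to synthesize a cancelling signal. The sample rate, harmonic frequency, and step size are assumed values, and the summation with a principal control signal is omitted.

        import numpy as np

        fs, f_harm, mu = 2000.0, 120.0, 0.01    # sample rate, harmonic, LMS step size (assumed)
        n = np.arange(8000)
        vib = 0.8 * np.sin(2 * np.pi * f_harm * n / fs + 0.3)   # sensed vibration at the harmonic

        # Quadrature reference inputs at the selected harmonic frequency.
        ref = np.vstack([np.sin(2 * np.pi * f_harm * n / fs),
                         np.cos(2 * np.pi * f_harm * n / fs)])

        w = np.zeros(2)                       # adaptive filter weights
        residual = np.zeros_like(vib)
        for k in range(len(n)):
            balance = w @ ref[:, k]           # harmonic balancing signal
            residual[k] = vib[k] - balance    # vibration remaining after injection
            w += 2 * mu * residual[k] * ref[:, k]   # LMS weight update

        print("residual RMS over the last second:", residual[-int(fs):].std().round(4))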

  17. Binary Multidimensional Scaling for Hashing.

    PubMed

    Huang, Yameng; Lin, Zhouchen

    2017-10-04

    Hashing is a useful technique for fast nearest neighbor search due to its low storage cost and fast query speed. Unsupervised hashing aims at learning binary hash codes for the original features so that the pairwise distances can be best preserved. While several works have targeted this task, the results are not satisfactory, mainly due to oversimplified models. In this paper, we propose a unified and concise unsupervised hashing framework, called Binary Multidimensional Scaling (BMDS), which is able to learn the hash code for distance preservation in both batch and online modes. In the batch mode, unlike most existing hashing methods, we do not need to simplify the model by predefining the form of the hash map. Instead, we learn the binary codes directly based on the pairwise distances among the normalized original features by Alternating Minimization. This enables a stronger expressive power of the hash map. In the online mode, we consider the holistic distance relationship between the current query example and those we have already learned, rather than only focusing on the current data chunk. This is useful when the data come in a streaming fashion. Empirical results show that while being efficient for training, our algorithm outperforms state-of-the-art methods by a large margin in terms of distance preservation, which is practical for real-world applications.

  18. A comparison of pay-as-bid and marginal pricing in electricity markets

    NASA Astrophysics Data System (ADS)

    Ren, Yongjun

    This thesis investigates the behaviour of electricity markets under marginal and pay-as-bid pricing. Marginal pricing is believed to yield the maximum social welfare and is currently implemented by most electricity markets. However, in view of recent electricity market failures, pay-as-bid has been extensively discussed as a possible alternative to marginal pricing. In this research, marginal and pay-as-bid pricing have been analyzed in electricity markets with both perfect and imperfect competition. The perfect competition case is studied under both exact and uncertain system marginal cost prediction. The comparison of the two pricing methods is conducted through two steps: (i) identify the best offer strategy of the generating companies (gencos); (ii) analyze the market performance under these optimum genco strategies. The analysis results together with numerical simulations show that pay-as-bid and marginal pricing are equivalent in a perfect market with exact system marginal cost prediction. In perfect markets with uncertain demand prediction, the two pricing methods are also equivalent but in an expected value sense. If we compare from the perspective of second order statistics, all market performance measures exhibit much lower values under pay-as-bid than under marginal pricing. The risk of deviating from the mean is therefore much higher under marginal pricing than under pay-as-bid. In an imperfect competition market with exact demand prediction, the research shows that pay-as-bid pricing yields lower consumer payments and lower genco profits. This research provides quantitative evidence that challenges some common claims about pay-as-bid pricing. One is that under pay-as-bid, participants would soon learn how to offer so as to obtain the same or higher profits than what they would have obtained under marginal pricing. This research however shows that, under pay-as-bid, participants can at best earn the same profit or expected profit as under marginal pricing. A second common claim refuted by this research is that pay-as-bid does not provide correct price signals if there is a scarcity of generation resources. We show that pay-as-bid does provide a price signal with such characteristics and furthermore argue that the price signal under marginal pricing with gaming may not necessarily be correct since it would then not reflect a lack of generation capacity but a desire to increase profit.
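
    To make the two payment rules concrete, here is a toy merit-order dispatch with invented offers and demand (not data or strategies from the thesis): accepted offers are paid either their own bid (pay-as-bid) or the price of the marginal accepted offer (marginal pricing). The thesis' equivalence results concern optimal offer strategies; this sketch only shows the mechanics of the two settlement rules for a fixed set of offers.

        # offers: (generator, offered quantity in MW, offer price in $/MWh) -- invented numbers
        offers = [("G1", 50, 20.0), ("G2", 40, 35.0), ("G3", 60, 50.0)]
        demand = 100.0

        offers.sort(key=lambda o: o[2])            # merit order: cheapest offers first
        dispatched, remaining = [], demand
        for name, qty, price in offers:
            take = min(qty, remaining)
            if take > 0:
                dispatched.append((name, take, price))
                remaining -= take

        marginal_price = dispatched[-1][2]         # price of the last (marginal) accepted offer
        pay_as_bid = sum(q * p for _, q, p in dispatched)
        uniform = sum(q * marginal_price for _, q, _ in dispatched)
        print(f"marginal price {marginal_price}, pay-as-bid total {pay_as_bid}, uniform total {uniform}")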

  19. An algorithmic framework for multiobjective optimization.

    PubMed

    Ganesan, T; Elamvazuthi, I; Shaari, Ku Zilati Ku; Vasant, P

    2013-01-01

    Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (especially more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate new high-performance algorithms with minimal computational overhead for MO optimization.

  20. An Algorithmic Framework for Multiobjective Optimization

    PubMed Central

    Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.

    2013-01-01

    Multiobjective (MO) optimization is an emerging area that is increasingly encountered in many fields. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as the weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise, especially when dealing with problems with multiple objectives (especially more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper addresses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
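
    As a minimal illustration of the weighted-sum scalarization mentioned in these two records (not the authors' proposed framework), the sketch below sweeps the weight over a two-objective toy problem and collects the resulting trade-off points; both objectives are invented.

        import numpy as np

        def f1(x): return x ** 2                   # toy objective 1
        def f2(x): return (x - 2.0) ** 2           # toy objective 2

        xs = np.linspace(-1.0, 3.0, 4001)
        front = []
        for w in np.linspace(0.0, 1.0, 11):        # weight sweep
            scalar = w * f1(xs) + (1.0 - w) * f2(xs)
            x_best = xs[np.argmin(scalar)]         # minimizer of the scalarized objective
            front.append((round(float(f1(x_best)), 3), round(float(f2(x_best)), 3)))
        print(front)                               # sampled points along the trade-off curve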

  1. A Vision-Based Motion Sensor for Undergraduate Laboratories.

    ERIC Educational Resources Information Center

    Salumbides, Edcel John; Maristela, Joyce; Uy, Alfredson; Karremans, Kees

    2002-01-01

    Introduces an alternative method to determine the mechanics of a moving object that uses computer vision algorithms with a charge-coupled device (CCD) camera as a recording device. Presents two experiments, pendulum motion and terminal velocity, to compare results of the alternative and conventional methods. (YDS)

  2. Universal access to HIV treatment in developing countries: going beyond the misinterpretations of the 'cost-effectiveness' algorithm.

    PubMed

    Moatti, Jean Paul; Marlink, Richard; Luchini, Stephane; Kazatchkine, Michel

    2008-07-01

    Economic cost-effectiveness analysis (CEA) has been proposed as the appropriate tool to set priorities for resource allocation among available health interventions. Controversy remains about the way CEA should be used in the field of HIV/AIDS. This paper reviews the general literature in health economics and public economics about the use of CEA for priority setting in public health, in order better to inform current debates about resource allocation in the fight against HIV/AIDS. Theoretical and practical limitations of CEA do not raise major problems when it is applied to compare alternatives for treating the same medical condition or public health problem. Using CEA to set priorities among different health interventions by ranking them from the lowest to the highest values of their cost per life-year saved is appropriate only under the very restrictive and unrealistic assumptions that all interventions compared are discrete and finite alternatives that cannot vary in terms of size and scale. In order for CEA to inform resource allocation compared across programmes to fight the AIDS epidemic, a pragmatic interpretation of this economic approach, like that proposed by the Commission on Macroeconomics and Health, is better suited. Interventions, like a number of preventive strategies and first-line antiretroviral treatments for HIV, whose marginal costs per additional life-year saved are less than three times the gross domestic product per capita, should be considered cost-effective. Because of their empirical and theoretical limitations, results of CEA should only be one element in priority setting among interventions for HIV/AIDS, which should also be informed by explicit debates about societal and ethical preferences.
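
    The Commission on Macroeconomics and Health rule cited above amounts to comparing an intervention's marginal cost per additional life-year saved against three times GDP per capita. The sketch below encodes that comparison with purely hypothetical numbers.

        def cost_effective(incremental_cost, life_years_gained, gdp_per_capita):
            """Cost-effective if the marginal cost per additional life-year saved
            is below three times GDP per capita (the CMH interpretation cited above)."""
            icer = incremental_cost / life_years_gained     # cost per life-year saved
            return icer, icer < 3.0 * gdp_per_capita

        # Invented example: $180,000 extra cost, 50 life-years gained, GDP per capita $1,500.
        print(cost_effective(180_000.0, 50.0, 1_500.0))      # -> (3600.0, True)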

  3. Corruption of accuracy and efficiency of Markov chain Monte Carlo simulation by inaccurate numerical implementation of conceptual hydrologic models

    NASA Astrophysics Data System (ADS)

    Schoups, G.; Vrugt, J. A.; Fenicia, F.; van de Giesen, N. C.

    2010-10-01

    Conceptual rainfall-runoff models have traditionally been applied without paying much attention to numerical errors induced by temporal integration of water balance dynamics. Reliance on first-order, explicit, fixed-step integration methods leads to computationally cheap simulation models that are easy to implement. Computational speed is especially desirable for estimating parameter and predictive uncertainty using Markov chain Monte Carlo (MCMC) methods. Confirming earlier work of Kavetski et al. (2003), we show here that the computational speed of first-order, explicit, fixed-step integration methods comes at a cost: for a case study with a spatially lumped conceptual rainfall-runoff model, it introduces artificial bimodality in the marginal posterior parameter distributions, which is not present in numerically accurate implementations of the same model. The resulting effects on MCMC simulation include (1) inconsistent estimates of posterior parameter and predictive distributions, (2) poor performance and slow convergence of the MCMC algorithm, and (3) unreliable convergence diagnosis using the Gelman-Rubin statistic. We studied several alternative numerical implementations to remedy these problems, including various adaptive-step finite difference schemes and an operator splitting method. Our results show that adaptive-step, second-order methods, based on either explicit finite differencing or operator splitting with analytical integration, provide the best alternative for accurate and efficient MCMC simulation. Fixed-step or adaptive-step implicit methods may also be used for increased accuracy, but they cannot match the efficiency of adaptive-step explicit finite differencing or operator splitting. Of the latter two, explicit finite differencing is more generally applicable and is preferred if the individual hydrologic flux laws cannot be integrated analytically, as the splitting method then loses its advantage.
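
    A minimal way to see the integration-error issue raised here is to compare a first-order, explicit, fixed-step (Euler) solution of a single linear reservoir against an adaptive higher-order solver. The reservoir parameters below are arbitrary, and this is not the lumped model used in the paper.

        import numpy as np
        from scipy.integrate import solve_ivp

        k, s0, t_end, dt = 2.5, 10.0, 5.0, 0.5    # recession constant, initial storage (arbitrary)

        def dsdt(t, s):                           # linear reservoir: dS/dt = -k * S
            return -k * s

        # First-order, explicit, fixed-step (Euler) integration.
        s, t = s0, 0.0
        while t < t_end:
            s += dt * dsdt(t, s)
            t += dt

        adaptive = solve_ivp(dsdt, (0.0, t_end), [s0], rtol=1e-8).y[0, -1]
        exact = s0 * np.exp(-k * t_end)
        print(f"Euler {s:.4e}  adaptive {adaptive:.4e}  exact {exact:.4e}")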

  4. Development of Serum Marker Models to Increase Diagnostic Accuracy of Advanced Fibrosis in Nonalcoholic Fatty Liver Disease: The New LINKI Algorithm Compared with Established Algorithms.

    PubMed

    Lykiardopoulos, Byron; Hagström, Hannes; Fredrikson, Mats; Ignatova, Simone; Stål, Per; Hultcrantz, Rolf; Ekstedt, Mattias; Kechagias, Stergios

    2016-01-01

    Detection of advanced fibrosis (F3-F4) in nonalcoholic fatty liver disease (NAFLD) is important for ascertaining prognosis. Serum markers have been proposed as alternatives to biopsy. We attempted to develop a novel algorithm for detection of advanced fibrosis based on a more efficient combination of serological markers and to compare this with established algorithms. We included 158 patients with biopsy-proven NAFLD. Of these, 38 had advanced fibrosis. The following fibrosis algorithms were calculated: NAFLD fibrosis score, BARD, NIKEI, NASH-CRN regression score, APRI, FIB-4, King's score, GUCI, Lok index, Forns score, and ELF. The study population was randomly divided into a training and a validation group. A multiple logistic regression analysis using bootstrapping methods was applied to the training group. Among the many variables analyzed, age, fasting glucose, hyaluronic acid, and AST were included, and a model (LINKI-1) for predicting advanced fibrosis was created. Moreover, these variables were combined with platelet count in a mathematical way exaggerating the opposing effects, and alternative models (LINKI-2) were also created. Models were compared using the area under the receiver operator characteristic curve (AUROC). Of the established algorithms, FIB-4 and King's score had the best diagnostic accuracy, with AUROCs of 0.84 and 0.83, respectively. Higher accuracy was achieved with the novel LINKI algorithms: in the total cohort, the AUROC was 0.91 for LINKI-1 and 0.89 for the LINKI-2 models. The LINKI algorithms for detection of advanced fibrosis in NAFLD showed better accuracy than established algorithms and should be validated in further studies including larger cohorts.
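
    The LINKI models themselves are not reproduced here. As a generic sketch of the workflow this record describes (multiple logistic regression on serum markers, evaluated by AUROC), the code below fits a logistic model on synthetic marker data and scores it on held-out samples; the data and coefficients are invented.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 400
        # Synthetic stand-ins for age, fasting glucose, hyaluronic acid, and AST.
        X = rng.normal(size=(n, 4))
        logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 1.5 * X[:, 2] + 0.5 * X[:, 3] - 1.0
        y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))      # advanced fibrosis yes/no

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        auroc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
        print(f"AUROC on held-out data: {auroc:.2f}")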

  5. Marginal adaptation of full-coverage CAD/CAM restorations: in vitro study using a non-destructive method.

    PubMed

    Romeo, E; Iorio, M; Storelli, S; Camandona, M; Abati, S

    2009-03-01

    Marginal fit of full-coverage crowns is a major requirement for the long-term success of this kind of restoration. The purpose of the study was to verify the marginal adaptation of computer assisted design (CAD)/computer assisted manufacturing (CAM) crowns on prepared teeth and on plaster dies. Four pairs of materials: zirconia-ceramic veneering (DC-Zircon, DCS Dental, Allschwill, CH/Cercon S, Degussa, DeguDent GmbH, Hanau, Germany), fiber-reinforced composite-composite veneering (DC-Tell, DCS Dental/Gradia, GC Europe, Leuven, Belgium), titanium-ceramic veneering (DC Titan, DCS Dental/Tikrom, Orotig, Verona, Italy) and titanium-composite veneering (DC Titan, DCS Dental/Gradia, GC Europe) were evaluated following the guidelines provided by ADA specification #8. Five crowns were fabricated for each material. Marginal gap values were measured at four points (0, 90, 180 and 270 degrees, starting from the centre of the vestibular surface) around the finishing line, on prepared teeth and on plaster dies, at each step of the fabrication process. Digital photographs were taken at each reference point and computer software was used to measure the amount of marginal discrepancy in µm. Statistical analysis was performed using t tests at the 95 percent confidence level. All the tested materials, except for fiber-reinforced composite, show a marginal adaptation within the limits of the ADA specification (25-40 µm). The application of veneering material causes a deterioration in marginal adaptation, except for fiber-reinforced composite. Within the limitations of this study, it was concluded that the marginal fit of CAD/CAM restorations is within the limits considered clinically acceptable by ADA specification #8. From the results of this in vitro study, it can be stated that CAD/CAM crowns produced with the DCS system show a marginal adaptation within the limits of ADA specification #8; therefore, milled CAD/CAM crowns can be considered a good alternative to the more traditional waxing-investing-casting technique.

  6. Marginal Contribution-Based Distributed Subchannel Allocation in Small Cell Networks.

    PubMed

    Shah, Shashi; Kittipiyakul, Somsak; Lim, Yuto; Tan, Yasuo

    2018-05-10

    The paper presents a game theoretic solution for distributed subchannel allocation problem in small cell networks (SCNs) analyzed under the physical interference model. The objective is to find a distributed solution that maximizes the welfare of the SCNs, defined as the total system capacity. Although the problem can be addressed through best-response (BR) dynamics, the existence of a steady-state solution, i.e., a pure strategy Nash equilibrium (NE), cannot be guaranteed. Potential games (PGs) ensure convergence to a pure strategy NE when players rationally play according to some specified learning rules. However, such a performance guarantee comes at the expense of complete knowledge of the SCNs. To overcome such requirements, properties of PGs are exploited for scalable implementations, where we utilize the concept of marginal contribution (MC) as a tool to design learning rules of players’ utility and propose the marginal contribution-based best-response (MCBR) algorithm of low computational complexity for the distributed subchannel allocation problem. Finally, we validate and evaluate the proposed scheme through simulations for various performance metrics.
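
    The MCBR learning rules and interference model are detailed in the paper; as a loose sketch of the underlying best-response idea, the toy below has each small cell greedily re-pick the subchannel that increases total system capacity (its marginal contribution to the welfare). The gain matrix, powers, and noise level are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n_cells, n_channels, power, noise = 6, 3, 1.0, 0.1
        gain = rng.uniform(0.05, 1.0, size=(n_cells, n_cells))    # invented link gains
        np.fill_diagonal(gain, 1.0)

        def welfare(assign):
            """Total system capacity (the welfare) under a physical interference model."""
            total = 0.0
            for i in range(n_cells):
                interference = sum(power * gain[j, i]
                                   for j in range(n_cells)
                                   if j != i and assign[j] == assign[i])
                total += np.log2(1.0 + power * gain[i, i] / (noise + interference))
            return total

        assign = rng.integers(0, n_channels, size=n_cells)
        changed = True
        while changed:                                # best-response sweeps until no cell moves
            changed = False
            for i in range(n_cells):
                best_w = welfare(assign)
                for c in range(n_channels):
                    trial = assign.copy()
                    trial[i] = c
                    w = welfare(trial)
                    if w > best_w + 1e-9:             # strict improvement guarantees termination
                        assign, best_w, changed = trial, w, True
        print("subchannel assignment:", assign, " welfare:", round(float(welfare(assign)), 3))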

  7. Fast max-margin clustering for unsupervised word sense disambiguation in biomedical texts

    PubMed Central

    Duan, Weisi; Song, Min; Yates, Alexander

    2009-01-01

    Background We aim to solve the problem of determining word senses for ambiguous biomedical terms with minimal human effort. Methods We build a fully automated Word Sense Disambiguation system that does not require manually constructed external resources or manually labeled training examples except for a single ambiguous word. The system uses a novel and efficient graph-based algorithm to cluster words into groups that have the same meaning. Our algorithm follows the principle of finding a maximum margin between clusters, determining a split of the data that maximizes the minimum distance between pairs of data points belonging to two different clusters. Results On a test set of 21 ambiguous keywords from PubMed abstracts, our system has an average accuracy of 78%, outperforming a state-of-the-art unsupervised system by 2% and a baseline technique by 23%. On a standard data set from the National Library of Medicine, our system outperforms the baseline by 6% and comes within 5% of the accuracy of a supervised system. Conclusion Our system is a novel, state-of-the-art technique for efficiently finding word sense clusters, and does not require training data or human effort for each new word to be disambiguated. PMID:19344480

  8. An Efficient, Noniterative Method of Identifying the Cost-Effectiveness Frontier.

    PubMed

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D

    2016-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when considering a policy problem with many alternative options or when performing an extensive suite of sensitivity analyses for which the efficient frontier must be found for each. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we also provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. © The Author(s) 2015.

  9. An Efficient, Non-iterative Method of Identifying the Cost-Effectiveness Frontier

    PubMed Central

    Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D.

    2015-01-01

    Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when considering a policy problem with many alternative options or when performing an extensive suite of sensitivity analyses for which the efficient frontier must be found for each. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we additionally provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. PMID:25926282
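
    The authors distribute their one-pass method as R and Matlab scripts; the sketch below is only a simple stand-in for the relationship they exploit, namely that a strategy lies on the efficient frontier exactly when it maximizes net monetary benefit (NMB = willingness-to-pay × effectiveness − cost) for some willingness-to-pay value. The candidate strategies and the WTP grid are invented, and a grid sweep only approximates the frontier.

        import numpy as np

        # Invented (cost, effectiveness) pairs for four candidate strategies.
        strategies = {"A": (1000.0, 2.0), "B": (4000.0, 4.5),
                      "C": (3000.0, 2.5), "D": (9000.0, 5.0)}

        frontier = set()
        for wtp in np.linspace(0.0, 20000.0, 2001):             # willingness-to-pay sweep
            nmb = {name: wtp * eff - cost for name, (cost, eff) in strategies.items()}
            frontier.add(max(nmb, key=nmb.get))                  # NMB-maximizing strategy
        print("approximate efficient frontier:", sorted(frontier))   # here: ['A', 'B', 'D']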

  10. A General Exponential Framework for Dimensionality Reduction.

    PubMed

    Wang, Su-Jing; Yan, Shuicheng; Yang, Jian; Zhou, Chun-Guang; Fu, Xiaolan

    2014-02-01

    As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low dimensional representations from high dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small sample size (SSS) problem; and 3) the algorithm de-emphasizes small distance pairs. To address these issues, here we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In the framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix, and thus is more robust. The positive definite property of the matrix exponential deals with the SSS problem. The behavior of the decay function of exponential embedding is more significant in emphasizing small distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI data sets, and the Georgia Tech face database show that the proposed new framework can well address the issues mentioned above.
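
    A bare-bones illustration of the exponential-embedding idea, applied here to a plain heat-kernel similarity matrix rather than the supervised variants extended in the paper: the matrix exponential of a symmetric matrix is guaranteed positive definite, and its leading eigenvectors give an embedding. The data and kernel width are invented.

        import numpy as np
        from scipy.linalg import expm
        from scipy.spatial.distance import pdist, squareform

        rng = np.random.default_rng(2)
        X = rng.normal(size=(60, 5))                        # toy high-dimensional data

        # Heat-kernel pairwise similarity matrix (symmetric).
        W = np.exp(-squareform(pdist(X)) ** 2 / 2.0)
        S = expm(W)                                         # matrix exponential: positive definite

        vals, vecs = np.linalg.eigh(S)
        embedding = vecs[:, -2:]                            # 2-D embedding from leading eigenvectors
        print(embedding.shape, "smallest eigenvalue of expm(W):", vals.min().round(4))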

  11. RGB-D SLAM Combining Visual Odometry and Extended Information Filter

    PubMed Central

    Zhang, Heng; Liu, Yanli; Tan, Jindong; Xiong, Naixue

    2015-01-01

    In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to the graph optimization approaches, this is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised, which is used to estimate motion control input. In addition, we use a novel descriptor called binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, which compares the proposed RGB-D SLAM algorithm with just RGB-D visual odometry and a graph-based RGB-D SLAM algorithm using the publicly-available RGB-D dataset. The results of the experiments demonstrate that our system is quicker than the graph-based RGB-D SLAM algorithm. PMID:26263990

  12. Ladar range image denoising by a nonlocal probability statistics algorithm

    NASA Astrophysics Data System (ADS)

    Xia, Zhi-Wei; Li, Qi; Xiong, Zhi-Peng; Wang, Qi

    2013-01-01

    According to the characteristics of range images of coherent ladar and on the basis of nonlocal means (NLM), a nonlocal probability statistics (NLPS) algorithm is proposed in this paper. The difference is that NLM performs denoising using the mean of the conditional probability distribution function (PDF) while NLPS uses the maximum of the marginal PDF. In the algorithm, similar blocks are found by a block-matching operation and form a group. Pixels in the group are analyzed by probability statistics and the gray value with maximum probability is used as the estimated value of the current pixel. Simulated range images of coherent ladar with different carrier-to-noise ratios and a real range image of coherent ladar with 8 gray scales are denoised by this algorithm, and the results are compared with those of the median filter, multitemplate order mean filter, NLM, median nonlocal mean filter and its incorporation of anatomical side information, and the unsupervised information-theoretic adaptive filter. The range abnormality noise and Gaussian noise in range images of coherent ladar are effectively suppressed by NLPS.

  13. BCM: toolkit for Bayesian analysis of Computational Models using samplers.

    PubMed

    Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A

    2016-10-21

    Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics, however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.

  14. Multiscale stochastic simulations of chemical reactions with regulated scale separation

    NASA Astrophysics Data System (ADS)

    Koumoutsakos, Petros; Feigelman, Justin

    2013-07-01

    We present a coupling of multiscale frameworks with accelerated stochastic simulation algorithms for systems of chemical reactions with disparate propensities. The algorithms regulate the propensities of the fast and slow reactions of the system, using alternating micro and macro sub-steps simulated with accelerated algorithms such as τ-leaping and R-leaping. The proposed algorithms are shown to provide significant speedups in simulations of stiff systems of chemical reactions, with a trade-off in accuracy as controlled by a regulating parameter. More importantly, the error of the methods exhibits a cutoff phenomenon that allows for optimal parameter choices. Numerical experiments demonstrate that hybrid algorithms involving accelerated stochastic simulations can, in certain cases, be both more accurate and faster than their corresponding stochastic simulation algorithm counterparts.
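
    For readers unfamiliar with τ-leaping, the sketch below simulates a simple birth-death reaction with a fixed leap size by drawing Poisson-distributed reaction counts per step. The rate constants and leap size are arbitrary, and the propensity-regulation scheme of the paper is not reproduced.

        import numpy as np

        rng = np.random.default_rng(3)
        k_birth, k_death = 5.0, 0.1              # arbitrary rate constants
        x, t, tau, t_end = 10, 0.0, 0.05, 50.0   # initial copy number, fixed leap size

        while t < t_end:
            a_birth = k_birth                    # propensity of 0 -> X
            a_death = k_death * x                # propensity of X -> 0
            n_birth = rng.poisson(a_birth * tau)
            n_death = rng.poisson(a_death * tau)
            x = max(x + n_birth - n_death, 0)    # crude guard against negative counts
            t += tau

        print("copy number at t =", t_end, ":", x, "(steady-state mean ~", k_birth / k_death, ")")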

  15. Optimization-Based Model Fitting for Latent Class and Latent Profile Analyses

    ERIC Educational Resources Information Center

    Huang, Guan-Hua; Wang, Su-Mei; Hsu, Chung-Chu

    2011-01-01

    Statisticians typically estimate the parameters of latent class and latent profile models using the Expectation-Maximization algorithm. This paper proposes an alternative two-stage approach to model fitting. The first stage uses the modified k-means and hierarchical clustering algorithms to identify the latent classes that best satisfy the…

  16. GeoSearcher: Location-Based Ranking of Search Engine Results.

    ERIC Educational Resources Information Center

    Watters, Carolyn; Amoudi, Ghada

    2003-01-01

    Discussion of Web queries with geospatial dimensions focuses on an algorithm that assigns location coordinates dynamically to Web sites based on the URL. Describes a prototype search system that uses the algorithm to re-rank search engine results for queries with a geospatial dimension, thus providing an alternative ranking order for search engine…

  17. Algorithm development for Maxwell's equations for computational electromagnetism

    NASA Technical Reports Server (NTRS)

    Goorjian, Peter M.

    1990-01-01

    A new algorithm has been developed for solving Maxwell's equations for the electromagnetic field. It solves the equations in the time domain with central, finite differences. The time advancement is performed implicitly, using an alternating direction implicit procedure. The space discretization is performed with finite volumes, using curvilinear coordinates with electromagnetic components along those directions. Sample calculations are presented of scattering from a metal pin, a square and a circle to demonstrate the capabilities of the new algorithm.

  18. Automatic identification of comparative effectiveness research from Medline citations to support clinicians’ treatment information needs

    PubMed Central

    Zhang, Mingyuan; Fiol, Guilherme Del; Grout, Randall W.; Jonnalagadda, Siddhartha; Medlin, Richard; Mishra, Rashmi; Weir, Charlene; Liu, Hongfang; Mostafa, Javed; Fiszman, Marcelo

    2014-01-01

    Online knowledge resources such as Medline can address most clinicians’ patient care information needs. Yet, significant barriers, notably lack of time, limit the use of these sources at the point of care. The most common information needs raised by clinicians are treatment-related. Comparative effectiveness studies allow clinicians to consider multiple treatment alternatives for a particular problem. Still, solutions are needed to enable efficient and effective consumption of comparative effectiveness research at the point of care. Objective Design and assess an algorithm for automatically identifying comparative effectiveness studies and extracting the interventions investigated in these studies. Methods The algorithm combines semantic natural language processing, Medline citation metadata, and machine learning techniques. We assessed the algorithm in a case study of treatment alternatives for depression. Results Both precision and recall for identifying comparative studies was 0.83. A total of 86% of the interventions extracted perfectly or partially matched the gold standard. Conclusion Overall, the algorithm achieved reasonable performance. The method provides building blocks for the automatic summarization of comparative effectiveness research to inform point of care decision-making. PMID:23920677

  19. Optimal Fungal Space Searching Algorithms.

    PubMed

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First-Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase appreciably with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.
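
    For comparison with the natural algorithm discussed above, here is a plain uninformed Depth-First-Search on a small grid maze; the maze is invented and is unrelated to the micro-confined networks used in the experiments.

        # 0 = open cell, 1 = wall; invented 5x5 maze, start top-left, goal bottom-right.
        maze = [[0, 0, 1, 0, 0],
                [1, 0, 1, 0, 1],
                [0, 0, 0, 0, 0],
                [0, 1, 1, 1, 0],
                [0, 0, 0, 1, 0]]

        def dfs(cell, goal, visited=None):
            """Uninformed depth-first search; returns one path from cell to goal, or []."""
            visited = visited or set()
            if cell == goal:
                return [cell]
            visited.add(cell)
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if (0 <= nr < len(maze) and 0 <= nc < len(maze[0])
                        and maze[nr][nc] == 0 and (nr, nc) not in visited):
                    path = dfs((nr, nc), goal, visited)
                    if path:
                        return [cell] + path
            return []

        print(dfs((0, 0), (4, 4)))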

  20. SU-E-J-35: Using CBCT as the Alternative Method of Assessing ITV Volume

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Y; Turian, J; Templeton, A

    2015-06-15

    Purpose To study the accuracy of Internal Target Volumes (ITVs) created on cone beam CT (CBCT) by comparing the visible target volume on CBCT to volumes (GTV, ITV, and PTV) outlined on free breathing (FB) CT and 4DCT. Methods A Quasar Cylindrical Motion Phantom with a 3 cm diameter ball (14.14 cc) embedded within a cork insert was set up to simulate respiratory motion with a period of 4 seconds and amplitude of 2 cm superoinferiorly and 1 cm anteroposteriorly. FBCT and 4DCT images were acquired. A PTV-4D was created on the 4DCT by applying a uniform margin of 5 mm to the ITV-CT. PTV-FB was created by applying a margin of the motion range plus 5 mm, i.e., a total of 1.5 cm laterally and 2.5 cm superoinferiorly, to the GTV outlined on the FBCT. A dynamic conformal arc was planned to treat the PTV-FB with a 1 mm margin. A CBCT was acquired before the treatment, on which the target was delineated. During the treatment, the position of the target was monitored using the EPID in cine mode. Results ITV-CBCT and ITV-CT were measured to be 56.6 and 62.7 cc, respectively, with a Dice Coefficient (DC) of 0.94 and a disagreement in center of mass (COM) of 0.59 mm. On the other hand, GTV-FB was 11.47 cc, 19% less than the known volume of the ball. PTV-FB and PTV-4D were 149 and 116 cc, with a DC of 0.71. Part of the ITV-CT was not enclosed by the PTV-FB despite the large margin. The cine EPID images confirmed geometric misses of the target. Similar under-coverage was observed in one clinical case and captured by the CBCT, where the implanted fiducials moved outside the PTV-FB. Conclusion ITV-CBCT is in good agreement with ITV-CT. When 4DCT is not available, CBCT can be an effective alternative for determining and verifying the PTV margin.
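
    The Dice coefficient used to compare volumes in this record is simply twice the overlap divided by the sum of the two volumes; a short sketch on binary masks (synthetic spheres, not the phantom data) follows.

        import numpy as np

        def dice(mask_a, mask_b):
            """Dice coefficient: 2 * |A intersect B| / (|A| + |B|)."""
            overlap = np.logical_and(mask_a, mask_b).sum()
            return 2.0 * overlap / (mask_a.sum() + mask_b.sum())

        # Two synthetic spherical volumes on a voxel grid (not the phantom ITVs).
        z, y, x = np.mgrid[-40:40, -40:40, -40:40]
        itv_a = x ** 2 + y ** 2 + z ** 2 <= 24 ** 2
        itv_b = (x - 3) ** 2 + y ** 2 + z ** 2 <= 23 ** 2
        print("Dice coefficient:", round(float(dice(itv_a, itv_b)), 3))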

  1. Species delimitation using Bayes factors: simulations and application to the Sceloporus scalaris species group (Squamata: Phrynosomatidae).

    PubMed

    Grummer, Jared A; Bryson, Robert W; Reeder, Tod W

    2014-03-01

    Current molecular methods of species delimitation are limited by the types of species delimitation models and scenarios that can be tested. Bayes factors allow for more flexibility in testing non-nested species delimitation models and hypotheses of individual assignment to alternative lineages. Here, we examined the efficacy of Bayes factors in delimiting species through simulations and empirical data from the Sceloporus scalaris species group. Marginal-likelihood scores of competing species delimitation models, from which Bayes factor values were compared, were estimated with four different methods: harmonic mean estimation (HME), smoothed harmonic mean estimation (sHME), path-sampling/thermodynamic integration (PS), and stepping-stone (SS) analysis. We also performed model selection using a posterior simulation-based analog of the Akaike information criterion through Markov chain Monte Carlo analysis (AICM). Bayes factor species delimitation results from the empirical data were then compared with results from the reversible-jump MCMC (rjMCMC) coalescent-based species delimitation method Bayesian Phylogenetics and Phylogeography (BP&P). Simulation results show that HME and sHME perform poorly compared with PS and SS marginal-likelihood estimators when identifying the true species delimitation model. Furthermore, Bayes factor delimitation (BFD) of species showed improved performance when species limits are tested by reassigning individuals between species, as opposed to either lumping or splitting lineages. In the empirical data, BFD through PS and SS analyses, as well as the rjMCMC method, each provide support for the recognition of all scalaris group taxa as independent evolutionary lineages. Bayes factor species delimitation and BP&P also support the recognition of three previously undescribed lineages. In both simulated and empirical data sets, harmonic and smoothed harmonic mean marginal-likelihood estimators provided much higher marginal-likelihood estimates than PS and SS estimators. The AICM displayed poor repeatability in both simulated and empirical data sets, and produced inconsistent model rankings across replicate runs with the empirical data. Our results suggest that species delimitation through the use of Bayes factors with marginal-likelihood estimates via PS or SS analyses provide a useful and complementary alternative to existing species delimitation methods.
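
    To see the kind of discrepancy this record reports between harmonic-mean and more careful marginal-likelihood estimators, the sketch below compares the harmonic-mean estimator (likelihoods at posterior draws) with brute-force Monte Carlo integration over the prior for a one-parameter conjugate normal model. This toy has nothing to do with the phylogenetic models in the study, and path-sampling/stepping-stone estimators are not reproduced.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(4)
        sigma, mu0, tau0 = 1.0, 0.0, 2.0                 # known sd; normal prior on the mean
        data = rng.normal(1.5, sigma, size=20)

        def loglik(theta):
            return norm.logpdf(data[:, None], loc=theta, scale=sigma).sum(axis=0)

        # Brute-force Monte Carlo over the prior (reference estimate).
        theta_prior = rng.normal(mu0, tau0, size=100_000)
        ref = np.log(np.mean(np.exp(loglik(theta_prior))))

        # Harmonic-mean estimator from (conjugate) posterior draws.
        prec = 1 / tau0**2 + len(data) / sigma**2
        post_mean = (mu0 / tau0**2 + data.sum() / sigma**2) / prec
        theta_post = rng.normal(post_mean, np.sqrt(1 / prec), size=100_000)
        hme = -np.log(np.mean(np.exp(-loglik(theta_post))))

        print(f"prior-MC log marginal likelihood {ref:.3f}   harmonic-mean estimate {hme:.3f}")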

  2. ATHENA: A knowledge-based hybrid backpropagation-grammatical evolution neural network algorithm for discovering epistasis among quantitative trait Loci

    PubMed Central

    2010-01-01

    Background Growing interest and burgeoning technology for discovering genetic mechanisms that influence disease processes have ushered in a flood of genetic association studies over the last decade, yet little heritability in highly studied complex traits has been explained by genetic variation. Non-additive gene-gene interactions, which are not often explored, are thought to be one source of this "missing" heritability. Methods Stochastic methods employing evolutionary algorithms have demonstrated promise in being able to detect and model gene-gene and gene-environment interactions that influence human traits. Here we demonstrate modifications to a neural network algorithm in ATHENA (the Analysis Tool for Heritable and Environmental Network Associations) resulting in clear performance improvements for discovering gene-gene interactions that influence human traits. We employed an alternative tree-based crossover, backpropagation for locally fitting neural network weights, and incorporation of domain knowledge obtainable from publicly accessible biological databases for initializing the search for gene-gene interactions. We tested these modifications in silico using simulated datasets. Results We show that the alternative tree-based crossover modification resulted in a modest increase in the sensitivity of the ATHENA algorithm for discovering gene-gene interactions. The performance increase was highly statistically significant when backpropagation was used to locally fit NN weights. We also demonstrate that using domain knowledge to initialize the search for gene-gene interactions results in a large performance increase, especially when the search space is larger than the search coverage. Conclusions We show that a hybrid optimization procedure, alternative crossover strategies, and incorporation of domain knowledge from publicly available biological databases can result in marked increases in sensitivity and performance of the ATHENA algorithm for detecting and modelling gene-gene interactions that influence a complex human trait. PMID:20875103

  3. Economic and market issues on the sustainability of egg production in the United States: analysis of alternative production systems.

    PubMed

    Sumner, D A; Gow, H; Hayes, D; Matthews, W; Norwood, B; Rosen-Molina, J T; Thurman, W

    2011-01-01

    Conventional cage housing for laying hens evolved as a cost-effective egg production system. Complying with mandated hen housing alternatives would raise marginal production costs and require sizable capital investment. California data indicate that shifts from conventional cages to barn housing would likely cause farm-level cost increases of about 40% per dozen. The US data on production costs of such alternatives as furnished cages are not readily available and European data are not applicable to the US industry structure. Economic analysis relies on key facts about production and marketing of conventional and noncage eggs. Even if mandated by government or buyers, shifts to alternative housing would likely occur with lead times of at least 5 yr. Therefore, egg producers and input suppliers would have considerable time to plan new systems and build new facilities. Relatively few US consumers now pay the high retail premiums required for nonconventional eggs from hens housed in alternative systems. However, data from consumer experiments indicate that additional consumers would also be willing to pay some premium. Nonetheless, current data do not allow easy extrapolation to understand the willingness to pay for such eggs by the vast majority of conventional egg consumers. Egg consumption in the United States tends to be relatively unresponsive to price changes, such that sustained farm price increases of 40% would likely reduce consumption by less than 10%. This combination of facts and relationships suggests that, unless low-cost imports grew rapidly, requirements for higher cost hen housing systems would raise US egg prices considerably while reducing egg consumption marginally. Eggs are a low-cost source of animal protein and low-income consumers would be hardest hit. However, because egg expenditures are a very small share of the consumer budget, real income loss for consumers would be small in percentage terms. Finally, the high egg prices imposed by alternative hen housing systems raise complex issues about linking public policy costs to policy beneficiaries.

  4. Visual reconciliation of alternative similarity spaces in climate modeling

    Treesearch

    J Poco; A Dasgupta; Y Wei; William Hargrove; C.R. Schwalm; D.N. Huntzinger; R Cook; E Bertini; C.T. Silva

    2015-01-01

    Visual data analysis often requires grouping of data objects based on their similarity. In many application domains researchers use algorithms and techniques like clustering and multidimensional scaling to extract groupings from data. While extracting these groups using a single similarity criteria is relatively straightforward, comparing alternative criteria poses...

  5. STRATOP: A Model for Designing Effective Product and Communication Strategies. Paper No. 470.

    ERIC Educational Resources Information Center

    Pessemier, Edgar A.

    The STRATOP algorithm was developed to help planners and proponents find and test effectively designed choice objects and communication strategies. Choice objects can range from complex social, scientific, military, or educational alternatives to simple economic alternatives between assortments of branded convenience goods. Two classes of measured…

  6. A Circuit-Based Quantum Algorithm Driven by Transverse Fields for Grover's Problem

    NASA Technical Reports Server (NTRS)

    Jiang, Zhang; Rieffel, Eleanor G.; Wang, Zhihui

    2017-01-01

    We designed a quantum search algorithm giving the same quadratic speedup achieved by Grover's original algorithm; we replace Grover's diffusion operator (hard to implement) with a product diffusion operator generated by transverse fields (easy to implement). In our algorithm, the problem Hamiltonian (oracle) and the transverse fields are applied to the system alternately. We construct a sequence such that the corresponding unitary generates a closed transition between the initial state (an even superposition of all states) and a modified target state, which has a high degree of overlap with the original target state.

  7. Marginalized Student Access to Technology Education

    ERIC Educational Resources Information Center

    Kurtcu, Wanda M.

    2017-01-01

    The purpose of this paper is to investigate how a teacher can disrupt an established curriculum that continues the cycle of inequity of access to science, technology, engineering, and math (STEM) curriculum by students in alternative education. For this paper, I will focus on the technology components of the STEM curriculum. Technology in the…

  8. Social Reproduction and College Access: Current Evidence, Context, and Potential Alternatives

    ERIC Educational Resources Information Center

    Serna, Gabriel R.; Woulfe, Rebecca

    2017-01-01

    Social reproduction theory identifies schooling as a primary means for the perpetuation of the dominant class's ideologies, values, and power. The ability to access college is so closely tied to these constructs that it contributes to this dominance and marginalization. Social stratification is not only mirrored in higher education, but the…

  9. Feminist Heuristics: Transforming the Foundation of Food Quality and Safety Assurance Systems

    ERIC Educational Resources Information Center

    Kimura, Aya Hirata

    2012-01-01

    Food safety and quality assurance systems have emerged as a key mechanism of food governance in recent years and are also popular among alternative agrofood movements, such as the organic and fair trade movements. Rural sociologists have identified many problems with existing systems, including corporate cooptation, the marginalization of small…

  10. Sociopolitical Development in Educational Systems: From Margins to Center

    ERIC Educational Resources Information Center

    Kirshner, Ben; Hipolito-Delgado, Carlos; Zion, Shelley

    2015-01-01

    This is a challenging moment for supporters of public education: the status quo is untenable but the options offered by "reformers" appear equally dangerous. In this context we need arguments for the democratic purposes of education that offer an alternative to existing inequities on one hand and technocratic or privatized solutions on…

  11. Migrants' Alternative Multi-Lingua Franca Spaces as Emergent Re-Producers of Exclusionary Monolingual Nation-State Regimes

    ERIC Educational Resources Information Center

    Sabaté Dalmau, Maria

    2016-01-01

    From a critical sociolinguistic perspective, this article investigates the written linguistic practices of 20 labor migrants from heterogeneous backgrounds who organized their life trajectories in an "ethnic" call shop in a marginal neighborhood near Barcelona. This was a late capitalist institution informally providing the undocumented…

  12. Growing cottonwoods for biomass: results of a ten-year irrigation study

    Treesearch

    H. Christoph Stuhlinger; Paul F. Doruska; Jeffery A. Earl; Matthew H. Pelkki

    2010-01-01

    Eastern cottonwood (Populus deltoides Bartr.) has potential as a short rotation alternate crop on marginal farmlands in the South to meet increasing biomass demands for pulp and bioenergy applications. Potlatch Corporation supported this study to investigate the effect of irrigation on the growth of cottonwood trees. The study was installed in the...

  13. Land conversion to bioenergy production: water budget and sediment output in a semiarid grassland

    USDA-ARS?s Scientific Manuscript database

    Switchgrass based bioenergy production has been considered a feasible alternative of land use for the mixed-grass prairie and marginal croplands in the High Plains. However, little is known of the effect of this land use change on the water cycle and associated sediment output in this water controll...

  14. Adult Basic Education and the Welfare Roles: An Economic and Social Alternative.

    ERIC Educational Resources Information Center

    Pennsylvania Association for Adult Continuing Education, Harrisburg, PA.

    In Pennsylvania where 30 percent of the adult population is functionally illiterate and another 24 percent has only marginal competence, no state funds are appropriated for adult basic education and general educational development (ABE/GED) programs. All programs are supported by federal aid. Information shows that economic revitalization and a…

  15. 14 CFR Appendix D to Part 417 - Flight Termination Systems, Components, Installation, and Monitoring

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... other propulsion system. D417.5Flight termination system design (a) Reliability prediction. A flight... design margin required by this appendix. As an alternative to subjecting the flight termination system to... the component is heated or cooled to achieve the required dwell time at one extreme of the required...

  16. 14 CFR Appendix D to Part 417 - Flight Termination Systems, Components, Installation, and Monitoring

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... other propulsion system. D417.5Flight termination system design (a) Reliability prediction. A flight... design margin required by this appendix. As an alternative to subjecting the flight termination system to... the component is heated or cooled to achieve the required dwell time at one extreme of the required...

  17. 14 CFR Appendix D to Part 417 - Flight Termination Systems, Components, Installation, and Monitoring

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... other propulsion system. D417.5Flight termination system design (a) Reliability prediction. A flight... design margin required by this appendix. As an alternative to subjecting the flight termination system to... the component is heated or cooled to achieve the required dwell time at one extreme of the required...

  18. 14 CFR Appendix D to Part 417 - Flight Termination Systems, Components, Installation, and Monitoring

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... other propulsion system. D417.5Flight termination system design (a) Reliability prediction. A flight... design margin required by this appendix. As an alternative to subjecting the flight termination system to... the component is heated or cooled to achieve the required dwell time at one extreme of the required...

  19. Montessori and the Mainstream: A Century of Reform on the Margins

    ERIC Educational Resources Information Center

    Whitescarver, Keith; Cossentino, Jacqueline

    2008-01-01

    Background/Context: Montessori education has flourished as an alternative approach to schooling for a hundred years. In the century since the first Montessori school opened in the slums of Rome, the movement has undergone sustained growth while simultaneously enduring efforts to modify the method in order to reach a wider audience. Despite…

  20. Approaching Praxis: YPAR as Critical Pedagogical Process in a College Access Program

    ERIC Educational Resources Information Center

    Scott, Mary Alice; Pyne, Kimberly B.; Means, Darris R.

    2015-01-01

    To address the persistent failure of schooling to support underserved students, youth participatory action research (YPAR) has emerged as an alternative and critical paradigm for educational practice. YPAR re-centers authority on marginalized voices and understands research as a tool for social change. Grounded in critical pedagogy, such projects…

  1. Interpersonal Process Group Counseling for Educationally Marginalized Youth: The MAGNIFY Program

    ERIC Educational Resources Information Center

    Slaten, Christopher D.; Elison, Zachary M.

    2015-01-01

    Youth mental health is an area of profound disparity between the demand and supply of services, particularly in schools that serve students at risk of school dropout. This article describes the conceptual foundations and implementation of "MAGNIFY", a program that provides free group counseling to small alternative schools with students…

  2. 30 CFR 204.210 - What if a property is approved as part of a nonqualifying agreement?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false What if a property is approved as part of a nonqualifying agreement? 204.210 Section 204.210 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing...

  3. 30 CFR 204.212 - What if I took relief for which I was ineligible?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false What if I took relief for which I was ineligible? 204.212 Section 204.212 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 204.212...

  4. 30 CFR 204.214 - Is minimum royalty due on a property for which I took relief?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false Is minimum royalty due on a property for which I took relief? 204.214 Section 204.214 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing...

  5. 30 CFR 204.204 - What accounting and auditing relief will MMS not allow?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false What accounting and auditing relief will MMS... INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 204.204 What accounting and auditing relief will MMS not allow? MMS will not approve your request for...

  6. 30 CFR 204.211 - When may MMS rescind relief for a property?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false When may MMS rescind relief for a property? 204.211 Section 204.211 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 204.211 When may...

  7. The Process and Product: Crafting Community Portraits with Young People in Flexible Learning Settings

    ERIC Educational Resources Information Center

    Baker, Alison M.

    2016-01-01

    Community-based alternative education is situated on the margins in relation to mainstream education. Young people attending these learning sites are often characterised as "disengaged learners", who have fallen through the cracks of the traditional schooling system. The aim of this project was to use participatory visual methods with…

  8. From the Margins to the Mainstream: Potential Impact of Early Colleges on Traditional High Schools

    ERIC Educational Resources Information Center

    Smith, Robert; Fischetti, John; Fort, Deron; Gurley, Tilly; Kelly, Mike

    2012-01-01

    Early colleges are one alternative to the traditional comprehensive high school. This article describes the establishment of an early college in partnership with a university, including the experiences for students, the challenges for teachers, and the difficulties in bridging higher and kindergarten through Grade-12 educations. The article…

  9. Fostering Arts Education through a University-Afterschool Partnership

    ERIC Educational Resources Information Center

    Leonard, Alison E.; Fleming, David S.; Lewis, Melanie; Durham, Sheliah

    2017-01-01

    Regardless of the type of arts activity, the importance of the arts in afterschool programs cannot be overestimated. As the arts are increasingly marginalized in public school systems, afterschool arts education can be an alternative way to integrate the arts into children's academic experiences or build on their in-school arts experiences (Briggs…

  10. Development and Female Crime: A Cross-National Test of Alternative Explanations.

    ERIC Educational Resources Information Center

    Steffensmeier, Darrell; And Others

    1989-01-01

    Interpol data from 69 countries indicate that the relationship between female percentage of arrests and national development status is mediated by opportunity for "female" consumer crimes and formalization of social control (which makes female crime more visible), but not by gender equality or female economic marginality. Contains 49 references.…

  11. Building Intercultural Empathy through Writing: Reflections on Teaching Alternatives to Argumentation

    ERIC Educational Resources Information Center

    Peirce, Karen P.

    2007-01-01

    Writing assignments that focus on nonargumentative discourse can take many forms. Such assignments can prompt students to produce individually constructed writing, or they can be more collaborative in nature. They can focus on traditional formats, following MLA citation guidelines, using Times New Roman 12-point font, maintaining one-inch margins,…

  12. Engaged Scholarship in the Academy: Reflections from the Margins

    ERIC Educational Resources Information Center

    Drame, Elizabeth R.; Martell, Sandra Toro; Mueller, Jennifer; Oxford, Raquel; Wisneski, Debora B.; Xu, Yaoying

    2011-01-01

    This paper represents a series of reflections on collective and individual efforts of diverse women scholars to reconcile alternative views of scholarship within the academy. We document our collective experience with embedding the concept of the "scholarship of engagement" in our practice of research, teaching, and service through a process of…

  13. Gaussian mixture model based identification of arterial wall movement for computation of distension waveform.

    PubMed

    Patil, Ravindra B; Krishnamoorthy, P; Sethuraman, Shriram

    2015-01-01

    This work proposes a novel Gaussian Mixture Model (GMM) based approach for accurate tracking of the arterial wall and subsequent computation of the distension waveform using the Radio Frequency (RF) ultrasound signal. The approach was evaluated on ultrasound RF data acquired using a prototype ultrasound system from an artery-mimicking flow phantom. The effectiveness of the proposed algorithm is demonstrated by comparison with existing wall-tracking algorithms. The experimental results show that the proposed method provides a 20% reduction in the error margin compared to existing approaches for tracking arterial wall movement. This approach, coupled with an ultrasound system, can be used to estimate the arterial compliance parameters required for screening of cardiovascular-related disorders.
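
    As a rough illustration of the idea, the sketch below fits a two-component 1-D Gaussian mixture to a synthetic echo envelope frame by frame and reads the wall positions off the component means; the synthetic pulse shapes, the scikit-learn usage, and all parameter values are assumptions for the sketch, not the authors' implementation.

```python
# Hypothetical sketch: locating two arterial wall echoes per frame with a 1-D
# Gaussian mixture and deriving a distension waveform. Synthetic data only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_frames = 50
depth = np.linspace(0, 10, 500)                          # depth axis in mm

def synthetic_envelope(t):
    """Two wall echoes whose separation oscillates with the cardiac cycle."""
    d = 6.0 + 0.4 * np.sin(2 * np.pi * t / n_frames)     # lumen diameter (mm)
    near, far = 2.0, 2.0 + d
    return (np.exp(-((depth - near) ** 2) / 0.05) +
            np.exp(-((depth - far) ** 2) / 0.05) +
            0.05 * rng.random(depth.size))

diameters = []
for t in range(n_frames):
    env = synthetic_envelope(t)
    # Resample depths with probability proportional to echo amplitude,
    # then fit a 2-component GMM; the component means approximate the walls.
    samples = rng.choice(depth, size=2000, p=env / env.sum())
    gmm = GaussianMixture(n_components=2, random_state=0).fit(samples[:, None])
    near_wall, far_wall = np.sort(gmm.means_.ravel())
    diameters.append(far_wall - near_wall)

distension = np.array(diameters) - min(diameters)        # distension waveform
print("peak distension (mm):", round(float(distension.max()), 3))
```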

  14. Dual pricing algorithm in ISO markets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Neill, Richard P.; Castillo, Anya; Eldridge, Brent

    The challenge to create efficient market clearing prices in centralized day-ahead electricity markets arises from inherent non-convexities in unit commitment problems. When this aspect is ignored, marginal prices may result in economic losses to market participants who are part of the welfare maximizing solution. In this essay, we present an axiomatic approach to efficient prices and cost allocation for a revenue neutral and non-confiscatory day-ahead market. Current cost allocation practices do not adequately attribute costs based on transparent cost causation criteria. Instead we propose an ex post multi-part pricing scheme, which we refer to as the Dual Pricing Algorithm. Lastly, our approach can be incorporated into current day-ahead markets without altering the market equilibrium.

  15. A modified estimation distribution algorithm based on extreme elitism.

    PubMed

    Gao, Shujun; de Silva, Clarence W

    2016-12-01

    An existing estimation distribution algorithm (EDA) with a univariate marginal Gaussian model was improved by designing and incorporating an extreme elitism selection method. This selection method highlights the effect of a few top solutions during the evolution, helping the EDA form a primary evolution direction and obtain a fast convergence rate. At the same time, this selection maintains population diversity, helping the EDA avoid premature convergence. The modified EDA was then tested on benchmark low-dimensional and high-dimensional optimization problems to illustrate the gains from using this extreme elitism selection. In addition, the no-free-lunch theorem was considered in analyzing the effect of this new selection on EDAs. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
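
    A minimal toy version of a univariate-marginal Gaussian EDA with an elitism-weighted mean update is sketched below on the sphere function; the specific weighting of the top solutions, the test function, and all parameters are illustrative assumptions, not the paper's scheme.

```python
# Toy univariate-marginal Gaussian EDA with an elitism-weighted update.
# The weighting of the best few solutions is illustrative, not the paper's.
import numpy as np

def sphere(x):
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(1)
dim, pop, n_sel, gens = 10, 100, 30, 200
mu = np.full(dim, 5.0)          # marginal means
sigma = np.full(dim, 2.0)       # marginal standard deviations

for g in range(gens):
    X = rng.normal(mu, sigma, size=(pop, dim))
    order = np.argsort(sphere(X))
    elite = X[order[:n_sel]]
    # "Extreme elitism": the very best solutions get much larger weights.
    w = np.ones(n_sel)
    w[:5] = [10, 8, 6, 4, 2]
    w /= w.sum()
    mu = w @ elite                                   # weighted marginal means
    sigma = np.sqrt(w @ (elite - mu) ** 2) + 1e-8    # weighted marginal stds

print("best value found:", float(sphere(mu[None, :])[0]))
```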

  16. Clustering with Missing Values: No Imputation Required

    NASA Technical Reports Server (NTRS)

    Wagstaff, Kiri

    2004-01-01

    Clustering algorithms can identify groups in large data sets, such as star catalogs and hyperspectral images. In general, clustering methods cannot analyze items that have missing data values. Common solutions either fill in the missing values (imputation) or ignore the missing data (marginalization). Imputed values are treated as just as reliable as the truly observed data, but they are only as good as the assumptions used to create them. In contrast, we present a method for encoding partially observed features as a set of supplemental soft constraints and introduce the KSC algorithm, which incorporates constraints into the clustering process. In experiments on artificial data and data from the Sloan Digital Sky Survey, we show that soft constraints are an effective way to enable clustering with missing values.
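
    As a contrast to the soft-constraint approach described above, the sketch below shows the simpler marginalization strategy: a k-means variant whose distances use only the observed features of each item. It is a stand-in illustration on assumed toy data, not the KSC algorithm itself.

```python
# Minimal k-means variant that marginalizes over missing features: distances
# use observed dimensions only. A stand-in illustration, not the KSC algorithm.
import numpy as np

def kmeans_missing(X, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(X)
    centers = X[rng.choice(len(X), k, replace=False)]
    centers = np.where(np.isnan(centers), np.nanmean(X, axis=0), centers)
    for _ in range(iters):
        # Squared distance averaged over the features each point actually has.
        diff = np.where(obs[:, None, :], X[:, None, :] - centers[None], 0.0)
        d2 = (diff ** 2).sum(-1) / obs.sum(1, keepdims=True)
        labels = d2.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                col_means = np.nanmean(pts, axis=0)
                centers[j] = np.where(np.isnan(col_means), centers[j], col_means)
    return labels, centers

X = np.array([[1.0, 2.0], [1.1, np.nan], [8.0, 9.0], [np.nan, 9.2]])
labels, centers = kmeans_missing(X, k=2)
print(labels)
```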

  17. Spectrum Orbit Utilization Program documentation: SOUP5 version 3.8 user's manual, volume 1, chapters 1 through 5

    NASA Technical Reports Server (NTRS)

    Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.

    1985-01-01

    The underlying engineering and mathematical models as well as the computational methods used by the Spectrum Orbit Utilization Program 5 (SOUP5) analysis programs are described. Included are the algorithms used to calculate the technical parameters, and references to the technical literature. The organization, capabilities, processing sequences, and processing and data options of the SOUP5 system are described. The details of the geometric calculations are given. Also discussed are the various antenna gain algorithms; rain attenuation and depolarization calculations; calculations of transmitter power and received power flux density; channelization options, interference categories, and protection ratio calculation; generation of aggregate interference and margins; equivalent gain calculations; and how to enter a protection ratio template.

  18. Dual pricing algorithm in ISO markets

    DOE PAGES

    O'Neill, Richard P.; Castillo, Anya; Eldridge, Brent; ...

    2016-10-10

    The challenge to create efficient market clearing prices in centralized day-ahead electricity markets arises from inherent non-convexities in unit commitment problems. When this aspect is ignored, marginal prices may result in economic losses to market participants who are part of the welfare maximizing solution. In this essay, we present an axiomatic approach to efficient prices and cost allocation for a revenue neutral and non-confiscatory day-ahead market. Current cost allocation practices do not adequately attribute costs based on transparent cost causation criteria. Instead we propose an ex post multi-part pricing scheme, which we refer to as the Dual Pricing Algorithm. Lastly, our approach can be incorporated into current day-ahead markets without altering the market equilibrium.

  19. Immediate performance of self-etching versus system adhesives with multiple light-activated restoratives.

    PubMed

    Irie, M; Suzuki, K; Watts, D C

    2004-11-01

    The purpose of this study was to evaluate the performance of both single and double applications of a self-etching dental adhesive (Adper Prompt L-Pop), when used with three classes of light-activated restorative materials, in comparison to the performance of each restorative system adhesive. Evaluation parameters to be considered for the adhesive systems were (a) immediate marginal adaptation (or gap formation) in tooth cavities, (b) free setting shrinkage-strain determined by the immediate marginal gap-width in a non-bonding Teflon cavity, and (c) their immediate shear bond-strengths to enamel and to dentin. The maximum marginal gap-width and the opposing-width (if any) in the tooth cavities and in the Teflon cavities were measured immediately (3 min) after light-activation. The shear bond-strengths to enamel and to dentin were also measured at 3 min. For light-activated restorative materials during early setting (<3 min), application of Adper Prompt L-Pop exhibited generally superior marginal adaptation to most system adhesives, but there was no additional benefit from double application. The marginal gaps in tooth cavities and the marginal gaps in Teflon cavities were highly correlated (r = 0.86-0.89, p < 0.02-0.01). For enamel and dentin shear bond-strengths, there were no significant differences between single and double applications, for all materials tested except Toughwell and Z 250 with enamel. Single application of a self-etch adhesive was a feasible and beneficial alternative to system adhesives for several classes of restorative. Marginal gap-widths in tooth cavities correlated more strongly with free shrinkage-strain magnitudes than with bond-strengths to tooth structure.

  20. Limitations of the planning organ at risk volume (PRV) concept.

    PubMed

    Stroom, Joep C; Heijmen, Ben J M

    2006-09-01

    Previously, we determined a planning target volume (PTV) margin recipe for geometrical errors in radiotherapy equal to M(T) = 2 Sigma + 0.7 sigma, with Sigma and sigma standard deviations describing systematic and random errors, respectively. In this paper, we investigated margins for organs at risk (OAR), yielding the so-called planning organ at risk volume (PRV). For critical organs with a maximum dose (D(max)) constraint, we calculated margins such that D(max) in the PRV is equal to the motion averaged D(max) in the (moving) clinical target volume (CTV). We studied margins for the spinal cord in 10 head-and-neck cases and 10 lung cases, each with two different clinical plans. For critical organs with a dose-volume constraint, we also investigated whether a margin recipe was feasible. For the 20 spinal cords considered, the average margin recipe found was: M(R) = 1.6 Sigma + 0.2 sigma with variations for systematic and random errors of 1.2 Sigma to 1.8 Sigma and -0.2 sigma to 0.6 sigma, respectively. The variations were due to differences in shape and position of the dose distributions with respect to the cords. The recipe also depended significantly on the volume definition of D(max). For critical organs with a dose-volume constraint, the PRV concept appears even less useful because a margin around, e.g., the rectum changes the volume in such a manner that dose-volume constraints stop making sense. The concept of PRV for planning of radiotherapy is of limited use. Therefore, alternative ways should be developed to include geometric uncertainties of OARs in radiotherapy planning.
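
    The two recipes quoted in this abstract can be applied directly; the short sketch below compares the PTV recipe M(T) = 2 Sigma + 0.7 sigma with the average PRV recipe M(R) = 1.6 Sigma + 0.2 sigma for illustrative error magnitudes (the Sigma and sigma values below are assumed, not data from the study).

```python
# Applying the margin recipes quoted above; the error magnitudes are
# illustrative values, not data from the study.
def ptv_margin(Sigma, sigma):            # M(T) = 2*Sigma + 0.7*sigma
    return 2.0 * Sigma + 0.7 * sigma

def prv_margin(Sigma, sigma):            # M(R) = 1.6*Sigma + 0.2*sigma (average recipe)
    return 1.6 * Sigma + 0.2 * sigma

Sigma, sigma = 2.0, 3.0                  # mm, systematic / random SDs (assumed)
print(f"PTV margin: {ptv_margin(Sigma, sigma):.1f} mm")
print(f"PRV margin: {prv_margin(Sigma, sigma):.1f} mm")
```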

  1. The effect of surface sealants with different filler content on microleakage of Class V resin composite restorations

    PubMed Central

    Hepdeniz, Ozge Kam; Temel, Ugur Burak; Ugurlu, Muhittin; Koskan, Ozgur

    2016-01-01

    Objective: Microleakage is still one of the most cited reasons for failure of resin composite restorations. Alternative methods to prevent microleakage have been investigated increasingly. The aim of this study is to evaluate the microleakage in Class V resin composite restorations with or without application of surface sealants with different filler content. Materials and Methods: Ninety-six cavities were prepared on the buccal and lingual surfaces with the coronal margins located in enamel and the cervical margins located in dentin. The cavities were restored with an adhesive system (Clearfil SE Bond, Kuraray, Tokyo, Japan) and resin composite (Clearfil Majesty ES-2, Kuraray, Tokyo, Japan). Teeth were stored in distilled water for 24 h and separated into four groups according to the surface sealants (Control, Fortify, Fortify Plus, and G-Coat Plus). The teeth were thermocycled (500 cycles, 5–55° C), immersed in basic fuchsine, sectioned, and analyzed for dye penetration using a stereomicroscope. The data were submitted to statistical analysis by the Kruskal–Wallis and Bonferroni–Dunn tests. Results: The results of the study indicated that there was minimal leakage at the enamel margins of all groups. Bonferroni–Dunn tests revealed that the Fortify and G-Coat Plus groups showed significantly less leakage than the Control group and the Fortify Plus group at dentin margins in lingual surfaces (P < 0.05). Conclusion: All surface sealants used in this study eliminated microleakage at enamel margins. Moreover, unfilled or nanofilled surface sealants were the most effective in decreasing the degree of marginal microleakage at dentin margins. However, the viscosity and penetrability of the sealants, in addition to their composition, should be considered with regard to sealing ability. PMID:27095890

  2. Robust Group Sparse Beamforming for Multicast Green Cloud-RAN With Imperfect CSI

    NASA Astrophysics Data System (ADS)

    Shi, Yuanming; Zhang, Jun; Letaief, Khaled B.

    2015-09-01

    In this paper, we investigate the network power minimization problem for the multicast cloud radio access network (Cloud-RAN) with imperfect channel state information (CSI). The key observation is that network power minimization can be achieved by adaptively selecting active remote radio heads (RRHs) via controlling the group-sparsity structure of the beamforming vector. However, this yields a non-convex combinatorial optimization problem, for which we propose a three-stage robust group sparse beamforming algorithm. In the first stage, a quadratic variational formulation of the weighted mixed l1/l2-norm is proposed to induce the group-sparsity structure in the aggregated beamforming vector, which indicates those RRHs that can be switched off. A perturbed alternating optimization algorithm is then proposed to solve the resultant non-convex group-sparsity inducing optimization problem by exploiting its convex substructures. In the second stage, we propose a PhaseLift technique based algorithm to solve the feasibility problem with a given active RRH set, which helps determine the active RRHs. Finally, the semidefinite relaxation (SDR) technique is adopted to determine the robust multicast beamformers. Simulation results demonstrate the convergence of the perturbed alternating optimization algorithm, as well as the effectiveness of the proposed algorithm in minimizing the network power consumption for multicast Cloud-RAN.
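
    The weighted mixed l1/l2-norm at the heart of the first stage is easy to compute once the aggregated beamformer is partitioned per RRH; the sketch below evaluates it and flags near-inactive RRHs. The weights, threshold, and random beamformer are assumptions for illustration, not part of the paper's algorithm.

```python
# Sketch of the weighted mixed l1/l2 group-sparsity measure used to decide
# which RRHs could be switched off; weights and threshold are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_rrh, ants_per_rrh, n_users = 4, 2, 3
# Aggregated beamformer: one complex block per RRH (antennas x users entries).
v = [rng.normal(size=ants_per_rrh * n_users) +
     1j * rng.normal(size=ants_per_rrh * n_users) for _ in range(n_rrh)]
v[1] *= 0.05                                   # a nearly inactive RRH

weights = np.ones(n_rrh)                       # e.g. relative fronthaul/power cost
group_norms = np.array([np.linalg.norm(b) for b in v])
mixed_l1_l2 = float(weights @ group_norms)     # sum_g w_g * ||v_g||_2

inactive = np.where(group_norms < 0.2 * group_norms.max())[0]
print("weighted mixed l1/l2 norm:", round(mixed_l1_l2, 3))
print("candidate RRHs to switch off:", inactive.tolist())
```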

  3. A Convex Formulation for Learning a Shared Predictive Structure from Multiple Tasks

    PubMed Central

    Chen, Jianhui; Tang, Lei; Liu, Jun; Ye, Jieping

    2013-01-01

    In this paper, we consider the problem of learning from multiple related tasks for improved generalization performance by extracting their shared structures. The alternating structure optimization (ASO) algorithm, which couples all tasks using a shared feature representation, has been successfully applied in various multitask learning problems. However, ASO is nonconvex and the alternating algorithm only finds a local solution. We first present an improved ASO formulation (iASO) for multitask learning based on a new regularizer. We then convert iASO, a nonconvex formulation, into a relaxed convex one (rASO). Interestingly, our theoretical analysis reveals that rASO finds a globally optimal solution to its nonconvex counterpart iASO under certain conditions. rASO can be equivalently reformulated as a semidefinite program (SDP), which is, however, not scalable to large datasets. We propose to employ the block coordinate descent (BCD) method and the accelerated projected gradient (APG) algorithm separately to find the globally optimal solution to rASO; we also develop efficient algorithms for solving the key subproblems involved in BCD and APG. The experiments on the Yahoo webpages datasets and the Drosophila gene expression pattern images datasets demonstrate the effectiveness and efficiency of the proposed algorithms and confirm our theoretical analysis. PMID:23520249

  4. Rational approximations to rational models: alternative algorithms for category learning.

    PubMed

    Sanborn, Adam N; Griffiths, Thomas L; Navarro, Daniel J

    2010-10-01

    Rational models of cognition typically consider the abstract computational problems posed by the environment, assuming that people are capable of optimally solving those problems. This differs from more traditional formal models of cognition, which focus on the psychological processes responsible for behavior. A basic challenge for rational models is thus explaining how optimal solutions can be approximated by psychological processes. We outline a general strategy for answering this question, namely to explore the psychological plausibility of approximation algorithms developed in computer science and statistics. In particular, we argue that Monte Carlo methods provide a source of rational process models that connect optimal solutions to psychological processes. We support this argument through a detailed example, applying this approach to Anderson's (1990, 1991) rational model of categorization (RMC), which involves a particularly challenging computational problem. Drawing on a connection between the RMC and ideas from nonparametric Bayesian statistics, we propose 2 alternative algorithms for approximate inference in this model. The algorithms we consider include Gibbs sampling, a procedure appropriate when all stimuli are presented simultaneously, and particle filters, which sequentially approximate the posterior distribution with a small number of samples that are updated as new data become available. Applying these algorithms to several existing datasets shows that a particle filter with a single particle provides a good description of human inferences.
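
    The single-particle idea can be illustrated with one "particle" performing sequential assignment in a Chinese-restaurant-process mixture over binary features, in the spirit of the particle-filter approximation described above; the prior strength, Beta-Bernoulli likelihood, and data below are assumptions for the sketch, not the RMC parameters.

```python
# Single-particle sequential clustering for a CRP mixture of Bernoulli
# features, illustrating "a particle filter with one particle"; values assumed.
import numpy as np

def assign_sequentially(X, alpha=1.0, beta=1.0, seed=0):
    rng = np.random.default_rng(seed)
    clusters = []                      # each: {'n': count, 'ones': feature counts}
    labels = []
    for x in X:
        probs = []
        for c in clusters:             # CRP prior x Beta-Bernoulli likelihood
            theta = (c['ones'] + beta) / (c['n'] + 2 * beta)
            lik = np.prod(np.where(x == 1, theta, 1 - theta))
            probs.append(c['n'] * lik)
        probs.append(alpha * 0.5 ** len(x))    # probability of a new cluster
        probs = np.array(probs) / np.sum(probs)
        z = int(rng.choice(len(probs), p=probs))
        if z == len(clusters):
            clusters.append({'n': 0, 'ones': np.zeros(len(x))})
        clusters[z]['n'] += 1
        clusters[z]['ones'] += x
        labels.append(z)
    return labels

X = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1]])
print(assign_sequentially(X))
```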

  5. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.

  6. Locational Marginal Pricing in the Campus Power System at the Power Distribution Level

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hao, Jun; Gu, Yi; Zhang, Yingchen

    2016-11-14

    In the development of the smart grid at the distribution level, the realization of real-time nodal pricing is one of the key challenges. The research work in this paper implements and studies the methodology of locational marginal pricing at the distribution level based on a real-world distribution power system. The pricing mechanism utilizes optimal power flow to calculate the corresponding distributional nodal prices. Both Direct Current Optimal Power Flow and Alternating Current Optimal Power Flow are utilized to calculate and analyze the nodal prices. The University of Denver campus power grid is used as the power distribution system test bed to demonstrate the pricing methodology.
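
    A much-simplified illustration of the marginal-pricing concept is a merit-order dispatch, where the system marginal price is the incremental cost of the last unit dispatched; congestion and losses, which are what make prices locational in an OPF, are deliberately omitted, and the unit data are assumed.

```python
# Toy merit-order dispatch: the system marginal price is the incremental cost
# of the marginal generator. Congestion/losses (which differentiate nodal
# prices in an OPF) are omitted; all data are illustrative.
def merit_order_price(units, load):
    """units: list of (capacity_MW, marginal_cost_$per_MWh); load in MW."""
    dispatched, price = 0.0, None
    for cap, cost in sorted(units, key=lambda u: u[1]):
        take = min(cap, load - dispatched)
        if take > 0:
            dispatched += take
            price = cost                     # the marginal unit sets the price
        if dispatched >= load:
            break
    return price

units = [(50, 20.0), (40, 35.0), (30, 80.0)]   # (MW, $/MWh)
print(merit_order_price(units, load=70))        # -> 35.0
```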

  7. African hot spot volcanism: small-scale convection in the upper mantle beneath cratons.

    PubMed

    King, S D; Ritsema, J

    2000-11-10

    Numerical models demonstrate that small-scale convection develops in the upper mantle beneath the transition of thick cratonic lithosphere and thin oceanic lithosphere. These models explain the location and geochemical characteristics of intraplate volcanos on the African and South American plates. They also explain the presence of relatively high seismic shear wave velocities (cold downwellings) in the mantle transition zone beneath the western margin of African cratons and the eastern margin of South American cratons. Small-scale, edge-driven convection is an alternative to plumes for explaining intraplate African and South American hot spot volcanism, and small-scale convection is consistent with mantle downwellings beneath the African and South American lithosphere.

  8. Impacts of Hospital Budget Limits in Rochester, New York

    PubMed Central

    Friedman, Bernard; Wong, Herbert S.

    1995-01-01

    During 1980-87, eight hospitals in the Rochester, New York area participated in an experimental program to limit total revenue. This article analyzes: increase of costs for Rochester hospitals; trends for inputs and compensation; and cash flow margins. Real expense per case grew annually by about 3 percent less in Rochester. However, after 1984, Medicare prospective payment had an effect of similar size outside Rochester. Some capital inputs to hospital care were restrained, as were wages and particularly benefits. The program did not generally raise or stabilize hospital revenue margins, while the ratio of cash flow to debt trended down. Financial stringency of this program relative to alternatives may have contributed to its end. PMID:10151889

  9. Impacts of hospital budget limits in Rochester, New York.

    PubMed

    Friedman, B; Wong, H S

    1995-01-01

    During 1980-87, eight hospitals in the Rochester, New York area participated in an experimental program to limit total revenue. This article analyzes: increase of costs for Rochester hospitals; trends for inputs and compensation; and cash flow margins. Real expense per case grew annually by about 3 percent less in Rochester. However, after 1984, Medicare prospective payment had an effect of similar size outside Rochester. Some capital inputs to hospital care were restrained, as were wages and particularly benefits. The program did not generally raise or stabilize hospital revenue margins, while the ratio of cash flow to debt trended down. Financial stringency of this program relative to alternatives may have contributed to its end.

  10. Queue and stack sorting algorithm optimization and performance analysis

    NASA Astrophysics Data System (ADS)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    Sorting is one of the basic operations in software development, and data structures courses cover a wide range of sorting algorithms. The performance of a sorting algorithm directly affects the efficiency of the software that uses it. Building on prior research on queue-based sorting, the authors study a sorting algorithm that combines a queue with a stack, alternating operations between the two storage structures and thereby avoiding many of the exchange and move operations required by traditional sorts. The existing approach is further improved and optimized, with the focus on reducing time complexity, and the time complexity, space complexity, and stability of the algorithm are analyzed. The experimental results show that the improvement is effective and increases the practicality of the algorithm.

  11. Engineering calculations for solving the orbital allotment problem

    NASA Technical Reports Server (NTRS)

    Reilly, C.; Walton, E. K.; Mount-Campbell, C.; Caldecott, R.; Aebker, E.; Mata, F.

    1988-01-01

    Four approaches for calculating downlink interferences for shaped-beam antennas are described. An investigation of alternative mixed-integer programming models for satellite synthesis is summarized. Plans for coordinating the various programs developed under this grant are outlined. Two procedures for ordering satellites to initialize the k-permutation algorithm are proposed. Results are presented for the k-permutation algorithms. Feasible solutions are found for 5 of the 6 problems considered. Finally, it is demonstrated that the k-permutation algorithm can be used to solve arc allotment problems.

  12. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
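
    The core idea, weighting the radiative response by a gamma distribution of optical depth instead of using the domain-mean value, can be shown with the direct-beam transmittance alone; the optical depth, shape parameter, and solar zenith angle below are assumptions, and the full two-stream solution is not reproduced.

```python
# Gamma-weighted transmittance vs. transmittance of the mean optical depth.
# Only the direct-beam term is shown; the full two-stream solution is omitted.
import numpy as np

mean_tau, nu, mu0 = 10.0, 2.0, 0.5          # mean optical depth, gamma shape, cos(SZA)
rng = np.random.default_rng(3)
taus = rng.gamma(shape=nu, scale=mean_tau / nu, size=200_000)

t_gamma = np.mean(np.exp(-taus / mu0))      # <exp(-tau/mu0)> over the gamma pdf
t_plane = np.exp(-mean_tau / mu0)           # exp(-<tau>/mu0), plane-parallel value
t_exact = (1.0 + mean_tau / (nu * mu0)) ** (-nu)   # closed form (gamma MGF)

print(f"gamma-weighted transmittance: {t_gamma:.4f} (analytic {t_exact:.4f})")
print(f"plane-parallel (mean tau):    {t_plane:.2e}")
```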

  13. The Links Between the Formation of the Gulf of Mexico and the Late Proterozoic to Mesozoic Tectonic Evolution of Southern North America

    NASA Astrophysics Data System (ADS)

    Keller, G. R.; Mickus, K. L.; Gurrola, H.; Harry, D. L.; Pulliam, J.

    2016-12-01

    A full understanding of the Gulf of Mexico's geologic history depends on understanding the tectonic framework along the southern margin of North America. The first step in establishing this framework was the breakup of Laurentia during the Early Paleozoic. At least one tectonic block rifted away from Laurentia's southern margin at this time, and is interpreted to be presently located in Argentina. Rifting resulted in a sinuous margin consisting of alternating ridge and transform segments extending from the southeastern U.S. across Texas into northern Mexico. The Paleozoic margin is associated with a clearly defined gravity high, and ends in the trend of this high are associated with intersections of ridge and transform segments along the margin. By the end of the Paleozoic, continental assembly via the Appalachian-Ouachita orogeny added new terranes to the eastern and southern margins of Laurentia and the assembly of the supercontinent Pangea was complete. Triassic through Late Jurassic opening of the Gulf of Mexico (GOM) created a complex margin, initially mobilizing several crustal blocks that were eventually left behind on the North American margin as seafloor spreading developed within the Gulf and the Yucatan block separated and rotated into its current position. Recent deep seismic reflection profiles along the northern margin of the GOM show that rifted continental crust extends offshore for 250 km before the oceanic crust of the Gulf of Mexico is encountered. Our group has worked to produce four integrated models of the lithospheric structure based upon reflection, refraction, and teleseismic data acquired across this margin integrated with gravity, magnetic, geologic and drilling data. These models define a complex zone of crustal thinning along the Gulf Coastal plain of Texas that is covered by up to 10km of primarily Cretaceous and younger sedimentary rocks. To the east along the coastal plain region, we have defined two large crustal blocks that were essentially left behind by the opening of the Gulf of Mexico.

  14. Deltaic sedimentation and stratigraphic sequences in post-orogenic basins, Western Greece

    NASA Astrophysics Data System (ADS)

    Piper, David J. W.; Kontopoulos, N.; Panagos, A. G.

    1988-03-01

    Post-orogenic basin sediments in the gulfs of Corinth, Patras and Amvrakia, on the western coast of Greece, occur in four tectonic settings: (1) true graben; (2) simple and complex half graben; (3) shallow half graben associated with the high-angle surface traces of thrust faults; and (4) marginal depressions adjacent to graben in which sediment loading has occurred. Late Quaternary facies distribution has been mapped in all three basins. Sea level changes, interacting with the apparently fortuitous elevation of horsts at basin margins, result in a complex alternation of well-mixed marine, stratified marine, brackish and lacustrine facies. Organic carbon contents of muds are high in all but the well-mixed marine facies. Basin margin slope is the most important determinant of facies distribution. The steep slopes of the Gulf of Corinth half graben result in fan-deltas which deliver coarse sediments in turbidity currents to the deep basin floor. Where gradients are reduced by marginal downwarping (Gulf of Patras) or on the gentle slopes of thrust-related half graben (Gulf of Amvrakia), coarse sediments are trapped on the subaerial delta or the coastal zone, and the fine sediment reaching the basin floor appears derived mainly from muddy plumes during winter floods.

  15. Intersectionality in psychotherapy: The experiences of an AfroLatinx queer immigrant.

    PubMed

    Adames, Hector Y; Chavez-Dueñas, Nayeli Y; Sharma, Shweta; La Roche, Martin J

    2018-03-01

    Culturally responsive and racially conscious psychotherapeutic work requires that therapists recognize the ways clients are impacted by their multiple marginalized identities and by systems of oppression (e.g., racism, ethnocentrism, sexism, heterosexism, and nativism). Attending exclusively to clients' marginalized identities (i.e., weak intersectionality) may drive therapists to only focus on internal, subjective, and emotional experiences, hence, missing the opportunity to consider and address how multiple sociostructural dimensions (i.e., strong intersectionality) may be impacting the client's presenting problems. Alternatively, focusing solely on the impact of sociostructural dimensions on the lives of clients may miss the more nuanced and variable individual personal experiences. In this article, we highlight the challenge of maintaining a culturally responsive and racially conscious stance when considering multiple marginalized identities, overlapping systemic inequities, and how both affect clients' lives and experiences. The case of an AfroLatinx queer immigrant is presented to illustrate some of the challenges and opportunities while simultaneously considering (a) the client's multiple marginalized identities, (b) the way clients are impacted by systemic oppression, and (c) integrating the client's personal experiences and narratives in psychotherapy. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  16. Sedimentary evolution of the Pliocene and Pleistocene Ebro margin, northeastern Spain

    USGS Publications Warehouse

    Alonso, B.; Field, M.E.; Gardner, J.V.; Maldonado, A.

    1990-01-01

    The Pliocene and Pleistocene deposits of the Spanish Ebro margin overlie a regional unconformity and contain a major disconformity. These unconformities, named Reflector M and Reflector G, mark the bases of two seismic sequences. Except for close to the upper boundary where a few small channel deposits are recognized, the lower sequence lacks channels. The upper sequence contains nine channel-levee complexes as well as base-of-slope aprons that represent the proximal part of the Valencia turbidite system. Diverse geometries and variations in seismic units distinguish shelf, slope, base-of-slope and basin-floor facies. Four events characterize the late Miocene to Pleistocene evolution of the Ebro margin: (a) formation of a paleodrainage system and an extensive erosional-to-depositional surface during the latest Miocene (Messinian), (b) deposition of hemipelagic units during the early Pliocene, (c) development of canyons during the late Pliocene to early Pleistocene, and (d) deposition of slope wedges, channel-levee complexes, and base-of-slope aprons alternating with hemipelagic deposition during the Pleistocene. Sea-level fluctuations influenced the evolution of the sedimentary sequences of the Ebro margin, but the major control was the sediment supply from the Ebro River. © 1990.

  17. Time evolution of shear-induced particle margination and migration in a cellular suspension

    NASA Astrophysics Data System (ADS)

    Qi, Qin M.; Shaqfeh, Eric S. G.

    2016-11-01

    The inhomogeneous center-of-mass distributions of red blood cells and platelets normal to the flow direction in small vessels play a significant role in hemostasis and drug delivery. Under pressure-driven flow in channels, the migration of deformable red blood cells at steady state is characterized by a cell-free or Fahraeus-Lindqvist layer near the vessel wall. Rigid particles such as platelets, however, "marginate" and thus develop a near-wall excess concentration. In order to evaluate the role of branching and design suitable microfluidic devices, it is important to investigate the time evolution of particle margination and migration from a non-equilibrium state and determine the corresponding entrance lengths. From a mechanistic point of view, deformability-induced hydrodynamic lift and shear-induced diffusion are essential mechanisms for the cross-flow migration and margination. In this talk, we determine the concentration distribution of red blood cells and platelets by solving coupled Boltzmann advection-diffusion equations for both species and explore their time evolution. We verify our model by comparing with large-scale, multi-cell simulations and experiments. Our Boltzmann collision theory serves as a fast alternative to large-scale simulations.

  18. Ambient Mass Spectrometry in Cancer Research.

    PubMed

    Takats, Z; Strittmatter, N; McKenzie, J S

    2017-01-01

    Ambient ionization mass spectrometry was developed as a sample preparation-free alternative to traditional MS-based workflows. Desorption electrospray ionization (DESI)-MS methods were demonstrated to allow the direct analysis of a broad range of samples including unaltered biological tissue specimens. In contrast to this advantageous feature, nowadays DESI-MS is almost exclusively used for sample preparation intensive mass spectrometric imaging (MSI) in the area of cancer research. As an alternative to MALDI, DESI-MSI offers matrix deposition-free experiment with improved signal in the lower (<500m/z) range. DESI-MSI enables the spatial mapping of tumor metabolism and has been broadly demonstrated to offer an alternative to frozen section histology for intraoperative tissue identification and surgical margin assessment. Rapid evaporative ionization mass spectrometry (REIMS) was developed exclusively for the latter purpose by the direct combination of electrosurgical devices and mass spectrometry. In case of the REIMS technology, aerosol particles produced by electrosurgical dissection are subjected to MS analysis, providing spectral information on the structural lipid composition of tissues. REIMS technology was demonstrated to give real-time information on the histological nature of tissues being dissected, deeming it an ideal tool for intraoperative tissue identification including surgical margin control. More recently, the method has also been used for the rapid lipidomic phenotyping of cancer cell lines as it was demonstrated in case of the NCI-60 cell line collection. © 2017 Elsevier Inc. All rights reserved.

  19. Experimental validation of the van Herk margin formula for lung radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ecclestone, Gillian; Heath, Emily; Bissonnette, Jean-Pierre

    2013-11-15

    Purpose: To validate the van Herk margin formula for lung radiation therapy using realistic dose calculation algorithms and respiratory motion modeling. The robustness of the margin formula against variations in lesion size, peak-to-peak motion amplitude, tissue density, treatment technique, and plan conformity was assessed, along with the margin formula assumption of a homogeneous dose distribution with perfect plan conformity. Methods: 3DCRT and IMRT lung treatment plans were generated within the ORBIT treatment planning platform (RaySearch Laboratories, Sweden) on 4DCT datasets of virtual phantoms. Random and systematic respiratory motion induced errors were simulated using deformable registration and dose accumulation tools available within ORBIT for simulated cases of varying lesion sizes, peak-to-peak motion amplitudes, tissue densities, and plan conformities. A detailed comparison between the margin formula dose profile model, the planned dose profiles, and penumbra widths was also conducted to test the assumptions of the margin formula. Finally, a correction to account for imperfect plan conformity was tested as well as a novel application of the margin formula that accounts for the patient-specific motion trajectory. Results: The van Herk margin formula ensured full clinical target volume coverage for all 3DCRT and IMRT plans of all conformities with the exception of small lesions in soft tissue. No dosimetric trends with respect to plan technique or lesion size were observed for the systematic and random error simulations. However, accumulated plans showed that plan conformity decreased with increasing tumor motion amplitude. When comparing dose profiles assumed in the margin formula model to the treatment plans, discrepancies in the low dose regions were observed for the random and systematic error simulations. However, the margin formula respected, in all experiments, the 95% dose coverage required for planning target volume (PTV) margin derivation, as defined by the ICRU; thus, suitable PTV margins were estimated. The penumbra widths calculated in lung tissue for each plan were found to be very similar to the 6.4 mm value assumed by the margin formula model. The plan conformity correction yielded inconsistent results which were largely affected by image and dose grid resolution, while the trajectory-modified PTV plans yielded a dosimetric benefit over the standard internal target volume approach with up to a 5% decrease in the V20 value. Conclusions: The margin formula was shown to be robust against variations in tumor size and motion, treatment technique, plan conformity, as well as low tissue density. This was validated by maintaining coverage of all of the derived PTVs by the 95% dose level, as required by the formal definition of the PTV. However, the assumption of perfect plan conformity in the margin formula derivation yields conservative margin estimation. Future modifications to the margin formula will require a correction for plan conformity. Plan conformity can also be improved by using the proposed trajectory-modified PTV planning approach. This proves especially beneficial for tumors with a large anterior–posterior component of respiratory motion.
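
    For reference, the widely quoted form of the van Herk recipe for 95% minimum CTV dose is M = 2.5 Sigma + 1.64 (sqrt(sigma^2 + sigma_p^2) - sigma_p), where sigma_p describes a Gaussian penumbra; the sketch below simply evaluates this formula for illustrative error magnitudes and an assumed sigma_p, and does not reproduce the study's motion simulations.

```python
# Evaluating the commonly quoted van Herk PTV margin recipe for 95% minimum
# CTV dose; the error SDs and penumbra sigma below are illustrative values.
from math import sqrt

def van_herk_margin(Sigma, sigma, sigma_p=3.2):
    """Sigma: systematic SD, sigma: random SD, sigma_p: penumbra SD (all mm)."""
    return 2.5 * Sigma + 1.64 * (sqrt(sigma ** 2 + sigma_p ** 2) - sigma_p)

for Sigma, sigma in [(2.0, 2.0), (3.0, 4.0)]:      # mm, assumed error SDs
    print(f"Sigma={Sigma}, sigma={sigma}: margin = {van_herk_margin(Sigma, sigma):.1f} mm")
```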

  20. Pricing hospital care: Global budgets and marginal pricing strategies.

    PubMed

    Sutherland, Jason M

    2015-08-01

    The Canadian province of British Columbia (BC) is adding financial incentives to increase the volume of surgeries provided by hospitals using a marginal pricing approach. The objective of this study is to calculate marginal costs of surgeries based on assumptions regarding hospitals' availability of labor and equipment. This study is based on observational clinical, administrative and financial data generated by hospitals. Hospital inpatient and outpatient discharge summaries from the province are linked with detailed activity-based costing information, stratified by assigned case mix categorizations. To reflect a range of operating constraints governing hospitals' ability to increase their volume of surgeries, a number of scenarios are proposed. Under these scenarios, estimated marginal costs are calculated and compared to prices being offered as incentives to hospitals. Existing data can be used to support alternative strategies for pricing hospital care. Prices for inpatient surgeries do not generate positive margins under a range of operating scenarios. Hip and knee surgeries generate surpluses for hospitals even under the most costly labor conditions and are expected to generate additional volume. In health systems that wish to fine-tune financial incentives, setting prices that create incentives for additional volume should reflect knowledge of hospitals' underlying cost structures. Possible implications of mis-pricing include no response to the incentives or uneven increases in supply. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  1. Importance of flexure in response to sedimentation and erosion along the US Atlantic passive margin in reconciling sea level change and paleoshorelines

    NASA Astrophysics Data System (ADS)

    Moucha, R.; Ruetenik, G.; de Boer, B.

    2017-12-01

    Reconciling elevations of paleoshorelines along the US Atlantic passive margin with estimates of eustatic sea level has long posed a challenge. Discrepancies between shoreline elevation and sea level have been attributed to combinations of tectonics, glacial isostatic adjustment, mantle convection, gravitation and/or errors, for example, in the inference of eustatic sea level from the marine 18O record. Herein we present a numerical model of landscape evolution combined with sea level change and solid Earth deformations to demonstrate the importance of flexural effects in response to erosion and sedimentation along the US Atlantic passive margin. We quantify these effects using two different temporal models. One reconciles the Orangeburg scarp, a well-documented 3.5 million-year-old mid-Pliocene shoreline, with a 15 m mid-Pliocene sea level above present-day (Moucha and Ruetenik, 2017). The other model focuses on the evolution of the South Carolina and northern Georgia margin since MIS 11 (~400 Ka) using a fully coupled ice sheet, sea level and solid Earth model (de Boer et al., 2014), while relating our results to a series of enigmatic sea level high-stand markers. de Boer, B., Stocci, P., and van de Wal, R. (2014). A fully coupled 3-d ice-sheet-sea-level model: algorithm and applications. Geoscientific Model Development, 7:2141-2156. Moucha, R. and Ruetenik, G. A. (2017). Interplay between dynamic topography and flexure along the US Atlantic passive margin: Insights from landscape evolution modeling. Global and Planetary Change, 149: 72-78.

  2. Efficiency of goal-oriented communicating agents in different graph topologies: A study with Internet crawlers

    NASA Astrophysics Data System (ADS)

    Lőrincz, András; Lázár, Katalin A.; Palotai, Zsolt

    2007-05-01

    To what extent does communication make a goal-oriented community efficient in different topologies? In order to gain insight into this problem, we study the influence of the learning method as well as that of the topology of the environment on the communication efficiency of crawlers in quest of novel information on different topics on the Internet. Individual crawlers employ selective learning, function approximation-based reinforcement learning (RL), and their combination. Selective learning, in effect, modifies the starting URL lists of the crawlers, whilst RL alters the URL orderings. Real data have been collected from the web, and scale-free worlds, scale-free small world (SFSW), and random world environments (RWEs) have been created by link reorganization. In our previous experiments [Zs. Palotai, Cs. Farkas, A. Lőrincz, Is selection optimal in scale-free small worlds?, ComPlexUs 3 (2006) 158-168], the crawlers searched for novel, genuine documents and direct communication was not possible. Herein, our finding is reproduced: selective learning performs the best and RL the worst in SFSW, whereas the combined approach, i.e., selective learning coupled with RL, is the best, by a slight margin, in scale-free worlds. This effect is demonstrated to be more pronounced when the crawlers search for different topic-specific documents: the relative performance of the combined learning algorithm improves in all worlds, i.e., in SFSW, in SFW, and in RWE. If the tasks are more complex and the work sharing is enforced by the environment, then the combined learning algorithm becomes at least equal, even superior, to both the selective and the RL algorithms in most cases, irrespective of the efficiency of communication. Furthermore, communication improves the performance by a large margin, and adaptive communication is advantageous in the majority of the cases.

  3. Systematically Differentiating Functions for Alternatively Spliced Isoforms through Integrating RNA-seq Data

    PubMed Central

    Menon, Rajasree; Wen, Yuchen; Omenn, Gilbert S.; Kretzler, Matthias; Guan, Yuanfang

    2013-01-01

    Integrating large-scale functional genomic data has significantly accelerated our understanding of gene functions. However, no algorithm has been developed to differentiate functions for isoforms of the same gene using high-throughput genomic data. This is because standard supervised learning requires ‘ground-truth’ functional annotations, which are lacking at the isoform level. To address this challenge, we developed a generic framework that interrogates public RNA-seq data at the transcript level to differentiate functions for alternatively spliced isoforms. For a specific function, our algorithm identifies the ‘responsible’ isoform(s) of a gene and generates classifying models at the isoform level instead of at the gene level. Through cross-validation, we demonstrated that our algorithm is effective in assigning functions to genes, especially the ones with multiple isoforms, and robust to gene expression levels and removal of homologous gene pairs. We identified genes in the mouse whose isoforms are predicted to have disparate functionalities and experimentally validated the ‘responsible’ isoforms using data from mammary tissue. With protein structure modeling and experimental evidence, we further validated the predicted isoform functional differences for the genes Cdkn2a and Anxa6. Our generic framework is the first to predict and differentiate functions for alternatively spliced isoforms, instead of genes, using genomic data. It is extendable to any base machine learner and other species with alternatively spliced isoforms, and shifts the current gene-centered function prediction to isoform-level predictions. PMID:24244129

  4. Simulated tempering based on global balance or detailed balance conditions: Suwa-Todo, heat bath, and Metropolis algorithms.

    PubMed

    Mori, Yoshiharu; Okumura, Hisashi

    2015-12-05

    Simulated tempering (ST) is a useful method to enhance the sampling of molecular simulations. When ST is used, the Metropolis algorithm, which satisfies the detailed balance condition, is usually applied to calculate the transition probability. Recently, an alternative method that satisfies the global balance condition instead of the detailed balance condition has been proposed by Suwa and Todo. In this study, an ST method with the Suwa-Todo algorithm is proposed. Molecular dynamics simulations with ST are performed with three algorithms (the Metropolis, heat bath, and Suwa-Todo algorithms) to calculate the transition probability. Among the three algorithms, the Suwa-Todo algorithm yields the highest acceptance ratio and the shortest autocorrelation time. This suggests that sampling by an ST simulation with the Suwa-Todo algorithm is the most efficient. In addition, because the acceptance ratio of the Suwa-Todo algorithm is higher than that of the Metropolis algorithm, the number of temperature states can be reduced by 25% for the Suwa-Todo algorithm when compared with the Metropolis algorithm. © 2015 Wiley Periodicals, Inc.
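
    For orientation, the conventional Metropolis update for a temperature move in simulated tempering accepts with probability min(1, exp(-(beta_new - beta_old) E + (g_new - g_old))), where the g values are the per-temperature weight factors; the sketch below implements only this conventional scheme (not the Suwa-Todo update), and the temperatures, weights, and energy are illustrative.

```python
# Metropolis acceptance for a simulated-tempering temperature move.
# A minimal sketch of the conventional scheme, not the Suwa-Todo update;
# temperatures, weight factors g, and the energy value are illustrative.
import math
import random

def attempt_temperature_move(E, m, betas, g, rng=random):
    """E: current potential energy; m: index of the current temperature."""
    trial = m + rng.choice([-1, 1])
    if trial < 0 or trial >= len(betas):
        return m                                   # no neighbor in that direction
    log_acc = -(betas[trial] - betas[m]) * E + (g[trial] - g[m])
    if rng.random() < math.exp(min(0.0, log_acc)):
        return trial                               # move accepted
    return m                                       # move rejected

betas = [1.0 / (0.0019872 * T) for T in (300, 330, 363)]   # 1/(kB T), kcal/mol
g = [0.0, 1.5, 2.8]                                         # assumed weight factors
print(attempt_temperature_move(E=-120.0, m=0, betas=betas, g=g))
```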

  5. The effect of orthology and coregulation on detecting regulatory motifs.

    PubMed

    Storms, Valerie; Claeys, Marleen; Sanchez, Aminael; De Moor, Bart; Verstuyf, Annemieke; Marchal, Kathleen

    2010-02-03

    Computational de novo discovery of transcription factor binding sites is still a challenging problem. The growing number of sequenced genomes allows integrating orthology evidence with coregulation information when searching for motifs. Moreover, the more advanced motif detection algorithms explicitly model the phylogenetic relatedness between the orthologous input sequences and thus should be well adapted towards using orthologous information. In this study, we evaluated the conditions under which complementing coregulation with orthologous information improves motif detection for the class of probabilistic motif detection algorithms with an explicit evolutionary model. We designed datasets (real and synthetic) covering different degrees of coregulation and orthologous information to test how well Phylogibbs and Phylogenetic sampler, as representatives of the motif detection algorithms with evolutionary model performed as compared to MEME, a more classical motif detection algorithm that treats orthologs independently. Under certain conditions detecting motifs in the combined coregulation-orthology space is indeed more efficient than using each space separately, but this is not always the case. Moreover, the difference in success rate between the advanced algorithms and MEME is still marginal. The success rate of motif detection depends on the complex interplay between the added information and the specificities of the applied algorithms. Insights in this relation provide information useful to both developers and users. All benchmark datasets are available at http://homes.esat.kuleuven.be/~kmarchal/Supplementary_Storms_Valerie_PlosONE.

  6. The Effect of Orthology and Coregulation on Detecting Regulatory Motifs

    PubMed Central

    Storms, Valerie; Claeys, Marleen; Sanchez, Aminael; De Moor, Bart; Verstuyf, Annemieke; Marchal, Kathleen

    2010-01-01

    Background Computational de novo discovery of transcription factor binding sites is still a challenging problem. The growing number of sequenced genomes allows integrating orthology evidence with coregulation information when searching for motifs. Moreover, the more advanced motif detection algorithms explicitly model the phylogenetic relatedness between the orthologous input sequences and thus should be well adapted towards using orthologous information. In this study, we evaluated the conditions under which complementing coregulation with orthologous information improves motif detection for the class of probabilistic motif detection algorithms with an explicit evolutionary model. Methodology We designed datasets (real and synthetic) covering different degrees of coregulation and orthologous information to test how well Phylogibbs and Phylogenetic sampler, as representatives of the motif detection algorithms with evolutionary model performed as compared to MEME, a more classical motif detection algorithm that treats orthologs independently. Results and Conclusions Under certain conditions detecting motifs in the combined coregulation-orthology space is indeed more efficient than using each space separately, but this is not always the case. Moreover, the difference in success rate between the advanced algorithms and MEME is still marginal. The success rate of motif detection depends on the complex interplay between the added information and the specificities of the applied algorithms. Insights in this relation provide information useful to both developers and users. All benchmark datasets are available at http://homes.esat.kuleuven.be/~kmarchal/Supplementary_Storms_Valerie_PlosONE. PMID:20140085

  7. Making adjustments to event annotations for improved biological event extraction.

    PubMed

    Baek, Seung-Cheol; Park, Jong C

    2016-09-16

    Current state-of-the-art approaches to biological event extraction train statistical models in a supervised manner on corpora annotated with event triggers and event-argument relations. Inspecting such corpora, we observe that there is ambiguity in the span of event triggers (e.g., "transcriptional activity" vs. 'transcriptional'), leading to inconsistencies across event trigger annotations. Such inconsistencies make it quite likely that similar phrases are annotated with different spans of event triggers, suggesting the possibility that a statistical learning algorithm misses an opportunity for generalizing from such event triggers. We anticipate that adjustments to the span of event triggers to reduce these inconsistencies would meaningfully improve the present performance of event extraction systems. In this study, we look into this possibility with the corpora provided by the 2009 BioNLP shared task as a proof of concept. We propose an Informed Expectation-Maximization (EM) algorithm, which trains models using the EM algorithm with a posterior regularization technique, which consults the gold-standard event trigger annotations in a form of constraints. We further propose four constraints on the possible event trigger annotations to be explored by the EM algorithm. The algorithm is shown to outperform the state-of-the-art algorithm on the development corpus in a statistically significant manner and on the test corpus by a narrow margin. The analysis of the annotations generated by the algorithm shows that there are various types of ambiguity in event annotations, even though they could be small in number.

  8. Design Mining Interacting Wind Turbines.

    PubMed

    Preen, Richard J; Bull, Larry

    2016-01-01

    An initial study has recently been presented of surrogate-assisted evolutionary algorithms used to design vertical-axis wind turbines wherein candidate prototypes are evaluated under fan-generated wind conditions after being physically instantiated by a 3D printer. Unlike other approaches, such as computational fluid dynamics simulations, no mathematical formulations were used and no model assumptions were made. This paper extends that work by exploring alternative surrogate modelling and evolutionary techniques. The accuracy of various modelling algorithms used to estimate the fitness of evaluated individuals from the initial experiments is compared. The effect of temporally windowing surrogate model training samples is explored. A surrogate-assisted approach based on an enhanced local search is introduced; and alternative coevolution collaboration schemes are examined.

  9. Regional Variability and Uncertainty of Electric Vehicle Life Cycle CO₂ Emissions across the United States.

    PubMed

    Tamayao, Mili-Ann M; Michalek, Jeremy J; Hendrickson, Chris; Azevedo, Inês M L

    2015-07-21

    We characterize regionally specific life cycle CO2 emissions per mile traveled for plug-in hybrid electric vehicles (PHEVs) and battery electric vehicles (BEVs) across the United States under alternative assumptions for regional electricity emission factors, regional boundaries, and charging schemes. We find that estimates based on marginal vs average grid emission factors differ by as much as 50% (using North American Electric Reliability Corporation (NERC) regional boundaries). Use of state boundaries versus NERC region boundaries results in estimates that differ by as much as 120% for the same location (using average emission factors). We argue that consumption-based marginal emission factors are conceptually appropriate for evaluating the emissions implications of policies that increase electric vehicle sales or use in a region. We also examine generation-based marginal emission factors to assess robustness. Using these two estimates of NERC region marginal emission factors, we find the following: (1) delayed charging (i.e., starting at midnight) leads to higher emissions in most cases due largely to increased coal in the marginal generation mix at night; (2) the Chevrolet Volt has higher expected life cycle emissions than the Toyota Prius hybrid electric vehicle (the most efficient U.S. gasoline vehicle) across the U.S. in nearly all scenarios; (3) the Nissan Leaf BEV has lower life cycle emissions than the Prius in the western U.S. and in Texas, but the Prius has lower emissions in the northern Midwest regardless of assumed charging scheme and marginal emissions estimation method; (4) in other regions the lowest emitting vehicle depends on charge timing and emission factor estimation assumptions.
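
    The arithmetic behind the marginal-versus-average comparison is straightforward: per-mile grid emissions are the vehicle's electricity consumption per mile times the chosen emission factor. The numbers below are illustrative placeholders, not the regional values estimated in the study.

```python
# How the choice of emission factor changes estimated per-mile CO2 for an EV.
# All numbers are illustrative, not the regional values from the study.
kwh_per_mile = 0.30                      # assumed EV consumption incl. charging losses
average_ef = 0.45                        # kg CO2 per kWh (assumed regional average)
marginal_ef = 0.68                       # kg CO2 per kWh (assumed regional marginal)

for name, ef in [("average", average_ef), ("marginal", marginal_ef)]:
    print(f"{name:8s}: {kwh_per_mile * ef * 1000:.0f} g CO2/mile")

gasoline_kg_per_gallon = 8.9             # direct combustion only (approximate)
mpg = 50                                 # e.g. an efficient hybrid
print(f"gasoline: {gasoline_kg_per_gallon / mpg * 1000:.0f} g CO2/mile")
```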

  10. Pulse shape discrimination of Cs2LiYCl6:Ce3+ detectors at high count rate based on triangular and trapezoidal filters

    NASA Astrophysics Data System (ADS)

    Wen, Xianfei; Enqvist, Andreas

    2017-09-01

    Cs2LiYCl6:Ce3+ (CLYC) detectors have demonstrated the capability to simultaneously detect γ-rays and thermal and fast neutrons with medium energy resolution, reasonable detection efficiency, and high pulse shape discrimination performance. A disadvantage of CLYC detectors is their long scintillation decay times, which cause pulse pile-up at moderate input count rates. Pulse processing algorithms were developed based on triangular and trapezoidal filters to discriminate between neutrons and γ-rays at high count rate. The algorithms were first tested using low-rate data, where they exhibit pulse shape discrimination performance comparable to that of the charge comparison method. They were then evaluated at high count rate: neutrons and γ-rays were adequately identified with high throughput at rates of up to 375 kcps. The triangular-filter algorithm exhibits marginally higher discrimination capability than the trapezoidal-filter algorithm at both low and high rates. The algorithms have low computational complexity and are executable on an FPGA in real time. They are also suitable for other radiation detectors whose pulses pile up at high rate owing to long scintillation decay times.
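
    The sketch below builds a trapezoidal (or, with a zero-length flat top, triangular) shaper by convolving two boxcar windows and forms a crude short-versus-long shaping ratio as a stand-in for the pulse shape discrimination figure. The filter lengths and the synthetic two-component pulse are assumptions for illustration, not the authors' FPGA implementation.

    import numpy as np

    def trapezoidal_filter(pulse, rise, flat):
        """FIR trapezoidal shaper built by convolving two boxcar (moving-average)
        windows; with flat == 0 the impulse response is triangular."""
        box1 = np.ones(rise) / rise
        box2 = np.ones(rise + flat) / (rise + flat)
        return np.convolve(np.convolve(pulse, box1, mode="full"), box2, mode="full")

    def psd_ratio(pulse, short_len=20, long_len=200):
        """Crude discrimination figure: ratio of the peak of a short shaper to the
        peak of a long shaper, which responds to the fast/slow decay mix."""
        return (trapezoidal_filter(pulse, short_len, 0).max()
                / trapezoidal_filter(pulse, long_len, 0).max())

    # Synthetic two-component decay pulse (time constants are placeholders).
    t = np.arange(2000)
    pulse = 0.7 * np.exp(-t / 50.0) + 0.3 * np.exp(-t / 1000.0)
    print(psd_ratio(pulse))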

  11. Probabilistic fusion of stereo with color and contrast for bilayer segmentation.

    PubMed

    Kolmogorov, Vladimir; Criminisi, Antonio; Blake, Andrew; Cross, Geoffrey; Rother, Carsten

    2006-09-01

    This paper describes models and algorithms for the real-time segmentation of foreground from background layers in stereo video sequences. Automatic separation of layers from color/contrast or from stereo alone is known to be error-prone. Here, color, contrast, and stereo matching information are fused to infer layers accurately and efficiently. The first algorithm, Layered Dynamic Programming (LDP), solves stereo in an extended six-state space that represents both foreground/background layers and occluded regions. The stereo-match likelihood is then fused with a contrast-sensitive color model that is learned on-the-fly and stereo disparities are obtained by dynamic programming. The second algorithm, Layered Graph Cut (LGC), does not directly solve stereo. Instead, the stereo match likelihood is marginalized over disparities to evaluate foreground and background hypotheses and then fused with a contrast-sensitive color model like the one used in LDP. Segmentation is solved efficiently by ternary graph cut. Both algorithms are evaluated with respect to ground truth data and found to have similar performance, substantially better than either stereo or color/contrast alone. However, their characteristics with respect to computational efficiency are rather different. The algorithms are demonstrated in the application of background substitution and shown to give good quality composite video output.
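
    The following fragment illustrates the marginalization step of the LGC-style approach: the stereo match likelihood is summed over disparities under a foreground and a background disparity prior to yield per-pixel layer evidence. The array shapes and priors are assumptions, and the fusion with the contrast-sensitive color model and the graph-cut solve are omitted.

    import numpy as np

    def layer_evidence(match_likelihood, disparity_prior_fg, disparity_prior_bg):
        """match_likelihood: (H, W, D) stereo match likelihoods per disparity;
        the priors are length-D arrays. Marginalize over disparity under each
        layer hypothesis and return a per-pixel foreground probability."""
        fg = (match_likelihood * disparity_prior_fg).sum(axis=-1)  # sum_d p(m|d) p(d|fg)
        bg = (match_likelihood * disparity_prior_bg).sum(axis=-1)  # sum_d p(m|d) p(d|bg)
        return fg / (fg + bg + 1e-12)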

  12. TU-D-202-03: Gating Is the Best ITV Killer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Low, D.

    Respiratory motion has long been recognized as an important factor affecting the precision of radiotherapy. After the introduction of 4D CT to visualize respiratory motion in 3D, the internal target volume (ITV) has been widely adopted as a simple method to take the motion into account in treatment planning and delivery. The ITV is generated as the union of the CTVs as the patient goes through the respiratory cycle. Many issues have been identified with the ITV. In this session three alternatives to the ITV will be discussed: 1) an alternative motion-inclusive approach with better imaging and smaller margins, called mid-position CT; 2) the tracking approach; and 3) the gating approach. The following topics will be addressed by Marcel van Herk ("Is ITV the correct motion encompassing strategy"): magnitude of respiratory motion, effect of motion on radiotherapy, motion encompassing strategies, and software solutions to assist in motion encompassing strategies. Then Paul Keall ("Make margins simple: Use real-time target tracking") will discuss tracking: clinical drivers for tracking, current clinical status of tumor tracking, future tumor tracking technology, and margin challenges with and without tracking. Finally, Daniel Low will discuss gating ("Gating is the best ITV killer"): why ITV in the first place, requirements for planning, requirements at the machine, and benefits and costs. The session will end with a discussion and live demo of motion simulation software to illustrate the issues and explain the relative benefit and appropriate uses of the three methods. Learning Objectives: Explain the 4D imaging and treatment planning process. Summarize the various approaches to deal with respiratory motion during radiotherapy. Discuss the tradeoffs involved when choosing one of the three discussed approaches. Explain in which situation each method is the best choice. Research is partly funded by Elekta Oncology Systems and the Dutch Cancer Foundation; M. van Herk: part of the research was funded by Elekta Oncology Systems and the Dutch Cancer Foundation.

  13. Physical activity in low-income postpartum women.

    PubMed

    Wilkinson, Susan; Huang, Chiu-Mieh; Walker, Lorraine O; Sterling, Bobbie Sue; Kim, Minseong

    2004-01-01

    To validate the 7-day physical activity recall (PAR), including alternative PAR scoring algorithms, using pedometer readings with low-income postpartum women, and to describe physical activity patterns of a low-income population of postpartum women. Forty-four women (13 African American, 19 Hispanic, and 12 White) from the Austin New Mothers Study (ANMS) were interviewed at 3 months postpartum. Data were scored alternatively according to the Blair (sitting treated as light activity) and Welk (sitting excluded from light activity and treated as rest) algorithms. Step counts based on 3 days of wearing pedometers served as the validation measure. Using the Welk algorithm, PAR components significantly correlated with step counts were: minutes spent in light activity, total activity (sum of light to very hard activity), and energy expenditure. Minutes spent in sitting were negatively correlated with step counts. No PAR component activities derived with the Blair algorithm were significantly related to step counts. The largest amount of active time was spent in light activity: 384.4 minutes with the Welk algorithm. Mothers averaged fewer than 16 minutes per day in moderate or high intensity activity. Step counts measured by pedometers averaged 6,262 (SD = 2,712) per day. The findings indicate support for the validity of the PAR as a measure of physical activity with low-income postpartum mothers when scored according to the Welk algorithm. On average, low-income postpartum women in this study did not meet recommendations for amount of moderate or high intensity physical activity.
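
    The two scoring rules can be summarized in a few lines: a Blair-style rule folds sitting minutes into light activity, while a Welk-style rule treats sitting as rest before energy expenditure is computed. The MET values and category names below are illustrative assumptions, not the study's scoring tables.

    # Toy comparison of the two PAR scoring rules described above.
    MET = {"rest": 1.0, "light": 1.5, "moderate": 4.0, "hard": 6.0, "very_hard": 10.0}

    def par_energy_expenditure(minutes_by_category, weight_kg, sitting_as_light):
        minutes = dict(minutes_by_category)
        sitting = minutes.pop("sitting", 0)
        if sitting_as_light:
            minutes["light"] = minutes.get("light", 0) + sitting   # Blair-style scoring
        else:
            minutes["rest"] = minutes.get("rest", 0) + sitting     # Welk-style scoring
        # kcal ≈ MET * body mass (kg) * hours
        return sum(MET[c] * weight_kg * (m / 60.0) for c, m in minutes.items())

    day = {"sitting": 420, "light": 380, "moderate": 15, "rest": 600}
    print(par_energy_expenditure(day, 65.0, sitting_as_light=True))
    print(par_energy_expenditure(day, 65.0, sitting_as_light=False))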

  14. Fluorescent quantification of terazosin hydrochloride content in human plasma and tablets using second-order calibration based on both parallel factor analysis and alternating penalty trilinear decomposition.

    PubMed

    Zou, Hong-Yan; Wu, Hai-Long; OuYang, Li-Qun; Zhang, Yan; Nie, Jin-Fang; Fu, Hai-Yan; Yu, Ru-Qin

    2009-09-14

    Two second-order calibration methods, based on parallel factor analysis (PARAFAC) and the alternating penalty trilinear decomposition (APTLD) method, have been utilized for the direct determination of terazosin hydrochloride (THD) in human plasma samples, coupled with excitation-emission matrix fluorescence spectroscopy. Meanwhile, the two algorithms, combined with standard addition procedures, have been applied to the determination of terazosin hydrochloride in tablets, and the results were validated by high-performance liquid chromatography with fluorescence detection. These second-order calibrations all adequately exploited the second-order advantage. For human plasma samples, the average recoveries by the PARAFAC and APTLD algorithms with a factor number of 2 (N=2) were 100.4 ± 2.7% and 99.2 ± 2.4%, respectively. The accuracy of the two algorithms was also evaluated through elliptical joint confidence region (EJCR) tests and t-tests. Both algorithms gave accurate results, with the performance of APTLD being slightly better than that of PARAFAC. Figures of merit, such as sensitivity (SEN), selectivity (SEL) and limit of detection (LOD), were also calculated to compare the performances of the two strategies. For tablets, the average concentrations of THD were 63.5 and 63.2 ng mL⁻¹ using the PARAFAC and APTLD algorithms, respectively. The accuracy was evaluated by t-test, and both algorithms again gave accurate results.

  15. A Comparison of Techniques for Scheduling Earth-Observing Satellites

    NASA Technical Reports Server (NTRS)

    Globus, Al; Crawford, James; Lohn, Jason; Pryor, Anna

    2004-01-01

    Scheduling observations by coordinated fleets of Earth Observing Satellites (EOS) involves large search spaces, complex constraints and poorly understood bottlenecks, conditions where evolutionary and related algorithms are often effective. However, there are many such algorithms and the best one to use is not clear. Here we compare multiple variants of the genetic algorithm with stochastic hill climbing, simulated annealing, squeaky wheel optimization and iterated sampling on ten realistically sized EOS scheduling problems. Schedules are represented by a permutation (non-temporal ordering) of the observation requests. A simple deterministic scheduler assigns times and resources to each observation request in the order indicated by the permutation, discarding those that violate the constraints created by previously scheduled observations. Simulated annealing performs best. Random mutation outperforms a more 'intelligent' mutator. Furthermore, the best mutator, by a small margin, was a novel approach we call temperature-dependent random sampling, which makes large changes in the early stages of evolution and smaller changes towards the end of the search.
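
    A minimal sketch of this permutation-based scheme follows: a deterministic greedy scheduler consumes requests in permutation order and discards those that conflict with already scheduled observations, while simulated annealing with a swap mutation searches over permutations. The conflict representation, cooling schedule, and objective (number of scheduled requests) are assumptions, not the study's simulator.

    import math, random

    def greedy_schedule(permutation, conflicts):
        """Keep each request that does not conflict with one already scheduled
        (a stand-in for the paper's time and resource assignment step)."""
        scheduled = []
        for i in permutation:
            if all((i, j) not in conflicts and (j, i) not in conflicts for j in scheduled):
                scheduled.append(i)
        return scheduled

    def anneal(n_requests, conflicts, steps=5000, t0=1.0):
        perm = list(range(n_requests))
        random.shuffle(perm)
        score = lambda p: len(greedy_schedule(p, conflicts))
        best = perm[:]
        for step in range(steps):
            t = t0 * (1.0 - step / steps) + 1e-9          # linear cooling schedule
            cand = perm[:]
            i, j = random.sample(range(n_requests), 2)    # simple swap mutation
            cand[i], cand[j] = cand[j], cand[i]
            delta = score(cand) - score(perm)
            if delta >= 0 or random.random() < math.exp(delta / t):
                perm = cand
            if score(perm) > score(best):
                best = perm[:]
        return greedy_schedule(best, conflicts)

    # Toy usage: ten requests, a few mutually exclusive pairs.
    print(anneal(10, {(0, 1), (2, 3), (2, 4)}))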

  16. A Fuel-Efficient Conflict Resolution Maneuver for Separation Assurance

    NASA Technical Reports Server (NTRS)

    Bowe, Aisha Ruth; Santiago, Confesor

    2012-01-01

    Automated separation assurance algorithms are envisioned to play an integral role in accommodating the forecasted increase in demand of the National Airspace System. Developing a robust, reliable, air traffic management system involves safely increasing efficiency and throughput while considering the potential impact on users. This experiment seeks to evaluate the benefit of augmenting a conflict detection and resolution algorithm to consider a fuel efficient, Zero-Delay Direct-To maneuver, when resolving a given conflict based on either minimum fuel burn or minimum delay. A total of twelve conditions were tested in a fast-time simulation conducted in three airspace regions with mixed aircraft types and light weather. Results show that inclusion of this maneuver has no appreciable effect on the ability of the algorithm to safely detect and resolve conflicts. The results further suggest that enabling the Zero-Delay Direct-To maneuver significantly increases the cumulative fuel burn savings when choosing resolution based on minimum fuel burn while marginally increasing the average delay per resolution.

  17. The Effects of Studying Abroad and Studying Sustainability on Students' Global Perspectives

    ERIC Educational Resources Information Center

    Tarrant, Michael A.; Rubin, Donald L.; Stoner, Lee

    2015-01-01

    Study abroad has shifted from a marginal opportunity for higher education students to a core strategy of U.S. colleges and universities, considered integral in the mission to globalize the academic environment (Sutton, Miller & Rubin, 2007). The assumption is that a broad set of efforts to expose students to alternate ways of viewing the world…

  18. Rating the YouTube Indian: Viewer Ratings of Native American Portrayals on a Viral Video Site

    ERIC Educational Resources Information Center

    Kopacz, Maria A.; Lawton, Bessie Lee

    2011-01-01

    Online outlets for user-generated content (UGC) like YouTube have created environments for alternative depictions of marginalized groups, as UGC can be contributed by anyone with basic technology access. Preliminary findings on UGC relating to Native Americans confirm some favorable departures from the distortions prevalent in the old media. The…

  19. "Making the Margins Chaos": Romantic and Antiromantic Readings of La Maravilla

    ERIC Educational Resources Information Center

    Carlston, Erin G.

    2005-01-01

    Alfredo Vea Jr.'s 1993 novel "La Maravilla" depicts a 1950s squatter community on the edge of Phoenix. The community, Buckeye Road, questions notions of U.S. American identity as middle-class, WASP, and heterosexual. Buckeye can easily be viewed as a romanticized utopia that offers an alternative to consumer capitalism, urban sprawl, the…

  20. Youth Apprenticeships in Canada: On Their Inferior Status Despite Skilled Labour Shortages

    ERIC Educational Resources Information Center

    Lehmann, Wolfgang; Taylor, Alison; Wright, Laura

    2014-01-01

    In Canada, youth apprenticeships have been promoted as an educational alternative that leads to the development of valuable skills, allows for the opportunity to earn an income while learning and helps youth to gain a head start into lucrative, creative and in-demand careers. Yet, these programmes have remained rather marginal and continue to be…

  1. We Are Not Alternative Facts: Feeling, Existing, and Resisting in the Era of Trump

    ERIC Educational Resources Information Center

    Castrellón, Liliana E.; Reyna Rivarola, Alonso R.; López, Gerardo R.

    2017-01-01

    In this article the authors argue that Donald Trump is not simply a presidential figure, but the embodiment of white supremacy, capitalism, racism, neoliberalism, patriarchy, xenophobia, Islamophobia, homophobia, and more. It is our belief that historically marginalized communities are in a state of constant terror as we try to make sense of how…

  2. Thriving on Challenge: Examining One Teacher's View on Sources of Support for Motivation and Well-Being

    ERIC Educational Resources Information Center

    Perry, Nancy E.; Brenner, Charlotte A.; Collie, Rebecca J.; Hofer, Gigi

    2015-01-01

    Alarmingly high rates of teacher attrition exist in contexts designed for students with considerable needs, such as in alternative education programs serving marginalized youth. Research has linked teachers' levels of motivation and well-being to their effectiveness and retention. Consequently, we explore what distinguishes teachers who thrive in…

  3. Reverberating Echoes: Challenging Teacher Candidates to Tell and Learn from Entwined Narrations of Canadian History

    ERIC Educational Resources Information Center

    Den Heyer, Kent; Abbott, Laurence

    2011-01-01

    A key challenge confronting teacher educators is to help their students identify perspectives that depart from dominant historical narratives of a nation-state's development so as to potentially derive alternative meanings of shared pasts from marginalized perspectives. In this article, we examine the nature of this challenge both as a theoretical…

  4. Refusing to Settle for Pigeons and Parks: Urban Environmental Education in the Age of Neoliberalism

    ERIC Educational Resources Information Center

    Derby, Michael W.; Piersol, Laura; Blenkinsop, Sean

    2015-01-01

    The institutionalization of neoliberal reforms that began to take hold in the 1970s were by and large "common-sense governance" by the 1990s. While the growing predominance of neoliberal discourse and marginalization of alternatives in environmental education is disconcerting on the level of policy, this paper explores an equally…

  5. 30 CFR 1204.215 - Are the information collection requirements in this subpart approved by the Office of Management...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 30 Mineral Resources 3 2011-07-01 2011-07-01 false Are the information collection requirements in this subpart approved by the Office of Management and Budget (OMB)? 1204.215 Section 1204.215 Mineral... Revenue ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 1204.215 Are the information...

  6. 30 CFR 1204.215 - Are the information collection requirements in this subpart approved by the Office of Management...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 30 Mineral Resources 3 2014-07-01 2014-07-01 false Are the information collection requirements in this subpart approved by the Office of Management and Budget (OMB)? 1204.215 Section 1204.215 Mineral... ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 1204.215 Are the information collection...

  7. 30 CFR 1204.215 - Are the information collection requirements in this subpart approved by the Office of Management...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 30 Mineral Resources 3 2012-07-01 2012-07-01 false Are the information collection requirements in this subpart approved by the Office of Management and Budget (OMB)? 1204.215 Section 1204.215 Mineral... ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 1204.215 Are the information collection...

  8. 30 CFR 1204.215 - Are the information collection requirements in this subpart approved by the Office of Management...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 30 Mineral Resources 3 2013-07-01 2013-07-01 false Are the information collection requirements in this subpart approved by the Office of Management and Budget (OMB)? 1204.215 Section 1204.215 Mineral... ALTERNATIVES FOR MARGINAL PROPERTIES Accounting and Auditing Relief § 1204.215 Are the information collection...

  9. 30 CFR 204.6 - May I appeal if MMS denies my request for prepayment or other relief?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 30 Mineral Resources 2 2010-07-01 2010-07-01 false May I appeal if MMS denies my request for prepayment or other relief? 204.6 Section 204.6 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR MINERALS REVENUE MANAGEMENT ALTERNATIVES FOR MARGINAL PROPERTIES General Provisions § 204...

  10. Four-Year Follow-Up of Children in the Leap Randomized Trial: Some Planned and Accidental Findings

    ERIC Educational Resources Information Center

    Strain, Phillip S.

    2017-01-01

    This article reports on a 4-year follow-up study from the Learning Experiences and Alternative Program for Preschoolers and Their Parents (LEAP) randomized trial of early intervention for young children with autism. Overall, participants from LEAP classes were marginally superior to comparison class children on elementary school outcomes specific…

  11. Four-Year Follow-Up of Children in the LEAP Randomized Trial: Some Planned and Accidental Findings

    ERIC Educational Resources Information Center

    Strain, Phillip S.

    2017-01-01

    This article reports on a 4-year follow-up study from the Learning Experiences and Alternative Program for Preschoolers and Their Parents (LEAP) randomized trial of early intervention for young children with autism. Overall, participants from LEAP classes were marginally superior to comparison class children on elementary school outcomes specific…

  12. Architecture and data processing alternatives for the TSE computer. Volume 3: Execution of a parallel counting algorithm using array logic (Tse) devices

    NASA Technical Reports Server (NTRS)

    Metcalfe, A. G.; Bodenheimer, R. E.

    1976-01-01

    A parallel algorithm for counting the number of logic-1 elements in a binary array or image, developed during a preliminary investigation of the Tse concept, is described. The counting algorithm is implemented using a basic combinational structure. Modifications which improve the efficiency of the basic structure are also presented. A programmable Tse computer structure is proposed, along with a hardware control unit, Tse instruction set, and software program for execution of the counting algorithm. Finally, a comparison is made between the different structures in terms of their more important characteristics.
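
    A software analogue of the parallel counting idea is a logarithmic-depth pairwise reduction, sketched below; this illustrates the combinational counting structure only conceptually and is not the Tse hardware design.

    # Logarithmic-depth reduction for counting logic-1 elements in a binary array.
    def count_ones_tree(bits):
        level = list(bits)                       # 0/1 values, one per "cell"
        while len(level) > 1:
            if len(level) % 2:                   # pad to an even length
                level.append(0)
            # Each pass sums disjoint pairs, conceptually one layer of adders in parallel.
            level = [level[i] + level[i + 1] for i in range(0, len(level), 2)]
        return level[0] if level else 0

    assert count_ones_tree([1, 0, 1, 1, 0, 1]) == 4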

  13. Iterative Transform Phase Diversity: An Image-Based Object and Wavefront Recovery

    NASA Technical Reports Server (NTRS)

    Smith, Jeffrey

    2012-01-01

    The Iterative Transform Phase Diversity algorithm is designed to solve the problem of recovering the wavefront in the exit pupil of an optical system and the object being imaged. This algorithm builds upon the robust convergence capability of Variable Sampling Mapping (VSM), in combination with the known success of various deconvolution algorithms. VSM is an alternative method for enforcing the amplitude constraints of a Misell-Gerchberg-Saxton (MGS) algorithm. When provided the object and additional optical parameters, VSM can accurately recover the exit pupil wavefront. By combining VSM and deconvolution, one is able to simultaneously recover the wavefront and the object.

  14. Parallel processors and nonlinear structural dynamics algorithms and software

    NASA Technical Reports Server (NTRS)

    Belytschko, Ted; Gilbertsen, Noreen D.; Neal, Mark O.; Plaskacz, Edward J.

    1989-01-01

    The adaptation of a finite element program with explicit time integration to a massively parallel SIMD (single instruction multiple data) computer, the Connection Machine, is described. The adaptation required the development of a new algorithm, called the exchange algorithm, in which all nodal variables are allocated to the element with an exchange of nodal forces at each time step. The architectural and C* programming language features of the Connection Machine are also summarized. Various alternative data structures and associated algorithms for nonlinear finite element analysis are discussed and compared. Results are presented which demonstrate that the Connection Machine is capable of outperforming the Cray X-MP/14.

  15. Cyclical parthenogenesis algorithm for layout optimization of truss structures with frequency constraints

    NASA Astrophysics Data System (ADS)

    Kaveh, A.; Zolghadr, A.

    2017-08-01

    Structural optimization with frequency constraints is seen as a challenging problem because it is associated with highly nonlinear, discontinuous and non-convex search spaces consisting of several local optima. Therefore, competent optimization algorithms are essential for addressing these problems. In this article, a newly developed metaheuristic method called the cyclical parthenogenesis algorithm (CPA) is used for layout optimization of truss structures subjected to frequency constraints. CPA is a nature-inspired, population-based metaheuristic algorithm, which imitates the reproductive and social behaviour of some animal species such as aphids, which alternate between sexual and asexual reproduction. The efficiency of the CPA is validated using four numerical examples.

  16. Parallel algorithms for placement and routing in VLSI design. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Brouwer, Randall Jay

    1991-01-01

    The computational requirements for high quality synthesis, analysis, and verification of very large scale integration (VLSI) designs have rapidly increased with the fast growing complexity of these designs. Research in the past has focused on the development of heuristic algorithms, special purpose hardware accelerators, or parallel algorithms for the numerous design tasks to decrease the time required for solution. Two new parallel algorithms are proposed for two VLSI synthesis tasks, standard cell placement and global routing. The first algorithm, a parallel algorithm for global routing, uses hierarchical techniques to decompose the routing problem into independent routing subproblems that are solved in parallel. Results are then presented which compare the routing quality to the results of other published global routers and which evaluate the speedups attained. The second algorithm, a parallel algorithm for cell placement and global routing, hierarchically integrates a quadrisection placement algorithm, a bisection placement algorithm, and the previous global routing algorithm. Unique partitioning techniques are used to decompose the various stages of the algorithm into independent tasks which can be evaluated in parallel. Finally, results are presented which evaluate the various algorithm alternatives and compare the algorithm performance to other placement programs. Measurements are presented on the parallel speedups available.

  17. MapReduce SVM Game

    DOE PAGES

    Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.; ...

    2015-08-10

    Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches and requires problem decompositions with less stringent data-passing constraints. MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited to the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.
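
    As a toy illustration of the map/partition/reduce decomposition described above (not the authors' SVM Game algorithm), the sketch below trains a simple perceptron-style separator on each data shard in the map phase and averages the partial models in the reduce phase. The learner, the averaging combiner, and the synthetic data are assumptions for illustration only.

    import random

    def map_train(partition, epochs=20, lr=0.1):
        # "Map" task: fit a perceptron-style linear separator on one data shard.
        dim = len(partition[0][0])
        w, b = [0.0] * dim, 0.0
        for _ in range(epochs):
            for x, y in partition:                       # y in {-1, +1}
                if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                    w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                    b += lr * y
        return w, b

    def reduce_combine(models):
        # "Reduce" task: combine partial models by simple averaging.
        dim = len(models[0][0])
        w = [sum(m[0][i] for m in models) / len(models) for i in range(dim)]
        b = sum(m[1] for m in models) / len(models)
        return w, b

    # Usage: generate synthetic two-class data, shard it, map each shard, then reduce.
    data = [([random.gauss(1, 1), random.gauss(1, 1)], +1) for _ in range(100)] + \
           [([random.gauss(-1, 1), random.gauss(-1, 1)], -1) for _ in range(100)]
    random.shuffle(data)
    shards = [data[i::4] for i in range(4)]
    w, b = reduce_combine([map_train(s) for s in shards])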

  18. MapReduce SVM Game

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vineyard, Craig M.; Verzi, Stephen J.; James, Conrad D.

    Despite technological advances making computing devices faster, smaller, and more prevalent in today's age, data generation and collection has outpaced data processing capabilities. Simply having more compute platforms does not provide a means of addressing challenging problems in the big data era. Rather, alternative processing approaches are needed, and the application of machine learning to big data is hugely important. The MapReduce programming paradigm is an alternative to conventional supercomputing approaches and requires problem decompositions with less stringent data-passing constraints. MapReduce relies upon defining a means of partitioning the desired problem so that subsets may be computed independently and recombined to yield the net desired result. However, not all machine learning algorithms are amenable to such an approach. Game-theoretic algorithms are often innately distributed, consisting of local interactions between players without requiring a central authority, and are iterative by nature rather than requiring extensive retraining. Effectively, a game-theoretic approach to machine learning is well suited to the MapReduce paradigm and provides a novel, alternative perspective on addressing the big data problem. In this paper we present a variant of our Support Vector Machine (SVM) Game classifier which may be used in a distributed manner, and show an illustrative example of applying this algorithm.

  19. Novel Kalman filter algorithm for statistical monitoring of extensive landscapes with synoptic sensor data

    Treesearch

    Raymond L. Czaplewski

    2015-01-01

    Wall-to-wall remotely sensed data are increasingly available to monitor landscape dynamics over large geographic areas. However, statistical monitoring programs that use post-stratification cannot fully utilize those sensor data. The Kalman filter (KF) is an alternative statistical estimator. I develop a new KF algorithm that is numerically robust with large numbers of...
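
    For readers unfamiliar with the estimator, a minimal scalar Kalman filter predict/update step is sketched below; the state, noise values, and synthetic observations are placeholders, and this is not the new KF algorithm developed in the article.

    # One-dimensional Kalman filter step: predict the state forward, then correct
    # it with the new observation z. All model and noise parameters are assumed.
    def kalman_step(x, P, z, F=1.0, Q=0.01, H=1.0, R=0.25):
        x_pred = F * x
        P_pred = F * P * F + Q
        K = P_pred * H / (H * P_pred * H + R)      # Kalman gain
        x_new = x_pred + K * (z - H * x_pred)
        P_new = (1.0 - K * H) * P_pred
        return x_new, P_new

    x, P = 0.0, 1.0
    for z in [0.9, 1.1, 1.0, 0.95]:                # a stream of synthetic observations
        x, P = kalman_step(x, P, z)
    print(x, P)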

  20. A novel washing algorithm for underarm stain removal

    NASA Astrophysics Data System (ADS)

    Acikgoz Tufan, H.; Gocek, I.; Sahin, U. K.; Erdem, I.

    2017-10-01

    After contact with human sweat, which comprises around 27% sebum, anti-perspirants containing aluminium chloride or its compounds form a gel-like structure whose solubility in water is very poor. In daily use, this gel-like structure closes sweat pores and hinders the wetting of skin by sweat. However, when in contact with garments, it forms yellowish stains on the underarm areas of the garments. These stains are very hard to remove with regular machine washing. In this study, we first focused on understanding and simulating such stain formation on garments. Two alternative procedures are offered to form the gel-like structures. In both procedures, commercially available spray or deo-stick type anti-perspirants, standard acidic and basic sweat solutions, and artificial sebum are used to form gel-like structures, which are then applied to fabric in order to produce hard stains. Secondly, after simulating the stain on the fabric, we focused on developing a washing algorithm specifically designed for the removal of underarm stains. Eight alternative washing algorithms are offered, varying in washing temperature, amount of detergent, and pre-stain removal procedures. The better algorithm is selected by comparing tristimulus Y values after washing.
