Weighted Least Squares Fitting Using Ordinary Least Squares Algorithms.
ERIC Educational Resources Information Center
Kiers, Henk A. L.
1997-01-01
A general approach for fitting a model to a data matrix by weighted least squares (WLS) is studied. The approach consists of iteratively performing steps of existing algorithms for ordinary least squares fitting of the same model and is based on minimizing a function that majorizes the WLS loss function. (Author/SLD)
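A minimal sketch of the majorization idea, assuming element-wise weights scaled into [0, 1] and using a rank-2 truncated SVD as a stand-in OLS fitter (the fitter and all names are illustrative, not Kiers' notation):

```python
import numpy as np

def wls_via_ols(X, W, ols_fit, n_iter=200, tol=1e-10):
    """Minimize || sqrt(W) * (X - M) ||_F^2 using only an OLS fitter.

    Majorization step: with weights scaled into [0, 1], fitting the
    model by OLS to the surrogate X* = W*X + (1 - W)*M_current never
    increases the WLS loss, so the loop descends monotonically.
    """
    W = W / W.max()                      # scale weights into [0, 1]
    M = ols_fit(X)                       # initial OLS fit
    loss = np.sum(W * (X - M) ** 2)
    for _ in range(n_iter):
        X_star = W * X + (1.0 - W) * M   # surrogate data matrix
        M = ols_fit(X_star)              # OLS step on surrogate data
        new_loss = np.sum(W * (X - M) ** 2)
        if loss - new_loss < tol:
            break
        loss = new_loss
    return M

# Example OLS fitter: best rank-2 approximation (truncated SVD).
def rank2_fit(X):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :2] * s[:2]) @ Vt[:2]

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))
W = rng.uniform(0.1, 1.0, size=X.shape)  # element-wise weights
M = wls_via_ols(X, W, rank2_fit)
```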
Exploration and extension of an improved Riemann track fitting algorithm
NASA Astrophysics Data System (ADS)
Strandlie, A.; Frühwirth, R.
2017-09-01
Recently, a new Riemann track fit which operates on translated and scaled measurements has been proposed. This study shows that the new Riemann fit is virtually as precise as popular approaches such as the Kalman filter or an iterative non-linear track fitting procedure, and significantly more precise than other, non-iterative circular track fitting approaches over a large range of measurement uncertainties. The fit is then extended in two directions: first, the measurements are allowed to lie on plane sensors of arbitrary orientation; second, the full error propagation from the measurements to the estimated circle parameters is computed. The covariance matrix of the estimated track parameters can therefore be computed without recourse to asymptotic properties, and is consequently valid for any number of observations. It does, however, assume normally distributed measurement errors. The calculations are validated on a simulated track sample and show excellent agreement with the theoretical expectations.
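For context, a minimal unweighted version of the underlying Riemann-fit idea (the paper's variant additionally translates and scales the measurements and propagates the full covariance, both omitted here):

```python
import numpy as np

def riemann_circle_fit(x, y):
    """Fit a circle by mapping points onto the paraboloid z = x^2 + y^2
    and fitting a plane; the plane/paraboloid intersection projects
    back to a circle in the (x, y) plane."""
    z = x**2 + y**2
    P = np.column_stack([x, y, z])
    mean = P.mean(axis=0)
    # Plane normal = right singular vector with the smallest singular
    # value (total least squares plane fit of the mapped points).
    _, _, Vt = np.linalg.svd(P - mean)
    n = Vt[-1]
    c = -n @ mean
    # n0*x + n1*y + n2*(x^2 + y^2) + c = 0  =>  circle parameters:
    cx, cy = -n[0] / (2 * n[2]), -n[1] / (2 * n[2])
    r = np.sqrt(cx**2 + cy**2 - c / n[2])
    return cx, cy, r

# Noisy points on a circle of radius 3 centered at (1, -2):
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 50)
x = 1 + 3 * np.cos(t) + 0.01 * rng.normal(size=50)
y = -2 + 3 * np.sin(t) + 0.01 * rng.normal(size=50)
print(riemann_circle_fit(x, y))   # ~ (1.0, -2.0, 3.0)
```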
Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach
NASA Technical Reports Server (NTRS)
Ulbrich, N.; Volden, T.
2017-01-01
A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The more load components or gages a data point intentionally loads, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate the benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
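The abstract specifies only that the weights lie between zero and one and shrink as the count of intentionally loaded components grows; the sketch below uses the hypothetical choice w = 1/n with that property and shows how such weights enter an ordinary least-squares solver:

```python
import numpy as np

# Illustrative only: 1/n is one simple weighting with the stated
# property (weight in (0, 1], decreasing with the number of
# intentionally loaded components); the paper's exact formula may differ.
def weight(n_loaded):
    return 1.0 / n_loaded

def wls_coeffs(X, y, n_loaded):
    """Weighted least squares: scale each calibration point (row) by
    sqrt(w) and solve the resulting ordinary least squares problem."""
    sw = np.sqrt(np.array([weight(n) for n in n_loaded]))
    return np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

# Toy data: 6 calibration points, 2 regressors, loaded-component counts.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
              [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
y = np.array([1.0, 2.0, 3.1, 2.0, 4.0, 6.2])
n_loaded = [1, 1, 2, 1, 1, 2]   # single-component points get weight 1
print(wls_coeffs(X, y, n_loaded))   # ~ [1.0, 2.0]
```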
Calibration and Data Analysis of the MC-130 Air Balance
NASA Technical Reports Server (NTRS)
Booth, Dennis; Ulbrich, N.
2012-01-01
Design, calibration, calibration analysis, and intended use of the MC-130 air balance are discussed. The MC-130 balance is an 8.0 inch diameter force balance that has two separate internal air flow systems and one external bellows system. The manual calibration of the balance consisted of a total of 1854 data points with both unpressurized and pressurized air flowing through the balance. A subset of 1160 data points was chosen for the calibration data analysis. The regression analysis of the subset was performed using two fundamentally different analysis approaches. First, the data analysis was performed using a recently developed extension of the Iterative Method. This approach fits gage outputs as a function of both applied balance loads and bellows pressures while still allowing the application of the iteration scheme that is used with the Iterative Method. Then, for comparison, the axial force was also analyzed using the Non-Iterative Method. This alternate approach directly fits loads as a function of measured gage outputs and bellows pressures and does not require a load iteration. The regression models used by both the extended Iterative and Non-Iterative Method were constructed such that they met a set of widely accepted statistical quality requirements. These requirements lead to reliable regression models and prevent overfitting of data because they ensure that no hidden near-linear dependencies between regression model terms exist and that only statistically significant terms are included. Finally, a comparison of the axial force residuals was performed. Overall, axial force estimates obtained from both methods show excellent agreement as the differences of the standard deviation of the axial force residuals are on the order of 0.001 % of the axial force capacity.
NASA Astrophysics Data System (ADS)
Lasche, George; Coldwell, Robert; Metzger, Robert
2017-09-01
A new application (known as "VRF", or "Visual RobFit") for the analysis of high-resolution gamma-ray spectra has been developed that uses non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak search, the VRF analysis method forms a spectrum-wide shape for each nuclide at each of many automated iterations. At each iteration it also adjusts the activities of each nuclide, as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing, until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks masked by larger, overlapping peaks that would otherwise go undetected. The application and method are briefly described and two examples are presented.
Iterative spectral methods and spectral solutions to compressible flows
NASA Technical Reports Server (NTRS)
Hussaini, M. Y.; Zang, T. A.
1982-01-01
A spectral multigrid scheme is described which can solve pseudospectral discretizations of self-adjoint elliptic problems in O(N log N) operations. An iterative technique for efficiently implementing semi-implicit time-stepping for pseudospectral discretizations of Navier-Stokes equations is discussed. This approach can handle variable coefficient terms in an effective manner. Pseudospectral solutions of compressible flow problems are presented. These include one dimensional problems and two dimensional Euler solutions. Results are given both for shock-capturing approaches and for shock-fitting ones.
Automatic lung lobe segmentation of COPD patients using iterative B-spline fitting
NASA Astrophysics Data System (ADS)
Shamonin, D. P.; Staring, M.; Bakker, M. E.; Xiao, C.; Stolk, J.; Reiber, J. H. C.; Stoel, B. C.
2012-02-01
We present an automatic lung lobe segmentation algorithm for COPD patients. The method enhances fissures and removes unlikely fissure candidates, after which a B-spline is fitted iteratively through the remaining candidate objects. The iterative fitting approach circumvents the need to classify each object as being part of the fissure or being noise, and allows the fissure to be detected in multiple disconnected parts. This property is beneficial for good performance in patient data, containing incomplete and disease-affected fissures. The proposed algorithm is tested on 22 COPD patients, resulting in accurate lobe-based densitometry, and a median overlap of the fissure (defined 3 voxels wide) with an expert ground truth of 0.65, 0.54 and 0.44 for the three main fissures. This compares to complete lobe overlaps of 0.99, 0.98, 0.98, 0.97 and 0.87 for the five main lobes, showing promise for lobe segmentation on data of patients with moderate to severe COPD.
Inverse modeling and animation of growing single-stemmed trees at interactive rates
S. Rudnick; L. Linsen; E.G. McPherson
2007-01-01
For city planning purposes, animations of growing trees of several species can be used to deduce which species may best fit a particular environment. The models used for the animation must conform to real measured data. We present an approach for inverse modeling to fit global growth parameters. The model comprises local production rules, which are iteratively and...
INFOS: spectrum fitting software for NMR analysis.
Smith, Albert A
2017-02-01
Software for fitting of NMR spectra in MATLAB is presented. Spectra are fitted in the frequency domain, using Fourier transformed lineshapes, which are derived using the experimental acquisition and processing parameters. This yields more accurate fits compared to common fitting methods that use Lorentzian or Gaussian functions. Furthermore, a very time-efficient algorithm for calculating and fitting spectra has been developed. The software also performs initial peak picking, followed by subsequent fitting and refinement of the peak list, by iteratively adding and removing peaks to improve the overall fit. Estimation of error on fitting parameters is performed using a Monte-Carlo approach. Many fitting options allow the software to be flexible enough for a wide array of applications, while still being straightforward to set up with minimal user input.
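The Monte-Carlo error estimate mentioned above can be sketched generically: refit synthetic spectra built from the best fit plus resampled noise and take the spread of the refitted parameters. The sketch below uses Python and a Lorentzian lineshape in place of the software's MATLAB implementation and Fourier-transformed lineshapes:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, a, f0, w):
    # Peak of amplitude a, center f0, full width at half maximum w.
    return a * (w / 2) ** 2 / ((f - f0) ** 2 + (w / 2) ** 2)

rng = np.random.default_rng(2)
f = np.linspace(-10, 10, 400)
data = lorentzian(f, 1.0, 0.5, 2.0) + 0.02 * rng.normal(size=f.size)

# Best fit, then estimate the noise level from the residuals.
p_best, _ = curve_fit(lorentzian, f, data, p0=[1, 0, 1])
sigma = np.std(data - lorentzian(f, *p_best))

fits = []
for _ in range(200):                       # Monte-Carlo replicates
    synth = lorentzian(f, *p_best) + sigma * rng.normal(size=f.size)
    p, _ = curve_fit(lorentzian, f, synth, p0=p_best)
    fits.append(p)
print(np.std(fits, axis=0))               # parameter uncertainties
```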
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2018-02-01
This paper proposes a combined Virtual Reference Feedback Tuning-Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and it is referred to as a mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach. Learning convergence of Q-learning schemes generally depends, among other settings, on the efficient exploration of the state-action space. Handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can ensure that an initial stabilizing controller is learned from few input-output data, which can then be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. These data are used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking, using the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning-Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions of the observed control behavior are offered. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
Establishing Factor Validity Using Variable Reduction in Confirmatory Factor Analysis.
ERIC Educational Resources Information Center
Hofmann, Rich
1995-01-01
Using a 21-statement attitude-type instrument, an iterative procedure for improving confirmatory model fit is demonstrated within the context of the EQS program of P. M. Bentler and maximum likelihood factor analysis. Each iteration systematically eliminates the poorest fitting statement as identified by a variable fit index. (SLD)
The Effect of Iteration on the Design Performance of Primary School Children
ERIC Educational Resources Information Center
Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.
2015-01-01
Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2017-03-01
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, a FIT criterion of 92% is achieved for a CSTR process using about 400 data points. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
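A minimal recursive least squares (RLS) update of the kind the ERLS scheme builds on, applied to polynomial-in-input regressors (the paper's inner iteration for the unknown intermediate Wiener signal and its robustness modification are omitted):

```python
import numpy as np

class RLS:
    """Standard recursive least squares with forgetting factor lam.
    A building block only; not the paper's full ERLS algorithm."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = delta * np.eye(n)      # inverse-covariance-like matrix
        self.lam = lam

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)        # gain vector
        self.theta += k * (y - phi @ self.theta)  # correct by error
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Identify y = 2*u + 0.5*u^2 from noisy samples:
rng = np.random.default_rng(3)
est = RLS(n=2)
for _ in range(500):
    u = rng.uniform(-1, 1)
    y = 2 * u + 0.5 * u**2 + 0.01 * rng.normal()
    theta = est.update(np.array([u, u**2]), y)
print(theta)   # ~ [2.0, 0.5]
```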
Iterative-Transform Phase Retrieval Using Adaptive Diversity
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2007-01-01
A phase-diverse iterative-transform phase-retrieval algorithm enables high spatial-frequency, high-dynamic-range, image-based wavefront sensing. [The terms phase-diverse, phase retrieval, image-based, and wavefront sensing are defined in the first of the two immediately preceding articles, Broadband Phase Retrieval for Image-Based Wavefront Sensing (GSC-14899-1).] As described below, no prior phase-retrieval algorithm has offered both high dynamic range and the capability to recover high spatial-frequency components. Each of the previously developed image-based phase-retrieval techniques can be classified into one of two categories: iterative transform or parametric. Among the modifications of the original iterative-transform approach has been the introduction of a defocus diversity function (also defined in the cited companion article). Modifications of the original parametric approach have included minimizing alternative objective functions as well as implementing a variety of nonlinear optimization methods. The iterative-transform approach offers the advantage of being able to recover low, middle, and high spatial frequencies, but has the disadvantage of a dynamic range limited to one wavelength or less. In contrast, parametric phase retrieval offers the advantage of high dynamic range, but is poorly suited for recovering higher spatial-frequency aberrations. The present phase-diverse iterative-transform phase-retrieval algorithm offers both the high-spatial-frequency capability of the iterative-transform approach and the high dynamic range of parametric phase-recovery techniques. In implementation, this is a focus-diverse iterative-transform phase-retrieval algorithm that incorporates an adaptive diversity function, which makes it possible to avoid phase unwrapping while preserving high-spatial-frequency recovery. The algorithm includes an inner and an outer loop (see figure). An initial estimate of phase is used to start the algorithm on the inner loop, wherein multiple intensity images are processed, each using a different defocus value. The processing is done by an iterative-transform method, yielding individual phase estimates corresponding to each image of the defocus-diversity data set. These individual phase estimates are combined in a weighted average to form a new phase estimate, which serves as the initial phase estimate for either the next iteration of the iterative-transform method or, if the maximum number of iterations has been reached, for the next several steps, which constitute the outer-loop portion of the algorithm. The details of the next several steps must be omitted here for the sake of brevity. The overall effect of these steps is to adaptively update the diversity defocus values according to recovery of global defocus in the phase estimate. Aberration recovery varies by differing amounts as the amount of diversity defocus is updated in each image; thus, feedback is incorporated into the recovery process. This process is iterated until the global defocus error is driven to zero. The amplitude of aberration may far exceed one wavelength after completion of the inner-loop portion of the algorithm, and the classical iterative-transform method does not, by itself, enable recovery of multi-wavelength aberrations. Hence, in the absence of a means of off-loading the multi-wavelength portion of the aberration, the algorithm would produce a wrapped phase map.
However, a special aberration-fitting procedure can be applied to the wrapped phase data to transfer at least some portion of the multi-wavelength aberration to the diversity function, wherein the data are treated as known phase values. In this way, a multi-wavelength aberration can be recovered incrementally by successively applying the aberration-fitting procedure to intermediate wrapped phase maps. During recovery, as more of the aberration is transferred to the diversity function following successive iterations around the outer loop, the estimated phase ceases to wrap in places where the aberration values become incorporated as part of the diversity function. As a result, as the aberration content is transferred to the diversity function, the phase estimate comes to resemble that of a reference flat.
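A heavily simplified sketch of the inner loop only: a Gerchberg-Saxton-style iterative transform over defocus-diverse intensity images, with the per-image phase estimates combined by a plain average (the adaptive outer loop that updates the diversity defocus values, and the weighting, are omitted; all names are illustrative):

```python
import numpy as np

def itpr(images, defocus_phases, aperture, n_iter=50):
    """Inner loop only: iterative-transform phase retrieval over a set
    of defocus-diverse PSF intensity images.  images[k] is a measured
    intensity whose known diversity phase is defocus_phases[k]."""
    theta = np.zeros(aperture.shape)
    amps = [np.sqrt(im) for im in images]
    for _ in range(n_iter):
        estimates = []
        for amp, phi_k in zip(amps, defocus_phases):
            pupil = aperture * np.exp(1j * (theta + phi_k))
            field = np.fft.fft2(pupil)                  # pupil -> image
            field = amp * np.exp(1j * np.angle(field))  # impose data
            pupil = np.fft.ifft2(field)                 # image -> pupil
            estimates.append(aperture * np.angle(pupil * np.exp(-1j * phi_k)))
        theta = np.mean(estimates, axis=0)              # combine estimates
    return theta

# Synthetic demo: mild astigmatism, three defocus-diverse images.
n = 64
yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
aperture = ((xx**2 + yy**2) <= 1.0).astype(float)
true_phase = 0.5 * aperture * (xx**2 - yy**2)
diversities = [2.0 * d * aperture * (xx**2 + yy**2) for d in (-1.0, 0.0, 1.0)]
images = [np.abs(np.fft.fft2(aperture * np.exp(1j * (true_phase + p))))**2
          for p in diversities]
theta_est = itpr(images, diversities, aperture)
```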
Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert
2013-01-01
Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.
A systematic way for the cost reduction of density fitting methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kállay, Mihály, E-mail: kallay@mail.bme.hu
2014-12-28
We present a simple approach for the reduction of the size of auxiliary basis sets used in methods exploiting the density fitting (resolution of identity) approximation for electron repulsion integrals. Starting from the singular value decomposition of three-center two-electron integrals, new auxiliary functions are constructed as linear combinations of the original fitting functions. The new functions, which we term natural auxiliary functions (NAFs), are analogous to the natural orbitals widely used for the cost reduction of correlation methods. The use of the NAF basis enables the systematic truncation of the fitting basis, and thereby potentially the reduction of the computational expenses of the methods, though the scaling with the system size is not altered. The performance of the new approach has been tested for several quantum chemical methods. It is demonstrated that the most pronounced gain in computational efficiency can be expected for iterative models which scale quadratically with the size of the fitting basis set, such as the direct random phase approximation. The approach also has the promise of accelerating local correlation methods, for which the processing of three-center Coulomb integrals is a bottleneck.
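A sketch of the NAF construction under simplifying assumptions (it works directly on a raw three-center integral matrix with a relative singular-value cutoff; the published scheme applies the decomposition to metric-transformed integrals):

```python
import numpy as np

def natural_auxiliary_functions(B, eps=1e-4):
    """B holds three-center integrals as an (N_aux x N_orbital_pairs)
    matrix.  The SVD defines linear combinations of the fitting
    functions; directions with small singular values are dropped,
    truncating the auxiliary basis systematically."""
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    keep = s > eps * s[0]            # relative singular-value cutoff
    W = U[:, keep]                   # NAF coefficients
    B_naf = W.T @ B                  # integrals in the truncated basis
    return W, B_naf

# Toy matrix with effective rank ~30 plus small "noise":
rng = np.random.default_rng(4)
B = (rng.normal(size=(120, 30)) @ rng.normal(size=(30, 400))
     + 0.01 * rng.normal(size=(120, 400)))
W, B_naf = natural_auxiliary_functions(B, eps=1e-2)
print(B.shape, "->", B_naf.shape)    # reduced auxiliary dimension
```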
NASA Astrophysics Data System (ADS)
Ramlau, R.; Saxenhuber, D.; Yudytskiy, M.
2014-07-01
The problem of atmospheric tomography arises in ground-based telescope imaging with adaptive optics (AO), where one aims to compensate in real time for the rapidly changing optical distortions in the atmosphere. Many of these systems depend on a sufficient reconstruction of the turbulence profiles in order to obtain a good correction. Due to steadily growing telescope sizes, there is a strong increase in the computational load for atmospheric reconstruction with current methods, first and foremost the matrix-vector multiplication (MVM). In this paper we present and compare three novel iterative reconstruction methods. The first iterative approach is the Finite Element-Wavelet Hybrid Algorithm (FEWHA), which combines wavelet-based techniques and conjugate gradient schemes to efficiently and accurately tackle the problem of atmospheric reconstruction. The method is extremely fast, highly flexible and yields superior quality. Another novel iterative reconstruction algorithm is the three-step approach, which decouples the problem into the reconstruction of the incoming wavefronts, the reconstruction of the turbulent layers (atmospheric tomography) and the computation of the best mirror correction (fitting step). For the atmospheric tomography problem within the three-step approach, the Kaczmarz algorithm and the gradient-based method have been developed. We present a detailed comparison of our reconstructors both in terms of quality and speed performance in the context of a Multi-Object Adaptive Optics (MOAO) system for the E-ELT setting on OCTOPUS, the ESO end-to-end simulation tool.
Evaluation of the 'Fitting to Outcomes eXpert' (FOX®) with established cochlear implant users.
Buechner, Andreas; Vaerenberg, Bart; Gazibegovic, Dzemal; Brendel, Martina; De Ceulaer, Geert; Govaerts, Paul; Lenarz, Thomas
2015-01-01
To evaluate the possible impact of 'Fitting to Outcomes eXpert' (FOX®) on cochlear implant (CI) fitting in a clinic with extensive experience of fitting a range of CI systems, as a way to assess whether a software tool such as FOX is able to complement standard clinical procedures. Ten adult post-lingually deafened and unilateral long-term users of the Advanced Bionics™ CI system (Clarion CII or HiRes 90K™) underwent speech perception assessment with their current clinical program. One cycle 'iteration' of FOX optimization was performed and the program adjusted accordingly. After a month of using both clinical and FOX programs, a second iteration of FOX optimization was performed. Following this, the assessments were repeated without further acclimatization. FOX prescribed programming modifications in all subjects. Soundfield-aided thresholds were significantly lower for FOX than the clinical program. Group speech scores in noise were not significantly different between the two programs but three individual subjects had improved speech scores with the FOX MAP, two had worse speech scores, and five were the same. FOX provided a standardized approach to fitting based on outcome measures rather than comfort alone. The results indicated that for this group of well-fitted patients, FOX improved outcomes in some individuals. There were significant changes, both better and worse, in individual speech perception scores but median scores remained unchanged. Soundfield-aided thresholds were significantly improved for the FOX group.
Low-memory iterative density fitting.
Grajciar, Lukáš
2015-07-30
A new low-memory modification of the density fitting approximation based on a combination of a continuous fast multipole method (CFMM) and a preconditioned conjugate gradient solver is presented. The iterative conjugate gradient solver uses preconditioners formed from blocks of the Coulomb metric matrix that decrease the number of iterations needed for convergence by up to one order of magnitude. The matrix-vector products needed within the iterative algorithm are calculated using CFMM, which evaluates them with only linear-scaling memory requirements. Compared with the standard density fitting implementation, up to a 15-fold reduction of the memory requirements is achieved for the most efficient preconditioner, at a cost of only a 25% increase in computational time. The potential of the method is demonstrated by performing density functional theory calculations for a zeolite fragment with 2592 atoms and 121,248 auxiliary basis functions on a single 12-core CPU workstation. © 2015 Wiley Periodicals, Inc.
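The preconditioning idea can be illustrated with a generic block-Jacobi preconditioned conjugate gradient solver; here A is a dense SPD stand-in for the Coulomb metric, whereas the actual method never stores the matrix and obtains its matrix-vector products from CFMM:

```python
import numpy as np

def block_jacobi_pcg(A, b, block=50, tol=1e-8, max_iter=500):
    """Conjugate gradients preconditioned with inverted diagonal
    blocks of A, analogous to preconditioning the density-fitting
    equations with blocks of the Coulomb metric."""
    n = len(b)
    # Pre-invert the diagonal blocks once.
    inv_blocks = [np.linalg.inv(A[i:i + block, i:i + block])
                  for i in range(0, n, block)]

    def precond(r):
        z = np.empty_like(r)
        for k, i in enumerate(range(0, n, block)):
            z[i:i + block] = inv_blocks[k] @ r[i:i + block]
        return z

    x = np.zeros(n)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    for it in range(max_iter):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
            return x, it + 1
        z_new = precond(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x, max_iter

# SPD test problem:
rng = np.random.default_rng(5)
M = rng.normal(size=(200, 200))
A = M @ M.T + 200 * np.eye(200)
b = rng.normal(size=200)
x, iters = block_jacobi_pcg(A, b)
print(iters, np.linalg.norm(A @ x - b))
```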
ERIC Educational Resources Information Center
Dardick, William R.; Mislevy, Robert J.
2016-01-01
A new variant of the iterative "data = fit + residual" data-analytical approach described by Mosteller and Tukey is proposed and implemented in the context of item response theory psychometric models. Posterior probabilities from a Bayesian mixture model of a Rasch item response theory model and an unscalable latent class are expressed…
Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2015-01-01
An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature-dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature-dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five-component semi-span balance is used to illustrate the application of the improved approach.
A New Metrics for Countries' Fitness and Products' Complexity
NASA Astrophysics Data System (ADS)
Tacchella, Andrea; Cristelli, Matthieu; Caldarelli, Guido; Gabrielli, Andrea; Pietronero, Luciano
2012-10-01
Classical economic theories prescribe specialization of countries' industrial production. Inspection of the country databases of exported products shows that this is not the case: successful countries are extremely diversified, in analogy with biosystems evolving in a competitive dynamical environment. The challenge is assessing quantitatively the non-monetary competitive advantage of diversification, which represents the hidden potential for development and growth. Here we develop a new statistical approach based on coupled non-linear maps, whose fixed point defines a new metrics for the country Fitness and product Complexity. We show that a non-linear iteration is necessary to bound the complexity of products by the fitness of the less competitive countries exporting them. We show that, given the paradigm of economic complexity, the correct and simplest approach to measure the competitiveness of countries is the one presented in this work. Furthermore, our metrics appears to be economically well-grounded.
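The coupled non-linear maps have a compact implementation; the sketch below follows the commonly cited form of the iteration (normalizing fitness and complexity to unit mean at each step), on a toy triangular export matrix:

```python
import numpy as np

def fitness_complexity(M, n_iter=100):
    """Non-linear iterative map: M[c, p] = 1 if country c exports
    product p.  A product's complexity is bounded by the fitness of
    the *least* fit countries exporting it, via the harmonic-mean-like
    denominator."""
    F = np.ones(M.shape[0])                # country fitness
    Q = np.ones(M.shape[1])                # product complexity
    for _ in range(n_iter):
        F_new = M @ Q                      # sum of exported complexities
        Q_new = 1.0 / (M.T @ (1.0 / F))    # penalized by weak exporters
        F = F_new / F_new.mean()           # normalize each iteration
        Q = Q_new / Q_new.mean()
    return F, Q

# Toy binary export matrix: 4 countries x 5 products.
M = np.array([[1, 1, 1, 1, 1],    # diversified country
              [1, 1, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [1, 0, 0, 0, 0]])   # exports only the ubiquitous product
F, Q = fitness_complexity(M)
print(F)   # fitness decreases down the rows
print(Q)   # complexity grows toward rarely exported products
```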
Iterative Track Fitting Using Cluster Classification in Multi Wire Proportional Chamber
NASA Astrophysics Data System (ADS)
Primor, David; Mikenberg, Giora; Etzion, Erez; Messer, Hagit
2007-10-01
This paper addresses the problem of track fitting of a charged particle in a multi wire proportional chamber (MWPC) using cathode readout strips. When a charged particle crosses a MWPC, a positive charge is induced on a cluster of adjacent strips. In the presence of high radiation background, the cluster charge measurements may be contaminated due to background particles, leading to less accurate hit position estimation. The least squares method for track fitting assumes the same position error distribution for all hits and thus loses its optimal properties on contaminated data. For this reason, a new robust algorithm is proposed. The algorithm first uses the known spatial charge distribution caused by a single charged particle over the strips, and classifies the clusters into "clean" and "dirty" clusters. Then, using the classification results, it performs an iterative weighted least squares fitting procedure, updating its optimal weights each iteration. The performance of the suggested algorithm is compared to other track fitting techniques using a simulation of tracks with radiation background. It is shown that the algorithm improves the track fitting performance significantly. A practical implementation of the algorithm is presented for muon track fitting in the cathode strip chamber (CSC) of the ATLAS experiment.
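A generic sketch of the iterative weighted least squares idea: the paper derives its weights from the clean/dirty cluster classification, while the stand-in below simply down-weights hits by their current normalized residuals (a common robust-fitting heuristic):

```python
import numpy as np

def iterative_wls_line(z, y, sigma0, n_iter=5):
    """Iteratively reweighted straight-line fit y = a + b*z, updating
    the weights from the residuals of the previous iteration."""
    w = 1.0 / sigma0**2
    for _ in range(n_iter):
        A = np.column_stack([np.ones_like(z), z])
        W = np.diag(w)
        coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
        res = (y - A @ coef) / sigma0
        w = 1.0 / (sigma0**2 * (1.0 + res**2))   # down-weight outliers
    return coef

rng = np.random.default_rng(6)
z = np.linspace(0, 10, 12)                # detector plane positions
y = 0.3 + 0.8 * z + 0.05 * rng.normal(size=z.size)
y[4] += 1.5                               # one contaminated ("dirty") hit
sigma0 = np.full(z.size, 0.05)
print(iterative_wls_line(z, y, sigma0))   # ~ (0.3, 0.8)
```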
Spatial frequency domain spectroscopy of two layer media
NASA Astrophysics Data System (ADS)
Yudovsky, Dmitry; Durkin, Anthony J.
2011-10-01
Monitoring of tissue blood volume and oxygen saturation using biomedical optics techniques has the potential to inform the assessment of tissue health, healing, and dysfunction. These quantities are typically estimated from the contribution of oxyhemoglobin and deoxyhemoglobin to the absorption spectrum of the dermis. However, estimation of blood related absorption in superficial tissue such as the skin can be confounded by the strong absorption of melanin in the epidermis. Furthermore, epidermal thickness and pigmentation varies with anatomic location, race, gender, and degree of disease progression. This study describes a technique for decoupling the effect of melanin absorption in the epidermis from blood absorption in the dermis for a large range of skin types and thicknesses. An artificial neural network was used to map input optical properties to spatial frequency domain diffuse reflectance of two layer media. Then, iterative fitting was used to determine the optical properties from simulated spatial frequency domain diffuse reflectance. Additionally, an artificial neural network was trained to directly map spatial frequency domain reflectance to sets of optical properties of a two layer medium, thus bypassing the need for iteration. In both cases, the optical thickness of the epidermis and absorption and reduced scattering coefficients of the dermis were determined independently. The accuracy and efficiency of the iterative fitting approach was compared with the direct neural network inversion.
Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan
2015-01-01
Kinetic analysis of biomolecular interactions is widely used to quantify binding kinetic constants for the determination of a complex formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach in the analysis of biomolecular interactions, including the interaction processes of antigen-antibody and receptor-ligand pairs. The binding affinity of the antibody to the antigen (or the receptor to the ligand) reflects the biological activities of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and dissociation rate of the receptor to the ligand are substantial parameters for the study of signal transmission between cells. Experimental data may lead to complicated real-time curves that do not fit well to the kinetic model. This paper presents an analysis approach for biomolecular interactions based on the Marquardt algorithm. The algorithm was implemented in a homemade bioanalyzer to perform nonlinear curve-fitting of the association and dissociation process of the receptor to the ligand. Compared with the results from the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial values, avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally obtained as 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹ and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of the HBsAg solution at a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly by using the data obtained from the curve-fitting results. PMID:26147997
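A sketch of Marquardt-type kinetic fitting using SciPy's Levenberg-Marquardt backend on a synthetic association-phase curve (the 1:1 interaction model below and all numbers are illustrative, not the paper's bioanalyzer code or data):

```python
import numpy as np
from scipy.optimize import least_squares

# 1:1 association model: R(t) = Req * (1 - exp(-(ka*C + kd)*t)),
# with Req = Rmax * ka * C / (ka*C + kd).
def model(p, t, C):
    ka, kd, Rmax = p
    kobs = ka * C + kd
    return Rmax * ka * C / kobs * (1 - np.exp(-kobs * t))

def residuals(p, t, C, R):
    return model(p, t, C) - R

rng = np.random.default_rng(7)
C = 16e-9                 # analyte concentration, g/mL (16 ng/mL)
t = np.linspace(0, 300, 150)
R = model([7e5, 7e-4, 100.0], t, C) + 0.2 * rng.normal(size=t.size)

# method='lm' selects the Levenberg-Marquardt (Marquardt) algorithm.
fit = least_squares(residuals, x0=[1e5, 1e-3, 50.0],
                    args=(t, C, R), method='lm')
ka, kd, Rmax = fit.x
print(ka, kd, ka / kd)    # association, dissociation, affinity KA
```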
Implementation of the Iterative Proportion Fitting Algorithm for Geostatistical Facies Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Yupeng, E-mail: yupeng@ualberta.ca; Deutsch, Clayton V.
2012-06-15
In geostatistics, most stochastic algorithms for simulation of categorical variables such as facies or rock types require a conditional probability distribution. The multivariate probability distribution of all the grouped locations, including the unsampled location, permits calculation of the conditional probability directly from its definition. In this article, the iterative proportion fitting (IPF) algorithm is implemented to infer this multivariate probability. Using the IPF algorithm, the multivariate probability is obtained by iterative modification of an initial estimated multivariate probability using lower-order bivariate probabilities as constraints. The imposed bivariate marginal probabilities are inferred from profiles along drill holes or wells. In the IPF process, a sparse matrix is used to calculate the marginal probabilities from the multivariate probability, which makes the iterative fitting more tractable and practical. This algorithm can be extended to higher-order marginal probability constraints as used in multiple-point statistics. The theoretical framework is developed and illustrated with estimation and simulation examples.
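A two-dimensional illustration of the IPF kernel (the article works with a higher-dimensional multivariate probability constrained by bivariate marginals; the same proportional-update step applies):

```python
import numpy as np

def ipf_2d(p0, row_marg, col_marg, n_iter=100, tol=1e-10):
    """Iterative proportional fitting: rescale an initial joint
    probability table until its row and column sums match the
    imposed marginals."""
    p = p0.copy()
    for _ in range(n_iter):
        p *= (row_marg / p.sum(axis=1))[:, None]   # fit row sums
        p *= (col_marg / p.sum(axis=0))[None, :]   # fit column sums
        if (np.abs(p.sum(axis=1) - row_marg).max() < tol and
                np.abs(p.sum(axis=0) - col_marg).max() < tol):
            break
    return p

# Toy example: three facies at two locations.
p0 = np.full((3, 3), 1.0 / 9.0)          # initial joint probability
row_marg = np.array([0.5, 0.3, 0.2])     # facies proportions, hole A
col_marg = np.array([0.6, 0.3, 0.1])     # facies proportions, hole B
print(ipf_2d(p0, row_marg, col_marg))
```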
Chaudhuri, Shomesh E; Merfeld, Daniel M
2013-03-01
Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
Recent advancements in GRACE mascon regularization and uncertainty assessment
NASA Astrophysics Data System (ADS)
Loomis, B. D.; Luthcke, S. B.
2017-12-01
The latest release of the NASA Goddard Space Flight Center (GSFC) global time-variable gravity mascon product applies a new regularization strategy along with new methods for estimating noise and leakage uncertainties. The critical design component of mascon estimation is the construction of the applied regularization matrices, and different strategies exist between the different centers that produce mascon solutions. The new approach from GSFC directly applies the pre-fit Level 1B inter-satellite range-acceleration residuals in the design of time-dependent regularization matrices, which are recomputed at each step of our iterative solution method. We summarize this new approach, demonstrating the simultaneous increase in recovered time-variable gravity signal and reduction in the post-fit inter-satellite residual magnitudes, until solution convergence occurs. We also present our new approach for estimating mascon noise uncertainties, which are calibrated to the post-fit inter-satellite residuals. Lastly, we present a new technique for end users to quickly estimate the signal leakage errors for any selected grouping of mascons, and we test the viability of this leakage assessment procedure on the mascon solutions produced by other processing centers.
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic demonstrates ways to maximize space utilization by fitting the appropriate rectangle from both sides of the wall of the current residual space layer by layer. The iterative local search along with a shift strategy is developed and applied to the heuristic to balance the exploitation and exploration tasks in the solution space without the tuning of any parameters. The experimental results on many scales of packing problems show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within a reasonable duration of computational time.
Hsieh, Hong-Po; Ko, Fan-Hua; Sung, Kung-Bin
2018-04-20
An iterative curve fitting method has been applied in both simulation [J. Biomed. Opt. 17, 107003 (2012), doi:10.1117/1.JBO.17.10.107003] and phantom [J. Biomed. Opt. 19, 077002 (2014), doi:10.1117/1.JBO.19.7.077002] studies to accurately extract optical properties and the top layer thickness of a two-layered superficial tissue model from diffuse reflectance spectroscopy (DRS) data. This paper describes a hybrid two-step parameter estimation procedure to address two main issues of the previous method, including (1) high computational intensity and (2) converging to local minima. The parameter estimation procedure contained a novel initial estimation step to obtain an initial guess, which was used by a subsequent iterative fitting step to optimize the parameter estimation. A lookup table was used in both steps to quickly obtain reflectance spectra and reduce computational intensity. On simulated DRS data, the proposed parameter estimation procedure achieved high estimation accuracy and a 95% reduction of computational time compared to previous studies. Furthermore, the proposed initial estimation step led to better convergence of the following fitting step. Strategies used in the proposed procedure could benefit both the modeling and experimental data processing of not only DRS but also related approaches such as near-infrared spectroscopy.
Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levinson, Zachary; Verduijn, Erik; Wood, Obert R.
2016-04-01
Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.
Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges
2013-01-01
Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time-activity curves of an NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for the time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with the parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reduction in standard deviation by 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, and it yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
NASA Astrophysics Data System (ADS)
Jackson, B. V.; Yu, H. S.; Hick, P. P.; Buffington, A.; Odstrcil, D.; Kim, T. K.; Pogorelov, N. V.; Tokumaru, M.; Bisi, M. M.; Kim, J.; Yun, J.
2017-12-01
The University of California, San Diego has developed an iterative remote-sensing time-dependent three-dimensional (3-D) reconstruction technique which provides volumetric maps of density, velocity, and magnetic field. We have applied this technique in near real time for over 15 years with a kinematic model approximation to fit data from ground-based interplanetary scintillation (IPS) observations. Our modeling concept extends volumetric data from an inner boundary placed above the Alfvén surface out to the inner heliosphere. We now use this technique to drive 3-D MHD models at their inner boundary and generate output 3-D data files that are fit to remotely-sensed observations (in this case IPS observations), and iterated. These analyses are also iteratively fit to in-situ spacecraft measurements near Earth. To facilitate this process, we have developed a traceback from input 3-D MHD volumes to yield an updated boundary in density, temperature, and velocity, which also includes magnetic-field components. Here we will show examples of this analysis using the ENLIL 3D-MHD and the University of Alabama Multi-Scale Fluid-Kinetic Simulation Suite (MS-FLUKSS) heliospheric codes. These examples help refine poorly-known 3-D MHD variables (i.e., density, temperature), and parameters (gamma) by fitting heliospheric remotely-sensed data between the region near the solar surface and in-situ measurements near Earth.
A curve fitting method for solving the flutter equation. M.S. Thesis
NASA Technical Reports Server (NTRS)
Cooper, J. L.
1972-01-01
A curve fitting approach was developed to solve the flutter equation for the critical flutter velocity. The psi versus nu curves are approximated by cubic and quadratic equations. The curve fitting technique utilized the first and second derivatives of psi with respect to nu. The method was tested for two structures, one structure being six times the total mass of the other structure. The algorithm never showed any tendency to diverge from the solution. The average time for the computation of a flutter velocity was 3.91 seconds on an IBM Model 50 computer for an accuracy of five per cent. For values of nu close to the critical root of the flutter equation the algorithm converged on the first attempt. The maximum number of iterations for convergence to the critical flutter velocity was five with an assumed value of nu relatively distant from the actual crossover.
Status on Iterative Transform Phase Retrieval Applied to the GBT Data
NASA Technical Reports Server (NTRS)
Dean, Bruce; Aronstein, David; Smith, Scott; Shiri, Ron; Hollis, Jan M.; Lyons, Richard; Prestage, Richard; Hunter, Todd; Ghigo, Frank; Nikolic, Bojan
2007-01-01
This slide presentation reviews the use of iterative-transform phase retrieval in the analysis of the Green Bank Radio Telescope (GBT) data. It reviews the NASA projects that have used phase retrieval, and the testbed for the algorithm to be used for the James Webb Space Telescope. It shows the comparison of phase retrieval with an interferometer, and reviews the two approaches used for phase retrieval, iterative transform (ITA) or parametric (non-linear least squares model fitting). The concept of ITA phase retrieval is reviewed, and the application to radio antennas is reviewed. The presentation also examines the National Radio Astronomy Observatory (NRAO) data from the GBT, and the Fourier model that NRAO uses to analyze the data. The challenge for ITA phase retrieval is reviewed, and the coherent approximation for incoherent data is shown. The validity of the approximation is good for a large tilt. There is a review of the proof of concept of the phase-retrieval simulation using the input wavefront, and the initial sampling parameter estimates from the focused GBT data.
An iterative correction approach used to retrieve the refractive index of squid pigment aerosols
NASA Astrophysics Data System (ADS)
Dinneen, Sean R.; Deravi, Leila F.; Greenslade, Margaret E.
2018-03-01
Pigments localized within cephalopod chromatophores are important for dermal coloration. When isolated and used as materials outside of the animal, the pigments can be processed as aerosols, illustrating a potential application for spray-on coatings. The optical features of the pigment aerosols are difficult to analyze and require a method to correct for the particle charging and solvent effects accumulated during the aerosolizing process. We describe a method to account for these effects using an innovative iterative approach tied to retrieved refractive index (RI) values. RI retrievals were obtained via the best fit between the corrected, experimentally observed extinction efficiencies compared to those calculated by Mie theory for a specific RI at selected sizes. In addition to these retrievals, the impact of solvent on the particles' optical properties was also examined via the Maxwell-Garnett mixing rule. Ultimately, we obtained a pigment RI with a real portion (n) of 1.66 (±0.05), representing a lower limit, and an imaginary portion (k) of 0.13 (±0.08)i, representing an upper limit, for the generated aerosols. Combined, this approach advances techniques used to retrieve RI values, which benefits both atmospheric chemistry and bio-inspired materials.
Exponential Sum-Fitting of Dwell-Time Distributions without Specifying Starting Parameters
Landowne, David; Yuan, Bin; Magleby, Karl L.
2013-01-01
Fitting dwell-time distributions with sums of exponentials is widely used to characterize histograms of open- and closed-interval durations recorded from single ion channels, as well as for other physical phenomena. However, it can be difficult to identify the contributing exponential components. Here we extend previous methods of exponential sum-fitting to present a maximum-likelihood approach that consistently detects all significant exponentials without the need for user-specified starting parameters. Instead of searching for exponentials, the fitting starts with a very large number of initial exponentials with logarithmically spaced time constants, so that none are missed. Maximum-likelihood fitting then determines the areas of all the initial exponentials keeping the time constants fixed. In an iterative manner, with refitting after each step, the analysis then removes exponentials with negligible area and combines closely spaced adjacent exponentials, until only those exponentials that make significant contributions to the dwell-time distribution remain. There is no limit on the number of significant exponentials and no starting parameters need be specified. We demonstrate fully automated detection for both experimental and simulated data, as well as for classical exponential-sum-fitting problems. PMID:23746510
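A condensed sketch of the strategy: seed many components with fixed, logarithmically spaced time constants, estimate their areas by maximum likelihood (here via EM for a mixture of exponentials), and drop negligible components; the published method's merge-and-refit refinement loop is omitted for brevity:

```python
import numpy as np

def fit_exp_mixture(t, n_init=25, tau_lo=None, tau_hi=None,
                    n_em=500, area_floor=1e-3):
    """Fit dwell times t with a sum of exponentials: many components
    with fixed log-spaced time constants, ML area estimation (EM),
    then pruning of components with negligible area."""
    tau_lo = tau_lo or t.min() / 2
    tau_hi = tau_hi or t.max() * 2
    tau = np.geomspace(tau_lo, tau_hi, n_init)   # fixed time constants
    a = np.full(n_init, 1.0 / n_init)            # initial areas
    for _ in range(n_em):
        dens = a * np.exp(-t[:, None] / tau) / tau   # (N, K) components
        gamma = dens / dens.sum(axis=1, keepdims=True)
        a = gamma.mean(axis=0)                       # ML area update
    keep = a > area_floor                            # prune tiny areas
    return tau[keep], a[keep] / a[keep].sum()

# Dwell times from a true 2-component mixture (tau = 1 and 20 ms):
rng = np.random.default_rng(8)
t = np.concatenate([rng.exponential(1.0, 7000),
                    rng.exponential(20.0, 3000)])
tau, a = fit_exp_mixture(t)
print(np.round(tau, 2), np.round(a, 3))   # areas cluster near 1 and 20
```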
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogden, K; Greene-Donnelly, K; Vallabhaneni, D
Purpose: To investigate the effects of changing iterative reconstruction strength and tube voltage on Hounsfield Unit (HU) values of varying concentrations of iodinated contrast medium in a phantom. Method: Iodinated contrast (Omnipaque 300, GE Healthcare, Princeton NJ) was diluted with distilled water to concentrations of 0.6, 0.9, 1.8, 3.6, 7.2, and 10.8 mg/mL of iodine. The solutions were scanned in a patient-equivalent water phantom on two MDCT scanners: a VCT 64-slice scanner (GE Medical Systems, Waukesha, WI) and an Aquilion One 320-slice scanner (Toshiba America Medical Systems, Tustin CA). The phantom was scanned at 80, 100, 120, and 140 kV using 400, 255, 180, and 130 mAs, respectively, on the VCT scanner, and at 80, 100, 120, and 135 kV using 400, 250, 200, and 150 mAs, respectively, on the Aquilion One. Images were reconstructed at 2.5 mm (VCT) and 0.5 mm (Aquilion One). The VCT images were reconstructed using Advanced Statistical Iterative Reconstruction (ASIR) at 6 different strengths: 0%, 20%, 40%, 60%, 80%, and 100%. Aquilion One images were reconstructed using Adaptive Iterative Dose Reduction (AIDR) at 4 strengths: no AIDR, Weak AIDR, Standard AIDR, and Strong AIDR. Regions of interest (ROIs) were drawn on the images to measure the HU values and standard deviations of the diluted contrast. Second-order polynomials were used to fit the HU values as a function of iodine concentration. Results: For both scanners, there was no significant effect of changing the iterative reconstruction strength. The polynomial fits yielded goodness-of-fit (R²) values averaging 0.997. Conclusion: Changing the strength of the iterative reconstruction has no significant effect on the HU values of iodinated contrast in a tissue-equivalent phantom. Fit values of HU vs iodine concentration are useful in quantitative imaging protocols such as the determination of cardiac output from time-density curves in the main pulmonary artery.
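The reported fits are straightforward to reproduce in form; a sketch with illustrative numbers (not the study's measurements):

```python
import numpy as np

# Second-order polynomial fit of HU versus iodine concentration.
conc = np.array([0.6, 0.9, 1.8, 3.6, 7.2, 10.8])        # mg/mL iodine
hu = np.array([17.0, 25.0, 49.0, 95.0, 186.0, 272.0])   # ROI means (toy)
coeffs = np.polyfit(conc, hu, deg=2)
fit = np.polyval(coeffs, conc)
ss_res = np.sum((hu - fit) ** 2)
ss_tot = np.sum((hu - hu.mean()) ** 2)
print(coeffs, 1 - ss_res / ss_tot)   # R^2 near 1, cf. the reported 0.997
```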
On the application of multilevel modeling in environmental and ecological studies
Qian, Song S.; Cuffney, Thomas F.; Alameddine, Ibrahim; McMahon, Gerard; Reckhow, Kenneth H.
2010-01-01
This paper illustrates the advantages of a multilevel/hierarchical approach for predictive modeling, including flexibility of model formulation, explicitly accounting for hierarchical structure in the data, and the ability to predict the outcome of new cases. As a generalization of the classical approach, the multilevel modeling approach explicitly models the hierarchical structure in the data by considering both the within- and between-group variances leading to a partial pooling of data across all levels in the hierarchy. The modeling framework provides means for incorporating variables at different spatiotemporal scales. The examples used in this paper illustrate the iterative process of model fitting and evaluation, a process that can lead to improved understanding of the system being studied.
Manjoro, Munyaradzi; Rowntree, Kate; Kakembo, Vincent; Foster, Ian; Collins, Adrian L
2017-06-01
Sediment source fingerprinting has been successfully deployed to provide information on the surface and subsurface sources of sediment in many catchments around the world. However, there is still scope to re-examine some of the major assumptions of the technique with reference to the number of fingerprint properties used in the model, the number of model iterations and the potential uncertainties of using more than one sediment core collected from the same floodplain sink. We investigated the role of subsurface erosion in the supply of fine sediment to two sediment cores collected from a floodplain in a small degraded catchment in the Eastern Cape, South Africa. The results showed that increasing the number of individual fingerprint properties in the composite signature did not improve the model goodness-of-fit. This is still a much debated issue in sediment source fingerprinting. To test the goodness-of-fit further, the number of model repeat iterations was increased from 5000 to 30,000. However, this did not reduce uncertainty ranges in modelled source proportions nor improve the model goodness-of-fit. The estimated sediment source contributions were not consistent with the available published data on erosion processes in the study catchment. The temporal pattern of sediment source contributions predicted for the two sediment cores was very different despite the cores being collected in close proximity from the same floodplain. This highlights some of the potential limitations associated with using floodplain cores to reconstruct catchment erosion processes and associated sediment source contributions. For the source tracing approach in general, the findings here suggest the need for further investigations into uncertainties related to the number of fingerprint properties included in un-mixing models. The findings support the current widespread use of ≤5000 model repeat iterations for estimating the key sources of sediment samples. Copyright © 2016 Elsevier Ltd. All rights reserved.
Application of separable parameter space techniques to multi-tracer PET compartment modeling.
Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J
2016-02-07
Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
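The separable (variable projection) idea is compact enough to sketch: for fixed nonlinear parameters the model is linear in the amplitudes, which can be solved in closed form, so the outer search runs over the nonlinear parameters only. Below is a minimal two-exponential illustration; the model is a stand-in, not the paper's compartment equations.

```python
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.1, 10.0, 50)
rng = np.random.default_rng(1)
y = 3.0 * np.exp(-0.4 * t) + 1.5 * np.exp(-2.5 * t) + rng.normal(scale=0.05, size=t.size)

def projected_residual(theta):
    # Basis functions depend only on the nonlinear decay rates theta.
    A = np.column_stack([np.exp(-theta[0] * t), np.exp(-theta[1] * t)])
    # Linear amplitudes solved in closed form -> outer fit is over theta alone.
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - A @ coef
    return np.sum(r * r)

# The nonlinear search space has been reduced from 4 parameters to 2.
res = minimize(projected_residual, x0=[0.1, 1.0], method="Nelder-Mead")
print("fitted decay rates:", res.x)
```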
A Monte Carlo Study of an Iterative Wald Test Procedure for DIF Analysis
ERIC Educational Resources Information Center
Cao, Mengyang; Tay, Louis; Liu, Yaowu
2017-01-01
This study examined the performance of a proposed iterative Wald approach for detecting differential item functioning (DIF) between two groups when preknowledge of anchor items is absent. The iterative approach utilizes the Wald-2 approach to identify anchor items and then iteratively tests for DIF items with the Wald-1 approach. Monte Carlo…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, Amy N.; Nagle, Nicholas N.
2016-08-01
Techniques such as Iterative Proportional Fitting have been previously suggested as a means to generate new data with the demographic granularity of individual surveys and the spatial granularity of small area tabulations of censuses and surveys. This article explores internal and external validation approaches for synthetic, small area, household- and individual-level microdata using a case study for Bangladesh. Using data from the Bangladesh Census 2011 and the Demographic and Health Survey, we produce estimates of infant mortality rate and other household attributes for small areas using a variation of an iterative proportional fitting method called P-MEDM. We conduct an internal validation to determine: whether the model accurately recreates the spatial variation of the input data, how each of the variables performed overall, and how the estimates compare to the published population totals. We conduct an external validation by comparing the estimates with indicators from the 2009 Multiple Indicator Cluster Survey (MICS) for Bangladesh to benchmark how well the estimates compared to a known dataset which was not used in the original model. The results indicate that the estimation process is viable for regions that are better represented in the microdata sample, but also revealed the possibility of strong overfitting in sparsely sampled sub-populations.
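Classical two-way iterative proportional fitting, which P-MEDM generalizes, takes only a few lines: a seed table is alternately rescaled to match target row and column margins. A minimal sketch follows; the penalized, uncertainty-aware machinery of P-MEDM is omitted.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Scale `seed` so its margins match the targets (two-way IPF)."""
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        table *= (row_targets / table.sum(axis=1))[:, None]  # match row sums
        table *= col_targets / table.sum(axis=0)             # match column sums
        if np.allclose(table.sum(axis=1), row_targets, atol=tol):
            return table
    return table

seed = np.ones((3, 4))
fitted = ipf(seed, row_targets=np.array([20., 30., 50.]),
             col_targets=np.array([10., 20., 30., 40.]))
print(fitted.sum(axis=1), fitted.sum(axis=0))  # both match the targets
```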
xMDFF: molecular dynamics flexible fitting of low-resolution X-ray structures.
McGreevy, Ryan; Singharoy, Abhishek; Li, Qufei; Zhang, Jingfen; Xu, Dong; Perozo, Eduardo; Schulten, Klaus
2014-09-01
X-ray crystallography remains the most dominant method for solving atomic structures. However, for relatively large systems, the availability of only medium-to-low-resolution diffraction data often limits the determination of all-atom details. A new molecular dynamics flexible fitting (MDFF)-based approach, xMDFF, for determining structures from such low-resolution crystallographic data is reported. xMDFF employs a real-space refinement scheme that flexibly fits atomic models into an iteratively updating electron-density map. It addresses significant large-scale deformations of the initial model to fit the low-resolution density, as tested with synthetic low-resolution maps of D-ribose-binding protein. xMDFF has been successfully applied to re-refine six low-resolution protein structures of varying sizes that had already been submitted to the Protein Data Bank. Finally, via systematic refinement of a series of data from 3.6 to 7 Å resolution, xMDFF refinements together with electrophysiology experiments were used to validate the first all-atom structure of the voltage-sensing protein Ci-VSP.
a New Approach for Subway Tunnel Deformation Monitoring: High-Resolution Terrestrial Laser Scanning
NASA Astrophysics Data System (ADS)
Li, J.; Wan, Y.; Gao, X.
2012-07-01
With improvements in the accuracy and efficiency of laser scanning technology, high-resolution terrestrial laser scanning (TLS) can acquire precise, densely distributed point clouds and can be applied to high-precision deformation monitoring of subway tunnels, high-speed railway bridges and other structures. In this paper, a new approach using a point-cloud segmentation method based on vectors of neighboring points and a surface fitting method based on moving least squares was proposed and applied to subway tunnel deformation monitoring in Tianjin, combined with a new high-resolution terrestrial laser scanner (Riegl VZ-400). There were three main procedures. Firstly, a point cloud consisting of several scans was registered by a linearized iterative least squares approach to improve the accuracy of registration, and several control points were acquired by total stations (TS) and then adjusted. Secondly, the registered point cloud was resampled and segmented based on vectors of neighboring points to select suitable points. Thirdly, the selected points were used to fit the subway tunnel surface with a moving least squares algorithm, and a series of parallel sections obtained from a temporal series of fitted tunnel surfaces were compared to analyze the deformation. Finally, the results of the approach in the z direction were compared with a fiber-optic displacement sensor, and the results in the x and y directions were compared with TS; the comparison showed accuracy errors in the x, y and z directions of about 1.5 mm, 2 mm and 1 mm, respectively. Therefore the new approach using high-resolution TLS can meet the demands of subway tunnel deformation monitoring.
Iterative fitting method for the evaluation and quantification of PAES spectra
NASA Astrophysics Data System (ADS)
Zimnik, Samantha; Hackenberg, Mathias; Hugenschmidt, Christoph
2017-01-01
The elemental composition of surfaces is of great importance for the understanding of many surface processes such as catalysis. For a reliable analysis and a comparison of results, the quantification of the measured data is indispensable. Positron annihilation induced Auger Electron Spectroscopy (PAES) is a spectroscopic technique that measures the elemental composition with outstanding surface sensitivity, but up to now, no standardized evaluation procedure for PAES spectra is available. In this paper we present a new approach for the evaluation of PAES spectra of compounds, using the spectra obtained for the pure elements as reference. The measured spectrum is then fitted by a linear combination of the reference spectra by varying their intensities. The comparison of the results of the fitting routine with a calculation of the full parameter range shows an excellent agreement. We present the results of the new analysis method to evaluate the PAES spectra of sub-monolayers of Ni on a Pd substrate.
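Fitting a non-negative linear combination of reference spectra is a standard non-negative least squares problem. A minimal sketch with synthetic spectra follows; the Gaussian "references" are placeholders, not PAES data.

```python
import numpy as np
from scipy.optimize import nnls

energy = np.linspace(0, 100, 500)

def gauss(center, width):
    return np.exp(-0.5 * ((energy - center) / width) ** 2)

# Reference spectra of the pure elements (columns), e.g. Ni and Pd.
refs = np.column_stack([gauss(40, 4), gauss(65, 6)])

# Measured compound spectrum: a weighted sum of the references plus noise.
rng = np.random.default_rng(2)
measured = 0.3 * refs[:, 0] + 0.7 * refs[:, 1] + rng.normal(scale=0.01, size=energy.size)

# Non-negative least squares recovers the elemental intensities.
intensities, residual = nnls(refs, measured)
print("fitted intensities:", intensities, "residual norm:", residual)
```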
Nested Conjugate Gradient Algorithm with Nested Preconditioning for Non-linear Image Restoration.
Skariah, Deepak G; Arigovindan, Muthuvel
2017-06-19
We develop a novel optimization algorithm, which we call the Nested Non-Linear Conjugate Gradient algorithm (NNCG), for image restoration based on quadratic data fitting and smooth non-quadratic regularization. The algorithm is constructed as a nesting of two conjugate gradient (CG) iterations. The outer iteration is constructed as a preconditioned non-linear CG algorithm; the preconditioning is performed by the inner CG iteration, which is linear. The inner CG iteration, which performs the preconditioning for the outer CG iteration, is itself accelerated by another FFT-based non-iterative preconditioner. We prove that the method converges to a stationary point for both convex and non-convex regularization functionals. We demonstrate experimentally that the proposed method outperforms the well-known majorization-minimization method used for convex regularization, and a non-convex inertial-proximal method for a non-convex regularization functional.
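The nesting idea can be roughed out as follows, under simplifying assumptions: a small dense problem, a pseudo-Huber regularizer standing in for the paper's smooth non-quadratic term, and a fixed inner system in place of the FFT preconditioner. The outer loop is preconditioned nonlinear CG; each preconditioning step runs a few inner linear CG iterations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(3)
A = rng.normal(size=(80, 60))
b = rng.normal(size=80)
lam, delta = 0.5, 0.1

def f(x):
    # Quadratic data fit plus smooth non-quadratic (pseudo-Huber) regularizer.
    return (0.5 * np.sum((A @ x - b) ** 2)
            + lam * np.sum(delta**2 * (np.sqrt(1 + (x / delta) ** 2) - 1)))

def grad(x):
    return A.T @ (A @ x - b) + lam * x / np.sqrt(1 + (x / delta) ** 2)

# Preconditioner: a few *inner* linear CG iterations on (A^T A + lam I) z = g.
H = LinearOperator((60, 60), matvec=lambda v: A.T @ (A @ v) + lam * v)
def precond(g):
    z, _ = cg(H, g, maxiter=5)
    return z

x = np.zeros(60)
g = grad(x); z = precond(g); d = -z
for _ in range(200):
    if g @ d >= 0:          # safeguard: preconditioned steepest descent restart
        d = -z
    t = 1.0                 # backtracking (Armijo) line search
    while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
        t *= 0.5
    x = x + t * d
    g_new = grad(x); z_new = precond(g_new)
    beta = max(0.0, g_new @ (z_new - z) / (g @ z))   # preconditioned Polak-Ribiere
    d = -z_new + beta * d
    g, z = g_new, z_new
    if np.linalg.norm(g) < 1e-6:
        break
print("final objective:", f(x), "||grad||:", np.linalg.norm(g))
```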
Wang, Yin; Zhao, Nan-jing; Liu, Wen-qing; Yu, Yang; Fang, Li; Meng, De-shuo; Hu, Li; Zhang, Da-hai; Ma, Min-jun; Xiao, Xue; Wang, Yu; Liu, Jian-guo
2015-02-01
In recent years, laser-induced breakdown spectroscopy (LIBS) has developed rapidly. As a new material composition detection technology, LIBS can detect multiple elements simultaneously, quickly and simply, without complex sample preparation, enabling field, in-situ composition detection of the sample to be tested; the technique is therefore promising in many fields. Separating, fitting and extracting spectral feature lines is very important in LIBS, as it is the cornerstone of spectral feature recognition and subsequent inversion of element concentrations. To realize effective separation, fitting and extraction of spectral feature lines, the initial parameters for spectral line fitting before iteration were analyzed and determined. The spectral feature line of chromium (Cr I: 427.480 nm) in fly ash gathered from a coal-fired power station, which was overlapped with another line (Fe I: 427.176 nm), was separated and extracted using the damped least squares method. Based on Gauss-Newton iteration, the damped least squares method adds a damping factor to the step and adjusts the step length dynamically according to the feedback after each iteration, preventing the iteration from diverging and ensuring fast convergence. The damped least squares method yields better separation, fitting and extraction of spectral feature lines and gives more accurate intensity values: the spectral feature lines of chromium in samples containing different concentrations of chromium were separated and extracted, and the intensity values of the corresponding spectral lines were obtained with the damped least squares method and the ordinary least squares method separately. Calibration curves relating spectral line intensity to chromium concentration in the different samples were plotted and their linear correlations compared. The experimental results showed that the linear correlation between the intensity values of the spectral feature lines and the chromium concentrations obtained by the damped least squares method was better than that obtained by the least squares method. Therefore, the damped least squares method is stable, reliable and suitable for separating, fitting and extracting spectral feature lines in laser-induced breakdown spectroscopy.
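The overlapped-line separation can be illustrated with a damped least squares (Levenberg-Marquardt) fit of two Gaussian profiles to synthetic data. The line centres below match the Fe I and Cr I wavelengths named above; all intensities, widths and noise levels are invented.

```python
import numpy as np
from scipy.optimize import least_squares

wl = np.linspace(426.8, 427.9, 300)

def two_gauss(p, x):
    a1, c1, w1, a2, c2, w2 = p
    return (a1 * np.exp(-0.5 * ((x - c1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((x - c2) / w2) ** 2))

# Synthetic overlapped spectrum: Fe I 427.176 nm and Cr I 427.480 nm.
rng = np.random.default_rng(4)
true = [1.0, 427.176, 0.06, 0.7, 427.480, 0.06]
y = two_gauss(true, wl) + rng.normal(scale=0.02, size=wl.size)

# method='lm' is the damped least squares (Levenberg-Marquardt) algorithm.
fit = least_squares(lambda p: two_gauss(p, wl) - y,
                    x0=[0.8, 427.15, 0.05, 0.5, 427.5, 0.05], method="lm")
print("fitted line centres:", fit.x[1], fit.x[4])
```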
Golze, Dorothea; Iannuzzi, Marcella; Hutter, Jürg
2017-05-09
A local resolution-of-the-identity (LRI) approach is introduced in combination with the Gaussian and plane waves (GPW) scheme to enable large-scale Kohn-Sham density functional theory calculations. In GPW, the computational bottleneck is typically the description of the total charge density on real-space grids. Introducing the LRI approximation, the linear scaling of the GPW approach with respect to system size is retained, while the prefactor for the grid operations is reduced. The density fitting is an O(N) scaling process implemented by approximating the atomic pair densities by an expansion in one-center fit functions. The computational cost for the grid-based operations becomes negligible in LRIGPW. The self-consistent field iteration is up to 30 times faster for periodic systems dependent on the symmetry of the simulation cell and on the density of grid points. However, due to the overhead introduced by the local density fitting, single point calculations and complete molecular dynamics steps, including the calculation of the forces, are effectively accelerated by up to a factor of ∼10. The accuracy of LRIGPW is assessed for different systems and properties, showing that total energies, reaction energies, intramolecular and intermolecular structure parameters are well reproduced. LRIGPW yields also high quality results for extended condensed phase systems such as liquid water, ice XV, and molecular crystals.
Korany, Mohamed A; Gazy, Azza A; Khamis, Essam F; Ragab, Marwa A A; Kamal, Miranda F
2018-06-01
This study outlines two robust regression approaches, namely least median of squares (LMS) and iteratively re-weighted least squares (IRLS) to investigate their application in instrument analysis of nutraceuticals (that is, fluorescence quenching of merbromin reagent upon lipoic acid addition). These robust regression methods were used to calculate calibration data from the fluorescence quenching reaction (∆F and F-ratio) under ideal or non-ideal linearity conditions. For each condition, data were treated using three regression fittings: Ordinary Least Squares (OLS), LMS and IRLS. Assessment of linearity, limits of detection (LOD) and quantitation (LOQ), accuracy and precision were carefully studied for each condition. LMS and IRLS regression line fittings showed significant improvement in correlation coefficients and all regression parameters for both methods and both conditions. In the ideal linearity condition, the intercept and slope changed insignificantly, but a dramatic change was observed for the non-ideal condition and linearity intercept. Under both linearity conditions, LOD and LOQ values after the robust regression line fitting of data were lower than those obtained before data treatment. The results obtained after statistical treatment indicated that the linearity ranges for drug determination could be expanded to lower limits of quantitation by enhancing the regression equation parameters after data treatment. Analysis results for lipoic acid in capsules, using both fluorimetric methods, treated by parametric OLS and after treatment by robust LMS and IRLS were compared for both linearity conditions. Copyright © 2018 John Wiley & Sons, Ltd.
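A minimal IRLS sketch for a straight-line calibration follows, using Huber weights as one common choice; the paper's exact weight function is not reproduced here, and the data are invented.

```python
import numpy as np

def irls_line(x, y, k=1.345, n_iter=20):
    """Iteratively re-weighted least squares for y = b0 + b1*x (Huber weights)."""
    X = np.column_stack([np.ones_like(x), x])
    w = np.ones_like(y)
    for _ in range(n_iter):
        # Weighted least squares with the current weights.
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / (k * s)
        w = np.where(u <= 1.0, 1.0, 1.0 / u)        # Huber weight function
    return beta

x = np.array([1., 2., 3., 4., 5., 6.])
y = np.array([2.1, 4.0, 6.2, 7.9, 30.0, 12.1])      # one gross outlier
print("robust intercept/slope:", irls_line(x, y))
```

The outlier's weight shrinks with each pass, so the final line is dominated by the well-behaved points, which is the behaviour the abstract reports for non-ideal linearity conditions.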
Agile methods in biomedical software development: a multi-site experience report.
Kane, David W; Hohman, Moses M; Cerami, Ethan G; McCormick, Michael W; Kuhlmman, Karl F; Byrd, Jeff A
2006-05-30
Agile is an iterative approach to software development that relies on strong collaboration and automation to keep pace with dynamic environments. We have successfully used agile development approaches to create and maintain biomedical software, including software for bioinformatics. This paper reports on a qualitative study of our experiences using these methods. We have found that agile methods are well suited to the exploratory and iterative nature of scientific inquiry. They provide a robust framework for reproducing scientific results and for developing clinical support systems. The agile development approach also provides a model for collaboration between software engineers and researchers. We present our experience using agile methodologies in projects at six different biomedical software development organizations. The organizations include academic, commercial and government development teams, and included both bioinformatics and clinical support applications. We found that agile practices were a match for the needs of our biomedical projects and contributed to the success of our organizations. We found that the agile development approach was a good fit for our organizations, and that these practices should be applicable and valuable to other biomedical software development efforts. Although we found differences in how agile methods were used, we were also able to identify a set of core practices that were common to all of the groups, and that could be a focus for others seeking to adopt these methods.
Full two-dimensional transient solutions of electrothermal aircraft blade deicing
NASA Technical Reports Server (NTRS)
Masiulaniec, K. C.; Keith, T. G., Jr.; Dewitt, K. J.; Leffel, K. L.
1985-01-01
Two finite difference methods are presented for the analysis of transient, two-dimensional responses of an electrothermal de-icer pad of an aircraft wing or blade with attached variable ice layer thickness. Both models employ a Crank-Nicholson iterative scheme, and use an enthalpy formulation to handle the phase change in the ice layer. The first technique makes use of a 'staircase' approach, fitting the irregular ice boundary with square computational cells. The second technique uses a body fitted coordinate transform, and maps the exact shape of the irregular boundary into a rectangular body, with uniformally square computational cells. The numerical solution takes place in the transformed plane. Initial results accounting for variable ice layer thickness are presented. Details of planned de-icing tests at NASA-Lewis, which will provide empirical verification for the above two methods, are also presented.
Vosmeer, C Ruben; Kooi, Derk P; Capoferri, Luigi; Terpstra, Margreet M; Vermeulen, Nico P E; Geerke, Daan P
2016-01-01
Recently an iterative method was proposed to enhance the accuracy and efficiency of ligand-protein binding affinity prediction through linear interaction energy (LIE) theory. For ligand binding to flexible Cytochrome P450s (CYPs), this method was shown to decrease the root-mean-square error and standard deviation of error prediction by combining interaction energies of simulations starting from different conformations. Thereby, different parts of protein-ligand conformational space are sampled in parallel simulations. The iterative LIE framework relies on the assumption that separate simulations explore different local parts of phase space, and do not show transitions to other parts of configurational space that are already covered in parallel simulations. In this work, a method is proposed to (automatically) detect such transitions during the simulations that are performed to construct LIE models and to predict binding affinities. Using noise-canceling techniques and splines to fit time series of the raw data for the interaction energies, transitions during simulation between different parts of phase space are identified. Boolean selection criteria are then applied to determine which parts of the interaction energy trajectories are to be used as input for the LIE calculations. Here we show that this filtering approach benefits the predictive quality of our previous CYP 2D6-aryloxypropanolamine LIE model. In addition, an analysis is performed of the gain in computational efficiency that can be obtained from monitoring simulations using the proposed filtering method and by prematurely terminating simulations accordingly.
FIREFLY (Fitting IteRativEly For Likelihood analYsis): a full spectral fitting code
NASA Astrophysics Data System (ADS)
Wilkinson, David M.; Maraston, Claudia; Goddard, Daniel; Thomas, Daniel; Parikh, Taniya
2017-12-01
We present a new spectral fitting code, FIREFLY, for deriving the stellar population properties of stellar systems. FIREFLY is a chi-squared minimization fitting code that fits combinations of single-burst stellar population models to spectroscopic data, following an iterative best-fitting process controlled by the Bayesian information criterion. No priors are applied, rather all solutions within a statistical cut are retained with their weight. Moreover, no additive or multiplicative polynomials are employed to adjust the spectral shape. This fitting freedom is envisaged in order to map out the effect of intrinsic spectral energy distribution degeneracies, such as age, metallicity, dust reddening on galaxy properties, and to quantify the effect of varying input model components on such properties. Dust attenuation is included using a new procedure, which was tested on Integral Field Spectroscopic data in a previous paper. The fitting method is extensively tested with a comprehensive suite of mock galaxies, real galaxies from the Sloan Digital Sky Survey and Milky Way globular clusters. We also assess the robustness of the derived properties as a function of signal-to-noise ratio (S/N) and adopted wavelength range. We show that FIREFLY is able to recover age, metallicity, stellar mass, and even the star formation history remarkably well down to an S/N ∼ 5, for moderately dusty systems. Code and results are publicly available.
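BIC-controlled selection of this kind can be sketched generically: candidate models of increasing complexity are fitted, and the one minimizing the Bayesian information criterion is kept. A toy version follows, using polynomial degree as a stand-in for the number of stellar population components; data and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
x = np.linspace(0, 1, 60)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(scale=0.1, size=x.size)
sigma = 0.1  # assumed known measurement uncertainty

best_deg, best_bic = None, np.inf
for deg in range(6):
    coeffs = np.polyfit(x, y, deg)
    chi2 = np.sum(((y - np.polyval(coeffs, x)) / sigma) ** 2)
    k = deg + 1
    # For Gaussian errors, BIC = chi^2 + k ln N up to a model-independent constant.
    bic = chi2 + k * np.log(x.size)
    if bic < best_bic:
        best_deg, best_bic = deg, bic
print("selected degree:", best_deg)
```

The k ln N term penalizes extra components, so complexity is added only while it buys a genuine improvement in chi-squared.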
An approach to constrained aerodynamic design with application to airfoils
NASA Technical Reports Server (NTRS)
Campbell, Richard L.
1992-01-01
An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.
SCOUSE: Semi-automated multi-COmponent Universal Spectral-line fitting Engine
NASA Astrophysics Data System (ADS)
Henshaw, J. D.; Longmore, S. N.; Kruijssen, J. M. D.; Davies, B.; Bally, J.; Barnes, A.; Battersby, C.; Burton, M.; Cunningham, M. R.; Dale, J. E.; Ginsburg, A.; Immer, K.; Jones, P. A.; Kendrew, S.; Mills, E. A. C.; Molinari, S.; Moore, T. J. T.; Ott, J.; Pillai, T.; Rathborne, J.; Schilke, P.; Schmiedeke, A.; Testi, L.; Walker, D.; Walsh, A.; Zhang, Q.
2016-01-01
The Semi-automated multi-COmponent Universal Spectral-line fitting Engine (SCOUSE) is a spectral line fitting algorithm that fits Gaussian components to spectral line emission. It identifies the spatial area over which to fit the data and generates a grid of spectral averaging areas (SAAs). The spatially averaged spectra are fitted according to user-provided tolerance levels, and the best fit is selected using the Akaike Information Criterion, which weights the chi-squared value of a best-fitting solution according to the number of free parameters. A more detailed inspection of the spectra can be performed to improve the fit through an iterative process, after which SCOUSE integrates the new solutions into the solution file.
A TVSCAD approach for image deblurring with impulsive noise
NASA Astrophysics Data System (ADS)
Gu, Guoyong; Jiang, Suhong; Yang, Junfeng
2017-12-01
We consider the image deblurring problem in the presence of impulsive noise. It is known that total variation (TV) regularization with L1-norm penalized data fitting (TVL1 for short) works reasonably well only when the level of impulsive noise is relatively low; for high-level impulsive noise, TVL1 works poorly. The reason is that all data, both corrupted and noise-free, are equally penalized in the data fitting, leading to insurmountable difficulty in balancing regularization and data fitting. In this paper, we propose to combine TV regularization with a nonconvex smoothly clipped absolute deviation (SCAD) penalty for data fitting (TVSCAD for short). Our motivation is simply that data fitting should be enforced only when an observed datum is not severely corrupted, while for those data more likely to be severely corrupted, less or even no penalization should be enforced. A difference-of-convex-functions algorithm is adopted to solve the nonconvex TVSCAD model, resulting in a sequence of TVL1-equivalent problems, each of which can be solved efficiently by the alternating direction method of multipliers. Theoretically, we establish global convergence to a critical point of the nonconvex objective function. R-linear and at-least-sublinear convergence rate results are derived for the cases of anisotropic and isotropic TV, respectively. Numerically, experimental results are given to show that the TVSCAD approach improves on TVL1 significantly, especially for cases with high-level impulsive noise, and is comparable with the recently proposed iteratively corrected TVL1 method (Bai et al 2016 Inverse Problems 32 085004).
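For reference, a minimal implementation of the SCAD penalty in the standard Fan-Li parametrization (parameters lambda and a, with a ≈ 3.7 a common choice); the TV term and the difference-of-convex solver are not shown.

```python
import numpy as np

def scad(t, lam=1.0, a=3.7):
    """SCAD penalty evaluated elementwise on |t| (Fan-Li parametrization)."""
    t = np.abs(t)
    small = t <= lam
    mid = (t > lam) & (t <= a * lam)
    p = np.empty_like(t)
    p[small] = lam * t[small]                                   # L1-like near zero
    p[mid] = (2 * a * lam * t[mid] - t[mid]**2 - lam**2) / (2 * (a - 1))
    p[~small & ~mid] = lam**2 * (a + 1) / 2                     # flat for large |t|
    return p

print(scad(np.linspace(-5, 5, 11)))
```

The penalty is linear near zero and constant for large residuals, which is exactly the property the abstract exploits: severely corrupted observations stop contributing to the data-fitting term.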
Grosse, Constantino
2014-04-01
The description and interpretation of dielectric spectroscopy data usually require the use of analytical functions, which include unknown parameters that must be determined iteratively by means of a fitting procedure. This is not a trivial task and much effort has been spent to find the best way to accomplish it. While the theoretical approach based on the Levenberg-Marquardt algorithm is well known, no freely available program specifically adapted to the dielectric spectroscopy problem exists to the best of our knowledge. Moreover, even the more general commercial packages usually fail on the following aspects: (1) allow to keep temporarily fixed some of the parameters, (2) allow to freely specify the uncertainty values for each data point, (3) check that parameter values fall within prescribed bounds during the fitting process, and (4) allow to fit either the real, or the imaginary, or simultaneously both parts of the complex permittivity. A program that satisfies all these requirements and allows fitting any superposition of the Debye, Cole-Cole, Cole-Davidson, and Havriliak-Negami dispersions plus a conductivity term to measured dielectric spectroscopy data is presented. It is available on request from the author. Copyright © 2013 Elsevier Inc. All rights reserved.
Recovering the 3d Pose and Shape of Vehicles from Stereo Images
NASA Astrophysics Data System (ADS)
Coenen, M.; Rottensteiner, F.; Heipke, C.
2018-05-01
The precise reconstruction and pose estimation of vehicles plays an important role, e.g. for autonomous driving. We tackle this problem on the basis of street level stereo images obtained from a moving vehicle. Starting from initial vehicle detections, we use a deformable vehicle shape prior learned from CAD vehicle data to fully reconstruct the vehicles in 3D and to recover their 3D pose and shape. To fit a deformable vehicle model to each detection by inferring the optimal parameters for pose and shape, we define an energy function leveraging reconstructed 3D data, image information, the vehicle model and derived scene knowledge. To minimise the energy function, we apply a robust model fitting procedure based on iterative Monte Carlo model particle sampling. We evaluate our approach using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012). Our approach can deal with very coarse pose initialisations and we achieve encouraging results with up to 82 % correct pose estimations. Moreover, we are able to deliver very precise orientation estimation results with an average absolute error smaller than 4°.
The maintenance of single-locus polymorphism. IV. Models with mutation from existing alleles.
Spencer, H G; Marks, R W
1992-01-01
The ability of viability selection to maintain allelic polymorphism is investigated using a constructionist approach. In extensions to the models we have previously proposed, a population is bombarded with a series of mutations whose fitnesses in conjunction with other alleles are functions of the corresponding fitnesses with a particular allele, the parent allele, already in the population. Allele frequencies are iterated simultaneously, thus allowing alleles to be driven to extinction by selection. Such models allow very high levels of polymorphism to evolve: up to 38 alleles in one case. Alleles that are lethal as homozygotes can evolve to surprisingly high frequencies. The joint evolution of allele frequencies and viabilities highlights the necessity to consider more than the current morphology of a population. Comparisons are made with the neutral theory of evolution and it is suggested that failure to reject neutrality using the Ewens-Watterson test cannot be regarded as evidence for the neutral theory.
Online Updating of Statistical Inference in the Big Data Setting.
Schifano, Elizabeth D; Wu, Jing; Wang, Chun; Yan, Jun; Chen, Ming-Hui
2016-01-01
We present statistical methods for big data arising from online analytical processing, where large amounts of data arrive in streams and require fast analysis without storage/access to the historical data. In particular, we develop iterative estimating algorithms and statistical inferences for linear models and estimating equations that update as new data arrive. These algorithms are computationally efficient, minimally storage-intensive, and allow for possible rank deficiencies in the subset design matrices due to rare-event covariates. Within the linear model setting, the proposed online-updating framework leads to predictive residual tests that can be used to assess the goodness-of-fit of the hypothesized model. We also propose a new online-updating estimator under the estimating equation setting. Theoretical properties of the goodness-of-fit tests and proposed estimators are examined in detail. In simulation studies and real data applications, our estimator compares favorably with competing approaches under the estimating equation setting.
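The online-updating idea for linear models can be illustrated with the classical recursive least squares update, which refreshes the coefficient estimate one observation at a time without storing history; the rank-deficiency handling and inference machinery of the paper are omitted, and the data are synthetic.

```python
import numpy as np

class RecursiveLS:
    """Exact recursive least squares via the Sherman-Morrison update."""
    def __init__(self, p, prior_var=1e6):
        self.beta = np.zeros(p)
        self.P = np.eye(p) * prior_var   # ~ (X'X)^{-1}, diffuse prior

    def update(self, x, y):
        Px = self.P @ x
        gain = Px / (1.0 + x @ Px)
        self.beta += gain * (y - x @ self.beta)
        self.P -= np.outer(gain, Px)

rng = np.random.default_rng(6)
true_beta = np.array([2.0, -1.0, 0.5])
rls = RecursiveLS(p=3)
for _ in range(5000):                    # data arriving in a stream
    x = rng.normal(size=3)
    y = x @ true_beta + rng.normal(scale=0.1)
    rls.update(x, y)
print("streaming estimate:", rls.beta)
```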
Iterative approach as alternative to S-matrix in modal methods
NASA Astrophysics Data System (ADS)
Semenikhin, Igor; Zanuccoli, Mauro
2014-12-01
The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from the computational point of view with respect to direct methods. In particular, an iterative approach potentially enables the reduction of the computational time required to solve Maxwell's equations by eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are, as a rule, computed by the scattering-matrix (S-matrix) approach or similar techniques requiring of the order of M³ operations. In this work we consider alternatives to the S-matrix technique which are based on pure iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M³-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M², by applying iterative techniques is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.
Applying the scientific method to small catchment studies: Areview of the Panola Mountain experience
Hooper, R.P.
2001-01-01
A hallmark of the scientific method is its iterative application to a problem to increase and refine the understanding of the underlying processes controlling it. A successful iterative application of the scientific method to catchment science (including the fields of hillslope hydrology and biogeochemistry) has been hindered by two factors. First, the scale at which controlled experiments can be performed is much smaller than the scale of the phenomenon of interest. Second, computer simulation models generally have not been used as hypothesis-testing tools as rigorously as they might have been. Model evaluation often has gone only so far as evaluation of goodness of fit, rather than a full structural analysis, which is more useful when treating the model as a hypothesis. An iterative application of a simple mixing model to the Panola Mountain Research Watershed is reviewed to illustrate the increase in understanding gained by this approach and to discern general principles that may be applicable to other studies. The lessons learned include the need for an explicitly stated conceptual model of the catchment, the definition of objective measures of its applicability, and a clear linkage between the scale of observations and the scale of predictions. Published in 2001 by John Wiley & Sons. Ltd.
Stevenson, Fiona A; Gibson, William; Pelletier, Caroline; Chrysikou, Vasiliki; Park, Sophie
2015-05-08
UK-based research conducted within a healthcare setting generally requires approval from the National Research Ethics Service. Research ethics committees are required to assess a vast range of proposals, differing in both their topic and methodology. We argue the methodological benchmarks with which research ethics committees are generally familiar and which form the basis of assessments of quality do not fit with the aims and objectives of many forms of qualitative inquiry and their more iterative goals of describing social processes/mechanisms and making visible the complexities of social practices. We review current debates in the literature related to ethical review and social research, and illustrate the importance of re-visiting the notion of ethics in healthcare research. We present an analysis of two contrasting paradigms of ethics. We argue that the first of these is characteristic of the ways that NHS ethics boards currently tend to operate, and the second is an alternative paradigm, that we have labelled the 'iterative' paradigm, which draws explicitly on methodological issues in qualitative research to produce an alternative vision of ethics. We suggest that there is an urgent need to re-think the ways that ethical issues are conceptualised in NHS ethical procedures. In particular, we argue that embedded in the current paradigm is a restricted notion of 'quality', which frames how ethics are developed and worked through. Specific, pre-defined outcome measures are generally seen as the traditional marker of quality, which means that research questions that focus on processes rather than on 'outcomes' may be regarded as problematic. We show that the alternative 'iterative' paradigm offers a useful starting point for moving beyond these limited views. We conclude that a 'one size fits all' standardisation of ethical procedures and approach to ethical review acts against the production of knowledge about healthcare and dramatically restricts what can be known about the social practices and conditions of healthcare. Our central argument is that assessment of ethical implications is important, but that the current paradigm does not facilitate an adequate understanding of the very issues it aims to invigilate.
Multiple organ definition in CT using a Bayesian approach for 3D model fitting
NASA Astrophysics Data System (ADS)
Boes, Jennifer L.; Weymouth, Terry E.; Meyer, Charles R.
1995-08-01
Organ definition in computed tomography (CT) is of interest for treatment planning and response monitoring. We present a method for organ definition using a priori information about shape encoded in a set of biometric organ models--specifically for the liver and kidney-- that accurately represents patient population shape information. Each model is generated by averaging surfaces from a learning set of organ shapes previously registered into a standard space defined by a small set of landmarks. The model is placed in a specific patient's data set by identifying these landmarks and using them as the basis for model deformation; this preliminary representation is then iteratively fit to the patient's data based on a Bayesian formulation of the model's priors and CT edge information, yielding a complete organ surface. We demonstrate this technique using a set of fifteen abdominal CT data sets for liver surface definition both before and after the addition of a kidney model to the fitting; we demonstrate the effectiveness of this tool for organ surface definition in this low-contrast domain.
NASA Astrophysics Data System (ADS)
Eladj, Said; bansir, fateh; ouadfeul, sid Ali
2016-04-01
The application of a genetic algorithm starts with an initial population of chromosomes representing a "model space". Chromosomes are preferentially reproduced based on their fitness relative to the total population, so a good chromosome has a greater opportunity to produce offspring than the other chromosomes. The advantage of the HGA/SAA combination is the use of a global search approach on a large population of local maxima to improve the performance of the method significantly. To define the parameters of the Hybrid Genetic Algorithm Steepest Ascent Auto Statics (HGA/SAA) job, we evaluated by testing, in the first "Steepest Ascent" stage, the optimal parameters related to the data used: (1) the number of hill-climbing iterations, equal to 40, which defines the participation of the "SA" algorithm in this hybrid approach; (2) the minimum eigenvalue for SA, equal to 0.8, which is linked to the quality of the data and the S/N ratio. To assess the performance of hybrid genetic algorithms in the inversion for estimating residual static corrections, tests were performed to determine the number of generations for HGA/SAA. Using the values of residual static corrections already calculated by the "SAA" and "CSAA" approaches, learning proved very effective in building the cross-correlation table. To determine the optimal number of generations, we conducted a series of tests ranging from 10 to 200 generations. The application to real seismic data from southern Algeria allowed us to judge the performance and capacity of the inversion with this hybrid "HGA/SAA" method. This experience clarified the influence of the quality of the corrections estimated from "SAA/CSAA" and the optimum number of generations of the hybrid genetic algorithm "HGA" required for satisfactory performance. Twenty (20) generations were enough to improve the continuity and resolution of seismic horizons, allowing a more accurate structural interpretation. Key words: hybrid genetic algorithm, number of generations, model space, local maxima, number of hill-climbing iterations, minimum eigenvalue, cross-correlation table
NASA Astrophysics Data System (ADS)
Yamada, Y.; Shimokawa, T.; Shinomoto, S.; Yano, T.; Gouda, N.
2009-09-01
For the purpose of determining the celestial coordinates of stellar positions, consecutive observational images are laid overlapping each other with clues of stars belonging to multiple plates. In the analysis, one has to estimate not only the coordinates of individual plates, but also the possible expansion and distortion of the frame. This problem reduces to a least-squares fit that can in principle be solved by a huge matrix inversion, which is, however, impracticable. Here, we propose using Kalman filtering to perform the least-squares fit and implement a practical iterative algorithm. We also estimate errors associated with this iterative method and suggest a design of overlapping plates to minimize the error.
Iterative Monte Carlo analysis of spin-dependent parton distributions
Sato, Nobuo; Melnitchouk, Wally; Kuhn, Sebastian E.; ...
2016-04-05
We present a comprehensive new global QCD analysis of polarized inclusive deep-inelastic scattering, including the latest high-precision data on longitudinal and transverse polarization asymmetries from Jefferson Lab and elsewhere. The analysis is performed using a new iterative Monte Carlo fitting technique which generates stable fits to polarized parton distribution functions (PDFs) with statistically rigorous uncertainties. Inclusion of the Jefferson Lab data leads to a reduction in the PDF errors for the valence and sea quarks, as well as in the gluon polarization uncertainty at x ≳ 0.1. Furthermore, the study also provides the first determination of the flavor-separated twist-3 PDFs and the d2 moment of the nucleon within a global PDF analysis.
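In the same spirit, though far simpler than a global QCD analysis, a Monte Carlo fitting loop refits pseudo-data resampled within the measurement uncertainties, and the spread of the fitted parameters supplies the uncertainty estimate. A toy sketch with an invented one-parameter model follows.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0.05, 0.9, 25)
sigma = 0.02 * np.ones_like(x)
y = 0.8 * x * (1 - x) ** 3 + rng.normal(scale=sigma)   # toy data with errors

def fit_once(y_sample):
    # Model N * x * (1-x)^3 is linear in N, so each fit is closed-form
    # weighted least squares.
    basis = x * (1 - x) ** 3
    return np.sum(basis * y_sample / sigma**2) / np.sum(basis**2 / sigma**2)

# Monte Carlo: refit pseudo-data generated within the error bars.
samples = [fit_once(y + rng.normal(scale=sigma)) for _ in range(2000)]
print("N =", np.mean(samples), "+/-", np.std(samples))
```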
Lobach, Ihar; Benediktovitch, Andrei; Ulyanenkov, Alexander
2017-06-01
Diffraction in multilayers in the presence of interfacial roughness is studied theoretically, the roughness being considered as a transition layer. Exact (within the framework of the two-beam dynamical diffraction theory) differential equations for field amplitudes in a crystalline structure with varying properties along its surface normal are obtained. An iterative scheme for approximate solution of the equations is developed. The presented approach to interfacial roughness is incorporated into the recursion matrix formalism in a way that obviates possible numerical problems. Fitting of the experimental rocking curve is performed in order to test the possibility of reconstructing the roughness value from a diffraction scan. The developed algorithm works substantially faster than the traditional approach to dealing with a transition layer (dividing it into a finite number of thin lamellae). Calculations by the proposed approach are only two to three times longer than calculations for corresponding structures with ideally sharp interfaces.
Can the discharge of a hyperconcentrated flow be estimated from paleoflood evidence?
NASA Astrophysics Data System (ADS)
Bodoque, Jose M.; Eguibar, Miguel A.; Díez-Herrero, Andrés; Gutiérrez-Pérez, Ignacio; Ruíz-Villanueva, Virginia
2011-12-01
Many flood events involving water and sediments have been characterized using classic hydraulics principles, assuming the existence of critical flow and many other simplifications. In this paper, hyperconcentrated flow discharge was evaluated by using paleoflood reconstructions (based on paleostage indicators [PSI]) combined with a detailed hydraulic analysis of the critical flow assumption. The exact location where this condition occurred was established by iteratively determining the corresponding cross section, so that specific energy is at a minimum. In addition, all of the factors and parameters involved in the process were assessed, especially those related to the momentum equation, existing shear stresses in the wetted perimeter, and nonhydrostatic and hydrostatic pressure distributions. The superelevation of the hyperconcentrated flow, due to the flow elevation curvature, was also estimated and calibrated with the PSI. The estimated peak discharge was established once the iterative process was unable to improve the fit between the simulated depth and the depth observed from the PSI. The methodological approach proposed here can be applied to other higher-gradient mountainous torrents with a similar geomorphic configuration to the one studied in this paper. Likewise, results have been derived with fewer uncertainties than those obtained from standard hydraulic approaches, whose simplifying assumptions have not been considered.
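The critical-flow condition at the heart of the method can be located iteratively for a general cross-section: the critical depth is where the Froude number equals unity, equivalently where specific energy is at a minimum for the given discharge. A minimal bisection sketch for a trapezoidal section follows; all geometry and discharge values are hypothetical.

```python
G = 9.81
Q, b, m = 40.0, 8.0, 1.5   # discharge (m3/s), bed width (m), side slope (H:V)

def froude2(y):
    """Squared Froude number Q^2 B / (g A^3) for a trapezoidal section."""
    A = y * (b + m * y)    # flow area
    B = b + 2.0 * m * y    # top width
    return Q**2 * B / (G * A**3)

# Bisection on froude2(y) = 1: froude2 decreases monotonically with depth.
lo, hi = 0.01, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if froude2(mid) > 1.0:   # supercritical -> depth too small
        lo = mid
    else:
        hi = mid
print("critical depth ~ %.3f m" % mid)
```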
Iterative Addition of Kinetic Effects to Cold Plasma RF Wave Solvers
NASA Astrophysics Data System (ADS)
Green, David; Berry, Lee; RF-SciDAC Collaboration
2017-10-01
The hot nature of fusion plasmas requires a wave-vector dependent conductivity tensor for accurate calculation of wave heating and current drive. Traditional methods for calculating the linear, kinetic full-wave plasma response rely on a spectral method such that the wave-vector dependent conductivity fits naturally within the numerical method. These methods have seen much success for application to the well-confined core plasma of tokamaks. However, quantitative prediction for high-power RF antenna designs for fusion applications requires resolving the geometric details of the antenna and other plasma-facing surfaces, for which the Fourier spectral method is ill-suited. An approach to enabling the addition of kinetic effects to the more versatile finite-difference and finite-element cold-plasma full-wave solvers was presented previously, in which an operator-split iterative method was outlined. Here we expand on this approach, examine convergence, and present a simplified kinetic current estimator for rapidly updating the right-hand side of the wave equation with kinetic corrections. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Use of a genetic algorithm for the analysis of eye movements from the linear vestibulo-ocular reflex
NASA Technical Reports Server (NTRS)
Shelhamer, M.
2001-01-01
It is common in vestibular and oculomotor testing to use a single-frequency (sine) or combination of frequencies [sum-of-sines (SOS)] stimulus for head or target motion. The resulting eye movements typically contain a smooth tracking component, which follows the stimulus, in which are interspersed rapid eye movements (saccades or fast phases). The parameters of the smooth tracking--the amplitude and phase of each component frequency--are of interest; many methods have been devised that attempt to identify and remove the fast eye movements from the smooth. We describe a new approach to this problem, tailored to both single-frequency and sum-of-sines stimulation of the human linear vestibulo-ocular reflex. An approximate derivative is used to identify fast movements, which are then omitted from further analysis. The remaining points form a series of smooth tracking segments. A genetic algorithm is used to fit these segments together to form a smooth (but disconnected) wave form, by iteratively removing biases due to the missing fast phases. A genetic algorithm is an iterative optimization procedure; it provides a basis for extending this approach to more complex stimulus-response situations. In the SOS case, the genetic algorithm estimates the amplitude and phase values of the component frequencies as well as removing biases.
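The desaccading step can be sketched without the genetic algorithm: samples whose approximate derivative exceeds a threshold are flagged as fast movements, and sine/cosine terms at the stimulus frequencies are least-squares fitted to the remaining smooth samples. The GA's iterative removal of the biases left by the missing fast phases is omitted here, and the threshold, frequencies and signal values are all assumptions.

```python
import numpy as np

fs = 500.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)
freqs = [0.3, 0.7, 1.1]                       # sum-of-sines stimulus (Hz), assumed
rng = np.random.default_rng(8)

# Synthetic eye position: smooth tracking plus step-like fast-phase displacements.
eye = sum(a * np.sin(2 * np.pi * f * t + 0.5)
          for a, f in zip([1.0, 0.6, 0.3], freqs))
for idx in rng.choice(t.size - 50, size=30, replace=False):
    eye[idx:] += rng.normal(scale=0.5)

# Flag fast movements with an approximate derivative threshold.
vel = np.gradient(eye, 1 / fs)
smooth = np.abs(vel) < 15.0                   # deg/s threshold, assumed

# Fit sin/cos at each stimulus frequency to the smooth samples only;
# amplitude and phase per frequency are linear in the sin/cos coefficients.
cols = [np.ones_like(t)]
for f in freqs:
    cols += [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X[smooth], eye[smooth], rcond=None)
for i, f in enumerate(freqs):
    s, c = coef[1 + 2 * i], coef[2 + 2 * i]
    print(f"{f} Hz: amplitude {np.hypot(s, c):.3f}, phase {np.arctan2(c, s):.3f} rad")
```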
Surface Fitting for Quasi Scattered Data from Coordinate Measuring Systems.
Mao, Qing; Liu, Shugui; Wang, Sen; Ma, Xinhui
2018-01-13
Non-uniform rational B-spline (NURBS) surface fitting from data points is widely used in the fields of computer-aided design (CAD), medical imaging, cultural relic representation and object-shape detection. Usually, the measured data acquired from coordinate measuring systems is neither gridded nor completely scattered. The distribution of this kind of data is scattered in physical space, but the data points are stored in a way consistent with the order of measurement, so it is named quasi scattered data in this paper. The points can therefore be organized into rows easily, but the number of points in each row is random. In order to overcome the difficulty of surface fitting from this kind of data, a new method based on resampling is proposed. It consists of three major steps: (1) NURBS curve fitting for each row, (2) resampling on the fitted curve and (3) surface fitting from the resampled data. An iterative projection optimization scheme is applied in the first and third steps to yield an advisable parameterization and reduce the time cost of projection. A resampling approach based on parameters, local peaks and contour curvature is proposed to overcome the problems of node redundancy and high time consumption in the fitting of this kind of scattered data. Numerical experiments are conducted with both simulated and practical data, and the results show that the proposed method is fast, effective and robust. Moreover, by analyzing the fitting results acquired from data with different degrees of scatter, it is demonstrated that the error introduced by resampling is negligible and that the method is therefore feasible.
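The row-wise curve fit and resampling (steps 1 and 2) can be sketched with SciPy's parametric spline routines, with smoothing B-splines standing in for the NURBS and projection machinery of the paper; the measured "row" below is synthetic.

```python
import numpy as np
from scipy.interpolate import splprep, splev

# One measured 'row' of quasi scattered points (noisy, unevenly spaced).
rng = np.random.default_rng(9)
u_true = np.sort(rng.uniform(0, np.pi, 40))
x = np.cos(u_true) + rng.normal(scale=0.005, size=40)
y = np.sin(u_true) + rng.normal(scale=0.005, size=40)

# Step 1: fit a smoothing parametric spline to the row.
tck, u = splprep([x, y], s=40 * 0.005**2)

# Step 2: resample the fitted curve at a fixed number of parameter values,
# so every row contributes the same number of points to the surface fit.
u_new = np.linspace(0, 1, 25)
x_new, y_new = splev(u_new, tck)
print(np.column_stack([x_new, y_new])[:3])
```

Resampling each row to the same point count is what turns the quasi scattered rows into a grid suitable for tensor-product surface fitting.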
A non-iterative extension of the multivariate random effects meta-analysis.
Makambi, Kepher H; Seung, Hyunuk
2015-01-01
Multivariate methods in meta-analysis are becoming popular and more accepted in biomedical research despite computational issues in some of the techniques. A number of approaches, both iterative and non-iterative, have been proposed, including the multivariate DerSimonian and Laird method by Jackson et al. (2010), which is non-iterative. In this study, we propose an extension of the method by Hartung and Makambi (2002) and Makambi (2001) to multivariate situations. A comparison of the bias and mean square error from a simulation study indicates that, in some circumstances, the proposed approach performs better than the multivariate DerSimonian-Laird approach. An example is presented to demonstrate the application of the proposed approach.
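For orientation, the univariate non-iterative DerSimonian-Laird estimator, which the multivariate methods above generalize, fits in a few lines; yi are the study effects and vi their within-study variances, with the example numbers invented.

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Univariate DerSimonian-Laird random-effects meta-analysis (non-iterative)."""
    w = 1.0 / vi                               # fixed-effect weights
    ybar = np.sum(w * yi) / np.sum(w)
    q = np.sum(w * (yi - ybar) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)   # between-study variance
    w_re = 1.0 / (vi + tau2)                   # random-effects weights
    mu = np.sum(w_re * yi) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return mu, se, tau2

yi = np.array([0.30, 0.12, 0.45, 0.20, 0.51])
vi = np.array([0.020, 0.015, 0.050, 0.010, 0.040])
print(dersimonian_laird(yi, vi))
```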
Designing a freeform optic for oblique illumination
NASA Astrophysics Data System (ADS)
Uthoff, Ross D.; Ulanch, Rachel N.; Williams, Kaitlyn E.; Ruiz Diaz, Liliana; King, Page; Koshel, R. John
2017-11-01
The Functional Freeform Fitting (F4) method is utilized to design a freeform optic for oblique illumination of Mark Rothko's Green on Blue (1956). Shown are preliminary results from an iterative freeform design process, from problem definition and specification development to surface fit, ray-tracing results, and optimization. This method is applicable to both point and extended sources of various geometries.
Mediatory myths in the U.S. military: tobacco use as “stress relief”
Smith, Elizabeth A.; Malone, Ruth E.
2013-01-01
Purpose To examine perceptions of military personnel about tobacco use. Design Secondary analysis of (a) focus group and (b) interview data. Setting U.S. military. Subjects Total participants (n = 241): enlisted personnel and supervisors (n = 189 individuals participating in 23 focus groups), and tobacco control managers and policy leaders (n = 52 interview participants). Intervention Not applicable. Measures Not applicable. Analysis Inductive, iterative coding for salient themes using an interpretive approach, applying the concept of mediatory myths, used by institutions to cover over internal contradictions. Results All types of participants endorsed the idea that tobacco was needed in the military for stress relief. Types of stress identified included fitting in (relationships with co-workers and superiors) and control of workflow (taking breaks). Participants also discussed beliefs about the impact of tobacco on the military mission, and institutional sanction of tobacco use. Conclusions Despite tobacco's well-documented negative effects on fitness, the myth that tobacco relieves stress serves several institutional functions in the military. It minimizes perceptions of the effects of stress on the fitness of personnel, suggests that stress can be managed solely by individuals, and institutionalizes tobacco use. Growing recognition among military leadership that countering stress is essential to fitness offers an opportunity to challenge this myth. PMID:24359178
Estimating the Instantaneous Drag-Wind Relationship for a Horizontally Homogeneous Canopy
NASA Astrophysics Data System (ADS)
Pan, Ying; Chamecki, Marcelo; Nepf, Heidi M.
2016-07-01
The mean drag-wind relationship is usually investigated assuming that field data are representative of spatially-averaged metrics of statistically stationary flow within and above a horizontally homogeneous canopy. Even if these conditions are satisfied, large-eddy simulation (LES) data suggest two major issues in the analysis of observational data. Firstly, the streamwise mean pressure gradient is usually neglected in the analysis of data from terrestrial canopies, which compromises the estimates of mean canopy drag and provides misleading information for the dependence of local mean drag coefficients on local velocity scales. Secondly, no standard approach has been proposed to investigate the instantaneous drag-wind relationship, a critical component of canopy representation in LES. Here, a practical approach is proposed to fit the streamwise mean pressure gradient using observed profiles of the mean vertical momentum flux within the canopy. Inclusion of the fitted mean pressure gradient enables reliable estimates of the mean drag-wind relationship. LES data show that a local mean drag coefficient that characterizes the relationship between mean canopy drag and the velocity scale associated with total kinetic energy can be used to identify the dependence of the local instantaneous drag coefficient on instantaneous velocity. Iterative approaches are proposed to fit specific models of velocity-dependent instantaneous drag coefficients that represent the effects of viscous drag and the reconfiguration of flexible canopy elements. LES data are used to verify the assumptions and algorithms employed by these new approaches. The relationship between mean canopy drag and mean velocity, which is needed in models based on the Reynolds-averaged Navier-Stokes equations, is parametrized to account for both the dependence on velocity and the contribution from velocity variances. Finally, velocity-dependent drag coefficients lead to significant variations of the calculated displacement height and roughness length with wind speed.
Perceptron Genetic to Recognize Opening Strategy Ruy Lopez
NASA Astrophysics Data System (ADS)
Azmi, Zulfian; Mawengkang, Herman
2018-01-01
The application of the perceptron method is not effective for coding on hardware-based systems because it does not learn in real time. With a genetic algorithm approach to calculating and searching for the best weights (fitness values), the system performs learning in only one iteration. The results of this analysis were tested on recognition of the Ruy Lopez chess opening pattern. The analysis applies a perceptron model with a genetic algorithm approach, from the artificial neural network family, to the Ruy Lopez opening. The data are processed from a chess opening database, with the positions of the white pawns after eight moves of the opening. The perceptron takes many inputs and one output, processing many weights and biases until the output equals the goal. The data were trained and tested with Matlab software, and the system can recognize in real time whether an opening is the Ruy Lopez or not.
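The idea of replacing gradient-style training with a genetic search over perceptron weights can be sketched as follows; the toy encoding, population size and mutation scheme are all illustrative assumptions, not taken from the paper:

```python
# Toy sketch: a genetic algorithm searches perceptron weights directly,
# using classification accuracy as the fitness value.
import numpy as np

rng = np.random.default_rng(0)

def fitness(w, X, y):
    pred = (X @ w[:-1] + w[-1]) > 0            # perceptron with bias term
    return np.mean(pred == y)                  # accuracy as fitness

def ga_train(X, y, pop=50, gens=100, sigma=0.5):
    P = rng.normal(size=(pop, X.shape[1] + 1)) # random initial population
    for _ in range(gens):
        scores = np.array([fitness(w, X, y) for w in P])
        parents = P[np.argsort(scores)[-pop // 2:]]          # selection
        children = parents + rng.normal(scale=sigma, size=parents.shape)
        P = np.vstack([parents, children])                   # mutation only
    return max(P, key=lambda w: fitness(w, X, y))            # best weights
```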
NASA Technical Reports Server (NTRS)
Alvertos, Nicolas; Dcunha, Ivan
1992-01-01
Pose and orientation of an object is one of the central issues in 3-D recognition problems. Most of today's available techniques require considerable pre-processing, such as detecting edges or joints, fitting curves or surfaces to segment images, and trying to extract higher-order features from the input images. We present a method based on analytical geometry, whereby all the rotation parameters of any quadric surface are determined and subsequently eliminated. This procedure is iterative in nature and was found to converge to the desired results in as few as three iterations. The approach enables us to position the quadric surface in a desired coordinate system, and then to utilize the recovered shape information to explicitly represent and recognize the 3-D surface. Experiments were conducted with simulated data for objects such as hyperboloids of one and two sheets, elliptic and hyperbolic paraboloids, elliptic and hyperbolic cylinders, ellipsoids, and quadric cones. Real data of quadric cones and cylinders were also utilized. Both of these sets yielded excellent results.
NASA Astrophysics Data System (ADS)
Luo, Y.; Nissen-Meyer, T.; Morency, C.; Tromp, J.
2008-12-01
Seismic imaging in the exploration industry is often based upon ray-theoretical migration techniques (e.g., Kirchhoff) or other ideas which neglect some fraction of the seismic wavefield (e.g., wavefield continuation for acoustic-wave first arrivals) in the inversion process. In a companion paper we discuss the possibility of solving the full physical forward problem (i.e., including visco- and poroelastic, anisotropic media) using the spectral-element method. With such a tool at hand, we can readily apply the adjoint method to tomographic inversions, i.e., iteratively improving an initial 3D background model to fit the data. In the context of this inversion process, we draw connections between kernels in adjoint tomography and basic imaging principles in migration. We show that the images obtained by migration are nothing but particular kinds of adjoint kernels (mainly density kernels). Migration is basically a first step in the iterative inversion process of adjoint tomography. We apply the approach to basic 2D problems involving layered structures, overthrusting faults, topography, salt domes, and poroelastic regions.
Chatterji, Madhabi
2002-01-01
This study examines the validity of data generated by the School Readiness for Reforms: Leader Questionnaire (SRR-LQ) using an iterative procedure that combines classical and Rasch rating scale analysis. Following content validation and pilot testing, principal axis factor extraction and promax rotation of factors yielded a five-factor structure consistent with the content-validated subscales of the original instrument. Factors were identified based on inspection of pattern and structure coefficients. The rotated factor pattern, inter-factor correlations, convergent validity coefficients, and Cronbach's alpha reliability estimates supported the hypothesized construct properties. To further examine unidimensionality and the efficacy of the rating scale structures, item-level data from each factor-defined subscale were subjected to analysis with the Rasch rating scale model. Data-to-model fit statistics and separation reliability for items and persons met acceptable criteria. Rating scale results suggested consistency of expected and observed step difficulties in rating categories, and correspondence of step calibrations with increases in the underlying variables. The combined approach yielded more comprehensive diagnostic information on the quality of the five SRR-LQ subscales; further research is continuing.
A Multi-Fidelity Surrogate Model for the Equation of State for Mixtures of Real Gases
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Koneru, Rahul; Balachandar, S.; Rollin, Bertrand
2017-11-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the products of detonated explosives must be treated as real gases while the ideal gas equation of state is used for the ambient air. As the products expand outward, they mix with the air and create a region where both state equations must be satisfied. One of the most accurate, yet expensive, methods to handle this problem is an algorithm that iterates between both state equations until both pressure and thermal equilibrium are achieved inside of each computational cell. This work creates a multi-fidelity surrogate model to replace this process. This is achieved by using a Kriging model to produce a curve fit which interpolates selected data from the iterative algorithm. The surrogate is optimized for computing speed and model accuracy by varying the number of sampling points chosen to construct the model. The performance of the surrogate with respect to the iterative method is tested in simulations using a finite volume code. The model's computational speed and accuracy are analyzed to show the benefits of this novel approach. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program, as a Cooperative Agreement under the Predictive Science Academic Alliance Program, under Contract No. DE-NA00023.
Analysis of a Multi-Fidelity Surrogate for Handling Real Gas Equations of State
NASA Astrophysics Data System (ADS)
Ouellet, Frederick; Park, Chanyoung; Rollin, Bertrand; Balachandar, S.
2017-06-01
The explosive dispersal of particles is a complex multiphase and multi-species fluid flow problem. In these flows, the detonation products of the explosive must be treated as real gas while the ideal gas equation of state is used for the surrounding air. As the products expand outward from the detonation point, they mix with ambient air and create a mixing region where both state equations must be satisfied. One of the most accurate, yet computationally expensive, methods to handle this problem is an algorithm that iterates between both equations of state until pressure and thermal equilibrium are achieved inside of each computational cell. This work aims to use a multi-fidelity surrogate model to replace this process. A Kriging model is used to produce a curve fit which interpolates selected data from the iterative algorithm using Bayesian statistics. We study the model performance with respect to the iterative method in simulations using a finite volume code. The model's (i) computational speed, (ii) memory requirements and (iii) computational accuracy are analyzed to show the benefits of this novel approach. Also, optimizing the combination of model accuracy and computational speed through the choice of sampling points is explained. This work was supported by the U.S. Department of Energy, National Nuclear Security Administration, Advanced Simulation and Computing Program as a Cooperative Agreement under the Predictive Science Academic Alliance Program under Contract No. DE-NA0002378.
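As a rough illustration of the surrogate idea, the following minimal Gaussian-kernel interpolator stands in for the Kriging model (no nugget, zero mean, fixed length scale; all names are illustrative assumptions):

```python
# Sketch: interpolate sampled outputs of the expensive EOS iteration with
# a Gaussian-kernel (simple-Kriging-style) surrogate.
import numpy as np

def kriging_fit(X, y, length=1.0):
    """X: (n, d) sample inputs; y: (n,) sampled outputs. Returns kernel weights."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. dists
    K = np.exp(-d2 / (2.0 * length**2))                   # Gaussian kernel
    return np.linalg.solve(K, y)                          # interpolation weights

def kriging_predict(X, weights, Xq, length=1.0):
    """Evaluate the surrogate at query points Xq, replacing the iteration."""
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length**2)) @ weights
```

The trade-off described in the abstract amounts to choosing how many rows of X to sample from the iterative algorithm: more samples raise accuracy but cost more up front.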
NASA Astrophysics Data System (ADS)
van Gent, P. L.; Michaelis, D.; van Oudheusden, B. W.; Weiss, P.-É.; de Kat, R.; Laskari, A.; Jeon, Y. J.; David, L.; Schanz, D.; Huhn, F.; Gesemann, S.; Novara, M.; McPhaden, C.; Neeteson, N. J.; Rival, D. E.; Schneiders, J. F. G.; Schrijer, F. F. J.
2017-04-01
A test case for pressure field reconstruction from particle image velocimetry (PIV) and Lagrangian particle tracking (LPT) has been developed by constructing a simulated experiment from a zonal detached eddy simulation for an axisymmetric base flow at Mach 0.7. The test case comprises sequences of four subsequent particle images (representing multi-pulse data) as well as continuous time-resolved data which can realistically only be obtained for low-speed flows. Particle images were processed using tomographic PIV processing as well as the LPT algorithm `Shake-The-Box' (STB). Multiple pressure field reconstruction techniques have subsequently been applied to the PIV results (Eulerian approach, iterative least-square pseudo-tracking, Taylor's hypothesis approach, and instantaneous Vortex-in-Cell) and LPT results (FlowFit, Vortex-in-Cell-plus, Voronoi-based pressure evaluation, and iterative least-square pseudo-tracking). All methods were able to reconstruct the main features of the instantaneous pressure fields, including methods that reconstruct pressure from a single PIV velocity snapshot. Highly accurate reconstructed pressure fields could be obtained using LPT approaches in combination with more advanced techniques. In general, the use of longer series of time-resolved input data, when available, allows more accurate pressure field reconstruction. Noise in the input data typically reduces the accuracy of the reconstructed pressure fields, but none of the techniques proved to be critically sensitive to the amount of noise added in the present test case.
NASA Astrophysics Data System (ADS)
Mitchell, N.
2007-01-01
Nb3Sn cable-in-conduit-type conductors were expected to provide an efficient way of achieving large conductor currents at high field (up to 13 T), combined with good stability against electromagnetic disturbances due to the extensive helium contact area with the strands. Although ITER model coils successfully reached their design performance (Kato et al 2001 Fusion Eng. Des. 56/57 59-70), initial indications (Mitchell 2003 Fusion Eng. Des. 66-68 971-94) that there were unexplained performance shortfalls have been confirmed. Recent conductor tests (Pasztor et al 2004 IEEE Trans. Appl. Supercond. 14 1527-30) and modelling work (Mitchell 2005 Supercond. Sci. Technol. 18 396-404) suggest that the shortfalls are due to a combination of strand bending and filament fracture under the transverse magnetic loads. Using the new model, the extensive database from the ITER CS insert coil has been reassessed. A parametric fit based on a loss of filament area and n (the exponent of the power-law fit to the electric field), combined with a more rigorous consideration of the conductor field gradient, has enabled the coil behaviour to be explained much more consistently than in earlier assessments, now fitting the Nb3Sn strain scaling laws when used with measurements of the conductor operating strain, including conditions when the insert coil current (and hence operating strain) was reversed. The coil superconducting performance also shows a fatigue-type behaviour consistent with recent measurements on conductor samples (Martovetsky et al 2005 IEEE Trans. Appl. Supercond. 15 1367-70). The ITER conductor design has already been modified compared to the CS insert to increase the margin and provide increased resistance to the degradation, by using a steel jacket to provide thermal pre-compression to reduce tensile strain levels, reducing the void fraction from 36% to 33%, and increasing the non-copper material by 25%. Test results are not yet available for the new design, and performance predictions at present rely on models with limited verification.
The Maintenance of Single-Locus Polymorphism. IV. Models with Mutation from Existing Alleles
Spencer, H. G.; Marks, R. W.
1992-01-01
The ability of viability selection to maintain allelic polymorphism is investigated using a constructionist approach. In extensions to the models we have previously proposed, a population is bombarded with a series of mutations whose fitnesses in conjunction with other alleles are functions of the corresponding fitnesses with a particular allele, the parent allele, already in the population. Allele frequencies are iterated simultaneously, thus allowing alleles to be driven to extinction by selection. Such models allow very high levels of polymorphism to evolve: up to 38 alleles in one case. Alleles that are lethal as homozygotes can evolve to surprisingly high frequencies. The joint evolution of allele frequencies and viabilities highlights the necessity to consider more than the current morphology of a population. Comparisons are made with the neutral theory of evolution and it is suggested that failure to reject neutrality using the Ewens-Watterson test cannot be regarded as evidence for the neutral theory. PMID:1732162
Jerosch-Herold, Christina; Chester, Rachel; Shepstone, Lee; Vincent, Joshua I; MacDermid, Joy C
2018-02-01
The shoulder pain and disability index (SPADI) has been extensively evaluated for its psychometric properties using classical test theory (CTT). The purpose of this study was to evaluate its structural validity using Rasch model analysis. Responses to the SPADI from 1030 patients referred for physiotherapy with shoulder pain and enrolled in a prospective cohort study were available for Rasch model analysis. Overall fit, individual person and item fit, response format, dependence, unidimensionality, targeting, reliability and differential item functioning (DIF) were examined. The SPADI pain subscale initially demonstrated misfit due to DIF by age and gender. After iterative analysis it showed good fit to the Rasch model with acceptable targeting and unidimensionality (overall fit chi-square statistic 57.2, p = 0.1; mean item fit residual 0.19 (1.5); mean person fit residual 0.44 (1.1); person separation index (PSI) 0.83). The disability subscale, however, shows significant misfit due to uniform DIF, even after iterative analyses were used to explore different solutions to the sources of misfit (overall fit chi-square statistic 57.2, p = 0.1; mean item fit residual 0.54 (1.26); mean person fit residual 0.38 (1.0); PSI 0.84). Rasch model analysis of the SPADI has identified some strengths and limitations not previously observed using CTT methods. The SPADI should be treated as two separate subscales. The SPADI is a widely used outcome measure in clinical practice and research; however, the scores derived from it must be interpreted with caution. The pain subscale fits the Rasch model expectations well. The disability subscale does not fit the Rasch model, and its current format does not meet the criteria for true interval-level measurement required for use as a primary endpoint in clinical trials. Clinicians should therefore exercise caution when interpreting score changes on the disability subscale and attempt to compare their scores to age- and sex-stratified data.
An iterative method for analysis of hadron ratios and spectra in relativistic heavy-ion collisions
NASA Astrophysics Data System (ADS)
Choi, Suk; Lee, Kang Seog
2016-04-01
A new iteration method is proposed for analyzing both the multiplicities and the transverse momentum spectra measured within a small rapidity interval with a low momentum cut-off, without assuming invariance of the rapidity distribution under Lorentz boosts, and is applied to the hadron data measured by the ALICE collaboration for Pb+Pb collisions at √(s_NN) = 2.76 TeV. In order to correctly restrict the resonance contribution to the small rapidity interval measured, we consider only ratios involving hadrons whose transverse momentum spectrum is available. In spite of the small number of ratios considered, the quality of the fit to both the ratios and the transverse momentum spectra is excellent. Also, the calculated ratios involving strange baryons with the fitted parameters agree with the data surprisingly well.
NASA Astrophysics Data System (ADS)
Mohammadian-Behbahani, Mohammad-Reza; Saramad, Shahyar
2018-07-01
In high count rate radiation spectroscopy and imaging, detector output pulses tend to pile up due to high interaction rate of the particles with the detector. Pile-up effects can lead to a severe distortion of the energy and timing information. Pile-up events are conventionally prevented or rejected by both analog and digital electronics. However, for decreasing the exposure times in medical imaging applications, it is important to maintain the pulses and extract their true information by pile-up correction methods. The single-event reconstruction method is a relatively new model-based approach for recovering the pulses one-by-one using a fitting procedure, for which a fast fitting algorithm is a prerequisite. This article proposes a fast non-iterative algorithm based on successive integration which fits the bi-exponential model to experimental data. After optimizing the method, the energy spectra, energy resolution and peak-to-peak count ratios are calculated for different counting rates using the proposed algorithm as well as the rejection method for comparison. The obtained results prove the effectiveness of the proposed method as a pile-up processing scheme designed for spectroscopic and medical radiation detection applications.
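The successive-integration idea lends itself to a compact sketch. The following is a minimal, hypothetical Python illustration of one standard integral-equation linearization for a bi-exponential pulse; the paper's exact algorithm and pulse model may differ. Two cumulative integrals turn the fit into a single linear least-squares solve, with no iteration:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

def fit_biexp(t, y):
    """Non-iterative fit of y ~ A*exp(p*t) + B*exp(q*t) by successive
    integration: y satisfies y'' = a*y' + b*y, and integrating twice
    makes the unknowns (a, b) coefficients of a linear regression."""
    I1 = cumulative_trapezoid(y, t, initial=0.0)    # first integral of y
    I2 = cumulative_trapezoid(I1, t, initial=0.0)   # second integral of y
    M = np.column_stack([I2, I1, t, np.ones_like(t)])
    b, a, _, _ = np.linalg.lstsq(M, y, rcond=None)[0]
    p, q = np.roots([1.0, -a, -b])      # exponents from s^2 - a*s - b = 0
    E = np.column_stack([np.exp(p * t), np.exp(q * t)])
    A, B = np.linalg.lstsq(E, y, rcond=None)[0]     # amplitudes: linear step
    return A, p, B, q                   # assumes noise leaves the roots real
```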
Training models of anatomic shape variability
Merck, Derek; Tracton, Gregg; Saboo, Rohit; Levy, Joshua; Chaney, Edward; Pizer, Stephen; Joshi, Sarang
2008-01-01
Learning probability distributions of the shape of anatomic structures requires fitting shape representations to human expert segmentations from training sets of medical images. The quality of statistical segmentation and registration methods is directly related to the quality of this initial shape fitting, yet the subject is largely overlooked or described in an ad hoc way. This article presents a set of general principles to guide such training. Our novel method is to jointly estimate both the best geometric model for any given image and the shape distribution for the entire population of training images by iteratively relaxing purely geometric constraints in favor of the converging shape probabilities as the fitted objects converge to their target segmentations. The geometric constraints are carefully crafted both to obtain legal, non-self-interpenetrating shapes and to impose the model-to-model correspondences required for useful statistical analysis. The paper closes with example applications of the method to synthetic and real patient CT image sets, including same-patient male pelvis and head-and-neck images, and cross-patient kidney and brain images. Finally, we outline how this shape training serves as the basis for our approach to IGRT/ART. PMID:18777919
NASA Astrophysics Data System (ADS)
Shedge, Sapana V.; Pal, Sourav; Köster, Andreas M.
2011-07-01
Recently, two non-iterative approaches have been proposed to calculate response properties within density functional theory (DFT). These approaches are auxiliary density perturbation theory (ADPT) and the non-iterative approach to the coupled-perturbed Kohn-Sham (NIA-CPKS) method. Though both methods are non-iterative, they use different techniques to obtain the perturbed Kohn-Sham matrix. In this Letter, for the first time, both of these independent methods have been used for the calculation of dipole-quadrupole polarizabilities. To validate these methods, three tetrahedral molecules, viz. P4, CH4 and adamantane (C10H16), have been used as examples. The comparison with MP2 and CCSD proves the reliability of the methodology.
Hongtao, Li; Shichao, Chen; Yanjun, Han; Yi, Luo
2013-01-14
A feedback method combined with a fitting technique based on variable separation mapping is proposed to design freeform optical systems for an extended LED source with prescribed illumination patterns, especially with uniform illuminance distribution. The feedback process performs well with extended sources, while the fitting technique contributes not only to a decrease in the number of sub-surfaces in discontinuous freeform lenses, which may cause loss in manufacture, but also to a reduction in the number of feedback iterations. It is proved that light control efficiency can be improved by 5%, while keeping a high uniformity of 82%, with only two feedback iterations and one fitting operation. Furthermore, the polar angle θ and azimuthal angle φ are used to specify the light direction from the light source, and the (θ,φ)-(x,y) based mapping and feedback strategy ensures that even if a few discontinuous sections along the equi-φ plane exist in the system, they are perpendicular to the base plane, making the surfaces eligible for manufacture using injection molding.
Analytical methods in multivariate highway safety exposure data estimation
DOT National Transportation Integrated Search
1984-01-01
Three general analytical techniques which may be of use in extending, enhancing, and combining highway accident exposure data are discussed. The techniques are log-linear modelling, iterative proportional fitting, and the expectation maximization algorithm.
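Classical iterative proportional fitting is compact enough to sketch; the following hedged numpy version rescales a seed contingency table until its margins match prescribed row and column totals (variable names are illustrative):

```python
# Iterative proportional fitting (IPF): alternate row and column rescaling.
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-9, max_iter=1000):
    """seed: (r, c) nonnegative table; targets: desired margin totals."""
    T = seed.astype(float).copy()
    for _ in range(max_iter):
        T *= (row_targets / T.sum(axis=1))[:, None]   # match row margins
        T *= (col_targets / T.sum(axis=0))[None, :]   # match column margins
        if np.allclose(T.sum(axis=1), row_targets, atol=tol):
            break
    return T
```

Each sweep preserves the interaction structure of the seed table while pulling the margins toward the targets, which is why IPF suits exposure tables built from partial data sources.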
Comments on "Different techniques for finding best-fit parameters"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.; Triplett, Laurie A.
2014-07-01
A common data analysis problem is to find best-fit parameters through chi-square minimization. Levenberg-Marquardt is an often-used system that depends on gradients and converges when successive iterations do not change chi-square by more than a specified amount. We point out that, in cases where the sought-after parameter weakly affects the fit and in cases where the overall scale factor is a parameter, a Golden Search technique can often do better. The Golden Search converges when the best-fit point is within a specified range, and that range can be made arbitrarily small. It does not depend on the value of chi-square.
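A minimal sketch of a golden-section (Golden Search) minimizer over a single parameter; chi2 is any user-supplied chi-square function, and the bracket [lo, hi] is assumed to contain the minimum:

```python
import math

def golden_search(chi2, lo, hi, tol=1e-6):
    """Shrink the bracket [lo, hi] by the golden ratio each step."""
    g = (math.sqrt(5.0) - 1.0) / 2.0        # inverse golden ratio
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while (b - a) > tol:                    # converges on bracket width,
        if chi2(c) < chi2(d):               # not on changes in chi-square
            b, d = d, c
            c = b - g * (b - a)
        else:
            a, c = c, d
            d = a + g * (b - a)
    return 0.5 * (a + b)
```

Note the termination test: the search stops when the parameter bracket is small, matching the abstract's point that convergence does not depend on the chi-square value itself.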
Real-Time Exponential Curve Fits Using Discrete Calculus
NASA Technical Reports Server (NTRS)
Rowe, Geoffrey
2010-01-01
An improved solution for curve fitting data to an exponential equation (y = Ae^(Bt) + C) has been developed. This improvement is in four areas: speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax^B + C and the general geometric growth equation y = Ak^(Bt) + C.
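As a rough illustration of the discrete-calculus idea (not the report's exact algorithm, which uses the Mean Value Theorem relationship), successive differences of uniformly sampled data cancel the offset C, and their ratio isolates B; all names below are illustrative:

```python
# Non-iterative fit of y = A*exp(B*t) + C on uniformly sampled data.
import numpy as np

def fit_exp_offset(t, y):
    h = t[1] - t[0]                          # uniform sample spacing assumed
    d = np.diff(y)                           # differences remove the offset C
    r = d[1:] / d[:-1]                       # each ratio estimates exp(B*h)
    B = np.log(np.median(r)) / h             # robust pooled estimate of B
    E = np.column_stack([np.exp(B * t), np.ones_like(t)])
    A, C = np.linalg.lstsq(E, y, rcond=None)[0]   # linear solve for A and C
    return A, B, C                           # assumes ratios stay positive
```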
The motional Stark effect diagnostic for ITER using a line-shift approach.
Foley, E L; Levinton, F M; Yuh, H Y; Zakharov, L E
2008-10-01
The United States has been tasked with the development and implementation of a motional Stark effect (MSE) system on ITER. In the harsh ITER environment, MSE is particularly susceptible to degradation, as it depends on polarimetry, and the polarization reflection properties of surfaces are highly sensitive to thin film effects due to plasma deposition and erosion of a first mirror. Here we present the results of a comprehensive study considering a new MSE-based approach to internal plasma magnetic field measurements for ITER. The proposed method uses the line shifts in the MSE spectrum (MSE-LS) to provide a radial profile of the magnetic field magnitude. To determine the utility of MSE-LS for equilibrium reconstruction, studies were performed using the ESC-ERV code system. A near-term opportunity to test the use of MSE-LS for equilibrium reconstruction is being pursued in the implementation of MSE with laser-induced fluorescence on NSTX. Though the field values and beam energies are very different from ITER, the use of a laser allows precision spectroscopy with a similar ratio of linewidth to line spacing on NSTX as would be achievable with a passive system on ITER. Simulation results for ITER and NSTX are presented, and the relative merits of the traditional line polarization approach and the new line-shift approach are discussed.
Single-agent parallel window search
NASA Technical Reports Server (NTRS)
Powley, Curt; Korf, Richard E.
1991-01-01
Parallel window search is applied to single-agent problems by having different processes simultaneously perform iterations of Iterative-Deepening-A* (IDA*) on the same problem but with different cost thresholds. This approach is limited by the time to perform the goal iteration. To overcome this disadvantage, the authors consider node ordering. They discuss how global node ordering by minimum h among nodes with equal f = g + h values can reduce the time complexity of serial IDA* by reducing the time to perform the iterations prior to the goal iteration. Finally, the two ideas of parallel window search and node ordering are combined to eliminate the weaknesses of each approach while retaining the strengths. The resulting approach, called simply parallel window search, can be used to find a near-optimal solution quickly, improve the solution until it is optimal, and then finally guarantee optimality, depending on the amount of time available.
Effect of thick blanket modules on neoclassical tearing mode locking in ITER
La Haye, R. J.; Paz-Soldan, C.; Liu, Y. Q.
2016-11-03
The rotation of m/n = 2/1 tearing modes can be slowed and stopped (i.e. locked) by eddy currents induced in resistive walls in conjunction with residual error fields that provide a final 'notch' point. This is a particular issue in ITER with large inertia and low applied torque (m and n are poloidal and toroidal mode numbers respectively). Previous estimates of tolerable 2/1 island widths in ITER found that the ITER electron cyclotron current drive (ECCD) system could catch and subdue such islands before they persisted long enough and grew large enough to lock. These estimates were based on a forecast of initial island rotation using the n = 1 resistive penetration time of the inner vacuum vessel wall and benchmarked to DIII-D high-rotation plasmas. However, rotating tearing modes in ITER will also induce eddy currents in the blanket as the effective first wall that can shield the inner vessel. The closer-fitting blanket wall has a much shorter time constant and should allow several times smaller islands to lock several times faster in ITER than previously considered; this challenges the ECCD stabilization. Here, recent DIII-D ITER baseline scenario (IBS) plasmas with low rotation through small applied torque allow better modeling and scaling to ITER with the blanket as the first resistive wall.
Thompson, Alexander E; Meredig, Bryce; Wolverton, C
2014-03-12
We have created an improved xenon interatomic potential for use with existing UO2 potentials. This potential was fit to density functional theory calculations with the Hubbard U correction (DFT + U) using a genetic algorithm approach called iterative potential refinement (IPR). We examine the defect energetics of the IPR-fitted xenon interatomic potential as well as other, previously published xenon potentials. We compare these potentials to DFT + U derived energetics for a series of xenon defects in a variety of incorporation sites (large, intermediate, and small vacant sites). We find the existing xenon potentials overestimate the energy needed to add a xenon atom to a wide set of defect sites representing a range of incorporation sites, including failing to correctly rank the energetics of the small incorporation site defects (xenon in an interstitial and xenon in a uranium site neighboring uranium in an interstitial). These failures are due to problematic descriptions of Xe-O and/or Xe-U interactions of the previous xenon potentials. These failures are corrected by our newly created xenon potential: our IPR-generated potential gives good agreement with DFT + U calculations to which it was not fitted, such as xenon in an interstitial (small incorporation site) and xenon in a double Schottky defect cluster (large incorporation site). Finally, we note that IPR is very flexible and can be applied to a wide variety of potential forms and materials systems, including metals and EAM potentials.
A composite computational model of liver glucose homeostasis. I. Building the composite model.
Hetherington, J; Sumner, T; Seymour, R M; Li, L; Rey, M Varela; Yamaji, S; Saffrey, P; Margoninski, O; Bogle, I D L; Finkelstein, A; Warner, A
2012-04-07
A computational model of the glucagon/insulin-driven liver glucohomeostasis function, focusing on the buffering of glucose into glycogen, has been developed. The model exemplifies an 'engineering' approach to modelling in systems biology, and was produced by linking together seven component models of separate aspects of the physiology. The component models use a variety of modelling paradigms and degrees of simplification. Model parameters were determined by an iterative hybrid of fitting to high-scale physiological data, and determination from small-scale in vitro experiments or molecular biological techniques. The component models were not originally designed for inclusion within such a composite model, but were integrated, with modification, using our published modelling software and computational frameworks. This approach facilitates the development of large and complex composite models, although, inevitably, some compromises must be made when composing the individual models. Composite models of this form have not previously been demonstrated.
Cat Swarm Optimization algorithm for optimal linear phase FIR filter design.
Saha, Suman Kumar; Ghoshal, Sakti Prasad; Kar, Rajib; Mandal, Durbadal
2013-11-01
In this paper a new meta-heuristic search method, called the Cat Swarm Optimization (CSO) algorithm, is applied to determine the best optimal impulse response coefficients of FIR low pass, high pass, band pass and band stop filters, trying to meet the respective ideal frequency response characteristics. CSO was developed by observing the behaviour of cats and is composed of two sub-models. In CSO, one can decide how many cats are used in the iteration. Every cat has its own position composed of M dimensions, velocities for each dimension, a fitness value which represents the accommodation of the cat to the fitness function, and a flag to identify whether the cat is in seeking mode or tracing mode. The final solution is the best position of one of the cats. CSO keeps the best solution until it reaches the end of the iteration. The results of the proposed CSO-based approach have been compared to those of other well-known optimization methods such as the Real Coded Genetic Algorithm (RGA), standard Particle Swarm Optimization (PSO) and Differential Evolution (DE). The CSO-based results confirm the superiority of the proposed CSO for solving FIR filter design problems. The performance of the CSO-designed FIR filters has proven to be superior to that of filters obtained by RGA, conventional PSO and DE. The simulation results also demonstrate that CSO is the best optimizer among the compared techniques, not only in convergence speed but also in the optimal performance of the designed filters. Copyright © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
Novel automated inversion algorithm for temperature reconstruction using gas isotopes from ice cores
NASA Astrophysics Data System (ADS)
Döring, Michael; Leuenberger, Markus C.
2018-06-01
Greenland past temperature history can be reconstructed by forcing the output of a firn-densification and heat-diffusion model to fit multiple gas-isotope data (δ15N or δ40Ar or δ15Nexcess) extracted from ancient air in Greenland ice cores, using published accumulation-rate (Acc) datasets. We present here a novel methodology to solve this inverse problem, by designing a fully automated algorithm. To demonstrate the performance of this novel approach, we begin by intentionally constructing synthetic temperature histories and associated δ15N datasets, mimicking real Holocene data that we use as true values (targets) to be compared to the output of the algorithm. This allows us to quantify uncertainties originating from the algorithm itself. The presented approach is completely automated and therefore minimizes the subjective impact of manual parameter tuning, leading to reproducible temperature estimates. In contrast to many other ice-core-based temperature reconstruction methods, the presented approach is completely independent of ice-core stable-water isotopes, providing the opportunity to validate water-isotope-based reconstructions or reconstructions where water isotopes are used together with δ15N or δ40Ar. We solve the inverse problem T(δ15N, Acc) by using a combination of a Monte Carlo based iterative approach and the analysis of remaining mismatches between modelled and target data, based on cubic-spline filtering of random numbers and the laboratory-determined temperature sensitivity for nitrogen isotopes. Additionally, the presented reconstruction approach was tested by fitting measured δ40Ar and δ15Nexcess data, which led as well to a robust agreement between modelled and measured data. The obtained final mismatches follow a symmetric standard-distribution function. For the study on synthetic data, 95% of the mismatches compared to the synthetic target data are in an envelope between 3.0 to 6.3 permeg for δ15N and 0.23 to 0.51 K for temperature (2σ, respectively). In addition to Holocene temperature reconstructions, the fitting approach can also be used for glacial temperature reconstructions. This is shown by fitting the North Greenland Ice Core Project (NGRIP) δ15N data for two Dansgaard-Oeschger events using the presented approach, leading to results comparable to other studies.
Fast iterative censoring CFAR algorithm for ship detection from SAR images
NASA Astrophysics Data System (ADS)
Gu, Dandan; Yue, Hui; Zhang, Yuan; Gao, Pengcheng
2017-11-01
Ship detection is one of the essential techniques for ship recognition from synthetic aperture radar (SAR) images. This paper presents a fast iterative detection procedure to eliminate the influence of target returns on the estimation of local sea clutter distributions for constant false alarm rate (CFAR) detectors. A fast block detector is first employed to extract potential target sub-images; then, an iterative censoring CFAR algorithm is used to detect ship candidates from each target block adaptively and efficiently, where parallel detection is available and the statistical parameters of the G0 distribution, which fits local sea clutter well, can be quickly estimated based on an integral image operator. Experimental results on TerraSAR-X images demonstrate the effectiveness of the proposed technique.
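The integral-image operator mentioned above can be sketched briefly; once the summed-area table is built, any rectangular window sum, and hence any local clutter mean, costs four lookups:

```python
# Summed-area table for constant-time local statistics in a CFAR window.
import numpy as np

def integral_image(img):
    return np.cumsum(np.cumsum(img, axis=0), axis=1)

def window_sum(S, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from its integral image S, O(1) per query."""
    total = S[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= S[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= S[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += S[r0 - 1, c0 - 1]
    return total
```

Re-censoring detected targets and rebuilding the statistics from the same table is what keeps each iteration of the censoring loop cheap.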
Attractive electron-electron interactions within robust local fitting approximations.
Merlot, Patrick; Kjærgaard, Thomas; Helgaker, Trygve; Lindh, Roland; Aquilante, Francesco; Reine, Simen; Pedersen, Thomas Bondo
2013-06-30
An analysis of Dunlap's robust fitting approach reveals that the resulting two-electron integral matrix is not manifestly positive semidefinite when local fitting domains or non-Coulomb fitting metrics are used. We present a highly local approximate method for evaluating four-center two-electron integrals based on the resolution-of-the-identity (RI) approximation and apply it to the construction of the Coulomb and exchange contributions to the Fock matrix. In this pair-atomic resolution-of-the-identity (PARI) approach, atomic-orbital (AO) products are expanded in auxiliary functions centered on the two atoms associated with each product. Numerical tests indicate that in 1% or less of all Hartree-Fock and Kohn-Sham calculations, the indefinite integral matrix causes nonconvergence in the self-consistent-field iterations. In these cases, the two-electron contribution to the total energy becomes negative, meaning that the electronic interaction is effectively attractive, and the total energy is dramatically lower than that obtained with exact integrals. In the vast majority of our test cases, however, the indefiniteness does not interfere with convergence. The total energy accuracy is comparable to that of the standard Coulomb-metric RI method. The speed-up compared with conventional algorithms is similar to the RI method for Coulomb contributions; exchange contributions are accelerated by a factor of up to eight with a triple-zeta quality basis set. A positive semidefinite integral matrix is recovered within PARI by introducing local auxiliary basis functions spanning the full AO product space, as may be achieved by using Cholesky-decomposition techniques. Local completion, however, slows down the algorithm to a level comparable with or below conventional calculations. Copyright © 2013 Wiley Periodicals, Inc.
Projected 1981 exposure estimates using iterative proportional fitting
DOT National Transportation Integrated Search
1985-10-01
1981 VMT estimates categorized by eight driver, vehicle, and environmental variables are produced. These 1981 estimates are produced using analytical methods developed in a previous report. The estimates are based on 1977 NPTS data (the latest ...
Performing Systematic Literature Reviews with Novices: An Iterative Approach
ERIC Educational Resources Information Center
Lavallée, Mathieu; Robillard, Pierre-N.; Mirsalari, Reza
2014-01-01
Reviewers performing systematic literature reviews require understanding of the review process and of the knowledge domain. This paper presents an iterative approach for conducting systematic literature reviews that addresses the problems faced by reviewers who are novices in one or both levels of understanding. This approach is derived from…
Line Segmentation of 2d Laser Scanner Point Clouds for Indoor Slam Based on a Range of Residuals
NASA Astrophysics Data System (ADS)
Peter, M.; Jafri, S. R. U. N.; Vosselman, G.
2017-09-01
Indoor mobile laser scanning (IMLS) based on the Simultaneous Localization and Mapping (SLAM) principle proves to be the preferred method to acquire data of indoor environments at a large scale. In previous work, we proposed a backpack IMLS system containing three 2D laser scanners and a corresponding SLAM approach. The feature-based SLAM approach solves all six degrees of freedom simultaneously and builds on the association of lines to planes. Because of the iterative character of the SLAM process, the quality and reliability of the segmentation of linear segments in the scanlines plays a crucial role in the quality of the derived poses and consequently the point clouds. The orientations of the lines resulting from the segmentation can be influenced negatively by narrow objects which are nearly coplanar with walls (such as doors), which will cause the line to be tilted if those objects are not detected as separate segments. State-of-the-art methods from the robotics domain, like Iterative End Point Fit and Line Tracking, were found not to handle such situations well. Thus, we describe a novel segmentation method based on the comparison of a range of residuals to a range of thresholds. For the definition of the thresholds we employ the fact that the expected value for the average of the residuals of n points with respect to the line is σ/√n. Our method, as shown by the experiments and the comparison to other methods, is able to deliver more accurate results than the two approaches it was tested against.
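A hypothetical sketch of the kind of residual test the abstract describes, assuming the constant c and the choice of range sizes are illustrative; a candidate segment is accepted only if averaged residuals over every tested range stay below c·σ/√n:

```python
# Accept a scanline segment as a single line only if residual averages over
# several range sizes n stay within the sigma/sqrt(n) expectation.
import numpy as np

def fits_line(points, sigma, c=3.0):
    """points: (N, 2) scanline coordinates; sigma: expected range noise."""
    x, y = points[:, 0], points[:, 1]
    k, b = np.polyfit(x, y, 1)                    # least-squares line
    res = (y - (k * x + b)) / np.hypot(k, 1.0)    # perpendicular residuals
    for n in (1, len(res) // 2, len(res)):        # a few range sizes
        if n < 1:
            continue
        avg = np.convolve(res, np.ones(n) / n, mode='valid')
        if np.any(np.abs(avg) > c * sigma / np.sqrt(n)):
            return False                          # e.g. a tilted door panel
    return True
```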
A variable-gain output feedback control design approach
NASA Technical Reports Server (NTRS)
Haylo, Nesim
1989-01-01
A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighing the performance at many operating points. The solution is obtained by embedding the problem into the Multi-Configuration Control (MCC) framework, a multi-model robust control design technique. In contrast to conventional gain scheduling, which uses a curve fit of single-model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.
Improved image decompression for reduced transform coding artifacts
NASA Technical Reports Server (NTRS)
Orourke, Thomas P.; Stevenson, Robert L.
1994-01-01
The perceived quality of images reconstructed from low bit rate compression is severely degraded by the appearance of transform coding artifacts. This paper proposes a method for producing higher quality reconstructed images based on a stochastic model for the image data. Quantization (scalar or vector) partitions the transform coefficient space and maps all points in a partition cell to a representative reconstruction point, usually taken as the centroid of the cell. The proposed image estimation technique selects the reconstruction point within the quantization partition cell which results in a reconstructed image which best fits a non-Gaussian Markov random field (MRF) image model. This approach results in a convex constrained optimization problem which can be solved iteratively. At each iteration, the gradient projection method is used to update the estimate based on the image model. In the transform domain, the resulting coefficient reconstruction points are projected to the particular quantization partition cells defined by the compressed image. Experimental results will be shown for images compressed using scalar quantization of block DCT and using vector quantization of subband wavelet transform. The proposed image decompression provides a reconstructed image with reduced visibility of transform coding artifacts and superior perceived quality.
Novel aspects of plasma control in ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphreys, D.; Jackson, G.; Walker, M.
2015-02-15
ITER plasma control design solutions and performance requirements are strongly driven by its nuclear mission, aggressive commissioning constraints, and limited number of operational discharges. In addition, high plasma energy content, heat fluxes, neutron fluxes, and very long pulse operation place novel demands on control performance in many areas ranging from plasma boundary and divertor regulation to plasma kinetics and stability control. Both commissioning and experimental operations schedules provide limited time for tuning of control algorithms relative to operating devices. Although many aspects of the control solutions required by ITER have been well-demonstrated in present devices and even designed satisfactorily for ITER application, many elements unique to ITER, including various crucial integration issues, are presently under development. We describe selected novel aspects of plasma control in ITER, identifying unique parts of the control problem and highlighting some key areas of research remaining. Novel control areas described include control physics understanding (e.g., current profile regulation, tearing mode (TM) suppression), control mathematics (e.g., algorithmic and simulation approaches to high confidence robust performance), and integration solutions (e.g., methods for management of highly subscribed control resources). We identify unique aspects of the ITER TM suppression scheme, which will pulse gyrotrons to drive current within a magnetic island, and turn the drive off following suppression in order to minimize use of auxiliary power and maximize fusion gain. The potential role of active current profile control and approaches to design in ITER are discussed. Issues and approaches to fault handling algorithms are described, along with novel aspects of actuator sharing in ITER.
2014-01-01
Background Striking a balance between the degree of model complexity and parameter identifiability, while still producing biologically feasible simulations using modelling is a major challenge in computational biology. While these two elements of model development are closely coupled, parameter fitting from measured data and analysis of model mechanisms have traditionally been performed separately and sequentially. This process produces potential mismatches between model and data complexities that can compromise the ability of computational frameworks to reveal mechanistic insights or predict new behaviour. In this study we address this issue by presenting a generic framework for combined model parameterisation, comparison of model alternatives and analysis of model mechanisms. Results The presented methodology is based on a combination of multivariate metamodelling (statistical approximation of the input–output relationships of deterministic models) and a systematic zooming into biologically feasible regions of the parameter space by iterative generation of new experimental designs and look-up of simulations in the proximity of the measured data. The parameter fitting pipeline includes an implicit sensitivity analysis and analysis of parameter identifiability, making it suitable for testing hypotheses for model reduction. Using this approach, under-constrained model parameters, as well as the coupling between parameters within the model are identified. The methodology is demonstrated by refitting the parameters of a published model of cardiac cellular mechanics using a combination of measured data and synthetic data from an alternative model of the same system. Using this approach, reduced models with simplified expressions for the tropomyosin/crossbridge kinetics were found by identification of model components that can be omitted without affecting the fit to the parameterising data. Our analysis revealed that model parameters could be constrained to a standard deviation of on average 15% of the mean values over the succeeding parameter sets. Conclusions Our results indicate that the presented approach is effective for comparing model alternatives and reducing models to the minimum complexity replicating measured data. We therefore believe that this approach has significant potential for reparameterising existing frameworks, for identification of redundant model components of large biophysical models and to increase their predictive capacity. PMID:24886522
DEVELOPMENT OF INTERATOMIC POTENTIALS IN TUNGSTEN-RHENIUM SYSTEMS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Setyawan, Wahyu; Nandipati, Giridhar; Kurtz, Richard J.
2016-09-01
Reference data are generated using the ab initio method to fit interatomic potentials for the W-Re system. The reference data include single phases of W and Re, strained structures, slabs, systems containing several concentrations of vacancies, systems containing various types of interstitial defects, melt structures, structures in the σ and χ phases, and structures containing several concentrations of solid solutions of Re in bcc W and W in hcp Re. Future work will start the fitting iterations.
Tang, Jinghua; McGrath, Michael; Laszczak, Piotr; Jiang, Liudi; Bader, Dan L; Moser, David; Zahedi, Saeed
2015-12-01
Design and fitting of artificial limbs to lower limb amputees are largely based on the subjective judgement of the prosthetist. Understanding the science of three-dimensional (3D) dynamic coupling at the residuum/socket interface could potentially aid the design and fitting of the socket. A new method has been developed to characterise the 3D dynamic coupling at the residuum/socket interface using 3D motion capture based on a single case study of a trans-femoral amputee. The new model incorporated a Virtual Residuum Segment (VRS) and a Socket Segment (SS) which combined to form the residuum/socket interface. Angular and axial couplings between the two segments were subsequently determined. Results indicated a non-rigid angular coupling in excess of 10° in the quasi-sagittal plane and an axial coupling of between 21 and 35 mm. The corresponding angular couplings of less than 4° and 2° were estimated in the quasi-coronal and quasi-transverse plane, respectively. We propose that the combined experimental and analytical approach adopted in this case study could aid the iterative socket fitting process and could potentially lead to a new socket design. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Spatial uncertainty of a geoid undulation model in Guayaquil, Ecuador
NASA Astrophysics Data System (ADS)
Chicaiza, E. G.; Leiva, C. A.; Arranz, J. J.; Buenaño, X. E.
2017-06-01
Geostatistics is a discipline that deals with the statistical analysis of regionalized variables. In this case study, geostatistics is used to estimate geoid undulation in the rural area of Guayaquil, Ecuador. The geostatistical approach was chosen because it provides an estimation error for the prediction map. The open-source statistical software R was used, mainly the geoR, gstat and RGeostats libraries. Exploratory data analysis (EDA), trend analysis and structural analysis were carried out. Automatic model fitting by iterative least squares and other fitting procedures were employed to fit the variogram. Finally, kriging using the Bouguer gravity anomaly as external drift and universal kriging were used to produce a detailed map of geoid undulation. The estimation uncertainty lay in the interval [-0.5; +0.5] m for the errors, with a maximum estimation standard deviation of 2 mm for the interpolation method applied. According to a comparison with independent validation points, the error distribution of the geoid undulation map obtained in this study provides a better result than the Earth gravitational models publicly available for the study area. The main goal of this paper is to confirm the feasibility of combining geoid undulations from Global Navigation Satellite Systems, levelling field measurements and geostatistical techniques for use in high-accuracy engineering projects.
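The variogram fit described above was done in R (geoR/gstat); the same idea can be sketched in Python, where the nonlinear least-squares fit is solved iteratively. Everything below is a hypothetical illustration with synthetic observations and an exponential variogram model, not the authors' code.

    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.spatial.distance import pdist

    def exp_variogram(h, nugget, sill, rang):
        # Exponential semivariogram model.
        return nugget + sill * (1.0 - np.exp(-h / rang))

    # Hypothetical scattered observations (x, y in metres; z = undulation in m).
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 10_000, (200, 2))
    z = 1e-3 * xy[:, 0] + rng.normal(0, 0.05, 200)

    # Empirical semivariogram: bin half squared differences by separation.
    d = pdist(xy)
    g = 0.5 * pdist(z[:, None], metric="sqeuclidean")
    edges = np.linspace(0, d.max() / 2, 15)
    idx = np.digitize(d, edges)
    h, gamma = [], []
    for i in range(1, len(edges)):
        sel = idx == i
        if sel.any():
            h.append(d[sel].mean())
            gamma.append(g[sel].mean())

    params, _ = curve_fit(exp_variogram, np.array(h), np.array(gamma),
                          p0=[0.0, np.max(gamma), 2000.0])
    print("nugget, sill, range:", params)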
Application of recursive approaches to differential orbit correction of near Earth asteroids
NASA Astrophysics Data System (ADS)
Dmitriev, Vasily; Lupovka, Valery; Gritsevich, Maria
2016-10-01
Comparison of three approaches to the differential orbit correction of celestial bodies was performed: batch least squares fitting, Kalman filtering, and recursive least squares filtering. The first two techniques are well known and widely used (Montenbruck & Gill, 2000). Most attention is paid to the algorithm and the details of the program realization of the recursive least squares filter. The filter's algorithm was derived from the recursive least squares techniques that are widely used in data processing applications (Simon, 2006). Using a recursive least squares filter makes it possible to process a new set of observational data without reprocessing data that have been processed before. A specific feature of this approach is that the number of observations in a data set may be variable. This feature makes the recursive least squares filter more flexible than batch least squares (which processes the complete set of observations in each iteration) and Kalman filtering (which updates the state vector at each epoch with new measurements). The advantages of the proposed approach are demonstrated by processing real astrometric observations of near Earth asteroids. The case of 2008 TC3, which was discovered just before its impact with Earth, was studied. There are many closely spaced observations of 2008 TC3 on the interval between discovery and impact, which creates favorable conditions for the use of recursive approaches. Each of the approaches has very similar precision in the case of 2008 TC3; at the same time, the recursive least squares approach has much higher performance. This approach is thus more favorable for orbit fitting of a celestial body detected shortly before a collision or close approach to the Earth. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, Project no. 14-22-00197. References: O. Montenbruck and E. Gill, "Satellite Orbits: Models, Methods and Applications," Springer-Verlag, 2000. D. Simon, "Optimal State Estimation: Kalman, H Infinity, and Nonlinear Approaches," 1st edition, Hoboken, N.J.: Wiley-Interscience, 2006.
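The heart of any recursive least squares filter is a rank-one update of the estimate and its covariance as each new observation arrives, so earlier data never need reprocessing. A minimal Python sketch with a generic linear measurement model (not the orbit-determination equations themselves):

    import numpy as np

    def rls_update(x, P, H, z, r):
        # One recursive least squares step.
        # x: current estimate (n,), P: covariance (n, n),
        # H: measurement row (n,), z: scalar measurement, r: measurement variance.
        S = H @ P @ H + r              # innovation variance (scalar)
        K = (P @ H) / S                # gain
        x = x + K * (z - H @ x)        # state update
        P = P - np.outer(K, H @ P)     # covariance update
        return x, P

    # Hypothetical example: estimate slope and intercept from streaming data.
    true = np.array([2.0, -1.0])
    x, P = np.zeros(2), np.eye(2) * 1e3
    rng = np.random.default_rng(1)
    for t in np.linspace(0, 10, 50):
        H = np.array([t, 1.0])
        z = H @ true + rng.normal(0, 0.1)
        x, P = rls_update(x, P, H, z, 0.01)
    print("estimate:", x)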
Snow, M Elizabeth; Tweedie, Katherine; Pederson, Ann
2018-03-15
Recently, patient engagement has been identified as a promising strategy for supporting healthcare planning. However, the context and structure of the universalistic, "one-size-fits-all" approaches often used for patient engagement may not enable diverse patients to participate in decision-making about programs intended to meet their needs. Specifically, standard patient engagement approaches are gender-blind and might not facilitate the engagement of those marginalized by, for example, substance use, low income, experiences of violence, homelessness, and/or mental health challenges, which are highly gendered health and social experiences. The project's purpose was to develop a heuristic model to assist planners to engage patients who are not traditionally included in healthcare planning. Using a qualitative research approach, we reviewed the literature and conducted interviews with patients and healthcare planners regarding engaging marginalized populations in health services planning. From these inputs, we created a model and planning manual to assist healthcare planners to engage marginalized patients in health services planning, which we piloted in two clinical programs undergoing health services design. The findings from the pilots were used to refine the model. The analysis of the interviews and literature identified power and gender as barriers to participation, and generated suggestions to support diverse populations both to attend patient engagement events and to participate meaningfully. Engaging marginalized populations cannot be reduced to a single defined process, but instead needs to be understood as an iterative process of fitting engagement methods to a particular situation. Underlying this process are principles for meaningfully engaging marginalized people in healthcare planning. A one-size-fits-all approach to patient engagement is not appropriate given patients' diverse barriers to meaningful participation in healthcare planning. Instead, planners need a repertoire of skills and strategies to align the purpose of engagement with the capacities and needs of patient participants. Just as services need to meet diverse patients' needs, so too must patient engagement experiences.
A Symmetric Positive Definite Formulation for Monolithic Fluid Structure Interaction
2010-08-09
more likely to converge than simply iterating the partitioned approach to convergence in a simple Gauss-Seidel manner. Our approach allows the use of...conditions in a second step. These approaches can also be iterated within a given time step for increased stability, noting that in the limit if one... converges one obtains a monolithic (albeit expensive) approach. Other approaches construct strongly coupled systems and then solve them in one of several
Advances in iterative non-uniformity correction techniques for infrared scene projection
NASA Astrophysics Data System (ADS)
Danielson, Tom; Franks, Greg; LaVeigne, Joe; Prewarski, Marcus; Nehring, Brian
2015-05-01
Santa Barbara Infrared (SBIR) is continually developing improved methods for non-uniformity correction (NUC) of its Infrared Scene Projectors (IRSPs) as part of its comprehensive efforts to achieve the best possible projector performance. The most recent step forward, Advanced Iterative NUC (AI-NUC), improves upon previous NUC approaches in several ways. The key to NUC performance is achieving the most accurate possible input drive-to-radiance output mapping for each emitter pixel. This requires many highly-accurate radiance measurements of emitter output, as well as sophisticated manipulation of the resulting data set. AI-NUC expands the available radiance data set to include all measurements made of emitter output at any point. In addition, it allows the user to efficiently manage that data for use in the construction of a new NUC table that is generated from an improved fit of the emitter response curve. Not only does this improve the overall NUC by offering more statistics for interpolation than previous approaches, it also simplifies the removal of erroneous data from the set so that it does not propagate into the correction tables. AI-NUC is implemented by SBIR's IRWindows4 automated test software as part of its advanced turnkey IRSP product (the Calibration Radiometry System or CRS), which incorporates all necessary measurement, calibration and NUC table generation capabilities. By employing AI-NUC on the CRS, SBIR has demonstrated the best uniformity results on resistive emitter arrays to date.
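The details of AI-NUC are specific to SBIR's software, but the underlying idea — fit each emitter's drive-to-radiance response from pooled measurements, then invert the fitted curve onto common target radiances to build a correction table — can be sketched generically. All data and curve shapes below are hypothetical:

    import numpy as np

    rng = np.random.default_rng(2)
    drives = np.linspace(0.0, 1.0, 11)
    gain = rng.uniform(0.8, 1.2, 4)                      # 4 toy emitters
    meas = gain[:, None] * (drives + 0.3 * drives**2)    # measured radiance
    meas += rng.normal(0, 0.002, meas.shape)

    # Fit each emitter's response curve (radiance as a cubic in drive).
    coeffs = [np.polyfit(drives, m, 3) for m in meas]

    # NUC table: the drive each emitter needs to hit common target radiances.
    targets = np.linspace(0.05, 0.9, 8)
    dense = np.linspace(0, 1, 1001)
    table = np.array([np.interp(targets, np.polyval(c, dense), dense)
                      for c in coeffs])
    print(table.round(3))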
Parallel iterative methods for sparse linear and nonlinear equations
NASA Technical Reports Server (NTRS)
Saad, Youcef
1989-01-01
As three-dimensional models are gaining importance, iterative methods will become almost mandatory. Among these, preconditioned Krylov subspace methods have been viewed as the most efficient and reliable, when solving linear as well as nonlinear systems of equations. There have been several different approaches taken to adapt iterative methods for supercomputers. Some of these approaches are discussed, and the methods that deal more specifically with general unstructured sparse matrices, such as those arising from finite element methods, are emphasized.
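As a concrete flavor of the class of methods discussed, here is a minimal preconditioned conjugate gradient in Python with a simple Jacobi (diagonal) preconditioner; a production solver would use a sparse matrix format and a stronger preconditioner:

    import numpy as np

    def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
        # Preconditioned conjugate gradient for SPD A; M_inv applies the preconditioner.
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = M_inv(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # Test problem: 1D Laplacian with a Jacobi preconditioner.
    n = 100
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    d = np.diag(A)
    x = pcg(A, b, lambda r: r / d)
    print("residual:", np.linalg.norm(b - A @ x))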
DiaFit: The Development of a Smart App for Patients with Type 2 Diabetes and Obesity.
Modave, François; Bian, Jiang; Rosenberg, Eric; Mendoza, Tonatiuh; Liang, Zhan; Bhosale, Ravi; Maeztu, Carlos; Rodriguez, Camila; Cardel, Michelle I
2016-01-01
Optimal management of chronic diseases, such as type 2 diabetes (T2D) and obesity, requires patient-provider communication and proactive self-management from the patient. Mobile apps could be an effective strategy for improving patient-provider communication and provide resources for self-management to patients themselves. The objective of this paper is to describe the development of a mobile tool for patients with T2D and obesity that utilizes an integrative approach to facilitate patient-centered app development, with patient and physician interfaces. Our implementation strategy focused on the building of a multidisciplinary team to create a user-friendly and evidence-based app, to be used by patients in a home setting or at the point-of-care. We present the iterative design, development, and testing of DiaFit, an app designed to improve the self-management of T2D and obesity, using an adapted Agile approach to software implementation. The production team consisted of experts in mobile health, nutrition sciences, and obesity; software engineers; and clinicians. Additionally, the team included citizen scientists and clinicians who acted as the de facto software clients for DiaFit and therefore interacted with the production team throughout the entire app creation, from design to testing. DiaFit (version 1.0) is an open-source, inclusive iOS app that incorporates nutrition data, physical activity data, and medication and glucose values, as well as patient-reported outcomes. DiaFit supports the uploading of data from sensory devices via Bluetooth for physical activity (iOS step counts, FitBit, Apple watch) and glucose monitoring (iHealth glucose meter). The app provides summary statistics and graphics for step counts, dietary information, and glucose values that can be used by patients and their providers to make informed health decisions. The DiaFit iOS app was developed in Swift (version 2.2) with a Web back-end deployed on the Health Insurance Portability and Accountability Act compliant-ready Amazon Web Services cloud computing platform. DiaFit is publicly available on GitHub to the diabetes community at large, under the GNU General Public License agreement. Given the proliferation of health-related apps available to health consumers, it is essential to ensure that apps are evidence-based and user-oriented, with specific health conditions in mind. To this end, we have used a software development approach focusing on community and clinical engagement to create DiaFit, an app that assists patients with T2D and obesity to better manage their health through active communication with their providers and proactive self-management of their diseases.
Alaska/Yukon Geoid Improvement by a Data-Driven Stokes's Kernel Modification Approach
NASA Astrophysics Data System (ADS)
Li, Xiaopeng; Roman, Daniel R.
2015-04-01
Geoid modeling over Alaska (USA) and Yukon (Canada), being a trans-national issue, faces a great challenge primarily due to inhomogeneous surface gravity data (Saleh et al., 2013), dynamic geology (Freymueller et al., 2008), and a complex geological rheology. A previous study (Roman and Li, 2014) used updated satellite models (Bruinsma et al., 2013) and newly acquired aerogravity data from the GRAV-D project (Smith, 2007) to capture gravity field changes in the target areas, primarily at middle-to-long wavelengths. In CONUS, the geoid model was largely improved. However, the precision of the resulting geoid model in Alaska was still at the decimeter level: 19 cm at the 32 tide bench marks and 24 cm at the 202 GPS/leveling bench marks, for a total of 23.8 cm at all of these calibrated surface control points after the datum bias was removed. Conventional kernel modification methods in this area (Li and Wang, 2011) had limited effect on the precision of the geoid models. To compensate for the geoid misfits, a new Stokes's kernel modification method based on a data-driven technique is presented in this study. First, the method was tested on simulated data sets (Fig. 1), where the geoid errors were reduced by two orders of magnitude (Fig. 2). For the real data sets, some iteration steps are required to overcome the rank deficiency problem caused by the limited control data that are irregularly distributed in the target area. For instance, after 3 iterations, the standard deviation dropped by about 2.7 cm (Fig. 3). Modification at other critical degrees can further minimize the geoid model misfits caused either by the gravity error or by the remaining datum error in the control points.
Estimating pole/zero errors in GSN-IRIS/USGS network calibration metadata
Ringler, A.T.; Hutt, C.R.; Aster, R.; Bolton, H.; Gee, L.S.; Storm, T.
2012-01-01
Mapping the digital record of a seismograph into true ground motion requires the correction of the data by some description of the instrument's response. For the Global Seismographic Network (Butler et al., 2004), as well as many other networks, this instrument response is represented as a Laplace domain pole–zero model and published in the Standard for the Exchange of Earthquake Data (SEED) format. This Laplace representation assumes that the seismometer behaves as a linear system, with any abrupt changes described adequately via multiple time-invariant epochs. The SEED format allows for published instrument response errors as well, but these typically have not been estimated or provided to users. We present an iterative three-step method to estimate the instrument response parameters (poles and zeros) and their associated errors using random calibration signals. First, we solve a coarse nonlinear inverse problem using a least-squares grid search to yield a first approximation to the solution. This approach reduces the likelihood of poorly estimated parameters (a local-minimum solution) caused by noise in the calibration records and enhances algorithm convergence. Second, we iteratively solve a nonlinear parameter estimation problem to obtain the least-squares best-fit Laplace pole–zero–gain model. Third, by applying the central limit theorem, we estimate the errors in this pole–zero model by solving the inverse problem at each frequency in a two-thirds octave band centered at each best-fit pole–zero frequency. This procedure yields error estimates of the 99% confidence interval. We demonstrate the method by applying it to a number of recent Incorporated Research Institutions in Seismology/United States Geological Survey (IRIS/USGS) network calibrations (network code IU).
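The paper's three-step procedure (coarse grid search, nonlinear refinement, per-frequency error estimation) is more elaborate than can be shown here, but the central nonlinear least-squares step — fitting a pole-zero-gain model to an observed frequency response — can be sketched. The response form and data below are hypothetical, and the initial guess stands in for the output of the grid-search stage:

    import numpy as np
    from scipy.optimize import least_squares

    def response(w, gain, poles):
        # Toy Laplace-domain response: one complex-conjugate pole pair, no zeros.
        s = 1j * w
        p = poles[0] + 1j * poles[1]
        return gain / ((s - p) * (s - np.conj(p)))

    def residuals(theta, w, observed):
        model = response(w, theta[0], theta[1:])
        diff = model - observed
        return np.concatenate([diff.real, diff.imag])

    # Hypothetical calibration-derived response samples with 1% noise.
    w = np.logspace(-2, 1, 200)
    true = response(w, 5.0, np.array([-0.037, 0.037]))
    obs = true * (1 + np.random.default_rng(3).normal(0, 0.01, w.size))

    fit = least_squares(residuals, x0=[1.0, -0.1, 0.1], args=(w, obs))
    print("gain, Re(p), Im(p):", fit.x)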
CUDA GPU based full-Stokes finite difference modelling of glaciers
NASA Astrophysics Data System (ADS)
Brædstrup, C. F.; Egholm, D. L.
2012-04-01
Many have stressed the limitations of using the shallow shelf and shallow ice approximations when modelling ice streams or surging glaciers. Using a full-Stokes approach requires either large amounts of computer power or time and is therefore seldom an option for most glaciologists. Recent advances in graphics card (GPU) technology for high performance computing have proven extremely efficient in accelerating many large scale scientific computations. The general purpose GPU (GPGPU) technology is cheap, has a low power consumption and fits into a normal desktop computer. It could therefore provide a powerful tool for many glaciologists. Our full-Stokes ice sheet model implements a Red-Black Gauss-Seidel iterative linear solver to solve the full Stokes equations. This technique has proven very effective when applied to the Stokes equation in geodynamics problems, and should therefore also perform well in glaciological flow problems. The Gauss-Seidel iterator is known to be robust, but several other linear solvers converge much faster. To aid convergence, the solver uses a multigrid approach where values are interpolated and extrapolated between different grid resolutions to minimize the short wavelength errors efficiently. This reduces the iteration count by several orders of magnitude. The run-time is further reduced by using the GPGPU technology, where each card has up to 448 cores. Researchers utilizing the GPGPU technique in other areas have reported between 2 and 11 times speedup compared to multicore CPU implementations on similar problems. The goal of these initial investigations into the possible usage of GPGPU technology in glacial modelling is to apply the enhanced resolution of a full-Stokes solver to ice streams and surging glaciers. This is an area of growing interest because ice streams are the main drainage conduits for large ice sheets. It is therefore crucial to understand this streaming behavior and its impact up-ice.
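A red-black Gauss-Seidel sweep is easy to write down for a model problem; the two-color ordering is what makes the update data-parallel and hence GPU-friendly. A plain Python sketch for the 2D Poisson equation (a stand-in for the Stokes system; a real GPU version would update each color with one vectorized kernel):

    import numpy as np

    def red_black_gauss_seidel(u, f, h, sweeps=100):
        # Red-black Gauss-Seidel for -lap(u) = f on a unit square, zero boundary.
        # Points of one color depend only on the other color, so each
        # half-sweep is embarrassingly parallel.
        for _ in range(sweeps):
            for color in (0, 1):
                for i in range(1, u.shape[0] - 1):
                    for j in range(1, u.shape[1] - 1):
                        if (i + j) % 2 == color:
                            u[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] +
                                              u[i, j-1] + u[i, j+1] +
                                              h * h * f[i, j])
        return u

    n = 33
    u = np.zeros((n, n))              # boundary fixed at zero
    f = np.ones((n, n))               # uniform source term
    u = red_black_gauss_seidel(u, f, 1.0 / (n - 1))
    print("centre value:", u[n // 2, n // 2])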
Neelakantan, Anand; Ilankumaran, Mookkiah; Ray, Sougat
2017-10-01
A healthy habitable environment onboard warships is vital to operational fleet efficiency and a fit sea-warrior force. Unique man-machine-armament interface issues and consequent constraints on habitability necessitate a multi-disciplinary approach toward optimizing habitability standards. Study of the basic 'human factor', including crew awareness of what determines shipboard habitability, and its association with habitation specifications is an essential step in such an approach. The aim of this study was to assess crew awareness of shipboard habitability and the association between awareness and maintenance of optimal habitability as per specifications. A cross-sectional descriptive study was carried out among 552 naval personnel onboard warships in Mumbai. Data on crew awareness of habitability were collected using a standardized questionnaire and correlated with basic habitability requirement specifications. Data were analyzed using Microsoft Excel, Epi-info, and SPSS version 17. Awareness of basic habitability aspects was very good in 65.3% of the crew. Area-specific awareness was highest with respect to the living area (95.3%). Knowledge of waste management was among the lowest (65.2%) in the category of aspect-wise awareness. A statistically significant association was found between awareness levels and habitability standards (OR = 7.27). The new benchmarks set in the form of high crew awareness of basic shipboard habitability specifications, and its significant association with standards, need to be sustained. This entails re-iteration of healthy habitation essentials in training, and holds the key to a fit fighting force.
Zipf's law in city size from a resource utilization model.
Ghosh, Asim; Chatterjee, Arnab; Chakrabarti, Anindya S; Chakrabarti, Bikas K
2014-10-01
We study a resource utilization scenario characterized by intrinsic fitness. To describe the growth and organization of different cities, we consider a model for resource utilization where many restaurants compete, as in a game, to attract customers using an iterative learning process. Results for the case of restaurants with uniform fitness are reported. When fitness is uniformly distributed, it gives rise to a Zipf law for the number of customers. We perform an exact calculation for the utilization fraction for the case when choices are made independent of fitness. A variant of the model is also introduced where the fitness can be treated as an ability to stay in the business. When a restaurant loses customers, its fitness is replaced by a random fitness. The steady state fitness distribution is characterized by a power law, while the distribution of the number of customers still follows the Zipf law, implying the robustness of the model. Our model serves as a paradigm for the emergence of Zipf law in city size distribution.
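A toy simulation conveys the flavor of the model. The choice rule below — each agent picks a restaurant with probability proportional to fitness times yesterday's crowd — is an illustrative stand-in for the paper's iterative learning process, not its exact kinetics:

    import numpy as np

    rng = np.random.default_rng(4)
    R = N = 1000                        # restaurants and agents
    crowd = np.ones(R)                  # yesterday's customer counts (seed)
    fitness = rng.uniform(0, 1, R)      # intrinsic fitness, uniform as in the model

    for day in range(100):
        weights = fitness * crowd
        choices = rng.choice(R, size=N, p=weights / weights.sum())
        crowd = np.bincount(choices, minlength=R) + 1e-9

    # Under a Zipf law, log(count) vs log(rank) is close to a line of slope -1.
    ranked = np.sort(np.bincount(choices, minlength=R))[::-1]
    top = ranked[:100]
    slope = np.polyfit(np.log(np.arange(1, 101)), np.log(top + 1), 1)[0]
    print("approximate Zipf exponent:", -slope)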
A Model and Simple Iterative Algorithm for Redundancy Analysis.
ERIC Educational Resources Information Center
Fornell, Claes; And Others
1988-01-01
This paper shows that redundancy maximization with J. K. Johansson's extension can be accomplished via a simple iterative algorithm based on H. Wold's Partial Least Squares. The model and the iterative algorithm for the least squares approach to redundancy maximization are presented. (TJH)
NASA Astrophysics Data System (ADS)
Wang, Yi-Hong; Wu, Guo-Cheng; Baleanu, Dumitru
2013-10-01
The variational iteration method is newly applied to construct iterative schemes for various integral equations of fractional order. Some iterative schemes are proposed which make full use of the method and the predictor-corrector approach. The fractional Bagley-Torvik equation is then treated as a multi-order example, and the results show the efficiency of the variational iteration method in this new role.
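To illustrate how a variational iteration scheme is constructed (for a plain first-order ODE rather than the fractional Bagley-Torvik equation), take u' + u = 0 with u(0) = 1; the Lagrange multiplier is -1, so the correction functional u_{n+1}(t) = u_n(t) - \int_0^t [u_n'(s) + u_n(s)] ds reproduces the Taylor series of exp(-t). A sympy sketch:

    import sympy as sp

    t, s = sp.symbols("t s")
    u = sp.Integer(1)                   # u0(t) = 1, the initial guess
    # VIM correction functional for u' + u = 0 with Lagrange multiplier -1.
    for _ in range(5):
        integrand = sp.diff(u, t).subs(t, s) + u.subs(t, s)
        u = sp.expand(u - sp.integrate(integrand, (s, 0, t)))
    print(u)                            # partial sum of the series of exp(-t)
    print(sp.series(sp.exp(-t), t, 0, 6))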
Ali, S. J.; Kraus, R. G.; Fratanduono, D. E.; ...
2017-05-18
Here, we developed an iterative forward analysis (IFA) technique with the ability to use hydrocode simulations as a fitting function for analysis of dynamic compression experiments. The IFA method optimizes over parameterized quantities in the hydrocode simulations, breaking the degeneracy of contributions to the measured material response. Velocity profiles from synthetic data generated using a hydrocode simulation are analyzed as a first-order validation of the technique. We also analyze multiple magnetically driven ramp compression experiments on copper and compare with more conventional techniques. Excellent agreement is obtained in both cases.
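In spirit, the IFA wraps a forward simulation in an optimizer. Replacing the hydrocode with a cheap, entirely hypothetical stand-in forward model, the loop looks like this in Python:

    import numpy as np
    from scipy.optimize import least_squares

    def forward_model(params, t):
        # Stand-in for a hydrocode run returning a synthetic velocity profile.
        # In the IFA approach this function would launch a full simulation
        # with the parameterized quantities inserted.
        amplitude, rise_time = params
        return amplitude * (1.0 - np.exp(-t / rise_time))

    t = np.linspace(0, 50e-9, 200)                  # time base (s)
    measured = forward_model([450.0, 8e-9], t)      # hypothetical measured profile
    measured += np.random.default_rng(5).normal(0, 2.0, t.size)

    fit = least_squares(lambda p: forward_model(p, t) - measured,
                        x0=[300.0, 5e-9], diff_step=1e-3)
    print("fitted amplitude (m/s), rise time (s):", fit.x)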
NASA Astrophysics Data System (ADS)
Quan, Haiyang; Wu, Fan; Hou, Xi
2015-10-01
A new method for reconstructing rotationally asymmetric surface deviation with pixel-level spatial resolution is proposed. It is based on a basic iterative scheme and accelerates the Gauss-Seidel method by introducing an acceleration parameter. This modified successive over-relaxation (SOR) is effective for solving the rotationally asymmetric components with pixel-level spatial resolution, without the use of a fitting procedure. Compared to the Jacobi and Gauss-Seidel methods, the modified SOR method with an optimal relaxation factor converges much faster and saves computational cost and memory space without reducing accuracy. This has been confirmed by real experimental results.
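For reference, a generic SOR iteration showing where the relaxation (acceleration) parameter enters; with omega = 1 it reduces to Gauss-Seidel, and a well-chosen omega between 1 and 2 converges far faster on problems like this 1D Laplacian test:

    import numpy as np

    def sor(A, b, omega, tol=1e-10, max_iter=10_000):
        # Successive over-relaxation: Gauss-Seidel accelerated by omega.
        x = np.zeros_like(b)
        for _ in range(max_iter):
            x_old = x.copy()
            for i in range(len(b)):
                sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x_old[i+1:]
                x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
            if np.linalg.norm(x - x_old) < tol:
                break
        return x

    n = 50
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = sor(A, b, omega=1.9)
    print("residual:", np.linalg.norm(b - A @ x))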
NASA Technical Reports Server (NTRS)
Lennington, R. K.; Malek, H.
1978-01-01
A clustering method, CLASSY, was developed, which alternates maximum likelihood iteration with a procedure for splitting, combining, and eliminating the resulting statistics. The method maximizes the fit of a mixture of normal distributions to the observed first through fourth central moments of the data and produces an estimate of the proportions, means, and covariances in this mixture. The mathematical model which is the basis for CLASSY and the actual operation of the algorithm are described. Data comparing the performance of CLASSY and ISOCLS on simulated and actual LACIE data are presented.
NASA Technical Reports Server (NTRS)
Mukherjee, Rinku; Gopalarathnam, Ashok; Kim, Sung Wan
2003-01-01
An iterative decambering approach for the post-stall prediction of wings using known section data as inputs is presented. The method can currently be used for incompressible flow and can be extended to compressible subsonic flow using Mach number correction schemes. A detailed discussion of past work on this topic is presented first. Next, an overview of the decambering approach is presented and is illustrated by applying the approach to the prediction of the two-dimensional C(sub l) and C(sub m) curves for an airfoil. The implementation of the approach for iterative decambering of wing sections is then discussed. A novel feature of the current effort is the use of a multidimensional Newton iteration for taking into consideration the coupling between the different sections of the wing. The approach lends itself to implementation in a variety of finite-wing analysis methods such as lifting-line theory, discrete-vortex Weissinger's method, and vortex lattice codes. Results are presented for a rectangular wing for angles of attack from 0 to 25 deg. The results are compared for both increasing and decreasing directions of the angle of attack, and they show that a hysteresis loop can be predicted for post-stall angles of attack.
Baseline Estimation and Outlier Identification for Halocarbons
NASA Astrophysics Data System (ADS)
Wang, D.; Schuck, T.; Engel, A.; Gallman, F.
2017-12-01
The aim of this paper is to build a baseline model for halocarbons and to statistically identify outliers under specific conditions. Time series of regional CFC-11 and chloromethane measurements taken over the last 4 years at two locations are discussed: a monitoring station northwest of Frankfurt am Main (Germany) and the Mace Head station (Ireland). In addition to analyzing the time series of CFC-11 and chloromethane, a statistical approach to outlier identification is introduced in order to obtain a better estimate of the baseline. A second-order polynomial plus harmonics is fitted to the CFC-11 and chloromethane mixing ratio data. Measurements at a large distance from the fitted curve are regarded as outliers and flagged. The routine is applied iteratively, refitting without the flagged measurements, until no additional outliers are found. Both the model fitting and the proposed outlier identification method are implemented in the programming language Python. Over the period, CFC-11 shows a gradual downward trend, and there is a slight upward trend in the mixing ratios of chloromethane. The concentration of chloromethane also has a strong seasonal variation, mostly due to the seasonal cycle of OH. The use of this statistical method has a considerable effect on the results: it efficiently identifies a series of outliers according to the standard deviation requirements, and after removing the outliers the fitted curves and trend estimates are more reliable.
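The fit-flag-refit loop described here is straightforward to reproduce. A hypothetical Python sketch with a second-order polynomial plus annual and semi-annual harmonics and sigma-based outlier flagging (the paper's exact basis and thresholds may differ):

    import numpy as np

    def design_matrix(t):
        # Second-order polynomial plus annual and semi-annual harmonics (t in years).
        w = 2 * np.pi
        return np.column_stack([np.ones_like(t), t, t**2,
                                np.sin(w * t), np.cos(w * t),
                                np.sin(2 * w * t), np.cos(2 * w * t)])

    def iterative_baseline(t, y, nsigma=2.5, max_rounds=10):
        keep = np.ones(t.size, dtype=bool)
        for _ in range(max_rounds):
            X = design_matrix(t[keep])
            beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
            resid = y - design_matrix(t) @ beta
            new_keep = np.abs(resid) < nsigma * resid[keep].std()
            if np.array_equal(new_keep, keep):
                break                      # no additional outliers found
            keep = new_keep
        return beta, keep

    # Hypothetical 4-year mixing-ratio series with seasonal cycle and spikes.
    rng = np.random.default_rng(6)
    t = np.linspace(0, 4, 500)
    y = 550 - 2 * t + 10 * np.sin(2 * np.pi * t) + rng.normal(0, 1.0, t.size)
    y[rng.choice(t.size, 15)] += 25        # pollution-event outliers
    beta, keep = iterative_baseline(t, y)
    print("flagged outliers:", (~keep).sum())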
Seward, Kirsty; Wolfenden, Luke; Wiggers, John; Finch, Meghan; Wyse, Rebecca; Oldmeadow, Christopher; Presseau, Justin; Clinton-McHarg, Tara; Yoong, Sze Lin
2017-04-04
While there are a number of frameworks which focus on supporting the implementation of evidence-based approaches, few psychometrically valid measures exist to assess constructs within these frameworks. This study aimed to develop and psychometrically assess a scale measuring each domain of the Theoretical Domains Framework for use in assessing the implementation of dietary guidelines within a non-health-care setting (childcare services). A 75-item, 14-domain Theoretical Domains Framework Questionnaire (TDFQ) was developed and administered via telephone interview to 202 centre-based childcare service cooks who had a role in planning the service menu. Confirmatory factor analysis (CFA) was undertaken to assess the reliability, discriminant validity and goodness of fit of the 14-domain Theoretical Domains Framework measure. For the CFA, five iterative processes of adjustment were undertaken in which 14 items were removed, resulting in a final measure consisting of 14 domains and 61 items. For the final measure, the chi-square goodness-of-fit statistic was 3447.19; the Standardized Root Mean Square Residual (SRMR) was 0.070; the Root Mean Square Error of Approximation (RMSEA) was 0.072; and the Comparative Fit Index (CFI) had a value of 0.78. While only one of the three indices supported goodness of fit of the measurement model tested, the 14-domain model with 61 items showed good discriminant validity and internally consistent items. Future research should aim to assess the psychometric properties of the developed TDFQ in other community-based settings.
Automating Microbial Directed Evolution For Bioengineering Applications
NASA Astrophysics Data System (ADS)
Lee, A.; Demachkie, I. S.; Sardesh, N.; Arismendi, D.; Ouandji, C.; Wang, J.; Blaich, J.; Gentry, D.
2016-12-01
From a microbiology perspective, directed evolution is a technique that uses controlled environmental pressures to select for a desired phenotype. Directed evolution has the distinct advantage over rational design of not needing extensive knowledge of the genome or pathways associated with a microorganism to induce phenotypes. However, there are currently limitations to the applicability of this technique, including being time-consuming, error-prone, and dependent on existing assays that may lack selectivity for the given phenotype. The AADEC (Autonomous Adaptive Directed Evolution Chamber) system is a proof-of-concept instrument to automate and improve the technique such that directed evolution can be used more effectively as a general bioengineering tool. A series of tests using the automated system and comparable by-hand survival assay measurements have been carried out using UV-C radiation and Escherichia coli cultures in order to demonstrate the advantages of the AADEC versus traditional implementations of directed evolution such as random mutagenesis. AADEC uses UV-C exposure as both a source of environmental stress and mutagenesis, so in order to evaluate the UV-C tolerance obtained from the cultures, a manual UV-C exposure survival assay was developed alongside the device to compare the survival fractions at a fixed dosage. This survival assay involves exposing E. coli to UV-C radiation using a custom-designed exposure hood to control the flux and dose. Surviving cells are counted, then transferred to the next iteration, and so on for several iterations to calculate the survival fractions for each exposure iteration. This survival assay primarily serves as a baseline for the AADEC device, allowing quantification of the differences between the AADEC system and the manual approach. The primary comparison data are survival fractions; these are obtained by optical density and plate counts in the manual assay and by optical density growth curve fits pre- and post-exposure in the automated case. These data can then be compiled to calculate trends over the iterations to characterize increasing UV-C resistance of the E. coli strains. The observed trends from both sources are statistically indistinguishable through several iterations.
Kushniruk, Andre W; Borycki, Elizabeth M
2015-01-01
The development of more usable and effective healthcare information systems has become a critical issue. In the software industry, methodologies such as agile and iterative development processes have emerged, leading to more effective and usable systems. These approaches highlight focusing on user needs and promoting iterative and flexible development practices. Evaluation and testing of iterative agile development cycles is considered an important part of the agile methodology and of iterative processes for system design and re-design. However, the issue of how to effectively integrate usability testing methods into rapid and flexible agile design cycles has remained to be fully explored. In this paper we describe our application of an approach known as low-cost rapid usability testing as it has been applied within agile system development in healthcare. The advantages of the integrative approach are described, along with current methodological considerations.
NASA Astrophysics Data System (ADS)
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-01
A novel strategy combining an iterative cubic spline fitting (ICSF) baseline correction method with discriminant partial least squares (DPLS) qualitative analysis is employed to analyze the surface enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food, and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining ICSF baseline correction with principal component analysis (PCA) and with DPLS classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignment of unknown banned additives using the information contained in differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory DPLS classification methodology can potentially be used for distinguishing banned food additives in the field of food safety.
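Iterative spline baseline schemes generally alternate between fitting a smoothing spline and clipping the signal down to the fit, so that peaks stop pulling the baseline upward. The sketch below is a common variant of that idea, not necessarily the authors' exact ICSF algorithm; all data are synthetic:

    import numpy as np
    from scipy.interpolate import UnivariateSpline

    def icsf_baseline(x, y, smooth, rounds=20):
        # Fit a smoothing spline, clip the signal down to the fit, refit.
        work = y.copy()
        for _ in range(rounds):
            spline = UnivariateSpline(x, work, k=3, s=smooth)
            base = spline(x)
            work = np.minimum(work, base)   # suppress points above the baseline
        return base

    # Hypothetical Raman-like spectrum: broad background plus sharp peaks.
    x = np.linspace(0, 2000, 1000)
    rng = np.random.default_rng(7)
    background = 200 * np.exp(-x / 900)
    peaks = (80 * np.exp(-0.5 * ((x - 1350) / 8)**2) +
             50 * np.exp(-0.5 * ((x - 620) / 6)**2))
    y = background + peaks + rng.normal(0, 2, x.size)

    corrected = y - icsf_baseline(x, y, smooth=5e5)
    print("corrected signal range:", corrected.min().round(1), corrected.max().round(1))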
Iterative Strain-Gage Balance Calibration Data Analysis for Extended Independent Variable Sets
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert Manfred
2011-01-01
A new method was developed that makes it possible to use an extended set of independent calibration variables for an iterative analysis of wind tunnel strain-gage balance calibration data. The new method permits the application of the iterative analysis method whenever the total number of balance loads and other independent calibration variables is greater than the total number of measured strain-gage outputs. Iteration equations used by the iterative analysis method have the limitation that the number of independent and dependent variables must match. The new method circumvents this limitation. It simply adds a missing dependent variable to the original data set by using an additional independent variable also as an additional dependent variable. Then, the desired solution of the regression analysis problem can be obtained that fits each gage output as a function of both the original and additional independent calibration variables. The final regression coefficients can be converted to data reduction matrix coefficients because the missing dependent variables were added to the data set without changing the regression analysis result for each gage output. Therefore, the new method still supports the application of the two load iteration equation choices that the iterative method traditionally uses for the prediction of balance loads during a wind tunnel test. An example is discussed in the paper that illustrates the application of the new method to a realistic simulation of a temperature-dependent calibration data set of a six-component balance.
Reducing the latency of the Fractal Iterative Method to half an iteration
NASA Astrophysics Data System (ADS)
Béchet, Clémentine; Tallon, Michel
2013-12-01
The fractal iterative method for atmospheric tomography (FRiM-3D) was introduced to solve wavefront reconstruction at the dimensions of an ELT with a low computational cost. Previous studies reported that only 3 iterations of the algorithm are required to provide the best adaptive optics (AO) performance. Nevertheless, any iterative method in adaptive optics suffers from the intrinsic latency induced by the fact that one iteration can start only once the previous one is completed. Iterations hardly match the low-latency requirement of the AO real-time computer. We present here a new approach that avoids iterations in the computation of the commands with FRiM-3D, thus allowing a low-latency AO response even at the scale of the European ELT (E-ELT). The method highlights the importance of the "warm-start" strategy in adaptive optics. To our knowledge, this particular way to use the "warm-start" has not been reported before. Furthermore, by removing the requirement of iterating to compute the commands, the computational cost of the reconstruction with FRiM-3D can be simplified and at least reduced to half the computational cost of a classical iteration. Thanks to simulations of both single-conjugate and multi-conjugate AO for the E-ELT, with FRiM-3D on the ESO Octopus simulator, we demonstrate the benefit of this approach. We finally enhance the robustness of this new implementation with respect to increasing measurement noise, wind speed and even modeling errors.
EPA announced the availability of the final report, An Iterative Approach for Identifying the Causes of Reduced Benthic Macroinvertebrate Diversity in the Willimantic River, Connecticut. This study demonstrates that a screening assessment can help to focus sampling for ...
Kister, Alexander
2015-01-01
We present an alternative approach to protein 3D folding prediction based on determination of rules that specify distribution of “favorable” residues, that are mainly responsible for a given fold formation, and “unfavorable” residues, that are incompatible with that fold, in polypeptide sequences. The process of determining favorable and unfavorable residues is iterative. The starting assumptions are based on the general principles of protein structure formation as well as structural features peculiar to a protein fold under investigation. The initial assumptions are tested one-by-one for a set of all known proteins with a given structure. The assumption is accepted as a “rule of amino acid distribution” for the protein fold if it holds true for all, or near all, structures. If the assumption is not accepted as a rule, it can be modified to better fit the data and then tested again in the next step of the iterative search algorithm, or rejected. We determined the set of amino acid distribution rules for a large group of beta sandwich-like proteins characterized by a specific arrangement of strands in two beta sheets. It was shown that this set of rules is highly sensitive (~90%) and very specific (~99%) for identifying sequences of proteins with specified beta sandwich fold structure. The advantage of the proposed approach is that it does not require that query proteins have a high degree of homology to proteins with known structure. So long as the query protein satisfies residue distribution rules, it can be confidently assigned to its respective protein fold. Another advantage of our approach is that it allows for a better understanding of which residues play an essential role in protein fold formation. It may, therefore, facilitate rational protein engineering design. PMID:25625198
NASA Astrophysics Data System (ADS)
Fakhri, G. El; Kijewski, M. F.; Moore, S. C.
2001-06-01
Estimates of SPECT activity within certain deep brain structures could be useful for clinical tasks such as early prediction of Alzheimer's disease with Tc-99m or Parkinson's disease with I-123; however, such estimates are biased by poor spatial resolution and inaccurate scatter and attenuation corrections. We compared an analytical approach (AA) to more accurate quantitation with a slower iterative approach (IA). Monte Carlo simulated projections of 12 normal and 12 pathologic Tc-99m perfusion studies, as well as 12 normal and 12 pathologic I-123 neurotransmission studies, were generated using a digital brain phantom and corrected for scatter by a multispectral fitting procedure. The AA included attenuation correction by a modified Metz-Fan algorithm and activity estimation by a technique that incorporated Metz filtering to compensate for variable collimator response (VCR); the IA modeled attenuation and VCR in the projector/backprojector of an ordered subsets-expectation maximization (OSEM) algorithm. Bias and standard deviation over the 12 normal and 12 pathologic patients were calculated with respect to the reference values in the corpus callosum, caudate nucleus, and putamen. The IA and AA yielded similar quantitation results in both Tc-99m and I-123 studies in all brain structures considered, in both normal and pathologic patients. The bias with respect to the reference activity distributions was less than 7% for Tc-99m studies, but greater than 30% for I-123 studies, due to the partial volume effect in the striata. Our results were validated using I-123 physical acquisitions of an anthropomorphic brain phantom. The AA yielded quantitation accuracy comparable to that obtained with the IA, while requiring much less processing time. However, in most conditions, the IA yielded lower noise for the same bias than did the AA.
Jaddi, Najmeh Sadat; Abdullah, Salwani; Abdul Malek, Marlinda
2017-01-01
Artificial neural networks (ANNs) have been employed to solve a broad variety of tasks. The selection of an ANN model with appropriate weights is important in achieving accurate results. This paper presents an optimization strategy for ANN model selection based on the cuckoo search (CS) algorithm, which is rooted in the obligate brood parasitic actions of some cuckoo species. In order to enhance the convergence ability of basic CS, some modifications are proposed. The fraction Pa of the n nests replaced by new nests is a fixed parameter in basic CS. As the selection of Pa is a challenging issue and has a direct effect on exploration and therefore on convergence ability, in this work the Pa is set to a maximum value at initialization to achieve more exploration in early iterations and it is decreased during the search to achieve more exploitation in later iterations until it reaches the minimum value in the final iteration. In addition, a novel master-leader-slave multi-population strategy is used where the slaves employ the best fitness function among all slaves, which is selected by the leader under a certain condition. This fitness function is used for subsequent Lévy flights. In each iteration a copy of the best solution of each slave is migrated to the master and then the best solution is found by the master. The method is tested on benchmark classification and time series prediction problems and the statistical analysis proves the ability of the method. This method is also applied to a real-world water quality prediction problem with promising results. PMID:28125609
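A minimal cuckoo search showing the linearly decreasing abandonment fraction Pa (the paper's master-leader-slave multi-population strategy is omitted for brevity); Lévy steps use Mantegna's algorithm. This is a hedged sketch of the general algorithm, not the authors' implementation:

    import numpy as np
    from math import gamma, sin, pi

    def levy_step(size, rng, beta=1.5):
        # Mantegna's algorithm for Levy-stable steps used in cuckoo search.
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
        u = rng.normal(0, sigma, size)
        v = rng.normal(0, 1, size)
        return u / np.abs(v)**(1 / beta)

    def cuckoo_search(f, dim, n=25, iters=200, pa_max=0.5, pa_min=0.05, seed=8):
        rng = np.random.default_rng(seed)
        nests = rng.uniform(-5, 5, (n, dim))
        fit = np.apply_along_axis(f, 1, nests)
        for k in range(iters):
            # Pa decreases over iterations: more exploration early, more
            # exploitation late, as proposed in the paper.
            pa = pa_max - (pa_max - pa_min) * k / (iters - 1)
            best = nests[fit.argmin()]
            new = nests + 0.01 * levy_step((n, dim), rng) * (nests - best)
            new_fit = np.apply_along_axis(f, 1, new)
            better = new_fit < fit
            nests[better], fit[better] = new[better], new_fit[better]
            # Abandon a fraction pa of the worst nests; build new random ones.
            n_drop = max(1, int(pa * n))
            worst = np.argsort(fit)[-n_drop:]
            nests[worst] = rng.uniform(-5, 5, (n_drop, dim))
            fit[worst] = np.apply_along_axis(f, 1, nests[worst])
        return nests[fit.argmin()], fit.min()

    x, v = cuckoo_search(lambda x: np.sum(x**2), dim=5)
    print("best value:", v)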
NASA Astrophysics Data System (ADS)
Eck, Brendan; Fahmi, Rachid; Brown, Kevin M.; Raihani, Nilgoun; Wilson, David L.
2014-03-01
Model observers were created and compared to human observers for the detection of low contrast targets in computed tomography (CT) images reconstructed with an advanced, knowledge-based, iterative image reconstruction method for low x-ray dose imaging. A 5-channel Laguerre-Gauss Hotelling Observer (CHO) was used with internal noise added to the decision variable (DV) and/or channel outputs (CO). Models were defined by parameters: (k1) DV noise with standard deviation (std) proportional to the DV std; (k2) DV noise with constant std; (k3) CO noise with constant std across channels; and (k4) CO noise in each channel with std proportional to the CO variance. Four-alternative forced choice (4AFC) human observer studies were performed on sub-images extracted from phantom images with and without a "pin" target. Model parameters were estimated by maximum likelihood comparison to human probability correct (PC) data. PC in human and all model observers increased with dose, contrast, and size, and was much higher for advanced iterative reconstruction (IMR) than for filtered back projection (FBP). Detection in IMR was better than FBP at 1/3 dose, suggesting significant dose savings. Model(k1,k2,k3,k4) gave the best overall fit to humans across the independent variables (dose, size, contrast, and reconstruction) at a fixed display window. However, Model(k1) performed better when model complexity was taken into account using the Akaike information criterion. Model(k1) fit the extraordinary detectability difference between IMR and FBP, despite the different noise quality. It is anticipated that the model observer will predict results from iterative reconstruction methods having similar noise characteristics, enabling rapid comparison of methods.
Li, Haichen; Yaron, David J
2016-11-08
A least-squares commutator in the iterative subspace (LCIIS) approach is explored for accelerating self-consistent field (SCF) calculations. LCIIS is similar to direct inversion of the iterative subspace (DIIS) methods in that the next iterate of the density matrix is obtained as a linear combination of past iterates. However, whereas DIIS methods find the linear combination by minimizing a sum of error vectors, LCIIS minimizes the Frobenius norm of the commutator between the density matrix and the Fock matrix. This minimization leads to a quartic problem that can be solved iteratively through a constrained Newton's method. The relationship between LCIIS and DIIS is discussed. Numerical experiments suggest that LCIIS leads to faster convergence than other SCF convergence accelerating methods in a statistically significant sense, and in a number of cases LCIIS leads to stable SCF solutions that are not found by other methods. The computational cost involved in solving the quartic minimization problem is small compared to the typical cost of SCF iterations and the approach is easily integrated into existing codes. LCIIS can therefore serve as a powerful addition to SCF convergence accelerating methods in computational quantum chemistry packages.
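The objective is easy to state: given stored Fock/density iterates F_i and D_i, choose coefficients c summing to one that minimize ||[F(c), D(c)]||_F with F(c) = Σ c_i F_i and D(c) = Σ c_i D_i, which is quartic in c. A hypothetical sketch that evaluates this objective and hands it to a generic constrained optimizer (the paper solves it with a constrained Newton method; an orthonormal basis is assumed so the overlap matrix drops out):

    import numpy as np
    from scipy.optimize import minimize

    def lciis_coefficients(focks, densities):
        # Minimize || F(c) D(c) - D(c) F(c) ||_F^2 subject to sum(c) = 1.
        m = len(focks)

        def objective(c):
            F = sum(ci * Fi for ci, Fi in zip(c, focks))
            D = sum(ci * Di for ci, Di in zip(c, densities))
            comm = F @ D - D @ F
            return np.linalg.norm(comm, "fro")**2   # quartic in c

        cons = {"type": "eq", "fun": lambda c: c.sum() - 1.0}
        res = minimize(objective, np.full(m, 1.0 / m), constraints=[cons])
        return res.x

    # Hypothetical 4x4 history of three symmetric Fock/density iterates.
    rng = np.random.default_rng(9)
    focks = [0.5 * (M + M.T) for M in rng.standard_normal((3, 4, 4))]
    densities = [0.5 * (M + M.T) for M in rng.standard_normal((3, 4, 4))]
    print("coefficients:", lciis_coefficients(focks, densities))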
NASA Astrophysics Data System (ADS)
Zhong, XiaoXu; Liao, ShiJun
2018-01-01
Analytic approximations of the Von Kármán's plate equations in integral form for a circular plate under external uniform pressure to arbitrary magnitude are successfully obtained by means of the homotopy analysis method (HAM), an analytic approximation technique for highly nonlinear problems. Two HAM-based approaches are proposed for either a given external uniform pressure Q or a given central deflection, respectively. Both of them are valid for uniform pressure to arbitrary magnitude by choosing proper values of the so-called convergence-control parameters c1 and c2 in the frame of the HAM. Besides, it is found that the HAM-based iteration approaches generally converge much faster than the interpolation iterative method. Furthermore, we prove that the interpolation iterative method is a special case of the first-order HAM iteration approach for a given external uniform pressure Q when c1 = -θ and c2 = -1, where θ denotes the interpolation iterative parameter. Therefore, according to the convergence theorem of Zheng and Zhou about the interpolation iterative method, the HAM-based approaches are valid for uniform pressure to arbitrary magnitude at least in the special case c1 = -θ and c2 = -1. In addition, we prove that the HAM approach for the Von Kármán's plate equations in differential form is just a special case of the HAM for the Von Kármán's plate equations in integral form mentioned in this paper. All of these illustrate the validity and great potential of the HAM for highly nonlinear problems, and its superiority over perturbation techniques.
A novel iterative scheme and its application to differential equations.
Khan, Yasir; Naeem, F; Šmarda, Zdeněk
2014-01-01
The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier in the former and repeated calculations in each iteration of the latter. Several examples are given to verify the reliability and efficiency of the method.
Network news: prime time for systems biology of the plant circadian clock.
McClung, C Robertson; Gutiérrez, Rodrigo A
2010-12-01
Whole-transcriptome analyses have established that the plant circadian clock regulates virtually every plant biological process and most prominently hormonal and stress response pathways. Systems biology efforts have successfully modeled the plant central clock machinery and an iterative process of model refinement and experimental validation has contributed significantly to the current view of the central clock machinery. The challenge now is to connect this central clock to the output pathways for understanding how the plant circadian clock contributes to plant growth and fitness in a changing environment. Undoubtedly, systems approaches will be needed to integrate and model the vastly increased volume of experimental data in order to extract meaningful biological information. Thus, we have entered an era of systems modeling, experimental testing, and refinement. This approach, coupled with advances from the genetic and biochemical analyses of clock function, is accelerating our progress towards a comprehensive understanding of the plant circadian clock network.
NASA Astrophysics Data System (ADS)
Radac, Mircea-Bogdan; Precup, Radu-Emil; Roman, Raul-Cristian
2017-04-01
This paper proposes the combination of two model-free controller tuning techniques, namely linear virtual reference feedback tuning (VRFT) and nonlinear state-feedback Q-learning, referred to as a new mixed VRFT-Q learning approach. VRFT is first used to find a stabilising feedback controller using input-output experimental data from the process in a model reference tracking setting. Reinforcement Q-learning is next applied in the same setting using input-state experimental data collected under perturbed VRFT to ensure good exploration. The Q-learning controller, learned with a batch fitted Q iteration algorithm, uses two neural networks, one for the Q-function estimator and one for the controller. The VRFT-Q learning approach is validated on position control of a two-degrees-of-motion, open-loop stable, multi-input multi-output (MIMO) aerodynamic system (AS). Extensive simulations for the two independent control channels of the MIMO AS show that the Q-learning controllers clearly improve performance over the VRFT controllers.
Iterative Authoring Using Story Generation Feedback: Debugging or Co-creation?
NASA Astrophysics Data System (ADS)
Swartjes, Ivo; Theune, Mariët
We explore the role that story generation feedback may play within the creative process of interactive story authoring. While such feedback is often used as 'debugging' information, we explore here a 'co-creation' view, in which the outcome of the story generator influences authorial intent. We illustrate an iterative authoring approach in which each iteration consists of idea generation, implementation and simulation. We find that the tension between authorial intent and the partially uncontrollable story generation outcome may be relieved by taking such a co-creation approach.
Rescheduling with iterative repair
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael
1992-01-01
This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, and produce modified schedules quickly. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. These experiments were performed within the domain of Space Shuttle ground processing.
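The core repair loop is simple to sketch. The following toy version is our own minimal reading of constraint-based iterative repair, not the authors' system; it keeps perturbation small by moving one conflicting task a single step at a time.

```python
# Sketch of constraint-based iterative repair: start from a flawed schedule and
# repeatedly move a task involved in a violated constraint, keeping moves small
# to limit perturbation of the original schedule. Details are illustrative only.
import random

def iterative_repair(schedule, constraints, max_iters=1000, step=1):
    # schedule: dict task -> start time; constraints: callables returning the
    # set of offending tasks (empty set means the constraint is satisfied)
    for _ in range(max_iters):
        conflicts = [t for c in constraints for t in c(schedule)]
        if not conflicts:
            return schedule  # conflict-free
        task = random.choice(conflicts)
        schedule[task] += random.choice([-step, step])  # small local repair
    return schedule  # best effort within the time bound

# e.g. a precedence constraint: task "b" must start after task "a" ends
dur = {"a": 3, "b": 2}
prec = lambda s: {"b"} if s["b"] < s["a"] + dur["a"] else set()
print(iterative_repair({"a": 0, "b": 1}, [prec]))
```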
Superpixel Based Factor Analysis and Target Transformation Method for Martian Minerals Detection
NASA Astrophysics Data System (ADS)
Wu, X.; Zhang, X.; Lin, H.
2018-04-01
Factor analysis and target transformation (FATT) is an effective method for testing for the presence of particular minerals on the Martian surface. It has been used with both thermal infrared (Thermal Emission Spectrometer, TES) and near-infrared (Compact Reconnaissance Imaging Spectrometer for Mars, CRISM) hyperspectral data. FATT derives a set of orthogonal eigenvectors from a mixed system and typically selects the first 10 eigenvectors for a least-squares fit to the library mineral spectra. However, minerals present in only a few pixels will be missed because their spectral features are weak compared with the full image signatures. Here, we propose a superpixel-based FATT method to detect mineral distributions on Mars. The simple linear iterative clustering (SLIC) algorithm is used to partition the CRISM image into multiple spectrally homogeneous connected regions, enhancing weak signatures by increasing their proportion in a mixed system. A least-squares fit is used in the target transformation and applied to each region iteratively. Finally, the distribution of specific minerals in the image is obtained, where a fitting residual below a threshold indicates presence and otherwise absence. We validate our method by identifying carbonates in a well-analysed CRISM image of Nili Fossae on Mars. Our experimental results indicate that the proposed method works well on both simulated and real data sets.
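A minimal sketch of the superpixel-plus-target-transformation idea, using scikit-image's SLIC on a synthetic cube. Segment count, the 10-eigenvector truncation carried over from the abstract, and the synthetic data are assumptions; real CRISM processing involves far more preprocessing.

```python
# Segment a toy hyperspectral cube with SLIC, then in each superpixel derive
# eigenvectors by SVD and least-squares fit a library spectrum; a small
# residual suggests the target mineral is present in that region.
import numpy as np
from skimage.segmentation import slic  # requires a recent scikit-image

rng = np.random.default_rng(0)
cube = rng.random((60, 60, 40))            # (rows, cols, bands) toy "CRISM" cube
target = rng.random(40)                    # library mineral spectrum
labels = slic(cube, n_segments=50, compactness=0.1, channel_axis=-1)

for lab in np.unique(labels):
    spectra = cube[labels == lab]          # (n_pixels, bands)
    # first 10 eigenvectors of the superpixel's spectra (cf. FATT)
    _, _, vt = np.linalg.svd(spectra, full_matrices=False)
    basis = vt[:10].T                      # (bands, 10)
    coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
    residual = np.linalg.norm(basis @ coeffs - target)
    # presence if residual < threshold (threshold choice is application-specific)
```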
Iterative deep convolutional encoder-decoder network for medical image segmentation.
Jung Uk Kim; Hak Gu Kim; Yong Man Ro
2017-07-01
In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach and an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, K.
1994-12-31
By means of the author's earlier theory of antieigenvalues and antieigenvectors, a new computational approach to iterative methods is presented. This enables an explicit trigonometric understanding of iterative convergence and provides new insights into the sharpness of error bounds. Direct applications to Gradient descent, Conjugate gradient, GCR(k), Orthomin, CGN, GMRES, CGS, and other matrix iterative schemes will be given.
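For orientation, Gustafson's first antieigenvalue of a symmetric positive definite matrix has the closed form cos φ(A) = 2√(λ_min λ_max)/(λ_min + λ_max), and sin φ(A) = (λ_max - λ_min)/(λ_max + λ_min) is the classical Kantorovich contraction factor for optimal gradient descent. The small sketch below is our illustration of that trigonometric reading, not the author's computational approach.

```python
# Gustafson's first antieigenvalue angle for an SPD matrix, and the matching
# sin(phi), which equals the optimal steepest-descent error contraction factor.
import numpy as np

def antieigen_angle(A):
    lmin, lmax = np.linalg.eigvalsh(A)[[0, -1]]
    cos_phi = 2.0 * np.sqrt(lmin * lmax) / (lmin + lmax)
    sin_phi = (lmax - lmin) / (lmin + lmax)
    return cos_phi, sin_phi

A = np.diag([1.0, 4.0, 9.0])
cos_phi, sin_phi = antieigen_angle(A)
print(cos_phi, sin_phi)  # 0.6, 0.8 for condition number 9 (cos^2 + sin^2 = 1)
```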
NASA Astrophysics Data System (ADS)
Chen, Hao; Lv, Wen; Zhang, Tongtong
2018-05-01
We study preconditioned iterative methods for the linear system arising in the numerical discretization of a two-dimensional space-fractional diffusion equation. Our approach is based on a formulation of the discrete problem that is shown to be the sum of two Kronecker products. By making use of an alternating Kronecker product splitting iteration technique we establish a class of fixed-point iteration methods. Theoretical analysis shows that the new method converges to the unique solution of the linear system. Moreover, the optimal choice of the involved iteration parameters and the corresponding asymptotic convergence rate are computed exactly when the eigenvalues of the system matrix are all real. The basic iteration is accelerated by a Krylov subspace method like GMRES. The corresponding preconditioner has a Kronecker product structure and requires at each iteration the solution of a set of discrete one-dimensional fractional diffusion equations. We use structure preserving approximations to the discrete one-dimensional fractional diffusion operators in the action of the preconditioning matrix. Numerical examples are presented to illustrate the effectiveness of this approach.
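The alternating-splitting idea can be sketched generically: split A = H + K and alternate shifted solves, as in HSS/ADI-type schemes. The code below is a minimal stand-in under that reading, on a small Kronecker-sum test matrix; the paper's actual splitting, parameter choice, and structure-preserving one-dimensional solves are not reproduced.

```python
# Generic alternating splitting iteration for A = H + K (HSS/ADI-style):
#   (alpha*I + H) x_half = (alpha*I - K) x + b
#   (alpha*I + K) x_new  = (alpha*I - H) x_half + b
import numpy as np

def alternating_splitting(H, K, b, alpha=1.0, tol=1e-10, max_iters=500):
    n = H.shape[0]
    I = np.eye(n)
    A = H + K
    x = np.zeros(n)
    for _ in range(max_iters):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - K) @ x + b)
        x = np.linalg.solve(alpha * I + K, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - b) <= tol * np.linalg.norm(b):
            break
    return x

# Kronecker-sum test problem: A = I (x) T + T (x) I with T tridiagonal SPD
T = 2 * np.eye(8) - np.eye(8, k=1) - np.eye(8, k=-1)
H, K = np.kron(np.eye(8), T), np.kron(T, np.eye(8))
b = np.ones(64)
x = alternating_splitting(H, K, b)
print(np.linalg.norm((H + K) @ x - b))  # small residual after convergence
```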
Improved interpretation of satellite altimeter data using genetic algorithms
NASA Technical Reports Server (NTRS)
Messa, Kenneth; Lybanon, Matthew
1992-01-01
Genetic algorithms (GAs) are optimization techniques based on the mechanics of evolution and natural selection. They take advantage of the power of cumulative selection, in which successive incremental improvements in a solution structure become the basis for continued development. A GA is an iterative procedure that maintains a 'population' of 'organisms' (candidate solutions). Through successive 'generations' (iterations) the population as a whole improves, in simulation of Darwin's 'survival of the fittest'. GAs have been shown to be successful where noise significantly reduces the ability of other search techniques to work effectively. Satellite altimetry provides useful information about oceanographic phenomena. It provides rapid global coverage of the oceans and is not as severely hampered by cloud cover as infrared imagery. Despite these and other benefits, several factors lead to significant difficulty in interpretation. The GA approach to the improved interpretation of satellite data involves the representation of the ocean surface model as a string of parameters or coefficients from the model. The GA searches, in parallel, a population of such representations (organisms) to obtain the individual that is best suited to 'survive', that is, the fittest as measured with respect to some 'fitness' function. The fittest organism is the one that best represents the ocean surface model with respect to the altimeter data.
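A minimal GA of the kind described, with organisms as parameter vectors and fitness as agreement with synthetic observations. Population size, crossover, and mutation rate are arbitrary choices, and the 'surface model' below is a toy stand-in for an ocean surface model.

```python
# Toy genetic algorithm: evolve parameter vectors toward the best fit of
# noisy synthetic observations; higher fitness = lower mean squared error.
import numpy as np

rng = np.random.default_rng(1)
true_params = np.array([2.0, -1.0, 0.5])
x = np.linspace(0, 1, 200)
def model(p): return p[0] + p[1] * x + p[2] * np.sin(6 * x)
data = model(true_params) + 0.05 * rng.standard_normal(x.size)

def fitness(p): return -np.mean((model(p) - data) ** 2)  # higher is fitter

pop = rng.uniform(-3, 3, size=(40, 3))
for gen in range(100):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-20:]]          # survival of the fittest
    kids = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(20, size=2)]
        mask = rng.random(3) < 0.5                   # uniform crossover
        child = np.where(mask, a, b) + 0.05 * rng.standard_normal(3)  # mutation
        kids.append(child)
    pop = np.array(kids)
print(max(pop, key=fitness))  # approaches true_params
```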
Iterative simulated quenching for designing irregular-spot-array generators.
Gillet, J N; Sheng, Y
2000-07-10
We propose a novel, to our knowledge, algorithm of iterative simulated quenching with temperature rescaling for designing diffractive optical elements, based on an analogy between simulated annealing and statistical thermodynamics. The temperature is iteratively rescaled at the end of each quenching process according to ensemble statistics to bring the system back from a frozen imperfect state with a local minimum of energy to a dynamic state in a Boltzmann heat bath in thermal equilibrium at the rescaled temperature. The new algorithm achieves a much lower cost-function value and reconstruction error and a higher diffraction efficiency than conventional simulated annealing with a fast exponential cooling schedule, and is easy to program. The algorithm is used to design binary-phase generators of large irregular spot arrays. The diffractive phase elements have trapezoidal apertures of varying heights, which fit ideal arbitrary-shaped apertures better than do trapezoidal apertures of fixed heights.
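The quench-then-rescale loop can be sketched on a toy cost function. In the sketch below the rescaled temperature is derived from the energy fluctuations recorded during the previous quench; that particular rescaling rule, and the Ising-like cost, are our illustrative assumptions rather than the paper's prescription.

```python
# Iterative simulated quenching sketch: fast exponential cooling, then restart
# with a temperature rescaled from the previous quench's energy statistics.
import numpy as np

rng = np.random.default_rng(2)
energy = lambda s: np.sum(s * np.roll(s, 1))  # toy Ising-like cost on a ring

def quench(s, T0, cooling=0.90, sweeps=200):
    T, energies = T0, []
    for _ in range(sweeps):
        i = rng.integers(s.size)
        flipped = np.concatenate([s[:i], [-s[i]], s[i + 1:]])
        dE = energy(flipped) - energy(s)
        if dE < 0 or rng.random() < np.exp(-dE / T):  # Metropolis acceptance
            s[i] = -s[i]
        energies.append(energy(s))
        T *= cooling                       # fast exponential cooling
    return s, np.std(energies)

s = rng.choice([-1, 1], size=64)
T = 5.0
for cycle in range(10):                    # iterative quenching
    s, fluct = quench(s, T)
    T = max(2.0 * fluct / s.size, 0.1)     # rescale T from ensemble statistics
print(energy(s))                           # approaches the minimum, -64
```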
Precise and fast spatial-frequency analysis using the iterative local Fourier transform.
Lee, Sukmock; Choi, Heejoo; Kim, Dae Wook
2016-09-19
The use of the discrete Fourier transform has decreased since the introduction of the fast Fourier transform (fFT), which is a numerically efficient computing process. This paper presents the iterative local Fourier transform (ilFT), a set of new processing algorithms that iteratively apply the discrete Fourier transform within a local and optimal frequency domain. The new technique achieves 210 times higher frequency resolution than the fFT within a comparable computation time. The method's superb computing efficiency, high resolution, spectrum zoom-in capability, and overall performance are evaluated and compared to other advanced high-resolution Fourier transform techniques, such as the fFT combined with several fitting methods. The effectiveness of the ilFT is demonstrated through the data analysis of a set of Talbot self-images (1280 × 1024 pixels) obtained with an experimental setup using grating in a diverging beam produced by a coherent point source.
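The zoom-in mechanism at the heart of the ilFT can be illustrated with a direct evaluation of the discrete-time Fourier transform on a progressively narrower grid around the current spectral peak; grid sizes and the factor-of-ten shrink per iteration are assumptions.

```python
# Iterative local Fourier zoom: start from the coarse FFT peak, then evaluate
# the DTFT directly on a narrower frequency grid around it each iteration,
# refining the estimate far beyond the FFT bin spacing.
import numpy as np

fs, n = 1000.0, 4096
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 123.4567 * t)

spec = np.abs(np.fft.rfft(x))
f = np.fft.rfftfreq(n, 1 / fs)[np.argmax(spec)]
half_width = fs / n                        # one FFT bin to either side

for _ in range(8):                         # iterative local refinement
    grid = np.linspace(f - half_width, f + half_width, 41)
    dtft = np.exp(-2j * np.pi * np.outer(grid, t)) @ x   # local discrete FT
    f = grid[np.argmax(np.abs(dtft))]
    half_width /= 10                       # zoom in each iteration
print(f)                                   # ~123.4567 Hz, well below bin spacing
```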
ERIC Educational Resources Information Center
Pill, Shane
2012-01-01
"Game sense" is a sport-specific iteration of the teaching games for understanding model, designed to balance physical development of motor skill and fitness with the development of game understanding. Game sense can foster a shared vision for sport learning that bridges school physical education and community sport. This article explains how to…
A realistic evaluation: the case of protocol-based care
2010-01-01
Background: 'Protocol based care' was envisioned by policy makers as a mechanism for delivering on the service improvement agenda in England. Realistic evaluation is an increasingly popular approach, but few published examples exist, particularly in implementation research. To fill this gap, within this paper we describe the application of a realistic evaluation approach to the study of protocol-based care, whilst sharing findings of relevance about standardising care through the use of protocols, guidelines, and pathways. Methods: Situated between positivism and relativism, realistic evaluation is concerned with the identification of underlying causal mechanisms, how they work, and under what conditions. Fundamentally it focuses attention on finding out what works, for whom, how, and in what circumstances. Results: In this research, we were interested in understanding the relationships between the type and nature of particular approaches to protocol-based care (mechanisms), within different clinical settings (context), and what impacts this resulted in (outcomes). An evidence review using the principles of realist synthesis resulted in a number of propositions, i.e., context, mechanism, and outcome threads (CMOs). These propositions were then 'tested' through multiple case studies, using multiple methods including non-participant observation, interviews, and document analysis, through an iterative analysis process. The initial propositions (conjectured CMOs) only partially corresponded to the findings that emerged during analysis. From the iterative analysis process of scrutinising mechanisms, context, and outcomes we were able to draw out some theoretically generalisable features about what works, for whom, how, and in what circumstances in relation to the use of standardised care approaches (refined CMOs). Conclusions: As one of the first studies to apply realistic evaluation in implementation research, it was a good fit, particularly given the growing emphasis on understanding how context influences evidence-based practice. The strengths and limitations of the approach are considered, including how to operationalise it and some of the challenges. This approach provided a useful interpretive framework with which to make sense of the multiple factors that were simultaneously at play and being observed through various data sources, and for developing explanatory theory about using standardised care approaches in practice. PMID:20504293
A robust and fast active contour model for image segmentation with intensity inhomogeneity
NASA Astrophysics Data System (ADS)
Ding, Keyan; Weng, Guirong
2018-04-01
In this paper, a robust and fast active contour model is proposed for image segmentation in the presence of intensity inhomogeneity. By introducing local image intensity fitting functions before the evolution of the curve, the proposed model can effectively segment images with intensity inhomogeneity. The computational cost is low because the fitting functions do not need to be updated in each iteration. Experiments have shown that the proposed model has a higher segmentation efficiency compared to some well-known active contour models based on local region fitting energy. In addition, the proposed model is robust to initialization, which allows the initial level set function to be a small constant function.
Efficient and robust analysis of complex scattering data under noise in microwave resonators.
Probst, S; Song, F B; Bushev, P A; Ustinov, A V; Weides, M
2015-02-01
Superconducting microwave resonators are reliable circuits widely used for detection and as test devices for material research. A reliable determination of their external and internal quality factors is crucial for many modern applications, which either require fast measurements or operate in the single-photon regime with small signal-to-noise ratios. Here, we use the circle fit technique with diameter correction and provide a step-by-step guide for implementing an algorithm for robust fitting and calibration of complex resonator scattering data in the presence of noise. The speedup and robustness of the analysis are achieved by employing an algebraic rather than an iterative fit technique for the resonance circle.
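The algebraic (non-iterative) circle fit can be sketched with the classic Kasa linearization, in which the circle equation x^2 + y^2 + D*x + E*y + F = 0 is solved as a linear least-squares problem; this is a generic stand-in for the paper's circle fit with diameter correction, and the synthetic resonance data are ours.

```python
# Algebraic circle fit (Kasa) to noisy complex scattering data: one linear
# solve, no iteration, recovering center and radius of the resonance circle.
import numpy as np

rng = np.random.default_rng(3)
theta = rng.uniform(0, 2 * np.pi, 200)
z = (0.5 + 0.3j) + 0.2 * np.exp(1j * theta)          # noisy resonance circle
z += 0.005 * (rng.standard_normal(200) + 1j * rng.standard_normal(200))

x, y = z.real, z.imag
A = np.column_stack([x, y, np.ones_like(x)])
b = -(x**2 + y**2)
(D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
xc, yc = -D / 2, -E / 2
r = np.sqrt(xc**2 + yc**2 - F)
print(xc, yc, r)   # ~0.5, 0.3, 0.2
```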
A Two-Dimensional Helmholtz Equation Solution for the Multiple Cavity Scattering Problem
2013-02-01
… obtained by using the block Gauss–Seidel iterative method. To show the convergence of the iterative method, we define the error between two … models to the general multiple cavity setting. Numerical examples indicate that the convergence of the Gauss–Seidel iterative method depends on the … variational approach. A block Gauss–Seidel iterative method is introduced to solve the coupled system of the multiple cavity scattering problem, where …
Hudson, H M; Ma, J; Green, P
1994-01-01
Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
NASA Astrophysics Data System (ADS)
Klimina, L. A.
2018-05-01
A modification of the Picard approach is suggested, targeted at the construction of a bifurcation diagram of 2π-periodic motions of a mechanical system with a cylindrical phase space. Each iterative step is based on principles of averaging and energy balance similar to the Poincaré-Pontryagin approach. If the iterative procedure converges, it provides the periodic trajectory of the system depending on the bifurcation parameter of the model. The method is applied to describe self-sustained rotations in the model of an aerodynamic pendulum.
Chen, Tinggui; Xiao, Renbin
2014-01-01
Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally causes increases in product cost and delays in development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach as well as an inner iteration method are used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and inner iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawaguchi, Tomoya; Liu, Yihua; Reiter, Anthony; ...
2018-04-20
Here, a one-dimensional non-iterative direct method was employed for normalized crystal truncation rod analysis. The non-iterative approach, utilizing the Kramers–Kronig relation, avoids the ambiguities due to an improper initial model or incomplete convergence in the conventional iterative methods. The validity and limitations of the present method are demonstrated through both numerical simulations and experiments with Pt(111) in a 0.1 M CsF aqueous solution. The present method is compared with conventional iterative phase-retrieval methods.
Mapping raised bogs with an iterative one-class classification approach
NASA Astrophysics Data System (ADS)
Mack, Benjamin; Roscher, Ribana; Stenzel, Stefanie; Feilhauer, Hannes; Schmidtlein, Sebastian; Waske, Björn
2016-10-01
Land use and land cover maps are one of the most commonly used remote sensing products. In many applications the user only requires a map of one particular class of interest, e.g. a specific vegetation type or an invasive species. One-class classifiers are appealing alternatives to common supervised classifiers because they can be trained with labeled training data of the class of interest only. However, training an accurate one-class classification (OCC) model is challenging, particularly when facing a large image, a small class and few training samples. To tackle these problems we propose an iterative OCC approach. The presented approach uses a biased Support Vector Machine as the core classifier. In an iterative pre-classification step, a large part of the pixels not belonging to the class of interest is classified. The remaining data are classified by a final classifier with a novel model and threshold selection approach. The specific objective of our study is the classification of raised bogs in a study site in southeast Germany, using multi-seasonal RapidEye data and a small number of training samples. Results demonstrate that the iterative OCC outperforms other state-of-the-art one-class classifiers and approaches for model selection. The study highlights the potential of the proposed approach for an efficient and improved mapping of small classes such as raised bogs. Overall, the proposed approach constitutes a feasible and useful modification of a regular one-class classifier.
LIDAR TS for ITER core plasma. Part II: simultaneous two wavelength LIDAR TS
NASA Astrophysics Data System (ADS)
Gowers, C.; Nielsen, P.; Salzmann, H.
2017-12-01
We have shown recently, and in more detail at this conference (Salzmann et al.), that the LIDAR approach to ITER core TS measurements requires only two mirrors in the inaccessible port plug area of the machine. This leads to simplified and robust alignment, lower risk of mirror damage by plasma contamination and much simpler calibration, compared with the awkward and vulnerable optical geometry of the conventional imaging TS approach currently under development by ITER. In the present work we have extended the simulation code used previously to include the case of launching two laser pulses, of different wavelengths, simultaneously in LIDAR geometry. The aim of this approach is to broaden the choice of lasers available for the diagnostic. In the simulation code it is assumed that two short-duration (300 ps) laser pulses of different wavelengths from an Nd:YAG laser are launched through the plasma simultaneously. The temperature and density profiles are deduced in the usual way but from the resulting combined scattered signals in the different spectral channels of the single spectrometer. The spectral response and quantum efficiencies of the detectors used in the simulation are taken from catalogue data for commercially available Hamamatsu MCP-PMTs. The response times, gateability and tolerance to stray light levels of this type of photomultiplier have already been demonstrated in the JET LIDAR system and give sufficient spatial resolution to meet the ITER specification. Here we present the new simulation results from the code. They demonstrate that when the detectors are combined with this two-laser LIDAR approach, the full range of the specified ITER core plasma Te and ne can be measured with sufficient accuracy. So, with commercially available detectors and a simple modification of an Nd:YAG laser similar to that currently being used in the design of the conventional ITER core TS system mentioned above, the ITER requirements can be met.
Greer, Amy L; Spence, Kelsey; Gardner, Emma
2017-01-05
The United States swine industry was first confronted with porcine epidemic diarrhea virus (PEDV) in 2013. In young pigs, the virus is highly pathogenic and the associated morbidity and mortality has a significant negative impact on the swine industry. We have applied the IDEA model to better understand the 2014 PEDV outbreak in Ontario, Canada. Using our simple, 2-parameter IDEA model, we have evaluated the early epidemic dynamics of PEDV on Ontario swine farms. We estimated the best-fit R0 and control parameter (d) for the between-farm transmission component of the outbreak by fitting the model to publicly available cumulative incidence data. We used maximum likelihood to compare model fit estimates for different combinations of the R0 and d parameters. Using our initial findings from the iterative fitting procedure, we projected the time course of the epidemic using only a subset of the early epidemic data. The IDEA model projections showed excellent agreement with the observed data based on a 7-day generation time estimate. The best-fit estimate for R0 was 1.87 (95% CI: 1.52 - 2.34) and for the control parameter (d) was 0.059 (95% CI: 0.022 - 0.117). Using data from the first three generations of the outbreak, our iterative fitting procedure suggests that R0 and d had stabilized sufficiently to project the time course of the outbreak with reasonable accuracy. The emergence and spread of PEDV represents an important agricultural emergency. The virus presents a significant ongoing threat to the Canadian swine industry. Developing an understanding of the important epidemiological characteristics and disease transmission dynamics of a novel pathogen such as PEDV is critical for helping to guide the implementation of effective, efficient, and economically feasible disease control and prevention strategies that are able to help decrease the impact of an outbreak.
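For reference, the two-parameter IDEA model expresses incident cases per generation as I(t) = (R0 / (1+d)^t)^t. The sketch below fits that form to synthetic counts with ordinary least squares; the study's maximum-likelihood comparison and real surveillance data are not reproduced.

```python
# Fit the two-parameter IDEA incidence model to synthetic per-generation counts.
import numpy as np
from scipy.optimize import curve_fit

def idea(t, R0, d):
    return (R0 / (1.0 + d) ** t) ** t

t = np.arange(1, 11)                       # generation number (7-day generations)
rng = np.random.default_rng(4)
cases = idea(t, 1.87, 0.059) * rng.lognormal(0, 0.05, t.size)  # synthetic counts

(R0_hat, d_hat), _ = curve_fit(idea, t, cases, p0=(2.0, 0.05),
                               bounds=([1.0, 0.0], [5.0, 1.0]))
print(R0_hat, d_hat)                       # recovers ~1.87 and ~0.059
```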
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data was obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude squared coherence data. The procedure also provides an estimate of the cross-spectrum phase-offset.
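The underlying principle is that for two coherent signals the cross-power spectrum phase is linear in frequency with slope proportional to the time delay. A minimal non-adaptive sketch of that phase-slope estimate follows; the paper's gradient-based adaptive delay and cost function are not reproduced, and the sampling rate and band limits are assumptions.

```python
# Estimate a time delay from the slope of the unwrapped cross-spectrum phase.
import numpy as np
from scipy.signal import csd

fs = 1000.0
rng = np.random.default_rng(5)
s = rng.standard_normal(20000)
delay = 12                                  # samples (12 ms at 1 kHz)
x, y = s[:-delay], s[delay:]                # y leads x by `delay` samples

f, Pxy = csd(x, y, fs=fs, nperseg=1024)
phase = np.unwrap(np.angle(Pxy))
band = f < 200.0                            # fit where coherence is high
slope = np.polyfit(f[band], phase[band], 1)[0]
print(slope / (2 * np.pi) * fs)             # ~ +12 samples of delay
```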
Three-dimensional deformable-model-based localization and recognition of road vehicles.
Zhang, Zhaoxiang; Tan, Tieniu; Huang, Kaiqi; Wang, Yunhong
2012-01-01
We address the problem of model-based object recognition. Our aim is to localize and recognize road vehicles from monocular images or videos in calibrated traffic scenes. A 3-D deformable vehicle model with 12 shape parameters is set up as prior information, and its pose is determined by three parameters, which are its position on the ground plane and its orientation about the vertical axis under ground-plane constraints. An efficient local gradient-based method is proposed to evaluate the fitness between the projection of the vehicle model and image data, which is combined into a novel evolutionary computing framework to estimate the 12 shape parameters and three pose parameters by iterative evolution. The recovery of pose parameters achieves vehicle localization, whereas the shape parameters are used for vehicle recognition. Numerous experiments are conducted in this paper to demonstrate the performance of our approach. It is shown that the local gradient-based method can evaluate accurately and efficiently the fitness between the projection of the vehicle model and the image data. The evolutionary computing framework is effective for vehicles of different types and poses and is robust to all kinds of occlusion.
Bednarkiewicz, Artur; Whelan, Maurice P
2008-01-01
Fluorescence lifetime imaging (FLIM) is very demanding from a technical and computational perspective, and the output is usually a compromise between acquisition/processing time and data accuracy and precision. We present a new approach to acquisition, analysis, and reconstruction of microscopic FLIM images by employing a digital micromirror device (DMD) as a spatial illuminator. In the first step, the whole field fluorescence image is collected by a color charge-coupled device (CCD) camera. Further qualitative spectral analysis and sample segmentation are performed to spatially distinguish between spectrally different regions on the sample. Next, the fluorescence of the sample is excited segment by segment, and fluorescence lifetimes are acquired with a photon counting technique. FLIM image reconstruction is performed by either raster scanning the sample or by directly accessing specific regions of interest. The unique features of the DMD illuminator allow the rapid on-line measurement of global good initial parameters (GIP), which are supplied to the first iteration of the fitting algorithm. As a consequence, a decrease of the computation time required to obtain a satisfactory quality-of-fit is achieved without compromising the accuracy and precision of the lifetime measurements.
Remote monitoring of lower-limb prosthetic socket fit using wireless technologies.
Sahandi, R; Sewell, P; Noroozi, S; Hewitt, M
2012-01-01
Accurate fitting of a lower-limb prosthetic socket is the most important factor affecting amputee satisfaction and rehabilitation. The technology is now available to allow real-time monitoring of in-service pressure distribution of prosthetic limbs. This paper proposes a remote interfacial pressure monitoring system necessary for the assessment of fit. The suitability of a wireless ZigBee network due to its relevant technical specification is investigated. The system enables remote monitoring of a prosthetic socket and its fit under different operating conditions thereby improving design, efficiency and effectiveness. The data can be used by prosthetists and may also be recorded for future training or for patient progress monitoring. This can minimize the number of iterations by getting it right first time, thereby minimizing the number of replacement prostheses. Copyright © 2012 Informa UK, Ltd.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
AIR-MRF: Accelerated iterative reconstruction for magnetic resonance fingerprinting.
Cline, Christopher C; Chen, Xiao; Mailhe, Boris; Wang, Qiu; Pfeuffer, Josef; Nittka, Mathias; Griswold, Mark A; Speier, Peter; Nadar, Mariappan S
2017-09-01
Existing approaches for reconstruction of multiparametric maps with magnetic resonance fingerprinting (MRF) are currently limited by their estimation accuracy and reconstruction time. We aimed to address these issues with a novel combination of iterative reconstruction, fingerprint compression, additional regularization, and accelerated dictionary search methods. The pipeline described here, accelerated iterative reconstruction for magnetic resonance fingerprinting (AIR-MRF), was evaluated with simulations as well as phantom and in vivo scans. We found that the AIR-MRF pipeline provided reduced parameter estimation errors compared to non-iterative and other iterative methods, particularly at shorter sequence lengths. Accelerated dictionary search methods incorporated into the iterative pipeline reduced the reconstruction time at little cost of quality. Copyright © 2017 Elsevier Inc. All rights reserved.
3D Printing: Exploring Capabilities
ERIC Educational Resources Information Center
Samuels, Kyle; Flowers, Jim
2015-01-01
As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…
Estimation in SEM: A Concrete Example
ERIC Educational Resources Information Center
Ferron, John M.; Hess, Melinda R.
2007-01-01
A concrete example is used to illustrate maximum likelihood estimation of a structural equation model with two unknown parameters. The fitting function is found for the example, as are the vector of first-order partial derivatives, the matrix of second-order partial derivatives, and the estimates obtained from each iteration of the Newton-Raphson…
Investigating Convergence Patterns for Numerical Methods Using Data Analysis
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2013-01-01
The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
He, Shixuan; Xie, Wanyi; Zhang, Wei; Zhang, Liqun; Wang, Yunxia; Liu, Xiaoling; Liu, Yulong; Du, Chunlei
2015-02-25
A novel strategy that combines an iteratively cubic spline fitting baseline correction method with discriminant partial least squares qualitative analysis is employed to analyze the surface-enhanced Raman scattering (SERS) spectroscopy of banned food additives, such as Sudan I dye and Rhodamine B in food and Malachite green residues in aquaculture fish. Multivariate qualitative analysis methods, combining iteratively cubic spline fitting (ICSF) baseline correction preprocessing with principal component analysis (PCA) and discriminant partial least squares (DPLS) classification respectively, are applied to investigate the effectiveness of SERS spectroscopy for predicting the class assignments of unknown banned food additives. PCA cannot be used to predict the class assignments of unknown samples. However, DPLS classification can discriminate the class assignments of unknown banned additives using the information of differences in relative intensities. The results demonstrate that SERS spectroscopy combined with the ICSF baseline correction method and the exploratory analysis methodology DPLS classification can potentially be used for distinguishing banned food additives in the field of food safety. Copyright © 2014 Elsevier B.V. All rights reserved.
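An iterative spline baseline correction in the spirit of ICSF can be sketched as fit-then-clamp: fit a smoothing spline, pull the working signal down to the baseline wherever it lies above it, and refit so peaks stop attracting the baseline. The smoothing factor, iteration count, and toy spectrum below are assumptions.

```python
# Iterative cubic-spline baseline correction sketch for a Raman-like spectrum.
import numpy as np
from scipy.interpolate import UnivariateSpline

def icsf_baseline(x, y, n_iters=20, s_factor=1.0):
    work = y.copy()
    for _ in range(n_iters):
        spline = UnivariateSpline(x, work, k=3, s=s_factor * x.size)
        baseline = spline(x)
        work = np.minimum(work, baseline)   # suppress peaks above the baseline
    return baseline

# toy spectrum: broad curved baseline plus two sharp bands
x = np.linspace(0, 10, 800)
y = 2 + 0.3 * x + np.exp(-((x - 3) / 0.05) ** 2) + 0.8 * np.exp(-((x - 7) / 0.08) ** 2)
corrected = y - icsf_baseline(x, y)        # bands remain, slope is removed
```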
NASA Astrophysics Data System (ADS)
Konovalov, Igor; Breitenstein, Otwin
2001-01-01
An iterative algorithm for the derivation of depth profiles of the minority carrier collection probability in a semiconductor with or without a coating on the top is presented, using energy-resolved electron-beam-induced current measurements in planar geometry. The calculation is based on the depth-dose function of Everhart and Hoff (Everhart T E and Hoff P H 1971 J. Appl. Phys. 42 5837) and on the penetration-range function of Kanaya and Okayama (Kanaya K and Okayama S 1972 J. Phys. D: Appl. Phys. 5 43) or on that of Fitting (Fitting H-J 1974 Phys. Status Solidi (a) 26 525). It can also be performed with any other depth-dose function. Using this algorithm does not require us to make any assumptions on the shape of the collection profile within the depth of interest. The influence of an absorbing top contact and/or a limited thickness of the semiconductor layer appears in the result, but can also be taken explicitly into account. Examples using silicon and CIS solar cells as well as a GaAs LED are presented.
Land, P E; Haigh, J D
1997-12-20
In algorithms for the atmospheric correction of visible and near-IR satellite observations of the Earth's surface, it is generally assumed that the spectral variation of aerosol optical depth is characterized by an Ångström power law or similar dependence. In an iterative fitting algorithm for atmospheric correction of ocean color imagery over case 2 waters, this assumption leads to an inability to retrieve the aerosol type and to the attribution, to aerosol spectral variations, of spectral effects actually caused by the water contents. An improvement to this algorithm is described in which the spectral variation of optical depth is calculated as a function of aerosol type and relative humidity, and an attempt is made to retrieve the relative humidity in addition to the aerosol type. The aerosol is treated as a mixture of aerosol components (e.g., soot) rather than of aerosol types (e.g., urban). We demonstrate the improvement over the previous method by using simulated case 1 and case 2 Sea-viewing Wide Field-of-view Sensor (SeaWiFS) data, although the retrieval of relative humidity was not successful.
Modeling defect trends for iterative development
NASA Technical Reports Server (NTRS)
Powell, J. D.; Spanguolo, J. N.
2003-01-01
The Employment of Defects (EoD) approach to measuring and analyzing defects seeks to identify and capture trends and phenomena that are critical to managing software quality in the iterative software development lifecycle at JPL.
A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.
De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc
2010-09-01
In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
Unsupervised iterative detection of land mines in highly cluttered environments.
Batman, Sinan; Goutsias, John
2003-01-01
An unsupervised iterative scheme is proposed for land mine detection in heavily cluttered scenes. This scheme is based on iterating hybrid multispectral filters that consist of a decorrelating linear transform coupled with a nonlinear morphological detector. Detections extracted from the first pass are used to improve results in subsequent iterations. The procedure stops after a predetermined number of iterations. The proposed scheme addresses several weaknesses associated with previous adaptations of morphological approaches to land mine detection. Improvement in detection performance, robustness with respect to clutter inhomogeneities, a completely unsupervised operation, and computational efficiency are the main highlights of the method. Experimental results reveal excellent performance.
ITER activities and fusion technology
NASA Astrophysics Data System (ADS)
Seki, M.
2007-10-01
At the 21st IAEA Fusion Energy Conference, 68 and 67 papers were presented in the categories of ITER activities and fusion technology, respectively. ITER performance prediction, results of technology R&D and the construction preparation provide good confidence in ITER realization. The superconducting tokamak EAST achieved its first plasma just before the conference. The construction of other new experimental machines has also shown steady progress. Future reactor studies stress the importance of downsizing and a steady-state approach. Reactor technology in the field of blankets, including the ITER TBM programme, and materials for the demonstration power plant showed sound progress in both R&D and design activities.
Hearing aid fine-tuning based on Dutch descriptions.
Thielemans, Thijs; Pans, Donné; Chenault, Michelene; Anteunis, Lucien
2017-07-01
The aim of this study was to derive an independent fitting assistant based on expert consensus. Two questions were asked: (1) what (Dutch) terms do hearing impaired listeners use nowadays to describe their specific hearing aid fitting problems? (2) What is the expert consensus on how to resolve these complaints by adjusting hearing aid parameters? Hearing aid dispensers provided descriptors that impaired listeners use to describe their reactions to specific hearing aid fitting problems. Hearing aid fitting experts were asked "How would you adjust the hearing aid if its user reports that the aid sounds…?" with the blank filled with each of the 40 most frequently mentioned descriptors. 112 hearing aid dispensers and 15 hearing aid experts. The expert solution with the highest weight value was considered the best solution for that descriptor. Principal component analysis (PCA) was performed to identify a factor structure in fitting problems. Nine fitting problems could be identified resulting in an expert-based, hearing aid manufacturer independent, fine-tuning fitting assistant for clinical use. The construction of an expert-based, hearing aid manufacturer independent, fine-tuning fitting assistant to be used as an additional tool in the iterative fitting process is feasible.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection.
Gürsoy, Doğa; Hong, Young P; He, Kuan; Hujsak, Karl; Yoo, Seunghwan; Chen, Si; Li, Yue; Ge, Mingyuan; Miller, Lisa M; Chu, Yong S; De Andrade, Vincent; He, Kai; Cossairt, Oliver; Katsaggelos, Aggelos K; Jacobsen, Chris
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
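The reconstruct-reproject-register loop can be sketched with a filtered-backprojection stand-in for the reconstruction engine; shift estimation here is integer-pixel cross-correlation for simplicity, whereas the paper jointly refines alignment and reconstruction at sub-pixel precision.

```python
# Joint re-projection alignment sketch: reconstruct from misaligned
# projections, re-project, register each projection to its re-projection,
# correct the shifts, and repeat.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

def best_shift(ref, mov):
    # integer shift to apply to `mov` to best align it with `ref`
    corr = np.correlate(ref, mov, mode="full")
    return np.argmax(corr) - (len(mov) - 1)

obj = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 120, endpoint=False)
rng = np.random.default_rng(6)
jitter = rng.integers(-3, 4, theta.size)             # per-angle stage error (px)
sino = radon(obj, theta=theta)
sino = np.stack([nd_shift(sino[:, i], jitter[i]) for i in range(theta.size)], 1)

est = np.zeros(theta.size)
for _ in range(5):                                    # iterative refinement
    aligned = np.stack([nd_shift(sino[:, i], -est[i]) for i in range(theta.size)], 1)
    recon = iradon(aligned, theta=theta)
    reproj = radon(recon, theta=theta)
    for i in range(theta.size):
        est[i] -= best_shift(reproj[:, i], aligned[:, i])
# residual alignment error (a common offset degeneracy may remain)
print(np.abs(jitter - est).max())
```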
Fine‐resolution conservation planning with limited climate‐change information
Shah, Payal; Mallory, Mindy L.; Ando, Amy W.; Guntenspergen, Glenn R.
2017-01-01
Climate‐change induced uncertainties in future spatial patterns of conservation‐related outcomes make it difficult to implement standard conservation‐planning paradigms. A recent study translates Markowitz's risk‐diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate‐change scenarios for carrying out fine‐resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk‐return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate‐change information and full climate‐change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate‐change forecasts such that the best possible risk‐return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate‐change information could be reduced by 17% relative to other iterative approaches.
NASA Technical Reports Server (NTRS)
Ronan, R. S.; Mickey, D. L.; Orrall, F. Q.
1987-01-01
The results of two methods for deriving photospheric vector magnetic fields from the Zeeman effect, as observed in the Fe I line at 6302.5 A at high spectral resolution (45 mA), are compared. The first method does not take magnetooptical effects into account, but determines the vector magnetic field from the integral properties of the Stokes profiles. The second method is an iterative least-squares fitting technique which fits the observed Stokes profiles to the profiles predicted by the Unno-Rachkovsky solution to the radiative transfer equation. For sunspot fields above about 1500 gauss, the two methods are found to agree in derived azimuthal and inclination angles to within about + or - 20 deg.
A stepladder approach to a tokamak fusion power plant
NASA Astrophysics Data System (ADS)
Zohm, H.; Träuble, F.; Biel, W.; Fable, E.; Kemp, R.; Lux, H.; Siccinio, M.; Wenninger, R.
2017-08-01
We present an approach to design in a consistent way a stepladder connecting ITER, DEMO and an FPP, starting from an attractive FPP and then locating DEMO such that the main similarity parameters for the core scenario are constant. The approach presented suggests how to use ITER such that DEMO can be extrapolated with maximum confidence, and a development path for plasma scenarios in ITER follows from our approach, moving from the low β_N and q typical for the present Q = 10 scenario to the higher values needed for steady state. A numerical example is given, indicative of the feasibility of the approach, and it is backed up by more detailed 1.5-D calculations using the ASTRA code. We note that ideal MHD stability analysis of the DEMO operating point indicates that it is located between the no-wall and the ideal-wall β-limit, which may require active stabilization. The DEMO design could also be a pulsed fallback solution should stationary operation turn out to be impossible.
Rescheduling with iterative repair
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene; Daun, Brian; Deale, Michael
1992-01-01
This paper presents a new approach to rescheduling called constraint-based iterative repair. This approach gives our system the ability to satisfy domain constraints, address optimization concerns, minimize perturbation to the original schedule, produce modified schedules quickly, and exhibit 'anytime' behavior. The system begins with an initial, flawed schedule and then iteratively repairs constraint violations until a conflict-free schedule is produced. In an empirical demonstration, we vary the importance of minimizing perturbation and report how fast the system is able to resolve conflicts in a given time bound. We also show the anytime characteristics of the system. These experiments were performed within the domain of Space Shuttle ground processing.
Patino, Manuel; Fuentes, Jorge M; Singh, Sarabjeet; Hahn, Peter F; Sahani, Dushyant V
2015-07-01
This article discusses the clinical challenge of low-radiation-dose examinations, the commonly used approaches for dose optimization, and their effect on image quality. We emphasize practical aspects of the different iterative reconstruction techniques, along with their benefits, pitfalls, and clinical implementation. The widespread use of CT has raised concerns about potential radiation risks, motivating diverse strategies to reduce the radiation dose associated with CT. CT manufacturers have developed alternative reconstruction algorithms intended to improve image quality on dose-optimized CT studies, mainly through noise and artifact reduction. Iterative reconstruction techniques take unique approaches to noise reduction and provide distinct strength levels or settings.
Scenario-based fitted Q-iteration for adaptive control of water reservoir systems under uncertainty
NASA Astrophysics Data System (ADS)
Bertoni, Federica; Giuliani, Matteo; Castelletti, Andrea
2017-04-01
Over recent years, mathematical models have largely been used to support planning and management of water resources systems. Yet, the increasing uncertainties in their inputs - due to increased variability in the hydrological regimes - are a major challenge to the optimal operations of these systems. Such uncertainty, boosted by projected changing climate, violates the stationarity principle generally used for describing hydro-meteorological processes, which assumes time-persistent statistical characteristics of a given variable as inferred from historical data. As this principle is unlikely to be valid in the future, the probability density function used for modeling stochastic disturbances (e.g., inflows) becomes an additional uncertain parameter of the problem, which can be described in a deterministic and set-membership based fashion. This study contributes a novel method for designing optimal, adaptive policies for controlling water reservoir systems under climate-related uncertainty. The proposed method, called scenario-based Fitted Q-Iteration (sFQI), extends the original Fitted Q-Iteration algorithm by enlarging the state space to include the space of the uncertain system's parameters (i.e., the uncertain climate scenarios). As a result, sFQI embeds the set-membership uncertainty of the future inflow scenarios in the action-value function and is able to approximate, with a single learning process, the optimal control policy associated with any scenario included in the uncertainty set. The method is demonstrated on a synthetic water system, consisting of a regulated lake operated for ensuring reliable water supply to downstream users. Numerical results show that the sFQI algorithm successfully identifies adaptive solutions to operate the system under different inflow scenarios, which outperform the control policy designed under historical conditions. Moreover, the sFQI policy generalizes over inflow scenarios not directly experienced during the policy design, thus alleviating the risk of mis-adaptation, namely the design of a solution fully adapted to a scenario that is different from the one that actually occurs.
Kouri, Donald J [Houston, TX; Vijay, Amrendra [Houston, TX; Zhang, Haiyan [Houston, TX; Zhang, Jingfeng [Houston, TX; Hoffman, David K [Ames, IA
2007-05-01
A method and system for solving the inverse acoustic scattering problem using an iterative approach with consideration of half-off-shell transition matrix elements (near-field information), where the Volterra inverse series correctly predicts the first two moments of the interaction while the Fredholm inverse series is correct only for the first moment, and where the Volterra approach provides a method for exactly obtaining interactions that can be written as a sum of delta functions.
2010-02-24
… electronic Schrödinger equation. In previous grant cycles, we implemented the NEO approach at the Hartree-Fock (NEO-HF), configuration interaction … electronic and nuclear molecular orbitals. The resulting electronic and nuclear Hartree-Fock-Roothaan equations are solved iteratively until self … directly into the standard Hartree-Fock-Roothaan equations, which are solved iteratively to self-consistency. The density matrix representation …
Image segmentation using local shape and gray-level appearance models
NASA Astrophysics Data System (ADS)
Seghers, Dieter; Loeckx, Dirk; Maes, Frederik; Suetens, Paul
2006-03-01
A new generic model-based segmentation scheme is presented, which can be trained from examples, akin to the Active Shape Model (ASM) approach, in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. In the ASM approach the intensity and shape models are typically applied alternately during optimization: first an optimal target location is selected for each landmark separately, based on local gray-level appearance information only, and the shape model is subsequently fitted to these locations, so the ASM may be misled in case of wrongly selected landmark locations. Instead, the proposed approach optimizes shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points, extracted from feature images, is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized non-iteratively using dynamic programming, which allows the optimal landmark positions to be found using combined shape and intensity information, without the need for initialization.
Development of iterative techniques for the solution of unsteady compressible viscous flows
NASA Technical Reports Server (NTRS)
Sankar, Lakshmi N.; Hixon, Duane
1992-01-01
The development of efficient iterative solution methods for the numerical solution of two- and three-dimensional compressible Navier-Stokes equations is discussed. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes. In this work, another approach based on the classical conjugate gradient method, known as the Generalized Minimum Residual (GMRES) algorithm, is investigated. The GMRES algorithm has been used in the past by a number of researchers for solving steady viscous and inviscid flow problems. Here, we investigate the suitability of this algorithm for solving the system of non-linear equations that arises in unsteady Navier-Stokes solvers at each time step.
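As a point of reference for how such a solver is invoked in practice, here is a minimal restarted-GMRES call using SciPy on a small nonsymmetric test system; the test matrix, restart length, and tolerance handling are assumptions, and callback_type requires a reasonably recent SciPy.

```python
# Restarted GMRES on a nonsymmetric tridiagonal system of the kind that
# arises from implicit time discretizations of convection-diffusion problems.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

n = 500
A = diags([-1.2, 2.0, -0.8], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

residuals = []
x, info = gmres(A, b, restart=30, maxiter=1000,
                callback=lambda rk: residuals.append(rk),
                callback_type="pr_norm")
print(info, np.linalg.norm(A @ x - b))   # info == 0 means converged
```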
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to maintain primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to obtain superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
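The cost saving highlighted above, two linear systems per iteration sharing one matrix, can be illustrated by reusing a single LU factorization; in this Python sketch the matrix and right-hand sides are arbitrary stand-ins, not an actual KKT system:

```python
# Sketch: factor the shared system matrix once, then solve both the primal
# and the dual right-hand sides with the same factors.
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6)) + 6 * np.eye(6)  # stand-in system matrix
b_primal = rng.standard_normal(6)
b_dual = rng.standard_normal(6)

lu, piv = lu_factor(W)                  # one factorization per iteration
d_primal = lu_solve((lu, piv), b_primal)
d_dual = lu_solve((lu, piv), b_dual)    # second solve reuses the factors
```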
A new framework for designing programmes of assessment
Van der Vleuten, C. P. M.; Schuwirth, L. W. T.
2009-01-01
Research on assessment in medical education has strongly focused on individual measurement instruments and their psychometric quality. Without detracting from the value of this research, such an approach is not sufficient for high-quality assessment of competence as a whole. A programmatic approach is advocated, which presupposes criteria for designing comprehensive assessment programmes and for assuring their quality. The paucity of research with relevance to programmatic assessment, and especially its development, prompted us to embark on a research project to develop design principles for programmes of assessment. We conducted focus group interviews to explore the experiences and views of nine assessment experts concerning good practices and new ideas about theoretical and practical issues in programmes of assessment. The discussion was analysed, mapping all aspects relevant for design onto a framework, which was iteratively adjusted to fit the data until saturation was reached. The overarching framework for designing programmes of assessment consists of six assessment programme dimensions: Goals, Programme in Action, Support, Documenting, Improving and Accounting. The model described in this paper can help to frame programmes of assessment; it not only provides a common language, but also a comprehensive picture of the dimensions to be covered when formulating design principles. It helps identify areas concerning assessment in which ample research and development has been done. But, more importantly, it also helps to detect underserved areas. A guiding principle in the design of assessment programmes is fitness for purpose. High-quality assessment can only be defined in terms of its goals. PMID:19821042
A new framework for designing programmes of assessment.
Dijkstra, J; Van der Vleuten, C P M; Schuwirth, L W T
2010-08-01
Research on assessment in medical education has strongly focused on individual measurement instruments and their psychometric quality. Without detracting from the value of this research, such an approach is not sufficient for high-quality assessment of competence as a whole. A programmatic approach is advocated, which presupposes criteria for designing comprehensive assessment programmes and for assuring their quality. The paucity of research with relevance to programmatic assessment, and especially its development, prompted us to embark on a research project to develop design principles for programmes of assessment. We conducted focus group interviews to explore the experiences and views of nine assessment experts concerning good practices and new ideas about theoretical and practical issues in programmes of assessment. The discussion was analysed, mapping all aspects relevant for design onto a framework, which was iteratively adjusted to fit the data until saturation was reached. The overarching framework for designing programmes of assessment consists of six assessment programme dimensions: Goals, Programme in Action, Support, Documenting, Improving and Accounting. The model described in this paper can help to frame programmes of assessment; it not only provides a common language, but also a comprehensive picture of the dimensions to be covered when formulating design principles. It helps identify areas concerning assessment in which ample research and development has been done. But, more importantly, it also helps to detect underserved areas. A guiding principle in the design of assessment programmes is fitness for purpose. High-quality assessment can only be defined in terms of its goals.
2014-01-01
Due to fierce market competition, the ability to improve product quality and reduce development cost determines the core competitiveness of an enterprise. Design iteration, however, generally increases product cost and delays development time, so identifying and modeling the couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and the tearing approach together with an inner iteration method is used to complement the classic WTM model. In addition, the ABC algorithm is introduced to find optimal decoupling schemes. Firstly, the tearing approach and the inner iteration method are analyzed for solving coupled task sets. Secondly, a hybrid iteration model combining these two techniques is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness. PMID:25431584
The Physics Basis of ITER Confinement
NASA Astrophysics Data System (ADS)
Wagner, F.
2009-02-01
ITER will be the first fusion reactor, and the 50-year-old dream of fusion scientists will become reality. The quality of magnetic confinement will decide the success of ITER, directly in the form of the confinement time and indirectly because it determines the plasma parameters and the fluxes which cross the separatrix and have to be handled externally by technical means. This lecture portrays some of the basic principles which govern plasma confinement, uses dimensionless scaling to set the limits for the predictions for ITER, an approach which also shows the limitations of the predictions, and briefly describes the major characteristics and physics behind the H-mode, the preferred confinement regime of ITER.
Critical elements on fitting the Bayesian multivariate Poisson Lognormal model
NASA Astrophysics Data System (ADS)
Zamzuri, Zamira Hasanah binti
2015-10-01
Motivated by a problem of fitting multivariate models to traffic accident data, a detailed discussion of the Multivariate Poisson Lognormal (MPL) model is presented. This paper reveals three critical elements in fitting the MPL model: the setting of initial estimates, the hyperparameters and the tuning parameters. These issues have not been highlighted in the literature. Based on the simulation studies conducted, we show that when the Univariate Poisson Model (UPM) estimates are used as starting values, at least 20,000 iterations are needed to obtain reliable final estimates. We also illustrate the sensitivity of a specific hyperparameter which, if not given extra attention, may affect the final estimates. The last issue concerns the tuning parameters, which depend on the acceptance rate. Finally, a heuristic algorithm to fit the MPL model is presented. This acts as a guide to ensure that the model works satisfactorily for any given data set.
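A minimal Python sketch of the third element, tuning a random-walk proposal scale from the running acceptance rate; the toy one-dimensional log-posterior, the 0.44 target rate, and the tuning interval are illustrative assumptions, not the paper's MPL sampler:

```python
# Sketch: adapt the Metropolis step size so the acceptance rate drifts
# toward a target value; the paper advises >= 20,000 iterations.
import numpy as np

def log_post(x):                     # stand-in log-posterior
    return -0.5 * x * x

rng = np.random.default_rng(1)
x, scale, accepted = 0.0, 1.0, 0
for it in range(1, 20001):
    prop = x + scale * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(prop) - log_post(x):
        x, accepted = prop, accepted + 1
    if it % 500 == 0:                # periodic tuning step
        rate = accepted / it
        scale *= np.exp(rate - 0.44)  # grow the step if accepting too often
print(x, scale)
```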
Brown, Angus M
2006-04-01
The objective of this study was to demonstrate a method for fitting complex electrophysiological data with multiple functions using the SOLVER add-in of the ubiquitous spreadsheet Microsoft Excel. SOLVER minimizes the sum of the squared differences between the data to be fit and the function(s) describing the data using an iterative generalized reduced gradient method. While it is a straightforward procedure to fit data with linear functions, and we have previously demonstrated a method of non-linear regression analysis of experimental data based upon a single function, it is more complex to fit data with multiple functions, which usually requires specialized, expensive computer software. In this paper we describe an easily understood program for fitting experimentally acquired data, in this case the stimulus-evoked compound action potential from the mouse optic nerve, with multiple Gaussian functions. The program is flexible and can be applied to describe data with a wide variety of user-input functions.
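An analogous fit can be sketched outside Excel. In the following Python example, scipy's least-squares fitter stands in for SOLVER's generalized reduced gradient method, and the synthetic trace and all amplitudes, centers and widths are invented for illustration:

```python
# Sketch: fit a sum of Gaussians to a synthetic trace, with user-supplied
# starting guesses as in the spreadsheet workflow.
import numpy as np
from scipy.optimize import curve_fit

def gaussians(t, *p):    # p = (a1, mu1, s1, a2, mu2, s2, ...)
    y = np.zeros_like(t)
    for a, mu, s in zip(p[0::3], p[1::3], p[2::3]):
        y += a * np.exp(-0.5 * ((t - mu) / s) ** 2)
    return y

t = np.linspace(0, 10, 400)
truth = (1.0, 3.0, 0.5, 0.6, 5.5, 0.8)
rng = np.random.default_rng(2)
y = gaussians(t, *truth) + 0.02 * rng.standard_normal(t.size)

p0 = (0.8, 2.5, 0.7, 0.5, 6.0, 1.0)   # starting guesses
popt, _ = curve_fit(gaussians, t, y, p0=p0)
print(popt)
```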
Modeling Data Containing Outliers using ARIMA Additive Outlier (ARIMA-AO)
NASA Astrophysics Data System (ADS)
Saleh Ahmar, Ansari; Guritno, Suryo; Abdurakhman; Rahman, Abdul; Awi; Alimuddin; Minggi, Ilham; Arif Tiro, M.; Kasim Aidid, M.; Annas, Suwardi; Utami Sutiksno, Dian; Ahmar, Dewi S.; Ahmar, Kurniawan H.; Abqary Ahmar, A.; Zaki, Ahmad; Abdullah, Dahlan; Rahim, Robbi; Nurdiyanto, Heri; Hidayat, Rahmat; Napitupulu, Darmawan; Simarmata, Janner; Kurniasih, Nuning; Andretti Abdillah, Leon; Pranolo, Andri; Haviluddin; Albra, Wahyudin; Arifin, A. Nurani M.
2018-01-01
The aim of this study is to discuss the detection and correction of data containing additive outliers (AO) in an ARIMA(p, d, q) model. Detection and correction use an iterative procedure popularized by Box, Jenkins, and Reinsel (1994). With this method we obtain an ARIMA model that fits the data containing AO; coefficients obtained from the iteration process using regression methods are added to the original ARIMA model. For the simulated data, the initial model for the data containing AO is ARIMA(2,0,0) with MSE = 36.780; after detection and correction of the data, the iteration yields the model ARIMA(2,0,0) with the regression coefficients Z_t = 0.106 + 0.204Z_{t-1} + 0.401Z_{t-2} - 329X_1(t) + 115X_2(t) + 35.9X_3(t) and MSE = 19.365. This shows an improvement in the forecasting error rate.
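One common way to realize the correction step, refitting with a pulse dummy at the detected outlier time as an exogenous regressor, can be sketched in Python with statsmodels; the series, coefficients, and outlier location below are synthetic, and this is an illustration rather than the authors' procedure:

```python
# Sketch: simulate a stationary AR(2) series, inject an additive outlier,
# and refit ARIMA(2,0,0) with a pulse dummy marking the detected AO.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
e = rng.standard_normal(200)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.5 * y[t - 1] - 0.2 * y[t - 2] + e[t]
t0 = 120
y[t0] += 8.0                   # inject an additive outlier

pulse = np.zeros_like(y)
pulse[t0] = 1.0                # pulse dummy at the detected AO
fit = ARIMA(y, exog=pulse, order=(2, 0, 0)).fit()
print(fit.params)              # AR coefficients plus the outlier effect
```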
Representation-Independent Iteration of Sparse Data Arrays
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
An approach is defined for iterating over massively large arrays containing sparse data in a manner that is independent of how the contents of the sparse arrays are laid out in memory. What is unique and important here is the decoupling of the iteration over the sparse set of array elements from how they are internally represented, which makes the approach backward compatible with existing schemes for representing sparse arrays as well as with new ones. A functional interface is defined for implementing sparse arrays in any modern programming language, with a particular focus on the Chapel programming language. Examples are provided that show the translation of a loop that computes a matrix-vector product into this representation for both the distributed and non-distributed cases. This work is directly applicable to NASA and its High Productivity Computing Systems (HPCS) program, in which JPL and our current program are engaged. The goal of this program is to create powerful, scalable, and economically viable high-powered computer systems suitable for use in national security and industry by 2010. This is important to NASA because of its computationally intensive requirements for analyzing and understanding the volumes of science data from our returned missions.
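A Python analogue of the idea (the paper targets Chapel; the class and function names here are invented for illustration): client code iterates over (index, value) pairs through one interface, regardless of whether the array is stored as a dictionary or as coordinate lists:

```python
# Two storage layouts, one iteration interface.
class DictSparse:
    def __init__(self, data):            # data: {(i, j): value}
        self.data = data
    def __iter__(self):
        return iter(self.data.items())

class CoordSparse:
    def __init__(self, rows, cols, vals):
        self.rows, self.cols, self.vals = rows, cols, vals
    def __iter__(self):
        return (((i, j), v) for i, j, v in zip(self.rows, self.cols, self.vals))

def matvec(sparse_matrix, x):
    """Matrix-vector product over any representation exposing the
    iteration interface (square matrix assumed for simplicity)."""
    y = [0.0] * len(x)
    for (i, j), v in sparse_matrix:
        y[i] += v * x[j]
    return y

a = DictSparse({(0, 1): 2.0, (1, 0): 3.0})
b = CoordSparse([0, 1], [1, 0], [2.0, 3.0])
assert matvec(a, [1.0, 1.0]) == matvec(b, [1.0, 1.0])
```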
Detecting Aberrant Response Patterns in the Rasch Model. Rapport 87-3.
ERIC Educational Resources Information Center
Kogut, Jan
In this paper, the detection of response patterns aberrant from the Rasch model is considered. For this purpose, a new person fit index, recently developed by I. W. Molenaar (1987) and an iterative estimation procedure are used in a simulation study of Rasch model data mixed with aberrant data. Three kinds of aberrant response behavior are…
ERIC Educational Resources Information Center
Kelderman, Henk
1992-01-01
Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…
NASA Astrophysics Data System (ADS)
Tupa, Peter R.; Quirin, S.; DeLeo, G. G.; McCluskey, G. E., Jr.
2007-12-01
We present a modified Fourier transform approach to determine the orbital parameters of detached visual binary stars. Originally inspired by Monet (ApJ 234, 275, 1979), this new method utilizes an iterative routine of refining higher order Fourier terms in a manner consistent with Keplerian motion. In most cases, this approach is not sensitive to the starting orbital parameters in the iterative loop. In many cases we have determined orbital elements even with small fragments of orbits and noisy data, although some systems show computational instabilities. The algorithm was constructed using the MAPLE mathematical software code and tested on artificially created orbits and many real binary systems, including Gliese 22 AC, Tau 51, and BU 738. This work was supported at Lehigh University by NSF-REU grant PHY-9820301.
Rapid alignment of nanotomography data using joint iterative reconstruction and reprojection
Gürsoy, Doğa; Hong, Young P.; He, Kuan; ...
2017-09-18
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of the alignment errors and the object using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
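A schematic Python toy of the joint refinement loop, reduced to one-dimensional projections and integer shifts; the real method reconstructs and re-projects a three-dimensional object, so everything below (signal, shifts, registration by circular cross-correlation) is a stand-in:

```python
# Sketch: alternately (1) form a model from the currently aligned
# projections and (2) re-register each projection against that model.
import numpy as np

rng = np.random.default_rng(4)
ref = np.exp(-0.5 * ((np.arange(128) - 64) / 6.0) ** 2)
true_shifts = rng.integers(-10, 11, size=20)
projs = np.array([np.roll(ref, s) for s in true_shifts])

est = np.zeros(len(projs), dtype=int)
for _ in range(5):                             # joint refinement loop
    model = np.mean([np.roll(p, -e) for p, e in zip(projs, est)], axis=0)
    for k, p in enumerate(projs):              # re-register each projection
        xcorr = np.fft.ifft(np.fft.fft(p) * np.conj(np.fft.fft(model))).real
        lag = int(np.argmax(xcorr))
        est[k] = lag if lag <= 64 else lag - 128
print("max shift error:", np.max(np.abs(est - true_shifts)))
```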
Leiner, Claude; Nemitz, Wolfgang; Schweitzer, Susanne; Kuna, Ladislav; Wenzl, Franz P; Hartmann, Paul; Satzinger, Valentin; Sommer, Christian
2016-03-20
We show that with an appropriate combination of two optical simulation techniques, classical ray-tracing and the finite difference time domain method, an optical device containing multiple diffractive and refractive optical elements can be accurately simulated in an iterative simulation approach. We compare the simulation results with experimental measurements of the device to discuss the applicability and accuracy of our iterative simulation procedure.
Langlois, Etienne V; Becerril Montekio, Victor; Young, Taryn; Song, Kayla; Alcalde-Rabanal, Jacqueline; Tran, Nhan
2016-03-17
There is an increasing interest worldwide to ensure evidence-informed health policymaking as a means to improve health systems performance. There is a need to engage policymakers in collaborative approaches to generate and use knowledge in real world settings. To address this gap, we implemented two interventions based on iterative exchanges between researchers and policymakers/implementers. This article aims to reflect on the implementation and impact of these multi-site evidence-to-policy approaches implemented in low-resource settings. The first approach was implemented in Mexico and Nicaragua and focused on implementation research facilitated by communities of practice (CoP) among maternal health stakeholders. We conducted a process evaluation of the CoPs and assessed the professionals' abilities to acquire, analyse, adapt and apply research. The second approach, called the Policy BUilding Demand for evidence in Decision making through Interaction and Enhancing Skills (Policy BUDDIES), was implemented in South Africa and Cameroon. The intervention put forth a 'buddying' process to enhance demand and use of systematic reviews by sub-national policymakers. The Policy BUDDIES initiative was assessed using a mixed-methods realist evaluation design. In Mexico, the implementation research supported by CoPs triggered monitoring by local health organizations of the quality of maternal healthcare programs. Health programme personnel involved in CoPs in Mexico and Nicaragua reported improved capacities to identify and use evidence in solving implementation problems. In South Africa, Policy BUDDIES informed a policy framework for medication adherence for chronic diseases, including both HIV and non-communicable diseases. Policymakers engaged in the buddying process reported an enhanced recognition of the value of research, and greater demand for policy-relevant knowledge. The collaborative evidence-to-policy approaches underline the importance of iterations and continuity in the engagement of researchers and policymakers/programme managers, in order to account for swift evolutions in health policy planning and implementation. In developing and supporting evidence-to-policy interventions, due consideration should be given to fit-for-purpose approaches, as different needs in policymaking cycles require adapted processes and knowledge. Greater consideration should be provided to approaches embedding the use of research in real-world policymaking, better suited to the complex adaptive nature of health systems.
Whole Brain Networks for Treatment for Epilepsy
2012-07-01
target. For the numerical approach, we applied both a simultaneous over-relaxation method (e.g., Gauss-Seidel) and a biconjugate gradient...with b=1000 s/mm2 and 7 b=0 acquisitions) was acquired on a Siemens TIM Trio (Siemens Medical Solutions, Erlangen) followed by iterative motion...partly from the difficulty with which the Monte Carlo approach is able to determine track densities at regions distal to the seed. Much iteration is
Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source
NASA Astrophysics Data System (ADS)
Mason, Jonathan H.; Perelli, Alessandro; Nailon, William H.; Davies, Mike E.
2017-11-01
Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which either require detailed prior segmentation or impose restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data-centric approach to fitting the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows higher accuracy in attenuation modelling, demonstrate its superior quantitative imaging with numerical chest and metal implant data, and validate it with real cone-beam CT measurements.
An optimization approach for fitting canonical tensor decompositions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson
Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
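For reference, the ALS baseline that the gradient-based methods are compared against can be sketched compactly in Python; this is a minimal illustration under numpy C-ordering conventions, not the authors' implementation:

```python
# Minimal ALS for a rank-R CP decomposition of a 3-way tensor.
import numpy as np

def khatri_rao(B, C):
    J, R = B.shape
    K, _ = C.shape
    return np.einsum('jr,kr->jkr', B, C).reshape(J * K, R)

def cp_als(T, R, iters=100, seed=0):
    I, J, K = T.shape
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in (I, J, K))
    T0 = T.reshape(I, J * K)                    # mode-0 unfolding
    T1 = np.moveaxis(T, 1, 0).reshape(J, I * K) # mode-1 unfolding
    T2 = np.moveaxis(T, 2, 0).reshape(K, I * J) # mode-2 unfolding
    for _ in range(iters):
        A = T0 @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = T1 @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = T2 @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

rng = np.random.default_rng(10)
T = np.einsum('ir,jr,kr->ijk', rng.random((4, 3)), rng.random((5, 3)), rng.random((6, 3)))
A, B, C = cp_als(T, R=3)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)))
```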
Towards inverse modeling of turbidity currents: The inverse lock-exchange problem
NASA Astrophysics Data System (ADS)
Lesshafft, Lutz; Meiburg, Eckart; Kneller, Ben; Marsden, Alison
2011-04-01
A new approach is introduced for turbidite modeling, leveraging the potential of computational fluid dynamics methods to simulate the flow processes that led to turbidite formation. The practical use of numerical flow simulation for the purpose of turbidite modeling so far is hindered by the need to specify parameters and initial flow conditions that are a priori unknown. The present study proposes a method to determine optimal simulation parameters via an automated optimization process. An iterative procedure matches deposit predictions from successive flow simulations against available localized reference data, as in practice may be obtained from well logs, and aims at convergence towards the best-fit scenario. The final result is a prediction of the entire deposit thickness and local grain size distribution. The optimization strategy is based on a derivative-free, surrogate-based technique. Direct numerical simulations are performed to compute the flow dynamics. A proof of concept is successfully conducted for the simple test case of a two-dimensional lock-exchange turbidity current. The optimization approach is demonstrated to accurately retrieve the initial conditions used in a reference calculation.
NASA Astrophysics Data System (ADS)
Aviat, Félix; Lagardère, Louis; Piquemal, Jean-Philip
2017-10-01
In a recent paper [F. Aviat et al., J. Chem. Theory Comput. 13, 180-190 (2017)], we proposed the Truncated Conjugate Gradient (TCG) approach to compute the polarization energy and forces in polarizable molecular simulations. The method consists in truncating the conjugate gradient algorithm at a fixed predetermined order leading to a fixed computational cost and can thus be considered "non-iterative." This gives the possibility to derive analytical forces avoiding the usual energy conservation (i.e., drifts) issues occurring with iterative approaches. A key point concerns the evaluation of the analytical gradients, which is more complex than that with a usual solver. In this paper, after reviewing the present state of the art of polarization solvers, we detail a viable strategy for the efficient implementation of the TCG calculation. The complete cost of the approach is then measured as it is tested using a multi-time step scheme and compared to timings using usual iterative approaches. We show that the TCG methods are more efficient than traditional techniques, making it a method of choice for future long molecular dynamics simulations using polarizable force fields where energy conservation matters. We detail the various steps required for the implementation of the complete method by software developers.
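The core idea, running conjugate gradient for a fixed, predetermined number of steps, can be sketched in Python; the symmetric positive definite matrix below is a generic stand-in, not an actual polarization matrix or the production code:

```python
# Sketch: conjugate gradient truncated at a fixed order k (e.g., TCG-3),
# giving a fixed, "non-iterative" computational cost.
import numpy as np

def truncated_cg(T, E, k):
    mu = np.zeros_like(E)
    r = E - T @ mu
    p = r.copy()
    for _ in range(k):                 # fixed, predetermined order
        Tp = T @ p
        alpha = (r @ r) / (p @ Tp)
        mu += alpha * p
        r_new = r - alpha * Tp
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
    return mu

rng = np.random.default_rng(5)
M = rng.standard_normal((50, 50))
T = M @ M.T + 50 * np.eye(50)          # well-conditioned SPD stand-in
E = rng.standard_normal(50)
mu_tcg = truncated_cg(T, E, k=3)
```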
Schaarup, Clara; Hejlesen, Ole Kristian
2016-01-01
Objective. The aim of the present study is to evaluate the usability of the telehealth system, coined Telekit, by using an iterative, mixed usability approach. Materials and Methods. Ten double experts participated in two heuristic evaluations (HE1, HE2), and 11 COPD patients attended two think-aloud tests. The double experts identified usability violations and classified them into Jakob Nielsen's heuristics. These violations were then translated into measurable values on a scale of 0 to 4 indicating degree of severity. In the think-aloud tests, COPD participants were invited to verbalise their thoughts. Results. The double experts identified 86 usability violations in HE1 and 101 usability violations in HE2. The majority of the violations were rated in the 0–2 range. The findings from the think-aloud tests resulted in 12 themes and associated examples regarding the usability of the Telekit system. The use of the iterative, mixed usability approach produced both quantitative and qualitative results. Conclusion. The iterative, mixed usability approach yields a strong result owing to the high number of problems identified in the tests because the double experts and the COPD participants focus on different aspects of Telekit's usability. This trial is registered with Clinicaltrials.gov, NCT01984840, November 14, 2013. PMID:27974888
Lilholt, Pernille Heyckendorff; Schaarup, Clara; Hejlesen, Ole Kristian
2016-01-01
Objective . The aim of the present study is to evaluate the usability of the telehealth system, coined Telekit, by using an iterative, mixed usability approach. Materials and Methods . Ten double experts participated in two heuristic evaluations (HE1, HE2), and 11 COPD patients attended two think-aloud tests. The double experts identified usability violations and classified them into Jakob Nielsen's heuristics. These violations were then translated into measurable values on a scale of 0 to 4 indicating degree of severity. In the think-aloud tests, COPD participants were invited to verbalise their thoughts. Results . The double experts identified 86 usability violations in HE1 and 101 usability violations in HE2. The majority of the violations were rated in the 0-2 range. The findings from the think-aloud tests resulted in 12 themes and associated examples regarding the usability of the Telekit system. The use of the iterative, mixed usability approach produced both quantitative and qualitative results. Conclusion . The iterative, mixed usability approach yields a strong result owing to the high number of problems identified in the tests because the double experts and the COPD participants focus on different aspects of Telekit's usability. This trial is registered with Clinicaltrials.gov, NCT01984840, November 14, 2013.
Marsh canopy leaf area and orientation calculated for improved marsh structure mapping
Ramsey, Elijah W.; Rangoonwala, Amina; Jones, Cathleen E.; Bannister, Terri
2015-01-01
An approach is presented for producing spatiotemporal estimates of the leaf area index (LAI) of a highly heterogeneous coastal marsh without reliance on user estimates of marsh leaf-stem orientation. The canopy LAI profile derivation used three years of field-measured photosynthetically active radiation (PAR) vertical profiles at seven S. alterniflora marsh sites and an iterative transform of those PAR attenuation profiles to best-fit light extinction coefficients (KM). The sun-zenith dependency of KM was removed to obtain the leaf angle distribution (LAD) representing the average marsh orientation, and the LAD was used to calculate the LAI canopy profile. LAI and LAD reproduced measured PAR profiles with 99% accuracy and corresponded to field-documented structures. LAI and LAD better reflect marsh structure, and the results substantiate the need to account for marsh orientation. The structure indexes are directly amenable to spatiotemporal mapping by remote sensing and offer a more meaningful representation of wetland systems, promoting understanding of biophysical function.
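The profile-to-extinction step can be illustrated under a Beer-Lambert assumption, PAR(L) = PAR0 * exp(-K L) with L the cumulative LAI from the canopy top. The Python sketch below recovers a best-fit K by log-linear regression from invented data; the paper's iterative transform and zenith correction are not reproduced:

```python
# Sketch: fit the extinction coefficient K from a measured PAR profile
# assuming Beer-Lambert attenuation through the canopy.
import numpy as np

L = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])   # cumulative LAI (invented)
rng = np.random.default_rng(6)
par = 1500.0 * np.exp(-0.6 * L) * (1 + 0.02 * rng.standard_normal(L.size))

slope, intercept = np.polyfit(L, np.log(par), 1)
K_fit = -slope                                  # best-fit extinction coefficient
print(K_fit)
```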
A comparative study of minimum norm inverse methods for MEG imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leahy, R.M.; Mosher, J.C.; Phillips, J.W.
1996-07-01
The majority of MEG imaging techniques currently in use fall into the general class of (weighted) minimum norm methods. The minimization of a norm is used as the basis for choosing one from a generally infinite set of solutions that provide an equally good fit to the data. This ambiguity in the solution arises from the inherent non-uniqueness of the continuous inverse problem and is compounded by the imbalance between the relatively small number of measurements and the large number of source voxels. Here we present a unified view of the minimum norm methods and describe how we can use Tikhonov regularization to avoid instabilities in the solutions due to noise. We then compare the performance of regularized versions of three well known linear minimum norm methods with the non-linear iteratively reweighted minimum norm method and a Bayesian approach.
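As an illustration of the regularization step, the following Python sketch computes a Tikhonov-regularized minimum-norm estimate for an underdetermined system b = A s (many source voxels, few sensors); the dimensions, matrix, and regularization parameter are arbitrary stand-ins, not MEG lead fields:

```python
# Sketch: s = A^T (A A^T + lam I)^{-1} b, the regularized minimum-norm
# solution of an underdetermined linear inverse problem.
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((64, 5000))   # sensors x source voxels (stand-in)
b = rng.standard_normal(64)           # measurements (stand-in)
lam = 1e-2                            # Tikhonov regularization parameter

s = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(64), b)
```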
A mixability theory for the role of sex in evolution
Livnat, Adi; Papadimitriou, Christos; Dushoff, Jonathan; Feldman, Marcus W.
2008-01-01
The question of what role sex plays in evolution is still open despite decades of research. It has often been assumed that sex should facilitate the increase in fitness. Hence, the fact that it may break down highly favorable genetic combinations has been seen as a problem. Here, we consider an alternative approach. We define a measure that represents the ability of alleles to perform well across different combinations and, using numerical iterations within a classical population-genetic framework, show that selection in the presence of sex favors this ability in a highly robust manner. We also show that the mechanism responsible for this effect has been out of the purview of previous theory, because it operates during the evolutionary transient, and that the breaking down of favorable genetic combinations is an integral part of it. Implications of these results and more to evolutionary theory are discussed. PMID:19073912
A mixability theory for the role of sex in evolution.
Livnat, Adi; Papadimitriou, Christos; Dushoff, Jonathan; Feldman, Marcus W
2008-12-16
The question of what role sex plays in evolution is still open despite decades of research. It has often been assumed that sex should facilitate the increase in fitness. Hence, the fact that it may break down highly favorable genetic combinations has been seen as a problem. Here, we consider an alternative approach. We define a measure that represents the ability of alleles to perform well across different combinations and, using numerical iterations within a classical population-genetic framework, show that selection in the presence of sex favors this ability in a highly robust manner. We also show that the mechanism responsible for this effect has been out of the purview of previous theory, because it operates during the evolutionary transient, and that the breaking down of favorable genetic combinations is an integral part of it. Implications of these results and more to evolutionary theory are discussed.
Accounting for measurement error in log regression models with applications to accelerated testing.
Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M
2018-01-01
In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
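The paper estimates its weighted approximation with Iteratively Re-weighted Least Squares. As a generic illustration of the IRLS mechanics only, the Python sketch below uses Huber-type weights; the paper's actual weight formula follows its specific measurement-error variance model and is not reproduced here:

```python
# Generic IRLS skeleton: alternate a weighted least-squares solve with a
# re-computation of the weights from the current residuals.
import numpy as np

def irls(X, y, iters=20):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]          # OLS start
    for _ in range(iters):
        resid = y - X @ beta
        scale = np.median(np.abs(resid)) / 0.6745 + 1e-12  # robust scale
        w = np.minimum(1.0, 1.345 * scale / np.maximum(np.abs(resid), 1e-12))
        W = w[:, None]
        beta = np.linalg.solve(X.T @ (W * X), X.T @ (w * y))
    return beta

rng = np.random.default_rng(8)
X = np.column_stack([np.ones(100), rng.uniform(0, 1, 100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)
print(irls(X, y))
```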
The Iterative Research Cycle: Process-Based Model Evaluation
NASA Astrophysics Data System (ADS)
Vrugt, J. A.
2014-12-01
The ever-increasing pace of computational power, along with continued advances in measurement technologies and improvements in process understanding, has stimulated the development of increasingly complex physics-based models that simulate a myriad of processes at different spatial and temporal scales. Reconciling these high-order system models with perpetually larger volumes of field data is becoming more and more difficult, particularly because classical likelihood-based fitting methods lack the power to detect and pinpoint deficiencies in the model structure. In this talk I will give an overview of our latest research on process-based model calibration and evaluation. This approach, rooted in Bayesian theory, uses summary metrics of the calibration data rather than the data itself to help detect which component(s) of the model is (are) malfunctioning and in need of improvement. A few case studies involving hydrologic and geophysical models will be used to demonstrate the proposed methodology.
NASA Astrophysics Data System (ADS)
Ahunov, Roman R.; Kuksenko, Sergey P.; Gazizov, Talgat R.
2016-06-01
A multiple solution of linear algebraic systems with dense matrices by iterative methods is considered. To accelerate the process, recomputing of the preconditioning matrix is used. An a priori condition for the recomputing, based on the change of the arithmetic mean of the solution times during the multiple solution, is proposed. To confirm the effectiveness of the proposed approach, numerical experiments using the iterative methods BiCGStab and CGS for four different sets of matrices on two examples of microstrip structures are carried out. For the solution of 100 linear systems, an acceleration of up to 1.6 times compared to the approach without recomputing is obtained.
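A hedged Python sketch of the recomputing rule: solve a sequence of systems with BiCGStab plus an ILU preconditioner, and rebuild the preconditioner whenever the latest solve took longer than the running mean solve time. The matrices and the exact trigger are stand-ins for the paper's condition:

```python
# Sketch: timing-driven recomputation of the preconditioner across a
# sequence of related linear systems.
import time
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, spilu, LinearOperator

def solve_sequence(matrices, rhs_list):
    times, M = [], None
    for A, b in zip(matrices, rhs_list):
        if M is None or (times and times[-1] > np.mean(times)):
            ilu = spilu(A.tocsc())                 # (re)compute preconditioner
            M = LinearOperator(A.shape, ilu.solve)
        t0 = time.perf_counter()
        x, info = bicgstab(A, b, M=M)              # info ignored in this sketch
        times.append(time.perf_counter() - t0)
    return times

mats = [sp.identity(300, format="csc") * 4
        + sp.random(300, 300, density=0.005, format="csc", random_state=i)
        for i in range(10)]
rhs = [np.ones(300)] * 10
print(solve_sequence(mats, rhs))
```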
NASA Astrophysics Data System (ADS)
Hladowski, Lukasz; Galkowski, Krzysztof; Cai, Zhonglun; Rogers, Eric; Freeman, Chris T.; Lewin, Paul L.
2011-07-01
In this article a new approach to iterative learning control for the practically relevant case of deterministic discrete linear plants with uniform rank greater than unity is developed. The analysis is undertaken in a 2D systems setting that, by using a strong form of stability for linear repetitive processes, allows simultaneous consideration of both trial-to-trial error convergence and along the trial performance, resulting in design algorithms that can be computed using linear matrix inequalities (LMIs). Finally, the control laws are experimentally verified on a gantry robot that replicates a pick and place operation commonly found in a number of applications to which iterative learning control is applicable.
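The LMI-based design itself is beyond a short sketch, but the trial-to-trial mechanics of iterative learning control can be illustrated with the simplest P-type update, u_{k+1}(t) = u_k(t) + L e_k(t+1), on a toy first-order plant; the plant, gain, and reference below are assumptions standing in for the article's gantry robot and control law:

```python
# Sketch: P-type ILC on x(t+1) = a_p x(t) + b_p u(t), y = x; the update
# |1 - L*b_p| < 1 guarantees trial-to-trial error convergence here.
import numpy as np

a_p, b_p, L = 0.9, 0.5, 1.0
T = 50
ref = np.sin(np.linspace(0, 2 * np.pi, T + 1))

u = np.zeros(T)
for trial in range(30):
    x = 0.0
    y = np.zeros(T + 1)
    for t in range(T):
        x = a_p * x + b_p * u[t]
        y[t + 1] = x
    e = ref - y
    u += L * e[1:]                 # learning update between trials
print("final trial error:", np.max(np.abs(e)))
```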
Fine-resolution conservation planning with limited climate-change information.
Shah, Payal; Mallory, Mindy L; Ando, Amy W; Guntenspergen, Glenn R
2017-04-01
Climate-change induced uncertainties in future spatial patterns of conservation-related outcomes make it difficult to implement standard conservation-planning paradigms. A recent study translates Markowitz's risk-diversification strategy from finance to conservation settings, enabling conservation agents to use this diversification strategy for allocating conservation and restoration investments across space to minimize the risk associated with such uncertainty. However, this method is information intensive and requires a large number of forecasts of ecological outcomes associated with possible climate-change scenarios for carrying out fine-resolution conservation planning. We developed a technique for iterative, spatial portfolio analysis that can be used to allocate scarce conservation resources across a desired level of subregions in a planning landscape in the absence of a sufficient number of ecological forecasts. We applied our technique to the Prairie Pothole Region in central North America. A lack of sufficient future climate information prevented attainment of the most efficient risk-return conservation outcomes in the Prairie Pothole Region. The difference in expected conservation returns between conservation planning with limited climate-change information and full climate-change information was as large as 30% for the Prairie Pothole Region even when the most efficient iterative approach was used. However, our iterative approach allowed finer resolution portfolio allocation with limited climate-change forecasts such that the best possible risk-return combinations were obtained. With our most efficient iterative approach, the expected loss in conservation outcomes owing to limited climate-change information could be reduced by 17% relative to other iterative approaches. © 2016 Society for Conservation Biology.
Glaholt, Stephen P; Chen, Celia Y; Demidenko, Eugene; Bugge, Deenie M; Folt, Carol L; Shaw, Joseph R
2012-08-15
The study of stressor interactions by eco-toxicologists using nonlinear response variables is limited by the required amount of a priori knowledge, the complexity of experimental designs, the use of linear models, and the lack of use of optimal designs of nonlinear models to characterize complex interactions. Therefore, we developed AID, an adaptive-iterative design for eco-toxicologists to examine complex multiple stressor interactions more accurately and efficiently. AID incorporates the power of the general linear model and the A-optimality criterion with an iterative process that: 1) minimizes the required amount of a priori knowledge, 2) simplifies the experimental design, and 3) quantifies both individual and interactive effects. Once a stable model is determined, the best-fit model is identified and the direction and magnitude of stressors, individually and in all combinations (including complex interactions), are quantified. To validate AID, we selected five commonly co-occurring components of polluted aquatic systems, three metal stressors (Cd, Zn, As) and two water chemistry parameters (pH, hardness), to be tested using standard acute toxicity tests in which Daphnia mortality is the (nonlinear) response variable. We found that after the initial input of experimental data (literature values, e.g., EC-values, may also be used) and after only two iterations of AID, our dose-response model was stable. The model ln(Cd)*ln(Zn) was determined to be the best predictor of the Daphnia mortality response to the combined effects of Cd, Zn, As, pH, and hardness. This model was then used to accurately identify and quantify the strength of both greater-than-additive (e.g., As*Cd) and less-than-additive (e.g., Cd*Zn) interactions. Interestingly, our study found only binary interactions to be significant, not higher order interactions. We conclude that AID is more efficient and effective at assessing multiple stressor interactions than current methods. Other applications, including life-history endpoints commonly used by regulators, could benefit from AID's efficiency in assessing water quality criteria. Copyright © 2012 Elsevier B.V. All rights reserved.
On the shape of martian dust and water ice aerosols
NASA Astrophysics Data System (ADS)
Pitman, K. M.; Wolff, M. J.; Clancy, R. T.; Clayton, G. C.
2000-10-01
Researchers have often calculated radiative properties of Martian aerosols using either Mie theory for homogeneous spheres or semi-empirical theories. Given that these atmospheric particles are randomly oriented, this approach seems fairly reasonable. However, the idea that randomly oriented nonspherical particles have scattering properties equivalent to even a select subset of spheres is demonstrably false (Bohren and Huffman 1983; Bohren and Koh 1985, Appl. Optics, 24, 1023). Fortunately, recent computational developments now enable us to directly compute scattering properties for nonspherical particles. We have combined a numerical approach for axisymmetric particle shapes, i.e., cylinders, disks, spheroids (Waterman's T-Matrix approach as improved by Mishchenko and collaborators; cf. Mishchenko et al. 1997, JGR, 102, D14, 16,831), with a multiple-scattering radiative transfer algorithm to constrain the shape of water ice and dust aerosols. We utilize a two-stage iterative process. First, we empirically derive a scattering phase function for each aerosol component (starting with some "guess") from radiative transfer models of MGS Thermal Emission Spectrometer Emission Phase Function (EPF) sequences (for details on this step, see Clancy et al., DPS 2000). Next, we perform a series of scattering calculations, adjusting our parameters to arrive at a "best-fit" theoretical phase function. In this presentation, we provide details on the second step in our analysis, including the derived phase functions (for several characteristic EPF sequences) as well as the particle properties of the best-fit theoretical models. We provide a sensitivity analysis for the EPF model-data comparisons in terms of perturbations in the particle properties (i.e., range of axial ratios, sizes, refractive indices, etc.). This work is supported through NASA grant NAGS-9820 (MJW) and JPL contract no. 961471 (RTC).
Keypress-Based Musical Preference Is Both Individual and Lawful.
Livengood, Sherri L; Sheppard, John P; Kim, Byoung W; Malthouse, Edward C; Bourne, Janet E; Barlow, Anne E; Lee, Myung J; Marin, Veronica; O'Connor, Kailyn P; Csernansky, John G; Block, Martin P; Blood, Anne J; Breiter, Hans C
2017-01-01
Musical preference is highly individualized and is an area of active study to develop methods for its quantification. Recently, preference-based behavior, associated with activity in brain reward circuitry, has been shown to follow lawful, quantifiable patterns, despite broad variation across individuals. These patterns, observed using a keypress paradigm with visual stimuli, form the basis for relative preference theory (RPT). Here, we sought to determine if such patterns extend to non-visual domains (i.e., audition) and dynamic stimuli, potentially providing a method to supplement psychometric, physiological, and neuroimaging approaches to preference quantification. For this study, we adapted our keypress paradigm to two sets of stimuli consisting of seventeenth to twenty-first century western art music (Classical) and twentieth to twenty-first century jazz and popular music (Popular). We studied a pilot sample and then a separate primary experimental sample with this paradigm, and used iterative mathematical modeling to determine if RPT relationships were observed with high R2 fits. We further assessed the extent of heterogeneity in the rank ordering of keypress-based responses across subjects. As expected, individual rank orderings of preferences were quite heterogeneous, yet we observed mathematical patterns fitting these data similar to those observed previously with visual stimuli. These patterns in music preference were recurrent across two cohorts and two stimulus sets, and scaled between individual and group data, adhering to the requirements for lawfulness. Our findings suggest a general neuroscience framework that predicts human approach/avoidance behavior, while also allowing for individual differences and the broad diversity of human choices; the resulting framework may offer novel approaches to advancing music neuroscience, or its applications to medicine and recommendation systems.
Keypress-Based Musical Preference Is Both Individual and Lawful
Livengood, Sherri L.; Sheppard, John P.; Kim, Byoung W.; Malthouse, Edward C.; Bourne, Janet E.; Barlow, Anne E.; Lee, Myung J.; Marin, Veronica; O'Connor, Kailyn P.; Csernansky, John G.; Block, Martin P.; Blood, Anne J.; Breiter, Hans C.
2017-01-01
Musical preference is highly individualized and is an area of active study to develop methods for its quantification. Recently, preference-based behavior, associated with activity in brain reward circuitry, has been shown to follow lawful, quantifiable patterns, despite broad variation across individuals. These patterns, observed using a keypress paradigm with visual stimuli, form the basis for relative preference theory (RPT). Here, we sought to determine if such patterns extend to non-visual domains (i.e., audition) and dynamic stimuli, potentially providing a method to supplement psychometric, physiological, and neuroimaging approaches to preference quantification. For this study, we adapted our keypress paradigm to two sets of stimuli consisting of seventeenth to twenty-first century western art music (Classical) and twentieth to twenty-first century jazz and popular music (Popular). We studied a pilot sample and then a separate primary experimental sample with this paradigm, and used iterative mathematical modeling to determine if RPT relationships were observed with high R2 fits. We further assessed the extent of heterogeneity in the rank ordering of keypress-based responses across subjects. As expected, individual rank orderings of preferences were quite heterogeneous, yet we observed mathematical patterns fitting these data similar to those observed previously with visual stimuli. These patterns in music preference were recurrent across two cohorts and two stimulus sets, and scaled between individual and group data, adhering to the requirements for lawfulness. Our findings suggest a general neuroscience framework that predicts human approach/avoidance behavior, while also allowing for individual differences and the broad diversity of human choices; the resulting framework may offer novel approaches to advancing music neuroscience, or its applications to medicine and recommendation systems. PMID:28512395
3D reconstruction of the magnetic vector potential using model based iterative reconstruction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, K. C.; Aditya Mohan, K.; Phatak, Charudatta
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions, and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability (MAP) estimation problem. The MAP cost function is minimized iteratively to determine the vector potential. Here, a comparative reconstruction study of simulated as well as experimental data sets shows that the MBIR approach yields quantifiably better reconstructions than the VFET approach.
3D reconstruction of the magnetic vector potential using model based iterative reconstruction.
Prabhat, K C; Aditya Mohan, K; Phatak, Charudatta; Bouman, Charles; De Graef, Marc
2017-11-01
Lorentz transmission electron microscopy (TEM) observations of magnetic nanoparticles contain information on the magnetic and electrostatic potentials. Vector field electron tomography (VFET) can be used to reconstruct electromagnetic potentials of the nanoparticles from their corresponding LTEM images. The VFET approach is based on the conventional filtered back projection approach to tomographic reconstructions and the availability of an incomplete set of measurements due to experimental limitations means that the reconstructed vector fields exhibit significant artifacts. In this paper, we outline a model-based iterative reconstruction (MBIR) algorithm to reconstruct the magnetic vector potential of magnetic nanoparticles. We combine a forward model for image formation in TEM experiments with a prior model to formulate the tomographic problem as a maximum a-posteriori probability estimation problem (MAP). The MAP cost function is minimized iteratively to determine the vector potential. A comparative reconstruction study of simulated as well as experimental data sets show that the MBIR approach yields quantifiably better reconstructions than the VFET approach. Copyright © 2017 Elsevier B.V. All rights reserved.
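A toy Python sketch of the MBIR ingredients, a quadratic data-fidelity term plus a smoothness prior minimized iteratively; the forward projector, prior, and dimensions below are generic stand-ins, not the Lorentz TEM forward model:

```python
# Sketch: gradient descent on the MAP cost ||P x - b||^2 + lam * ||D x||^2.
import numpy as np

rng = np.random.default_rng(9)
n = 100
P = rng.standard_normal((60, n))               # stand-in forward projector
x_true = np.cumsum(rng.standard_normal(n)) * 0.05
b = P @ x_true + 0.01 * rng.standard_normal(60)

D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]       # finite-difference prior
lam = 0.1
H = P.T @ P + lam * D.T @ D
step = 1.0 / np.linalg.norm(H, 2)              # safe step from spectral norm

x = np.zeros(n)
for _ in range(500):                           # iterative MAP minimization
    grad = P.T @ (P @ x - b) + lam * D.T @ (D @ x)
    x -= step * grad
print(np.linalg.norm(x - x_true))
```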
Simulation and Analysis of Launch Teams (SALT)
NASA Technical Reports Server (NTRS)
2008-01-01
A SALT effort was initiated in late 2005 with seed funding from the Office of Safety and Mission Assurance Human Factors organization. Its objectives included demonstrating human behavior and performance modeling and simulation technologies for launch team analysis, training, and evaluation. The goal of the research is to improve future NASA operations and training. The project employed an iterative approach, with the first iteration focusing on the last 70 minutes of a nominal-case Space Shuttle countdown, the second iteration focusing on aborts and launch commit criteria violations, the third iteration focusing on Ares I-X communications, and the fourth iteration focusing on Ares I-X Firing Room configurations. SALT applied new commercial off-the-shelf technologies from industry and the Department of Defense in the spaceport domain.
NASA Astrophysics Data System (ADS)
Chakravarthi, V.; Sastry, S. Rajeswara; Ramamma, B.
2013-07-01
Based on the principles of modeling and inversion, two interpretation methods are developed in the space domain, along with a GUI-based Java code, MODTOHAFSD, to analyze the gravity anomalies of strike-limited sedimentary basins using a prescribed exponential density contrast-depth function. A stack of vertical prisms, all having equal widths but each possessing its own limited strike length and thickness, describes the structure of a sedimentary basin above the basement complex. The thicknesses of the prisms represent the depths to the basement and are the unknown parameters to be estimated from the observed gravity anomalies. Forward modeling is realized in the space domain using a combination of analytical and numerical approaches. The algorithm estimates the initial depths of a sedimentary basin and improves them, iteratively, based on the differences between the observed and modeled gravity anomalies, within the specified convergence criteria. The code, which follows the Model-View-Controller (MVC) pattern, reads the Bouguer gravity anomalies, constructs/modifies the regional gravity background in an interactive approach, estimates residual gravity anomalies, and performs automatic modeling or inversion based on user specification for the basement topography. Besides generating output in both ASCII and graphical forms, the code displays, in animated form, (i) the changes in the depth structure, (ii) the nature of the fit between the observed and modeled gravity anomalies, (iii) the changes in misfit, and (iv) the variation of density contrast with iteration. The code is used to analyze both synthetic and real field gravity anomalies. The proposed technique yielded information consistent with the assumed parameters in the case of the synthetic structure and with available drilling depths in the case of the field example. The advantage of the code is that it can be used to analyze the gravity anomalies of sedimentary basins even when the profile along which the interpretation is intended fails to bisect the strike length.
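For the flavor of such depth iterations, the following Python sketch uses Bott's classical scheme with the infinite-slab formula g = 2*pi*G*drho*h; the article's code instead models prisms with an exponential density contrast-depth function, so the constant contrast, the invented anomaly values, and the smoothing kernel standing in for neighboring-prism coupling are all assumptions:

```python
# Sketch: Bott-style iterative depth correction of a basin model from
# residual gravity anomalies observed at a line of stations.
import numpy as np

G = 6.674e-11                   # m^3 kg^-1 s^-2
drho = -400.0                   # kg/m^3, assumed constant density contrast
C = 2 * np.pi * G * drho

g_obs = np.array([-8.0, -12.0, -15.0, -11.0, -6.0]) * 1e-5  # mGal to m/s^2
h = g_obs / C                   # slab formula gives the initial depths

kernel = np.array([0.1, 0.8, 0.1])   # stand-in for neighboring-prism coupling
for _ in range(20):
    g_mod = C * np.convolve(h, kernel, mode="same")   # modeled anomaly
    h += (g_obs - g_mod) / C                          # Bott-style correction
print(h)
```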
Decentralized control of sound radiation using iterative loop recovery.
Schiller, Noah H; Cabell, Randolph H; Fuller, Chris R
2010-10-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
Decentralized Control of Sound Radiation Using Iterative Loop Recovery
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.
2009-01-01
A decentralized model-based control strategy is designed to reduce low-frequency sound radiation from periodically stiffened panels. While decentralized control systems tend to be scalable, performance can be limited due to modeling error introduced by the unmodeled interaction between neighboring control units. Since bounds on modeling error are not known in advance, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is evaluated numerically using a model of a stiffened aluminum panel that is representative of the sidewall of an aircraft. Simulations demonstrate that the iterative approach can achieve significant reductions in radiated sound power from the stiffened panel without destabilizing neighboring control units.
On iterative processes in the Krylov-Sonneveld subspaces
NASA Astrophysics Data System (ADS)
Ilin, Valery P.
2016-10-01
The iterative Induced Dimension Reduction (IDR) methods are considered for solving large systems of linear algebraic equations (SLAEs) with nonsingular nonsymmetric matrices. These approaches have been investigated by many authors and are sometimes characterized as an alternative to the classical processes of Krylov type. The key ingredients of the IDR algorithms are the construction of the nested Sonneveld subspaces, which have decreasing dimensions, and orthogonalization against some fixed subspace. Other independent approaches to the analysis and optimization of the iterations are based on augmented and modified Krylov subspaces, using aggregation and deflation procedures with various low-rank approximations of the original matrices. The goal of this paper is to show that the IDR methods in Sonneveld subspaces admit an interpretation as modified algorithms in Krylov subspaces. In particular, such a description is given for the multi-preconditioned semi-conjugate direction methods, which are relevant for parallel algebraic domain decomposition approaches.
Helping System Engineers Bridge the Peaks
NASA Technical Reports Server (NTRS)
Rungta, Neha; Tkachuk, Oksana; Person, Suzette; Biatek, Jason; Whalen, Michael W.; Castle, Joseph; Gundy-Burlet, Karen
2014-01-01
In our experience at NASA, system engineers generally follow the Twin Peaks approach when developing safety-critical systems. However, iterations between the peaks require considerable manual, and in some cases duplicate, effort. A significant part of the manual effort stems from the fact that requirements are written in English natural language rather than a formal notation. In this work, we propose an approach that enables system engineers to leverage formal requirements and automated test generation to streamline iterations, effectively "bridging the peaks". The key to the approach is a formal language notation that a) system engineers are comfortable with, b) is supported by a family of automated V&V tools, and c) is semantically rich enough to describe the requirements of interest. We believe the combination of formalizing requirements and providing tool support to automate the iterations will lead to a more efficient Twin Peaks implementation at NASA.
Meischl, Florian; Kirchler, Christian Günter; Jäger, Michael Andreas; Huck, Christian Wolfgang; Rainer, Matthias
2018-02-01
We present a novel method for the quantitative determination of clean-up efficiency, providing a calculated parameter for peak purity through iterative fitting in conjunction with design of experiments. Rosemary extracts were used and analyzed before and after solid-phase extraction using a self-fabricated mixed-mode sorbent based on poly(N-vinylimidazole/ethylene glycol dimethacrylate). Optimization was performed by variation of the washing steps using a full three-level factorial design and response surface methodology. The separation efficiency of rosmarinic acid from interfering compounds was calculated using an iterative fit of Gaussian-like signals, and quantifications were performed by the separate integration of the two interfering peak areas. Results and recoveries were analyzed using Design-Expert® software and revealed significant differences between the washing steps. Optimized parameters were considered and used for all further experiments. Furthermore, the solid-phase extraction procedure was tested and compared with commercially available sorbents. In contrast to the generic protocols of the manufacturers, the optimized procedure showed excellent recoveries and clean-up rates for the polymer with ion exchange properties. Finally, rosemary extracts from different manufacturing areas and application types were studied to verify the applicability of the developed method. The cleaned-up extracts were analyzed by liquid chromatography with tandem mass spectrometry for detailed compound evaluation to exclude any interference from coeluting molecules. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
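The iterative fit of Gaussian-like signals with separate integration of the overlapping peak areas can be sketched as follows; the model, starting values, and data are illustrative, not those of the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, mu1, s1, a2, mu2, s2):
    # sum of two Gaussian-like chromatographic signals
    return (a1 * np.exp(-0.5 * ((t - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((t - mu2) / s2) ** 2))

# simulated chromatogram: analyte peak overlapping an interferent
t = np.linspace(0, 10, 500)
y = two_gaussians(t, 1.0, 4.0, 0.4, 0.6, 5.0, 0.5)
y += np.random.default_rng(0).normal(0, 0.01, t.size)

popt, _ = curve_fit(two_gaussians, t, y, p0=[1, 4, 0.5, 0.5, 5, 0.5])

# integrate each fitted component separately: area = a * s * sqrt(2*pi)
area1 = popt[0] * abs(popt[2]) * np.sqrt(2 * np.pi)
area2 = popt[3] * abs(popt[5]) * np.sqrt(2 * np.pi)
print("clean-up / purity ratio:", area1 / (area1 + area2))
```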
Bootstrap evaluation of a young Douglas-fir height growth model for the Pacific Northwest
Nicholas R. Vaughn; Eric C. Turnblom; Martin W. Ritchie
2010-01-01
We evaluated the stability of a complex regression model developed to predict the annual height growth of young Douglas-fir. This model is highly nonlinear and is fit iteratively for annual growth coefficients from data with multiple periodic remeasurement intervals. Traditional methods for such a sensitivity analysis either involve laborious math or...
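A minimal sketch of the nonparametric bootstrap used for such a stability analysis, with an ordinary quadratic fit standing in for the much more complex height growth model (all data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_model(x, y):
    # stand-in for the iterative height-growth fit: an ordinary
    # quadratic least-squares fit of annual growth on age
    return np.polyfit(x, y, 2)

x = rng.uniform(1, 15, 200)                   # tree ages (years)
y = 0.5 + 0.8 * x - 0.03 * x**2 + rng.normal(0, 0.3, x.size)

coefs = []
for _ in range(1000):                         # nonparametric bootstrap
    idx = rng.integers(0, x.size, x.size)     # resample with replacement
    coefs.append(fit_model(x[idx], y[idx]))

coefs = np.asarray(coefs)
print("bootstrap std. errors:", coefs.std(axis=0))
```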
NASA Astrophysics Data System (ADS)
Sudevan, Vipin; Aluri, Pavan K.; Yadav, Sarvesh Kumar; Saha, Rajib; Souradeep, Tarun
2017-06-01
We report an improved technique for diffuse foreground minimization in Cosmic Microwave Background (CMB) maps using a new multiphase iterative harmonic-space internal-linear-combination (HILC) approach. Our method nullifies a foreground leakage that was present in the standard iterative HILC method. In phase 1 of the multiphase technique, we obtain an initial cleaned map using the single-iteration HILC approach over the desired portion of the sky. In phase 2, we obtain a final CMB map using the iterative HILC approach; now, however, to nullify the leakage, during each iteration the regions of the sky that are not being cleaned in the current iteration are replaced by the corresponding cleaned portions of the phase 1 map. We bring all input frequency maps to a common and maximum possible beam and pixel resolution at the beginning of the analysis, which significantly reduces data redundancy, memory usage, and computational cost, and avoids, during the HILC weight calculation, the deconvolution of partial-sky harmonic coefficients by the azimuthally symmetric beam and pixel window functions, which, in a strict mathematical sense, are not well defined. Using WMAP 9 year and Planck 2015 frequency maps, we obtain foreground-cleaned CMB maps and a CMB angular power spectrum for the multipole range 2 ≤ ℓ ≤ 2500. Our power spectrum matches the published Planck results, with some differences in different multipole ranges. We validate our method by performing Monte Carlo simulations. Finally, we show that the weights for HILC foreground minimization have the intrinsic characteristic that they also tend to produce a statistically isotropic CMB map.
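The single-step ILC weight calculation that the multiphase scheme builds on can be sketched in a few lines; this pixel-space toy omits the harmonic-space, iterative, and multiphase machinery of the paper:

```python
import numpy as np

def ilc_weights(maps):
    # weights minimizing the variance of w @ maps subject to sum(w) = 1
    # (unit response to the CMB, which is identical in every channel):
    # w = C^-1 e / (e^T C^-1 e), with C the channel-channel covariance
    C = np.cov(maps)
    Cinv_e = np.linalg.solve(C, np.ones(maps.shape[0]))
    return Cinv_e / Cinv_e.sum()

# toy data: 4 "frequency maps" = one CMB + scaled foreground + noise
rng = np.random.default_rng(1)
cmb = rng.normal(0.0, 1.0, 10000)
fg = np.outer([3.0, 2.0, 1.0, 0.5], rng.normal(0.0, 1.0, 10000))
maps = cmb + fg + rng.normal(0.0, 0.1, (4, 10000))

w = ilc_weights(maps)
print(w.sum(), np.std(w @ maps - cmb))   # weights sum to 1; small residual
```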
NASA Technical Reports Server (NTRS)
Tilton, James C.
1988-01-01
Image segmentation can be a key step in data compression and image analysis. However, the segmentation results produced by most previous approaches to region growing are suspect because they depend on the order in which portions of the image are processed. An iterative parallel segmentation algorithm avoids this problem by performing the globally best merges first. Such a segmentation approach and two implementations of it on NASA's Massively Parallel Processor (MPP) are described. Application of the segmentation approach to data compression and image analysis is then described, and results of such application are given for a LANDSAT Thematic Mapper image.
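A serial, one-dimensional sketch of the "globally best merges first" idea (the actual algorithm is parallel and operates on images; everything below is illustrative):

```python
import numpy as np

def best_merge_segmentation(signal, n_segments):
    # Repeatedly merge the adjacent pair of segments whose means are
    # closest, always taking the globally best merge first, so the result
    # does not depend on the order in which samples are visited.
    segs = [[float(v)] for v in signal]       # one segment per sample
    while len(segs) > n_segments:
        means = [np.mean(s) for s in segs]
        costs = [abs(means[i] - means[i + 1]) for i in range(len(segs) - 1)]
        i = int(np.argmin(costs))             # globally best merge
        segs[i] = segs[i] + segs.pop(i + 1)
    return segs

sig = np.r_[np.ones(5), 3.0 * np.ones(4), 7.0 * np.ones(6)] \
      + np.random.default_rng(2).normal(0, 0.05, 15)
print([len(s) for s in best_merge_segmentation(sig, 3)])   # [5, 4, 6]
```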
NASA Astrophysics Data System (ADS)
Muhiddin, F. A.; Sulaiman, J.
2017-09-01
The aim of this paper is to investigate the effectiveness of the Successive Over-Relaxation (SOR) iterative method, using the fourth-order Crank-Nicolson (CN) discretization scheme to derive a five-point Crank-Nicolson approximation equation for solving the diffusion equation. From this approximation equation, the corresponding system of five-point approximation equations can be generated and then solved iteratively. To assess the performance of the proposed iterative method with the fourth-order CN scheme, another point iterative method, Gauss-Seidel (GS), is also presented as a reference. Finally, the numerical results obtained with the fourth-order CN discretization scheme show that the SOR iterative method is superior in terms of number of iterations, execution time, and maximum absolute error.
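A generic SOR solver illustrating the iteration compared in the paper; the tridiagonal test matrix below stands in for the five-point CN system and is not taken from the paper:

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, max_iter=10000):
    # Successive Over-Relaxation for A x = b; omega = 1 recovers
    # Gauss-Seidel, 1 < omega < 2 over-relaxes and can converge faster.
    x = np.zeros_like(b)
    for k in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, k + 1
    return x, max_iter

# tridiagonal system representative of an implicit diffusion step
n = 100
A = (np.diag(2.5 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
     + np.diag(-np.ones(n - 1), -1))
b = np.ones(n)
for omega in (1.0, 1.25):              # Gauss-Seidel vs near-optimal SOR
    _, iters = sor(A, b, omega)
    print(f"omega={omega}: converged in {iters} iterations")
```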
Experiments and Simulations of ITER-like Plasmas in Alcator C-Mod
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.R. Wilson, C.E. Kessel, S. Wolfe, I.H. Hutchinson, P. Bonoli, C. Fiore, A.E. Hubbard, J. Hughes, Y. Lin, Y. Ma, D. Mikkelsen, M. Reinke, S. Scott, A.C.C. Sips, S. Wukitch and the C-Mod Team
Alcator C-Mod is performing ITER-like experiments to benchmark and verify projections to 15 MA ELMy H-mode inductive ITER discharges. The main focus has been on the transient ramp phases. The plasma current in C-Mod is 1.3 MA and the toroidal field is 5.4 T. Both Ohmic and ion cyclotron (ICRF) heated discharges are examined. Plasma current rampup experiments have demonstrated that heating (ICRF and LH) in the rise phase can save volt-seconds (V-s), as was predicted for ITER by simulations, but showed that the ICRF had no effect on the current profile compared with Ohmic discharges. Rampdown experiments show an overcurrent in the Ohmic coil (OH) at the H to L transition, which can be mitigated by remaining in H-mode into the rampdown. Experiments have shown that when the EDA H-mode is preserved well into the rampdown phase, the density and temperature pedestal heights decrease during the plasma current rampdown. Simulations of the full C-Mod discharges have been done with the Tokamak Simulation Code (TSC), and the Coppi-Tang energy transport model is used with modified settings to provide the best fit to the experimental electron temperature profile. Other transport models have been examined as well.
2014-06-02
2011). [22] Li, Q., Micchelli, C., Shen, L., and Xu, Y. A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models. Inverse...system of equations and their relationship to the solution of Model (2) and present an algorithm with an iterative approach for finding these solutions...Using the fixed-point characterization above, the (k + 1)th iteration of the proximity operator algorithm to find the solution of the Dantzig
Toward Generalization of Iterative Small Molecule Synthesis
Lehmann, Jonathan W.; Blair, Daniel J.; Burke, Martin D.
2018-01-01
Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the “building block approach”, i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach. PMID:29696152
Pinsard, Basile; Boutin, Arnaud; Doyon, Julien; Benali, Habib
2018-01-01
Functional MRI acquisition is sensitive to subject motion, which cannot be fully constrained. Therefore, signal corrections have to be applied a posteriori to mitigate the complex interactions between changing tissue localization and magnetic fields, gradients, and readouts. To circumvent the limitations of current preprocessing strategies, we developed an integrated method that corrects motion and spatial low-frequency intensity fluctuations at the level of each slice in order to better fit the acquisition processes. The registration of single or multiple simultaneously acquired slices is achieved online by an Iterated Extended Kalman Filter, favoring the robust estimation of continuous motion, while an intensity bias field is fitted non-parametrically. The proposed extraction of gray-matter BOLD activity from the acquisition space to an anatomical group template space, taking distortions into account, better preserves fine-scale patterns of activity. Importantly, the proposed unified framework generalizes to high-resolution multi-slice techniques. When tested on simulated and real data, the method shows a reduction of motion-explained variance and signal variability compared to the conventional preprocessing approach. These improvements provide more stable patterns of activity, facilitating investigation of cerebral information representation in healthy and/or clinical populations where motion is known to impact fine-scale data. PMID:29755312
Overview of ASDEX Upgrade results
NASA Astrophysics Data System (ADS)
Zohm, H.; Adamek, J.; Angioni, C.; Antar, G.; Atanasiu, C. V.; Balden, M.; Becker, W.; Behler, K.; Behringer, K.; Bergmann, A.; Bertoncelli, T.; Bilato, R.; Bobkov, V.; Boom, J.; Bottino, A.; Brambilla, M.; Braun, F.; Brüdgam, M.; Buhler, A.; Chankin, A.; Classen, I.; Conway, G. D.; Coster, D. P.; de Marné, P.; D'Inca, R.; Drube, R.; Dux, R.; Eich, T.; Engelhardt, K.; Esposito, B.; Fahrbach, H.-U.; Fattorini, L.; Fink, J.; Fischer, R.; Flaws, A.; Foley, M.; Forest, C.; Fuchs, J. C.; Gál, K.; García Muñoz, M.; Gemisic Adamov, M.; Giannone, L.; Görler, T.; Gori, S.; da Graça, S.; Granucci, G.; Greuner, H.; Gruber, O.; Gude, A.; Günter, S.; Haas, G.; Hahn, D.; Harhausen, J.; Hauff, T.; Heinemann, B.; Herrmann, A.; Hicks, N.; Hobirk, J.; Hölzl, M.; Holtum, D.; Hopf, C.; Horton, L.; Huart, M.; Igochine, V.; Janzer, M.; Jenko, F.; Kallenbach, A.; Kálvin, S.; Kardaun, O.; Kaufmann, M.; Kick, M.; Kirk, A.; Klingshirn, H.-J.; Koscis, G.; Kollotzek, H.; Konz, C.; Krieger, K.; Kurki-Suonio, T.; Kurzan, B.; Lackner, K.; Lang, P. T.; Langer, B.; Lauber, P.; Laux, M.; Leuterer, F.; Likonen, J.; Liu, L.; Lohs, A.; Lunt, T.; Lyssoivan, A.; Maggi, C. F.; Manini, A.; Mank, K.; Manso, M.-E.; Mantsinen, M.; Maraschek, M.; Martin, P.; Mayer, M.; McCarthy, P.; McCormick, K.; Meister, H.; Meo, F.; Merkel, P.; Merkel, R.; Mertens, V.; Merz, F.; Meyer, H.; Mlynek, A.; Monaco, F.; Müller, H.-W.; Münich, M.; Murmann, H.; Neu, G.; Neu, R.; Neuhauser, J.; Nold, B.; Noterdaeme, J.-M.; Pautasso, G.; Pereverzev, G.; Poli, E.; Potzel, S.; Püschel, M.; Pütterich, T.; Pugno, R.; Raupp, G.; Reich, M.; Reiter, B.; Ribeiro, T.; Riedl, R.; Rohde, V.; Roth, J.; Rott, M.; Ryter, F.; Sandmann, W.; Santos, J.; Sassenberg, K.; Sauter, P.; Scarabosio, A.; Schall, G.; Schilling, H.-B.; Schirmer, J.; Schmid, A.; Schmid, K.; Schneider, W.; Schramm, G.; Schrittwieser, R.; Schustereder, W.; Schweinzer, J.; Schweizer, S.; Scott, B.; Seidel, U.; Sempf, M.; Serra, F.; Sertoli, M.; Siccinio, M.; Sigalov, A.; Silva, A.; Sips, A. C. C.; Speth, E.; Stäbler, A.; Stadler, R.; Steuer, K.-H.; Stober, J.; Streibl, B.; Strumberger, E.; Suttrop, W.; Tardini, G.; Tichmann, C.; Treutterer, W.; Tröster, C.; Urso, L.; Vainonen-Ahlgren, E.; Varela, P.; Vermare, L.; Volpe, F.; Wagner, D.; Wigger, C.; Wischmeier, M.; Wolfrum, E.; Würsching, E.; Yadikin, D.; Yu, Q.; Zasche, D.; Zehetbauer, T.; Zilker, M.
2009-10-01
ASDEX Upgrade was operated with a fully W-covered wall in 2007 and 2008. Stationary H-modes at the ITER target values and improved H-modes with H up to 1.2 were run without any boronization. The boundary conditions set by the full W wall (high enough ELM frequency, high enough central heating and low enough power density arriving at the target plates) require significant scenario development, but will apply to ITER as well. D retention has been reduced and stationary operation with saturated wall conditions has been found. Concerning confinement, impurity ion transport across the pedestal is neoclassical, explaining the strong inward pinch of high-Z impurities in between ELMs. In improved H-mode, the width of the temperature pedestal increases with heating power, consistent with a \\beta_{pol,ped}^{1/2} scaling. In the area of MHD instabilities, disruption mitigation experiments using massive Ne injection reach volume averaged values of the total electron density close to those required for runaway suppression in ITER. ECRH at the q = 2 surface was successfully applied to delay density limit disruptions. The characterization of fast particle losses due to MHD has shown the importance of different loss mechanisms for NTMs, TAEs and also beta-induced Alfven eigenmodes (BAEs). Specific studies addressing the first ITER operational phase show that O1 ECRH at the HFS assists reliable low-voltage breakdown. During ramp-up, additional heating can be used to vary li to fit within the ITER range. Confinement and power threshold in He are more favourable than in H, suggesting that He operation could allow us to assess H-mode operation in the non-nuclear phase of ITER operation.
Słonecka, Iwona; Łukasik, Krzysztof; Fornalski, Krzysztof W
2018-06-04
The present paper proposes two methods of calculating the components of the dose absorbed by the human body after exposure to a mixed neutron and gamma radiation field. The article presents a novel approach that replaces the common iterative method with an analytical form, thus reducing the calculation time. It also shows the possibility of estimating the neutron and gamma doses when their ratio in a mixed beam is not precisely known.
Stokes-Doppler coherence imaging for ITER boundary tomography.
Howard, J; Kocan, M; Lisgo, S; Reichle, R
2016-11-01
An optical coherence imaging system is presently being designed for impurity transport studies and other applications on ITER. The wide variation in magnetic field strength and pitch angle (assumed known) across the field of view generates additional Zeeman-polarization-weighting information that can improve the reliability of tomographic reconstructions. Because background reflected light will be somewhat depolarized, analysis of only the polarized fraction may be enough to provide a level of background suppression. We present the principles behind these ideas and some simulations that demonstrate how the approach might work on ITER. The views and opinions expressed herein do not necessarily reflect those of the ITER Organization.
Efficient solution of the simplified PN equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX test problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
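For reference, the traditional power iteration that the faster eigensolvers are benchmarked against can be sketched for a dense two-group toy problem; the numbers are illustrative, not the C5G7 benchmark:

```python
import numpy as np

def k_power_iteration(M, F, tol=1e-10, max_iter=5000):
    # Power iteration for M @ phi = (1/k) F @ phi: k is the dominant
    # eigenvalue (k-effective), phi the fundamental-mode flux.
    phi = np.ones(M.shape[0])
    k = 1.0
    for it in range(max_iter):
        phi_new = np.linalg.solve(M, F @ phi / k)   # loss-operator solve
        k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
        if abs(k_new - k) < tol:
            return k_new, phi_new, it + 1
        k, phi = k_new, phi_new / np.linalg.norm(phi_new)
    return k, phi, max_iter

# two-group toy data: M = losses and downscatter, F = fission production
M = np.array([[0.12, 0.00],
              [-0.02, 0.10]])
F = np.array([[0.008, 0.14],
              [0.000, 0.00]])
k, phi, iters = k_power_iteration(M, F)
print(k, iters)
print(max(np.linalg.eigvals(np.linalg.solve(M, F)).real))  # same eigenvalue
```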
Harith, Hazreen; Schmutz, Beat; Malekani, Javad; Schuetz, Michael A; Yarlagadda, Prasad K
2016-03-01
Anatomically precontoured plates are commonly used to treat periarticular fractures. A well-fitting plate can be used as a tool for anatomical reduction of the fractured bone. Recent studies highlighted that some plates fit poorly for many patients due to considerable shape variations between bones of the same anatomical site. While it is impossible to design one shape that fits all, it is also burdensome for manufacturers and hospitals to produce, store, and manage multiple plate shapes without the certainty of utilization by a patient population. In this study, we investigated the number of shapes required for maximum fit within a given dataset, and whether they could be obtained by manually deforming the original plate. A distal medial tibial plate was automatically positioned on 45 individual tibiae, and the optimal deformation was determined iteratively using finite element analysis simulation. Within the studied dataset, we found that: (i) 89% fit could be achieved with four shapes, (ii) 100% fit was impossible through mechanical deformation, and (iii) the deformations required to obtain the four plate shapes were safe for the stainless steel plate for further clinical use. The proposed framework is easily transferable to other orthopaedic plates. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
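The core IRLS idea can be sketched on an ordinary regression with Huber-type weights; the paper's adaptation to mean and covariance structures is more involved, and everything below is illustrative:

```python
import numpy as np

def irls(X, y, c=1.345, n_iter=50, tol=1e-8):
    # Iteratively reweighted least squares with Huber weights: cases with
    # large standardized residuals are progressively down-weighted.
    beta = np.linalg.lstsq(X, y, rcond=None)[0]     # OLS start
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust scale (MAD)
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)            # Huber weight function
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta

rng = np.random.default_rng(7)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = X @ np.array([1.0, 2.0]) + rng.normal(0, 0.5, 200)
y[:10] += 15                                        # gross outliers
print("OLS :", np.linalg.lstsq(X, y, rcond=None)[0])
print("IRLS:", irls(X, y))                          # close to [1, 2]
```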
Solving Differential Equations Using Modified Picard Iteration
ERIC Educational Resources Information Center
Robin, W. A.
2010-01-01
Many classes of differential equations are shown to be open to solution through a method involving a combination of a direct integration approach with suitably modified Picard iterative procedures. The classes of differential equations considered include typical initial value, boundary value and eigenvalue problems arising in physics and…
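The classical Picard scheme underlying such modifications, sketched symbolically (a minimal illustration, not the article's modified procedures):

```python
import sympy as sp

t, s = sp.symbols("t s")

def picard(f, y0, n):
    # Picard iteration for y' = f(t, y), y(0) = y0:
    # y_{k+1}(t) = y0 + Integral(f(s, y_k(s)), (s, 0, t))
    y = sp.Integer(y0)
    for _ in range(n):
        y = y0 + sp.integrate(f(s, y.subs(t, s)), (s, 0, t))
    return sp.expand(y)

# y' = y, y(0) = 1: the iterates build the Taylor series of exp(t)
print(picard(lambda s, y: y, 1, 5))
# -> t**5/120 + t**4/24 + t**3/6 + t**2/2 + t + 1
```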
Type Ia Supernova Intrinsic Magnitude Dispersion and the Fitting of Cosmological Parameters
NASA Astrophysics Data System (ADS)
Kim, A. G.
2011-02-01
I present an analysis for fitting cosmological parameters from a Hubble diagram of a standard candle with unknown intrinsic magnitude dispersion. The dispersion is determined from the data, simultaneously with the cosmological parameters. This contrasts with the strategies used to date. The advantages of the presented analysis are that it is done in a single fit (it is not iterative), it provides a statistically founded and unbiased estimate of the intrinsic dispersion, and its cosmological-parameter uncertainties account for the intrinsic-dispersion uncertainty. Applied to Type Ia supernovae, my strategy provides a statistical measure to test for subtypes and assess the significance of any magnitude corrections applied to the calibrated candle. Parameter bias and differences between likelihood distributions produced by the presented and currently used fitters are negligibly small for existing and projected supernova data sets.
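A minimal sketch of the single-fit idea: the intrinsic dispersion enters the Gaussian likelihood as a free parameter alongside the model parameters. The toy low-redshift magnitude model and all values are illustrative, not the paper's cosmological fit:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_like(params, z, mu_obs, sigma_m):
    # Gaussian likelihood with the intrinsic dispersion sigma_int as a
    # free parameter; mu(z) = M + 5 log10(z) is a toy low-z Hubble law.
    M, sig_int = params
    var = sigma_m ** 2 + sig_int ** 2
    return 0.5 * np.sum((mu_obs - M - 5.0 * np.log10(z)) ** 2 / var
                        + np.log(var))

rng = np.random.default_rng(3)
z = rng.uniform(0.01, 0.1, 300)
sigma_m = rng.uniform(0.05, 0.15, 300)            # measurement errors
mu_obs = (43.0 + 5.0 * np.log10(z) + rng.normal(0.0, sigma_m)
          + rng.normal(0.0, 0.12, 300))           # 0.12 mag intrinsic scatter

res = minimize(neg_log_like, x0=[40.0, 0.05], args=(z, mu_obs, sigma_m),
               bounds=[(None, None), (1e-4, 1.0)])
print(res.x)   # recovers M ~ 43 and sigma_int ~ 0.12 in a single fit
```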
NASA Astrophysics Data System (ADS)
Pachhai, S.; Masters, G.; Laske, G.
2017-12-01
Earth's normal-mode spectra are crucial to studying the long-wavelength structure of the Earth. Such observations have been used extensively to estimate "splitting coefficients" which, in turn, can be used to determine the three-dimensional velocity and density structure. Most past studies apply a non-linear iterative inversion to estimate the splitting coefficients, which requires that the earthquake source be known. However, it is challenging to know the source details, particularly for the big events used in normal-mode analyses. Additionally, the final solution of the non-linear inversion can depend on the choice of damping parameter and starting model. To circumvent the need to know the source, a two-step linear inversion has been developed and successfully applied to many mantle- and core-sensitive modes. The first step takes combinations of the data from a single event to produce spectra known as "receiver strips". The autoregressive nature of the receiver strips can then be used to estimate the structure coefficients without the need to know the source. Based on this approach, we recently employed a neighborhood algorithm to measure the splitting coefficients for an isolated inner-core-sensitive mode (13S2). This approach explores the parameter space efficiently without any need for regularization and finds the structure coefficients that best fit the observed strips. Here, we implement a Bayesian approach to data collected for earthquakes from early 2000 onward. This approach combines the data (through the likelihood) and prior information to provide rigorous parameter values and their uncertainties for both isolated and coupled modes. The likelihood function is derived from the inferred errors of the receiver strips, which allows us to retrieve proper uncertainties. Finally, we apply model selection criteria that balance the trade-offs between fit (likelihood) and model complexity to investigate the degree and type of structure (elastic and anelastic) required to explain the data.
Diagnostics of Dielectric Materials with Several Relaxation Times
NASA Astrophysics Data System (ADS)
Karpov, A. G.; Klemeshev, V. A.
2018-04-01
A set of means for the detection and preprocessing of dielectrometric information has been suggested for studying the polarization/depolarization of dielectrics. Special attention has been paid to the processing of dielectrometric data for inhomogeneous materials using dielectric diagrams. A rapid analysis has been carried out, the results of which can be used as initial approximations in more accurate (more complicated and time-consuming) iterative algorithms for model fitting.
NASA Astrophysics Data System (ADS)
Entler, S.; Duran, I.; Kocan, M.; Vayakis, G.
2017-07-01
Three vacuum vessel sectors in ITER will be instrumented with the outer vessel steady-state magnetic field sensors. Each sensor unit features a pair of metallic Hall sensors with a sensing layer made of bismuth to measure the tangential and normal components of the local magnetic field. The influence of temperature and magnetic field on the Hall coefficient was tested for the temperature range from 25 to 250 °C and the magnetic field range from 0 to 0.5 T. A fit of the normalized temperature dependence of the Hall coefficient, independent of magnetic field, was found, and a model of the functional dependence of the Hall coefficient over a wide range of temperature and magnetic field was built to simplify the calibration procedure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Griffin, Patrick J.
2016-10-05
The code is used to provide an unfolded/adjusted energy-dependent fission reactor neutron spectrum based upon an input trial spectrum and a set of measured activities. This is part of a neutron environment characterization that supports testing in a given reactor environment. An iterative perturbation method is used to obtain a "best fit" neutron flux spectrum for a given input set of infinitely dilute foil activities. The calculational procedure consists of the selection of a trial flux spectrum to serve as the initial approximation to the solution, and subsequent iteration to a form acceptable as an appropriate solution. The solution is specified either as a time-integrated flux (fluence) for a pulsed environment or as a flux for a steady-state neutron environment.
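An iterative multiplicative adjustment of the SAND-II type is representative of this class of unfolding; the record does not specify the exact update rule, so the sketch below is illustrative only:

```python
import numpy as np

def unfold(R, a_meas, phi0, n_iter=100):
    # R[i, j]: response of foil i to energy group j; a_meas: measured
    # activities; phi0: trial spectrum. Each pass rescales every group
    # flux by an activity-weighted geometric mean of measured/computed
    # activity ratios (a SAND-II-type multiplicative adjustment).
    phi = phi0.copy()
    for _ in range(n_iter):
        a_calc = R @ phi
        W = R * phi / a_calc[:, None]     # share of foil i due to group j
        logc = (W * np.log(a_meas / a_calc)[:, None]).sum(axis=0)
        phi *= np.exp(logc / (W.sum(axis=0) + 1e-30))
    return phi

# toy problem: 3 foils, 5 groups, flat trial spectrum
rng = np.random.default_rng(5)
R = rng.uniform(0.1, 1.0, (3, 5))
phi_true = np.array([1.0, 2.0, 4.0, 2.0, 0.5])
a_meas = R @ phi_true
phi = unfold(R, a_meas, phi0=np.ones(5))
print(R @ phi / a_meas)   # activity ratios approach 1 as the fit converges
```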
Composition of web services using Markov decision processes and dynamic programming.
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experimental work shows that a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
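One of the validated algorithms, value iteration, sketched for a generic finite MDP; the WSC-specific state and action encodings are not reproduced here, and the toy data are illustrative:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    # P[a, s, s']: transition probabilities; R[a, s]: expected reward
    # (e.g. a QoS score for invoking service a in state s).
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        Q = R + gamma * (P @ V)          # Q[a, s] = R[a, s] + g * E[V(s')]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)   # optimal values and policy
        V = V_new

# toy 2-state, 2-action problem
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.1, 0.9], [0.7, 0.3]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
V, policy = value_iteration(P, R)
print(V, policy)
```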
Brown, A M
2001-06-01
The objective of the present study was to introduce a simple, easily understood method for carrying out non-linear regression analysis based on user-input functions. While it is relatively straightforward to fit data with simple functions such as linear or logarithmic functions, fitting data with more complicated non-linear functions is more difficult. Commercial specialist programmes are available that will carry out this analysis, but these programmes are expensive and not intuitive to learn. An alternative method described here is to use the SOLVER function of the ubiquitous spreadsheet programme Microsoft Excel, which employs an iterative least squares fitting routine to produce the optimal goodness of fit between data and function. The intent of this paper is to lead the reader through an easily understood step-by-step guide to implementing this method, which can be applied to any function of the form y=f(x), and is well suited to fast, reliable analysis of data in all fields of biology.
Executing SPARQL Queries over the Web of Linked Data
NASA Astrophysics Data System (ADS)
Hartig, Olaf; Bizer, Christian; Freytag, Johann-Christoph
The Web of Linked Data forms a single, globally distributed dataspace. Due to the openness of this dataspace, it is not possible to know in advance all data sources that might be relevant for query answering. This openness poses a new challenge that is not addressed by traditional research on federated query processing. In this paper we present an approach to execute SPARQL queries over the Web of Linked Data. The main idea of our approach is to discover data that might be relevant for answering a query during the query execution itself. This discovery is driven by following RDF links between data sources based on URIs in the query and in partial results. The URIs are resolved over the HTTP protocol into RDF data which is continuously added to the queried dataset. This paper describes concepts and algorithms to implement our approach using an iterator-based pipeline. We introduce a formalization of the pipelining approach and show that classical iterators may cause blocking due to the latency of HTTP requests. To avoid blocking, we propose an extension of the iterator paradigm. The evaluation of our approach shows its strengths as well as the still existing challenges.
NASA Astrophysics Data System (ADS)
Ramasahayam, Veda Krishna Vyas; Diwakar, Anant; Bodi, Kowsik
2017-11-01
To study the flow of high-temperature air in vibrational and chemical equilibrium, accurate models for the thermodynamic state and transport phenomena are required. In the present work, the performance of a state equation model and two mixing rules for determining equilibrium air thermodynamic and transport properties is compared with that of curve fits. The thermodynamic state model considers 11 species and computes the flow chemistry by an iterative process; the mixing rules considered for viscosity are those of Wilke and Armaly-Sutton. The curve fits of Srinivasan, which are based on Grabau-type transition functions, are chosen for comparison. A two-dimensional Navier-Stokes solver is developed to simulate high-enthalpy flows, with numerical fluxes computed by AUSM+-up. The accuracy of the state equation model and curve fits for thermodynamic properties is determined using hypersonic inviscid flow over a circular cylinder. The performance of the mixing rules and curve fits for viscosity is compared using hypersonic laminar boundary layer prediction on a flat plate. It is observed that steady-state solutions from the state equation model and the curve fits match each other. Though the curve fits are significantly faster, the state equation model is more general and can be adapted to any flow composition.
High-Dimensional Bayesian Geostatistics
Banerjee, Sudipto
2017-01-01
With the growing capabilities of Geographic Information Systems (GIS) and user-friendly software, statisticians today routinely encounter geographically referenced data containing observations from a large number of spatial locations and time points. Over the last decade, hierarchical spatiotemporal process models have become widely deployed statistical tools for researchers to better understand the complex nature of spatial and temporal variability. However, fitting hierarchical spatiotemporal models often involves expensive matrix computations with complexity increasing in cubic order in the number of spatial locations and temporal points. This renders such models infeasible for large data sets. This article offers a focused review of two methods for constructing well-defined, highly scalable spatiotemporal stochastic processes. Both these processes can be used as “priors” for spatiotemporal random fields. The first approach constructs a low-rank process operating on a lower-dimensional subspace. The second approach constructs a Nearest-Neighbor Gaussian Process (NNGP) that ensures sparse precision matrices for its finite realizations. Both processes can be exploited as a scalable prior embedded within a rich hierarchical modeling framework to deliver full Bayesian inference. These approaches can be described as model-based solutions for big spatiotemporal datasets. The models ensure that the algorithmic complexity is ~n floating point operations (flops) per iteration, where n is the number of spatial locations. We compare these methods and provide some insight into their methodological underpinnings. PMID:29391920
Groundwater regulation and integrated planning
Quevauviller, Philippe; Batelaan, Okke; Hunt, Randall J.
2016-01-01
The complex nature of groundwater and the diversity of uses and environmental interactions call for emerging groundwater problems to be addressed through integrated management and planning approaches. Planning requires different levels of integration dealing with: the hydrologic cycle (the physical process) including the temporal dimension; river basins and aquifers (spatial integration); socioeconomic considerations at regional, national and international levels; and scientific knowledge. The great natural variation in groundwater conditions obviously affects planning needs and options as well as perceptions from highly localised to regionally-based approaches. The scale at which planning is done therefore needs to be carefully evaluated against available policy choices and options in each particular setting. A solid planning approach is based on River Basin Management Planning (RBMP), which covers: (1) objectives that management planning are designed to address; (2) the way various types of measures fit into the overall management planning; and (3) the criteria against which the success or failure of specific strategies or interventions can be evaluated (e.g. compliance with environmental quality standards). A management planning framework is to be conceived as a “living” or iterated document that can be updated, refined and if necessary changed as information and experience are gained. This chapter discusses these aspects, providing an insight into European Union (EU), United States and Australia groundwater planning practices.
Liu, Jiangang; Jolly, Robert A.; Smith, Aaron T.; Searfoss, George H.; Goldstein, Keith M.; Uversky, Vladimir N.; Dunker, Keith; Li, Shuyu; Thomas, Craig E.; Wei, Tao
2011-01-01
Toxicogenomics promises to aid in predicting adverse effects, understanding the mechanisms of drug action or toxicity, and uncovering unexpected or secondary pharmacology. However, modeling adverse effects using high-dimensional and high-noise genomic data is prone to over-fitting. Models constructed from such data sets often consist of a large number of genes with no obvious functional relevance to the biological effect the model intends to predict, which can make it challenging to interpret the modeling results. To address these issues, we developed a novel algorithm, the Predictive Power Estimation Algorithm (PPEA), which estimates the predictive power of each individual transcript through an iterative two-way bootstrapping procedure. By repeatedly enforcing that the sample number is larger than the transcript number in each iteration of modeling and testing, PPEA reduces the potential risk of over-fitting. We show with three different case studies that: (1) PPEA can quickly derive a reliable rank order of the predictive power of individual transcripts in a relatively small number of iterations, (2) the top-ranked transcripts tend to be functionally related to the phenotype they are intended to predict, (3) using only the most predictive top-ranked transcripts greatly facilitates development of multiplex assays such as qRT-PCR as biomarkers, and (4) more importantly, we were able to demonstrate that a small number of genes identified from the top-ranked transcripts are highly predictive of phenotype, as their expression changes distinguished adverse from nonadverse effects of compounds in completely independent tests. Thus, we believe that the PPEA model effectively addresses the over-fitting problem and can be used to facilitate genomic biomarker discovery for predictive toxicology and drug responses. PMID:21935387
NASA Astrophysics Data System (ADS)
Sanatkhani, Soroosh; Oladosu, Michael; Chera, Karandeep; Nedios, Sotirios; Menon, Prahlad G.
2018-03-01
Pulmonary vein isolation (PVI) is an established procedure for atrial fibrillation (AF) patients. Pre-procedural screening is necessary prior to PVI in order to reduce the likelihood of AF recurrence and improve the overall success rate of the procedure. However, current reliable methods to determine AF triggers are invasive. In this paper, we present an approach to relate the regional characteristics of left atrial (LA) shape to the existence of low-voltage areas (LVA), which indicate the presence of scar in invasive exams. Clinical images from a cohort of 29 AF patients were each segmented into 3D surface bodies representing the LA. An iterative-closest-point-based similarity transformation was used to find the best-fit sphere to each patient-specific LA, and the mean deviation of the LA wall from this sphere of best fit was determined using a signed point-to-surface regional distance metric. Regional departure from the best-fit sphere was reduced to a metric of global LA sphericity. Next, the LA was divided into six regions to perform an analysis of regional sphericity. This analysis revealed that the sphericity of the inferior-posterior LA region was related to several clinical variables, including a direct correlation with body mass index (BMI) and an inverse correlation with left ventricular ejection fraction (EF), suggesting a diseased heart that has dilated asymmetrically. Our observations therefore demonstrate promise in being leveraged as a non-invasive patient selection tool to increase the success rate of PVI procedures.
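The best-fit sphere and the signed deviation metric can also be obtained with a simple algebraic least-squares fit, sketched below on synthetic points; the study itself uses an ICP-based similarity transformation, and all values here are illustrative:

```python
import numpy as np

def fit_sphere(pts):
    # |p - c|^2 = r^2 rearranges to 2 c.p + (r^2 - |c|^2) = |p|^2,
    # which is linear in the center c and in d = r^2 - |c|^2.
    A = np.column_stack([2.0 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    return c, np.sqrt(d + c @ c)

# noisy samples of a sphere standing in for an LA surface mesh
rng = np.random.default_rng(11)
u = rng.normal(size=(2000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
pts = np.array([10.0, -5.0, 2.0]) + 22.0 * u + rng.normal(0, 0.5, (2000, 3))

c, r = fit_sphere(pts)
dev = np.linalg.norm(pts - c, axis=1) - r   # signed point-to-sphere distance
print(c, r, dev.std())                      # a sphericity-type summary
```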
NASA Astrophysics Data System (ADS)
Rahim, K. J.; Cumming, B. F.; Hallett, D. J.; Thomson, D. J.
2007-12-01
An accurate assessment of local Holocene data is important in making future climate predictions. Holocene climate is often obtained through proxy measures such as diatoms or pollen using radiocarbon dating. Wiggle Match Dating (WMD) uses an iterative least squares approach to tune a core with a large number of 14C dates to the 14C calibration curve. This poster presents a new method of tuning a time series when only a modest number of 14C dates are available. The method uses multitaper spectral estimation, specifically a multitaper spectral coherence tuning technique. Holocene climate reconstructions are often based on a simple depth-time fit such as a linear interpolation, splines, or low-order polynomials. Many of these models use only a small number of 14C dates, each of which is a point estimate with significant variance. This technique attempts to tune the 14C dates to a reference series, such as tree rings, varves, or the radiocarbon calibration curve. The amount of 14C in the atmosphere is not constant, and a significant source of variance is solar activity. A decrease in solar activity coincides with an increase in cosmogenic isotope production, and an increase in cosmogenic isotope production coincides with a decrease in temperature. The method uses multitaper coherence estimates and adjusts the phase of the time series to line up significant line components with those of the reference series, in an attempt to obtain a better depth-time fit than the original model. Given recent concerns and demonstrations of the variation in estimated dates from radiocarbon labs, methods to confirm and tune the depth-time fit can aid climate reconstructions by improving, and serving to confirm, the accuracy of the underlying depth-time fit. Climate reconstructions can then be made on the improved depth-time fit. This poster presents a run-through of this process using Chauvin Lake in the Canadian prairies and Mt. Barr Cirque Lake in British Columbia as examples.
NASA Astrophysics Data System (ADS)
Pua, Rizza; Park, Miran; Wi, Sunhee; Cho, Seungryong
2016-12-01
We propose a hybrid metal artifact reduction (MAR) approach for computed tomography (CT) that is computationally more efficient than a fully iterative reconstruction method, while at the same time achieving superior image quality to the interpolation-based in-painting techniques. Our proposed MAR method, an image-based artifact subtraction approach, utilizes an intermediate prior image reconstructed via PDART to recover the background information underlying the high-density objects. For comparison, prior images generated by the total-variation minimization (TVM) algorithm, as a realization of the fully iterative approach, were also utilized as intermediate images. The simulation and real experimental results show that PDART drastically accelerates the reconstruction to an acceptable quality of prior images. Incorporating PDART-reconstructed prior images in the proposed MAR scheme achieved higher quality images than those obtained with a conventional in-painting method. Furthermore, the results were comparable to those of the fully iterative MAR that uses high-quality TVM prior images.
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J; Stayman, J Webster; Zbijewski, Wojciech; Brock, Kristy K; Daly, Michael J; Chan, Harley; Irish, Jonathan C; Siewerdsen, Jeffrey H
2011-04-01
A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values ("intensity"). A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration, and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. In the six-case registration accuracy study, iterative intensity matching Demons reduced the mean TRE to (2.5 ± 2.8) mm, compared to (3.5 ± 3.0) mm with rigid registration. A method was thus developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance.
NASA Astrophysics Data System (ADS)
Cao, Jian; Chen, Jing-Bo; Dai, Meng-Xue
2018-01-01
Efficient finite-difference frequency-domain modeling of seismic wave propagation relies on the discrete schemes and appropriate solving methods. The average-derivative optimal scheme for scalar wave modeling is advantageous in terms of storage savings for the system of linear equations and flexibility for arbitrary directional sampling intervals. However, using an LU-decomposition-based direct solver to solve the resulting system of linear equations is very costly in both memory and computational requirements. To address this issue, we consider establishing a multigrid-preconditioned BiCGSTAB iterative solver suited to the average-derivative optimal scheme. The choice of the preconditioning matrix and its corresponding multigrid components is made with the help of Fourier spectral analysis and local mode analysis, respectively, which is important for convergence. Furthermore, we find that for computations with unequal directional sampling intervals, the anisotropic smoothing in the multigrid preconditioner may affect the convergence rate of the iterative solver. Successful numerical applications of this iterative solver to homogeneous and heterogeneous models in 2D and 3D are presented, where a significant reduction of computer memory and an improvement of computational efficiency are demonstrated by comparison with the direct solver. In the numerical experiments, we also show that unequal directional sampling intervals will weaken the advantage of this multigrid-preconditioned iterative solver in computing speed or, even worse, could reduce its accuracy in some cases, which implies the need for reasonable control of the directional sampling interval in the discretization.
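A sketch of a preconditioned BiCGSTAB solve with SciPy, using an incomplete LU factorization where the paper employs a multigrid preconditioner; the operator below is a generic shifted 2D Laplacian, not the average-derivative optimal scheme:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, bicgstab, spilu

# Helmholtz-like test operator: 2D five-point Laplacian with a shift,
# standing in for the frequency-domain seismic system (illustrative only)
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
I = sp.identity(n)
A = (sp.kron(I, T) + sp.kron(T, I) - 0.5 * sp.identity(n * n)).tocsc()
b = np.ones(n * n)

# incomplete LU factorization as the preconditioner; in the paper this
# role is played by a multigrid cycle chosen via local mode analysis
ilu = spilu(A, drop_tol=1e-4)
M = LinearOperator(A.shape, ilu.solve)

n_iter = [0]
x, info = bicgstab(A, b, M=M,
                   callback=lambda xk: n_iter.__setitem__(0, n_iter[0] + 1))
print(info, n_iter[0], np.linalg.norm(A @ x - b))  # info == 0 on success
```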
Unimodular sequence design under frequency hopping communication compatibility requirements
NASA Astrophysics Data System (ADS)
Ge, Peng; Cui, Guolong; Kong, Lingjiang; Yang, Jianyu
2016-12-01
The integrated design of radar and anonymous communication has drawn more attention recently, since wireless communication systems seek enhanced security and reliability. Given a frequency hopping (FH) communication system, an effective way to realize integrated design is to meet the spectrum compatibility requirement between these two systems. The paper deals with a unimodular sequence design technique that optimizes both the spectrum compatibility and the peak sidelobe levels (PSL) of the auto-correlation function (ACF). The spectrum compatibility requirement realizes anonymous communication for the FH system and provides it a lower probability of intercept (LPI), since the spectrum of the FH system is hidden in that of the radar system. The proposed algorithm, named the generalized fitting template (GFT) technique, converts the sequence design problem into an iterative fitting process. In this process, the power spectral density (PSD) and PSL behaviors of the generated sequences fit both the PSD and PSL templates progressively. The two templates are established from the spectrum compatibility requirement and the expected PSL. Notably, to ensure communication security and reliability, the spectrum compatibility requirement is given a higher priority in the GFT algorithm, which adjusts the weight between the two terms adaptively during the iteration process. The simulation results are analyzed in terms of bit error rate (BER), PSD, PSL, and signal-interference ratio (SIR) for both the radar and FH systems. The performance of GFT is compared with the SCAN, CAN, FRE, CYC, and MAT algorithms in the above respects, which shows its good effectiveness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stimpson, Shane; Collins, Benjamin; Kochunas, Brendan
2017-03-10
The MPACT code, being developed collaboratively by the University of Michigan and Oak Ridge National Laboratory, is the primary deterministic neutron transport solver being deployed within the Virtual Environment for Reactor Applications (VERA) as part of the Consortium for Advanced Simulation of Light Water Reactors (CASL). In many applications of the MPACT code, transport-corrected scattering has proven to be an obstacle in terms of stability, and considerable effort has been made to resolve the convergence issues that arise from it. Most of the convergence problems seem related to the transport-corrected cross sections, particularly when used in the 2D method of characteristics (MOC) solver, which is the focus of this work. In this paper, the stability and performance of the 2D MOC solver in MPACT is evaluated for two iteration schemes: Gauss-Seidel and Jacobi. With the Gauss-Seidel approach, as the MOC solver loops over groups, it uses the flux solution from the previous group to construct the inscatter source for the next group. Alternatively, the Jacobi approach uses only the fluxes from the previous outer iteration to determine the inscatter source for each group. Consequently, for the Jacobi iteration, the loop over groups can be moved from the outermost loop, as is the case with the Gauss-Seidel sweeper, to the innermost loop, allowing for a substantial increase in efficiency by minimizing the overhead of retrieving segment, region, and surface index information from the ray tracing data. Several test problems are assessed: (1) Babcock & Wilcox 1810 Core I, (2) Dimple S01A-Sq, (3) VERA Progression Problem 5a, and (4) VERA Problem 2a. The Jacobi iteration exhibits better stability than Gauss-Seidel, allowing converged solutions to be obtained over a much wider range of iteration control parameters. Additionally, the MOC solve time with the Jacobi approach is roughly 2.0-2.5× faster per sweep. While the performance and stability of the Jacobi iteration are substantially improved compared to the Gauss-Seidel iteration, it does incur a roughly 8-10% increase in the overall memory requirement.
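The difference between the two sweeps reduces to which flux iterate feeds the inscatter source; a zero-dimensional sketch with a stand-in one-region "solve" per group (the real solver performs a full 2D MOC sweep here):

```python
import numpy as np

def group_sweep(S, q, phi, mode):
    # One multigroup source-iteration pass. S[g, g'] scatters group g'
    # into g; the per-group "transport solve" is a one-region stand-in.
    phi_old = phi.copy()
    src = phi if mode == "gauss-seidel" else phi_old   # which iterate
    for g in range(len(phi)):                          # feeds inscatter
        inscatter = S[g] @ src - S[g, g] * src[g]
        phi[g] = (q[g] + inscatter) / (1.0 - S[g, g])
    return phi

S = 0.3 * np.array([[0.5, 0.0, 0.0],
                    [0.4, 0.5, 0.0],
                    [0.1, 0.4, 0.6]])     # downscatter-dominated toy matrix
q = np.array([1.0, 0.5, 0.2])
for mode in ("jacobi", "gauss-seidel"):
    phi = np.zeros(3)
    for k in range(200):
        phi_prev = phi.copy()
        phi = group_sweep(S, q, phi, mode)
        if np.max(np.abs(phi - phi_prev)) < 1e-12:
            break
    print(mode, "passes:", k + 1)   # Gauss-Seidel needs fewer passes
```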
TH-AB-BRA-09: Stability Analysis of a Novel Dose Calculation Algorithm for MRI Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zelyak, O; Fallone, B; Cross Cancer Institute, Edmonton, AB
2016-06-15
Purpose: To determine the iterative deterministic solution stability of the Linear Boltzmann Transport Equation (LBTE) in the presence of magnetic fields. Methods: The LBTE with magnetic fields under investigation is derived using a discrete ordinates approach. The stability analysis is performed using analytical and numerical methods. Analytically, spectral Fourier analysis is used to obtain the convergence rate of the source iteration procedure, based on finding the largest eigenvalue of the iteration operator. This eigenvalue is a function of relevant physical parameters, such as magnetic field strength and material properties, and provides essential information about the domain of applicability required for clinically optimal parameter selection and maximum speed of convergence. The analytical results are reinforced by numerical simulations performed using the same discrete ordinates method in angle and a discontinuous finite element spatial approach. Results: The spectral radius for the source iteration technique of the time-independent transport equation with isotropic and anisotropic scattering centers inside an infinite 3D medium is equal to the ratio of the differential and total cross sections. The result is confirmed numerically by solving the LBTE and is in full agreement with previously published results. The addition of a magnetic field reveals that the convergence becomes dependent on the strength of the magnetic field, the energy group discretization, and the order of the anisotropic expansion. Conclusion: The source iteration technique for solving the LBTE with magnetic fields with the discrete ordinates method leads to divergent solutions in the limiting cases of small energy discretizations and high magnetic field strengths. Future investigations into non-stationary Krylov subspace techniques as an iterative solver will be performed, as these have been shown to produce greater stability than source iteration. Furthermore, a stability analysis of a discontinuous finite element space-angle approach (which has been shown to provide the greatest stability) will also be investigated. Dr. B Gino Fallone is a co-founder and CEO of MagnetTx Oncology Solutions (under discussions to license the Alberta bi-planar linac MR for commercialization).
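The quoted result, a spectral radius equal to the scattering-to-total ratio, is easy to reproduce in the field-free limit. A minimal numerical sketch for an infinite homogeneous medium with hypothetical cross-section values (the discrete-ordinates and magnetic-field machinery of the abstract is omitted):

```python
# Infinite homogeneous medium, isotropic scattering, no magnetic field:
# source iteration phi_{k+1} = (sig_s * phi_k + q) / sig_t converges
# geometrically with spectral radius c = sig_s / sig_t.
sig_t, sig_s, q = 1.0, 0.9, 1.0   # hypothetical cross sections and source
phi_exact = q / (sig_t - sig_s)

phi, errors = 0.0, []
for k in range(30):
    phi = (sig_s * phi + q) / sig_t
    errors.append(abs(phi - phi_exact))

# Ratio of successive errors approaches c = 0.9, the scattering ratio.
print(errors[-1] / errors[-2])   # ~0.9
```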
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes, based on iterative procedures and on artificial neural networks respectively, is evaluated. The first code, based on traditional iterative procedures and called Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter readings. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural network technology. The artificial intelligence approach of the neural network does not solve mathematical equations; using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. The NSDUAZ and NSDann codes are similar in that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities, and generate a full report in HTML format. They differ in that the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum to initiate the iterative procedure; in NSDUAZ, a programming routine calculates 7 IAEA instrument survey meter readings using fluence-to-dose conversion coefficients. The NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network. Contrary to iterative procedures, the neural network approach makes it possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer tool called the Neutron Spectrometry and Dosimetry computer tool was designed, and the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
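The abstract does not spell out SPUNIT's update rule, but iterative unfolding schemes of this family typically apply a multiplicative correction driven by the ratio of measured to predicted count rates. A hedged sketch of one such MLEM-style update, not necessarily SPUNIT's exact formula:

```python
import numpy as np

def unfold(R, counts, phi0, iters=200):
    """Generic multiplicative iterative unfolding of counts = R @ phi.
    R: (n_spheres, n_bins) response matrix; phi0: initial guess spectrum
    (e.g., selected from a compendium). MLEM-style rule, for illustration."""
    phi = phi0.copy()
    for _ in range(iters):
        pred = R @ phi                           # predicted sphere count rates
        ratio = counts / np.maximum(pred, 1e-30)
        phi *= (R.T @ ratio) / np.maximum(R.sum(axis=0), 1e-30)
    return phi
```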
NASA Astrophysics Data System (ADS)
Tour, James M.; Schumm, Jeffrey S.; Pearson, Darren L.
1994-06-01
Described is the synthesis of oligo(2-ethylphenylene ethynylene)s and oligo(2-(3'-ethylheptyl)phenylene ethynylene)s via an iterative divergent-convergent approach. Synthesized were the monomer, dimer, tetramer, and octamer of the ethyl derivative and the monomer, dimer, tetramer, octamer, and 16-mer of the ethylheptyl derivative. The 16-mer is 128 Å long. At each stage in the iteration, the length of the framework doubles. Only three sets of reaction conditions are needed for the entire iterative synthetic sequence: an iodination, a protodesilylation, and a Pd/Cu-catalyzed cross coupling. The oligomers were characterized spectroscopically and by mass spectrometry. The optical properties are presented, showing the stage at which optical absorbance saturates. The size exclusion chromatography values for the number average weights, relative to polystyrene, illustrate the tremendous differences in the hydrodynamic volume of these rigid rod oligomers versus the random coils of polystyrene. These differences become quite apparent at the octamer stage. These oligomers may act as molecular wires in molecular electronic devices, and they also serve as useful models for understanding related bulk polymers.
Iterated reaction graphs: simulating complex Maillard reaction pathways.
Patel, S; Rabone, J; Russell, S; Tissen, J; Klaffke, W
2001-01-01
This study investigates a new method of simulating a complex chemical system including feedback loops and parallel reactions. The practical purpose of this approach is to model the actual reactions that take place in the Maillard process, a set of food browning reactions, in sufficient detail to be able to predict the volatile composition of the Maillard products. The developed framework, called iterated reaction graphs, consists of two main elements: a soup of molecules and a reaction base of Maillard reactions. An iterative process loops through the reaction base, taking reactants from and feeding products back to the soup. This produces a reaction graph, with molecules as nodes and reactions as arcs. The iterated reaction graph is updated and validated by comparing output with the main products found by classical gas-chromatographic/mass spectrometric analysis. To ensure a realistic output and convergence to desired volatiles only, the approach contains a number of novel elements: rate kinetics are treated as reaction probabilities; only a subset of the true chemistry is modeled; and the reactions are blocked into groups.
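The soup/reaction-base loop can be made concrete in a few lines. A toy sketch with illustrative species names and made-up probabilities, not the paper's actual Maillard reaction base:

```python
import random

# Toy iterated reaction graph: a soup of molecules and a reaction base of
# (reactants, products, probability) rules, with rate kinetics treated as
# reaction probabilities as described above.
soup = {"glucose", "glycine"}
reaction_base = [
    ({"glucose", "glycine"}, {"amadori"}, 0.8),
    ({"amadori"}, {"deoxyosone", "glycine"}, 0.5),
    ({"deoxyosone"}, {"furfural"}, 0.3),
]
graph_arcs = []   # (reactants, products) arcs of the growing reaction graph

random.seed(1)
for _ in range(50):                        # iterative loop over the base
    for reactants, products, p in reaction_base:
        if reactants <= soup and random.random() < p:
            soup |= products               # feed products back to the soup
            graph_arcs.append((tuple(reactants), tuple(products)))

print(sorted(soup))
```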
NASA Astrophysics Data System (ADS)
Soni, Jigensh; Yadav, R. K.; Patel, A.; Gahlaut, A.; Mistry, H.; Parmar, K. G.; Mahesh, V.; Parmar, D.; Prajapati, B.; Singh, M. J.; Bandyopadhyay, M.; Bansal, G.; Pandya, K.; Chakraborty, A.
2013-02-01
Twin Source (TS), an inductively coupled two-RF-driver-based 180 kW, 1 MHz negative ion source experimental setup, has been initiated at IPR, Gandhinagar, under the Indian program, with the objective of understanding the physics and technology of multi-driver coupling. Twin Source [1] also provides an intermediate platform between the operational ROBIN [2][5] and the eight-RF-driver-based Indian test facility INTF [3]. A twin source experiment requires a central system to provide control, data acquisition, and a communication interface, referred to as TS-CODAC, for which a software architecture similar to the ITER CODAC Core System has been adopted. The Core System is a software suite for ITER plant system manufacturers to use as a template for the development of their interface with CODAC. The ITER approach, in terms of technology, has been adopted for the TS-CODAC so as to develop the expertise needed for developing and operating a control system based on the ITER guidelines, as a similar configuration needs to be implemented for the INTF. This cost-effective approach will provide an opportunity to evaluate and learn ITER CODAC technology, documentation, information technology, and control system processes on an operational machine. Conceptual design of the TS-CODAC system has been completed. For complete control of the system, approximately 200 control signals and 152 acquisition signals are needed. In TS-CODAC, the required control loop time is in the range of 5-10 ms; therefore, for the control system, a PLC (Siemens S7-400) has been chosen as suggested in the ITER slow controller catalog. For data acquisition, the maximum sampling interval required is 100 microseconds, and therefore a National Instruments (NI) PXIe system and NI 6259 digitizer cards have been selected as suggested in the ITER fast controller catalog. This paper will present the conceptual design of the TS-CODAC system based on the ITER CODAC Core software and applicable plant system integration processes.
Robust Decision Making Approach to Managing Water Resource Risks (Invited)
NASA Astrophysics Data System (ADS)
Lempert, R.
2010-12-01
The IPCC and US National Academies of Science have recommended iterative risk management as the best approach for water management and many other types of climate-related decisions. Such an approach does not rely on a single set of judgments at any one time but rather actively updates and refines strategies as new information emerges. In addition, the approach emphasizes that a portfolio of different types of responses, rather than any single action, often provides the best means to manage uncertainty. Implementing an iterative risk management approach can however prove difficult in actual decision support applications. This talk will suggest that robust decision making (RDM) provides a particularly useful set of quantitative methods for implementing iterative risk management. This RDM approach is currently being used in a wide variety of water management applications. RDM employs three key concepts that differentiate it from most types of probabilistic risk analysis: 1) characterizing uncertainty with multiple views of the future (which can include sets of probability distributions) rather than a single probabilistic best-estimate, 2) employing a robustness rather than an optimality criterion to assess alternative policies, and 3) organizing the analysis with a vulnerability and response option framework, rather than a predict-then-act framework. This talk will summarize the RDM approach, describe its use in several different types of water management applications, and compare the results to those obtained with other methods.
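The contrast between an optimality criterion and a robustness criterion is easy to demonstrate. A toy sketch with made-up payoffs for three strategies across four plausible futures, comparing the best-on-average strategy with the minimax-regret choice:

```python
import numpy as np

# Rows: strategies; columns: scenarios (multiple views of the future).
# Payoff values are illustrative only.
payoff = np.array([[12.0, 0.0, 12.0, 0.0],
                   [ 6.0, 5.0,  6.0, 5.0],
                   [ 2.0, 8.0,  2.0, 8.0]])

best_by_scenario = payoff.max(axis=0)
regret = best_by_scenario - payoff        # shortfall vs. best in each future

optimal = payoff.mean(axis=1).argmax()    # best on a single averaged view
robust = regret.max(axis=1).argmin()      # minimax regret across all views
print(optimal, robust)                    # 0 1: the average-best strategy is not the robust one
```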
Matching Pursuit with Asymmetric Functions for Signal Decomposition and Parameterization
Spustek, Tomasz; Jedrzejczak, Wiesław Wiktor; Blinowska, Katarzyna Joanna
2015-01-01
The method of adaptive approximations by Matching Pursuit makes it possible to decompose signals into basic components (called atoms). The approach relies on fitting, in an iterative way, functions from a large predefined set (called dictionary) to an analyzed signal. Usually, symmetric functions coming from the Gabor family (sine modulated Gaussian) are used. However Gabor functions may not be optimal in describing waveforms present in physiological and medical signals. Many biomedical signals contain asymmetric components, usually with a steep rise and slower decay. For the decomposition of this kind of signal we introduce a dictionary of functions of various degrees of asymmetry – from symmetric Gabor atoms to highly asymmetric waveforms. The application of this enriched dictionary to Otoacoustic Emissions and Steady-State Visually Evoked Potentials demonstrated the advantages of the proposed method. The approach provides more sparse representation, allows for correct determination of the latencies of the components and removes the "energy leakage" effect generated by symmetric waveforms that do not sufficiently match the structures of the analyzed signal. Additionally, we introduced a time-frequency-amplitude distribution that is more adequate for representation of asymmetric atoms than the conventional time-frequency-energy distribution. PMID:26115480
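The core Matching Pursuit iteration is independent of whether the dictionary atoms are symmetric or asymmetric. A minimal sketch assuming unit-norm atoms stored as rows; building a dictionary of asymmetric waveforms, as proposed above, is left to the caller:

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms=5):
    """Greedy MP: at each iteration pick the (unit-norm) atom with the
    largest inner product with the residual and subtract its projection.
    dictionary: (n_atoms_total, n_samples) array of unit-norm rows."""
    residual = signal.astype(float).copy()
    decomposition = []
    for _ in range(n_atoms):
        scores = dictionary @ residual          # correlations with residual
        k = int(np.argmax(np.abs(scores)))
        decomposition.append((k, scores[k]))    # atom index and amplitude
        residual -= scores[k] * dictionary[k]
    return decomposition, residual
```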
Applying mixed methods to pretest the Pressure Ulcer Quality of Life (PU-QOL) instrument.
Gorecki, C; Lamping, D L; Nixon, J; Brown, J M; Cano, S
2012-04-01
Pretesting is key in the development of patient-reported outcome (PRO) instruments. We describe a mixed-methods approach based on interviews and Rasch measurement methods in the pretesting of the Pressure Ulcer Quality of Life (PU-QOL) instrument. We used cognitive interviews to pretest the PU-QOL in 35 patients with pressure ulcers with the view to identifying problematic items, followed by Rasch analysis to examine response options, appropriateness of the item series and biases due to question ordering (item fit). We then compared findings in an interactive and iterative process to identify potential strengths and weaknesses of PU-QOL items, and guide decision-making about further revisions to items and design/layout. Although cognitive interviews largely supported items, they highlighted problems with layout, response options and comprehension. Findings from the Rasch analysis identified problems with response options through reversed thresholds. The use of a mixed-methods approach in pretesting the PU-QOL instrument proved beneficial for identifying problems with scale layout, response options and framing/wording of items. Rasch measurement methods are a useful addition to standard qualitative pretesting for evaluating strengths and weaknesses of early stage PRO instruments.
Efficient simulation of incompressible viscous flow over multi-element airfoils
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan
1992-01-01
The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The computer code uses the method of pseudo-compressibility with an upwind-differencing scheme for the convective fluxes and an implicit line-relaxation solution algorithm. The motivation for this work includes interest in studying the high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack, up to stall, is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared: a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time (on a CRAY YMP) per element in the airfoil configuration.
Efficient simulation of incompressible viscous flow over multi-element airfoils
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Wiltberger, N. Lyn; Kwak, Dochan
1993-01-01
The incompressible, viscous, turbulent flow over single and multi-element airfoils is numerically simulated in an efficient manner by solving the incompressible Navier-Stokes equations. The solution algorithm employs the method of pseudo compressibility and utilizes an upwind differencing scheme for the convective fluxes, and an implicit line-relaxation scheme. The motivation for this work includes interest in studying high-lift take-off and landing configurations of various aircraft. In particular, accurate computation of lift and drag at various angles of attack up to stall is desired. Two different turbulence models are tested in computing the flow over an NACA 4412 airfoil; an accurate prediction of stall is obtained. The approach used for multi-element airfoils involves the use of multiple zones of structured grids fitted to each element. Two different approaches are compared: a patched system of grids, and an overlaid Chimera system of grids. Computational results are presented for two-element, three-element, and four-element airfoil configurations. Excellent agreement with experimental surface pressure coefficients is seen. The code converges in less than 200 iterations, requiring on the order of one minute of CPU time on a CRAY YMP per element in the airfoil configuration.
iCI: Iterative CI toward full CI.
Liu, Wenjian; Hoffmann, Mark R
2016-03-08
It is shown both theoretically and numerically that the minimal multireference configuration interaction (CI) approach [Liu, W.; Hoffmann, M. R. Theor. Chem. Acc. 2014, 133, 1481] converges quickly and monotonically from above to full CI by updating the primary, external, and secondary states that describe the respective static, dynamic, and again static components of correlation iteratively, even when starting with a rather poor description of a strongly correlated system. In short, the iterative CI (iCI) is a very effective means toward highly correlated wave functions and, ultimately, full CI.
NASA Technical Reports Server (NTRS)
Turc, Catalin; Anand, Akash; Bruno, Oscar; Chaubell, Julian
2011-01-01
We present a computational methodology (a novel Nystrom approach based on use of a non-overlapping patch technique and Chebyshev discretizations) for efficient solution of problems of acoustic and electromagnetic scattering by open surfaces. Our integral equation formulations (1) incorporate, as ansatz, the singular nature of open-surface integral-equation solutions, and (2) for the Electric Field Integral Equation (EFIE), use analytical regularizers that effectively reduce the number of iterations required by iterative linear-algebra solution based on Krylov-subspace solvers.
Sinogram-based adaptive iterative reconstruction for sparse view x-ray computed tomography
NASA Astrophysics Data System (ADS)
Trinca, D.; Zhong, Y.; Wang, Y.-Z.; Mamyrbayev, T.; Libin, E.
2016-10-01
With the availability of more powerful computing processors, iterative reconstruction algorithms have recently been successfully implemented as an approach to achieving significant dose reduction in X-ray CT. In this paper, we propose an adaptive iterative reconstruction algorithm for X-ray CT, that is shown to provide results comparable to those obtained by proprietary algorithms, both in terms of reconstruction accuracy and execution time. The proposed algorithm is thus provided for free to the scientific community, for regular use, and for possible further optimization.
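As a point of reference for this family of methods, a plain SIRT-type update (one classical iterative reconstruction scheme, not the adaptive algorithm proposed in the paper) can be written compactly, assuming a nonnegative system matrix of ray intersection lengths:

```python
import numpy as np

def sirt(A, y, n_iter=100, relax=1.0):
    """SIRT-type iteration for sparse-view CT:
    x <- x + relax * C * A^T (R * (y - A x)),
    with inverse row-sum (R) and column-sum (C) normalizations.
    A: (n_rays, n_pixels) nonnegative system matrix; y: measured sinogram."""
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += relax * C * (A.T @ (R * (y - A @ x)))
    return x
```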
Time series modeling and forecasting using memetic algorithms for regime-switching models.
Bergmeir, Christoph; Triguero, Isaac; Molina, Daniel; Aznarte, José Luis; Benitez, José Manuel
2012-11-01
In this brief, we present a novel model fitting procedure for the neuro-coefficient smooth transition autoregressive model (NCSTAR), as presented by Medeiros and Veiga. The model is endowed with a statistically founded iterative building procedure and can be interpreted in terms of fuzzy rule-based systems. The interpretability of the generated models and a mathematically sound building procedure are two very important properties of forecasting models. The model fitting procedure employed by the original NCSTAR is a combination of initial parameter estimation by a grid search procedure with a traditional local search algorithm. We propose a different fitting procedure, using a memetic algorithm, in order to obtain more accurate models. An empirical evaluation of the method is performed, applying it to various real-world time series originating from three forecasting competitions. The results indicate that we can significantly enhance the accuracy of the models, making them competitive to models commonly used in the field.
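A memetic algorithm is an evolutionary loop with a local search applied to each offspring. A generic skeleton under that definition, with placeholder crossover, mutation, and coordinate-wise local search; the paper's NCSTAR-specific operators are not reproduced here:

```python
import random

def memetic_fit(loss, init_pop, generations=50):
    """Skeleton memetic algorithm: evolutionary variation plus local
    refinement of each offspring. loss maps a parameter list to a scalar,
    e.g., the model's fitting error on a time series."""
    pop = list(init_pop)
    for _ in range(generations):
        a, b = random.sample(pop, 2)
        child = [(x + y) / 2 for x, y in zip(a, b)]           # crossover
        child = [x + random.gauss(0, 0.1) for x in child]     # mutation
        for i in range(len(child)):                           # local search
            for step in (+0.05, -0.05):
                trial = child[:]
                trial[i] += step
                if loss(trial) < loss(child):
                    child = trial
        pop.append(child)
        pop.sort(key=loss)
        pop = pop[:len(init_pop)]                             # survivor selection
    return pop[0]
```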
Radar altimeter waveform modeled parameter recovery. [SEASAT-1 data
NASA Technical Reports Server (NTRS)
1981-01-01
Satellite-borne radar altimeters include waveform sampling gates providing point samples of the transmitted radar pulse after its scattering from the ocean's surface. Averages of the waveform sampler data can be fitted by varying parameters in a model mean return waveform. The theoretical waveform model used is described, as well as a general iterative nonlinear least squares procedure used to obtain estimates of parameters characterizing the modeled waveform for SEASAT-1 data. The six waveform parameters recovered by the fitting procedure are: (1) amplitude; (2) time origin, or track point; (3) ocean surface rms roughness; (4) noise baseline; (5) ocean surface skewness; and (6) altitude or off-nadir angle. Additional practical processing considerations are addressed, and FORTRAN source listings for the subroutines used in the waveform fitting are included. While the description is for the SEASAT-1 altimeter waveform data analysis, the work can easily be generalized and extended to other radar altimeter systems.
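The fitting step can be illustrated with a reduced model. A sketch using SciPy's iterative nonlinear least squares on a simplified Brown-style leading-edge waveform; the skewness and off-nadir terms of the full model are omitted, and the gate times and parameter values are made up:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.special import erf

def mean_return(t, amp, t0, sigma, noise):
    """Simplified mean ocean return: noise floor plus a smoothed
    leading edge whose width reflects surface rms roughness."""
    return noise + 0.5 * amp * (1.0 + erf((t - t0) / (np.sqrt(2) * sigma)))

gates = np.linspace(-20, 20, 60)          # waveform gate times (illustrative)
rng = np.random.default_rng(0)
data = mean_return(gates, 1.0, 0.5, 3.0, 0.05) + rng.normal(0, 0.01, gates.size)

fit = least_squares(
    lambda p: mean_return(gates, *p) - data,   # residuals to minimize
    x0=[0.8, 0.0, 2.0, 0.0])                   # initial parameter guesses
print(fit.x)   # recovered amplitude, track point, roughness, noise baseline
```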
Iterative LQG Controller Design Through Closed-Loop Identification
NASA Technical Reports Server (NTRS)
Hsiao, Min-Hung; Huang, Jen-Kuang; Cox, David E.
1996-01-01
This paper presents an iterative Linear Quadratic Gaussian (LQG) controller design approach for a linear stochastic system with an uncertain open-loop model and unknown noise statistics. This approach consists of closed-loop identification and controller redesign cycles. In each cycle, the closed-loop identification method is used to identify an open-loop model and a steady-state Kalman filter gain from closed-loop input/output test data obtained by using a feedback LQG controller designed in the previous cycle. Then the identified open-loop model is used to redesign the state feedback. The state feedback and the identified Kalman filter gain are used to form an updated LQG controller for the next cycle. This iterative process continues until the updated controller converges. The proposed controller design is demonstrated by numerical simulations and experiments on a highly unstable large-gap magnetic suspension system.
Zheng, Wenjun; Brooks, Bernard R
2006-06-15
Recently we have developed a normal-modes-based algorithm that predicts the direction of protein conformational changes given the initial state crystal structure together with a small number of pairwise distance constraints for the end state. Here we significantly extend this method to accurately model both the direction and the amplitude of protein conformational changes. The new protocol implements a multistep search in the conformational space that is driven by iteratively minimizing the error of fitting the given distance constraints while simultaneously enforcing the restraint of low elastic energy. At each step, an incremental structural displacement is computed as a linear combination of the lowest 10 normal modes derived from an elastic network model, whose eigenvectors are reoriented to correct for the distortions caused by the structural displacements in the previous steps. We test this method on a list of 16 pairs of protein structures for which relatively large conformational changes are observed (root mean square deviation >3 angstroms), using up to 10 pairwise distance constraints selected by a fluctuation analysis of the initial state structures. This method has achieved near-optimal performance in almost all cases, and in many cases the final structural models lie within a root mean square deviation of 1-2 angstroms of the native end state structures.
Composition of Web Services Using Markov Decision Processes and Dynamic Programming
Uc-Cetina, Víctor; Moo-Mena, Francisco; Hernandez-Ucan, Rafael
2015-01-01
We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach, with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best one in terms of the minimum number of iterations needed to estimate an optimal policy, with the highest Quality of Service attributes. Our experimental work shows how the solution of a WSC problem involving a set of 100,000 individual Web services and where a valid composition requiring the selection of 1,000 services from the available set can be computed in the worst case in less than 200 seconds, using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds, using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, sarsa and Q-learning, shows that these algorithms require one or two orders of magnitude and more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity. PMID:25874247
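Of the three dynamic-programming methods named, value iteration is the most compact to sketch. A generic implementation for a small MDP; the paper's Web-service state and action encodings are not reproduced here:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """P[a] is an (S, S) transition matrix for action a; R is an (S, A)
    reward table. Iterates the Bellman optimality backup to a fixed point,
    then reads off the greedy policy."""
    n_actions = len(P)
    V = np.zeros(R.shape[0])
    while True:
        Q = np.stack([R[:, a] + gamma * P[a] @ V for a in range(n_actions)],
                     axis=1)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)   # optimal values and policy
        V = V_new
```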
Kernel approach to molecular similarity based on iterative graph similarity.
Rupp, Matthias; Proschak, Ewgenij; Schneider, Gisbert
2007-01-01
Similarity measures for molecules are of basic importance in chemical, biological, and pharmaceutical applications. We introduce a molecular similarity measure defined directly on the annotated molecular graph, based on iterative graph similarity and optimal assignments. We give an iterative algorithm for the computation of the proposed molecular similarity measure, prove its convergence and the uniqueness of the solution, and provide an upper bound on the required number of iterations necessary to achieve a desired precision. Empirical evidence for the positive semidefiniteness of certain parametrizations of our function is presented. We evaluated our molecular similarity measure by using it as a kernel in support vector machine classification and regression applied to several pharmaceutical and toxicological data sets, with encouraging results.
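The underlying fixed-point idea, that two nodes are similar if their neighbors are similar, can be sketched for plain adjacency matrices; the paper's annotated molecular graphs and optimal-assignment step are omitted:

```python
import numpy as np

def iterative_graph_similarity(A1, A2, n_iter=50):
    """Fixed-point node-similarity iteration between two graphs with
    adjacency matrices A1 (n x n) and A2 (m x m). Normalization keeps
    the iteration bounded; this is the generic scheme only."""
    S = np.ones((A1.shape[0], A2.shape[0]))
    for _ in range(n_iter):
        S = A1 @ S @ A2.T                 # propagate neighbor similarity
        S /= np.linalg.norm(S) or 1.0     # renormalize each iteration
    return S
```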
Hybrid pairwise likelihood analysis of animal behavior experiments.
Cattelan, Manuela; Varin, Cristiano
2013-12-01
The study of the determinants of fights between animals is an important issue in understanding animal behavior. For this purpose, tournament experiments among a set of animals are often used by zoologists. The results of these tournament experiments are naturally analyzed by paired comparison models. Proper statistical analysis of these models is complicated by the presence of dependence between the outcomes of fights because the same animal is involved in different contests. This paper discusses two different model specifications to account for between-fights dependence. Models are fitted through the hybrid pairwise likelihood method that iterates between optimal estimating equations for the regression parameters and pairwise likelihood inference for the association parameters. This approach requires the specification of means and covariances only. For this reason, the method can be applied also when the computation of the joint distribution is difficult or inconvenient. The proposed methodology is investigated by simulation studies and applied to real data about adult male Cape Dwarf Chameleons. © 2013, The International Biometric Society.
Derieppe, Marc; de Senneville, Baudouin Denis; Kuijf, Hugo; Moonen, Chrit; Bos, Clemens
2014-10-01
Previously, we demonstrated the feasibility of monitoring ultrasound-mediated uptake of a cell-impermeable model drug in real time with fibered confocal fluorescence microscopy. Here, we present a complete post-processing methodology, which corrects for cell displacements, to improve the accuracy of pharmacokinetic parameter estimation. Nucleus detection was performed based on the radial symmetry transform algorithm. Cell tracking used an iterative closest point approach. Pharmacokinetic parameters were calculated by fitting a two-compartment model to the time-intensity curves of individual cells. Cells were tracked successfully, improving time-intensity curve accuracy and pharmacokinetic parameter estimation. With tracking, 93% of the 370 nuclei showed a fluorescence signal variation that was well described by a two-compartment model. In addition, parameter distributions were narrower, thus increasing precision. Dedicated image analysis was implemented and enabled studying ultrasound-mediated model drug uptake kinetics in hundreds of cells per experiment, using fiber-based confocal fluorescence microscopy.
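The per-cell kinetic fit can be illustrated with a reduced two-compartment model. A sketch with made-up rate constants and synthetic time-intensity data; the paper's exact compartmental formulation may differ:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_compartment(t, k_in, k_out):
    """Uptake into the intracellular compartment from a constant
    extracellular concentration: dC/dt = k_in - k_out * C, C(0) = 0."""
    return (k_in / k_out) * (1.0 - np.exp(-k_out * t))

t = np.linspace(0, 120, 60)                     # seconds after sonication
rng = np.random.default_rng(3)
y = two_compartment(t, 0.02, 0.05) + rng.normal(0, 0.01, t.size)

(k_in, k_out), _ = curve_fit(two_compartment, t, y, p0=[0.01, 0.01])
print(k_in, k_out)        # recovered rate constants for this cell
```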
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1986-01-01
Computational predictions of turbulent flow in sharply curved 180 degree turn around ducts are presented. The CNS2D computer code is used to solve the equations of motion for two-dimensional incompressible flows transformed to a nonorthogonal body-fitted coordinate system. This procedure incorporates the pressure velocity correction algorithm SIMPLE-C to iteratively solve a discretized form of the transformed equations. A multiple scale turbulence model based on simplified spectral partitioning is employed to obtain closure. Flow field predictions utilizing the multiple scale model are compared to features predicted by the traditional single scale k-epsilon model. Tuning parameter sensitivities of the multiple scale model applied to turn around duct flows are also determined. In addition, a wall function approach based on a wall law suitable for incompressible turbulent boundary layers under strong adverse pressure gradients is tested. Turn around duct flow characteristics utilizing this modified wall law are presented and compared to results based on a standard wall treatment.
Caldicott, Catherine V; Danis, Marion
2013-01-01
OBJECTIVES In order to teach medical students to engage more fully with patients, we offer ethics education as a tool to assist in the management of patient health issues. METHODS We propose that many dilemmas in clinical medicine would benefit by having the doctor embark on an iterative reasoning process with the patient. Such a process acknowledges and engages the patient as a moral agent. We recommend employing Kant’s ethic of respect and a more inclusive definition of patient autonomy drawn from philosophy and clinical medicine, rather than simply presenting dichotomous choices to patients, which represents a common, but often suboptimal, means of approaching both medical and moral concerns. DISCUSSION We describe how more nuanced teaching about the ethics of the doctor–patient relationship might fit into the medical curriculum and offer practical suggestions for implementing a more respectful, morally engaged relationship with patients that should assist them to achieve meaningful health goals. PMID:19250356
Iterative image reconstruction for PROPELLER-MRI using the nonuniform fast fourier transform.
Tamhane, Ashish A; Anastasio, Mark A; Gui, Minzhi; Arfanakis, Konstantinos
2010-07-01
To investigate an iterative image reconstruction algorithm using the nonuniform fast Fourier transform (NUFFT) for PROPELLER (Periodically Rotated Overlapping ParallEL Lines with Enhanced Reconstruction) MRI. Numerical simulations, as well as experiments on a phantom and a healthy human subject were used to evaluate the performance of the iterative image reconstruction algorithm for PROPELLER, and compare it with that of conventional gridding. The trade-off between spatial resolution, signal to noise ratio, and image artifacts, was investigated for different values of the regularization parameter. The performance of the iterative image reconstruction algorithm in the presence of motion was also evaluated. It was demonstrated that, for a certain range of values of the regularization parameter, iterative reconstruction produced images with significantly increased signal to noise ratio, reduced artifacts, for similar spatial resolution, compared with gridding. Furthermore, the ability to reduce the effects of motion in PROPELLER-MRI was maintained when using the iterative reconstruction approach. An iterative image reconstruction technique based on the NUFFT was investigated for PROPELLER MRI. For a certain range of values of the regularization parameter, the new reconstruction technique may provide PROPELLER images with improved image quality compared with conventional gridding. (c) 2010 Wiley-Liss, Inc.
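A regularized iterative reconstruction of this kind can be sketched generically: plain gradient descent on a quadratically regularized least-squares objective, with a dense matrix standing in for the NUFFT-based PROPELLER system operator. This is one simple member of the family, not the paper's exact algorithm:

```python
import numpy as np

def iterative_recon(A, y, lam=0.05, step=None, n_iter=200):
    """Gradient descent on ||A x - y||^2 + lam * ||x||^2. In the PROPELLER
    setting A would be a NUFFT operator over the rotated blades; here a
    dense (complex-capable) matrix stands in for it."""
    if step is None:
        step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)   # convergent step size
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        grad = A.conj().T @ (A @ x - y) + lam * x
        x -= step * grad
    return x
```

Larger values of `lam` trade spatial resolution for noise suppression, which mirrors the regularization-parameter trade-off studied in the abstract.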
Analysis of Physical Properties of Dust Suspended in the Mars Atmosphere
NASA Technical Reports Server (NTRS)
Snook, Kelly; McKay, Chris; Cantwell, Brian
1998-01-01
Methods for iteratively determining the infrared optical constants of dust suspended in the Mars atmosphere are described. High quality spectra for wavenumbers from 200 to 2000 1/cm were obtained over a wide range of view angles by the Mariner 9 spacecraft, when it observed a global Martian dust storm in 1971-72. In this research, theoretical spectra of the emergent intensity from Martian dust clouds are generated using a 2-stream source-function radiative transfer code. The code computes the radiation field in a plane-parallel, vertically homogeneous, multiply scattering atmosphere. Calculated intensity spectra are compared with the actual spacecraft data to iteratively retrieve the optical properties and opacity of the dust, as well as the surface temperature of Mars at the time and location of each measurement. Many different particle size distributions are investigated to determine the best fit to the data. The particles are assumed spherical, and the temperature profile was obtained from the CO2 band shape. Given a reasonable initial guess for the indices of refraction, the searches converge in a well-behaved fashion, producing a fit with error of less than 1.2 K (rms) to the observed brightness spectra. The particle size distribution corresponding to the best fit was a lognormal distribution with a mean particle radius r(sub m) = 0.66 microns and variance omega(sup 2) = 0.412 (r(sub eff) = 1.85 microns, v(sub eff) = 0.51), in close agreement with the size distribution found to be the best fit in the visible wavelengths in recent studies. The optical properties and the associated single scattering properties are shown to be a significant improvement over those used in existing models by demonstrating the effects of the new properties both on heating rates of the Mars atmosphere and in an example spectral retrieval of surface characteristics from emission spectra.
Ramani, Sathish; Liu, Zhihao; Rosen, Jeffrey; Nielsen, Jon-Fredrik; Fessler, Jeffrey A.
2012-01-01
Regularized iterative reconstruction algorithms for imaging inverse problems require selection of appropriate regularization parameter values. We focus on the challenging problem of tuning regularization parameters for nonlinear algorithms in the case of additive (possibly complex) Gaussian noise. Generalized cross-validation (GCV) and (weighted) mean-squared error (MSE) approaches (based on Stein's Unbiased Risk Estimate, SURE) need the Jacobian matrix of the nonlinear reconstruction operator (representative of the iterative algorithm) with respect to the data. We derive the desired Jacobian matrix for two types of nonlinear iterative algorithms: a fast variant of the standard iterative reweighted least-squares method and the contemporary split-Bregman algorithm, both of which can accommodate a wide variety of analysis- and synthesis-type regularizers. The proposed approach iteratively computes two weighted SURE-type measures, Predicted-SURE and Projected-SURE (which require knowledge of the noise variance σ^2), and GCV (which does not need σ^2) for these algorithms. We apply the methods to image restoration and to magnetic resonance image (MRI) reconstruction using total variation (TV) and an analysis-type ℓ1-regularization. We demonstrate through simulations and experiments with real data that minimizing Predicted-SURE and Projected-SURE consistently leads to near-MSE-optimal reconstructions. We also observed that minimizing GCV yields reconstruction results that are near-MSE-optimal for image restoration and slightly sub-optimal for MRI. Theoretical derivations in this work related to Jacobian matrix evaluations can be extended, in principle, to other types of regularizers and reconstruction algorithms. PMID:22531764
Hybrid cloud and cluster computing paradigms for life science applications.
Qiu, Judy; Ekanayake, Jaliya; Gunarathne, Thilina; Choi, Jong Youl; Bae, Seung-Hee; Li, Hui; Zhang, Bingjing; Wu, Tak-Lon; Ruan, Yang; Ekanayake, Saliya; Hughes, Adam; Fox, Geoffrey
2010-12-21
Clouds and MapReduce have shown themselves to be a broadly useful approach to scientific computing especially for parallel data intensive applications. However they have limited applicability to some areas such as data mining because MapReduce has poor performance on problems with an iterative structure present in the linear algebra that underlies much data analysis. Such problems can be run efficiently on clusters using MPI leading to a hybrid cloud and cluster environment. This motivates the design and implementation of an open source Iterative MapReduce system Twister. Comparisons of Amazon, Azure, and traditional Linux and Windows environments on common applications have shown encouraging performance and usability comparisons in several important non iterative cases. These are linked to MPI applications for final stages of the data analysis. Further we have released the open source Twister Iterative MapReduce and benchmarked it against basic MapReduce (Hadoop) and MPI in information retrieval and life sciences applications. The hybrid cloud (MapReduce) and cluster (MPI) approach offers an attractive production environment while Twister promises a uniform programming environment for many Life Sciences applications. We used commercial clouds Amazon and Azure and the NSF resource FutureGrid to perform detailed comparisons and evaluations of different approaches to data intensive computing. Several applications were developed in MPI, MapReduce and Twister in these different environments.
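The iterative-MapReduce pattern that Twister targets is easiest to see on k-means. A single-process sketch in which each iteration performs a map (assign points to centroids) and a reduce (average per centroid), with the reduced output fed to the next iteration:

```python
import numpy as np

def kmeans_mapreduce(points, centroids, n_iter=10):
    """K-means phrased as an iterative MapReduce job: map emits
    (nearest-centroid index, point) pairs, reduce averages per key,
    and the reduced centroids seed the next iteration."""
    for _ in range(n_iter):
        # map: key each point by its nearest centroid
        keys = np.argmin(((points[:, None, :] - centroids[None]) ** 2).sum(-1),
                         axis=1)
        # reduce: average the points assigned to each key
        centroids = np.array([points[keys == k].mean(axis=0)
                              if (keys == k).any() else centroids[k]
                              for k in range(len(centroids))])
    return centroids
```

In plain Hadoop-style MapReduce each of these iterations is a separate job with full data reloading, which is the overhead that iterative frameworks such as Twister are designed to avoid.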
Model fit evaluation in multilevel structural equation models
Ryu, Ehri
2014-01-01
Assessing goodness of model fit is one of the key questions in structural equation modeling (SEM). Goodness of fit is the extent to which the hypothesized model reproduces the multivariate structure underlying the set of variables. During the earlier development of multilevel structural equation models, the "standard" approach was to evaluate the goodness of fit for the entire model across all levels simultaneously. The model fit statistics produced by the standard approach have a potential problem in detecting lack of fit in the higher-level model, for which the effective sample size is much smaller. Also, when the standard approach results in poor model fit, it is not clear at which level the model does not fit well. This article reviews two alternative approaches that have been proposed to overcome the limitations of the standard approach. One is a two-step procedure which first produces estimates of saturated covariance matrices at each level and then performs single-level analysis at each level with the estimated covariance matrices as input (Yuan and Bentler, 2007). The other, level-specific approach utilizes partially saturated models to obtain test statistics and fit indices for each level separately (Ryu and West, 2009). Simulation studies (e.g., Yuan and Bentler, 2007; Ryu and West, 2009) have consistently shown that both alternative approaches performed well in detecting lack of fit at any level, whereas the standard approach failed to detect lack of fit at the higher level. It is recommended that the alternative approaches be used to assess model fit in multilevel structural equation models. Advantages and disadvantages of the two alternative approaches are discussed. The alternative approaches are demonstrated in an empirical example. PMID:24550882
Smith, Samuel G; Wolf, Michael S; Obichere, Austin; Raine, Rosalind; Wardle, Jane; von Wagner, Christian
2013-12-01
To design and user-test a 'gist-based' colorectal cancer screening information leaflet, which promotes comprehension of the screening offer. Twenty-eight individuals approaching screening age were recruited from organisations in deprived areas of England. Using a between-subjects design, we tested iterations of a newly-designed gist-based information leaflet. Participants read the leaflet and answered 8 'true' or 'false' comprehension statements. For the leaflet to be considered fit-for-purpose, all statements had to be answered correctly by at least 80% of participants in each round. Alterations were made if this threshold was not met, and additional rounds of testing were undertaken. At round 1, answers to 2/8 statements did not meet the threshold. After changes, answers in round 2 did not reach the threshold for 1/8 statements. In round 3, all answers were adequate and the leaflet was deemed fit-for-purpose. Qualitative data offered solutions such as language and layout changes which led to improved comprehension of the leaflet. User-testing substantially improved the design and subsequent comprehensibility of a theory-driven gist-based colorectal cancer screening information leaflet. This leaflet will be evaluated as part of a large national randomised controlled trial designed to reduce socioeconomic inequalities in colorectal cancer screening participation. Copyright © 2013 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
McClelland, Jamie R.; Modat, Marc; Arridge, Simon; Grimes, Helen; D'Souza, Derek; Thomas, David; O' Connell, Dylan; Low, Daniel A.; Kaza, Evangelia; Collins, David J.; Leach, Martin O.; Hawkes, David J.
2017-06-01
Surrogate-driven respiratory motion models relate the motion of the internal anatomy to easily acquired respiratory surrogate signals, such as the motion of the skin surface. They are usually built by first using image registration to determine the motion from a number of dynamic images, and then fitting a correspondence model relating the motion to the surrogate signals. In this paper we present a generalized framework that unifies the image registration and correspondence model fitting into a single optimization. This allows the use of ‘partial’ imaging data, such as individual slices, projections, or k-space data, where it would not be possible to determine the motion from an individual frame of data. Motion compensated image reconstruction can also be incorporated using an iterative approach, so that both the motion and a motion-free image can be estimated from the partial image data. The framework has been applied to real 4DCT, Cine CT, multi-slice CT, and multi-slice MR data, as well as simulated datasets from a computer phantom. This includes the use of a super-resolution reconstruction method for the multi-slice MR data. Good results were obtained for all datasets, including quantitative results for the 4DCT and phantom datasets where the ground truth motion was known or could be estimated.
Streak camera based SLR receiver for two color atmospheric measurements
NASA Technical Reports Server (NTRS)
Varghese, Thomas K.; Clarke, Christopher; Oldham, Thomas; Selden, Michael
1993-01-01
To realize accurate two-color differential measurements, an image digitizing system with variable spatial resolution was designed, built, and integrated with a photon-counting picosecond streak camera, yielding a temporal scan resolution better than 300 femtoseconds/pixel. The streak camera is configured to operate with 3 spatial channels; two of these support green (532 nm) and UV (355 nm) wavelengths, while the third accommodates reference pulses (764 nm) for real-time calibration. Critical parameters affecting differential timing accuracy, such as pulse width and shape, number of received photons, streak camera/imaging system nonlinearities, dynamic range, and noise characteristics, were investigated to optimize the system for accurate differential delay measurements. The streak camera output image consists of three image fields; each field is 1024 pixels along the time axis and 16 pixels across the spatial axis. Each of the image fields may be independently positioned across the spatial axis. Two of the image fields are used for the two wavelengths used in the experiment; the third window measures the temporal separation of a pair of diode laser pulses which verify the streak camera sweep speed for each data frame. The sum of the 16 pixel intensities across each of the 1024 temporal positions for the three data windows is used to extract the three waveforms. The waveform data are processed using an iterative three-point running average filter (10 to 30 iterations are used) to remove high-frequency structure. The pulse pair separations are determined using half-max and centroid type analyses. Rigorous experimental verification has demonstrated that this simplified process provides the best measurement accuracy. To calibrate the receiver system sweep, two laser pulses with precisely known temporal separation are scanned along the full length of the sweep axis. The experimental measurements are then modeled using polynomial regression to obtain a best fit to the data. Data aggregation using a normal point approach has provided accurate data fitting and is found to be much more convenient than using the full rate single shot data. The systematic errors from this model have been found to be less than 3 ps for normal points.
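The filtering and timing steps described can be sketched directly: an iterated three-point running average followed by a centroid estimate, assuming a baseline-subtracted, nonnegative waveform (the half-max analysis is omitted here):

```python
import numpy as np

def smooth_and_centroid(waveform, n_pass=20):
    """Iterated three-point running average (10-30 passes, as above)
    followed by an intensity-weighted centroid of the pulse position."""
    w = waveform.astype(float)
    kernel = np.array([1.0, 1.0, 1.0]) / 3.0
    for _ in range(n_pass):
        w = np.convolve(w, kernel, mode="same")
    t = np.arange(w.size)
    return (t * w).sum() / w.sum()   # centroid in pixel units
```

The pulse-pair separation is then the difference of the centroids computed for the two channels.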
Bindu, G; Semenov, S
2013-01-01
This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods have been employed for image reconstruction, with the extremity imaging done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method of solving the time domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm is tested with 10% noise inclusion, and successful image reconstruction has been demonstrated, implying its robustness.
On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms
NASA Astrophysics Data System (ADS)
Lässig, Jörg; Hoffmann, Karl Heinz
The paper considers the problem of selecting individuals from the current population in genetic algorithms for crossover, with the goal of finding a high-fitness solution to a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task, but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending order by their fitness values. This means uniform probabilities have to be assigned to a group of the best individuals of the population, and probabilities equal to zero to individuals with lower fitness, assuming that the probability distribution for choosing individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized to typical practically applied performance measures, such as maximizing the expected fitness value of the best individual seen in any generation.
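The rectangular distribution described is truncation selection: uniform probability over the k fittest individuals, zero for the rest. A minimal sketch:

```python
import random

def rectangular_selection(population, fitness, k):
    """Selection scheme described above: choose uniformly among the k
    fittest individuals; individuals of lower fitness get probability zero."""
    ranked = sorted(population, key=fitness, reverse=True)
    return random.choice(ranked[:k])

# Example: select a parent from a population of candidate bitstrings,
# scoring by the (illustrative) number of ones.
parent = rectangular_selection([[0, 1, 1], [1, 1, 1], [0, 0, 1]],
                               fitness=sum, k=2)
```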
A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators
NASA Technical Reports Server (NTRS)
Snyder, David B.; Wolford, David S.
2012-01-01
NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc for small source changes on each sub-cell. A matrix, A, is produced. This is normalized to unit changes in the sources so that A·Δs = ΔIsc. This matrix can now be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, Δs = A⁻¹·ΔIsc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not performed each time, since the measurement matrix needs to be only approximate. Because an iterative approach is used, the method will still continue to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
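The iteration reduces to a fixed-point loop around the inverted sensitivity matrix. In the sketch below, `measure_isc` is a hypothetical stand-in for reading the three sub-cell currents at the current source settings; A is the normalized sensitivity matrix described in the abstract.

```python
import numpy as np

def adjust_sources(measure_isc, isc_target, s0, A, tol=1e-3, max_steps=10):
    """Adjust all simulator sources each step until every sub-cell reaches
    its AM0-calibrated short-circuit current. A maps unit source changes
    to sub-cell Isc changes and only needs to be approximate."""
    A_inv = np.linalg.inv(A)
    s = np.asarray(s0, dtype=float)
    for _ in range(max_steps):
        delta_isc = np.asarray(isc_target) - measure_isc(s)
        if np.max(np.abs(delta_isc)) < tol:
            break
        s = s + A_inv @ delta_isc   # Δs = A⁻¹·ΔIsc
    return s
```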
Fostering Earth Observation Regional Networks - Integrative and Iterative Approaches to Capacity Building
NASA Astrophysics Data System (ADS)
Habtezion, S.
2015-12-01
As part of the Global Observation of Forest and Land Cover Dynamics (GOFC-GOLD) project partnership effort to promote the use of earth observations in advancing scientific knowledge, the Global Change SySTem for Analysis, Training and Research (START) works to bridge capacity needs related to earth observations (EOs) and their applications in the developing world. GOFC-GOLD regional networks, fostered through the support of regional and thematic workshops, have been successful in (1) enabling scientists from developing countries and from the US to collaborate on key GOFC-GOLD and Land Cover and Land Use Change (LCLUC) issues, including NASA Global Data Set validation, and (2) training young developing-country scientists in key skills in EOs data management and analysis. Members of the regional networks are also engaged and re-engaged in other EOs programs (e.g., the visiting scientists program and data initiative fellowship programs at the USGS EROS Center and Boston University), which has helped strengthen these networks. The presentation draws from these experiences in advocating for integrative and iterative approaches to capacity building through the lens of the GOFC-GOLD partnership effort. Specifically, this presentation describes the role of the GOFC-GOLD partnership in nurturing organic networks of scientists and EOs practitioners in Asia, Africa, Eastern Europe, and Latin America.
A fast method to emulate an iterative POCS image reconstruction algorithm.
Zeng, Gengsheng L
2017-10-01
Iterative image reconstruction algorithms are commonly used to optimize an objective function, especially when the objective function is nonquadratic. Generally speaking, iterative algorithms are computationally inefficient. This paper presents a fast algorithm that has one backprojection and no forward projection, and derives a new method to solve an optimization problem. The nonquadratic constraint, for example an edge-preserving denoising constraint, is implemented as a nonlinear filter. The algorithm is derived based on the POCS (projections onto convex sets) approach. A windowed FBP (filtered backprojection) algorithm enforces the data fidelity. An iterative procedure, divided into segments, enforces edge-preserving denoising; each segment performs nonlinear filtering. The derived iterative algorithm is computationally efficient: it contains only one backprojection and no forward projection. Low-dose CT data are used for algorithm feasibility studies. The nonlinearity is implemented as an edge-enhancing noise-smoothing filter. The patient study results demonstrate its effectiveness in processing low-dose x-ray CT data. This fast algorithm can be used to replace many iterative algorithms. © 2017 American Association of Physicists in Medicine.
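The paper's CT-specific construction is not reproduced here, but the underlying POCS principle is easy to illustrate: cycle the estimate through projection operators, each enforcing one convex constraint. A minimal sketch, with two toy constraint sets standing in for the data-fidelity and image-domain steps:

```python
import numpy as np

def pocs(x0, projections, iterations=50):
    """Generic POCS iteration: repeatedly project the estimate onto each
    convex constraint set in turn."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iterations):
        for project in projections:
            x = project(x)
    return x

# Toy constraints: unit ball (a stand-in for data fidelity) and
# non-negativity (a stand-in for the image-domain constraint).
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))
proj_nonneg = lambda x: np.maximum(x, 0.0)
x = pocs(np.array([-2.0, 3.0, 1.0]), [proj_ball, proj_nonneg])
```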
Loss, Leandro A.; Bebis, George; Parvin, Bahram
2012-01-01
In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and iterative voting frameworks. Its efficacy lies in the iterative refinement of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. This mode shifting is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is applied to the delineation of adherens junctions imaged through fluorescence microscopy. This class of membrane-bound macromolecules maintains tissue structural integrity and cell-cell interactions. Visually, it exhibits fibrous patterns that may be diffuse, punctate, and frequently only perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates. PMID:21421432
DOE Office of Scientific and Technical Information (OSTI.GOV)
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER, and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both MELCOR and RELAP5 codes.
Hopf, Barbara; Dutz, Franz J; Bosselmann, Thomas; Willsch, Michael; Koch, Alexander W; Roths, Johannes
2018-04-30
A new iterative matrix algorithm has been applied to improve the precision of temperature and force decoupling in multi-parameter FBG sensing. For the first time, this evaluation technique allows the integration of nonlinearities in the sensor's temperature characteristic and the temperature dependence of the sensor's force sensitivity. Applied to a sensor cable consisting of two FBGs in fibers with 80 µm and 125 µm cladding diameters, installed in a 7 m-long coiled PEEK capillary, the technique significantly reduced the uncertainties in friction-compensated temperature measurements. In the presence of high friction-induced forces of up to 1.6 N, the uncertainties in temperature evaluation were reduced from several degrees Celsius with a standard linear matrix approach to less than 0.5°C with the iterative matrix approach, over an extended temperature range between -35°C and 125°C.
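The abstract does not give the algorithm's exact form; the sketch below shows one plausible shape of such an iteration, under the assumption that the two FBG wavelength shifts are related to a temperature change and a force through a 2x2 sensitivity matrix whose entries are re-evaluated at the current temperature estimate. All function names are hypothetical.

```python
import numpy as np

def decouple(dl1, dl2, kT1, kT2, kF1, kF2, T_ref=20.0, tol=1e-4, max_iter=50):
    """Iteratively decouple a temperature change dT (from T_ref) and a
    force F from two FBG wavelength shifts. kT*/kF* are callables
    T -> sensitivity, capturing the nonlinear temperature characteristic
    and the temperature-dependent force sensitivity."""
    dT, F = 0.0, 0.0
    for _ in range(max_iter):
        T = T_ref + dT
        K = np.array([[kT1(T), kF1(T)],
                      [kT2(T), kF2(T)]])       # re-evaluate at current T
        dT_new, F = np.linalg.solve(K, [dl1, dl2])
        if abs(dT_new - dT) < tol:
            return dT_new, F
        dT = dT_new
    return dT, F
```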
Iterative feature refinement for accurate undersampled MR image reconstruction
NASA Astrophysics Data System (ADS)
Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong
2016-05-01
Accelerating MR scanning is of great significance for clinical, research, and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that would otherwise be discarded, without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement, and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
A Universal Tare Load Prediction Algorithm for Strain-Gage Balance Calibration Data Analysis
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2011-01-01
An algorithm is discussed that may be used to estimate tare loads of wind tunnel strain-gage balance calibration data. The algorithm was originally developed by R. Galway of IAR/NRC Canada and has been described in the literature for the iterative analysis technique. Basic ideas of Galway's algorithm, however, are universally applicable and work for both the iterative and the non-iterative analysis techniques. A recent modification of Galway's algorithm is presented that improves the convergence behavior of the tare load prediction process when it is used in combination with the non-iterative analysis technique. The modified algorithm allows an analyst to use an alternate method for the calculation of intermediate non-linear tare load estimates whenever Galway's original approach does not lead to convergence of the tare load iterations. It is also shown in detail how Galway's algorithm may be applied to the non-iterative analysis technique. Hand load data from the calibration of a six-component force balance are used to illustrate the application of the original and modified tare load prediction methods. During the analysis of the data both the iterative and the non-iterative analysis techniques were applied. Overall, predicted tare loads for combinations of the two tare load prediction methods and the two balance data analysis techniques showed excellent agreement as long as the tare load iterations converged. The modified algorithm, however, appears to have an advantage over the original algorithm when absolute voltage measurements of gage outputs are processed using the non-iterative analysis technique. In these situations only the modified algorithm converged, because it uses an exact solution of the intermediate non-linear tare load estimate for the tare load iteration.
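Galway's scheme itself is not reproduced in the abstract; the sketch below shows only the generic shape of a tare-load iteration of this kind, alternating a model fit on tare-corrected loads with a re-estimate of the tare loads from the zero-load gage outputs. `fit_model` and `predict_loads` are hypothetical placeholders for the chosen regression and its inverse load prediction.

```python
import numpy as np

def iterate_tare_loads(applied_loads, gage_outputs, zero_outputs,
                       fit_model, predict_loads, tol=1e-6, max_iter=100):
    """Alternate between (1) fitting the calibration model on loads
    corrected by the current tare estimate and (2) re-estimating the tare
    loads from the zero-load gage outputs, until the estimate settles."""
    tare = np.zeros(applied_loads.shape[1])
    model = None
    for _ in range(max_iter):
        model = fit_model(applied_loads + tare, gage_outputs)
        tare_new = predict_loads(model, zero_outputs)
        if np.max(np.abs(tare_new - tare)) < tol:
            return tare_new, model
        tare = tare_new
    return tare, model
```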
Generalizing the Iterative Proportional Fitting Procedure.
1980-04-01
Csiszár gives conditions under which P(R) exists (it is always unique) and develops a geometry of I-divergence by using an analogue of Pythagoras' Theorem. As our goal is to study maximum likelihood estimation in contingency tables, we turn briefly to the problem of estimating a multinomial... invoke a result of Csiszár (due originally to Kullback (1959)), giving the form of the density of the I-projection. Csiszár's Theorem 3.1, which we
ERIC Educational Resources Information Center
Kelderman, Henk
In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…
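For concreteness, a minimal sketch of the classical iterative proportional fitting procedure on a two-way table, alternately rescaling rows and columns to match target margins (the papers above generalize and modify this basic loop):

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-9, max_iter=1000):
    """Iterative proportional fitting: rescale a seed table until its row
    and column sums match the target margins."""
    table = np.asarray(seed, dtype=float).copy()
    for _ in range(max_iter):
        table *= (row_targets / table.sum(axis=1))[:, None]   # fit row sums
        table *= (col_targets / table.sum(axis=0))[None, :]   # fit column sums
        if np.allclose(table.sum(axis=1), row_targets, atol=tol):
            break
    return table

seed = np.array([[1.0, 2.0], [3.0, 4.0]])
fitted = ipf(seed, np.array([30.0, 70.0]), np.array([40.0, 60.0]))
```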
Hierarchical optimization for neutron scattering problems
Bao, Feng; Archibald, Rick; Bansal, Dipanshu; ...
2016-03-14
In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.
Discrete Fourier Transform Analysis in a Complex Vector Space
NASA Technical Reports Server (NTRS)
Dean, Bruce H.
2009-01-01
Alternative computational strategies for the Discrete Fourier Transform (DFT) have been developed using analysis of geometric manifolds. This approach provides a general framework for performing DFT calculations and suggests a more efficient implementation of the DFT for applications using iterative transform methods, particularly phase retrieval. The DFT can thus be implemented using fewer operations than the standard implementation. The software decreases the run time of the DFT in applications, such as phase retrieval, that iteratively call the DFT function. The algorithm exploits a special computational approach based on analysis of the DFT as a transformation in a complex vector space. As such, this approach has the potential to realize a DFT computation that approaches N operations, versus N log(N) operations for the equivalent Fast Fourier Transform (FFT) calculation.
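The vector-space view is the starting point: the DFT is simply multiplication by the Fourier matrix. The sketch below illustrates that view (it does not reproduce the manifold-based speedup described above):

```python
import numpy as np

def dft_matrix(N):
    """The DFT as a linear map on C^N: multiplication by the N x N
    Fourier matrix with entries exp(-2*pi*i*j*k/N)."""
    n = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(n, n) / N)

x = np.random.default_rng(1).standard_normal(8)
X = dft_matrix(8) @ x
assert np.allclose(X, np.fft.fft(x))   # matches the FFT result
```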
Radiative Heating Methodology for the Huygens Probe
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth
2007-01-01
The radiative heating environment for the Huygens probe near peak heating conditions for Titan entry is investigated in this paper. The task of calculating the radiation-coupled flowfield, accounting for non-Boltzmann and non-optically thin radiation, is simplified to a rapid yet accurate calculation. This is achieved by using the viscous-shock layer (VSL) technique for the stagnation-line flowfield calculation and a modified smeared rotational band (SRB) model for the radiation calculation. These two methods provide a computationally efficient alternative to a Navier-Stokes flowfield and line-by-line radiation calculation. The results of the VSL technique are shown to provide an excellent comparison with the Navier-Stokes results of previous studies. It is shown that a conventional SRB approach is inadequate for the partially optically-thick conditions present in the Huygens shock-layer around the peak heating trajectory points. A simple modification is proposed to the SRB model that improves its accuracy in these partially optically-thick conditions. This modified approach, labeled herein as SRBC, is compared throughout this study with a detailed line-by-line (LBL) calculation and is shown to compare within 5% in all cases. The SRBC method requires many orders-of-magnitude less computational time than the LBL method, which makes it ideal for coupling to the flowfield. The application of a collisional-radiative (CR) model for determining the population of the CN electronic states, which govern the radiation for Huygens entry, is discussed and applied. The non-local absorption term in the CR model is formulated in terms of an escape factor, which is then curve-fit with temperature. Although the curve-fit is an approximation, it is shown to compare well with the exact escape factor calculation, which requires a computationally intensive iteration procedure.
Gaussian beam and physical optics iteration technique for wideband beam waveguide feed design
NASA Technical Reports Server (NTRS)
Veruttipong, W.; Chen, J. C.; Bathker, D. A.
1991-01-01
The Gaussian beam technique has become increasingly popular for wideband beam waveguide (BWG) design. However, it is observed that the Gaussian solution is less accurate for smaller mirrors (approximately less than 30 lambda in diameter). Therefore, a high-performance wideband BWG design cannot be achieved by using the Gaussian beam technique alone. This article demonstrates a new design approach by iterating Gaussian beam and BWG parameters simultaneously at various frequencies to obtain a wideband BWG. The result is further improved by comparing it with physical optics results and repeating the iteration.
PID controller tuning using metaheuristic optimization algorithms for benchmark problems
NASA Astrophysics Data System (ADS)
Gholap, Vishal; Naik Dessai, Chaitali; Bagyaveereswaran, V.
2017-11-01
This paper finds optimal PID controller parameters using particle swarm optimization (PSO), a genetic algorithm (GA), and a simulated annealing (SA) algorithm. The algorithms are applied in simulation to a chemical process and an electrical system, and the PID controller is tuned. Two different fitness functions, Integral Time Absolute Error (ITAE) and time-domain specifications, were chosen and applied with PSO, GA, and SA while tuning the controller. The proposed algorithms are implemented on two benchmark problems: a coupled-tank system and a DC motor. Finally, a comparative study of the algorithms is presented based on best cost, number of iterations, and the different objective functions. The closed-loop process response for each set of tuned parameters is plotted for each system with each fitness function.
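As an illustration of the approach, the sketch below tunes (Kp, Ki, Kd) with a bare-bones PSO against an ITAE cost. The first-order plant, its time constant, the search bounds, and the PSO hyperparameters are all assumptions for the example, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def itae(gains, dt=0.01, t_end=5.0, tau=0.5):
    """ITAE cost for a discrete PID loop around an assumed first-order
    plant dy/dt = (-y + u)/tau tracking a unit step."""
    kp, ki, kd = gains
    y, integ, e_prev, cost = 0.0, 0.0, 1.0, 0.0
    for k in range(int(t_end / dt)):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - e_prev) / dt
        e_prev = e
        y += dt * (-y + u) / tau
        cost += (k * dt) * abs(e) * dt        # time-weighted absolute error
        if not np.isfinite(cost):
            return np.inf                     # penalize unstable gains
    return cost

# Bare-bones PSO over (Kp, Ki, Kd)
n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5
pos = rng.uniform(0.0, 10.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pcost = pos.copy(), np.array([itae(p) for p in pos])
gbest = pbest[np.argmin(pcost)].copy()
for _ in range(50):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    cost = np.array([itae(p) for p in pos])
    improved = cost < pcost
    pbest[improved], pcost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pcost)].copy()
print("tuned gains:", gbest)
```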
Intensity correlation imaging with sunlight-like source
NASA Astrophysics Data System (ADS)
Wang, Wentao; Tang, Zhiguo; Zheng, Huaibin; Chen, Hui; Yuan, Yuan; Liu, Jinbin; Liu, Yanyan; Xu, Zhuo
2018-05-01
We show a method of intensity correlation imaging of targets illuminated by a sunlight-like source, both theoretically and experimentally. With a Faraday anomalous dispersion optical filter (FADOF), we have modulated the coherence time of a thermal source up to 0.167 ns, and we carried out measurements of temporal and spatial correlations with an intensity interferometer setup. By using even Fourier fitting on the very sparse sampling data, the images of targets are successfully reconstructed from the low signal-to-noise ratio (SNR) interference pattern by applying an iterative phase retrieval algorithm. The resulting image quality is as good as that obtained by theoretical fitting. The realization of such a case will bring this technique closer to geostationary satellite imaging illuminated by sunlight.
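The paper does not specify which phase retrieval variant was used; the sketch below shows the classical error-reduction scheme, alternating a Fourier-magnitude constraint with object-domain support and non-negativity constraints.

```python
import numpy as np

def error_reduction(fourier_mag, support, iterations=200, seed=0):
    """Classical error-reduction phase retrieval: alternate between
    enforcing the measured Fourier magnitude and the object-domain
    constraints (known support, non-negativity)."""
    rng = np.random.default_rng(seed)
    phase0 = np.exp(2j * np.pi * rng.random(fourier_mag.shape))
    g = np.fft.ifft2(fourier_mag * phase0).real
    for _ in range(iterations):
        G = np.fft.fft2(g)
        G = fourier_mag * np.exp(1j * np.angle(G))   # magnitude constraint
        g = np.fft.ifft2(G).real
        g = np.maximum(g * support, 0.0)             # support + non-negativity
    return g
```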
Shape reanalysis and sensitivities utilizing preconditioned iterative boundary solvers
NASA Technical Reports Server (NTRS)
Guru Prasad, K.; Kane, J. H.
1992-01-01
The computational advantages associated with the utilization of preconditioned iterative equation solvers are quantified for the reanalysis of perturbed shapes using continuum structural boundary element analysis (BEA). Both single- and multi-zone three-dimensional problems are examined. Significant reductions in computer time are obtained by making use of previously computed solution vectors and preconditioners in subsequent analyses. The effectiveness of this technique is demonstrated for the computation of shape response sensitivities required in shape optimization. Computer times and accuracies achieved using the preconditioned iterative solvers are compared with those obtained via direct solvers and implicit differentiation of the boundary integral equations. It is concluded that this approach, employing preconditioned iterative equation solvers in reanalysis and sensitivity analysis, can be competitive with, if not superior to, approaches involving direct solvers.
The Routine Fitting of Kinetic Data to Models
Berman, Mones; Shahn, Ezra; Weiss, Marjory F.
1962-01-01
A mathematical formalism is presented for use with digital computers to permit the routine fitting of data to physical and mathematical models. Given a set of data, the mathematical equations describing a model, initial conditions for an experiment, and initial estimates for the values of model parameters, the computer program automatically proceeds to obtain a least squares fit of the data by an iterative adjustment of the values of the parameters. When the experimental measures are linear combinations of functions, the linear coefficients for a least squares fit may also be calculated. The values of both the parameters of the model and the coefficients for the sum of functions may be unknown independent variables, unknown dependent variables, or known constants. In the case of dependence, only linear dependencies are provided for in routine use. The computer program includes a number of subroutines, each one of which performs a special task. This permits flexibility in choosing various types of solutions and procedures. One subroutine, for example, handles linear differential equations, another, special non-linear functions, etc. The use of analytic or numerical solutions of equations is possible. PMID:13867975
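The iterative least-squares adjustment the paper describes is, in modern terms, a Gauss-Newton loop: linearize the model around the current parameter values, solve for a correction, and repeat. A minimal sketch (not the paper's program) fitting a two-parameter exponential model:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, iterations=20):
    """Least squares fit by iterative parameter adjustment: linearize the
    model around the current parameters and solve for the correction."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iterations):
        r = residual(p)
        J = jacobian(p)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
    return p

# Fit y = a * exp(-b * t) to noisy data
t = np.linspace(0.0, 4.0, 40)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * np.random.default_rng(2).standard_normal(t.size)
residual = lambda p: p[0] * np.exp(-p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(-p[1] * t),
                                      -p[0] * t * np.exp(-p[1] * t)])
a, b = gauss_newton(residual, jacobian, [1.0, 1.0])
```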
Zheng, Hanrong; Fang, Zujie; Wang, Zhaoyong; Lu, Bin; Cao, Yulong; Ye, Qing; Qu, Ronghui; Cai, Haiwen
2018-01-31
It is a basic task in Brillouin distributed fiber sensors to extract the peak frequency of the scattering spectrum, since the peak frequency shift gives information on fiber temperature and strain changes. Because of high-level noise, quadratic fitting is often used in the data processing. Formulas for the dependence of the minimum detectable Brillouin frequency shift (BFS) on the signal-to-noise ratio (SNR) and frequency step have been presented in previous publications, but with differing expressions. A detailed derivation of new formulas for the BFS variance and its average is given in this paper, showing in particular their dependence on the data range used in fitting, including its length and its center relative to the real spectral peak. The theoretical analyses are experimentally verified. It is shown that the center of the data range has a direct impact on the accuracy of the extracted BFS. We propose and demonstrate an iterative fitting method to mitigate such effects and improve the accuracy of BFS measurement. The different expressions of BFS variance presented in previous papers are explained and discussed.
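One plausible realization of such an iterative fit (the window half-width and iteration count below are assumptions for the sketch): fit a parabola around the current peak estimate, move the window center onto the fitted vertex, and refit.

```python
import numpy as np

def iterative_peak_fit(freq, gain, half_width=5, iterations=5):
    """Iterative quadratic fitting of a Brillouin gain spectrum: fit a
    parabola near the current peak estimate, then re-center the data
    window on the fitted peak and refit."""
    center = int(np.argmax(gain))                 # initial peak estimate
    peak_freq = freq[center]
    for _ in range(iterations):
        lo = max(center - half_width, 0)
        hi = min(center + half_width + 1, freq.size)
        a, b, c = np.polyfit(freq[lo:hi], gain[lo:hi], 2)
        peak_freq = -b / (2.0 * a)                # vertex of the parabola
        center = int(np.argmin(np.abs(freq - peak_freq)))
    return peak_freq
```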
Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach
Nithiananthan, Sajendra; Schafer, Sebastian; Uneri, Ali; Mirota, Daniel J.; Stayman, J. Webster; Zbijewski, Wojciech; Brock, Kristy K.; Daly, Michael J.; Chan, Harley; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2011-01-01
Purpose: A method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (“intensity”). Methods: A variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively. Results: The iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5±2.8) mm compared to (3.5±3.0) mm with rigid registration. Conclusions: A method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance. PMID:21626913
Iterative Methods to Solve Linear RF Fields in Hot Plasma
NASA Astrophysics Data System (ADS)
Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo
2014-10-01
Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding the design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full wave modeling of RF fields in hot plasma with 3D nonuniformities is largely impractical with direct solvers, whose memory demands place a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses, including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.
Gong, Pinghua; Zhang, Changshui; Lu, Zhaosong; Huang, Jianhua Z; Ye, Jieping
2013-01-01
Non-convex sparsity-inducing penalties have recently received considerable attention in sparse learning. Recent theoretical investigations have demonstrated their superiority over convex counterparts in several sparse learning settings. However, solving the non-convex optimization problems associated with non-convex penalties remains a significant challenge. A commonly used approach is Multi-Stage (MS) convex relaxation (or DC programming), which relaxes the original non-convex problem to a sequence of convex problems. This approach is usually not very practical for large-scale problems because its computational cost is a multiple of solving a single convex problem. In this paper, we propose a General Iterative Shrinkage and Thresholding (GIST) algorithm to solve the non-convex optimization problem for a large class of non-convex penalties. The GIST algorithm iteratively solves a proximal operator problem, which in turn has a closed-form solution for many commonly used penalties. At each outer iteration of the algorithm, we use a line search initialized by the Barzilai-Borwein (BB) rule that allows finding an appropriate step size quickly. The paper also presents a detailed convergence analysis of the GIST algorithm. The efficiency of the proposed algorithm is demonstrated by extensive experiments on large-scale data sets.
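A skeleton of this family of methods is shown below. For simplicity the proximal step shown is soft-thresholding (the closed-form prox of the convex L1 penalty); GIST substitutes the closed-form prox of the chosen non-convex penalty and wraps the BB step in a line search, which this sketch omits.

```python
import numpy as np

def ist_skeleton(A, y, lam, iterations=100):
    """Iterative shrinkage/thresholding for 0.5*||Ax - y||^2 + lam*penalty(x),
    with a Barzilai-Borwein step-size initialization as in GIST."""
    x = np.zeros(A.shape[1])
    x_old = g_old = None
    t = 1.0
    for _ in range(iterations):
        g = A.T @ (A @ x - y)                     # gradient of the smooth part
        if g_old is not None:
            s, d = x - x_old, g - g_old
            denom = s @ d
            t = (s @ s) / denom if denom > 0 else 1.0   # BB step size
        x_old, g_old = x, g
        z = x - t * g
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # prox step
    return x
```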
NASA Astrophysics Data System (ADS)
Schenone, D. J.; Igama, S.; Marash-Whitman, D.; Sloan, C.; Okansinski, A.; Moffet, A.; Grace, J. M.; Gentry, D.
2015-12-01
Experimental evolution of microorganisms in controlled microenvironments serves as a powerful tool for understanding the relationship between micro-scale microbial interactions and local- to global-scale environmental factors. In response to iterative and targeted environmental pressures, mutagenesis drives the emergence of novel phenotypes. Current methods to induce expression of these phenotypes require repetitive, time-intensive procedures and do not allow continuous monitoring of conditions such as optical density, pH, and temperature. To address this shortcoming, an Automated Dynamic Directed Evolution Chamber is being developed. It will initially produce Escherichia coli cells with an elevated UV-C resistance phenotype and will ultimately be adapted for different organisms as well as for studying environmental effects. A useful phenotype and environmental factor for examining this relationship are UV-C resistance and UV-C exposure. In order to establish a baseline for the device's operational parameters, a UV-C assay was performed on six E. coli replicates with three exposure fluxes across seven iterations. The fluxes included a 0-second exposure (control), 6 seconds at 3.3 J/m2/s, and 40 seconds at 0.5 J/m2/s. After each iteration the cells were regrown and tested for UV-C resistance. We sought to quantify the increase and variability of UV-C resistance among the different fluxes and to observe changes in each replicate at each iteration in terms of variance. We observed that the 0 s control showed no significant increase in resistance, while the 6 s and 40 s fluxes showed increased resistance as the number of iterations increased. A one-million-fold increase in survivability was observed after seven iterations. Through statistical analysis using Spearman's rank correlation, the 40 s exposure showed signs of more consistently increased resistance, but seven iterations were insufficient to demonstrate statistical significance; to test this further, our experiments will include more iterations. Furthermore, we plan to sequence all the replicates. As adaptation dynamics under intense UV exposure lead to a high rate of change, it would be useful to observe differences in tolerance-related and non-tolerance-related genes between the original and UV-resistant strains.
Kirk, Maggie; Tonkin, Emma; Skirton, Heather
2014-01-01
KIRK M., TONKIN E. & SKIRTON H. (2014) An iterative consensus-building approach to revising a genetics/genomics competency framework for nurse education in the UK. Journal of Advanced Nursing 70(2), 405-420. doi: 10.1111/jan.12207. Aim: To report a review of a genetics education framework using a consensus approach to agree on a contemporary and comprehensive revised framework. Background: Advances in genomic health care have been significant since the first genetics education framework for nurses was developed in 2003. These, coupled with developments in policy and international efforts to promote nursing competence in genetics, indicated that review was timely. Design: A structured, iterative, primarily qualitative approach, based on a nominal group technique. Method: A meeting convened in 2010 involved stakeholders in UK nursing education, practice and management, including patient representatives (n = 30). A consensus approach was used to solicit participants' views on the individual/family needs identified from real-life stories of people affected by genetic conditions and the nurses' knowledge, skills and attitudes needed to meet those needs. Five groups considered the stories in iterative rounds, reviewing comments from previous groups. Omissions and deficiencies were identified by mapping resulting themes to the original framework. Anonymous voting captured views. Educators at a second meeting developed learning outcomes for the final framework. Findings: Deficiencies in relation to Advocacy, Information management and Ongoing care were identified. All competencies of the original framework were revised, adding an eighth competency to make explicit the need for ongoing care of the individual/family. Conclusion: Modifications to the framework reflect individual/family needs and are relevant to the nursing role. The approach promoted engagement in a complex issue and provides a framework to guide nurse education in genetics/genomics; however, nursing leadership is crucial to successful implementation. PMID:23879662
How Surgeons Conceptualize Talent: A Qualitative Study Using Sport Science as a Lens.
Jensen, Rune Dall; Christensen, Mette Krogh; LaDonna, Kori A; Seyer-Hansen, Mikkel; Cristancho, Sayra
Debates prevail regarding the definition of surgical talent and how individuals with the potential to become talented surgeons can be identified and developed. However, over the past 30 years, talent has been studied extensively in other domains. The objective of this study is to explore notions of talent in surgery and sport in order to investigate whether the field of surgical education can benefit from expanding its view of talented performance. The study therefore uses the sport literature as a lens when exploring how surgeons conceptualize and define talent. Semi-structured interviews were conducted with a sample of 11 consultant surgeons from multiple specialties. We used constructivist grounded theory principles to explore talent in surgery. Ongoing data analysis refined the theoretical framework and iteratively informed data collection. Themes were identified iteratively using constant comparison. The setting included 8 separate hospitals across Canada and Denmark. A total of 11 consultant surgeons from 6 different surgical subspecialties (urology, orthopedic surgery, colorectal surgery, general surgery, vascular surgery, head & neck surgery) were included. We identified three key elements for conceptualizing surgical talent: (1) individual skills make the surgical prospect "good"; (2) a mixture of skills gives the surgical prospect the potential to become talented; and (3) becoming talented may rely on the fit between person and environment. We embarked on this study aiming to understand talent in surgery, and talent is a difficult construct to agree on; whether in medicine or sports, debates about talent will continue to persist, as we all perceive talent differently. While we heard different opinions, three key ideas summarize our participants' discussions regarding surgical talent. These findings resonate with the holistic ecological approach from sport science and hence highlight the limits of a reductionist approach, favoring the individual-environment system as the minimal ontology for describing talented performances. Copyright © 2017 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Yang, D.; Oldenburg, D.
2016-12-01
The salinization of the floodplains of the Lower Murray River in South Australia has had negative consequences for the local ecosystem. As part of the Living Murray Initiative, Clark's Floodplain at Bookpurnong was chosen to examine the effectiveness of different intervention methods from 2005 to 2008. Because of the link between groundwater salinity and electrical conductivity, electromagnetic (EM) methods have been an integral part of the project, testing them as a cost-effective monitoring tool. In this paper, we analyze two airborne EM surveys that assess salinization at the regional scale: the SkyTEM survey in 2006 and the RESOLVE survey in 2008. Conventional interpretation often inverts such data sets separately using 1D layered-earth modeling, which can produce inconsistent images when different surveys are carried out at different times. Here we propose a new approach that enforces coherence in time and across systems. We allow each data set to iteratively construct its own model with guidance from a common reference model that is updated in a democratic voting procedure after every iteration. There are two possible outcomes. If the data sets are intrinsically compatible, the individual models converge to essentially the same model, as in a regular unimodal joint inversion. If there are survey-specific errors or a change of ground truth, the inversion can still fit the data but leaves discrepancies between the models. By applying this approach to the two data sets at Bookpurnong, we identify an area of increased conductivity at the midstream section of the river that can only be explained by a temporal variation of salinity, plausible evidence of escalated saline water intrusion due to irrigation on the nearby riverbank. This study illustrates that multi-type time-lapse EM, in conjunction with advanced inversion techniques, can achieve superior temporal resolution for the purpose of groundwater evaluation and management.
Space shuttle propulsion parameter estimation using optimal estimation techniques
NASA Technical Reports Server (NTRS)
1983-01-01
The first twelve system state variables are presented with the necessary mathematical developments for incorporating them into the filter/smoother algorithm. Other state variables, e.g., aerodynamic coefficients, can be easily incorporated into the estimation algorithm as uncertain parameters, but for initial checkout purposes they are treated as known quantities. An approach for incorporating the NASA propulsion predictive model results into the optimal estimation algorithm was identified. This approach utilizes numerical derivatives and nominal predictions within the algorithm, with global iterations of the algorithm. The iterative process is terminated when the quality of the estimates no longer significantly improves.
Saddeek, Ali Mohamed
2017-01-01
Most mathematical models arising in stationary filtration processes, as well as in the theory of soft shells, can be described by single-valued or generalized multivalued pseudomonotone mixed variational inequalities with proper convex nondifferentiable functionals. For finding the minimum-norm solution of such inequalities, the current paper introduces a modified two-layer iteration via a boundary point approach and proves its strong convergence. The results here improve and extend the corresponding recent results announced by Badriev, Zadvornov and Saddeek (Differ. Equ. 37:934-942, 2001).
Method for accurate sizing of pulmonary vessels from 3D medical images
NASA Astrophysics Data System (ADS)
O'Dell, Walter G.
2015-03-01
Detailed characterization of vascular anatomy, in particular the quantification of changes in the distribution of vessel sizes and of vascular pruning, is essential for the diagnosis and management of a variety of pulmonary vascular diseases and for the care of cancer survivors who have received radiation to the thorax. Clinical estimates of vessel radii are typically based on setting a pixel intensity threshold and counting how many "On" pixels are present across the vessel cross-section. A more objective approach introduced recently involves fitting the image with a library of spherical Gaussian filters and using the size of the best-matching filter as the estimate of vessel diameter. However, both of these approaches have significant accuracy limitations, including the mismatch between a Gaussian intensity distribution and that of real vessels. Here we introduce and demonstrate a novel approach for accurate vessel sizing using 3D appearance models of a tubular structure along a curvilinear trajectory in 3D space. The vessel branch trajectories are represented with cubic Hermite splines and the tubular branch surfaces are represented as a finite element surface mesh. An iterative parameter adjustment scheme is employed to optimally match the appearance models to a patient's chest X-ray computed tomography (CT) scan, generating estimates of branch radii and trajectories with subpixel resolution. The method is demonstrated on pulmonary vasculature in an adult human CT scan and on 2D simulated test cases.
Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method
NASA Astrophysics Data System (ADS)
Mehl, S.
2012-12-01
Jacobian Free Newton Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or those that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, and so on, until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Fréchet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made between conventional iteratively coupled methods based on Picard iteration and those formulated with JFNK, to gain insight into the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
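The defining property, needing only a residual evaluation, is easy to demonstrate with SciPy's newton_krylov. The sketch below solves a small 1D unconfined-flow-like problem, d/dx(h dh/dx) = 0 with fixed boundary heads, chosen purely for illustration (it is not a Modflow coupling):

```python
import numpy as np
from scipy.optimize import newton_krylov

def residual(h):
    """Residual of d/dx (h dh/dx) = 0 on a uniform grid, i.e. the
    discrete Laplacian of h^2, with fixed heads at both ends."""
    hb = np.concatenate(([2.0], h, [1.0]))   # boundary heads
    h2 = 0.5 * hb**2
    return h2[2:] - 2.0 * h2[1:-1] + h2[:-2]

h0 = np.full(50, 1.5)                         # initial guess
h = newton_krylov(residual, h0, f_tol=1e-10)  # Jacobian-free: only residuals
```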
Assessing performance of flaw characterization methods through uncertainty propagation
NASA Astrophysics Data System (ADS)
Miorelli, R.; Le Bourdais, F.; Artusi, X.
2018-04-01
In this work, we assess inversion performance in terms of crack characterization and localization based on synthetic signals associated with ultrasonic and eddy current physics. More precisely, two standard iterative inversion algorithms are used to minimize the discrepancy between measurements (i.e., the tested data) and simulations. Furthermore, in order to speed up the computation and remove the computational burden often associated with iterative inversion algorithms, we replace the standard forward solver with a suitable metamodel fit on a database built offline. In a second step, we assess the inversion performance by adding uncertainties on a subset of the database parameters and then, through the metamodel, propagating these uncertainties within the inversion procedure. The fast propagation of uncertainties enables efficient evaluation of the impact of the lack of knowledge of some parameters employed to describe the inspection scenarios, a situation commonly encountered in the industrial NDE context.
Beef quality grading using machine vision
NASA Astrophysics Data System (ADS)
Jeyamkondan, S.; Ray, N.; Kranzler, Glenn A.; Biju, Nisha
2000-12-01
A video image analysis system was developed to support automation of beef quality grading. Forty images of ribeye steaks were acquired. Fat and lean meat were differentiated using a fuzzy c-means clustering algorithm. Muscle longissimus dorsi (l.d.) was segmented from the ribeye using morphological operations. At the end of each iteration of erosion and dilation, a convex hull was fitted to the image and compactness was measured. The number of iterations was selected to yield the most compact l.d. Match between the l.d. muscle traced by an expert grader and that segmented by the program was 95.9%. Marbling and color features were extracted from the l.d. muscle and were used to build regression models to predict marbling and color scores. Quality grade was predicted using another regression model incorporating all features. Grades predicted by the model were statistically equivalent to the grades assigned by expert graders.
Zhao, Ming; Li, Yu; Peng, Leilei
2014-01-01
We report a fast non-iterative lifetime data analysis method for the Fourier multiplexed frequency-sweeping confocal FLIM (Fm-FLIM) system [Opt. Express 22, 10221 (2014)]. The new method, named the R-method, allows fast multi-channel lifetime image analysis in the system's FPGA data processing board. Experimental tests proved that the performance of the R-method is equivalent to that of single-exponential iterative fitting, and its sensitivity is well suited for time-lapse FLIM-FRET imaging of live cells, for example cyclic adenosine monophosphate (cAMP) level imaging with GFP-Epac-mCherry sensors. With the R-method and its FPGA implementation, multi-channel lifetime images can now be generated in real time on the multi-channel frequency-sweeping FLIM system, and live readout of FRET sensors can be performed during time-lapse imaging. PMID:25321778
NASA Astrophysics Data System (ADS)
Pasztor, G.; Bruzzone, P.
2004-06-01
The dc performance of a recently produced internal tin route Nb3Sn strand with enhanced specification is studied extensively and compared with predecessor wires manufactured by the suppliers for the ITER Model Coils in 1996. The wire has been selected for use in a full size, developmental cable-in-conduit conductor sample, which is being tested in the SULTAN Test Facility. The critical current, Ic, and the index of the current/voltage characteristic, n, are measured over a broad range of field and temperature, using ITER standard sample holders, made of TiAlV grooved cylinders. The behavior of Ic versus applied tensile strain is also investigated at 4.2 K and 12 T, on straight specimens. Scaling law parameters are drawn from the fit of the experimental results. The implications of the test results to the design of the fusion conductors are discussed.
Fluid Simulation in the Movies: Navier and Stokes Must Be Circulating in Their Graves
NASA Astrophysics Data System (ADS)
Tessendorf, Jerry
2010-11-01
Fluid simulations based on the Incompressible Navier-Stokes equations are commonplace computer graphics tools in the visual effects industry. These simulations mostly come from custom C++ code written by the visual effects companies. Their significant impact in films was recognized in 2008 with Academy Awards to four visual effects companies for their technical achievement. However, artists are not fluid dynamicists, and fluid dynamics simulations are expensive to use in a deadline-driven production environment. As a result, the simulation algorithms are modified to limit the computational resources, adapt them to the production workflow, and respect the client's vision of the film plot. Eulerian solvers on fixed rectangular grids use a mix of momentum solvers, including Semi-Lagrangian, FLIP, and QUICK. Incompressibility is enforced with FFT, Conjugate Gradient, and Multigrid methods. For liquids, a level-set field tracks the free surface. Smoothed Particle Hydrodynamics is also used and is part of a hybrid Eulerian-SPH liquid simulator. Artists use all of them in a mix-and-match fashion to control the appearance of the simulation. Specially designed forces and boundary conditions control the flow. The simulation can be an input to artistically driven procedural particle simulations that enhance the flow with more detail and drama. Post-simulation processing increases the visual detail beyond the grid resolution. Ultimately, iterative simulation methods that fit naturally into the production workflow are extremely desirable but not yet successful. Results from some efforts toward iterative methods are shown, and other approaches motivated by the history of production are proposed.
NASA Astrophysics Data System (ADS)
Yuniarto, Budi; Kurniawan, Robert
2017-03-01
PLS Path Modeling (PLS-PM) differs from covariance-based SEM in that it uses an approach based on variance or components; PLS-PM is therefore also known as component-based SEM. Multiblock Partial Least Squares (MBPLS) is a PLS regression method that can be used in PLS Path Modeling, where it is known as Multiblock PLS Path Modeling (MBPLS-PM). This method uses an iterative procedure in its algorithm. This research aims to modify MBPLS-PM with a Back Propagation Neural Network approach. The result is that the MBPLS-PM algorithm can be modified using the Back Propagation Neural Network approach to replace the iterative process in the backward and forward steps used to obtain the matrices t and u. With this modification, the model parameters obtained do not differ significantly from those obtained by the original MBPLS-PM algorithm.
Reduction of Free Edge Peeling Stress of Laminated Composites Using Active Piezoelectric Layers
Huang, Bin; Kim, Heung Soo
2014-01-01
An analytical approach is proposed for the reduction of free edge peeling stresses of laminated composites using active piezoelectric layers. The approach is the extended Kantorovich method, an iterative method. Multiple trial-function terms are employed and the governing equations are derived using the principle of complementary virtual work. The solutions are obtained by solving a generalized eigenvalue problem. By this approach, the stresses automatically satisfy not only the traction-free boundary conditions but also the free edge boundary conditions. Through the iteration process, the free edge stresses converge very quickly. It is found that the peeling stresses generated by mechanical loadings are significantly reduced by applying a proper electric field to the piezoelectric actuators. PMID:25025088
NASA Astrophysics Data System (ADS)
Trujillo Bueno, Javier; Manso Sainz, Rafael
1999-05-01
This paper shows how to generalize to non-LTE polarization transfer some operator splitting methods that were originally developed for solving unpolarized transfer problems. These are the Jacobi-based accelerated Λ-iteration (ALI) method of Olson, Auer, & Buchler and the iterative schemes based on Gauss-Seidel and successive overrelaxation (SOR) iteration of Trujillo Bueno and Fabiani Bendicho. The theoretical framework chosen for the formulation of polarization transfer problems is the quantum electrodynamics (QED) theory of Landi Degl'Innocenti, which specifies the excitation state of the atoms in terms of the irreducible tensor components of the atomic density matrix. This first paper establishes the grounds of our numerical approach to non-LTE polarization transfer by concentrating on the standard case of scattering line polarization in a gas of two-level atoms, including the Hanle effect due to a weak microturbulent and isotropic magnetic field. We begin by demonstrating that the well-known Λ-iteration method leads to the self-consistent solution of this type of problem if one initializes using the "exact" solution corresponding to the unpolarized case. We then show how the above-mentioned splitting methods can be easily derived from this simple Λ-iteration scheme. We show that our SOR method is 10 times faster than the Jacobi-based ALI method, while our implementation of the Gauss-Seidel method is 4 times faster. These iterative schemes lead to the self-consistent solution independently of the chosen initialization. The convergence rate of these iterative methods is very high; they require neither the construction nor the inversion of any matrix, and the computing time per iteration is similar to that of the Λ-iteration method.
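The relative behavior of these splittings is easy to see on their linear-algebra analogue. The sketch below implements an SOR sweep for a linear system (omega = 1 recovers Gauss-Seidel, and using only the previous sweep's values would give Jacobi); it illustrates the splitting idea, not the radiative transfer code itself:

```python
import numpy as np

def sor(A, b, omega=1.5, iterations=200):
    """Successive overrelaxation sweeps for A x = b, updating each unknown
    in place with the latest available values."""
    x = np.zeros_like(b, dtype=float)
    n = b.size
    for _ in range(iterations):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

# Diagonally dominant test system
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
assert np.allclose(sor(A, b), np.linalg.solve(A, b), atol=1e-8)
```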
NASA Astrophysics Data System (ADS)
Shirzaei, M.; Walter, T. R.
2009-10-01
Modern geodetic techniques provide valuable and near real-time observations of volcanic activity. Characterizing the source of deformation based on these observations has become of major importance in related monitoring efforts. We investigate two random search approaches, simulated annealing (SA) and genetic algorithm (GA), and utilize them in an iterated manner. The iterated approach helps to prevent GA in general and SA in particular from getting trapped in local minima, and it also increases redundancy for exploring the search space. We apply a statistical competency test for estimating the confidence interval of the inversion source parameters, considering their internal interaction through the model, the effect of the model deficiency, and the observational error. Here, we present and test this new randomly iterated search and statistical competency (RISC) optimization method together with GA and SA for the modeling of data associated with volcanic deformations. Following synthetic and sensitivity tests, we apply the improved inversion techniques to two episodes of activity in the Campi Flegrei volcanic region in Italy, observed by the interferometric synthetic aperture radar technique. Inversion of these data allows derivation of deformation source parameters and their associated quality so that we can compare the two inversion methods. The RISC approach was found to be an efficient method in terms of computation time and search results and may be applied to other optimization problems in volcanic and tectonic environments.
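A schematic of the iterated random search idea in Python: a basic simulated annealing run is restarted repeatedly from perturbations of the best model found so far, which is what guards against local minima. The ripple-distorted quadratic misfit and all parameter values are hypothetical stand-ins for the geodetic forward model:

    import numpy as np

    def misfit(p):                        # hypothetical stand-in forward-model misfit
        return np.sum((p - np.array([1.0, -2.0, 0.5]))**2) + 0.1 * np.sin(5 * p).sum()

    def anneal(p, steps=2000, T0=1.0, rng=None):
        rng = rng or np.random.default_rng()
        f = misfit(p)
        best_p, best_f = p, f
        for k in range(steps):
            T = T0 * (1 - k / steps) + 1e-9           # linear cooling schedule
            q = p + rng.normal(scale=0.1, size=p.size)
            fq = misfit(q)
            if fq < f or rng.random() < np.exp((f - fq) / T):
                p, f = q, fq                          # Metropolis acceptance
                if f < best_f:
                    best_p, best_f = p, f
        return best_p, best_f

    rng = np.random.default_rng(1)
    best_p, best_f = rng.uniform(-5, 5, 3), np.inf
    for restart in range(5):              # iterated restarts escape local minima
        p, f = anneal(best_p + rng.normal(scale=0.5, size=3), rng=rng)
        if f < best_f:
            best_p, best_f = p, f
    print(best_p, best_f)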
2D and 3D registration methods for dual-energy contrast-enhanced digital breast tomosynthesis
NASA Astrophysics Data System (ADS)
Lau, Kristen C.; Roth, Susan; Maidment, Andrew D. A.
2014-03-01
Contrast-enhanced digital breast tomosynthesis (CE-DBT) uses an iodinated contrast agent to image the three-dimensional breast vasculature. The University of Pennsylvania is conducting a CE-DBT clinical study in patients with known breast cancers. The breast is compressed continuously and imaged at four time points (1 pre-contrast; 3 post-contrast). A hybrid subtraction scheme is proposed. First, dual-energy (DE) images are obtained by a weighted logarithmic subtraction of the high-energy and low-energy image pairs. Then, post-contrast DE images are subtracted from the pre-contrast DE image. This hybrid temporal subtraction of DE images is performed to analyze iodine uptake, but suffers from motion artifacts. Employing image registration further helps to correct for motion, enhancing the evaluation of vascular kinetics. Registration using ANTS (Advanced Normalization Tools) is performed in an iterative manner. Mutual information optimization first corrects large-scale motions. Normalized cross-correlation optimization then iteratively corrects fine-scale misalignment. Two methods have been evaluated: a 2D method using a slice-by-slice approach, and a 3D method using a volumetric approach to account for out-of-plane breast motion. Our results demonstrate that iterative registration qualitatively improves with each iteration (five iterations total). Motion artifacts near the edge of the breast are corrected effectively, and structures within the breast (e.g. blood vessels, surgical clip) are better visualized. Statistical and clinical evaluations of registration accuracy in the CE-DBT images are ongoing.
Decentralized Control of Sound Radiation from an Aircraft-Style Panel Using Iterative Loop Recovery
NASA Technical Reports Server (NTRS)
Schiller, Noah H.; Cabell, Randolph H.; Fuller, Chris R.
2008-01-01
A decentralized LQG-based control strategy is designed to reduce low-frequency sound transmission through periodically stiffened panels. While modern control strategies have been used to reduce sound radiation from relatively simple structural acoustic systems, significant implementation issues have to be addressed before these control strategies can be extended to large systems such as the fuselage of an aircraft. For instance, centralized approaches typically require a high level of connectivity and are computationally intensive, while decentralized strategies face stability problems caused by the unmodeled interaction between neighboring control units. Since accurate uncertainty bounds are not known a priori, it is difficult to ensure the decentralized control system will be robust without making the controller overly conservative. Therefore an iterative approach is suggested, which utilizes frequency-shaped loop recovery. The approach accounts for modeling error introduced by neighboring control loops, requires no communication between subsystems, and is relatively simple. The control strategy is validated using real-time control experiments performed on a built-up aluminum test structure representative of the fuselage of an aircraft. Experiments demonstrate that the iterative approach is capable of achieving 12 dB peak reductions and a 3.6 dB integrated reduction in radiated sound power from the stiffened panel.
Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria
2017-10-01
Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: An R code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information on model fit indices into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. The fit-criteria assessment plot does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.
Adaptive cornea modeling from keratometric data.
Martínez-Finkelshtein, Andrei; López, Darío Ramos; Castro, Gracia M; Alió, Jorge L
2011-07-01
To introduce an iterative, multiscale procedure that allows for better reconstruction of the shape of the anterior surface of the cornea from altimetric data collected by a corneal topographer. The report describes, first, an adaptive, multiscale mathematical algorithm for the parsimonious fit of the corneal surface data that adapts the number of functions used in the reconstruction to the conditions of each cornea. The method also implements a dynamic selection of the parameters and the management of noise. Then, several numerical experiments are performed, comparing it with the results obtained by the standard Zernike-based procedure. The numerical experiments showed that the algorithm exhibits steady exponential error decay, independent of the level of aberration of the cornea. The complexity of each anisotropic Gaussian-basis function in the functional representation is the same, but the parameters vary to fit the current scale. This scale is determined only by the residual errors and not by the iteration number. Finally, the position and clustering of the centers, as well as the size of the shape parameters, provide additional spatial information about the regions of higher irregularity. The methodology can be used for the real-time reconstruction of both altimetric data and corneal power maps from the data collected by keratoscopes, such as Placido ring-based topographers, which will be decisive in the early detection of corneal diseases such as keratoconus.
Deletion Diagnostics for the Generalised Linear Mixed Model with independent random effects
Ganguli, B.; Roy, S. Sen; Naskar, M.; Malloy, E. J.; Eisen, E. A.
2015-01-01
The Generalised Linear Mixed Model (GLMM) is widely used for modelling environmental data. However, such data are prone to influential observations, which can distort the estimated exposure-response curve, particularly in regions of high exposure. Deletion diagnostics for iterative estimation schemes commonly derive the deleted estimates based on a single iteration of the full system, holding certain pivotal quantities such as the information matrix constant. In this paper, we present an approximate formula for the deleted estimates and Cook's distance for the GLMM which does not assume that the estimates of variance parameters are unaffected by deletion. The procedure allows the user to calculate standardised DFBETAs for mean as well as variance parameters. In certain cases, such as when using the GLMM as a device for smoothing, such residuals for the variance parameters are interesting in their own right. In general, the procedure leads to deleted estimates of mean parameters which are corrected for the effect of deletion on variance components, as estimation of the two sets of parameters is interdependent. The probabilistic behaviour of these residuals is investigated and a simulation-based procedure is suggested for their standardisation. The method is used to identify influential individuals in an occupational cohort exposed to silica. The results show that failure to conduct post-model-fitting diagnostics for variance components can lead to erroneous conclusions about the fitted curve and unstable confidence intervals. PMID:26626135
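For the ordinary linear model, the deletion diagnostics that this paper generalizes have closed forms, which makes the idea concrete. A short Python sketch with a hypothetical planted outlier; the paper's contribution is the analogous (and interdependent) corrections for GLMM mean and variance parameters:

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 50, 3
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.3, size=n)
    y[7] += 3.0                               # plant an influential observation

    H = X @ np.linalg.solve(X.T @ X, X.T)     # hat matrix
    h = np.diag(H)                            # leverages
    e = y - H @ y                             # residuals
    s2 = e @ e / (n - p)
    cooks = e**2 * h / (p * s2 * (1 - h)**2)  # Cook's distance, closed form
    print(np.argmax(cooks))                   # flags the planted observation (7)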
ERIC Educational Resources Information Center
Hathout, Leith
2007-01-01
Counting the number of internal intersection points made by the diagonals of irregular convex polygons where no three diagonals are concurrent is an interesting problem in discrete mathematics. This paper uses an iterative approach to develop a summation relation which tallies the total number of intersections, and shows that this total can be…
Intelligent Distributed Systems
2015-10-23
periodic gossiping algorithms by using convex combination rules rather than standard averaging rules. On a ring graph, we have discovered how to sequence...the gossips within a period to achieve the best possible convergence rate and we have related this optimal value to the classic edge coloring problem...consensus. There are three different approaches to distributed averaging: linear iterations, gossiping, and double linear iterations, which are also known as
Blind One-Bit Compressive Sampling
2013-01-17
[14] Q. Li, C. A. Micchelli, L. Shen, and Y. Xu, A proximity algorithm accelerated by Gauss-Seidel iterations for L1/TV denoising models, Inverse...methods for nonconvex optimization on the unit sphere and has provable convergence guarantees. Binary iterative hard thresholding (BIHT) algorithms were... Convergence analysis of the algorithm is presented. Our approach is to obtain a sequence of optimization problems by successively approximating the ℓ0
Thomson scattering for core plasma on DEMO
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mukhin, E. E.; Kurskiev, G. S.; Tolstyakov, S. Yu.
2014-08-21
This paper describes the challenges of implementing Thomson scattering for core plasma on DEMO and evaluates the capability to measure an extremely high electron temperature range of 0.5-40 keV. A number of solutions being developed for ITER diagnostics are suggested in consideration of their realization for DEMO. New approaches suggested for DEMO may also be of interest to ITER and currently operating magnetic confinement devices.
NASA Astrophysics Data System (ADS)
Benner, Peter; Dolgov, Sergey; Khoromskaia, Venera; Khoromskij, Boris N.
2017-04-01
In this paper, we propose and study two approaches to approximate the solution of the Bethe-Salpeter equation (BSE) by using structured iterative eigenvalue solvers. Both approaches are based on the reduced basis method and low-rank factorizations of the generating matrices. We also propose to represent the static screen interaction part in the BSE matrix by a small active sub-block, with a size balancing the storage for rank-structured representations of other matrix blocks. We demonstrate by various numerical tests that the combination of the diagonal plus low-rank plus reduced-block approximation exhibits higher precision with low numerical cost, providing as well a distinct two-sided error estimate for the smallest eigenvalues of the Bethe-Salpeter operator. The complexity is reduced to O(N_b^2) in the size of the atomic orbital basis set, N_b, instead of the practically intractable O(N_b^6) scaling for the direct diagonalization. In the second approach, we apply the quantized-TT (QTT) tensor representation to both the long eigenvectors and the column vectors in the rank-structured BSE matrix blocks, and combine this with the ALS-type iteration in block QTT format. The QTT rank of the matrix entities is of almost the same magnitude as the number of occupied orbitals in the molecular system, N_o.
NASA Astrophysics Data System (ADS)
An, Hyunuk; Ichikawa, Yutaka; Tachikawa, Yasuto; Shiiba, Michiharu
2012-11-01
Summary: Three different iteration methods for a three-dimensional coordinate-transformed saturated-unsaturated flow model are compared in this study. The Picard and Newton iteration methods are the common approaches for solving Richards' equation. The Picard method is simple to implement and cost-efficient (on an individual iteration basis). However it converges slower than the Newton method. On the other hand, although the Newton method converges faster, it is more complex to implement and consumes more CPU resources per iteration than the Picard method. The comparison of the two methods in finite-element model (FEM) for saturated-unsaturated flow has been well evaluated in previous studies. However, two iteration methods might exhibit different behavior in the coordinate-transformed finite-difference model (FDM). In addition, the Newton-Krylov method could be a suitable alternative for the coordinate-transformed FDM because it requires the evaluation of a 19-point stencil matrix. The formation of a 19-point stencil is quite a complex and laborious procedure. Instead, the Newton-Krylov method calculates the matrix-vector product, which can be easily approximated by calculating the differences of the original nonlinear function. In this respect, the Newton-Krylov method might be the most appropriate iteration method for coordinate-transformed FDM. However, this method involves the additional cost of taking an approximation at each Krylov iteration in the Newton-Krylov method. In this paper, we evaluated the efficiency and robustness of three iteration methods—the Picard, Newton, and Newton-Krylov methods—for simulating saturated-unsaturated flow through porous media using a three-dimensional coordinate-transformed FDM.
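The cost/robustness trade-off summarized above, cheap linearly convergent Picard steps versus expensive quadratically convergent Newton steps, is visible even on a scalar fixed-point problem. A purely illustrative Python sketch, with Richards' equation replaced by the toy equation x = cos(x):

    import numpy as np

    def picard(g, x, tol=1e-12, itmax=200):
        # fixed-point iteration x <- g(x): cheap steps, linear convergence
        for k in range(itmax):
            x_new = g(x)
            if abs(x_new - x) < tol:
                return x_new, k + 1
            x = x_new
        return x, itmax

    def newton(f, df, x, tol=1e-12, itmax=200):
        # Newton iteration: needs the derivative, quadratic convergence
        for k in range(itmax):
            dx = -f(x) / df(x)
            x += dx
            if abs(dx) < tol:
                return x, k + 1
        return x, itmax

    x_p, n_p = picard(np.cos, 0.5)
    x_n, n_n = newton(lambda x: x - np.cos(x), lambda x: 1 + np.sin(x), 0.5)
    print(n_p, n_n)   # e.g. ~70 Picard iterations versus ~5 Newton iterations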
NASA Astrophysics Data System (ADS)
Evans, T. E.
2013-07-01
Large edge-localized mode (ELM) control techniques must be developed to help ensure the success of burning and ignited fusion plasma devices such as tokamaks and stellarators. In full performance ITER tokamak discharges, with Q_DT = 10, the energy released by a single ELM could reach ~30 MJ, which is expected to result in an energy density of 10-15 MJ/m^2 on the divertor targets. This will exceed the estimated divertor ablation limit by a factor of 20-30. A worldwide research program is underway to develop various types of ELM control techniques in preparation for ITER H-mode plasma operations. An overview of the ELM control techniques currently being developed is discussed along with the requirements for applying these techniques to plasmas in ITER. Particular emphasis is given to the primary approaches, pellet pacing and resonant magnetic perturbation fields, currently being considered for ITER.
On a new iterative method for solving linear systems and comparison results
NASA Astrophysics Data System (ADS)
Jing, Yan-Fei; Huang, Ting-Zhu
2008-10-01
In Ujevic [A new iterative method for solving linear systems, Appl. Math. Comput. 179 (2006) 725-730], the author obtained a new iterative method for solving linear systems, which can be considered as a modification of the Gauss-Seidel method. In this paper, we show that this is a special case from the point of view of projection techniques. A different approach is then established, which is both theoretically and numerically shown to be at least as good as, and generally better than, Ujevic's. As the presented numerical examples show, in most cases the convergence rate is more than one and a half times that of Ujevic's method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ives, Robert Lawrence; Marsden, David; Collins, George
Calabazas Creek Research, Inc. developed a 1.5 MW RF load for the ITER fusion research facility currently under construction in France. This program leveraged technology developed in two previous SBIR programs that successfully developed high power RF loads for fusion research applications. This program specifically focused on modifications required by revised technical performance, materials, and assembly specifications for ITER. This program implemented an innovative approach to actively distribute the RF power inside the load to avoid excessive heating or arcing associated with constructive interference. The new design implemented materials and assembly changes required to meet specifications. Critical components were built and successfully tested during the program.
Guédon, A C P; Wauben, L S G L; de Korne, D F; Overvelde, M; Dankelman, J; van den Dobbelsteen, J J
2015-01-01
Information technology, such as real-time location (RTL) systems using Radio Frequency IDentification (RFID), may contribute to overcoming patient safety issues and high costs in healthcare. The aim of this work is to study whether an RFID-specific Participatory Design (PD) approach supports the design and implementation of RTL systems in the Operating Room (OR). An RFID-specific PD approach was used to design and implement two RFID-based modules. The Device Module monitors the safety status of OR devices and the Patient Module tracks the patients' locations during their hospital stay. The PD principles 'multidisciplinary team', 'user participation (active involvement)' and 'early adopters' were used to include users from the RFID company, the university and the hospital. The design and implementation process consisted of two 'structured cycles' ('iterations'). The effectiveness of this approach was assessed by the acceptance in terms of level of use, continuity of the project and purchase. The Device Module included eight strategic and twelve tactical actions and the Patient Module included six strategic and twelve tactical actions. Both modules are now used on a daily basis and have been purchased by the hospitals for continued use. The RFID-specific PD approach was effective in guiding and supporting the design and implementation process of RFID technology in the OR. The multidisciplinary teams and their active participation provided insights into the social and organizational context of the hospitals, making it possible to better fit the technology to the hospitals' (future) needs.
Caregivers' management of schooling for their children with fetal alcohol spectrum disorder.
Swart, Suretha; Hall, Wendy A; McKee, William T; Ford, Laurie
2014-11-01
In this article we describe a grounded theory study of how caregivers of school-aged children with fetal alcohol spectrum disorder (FASD) managed their children's schooling. We completed 30 interviews with 17 caregivers residing in a western Canadian province, as well as document analysis and 25 hours of participant observation. We used constant comparative analysis to construct our substantive theory: intertwining to fit in. The core variable is an iterative cycle caregivers used to resolve their main concerns: preventing their children from failing academically and in social interactions and preventing themselves from being regarded as unacceptable parents. To intertwine to fit in, caregivers used two strategies: orchestrating schooling and keeping up appearances. They also regulated their relationships with their children. "Intertwining to fit in" contributes to the literature on attachment and parenting and extends explanations about caregivers' advocacy for their children with FASD. The theory has implications for school personnel and practitioners, as well as researchers. © The Author(s) 2014.
Liquid-vapor relations for the system NaCl-H2O: summary of the P-T-x surface from 300° to 500°C
Bischoff, J.L.; Pitzer, Kenneth S.
1989-01-01
Experimental data on the vapor-liquid equilibrium relations for the system NaCl-H2O were compiled and compared in order to provide an improved estimate of the P-T-x surface between 300° and 500°C, a range over which the system changes from subcritical to critical behavior. Data for the three-phase curve (halite + liquid + vapor) and the NaCl-H2O critical curve were evaluated, and the best fits for these extrema were then used to guide selection of best fits for isothermal plots of the vapor-liquid region in between. Smoothing was carried out in an iterative procedure by replotting the best-fit data as isobars and then as isopleths, until an internally consistent set of data was obtained. The results are presented in table form and will have application to theoretical modelling and to the understanding of two-phase behavior in saline geothermal systems.
Improved mapping of radio sources from VLBI data by least-square fit
NASA Technical Reports Server (NTRS)
Rodemich, E. R.
1985-01-01
A method is described for producing improved mapping of radio sources from Very Long Baseline Interferometry (VLBI) data. The method described is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the deviation of the fit from the observed values is used as the measure of closeness, one is led to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution will furnish the best fit to the data among all brightness distributions of given resolution.
Using evolutionary algorithms for fitting high-dimensional models to neuronal data.
Svensson, Carl-Magnus; Coombes, Stephen; Peirce, Jonathan Westley
2012-04-01
In the study of neurosciences, and of complex biological systems in general, there is frequently a need to fit mathematical models with large numbers of parameters to highly complex datasets. Here we consider algorithms of two different classes, gradient following (GF) methods and evolutionary algorithms (EA) and examine their performance in fitting a 9-parameter model of a filter-based visual neuron to real data recorded from a sample of 107 neurons in macaque primary visual cortex (V1). Although the GF method converged very rapidly on a solution, it was highly susceptible to the effects of local minima in the error surface and produced relatively poor fits unless the initial estimates of the parameters were already very good. Conversely, although the EA required many more iterations of evaluating the model neuron's response to a series of stimuli, it ultimately found better solutions in nearly all cases and its performance was independent of the starting parameters of the model. Thus, although the fitting process was lengthy in terms of processing time, the relative lack of human intervention in the evolutionary algorithm, and its ability ultimately to generate model fits that could be trusted as being close to optimal, made it far superior in this particular application than the gradient following methods. This is likely to be the case in many further complex systems, as are often found in neuroscience.
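A minimal Python illustration of why the evolutionary search is insensitive to its starting point: a basic (mu + lambda) evolution strategy is fitted to a toy sinusoid whose error surface is already multimodal in the frequency parameter, the property that traps gradient followers. The 9-parameter neuron model is replaced here by a hypothetical 3-parameter model:

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 200)
    data = 1.2 * np.sin(7.0 * x) + 0.3 + 0.05 * rng.normal(size=x.size)

    def sse(theta):                      # sum-squared error of the toy model
        return np.sum((theta[0] * np.sin(theta[1] * x) + theta[2] - data) ** 2)

    # (mu + lambda) evolution strategy: truncation selection plus Gaussian mutation
    pop = rng.uniform(-10, 10, size=(40, 3))
    for gen in range(300):
        parents = pop[np.argsort([sse(t) for t in pop])][:10]
        children = np.repeat(parents, 4, axis=0) + rng.normal(scale=0.1, size=(40, 3))
        pop = np.vstack([parents, children])
    best = min(pop, key=sse)
    print(best, sse(best))   # recovers (1.2, 7.0, 0.3) up to the (a, b) -> (-a, -b) symmetry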
NASA Technical Reports Server (NTRS)
Chen, Y. S.
1986-01-01
In this report, a numerical method for solving the equations of motion of three-dimensional incompressible flows in nonorthogonal body-fitted coordinate (BFC) systems has been developed. The equations of motion are transformed to a generalized curvilinear coordinate system from which the transformed equations are discretized using finite difference approximations in the transformed domain. The hybrid scheme is used to approximate the convection terms in the governing equations. Solutions of the finite difference equations are obtained iteratively by using a pressure-velocity correction algorithm (SIMPLE-C). Numerical examples of two- and three-dimensional, laminar and turbulent flow problems are employed to evaluate the accuracy and efficiency of the present computer code. The user's guide and computer program listing of the present code are also included.
Atmospheric particulate analysis using angular light scattering
NASA Technical Reports Server (NTRS)
Hansen, M. Z.
1980-01-01
Using the light scattering matrix elements measured by a polar nephelometer, a procedure for estimating the characteristics of atmospheric particulates was developed. A theoretical library data set of scattering matrices derived from Mie theory was tabulated for a range of values of the size parameter and refractive index typical of atmospheric particles. Integration over the size parameter yielded the scattering matrix elements for a variety of hypothesized particulate size distributions. A least squares curve fitting technique was used to find a best fit from the library data for the experimental measurements. This was used as a first guess for a nonlinear iterative inversion of the size distributions. A real index of 1.50 and an imaginary index of -0.005 are representative of the smoothed inversion results for the near ground level atmospheric aerosol in Tucson.
Shi, Hongli; Yang, Zhi; Luo, Shuqian
2017-01-01
The beam hardening artifact is one of the most important forms of metal artifact in polychromatic X-ray computed tomography (CT) and can seriously impair image quality. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of the element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a given element (component). With the help of prior knowledge of the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients, the functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are nearly linear weighted sums of the element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified to make the Newton-Raphson iterative functions satisfy the convergence conditions of fixed-point iteration (FPI) so that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and has satisfactory performance in terms of some general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared to a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, the nearly linear weighted sum of them, the synthetical geometry projections, is almost free from the effect of beam hardening. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
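A sketch of the per-ray inversion step in Python. With a known beam spectrum and energy-dependent attenuation coefficients, the detected projection is a monotone nonlinear function of the geometry projection, which Newton-Raphson inverts; the two-bin spectrum and coefficients below are hypothetical:

    import numpy as np

    w = np.array([0.6, 0.4])      # hypothetical two-bin beam spectrum weights
    mu = np.array([0.30, 0.18])   # attenuation per unit length in each bin

    def detected(s):
        # detected projection as a monotone nonlinear function of the
        # (mono-energetic) element geometry projection s
        return -np.log(np.sum(w * np.exp(-mu * s)))

    def invert(p, s=0.0, tol=1e-10, itmax=50):
        # recover the geometry projection from p = detected(s) by Newton-Raphson
        for _ in range(itmax):
            I = w * np.exp(-mu * s)
            f = -np.log(I.sum()) - p
            df = np.sum(mu * I) / I.sum()    # derivative of detected(s)
            step = f / df
            s -= step
            if abs(step) < tol:
                break
        return s

    print(invert(detected(5.0)))   # recovers ~5.0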
Producing Satisfactory Solutions to Scheduling Problems: An Iterative Constraint Relaxation Approach
NASA Technical Reports Server (NTRS)
Chien, S.; Gratch, J.
1994-01-01
One drawback to using constraint propagation in planning and scheduling systems is that when a problem has an unsatisfiable set of constraints, such algorithms typically only show that no solution exists. While technically correct, in practical situations it is desirable in these cases to produce a satisficing solution that satisfies the most important constraints (typically defined in terms of maximizing a utility function). This paper describes an iterative constraint relaxation approach in which the scheduler uses heuristics to progressively relax problem constraints until the problem becomes satisfiable. We present empirical results of applying these techniques to the problem of scheduling spacecraft communications for JPL/NASA antenna resources.
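In outline the relaxation loop is simple: order constraints by importance, attempt a solve, and drop the least important constraint on failure. A Python sketch with a deliberately trivial placeholder solver; the actual scheduler uses heuristics against a utility function rather than strict priority order:

    def solve_with_relaxation(constraints, solve):
        # constraints: list of (priority, payload); solve: returns a schedule
        # or None when the active constraint set is unsatisfiable
        active = sorted(constraints, key=lambda c: c[0], reverse=True)
        while True:
            schedule = solve([payload for _, payload in active])
            if schedule is not None:
                return schedule, active          # satisficing solution
            if not active:
                raise ValueError("unsatisfiable even with all constraints relaxed")
            active.pop()                         # drop the least important constraint

    # toy usage: place one job in a slot; each constraint forbids one slot
    slots = range(5)
    def toy_solve(forbidden):
        free = [s for s in slots if s not in forbidden]
        return free[0] if free else None

    print(solve_with_relaxation([(1, 0), (2, 1), (3, 2), (4, 3), (5, 4)], toy_solve))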
Xu, Q; Yang, D; Tan, J; Anastasio, M
2012-06-01
To improve image quality and reduce imaging dose in CBCT for radiation therapy applications, and to realize near real-time image reconstruction based on a fast-converging iterative algorithm accelerated by multiple GPUs. An iterative image reconstruction algorithm that minimizes a weighted least-squares cost function with total variation (TV) regularization was employed to mitigate projection data incompleteness and noise. To achieve rapid 3D image reconstruction (< 1 min), a highly optimized multiple-GPU implementation of the algorithm was developed. The convergence rate and reconstruction accuracy were evaluated using a modified 3D Shepp-Logan digital phantom and a Catphan-600 physical phantom. The reconstructed images were compared with the clinical FDK reconstruction results. Digital phantom studies showed that only 15 iterations and 60 iterations are needed to achieve algorithm convergence for the 360-view and 60-view cases, respectively. The RMSE was reduced to 10^-4 and 10^-2, respectively, by using 15 iterations for each case. Our algorithm required 5.4 s to complete one iteration for the 60-view case using one Tesla C2075 GPU. The few-view study indicated that our iterative algorithm has great potential to reduce the imaging dose while preserving good image quality. For the physical Catphan studies, the images obtained from the iterative algorithm possessed better spatial resolution and higher SNRs than those obtained by use of a clinical FDK reconstruction algorithm. We have developed a fast-converging iterative algorithm for CBCT image reconstruction. The developed algorithm yielded images with better spatial resolution and higher SNR than those produced by a commercial FDK tool. In addition, the few-view study showed that the iterative algorithm has great potential for significantly reducing imaging dose. We expect that the developed reconstruction approach will facilitate applications including IGART and patient daily CBCT-based treatment localization. © 2012 American Association of Physicists in Medicine.
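The cost function used above, a weighted least-squares data term plus TV regularization, can be prototyped on a 1-D toy problem before any cone-beam geometry or GPU work. A Python sketch using plain gradient descent on a smoothed TV penalty, with an under-sampled random matrix as a hypothetical stand-in for the CBCT projector:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 200, 120                               # fewer rays than unknowns
    x_true = np.zeros(n); x_true[60:140] = 1.0    # piecewise-constant object
    A = rng.normal(size=(m, n)) / np.sqrt(m)      # stand-in projector
    w = rng.uniform(0.5, 1.5, m)                  # per-ray statistical weights
    b = A @ x_true + 0.01 * rng.normal(size=m)

    lam, eps = 0.05, 1e-3
    L = w.max() * np.linalg.norm(A, 2)**2 + 4 * lam / np.sqrt(eps)  # Lipschitz bound
    x = np.zeros(n)
    for _ in range(3000):
        grad = A.T @ (w * (A @ x - b))            # weighted least-squares term
        s = np.diff(x) / np.sqrt(np.diff(x)**2 + eps)   # smoothed-TV slopes
        grad[:-1] -= lam * s
        grad[1:] += lam * s
        x -= grad / L                             # stable gradient step
    print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error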
Development challenges for Low Temperature Plasma Sources ``from Idea to Prototype''
NASA Astrophysics Data System (ADS)
Gerling, T.; Baudler, J.-S.; Horn, S.; Schmidt, M.; Weltmann, K.-D.
2015-09-01
While plasma medicine is a well-motivated and intensively investigated topic, the requirements on the plasma sources change for individual applications. For example in dermatology, a large scale treatment is favored, while in dentistry, a localized application of plasma sources is required. Meanwhile, plasma source development is based on feasibility and not on the application. When a source is developed, it is usually motivated towards an application instead of considering an application and designing a plasma source to fit its needs. Each approach has its advantage and can lead to an advance in the field. With this contribution, we will present an approach from idea to prototype and show challenges in the plasma source development. For example, the consideration of legal regulations, adaption of the plasma source for a specific field of application and the interplay of gas flow dynamics with electrical field distribution. The solution was developed within several iterations to optimize it for different requirements. The obstacles that occurred during the development process will be highlighted and discussed. Afterwards the final source is characterized for a potential medical application and compared directly with a plasma source certified as a medical product. Acknowledging grants: AU 11 038; ESF/IV-BM-B35-0010/13.
People detection in crowded scenes using active contour models
NASA Astrophysics Data System (ADS)
Sidla, Oliver
2009-01-01
The detection of pedestrians in real-world scenes is a daunting task, especially in crowded situations. Our experience over the last years has shown that active shape models (ASM) can contribute significantly to a robust pedestrian detection system. The paper starts with an overview of shape model approaches; it then explains our approach, which builds on top of Eigenshape models that have been trained using real-world data. These models are placed over candidate regions and matched to image gradients using a scoring function which integrates (i) point distribution, (ii) local gradient orientations, and (iii) local image gradient strengths. A matching and shape model update process is iteratively applied in order to fit the flexible models to the local image content. The weights of the scoring function have a significant impact on the ASM performance. We analyze different settings of the scoring weights for gradient magnitude, relative orientation differences, and distance between model and gradient in an experiment which uses real-world data. Although computation time is low for a single pedestrian model in an image, the number of processing cycles needed to track many people in crowded scenes can become the bottleneck in a real-time application. We describe the measures which have been taken to improve the speed of the ASM implementation and make it real-time capable.
NASA Astrophysics Data System (ADS)
Heßelmann, Andreas
2017-06-01
A many-body Green's-function method employing an infinite order summation of ring and exchange-ring contributions to the self-energy is presented. The individual correlation and relaxation contributions to the quasiparticle energies are calculated using an iterative scheme which utilizes density fitting of the particle-hole, particle-particle and hole-hole densities. It is shown that the ionization energies and electron affinities of this approach agree better with highly accurate coupled-cluster singles and doubles with perturbative triples energy difference results than those obtained with second-order Green's-function approaches. An analysis of the correlation and relaxation terms of the self-energy for the direct- and exchange-random-phase-approximation (RPA) Green's-function methods shows that the inclusion of exchange interactions leads to a reduction of the two contributions in magnitude. These differences, however, strongly cancel each other when summing the individual terms to the quasiparticle energies. Due to this, the direct- and exchange-RPA methods perform similarly for the description of ionization energies (IPs) and electron affinities (EAs). The coupled-cluster reference IPs and EAs, if corrected to the adiabatic energy differences between the neutral and charged molecules, were shown to be in very good agreement with experimental measurements.
Value-focused framework for defining landscape-scale conservation targets
Romañach, Stephanie; Benscoter, Allison M.; Brandt, Laura A.
2016-01-01
Conservation of natural resources can be challenging in a rapidly changing world and require collaborative efforts for success. Conservation planning is the process of deciding how to protect, conserve, and enhance or minimize loss of natural and cultural resources. Establishing conservation targets (also called indicators or endpoints), the measurable expressions of desired resource conditions, can help with site-specific up to landscape-scale conservation planning. Using conservation targets and tracking them through time can deliver benefits such as insight into ecosystem health and providing early warnings about undesirable trends. We describe an approach using value-focused thinking to develop statewide conservation targets for Florida. Using such an approach allowed us to first identify stakeholder objectives and then define conservation targets to meet those objectives. Stakeholders were able to see how their shared efforts fit into the broader conservation context, and also anticipate the benefits of multi-agency and -organization collaboration. We developed an iterative process for large-scale conservation planning that included defining a shared framework for the process, defining the conservation targets themselves, as well as developing management and monitoring strategies for evaluation of their effectiveness. The process we describe is applicable to other geographies where multiple parties are seeking to implement collaborative, large-scale biological planning.
A New Pivoting and Iterative Text Detection Algorithm for Biomedical Images
Xu, Songhua; Krauthammer, Michael
2010-01-01
There is interest in expanding the reach of literature mining to include the analysis of biomedical images, which often contain a paper’s key findings. Examples include recent studies that use Optical Character Recognition (OCR) to extract image text, which is used to boost biomedical image retrieval and classification. Such studies rely on the robust identification of text elements in biomedical images, which is a non-trivial task. In this work, we introduce a new text detection algorithm for biomedical images based on iterative projection histograms. We study the effectiveness of our algorithm by evaluating its performance on a set of manually labeled random biomedical images, and compare the performance against other state-of-the-art text detection algorithms. In this paper, we demonstrate that a projection histogram-based text detection approach is well suited for text detection in biomedical images, with an F score of 0.60. The approach performs better than comparable approaches for text detection. Further, we show that iterative application of the algorithm boosts overall detection performance. A C++ implementation of our algorithm is freely available through email request for academic use. PMID:20887803
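The core of a projection-histogram text detector is the recursive X-Y cut: sum ink along one axis, split at empty gaps, then repeat along the other axis until no box splits further. A Python sketch of that recursion on a toy binary image; a real pipeline would first binarize the figure, and the stopping rule here is simplified:

    import numpy as np

    def xy_cut(img, r0, r1, c0, c1, axis=0, out=None, stuck=0):
        # recursively split a binary image into boxes using projection
        # histograms; 'stuck' counts consecutive axes that failed to split
        if out is None:
            out = []
        if stuck == 2:                       # neither axis can split: emit box
            out.append((r0, r1, c0, c1))
            return out
        hist = img[r0:r1, c0:c1].sum(axis=1 - axis)
        idx = np.flatnonzero(hist > 0)
        if idx.size == 0:                    # empty region: discard
            return out
        segs = np.split(idx, np.flatnonzero(np.diff(idx) > 1) + 1)
        if len(segs) == 1 and idx[0] == 0 and idx[-1] == len(hist) - 1:
            return xy_cut(img, r0, r1, c0, c1, 1 - axis, out, stuck + 1)
        for s in segs:                       # split (or trim) along this axis
            if axis == 0:
                xy_cut(img, r0 + s[0], r0 + s[-1] + 1, c0, c1, 1, out)
            else:
                xy_cut(img, r0, r1, c0 + s[0], c0 + s[-1] + 1, 0, out)
        return out

    img = np.zeros((20, 40), dtype=int)
    img[2:6, 3:15] = 1                       # word 1, line 1
    img[2:6, 20:33] = 1                      # word 2, line 1
    img[10:14, 5:30] = 1                     # line 2
    print(xy_cut(img, 0, 20, 0, 40))         # three tight boxes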
Improve accuracy for automatic acetabulum segmentation in CT images.
Liu, Hao; Zhao, Jianning; Dai, Ning; Qian, Hongbo; Tang, Yuehong
2014-01-01
Separation of the femur head and acetabulum is one of the main difficulties in the diseased hip joint due to deformed shapes and extreme narrowness of the joint space. Improving the segmentation accuracy is the key point of existing automatic or semi-automatic segmentation methods. In this paper, we propose a new method to improve the accuracy of the segmented acetabulum using surface fitting techniques, which essentially consists of three parts: (1) design of a surface iterative process to obtain an optimized surface; (2) replacement of the ellipsoid fitting with a two-phase quadric surface fitting; (3) introduction of a normal matching method and an optimization region method to capture edge points for the fitted quadric surface. Furthermore, this paper uses in vivo CT data sets of 40 actual patients (with 79 hip joints). Test results for these clinical cases show that: (1) the average error of the quadric surface fitting method is 2.3 mm; (2) the accuracy ratio of automatically recognized contours is larger than 89.4%; (3) the error ratio of section contours is less than 10% for acetabulums without severe malformation and less than 30% for acetabulums with severe malformation. Compared with similar methods, the accuracy of our method, which is applied in a software system, is significantly enhanced.
Panayotov, Dobromir; Poitevin, Yves; Grief, Andrew; ...
2016-09-23
'Fusion for Energy' (F4E) is designing, developing, and implementing the European Helium-Cooled Lead-Lithium (HCLL) and Helium-Cooled Pebble-Bed (HCPB) Test Blanket Systems (TBSs) for ITER (Nuclear Facility INB-174). Safety demonstration is an essential element for the integration of these TBSs into ITER and accident analysis is one of its critical components. A systematic approach to accident analysis has been developed under the F4E contract on TBS safety analyses. F4E technical requirements, together with Amec Foster Wheeler and INL efforts, have resulted in a comprehensive methodology for fusion breeding blanket accident analysis that addresses the specificity of the breeding blanket designs, materials, and phenomena while remaining consistent with the approach already applied to ITER accident analyses. Furthermore, the methodology phases are illustrated in the paper by its application to the EU HCLL TBS using both the MELCOR and RELAP5 codes.
Iterative tensor voting for perceptual grouping of ill-defined curvilinear structures.
Loss, Leandro A; Bebis, George; Parvin, Bahram
2011-08-01
In this paper, a novel approach is proposed for perceptual grouping and localization of ill-defined curvilinear structures. Our approach builds upon the tensor voting and the iterative voting frameworks. Its efficacy lies on iterative refinements of curvilinear structures by gradually shifting from an exploratory to an exploitative mode. Such a mode shifting is achieved by reducing the aperture of the tensor voting fields, which is shown to improve curve grouping and inference by enhancing the concentration of the votes over promising, salient structures. The proposed technique is validated on delineating adherens junctions that are imaged through fluorescence microscopy. However, the method is also applicable for screening other organisms based on characteristics of their cell wall structures. Adherens junctions maintain tissue structural integrity and cell-cell interactions. Visually, they exhibit fibrous patterns that may be diffused, heterogeneous in fluorescence intensity, or punctate and frequently perceptual. Besides the application to real data, the proposed method is compared to prior methods on synthetic and annotated real data, showing high precision rates.
Iterative Integration of Visual Insights during Scalable Patent Search and Analysis.
Koch, S; Bosch, H; Giereth, M; Ertl, T
2011-05-01
Patents are of growing importance in current economic markets. Analyzing patent information has, therefore, become a common task for many interest groups. As a prerequisite for patent analysis, extensive search for relevant patent information is essential. Unfortunately, the complexity of patent material inhibits a straightforward retrieval of all relevant patent documents and leads to iterative, time-consuming approaches in practice. Already the amount of patent data to be analyzed poses challenges with respect to scalability. Further scalability issues arise concerning the diversity of users and the large variety of analysis tasks. With "PatViz", a system for interactive analysis of patent information has been developed addressing scalability at various levels. PatViz provides a visual environment allowing for interactive reintegration of insights into subsequent search iterations, thereby bridging the gap between search and analytic processes. Because of its extensibility, we expect that the approach we have taken can be employed in different problem domains that require high quality of search results regarding their completeness.
Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags
NASA Astrophysics Data System (ADS)
ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu
2017-05-01
The flow shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received concentrated attention, but the non-permutation problem (non-PFSP with time lags) seems to be neglected. With the aim of minimizing the makespan while satisfying time-lag constraints, efficient algorithms corresponding to the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time-lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of a traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of a traditional GA approach. The proposed research combines the PFSP and non-PFSP with minimal and maximal time-lag considerations, which provides an interesting viewpoint for industrial implementation.
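The destruction-construction skeleton shared by such iterated greedy algorithms fits in a few lines. A Python sketch for the plain permutation flow shop; the time-lag constraints and the acceptance criterion of the IGTLP/IGTLNP variants are omitted, and the best sequence found is simply kept:

    import numpy as np

    def makespan(p, seq):
        # p[j, m] = processing time of job j on machine m
        c = np.zeros(p.shape[1])
        for j in seq:
            t = 0.0
            for m in range(p.shape[1]):
                t = max(t, c[m]) + p[j, m]
                c[m] = t
        return c[-1]

    def greedy_insert(p, seq, job):
        # try every insertion position, keep the best
        cands = [seq[:k] + [job] + seq[k:] for k in range(len(seq) + 1)]
        return min(cands, key=lambda s: makespan(p, s))

    def iterated_greedy(p, iters=200, d=3, rng=None):
        rng = rng or np.random.default_rng()
        best = []
        for j in np.argsort(p.sum(axis=1))[::-1]:    # NEH-style seed order
            best = greedy_insert(p, best, j)
        best_ms = makespan(p, best)
        for _ in range(iters):
            removed = rng.choice(len(p), size=d, replace=False)
            partial = [j for j in best if j not in removed]
            for j in removed:                        # destruction-construction
                partial = greedy_insert(p, partial, j)
            ms = makespan(p, partial)
            if ms < best_ms:                         # keep only improvements
                best, best_ms = partial, ms
        return best, best_ms

    rng = np.random.default_rng(0)
    p = rng.integers(1, 20, size=(12, 4)).astype(float)
    print(iterated_greedy(p, rng=rng))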
A Unique Technique to get Kaprekar Iteration in Linear Programming Problem
NASA Astrophysics Data System (ADS)
Sumathi, P.; Preethy, V.
2018-04-01
This paper explores a curious number popularly known as the Kaprekar constant, together with the Kaprekar numbers. A large number of courses and different classroom capacities with differences in study periods make the assignment between classrooms and courses complicated. An approach to obtaining the minimum and maximum number of iterations needed to reach the Kaprekar constant for four-digit numbers is presented through linear programming techniques.
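For concreteness, Kaprekar's routine for four-digit numbers (sort the digits descending and ascending, subtract, repeat until 6174) and the iteration counts whose extrema the paper studies can be enumerated by brute force in Python:

    def kaprekar_steps(n):
        # number of iterations of the 6174 routine for a 4-digit number
        steps = 0
        while n != 6174:
            digits = f"{n:04d}"
            n = int("".join(sorted(digits, reverse=True))) - int("".join(sorted(digits)))
            steps += 1
        return steps

    counts = [kaprekar_steps(n) for n in range(1, 10000)
              if len(set(f"{n:04d}")) > 1]    # exclude repdigits (they collapse to 0)
    print(min(counts), max(counts))           # 0 (6174 itself) and 7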
From the Rendering Equation to Stratified Light Transport Inversion
2010-12-09
iteratively. These approaches relate closely to the radiosity method for diffuse global illumination in forward rendering (Hanrahan et al., 1991; Gortler et...currently simply use sparse matrices to represent T, we are also interested in exploring connections with hierarchical and wavelet radiosity as in...Seidel iterative methods used in radiosity. 2.4 Inverse Light Transport Previous work on inverse rendering has considered inversion of the direct
Flexible Method for Developing Tactics, Techniques, and Procedures for Future Capabilities
2009-02-01
levels of ability, military experience, and motivation, (b) number and type of significant events, and (c) other sources of natural variability...research has developed a number of specific instruments designed to aid in this process. Second, the iterative, feed-forward nature of the method allows...FLEX method), but still lack the structured KE approach and iterative, feed-forward nature of the FLEX method. To facilitate decision making
Maxwell iteration for the lattice Boltzmann method with diffusive scaling
NASA Astrophysics Data System (ADS)
Zhao, Weifeng; Yong, Wen-An
2017-03-01
In this work, we present an alternative derivation of the Navier-Stokes equations from Bhatnagar-Gross-Krook models of the lattice Boltzmann method with diffusive scaling. This derivation is based on the Maxwell iteration and can expose certain important features of the lattice Boltzmann solutions. Moreover, it will be seen to be much more straightforward and logically clearer than the existing approaches including the Chapman-Enskog expansion.
Investigation of a Parabolic Iterative Solver for Three-dimensional Configurations
NASA Technical Reports Server (NTRS)
Nark, Douglas M.; Watson, Willie R.; Mani, Ramani
2007-01-01
A parabolic iterative solution procedure is investigated that seeks to extend the parabolic approximation used within the internal propagation module of the duct noise propagation and radiation code CDUCT-LaRC. The governing convected Helmholtz equation is split into a set of coupled equations governing propagation in the positive and negative directions. The proposed method utilizes an iterative procedure to solve the coupled equations in an attempt to account for possible reflections from internal bifurcations, impedance discontinuities, and duct terminations. A geometry consistent with the NASA Langley Curved Duct Test Rig is considered and the effects of acoustic treatment and non-anechoic termination are included. Two numerical implementations are studied and preliminary results indicate that improved accuracy in predicted amplitude and phase can be obtained for modes at a cut-off ratio of 1.7. Further predictions for modes at a cut-off ratio of 1.1 show improvement in predicted phase at the expense of increased amplitude error. Possible methods of improvement are suggested based on analytic and numerical analysis. It is hoped that coupling the parabolic iterative approach with less efficient, high fidelity finite element approaches will ultimately provide the capability to perform efficient, higher fidelity acoustic calculations within complex 3-D geometries for impedance eduction and noise propagation and radiation predictions.
NASA Astrophysics Data System (ADS)
Domnisoru, L.; Modiga, A.; Gasparotti, C.
2016-08-01
At the ship design stage, the first step of the hull structural assessment is the longitudinal strength analysis, with head-wave equivalent loads prescribed by the ships' classification societies' rules. This paper presents an enhancement of the longitudinal strength analysis, considering the general case of oblique quasi-static equivalent waves, based on our own non-linear iterative procedure and in-house program. The numerical approach is developed for mono-hull ships, without restrictions on the non-linearities of the 3D hull offset lines, and involves three interlinked iterative cycles on the floating, pitch-trim and roll-trim equilibrium conditions. Besides the ship-wave equilibrium parameters, the wave-induced loads on the ship's girder are obtained. As a numerical case study we have considered a large LPG liquefied petroleum gas carrier. The numerical results for the large LPG carrier are compared with the statistical design values from several ships' classification societies' rules. This study makes it possible to obtain the oblique wave conditions that induce the maximum loads in the large LPG ship's girder. The numerical results of this study point out that the non-linear iterative approach is necessary for the computation of the extreme loads induced by oblique waves, ensuring better accuracy of the large LPG ship's longitudinal strength assessment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gürsoy, Doğa; Hong, Young P.; He, Kuan
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
Broadband excitation in nuclear magnetic resonance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tycko, Robert
1984-10-01
Theoretical methods for designing sequences of radio frequency (rf) radiation pulses for broadband excitation of spin systems in nuclear magnetic resonance (NMR) are described. The sequences excite spins uniformly over large ranges of resonant frequencies arising from static magnetic field inhomogeneity, chemical shift differences, or spin couplings, or over large ranges of rf field amplitudes. Specific sequences for creating a population inversion or transverse magnetization are derived and demonstrated experimentally in liquid and solid state NMR. One approach to broadband excitation is based on principles of coherent averaging theory. A general formalism for deriving pulse sequences is given, along with computational methods for specific cases. This approach leads to sequences that produce strictly constant transformations of a spin system. The importance of this feature in NMR applications is discussed. A second approach to broadband excitation makes use of iterative schemes, i.e. sets of operations that are applied repetitively to a given initial pulse sequence, generating a series of increasingly complex sequences with increasingly desirable properties. A general mathematical framework for analyzing iterative schemes is developed. An iterative scheme is treated as a function that acts on a space of operators corresponding to the transformations produced by all possible pulse sequences. The fixed points of the function and the stability of the fixed points are shown to determine the essential behavior of the scheme. Iterative schemes for broadband population inversion are treated in detail. Algebraic and numerical methods for performing the mathematical analysis are presented. Two additional topics are treated. The first is the construction of sequences for uniform excitation of double-quantum coherence and for uniform polarization transfer over a range of spin couplings. Double-quantum excitation sequences are demonstrated in a liquid crystal system. The second additional topic is the construction of iterative schemes for narrowband population inversion. The use of sequences that invert spin populations only over a narrow range of rf field amplitudes to spatially localize NMR signals in an rf field gradient is discussed.
Aerodynamic optimization by simultaneously updating flow variables and design parameters
NASA Technical Reports Server (NTRS)
Rizk, M. H.
1990-01-01
The application of conventional optimization schemes to aerodynamic design problems leads to inner-outer iterative procedures that are very costly. An alternative approach is presented based on the idea of updating the flow variable iterative solutions and the design parameter iterative solutions simultaneously. Two schemes based on this idea are applied to problems of correcting wind tunnel wall interference and optimizing advanced propeller designs. The first of these schemes is applicable to a limited class of two-design-parameter problems with an equality constraint. It requires the computation of a single flow solution. The second scheme is suitable for application to general aerodynamic problems. It requires the computation of several flow solutions in parallel. In both schemes, the design parameters are updated as the iterative flow solutions evolve. Computations are performed to test the schemes' efficiency, accuracy, and sensitivity to variations in the computational parameters.
NASA Astrophysics Data System (ADS)
Broedel, Johannes; Duhr, Claude; Dulat, Falko; Tancredi, Lorenzo
2018-06-01
We introduce a class of iterated integrals that generalize multiple polylogarithms to elliptic curves. These elliptic multiple polylogarithms are closely related to similar functions defined in pure mathematics and string theory. We then focus on the equal-mass and non-equal-mass sunrise integrals, and we develop a formalism that enables us to compute these Feynman integrals in terms of our iterated integrals on elliptic curves. The key idea is to use integration-by-parts identities to identify a set of integral kernels, whose precise form is determined by the branch points of the integral in question. These kernels allow us to express all iterated integrals on an elliptic curve in terms of them. The flexibility of our approach leads us to expect that it will be applicable to a large variety of integrals in high-energy physics.
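For orientation, such iterated integrals follow the standard recursive pattern (shown schematically here in the notation of the polylogarithmic case):

    I(\omega_1, \ldots, \omega_n; x) \;=\; \int_0^x \mathrm{d}t \,\omega_1(t)\, I(\omega_2, \ldots, \omega_n; t), \qquad I(; x) = 1,

with multiple polylogarithms recovered for rational kernels \omega_i(t) = 1/(t - a_i); on an elliptic curve these kernels are replaced by the ones determined by the branch points, as described in the abstract above.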
Multivariable frequency domain identification via 2-norm minimization
NASA Technical Reports Server (NTRS)
Bayard, David S.
1992-01-01
The author develops a computational approach to multivariable frequency domain identification, based on 2-norm minimization. In particular, a Gauss-Newton (GN) iteration is developed to minimize the 2-norm of the error between frequency domain data and a matrix fraction transfer function estimate. To improve the global performance of the optimization algorithm, the GN iteration is initialized using the solution to a particular sequentially reweighted least squares problem, denoted as the SK iteration. The least squares problems which arise from both the SK and GN iterations are shown to involve sparse matrices with identical block structure. A sparse matrix QR factorization method is developed to exploit the special block structure, and to efficiently compute the least squares solution. A numerical example involving the identification of a multiple-input multiple-output (MIMO) plant having 286 unknown parameters is given to illustrate the effectiveness of the algorithm.
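As a concrete illustration of the SK-style reweighting described above, here is a minimal single-input single-output sketch in Python; the paper's multivariable matrix-fraction parametrization and sparse QR machinery are not reproduced, and the function name and toy plant are illustrative assumptions only.

```python
import numpy as np

def sk_iteration(freqs, G, nb=2, na=2, n_iter=10):
    """SK-style reweighted least squares for a SISO rational fit
    G(jw) ~ N(jw)/D(jw), with D(0) fixed to 1 (illustrative sketch)."""
    s = 1j * freqs
    B = np.vander(s, nb + 1, increasing=True)         # numerator basis
    A = np.vander(s, na + 1, increasing=True)[:, 1:]  # denominator basis (no 1)
    w = np.ones(len(freqs))                           # SK weights 1/|D_prev|
    for _ in range(n_iter):
        # linearized residual: N(s) - D(s)G = [B, -A*G] theta - G
        M = np.hstack([B, -A * G[:, None]]) * w[:, None]
        rhs = G * w
        Mr = np.vstack([M.real, M.imag])              # solve complex LS in reals
        rr = np.concatenate([rhs.real, rhs.imag])
        theta, *_ = np.linalg.lstsq(Mr, rr, rcond=None)
        den = 1.0 + A @ theta[nb + 1:]
        w = 1.0 / np.abs(den)                         # reweight by 1/|D_k|
    return theta, (B @ theta[:nb + 1]) / den

# toy plant with N(s) = 1 + 0.5s and D(s) = 1 + 0.4s + s^2, sampled at s = jw
wgrid = np.linspace(0.1, 10.0, 200)
G_true = (1.0 + 0.5j * wgrid) / (1.0 - wgrid**2 + 0.4j * wgrid)
theta, G_fit = sk_iteration(wgrid, G_true)
print("max fit error:", np.abs(G_fit - G_true).max())
```

In the paper this SK solution only initializes the Gauss-Newton iteration, which then minimizes the true 2-norm error.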
Analytic TOF PET reconstruction algorithm within DIRECT data partitioning framework
NASA Astrophysics Data System (ADS)
Matej, Samuel; Daube-Witherspoon, Margaret E.; Karp, Joel S.
2016-05-01
Iterative reconstruction algorithms are routinely used in clinical practice; however, analytic algorithms are relevant candidates for quantitative research studies due to their linear behavior. While iterative algorithms also benefit from the inclusion of accurate data and noise models, the widespread use of time-of-flight (TOF) scanners, which are less sensitive to noise and data imperfections, makes analytic algorithms even more promising. In our previous work we developed a novel iterative reconstruction approach (DIRECT: direct image reconstruction for TOF) providing a convenient TOF data partitioning framework and leading to very efficient reconstructions. In this work we have expanded DIRECT to include an analytic TOF algorithm with confidence weighting incorporating models of both TOF and spatial resolution kernels. Feasibility studies using simulated and measured data demonstrate that analytic-DIRECT with appropriate resolution and regularization filters is able to provide matched bias versus variance performance to iterative TOF reconstruction with a matched resolution model.
Simulation and study of small numbers of random events
NASA Technical Reports Server (NTRS)
Shelton, R. D.
1986-01-01
Random events were simulated by computer and subjected to various statistical methods to extract important parameters. Various forms of curve fitting were explored, such as least squares, least distance from a line, and maximum likelihood. Problems considered were dead time, exponential decay, and spectrum extraction from cosmic ray data, using both binned data and data from individual events. Computer programs, mostly of an iterative nature, were developed to perform these simulations and extractions and are partially listed as appendices. The mathematical basis for the computer programs is given.
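A minimal Python sketch of one of the exercises listed above: simulating exponential-decay events and recovering the lifetime both by maximum likelihood (closed form for this distribution, so no iteration is needed) and by a least-squares fit to binned data. The parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate N decay times with true lifetime tau = 2.0 (arbitrary units)
tau_true = 2.0
t = rng.exponential(tau_true, size=5000)

# maximum-likelihood estimate: for an exponential, the MLE is the sample mean
tau_hat = t.mean()

# binned least-squares alternative, as in the abstract's binned analyses:
# fit a straight line to log(counts) versus bin center
counts, edges = np.histogram(t, bins=40)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = counts > 0
slope, intercept = np.polyfit(centers[mask], np.log(counts[mask]), 1)
tau_binned = -1.0 / slope

print(f"MLE tau = {tau_hat:.3f}, binned-LS tau = {tau_binned:.3f}")
```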
Genetic Evolution of Shape-Altering Programs for Supersonic Aerodynamics
NASA Technical Reports Server (NTRS)
Kennelly, Robert A., Jr.; Bencze, Daniel P. (Technical Monitor)
2002-01-01
Two constrained shape optimization problems relevant to aerodynamics are solved by genetic programming, in which a population of computer programs evolves automatically under pressure of fitness-driven reproduction and genetic crossover. Known optimal solutions are recovered using a small, naive set of elementary operations. Effectiveness is improved through use of automatically defined functions, especially when one of them is capable of a variable number of iterations, even though the test problems lack obvious exploitable regularities. An attempt at evolving new elementary operations was only partially successful.
1983-03-01
values of these functions on the two sides of the slits. The acceleration parameters for the iteration at each point are in the field array WACC(I,J) ... code will calculate a locally optimum value at each point in the field, these values being placed in the field array WACC. This calculation is ... changes in x and y, are calculated by calling subroutine ERROR.) The acceleration parameter is placed in the field array WACC. The addition to the
Improved Image Quality in Head and Neck CT Using a 3D Iterative Approach to Reduce Metal Artifact.
Wuest, W; May, M S; Brand, M; Bayerl, N; Krauss, A; Uder, M; Lell, M
2015-10-01
Metal artifacts from dental fillings and other devices degrade image quality and may compromise the detection and evaluation of lesions in the oral cavity and oropharynx by CT. The aim of this study was to evaluate the effect of iterative metal artifact reduction on CT of the oral cavity and oropharynx. Data from 50 consecutive patients with metal artifacts from dental hardware were reconstructed with standard filtered back-projection, linear interpolation metal artifact reduction (LIMAR), and iterative metal artifact reduction. The image quality of sections that contained metal was analyzed for the severity of artifacts and diagnostic value. A total of 455 sections (mean ± standard deviation, 9.1 ± 4.1 sections per patient) contained metal and were evaluated with each reconstruction method. Sections without metal were not affected by the algorithms and demonstrated identical image quality across methods. Of the sections containing metal, 38% were considered nondiagnostic with filtered back-projection, 31% with LIMAR, and only 7% with iterative metal artifact reduction. Image quality was poor in 33% of the sections with filtered back-projection, 46% with LIMAR, and 10% with iterative metal artifact reduction; moderate in 13%, 17%, and 22% of the sections, respectively; good in 16%, 5%, and 30%; and excellent in 1% with LIMAR and 31% with iterative metal artifact reduction. Iterative metal artifact reduction yields the highest image quality in comparison with filtered back-projection and linear interpolation metal artifact reduction in patients with metal hardware in the head and neck area. © 2015 by American Journal of Neuroradiology.
Angelis, G I; Reader, A J; Markiewicz, P J; Kotasidis, F A; Lionheart, W R; Matthews, J C
2013-08-07
Recent studies have demonstrated the benefits of a resolution model within iterative reconstruction algorithms in an attempt to account for effects that degrade the spatial resolution of the reconstructed images. However, these algorithms suffer from slower convergence rates, compared to algorithms where no resolution model is used, due to the additional need to solve an image deconvolution problem. In this paper, a recently proposed algorithm, which decouples the tomographic and image deconvolution problems within an image-based expectation maximization (EM) framework, was evaluated. This separation is convenient, because more computational effort can be placed on the image deconvolution problem and therefore accelerate convergence. Since the computational cost of solving the image deconvolution problem is relatively small, multiple image-based EM iterations do not significantly increase the overall reconstruction time. The proposed algorithm was evaluated using 2D simulations, as well as measured 3D data acquired on the high-resolution research tomograph. Results showed that bias reduction can be accelerated by interleaving multiple iterations of the image-based EM algorithm solving the resolution model problem with a single EM iteration solving the tomographic problem. Significant improvements were observed particularly for voxels located on the boundaries between regions of high contrast within the object being imaged and for small regions of interest, where resolution recovery is usually more challenging. Minor differences were observed with the proposed nested algorithm, compared to the single iteration normally performed, when an optimal number of iterations is performed for each algorithm. However, using the proposed nested approach, convergence is significantly accelerated, enabling reconstruction with far fewer tomographic iterations (up to 70% fewer for small regions). Nevertheless, the optimal number of nested image-based EM iterations is hard to define and should be selected according to the given application.
A stopping criterion to halt iterations at the Richardson-Lucy deconvolution of radiographic images
NASA Astrophysics Data System (ADS)
Almeida, G. L.; Silvani, M. I.; Souza, E. S.; Lopes, R. T.
2015-07-01
Radiographic images, as any experimentally acquired ones, are affected by spoiling agents which degrade their final quality. The degradation caused by agents of systematic character can be reduced by some kind of treatment such as an iterative deconvolution. This approach requires two parameters, namely the system resolution and the best number of iterations, in order to achieve the best final image. This work proposes a novel procedure to estimate the best number of iterations, which replaces the cumbersome visual inspection by a comparison of numbers. These numbers are deduced from the image histograms, taking into account the global difference G between them for two subsequent iterations. The developed algorithm, including a Richardson-Lucy deconvolution procedure, has been embodied in a Fortran program capable of plotting the 1st derivative of G as the processing progresses and of stopping it automatically when this derivative - within the data dispersion - reaches zero. The radiograph of a specially chosen object, acquired with thermal neutrons from the Argonauta research reactor at the Instituto de Engenharia Nuclear - CNEN, Rio de Janeiro, Brazil, has undergone this treatment with fair results.
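The following Python sketch illustrates the idea of stopping Richardson-Lucy iterations when a histogram-difference statistic levels off. The statistic G used here is a plausible stand-in for the paper's definition, which the abstract only summarizes, and the function names are illustrative.

```python
import numpy as np

def richardson_lucy(image, psf, max_iter=200, hist_bins=64, tol=0.0):
    """1D Richardson-Lucy deconvolution with a histogram-based stop
    (sketch: G is an assumed global histogram difference between
    successive iterates; the paper's exact definition may differ)."""
    psf = psf / psf.sum()
    psf_m = psf[::-1]                        # mirrored PSF for the backstep
    est = np.full_like(image, image.mean())
    g_prev = None
    for k in range(max_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)
        new = est * np.convolve(ratio, psf_m, mode="same")
        h_old, edges = np.histogram(est, bins=hist_bins)
        h_new, _ = np.histogram(new, bins=edges)
        g = np.abs(h_new - h_old).sum()      # global histogram difference G
        if g_prev is not None and g_prev - g <= tol:
            return new, k + 1                # 1st derivative of G ~ 0: stop
        g_prev, est = g, new
    return est, max_iter

# synthetic test: a blurred two-feature object
x = np.zeros(200); x[60:80] = 1.0; x[120] = 2.0
psf = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
blurred = np.convolve(x, psf / psf.sum(), mode="same")
restored, n_used = richardson_lucy(blurred, psf)
print("iterations used:", n_used)
```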
Reconstruction of loads in the fibrosa and ventricularis of porcine aortic valves.
Vesely, I
1996-01-01
The main structural components of aortic valve cusps, the fibrosa and ventricularis, are preloaded by virtue of their attachment to each other. The fibrosa is under compression and the ventricularis is under tension. Once separated from each other, these internal stresses are relieved: the fibrosa elongates and the ventricularis shrinks. It then becomes impossible to determine what fraction of the load is carried by the two layers at a given strain using the standard superposition of tension vs strain curves. To enable the superposition approach, we needed to adjust the tension/strain curves of the fibrosa and ventricularis to duplicate the preload that exists in these layers. We therefore iteratively shifted these curves and compared their arithmetic sum to the tension curve for the whole intact cusp, using a sum-of-squares error function. In the radial direction, the best fits occurred when the fibrosa and ventricularis curves were shifted to the right and left by amounts corresponding to true strains of epsilon = 0.26 and 0.10, respectively. In the circumferential direction, the best fit was achieved for shifts of epsilon = -0.11 and 0.010 for the fibrosa and ventricularis, respectively. This 26% compressive strain of the radial fibrosa compares well with direct observations. The reconstructed tension curves indicate that the ventricularis carries much of the radial loads, whereas circumferentially the two layers share loads equally up to 25% strain, beyond which the fibrosa takes over.
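A schematic of the shift-and-superpose fit described above, assuming digitized tension-strain curves on a common strain grid; the simple grid search and linear interpolation are illustrative stand-ins for the authors' iterative procedure.

```python
import numpy as np

def best_shifts(strain, T_fib, T_vent, T_intact, shifts):
    """Grid search for the strain shifts of the fibrosa and ventricularis
    curves whose sum best matches the intact-cusp tension curve, using a
    sum-of-squares error (sketch of the superposition adjustment)."""
    best = (np.inf, 0.0, 0.0)
    for s_f in shifts:
        Tf = np.interp(strain, strain + s_f, T_fib)   # shift right by s_f
        for s_v in shifts:
            Tv = np.interp(strain, strain + s_v, T_vent)
            sse = np.sum((Tf + Tv - T_intact) ** 2)
            if sse < best[0]:
                best = (sse, s_f, s_v)
    return best

# synthetic layers with known built-in shifts (+0.26 and -0.10)
strain = np.linspace(0.0, 0.6, 121)
T_fib = np.exp(8.0 * strain) - 1.0             # stiff layer (illustrative)
T_vent = 0.5 * (np.exp(5.0 * strain) - 1.0)    # compliant layer (illustrative)
T_intact = (np.interp(strain, strain + 0.26, T_fib)
            + np.interp(strain, strain - 0.10, T_vent))
sse, s_f, s_v = best_shifts(strain, T_fib, T_vent, T_intact,
                            np.linspace(-0.3, 0.3, 61))
print(f"recovered shifts: fibrosa {s_f:+.2f}, ventricularis {s_v:+.2f}")
```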
CuCrZr alloy microstructure and mechanical properties after hot isostatic pressing bonding cycles
NASA Astrophysics Data System (ADS)
Frayssines, P.-E.; Gentzbittel, J.-M.; Guilloud, A.; Bucci, P.; Soreau, T.; Francois, N.; Primaux, F.; Heikkinen, S.; Zacchia, F.; Eaton, R.; Barabash, V.; Mitteau, R.
2014-04-01
ITER first wall (FW) panels are a layered structure made of the following three materials: 316L(N) austenitic stainless steel, CuCrZr alloy and beryllium. Two hot isostatic pressing (HIP) cycles are included in the reference fabrication route to bond these materials together for the normal heat flux design supplied by the European Union (EU). This reference fabrication route ensures sufficiently good mechanical properties for the materials and joints, which fulfil the ITER mechanical specifications, but often results in a coarse grain size for the CuCrZr alloy, which is unfavourable, especially for the thermal creep properties of the FW panels. To limit the abnormal grain growth of CuCrZr and make the ITER FW fabrication route more reliable, a study began in 2010 in the EU within the framework of an ITER task agreement. Two material fabrication approaches have been investigated. The first was dedicated to the fabrication of solid CuCrZr alloy in close collaboration with an industrial copper alloys manufacturer. The second was the manufacturing of CuCrZr alloy using the powder metallurgy (PM) route and HIP consolidation. This paper presents the main mechanical and microstructural results associated with the two CuCrZr approaches mentioned above. The mechanical properties of solid CuCrZr, PM CuCrZr and joints (solid CuCrZr/solid CuCrZr, solid CuCrZr/316L(N) and PM CuCrZr/316L(N)) are also presented.
ISS Double-Gimbaled CMG Subsystem Simulation Using the Agile Development Method
NASA Technical Reports Server (NTRS)
Inampudi, Ravi
2016-01-01
This paper presents an evolutionary approach to simulating a cluster of 4 Control Moment Gyros (CMG) on the International Space Station (ISS) using a common-sense approach (the agile development method) for concurrent mathematical modeling and simulation of the CMG subsystem. This simulation is part of the Training Systems for the 21st Century simulator, which will provide training for crew members, instructors, and flight controllers. The basic idea of how the CMGs on the space station are used for its non-propulsive attitude control is briefly explained to set up the context for simulating a CMG subsystem. Next, different reference frames and the detailed equations of motion (EOM) for multiple double-gimbal variable-speed control moment gyroscopes (DGVs) are presented. Fixing some of the terms in the EOM yields the special-case EOM for the ISS's double-gimbaled fixed-speed CMGs. CMG simulation development using the agile development method is presented, in which the customer's requirements and solutions evolve through iterative analysis, design, coding, unit testing and acceptance testing. At the end of each iteration, the set of features implemented in that iteration is demonstrated to the flight controllers, thus creating a short feedback loop and helping create adaptive development cycles. The unified modeling language (UML) tool is used in illustrating the user stories, class designs and sequence diagrams. This incremental approach to mathematically modeling and simulating the CMG subsystem involved the development team and the customer early on, thus improving the quality of the working CMG system in each iteration and helping the team to accurately predict the cost, schedule and delivery of the software.
Boiret, Mathieu; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel
2016-02-20
Raman chemical imaging provides both spectral and spatial information on a pharmaceutical drug product. Even if the main objective of chemical imaging is to obtain distribution maps of each formulation compound, identification of pure signals in a mixture dataset remains of great interest. In this work, an iterative approach is proposed to identify the compounds in a pharmaceutical drug product, assuming that the chemical composition of the product is not known by the analyst and that a low-dose compound can be present in the studied medicine. The proposed approach uses a spectral library, spectral distances and orthogonal projections to iteratively detect the pure compounds of a tablet. Since the proposed method is not based on variance decomposition, it should be well adapted to a drug product which contains a low-dose compound, interpreted as a compound located in a few pixels and with low spectral contributions. The method is tested on a tablet specifically manufactured for this study with one active pharmaceutical ingredient and five excipients. A spectral library, constituted of 24 pure pharmaceutical compounds, is used as a reference spectral database. Pure spectra of the active and excipients, including a modification of the crystalline form and a low-dose compound, are iteratively detected. Once the pure spectra are identified, a multivariate curve resolution-alternating least squares process is performed on the data to provide distribution maps of each compound in the studied sample. Distributions of the two crystalline forms of the active and the five excipients were in accordance with the theoretical formulation. Copyright © 2015 Elsevier B.V. All rights reserved.
Rueda, Oscar M; Diaz-Uriarte, Ramon
2007-10-16
Yu et al. (BMC Bioinformatics 2007,8: 145+) have recently compared the performance of several methods for the detection of genomic amplification and deletion breakpoints using data from high-density single nucleotide polymorphism arrays. One of the methods compared is our non-homogenous Hidden Markov Model approach. Our approach uses Markov Chain Monte Carlo for inference, but Yu et al. ran the sampler for a severely insufficient number of iterations for a Markov Chain Monte Carlo-based method. Moreover, they did not use the appropriate reference level for the non-altered state. We rerun the analysis in Yu et al. using appropriate settings for both the Markov Chain Monte Carlo iterations and the reference level. Additionally, to show how easy it is to obtain answers to additional specific questions, we have added a new analysis targeted specifically to the detection of breakpoints. The reanalysis shows that the performance of our method is comparable to that of the other methods analyzed. In addition, we can provide probabilities of a given spot being a breakpoint, something unique among the methods examined. Markov Chain Monte Carlo methods require using a sufficient number of iterations before they can be assumed to yield samples from the distribution of interest. Running our method with too small a number of iterations cannot be representative of its performance. Moreover, our analysis shows how our original approach can be easily adapted to answer specific additional questions (e.g., identify edges).
Nonnegative least-squares image deblurring: improved gradient projection approaches
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.
2010-02-01
The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
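A sketch of the projected Landweber iteration accelerated with a Barzilai-Borwein step length, one of the step-length selection ideas discussed above; this is not the authors' SGP code, and the demo problem is synthetic.

```python
import numpy as np

def projected_landweber(A, b, n_iter=100):
    """Nonnegative least squares by projected Landweber with a
    Barzilai-Borwein step length (sketch of the acceleration idea)."""
    x = np.maximum(A.T @ b, 0.0)
    g = A.T @ (A @ x - b)                    # gradient of 0.5*||Ax - b||^2
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe first step length
    for _ in range(n_iter):
        x_new = np.maximum(x - step * g, 0.0)    # gradient step + projection
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        sy = s @ y
        step = (s @ s) / sy if sy > 0 else step  # BB1 step-length rule
        x, g = x_new, g_new
    return x

# synthetic nonnegative deblurring-like problem
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 40))
x_true = np.maximum(rng.normal(size=40), 0.0)
b = A @ x_true + 0.01 * rng.normal(size=80)
x_hat = projected_landweber(A, b)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```

The semi-convergence behaviour noted in the abstract means that in noisy problems one would stop this loop early rather than run it to convergence.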
Intensity Conserving Spectral Fitting
NASA Technical Reports Server (NTRS)
Klimchuk, J. A.; Patsourakos, S.; Tripathi, D.
2015-01-01
The detailed shapes of spectral line profiles provide valuable information about the emitting plasma, especially when the plasma contains an unresolved mixture of velocities, temperatures, and densities. As a result of finite spectral resolution, the intensity measured by a spectrometer is the average intensity across a wavelength bin of non-zero size. It is assigned to the wavelength position at the center of the bin. However, the actual intensity at that discrete position will be different if the profile is curved, as it invariably is. Standard fitting routines (spline, Gaussian, etc.) do not account for this difference, and this can result in significant errors when making sensitive measurements. Detection of asymmetries in solar coronal emission lines is one example. Removal of line blends is another. We have developed an iterative procedure that corrects for this effect. It can be used with any fitting function, but we employ a cubic spline in a new analysis routine called Intensity Conserving Spline Interpolation (ICSI). As the name implies, it conserves the observed intensity within each wavelength bin, which ordinary fits do not. Given the rapid convergence, speed of computation, and ease of use, we suggest that ICSI be made a standard component of the processing pipeline for spectroscopic data.
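A schematic of the intensity-conserving idea in Python: iteratively correct the values handed to the spline so that the average of the fitted profile across each wavelength bin reproduces the measured bin intensity. This is not the ICSI routine itself, only a sketch of the correction loop with illustrative names.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def intensity_conserving_spline(wl, I_obs, n_sub=21, n_iter=20):
    """Iteratively adjust spline node values so the bin average of the
    spline equals the observed bin intensity (sketch, not the ICSI code)."""
    dw = np.gradient(wl)                     # wavelength bin widths
    I_node = I_obs.astype(float).copy()
    for _ in range(n_iter):
        cs = CubicSpline(wl, I_node)
        # average the current spline over each bin by fine sub-sampling
        offs = np.linspace(-0.5, 0.5, n_sub)[:, None] * dw[None, :]
        I_binavg = cs(wl[None, :] + offs).mean(axis=0)
        I_node += I_obs - I_binavg           # push bin averages toward data
    return cs

# demo: a curved Gaussian line profile sampled in coarse bins
wl = np.linspace(0.0, 1.0, 30)
profile = np.exp(-0.5 * ((wl - 0.5) / 0.05) ** 2)
cs = intensity_conserving_spline(wl, profile)
print("peak of conserving fit:", cs(0.5))
```

An ordinary spline through the bin values would underestimate the peak here, because each measured value is a bin average of a curved profile; the correction loop compensates for exactly that.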
Iterative projection algorithms for ab initio phasing in virus crystallography.
Lo, Victor L; Kingston, Richard L; Millane, Rick P
2016-12-01
Iterative projection algorithms are proposed as a tool for ab initio phasing in virus crystallography. The good global convergence properties of these algorithms, coupled with the spherical shape and high structural redundancy of icosahedral viruses, allows high resolution phases to be determined with no initial phase information. This approach is demonstrated by determining the electron density of a virus crystal with 5-fold non-crystallographic symmetry, starting with only a spherical shell envelope. The electron density obtained is sufficiently accurate for model building. The results indicate that iterative projection algorithms should be routinely applicable in virus crystallography, without the need for ancillary phase information. Copyright © 2016 Elsevier Inc. All rights reserved.
Numerical solution of Euler's equation by perturbed functionals
NASA Technical Reports Server (NTRS)
Dey, S. K.
1985-01-01
A perturbed functional iteration has been developed to solve nonlinear systems. At each iteration level it adds unique perturbation parameters to the nonlinear Gauss-Seidel iterates, which enhances the convergence properties. As convergence is approached these parameters are damped out. Local linearization along the diagonal has been used to compute these parameters. The method requires no computation of Jacobians or factorization of matrices. Analysis of convergence depends on properties of certain contraction-type mappings, known as D-mappings. In this article, application of this method to solve an implicit finite difference approximation of Euler's equation is studied. Some representative results for the well known shock tube problem and compressible flows in a nozzle are given.
Over the past decade, our research team at the US EPA Mid-Continent Ecology Division has employed systems biology approaches to examine and understand impacts of environmental contaminants on fish reproduction. Our systems biology approach is one in which iterations of model cons...
Simulation of RF-fields in a fusion device
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Witte, Dieter; Bogaert, Ignace; De Zutter, Daniel
2009-11-26
In this paper the problem of scattering off a fusion plasma is approached from the point of view of integral equations. Using the volume equivalence principle an integral equation is derived which describes the electromagnetic fields in a plasma. The equation is discretized with MoM using conforming basis functions. This reduces the problem to solving a dense matrix equation. This can be done iteratively. Each iteration can be sped up using FFTs.
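The FFT speed-up applies when the discretized operator is (block) Toeplitz on a uniform grid, so each matrix-vector product costs O(n log n) instead of O(n^2). Below is a 1D sketch with an illustrative kernel driven through SciPy's GMRES; the actual plasma kernel and conforming MoM basis functions are not reproduced here.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r) by x in
    O(n log n) by embedding it in a circulant and using the FFT."""
    n = len(x)
    circ = np.concatenate([c, [0], r[:0:-1]])     # size-2n circulant embedding
    y = np.fft.ifft(np.fft.fft(circ) * np.fft.fft(x, len(circ)))
    return y[:n]

# toy dense 'interaction' kernel on a uniform grid (assumption: the
# discretized operator is Toeplitz, which is what makes FFTs applicable)
n = 512
k = 0.3 + 0.1j
d = np.arange(n)
col = np.exp(1j * k * d) / (1.0 + d) ** 2         # illustrative decaying kernel
row = col.copy()

A = LinearOperator((n, n), dtype=complex,
                   matvec=lambda x: toeplitz_matvec(col, row, x))
b = np.ones(n, dtype=complex)
x, info = gmres(A, b, atol=1e-8)
print("GMRES converged" if info == 0 else f"info = {info}")
```

The solver never forms the dense matrix; only the first column and row are stored, which is what makes iterative solution of the otherwise dense MoM system attractive.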
NASA Astrophysics Data System (ADS)
Frank, Marius S.; Hättig, Christof
2018-04-01
We present a pair natural orbital (PNO)-based implementation of coupled cluster singles and doubles (CCSD) excitation energies that builds upon the previously proposed state-specific PNO approach to the excited state eigenvalue problem. We construct the excited state PNOs for each state separately in a truncated orbital specific virtual basis and use a local density-fitting approximation to achieve an at most quadratic scaling of the computational costs for the PNO construction. The earlier reported excited state PNO construction is generalized such that a smooth convergence of the results for charge transfer states is ensured for general coupled cluster methods. We investigate the accuracy of our implementation by applying it to a large and diverse test set comprising 153 singlet excitations in organic molecules. Already moderate PNO thresholds yield mean absolute errors below 0.01 eV. The performance of the implementation is investigated through the calculations on alkene chains and reveals an at most cubic cost-scaling for the CCSD iterations with the system size.
Atomic Radius and Charge Parameter Uncertainty in Biomolecular Solvation Energy Calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Xiu; Lei, Huan; Gao, Peiyuan
Atomic radii and charges are two major parameters used in implicit solvent electrostatics and energy calculations. The optimization problem for charges and radii is under-determined, leading to uncertainty in the values of these parameters and in the results of solvation energy calculations using these parameters. This paper presents a method for quantifying this uncertainty in solvation energies using surrogate models based on generalized polynomial chaos (gPC) expansions. There are relatively few atom types used to specify radii parameters in implicit solvation calculations; therefore, surrogate models for these low-dimensional spaces could be constructed using least-squares fitting. However, there are many more types of atomic charges; therefore, construction of surrogate models for the charge parameter space required compressed sensing combined with an iterative rotation method to enhance problem sparsity. We present results for the uncertainty in small molecule solvation energies based on these approaches. Additionally, we explore the correlation between uncertainties due to radii and charges which motivates the need for future work in uncertainty quantification methods for high-dimensional parameter spaces.
Boosted Multivariate Trees for Longitudinal Data
Pande, Amol; Li, Liang; Rajeswaran, Jeevanantham; Ehrlinger, John; Kogalur, Udaya B.; Blackstone, Eugene H.; Ishwaran, Hemant
2017-01-01
Machine learning methods provide a powerful approach for analyzing longitudinal data in which repeated measurements are observed for a subject over time. We boost multivariate trees to fit a novel flexible semi-nonparametric marginal model for longitudinal data. In this model, features are assumed to be nonparametric, while feature-time interactions are modeled semi-nonparametrically utilizing P-splines with an estimated smoothing parameter. In order to avoid overfitting, we describe a relatively simple in-sample cross-validation method which can be used to estimate the optimal boosting iteration and which has the surprising added benefit of stabilizing certain parameter estimates. Our new multivariate tree boosting method is shown to be highly flexible, robust to covariance misspecification and unbalanced designs, and resistant to overfitting in high dimensions. Feature selection can be used to identify important features and feature-time interactions. An application to longitudinal data of forced 1-second lung expiratory volume (FEV1) for lung transplant patients identifies an important feature-time interaction and illustrates the ease with which our method can find complex relationships in longitudinal data. PMID:29249866
Optimized bio-inspired stiffening design for an engine nacelle.
Lazo, Neil; Vodenitcharova, Tania; Hoffman, Mark
2015-11-04
Structural efficiency is a common engineering goal in which an ideal solution provides a structure with optimized performance at minimized weight, with consideration of material mechanical properties, structural geometry, and manufacturability. This study aims to address this goal in developing high performance lightweight, stiff mechanical components by creating an optimized design from a biologically-inspired template. The approach is implemented on the optimization of rib stiffeners along an aircraft engine nacelle. The helical and angled arrangements of cellulose fibres in plants were chosen as the bio-inspired template. Optimization of total displacement and weight was carried out using a genetic algorithm (GA) coupled with finite element analysis. Iterations showed a gradual convergence in normalized fitness. Displacement was given higher emphasis in optimization, thus the GA optimization tended towards individual designs with weights near the mass constraint. Dominant features of the resulting designs were helical ribs with rectangular cross-sections having large height-to-width ratio. Displacement reduction was at 73% as compared to an unreinforced nacelle, and is attributed to the geometric features and layout of the stiffeners, while mass is maintained within the constraint.
Rowe, Charlotte A.; Patton, Howard J.
2015-10-01
Here, we present analyses of the 2D seismic structure beneath Source Physics Experiments (SPE) geophone lines that extended radially at 100 m spacing from 100 to 2000 m from the source borehole. With seismic sources at only one end of the geophone lines, standard refraction profiling methods cannot resolve seismic velocity structures unambiguously. In previous work, we demonstrated overall agreement between body-wave refraction modeling and Rg dispersion curves for the least complex of the five lines. A more detailed inspection supports a 2D reinterpretation of the structure. We obtained Rg phase velocity measurements in both the time and frequency domains, then used iterative adjustment of the initial 1D body-wave model to predict Rg dispersion curves to fit the observed values. Our method applied to the most topographically severe of the geophone lines is supplemented with a 2D ray-tracing approach, whose application to P-wave arrivals supports the Rg analysis. In addition, midline sources will allow us to refine our characterization in future work.
Experiences of family physicians who practise primary care obstetrics in groups.
Koppula, Sudha; Brown, Judith B; Jordan, John M
2011-02-01
The purpose of this study was to explore the experiences of family physicians in primary care obstetrical groups. Using a qualitative approach, in-depth interviews were conducted with 12 Edmonton family physicians who participated in primary care obstetrical groups. Experiences with respect to several aspects of group obstetrical practice were examined including advantages and challenges of primary care obstetrical groups, provision of patient care by a group, fit with other work commitments, and sustainability of the groups. Study data were audiotaped and transcribed verbatim. Independent and team analysis was iterative and interpretive. Primary care obstetrical groups were found to preserve a family physician's enjoyment of obstetrics and allowed for continuity of care. They afforded work-life balance, allowed for collaboration, and provided support and a social network for group members. Such groups were found to facilitate short-term family physician absences, although long-term absences (such as maternity leaves) were considered challenging. Participants described conflict within primary care obstetrical groups and considered sustainability to be a challenge. Family physicians' continued involvement in obstetrics could be facilitated by their participation in primary care obstetrical groups.
Measuring the Accelerations of Water Megamasers in Active Galaxy J0437+2456
NASA Astrophysics Data System (ADS)
Turner, Jeremy
2018-01-01
The Megamaser Cosmology Project is measuring the Hubble constant using observations of 22 GHz water megamasers in the accretion disks of active galaxies within the Hubble flow. This approach uses the dynamics of the megamaser disks to determine their physical sizes and thereby find the angular-diameter distances to galaxies without relying on the cosmic distance ladder. We present Green Bank Telescope observations and analysis of the maser disk in the galaxy J0437+2456, which encircles a 2.9×10⁶ M⊙ supermassive black hole. With spectral monitoring observations spanning over four years, we measure the centripetal acceleration of each individual maser component by tracking its velocity drift over time. These accelerations will be used in later work to model the maser disk and determine the distance to the galaxy. Our acceleration measurements use an iterative least squares fitting technique. For the systemic maser features, we find a mean acceleration of 1.87 ± 0.47 km/s/yr. This project was completed as part of the NSF REU program at NRAO.
NASA Astrophysics Data System (ADS)
Sun, Shu-Ting; Li, Xiao-Dong; Zhong, Ren-Xin
2017-10-01
For nonlinear switched discrete-time systems with input constraints, this paper presents an open-closed-loop iterative learning control (ILC) approach, which includes a feedforward ILC part and a feedback control part. Under a given switching rule, the mathematical induction is used to prove the convergence of ILC tracking error in each subsystem. It is demonstrated that the convergence of ILC tracking error is dependent on the feedforward control gain, but the feedback control can speed up the convergence process of ILC by a suitable selection of feedback control gain. A switched freeway traffic system is used to illustrate the effectiveness of the proposed ILC law.
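A toy rendering of the open-closed-loop ILC idea on a scalar discrete-time system: the feedforward term uses the previous trial's error, the feedback term uses the current one, and a clip stands in for the input constraint. The gains, plant, and exact update form are illustrative assumptions, not the paper's control law.

```python
import numpy as np

def ilc_open_closed(n_trials=30, T=50):
    """Open-closed-loop ILC sketch: u_{k+1}(t) = u_k(t) + L*e_k(t+1)
    (feedforward from the previous trial) + K*e_{k+1}(t) (feedback from
    the current trial), with a saturation input constraint."""
    y_ref = np.sin(np.linspace(0, 2 * np.pi, T + 1))   # reference trajectory
    L, K, u_max = 0.5, 0.3, 2.0                        # illustrative gains
    u = np.zeros(T)                                    # stored trial input
    e_prev = np.zeros(T + 1)
    for k in range(n_trials):
        y = np.zeros(T + 1)
        e = np.zeros(T + 1)
        e[0] = y_ref[0] - y[0]
        for t in range(T):
            # feedforward from trial k-1 plus feedback from current error
            u_t = np.clip(u[t] + L * e_prev[t + 1] + K * e[t], -u_max, u_max)
            y[t + 1] = 0.9 * y[t] + 0.5 * u_t          # toy LTI subsystem
            e[t + 1] = y_ref[t + 1] - y[t + 1]
            u[t] = u_t                                 # remember for next trial
        e_prev = e
    return np.abs(e).max()

print(f"final max tracking error: {ilc_open_closed():.4f}")
```

The feedforward gain fixes whether the error converges across trials, while the feedback gain mainly speeds the transient, mirroring the roles the abstract assigns to the two parts.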
Learning to improve iterative repair scheduling
NASA Technical Reports Server (NTRS)
Zweben, Monte; Davis, Eugene
1992-01-01
This paper presents a general learning method for dynamically selecting between repair heuristics in an iterative repair scheduling system. The system employs a version of explanation-based learning called Plausible Explanation-Based Learning (PEBL) that uses multiple examples to confirm conjectured explanations. The basic approach is to conjecture contradictions between a heuristic and statistics that measure the quality of the heuristic. When these contradictions are confirmed, a different heuristic is selected. To motivate the utility of this approach we present an empirical evaluation of the performance of a scheduling system with respect to two different repair strategies. We show that the scheduler that learns to choose between the heuristics outperforms the same scheduler with any one of two heuristics alone.
NASA Astrophysics Data System (ADS)
Huang, C. L.; Hsu, N. S.; Yeh, W. W. G.; Hsieh, I. H.
2017-12-01
This study develops an innovative calibration method for regional groundwater modeling by using multi-class empirical orthogonal functions (EOFs). The developed method is an iterative approach. Prior to carrying out the iterative procedures, the groundwater storage hydrographs associated with the observation wells are calculated. The combined multi-class EOF amplitudes and EOF expansion coefficients of the storage hydrographs are then used to compute the initial guess of the temporal and spatial pattern of multiple recharges. The initial guess of the hydrogeological parameters is also assigned according to an in-situ pumping experiment. The recharges include net rainfall recharge and boundary recharge, and the hydrogeological parameters are riverbed leakage conductivity, horizontal hydraulic conductivity, vertical hydraulic conductivity, storage coefficient, and specific yield. The first step of the iterative algorithm is to run the numerical model (i.e. MODFLOW) with the initial guess or adjusted values of the recharges and parameters. Second, in order to determine the best EOF combination of the error storage hydrographs for determining the correction vectors, the objective function is devised as minimizing the root mean square error (RMSE) of the simulated storage hydrographs. The error storage hydrographs are the differences between the storage hydrographs computed from observed and simulated groundwater level fluctuations. Third, adjust the values of recharges and parameters and repeat the iterative procedures until the stopping criterion is reached. The established methodology was applied to the groundwater system of Ming-Chu Basin, Taiwan. The study period is from January 1 to December 2, 2012. Results showed that the optimal EOF combination for the multiple recharges and hydrogeological parameters can decrease the RMSE of the simulated storage hydrographs dramatically within three calibration iterations. This indicates that the iterative approach using EOF techniques can capture the groundwater flow tendency and detect the correction vectors of the simulated error sources. Hence, the established EOF-based methodology can effectively and accurately identify the multiple recharges and hydrogeological parameters.
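The EOF machinery this calibration relies on reduces to a singular value decomposition of the de-meaned storage-hydrograph matrix; the sketch below shows how the spatial patterns (amplitudes) and expansion coefficients are obtained. The calibration loop around MODFLOW is not reproduced, and the data and names are synthetic placeholders.

```python
import numpy as np

def eof_decompose(S, n_modes=3):
    """EOF analysis of storage hydrographs: rows are time steps,
    columns are observation wells (sketch of the technique only)."""
    anomalies = S - S.mean(axis=0)             # remove the temporal mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = Vt[:n_modes]                        # spatial patterns (amplitudes)
    pcs = U[:, :n_modes] * s[:n_modes]         # expansion coefficients in time
    explained = s[:n_modes] ** 2 / (s ** 2).sum()
    return eofs, pcs, explained

# toy example: 365 daily storage values at 12 wells with a seasonal mode
rng = np.random.default_rng(0)
t = np.arange(365.0)
S = (np.outer(np.sin(2 * np.pi * t / 365), rng.normal(1, 0.2, 12))
     + 0.1 * rng.normal(size=(365, 12)))
eofs, pcs, frac = eof_decompose(S)
print("variance explained by first 3 EOFs:", np.round(frac, 3))
```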
Evolutionary squeaky wheel optimization: a new framework for analysis.
Li, Jingpeng; Parkes, Andrew J; Burke, Edmund K
2011-01-01
Squeaky wheel optimization (SWO) is a relatively new metaheuristic that has been shown to be effective for many real-world problems. At each iteration SWO does a complete construction of a solution starting from the empty assignment. Although the construction uses information from previous iterations, the complete rebuilding does mean that SWO is generally effective at diversification but can suffer from a relatively weak intensification. Evolutionary SWO (ESWO) is a recent extension to SWO that is designed to improve the intensification by keeping the good components of solutions and only using SWO to reconstruct other poorer components of the solution. In such algorithms a standard challenge is to understand how the various parameters affect the search process. In order to support the future study of such issues, we propose a formal framework for the analysis of ESWO. The framework is based on Markov chains, and the main novelty arises because ESWO moves through the space of partial assignments. This makes it significantly different from the analyses used in local search (such as simulated annealing) which only move through complete assignments. Generally, the exact details of ESWO will depend on various heuristics; so we focus our approach on a case of ESWO that we call ESWO-II and that has probabilistic as opposed to heuristic selection and construction operators. For ESWO-II, we study a simple problem instance and explicitly compute the stationary distribution probability over the states of the search space. We find interesting properties of the distribution. In particular, we find that the probabilities of states generally, but not always, increase with their fitness. This nonmonotonicity is quite different from the monotonicity expected in algorithms such as simulated annealing.
Estimating cosmic velocity fields from density fields and tidal tensors
NASA Astrophysics Data System (ADS)
Kitaura, Francisco-Shu; Angulo, Raul E.; Hoffman, Yehuda; Gottlöber, Stefan
2012-10-01
In this work we investigate the non-linear and non-local relation between cosmological density and peculiar velocity fields. Our goal is to provide an algorithm for the reconstruction of the non-linear velocity field from the fully non-linear density. We find that including the gravitational tidal field tensor using second-order Lagrangian perturbation theory based upon an estimate of the linear component of the non-linear density field significantly improves the estimate of the cosmic flow in comparison to linear theory, not only in the low-density, but also and more dramatically in the high-density regions. In particular we test two estimates of the linear component: the lognormal model and the iterative Lagrangian linearization. The present approach relies on a rigorous higher order Lagrangian perturbation theory analysis which incorporates a non-local relation. It does not require additional fitting from simulations, being in this sense parameter free; it is independent of statistical-geometrical optimization; and it is straightforward and efficient to compute. The method is demonstrated to yield an unbiased estimator of the velocity field on scales ≳5 h-1 Mpc with closely Gaussian distributed errors. Moreover, the statistics of the divergence of the peculiar velocity field is extremely well recovered, showing good agreement with the true one from N-body simulations. The typical errors of about 10 km s-1 (1σ confidence intervals) are reduced by more than 80 per cent with respect to linear theory in the scale range between 5 and 10 h-1 Mpc in high-density regions (δ > 2). We also find that iterative Lagrangian linearization is significantly superior in the low-density regime with respect to the lognormal model.
Kim, Yong-Hwan; Kim, Junghoe; Lee, Jong-Hwan
2012-12-01
This study proposes an iterative dual-regression (DR) approach with sparse prior regularization to better estimate an individual's neuronal activation using the results of an independent component analysis (ICA) method applied to a temporally concatenated group of functional magnetic resonance imaging (fMRI) data (i.e., Tc-GICA method). An ordinary DR approach estimates the spatial patterns (SPs) of neuronal activation and corresponding time courses (TCs) specific to each individual's fMRI data with two steps involving least-squares (LS) solutions. Our proposed approach employs iterative LS solutions to refine both the individual SPs and TCs with an additional a priori assumption of sparseness in the SPs (i.e., minimally overlapping SPs) based on L1-norm minimization. To quantitatively evaluate the performance of this approach, semi-artificial fMRI data were created from resting-state fMRI data with the following considerations: (1) an artificially designed spatial layout of neuronal activation patterns with varying overlap sizes across subjects and (2) a BOLD time series (TS) with variable parameters such as onset time, duration, and maximum BOLD levels. To systematically control the spatial layout variability of neuronal activation patterns across the "subjects" (n=12), the degree of spatial overlap across all subjects was varied from a minimum of 1 voxel (i.e., 0.5-voxel cubic radius) to a maximum of 81 voxels (i.e., 2.5-voxel radius) across the task-related SPs with a size of 100 voxels for both the block-based and event-related task paradigms. In addition, several levels of maximum percentage BOLD intensity (i.e., 0.5, 1.0, 2.0, and 3.0%) were used for each degree of spatial overlap size. From the results, the estimated individual SPs of neuronal activation obtained from the proposed iterative DR approach with a sparse prior showed an enhanced true positive rate and reduced false positive rate compared to the ordinary DR approach. The estimated TCs of the task-related SPs from our proposed approach showed greater temporal correlation coefficients with a reference hemodynamic response function than those of the ordinary DR approach. Moreover, the efficacy of the proposed DR approach was also successfully demonstrated by the results of real fMRI data acquired from left-/right-hand clenching tasks in both block-based and event-related task paradigms. Copyright © 2012 Elsevier Inc. All rights reserved.
Construction and assembly of the wire planes for the MicroBooNE Time Projection Chamber
Acciarri, R.; Adams, C.; Asaadi, J.; ...
2017-03-09
As x-ray and electron tomography is pushed further into the nanoscale, the limitations of rotation stages become more apparent, leading to challenges in the alignment of the acquired projection images. Here we present an approach for rapid post-acquisition alignment of these projections to obtain high quality three-dimensional images. Our approach is based on a joint estimation of alignment errors, and the object, using an iterative refinement procedure. With simulated data where we know the alignment error of each projection image, our approach shows a residual alignment error that is a factor of a thousand smaller, and it reaches the same error level in the reconstructed image in less than half the number of iterations. We then show its application to experimental data in x-ray and electron nanotomography.
Lattice dynamics and thermal conductivity of lithium fluoride via first-principles calculations
NASA Astrophysics Data System (ADS)
Liang, Ting; Chen, Wen-Qi; Hu, Cui-E.; Chen, Xiang-Rong; Chen, Qi-Feng
2018-04-01
The lattice thermal conductivity of lithium fluoride (LiF) is accurately computed from a first-principles approach based on an iterative solution of the Boltzmann transport equation. A real-space finite-difference supercell approach is employed to generate the second- and third-order interatomic force constants. The related physical quantities of LiF are calculated from the second- and third-order potential interactions at 30 K-1000 K. The calculated lattice thermal conductivity of 13.89 W/(m K) for LiF at room temperature agrees well with the experimental value, demonstrating that the parameter-free approach can furnish precise descriptions of the lattice thermal conductivity for this material. In addition, the Born effective charges, dielectric constants and phonon spectrum of LiF accord well with the existing data. The lattice thermal conductivities from the iterative solution of the BTE are also presented.
Formulation for Simultaneous Aerodynamic Analysis and Design Optimization
NASA Technical Reports Server (NTRS)
Hou, G. W.; Taylor, A. C., III; Mani, S. V.; Newman, P. A.
1993-01-01
An efficient approach for simultaneous aerodynamic analysis and design optimization is presented. This approach does not require the performance of many flow analyses at each design optimization step, which can be an expensive procedure. Thus, this approach brings us one step closer to meeting the challenge of incorporating computational fluid dynamic codes into gradient-based optimization techniques for aerodynamic design. An adjoint-variable method is introduced to nullify the effect of the increased number of design variables in the problem formulation. The method has been successfully tested on one-dimensional nozzle flow problems, including a sample problem with a normal shock. Implementations of the above algorithm are also presented that incorporate Newton iterations to secure a high-quality flow solution at the end of the design process. Implementations with iterative flow solvers are possible and will be required for large, multidimensional flow problems.
Lambert W function for applications in physics
NASA Astrophysics Data System (ADS)
Veberič, Darko
2012-12-01
The Lambert W(x) function and its possible applications in physics are presented. The actual numerical implementation in C++ consists of Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
Program summary
Program title: LambertW
Catalogue identifier: AENC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU General Public License version 3
No. of lines in distributed program, including test data, etc.: 1335
No. of bytes in distributed program, including test data, etc.: 25 283
Distribution format: tar.gz
Programming language: C++ (with suitable wrappers it can be called from C, Fortran etc.); the supplied command-line utility is suitable for other scripting languages like sh, csh, awk, perl etc.
Computer: All systems with a C++ compiler.
Operating system: All Unix flavors, Windows. It might work with others.
RAM: Small memory footprint, less than 1 MB
Classification: 1.1, 4.7, 11.3, 11.9
Nature of problem: Find a fast and accurate numerical implementation of the Lambert W function.
Solution method: Halley's and Fritsch's iterations with initial approximations based on branch-point expansion, asymptotic series, rational fits, and continued-logarithm recursion.
Additional comments: The distribution file contains the command-line utility lambert-w; Doxygen comments are included in the source files, along with a Makefile.
Running time: The tests provided take only a few seconds to run.
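For orientation, here is a minimal Python sketch of the Halley step for the principal branch W0. The published C++ implementation uses far more careful initial approximations (branch-point and asymptotic expansions, rational fits), so the crude guesses below are placeholders.

```python
import math

def lambert_w0(x, tol=1e-12, max_iter=50):
    """Principal branch W0 of the Lambert W function via Halley's
    iteration on f(w) = w*exp(w) - x (minimal sketch)."""
    if x < -1.0 / math.e:
        raise ValueError("W0 is real only for x >= -1/e")
    # crude initial approximation (the paper's rational fits are better):
    # near the branch point use W ~ -1 + sqrt(2*(1 + e*x)), else log1p(x)
    if x <= -0.25:
        w = -1.0 + math.sqrt(2.0 * (1.0 + math.e * x))
    else:
        w = math.log1p(x)
    for _ in range(max_iter):
        ew = math.exp(w)
        f = w * ew - x
        # Halley's update: w -= f / (f' - f*f''/(2*f')), with
        # f' = e^w (w+1) and f'' = e^w (w+2)
        dw = f / (ew * (w + 1.0) - (w + 2.0) * f / (2.0 * w + 2.0))
        w -= dw
        if abs(dw) <= tol * (1.0 + abs(w)):
            return w
    return w

print(lambert_w0(1.0))    # ~0.567143, the omega constant
print(lambert_w0(-0.2))   # ~-0.259171
```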
Improved Real-Time Scan Matching Using Corner Features
NASA Astrophysics Data System (ADS)
Mohamed, H. A.; Moussa, A. M.; Elhabiby, M. M.; El-Sheimy, N.; Sesay, Abu B.
2016-06-01
The automation of unmanned vehicle operation has gained a lot of research attention in the last few years because of its numerous applications. Vehicle localization is more challenging in indoor environments, where absolute positioning measurements (e.g. GPS) are typically unavailable. Laser range finders are among the most widely used sensors that help unmanned vehicles localize themselves in indoor environments. Typically, automatic real-time matching of successive scans is performed either explicitly or implicitly by any localization approach that utilizes laser range finders. Many established approaches such as Iterative Closest Point (ICP), Iterative Matching Range Point (IMRP), Iterative Dual Correspondence (IDC), and Polar Scan Matching (PSM) handle the scan matching problem in an iterative fashion, which significantly affects the time consumption. Furthermore, solution convergence is not guaranteed, especially in cases of sharp maneuvers or fast movement. This paper proposes an automated real-time scan matching algorithm where the matching process is initialized using the detected corners. This initialization step aims to increase the convergence probability and to limit the number of iterations needed to reach convergence. The corner detection is preceded by line extraction from the laser scans. To evaluate the probability of line availability in indoor environments, various data sets offered by different research groups have been tested; the mean numbers of extracted lines per scan for these data sets range from 4.10 to 8.86 lines of more than 7 points. The set of all intersections between extracted lines is detected as corners, regardless of the physical intersection of these line segments in the scan. To account for the uncertainties of the detected corners, the covariance of the corners is estimated using the extracted lines' variances. The detected corners are used to estimate the transformation parameters between successive scans using least squares. These estimated transformation parameters are used to calculate an adjusted initialization for the scan matching process. The presented method can be employed on its own to match successive scans, and can also be used to aid other established iterative methods to achieve more effective and faster convergence. The performance and time consumption of the proposed approach are compared with the ICP algorithm alone, without initialization, in different scenarios such as static periods, fast straight movement, and sharp maneuvers.
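Once corners have been matched between two scans, the least-squares rotation and translation follow from the standard SVD (Kabsch/Umeyama) construction, sketched below; the scalar weights stand in for the full corner covariances estimated in the paper.

```python
import numpy as np

def rigid_transform_2d(P, Q, w=None):
    """Weighted least-squares 2D rigid transform R, t with Q ~ R P + t,
    given matched corners P, Q of shape (n, 2) (Kabsch/Umeyama sketch;
    scalar weights stand in for the full corner covariances)."""
    w = np.ones(len(P)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    mu_p, mu_q = w @ P, w @ Q                       # weighted centroids
    H = (P - mu_p).T @ ((Q - mu_q) * w[:, None])    # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

# demo: recover a known rotation and translation from 6 matched corners
rng = np.random.default_rng(2)
P = rng.normal(size=(6, 2))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
Q = P @ R_true.T + np.array([1.0, -0.5])
R, t = rigid_transform_2d(P, Q)
print("recovered translation:", np.round(t, 3))
```

The recovered (R, t) would then seed ICP or a similar iterative matcher, which is the initialization role the corners play in the paper.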
A new Bayesian recursive technique for parameter estimation
NASA Astrophysics Data System (ADS)
Kaheil, Yasir H.; Gill, M. Kashif; McKee, Mac; Bastidas, Luis
2006-08-01
The performance of any model depends on how well its associated parameters are estimated. In the current application, a localized Bayesian recursive estimation (LOBARE) approach is devised for parameter estimation. The LOBARE methodology is an extension of the Bayesian recursive estimation (BARE) method. It is applied in this paper on two different types of models: an artificial intelligence (AI) model in the form of a support vector machine (SVM) application for forecasting soil moisture and a conceptual rainfall-runoff (CRR) model represented by the Sacramento soil moisture accounting (SAC-SMA) model. Support vector machines, based on statistical learning theory (SLT), represent the modeling task as a quadratic optimization problem and have already been used in various applications in hydrology. They require estimation of three parameters. SAC-SMA is a very well known model that estimates runoff. It has a 13-dimensional parameter space. In the LOBARE approach presented here, Bayesian inference is used in an iterative fashion to estimate the parameter space that will most likely enclose a best parameter set. This is done by narrowing the sampling space through updating the "parent" bounds based on their fitness. These bounds are actually the parameter sets that were selected by BARE runs on subspaces of the initial parameter space. The new approach results in faster convergence toward the optimal parameter set using minimum training/calibration data and fewer sets of parameter values. The efficacy of the localized methodology is also compared with the previously used BARE algorithm.
Trial latencies estimation of event-related potentials in EEG by means of genetic algorithms
NASA Astrophysics Data System (ADS)
Da Pelo, P.; De Tommaso, M.; Monaco, A.; Stramaglia, S.; Bellotti, R.; Tangaro, S.
2018-04-01
Objective. Event-related potentials (ERPs) are usually obtained by averaging, thus neglecting the trial-to-trial latency variability in cognitive electroencephalography (EEG) responses. As a consequence, the shape and the peak amplitude of the averaged ERP are smeared and reduced, respectively, when the single-trial latencies show a relevant variability. To date, the majority of the methodologies for single-trial latency inference are iterative schemes providing suboptimal solutions, the most commonly used being Woody's algorithm. Approach. In this study, a global approach is developed by introducing a fitness function whose global maximum corresponds to the set of latencies which renders the trial signals as aligned as possible. A suitable genetic algorithm has been implemented to solve the optimization problem, characterized by new genetic operators tailored to the present problem. Main results. The results, on simulated trials, showed that the proposed algorithm performs better than Woody's algorithm in all conditions, at the cost of an increased computational complexity (justified by the improved quality of the solution). Application of the proposed approach to real data trials resulted in an increased correlation between latencies and reaction times with respect to the output from the RIDE method. Significance. The above mentioned results on simulated and real data indicate that the proposed method, providing a better estimate of single-trial latencies, will open the way to more accurate studies of neural responses as well as to the issue of relating the variability of latencies to the proper cognitive and behavioural correlates.
Dawes, Richard; Passalacqua, Alessio; Wagner, Albert F; Sewell, Thomas D; Minkoff, Michael; Thompson, Donald L
2009-04-14
We develop two approaches for growing a fitted potential energy surface (PES) by the interpolating moving least-squares (IMLS) technique using classical trajectories. We illustrate both approaches by calculating nitrous acid (HONO) cis → trans isomerization trajectories under the control of ab initio forces from low-level HF/cc-pVDZ electronic structure calculations. In this illustrative example, as few as 300 ab initio energy/gradient calculations are required to converge the isomerization rate constant at a fixed energy to approximately 10%. Neither approach requires any preliminary electronic structure calculations or initial approximate representation of the PES (beyond information required for trajectory initial conditions). Hessians are not required. Both approaches rely on the fitting error estimation properties of IMLS fits. The first approach, called IMLS-accelerated direct dynamics, propagates individual trajectories directly with no preliminary exploratory trajectories. The PES is grown "on the fly" with the computation of new ab initio data only when a fitting error estimate exceeds a prescribed tight tolerance. The second approach, called dynamics-driven IMLS fitting, uses relatively inexpensive exploratory trajectories to both determine and fit the dynamically accessible configuration space. Once exploratory trajectories no longer find configurations with fitting error estimates higher than the designated accuracy, the IMLS fit is considered to be complete and usable in classical trajectory calculations or other applications.
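To make the moving least-squares idea concrete, here is a minimal 1-D sketch (illustrative only; the actual IMLS scheme is multidimensional and uses different weight functions and error estimators): a weighted polynomial fit is re-solved at each query point, and a crude fitting-error proxy is obtained from the disagreement between two fit degrees.

```python
import numpy as np

def imls_fit(x0, xs, ys, degree=2, width=0.3, eps=1e-12):
    """1-D moving least squares: weighted polynomial fit centred on query x0."""
    w = np.exp(-((xs - x0) / width) ** 2) + eps   # Gaussian weights
    V = np.vander(xs - x0, degree + 1)            # local polynomial basis
    beta, *_ = np.linalg.lstsq(V * np.sqrt(w)[:, None],
                               ys * np.sqrt(w), rcond=None)
    return beta[-1]                               # constant term = fit value at x0

def error_estimate(x0, xs, ys):
    """Crude fitting-error proxy: disagreement between two fit degrees."""
    return abs(imls_fit(x0, xs, ys, degree=2) - imls_fit(x0, xs, ys, degree=3))

xs = np.linspace(0, np.pi, 15)
ys = np.sin(xs)
print(imls_fit(1.0, xs, ys), error_estimate(1.0, xs, ys))
```

In a grow-on-the-fly scheme like the one described above, a new ab initio point would be requested wherever such an error proxy exceeds the prescribed tolerance.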
The ν3, ν4 and ν6 bands of formaldehyde: A spectral catalog from 900 cm⁻¹ to 1580 cm⁻¹
NASA Technical Reports Server (NTRS)
Nadler, Shachar; Reuter, D. C.; Daunt, S. J.; Johns, J. W. C.
1988-01-01
The results of a complete high-resolution study of the three vibration-rotation bands ν3, ν4, and ν6 using both TDLs and FT-IR spectroscopy are presented. The results are given in terms of a table of over 8000 predicted transition frequencies and strengths. A plot of the predicted and calculated spectra is shown. Over 3000 transitions were assigned and used in the simultaneous analysis of the three bands. The simultaneous fit permitted a rigorous study of Coriolis and other interactions among the bands, yielding improved molecular constants. Line intensities of 28 transitions measured by a TDL and 20 transitions from FTS data were used, along with the eigenvectors from the frequency fitting, in a least squares analysis to evaluate the band strengths.
NASA Astrophysics Data System (ADS)
Cornelius, Reinold R.; Voight, Barry
1995-03-01
The Materials Failure Forecasting Method for volcanic eruptions (FFM) analyses the rate of precursory phenomena. Time of eruption onset is derived from the time of "failure" implied by an accelerating rate of deformation. The approach attempts to fit data, Ω, to the differential relationship Ω̈ = AΩ̇^α, where the dot superscript represents the time derivative, and the data Ω may be any of several parameters describing the accelerating deformation or energy release of the volcanic system. Rate coefficients, A and α, may be derived from appropriate data sets to provide an estimate of time to "failure". As the method is still an experimental technique, it should be used with appropriate judgment during times of volcanic crisis. Limitations of the approach are identified and discussed. Several kinds of eruption precursory phenomena, all simulating accelerating creep during the mechanical deformation of the system, can be used with FFM. Among these are tilt data, slope-distance measurements, crater fault movements and seismicity. The use of seismic coda, seismic amplitude-derived energy release and time-integrated amplitudes or coda lengths are examined. Using cumulative coda length directly has some practical advantages over more rigorously derived parameters, and RSAM and SSAM technologies appear to be well suited to real-time applications. One graphical and four numerical techniques of applying FFM are discussed. The graphical technique is based on an inverse representation of rate versus time. For α = 2, the inverse rate plot is linear; it is concave upward for α < 2 and concave downward for α > 2. The eruption time is found by simple extrapolation of the data set toward the time axis. Three numerical techniques are based on linear least-squares fits to linearized data sets. The "linearized least-squares technique" is most robust and is expected to be the most practical numerical technique. This technique is based on an iterative linearization of the given rate-time series. The hindsight technique is disadvantaged by a bias favouring a too-early eruption time in foresight applications. The "log rate versus log acceleration technique", utilizing a logarithmic representation of the fundamental differential equation, is disadvantaged by large data scatter after interpolation of accelerations. One further numerical technique, a nonlinear least-squares fit to rate data, requires special and more complex software. PC-oriented computer codes were developed for data manipulation, application of the three linearizing numerical methods, and curve fitting. Separate software is required for graphing purposes. All three linearizing techniques facilitate an eruption window based on a data envelope according to the linear least-squares fit, at a specific level of confidence, and an estimated rate at time of failure.
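A minimal sketch of the graphical inverse-rate technique for the α = 2 case described above (synthetic data and parameter values are assumptions for illustration): the inverse rate declines linearly in time, so a least-squares line extrapolated to the time axis gives the predicted failure (eruption) time.

```python
import numpy as np

# Synthetic accelerating precursor: rate(t) = 1 / (A * (tf - t)), the alpha = 2 case.
A, tf = 0.05, 100.0
t = np.linspace(0.0, 80.0, 60)
rate = 1.0 / (A * (tf - t)) * (1 + 0.05 * np.random.default_rng(1).standard_normal(t.size))

# For alpha = 2 the inverse-rate plot is linear: 1/rate = A*(tf - t).
slope, intercept = np.polyfit(t, 1.0 / rate, 1)
t_failure = -intercept / slope        # time-axis intercept = predicted eruption time
print(f"predicted failure time: {t_failure:.1f} (true {tf})")
```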
1990-11-01
(Q + aa′)⁻¹ = Q⁻¹ − Q⁻¹aa′Q⁻¹ / (1 + a′Q⁻¹a). This is a simple case of a general formula called Woodbury's formula by some authors; see, for example, Phadke and... [table-of-contents fragments: 2. The First-Order Moving Average Model; 3. Some Approaches to the Iterative...] ...the approximate likelihood function in some time series models. Useful suggestions have been the Cholesky decomposition of the covariance matrix and
1990-02-01
Tobias B. Orloff: Work began on developing a high-quality rendering algorithm based on the radiosity method. The algorithm is similar to previous progressive radiosity algorithms except for the following improvements: 1. At each iteration, vertex radiosities are computed using a modified scan-line approach, thus eliminating the quadratic cost associated with a ray-tracing computation of vertex radiosities. 2. At each iteration the scene is
User interface issues in supporting human-computer integrated scheduling
NASA Technical Reports Server (NTRS)
Cooper, Lynne P.; Biefeld, Eric W.
1991-01-01
Explored here are the user interface problems encountered with the Operations Missions Planner (OMP) project at the Jet Propulsion Laboratory (JPL). OMP uses a unique iterative approach to planning that places additional requirements on the user interface, particularly to support system development and maintenance. These requirements are necessary to support the concepts of heuristically controlled search, in-progress assessment, and iterative refinement of the schedule. The techniques used to address the OMP interface needs are given.
NASA Astrophysics Data System (ADS)
Nguyen, An Hung; Guillemette, Thomas; Lambert, Andrew J.; Pickering, Mark R.; Garratt, Matthew A.
2017-09-01
Image registration is a fundamental image processing technique. It is used to spatially align two or more images that have been captured at different times, from different sensors, or from different viewpoints. There have been many algorithms proposed for this task, the most common being the well-known Lucas-Kanade (LK) and Horn-Schunck approaches. However, the main limitation of these approaches is the computational complexity required to implement the large number of iterations necessary for successful alignment of the images. Previously, a multi-pass image interpolation algorithm (MP-I2A) was developed to considerably reduce the number of iterations required for successful registration compared with the LK algorithm. This paper develops a kernel-warping algorithm (KWA), a modified version of the MP-I2A, which requires fewer iterations to successfully register two images and less memory space for the field-programmable gate array (FPGA) implementation than the MP-I2A. These reductions increase the feasibility of implementing the proposed algorithm on FPGAs with very limited memory space and other hardware resources. A two-FPGA system rather than a single-FPGA system is developed to implement the KWA, in order to compensate for the limited hardware resources of a single FPGA and to increase the parallel processing ability and scalability of the system.
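For reference, here is a minimal iterative Lucas-Kanade sketch restricted to pure translation (a toy illustration of why iteration count dominates the cost; this is neither the MP-I2A nor the KWA, and all parameter values are assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def lk_translation(fixed, moving, n_iter=25):
    """Estimate the shift to apply to `moving` to align it with `fixed`."""
    p = np.zeros(2)                                   # (row, col) shift estimate
    gy, gx = np.gradient(fixed.astype(float))         # gradients of the fixed image
    A = np.array([[np.sum(gy * gy), np.sum(gy * gx)],
                  [np.sum(gy * gx), np.sum(gx * gx)]])
    for _ in range(n_iter):
        warped = nd_shift(moving.astype(float), p, order=1, mode='nearest')
        err = fixed - warped
        b = np.array([np.sum(gy * err), np.sum(gx * err)])
        dp = -np.linalg.solve(A, b)                   # Gauss-Newton update
        p += dp
        if np.linalg.norm(dp) < 1e-4:                 # converged
            break
    return p

rng = np.random.default_rng(2)
img = gaussian_filter(rng.random((64, 64)), 3)        # smooth test image
mov = nd_shift(img, (2.5, -1.5), order=1, mode='nearest')
print(lk_translation(img, mov))   # approx. (-2.5, 1.5): undoes the applied shift
```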
Development of an evidence-based review with recommendations using an online iterative process.
Rudmik, Luke; Smith, Timothy L
2011-01-01
The practice of modern medicine is governed by evidence-based principles. Due to the plethora of medical literature, clinicians often rely on systematic reviews and clinical guidelines to summarize the evidence and provide best practices. Implementation of an evidence-based clinical approach can minimize variation in health care delivery and optimize the quality of patient care. This article reports a method for developing an "Evidence-based Review with Recommendations" using an online iterative process. The manuscript describes the following steps involved in this process: Clinical topic selection, Evidence-based review assignment, Literature review and initial manuscript preparation, Iterative review process with author selection, and Manuscript finalization. The goal of this article is to improve efficiency and increase the production of evidence-based reviews while maintaining the high quality and transparency associated with the rigorous methodology utilized for clinical guideline development. With the rise of evidence-based medicine, most medical and surgical specialties have an abundance of clinical topics which would benefit from a formal evidence-based review. Although clinical guideline development is an important methodology, the associated challenges limit development to only the absolute highest priority clinical topics. As outlined in this article, the online iterative approach to the development of an Evidence-based Review with Recommendations may improve productivity without compromising the quality associated with formal guideline development methodology. Copyright © 2011 American Rhinologic Society-American Academy of Otolaryngic Allergy, LLC.
Implementation of a Curriculum-Integrated Computer Game for Introducing Scientific Argumentation
NASA Astrophysics Data System (ADS)
Wallon, Robert C.; Jasti, Chandana; Lauren, Hillary Z. G.; Hug, Barbara
2017-11-01
Argumentation has been emphasized in recent US science education reform efforts (NGSS Lead States 2013; NRC 2012), and while existing studies have investigated approaches to introducing and supporting argumentation (e.g., McNeill and Krajcik in Journal of Research in Science Teaching, 45(1), 53-78, 2008; Kang et al. in Science Education, 98(4), 674-704, 2014), few studies have investigated how game-based approaches may be used to introduce argumentation to students. In this paper, we report findings from a design-based study of a teacher's use of a computer game intended to introduce the claim, evidence, reasoning (CER) framework (McNeill and Krajcik 2012) for scientific argumentation. We studied the implementation of the game over two iterations of development in a high school biology teacher's classes. The results of this study include aspects of enactment of the activities and student argument scores. We found the teacher used the game in aspects of explicit instruction of argumentation during both iterations, although the ways in which the game was used differed. Also, students' scores in the second iteration were significantly higher than the first iteration. These findings support the notion that students can learn argumentation through a game, especially when used in conjunction with explicit instruction and support in student materials. These findings also highlight the importance of analyzing classroom implementation in studies of game-based learning.
A new method for the automatic interpretation of Schlumberger and Wenner sounding curves
Zohdy, A.A.R.
1989-01-01
A fast iterative method for the automatic interpretation of Schlumberger and Wenner sounding curves is based on obtaining interpreted depths and resistivities from shifted electrode spacings and adjusted apparent resistivities, respectively. The method is fully automatic. It does not require an initial guess of the number of layers, their thicknesses, or their resistivities; and it does not require extrapolation of incomplete sounding curves. The number of layers in the interpreted model equals the number of digitized points on the sounding curve. The resulting multilayer model is always well-behaved with no thin layers of unusually high or unusually low resistivities. For noisy data, interpretation is done in two sets of iterations (two passes). Anomalous layers, created because of noise in the first pass, are eliminated in the second pass. Such layers are eliminated by considering the best-fitting curve from the first pass to be a smoothed version of the observed curve and automatically reinterpreting it (second pass). The application of the method is illustrated by several examples.
Constitutive law for thermally-activated plasticity of recrystallized tungsten
NASA Astrophysics Data System (ADS)
Zinovev, Aleksandr; Terentyev, Dmitry; Dubinko, Andrii; Delannay, Laurent
2017-12-01
A physically-based constitutive law relevant for the ITER-specification tungsten grade in the as-recrystallized state is proposed. The material exhibits stages III and IV of plastic deformation, in which the hardening rate does not drop to zero as the applied stress increases. In contrast to the classical Kocks-Mecking model, which is valid at stage III, the strain hardening decreases asymptotically, resembling a hyperbolic function. The material parameters are fitted by relying on tensile test data and by requiring that the strain and stress at the onset of diffuse necking (uniform elongation and ultimate tensile strength, respectively) as well as the yield stress be reproduced. The model is then validated in the temperature range 300-600 °C with the help of finite element analysis of tensile tests, which confirms the reproducibility of the experimental engineering curves up to the onset of diffuse necking, beyond which the development of ductile damage accelerates the material failure. This temperature range represents the low-temperature application window for tungsten as divertor material in the fusion reactor ITER.
NASA Technical Reports Server (NTRS)
Hallidy, William H. (Inventor); Chin, Robert C. (Inventor)
1999-01-01
The present invention is a system for chemometric analysis for the extraction of the individual component fluorescence spectra and fluorescence lifetimes from a target mixture. The present invention combines a processor with an apparatus for generating an excitation signal to transmit at a target mixture and an apparatus for detecting the emitted signal from the target mixture. The present invention extracts the individual fluorescence spectrum and fluorescence lifetime measurements from the frequency and wavelength data acquired from the emitted signal. The present invention uses an iterative solution that first requires the initialization of several decision variables and the initial approximation determinations of intermediate matrices. The iterative solution compares the decision variables for convergence to see if further approximation determinations are necessary. If the solution converges, the present invention then determines the reduced best fit error for the analysis of the individual fluorescence lifetime and the fluorescence spectrum before extracting the individual fluorescence lifetime and fluorescence spectrum from the emitted signal of the target mixture.
The ZH ratio Analysis of Global Seismic Data
NASA Astrophysics Data System (ADS)
Yano, T.; Shikato, S.; Rivera, L.; Tanimoto, T.
2007-12-01
The ZH ratio, the ratio of vertical to horizontal component of the fundamental Rayleigh wave as a function of frequency, is an alternative approach to phase/group velocity analysis for constructing the S-wave velocity structure. In this study, teleseismic Rayleigh wave data for the frequency range between 0.004 Hz and 0.04 Hz is used to investigate the interior structure. We have analyzed most of the GEOSCOPE network data and some IRIS GSN stations using a technique developed by Tanimoto and Rivera (2007). Stable estimates of the ZH ratios were obtained for the frequency range for most stations. We have performed the inversion of the measured ZH ratios for the structure in the crust and mantle by using a nonlinear iterative scheme. The depth sensitivity kernels for inversion are numerically calculated. Depth sensitivity of the lowest frequency extends to depths beyond 500 km, but the sensitivity of the overall data for the frequency band extends down to about 300 km. We found that an appropriate selection of an initial model, particularly the depth of the Mohorovicic discontinuity, is important for this inversion. The inversion result depends on the initial model and turned out to be non-unique. We have constructed the initial model from CRUST 2.0. Inversion with equal weighting to each data point tends to reduce variance in certain frequency ranges only. Therefore, we have developed a scheme to increase the weighting of data points that do not fit well after the fifth iteration. This occurs more often for the low frequency range, 0.004-0.007 Hz. After fitting the lower frequency region, a low velocity zone around a depth of 100 km is observed under some stations such as KIP (Kipapa, Hawaii) and ATD (Arta Cave, Djibouti). We have also carried out an analysis of the resolving power of the data by examining the eigenvalues and eigenvectors of the least-squares problem. Unfortunately, the normal matrix usually has 1-2 very large eigenvalues, followed by much smaller eigenvalues. The third one is often an order of magnitude smaller. The largest eigenvalue is always dominated by an eigenfunction that has its peak at the surface. This indicates that the ZH ratio is sensitive to shallow structure but has limited resolving power for the underlying structure. We will report details of the resolving capabilities of the ZH ratios.
A study of core Thomson scattering measurements in ITER using a multi-laser approach
NASA Astrophysics Data System (ADS)
Kurskiev, G. S.; Sdvizhenskii, P. A.; Bassan, M.; Andrew, P.; Bazhenov, A. N.; Bukreev, I. M.; Chernakov, P. V.; Kochergin, M. M.; Kukushkin, A. B.; Kukushkin, A. S.; Mukhin, E. E.; Razdobarin, A. G.; Samsonov, D. S.; Semenov, V. V.; Tolstyakov, S. Yu.; Kajita, S.; Masyukevich, S. V.
2015-05-01
The electron component is the main channel for anomalous power loss and the main indicator of transient processes in the tokamak plasma. The electron temperature and density profiles mainly determine the operational mode of the machine. This imposes demanding requirements on the precision and on the spatial and temporal resolution of the Thomson scattering (TS) measurements. Measuring such high electron temperatures with good accuracy in a large fusion device such as ITER using TS encounters a number of physical problems. The 40 keV TS spectrum has a significant blue shift. Due to the transmission functions of the fibres and to their darkening that can occur under a strong neutron irradiation, the operational wavelength range is bounded on the blue side. For example, high temperature measurements become impossible with the 1064 nm probing wavelength since the TS signal within the boundaries of the operational window weakly depends on Te. The second problem is connected with the TS calibration. The TS system for a large fusion machine like ITER will have a set of optical components inaccessible for maintenance, and their spectral characteristics may change with time. Since the present concept of the TS system for ITER relies on the classical approach to measuring the shape of the scattered spectra using wide spectral channels, the diagnostic will be very sensitive to changes in the optical transmission. The third complication is connected with the deviation of the electron velocity distribution function from a Maxwellian that can happen under strong ECRH/ECCD, and it may additionally hamper the measurements. This paper analyses the advantages of a 'multi-laser approach' implementation for the current design of the core TS system. Such an approach assumes simultaneous plasma probing with different wavelengths, which allows the measurement accuracy to be improved significantly and the spectral calibration of the TS system to be performed. A comparative analysis of the conservative and advanced approaches is given.
Computed inverse magnetic resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
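Of the three inversion options named above, the Tikhonov-regularized inverse is compact enough to sketch. The following k-space deconvolution uses the standard dipole-kernel convention from quantitative susceptibility mapping (B0 along z) and an illustrative regularization value; it is a sketch of the general technique, not the paper's code.

```python
import numpy as np

def dipole_kernel(shape):
    """k-space dipole kernel D(k) = 1/3 - kz^2/|k|^2 (B0 along z)."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                      # avoid 0/0 at the k-space origin
    return 1.0 / 3.0 - kz**2 / k2

def tikhonov_susceptibility(fieldmap, lam=0.05):
    """Regularized inverse of fieldmap = D * chi (convolution theorem)."""
    D = dipole_kernel(fieldmap.shape)
    F = np.fft.fftn(fieldmap)
    chi_k = np.conj(D) * F / (np.abs(D)**2 + lam)   # Tikhonov-regularized division
    return np.real(np.fft.ifftn(chi_k))

# Round-trip demo on a synthetic susceptibility source.
chi = np.zeros((32, 32, 32)); chi[12:20, 12:20, 12:20] = 1.0
field = np.real(np.fft.ifftn(dipole_kernel(chi.shape) * np.fft.fftn(chi)))
chi_rec = tikhonov_susceptibility(field)
print(np.corrcoef(chi.ravel(), chi_rec.ravel())[0, 1])
```

The ill-posedness shows up in the zeros of D(k) (the magic-angle cone), which is why the regularizer λ, or the TV iteration discussed above, is needed.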
Dynamic adaptive learning for decision-making supporting systems
NASA Astrophysics Data System (ADS)
He, Haibo; Cao, Yuan; Chen, Sheng; Desai, Sachi; Hohil, Myron E.
2008-03-01
This paper proposes a novel adaptive learning method for data mining in support of decision-making systems. Due to the inherent characteristics of information ambiguity/uncertainty, high dimensionality, and noise in many homeland security and defense applications, such as surveillance, monitoring, net-centric battlefield, and others, it is critical to develop autonomous learning methods to efficiently learn useful information from raw data to help the decision making process. The proposed method is based on a dynamic learning principle in the feature spaces. Generally speaking, conventional approaches to learning from high dimensional data sets include various feature extraction (principal component analysis, wavelet transform, and others) and feature selection (embedded approach, wrapper approach, filter approach, and others) methods. However, only a very limited understanding of adaptive learning from different feature spaces has been achieved. We propose an integrative approach that takes advantage of feature selection and hypothesis ensemble techniques to achieve our goal. Based on the training data distributions, a feature score function is used to provide a measurement of the importance of different features for learning purposes. Then multiple hypotheses are iteratively developed in different feature spaces according to their learning capabilities. Unlike the pre-set iteration steps in many of the existing ensemble learning approaches, such as the adaptive boosting (AdaBoost) method, the iterative learning process will automatically stop when the intelligent system cannot provide a better understanding than a random guess in that particular subset of feature spaces. Finally, a voting algorithm is used to combine all the decisions from different hypotheses to provide the final prediction results. Simulation analyses of the proposed method on classification of different US military aircraft databases show the effectiveness of this method.
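A hedged sketch of the loop just described, on synthetic data (the feature score, base learner, subset schedule, and stopping threshold are all assumptions; the paper's exact choices are not specified here):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=30, n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

order = np.argsort(mutual_info_classif(X_tr, y_tr))[::-1]   # feature score ranking
hypotheses, chance = [], 0.5                                # binary task -> 0.5 = random guess
for k in range(2, len(order), 2):                           # grow the feature subspace
    feats = order[:k]
    h = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr[:, feats], y_tr)
    if h.score(X_tr[:, feats], y_tr) <= chance:             # stop: no better than random
        break
    hypotheses.append((feats, h))

# Majority vote across the per-subspace hypotheses.
votes = np.mean([h.predict(X_te[:, f]) for f, h in hypotheses], axis=0)
print("ensemble accuracy:", np.mean((votes > 0.5) == y_te))
```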
NASA Astrophysics Data System (ADS)
Dyar, M. Darby; Giguere, Stephen; Carey, CJ; Boucher, Thomas
2016-12-01
This project examines the causes, effects, and optimization of continuum removal in laser-induced breakdown spectroscopy (LIBS) to produce the best possible prediction accuracy of elemental composition in geological samples. We compare prediction accuracy resulting from several different techniques for baseline removal, including asymmetric least squares (ALS), adaptive iteratively reweighted penalized least squares (Air-PLS), fully automatic baseline correction (FABC), continuous wavelet transformation, median filtering, polynomial fitting, the iterative thresholding Dietrich method, convex hull/rubber band techniques, and a newly-developed technique for Custom baseline removal (BLR). We assess the predictive performance of these methods using partial least-squares analysis for 13 elements of geological interest, expressed as the weight percentages of SiO2, Al2O3, TiO2, FeO, MgO, CaO, Na2O, K2O, and the parts per million concentrations of Ni, Cr, Zn, Mn, and Co. We find that previously published methods for baseline subtraction generally produce equivalent prediction accuracies for major elements. When those pre-existing methods are used, automated optimization of their adjustable parameters is always necessary to wring the best predictive accuracy out of a data set; ideally, it should be done for each individual variable. The new technique of Custom BLR produces significant improvements in prediction accuracy over existing methods across varying geological data sets, instruments, and varying analytical conditions. These results also demonstrate the dual objectives of the continuum removal problem: removing a smooth underlying signal to fit individual peaks (univariate analysis) versus using feature selection to select only those channels that contribute to best prediction accuracy for multivariate analyses. Overall, the current practice of using generalized, one-method-fits-all-spectra baseline removal results in poorer predictive performance for all methods. The extra steps needed to optimize baseline removal for each predicted variable and empower multivariate techniques with the best possible input data for optimal prediction accuracy are shown to be well worth the slight increase in necessary computations and complexity.
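The asymmetric least squares baseline named above is compact enough to sketch. This follows the widely used Eilers & Boelens formulation, with illustrative parameter values (the paper's optimized, per-variable settings are not reproduced here):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def als_baseline(y, lam=1e5, p=0.01, n_iter=10):
    """Asymmetric least squares baseline (Eilers & Boelens).

    lam controls smoothness; p < 0.5 makes points above the baseline
    (i.e. emission peaks) count far less than points below it.
    """
    L = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, -1, -2], shape=(L, L - 2))
    P = lam * (D @ D.T)                      # second-difference smoothness penalty
    w = np.ones(L)
    for _ in range(n_iter):
        W = sparse.spdiags(w, 0, L, L)
        z = spsolve((W + P).tocsc(), w * y)  # weighted, penalized least squares
        w = p * (y > z) + (1 - p) * (y < z)  # reweight asymmetrically
    return z

# Toy LIBS-like spectrum: two narrow peaks on a curved continuum.
x = np.linspace(0, 1, 500)
y = np.exp(-2 * x) + np.exp(-((x - 0.4) / 0.01) ** 2) + np.exp(-((x - 0.7) / 0.01) ** 2)
continuum = als_baseline(y)
```

As the abstract notes, λ and p are exactly the kind of adjustable parameters whose automated, per-variable optimization drives the prediction-accuracy gains.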
Andrianakis, Ioannis; Vernon, Ian R.; McCreesh, Nicky; McKinley, Trevelyan J.; Oakley, Jeremy E.; Nsubuga, Rebecca N.; Goldstein, Michael; White, Richard G.
2015-01-01
Advances in scientific computing have allowed the development of complex models that are being routinely applied to problems in disease epidemiology, public health and decision making. The utility of these models depends in part on how well they can reproduce empirical data. However, fitting such models to real world data is greatly hindered both by large numbers of input and output parameters, and by long run times, such that many modelling studies lack a formal calibration methodology. We present a novel method that has the potential to improve the calibration of complex infectious disease models (hereafter called simulators). We present this in the form of a tutorial and a case study where we history match a dynamic, event-driven, individual-based stochastic HIV simulator, using extensive demographic, behavioural and epidemiological data available from Uganda. The tutorial describes history matching and emulation. History matching is an iterative procedure that reduces the simulator's input space by identifying and discarding areas that are unlikely to provide a good match to the empirical data. History matching relies on the computational efficiency of a Bayesian representation of the simulator, known as an emulator. Emulators mimic the simulator's behaviour, but are often several orders of magnitude faster to evaluate. In the case study, we use a 22-input simulator, fitting its 18 outputs simultaneously. After 9 iterations of history matching, a non-implausible region of the simulator input space was identified that was orders of magnitude smaller than the original input space. Simulator evaluations made within this region were found to have a 65% probability of fitting all 18 outputs. History matching and emulation are useful additions to the toolbox of infectious disease modellers. Further research is required to explicitly address the stochastic nature of the simulator as well as to account for correlations between outputs. PMID:25569850
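A minimal sketch of one history-matching wave for a single output (a toy stand-in simulator, a GP emulator, and the conventional implausibility cutoff of 3; all names and values are illustrative assumptions, and real applications add model-discrepancy variance and handle many outputs):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
simulator = lambda x: np.sin(3 * x[:, 0]) * x[:, 1]   # stand-in for an expensive model

# A few expensive simulator runs train the cheap emulator.
X_design = rng.uniform(0, 1, size=(40, 2))
gp = GaussianProcessRegressor(kernel=RBF(0.3), alpha=1e-6).fit(X_design, simulator(X_design))

z_obs, var_obs = 0.55, 0.01 ** 2                      # "empirical data" and its variance
X_cand = rng.uniform(0, 1, size=(100000, 2))          # cheap to screen via the emulator
mu, sd = gp.predict(X_cand, return_std=True)

# Implausibility: standardized distance of the emulator prediction from the data.
I = np.abs(z_obs - mu) / np.sqrt(sd**2 + var_obs)
nonimplausible = X_cand[I < 3.0]                      # the usual 3-sigma cutoff
print("fraction of input space retained:", len(nonimplausible) / len(X_cand))
```

The next wave would rerun the simulator only inside the retained region, refit the emulator there, and repeat, which is how the input space shrinks iteration by iteration.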
NASA Astrophysics Data System (ADS)
Puķīte, Jānis; Wagner, Thomas
2016-05-01
We address the application of differential optical absorption spectroscopy (DOAS) to scattered light observations in the presence of strong absorbers (in particular ozone), for which the absorption optical depth is a non-linear function of the trace gas concentration. This is the case because the Beer-Lambert law generally does not hold for scattered light measurements, due to the many light paths contributing to the measurement. While in many cases a linear approximation can be made, for scenarios with strong absorptions non-linear effects cannot always be neglected. This is especially the case for observation geometries in which the light contributing to the measurement crosses the atmosphere along spatially well-separated paths differing strongly in length and location, as in limb geometry. In these cases, full retrieval algorithms are often applied to address the non-linearities, requiring iterative forward modelling of absorption spectra involving time-consuming wavelength-by-wavelength radiative transfer modelling. In this study, we propose to describe the non-linear effects by additional sensitivity parameters that can be used e.g. to build up a lookup table. Together with the widely used box air mass factors (effective light paths) describing the linear response to an increase in the trace gas amount, the higher-order sensitivity parameters eliminate the need for repeating the radiative transfer modelling when modifying the absorption scenario, even in the presence of a strong absorption background. While the higher-order absorption structures can be described as separate fit parameters in the spectral analysis (the so-called DOAS fit), in practice their quantitative evaluation requires good measurement quality (typically better than that available from current measurements). Therefore, we introduce an iterative retrieval algorithm correcting for the higher-order absorption structures not yet considered in the DOAS fit as well as for the absorption dependence on temperature and scattering processes.
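One way to picture the role of the higher-order sensitivity parameters (an illustrative form only, not the paper's notation; the symbols A_i and B_ij are assumed names) is a second-order expansion of the measured optical depth in the absorber amounts c_i:

```latex
\tau(\lambda) \;=\; -\ln\frac{I(\lambda)}{I_0(\lambda)}
\;\approx\; \sum_i A_i(\lambda)\, c_i \;+\; \frac{1}{2}\sum_{i,j} B_{ij}(\lambda)\, c_i c_j
```

Here the A_i(λ) play the role of the box-air-mass-factor (linear) sensitivities and the B_ij(λ) capture the leading non-linear response; tabulating both in a lookup table then replaces repeated wavelength-by-wavelength radiative transfer runs when the absorption scenario changes.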
Analysis of Anderson Acceleration on a Simplified Neutronics/Thermal Hydraulics System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Toth, Alex; Kelley, C. T.; Slattery, Stuart R
A standard method for solving coupled multiphysics problems in light water reactors is Picard iteration, which sequentially alternates between solving single physics applications. This solution approach is appealing due to simplicity of implementation and the ability to leverage existing software packages to accurately solve single physics applications. However, there are several drawbacks in the convergence behavior of this method; namely slow convergence and the necessity of heuristically chosen damping factors to achieve convergence in many cases. Anderson acceleration is a method that has been seen to be more robust and fast converging than Picard iteration for many problems, without significantly higher cost per iteration or complexity of implementation, though its effectiveness in the context of multiphysics coupling is not well explored. In this work, we develop a one-dimensional model simulating the coupling between the neutron distribution and fuel and coolant properties in a single fuel pin. We show that this model generally captures the convergence issues noted in Picard iterations which couple high-fidelity physics codes. We then use this model to gauge potential improvements with regard to rate of convergence and robustness from utilizing Anderson acceleration as an alternative to Picard iteration.
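A minimal Anderson acceleration sketch in the standard difference formulation (this is the general technique, not the authors' neutronics code; the toy map and window size m are illustrative assumptions):

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
    """Anderson acceleration of the fixed-point iteration x = g(x)."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    X, G = [], []                                   # histories of iterates and g(iterates)
    for _ in range(max_iter):
        gx = g(x)
        X.append(x.copy()); G.append(gx.copy())
        if np.linalg.norm(gx - x) < tol:
            return gx
        mk = min(m, len(X))
        F = np.column_stack([G[-mk + i] - X[-mk + i] for i in range(mk)])  # residuals
        if mk == 1:
            x = gx                                  # plain Picard step to start
        else:
            dF = np.diff(F, axis=1)
            gamma, *_ = np.linalg.lstsq(dF, F[:, -1], rcond=None)
            dG = np.diff(np.column_stack(G[-mk:]), axis=1)
            x = gx - dG @ gamma                     # combine the history of g-values
    return x

# Toy stand-in for a "coupled physics" map; Anderson typically converges in
# noticeably fewer iterations than plain Picard on such contractions.
g = lambda x: np.array([np.cos(x[1]), 0.5 * np.sin(x[0]) + 0.3])
print(anderson(g, np.zeros(2)))
```

Note that when the window holds a single iterate the step reduces exactly to Picard, which is why Anderson can be dropped into an existing Picard loop with little extra implementation cost.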
Li, Chuan; Li, Lin; Zhang, Jie; Alexov, Emil
2012-01-01
The Gauss-Seidel method is a standard iterative numerical method widely used to solve systems of equations and, in general, is more efficient than other iterative methods, such as the Jacobi method. However, the standard implementation of the Gauss-Seidel method restricts its utilization in parallel computing due to its requirement of using updated neighboring values (i.e., in the current iteration) as soon as they are available. Here we report an efficient and exact (not requiring assumptions) method to parallelize iterations and to reduce the computational time as a linear/nearly linear function of the number of CPUs. In contrast to other existing solutions, our method does not require any assumptions and is equally applicable to solving linear and nonlinear equations. This approach is implemented in the DelPhi program, which is a finite difference Poisson-Boltzmann equation solver to model electrostatics in molecular biology. This development makes the iterative procedure for obtaining the electrostatic potential distribution in the parallelized DelPhi severalfold faster than that in the serial code. Further, we demonstrate the advantages of the new parallelized DelPhi by computing the electrostatic potential and the corresponding energies of large supramolecular structures. PMID:22674480
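To make the contrast concrete, a minimal serial sketch (illustrative only, unrelated to DelPhi's parallelization scheme): Gauss-Seidel consumes freshly updated entries within the same sweep, which is exactly the dependency that blocks naive parallelization, whereas Jacobi uses only the previous iterate and is trivially parallel.

```python
import numpy as np

def gauss_seidel(A, b, n_iter=200):
    """Each sweep uses freshly updated entries of x immediately."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        for i in range(len(b)):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

def jacobi(A, b, n_iter=200):
    """Each sweep uses only the previous iterate -> trivially parallel."""
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = (b - (A - np.diag(np.diag(A))) @ x) / np.diag(A)
    return x

# Diagonally dominant test system: both converge, Gauss-Seidel usually faster.
rng = np.random.default_rng(4)
A = rng.random((50, 50)) + 50 * np.eye(50)
b = rng.random(50)
print(np.linalg.norm(gauss_seidel(A, b) - np.linalg.solve(A, b)))
```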
Density control in ITER: an iterative learning control and robust control approach
NASA Astrophysics Data System (ADS)
Ravensbergen, T.; de Vries, P. C.; Felici, F.; Blanken, T. C.; Nouailletas, R.; Zabeo, L.
2018-01-01
Plasma density control for next-generation tokamaks, such as ITER, is challenging for several reasons. The response of the usual gas valve actuators in future, larger fusion devices might be too slow for feedback control. Both pellet fuelling and the use of feedforward-based control may help to solve this problem. Also, tight density limits arise during ramp-up, due to operational limits related to divertor detachment and radiative collapses. As the number of shots available for controller tuning will be limited in ITER, in this paper, iterative learning control (ILC) is proposed to determine optimal feedforward actuator inputs based on tracking errors obtained in previous shots. This control method can take the actuator and density limits into account and can deal with large actuator delays. However, a purely feedforward-based density control may not be sufficient due to the presence of disturbances and shot-to-shot differences. Therefore, robust control synthesis is used to construct a robustly stabilizing feedback controller. In simulations, it is shown that this combined controller strategy is able to achieve good tracking performance in the presence of shot-to-shot differences, tight constraints, and model mismatches.
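The shot-to-shot learning idea can be illustrated with a minimal P-type ILC sketch on a toy delayed first-order actuator (the plant model, learning gain, delay compensation, and saturation limits are all illustrative assumptions, not the ITER design):

```python
import numpy as np

def plant(u, delay=5, pole=0.9):
    """Toy gas-valve-like actuator: delayed first-order response to input u."""
    y = np.zeros(len(u))
    for t in range(1, len(u)):
        u_t = u[t - delay] if t >= delay else 0.0
        y[t] = pole * y[t - 1] + (1 - pole) * u_t
    return y

T = 200
ref = np.linspace(0, 1, T)          # density ramp-up reference
u = np.zeros(T)
L_gain, delay = 0.8, 5
for shot in range(30):              # one feedforward update per "shot"
    e = ref - plant(u)
    # P-type ILC update: shift the error by the known actuator delay, then learn on it.
    u[:-delay] += L_gain * e[delay:]
    u = np.clip(u, 0.0, 2.0)        # actuator saturation limits
print("final RMS tracking error:", np.sqrt(np.mean((ref - plant(u)) ** 2)))
```

The clipping step is the crude analogue of the constraint handling mentioned above; the robust feedback controller of the paper would then act on the residual error within each shot.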
DOE Office of Scientific and Technical Information (OSTI.GOV)
Candy, J.
2015-12-01
This work was motivated by the observation, as early as 2008, that GYRO simulations of some ITER operating scenarios exhibited nonlinear zonal-flow generation large enough to effectively quench turbulence inside r/a ~ 0.5. This observation of flow-dominated, low-transport states persisted even as more accurate and comprehensive predictions of ITER profiles were made using the state-of-the-art TGLF transport model. This core stabilization is in stark contrast to GYRO-TGLF comparisons for modern-day tokamaks, for which GYRO and TGLF are typically in very close agreement. So, we began to suspect that TGLF needed to be generalized to include the effect of zonal-flow stabilization in order to be more accurate for the conditions of reactor simulations. While the precise cause of the GYRO-TGLF discrepancy for ITER parameters was not known, it was speculated that closeness to threshold in the absence of driven rotation, as well as electromagnetic stabilization, created conditions more sensitive to self-generated zonal-flow stabilization than in modern tokamaks. Need for nonlinear zonal-flow stabilization: To explore the inclusion of a zonal-flow stabilization mechanism in TGLF, we started with a nominal ITER profile predicted by TGLF, and then performed linear and nonlinear GYRO simulations to characterize the behavior at and slightly above the nominal temperature gradients for finite levels of energy transport. Then, we ran TGLF on these cases to see where the discrepancies were largest. The predicted ITER profiles were indeed near the TGLF threshold over most of the plasma core in the hybrid discharge studied (weak magnetic shear, q > 1). Scanning temperature gradients above the TGLF power balance values also showed that TGLF overpredicted the electron energy transport in the low-collisionality ITER plasma. At first (in Q3), a model of only the zonal-flow stabilization (Dimits shift) was attempted. Although we were able to construct an ad hoc model of the zonal flows that fit the GYRO simulations, the parameters of the model had to be tuned to each case. A physics basis for the zonal flow model was lacking. Electron energy transport at short wavelength: A secondary issue – the high-k electron energy flux – was initially assumed to be independent of the zonal flow effect. However, detailed studies of the fluctuation spectra from recent multiscale (electron and ion scale) GYRO simulations provided a critical new insight into the role of zonal flows. The multiscale simulations suggested that advection by the zonal flows strongly suppressed electron-scale turbulence. Radial shear of the zonal E×B fluctuation could not compete with the large electron-scale linear growth rate, but the kx-mixing rate of the E×B advection could. This insight led to a preliminary new model for the way zonal flows saturate both electron- and ion-scale turbulence. It was also discovered that the strength of the zonal E×B velocity could be computed from the linear growth rate spectrum. The new saturation model (SAT1), which replaces the original model (SAT0), was fit to the multiscale GYRO simulations as well as the ion-scale GYRO simulations used to calibrate the original SAT0 model. Thus, SAT1 captures the physics of both multiscale electron transport and zonal-flow stabilization.
In future work, the SAT1 model will require significant further testing and (expensive) calibration with nonlinear multiscale gyrokinetic simulations over a wider variety of plasma conditions – certainly more than the small set of scans about a single C-Mod L-mode discharge. We believe the SAT1 model holds great promise as a physics-based model of multiscale turbulent transport in fusion devices. Correction to ITER performance predictions: Finally, the impact of the SAT1 model on the ITER hybrid case is mixed. Without the electron-scale contribution to the fluxes, the Dimits shift makes a significant improvement in the predicted fusion power, as originally posited. Alas, including the high-k electron transport reduces the improvement, yielding a modest net increase in predicted fusion power compared to the TGLF prediction with the original SAT0 model.
Computational aspects of helicopter trim analysis and damping levels from Floquet theory
NASA Technical Reports Server (NTRS)
Gaonkar, Gopal H.; Achar, N. S.
1992-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
Non-iterative distance constraints enforcement for cloth drapes simulation
NASA Astrophysics Data System (ADS)
Hidajat, R. L. L. G.; Wibowo, Arifin, Z.; Suyitno
2016-03-01
A cloth simulation represents the behavior of cloth objects such as flags, tablecloths, or garments, and has applications in clothing animation for games and virtual shops. Elastically deformable models have been widely used to provide realistic and efficient simulation; however, the problem of overstretching is encountered. We introduce a new cloth simulation algorithm that replaces iterative distance constraint enforcement steps with non-iterative ones for preventing overstretching in a spring-mass system for cloth modeling. Our method is based on a simple position correction procedure applied at one end of a spring. In our experiments, we developed a rectangular cloth model which is initially in a horizontal position with one point fixed, and it is allowed to drape under its own weight. Our simulation is able to achieve plausible cloth drapes, as in reality. This paper aims to demonstrate the reliability of our approach in preventing overstretching while decreasing the computational cost of the constraint enforcement process, since the iterative procedure is eliminated.
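A hedged sketch of the position-correction idea (an interpretation of the description above, not the authors' code): after the integration step, each overstretched spring is corrected exactly once by pulling its free end back onto the maximum allowed length, visiting springs in order from the fixed attachment outward so no constraint needs revisiting.

```python
import numpy as np

def enforce_distance(points, springs, rest_len, max_stretch=1.1):
    """Single non-iterative pass: clamp each spring to its allowed length.

    `springs` is ordered from the fixed attachment outward, so correcting
    parent links first leaves no constraint to revisit (no iteration).
    """
    for (i, j), L0 in zip(springs, rest_len):            # i is nearer the fixed point
        d = points[j] - points[i]
        dist = np.linalg.norm(d)
        limit = max_stretch * L0
        if dist > limit:
            points[j] = points[i] + d * (limit / dist)   # pull the free end back
    return points

# A hanging chain of 5 masses, top mass fixed, overstretched by a gravity step.
pts = np.array([[0.0, 0.0], [0.0, -1.3], [0.0, -2.9], [0.0, -4.6], [0.0, -6.4]])
springs = [(0, 1), (1, 2), (2, 3), (3, 4)]
pts = enforce_distance(pts, springs, rest_len=[1.0] * 4)
```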
Beamforming Based Full-Duplex for Millimeter-Wave Communication
Liu, Xiao; Xiao, Zhenyu; Bai, Lin; Choi, Jinho; Xia, Pengfei; Xia, Xiang-Gen
2016-01-01
In this paper, we study beamforming based full-duplex (FD) systems in millimeter-wave (mmWave) communications. A joint transmission and reception (Tx/Rx) beamforming problem is formulated to maximize the achievable rate by mitigating self-interference (SI). Since the optimal solution is difficult to find due to the non-convexity of the objective function, suboptimal schemes are proposed in this paper. A low-complexity algorithm, which iteratively maximizes signal power while suppressing SI, is proposed and its convergence is proven. Moreover, two closed-form solutions, which do not require iterations, are also derived under minimum-mean-square-error (MMSE), zero-forcing (ZF), and maximum-ratio transmission (MRT) criteria. Performance evaluations show that the proposed iterative scheme converges fast (within only two iterations on average) and approaches an upper-bound performance, while the two closed-form solutions also achieve appealing performances, although there are noticeable differences from the upper bound depending on channel conditions. Interestingly, these three schemes show different robustness against the geometry of Tx/Rx antenna arrays and channel estimation errors. PMID:27455256
Iterative Importance Sampling Algorithms for Parameter Estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray W; Morzfeld, Matthias; Day, Marcus S.
In parameter estimation problems one computes a posterior distribution over uncertain parameters defined jointly by a prior distribution, a model, and noisy data. Markov chain Monte Carlo (MCMC) is often used for the numerical solution of such problems. An alternative to MCMC is importance sampling, which can exhibit near perfect scaling with the number of cores on high performance computing systems because samples are drawn independently. However, finding a suitable proposal distribution is a challenging task. Several sampling algorithms have been proposed over the past years that take an iterative approach to constructing a proposal distribution. We investigate the applicability of such algorithms by applying them to two realistic and challenging test problems, one in subsurface flow, and one in combustion modeling. More specifically, we implement importance sampling algorithms that iterate over the mean and covariance matrix of Gaussian or multivariate t-proposal distributions. Our implementation leverages massively parallel computers, and we present strategies to initialize the iterations using 'coarse' MCMC runs or Gaussian mixture models.
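A minimal sketch of the mean/covariance iteration described above, for a Gaussian proposal and a toy unnormalized posterior (the target, sample size, and jitter are illustrative assumptions; the samples in each round could be drawn in parallel):

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy unnormalized log-posterior: an isotropic Gaussian centred at (2, -1).
log_post = lambda x: -0.5 * np.sum(((x - np.array([2.0, -1.0])) / 0.3) ** 2, axis=1)

mu, cov = np.zeros(2), 4.0 * np.eye(2)       # deliberately poor initial proposal
for it in range(8):
    xs = rng.multivariate_normal(mu, cov, size=5000)
    # Log importance weights: log posterior minus log proposal density
    # (constants cancel after normalization).
    diff = xs - mu
    sol = np.linalg.solve(cov, diff.T).T
    log_q = -0.5 * np.sum(diff * sol, axis=1) - 0.5 * np.log(np.linalg.det(cov))
    lw = log_post(xs) - log_q
    w = np.exp(lw - lw.max()); w /= w.sum()
    # Refit proposal moments from the weighted sample.
    mu = w @ xs
    cov = (xs - mu).T @ ((xs - mu) * w[:, None]) + 1e-6 * np.eye(2)
print(mu)   # approaches the posterior mean (2, -1)
```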
Perception of competence in middle school physical education: instrument development and validation.
Scrabis-Fletcher, Kristin; Silverman, Stephen
2010-03-01
Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and the scores validated to measure POC in middle school PE. A multiphase design was used consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N=1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores of POC measurement in middle school PE.
Testing goodness of fit in regression: a general approach for specified alternatives.
Solari, Aldo; le Cessie, Saskia; Goeman, Jelle J
2012-12-10
When fitting generalized linear models or the Cox proportional hazards model, it is important to have tools to test for lack of fit. Because lack of fit comes in all shapes and sizes, distinguishing among different types of lack of fit is of practical importance. We argue that an adequate diagnosis of lack of fit requires a specified alternative model. Such specification identifies the type of lack of fit the test is directed against, so that if we reject the null hypothesis, we know the direction of the departure from the model. The goodness-of-fit approach of this paper allows different types of lack of fit to be treated within a unified general framework and many existing tests to be considered as special cases. Connections with penalized likelihood and random effects are discussed, and the application of the proposed approach is illustrated with medical examples. Tailored functions for goodness-of-fit testing have been implemented in the R package globaltest. Copyright © 2012 John Wiley & Sons, Ltd.
Simultaneous fitting of genomic-BLUP and Bayes-C components in a genomic prediction model.
Iheshiulor, Oscar O M; Woolliams, John A; Svendsen, Morten; Solberg, Trygve; Meuwissen, Theo H E
2017-08-24
The rapid adoption of genomic selection is due to two key factors: availability of both high-throughput dense genotyping and statistical methods to estimate and predict breeding values. The development of such methods is still ongoing and, so far, there is no consensus on the best approach. Currently, the linear and non-linear methods for genomic prediction (GP) are treated as distinct approaches. The aim of this study was to evaluate the implementation of an iterative method (called GBC) that incorporates aspects of both linear [genomic-best linear unbiased prediction (G-BLUP)] and non-linear (Bayes-C) methods for GP. The iterative nature of GBC makes it less computationally demanding, similar to other non-Markov chain Monte Carlo (MCMC) approaches. However, as a Bayesian method, GBC differs from both MCMC- and non-MCMC-based methods by combining some aspects of G-BLUP and Bayes-C methods for GP. Its relative performance was compared to that of G-BLUP and Bayes-C. We used an imputed 50K single-nucleotide polymorphism (SNP) dataset based on the Illumina Bovine50K BeadChip, which included 48,249 SNPs and 3244 records. Daughter yield deviations for somatic cell count, fat yield, milk yield, and protein yield were used as response variables. GBC was frequently (marginally) superior to G-BLUP and Bayes-C in terms of prediction accuracy and was significantly better than G-BLUP only for fat yield. On average across the four traits, GBC yielded a 0.009 and 0.006 increase in prediction accuracy over G-BLUP and Bayes-C, respectively. Computationally, GBC was much faster than Bayes-C and similar to G-BLUP. Our results show that incorporating some aspects of G-BLUP and Bayes-C in a single model can improve the accuracy of GP over the commonly used method, G-BLUP. Generally, GBC did not statistically perform better than G-BLUP and Bayes-C, probably due to the close relationships between reference and validation individuals. Nevertheless, it is a flexible tool in the sense that it simultaneously incorporates some aspects of linear and non-linear models for GP, thereby exploiting family relationships while also accounting for linkage disequilibrium between SNPs and genes with large effects. The application of GBC in GP merits further exploration.
Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude
2016-01-01
Background Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education to assess clinical reasoning skills has rarely been reported. More specifically, it is unclear if clinical reasoning is better acquired if the instructor's input occurs entirely after or is integrated during the scenario. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions without providing feedback. Objective The aim of this study is to evaluate the effectiveness of Simulation with Iterative Discussions versus the classical approach of simulation in developing reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. Methods This will be a prospective exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one SID or classical simulation: 30-minute, audio-video-recorded, complex high-fidelity simulations covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. Results This study is in its preliminary stages and the results are expected to be made available by April, 2016. Conclusions This will be the first study to explore a new simulation approach designed to enhance clinical reasoning. By assessing reasoning processes more closely throughout a simulation session, we believe that Simulation with Iterative Discussions will be an interesting and more effective approach for students. The findings of the study will benefit medical educators, education programs, and medical students. PMID:26888076
Towards Current Profile Control in ITER: Potential Approaches and Research Needs
NASA Astrophysics Data System (ADS)
Schuster, E.; Barton, J. E.; Wehner, W. P.
2014-10-01
Many challenging plasma control problems still need to be addressed in order for the ITER Plasma Control System (PCS) to be able to successfully achieve the ITER project goals. For instance, setting up a suitable toroidal current density profile is key for one possible advanced scenario characterized by noninductive sustainment of the plasma current and steady-state operation. The nonlinearity and high dimensionality exhibited by the plasma demand a model-based current-profile control synthesis procedure that can accommodate this complexity through embedding the known physics within the design. The development of a model capturing the dynamics of the plasma relevant for control design enables not only the design of feedback controllers for regulation or tracking but also the design of optimal feedforward controllers for a systematic model-based approach to scenario planning, the design of state estimators for a reliable real-time reconstruction of the plasma internal profiles based on limited and noisy diagnostics, and the development of a fast predictive simulation code for closed-loop performance evaluation before implementation. Progress towards control-oriented modeling of the current profile evolution and associated control design has been reported following both data-driven and first-principles-driven approaches. An overview of these two approaches will be provided, as well as a discussion on research needs associated with each one of the model applications described above. Supported by the US Department of Energy under DE-SC0001334 and DE-SC0010661.
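As a rough illustration of the model-based feedback design the abstract describes, the following toy sketch regulates a spatially discretized profile with a linear state-space model and a proportional feedback law. All matrices, dimensions, and gains are hypothetical placeholders, not ITER physics or the authors' actual control synthesis.

```python
# Illustrative sketch only: a toy discrete-time, model-based feedback loop for
# regulating a spatially discretized current profile. The model matrices and
# gains below are hypothetical placeholders, not ITER physics.
import numpy as np

n, m = 8, 2            # profile nodes, actuators (both made up)
rng = np.random.default_rng(0)

# Toy diffusion-like dynamics: x_{k+1} = A x_k + B u_k
A = 0.9 * np.eye(n) + 0.05 * (np.eye(n, k=1) + np.eye(n, k=-1))
B = rng.uniform(0.0, 1.0, size=(n, m))

x_ref = np.linspace(1.0, 0.2, n)      # desired profile shape (illustrative)
K = 0.5 * np.linalg.pinv(B)           # crude proportional gain via pseudo-inverse

x = np.zeros(n)
for k in range(50):
    u = K @ (x_ref - A @ x)           # feedback drives the predicted state to x_ref
    x = A @ x + B @ u

print("final tracking error:", np.linalg.norm(x - x_ref))
```

In a real design the gain would come from an optimal or robust synthesis against the identified plasma model; the pseudo-inverse gain here only illustrates the closed-loop structure that such a control-oriented model enables.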
A possibilistic approach to clustering
NASA Technical Reports Server (NTRS)
Krishnapuram, Raghu; Keller, James M.
1993-01-01
Fuzzy clustering has been shown to be advantageous over crisp (or traditional) clustering methods in that total commitment of a vector to a given class is not required at each iteration. Recently, fuzzy clustering methods have shown a spectacular ability to detect not only hypervolume clusters, but also clusters which are actually 'thin shells', i.e., curves and surfaces. Most analytic fuzzy clustering approaches are derived from the 'Fuzzy C-Means' (FCM) algorithm. The FCM uses the probabilistic constraint that the memberships of a data point across classes sum to one. This constraint was used to generate the membership update equations for an iterative algorithm. Recently, we cast the clustering problem into the framework of possibility theory, using an approach in which the resulting partition of the data can be interpreted as a possibilistic partition and the membership values may be interpreted as degrees of possibility of the points belonging to the classes. We show the ability of this approach to detect linear and quartic curves in the presence of considerable noise.
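As a concrete illustration, the membership update at the heart of the possibilistic formulation can be sketched as follows. This follows the standard possibilistic c-means update, in which memberships act as typicalities rather than probabilities; the initialization, scale parameters, and toy data are simplifying assumptions.

```python
# Minimal sketch of a possibilistic c-means (PCM) iteration: memberships need
# not sum to one across clusters, so they behave as degrees of typicality.
# Initialization and eta values are simplified assumptions for illustration.
import numpy as np

def pcm(X, centers, eta, m=2.0, iters=50):
    """X: (n_points, dim); centers: (c, dim); eta: (c,) scale parameters."""
    for _ in range(iters):
        # Squared distances d2[i, j] from cluster i to point j
        d2 = ((centers[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
        # Possibilistic membership: u_ij = 1 / (1 + (d2/eta_i)^(1/(m-1)))
        u = 1.0 / (1.0 + (d2 / eta[:, None]) ** (1.0 / (m - 1.0)))
        # Prototype update, same form as fuzzy c-means
        w = u ** m
        centers = (w @ X) / w.sum(axis=1, keepdims=True)
    return centers, u

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
centers, u = pcm(X, centers=np.array([[0.5, 0.5], [2.5, 2.5]]),
                 eta=np.array([0.5, 0.5]))
print(centers)
```

Because each membership depends only on the distance to its own prototype, a point far from every cluster receives low typicality in all of them, which is what makes the partition robust to the noise the abstract mentions.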
Symmetry dependence of holograms for optical trapping
NASA Astrophysics Data System (ADS)
Curtis, Jennifer E.; Schmitz, Christian H. J.; Spatz, Joachim P.
2005-08-01
No iterative algorithm is necessary to calculate holograms for most holographic optical trapping patterns. Instead, holograms may be produced by a simple extension of the prisms-and-lenses method. This formulaic approach yields the same diffraction efficiency as iterative algorithms for any asymmetric or symmetric but nonperiodic pattern of points while requiring less calculation time. A slight spatial disordering of periodic patterns significantly reduces intensity variations between the different traps without extra calculation costs. Eliminating laborious hologram calculations should greatly facilitate interactive holographic trapping.
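A compact sketch of the non-iterative superposition the abstract refers to is given below: each trap contributes a grating (prism) phase for lateral displacement and a lens phase for axial displacement, and the pixel phase is the argument of their complex sum. The optical parameters and trap positions are hypothetical placeholder values.

```python
# Sketch of the non-iterative "prisms (gratings) and lenses" hologram: each
# trap contributes a blazed-grating phase for lateral displacement plus a
# Fresnel-lens phase for axial displacement, and the pixel phase is the
# argument of the complex superposition. All optical parameters below are
# hypothetical placeholder values.
import numpy as np

N = 512                               # SLM pixels per side (assumed)
lam, f, pix = 1.064e-6, 0.2, 15e-6    # wavelength, focal length, pixel pitch
x = (np.arange(N) - N / 2) * pix
X, Y = np.meshgrid(x, x)

# Trap positions (x_j, y_j, z_j) in the focal volume, in meters (illustrative)
traps = [(10e-6, 0.0, 0.0), (-10e-6, 5e-6, 2e-6), (0.0, -8e-6, -2e-6)]

field = np.zeros((N, N), dtype=complex)
for xj, yj, zj in traps:
    grating = 2 * np.pi / (lam * f) * (xj * X + yj * Y)   # lateral shift
    lens = np.pi * zj / (lam * f**2) * (X**2 + Y**2)      # axial shift
    field += np.exp(1j * (grating + lens))

phase = np.angle(field)   # phase-only hologram in [-pi, pi]
print(phase.shape, phase.min(), phase.max())
```

The slight positional disordering the abstract recommends for periodic patterns would simply perturb each (x_j, y_j) before the loop, at no extra calculation cost.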
First Monte Carlo analysis of fragmentation functions from single-inclusive e+e- annihilation
Sato, Nobuo; Ethier, J. J.; Melnitchouk, W.; ...
2016-12-02
Here, we perform the first iterative Monte Carlo (IMC) analysis of fragmentation functions constrained by all available data from single-inclusive $e^+ e^-$ annihilation into pions and kaons. The IMC method eliminates potential bias in traditional analyses based on single fits, introduced by fixing parameters not well constrained by the data, and provides a statistically rigorous determination of uncertainties. Our analysis compares specific features of the fragmentation functions obtained with the new IMC methodology against those from previous analyses, especially for light quarks and for strange-quark fragmentation to kaons.
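The IMC idea of re-seeding successive Monte Carlo fit ensembles can be sketched generically as below. The toy model, priors, and stopping choice are illustrative assumptions and do not reproduce the actual fragmentation-function analysis.

```python
# Generic sketch of an iterative Monte Carlo (IMC) fitting loop: many fits are
# run from randomized starting parameters, and the resulting parameter sample
# seeds the next iteration. The toy model and data below are illustrative;
# this is not the actual fragmentation-function analysis.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
x = np.linspace(0.1, 0.9, 20)
truth = (1.0, 0.5, 2.0)
y = truth[0] * x**truth[1] * (1 - x)**truth[2] + rng.normal(0, 0.01, x.size)

def resid(p):
    a, b, c = p
    return a * x**b * (1 - x)**c - y

# Iteration 0: flat priors; later iterations: resample from previous posterior
samples = rng.uniform([0.1, 0.0, 0.5], [3.0, 2.0, 4.0], size=(200, 3))
for it in range(3):
    fits = np.array([least_squares(resid, p0).x for p0 in samples])
    mu, cov = fits.mean(axis=0), np.cov(fits.T)
    # Small jitter keeps the covariance positive definite for resampling
    samples = rng.multivariate_normal(mu, cov + 1e-9 * np.eye(3), size=200)
    print(f"iteration {it}: mean = {mu.round(3)}")
```

The spread of the final parameter sample, rather than a single best fit, is what delivers the statistically rigorous uncertainties the abstract emphasizes.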
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain is that software development must proceed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:
- It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed first, validating the architecture concept very early without the details.
- A software prototype is available very quickly. It improves communication between the system and software teams, as it enables a very early and efficient check of the common understanding of the system requirements.
- It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new or when the environment has changed since the previous development; in any case, it significantly improves the software team's learning curve.
These advantages seem very attractive, but mastering an iterative development process efficiently is not easy and raises difficulties such as:
- How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely, and naturally, unstable?
- How to distinguish stable/unstable and dimensioning/standard requirements?
- How to plan the development of each increment?
- How to link classical waterfall development milestones with an iterative approach: when should the classical reviews (Software Specification Review, Preliminary Design Review, Critical Design Review, Code Review, etc.) be performed?
Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, from both a methodological and a technological point of view:
- How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software, and simulation teams in a very iterative and reactive way.
- How the CMM approach can help by better formalizing the Requirements Management and Planning processes.
- How Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle.
The presentation concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures from two similar projects: one using the classical waterfall process, the other an iterative and incremental approach.
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues facing public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and in how the selection process is performed. Therefore, a prior understanding of the problem is essential to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply the available methods, based on a comparative analysis of the current state of the practice. The holistic approach tackles the problem by considering the overall network condition, while the sequential approach is easier to implement and understand but may lead to solutions far from optimal. Scenarios establishing the suitability of each approach are defined. Finally, an iterative approach gathering the advantages of the traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
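A minimal sketch of the contrast between a sequential, ranking-based selection and a subsequent iterative refinement under a fixed budget might look as follows. The costs, benefits, and swap heuristic are hypothetical simplifications of the approaches compared in the abstract.

```python
# Illustrative sketch contrasting a sequential, ranking-based selection of
# maintenance treatments with a simple iterative improvement step, under a
# fixed budget. Costs, benefits, and the greedy/swap heuristics are
# hypothetical simplifications of the approaches discussed in the abstract.
import numpy as np

rng = np.random.default_rng(3)
cost = rng.uniform(10, 100, 30)          # treatment cost per road section
benefit = rng.uniform(1, 10, 30)         # condition improvement per section
budget = 400.0

# Sequential approach: rank by benefit/cost and pick greedily
order = np.argsort(-(benefit / cost))
chosen, spent = [], 0.0
for i in order:
    if spent + cost[i] <= budget:
        chosen.append(i)
        spent += cost[i]

# Iterative refinement: swap a chosen section for an unchosen one whenever
# the swap fits the budget and raises total benefit
improved = True
while improved:
    improved = False
    for i in list(chosen):
        for j in set(range(len(cost))) - set(chosen):
            if spent - cost[i] + cost[j] <= budget and benefit[j] > benefit[i]:
                chosen.remove(i); chosen.append(j)
                spent += cost[j] - cost[i]
                improved = True
                break
        if improved:
            break

print(f"sections: {sorted(chosen)}, spent: {spent:.1f}, "
      f"benefit: {benefit[chosen].sum():.1f}")
```

The greedy pass mirrors the sequential approach's simplicity; the swap loop illustrates how an iterative layer can recover some of the network-level optimality that a pure ranking forfeits.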
SPLICER - A GENETIC ALGORITHM TOOL FOR SEARCH AND OPTIMIZATION, VERSION 1.0 (MACINTOSH VERSION)
NASA Technical Reports Server (NTRS)
Wang, L.
1994-01-01
SPLICER is a genetic algorithm tool which can be used to solve search and optimization problems. Genetic algorithms are adaptive search procedures (i.e. problem solving methods) based loosely on the processes of natural selection and Darwinian "survival of the fittest." SPLICER provides the underlying framework and structure for building a genetic algorithm application. These algorithms apply genetically-inspired operators to populations of potential solutions in an iterative fashion, creating new populations while searching for an optimal or near-optimal solution to the problem at hand. SPLICER 1.0 was created using a modular architecture that includes a Genetic Algorithm Kernel, interchangeable Representation Libraries, Fitness Modules and User Interface Libraries, and well-defined interfaces between these components. The architecture supports portability, flexibility, and extensibility. SPLICER comes with all source code and several examples. For instance, a "traveling salesperson" example searches for the minimum distance through a number of cities visiting each city only once. Stand-alone SPLICER applications can be used without any programming knowledge. However, to fully utilize SPLICER within new problem domains, familiarity with C language programming is essential. SPLICER's genetic algorithm (GA) kernel was developed independent of representation (i.e. problem encoding), fitness function or user interface type. The GA kernel comprises all functions necessary for the manipulation of populations. These functions include the creation of populations and population members, the iterative population model, fitness scaling, parent selection and sampling, and the generation of population statistics. In addition, miscellaneous functions are included in the kernel (e.g., random number generators). Different problem-encoding schemes and functions are defined and stored in interchangeable representation libraries. This allows the GA kernel to be used with any representation scheme. The SPLICER tool provides representation libraries for binary strings and for permutations. These libraries contain functions for the definition, creation, and decoding of genetic strings, as well as multiple crossover and mutation operators. Furthermore, the SPLICER tool defines the appropriate interfaces to allow users to create new representation libraries. Fitness modules are the only component of the SPLICER system a user will normally need to create or alter to solve a particular problem. Fitness functions are defined and stored in interchangeable fitness modules which must be created using C language. Within a fitness module, a user can create a fitness (or scoring) function, set the initial values for various SPLICER control parameters (e.g., population size), create a function which graphically displays the best solutions as they are found, and provide descriptive information about the problem. The tool comes with several example fitness modules, while the process of developing a fitness module is fully discussed in the accompanying documentation. The user interface is event-driven and provides graphic output in windows. SPLICER is written in Think C for Apple Macintosh computers running System 6.0.3 or later and Sun series workstations running SunOS. The UNIX version is easily ported to other UNIX platforms and requires MIT's X Window System, Version 11 Revision 4 or 5, MIT's Athena Widget Set, and the Xw Widget Set. Example executables and source code are included for each machine version. 
The standard distribution medium for the Macintosh version is a set of three 3.5-inch Macintosh-format diskettes. The standard distribution medium for the UNIX version is a .25-inch streaming magnetic tape cartridge in UNIX tar format. For the UNIX version, alternate distribution media and formats are available upon request. SPLICER was developed in 1991.
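SPLICER itself is a C framework with C fitness modules; purely as a language-neutral sketch of the iterative population model its kernel implements (evaluate, select, recombine, mutate), a toy GA might look as follows. The bit-string fitness function and parameter values are illustrative assumptions.

```python
# Minimal, language-neutral sketch of the iterative population model a GA
# kernel implements: evaluate, select parents, crossover, mutate, repeat.
# SPLICER itself is a C framework; this toy (maximize the number of 1-bits)
# only illustrates the loop structure described in the abstract.
import random

GENES, POP, GENERATIONS, MUT_RATE = 32, 40, 60, 0.01

def fitness(ind):                      # the role of a SPLICER fitness module
    return sum(ind)

def crossover(a, b):                   # single-point crossover
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(ind):
    return [g ^ 1 if random.random() < MUT_RATE else g for g in ind]

pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for gen in range(GENERATIONS):
    def pick():                        # tournament selection: fitter of two
        a, b = random.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    pop = [mutate(crossover(pick(), pick())) for _ in range(POP)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "of", GENES)
```

Swapping in a different problem means replacing only the fitness function and the encoding, which is exactly the separation between kernel, representation libraries, and fitness modules that the abstract describes.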
Linking the Long Tail of Data: A Bottoms-up Approach to Connecting Scientific Research
NASA Astrophysics Data System (ADS)
Jacob, B.; Arctur, D. K.
2016-12-01
Highly curated ontologies are often developed for big scientific data, but the long tail of research data rarely receives the same treatment. The learning curve for Semantic Web technology is steep, and the value of linking each long-tail dataset to known taxonomies and ontologies in isolation rarely justifies the level of effort required to bring a Knowledge Engineer into the project. We present a bottom-up approach that produces a Linked Data model of each dataset mechanically, inferring the shape and structure of the data from its original format, and then adds derived variables and semantic linkages via iterative, interactive refinements of that model. In this way, the vast corpus of small but rich scientific data becomes part of the greater linked web of knowledge, and the connectivity of that data can be iteratively improved over time.
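As a minimal sketch of this bottom-up lifting, the snippet below turns a small tabular dataset into RDF triples with rdflib, inferring predicates from the column names. The namespace and records are hypothetical; linking these terms to shared vocabularies would happen in the later, iterative refinement passes the abstract describes.

```python
# Minimal sketch of mechanically lifting a small tabular dataset into RDF:
# each row becomes a subject and each column name becomes a predicate, so the
# structure is inferred from the data itself. The namespace and the records
# below are hypothetical placeholders.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/longtail/")   # placeholder namespace
rows = [
    {"id": "site01", "temperature_c": 17.2, "species": "Quercus alba"},
    {"id": "site02", "temperature_c": 15.8, "species": "Acer rubrum"},
]

g = Graph()
g.bind("ex", EX)
for row in rows:
    subject = URIRef(EX[row["id"]])
    g.add((subject, RDF.type, EX.Observation))
    for column, value in row.items():
        if column != "id":
            g.add((subject, EX[column], Literal(value)))

print(g.serialize(format="turtle"))
```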