NASA Astrophysics Data System (ADS)
Quan, Lulin; Yang, Zhixin
2010-05-01
To address issues in the area of design customization, this paper presents the specification and application of constrained surface deformation and reports an experimental performance comparison of three prevailing similarity assessment algorithms in the constrained surface deformation domain. Constrained surface deformation has become a promising method that supports various downstream applications of customized design. Similarity assessment is regarded as the key technology for inspecting the success of a new design: it measures the difference between the deformed new design and the initial sample model and indicates whether that difference is within the allowed limit. According to our theoretical analysis and pre-experiments, three similarity assessment algorithms are suitable for this domain: the shape histogram based method, the skeleton based method, and the U-system moment based method. We analyze their basic functions and implementation methodologies in detail and conduct a series of experiments in various situations to test their accuracy and efficiency using precision-recall diagrams. A shoe model is chosen as an industrial example for the experiments. The results show that the shape histogram based method achieved the best performance in the comparison. Based on this result, we propose a novel approach that integrates surface constraints and the shape histogram description with an adaptive weighting method, which emphasizes the role of constraints during the assessment. Limited initial experimental results demonstrate that our algorithm outperforms the other three algorithms. A clear direction for future development is also drawn at the end of the paper.
OCT despeckling via weighted nuclear norm constrained non-local low-rank representation
NASA Astrophysics Data System (ADS)
Tang, Chang; Zheng, Xiao; Cao, Lijuan
2017-10-01
As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method by using non-local, low-rank representation with weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the non-local group patches selection quality, then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corrupted probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups, hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.
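As a hedged illustration of the core operation such weighted nuclear norm models rely on (not the authors' full despeckling pipeline), the sketch below applies weighted singular-value soft-thresholding to a matrix of grouped similar patches; the function name weighted_svt, the inverse-magnitude weight choice, and the synthetic patch group are assumptions for illustration only.

```python
import numpy as np

def weighted_svt(patch_group, weights):
    """Weighted singular-value thresholding: the proximal operator of the
    weighted nuclear norm (exact for non-descending weights), applied to a
    matrix whose columns are similar (non-local) patches."""
    U, s, Vt = np.linalg.svd(patch_group, full_matrices=False)
    s_shrunk = np.maximum(s - weights, 0.0)   # soft-threshold each singular value
    return (U * s_shrunk) @ Vt

# Illustrative usage: larger thresholds on smaller singular values suppress noise
# while better preserving the dominant (structural) components.
rng = np.random.default_rng(0)
clean = np.outer(rng.normal(size=64), rng.normal(size=32))   # rank-1 "patch group"
noisy = clean + 0.3 * rng.normal(size=clean.shape)           # speckle-like perturbation
s = np.linalg.svd(noisy, compute_uv=False)
w = 1.0 / (s + 1e-6)            # inverse-magnitude weights (one common choice)
w = 2.0 * w / w.max()           # rescale to a usable threshold range
denoised = weighted_svt(noisy, w)
```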
Development and application of a unified balancing approach with multiple constraints
NASA Technical Reports Server (NTRS)
Zorzi, E. S.; Lee, C. C.; Giordano, J. C.
1985-01-01
The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and application of modal trial weight ratios is also described.
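The report's exact single solution form is not reproduced here; the following minimal sketch only shows the general mechanism it builds on, namely appending Lagrange-multiplier rows for linear constraints to a weighted least-squares influence-coefficient problem and solving the resulting KKT system. The matrices A, W, C and vectors b, d are placeholders, not the paper's quantities.

```python
import numpy as np

def constrained_weighted_lsq(A, b, W, C, d):
    """Minimize (A w - b)^T W (A w - b) subject to C w = d via the KKT
    (Lagrange multiplier) system. Here A holds influence coefficients,
    b the measured vibration, w the correction weights, and C, d encode
    orbit and/or weight constraints (all placeholders)."""
    n = A.shape[1]
    m = C.shape[0]
    kkt = np.block([[A.T @ W @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ W @ b, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]      # correction weights; sol[n:] are the multipliers

# Conceptually, dropping the constraint rows recovers the ordinary weighted
# least squares influence coefficient solution, mirroring the reduction
# described in the abstract.
```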
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Guo, Ping
2017-10-01
The vague and fuzzy parametric information is a challenging issue in irrigation water management problems. In response to this problem, a generalized fuzzy credibility-constrained linear fractional programming (GFCCFP) model is developed for optimal irrigation water allocation under uncertainty. The model can be derived by integrating generalized fuzzy credibility-constrained programming (GFCCP) into a linear fractional programming (LFP) optimization framework. Therefore, it can solve ratio optimization problems associated with fuzzy parameters, and examine the variation of results under different credibility levels and weight coefficients of possibility and necessity. It has advantages in: (1) balancing the economic and resources objectives directly; (2) analyzing system efficiency; (3) generating more flexible decision solutions by giving different credibility levels and weight coefficients of possibility and necessity; and (4) supporting in-depth analysis of the interrelationships among system efficiency, credibility level and weight coefficient. The model is applied to a case study of irrigation water allocation in the middle reaches of the Heihe River Basin, northwest China, from which optimal irrigation water allocation solutions are obtained. Moreover, factorial analysis of the two parameters (i.e. λ and γ) indicates that the weight coefficient is the main factor affecting system efficiency compared with the credibility level. These results can effectively support reasonable irrigation water resources management and agricultural production.
Weighting climate model projections using observational constraints.
Gillett, Nathan P
2015-11-13
Projected climate change integrates the net response to multiple climate feedbacks. Whereas existing long-term climate change projections are typically based on unweighted individual climate model simulations, as observed climate change intensifies it is increasingly becoming possible to constrain the net response to feedbacks and hence projected warming directly from observed climate change. One approach scales simulated future warming based on a fit to observations over the historical period, but this approach is only accurate for near-term projections and for scenarios of continuously increasing radiative forcing. For this reason, the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5) included such observationally constrained projections in its assessment of warming to 2035, but used raw model projections of longer term warming to 2100. Here a simple approach to weighting model projections based on an observational constraint is proposed which does not assume a linear relationship between past and future changes. This approach is used to weight model projections of warming in 2081-2100 relative to 1986-2005 under the Representative Concentration Pathway 4.5 forcing scenario, based on an observationally constrained estimate of the Transient Climate Response derived from a detection and attribution analysis. The resulting observationally constrained 5-95% warming range of 0.8-2.5 K is somewhat lower than the unweighted range of 1.1-2.6 K reported in the IPCC AR5. © 2015 The Authors.
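A toy sketch of the weighting idea (not the paper's detection-and-attribution analysis): each model is weighted by the likelihood of its simulated transient climate response under an observationally constrained TCR estimate, and a weighted 5-95% range of projected warming is read off. All numbers below are invented.

```python
import numpy as np

def weighted_quantile(values, quantiles, weights):
    """Quantiles of `values` under normalized `weights` (simple interpolation)."""
    order = np.argsort(values)
    v, w = np.asarray(values)[order], np.asarray(weights)[order]
    cdf = np.cumsum(w) / np.sum(w)
    return np.interp(quantiles, cdf, v)

# Hypothetical model ensemble: simulated TCR (K) and 2081-2100 warming (K)
model_tcr     = np.array([1.3, 1.6, 1.8, 2.0, 2.2, 2.5])
model_warming = np.array([1.1, 1.4, 1.7, 1.9, 2.2, 2.6])

# Observationally constrained TCR estimate (illustrative mean and spread)
tcr_obs, tcr_sigma = 1.6, 0.3
weights = np.exp(-0.5 * ((model_tcr - tcr_obs) / tcr_sigma) ** 2)  # Gaussian likelihood weights

lo, hi = weighted_quantile(model_warming, [0.05, 0.95], weights)
print(f"weighted 5-95% warming range: {lo:.1f}-{hi:.1f} K")
```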
Weighted mining of massive collections of p-values by convex optimization.
Dobriban, Edgar
2018-06-01
Researchers in data-rich disciplines-think of computational genomics and observational cosmology-often wish to mine large bodies of p-values looking for significant effects, while controlling the false discovery rate or family-wise error rate. Increasingly, researchers also wish to prioritize certain hypotheses, for example, those thought to have larger effect sizes, by upweighting, and to impose constraints on the underlying mining, such as monotonicity along a certain sequence. We introduce Princessp, a principled method for performing weighted multiple testing by constrained convex optimization. Our method elegantly allows one to prioritize certain hypotheses through upweighting and to discount others through downweighting, while constraining the underlying weights involved in the mining process. When the p-values derive from monotone likelihood ratio families such as the Gaussian means model, the new method allows exact solution of an important optimal weighting problem previously thought to be non-convex and computationally infeasible. Our method scales to massive data set sizes. We illustrate the applications of Princessp on a series of standard genomics data sets and offer comparisons with several previous 'standard' methods. Princessp offers both ease of operation and the ability to scale to extremely large problem sizes. The method is available as open-source software from github.com/dobriban/pvalue_weighting_matlab (accessed 11 October 2017).
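Princessp itself obtains the weights by convex optimization, which is not reproduced here; the sketch below only illustrates how such weights enter a standard weighted Benjamini-Hochberg step (p-values divided by mean-one weights). The function name and the example numbers are assumptions.

```python
import numpy as np

def weighted_bh(pvals, weights, alpha=0.05):
    """Weighted Benjamini-Hochberg: reject H_i when p_i / w_i falls below the
    usual BH step-up threshold. Weights are renormalized to mean one so the
    total alpha budget is preserved."""
    pvals = np.asarray(pvals, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w * len(w) / w.sum()           # mean-one normalization
    q = pvals / w                      # weighted p-values
    order = np.argsort(q)
    m = len(q)
    thresh = alpha * np.arange(1, m + 1) / m
    passed = q[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject

# Example: upweight hypotheses believed to have larger effect sizes.
p = np.array([0.001, 0.02, 0.03, 0.2, 0.6])
w = np.array([2.0, 2.0, 0.5, 0.5, 1.0])
print(weighted_bh(p, w))
```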
Periodic Forced Response of Structures Having Three-Dimensional Frictional Constraints
NASA Astrophysics Data System (ADS)
CHEN, J. J.; YANG, B. D.; MENQ, C. H.
2000-01-01
Many mechanical systems have moving components that are mutually constrained through frictional contacts. When subjected to cyclic excitations, a contact interface may undergo constant changes among sticks, slips and separations, which leads to very complex contact kinematics. In this paper, a 3-D friction contact model is employed to predict the periodic forced response of structures having 3-D frictional constraints. Analytical criteria based on this friction contact model are used to determine the transitions among sticks, slips and separations of the friction contact, and subsequently the constrained force which consists of the induced stick-slip friction force on the contact plane and the contact normal load. The resulting constrained force is often a periodic function and can be considered as a feedback force that influences the response of the constrained structures. By using the Multi-Harmonic Balance Method along with Fast Fourier Transform, the constrained force can be integrated with the receptance of the structures so as to calculate the forced response of the constrained structures. It results in a set of non-linear algebraic equations that can be solved iteratively to yield the relative motion as well as the constrained force at the friction contact. This method is used to predict the periodic response of a frictionally constrained 3-d.o.f. oscillator. The predicted results are compared with those of the direct time integration method so as to validate the proposed method. In addition, the effect of super-harmonic components on the resonant response and jump phenomenon is examined.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
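The patent text gives no equations, so the following is only a generic, hedged illustration of what a "preemptive-constraining" step might look like: clipping the predicted state to box bounds and restoring a symmetric positive semi-definite covariance before the usual EKF measurement correction. All bounds, names, and the linear measurement model are hypothetical.

```python
import numpy as np

def preemptively_constrain(x_pred, P_pred, x_min, x_max, var_floor=1e-9):
    """Clip the predicted state to physical bounds and keep the covariance
    symmetric positive semi-definite (a simple stand-in for the patent's
    'preemptive-constraining processor')."""
    x_c = np.clip(x_pred, x_min, x_max)
    P_c = 0.5 * (P_pred + P_pred.T)             # enforce symmetry
    eigval, eigvec = np.linalg.eigh(P_c)
    eigval = np.maximum(eigval, var_floor)      # floor negative/zero eigenvalues
    P_c = eigvec @ np.diag(eigval) @ eigvec.T
    return x_c, P_c

def ekf_measurement_update(x_c, P_c, z, H, R):
    """Standard EKF correction applied to the constrained estimate."""
    S = H @ P_c @ H.T + R
    K = P_c @ H.T @ np.linalg.inv(S)
    x_new = x_c + K @ (z - H @ x_c)
    P_new = (np.eye(len(x_c)) - K @ H) @ P_c
    return x_new, P_new
```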
Analysis of elastically tailored viscoelastic damping member
NASA Technical Reports Server (NTRS)
Chen, G.-S.; Dolgin, B. P.
1990-01-01
For more than two decades, viscoelastic materials have been commonly used as a passive damping source in a variety of structures because of their high material loss factors. In most of the applications, viscoelastic materials are used either in series with or parallel to the structural load path. The latter is also known as the constrained-layer damping treatment. The advantage of the constrained-layer damping treatment is that it can be incorporated without loss in structural integrity, namely, stiffness and strength. However, the disadvantages are that: (1) it is not the most effective use of the viscoelastic material when compared with the series-type application, and (2) weight penalty from the stiff constraining layer requirement can be excessive. To overcome the disadvantages of the constrained-layer damping treatment, a new approach for using viscoelastic material in axial-type structural components, e.g., truss members, was studied in this investigation.
An Alternating Least Squares Method for the Weighted Approximation of a Symmetric Matrix.
ERIC Educational Resources Information Center
ten Berge, Jos M. F.; Kiers, Henk A. L.
1993-01-01
R. A. Bailey and J. C. Gower explored approximating a symmetric matrix "B" by another, "C," in the least squares sense when the squared discrepancies for diagonal elements receive specific nonunit weights. A solution is proposed where "C" is constrained to be positive semidefinite and of a fixed rank. (SLD)
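Ten Berge and Kiers' alternating least squares solution is not reproduced here; as a rough sketch of the same goal (a rank-constrained positive semidefinite C approximating a symmetric B with nonunit weights on the diagonal discrepancies), the code below uses a simple impute-then-truncate iteration, which is valid only for diagonal weights in (0, 1] and is an assumption rather than the paper's algorithm.

```python
import numpy as np

def weighted_psd_approx(B, rank, diag_weight, n_iter=200):
    """Approximate symmetric B by a PSD matrix C of the given rank, with squared
    diagonal discrepancies weighted by diag_weight in (0, 1] and off-diagonals
    given unit weight. Iterative 'impute then truncate' scheme."""
    n = B.shape[0]
    W = np.ones((n, n))
    np.fill_diagonal(W, diag_weight)
    C = np.zeros_like(B)
    for _ in range(n_iter):
        target = W * B + (1.0 - W) * C                 # blend data and current fit
        eigval, eigvec = np.linalg.eigh(target)
        idx = np.argsort(eigval)[::-1][:rank]
        eigval_r = np.clip(eigval[idx], 0.0, None)     # keep top-r, force PSD
        C = (eigvec[:, idx] * eigval_r) @ eigvec[:, idx].T
    return C

B = np.array([[2.0, 0.9, 0.3],
              [0.9, 1.5, 0.4],
              [0.3, 0.4, 5.0]])
C = weighted_psd_approx(B, rank=2, diag_weight=0.2)
```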
NASA Astrophysics Data System (ADS)
Kalscheuer, Thomas; Yan, Ping; Hedin, Peter; Garcia Juanatey, Maria d. l. A.
2017-04-01
We introduce a new constrained 2D magnetotelluric (MT) inversion scheme, in which the local weights of the regularization operator with smoothness constraints are based directly on the envelope attribute of a reflection seismic image. The weights resemble those of a previously published seismic modification of the minimum gradient support method introducing a global stabilization parameter. We measure the directional gradients of the seismic envelope to modify the horizontal and vertical smoothness constraints separately. An appropriate choice of the new stabilization parameter is based on a simple trial-and-error procedure. Our proposed constrained inversion scheme was easily implemented in an existing Gauss-Newton inversion package. From a theoretical perspective, we compare our new constrained inversion to similar constrained inversion methods, which are based on image theory and seismic attributes. Successful application of the proposed inversion scheme to the MT field data of the Collisional Orogeny in the Scandinavian Caledonides (COSC) project using constraints from the envelope attribute of the COSC reflection seismic profile (CSP) helped to reduce the uncertainty of the interpretation of the main décollement. Thus, the new model gave support to the proposed location of a future borehole COSC-2 which is supposed to penetrate the main décollement and the underlying Precambrian basement.
NASA Astrophysics Data System (ADS)
Provencher, Stephen W.
1982-09-01
CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
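CONTIN itself is a large Fortran IV package; the fragment below is only a minimal Python sketch of two of its ingredients, a smoothness regularizor plus nonnegativity as absolute prior knowledge, applied to a synthetic first-kind problem via non-negative least squares on an augmented system. The kernel, data, and regularization parameter are illustrative, and no automatic F-test choice of the parameter is attempted.

```python
import numpy as np
from scipy.optimize import nnls

def regularized_nonneg_inversion(K, y, alpha):
    """Solve min ||K x - y||^2 + alpha^2 ||L x||^2 subject to x >= 0, with L a
    second-difference (smoothness) regularizor, by NNLS on the stacked system
    [K; alpha*L] x ~ [y; 0]."""
    n = K.shape[1]
    L = np.diff(np.eye(n), n=2, axis=0)         # second-difference operator
    K_aug = np.vstack([K, alpha * L])
    y_aug = np.concatenate([y, np.zeros(L.shape[0])])
    x, _ = nnls(K_aug, y_aug)
    return x

# Synthetic Laplace-transform-like kernel: y(t_i) = sum_j exp(-t_i * s_j) x(s_j)
t = np.linspace(0.01, 2.0, 60)
s = np.linspace(0.1, 10.0, 40)
K = np.exp(-np.outer(t, s))
x_true = np.exp(-0.5 * ((s - 3.0) / 0.5) ** 2)  # a single smooth peak
y = K @ x_true + 1e-3 * np.random.default_rng(1).normal(size=t.size)
x_est = regularized_nonneg_inversion(K, y, alpha=0.05)
```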
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods, (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts), are established through their respective objective functions, and higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
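As a hedged reference point, the sketch below implements only the weighted linear least squares (WLLS) estimator among the methods compared, using squared signals as weights (a common choice); it is not the authors' full Newton-type CNLS algorithm, and the b-values, gradient directions, and synthetic signals are assumptions.

```python
import numpy as np

def dti_design_matrix(bvals, bvecs):
    """Rows: [1, -b gx^2, -b gy^2, -b gz^2, -2b gx gy, -2b gx gz, -2b gy gz],
    so that ln S = X @ [ln S0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]."""
    g = np.asarray(bvecs, dtype=float)
    b = np.asarray(bvals, dtype=float)
    return np.column_stack([
        np.ones_like(b),
        -b * g[:, 0] ** 2, -b * g[:, 1] ** 2, -b * g[:, 2] ** 2,
        -2 * b * g[:, 0] * g[:, 1], -2 * b * g[:, 0] * g[:, 2],
        -2 * b * g[:, 1] * g[:, 2],
    ])

def wlls_tensor_fit(signals, bvals, bvecs):
    """Weighted linear LS fit of the log-signal model; weights ~ S^2 compensate
    for the noise distortion introduced by the log transform."""
    X = dti_design_matrix(bvals, bvecs)
    y = np.log(signals)
    W = np.diag(np.asarray(signals, dtype=float) ** 2)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    ln_s0, dxx, dyy, dzz, dxy, dxz, dyz = beta
    D = np.array([[dxx, dxy, dxz], [dxy, dyy, dyz], [dxz, dyz, dzz]])
    return np.exp(ln_s0), D     # trace(D) is the quantity whose error the paper reports

# Illustrative single-voxel usage with a six-direction scheme plus b = 0:
bvals = np.array([0, 1000, 1000, 1000, 1000, 1000, 1000], dtype=float)
bvecs = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1],
                  [0.707, 0.707, 0], [0.707, 0, 0.707], [0, 0.707, 0.707]])
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
signals = 1000.0 * np.exp(-bvals * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))
s0_hat, D_hat = wlls_tensor_fit(signals, bvals, bvecs)
```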
Method for determining the weight of functional objectives on manufacturing system.
Zhang, Qingshan; Xu, Wei; Zhang, Jiekun
2014-01-01
We propose a three-dimensional integrated weight determination method for the functional objectives of manufacturing systems, in which consumers' evaluations of the enterprises are weighted by triangular fuzzy numbers. The subjective parts of the weights are determined by the expert scoring method, and the objective parts are determined by the entropy method, taking competitive advantage into account. Based on the integration of the three methods into a comprehensive weight, we provide some suggestions for the manufacturing system. A numerical example analysis is provided to illustrate the feasibility of this method.
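The full three-dimensional scheme is not spelled out in the abstract; the snippet below sketches only the entropy method mentioned for the objective part of the weights, applied to a hypothetical decision matrix whose rows are alternatives and columns are functional objectives.

```python
import numpy as np

def entropy_weights(X):
    """Objective weights via the entropy method: criteria whose values vary
    more across alternatives (lower entropy) receive larger weights."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                      # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    E = -plogp.sum(axis=0) / np.log(n)         # entropy of each criterion in [0, 1]
    d = 1.0 - E                                # degree of diversification
    return d / d.sum()

# Hypothetical scores of 4 manufacturing alternatives on 3 functional objectives
X = np.array([[0.7, 0.4, 0.9],
              [0.6, 0.8, 0.5],
              [0.9, 0.5, 0.6],
              [0.5, 0.7, 0.8]])
print(entropy_weights(X))
```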
NASA Astrophysics Data System (ADS)
Sun, Jingliang; Liu, Chunsheng
2018-01-01
In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via an adaptive dynamic programming technique. In addition, a suitable non-quadratic functional is utilised to encode the control constraints into the differential game problem. A single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of the associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the weight estimation error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilchenko, Yuriy
The top quark is the heaviest fundamental particle observed to date. The mass of the top quark is a free parameter in the Standard Model (SM). A precise measurement of its mass is particularly important as it sets an indirect constraint on the mass of the Higgs boson. It is also a useful constraint on contributions from physics beyond the SM and may play a fundamental role in the electroweak symmetry breaking mechanism. I present a measurement of the top quark mass in the dilepton channel using the Neutrino Weighting Method. The data sample corresponds to an integrated luminosity of 4.3 fb⁻¹ of p p̄ collisions at the Tevatron with √s = 1.96 TeV, collected with the DØ detector. Kinematically under-constrained dilepton events are analyzed by integrating over neutrino rapidity. Weight distributions of t t̄ signal and background are produced as a function of the top quark mass for different top quark mass hypotheses. The measurement is performed by constructing templates from the moments of the weight distributions and the input top quark mass, followed by a subsequent likelihood fit to data. The dominant systematic uncertainty, from jet energy calibration, is reduced by using a correction from the ℓ+jets channel. To replicate the quark flavor dependence of the jet response in data, jets in the simulated events are additionally corrected. The result is combined with our preceding measurement on 1 fb⁻¹ and yields m_t = 174.0 ± 2.4 (stat.) ± 1.4 (syst.) GeV.
Plan View Pattern Control for Steel Plates through Constrained Locally Weighted Regression
NASA Astrophysics Data System (ADS)
Shigemori, Hiroyasu; Nambu, Koji; Nagao, Ryo; Araki, Tadashi; Mizushima, Narihito; Kano, Manabu; Hasebe, Shinji
A technique for performing parameter identification in a locally weighted regression model using foresight information on the physical properties of the object of interest as constraints was proposed. This method was applied to plan view pattern control of steel plates, and a reduction of shape nonconformity (crop) at the plate head end was confirmed by computer simulation based on real operation data.
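A minimal, hedged sketch of the generic mechanism described (not the authors' plate-mill model): at a query point, a locally kernel-weighted linear model is fitted subject to a linear equality constraint expressing foresight physical knowledge, via a Lagrange-multiplier (KKT) solve. The data, bandwidth, and constraint below are invented.

```python
import numpy as np

def constrained_lwr_predict(X, y, x_query, bandwidth, A_eq, b_eq):
    """Locally weighted linear regression with a linear equality constraint
    A_eq @ beta = b_eq on the local model parameters (e.g., known physics),
    solved via Lagrange multipliers."""
    Xd = np.column_stack([np.ones(len(X)), X])              # [1, x] local design
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    W = np.diag(w)
    H = Xd.T @ W @ Xd
    g = Xd.T @ W @ y
    m = A_eq.shape[0]
    kkt = np.block([[H, A_eq.T], [A_eq, np.zeros((m, m))]])
    rhs = np.concatenate([g, b_eq])
    beta = np.linalg.solve(kkt, rhs)[: Xd.shape[1]]
    return np.concatenate([[1.0], x_query]) @ beta

# Illustrative: constrain the local slope of the single input to a known value 0.5
X = np.random.default_rng(2).uniform(0, 1, size=(50, 1))
y = 0.5 * X[:, 0] + 0.1 * np.sin(8 * X[:, 0])
A_eq = np.array([[0.0, 1.0]])     # picks out the slope coefficient
b_eq = np.array([0.5])
print(constrained_lwr_predict(X, y, np.array([0.3]), 0.1, A_eq, b_eq))
```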
NASA Astrophysics Data System (ADS)
Jafari, S.; Hojjati, M. H.
2011-12-01
Rotating disks mostly operate at high angular velocity, which results in large centrifugal forces and consequently induces large stresses and deformations. Minimizing the weight of such disks yields benefits such as lower dead weight and lower cost. This paper aims at finding an optimal disk thickness profile for minimum-weight design using simulated annealing (SA) and particle swarm optimization (PSO) as two modern optimization techniques. In the semi-analytical approach, the radial domain of the disk is divided into virtual sub-domains (rings), and the weight of each ring is minimized. The inequality constraint used in the optimization ensures that the maximum von Mises stress is always less than the yield strength of the disk material, so that the rotating disk does not fail. The results show that the minimum weight obtained by the two methods is almost identical. The PSO method gives a profile with slightly less weight (6.9% less than SA), while both PSO and SA are easy to implement and provide more flexibility compared with classical methods.
The general 2-D moments via integral transform method for acoustic radiation and scattering
NASA Astrophysics Data System (ADS)
Smith, Jerry R.; Mirotznik, Mark S.
2004-05-01
The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.]
Self-consistent Bulge/Disk/Halo Galaxy Dynamical Modeling Using Integral Field Kinematics
NASA Astrophysics Data System (ADS)
Taranu, D. S.; Obreschkow, D.; Dubinski, J. J.; Fogarty, L. M. R.; van de Sande, J.; Catinella, B.; Cortese, L.; Moffett, A.; Robotham, A. S. G.; Allen, J. T.; Bland-Hawthorn, J.; Bryant, J. J.; Colless, M.; Croom, S. M.; D'Eugenio, F.; Davies, R. L.; Drinkwater, M. J.; Driver, S. P.; Goodwin, M.; Konstantopoulos, I. S.; Lawrence, J. S.; López-Sánchez, Á. R.; Lorente, N. P. F.; Medling, A. M.; Mould, J. R.; Owers, M. S.; Power, C.; Richards, S. N.; Tonini, C.
2017-11-01
We introduce a method for modeling disk galaxies designed to take full advantage of data from integral field spectroscopy (IFS). The method fits equilibrium models to simultaneously reproduce the surface brightness, rotation, and velocity dispersion profiles of a galaxy. The models are fully self-consistent 6D distribution functions for a galaxy with a Sérsic profile stellar bulge, exponential disk, and parametric dark-matter halo, generated by an updated version of GalactICS. By creating realistic flux-weighted maps of the kinematic moments (flux, mean velocity, and dispersion), we simultaneously fit photometric and spectroscopic data using both maximum-likelihood and Bayesian (MCMC) techniques. We apply the method to a GAMA spiral galaxy (G79635) with kinematics from the SAMI Galaxy Survey and deep g- and r-band photometry from the VST-KiDS survey, comparing parameter constraints with those from traditional 2D bulge-disk decomposition. Our method returns broadly consistent results for shared parameters while constraining the mass-to-light ratios of stellar components and reproducing the H I-inferred circular velocity well beyond the limits of the SAMI data. Although the method is tailored for fitting integral field kinematic data, it can use other dynamical constraints like central fiber dispersions and H I circular velocities, and is well-suited for modeling galaxies with a combination of deep imaging and H I and/or optical spectra (resolved or otherwise). Our implementation (MagRite) is computationally efficient and can generate well-resolved models and kinematic maps in under a minute on modern processors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr
Previous studies have proposed several methods for integrating characterized environmental impacts as a single index in life cycle assessment. Each of them, however, may lead to different results. This study presents internal and external normalization methods, weighting factors proposed by panel methods, and a monetary valuation based on an endpoint life cycle impact assessment method as the integration methods. Furthermore, this study investigates the differences among the integration methods and identifies the causes of the differences through a case study in which five elementary school buildings were used. As a result, when using internal normalization with weighting factors, the weighting factors had a significant influence on the total environmental impacts, whereas the normalization had little influence on the total environmental impacts. When using external normalization with weighting factors, the normalization had a more significant influence on the total environmental impacts than the weighting factors. Due to such differences, the ranking of the five buildings varied depending on the integration methods. The ranking calculated by the monetary valuation method was significantly different from that calculated by the normalization and weighting process. The results aid decision makers in understanding the differences among these integration methods and, finally, help them select the method most appropriate for the goal at hand.
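A toy numerical illustration of the comparison the study performs, with invented numbers: characterized impacts of three buildings are integrated into a single index by internal normalization (dividing by the per-category maximum across the alternatives) versus external normalization (dividing by fixed reference values), each followed by the same panel-style weights; the two routes can rank the buildings differently.

```python
import numpy as np

# Characterized impacts of 3 buildings for 3 categories (e.g., GWP, AP, EP) -- invented
impacts = np.array([[1200.0, 4.0, 0.8],
                    [1500.0, 3.0, 1.1],
                    [1000.0, 5.0, 0.9]])
weights = np.array([0.5, 0.3, 0.2])            # panel-method weighting factors (invented)
reference = np.array([8000.0, 40.0, 15.0])     # external normalization references (invented)

internal = impacts / impacts.max(axis=0)       # internal normalization (per category)
external = impacts / reference                 # external normalization

index_internal = internal @ weights            # single index per building
index_external = external @ weights

# As in the study, the two integration routes can rank the buildings differently.
print(np.argsort(index_internal), np.argsort(index_external))
```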
An RBF-based compression method for image-based relighting.
Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung
2006-04-01
In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.
Optimal apodization design for medical ultrasound using constrained least squares part I: theory.
Guenther, Drake A; Walker, William F
2007-02-01
Aperture weighting functions are critical design parameters in the development of ultrasound systems because beam characteristics affect the contrast and point resolution of the final output image. In previous work by our group, we developed a metric that quantifies a broadband imaging system's contrast resolution performance. We now use this metric to formulate a novel general ultrasound beamformer design method. In our algorithm, we use constrained least squares (CLS) techniques and a linear algebra formulation to describe the system point spread function (PSF) as a function of the aperture weightings. In one approach, we minimize the energy of the PSF outside a certain boundary and impose a linear constraint on the aperture weights. In a second approach, we minimize the energy of the PSF outside a certain boundary while imposing a quadratic constraint on the energy of the PSF inside the boundary. We present detailed analysis for an arbitrary ultrasound imaging system and discuss several possible applications of the CLS techniques, such as designing aperture weightings to maximize contrast resolution and improve the system depth of field.
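A bare-bones sketch of the first CLS formulation described: minimize the PSF energy outside a boundary, written as a quadratic form w^T R_out w in the aperture weights, subject to a single linear constraint c^T w = 1, whose Lagrange solution is w ∝ R_out^{-1} c. The matrix R_out would come from the system's PSF model and is only stubbed with a random positive-definite matrix here.

```python
import numpy as np

def cls_apodization(R_out, c):
    """Minimize w^T R_out w subject to c^T w = 1 (e.g., unit response in a
    chosen direction). Closed-form Lagrange/KKT solution."""
    R_inv_c = np.linalg.solve(R_out, c)
    return R_inv_c / (c @ R_inv_c)

# Stub: a synthetic positive-definite 'out-of-boundary PSF energy' matrix
n = 16
rng = np.random.default_rng(3)
M = rng.normal(size=(n, n))
R_out = M @ M.T + 1e-3 * np.eye(n)
c = np.ones(n)                  # constrain the sum of the aperture weights
w = cls_apodization(R_out, c)
```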
Maximum Constrained Directivity of Oversteered End-Fire Sensor Arrays
Trucco, Andrea; Traverso, Federico; Crocco, Marco
2015-01-01
For linear arrays with fixed steering and an inter-element spacing smaller than one half of the wavelength, end-fire steering of a data-independent beamformer offers better directivity than broadside steering. The introduction of a lower bound on the white noise gain ensures the necessary robustness against random array errors and sensor mismatches. However, the optimum broadside performance can be obtained using a simple processing architecture, whereas the optimum end-fire performance requires a more complicated system (because complex weight coefficients are needed). In this paper, we reconsider the oversteering technique as a possible way to simplify the processing architecture of equally spaced end-fire arrays. We propose a method for computing the amount of oversteering and the related real-valued weight vector that allows the constrained directivity to be maximized for a given inter-element spacing. Moreover, we verify that the maximized oversteering performance is very close to the optimum end-fire performance. We conclude that optimized oversteering is a viable method for designing end-fire arrays that have better constrained directivity than broadside arrays but with a similar implementation complexity. A numerical simulation is used to perform a statistical analysis, which confirms that the maximized oversteering performance is robust against sensor mismatches. PMID:26066987
Jordaan, Sarah M; Diaz Anadon, Laura; Mielke, Erik; Schrag, Daniel P
2013-01-01
The Renewable Fuel Standard (RFS) is among the cornerstone policies created to increase U.S. energy independence by using biofuels. Although greenhouse gas emissions have played a role in shaping the RFS, water implications are less understood. We demonstrate a spatial, life cycle approach to estimate water consumption of transportation fuel scenarios, including a comparison to current water withdrawals and drought incidence by state. The water consumption and land footprint of six scenarios are compared to the RFS, including shale oil, coal-to-liquids, shale gas-to-liquids, corn ethanol, and cellulosic ethanol from switchgrass. The corn scenario is the most water and land intense option and is weighted toward drought-prone states. Fossil options and cellulosic ethanol require significantly less water and are weighted toward less drought-prone states. Coal-to-liquids is an exception, where water consumption is partially weighted toward drought-prone states. Results suggest that there may be considerable water and land impacts associated with meeting energy security goals through using only biofuels. Ultimately, water and land requirements may constrain energy security goals without careful planning, indicating that there is a need to better balance trade-offs. Our approach provides policymakers with a method to integrate federal policies with regional planning over various temporal and spatial scales.
Free energy from molecular dynamics with multiple constraints
NASA Astrophysics Data System (ADS)
den Otter, W. K.; Briels, W. J.
In molecular dynamics simulations of reacting systems, the key step to determining the equilibrium constant and the reaction rate is the calculation of the free energy as a function of the reaction coordinate. Intuitively the derivative of the free energy is equal to the average force needed to constrain the reaction coordinate to a constant value, but the metric tensor effect of the constraint on the sampled phase space distribution complicates this relation. The appropriately corrected expression for the potential of mean constraint force method (PMCF) for systems in which only the reaction coordinate is constrained was published recently. Here we will consider the general case of a system with multiple constraints. This situation arises when both the reaction coordinate and the 'hard' coordinates are constrained, and also in systems with several reaction coordinates. The obvious advantage of this method over the established thermodynamic integration and free energy perturbation methods is that it avoids the cumbersome introduction of a full set of generalized coordinates complementing the constrained coordinates. Simulations of n-butane and n-pentane in vacuum illustrate the method.
Czakó, Gábor; Kaledin, Alexey L; Bowman, Joel M
2010-04-28
We report the implementation of a previously suggested method to constrain a molecular system to have mode-specific vibrational energy greater than or equal to the zero-point energy in quasiclassical trajectory calculations [J. M. Bowman et al., J. Chem. Phys. 91, 2859 (1989); W. H. Miller et al., J. Chem. Phys. 91, 2863 (1989)]. The implementation is made practical by using a technique described recently [G. Czako and J. M. Bowman, J. Chem. Phys. 131, 244302 (2009)], where a normal-mode analysis is performed during the course of a trajectory and which gives only real-valued frequencies. The method is applied to the water dimer, where its effectiveness is shown by computing mode energies as a function of integration time. Radial distribution functions are also calculated using constrained quasiclassical and standard classical molecular dynamics at low temperature and at 300 K and compared to rigorous quantum path integral calculations.
New Internet search volume-based weighting method for integrating various environmental impacts
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr
Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on the society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
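A schematic of the proposed computation as described in the abstract: relative Internet search volumes for terms associated with each impact category are normalized into weighting factors and then compared with an existing factor set via Pearson's correlation. The category list, volumes, and existing factors below are placeholders, not the paper's data.

```python
import numpy as np

categories = ["global warming", "acidification", "eutrophication",
              "ozone depletion", "photochemical smog", "abiotic depletion"]
search_volume = np.array([9500.0, 1200.0, 800.0, 2100.0, 600.0, 400.0])  # placeholder volumes
existing = np.array([0.48, 0.09, 0.07, 0.17, 0.10, 0.09])                # placeholder factors

new_weights = search_volume / search_volume.sum()   # search-volume-based weighting factors
r = np.corrcoef(new_weights, existing)[0, 1]        # Pearson's correlation, as in the validation
print(dict(zip(categories, np.round(new_weights, 3))), round(r, 4))
```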
Method of multivariate spectral analysis
Keenan, Michael R.; Kotula, Paul G.
2004-01-06
A method of determining the properties of a sample from measured spectral data collected from the sample by performing a multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = CS^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used to analyze X-ray spectral data generated by operating a Scanning Electron Microscope (SEM) with an attached Energy Dispersive Spectrometer (EDS).
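A compact, hedged sketch of the factorization step D ≈ C S^T by alternating least squares with nonnegativity as the constraint (one common choice in multivariate curve resolution); the patent's specific weighting operation is not reproduced, and the closing comment only indicates where a diagonal weighting/unweighting wrapper would sit.

```python
import numpy as np
from scipy.optimize import nnls

def constrained_als(D, n_components, n_iter=50, seed=0):
    """Factor D (pixels x channels) into C (concentration intensities) and
    S (spectral shapes), D ~ C @ S.T, with nonnegativity constraints,
    via alternating non-negative least squares."""
    rng = np.random.default_rng(seed)
    n_pix, n_chan = D.shape
    S = rng.random((n_chan, n_components))
    C = np.zeros((n_pix, n_components))
    for _ in range(n_iter):
        for i in range(n_pix):          # C-update: D[i, :] ~ S @ C[i, :]
            C[i], _ = nnls(S, D[i])
        for j in range(n_chan):         # S-update: D[:, j] ~ C @ S[j, :]
            S[j], _ = nnls(C, D[:, j])
    return C, S

# A stand-in for the patent's weighting/unweighting (not its actual operation):
# W = np.diag(1.0 / np.sqrt(D.mean(axis=0) + 1e-12)); analyze D @ W, then map the
# resulting spectral shapes back through the inverse of W.
```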
Advances in locally constrained k-space-based parallel MRI.
Samsonov, Alexey A; Block, Walter F; Arunachalam, Arjun; Field, Aaron S
2006-02-01
In this article, several theoretical and methodological developments regarding k-space-based, locally constrained parallel MRI (pMRI) reconstruction are presented. A connection between Parallel MRI with Adaptive Radius in k-Space (PARS) and GRAPPA methods is demonstrated. The analysis provides a basis for unified treatment of both methods. Additionally, a weighted PARS reconstruction is proposed, which may absorb different weighting strategies for improved image reconstruction. Next, a fast and efficient method for pMRI reconstruction of data sampled on non-Cartesian trajectories is described. In the new technique, the computational burden associated with the numerous matrix inversions in the original PARS method is drastically reduced by limiting direct calculation of reconstruction coefficients to only a few reference points. The rest of the coefficients are found by interpolating between the reference sets, which is possible due to the similar configuration of points participating in reconstruction for highly symmetric trajectories, such as radial and spirals. As a result, the time requirements are drastically reduced, which makes it practical to use pMRI with non-Cartesian trajectories in many applications. The new technique was demonstrated with simulated and actual data sampled on radial trajectories. Copyright 2006 Wiley-Liss, Inc.
A TV-constrained decomposition method for spectral CT
NASA Astrophysics Data System (ADS)
Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang
2017-03-01
Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and the security inspection field. Material decomposition is an important issue for spectral CT to discriminate materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose a material decomposition method via optimization to improve the quality of the decomposed coefficient images. On the basis of the general optimization problem, total variation (TV) minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation is performed on both a numerical dental phantom in simulation and a real pig-leg phantom on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visually better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
Taimouri, Vahid; Afacan, Onur; Perez-Rossello, Jeannette M.; Callahan, Michael J.; Mulkern, Robert V.; Warfield, Simon K.; Freiman, Moti
2015-01-01
Purpose: To evaluate the effect of the spatially constrained incoherent motion (SCIM) method on improving the precision and robustness of fast and slow diffusion parameter estimates from diffusion-weighted MRI in liver and spleen in comparison to the independent voxel-wise intravoxel incoherent motion (IVIM) model. Methods: We collected diffusion-weighted MRI (DW-MRI) data of 29 subjects (5 healthy subjects and 24 patients with Crohn’s disease in the ileum). We evaluated parameter estimates’ robustness against different combinations of b-values (i.e., 4 b-values and 7 b-values) by comparing the variance of the estimates obtained with the SCIM and the independent voxel-wise IVIM model. We also evaluated the improvement in the precision of parameter estimates by comparing the coefficient of variation (CV) of the SCIM parameter estimates to that of the IVIM. Results: The SCIM method was more robust compared to IVIM (up to 70% in liver and spleen) for different combinations of b-values. Also, the CV values of the parameter estimations using the SCIM method were significantly lower compared to repeated acquisition and signal averaging estimated using IVIM, especially for the fast diffusion parameter in liver (CV_IVIM = 46.61 ± 11.22, CV_SCIM = 16.85 ± 2.160, p < 0.001) and spleen (CV_IVIM = 95.15 ± 19.82, CV_SCIM = 52.55 ± 1.91, p < 0.001). Conclusions: The SCIM method characterizes fast and slow diffusion more precisely compared to the independent voxel-wise IVIM model fitting in the liver and spleen. PMID:25832079
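For reference, a minimal sketch of the independent voxel-wise IVIM model that SCIM is compared against: a per-voxel biexponential fit S(b) = S0[f·exp(-b·D*) + (1 - f)·exp(-b·D)] by nonlinear least squares. The b-values, parameter values, and bounds are illustrative, and this is not the SCIM estimator itself.

```python
import numpy as np
from scipy.optimize import curve_fit

def ivim_signal(b, s0, f, d_star, d):
    """Intravoxel incoherent motion model: fast (pseudo-diffusion) + slow diffusion."""
    return s0 * (f * np.exp(-b * d_star) + (1.0 - f) * np.exp(-b * d))

bvals = np.array([0., 50., 100., 200., 400., 600., 800.])        # s/mm^2 (illustrative)
true = dict(s0=1.0, f=0.25, d_star=0.03, d=0.0012)               # mm^2/s (illustrative)
signal = ivim_signal(bvals, **true)
signal = signal + 0.01 * np.random.default_rng(4).normal(size=bvals.size)

p0 = [1.0, 0.2, 0.02, 0.001]
bounds = ([0, 0, 0.003, 0], [2, 1, 0.5, 0.003])                  # keeps D* above D
params, _ = curve_fit(ivim_signal, bvals, signal, p0=p0, bounds=bounds)
s0_hat, f_hat, dstar_hat, d_hat = params
```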
Joint seismic data denoising and interpolation with double-sparsity dictionary learning
NASA Astrophysics Data System (ADS)
Zhu, Lingchen; Liu, Entao; McClellan, James H.
2017-08-01
Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
NASA Technical Reports Server (NTRS)
Gardner, Adrian
2010-01-01
National Aeronautics and Space Administration (NASA) weather and atmospheric environmental organizations are insatiable consumers of geophysical, hydrometeorological and solar weather statistics. The expanding array of internet-worked sensors producing targeted physical measurements has generated an almost factorial explosion of near real-time inputs to topical statistical datasets. Normalizing and value-based parsing of such statistical datasets in support of time-constrained weather and environmental alerts and warnings is essential, even with dedicated high-performance computational capabilities. What are the optimal indicators for advanced decision making? How do we recognize the line between sufficient statistical sampling and excessive, mission-destructive sampling? How do we assure that the normalization and parsing process, when interpolated through numerical models, yields accurate and actionable alerts and warnings? This presentation will address the integrated means and methods to achieve desired outputs for NASA and consumers of its data.
NASA Astrophysics Data System (ADS)
Sun, Biao; Zhao, Wenfeng; Zhu, Xinshan
2017-06-01
Objective. Data compression is crucial for resource-constrained wireless neural recording applications with limited data bandwidth, and compressed sensing (CS) theory has successfully demonstrated its potential in neural recording applications. In this paper, an analytical, training-free CS recovery method, termed group weighted analysis ℓ1-minimization (GWALM), is proposed for wireless neural recording. Approach. The GWALM method consists of three parts: (1) the analysis model is adopted to enforce sparsity of the neural signals, therefore overcoming the drawbacks of conventional synthesis models and enhancing the recovery performance. (2) A multi-fractional-order difference matrix is constructed as the analysis operator, thus avoiding the dictionary learning procedure and reducing the need for previously acquired data and computational complexities. (3) By exploiting the statistical properties of the analysis coefficients, a group weighting approach is developed to enhance the performance of analysis ℓ1-minimization. Main results. Experimental results on synthetic and real datasets reveal that the proposed approach outperforms state-of-the-art CS-based methods in terms of both spike recovery quality and classification accuracy. Significance. Energy and area efficiency of the GWALM make it an ideal candidate for resource-constrained, large scale wireless neural recording applications. The training-free feature of the GWALM further improves its robustness to spike shape variation, thus making it more practical for long term wireless neural recording.
Interactogeneous: Disease Gene Prioritization Using Heterogeneous Networks and Full Topology Scores
Gonçalves, Joana P.; Francisco, Alexandre P.; Moreau, Yves; Madeira, Sara C.
2012-01-01
Disease gene prioritization aims to suggest potential implications of genes in disease susceptibility. Often accomplished in a guilt-by-association scheme, promising candidates are sorted according to their relatedness to known disease genes. Network-based methods have been successfully exploiting this concept by capturing the interaction of genes or proteins into a score. Nonetheless, most current approaches yield at least some of the following limitations: (1) networks comprise only curated physical interactions leading to poor genome coverage and density, and bias toward a particular source; (2) scores focus on adjacencies (direct links) or the most direct paths (shortest paths) within a constrained neighborhood around the disease genes, ignoring potentially informative indirect paths; (3) global clustering is widely applied to partition the network in an unsupervised manner, attributing little importance to prior knowledge; (4) confidence weights and their contribution to edge differentiation and ranking reliability are often disregarded. We hypothesize that network-based prioritization related to local clustering on graphs and considering full topology of weighted gene association networks integrating heterogeneous sources should overcome the above challenges. We term such a strategy Interactogeneous. We conducted cross-validation tests to assess the impact of network sources, alternative path inclusion and confidence weights on the prioritization of putative genes for 29 diseases. Heat diffusion ranking proved the best prioritization method overall, increasing the gap to neighborhood and shortest paths scores mostly on single source networks. Heterogeneous associations consistently delivered superior performance over single source data across the majority of methods. Results on the contribution of confidence weights were inconclusive. Finally, the best Interactogeneous strategy, heat diffusion ranking and associations from the STRING database, was used to prioritize genes for Parkinson’s disease. This method effectively recovered known genes and uncovered interesting candidates which could be linked to pathogenic mechanisms of the disease. PMID:23185389
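A small sketch of heat-diffusion-style ranking on a confidence-weighted gene association network: heat placed on known disease genes is spread over the graph, here using the regularized-Laplacian form score = (I + tL)⁻¹h for simplicity, which may differ from the paper's exact kernel. The toy network and seed genes are invented.

```python
import numpy as np

def diffusion_ranking(W, seeds, t=1.0):
    """Rank nodes of a weighted association network by diffusing 'heat' from
    seed (known disease) genes: score = (I + t L)^(-1) h, with L the graph
    Laplacian of the confidence-weighted adjacency W and h the seed vector."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    h = np.zeros(W.shape[0])
    h[seeds] = 1.0
    scores = np.linalg.solve(np.eye(len(h)) + t * L, h)
    return np.argsort(-scores), scores

# Invented 6-gene network with confidence weights; genes 0 and 1 are known disease genes.
W = np.array([[0, .9, .2, 0, 0, 0],
              [.9, 0, .7, .1, 0, 0],
              [.2, .7, 0, .8, 0, 0],
              [0, .1, .8, 0, .3, 0],
              [0, 0, 0, .3, 0, .5],
              [0, 0, 0, 0, .5, 0]], dtype=float)
ranking, scores = diffusion_ranking(W, seeds=[0, 1])
```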
NASA Technical Reports Server (NTRS)
Hrinda, Glenn A.; Nguyen, Duc T.
2008-01-01
A technique for the optimization of stability constrained geometrically nonlinear shallow trusses with snap through behavior is demonstrated using the arc length method and a strain energy density approach within a discrete finite element formulation. The optimization method uses an iterative scheme that evaluates the design variables' performance and then updates them according to a recursive formula controlled by the arc length method. A minimum weight design is achieved when a uniform nonlinear strain energy density is found in all members. This minimal condition places the design load just below the critical limit load causing snap through of the structure. The optimization scheme is programmed into a nonlinear finite element algorithm to find the large strain energy at critical limit loads. Examples of highly nonlinear trusses found in literature are presented to verify the method.
LINKS: learning-based multi-source IntegratioN frameworK for Segmentation of infant brain images.
Wang, Li; Gao, Yaozong; Shi, Feng; Li, Gang; Gilmore, John H; Lin, Weili; Shen, Dinggang
2015-03-01
Segmentation of infant brain MR images is challenging due to insufficient image quality, severe partial volume effect, and ongoing maturation and myelination processes. In the first year of life, the image contrast between white and gray matters of the infant brain undergoes dramatic changes. In particular, the image contrast is inverted around 6-8months of age, and the white and gray matter tissues are isointense in both T1- and T2-weighted MR images and thus exhibit the extremely low tissue contrast, which poses significant challenges for automated segmentation. Most previous studies used multi-atlas label fusion strategy, which has the limitation of equally treating the different available image modalities and is often computationally expensive. To cope with these limitations, in this paper, we propose a novel learning-based multi-source integration framework for segmentation of infant brain images. Specifically, we employ the random forest technique to effectively integrate features from multi-source images together for tissue segmentation. Here, the multi-source images include initially only the multi-modality (T1, T2 and FA) images and later also the iteratively estimated and refined tissue probability maps of gray matter, white matter, and cerebrospinal fluid. Experimental results on 119 infants show that the proposed method achieves better performance than other state-of-the-art automated segmentation methods. Further validation was performed on the MICCAI grand challenge and the proposed method was ranked top among all competing methods. Moreover, to alleviate the possible anatomical errors, our method can also be combined with an anatomically-constrained multi-atlas labeling approach for further improving the segmentation accuracy. Copyright © 2014 Elsevier Inc. All rights reserved.
Derrac, Joaquín; Triguero, Isaac; Garcia, Salvador; Herrera, Francisco
2012-10-01
Cooperative coevolution is a successful branch of evolutionary computation that allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, through the use of evolutionary algorithms. It is possible to apply it to the development of advanced classification methods, which integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods, in order to show the benefit of employing coevolution to apply the considered techniques simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms other methods in the comparison, thus becoming a suitable tool in the task of enhancing the nearest neighbor classifier.
Level set method for image segmentation based on moment competition
NASA Astrophysics Data System (ADS)
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
Canonical Drude Weight for Non-integrable Quantum Spin Chains
NASA Astrophysics Data System (ADS)
Mastropietro, Vieri; Porta, Marcello
2018-03-01
The Drude weight is a central quantity for the transport properties of quantum spin chains. The canonical definition of Drude weight is directly related to Kubo formula of conductivity. However, the difficulty in the evaluation of such expression has led to several alternative formulations, accessible to different methods. In particular, the Euclidean, or imaginary-time, Drude weight can be studied via rigorous renormalization group. As a result, in the past years several universality results have been proven for such quantity at zero temperature; remarkably, the proofs work for both integrable and non-integrable quantum spin chains. Here we establish the equivalence of Euclidean and canonical Drude weights at zero temperature. Our proof is based on rigorous renormalization group methods, Ward identities, and complex analytic ideas.
NASA Astrophysics Data System (ADS)
Wang, Mingming; Luo, Jianjun; Yuan, Jianping; Walter, Ulrich
2018-05-01
A multi-arm space robot is more effective than a single-arm one, especially when the target is tumbling. This paper investigates the application of a particle swarm optimization (PSO) strategy to coordinated trajectory planning of a dual-arm space robot in free-floating mode. In order to overcome the dynamic singularity issue, the direct kinematics equations in conjunction with constrained PSO are employed for coordinated trajectory planning of the dual-arm space robot. The joint trajectories are parametrized with Bézier curves to simplify the calculation. A constrained PSO scheme with adaptive inertia weight is implemented to find the optimal joint trajectories while specific objectives and imposed constraints are satisfied. The proposed method is not sensitive to the singularity issue due to the application of forward kinematic equations. Simulation results are presented for coordinated trajectory planning of two kinematically redundant manipulators mounted on a free-floating spacecraft and demonstrate the effectiveness of the proposed method.
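For context, the following is a minimal sketch of a penalty-based constrained PSO with a linearly decreasing (adaptive) inertia weight on a toy two-variable problem. The objective, constraint, velocity limit, and all parameter values are illustrative assumptions; the paper's Bézier-parametrized joint trajectories and free-floating robot dynamics are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):                        # toy cost (assumption): sphere function
    return np.sum(x**2, axis=1)

def penalty(x):                          # toy constraint (assumption): x0 + x1 >= 1
    violation = np.maximum(0.0, 1.0 - (x[:, 0] + x[:, 1]))
    return 1e3 * violation**2

n_particles, n_dims, n_iters = 30, 2, 200
w_max, w_min, c1, c2 = 0.9, 0.4, 2.0, 2.0

pos = rng.uniform(-5, 5, size=(n_particles, n_dims))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = objective(pos) + penalty(pos)
gbest = pbest[np.argmin(pbest_cost)].copy()

for it in range(n_iters):
    w = w_max - (w_max - w_min) * it / (n_iters - 1)   # adaptive (decreasing) inertia weight
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    vel = np.clip(vel, -1.0, 1.0)                      # particle velocity limit
    pos = pos + vel
    cost = objective(pos) + penalty(pos)
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best solution:", gbest, "cost:", objective(gbest[None, :])[0])
```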
NASA Astrophysics Data System (ADS)
Shivakumar, J.; Ashok, M. H.; Khadakbhavi, Vishwanath; Pujari, Sanjay; Nandurkar, Santosh
2018-02-01
The present work focuses on geometrically nonlinear transient analysis of laminated smart composite plates integrated with patches of active fiber composites (AFC) using active constrained layer damping (ACLD) as the distributed actuators. The analysis has been carried out using a generalised energy-based finite element model. The coupled electromechanical finite element model is derived using von Kármán-type nonlinear strain-displacement relations and first-order shear deformation theory (FSDT). Eight-node iso-parametric serendipity elements are used for discretization of the overall plate integrated with the AFC patch material. The viscoelastic constrained layer is modelled using the GHM method. The numerical results show the improvement in the active damping characteristics of the laminated composite plates over passive damping for suppressing geometrically nonlinear transient vibrations of laminated composite plates with AFC as the patch material.
Simple modification of Oja rule limits L1-norm of weight vector and leads to sparse connectivity.
Aparin, Vladimir
2012-03-01
This letter describes a simple modification of the Oja learning rule, which asymptotically constrains the L1-norm of an input weight vector instead of the L2-norm as in the original rule. This constraining is local as opposed to commonly used instant normalizations, which require the knowledge of all input weights of a neuron to update each one of them individually. The proposed rule converges to a weight vector that is sparser (has more zero weights) than the vector learned by the original Oja rule with or without the zero bound, which could explain the developmental synaptic pruning.
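To make the contrast concrete, here is a small numerical sketch of the classical Oja update next to a hypothetical sign-based variant. The exact update rule from the letter is not reproduced; the variant below merely replaces the L2-style decay term with one proportional to sign(w) to illustrate how a purely local rule can favor an L1-type constraint and drive small weights toward zero.

```python
import numpy as np

rng = np.random.default_rng(2)
eta, n_steps, dim = 0.01, 5000, 8
X = rng.normal(size=(n_steps, dim))        # stream of input vectors (synthetic)

w_l2 = rng.normal(scale=0.1, size=dim)     # classical Oja: asymptotically constrains the L2-norm
w_l1 = w_l2.copy()                         # hypothetical L1-style variant (assumption)

for x in X:
    y2 = w_l2 @ x
    w_l2 += eta * y2 * (x - y2 * w_l2)             # Oja: Hebbian term minus L2-type decay
    y1 = w_l1 @ x
    w_l1 += eta * y1 * (x - y1 * np.sign(w_l1))    # sign-based decay pushes small weights
                                                   # toward zero, tending to a sparser vector

print("L2-rule norm:", np.linalg.norm(w_l2), "nonzeros:", np.sum(np.abs(w_l2) > 1e-3))
print("L1-rule L1-norm:", np.sum(np.abs(w_l1)), "nonzeros:", np.sum(np.abs(w_l1) > 1e-3))
```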
Zhang, Huaguang; Qu, Qiuxia; Xiao, Geyang; Cui, Yang
2018-06-01
Based on integral sliding mode and approximate dynamic programming (ADP) theory, a novel optimal guaranteed cost sliding mode control is designed for constrained-input nonlinear systems with matched and unmatched disturbances. When the system moves on the sliding surface, the optimal guaranteed cost control problem of sliding mode dynamics is transformed into the optimal control problem of a reformulated auxiliary system with a modified cost function. The ADP algorithm based on single critic neural network (NN) is applied to obtain the approximate optimal control law for the auxiliary system. Lyapunov techniques are used to demonstrate the convergence of the NN weight errors. In addition, the derived approximate optimal control is verified to guarantee the sliding mode dynamics system to be stable in the sense of uniform ultimate boundedness. Some simulation results are presented to verify the feasibility of the proposed control scheme.
NASA Technical Reports Server (NTRS)
Olds, John R.; Cowart, Kris
2001-01-01
A method for integrating aeroheating analysis into conceptual reusable launch vehicle (RLV) design is presented in this thesis. This process allows faster turn-around time to converge an RLV design by enabling the design of an optimized thermal protection system (TPS). It consists of the coupling and automation of four computer software packages: MINIVER, TPSX, TCAT, and ADS. MINIVER is an aeroheating code that produces centerline radiation equilibrium temperatures, convective heating rates, and heat loads over simplified vehicle geometries. These include flat plates and swept cylinders that model wings and leading edges, respectively. TPSX is a NASA Ames material properties database that is available on the World Wide Web. The newly developed Thermal Calculation Analysis Tool (TCAT) uses finite difference methods to carry out a transient in-depth 1-D conduction analysis over the center mold line of the vehicle. This is used along with the Automated Design Synthesis (ADS) code to correctly size the vehicle's TPS. The numerical optimizer ADS uses algorithms that solve constrained and unconstrained design problems. The resulting outputs of this process are TPS material types, unit thicknesses, and acreage percentages. TCAT was developed for several purposes. First, it provides a means to calculate the transient in-depth conduction seen by the surface of the TPS material that protects a vehicle during ascent and reentry. Along with the in-depth conduction, radiation from the surface of the material is calculated, together with the temperatures at the backface and interior of the TPS material. Second, TCAT adds speed and automation to the overall design process. Another motivation in the development of TCAT is optimization. In some vehicles, the TPS accounts for a high percentage of the overall vehicle dry weight. Optimizing the TPS weight lowers this percentage and, in turn, the cost of the TPS and of the overall vehicle.
Synthesis of aircraft structures using integrated design and analysis methods
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Goetz, R. C.
1978-01-01
A systematic research effort to develop and validate methods for structural sizing of an airframe designed with composite materials and active controls is reported. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate the structural sizing and the associated active control system that are optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.
Apparatus and system for multivariate spectral analysis
Keenan, Michael R.; Kotula, Paul G.
2003-06-24
An apparatus and system for determining the properties of a sample from measured spectral data collected from the sample by performing a method of multivariate spectral analysis. The method can include: generating a two-dimensional matrix A containing measured spectral data; providing a weighted spectral data matrix D by performing a weighting operation on matrix A; factoring D into the product of two matrices, C and S^T, by performing a constrained alternating least-squares analysis of D = CS^T, where C is a concentration intensity matrix and S is a spectral shapes matrix; unweighting C and S by applying the inverse of the weighting used previously; and determining the properties of the sample by inspecting C and S. This method can be used by a spectrum analyzer to process X-ray spectral data generated by a spectral analysis system that can include a Scanning Electron Microscope (SEM) with an Energy Dispersive Detector and Pulse Height Analyzer.
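The core D = CS^T alternating least-squares step can be sketched as follows, with nonnegativity chosen here as the example constraint and synthetic data in place of measured spectra. The weighting/unweighting steps and the specific constraints of the patented procedure are not reproduced; this is only an illustration of constrained ALS.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "measured spectra": 200 pixels x 64 energy channels, 3 components.
C_true = rng.random((200, 3))
S_true = rng.random((64, 3))
D = C_true @ S_true.T + 0.01 * rng.normal(size=(200, 64))

k = 3
C = rng.random((200, k))
S = rng.random((64, k))

for _ in range(200):
    # Alternately solve D ~ C S^T for S with C fixed, then for C with S fixed,
    # clipping negatives to zero as a simple nonnegativity constraint.
    S = np.linalg.lstsq(C, D, rcond=None)[0].T
    S = np.clip(S, 0.0, None)
    C = np.linalg.lstsq(S, D.T, rcond=None)[0].T
    C = np.clip(C, 0.0, None)

print("relative residual:", np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))
```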
Mine safety assessment using gray relational analysis and bow tie model
2018-01-01
Mine safety assessment is a precondition for ensuring orderly and safe production. The main purpose of this study was to prevent mine accidents more effectively by proposing a composite risk analysis model. First, the weights of the assessment indicators were determined by the revised integrated weight method, in which the objective weights were determined by a variation coefficient method and the subjective weights were determined by the Delphi method. A new formula was then adopted to calculate the integrated weights based on the subjective and objective weights. Second, after the assessment indicator weights were determined, gray relational analysis was used to evaluate the safety of mine enterprises. Mine enterprise safety was ranked according to the gray relational degree, and weak links in mine safety practices were identified based on gray relational analysis. Third, to validate the revised integrated weight method adopted in the process of gray relational analysis, the fuzzy evaluation method was applied to the safety assessment of mine enterprises. Fourth, for the first time, the bow tie model was adopted to identify the causes and consequences of weak links and allow corresponding safety measures to be taken to guarantee the mine’s safe production. A case study of mine safety assessment was presented to demonstrate the effectiveness and rationality of the proposed composite risk analysis model, which can be applied to other related industries for safety evaluation. PMID:29561875
Mixed finite-element formulations in piezoelectricity and flexoelectricity.
Mao, Sheng; Purohit, Prashant K; Aravas, Nikolaos
2016-06-01
Flexoelectricity, the linear coupling of strain gradient and electric polarization, is inherently a size-dependent phenomenon. The energy storage function for a flexoelectric material depends not only on polarization and strain, but also strain-gradient. Thus, conventional finite-element methods formulated solely on displacement are inadequate to treat flexoelectric solids since gradients raise the order of the governing differential equations. Here, we introduce a computational framework based on a mixed formulation developed previously by one of the present authors and a colleague. This formulation uses displacement and displacement-gradient as separate variables which are constrained in a 'weighted integral sense' to enforce their known relation. We derive a variational formulation for boundary-value problems for piezo- and/or flexoelectric solids. We validate this computational framework against available exact solutions. Our new computational method is applied to more complex problems, including a plate with an elliptical hole, stationary cracks, as well as tension and shear of solids with a repeating unit cell. Our results address several issues of theoretical interest, generate predictions of experimental merit and reveal interesting flexoelectric phenomena with potential for application.
Real-time depth camera tracking with geometrically stable weight algorithm
NASA Astrophysics Data System (ADS)
Fu, Xingyin; Zhu, Feng; Qi, Feng; Wang, Mingming
2017-03-01
We present an approach for real-time camera tracking with a depth stream. Existing methods are prone to drift in scenes without sufficient geometric information. First, we propose a new weighting method for the iterative closest point algorithm commonly used in real-time dense mapping and tracking systems. By detecting uncertainty in pose and increasing the weight of points that constrain unstable transformations, our system achieves accurate and robust trajectory estimation results. Our pipeline can be fully parallelized with GPU and incorporated into the current real-time depth camera tracking system seamlessly. Second, we compare the state-of-the-art weight algorithms and propose a weight degradation algorithm according to the measurement characteristics of a consumer depth camera. Third, we use Nvidia Kepler Shuffle instructions during warp and block reduction to improve the efficiency of our system. Results on the public TUM RGB-D database benchmark demonstrate that our camera tracking system achieves state-of-the-art results both in accuracy and efficiency.
Vibration control of multiferroic fibrous composite plates using active constrained layer damping
NASA Astrophysics Data System (ADS)
Kattimani, S. C.; Ray, M. C.
2018-06-01
Geometrically nonlinear vibration control of fiber reinforced magneto-electro-elastic or multiferroic fibrous composite plates using active constrained layer damping treatment has been investigated. The piezoelectric (BaTiO3) fibers are embedded in the magnetostrictive (CoFe2O4) matrix, forming a magneto-electro-elastic or multiferroic smart composite. A three-dimensional finite element model of such fiber reinforced magneto-electro-elastic plates integrated with the active constrained layer damping patches is developed. The influence of electro-elastic, magneto-elastic and electromagnetic coupled fields on the vibration has been studied. The Golla-Hughes-McTavish method in the time domain is employed for modeling the constrained viscoelastic layer of the active constrained layer damping treatment. The von Kármán type nonlinear strain-displacement relations are incorporated for developing the three-dimensional finite element model. The effect of fiber volume fraction, fiber orientation and boundary conditions on the control of geometrically nonlinear vibration of the fiber reinforced magneto-electro-elastic plates is investigated. The effect of the piezoelectric fiber orientation angle in the 1-3 piezoelectric constraining layer on the performance of the active constrained layer damping treatment has also been emphasized.
Analyses of deep mammalian sequence alignments and constraint predictions for 1% of the human genome
Margulies, Elliott H.; Cooper, Gregory M.; Asimenos, George; Thomas, Daryl J.; Dewey, Colin N.; Siepel, Adam; Birney, Ewan; Keefe, Damian; Schwartz, Ariel S.; Hou, Minmei; Taylor, James; Nikolaev, Sergey; Montoya-Burgos, Juan I.; Löytynoja, Ari; Whelan, Simon; Pardi, Fabio; Massingham, Tim; Brown, James B.; Bickel, Peter; Holmes, Ian; Mullikin, James C.; Ureta-Vidal, Abel; Paten, Benedict; Stone, Eric A.; Rosenbloom, Kate R.; Kent, W. James; Bouffard, Gerard G.; Guan, Xiaobin; Hansen, Nancy F.; Idol, Jacquelyn R.; Maduro, Valerie V.B.; Maskeri, Baishali; McDowell, Jennifer C.; Park, Morgan; Thomas, Pamela J.; Young, Alice C.; Blakesley, Robert W.; Muzny, Donna M.; Sodergren, Erica; Wheeler, David A.; Worley, Kim C.; Jiang, Huaiyang; Weinstock, George M.; Gibbs, Richard A.; Graves, Tina; Fulton, Robert; Mardis, Elaine R.; Wilson, Richard K.; Clamp, Michele; Cuff, James; Gnerre, Sante; Jaffe, David B.; Chang, Jean L.; Lindblad-Toh, Kerstin; Lander, Eric S.; Hinrichs, Angie; Trumbower, Heather; Clawson, Hiram; Zweig, Ann; Kuhn, Robert M.; Barber, Galt; Harte, Rachel; Karolchik, Donna; Field, Matthew A.; Moore, Richard A.; Matthewson, Carrie A.; Schein, Jacqueline E.; Marra, Marco A.; Antonarakis, Stylianos E.; Batzoglou, Serafim; Goldman, Nick; Hardison, Ross; Haussler, David; Miller, Webb; Pachter, Lior; Green, Eric D.; Sidow, Arend
2007-01-01
A key component of the ongoing ENCODE project involves rigorous comparative sequence analyses for the initially targeted 1% of the human genome. Here, we present orthologous sequence generation, alignment, and evolutionary constraint analyses of 23 mammalian species for all ENCODE targets. Alignments were generated using four different methods; comparisons of these methods reveal large-scale consistency but substantial differences in terms of small genomic rearrangements, sensitivity (sequence coverage), and specificity (alignment accuracy). We describe the quantitative and qualitative trade-offs concomitant with alignment method choice and the levels of technical error that need to be accounted for in applications that require multisequence alignments. Using the generated alignments, we identified constrained regions using three different methods. While the different constraint-detecting methods are in general agreement, there are important discrepancies relating to both the underlying alignments and the specific algorithms. However, by integrating the results across the alignments and constraint-detecting methods, we produced constraint annotations that were found to be robust based on multiple independent measures. Analyses of these annotations illustrate that most classes of experimentally annotated functional elements are enriched for constrained sequences; however, large portions of each class (with the exception of protein-coding sequences) do not overlap constrained regions. The latter elements might not be under primary sequence constraint, might not be constrained across all mammals, or might have expendable molecular functions. Conversely, 40% of the constrained sequences do not overlap any of the functional elements that have been experimentally identified. Together, these findings demonstrate and quantify how many genomic functional elements await basic molecular characterization. PMID:17567995
Effect of constrained weight shift on the static balance and muscle activation of stroke patients
Kang, Kyung Woo; Kim, Kyoung; Lee, Na Kyung; Kwon, Jung Won; Son, Sung Min
2015-01-01
[Purpose] The purpose of this study was to evaluate the effects of constrained weight shift induced by shoe lift beneath the unaffected lower extremity, on balance functions and electromyography of the affected lower extremity of stroke patients. [Subjects and Methods] Twelve patients with unilateral stroke were recruited as volunteers for this study. The subjects were repeatedly measured in a randomized order under three conditions: no-shoe lift, and shoe lifts of 5 mm and 10 mm heights beneath the unaffected lower extremity. [Results] Standing with a 10 mm shoe lift for the unaffected lower extremity decreased the mean velocity of mediolateral sway compared to no-shoe lift. Regarding the velocity of anteroposterior sway, standing with 5 mm and 10 mm shoe lifts decreased the mean velocity of anteroposterior sway. The muscle activation of the affected lower extremity was not significantly different among the no-shoe lift, 5 mm shoe lift and 10 mm shoe lift conditions; however, the muscle activities of the rectus femoris, biceps femoris, tibialis anterior, and medial gastrocnemius of the affected lower extremity progressively improved with increasing height of the shoe lift. [Conclusion] A constrained weight shift to the affected side elicited by a shoe insole of 10 mm height on the unaffected side can improve the static standing balance of stroke patients, and it resulted in 14–24% increases in the muscle activities of the affected leg. PMID:25931729
Diameter-Constrained Steiner Tree
NASA Astrophysics Data System (ADS)
Ding, Wei; Lin, Guohui; Xue, Guoliang
Given an edge-weighted undirected graph G = (V,E,c,w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D₀, we seek a minimum-cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D₀. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves of the tree. Such a problem is called the minimum-cost diameter-constrained Steiner tree problem. This problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum-cost diameter-constrained Steiner tree under a fixed topology.
Enhanced reconstruction of weighted networks from strengths and degrees
NASA Astrophysics Data System (ADS)
Mastrandrea, Rossana; Squartini, Tiziano; Fagiolo, Giorgio; Garlaschelli, Diego
2014-04-01
Network topology plays a key role in many phenomena, from the spreading of diseases to that of financial crises. Whenever the whole structure of a network is unknown, one must resort to reconstruction methods that identify the least biased ensemble of networks consistent with the partial information available. A challenging case, frequently encountered due to privacy issues in the analysis of interbank flows and Big Data, is when there is only local (node-specific) aggregate information available. For binary networks, the relevant ensemble is one where the degree (number of links) of each node is constrained to its observed value. However, for weighted networks the problem is much more complicated. While the naïve approach prescribes to constrain the strengths (total link weights) of all nodes, recent counter-intuitive results suggest that in weighted networks the degrees are often more informative than the strengths. This implies that the reconstruction of weighted networks would be significantly enhanced by the specification of both strengths and degrees, a computationally hard and bias-prone procedure. Here we solve this problem by introducing an analytical and unbiased maximum-entropy method that works in the shortest possible time and does not require the explicit generation of reconstructed samples. We consider several real-world examples and show that, while the strengths alone give poor results, the additional knowledge of the degrees yields accurately reconstructed networks. Information-theoretic criteria rigorously confirm that the degree sequence, as soon as it is non-trivial, is irreducible to the strength sequence. Our results have strong implications for the analysis of motifs and communities and whenever the reconstructed ensemble is required as a null model to detect higher-order patterns.
Structural optimization: Status and promise
NASA Astrophysics Data System (ADS)
Kamat, Manohar P.
Chapters contained in this book include fundamental concepts of optimum design, mathematical programming methods for constrained optimization, function approximations, approximate reanalysis methods, dual mathematical programming methods for constrained optimization, a generalized optimality criteria method, and a tutorial and survey of multicriteria optimization in engineering. Also included are chapters on the compromise decision support problem and the adaptive linear programming algorithm, sensitivity analyses of discrete and distributed systems, the design sensitivity analysis of nonlinear structures, optimization by decomposition, mixed elements in shape sensitivity analysis of structures based on local criteria, and optimization of stiffened cylindrical shells subjected to destabilizing loads. Other chapters are on applications to fixed-wing aircraft and spacecraft, integrated optimum structural and control design, modeling concurrency in the design of composite structures, and tools for structural optimization. (No individual items are abstracted in this volume)
Isotropic non-white matter partial volume effects in constrained spherical deconvolution.
Roine, Timo; Jeurissen, Ben; Perrone, Daniele; Aelterman, Jan; Leemans, Alexander; Philips, Wilfried; Sijbers, Jan
2014-01-01
Diffusion-weighted (DW) magnetic resonance imaging (MRI) is a non-invasive imaging method, which can be used to investigate neural tracts in the white matter (WM) of the brain. Significant partial volume effects (PVEs) are present in the DW signal due to relatively large voxel sizes. These PVEs can be caused by both non-WM tissue, such as gray matter (GM) and cerebrospinal fluid (CSF), and by multiple non-parallel WM fiber populations. High angular resolution diffusion imaging (HARDI) methods have been developed to correctly characterize complex WM fiber configurations, but to date, many of the HARDI methods do not account for non-WM PVEs. In this work, we investigated the isotropic PVEs caused by non-WM tissue in WM voxels on fiber orientations extracted with constrained spherical deconvolution (CSD). Experiments were performed on simulated and real DW-MRI data. In particular, simulations were performed to demonstrate the effects of varying the diffusion weightings, signal-to-noise ratios (SNRs), fiber configurations, and tissue fractions. Our results show that the presence of non-WM tissue signal causes a decrease in the precision of the detected fiber orientations and an increase in the detection of false peaks in CSD. We estimated 35-50% of WM voxels to be affected by non-WM PVEs. For HARDI sequences, which typically have a relatively high degree of diffusion weighting, these adverse effects are most pronounced in voxels with GM PVEs. The non-WM PVEs become severe with 50% GM volume for maximum spherical harmonics orders of 8 and below, and already with 25% GM volume for higher orders. In addition, a low diffusion weighting or SNR increases the effects. The non-WM PVEs may cause problems in connectomics, where reliable fiber tracking at the WM-GM interface is especially important. We suggest acquiring data with high diffusion-weighting 2500-3000 s/mm², reasonable SNR (~30) and using lower SH orders in GM contaminated regions to minimize the non-WM PVEs in CSD.
A Pulse Rate Detection Method for Mouse Application Based on Multi-PPG Sensors
Chen, Wei-Hao
2017-01-01
Heart rate is an important physiological parameter for healthcare. Among measurement methods, photoplethysmography (PPG) is an easy and convenient method for pulse rate detection. However, as the PPG signal faces the challenge of motion artifacts and is constrained by the position chosen, the purpose of this paper is to implement a comfortable and easy-to-use multi-PPG sensor module combined with a stable and accurate real-time pulse rate detection method on a computer mouse. A weighted average method for multi-PPG sensors is used to adjust the weight of each signal channel in order to raise the accuracy and stability of the detected signal, thereby effectively and efficiently reducing the disturbance of noise during movement. According to the experimental results, the proposed method can increase the usability and probability of PPG signal detection on palms. PMID:28708112
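A small sketch of the channel-weighting idea is given below: several noisy PPG channels are combined with weights derived from an inverse-variance quality measure, so that cleaner channels dominate the fused signal. The quality metric, signal model, and sampling parameters are illustrative assumptions and not the weighting rule used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)
fs, seconds = 100, 10
t = np.arange(fs * seconds) / fs
clean = np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm pulse wave (synthetic)

# Four sensor channels with different noise levels (motion artifacts differ per channel).
noise_levels = np.array([0.1, 0.3, 0.8, 1.5])
channels = np.array([clean + s * rng.normal(size=t.size) for s in noise_levels])

# Simple quality measure (assumption): inverse variance of the high-frequency residual
# left after a short moving-average smoothing of each channel.
smoothed = np.apply_along_axis(np.convolve, 1, channels, np.ones(10) / 10, 'same')
residual = channels - smoothed
weights = 1.0 / (np.var(residual, axis=1) + 1e-12)
weights /= weights.sum()

fused = weights @ channels                    # weighted average across channels
print("channel weights:", np.round(weights, 3))
```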
Phenomenological constraints on A_N in p↑p → πX from Lorentz invariance relations
Gamberg, Leonard; Kang, Zhong-Bo; Pitonyak, Daniel; ...
2017-04-27
Here, we present a new analysis of A_N in p↑p → πX within the collinear twist-3 factorization formalism. We incorporate recently derived Lorentz invariance relations into our calculation and focus on input from the kinematical twist-3 functions, which are weighted integrals of transverse momentum dependent (TMD) functions. Particularly, we use the latest extractions of the Sivers and Collins functions with TMD evolution to compute certain terms in A_N. Consequently, we are able to constrain the remaining contributions from the lesser known dynamical twist-3 correlators.
Garcia, C. Amanda; Halford, Keith J.; Laczniak, Randell J.
2010-01-01
Hydraulic conductivities of volcanic and carbonate lithologic units at the Nevada Test Site were estimated from flow logs and aquifer-test data. Borehole flow and drawdown were integrated and interpreted using a radial, axisymmetric flow model, AnalyzeHOLE. This integrated approach is used because complex well completions and heterogeneous aquifers and confining units produce vertical flow in the annular space and aquifers adjacent to the wellbore. AnalyzeHOLE simulates vertical flow, in addition to horizontal flow, which accounts for converging flow toward screen ends and diverging flow toward transmissive intervals. Simulated aquifers and confining units uniformly are subdivided by depth into intervals in which the hydraulic conductivity is estimated with the Parameter ESTimation (PEST) software. Between 50 and 150 hydraulic-conductivity parameters were estimated by minimizing weighted differences between simulated and measured flow and drawdown. Transmissivity estimates from single-well or multiple-well aquifer tests were used to constrain estimates of hydraulic conductivity. The distribution of hydraulic conductivity within each lithology had a minimum variance because estimates were constrained with Tikhonov regularization. AnalyzeHOLE simulated hydraulic-conductivity estimates for lithologic units across screened and cased intervals are as much as 100 times less than those estimated using proportional flow-log analyses applied across screened intervals only. Smaller estimates of hydraulic conductivity for individual lithologic units are simulated because sections of the unit behind cased intervals of the wellbore are not assumed to be impermeable, and therefore, can contribute flow to the wellbore. Simulated hydraulic-conductivity estimates vary by more than three orders of magnitude across a lithologic unit, indicating a high degree of heterogeneity in volcanic and carbonate-rock units. The higher water transmitting potential of carbonate-rock units relative to volcanic-rock units is exemplified by the large difference in their estimated maximum hydraulic conductivity; 4,000 and 400 feet per day, respectively. Simulated minimum estimates of hydraulic conductivity are inexact and represent the lower detection limit of the method. Minimum thicknesses of lithologic intervals also were defined for comparing AnalyzeHOLE results to hydraulic properties in regional ground-water flow models.
NASA Technical Reports Server (NTRS)
1980-01-01
The initial ACT configuration design task of the integrated application of active controls (IAAC) technology project within the Energy Efficient Transport Program is summarized. A constrained application of active controls technology (ACT) resulted in significant improvements over a conventional baseline configuration previously established. The configuration uses the same levels of technology, takeoff gross weight, payload, and design requirements/objectives as the baseline, except for flying qualities, flutter, and ACT. The baseline wing is moved forward 1.68 m. The configuration incorporates pitch-augmented stability (which enabled an approximately 10% aft shift in cruise center of gravity and a 45% reduction in horizontal tail size), lateral/directional-augmented stability, an angle of attack limiter, wing load alleviation, and flutter mode control. This resulted in a 930 kg reduction in airplane operating empty weight and a 3.6% improvement in cruise efficiency, yielding a 13% range increase. Adjusted to the 3590 km baseline mission range, this amounts to 6% block fuel reduction and a 15.7% higher incremental return on investment, using 1978 dollars and fuel cost.
Modares, Hamidreza; Lewis, Frank L; Naghibi-Sistani, Mohammad-Bagher
2013-10-01
This paper presents an online policy iteration (PI) algorithm to learn the continuous-time optimal control solution for unknown constrained-input systems. The proposed PI algorithm is implemented on an actor-critic structure where two neural networks (NNs) are tuned online and simultaneously to generate the optimal bounded control policy. The requirement of complete knowledge of the system dynamics is obviated by employing a novel NN identifier in conjunction with the actor and critic NNs. It is shown how the identifier weights estimation error affects the convergence of the critic NN. A novel learning rule is developed to guarantee that the identifier weights converge to small neighborhoods of their ideal values exponentially fast. To provide an easy-to-check persistence of excitation condition, the experience replay technique is used. That is, recorded past experiences are used simultaneously with current data for the adaptation of the identifier weights. Stability of the whole system consisting of the actor, critic, system state, and system identifier is guaranteed while all three networks undergo adaptation. Convergence to a near-optimal control law is also shown. The effectiveness of the proposed method is illustrated with a simulation example.
Altürk, Ahmet
2016-01-01
Mean value theorems for both derivatives and integrals are very useful tools in mathematics. They can be used to obtain very important inequalities and to prove basic theorems of mathematical analysis. In this article, a semi-analytical method that is based on weighted mean-value theorem for obtaining solutions for a wide class of Fredholm integral equations of the second kind is introduced. Illustrative examples are provided to show the significant advantage of the proposed method over some existing techniques.
Yamamoto, Tatsuki; Miura, Chihiro; Fuji, Masako; Nagata, Shotaro; Otani, Yuria; Yagame, Takahiro; Yamato, Masahide; Kaminaka, Hironori
2017-02-21
In nature, orchid plants depend completely on symbiotic fungi for their nutrition at the germination and the subsequent seedling (protocorm) stages. However, only limited quantitative methods for evaluating the orchid-fungus interactions at the protocorm stage are currently available, which greatly constrains our understanding of the symbiosis. Here, we aimed to improve and integrate quantitative evaluations of the growth and fungal colonization in the protocorms of a terrestrial orchid, Bletilla striata, growing on a plate medium. We achieved both symbiotic and asymbiotic germinations for the terrestrial orchid B. striata. The protocorms produced by the two germination methods grew almost synchronously for the first three weeks. At week four, however, the length was significantly lower in the symbiotic protocorms. Interestingly, the dry weight of symbiotic protocorms did not significantly change during the growth period, which implies that there was only limited transfer of carbon compounds from the fungus to the protocorms in this relationship. Next, to evaluate the orchid-fungus interactions, we developed an ink-staining method to observe the hyphal coils in protocorms without preparing thin sections. Crushing the protocorm under the coverglass enables us to observe all hyphal coils in the protocorms with high resolution. For this observation, we established a criterion to categorize the stages of hyphal coils, depending on development and degradation. By counting the symbiotic cells within each stage, it was possible to quantitatively evaluate the orchid-fungus symbiosis. We describe a method for quantitative evaluation of orchid-fungus symbiosis by integrating the measurements of plant growth and fungal colonization. The current study revealed that although fungal colonization was observed in the symbiotic protocorms, the weight of the protocorm did not significantly increase, which is probably due to the incompatibility of the fungus in this symbiosis. These results suggest that fungal colonization and nutrition transfer can be differentially regulated in the symbiosis. The evaluation methods developed in this study can be used to study various quantitative aspects of the orchid-fungus symbiosis.
Weighted spline based integration for reconstruction of freeform wavefront.
Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra
2018-02-10
In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from the slope data has been implemented. The slope data of a freeform surface contain noise due to their machining process and that introduces reconstruction error. We have proposed a weighted cubic spline based least square integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted into a piecewise polynomial. The fitted coefficients are determined by using a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least squares technique to reconstruct the freeform wavefront. Simulation studies show the improved result using the proposed technique as compared to the existing cubic spline-based integration (CSLI) and the Southwell methods. The proposed reconstruction method has been experimentally implemented to a subaperture stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology application.
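The weighted least-squares integration step can be illustrated in one dimension: build a finite-difference operator relating the unknown wavefront samples to the measured slopes, and solve the weighted normal equations so that noisier slope samples count less. The paper's two-dimensional, smoothing-cubic-spline formulation (WCSLI) is not reproduced here; the noise model, weights, and zero-piston constraint below are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(5)
n, dx = 200, 0.01
x = np.arange(n) * dx
w_true = 0.5 * x**2 - 0.1 * x**3              # "wavefront" to recover (synthetic)

slope = np.gradient(w_true, dx)
noise_sigma = 0.2 * (1 + 5 * (x > 1.5))       # noisier toward one end (assumption)
slope_meas = slope + noise_sigma * rng.normal(size=n)

# Forward-difference operator D such that D @ w ~ interval slopes (n-1 equations).
D = (np.eye(n, k=1) - np.eye(n))[:-1] / dx
s = 0.5 * (slope_meas[:-1] + slope_meas[1:])  # midpoint slope per interval

# Weights: down-weight the noisier measurements (here the noise level is assumed
# known; in practice it would come from the spline-smoothing residuals).
w_diag = 1.0 / noise_sigma[:-1]**2

# Weighted least squares with a zero-piston constraint (fix w[0] = 0).
A = np.vstack([D, np.eye(1, n)])
b = np.concatenate([s, [0.0]])
Wfull = np.diag(np.concatenate([w_diag, [1e6]]))
w_rec = np.linalg.lstsq(np.sqrt(Wfull) @ A, np.sqrt(Wfull) @ b, rcond=None)[0]

print("rms reconstruction error:", np.sqrt(np.mean((w_rec - w_true)**2)))
```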
NASA Astrophysics Data System (ADS)
Wan, Minjie; Gu, Guohua; Qian, Weixian; Ren, Kan; Chen, Qian; Maldague, Xavier
2018-06-01
Infrared image enhancement plays a significant role in intelligent urban surveillance systems for smart city applications. Unlike existing methods that only exaggerate the global contrast, we propose a particle swarm optimization-based local entropy weighted histogram equalization which involves the enhancement of both local details and foreground-background contrast. First of all, a novel local entropy weighted histogram depicting the distribution of detail information is calculated based on a modified hyperbolic tangent function. Then, the histogram is divided into two parts via a threshold maximizing the inter-class variance in order to improve the contrasts of foreground and background, respectively. To avoid over-enhancement and noise amplification, double plateau thresholds of the presented histogram are formulated by means of a particle swarm optimization algorithm. Lastly, each sub-image is equalized independently according to the constrained sub-local entropy weighted histogram. Comparative experiments implemented on real infrared images prove that our algorithm outperforms other state-of-the-art methods in terms of both visual and quantitative evaluations.
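The double-plateau constraint itself can be shown in a few lines: the histogram is clipped between a lower and an upper plateau before the equalization mapping is built, which limits both over-enhancement of large uniform backgrounds and the loss of sparse detail bins. In this sketch the thresholds are fixed by hand and the entropy weighting, hyperbolic-tangent modification, foreground/background split, and PSO threshold search from the paper are omitted.

```python
import numpy as np

def double_plateau_equalize(img, t_low, t_up, n_bins=256):
    """Clip the histogram between a lower and an upper plateau, then equalize."""
    hist, _ = np.histogram(img.ravel(), bins=n_bins, range=(0, n_bins))
    hist = np.where(hist > 0, np.clip(hist, t_low, t_up), 0)   # keep empty bins empty
    cdf = np.cumsum(hist).astype(float)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])                  # normalize to [0, 1]
    lut = np.round(cdf * (n_bins - 1)).astype(np.uint8)
    return lut[img]

# Synthetic 8-bit "infrared" frame: dim background with a small warm target.
rng = np.random.default_rng(6)
img = rng.normal(60, 5, size=(128, 128)).clip(0, 255).astype(np.uint8)
img[50:60, 50:60] = 180

enhanced = double_plateau_equalize(img, t_low=5, t_up=400)
print(img.min(), img.max(), "->", enhanced.min(), enhanced.max())
```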
Stability analysis of Caisson Cofferdam Based on Strength Reduction Method
NASA Astrophysics Data System (ADS)
Xu, B. B.; Zhang, N. S.
2018-05-01
The working mechanism of the caisson cofferdam relies on the self-weight of the structure and the internal filling to ensure its sliding and overturning stability. Using the strength reduction method, the safety factor of the caisson cofferdam can be obtained. The potential slip surface can be searched for automatically without constraining the range of the arc center. According to the results, the slip surface passes through the bottom of the caisson. Based on the judgement criterion of the strength reduction method, the final safety factor is about 1.65.
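The strength reduction idea is to divide the shear strength parameters (cohesion c and tan φ) by a trial factor F and find the largest F for which the model remains stable. The toy sketch below replaces the finite-element analysis of the cofferdam with a closed-form infinite-slope factor of safety so the search loop stays runnable; the soil parameters and slope geometry are purely illustrative.

```python
import math

# Toy stand-in for the numerical model: infinite-slope factor of safety (assumption).
def factor_of_safety(c, phi_deg, gamma=18.0, h=5.0, beta_deg=30.0):
    beta, phi = math.radians(beta_deg), math.radians(phi_deg)
    return (c + gamma * h * math.cos(beta)**2 * math.tan(phi)) / (
        gamma * h * math.sin(beta) * math.cos(beta))

def is_stable(c, phi_deg, F):
    """Reduce c and tan(phi) by F and check whether the reduced system still holds."""
    c_red = c / F
    phi_red_deg = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / F))
    return factor_of_safety(c_red, phi_red_deg) >= 1.0

def strength_reduction(c, phi_deg, f_lo=0.5, f_hi=5.0, tol=1e-4):
    # Bisection: the safety factor is the largest F for which the model is stable.
    while f_hi - f_lo > tol:
        f_mid = 0.5 * (f_lo + f_hi)
        if is_stable(c, phi_deg, f_mid):
            f_lo = f_mid
        else:
            f_hi = f_mid
    return 0.5 * (f_lo + f_hi)

c, phi = 10.0, 25.0                          # kPa, degrees (illustrative values)
print("strength-reduction F:", round(strength_reduction(c, phi), 3))
print("analytic FoS        :", round(factor_of_safety(c, phi), 3))
```

For this simple model the two printed numbers coincide, which is a useful sanity check on the reduction loop; in a finite-element setting the "is_stable" test would instead be non-convergence or excessive displacement of the solution.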
NASA Technical Reports Server (NTRS)
Riehl, John P.; Sjauw, Waldy K.
2004-01-01
Trajectory, mission, and vehicle engineers concern themselves with finding the best way for an object to get from one place to another. These engineers rely upon special software to assist them in this. For a number of years, many engineers have used the OTIS program for this assistance. With OTIS, an engineer can fully optimize trajectories for airplanes, launch vehicles like the space shuttle, interplanetary spacecraft, and orbital transfer vehicles. OTIS provides four modes of operation, with each mode providing successively stronger optimization capability. The most powerful mode uses a mathematical method called implicit integration to solve what engineers and mathematicians call the optimal control problem. OTIS 3.2, which was developed at the NASA Glenn Research Center, is the latest release of this industry workhorse and features new capabilities for parameter optimization and mission design. OTIS stands for Optimal Control by Implicit Simulation, and it is implicit integration that makes OTIS so powerful at solving trajectory optimization problems. Why is this so important? The optimization process not only determines how to get from point A to point B, but it can also determine how to do this with the least amount of propellant, with the lightest starting weight, or in the fastest time possible while avoiding certain obstacles along the way. There are numerous conditions that engineers can use to define optimal, or best. OTIS provides a framework for defining the starting and ending points of the trajectory (point A and point B), the constraints on the trajectory (requirements like "avoid these regions where obstacles occur"), and what is being optimized (e.g., minimize propellant). The implicit integration method can find solutions to very complicated problems when there is not a lot of information available about what the optimal trajectory might be. The method was first developed for solving two-point boundary value problems and was adapted for use in OTIS. Implicit integration usually allows OTIS to find solutions to problems much faster than programs that use explicit integration and parametric methods. Consequently, OTIS is best suited to solving very complicated and highly constrained problems.
Makanza, R; Zaman-Allah, M; Cairns, J E; Eyre, J; Burgueño, J; Pacheco, Ángela; Diepenbrock, C; Magorokosho, C; Tarekegne, A; Olsen, M; Prasanna, B M
2018-01-01
Grain yield, ear, and kernel attributes can help in understanding the performance of the maize plant under different environmental conditions and can be used in the variety development process to address farmers' preferences. These parameters are, however, still laborious and expensive to measure. A low-cost ear digital imaging method was developed that provides estimates of ear and kernel attributes, i.e., ear number and size, kernel number and size, as well as kernel weight, from photos of ears harvested from field trial plots. The image processing method uses a script that runs in batch mode on ImageJ, an open-source software package. Kernel weight was estimated using the total kernel number, derived from the number of kernels visible on the image, and the average kernel size. The data showed good agreement in terms of accuracy and precision between ground-truth measurements and data generated through image processing. Broad-sense heritability of the estimated parameters was in the range of, or higher than, that for measured grain weight. Limitations of the method for kernel weight estimation are discussed. The method developed in this work provides an opportunity to significantly reduce the cost of selection in the breeding process, especially for resource-constrained crop improvement programs, and can be used to learn more about the genetic bases of grain yield determinants.
NASA Astrophysics Data System (ADS)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
A controller that uses PID parameters requires a good tuning method in order to improve the control system performance. PID tuning methods are divided into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods. Previously, researchers have integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols methods. They were implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. Furthermore, the physical experiment results also show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in the hydraulic positioning system.
ERIC Educational Resources Information Center
Doherty, Alison J.; Jones, Stephanie P.; Chauhan, Umesh; Gibson, Josephine M. E.
2018-01-01
Background: Obesity is more prevalent in people with intellectual disabilities and increases the risk of developing serious medical conditions. UK guidance recommends multicomponent weight management interventions (MCIs), tailored for different population groups. Methods: An integrative review utilizing systematic review methodology was conducted…
Yendiki, Anastasia; Panneck, Patricia; Srinivasan, Priti; Stevens, Allison; Zöllei, Lilla; Augustinack, Jean; Wang, Ruopeng; Salat, David; Ehrlich, Stefan; Behrens, Tim; Jbabdi, Saad; Gollub, Randy; Fischl, Bruce
2011-01-01
We have developed a method for automated probabilistic reconstruction of a set of major white-matter pathways from diffusion-weighted MR images. Our method is called TRACULA (TRActs Constrained by UnderLying Anatomy) and utilizes prior information on the anatomy of the pathways from a set of training subjects. By incorporating this prior knowledge in the reconstruction procedure, our method obviates the need for manual interaction with the tract solutions at a later stage and thus facilitates the application of tractography to large studies. In this paper we illustrate the application of the method on data from a schizophrenia study and investigate whether the inclusion of both patients and healthy subjects in the training set affects our ability to reconstruct the pathways reliably. We show that, since our method does not constrain the exact spatial location or shape of the pathways but only their trajectory relative to the surrounding anatomical structures, a set of healthy training subjects can be used to reconstruct the pathways accurately in patients as well as in controls. PMID:22016733
Chen, Zhi; Yuan, Yuan; Zhang, Shu-Shen; Chen, Yu; Yang, Feng-Lin
2013-01-01
Critical environmental and human health concerns are associated with the rapidly growing fields of nanotechnology and manufactured nanomaterials (MNMs). The main risk arises from occupational exposure via chronic inhalation of nanoparticles. This research presents a chance-constrained nonlinear programming (CCNLP) optimization approach, which is developed to maximize nanomaterial production and minimize the risks of workplace exposure to MNMs. The CCNLP method integrates nonlinear programming (NLP) and chance-constrained programming (CCP), and handles uncertainties associated with both the nanomaterial production and workplace exposure control. The CCNLP method was examined through a single-walled carbon nanotube (SWNT) manufacturing process. The study results provide optimal production strategies and alternatives. They reveal that a high control measure guarantees that environmental health and safety (EHS) regulations are met, while a lower control level leads to increased risk of violating EHS regulations. The CCNLP optimization approach is a decision support tool for optimizing the growing manufacturing of MNMs under workplace safety constraints and uncertainties. PMID:23531490
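The mechanics of a chance constraint can be sketched with a single normally distributed exposure constraint: Pr(exposure ≤ limit) ≥ α becomes the deterministic condition mean + z_α·std ≤ limit, which a standard nonlinear solver can handle. The production and exposure models, limits, and coefficients below are placeholders, not the paper's SWNT case study.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

alpha = 0.95                      # required probability of meeting the exposure limit
limit = 50.0                      # workplace exposure limit (placeholder units)
z = norm.ppf(alpha)

# x[0]: production rate, x[1]: ventilation/control effort (both placeholders).
def neg_production(x):
    return -(10.0 * x[0] - 2.0 * x[1])          # maximize production minus control cost

def exposure_mean(x):
    return 8.0 * x[0] / (1.0 + x[1])            # mean exposure grows with production

def exposure_std(x):
    return 0.5 * exposure_mean(x)               # uncertainty proportional to the mean

# Chance constraint Pr(exposure <= limit) >= alpha  =>  mean + z*std <= limit.
cons = [{"type": "ineq",
         "fun": lambda x: limit - (exposure_mean(x) + z * exposure_std(x))}]
bounds = [(0.0, 20.0), (0.0, 10.0)]

res = minimize(neg_production, x0=[1.0, 1.0], bounds=bounds, constraints=cons)
print("optimal rate/control:", np.round(res.x, 3), "production:", -round(res.fun, 2))
```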
Pawlowski, Marcin Piotr; Jara, Antonio; Ogorzalek, Maciej
2015-01-01
Entropy in computer security is associated with the unpredictability of a source of randomness. The random source with high entropy tends to achieve a uniform distribution of random values. Random number generators are one of the most important building blocks of cryptosystems. In constrained devices of the Internet of Things ecosystem, high entropy random number generators are hard to achieve due to hardware limitations. For the purpose of the random number generation in constrained devices, this work proposes a solution based on the least-significant bits concatenation entropy harvesting method. As a potential source of entropy, on-board integrated sensors (i.e., temperature, humidity and two different light sensors) have been analyzed. Additionally, the costs (i.e., time and memory consumption) of the presented approach have been measured. The results obtained from the proposed method with statistical fine tuning achieved a Shannon entropy of around 7.9 bits per byte of data for temperature and humidity sensors. The results showed that sensor-based random number generators are a valuable source of entropy with very small RAM and Flash memory requirements for constrained devices of the Internet of Things. PMID:26506357
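The least-significant-bits concatenation idea can be sketched as follows, with simulated 12-bit readings standing in for the on-board temperature, humidity, and light sensors, and a simple Shannon entropy estimate on the harvested bytes. Real deployments would read the hardware sensors and apply the paper's statistical fine tuning.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(7)

def read_sensor(n):
    """Simulated 12-bit sensor readings (temperature/humidity/light stand-ins)."""
    drift = np.cumsum(rng.normal(0, 0.2, size=n))        # slow physical drift
    return (2048 + 50 * np.sin(np.arange(n) / 50) + drift
            + rng.normal(0, 2.0, size=n)).astype(np.int64) & 0xFFF

def harvest_bytes(n_bytes, lsb_per_sample=2):
    """Concatenate the least-significant bits of consecutive readings into bytes."""
    samples_per_byte = 8 // lsb_per_sample
    readings = read_sensor(n_bytes * samples_per_byte)
    out = bytearray()
    for i in range(n_bytes):
        b = 0
        for r in readings[i * samples_per_byte:(i + 1) * samples_per_byte]:
            b = (b << lsb_per_sample) | (int(r) & ((1 << lsb_per_sample) - 1))
        out.append(b)
    return bytes(out)

def shannon_entropy(data):
    counts = Counter(data)
    p = np.array(list(counts.values()), dtype=float) / len(data)
    return -np.sum(p * np.log2(p))

random_bytes = harvest_bytes(4096)
print("Shannon entropy: %.3f bits/byte" % shannon_entropy(random_bytes))
```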
A floor-map-aided WiFi/pseudo-odometry integration algorithm for an indoor positioning system.
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-03-24
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The "go and back" phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The "cross-wall" problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning.
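For reference, the fingerprinting step on its own reduces to weighted K nearest neighbors in signal space. The sketch below uses a synthetic radio map and a log-distance path-loss model; the floor-map topology constraint, P-O/PDR fusion, adaptive EKF, and particle filter described in the paper are beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic radio map: 100 reference points on a 20 m x 50 m floor, 6 access points.
rp_xy = np.column_stack([rng.uniform(0, 20, 100), rng.uniform(0, 50, 100)])
ap_xy = np.array([[0, 0], [20, 0], [0, 50], [20, 50], [10, 25], [5, 40]])

def rssi(points):
    d = np.linalg.norm(points[:, None, :] - ap_xy[None, :, :], axis=2)
    return -40 - 20 * np.log10(d + 1.0)          # simple log-distance path-loss model

rp_rssi = rssi(rp_xy) + rng.normal(0, 2, size=(100, 6))   # surveyed fingerprints

def wknn_position(obs, k=4, eps=1e-6):
    """Weighted K nearest neighbors in signal space, inverse-distance weighting."""
    d = np.linalg.norm(rp_rssi - obs, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)
    return (w[:, None] * rp_xy[idx]).sum(axis=0) / w.sum()

true_xy = np.array([[12.0, 30.0]])
obs = rssi(true_xy)[0] + rng.normal(0, 2, size=6)
est = wknn_position(obs)
print("true:", true_xy[0], "estimated:", np.round(est, 2))
```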
NASA Astrophysics Data System (ADS)
Goh, Shu Ting
Spacecraft formation flying navigation continues to receive a great deal of interest. The research presented in this dissertation focuses on developing methods for estimating spacecraft absolute and relative positions, assuming measurements of only relative positions using wireless sensors. The application of the extended Kalman filter to the spacecraft formation navigation problem at times results in high estimation errors and unstable state estimation, due to the high nonlinearities in the system dynamic model. Several approaches are attempted in this dissertation aiming at increasing the estimation stability and improving the estimation accuracy. A differential geometric filter is implemented for spacecraft position estimation. The differential geometric filter avoids the linearization step (which is always carried out in the extended Kalman filter) through a mathematical transformation that converts the nonlinear system into a linear system. A linear estimator is designed in the linear domain, and then transformed back to the physical domain. This approach demonstrated better estimation stability for spacecraft formation position estimation, as detailed in this dissertation. The constrained Kalman filter is also implemented for spacecraft formation flying absolute position estimation. The orbital motion of a spacecraft is characterized by two range extrema (perigee and apogee); at each extremum, the rate of change of the spacecraft's range vanishes. This motion constraint can be used to improve the position estimation accuracy. The application of the constrained Kalman filter at only two points in the orbit causes filter instability. Two variables are introduced into the constrained Kalman filter to maintain the stability and improve the estimation accuracy. An extended Kalman filter is implemented as a benchmark for comparison with the constrained Kalman filter. Simulation results show that the constrained Kalman filter provides better estimation accuracy than the extended Kalman filter. A Weighted Measurement Fusion Kalman Filter (WMFKF) is proposed in this dissertation. In wireless localizing sensors, the measurement error is proportional to the distance the signal travels and to the sensor noise. In the proposed WMFKF, the signal traveling time delay is not modeled; however, each measurement is weighted based on the measured signal travel distance. The obtained estimation performance is compared to the standard Kalman filter in two scenarios. The first scenario assumes using a wireless local positioning system (WLPS) in a GPS-denied environment. The second scenario assumes the availability of both WLPS and GPS measurements. The simulation results show that the WMFKF has similar accuracy performance to the standard Kalman Filter (KF) in the GPS-denied environment. However, the WMFKF maintains the position estimation error within its expected error boundary when the WLPS detection range limit is above 30 km. In addition, the WMFKF has better accuracy and stability performance when GPS is available. The computational cost analysis shows that the WMFKF has a lower computational cost than the standard KF and a higher ellipsoid error probable percentage than the standard Measurement Fusion method. A method to determine the relative attitudes between three spacecraft is also developed. The method requires four direction measurements between the three spacecraft.
The simulation results and covariance analysis show that the method's error falls within a three sigma boundary without exhibiting any singularity issues. A study of the accuracy of the proposed method with respect to the shape of the spacecraft formation is also presented.
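The core idea of the WMFKF described above, weighting each wireless range measurement by the distance the signal travels, can be illustrated by inflating the per-measurement noise variance with the measured range before a standard gain computation. The sketch below is a simplified 2-D stand-in, not the dissertation's formulation; the noise model (sigma0 + k * range) and all numbers are assumptions.

```python
# Hedged sketch: weight wireless range measurements by travel distance by
# inflating each measurement's noise variance with its measured range before
# a standard Kalman/least-squares update. The noise model is an assumption.
import numpy as np

def weighted_position_update(x_prior, P_prior, anchors, ranges,
                             sigma0=0.5, k=0.01):
    """One Gauss-Newton style update of a 2-D position from range measurements,
    with per-measurement weights based on the measured range."""
    x = x_prior.copy()
    H, z_res, R_diag = [], [], []
    for a, r in zip(anchors, ranges):
        d = np.linalg.norm(x - a)
        H.append((x - a) / max(d, 1e-9))          # Jacobian row of ||x - a||
        z_res.append(r - d)                       # measurement residual
        R_diag.append((sigma0 + k * r) ** 2)      # range-dependent variance
    H, z_res = np.array(H), np.array(z_res)
    R = np.diag(R_diag)
    S = H @ P_prior @ H.T + R
    K = P_prior @ H.T @ np.linalg.inv(S)          # weighted gain
    x_post = x + K @ z_res
    P_post = (np.eye(2) - K @ H) @ P_prior
    return x_post, P_post

if __name__ == "__main__":
    anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
    truth = np.array([30.0, 40.0])
    ranges = np.linalg.norm(anchors - truth, axis=1) + np.random.randn(3)
    x, P = weighted_position_update(np.array([20.0, 50.0]), np.eye(2) * 100.0,
                                    anchors, ranges)
    print("estimate:", x)
```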
A method of evaluating crown fuels in forest stands.
Rodney W. Sando; Charles H. Wick
1972-01-01
A method of describing the crown fuels in a forest fuel complex based on crown weight and crown volume was developed. A computer program is an integral part of the method. Crown weight data are presented in graphical form and are separated into hardwood and coniferous fuels. The fuel complex is described using total crown weight per acre, mean height to the base of...
Lu, Hongwei; Li, Jing; Ren, Lixia; Chen, Yizhong
2018-05-01
Groundwater remediation is a complicated system with time-consuming and costly challenges, which should be carefully controlled by appropriate groundwater management. This study develops an integrated optimization method for groundwater remediation management regarding cost, contamination distribution and health risk under multiple uncertainties. The integration of health risk into groundwater remediation optimization management is capable of not only adequately considering the influence of health risk on optimal remediation strategies, but also simultaneously completing remediation optimization design and risk assessment. A fuzzy chance-constrained programming approach is presented to handle multiple uncertain properties in the process of health risk assessment. The capabilities and effectiveness of the developed method are illustrated through an application of a naphthalene contaminated case in Anhui, China. Results indicate that (a) the pump-and-treat remediation system leads to a low naphthalene contamination but high remediation cost for a short-time remediation, and natural attenuation significantly affects naphthalene removal from groundwater for a long-time remediation; (b) the weighting coefficients have significant influences on the remediation cost and the performances both for naphthalene concentrations and health risks; (c) an increased level of slope factor (sf) for naphthalene corresponds to more optimal strategies characterized by higher environmental benefits and lower economic sacrifice. The developed method could be simultaneously beneficial for public health and environmental protection. Decision makers could obtain the most appropriate remediation strategies according to their specific requirements with high flexibility of economic, environmental, and risk concerns. Copyright © 2018 Elsevier Ltd. All rights reserved.
Wheeler, J; Mariani, E; Piazolo, S; Prior, D J; Trimby, P; Drury, M R
2009-03-01
The Weighted Burgers Vector (WBV) is defined here as the sum, over all types of dislocations, of [(density of intersections of dislocation lines with a map) x (Burgers vector)]. Here we show that it can be calculated, for any crystal system, solely from orientation gradients in a map view, unlike the full dislocation density tensor, which requires gradients in the third dimension. No assumption is made about gradients in the third dimension and they may be non-zero. The only assumption involved is that elastic strains are small so the lattice distortion is entirely due to dislocations. Orientation gradients can be estimated from gridded orientation measurements obtained by EBSD mapping, so the WBV can be calculated as a vector field on an EBSD map. The magnitude of the WBV gives a lower bound on the magnitude of the dislocation density tensor when that magnitude is defined in a coordinate invariant way. The direction of the WBV can constrain the types of Burgers vectors of geometrically necessary dislocations present in the microstructure, most clearly when it is broken down in terms of lattice vectors. The WBV has three advantages over other measures of local lattice distortion: it is a vector and hence carries more information than a scalar quantity, it has an explicit mathematical link to the individual Burgers vectors of dislocations and, since it is derived via tensor calculus, it is not dependent on the map coordinate system. If a sub-grain wall is included in the WBV calculation, the magnitude of the WBV becomes dependent on the step size but its direction still carries information on the Burgers vectors in the wall. The net Burgers vector content of dislocations intersecting an area of a map can be simply calculated by an integration round the edge of that area, a method which is fast and complements point-by-point WBV calculations.
Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation.
Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal
2017-04-20
This work deals with robot-sensor network cooperation where sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques consider beacons as passive devices disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It is capable of exploiting robot-beacon cooperation in order to improve SLAM accuracy while meeting a given resource consumption bound expressed as the maximum number of measurements that are integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, and a distributed information-driven measurement allocation tool that dynamically selects the measurements that are integrated in SLAM, balancing uncertainty improvement and resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, resulting in significant estimation accuracies, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results obtained show its performance and robustness and evidence its advantages over existing methods.
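The measurement allocation idea described above can be illustrated by a simple greedy selection: among candidate robot-beacon and inter-beacon range measurements, pick at most a budgeted number per iteration, ranked by a scalar information-gain proxy. The sketch below is a centralized, simplified stand-in for the paper's distributed allocation tool; the covariance, Jacobian rows and budget are made-up values.

```python
# Hedged sketch of information-driven measurement allocation: greedily select
# at most `budget` candidate range measurements per SLAM iteration, ranked by
# the expected reduction in trace of the state covariance. A simplified,
# centralized stand-in for the distributed scheme described above.
import numpy as np

def expected_gain(P, H, sigma2):
    """Decrease in trace(P) after fusing one scalar measurement with
    Jacobian row H and noise variance sigma2."""
    S = float(H @ P @ H.T) + sigma2
    K = (P @ H.T) / S
    P_post = P - np.outer(K, H @ P)
    return np.trace(P) - np.trace(P_post)

def select_measurements(P, candidates, budget):
    """candidates: list of (name, H_row, sigma2). Returns chosen names,
    updating a copy of P greedily so redundant measurements score lower."""
    P = P.copy()
    chosen = []
    for _ in range(budget):
        scored = [(expected_gain(P, H, s2), name, H, s2)
                  for name, H, s2 in candidates if name not in chosen]
        if not scored:
            break
        gain, name, H, s2 = max(scored, key=lambda t: t[0])
        chosen.append(name)
        S = float(H @ P @ H.T) + s2
        K = (P @ H.T) / S
        P = P - np.outer(K, H @ P)                # fold the chosen measurement in
    return chosen

if __name__ == "__main__":
    P = np.diag([4.0, 4.0, 1.0, 9.0])             # toy joint covariance
    cands = [("robot-b1", np.array([1.0, 0, -1.0, 0]), 0.25),
             ("robot-b2", np.array([0, 1.0, 0, -1.0]), 0.25),
             ("b1-b2",    np.array([0, 0, 1.0, -1.0]), 0.10)]
    print(select_measurements(P, cands, budget=2))
```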
Salient object detection based on discriminative boundary and multiple cues integration
NASA Astrophysics Data System (ADS)
Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei
2016-01-01
In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially selected as background, misjudgment may happen when the object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed through discriminating each boundary via Hausdorff distance. Second, the background-only weighted contrast is improved by fore-background weighted contrast, which is optimized through a weight-adjustable optimization framework. Then, to objectively estimate the quality of a saliency map, a simple but effective metric called spatial distribution of saliency map and mean saliency in covered window ratio (MSR) is designed. Finally, in order to further promote the detection result using MSR as the weight, we propose a saliency fusion framework to integrate three other cues (uniqueness, distribution, and coherence) from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that our wCODB performs favorably against most of the boundary-based methods, and the integrated result outperforms all state-of-the-art methods.
Mining method selection by integrated AHP and PROMETHEE method.
Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana
2012-03-01
Selecting the best mining method among many alternatives is a multicriteria decision making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The related problem includes five possible mining methods and eleven criteria to evaluate them. Criteria are accurately chosen in order to cover the most important parameters that impact on the mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. The AHP is used to analyze the structure of the mining method selection problem and to determine weights of the criteria, and PROMETHEE method is used to obtain the final ranking and to make a sensitivity analysis by changing the weights. The results have shown that the proposed integrated method can be successfully used in solving mining engineering problems.
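In the AHP step described above, criteria weights are typically taken from the principal eigenvector of a pairwise comparison matrix, with a consistency check. The sketch below shows that standard step on a made-up 3x3 matrix; it is not the eleven-criterion matrix of the study.

```python
# Hedged sketch of the AHP weighting step: criteria weights from the
# normalized principal eigenvector of a pairwise comparison matrix, plus a
# consistency ratio. The 3x3 matrix is illustrative, not the study's data.
import numpy as np

def ahp_weights(A):
    """Return (weights, consistency_ratio) for a pairwise comparison matrix A."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    lam_max = vals[k].real
    ci = (lam_max - n) / (n - 1)                       # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 11: 1.51}.get(n, 1.49)  # partial Saaty RI table
    return w, ci / ri

if __name__ == "__main__":
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])
    w, cr = ahp_weights(A)
    print("weights:", np.round(w, 3), "CR: %.3f" % cr)
```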
Analytical Fuselage and Wing Weight Estimation of Transport Aircraft
NASA Technical Reports Server (NTRS)
Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.
1996-01-01
A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. Integration of the resulting computer program, PDCYL, has been made into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and corresponding actual weights were determined.
Constrained Least Squares Estimators of Oblique Common Factors.
ERIC Educational Resources Information Center
McDonald, Roderick P.
1981-01-01
An expression is given for weighted least squares estimators of oblique common factors of factor analyses, constrained to have the same covariance matrix as the factors they estimate. A proof of the uniqueness of the solution is given. (Author/JKS)
Temporally-Constrained Group Sparse Learning for Longitudinal Data Analysis in Alzheimer’s Disease
Jie, Biao; Liu, Mingxia; Liu, Jun
2016-01-01
Sparse learning has been widely investigated for analysis of brain images to assist the diagnosis of Alzheimer’s disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). However, most existing sparse learning-based studies only adopt cross-sectional analysis methods, where the sparse model is learned using data from a single time-point. Actually, multiple time-points of data are often available in brain imaging applications, which can be used in some longitudinal analysis methods to better uncover the disease progression patterns. Accordingly, in this paper we propose a novel temporally-constrained group sparse learning method aiming for longitudinal analysis with multiple time-points of data. Specifically, we learn a sparse linear regression model by using the imaging data from multiple time-points, where a group regularization term is first employed to group the weights for the same brain region across different time-points together. Furthermore, to reflect the smooth changes between data derived from adjacent time-points, we incorporate two smoothness regularization terms into the objective function, i.e., one fused smoothness term which requires that the differences between two successive weight vectors from adjacent time-points should be small, and another output smoothness term which requires the differences between outputs of two successive models from adjacent time-points should also be small. We develop an efficient optimization algorithm to solve the proposed objective function. Experimental results on ADNI database demonstrate that, compared with conventional sparse learning-based methods, our proposed method can achieve improved regression performance and also help in discovering disease-related biomarkers. PMID:27093313
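The objective described above combines a per-time-point least-squares fit, a group term over each region's weights across time-points, and two smoothness terms on adjacent weights and outputs. The sketch below writes one plausible form of that loss in numpy; the regularization constants, shapes and exact norms are assumptions, not the paper's specification or its optimization algorithm.

```python
# Hedged sketch of a temporally-constrained group-sparse objective of the kind
# described above: least-squares fit per time-point, an L2,1 group term tying
# each feature's weights across time, and fused/output smoothness penalties.
# Regularization constants and exact norms are illustrative assumptions.
import numpy as np

def objective(W, X_list, y_list, lam_group=0.1, lam_w=0.1, lam_out=0.1):
    """W: (d, T) weight matrix, column t is the model for time-point t.
    X_list[t]: (n, d) imaging features; y_list[t]: (n,) clinical scores."""
    T = W.shape[1]
    fit = sum(np.sum((X_list[t] @ W[:, t] - y_list[t]) ** 2) for t in range(T))
    group = np.sum(np.linalg.norm(W, axis=1))            # sum_j ||W_j,:||_2
    w_smooth = sum(np.sum((W[:, t + 1] - W[:, t]) ** 2) for t in range(T - 1))
    out_smooth = sum(np.sum((X_list[t + 1] @ W[:, t + 1]
                             - X_list[t] @ W[:, t]) ** 2) for t in range(T - 1))
    return fit + lam_group * group + lam_w * w_smooth + lam_out * out_smooth

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n, T = 20, 50, 3
    X_list = [rng.standard_normal((n, d)) for _ in range(T)]
    y_list = [rng.standard_normal(n) for _ in range(T)]
    W = rng.standard_normal((d, T)) * 0.1
    print("objective value:", objective(W, X_list, y_list))
```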
Finite-horizon control-constrained nonlinear optimal control using single network adaptive critics.
Heydari, Ali; Balakrishnan, Sivasubramanya N
2013-01-01
To synthesize fixed-final-time control-constrained optimal controllers for discrete-time nonlinear control-affine systems, a single neural network (NN)-based controller called the Finite-horizon Single Network Adaptive Critic is developed in this paper. Inputs to the NN are the current system states and the time-to-go, and the network outputs are the costates that are used to compute optimal feedback control. Control constraints are handled through a nonquadratic cost function. Convergence proofs of: 1) the reinforcement learning-based training method to the optimal solution; 2) the training error; and 3) the network weights are provided. The resulting controller is shown to solve the associated time-varying Hamilton-Jacobi-Bellman equation and provide the fixed-final-time optimal solution. Performance of the new synthesis technique is demonstrated through different examples including an attitude control problem wherein a rigid spacecraft performs a finite-time attitude maneuver subject to control bounds. The new formulation has great potential for implementation since it consists of only one NN with single set of weights and it provides comprehensive feedback solutions online, though it is trained offline.
Interfaces and Integration of Medical Image Analysis Frameworks: Challenges and Opportunities.
Covington, Kelsie; McCreedy, Evan S; Chen, Min; Carass, Aaron; Aucoin, Nicole; Landman, Bennett A
2010-05-25
Clinical research with medical imaging typically involves large-scale data analysis with interdependent software toolsets tied together in a processing workflow. Numerous, complementary platforms are available, but these are not readily compatible in terms of workflows or data formats. Both image scientists and clinical investigators could benefit from using the framework that is the most natural fit to the specific problem at hand, but pragmatic choices often dictate that a compromise platform is used for collaboration. Manual merging of platforms through carefully tuned scripts has been effective, but it is exceptionally time consuming and is not feasible for large-scale integration efforts. Hence, the benefits of innovation are constrained by platform dependence. Removing this constraint via integration of algorithms from one framework into another is the focus of this work. We propose and demonstrate a light-weight interface system to expose parameters across platforms and provide seamless integration. In this initial effort, we focus on four platforms: Medical Image Analysis and Visualization (MIPAV), the Java Image Science Toolkit (JIST), command line tools, and 3D Slicer. We explore three case studies: (1) providing a system for MIPAV to expose internal algorithms and utilize these algorithms within JIST, (2) exposing JIST modules through a self-documenting command line interface for inclusion in scripting environments, and (3) detecting and using JIST modules in 3D Slicer. We review the challenges and opportunities for light-weight software integration both within a development language (e.g., Java in MIPAV and JIST) and across languages (e.g., C/C++ in 3D Slicer and shell in command line tools).
A Spatially Constrained Multi-autoencoder Approach for Multivariate Geochemical Anomaly Recognition
NASA Astrophysics Data System (ADS)
Lirong, C.; Qingfeng, G.; Renguang, Z.; Yihui, X.
2017-12-01
Separating and recognizing geochemical anomalies from the geochemical background is one of the key tasks in geochemical exploration. Many methods have been developed, such as calculating the mean ± 2 standard deviations and fractal/multifractal models. In recent years, the deep autoencoder, a deep learning approach, has been used for multivariate geochemical anomaly recognition. While able to deal with the non-normal distributions of geochemical concentrations and the non-linear relationships among them, this self-supervised learning method does not take into account the spatial heterogeneity of the geochemical background or the uncertainty induced by the randomly initialized weights of neurons, leading to ineffective recognition of weak anomalies. In this paper, we introduce a spatially constrained multi-autoencoder (SCMA) approach for multivariate geochemical anomaly recognition, which includes two steps: spatial partitioning and anomaly score computation. The first step divides the study area into multiple sub-regions to segregate the geochemical background, by grouping the geochemical samples through K-means clustering, spatial filtering, and spatial constraining rules. In the second step, for each sub-region, a group of autoencoder neural networks is constructed with an identical structure but different initial weights on the neurons. Each autoencoder is trained using the geochemical samples within the corresponding sub-region to learn the sub-regional geochemical background. The best autoencoder of a group is chosen as the final model for the corresponding sub-region. The anomaly score at each location can then be calculated as the Euclidean distance between the observed and reconstructed concentrations of geochemical elements. Experiments using the geochemical data and Fe deposits in the southwestern Fujian province of China showed that our SCMA approach greatly improved the recognition of weak anomalies, achieving an AUC of 0.89, compared with an AUC of 0.77 for a single deep autoencoder approach.
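The two-step structure described above can be illustrated with a compact sketch: K-means partitions the samples spatially, then each partition gets its own small autoencoder, and the anomaly score is the Euclidean reconstruction error. scikit-learn's MLPRegressor is used here as a stand-in autoencoder; the multi-autoencoder selection, spatial filtering and spatial constraining rules of the actual method are omitted, and all sizes are assumptions.

```python
# Hedged sketch of the two-step structure: (1) K-means spatial partitioning,
# (2) per-partition autoencoder, anomaly score = Euclidean reconstruction error.
# MLPRegressor is a stand-in autoencoder; the method's model selection and
# spatial constraining rules are omitted.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def scma_like_scores(xy, Z, n_regions=3, hidden=4, seed=0):
    """xy: (n, 2) sample coordinates; Z: (n, m) element concentrations."""
    regions = KMeans(n_clusters=n_regions, n_init=10,
                     random_state=seed).fit_predict(xy)
    scores = np.zeros(len(Z))
    for r in range(n_regions):
        idx = np.where(regions == r)[0]
        scaler = StandardScaler().fit(Z[idx])
        Zr = scaler.transform(Z[idx])
        ae = MLPRegressor(hidden_layer_sizes=(hidden,), max_iter=2000,
                          random_state=seed).fit(Zr, Zr)   # reconstruct inputs
        recon = ae.predict(Zr)
        scores[idx] = np.linalg.norm(Zr - recon, axis=1)   # anomaly score
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xy = rng.uniform(0, 100, size=(300, 2))
    Z = rng.lognormal(mean=1.0, sigma=0.3, size=(300, 5))
    Z[:5] *= 3.0                                           # inject toy anomalies
    s = scma_like_scores(xy, Z)
    print("top-5 anomaly indices:", np.argsort(s)[-5:])
```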
Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.
NASA Astrophysics Data System (ADS)
Zhou, S.; Huang, Q.
2017-12-01
Conventional magnetotelluric (MT) inversion methods cannot show the distribution of underground resistivity with clear boundaries, even when the subsurface contains obviously distinct blocks. To address this problem, we develop a Bayesian framework to invert 2D MT sharp-boundary data, using the boundary locations and the interior resistivities as the random variables. First, we use other MT inversion results, such as ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert them to traditional staggered-grid parameters, which are used in the finite-difference forward computation. Finally, we obtain the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov chain Monte Carlo (MCMC) sampling from the prior distribution. The depth, resistivity and their uncertainties can be estimated, and the approach also supports sensitivity analysis. We applied the method to a synthetic case composed of two large anomalous blocks in a uniform background. When we impose a boundary-smoothness constraint and a near-true model-weight constraint that mimic joint or constrained inversion, the model yields a more precise and focused depth distribution. An inversion without constraints can still recover the boundary, though not as well. Both inversions estimate the resistivity well. The constrained result has a lower root mean square misfit than the ModEM inversion result. The data sensitivity obtained via the PPD shows that resistivity is the most sensitive parameter, the center depth comes second, and the side boundaries are the least sensitive.
Brain Network Analysis: Separating Cost from Topology Using Cost-Integration
Ginestet, Cedric E.; Nichols, Thomas E.; Bullmore, Ed T.; Simmons, Andrew
2011-01-01
A statistically principled way of conducting brain network analysis is still lacking. Comparison of different populations of brain networks is hard because topology is inherently dependent on wiring cost, where cost is defined as the number of edges in an unweighted graph. In this paper, we evaluate the benefits and limitations associated with using cost-integrated topological metrics. Our focus is on comparing populations of weighted undirected graphs that differ in mean association weight, using global efficiency. Our key result shows that integrating over cost is equivalent to controlling for any monotonic transformation of the weight set of a weighted graph. That is, when integrating over cost, we eliminate the differences in topology that may be due to a monotonic transformation of the weight set. Our result holds for any unweighted topological measure, and for any choice of distribution over cost levels. Cost-integration is therefore helpful in disentangling differences in cost from differences in topology. By contrast, we show that the use of the weighted version of a topological metric is generally not a valid approach to this problem. Indeed, we prove that, under weak conditions, the use of the weighted version of global efficiency is equivalent to simply comparing weighted costs. Thus, we recommend the reporting of (i) differences in weighted costs and (ii) differences in cost-integrated topological measures with respect to different distributions over the cost domain. We demonstrate the application of these techniques in a re-analysis of an fMRI working memory task. We also provide a Monte Carlo method for approximating cost-integrated topological measures. Finally, we discuss the limitations of integrating topology over cost, which may pose problems when some weights are zero, when multiplicities exist in the ranks of the weights, and when one expects subtle cost-dependent topological differences, which could be masked by cost-integration. PMID:21829437
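The cost-integration idea described above can be sketched directly with networkx: threshold a weighted association matrix to its strongest edges at each cost level (edge density), compute the unweighted global efficiency, and average across costs. The cost grid and the uniform distribution over cost levels are assumptions; the paper allows any distribution.

```python
# Hedged sketch of cost-integration with a uniform distribution over costs:
# keep the k strongest edges at each cost, compute unweighted global
# efficiency, and average. Cost grid and uniform weighting are assumptions.
import numpy as np
import networkx as nx

def cost_integrated_efficiency(W, costs=np.linspace(0.05, 0.5, 10)):
    """W: symmetric (n, n) association-weight matrix with zero diagonal."""
    n = W.shape[0]
    iu = np.triu_indices(n, k=1)
    order = np.argsort(W[iu])[::-1]                 # edges, strongest first
    n_possible = len(order)
    vals = []
    for c in costs:
        k = int(round(c * n_possible))              # number of edges at this cost
        G = nx.Graph()
        G.add_nodes_from(range(n))
        G.add_edges_from((iu[0][j], iu[1][j]) for j in order[:k])
        vals.append(nx.global_efficiency(G))        # unweighted topology metric
    return float(np.mean(vals))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.random((20, 20)); A = (A + A.T) / 2; np.fill_diagonal(A, 0)
    print("cost-integrated global efficiency: %.3f" % cost_integrated_efficiency(A))
```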
NASA Astrophysics Data System (ADS)
Qu, Yegao; Shi, Ruchao; Batra, Romesh C.
2018-02-01
We present a robust sharp-interface immersed boundary method for numerically studying high speed flows of compressible and viscous fluids interacting with arbitrarily shaped either stationary or moving rigid solids. The Navier-Stokes equations are discretized on a rectangular Cartesian grid based on a low-diffusion flux splitting method for inviscid fluxes and conservative high-order central-difference schemes for the viscous components. Discontinuities such as those introduced by shock waves and contact surfaces are captured by using a high-resolution weighted essentially non-oscillatory (WENO) scheme. Ghost cells in the vicinity of the fluid-solid interface are introduced to satisfy boundary conditions on the interface. Values of variables in the ghost cells are found by using a constrained moving least squares method (CMLS) that eliminates numerical instabilities encountered in the conventional MLS formulation. The solution of the fluid flow and the solid motion equations is advanced in time by using the third-order Runge-Kutta and the implicit Newmark integration schemes, respectively. The performance of the proposed method has been assessed by computing results for the following four problems: shock-boundary layer interaction, supersonic viscous flows past a rigid cylinder, moving piston in a shock tube and lifting off from a flat surface of circular, rectangular and elliptic cylinders triggered by shock waves, and comparing computed results with those available in the literature.
Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft
NASA Technical Reports Server (NTRS)
Ardema, Mark D.
1996-01-01
In this report the author describes: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of flight path optimization. A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. Integration of the resulting computer program, PDCYL, has been made into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT bas traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight.
Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.
Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S
2004-01-01
MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along orthogonal directions (r, phi, z), and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three-axis torque-balanced gradient coil for rat imaging was developed based on this method, with efficiencies of 2.13, 2.08, and 4.12 mT·m⁻¹·A⁻¹ along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. This method has also been applied to the design of a gradient coil for the human brain, employing remote current return paths. The benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths. Copyright 2003 Wiley-Liss, Inc.
Point-based warping with optimized weighting factors of displacement vectors
NASA Astrophysics Data System (ADS)
Pielot, Ranier; Scholz, Michael; Obermayer, Klaus; Gundelfinger, Eckart D.; Hess, Andreas
2000-06-01
The accurate comparison of inter-individual 3D brain image datasets requires non-affine transformation techniques (warping) to reduce geometric variations. Constrained by the biological prerequisites, in this study we use a landmark-based warping method with weighted sums of displacement vectors, which is enhanced by an optimization process. Furthermore, we investigate fast automatic procedures for determining landmarks to improve the practicability of 3D warping. This combined approach was tested on 3D autoradiographs of Gerbil brains. The autoradiographs were obtained after injecting a non-metabolized radioactive glucose derivative into the Gerbil, thereby visualizing neuronal activity in the brain. Afterwards, the brain was processed with standard autoradiographical methods. The landmark generator computes corresponding reference points simultaneously within a given number of datasets by Monte Carlo techniques. The warping function is a distance-weighted exponential function with a landmark-specific weighting factor. These weighting factors are optimized by a computational evolution strategy. The warping quality is quantified by several coefficients (correlation coefficient, overlap index, and registration error). The described approach combines a highly suitable procedure to automatically detect landmarks in autoradiographical brain images with an enhanced point-based warping technique that optimizes the local weighting factors. This optimization process significantly improves the similarity between the warped and the target dataset.
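The warp form named above, a weighted sum of displacement vectors with a distance-weighted exponential kernel and per-landmark weighting factors, can be sketched compactly. The normalization and the kernel width below are assumptions, and the evolution-strategy optimization of the weighting factors is omitted.

```python
# Hedged sketch of a landmark-based warp: a weighted sum of displacement
# vectors with a distance-weighted exponential kernel and per-landmark
# weighting factors. Normalization and kernel width are assumptions; the
# evolution-strategy optimization of the factors is not shown.
import numpy as np

def warp_points(points, src_landmarks, dst_landmarks, factors, sigma=10.0):
    """Displace `points` (n, 3) toward the target using landmark displacements.
    factors: (m,) landmark-specific weighting factors."""
    disp = dst_landmarks - src_landmarks                  # (m, 3) displacement vectors
    out = np.empty_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(src_landmarks - p, axis=1)     # distances to landmarks
        w = factors * np.exp(-d / sigma)
        w_sum = w.sum()
        if w_sum > 0:
            out[i] = p + (w[:, None] * disp).sum(axis=0) / w_sum
        else:
            out[i] = p
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.uniform(0, 50, size=(6, 3))
    dst = src + rng.normal(0, 2.0, size=(6, 3))
    pts = rng.uniform(0, 50, size=(4, 3))
    print(warp_points(pts, src, dst, factors=np.ones(6)))
```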
NASA Astrophysics Data System (ADS)
Zoraghi, Nima; Amiri, Maghsoud; Talebi, Golnaz; Zowghi, Mahdi
2013-12-01
This paper presents a fuzzy multi-criteria decision-making (FMCDM) model by integrating both subjective and objective weights for ranking and evaluating the service quality in hotels. The objective method selects weights of criteria through mathematical calculation, while the subjective method uses judgments of decision makers. In this paper, we use a combination of weights obtained by both approaches in evaluating service quality in hotel industries. A real case study that considered ranking five hotels is illustrated. Examples are shown to indicate capabilities of the proposed method.
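One common way to merge subjective and objective criterion weights is a convex combination; the sketch below uses Shannon's entropy method as an example objective weighting, which is an assumption, since the abstract above does not name the specific objective method, and the decision matrix, subjective weights and mixing parameter are all illustrative.

```python
# Hedged sketch of combining subjective and objective criterion weights:
# entropy-method objective weights from a decision matrix, merged with given
# subjective weights by a convex combination. All numbers are illustrative.
import numpy as np

def entropy_weights(X):
    """X: (alternatives, criteria) benefit-type decision matrix, positive."""
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    E = -k * (P * np.log(P)).sum(axis=0)        # entropy per criterion
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()

def combined_weights(w_subjective, w_objective, alpha=0.5):
    w = alpha * np.asarray(w_subjective) + (1 - alpha) * np.asarray(w_objective)
    return w / w.sum()

if __name__ == "__main__":
    X = np.array([[7.0, 8.0, 6.5], [8.5, 7.0, 7.0],
                  [6.0, 9.0, 8.0], [7.5, 7.5, 7.5], [8.0, 6.5, 9.0]])
    w_obj = entropy_weights(X)
    w_sub = np.array([0.5, 0.3, 0.2])           # e.g., from decision makers
    print("combined:", np.round(combined_weights(w_sub, w_obj), 3))
```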
On the inversion of geodetic integrals defined over the sphere using 1-D FFT
NASA Astrophysics Data System (ADS)
García, R. V.; Alejo, C. A.
2005-08-01
An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
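The projected Landweber iteration mentioned above can be illustrated on a 1-D cyclic-convolution toy problem evaluated with FFTs. The Gaussian kernel and the nonnegativity projection below are illustrative stand-ins; they are not Hotine's integral or the constraint set used in the study.

```python
# Hedged toy of the projected Landweber iteration for a 1-D cyclic convolution
# model b = A x, evaluated with FFTs. Kernel and projection are stand-ins.
import numpy as np

def projected_landweber(b, kernel, n_iter=200, omega=None, project=None):
    K = np.fft.fft(kernel)
    if omega is None:
        omega = 1.0 / np.max(np.abs(K)) ** 2     # step size below 2/||A||^2
    if project is None:
        project = lambda x: np.clip(x, 0.0, None)
    x = np.zeros_like(b)
    for _ in range(n_iter):
        residual = b - np.real(np.fft.ifft(K * np.fft.fft(x)))          # b - A x
        x = project(x + omega * np.real(np.fft.ifft(np.conj(K) * np.fft.fft(residual))))
    return x

if __name__ == "__main__":
    n = 128
    x_true = np.zeros(n); x_true[30:40] = 1.0
    kernel = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0)
    kernel = np.roll(kernel / kernel.sum(), -n // 2)         # centered, unit area
    b = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x_true)))
    b += 0.01 * np.random.randn(n)
    x_hat = projected_landweber(b, kernel)
    print("reconstruction error: %.3f" % np.linalg.norm(x_hat - x_true))
```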
Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui
2017-06-13
The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.
Method and apparatus for converting static in-ground vehicle scales into weigh-in-motion systems
Muhs, Jeffrey D.; Scudiere, Matthew B.; Jordan, John K.
2002-01-01
An apparatus and method for converting in-ground static weighing scales for vehicles to weigh-in-motion systems. The apparatus upon conversion includes the existing in-ground static scale, peripheral switches and an electronic module for automatic computation of the weight. By monitoring the velocity, tire position, axle spacing, and real time output from existing static scales as a vehicle drives over the scales, the system determines when an axle of a vehicle is on the scale at a given time, monitors the combined weight output from any given axle combination on the scale(s) at any given time, and from these measurements automatically computes the weight of each individual axle and gross vehicle weight by an integration, integration approximation, and/or signal averaging technique.
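The signal-averaging idea described above can be illustrated with a toy: while an axle group is detected on the scale, the static scale's real-time output is averaged over the dwell time to estimate that group's weight, and individual axle weights follow by differencing successive group weights. The sampling rate, detection flags, weights and noise level below are illustrative assumptions.

```python
# Hedged sketch of the signal-averaging idea: average the scale output over
# the samples where an axle group is on the scale; difference group weights
# to get individual axle weights. All numbers are illustrative assumptions.
import numpy as np

def axle_group_weight(scale_signal, on_scale_mask):
    """Average the scale output over the samples where the axle group is on."""
    samples = scale_signal[on_scale_mask]
    return samples.mean() if samples.size else 0.0

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    t = np.arange(0, 3.0, 1 / 100.0)
    truth_front, truth_rear = 5200.0, 9400.0                 # kg, hypothetical
    signal = np.zeros_like(t)
    front_on = (t > 0.5) & (t < 1.2)                         # front axle only on scale
    both_on = (t >= 1.2) & (t < 1.9)                         # both axles on scale
    signal[front_on] = truth_front
    signal[both_on] = truth_front + truth_rear
    signal += rng.normal(0, 60.0, t.size)                    # scale noise
    w_front = axle_group_weight(signal, front_on)
    w_both = axle_group_weight(signal, both_on)
    print("front: %.0f kg, rear: %.0f kg" % (w_front, w_both - w_front))
```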
NASA Astrophysics Data System (ADS)
He, Y.; Xiaohong, C.; Lin, K.; Wang, Z.
2016-12-01
Water demand (WD) is the basis for water allocation (WA) because it can fully reflect the pressure on water resources from population and socioeconomic development. To deal with the great uncertainties and the absence of consideration of water environmental capacity (WEC) in traditional water demand prediction methods (e.g., statistical models, system dynamics and the quota method), this study develops a two-stage approach to predict WD under a constrained total water use from the perspective of ecological restraint. Regional total water demand (RTWD) is constrained by the WEC, the available water resources amount and the total water use quota. Based on RTWD, WD is allocated in two stages according to game theory: first, sub-regional total water demand (SRWD) is predicted by calculating sub-region weights from selected indicators of socioeconomic development; second, industrial water demand (IWD) is predicted according to game theory. Taking the Dongjiang river basin, South China as an example, according to its constrained total water use quota and WEC, the RTWD in 2020 is 9.83 billion m3, and the IWD for agriculture, industry, services, ecology (off-stream) and domestic use are 2.32 billion m3, 3.79 billion m3, 0.75 billion m3, 0.18 billion m3 and 1.79 billion m3, respectively. The results from this study provide useful insights for effective water allocation under climate change and the strict policy of water resources management.
NASA Astrophysics Data System (ADS)
Georgiou, Andreas; Skarlatos, Dimitrios
2016-07-01
Among the renewable power sources, solar power is rapidly becoming popular because it is inexhaustible, clean, and dependable. It has also become more efficient since the power conversion efficiency of photovoltaic solar cells has increased. Following these trends, solar power will become more affordable in years to come and considerable investments are to be expected. Regardless of the size of a solar plant, the siting procedure is a crucial factor for its efficiency and financial viability. Many aspects influence such a decision: legal, environmental, technical, and financial, to name a few. This paper describes a general integrated framework to evaluate land suitability for the optimal placement of photovoltaic solar power plants, which is based on a combination of a geographic information system (GIS), remote sensing techniques, and multi-criteria decision-making methods. An application of the proposed framework for the Limassol district in Cyprus is further illustrated. The combination of a GIS and multi-criteria methods produces an excellent analysis tool that creates an extensive database of spatial and non-spatial data, which is used to simplify problems as well as solve and promote the use of multiple criteria. A set of environmental, economic, social, and technical constraints, based on recent Cypriot legislation, European Union policies, and expert advice, identifies the potential sites for solar park installation. The pairwise comparison method in the context of the analytic hierarchy process (AHP) is applied to estimate the criteria weights in order to establish their relative importance in site evaluation. In addition, four different methods to combine information layers and check their sensitivity were used. The first considered all the criteria as being equally important and assigned them equal weight, whereas the others grouped the criteria and graded them according to their perceived importance. The overall suitability of the study region for siting solar parks is appraised through the summation rule. Strict application of the framework shows 3.0% of the study region scoring a best-suitability index for solar resource exploitation, hence minimizing the risk in a potential investment. However, using different weighting schemes for the criteria, suitable areas may reach up to 83% of the study region. The suggested methodological framework can be easily utilized by potential investors and renewable energy developers through a front-end web-based application with a proper GUI for personalized weighting schemes.
Flocculation and aggregation in a microgravity environment (FAME)
NASA Technical Reports Server (NTRS)
Ansari, Rafat R.; Dhadwal, Harbans S.; Suh, Kwang I.
1994-01-01
An experiment to study flocculation phenomena in the constrained microgravity environment of a space shuttle or space station is described. The small, lightweight experiment easily fits in a Spacelab Glovebox. Using an integrated fiber optic dynamic light scattering (DLS) system, we obtain high precision particle size measurements from dispersions of colloidal particles within seconds; the system needs no onboard optical alignment or index matching fluid, and offers sample mixing and shear melting capabilities to study aggregation (flocculation and coagulation) phenomena under both quiescent and controlled agitation conditions. The experimental system can easily be adapted for other microgravity experiments requiring the use of DLS. Preliminary results of a ground-based study are reported.
Surface-Constrained Volumetric Brain Registration Using Harmonic Mappings
Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.
2015-01-01
In order to compare anatomical and functional brain imaging data across subjects, the images must first be registered to a common coordinate system in which anatomical features are aligned. Intensity-based volume registration methods can align subcortical structures well, but the variability in sulcal folding patterns typically results in misalignment of the cortical surface. Conversely, surface-based registration using sulcal features can produce excellent cortical alignment but the mapping between brains is restricted to the cortical surface. Here we describe a method for volumetric registration that also produces an accurate one-to-one point correspondence between cortical surfaces. This is achieved by first parameterizing and aligning the cortical surfaces using sulcal landmarks. We then use a constrained harmonic mapping to extend this surface correspondence to the entire cortical volume. Finally, this mapping is refined using an intensity-based warp. We demonstrate the utility of the method by applying it to T1-weighted magnetic resonance images (MRI). We evaluate the performance of our proposed method relative to existing methods that use only intensity information; for this comparison we compute the inter-subject alignment of expert-labeled sub-cortical structures after registration. PMID:18092736
NASA Astrophysics Data System (ADS)
Li, Chuanzhong; He, Jingsong
2016-06-01
We construct Virasoro-type additional symmetries of a kind of constrained multicomponent Kadomtsev-Petviashvili (KP) hierarchy and obtain the Virasoro flow equation for the eigenfunctions and adjoint eigenfunctions. We show that the algebraic structure of the Virasoro symmetry is retained under discretization from the constrained multicomponent KP hierarchy to the discrete constrained multicomponent KP hierarchy.
NASA Technical Reports Server (NTRS)
Zhu, Lei; Jacob, Daniel; Mickley, Loretta; Marais, Eloise; Zhang, Aoxing; Cohan, Daniel; Yoshida, Yasuko; Duncan, Bryan; Abad, Gonzalo Gonzalez; Chance, Kelly;
2014-01-01
Satellite observations of formaldehyde (HCHO) columns provide top-down constraints on emissions of highly reactive volatile organic compounds (HRVOCs). This approach has been used previously to constrain emissions of isoprene from vegetation, but application to US anthropogenic emissions has been stymied by lack of a discernable HCHO signal. Here we show that oversampling of HCHO data from the Ozone Monitoring Instrument (OMI) for 2005-2008 enables quantitative detection of urban and industrial plumes in eastern Texas including Houston, Port Arthur, and Dallas-Fort Worth. By spatially integrating the individual urban-industrial HCHO plumes observed by OMI we can constrain the corresponding HCHO-weighted HRVOC emissions. Application to the Houston plume indicates a HCHO source of 260 ± 110 kmol h⁻¹ and implies a factor of 5.5 ± 2.4 underestimate of anthropogenic HRVOC emissions in the US Environmental Protection Agency inventory. With this approach we are able to monitor the trend in HRVOC emissions over the US, in particular from the oil-gas industry, over the past decade.
NASA Astrophysics Data System (ADS)
Moorkamp, Max
2017-09-01
In this review, I discuss the basic principles of joint inversion and constrained inversion approaches and show a few instructive examples of applications of these approaches in the literature. Starting with some basic definitions of the terms joint inversion and constrained inversion, I use a simple three-layered model as a tutorial example that demonstrates the general properties of joint inversion with different coupling methods. In particular, I investigate to which extent combining different geophysical methods can restrict the set of acceptable models and under which circumstances the results can be biased. Some ideas on how to identify such biased results and how negative results can be interpreted conclude the tutorial part. The case studies in the second part have been selected to highlight specific issues such as choosing an appropriate parameter relationship to couple seismic and electromagnetic data and demonstrate the most commonly used approaches, e.g., the cross-gradient constraint and direct parameter coupling. Throughout the discussion, I try to identify topics for future work. Overall, it appears that integrating electromagnetic data with other observations has reached a level of maturity and is starting to move away from fundamental proof-of-concept studies to answering questions about the structure of the subsurface. With a wide selection of coupling methods suited to different geological scenarios, integrated approaches can be applied on all scales and have the potential to deliver new answers to important geological questions.
Orthogonal-blendshape-based editing system for facial motion capture data.
Li, Qing; Deng, Zhigang
2008-01-01
The authors present a novel data-driven 3D facial motion capture data editing system using automated construction of an orthogonal blendshape face model and constrained weight propagation, aiming to bridge the popular facial motion capture technique and blendshape approach. In this work, a 3D facial-motion-capture-editing problem is transformed to a blendshape-animation-editing problem. Given a collected facial motion capture data set, we construct a truncated PCA space spanned by the greatest retained eigenvectors and a corresponding blendshape face model for each anatomical region of the human face. As such, modifying blendshape weights (PCA coefficients) is equivalent to editing their corresponding motion capture sequence. In addition, a constrained weight propagation technique allows animators to balance automation and flexible controls.
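The blendshape-editing idea described above can be illustrated with a minimal sketch: PCA is run on the motion capture frames of one facial region, the retained coefficients act as blendshape weights, a weight is edited, and the frame is reconstructed. Clipping the edited weight to the range seen in the data is a crude stand-in for the constrained weight propagation, and the toy data and component counts are assumptions.

```python
# Hedged sketch of the orthogonal-blendshape idea: per-region PCA, edit a
# retained coefficient (blendshape weight), reconstruct the frame. The clip
# to the observed range is a crude stand-in for constrained propagation.
import numpy as np
from sklearn.decomposition import PCA

def edit_frame(frames, frame_idx, component, delta, n_components=5):
    """frames: (n_frames, 3 * n_markers) flattened marker positions of a region."""
    pca = PCA(n_components=n_components).fit(frames)
    weights = pca.transform(frames)                        # blendshape weights
    lo, hi = weights[:, component].min(), weights[:, component].max()
    w = weights[frame_idx].copy()
    w[component] = np.clip(w[component] + delta, lo, hi)   # constrained edit
    return pca.inverse_transform(w[None, :])[0]            # edited frame

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.standard_normal((200, 30))                # toy region, 10 markers
    edited = edit_frame(frames, frame_idx=10, component=0, delta=0.5)
    print("edited frame shape:", edited.shape)
```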
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
Mori, Toshifumi; Hamers, Robert J; Pedersen, Joel A; Cui, Qiang
2014-07-17
Motivated by specific applications and the recent work of Gao and co-workers on integrated tempering sampling (ITS), we have developed a novel sampling approach referred to as integrated Hamiltonian sampling (IHS). IHS is straightforward to implement and complementary to existing methods for free energy simulation and enhanced configurational sampling. The method carries out sampling using an effective Hamiltonian constructed by integrating the Boltzmann distributions of a series of Hamiltonians. By judiciously selecting the weights of the different Hamiltonians, one achieves rapid transitions among the energy landscapes that underlie different Hamiltonians and therefore an efficient sampling of important regions of the conformational space. Along this line, IHS shares similar motivations as the enveloping distribution sampling (EDS) approach of van Gunsteren and co-workers, although the ways that distributions of different Hamiltonians are integrated are rather different in IHS and EDS. Specifically, we report efficient ways for determining the weights using a combination of histogram flattening and weighted histogram analysis approaches, which make it straightforward to include many end-state and intermediate Hamiltonians in IHS so as to enhance its flexibility. Using several relatively simple condensed phase examples, we illustrate the implementation and application of IHS as well as potential developments for the near future. The relation of IHS to several related sampling methods such as Hamiltonian replica exchange molecular dynamics and λ-dynamics is also briefly discussed.
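The abstract above describes sampling on an effective Hamiltonian built by integrating the Boltzmann distributions of several Hamiltonians. The sketch below shows one common functional form, U_eff = -(1/beta) ln sum_k w_k exp(-beta U_k), together with its force as a Boltzmann-weighted mixture of the individual forces; the two toy potentials and the fixed weights are assumptions, not the paper's systems or its weight-determination scheme.

```python
# Hedged sketch of an effective potential that integrates the Boltzmann
# factors of several Hamiltonians, and its force as the weighted mixture of
# the individual forces. Toy potentials and weights are assumptions.
import numpy as np

def effective_potential_and_force(x, potentials, forces, weights, beta=1.0):
    U = np.array([U_k(x) for U_k in potentials])
    F = np.array([F_k(x) for F_k in forces])
    logz = np.log(weights) - beta * U
    m = logz.max()
    p = np.exp(logz - m); p /= p.sum()               # mixture responsibilities
    U_eff = -(m + np.log(np.exp(logz - m).sum())) / beta
    F_eff = (p * F).sum()                            # weighted mixture of forces
    return U_eff, F_eff

if __name__ == "__main__":
    U1 = lambda x: (x ** 2 - 1.0) ** 2               # toy double well
    U2 = lambda x: 0.5 * (x - 0.5) ** 2              # toy harmonic well
    F1 = lambda x: -4.0 * x * (x ** 2 - 1.0)
    F2 = lambda x: -(x - 0.5)
    print(effective_potential_and_force(0.2, [U1, U2], [F1, F2],
                                        weights=np.array([0.5, 0.5])))
```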
A spatially constrained ecological classification: rationale, methodology and implementation
Franz Mora; Louis Iverson; Louis Iverson
2002-01-01
The theory, methodology and implementation for an ecological and spatially constrained classification are presented. Ecological and spatial relationships among several landscape variables are analyzed in order to define a new approach for a landscape classification. Using ecological and geostatistical analyses, several ecological and spatial weights are derived to...
NASA Technical Reports Server (NTRS)
Navon, I. M.
1984-01-01
A Lagrange multiplier method using techniques developed by Bertsekas (1982) was applied to solving the problem of enforcing simultaneous conservation of the nonlinear integral invariants of the shallow water equations on a limited area domain. This application of nonlinear constrained optimization is of the large dimensional type and the conjugate gradient method was found to be the only computationally viable method for the unconstrained minimization. Several conjugate-gradient codes were tested and compared for increasing accuracy requirements. Robustness and computational efficiency were the principal criteria.
Improving Allergen Prediction in Main Crops Using a Weighted Integrative Method.
Li, Jing; Wang, Jing; Li, Jing
2017-12-01
As a public health problem, food allergy is frequently caused by food allergen proteins, which trigger a type-I hypersensitivity reaction in the immune system of atopic individuals. The food allergens in our daily lives mainly come from crops including rice, wheat, soybean and maize. However, allergens in these main crops are far from fully uncovered. Although some bioinformatics tools or methods for predicting the potential allergenicity of proteins have been proposed, each method has its limitations. In this paper, we built a novel algorithm, PREAL W, which integrates PREAL, the FAO/WHO criteria and a motif-based method through a weighted average score, to combine the advantages of the different methods. Our results illustrate that PREAL W performs significantly better in the crops' allergen prediction. This integrative allergen prediction algorithm could be useful for critical food safety matters. PREAL W can be accessed at http://lilab.life.sjtu.edu.cn:8080/prealw.
NASA Astrophysics Data System (ADS)
Hadi, Fatemeh; Janbozorgi, Mohammad; Sheikhi, M. Reza H.; Metghalchi, Hameed
2016-10-01
The rate-controlled constrained-equilibrium (RCCE) method is employed to study the interactions between mixing and chemical reaction. Considering that mixing can influence the RCCE state, the key objective is to assess the accuracy and numerical performance of the method in simulations involving both reaction and mixing. The RCCE formulation includes rate equations for constraint potentials, density and temperature, which allows taking account of mixing alongside chemical reaction without splitting. The RCCE is a dimension reduction method for chemical kinetics based on thermodynamics laws. It describes the time evolution of reacting systems using a series of constrained-equilibrium states determined by RCCE constraints. The full chemical composition at each state is obtained by maximizing the entropy subject to the instantaneous values of the constraints. The RCCE is applied to a spatially homogeneous constant pressure partially stirred reactor (PaSR) involving methane combustion in oxygen. Simulations are carried out over a wide range of initial temperatures and equivalence ratios. The chemical kinetics, comprised of 29 species and 133 reaction steps, is represented by 12 RCCE constraints. The RCCE predictions are compared with those obtained by direct integration of the same kinetics, termed detailed kinetics model (DKM). The RCCE shows accurate prediction of combustion in PaSR with different mixing intensities. The method also demonstrates reduced numerical stiffness and overall computational cost compared to DKM.
Indoor 3D Route Modeling Based On Estate Spatial Data
NASA Astrophysics Data System (ADS)
Zhang, H.; Wen, Y.; Jiang, J.; Huang, W.
2014-04-01
Indoor three-dimensional route models are essential for intelligent indoor navigation and emergency evacuation. This paper is motivated by the need to construct indoor route models automatically, as far as possible. By comparing existing building data sources, the paper first explains why estate spatial management data is chosen as the data source. Then, an applicable method of constructing a three-dimensional route model of a building is introduced by establishing the mapping relationship between geographic entities and their topological expression. The data model is a weighted graph consisting of "nodes" and "paths" that expresses the spatial relationships and topological structure of building components. The whole process of modelling the internal space of a building involves two key steps: (1) the route model of each single floor is constructed, including path extraction for corridors using a Delaunay triangulation algorithm with constrained edges and fusion of room nodes into the path; (2) the single-floor route models are connected through stairs and elevators, and the multi-floor route model is eventually generated. To validate the method, a shopping mall called "Longjiang New City Plaza" in Nanjing is chosen as a case study, and the whole building space is modelled according to the method above. By integrating an existing path-finding algorithm, the usability of the modelling method is verified, which shows that the indoor three-dimensional route modelling method based on estate spatial data presented in this paper can support indoor route planning and evacuation route design very well.
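The weighted node/path graph and the route query it supports might look like the following sketch. Node names and edge weights are invented for illustration and are not taken from the Longjiang New City Plaza model.

```python
# Minimal sketch of the weighted "node"/"path" graph idea with a shortest-route
# query (Dijkstra). Node names and edge weights (metres) are made-up examples.
import heapq

edges = {
    "room_101":    [("corridor_1", 4.0)],
    "corridor_1":  [("room_101", 4.0), ("stair_A_f1", 12.0), ("elevator_f1", 20.0)],
    "stair_A_f1":  [("corridor_1", 12.0), ("stair_A_f2", 6.0)],
    "elevator_f1": [("corridor_1", 20.0), ("elevator_f2", 5.0)],
    "stair_A_f2":  [("stair_A_f1", 6.0), ("corridor_2", 10.0)],
    "elevator_f2": [("elevator_f1", 5.0), ("corridor_2", 25.0)],
    "corridor_2":  [("stair_A_f2", 10.0), ("elevator_f2", 25.0), ("room_201", 3.0)],
    "room_201":    [("corridor_2", 3.0)],
}

def shortest_route(start, goal):
    """Dijkstra over the weighted route graph; returns (length, node list)."""
    queue, seen = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in edges.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

print(shortest_route("room_101", "room_201"))
```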
Quantization and training of object detection networks with low-precision weights and activations
NASA Astrophysics Data System (ADS)
Yang, Bo; Liu, Jian; Zhou, Li; Wang, Yun; Chen, Jie
2018-01-01
As convolutional neural networks have demonstrated state-of-the-art performance in object recognition and detection, there is a growing need to deploy these systems on resource-constrained mobile platforms. However, the computational burden and energy consumption of inference for these networks are significantly higher than what most low-power devices can afford. To address these limitations, this paper proposes a method to train object detection networks with low-precision weights and activations. The probability density functions of the weights and activations of each layer are first estimated directly using piecewise Gaussian models. Then, the optimal quantization intervals and step sizes for each convolution layer are adaptively determined according to the distribution of the weights and activations. As the most computationally expensive convolutions can be replaced by effective fixed-point operations, the proposed method can drastically reduce computation complexity and memory footprint. Applied to the tiny you-only-look-once (YOLO) and YOLO architectures, the proposed method achieves accuracy comparable to their 32-bit counterparts. As an illustration, the proposed 4-bit and 8-bit quantized versions of the YOLO model achieve a mean average precision (mAP) of 62.6% and 63.9%, respectively, on the Pascal visual object classes 2012 test dataset. The mAP of the 32-bit full-precision baseline model is 64.0%.
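A simplified sketch of the low-precision quantization step is given below. The paper derives intervals and step sizes from piecewise Gaussian density estimates; here the clipping range is simply set to a few standard deviations of the weights, which is an assumed stand-in for that procedure.

```python
# Simplified sketch of low-precision weight quantization. The +/- 3 sigma
# clipping range is an assumption standing in for the paper's adaptive
# interval selection from piecewise Gaussian density estimates.
import numpy as np

def quantize(weights, bits=4, n_sigma=3.0):
    """Uniformly quantize an array to 2**bits levels over an adaptive range."""
    w = np.asarray(weights, dtype=np.float64)
    clip = n_sigma * w.std() + 1e-12           # adaptive clipping interval
    step = 2.0 * clip / (2 ** bits - 1)        # quantization step size
    q = np.round(np.clip(w, -clip, clip) / step)
    return q * step                            # de-quantized (fixed-point grid)

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=1000)           # mock convolution weights
w4 = quantize(w, bits=4)
print("mean abs quantization error:", np.mean(np.abs(w - w4)))
```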
A Floor-Map-Aided WiFi/Pseudo-Odometry Integration Algorithm for an Indoor Positioning System
Wang, Jian; Hu, Andong; Liu, Chunyan; Li, Xin
2015-01-01
This paper proposes a scheme for indoor positioning by fusing floor map, WiFi and smartphone sensor data to provide meter-level positioning without additional infrastructure. A topology-constrained K nearest neighbor (KNN) algorithm based on a floor map layout provides the coordinates required to integrate WiFi data with pseudo-odometry (P-O) measurements simulated using a pedestrian dead reckoning (PDR) approach. One method of further improving the positioning accuracy is to use a more effective multi-threshold step detection algorithm, as proposed by the authors. The “go and back” phenomenon caused by incorrect matching of the reference points (RPs) of a WiFi algorithm is eliminated using an adaptive fading-factor-based extended Kalman filter (EKF), taking WiFi positioning coordinates, P-O measurements and fused heading angles as observations. The “cross-wall” problem is solved based on the development of a floor-map-aided particle filter algorithm by weighting the particles, thereby also eliminating the gross-error effects originating from WiFi or P-O measurements. The performance observed in a field experiment performed on the fourth floor of the School of Environmental Science and Spatial Informatics (SESSI) building on the China University of Mining and Technology (CUMT) campus confirms that the proposed scheme can reliably achieve meter-level positioning. PMID:25811224
Constraints on the ωπ form factor from analyticity and unitarity
NASA Astrophysics Data System (ADS)
Ananthanarayan, B.; Caprini, Irinel; Kubis, Bastian
2016-05-01
Form factors are important low-energy quantities and an accurate knowledge of these sheds light on the strong interactions. A variety of methods based on general principles have been developed to use information known in different energy regimes to constrain them in regions where experimental information needs to be tested precisely. Here we review our recent work on the electromagnetic ωπ form factor in a model-independent framework known as the method of unitarity bounds, partly motivated by the discrepancies noted recently between the theoretical calculations of the form factor based on dispersion relations and certain experimental data measured from the decay ω → π0γ∗. We have applied a modified dispersive formalism, which uses as input the discontinuity of the ωπ form factor calculated by unitarity below the ωπ threshold and an integral constraint on the square of its modulus above this threshold. The latter constraint was obtained by exploiting unitarity and the positivity of the spectral function of a QCD correlator, computed on the spacelike axis by operator product expansion and perturbative QCD. An alternative constraint is obtained by using data available at higher energies for evaluating an integral of the modulus squared with a suitable weight function. From these conditions we derived upper and lower bounds on the modulus of the ωπ form factor in the region below the ωπ threshold. The results confirm the existence of a disagreement between dispersion theory and experimental data on the ωπ form factor around 0.6 GeV, including those from NA60 published in 2016.
Integration of alternative feedstreams for biomass treatment and utilization
Hennessey, Susan Marie [Avondale, PA; Friend, Julie [Claymont, DE; Dunson, Jr., James B.; Tucker, III, Melvin P.; Elander, Richard T [Evergreen, CO; Hames, Bonnie [Westminster, CO
2011-03-22
The present invention provides a method for treating biomass composed of integrated feedstocks to produce fermentable sugars. One aspect of the methods described herein includes a pretreatment step wherein biomass is integrated with an alternative feedstream and the resulting integrated feedstock, at relatively high concentrations, is treated with a low concentration of ammonia relative to the dry weight of biomass. In another aspect, a high solids concentration of pretreated biomass is integrated with an alternative feedstream for saccharification.
NASA Astrophysics Data System (ADS)
Tsagaan, Baigalmaa; Abe, Keiichi; Goto, Masahiro; Yamamoto, Seiji; Terakawa, Susumu
2006-03-01
This paper presents a method for segmenting brain tissues from MR images, developed for our image-guided neurosurgery system under development. Our goal is to segment brain tissues for creating a biomechanical model. The proposed segmentation method is based on 3-D region growing and outperforms conventional approaches by stepwise use of intensity similarities between voxels in conjunction with edge information. Since the intensity and the edge information are complementary in region-based segmentation, we use them twice by performing a coarse-to-fine extraction. First, the edge information in an appropriate neighborhood of the voxel under consideration is examined to constrain the region growing. The expanded region of the first extraction result is then used as the domain for the next processing stage. Only the intensity and the edge information of the current voxel are utilized in the final extraction. Before segmentation, the intensity parameters of the brain tissues, as well as the partial volume effect, are estimated using the expectation-maximization (EM) algorithm in order to provide accurate data interpretation for the extraction. We tested the proposed method on T1-weighted MR images of the brain and evaluated the segmentation effectiveness by comparing the results with ground truths. The meshes generated from the segmented brain volume using mesh-generation software are also shown in this paper.
Fast Constrained Spectral Clustering and Cluster Ensemble with Random Projection
Liu, Wenfen
2017-01-01
Constrained spectral clustering (CSC) can greatly improve clustering accuracy by incorporating constraint information into spectral clustering, and it has therefore attracted wide academic attention. In this paper, we propose a fast CSC algorithm that encodes landmark-based graph construction into a new CSC model and applies random sampling to decrease the data size after spectral embedding. Compared with the original model, the new algorithm yields asymptotically similar results as its model size increases; compared with the most efficient CSC algorithm known, the new algorithm runs faster and suits a wider range of data sets. Meanwhile, a scalable semisupervised cluster ensemble algorithm is also proposed by combining our fast CSC algorithm with dimensionality reduction via random projection in the process of spectral ensemble clustering. We demonstrate, through theoretical analysis and empirical results, that the new cluster ensemble algorithm has advantages in terms of efficiency and effectiveness. Furthermore, the approximate preservation of clustering accuracy under random projection, proved for the consensus clustering stage, also holds for weighted k-means clustering and thus provides a theoretical guarantee for this special kind of k-means clustering in which each point has its own weight. PMID:29312447
A Method for Precision Closed-Loop Irrigation Using a Modified PID Control Algorithm
NASA Astrophysics Data System (ADS)
Goodchild, Martin; Kühn, Karl; Jenkins, Malcolm; Burek, Kazimierz; Dutton, Andrew
2016-04-01
The benefits of closed-loop irrigation control have been demonstrated in grower trials, which show the potential for improved crop yields and resource usage. Managing water use by controlling irrigation in response to soil moisture changes to meet crop water demands is a popular approach, but it requires knowledge of closed-loop control practice. In theory, to obtain precise closed-loop control of a system it is necessary to characterise every component in the control loop to derive the appropriate controller parameters, i.e. the proportional, integral and derivative (PID) parameters of a classic PID controller. In practice this is often difficult to achieve, so empirical methods are employed to estimate the PID parameters by observing how the system performs under open-loop conditions. In this paper we present a modified PID controller, with a constrained integral function, that delivers excellent regulation of soil moisture by supplying the appropriate amount of water to meet the needs of the plant during the diurnal cycle. Furthermore, the modified PID controller responds quickly to changes in environmental conditions, including rainfall events, which can otherwise result in controller windup, under-watering and plant stress. The experimental work successfully demonstrates the functionality of a constrained-integral PID controller that delivers robust and precise irrigation control. Data from a coir-substrate strawberry growing trial are also presented, illustrating soil moisture control and the ability to match water delivery to solar radiation.
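A minimal sketch of a PID controller with a constrained (clamped) integral term is shown below. The gains, integral limits and the toy soil-moisture model are assumptions, not the configuration used in the trials.

```python
# Minimal sketch of a PID controller with a constrained (clamped) integral
# term, illustrating the anti-windup idea described above. All numbers are
# assumed values, not the trial configuration.
class ConstrainedIntegralPID:
    def __init__(self, kp, ki, kd, i_min, i_max):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.i_min, self.i_max = i_min, i_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        # Constrain the integral so rainfall or sensor spikes cannot wind it up.
        self.integral = min(self.i_max, max(self.i_min, self.integral + error * dt))
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Toy closed loop: soil moisture rises with irrigation and drains slowly.
pid = ConstrainedIntegralPID(kp=2.0, ki=0.1, kd=0.0, i_min=-5.0, i_max=5.0)
moisture, dt = 20.0, 1.0                           # percent volumetric water content
for step in range(60):
    irrigation = max(0.0, pid.update(setpoint=30.0, measurement=moisture, dt=dt))
    moisture += 0.5 * irrigation * dt - 0.2 * dt   # assumed uptake/drainage loss
print("final moisture:", round(moisture, 2))
```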
A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography
NASA Astrophysics Data System (ADS)
Sun, S.; Chen, C.; WANG, H.; Wang, Q.
2014-12-01
The self-constrained inversion method of potential fields uses a priori information self-extracted from potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography does not need any a priori information or large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. Therefore, we attempt to use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties. Magnetic anomaly data are taken as an example in this work. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their respective directions, and this characteristic is also present in their probability tomography results. We therefore use a set of rules to combine the probability tomography results of ∂ΔΤ⁄∂x, ∂ΔΤ⁄∂y and ∂ΔΤ⁄∂z into a new result used for extracting a priori information, and then incorporate this information into the model objective function as spatial weighting functions to invert the final magnetic susceptibility. Magnetic synthetic examples with and without a priori information extracted from the probability tomography results were compared; the results show that the former are more concentrated and resolve the source body edges with higher resolution. The method is finally applied to an iron mine in China with field-measured ΔΤ data and performs well. Reference: Paoletti, V., Ialongo, S., Florio, G., Fedi, M. & Cella, F., 2013. Self-constrained inversion of potential fields, Geophys. J. Int. This research is supported by the Fundamental Research Funds for the Institute for Geophysical and Geochemical Exploration, Chinese Academy of Geological Sciences (Grant Nos. WHS201210 and WHS201211).
NASA Astrophysics Data System (ADS)
Ja'fari, Ahmad; Hamidzadeh Moghadam, Rasoul
2012-10-01
Routine core analysis provides useful information for petrophysical study of hydrocarbon reservoirs. Effective porosity and fluid conductivity (permeability) can be obtained from core analysis in the laboratory, but coring hydrocarbon-bearing intervals and analysing the cores is expensive and time consuming. In this study, an improved method is proposed to establish a quantitative correlation between the porosity and permeability obtained from core data and conventional well log data by integrating different artificial intelligence systems. The proposed method combines the results of adaptive neuro-fuzzy inference system (ANFIS) and neural network (NN) algorithms for overall estimation of core data from conventional well log data. These methods multiply the output of each algorithm by a weight factor. Simple averaging and weighted averaging were used to determine the weight factors; in the weighted averaging method, a genetic algorithm (GA) is used to determine the weights. The overall algorithm was applied in one of SW Iran's oil fields with two cored wells. One-third of the data were used as the test dataset and the rest were used for training the networks. Results show that the output of the GA-based weighted averaging method provided the best mean square error and the best correlation coefficient with the real core data.
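The weighted-averaging step can be sketched as follows. The paper selects the weight factors with a genetic algorithm; a plain grid search stands in for it here, and the ANFIS/NN predictions are mock data rather than real well-log or core measurements.

```python
# Sketch of the weighted-averaging idea: combine ANFIS and NN estimates of
# permeability with a weight chosen on training data. The mock predictions
# below are placeholders, and the grid search stands in for the paper's GA.
import numpy as np

rng = np.random.default_rng(1)
core = rng.uniform(1.0, 100.0, size=50)                    # mock core permeability
pred_anfis = core + rng.normal(0.0, 8.0, size=core.size)   # mock ANFIS output
pred_nn = core + rng.normal(0.0, 12.0, size=core.size)     # mock NN output

def combine(w):
    return w * pred_anfis + (1.0 - w) * pred_nn

# Grid search over the weight factor (a GA is used in the study).
weights = np.linspace(0.0, 1.0, 101)
mse = [np.mean((combine(w) - core) ** 2) for w in weights]
best = weights[int(np.argmin(mse))]
print("best ANFIS weight:", best, "MSE:", round(min(mse), 2))
```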
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr; Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
NASA Astrophysics Data System (ADS)
Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.
2018-05-01
The paper deals with electromagnetic scattering by a perfectly conducting diffractive body of complex shape. The scattering performance of the body is calculated through the integral equation method. A Fredholm equation of the second kind was used for calculating the electric current density. When solving the integral equation by the method of moments, the authors properly treated the singularity of the kernel. Piecewise constant functions were chosen as basis functions, and the resulting equation was solved by the method of moments. Within the Kirchhoff integral approach it is possible to determine the scattered electromagnetic field from the obtained electric currents. The sector of observation angles lies in the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors used a neural network. All neurons use a log-sigmoid activation function and weighted sums as discriminant functions. The paper presents the matrix of weighting factors of the connectionist model, as well as the optimized dimensions of the diffractive body. The paper also presents the basic steps of the calculation technique for diffractive bodies, based on the combination of integral equation and neural network methods.
Primal-dual methods of shape sensitivity analysis for curvilinear cracks with nonpenetration
NASA Astrophysics Data System (ADS)
Kovtunenko, V. A.
2006-10-01
Based on a level-set description of a crack moving with a given velocity, the problem of shape perturbation of the crack is considered. Nonpenetration conditions are imposed between opposite crack surfaces, which results in a constrained minimization problem describing the equilibrium of a solid with the crack. We suggest a minimax formulation of the state problem, thus allowing consideration of curvilinear (nonplanar) cracks. Utilizing primal-dual methods of shape sensitivity analysis, we obtain the general formula for the shape derivative of the potential energy, which describes an energy-release rate for curvilinear cracks. The conditions sufficient to rewrite it in the form of a path-independent integral (J-integral) are derived.
Dipole and quadrupole synthesis of electric potential fields. M.S. Thesis
NASA Technical Reports Server (NTRS)
Tilley, D. G.
1979-01-01
A general technique for expanding an unknown potential field in terms of a linear summation of weighted dipole or quadrupole fields is described. Computational methods were developed for the iterative addition of dipole fields. Various solution potentials were compared inside the boundary with a more precise calculation of the potential to derive optimal schemes for locating the singularities of the dipole fields. Then, the problem of determining solutions to Laplace's equation on an unbounded domain as constrained by pertinent electron trajectory data was considered.
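The expansion of an unknown potential as a weighted sum of dipole fields, with the weights fitted to boundary samples, can be sketched as below. The dipole positions, orientations and target potential are made-up placeholders rather than the configurations studied in the thesis.

```python
# Sketch of expanding a potential as a weighted sum of dipole fields and
# fitting the weights by least squares to boundary samples. All positions,
# orientations and the "target" potential are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

def dipole_potential(points, pos, moment):
    """Potential of a unit-strength point dipole at 'pos' with direction 'moment'."""
    r = points - pos                         # (N, 3) separation vectors
    d = np.linalg.norm(r, axis=1)
    return r @ moment / d**3                 # p.r / |r|^3 (Gaussian-style units)

# Boundary sample points and a target potential to reproduce (assumed data).
pts = rng.normal(0.0, 1.0, size=(200, 3)) + np.array([0.0, 0.0, 5.0])
target = dipole_potential(pts, np.array([0.3, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]))

# Basis of candidate dipoles at fixed interior locations.
positions = [np.array([x, 0.0, 0.0]) for x in np.linspace(-0.5, 0.5, 5)]
A = np.column_stack([dipole_potential(pts, p, np.array([0.0, 0.0, 1.0]))
                     for p in positions])
weights, *_ = np.linalg.lstsq(A, target, rcond=None)
print("fitted dipole weights:", np.round(weights, 3))
print("rms residual:", np.sqrt(np.mean((A @ weights - target) ** 2)))
```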
A joint precoding scheme for indoor downlink multi-user MIMO VLC systems
NASA Astrophysics Data System (ADS)
Zhao, Qiong; Fan, Yangyu; Kang, Bochao
2017-11-01
In this study, we aim to improve the system performance and reduce the implementation complexity of precoding schemes for visible light communication (VLC) systems. By incorporating the power-method algorithm and the block diagonalization (BD) algorithm, we propose a joint precoding scheme for indoor downlink multi-user multi-input-multi-output (MU-MIMO) VLC systems. In this scheme, we first apply the BD algorithm to eliminate the co-channel interference (CCI) among users. Second, the power-method algorithm is used to search for the precoding weight of each user based on the criterion of signal-to-interference-plus-noise ratio (SINR) maximization. Finally, the optical power restrictions of VLC systems are taken into account to constrain the precoding weight matrix. Comprehensive computer simulations in two scenarios indicate that the proposed scheme always has better bit error rate (BER) performance and lower computational complexity than the traditional scheme.
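A sketch of the power-method step is given below: the dominant eigenvector of an effective channel matrix is found by repeated multiplication and normalization, and could then serve as a per-user precoding weight direction. The random matrix is a placeholder, not a VLC channel model.

```python
# Sketch of the power method for finding the dominant eigenvector of an
# effective channel matrix; the random matrix below is a placeholder.
import numpy as np

def power_method(M, iters=200, tol=1e-10):
    """Return the dominant eigenvector (unit norm) of a square matrix M."""
    v = np.random.default_rng(3).normal(size=M.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = M @ v
        w /= np.linalg.norm(w)
        if np.linalg.norm(w - v) < tol:
            break
        v = w
    return v

H = np.random.default_rng(4).normal(size=(4, 4))
M = H.T @ H                                   # symmetric positive semi-definite
w_precode = power_method(M)
print("dominant eigenvector:", np.round(w_precode, 3))
print("Rayleigh quotient:", round(float(w_precode @ M @ w_precode), 3))
```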
Integrated Conceptual Design of Joined-Wing SensorCraft Using Response Surface Models
2006-11-01
[Abstract not recoverable from this record; the extracted text contains only front-matter and table-of-contents residue. Recoverable headings: Raymer Approximate and Group Weights Sizing Methods; Finite Element Model Structural Weight; Empty Weight Fraction Equation; Figure 29, Response of Refined Weight to T/W and W/S Inputs for Model (2).]
NASA Astrophysics Data System (ADS)
Mirzaei, Mahmood; Tibaldi, Carlo; Hansen, Morten H.
2016-09-01
PI/PID controllers are the most common wind turbine controllers. Normally a first tuning is obtained using methods such as pole placement or Ziegler-Nichols, and then extensive aeroelastic simulations are used to obtain the best tuning in terms of regulation of the outputs and reduction of the loads. In traditional tuning approaches, the properties of the different open-loop and closed-loop transfer functions of the system are not normally considered. In this paper, an assessment of the pole-placement tuning method is presented based on robustness measures. A constrained optimization setup is then suggested to automatically tune the wind turbine controller subject to robustness constraints. Properties of the system such as the maximum sensitivity and complementary sensitivity functions (Ms and Mt), along with some of the responses of the system, are used to investigate the controller performance and formulate the optimization problem. The cost function is the integral absolute error (IAE) of the rotational speed for a disturbance modeled as a step in wind speed. A linearized model of the DTU 10-MW reference wind turbine is obtained using HAWCStab2 and then reduced with model order reduction. Trade-off curves are given to assess the tunings of the pole-placement method, and a constrained optimization problem is solved to find the best tuning.
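A toy version of such a robustness-constrained tuning might look like the sketch below: the IAE of a step-disturbance response is minimized subject to a maximum-sensitivity constraint Ms ≤ 2. The first-order plant and all numerical values are assumptions, not the linearized DTU 10-MW model.

```python
# Toy version of the constrained tuning idea: minimize the IAE of a step
# disturbance response subject to a maximum-sensitivity constraint Ms <= 2.
# The plant G(s) = 1/(10 s + 1) and all numbers are assumptions.
import numpy as np
from scipy.optimize import minimize

tau = 10.0                                    # assumed plant time constant [s]

def iae(gains, t_end=200.0, dt=0.05):
    kp, ki = gains
    x = integ = cost = 0.0
    for _ in range(int(t_end / dt)):          # Euler simulation, unit step disturbance
        err = -x                              # regulate the output to zero
        integ += err * dt
        u = kp * err + ki * integ
        x += dt * (-x + u + 1.0) / tau
        cost += abs(err) * dt
    return cost

def max_sensitivity(gains):
    kp, ki = gains
    w = np.logspace(-3, 1, 400)
    L = (kp + ki / (1j * w)) / (1j * w * tau + 1.0)
    return np.max(np.abs(1.0 / (1.0 + L)))    # Ms over a frequency grid

res = minimize(iae, x0=[1.0, 0.1], method="SLSQP",
               bounds=[(0.0, 50.0), (0.0, 10.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda g: 2.0 - max_sensitivity(g)}])
print("kp, ki =", np.round(res.x, 3), " IAE =", round(res.fun, 3),
      " Ms =", round(max_sensitivity(res.x), 3))
```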
NASA Astrophysics Data System (ADS)
Peng, Chengtao; Qiu, Bensheng; Zhang, Cheng; Ma, Changyu; Yuan, Gang; Li, Ming
2017-07-01
Over the years, X-ray computed tomography (CT) has been used successfully in clinical diagnosis. However, when the body of the patient being examined contains metal objects, the reconstructed image is polluted by severe metal artifacts, which affect the diagnosis of disease. In this work, we propose a dynamic re-weighted total variation (DRWTV) technique combined with the statistical iterative reconstruction (SIR) method to reduce these artifacts. The DRWTV method is based on the total variation (TV) and re-weighted total variation (RWTV) techniques, but it provides a sparser representation than TV and protects tissue details better than RWTV. Besides suppressing artifacts and noise, it also accelerates the SIR convergence. The performance of the algorithm is tested on both a simulated phantom dataset and a clinical dataset: a teeth phantom with two metal implants and a skull with three metal implants, respectively. The proposed algorithm (SIR-DRWTV) is compared with two traditional iterative algorithms, SIR and SIR constrained by RWTV regularization (SIR-RWTV). The results show that the proposed algorithm has the best performance in reducing metal artifacts and protecting tissue details.
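The re-weighting idea behind RWTV/DRWTV-style penalties can be sketched as below, where weights are recomputed from the current image gradients so that strong edges are penalized less. The generic 1/(|∇u| + ε) rule is used here only as an illustration of the idea, not as the exact DRWTV update.

```python
# Small sketch of gradient-based re-weighting for a re-weighted TV penalty:
# large weights on flat regions, small weights on edges. Illustrative only.
import numpy as np

def tv_weights(image, eps=1e-3):
    gy, gx = np.gradient(image.astype(np.float64))
    grad_mag = np.sqrt(gx**2 + gy**2)
    return 1.0 / (grad_mag + eps)        # edges receive a smaller TV penalty

img = np.zeros((64, 64))
img[:, 32:] = 1.0                        # mock image with one vertical edge
w = tv_weights(img)
print("weight on edge vs flat region:", w[32, 32], w[10, 10])
```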
On the time-weighted quadratic sum of linear discrete systems
NASA Technical Reports Server (NTRS)
Jury, E. I.; Gutman, S.
1975-01-01
A method is proposed for obtaining the time-weighted quadratic sum for linear discrete systems. The formula for the weighted quadratic sum is obtained from a matrix z-transform formulation. In addition, it is shown that this quadratic sum can be derived in recursive form for several useful weighting functions. The discussion presented parallels that of MacFarlane (1963) for the weighted quadratic integral for linear continuous systems.
Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction
NASA Astrophysics Data System (ADS)
Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.
2013-12-01
We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction-based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality: an automated code-generation framework that takes a user-provided PDE right-hand side in symbolic form and generates efficient, architecture-specific parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers both in standard test problems and in global magnetosphere simulations.
Integral approximations to classical diffusion and smoothed particle hydrodynamics
Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.
2014-12-31
The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of a physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the smoothed particle hydrodynamics (SPH) method.
Calculation of Radar Probability of Detection in K-Distributed Sea Clutter and Noise
2011-04-01
Laguerre polynomials are generated from a recurrence relation, and the quadrature nodes and weights are calculated from the eigenvalues and eigenvectors of a matrix, so that a general-purpose numerical integration routine is not required for the integration.
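A sketch of this matrix (Golub-Welsch-style) computation of Gauss-Laguerre nodes and weights is given below. Identifying the matrix with the tridiagonal Jacobi matrix of the Laguerre recurrence is an assumption here, since the report text is truncated.

```python
# Sketch of computing Gauss-Laguerre nodes and weights from the eigenvalues
# and eigenvectors of the tridiagonal Jacobi matrix of the Laguerre recurrence
# (Golub-Welsch). This identification is assumed, not taken from the report.
import numpy as np

def gauss_laguerre(n):
    k = np.arange(n)
    J = (np.diag(2 * k + 1.0)
         + np.diag(np.arange(1.0, n), 1)
         + np.diag(np.arange(1.0, n), -1))
    nodes, vecs = np.linalg.eigh(J)
    weights = vecs[0, :] ** 2            # mu_0 = integral of exp(-x) on [0, inf) = 1
    return nodes, weights

x, w = gauss_laguerre(16)
print("integral of x^3 exp(-x):", np.dot(w, x ** 3))          # exact value is 6
xr, wr = np.polynomial.laguerre.laggauss(16)                   # NumPy cross-check
print("matches numpy laggauss:", np.allclose(x, xr) and np.allclose(w, wr))
```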
Hippocampus segmentation using locally weighted prior based level set
NASA Astrophysics Data System (ADS)
Achuthan, Anusha; Rajeswari, Mandava
2015-12-01
Segmentation of the hippocampus is one of the major challenges in medical image segmentation due to its imaging characteristics: its intensity is almost the same as that of adjacent gray matter structures such as the amygdala. This intensity similarity causes the hippocampus to have weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate segmentation results. Therefore, prior information such as shape and spatial information needs to be assimilated into existing segmentation methods to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods. However, the prior information has been utilized in a globally integrated manner, which does not reflect the real scenario during clinical delineation. Therefore, in this paper, prior information locally integrated into a level set model is presented. This work utilizes a mean shape model to provide automatic initialization for the level set evolution, and this model has been integrated as prior information into the level set model. The local integration of edge-based information and prior information is implemented through an edge weighting map that decides, at the voxel level, which information should be observed during the level set evolution. The edge weighting map shows which corresponding voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, known as the geodesic active contour, yields an improvement of 9% in the averaged Dice coefficient.
Jang, Hae-Won; Ih, Jeong-Guon
2013-03-01
The time domain boundary element method (TBEM) for calculating the exterior sound field using the Kirchhoff integral has difficulties with non-uniqueness and exponential divergence. In this work, a method to stabilize the TBEM calculation for the exterior problem is suggested. The time domain CHIEF (Combined Helmholtz Integral Equation Formulation) method is newly formulated to suppress low-order fictitious internal modes. This method constrains the surface Kirchhoff integral by forcing the pressures at additional interior points to be zero when the shortest retarded time between boundary nodes and an interior point elapses. However, even after using the CHIEF method, the TBEM calculation suffers from exponential divergence due to the remaining unstable high-order fictitious modes at frequencies higher than the frequency limit of the boundary element model. For complete stabilization, such troublesome modes are selectively adjusted by projecting the time response onto the eigenspace. In a test example of a transiently pulsating sphere, the final average error norm of the stabilized response compared to the analytic solution is 2.5%.
Stabilization of computational procedures for constrained dynamical systems
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1988-01-01
A new stabilization method for treating constraints in multibody dynamical systems is presented. By tailoring a penalty form of the constraint equations, the method achieves stabilization without artificial damping and yields a companion matrix differential equation for the constraint forces; hence, the constraint forces are obtained by integrating this companion differential equation in time. A principal feature of the method is that the error committed in each constraint condition decays with the characteristic time scale associated with its constraint force. Numerical experiments indicate that the method yields a marked improvement over existing techniques.
Zhang, Li; Athavale, Prashant; Pop, Mihaela; Wright, Graham A
2017-08-01
The aim is to enable robust reconstruction for highly accelerated three-dimensional multicontrast late enhancement imaging, providing improved MR characterization of myocardial infarction with isotropic high spatial resolution. A new method using compressed sensing with low-rank and spatially varying edge-preserving constraints (CS-LASER) is proposed to improve the reconstruction of fine image details from highly undersampled data. CS-LASER leverages the low-rank structure of the multicontrast volume series in MR relaxation and integrates spatially varying edge preservation into the explicit low-rank-constrained compressed sensing framework using weighted total variation. With an orthogonal temporal basis pre-estimated, a multiscale iterative reconstruction framework is proposed to enable CS-LASER with spatially varying weights of appropriate accuracy. In in vivo pig studies with both retrospective and prospective undersampling, CS-LASER preserved fine image details better and presented tissue characteristics with a higher degree of consistency with histopathology, particularly in the peri-infarct region, than an alternative technique at different acceleration rates. An isotropic resolution of 1.5 mm was achieved in vivo within a single breath-hold using the proposed techniques. Accelerated three-dimensional multicontrast late enhancement with CS-LASER can thus achieve improved MR characterization of myocardial infarction with high spatial resolution. Magn Reson Med 78:598-610, 2017.
Xie, Y L; Li, Y P; Huang, G H; Li, Y F; Chen, L R
2011-04-15
In this study, an inexact chance-constrained water quality management (ICC-WQM) model is developed for planning regional environmental management under uncertainty. The method is based on an integration of interval linear programming (ILP) and chance-constrained programming (CCP) techniques. ICC-WQM allows uncertainties presented as both probability distributions and interval values to be incorporated within a general optimization framework. Complexities in environmental management systems can be systematically reflected, and the applicability of the modeling process is thus greatly enhanced. The developed method is applied to planning chemical-industry development in Binhai New Area of Tianjin, China. Interval solutions associated with different risk levels of constraint violation have been obtained. They can be used to generate decision alternatives and thus help decision makers identify desired policies under various system-reliability constraints on the water environmental capacity for pollutants. Tradeoffs between system benefits and constraint-violation risks can also be tackled. They are helpful for supporting (a) decisions on wastewater discharge and government investment, (b) formulation of local policies regarding water consumption, economic development and industry structure, and (c) analysis of interactions among economic benefits, system reliability and pollutant discharges.
ERIC Educational Resources Information Center
Hutchison, Amy C.; Woodward, Lindsay
2014-01-01
The Common Core State Standards produce a need to understand how digital tools can support literacy instruction. The purpose of this case study was to explore how a language arts teacher's integration of computers and iPads empowered and constrained her and the resulting classroom instruction. Constraining factors included (a) inadequate…
NASA Astrophysics Data System (ADS)
Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.
2016-12-01
The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on machine learning, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, one may therefore accelerate the convergence of the method. To this end, we present a formulation of weighted PCA combined with a gradient-based method using automatic differentiation to iteratively re-weight observations concurrently with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.
PANATIKI: A Network Access Control Implementation Based on PANA for IoT Devices
Sanchez, Pedro Moreno; Lopez, Rafa Marin; Gomez Skarmeta, Antonio F.
2013-01-01
Internet of Things (IoT) networks are the pillar of recent novel scenarios, such as smart cities or e-healthcare applications. Among other challenges, these networks cover the deployment and interaction of small devices with constrained capabilities and Internet protocol (IP)-based networking connectivity. These constrained devices usually require connection to the Internet to exchange information (e.g., management or sensing data) or access network services. However, only authenticated and authorized devices can, in general, establish this connection. The so-called authentication, authorization and accounting (AAA) services are in charge of performing these tasks on the Internet. Thus, it is necessary to deploy protocols that allow constrained devices to verify their credentials against AAA infrastructures. The Protocol for Carrying Authentication for Network Access (PANA) has been standardized by the Internet engineering task force (IETF) to carry the Extensible Authentication Protocol (EAP), which provides flexible authentication upon the presence of AAA. To the best of our knowledge, this paper is the first deep study of the feasibility of EAP/PANA for network access control in constrained devices. We provide light-weight versions and implementations of these protocols to fit them into constrained devices. These versions have been designed to reduce the impact in standard specifications. The goal of this work is two-fold: (1) to demonstrate the feasibility of EAP/PANA in IoT devices; (2) to provide the scientific community with the first light-weight interoperable implementation of EAP/PANA for constrained devices in the Contiki operating system (Contiki OS), called PANATIKI. The paper also shows a testbed, simulations and experimental results obtained from real and simulated constrained devices. PMID:24189332
Design of optimally normal minimum gain controllers by continuation method
NASA Technical Reports Server (NTRS)
Lim, K. B.; Juang, J.-N.; Kim, Z. C.
1989-01-01
A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of the weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm embedded in a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.
Further evaluation of the constrained least squares electromagnetic compensation method
NASA Technical Reports Server (NTRS)
Smith, William T.
1991-01-01
Technologies exist for construction of antennas with adaptive surfaces that can compensate for many of the larger distortions caused by thermal and gravitational forces. However, as the frequency and size of reflectors increase, the subtle surface errors become significant and degrade the overall electromagnetic performance. Electromagnetic (EM) compensation through an adaptive feed array offers means for mitigation of surface distortion effects. Implementation of EM compensation is investigated with the measured surface errors of the NASA 15 meter hoop/column reflector antenna. Computer simulations are presented for: (1) a hybrid EM compensation technique, and (2) evaluating the performance of a given EM compensation method when implemented with discretized weights.
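A generic sketch of computing compensating feed-array weights by least squares is shown below. The regularized (energy-penalized) formulation is a stand-in for the constrained least squares method evaluated in the paper, and the element patterns and desired pattern are random placeholders, not the 15 meter hoop/column antenna model.

```python
# Generic sketch of solving for feed-array excitation weights by least squares:
# choose weights so that the superposed element patterns approximate a desired
# (undistorted) pattern. Regularization stands in for the paper's constraints,
# and all data below are random placeholders.
import numpy as np

rng = np.random.default_rng(5)
n_angles, n_feeds = 181, 7
A = rng.normal(size=(n_angles, n_feeds)) + 1j * rng.normal(size=(n_angles, n_feeds))
desired = rng.normal(size=n_angles) + 1j * rng.normal(size=n_angles)

# Regularized least squares: minimize ||A w - d||^2 + mu ||w||^2.
mu = 0.1
w = np.linalg.solve(A.conj().T @ A + mu * np.eye(n_feeds), A.conj().T @ desired)
print("residual norm:", round(float(np.linalg.norm(A @ w - desired)), 3))
print("weight magnitudes:", np.round(np.abs(w), 3))
```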
Context-aware and locality-constrained coding for image categorization.
Xiao, Wenhua; Wang, Bin; Liu, Yu; Bao, Weidong; Zhang, Maojun
2014-01-01
Improving the coding strategy for BOF (Bag-of-Features)-based feature design has drawn increasing attention in recent image categorization work. However, ambiguity in the coding procedure still impedes its further development. In this paper, we introduce a context-aware and locality-constrained coding (CALC) approach that uses context information to describe objects in a discriminative way. This is achieved by learning a word-to-word co-occurrence prior and imposing context information over locality-constrained coding. First, the local context of each category is evaluated by learning a word-to-word co-occurrence matrix representing the spatial distribution of local features in a neighboring region. Then, the learned co-occurrence matrix is used to measure the context distance between local features and code words. Finally, a coding strategy that simultaneously considers locality in feature space and context space, while introducing feature weighting, is proposed. This novel coding strategy not only semantically preserves information in coding, but also has the ability to alleviate the noise distortion of each class. Extensive experiments on several available datasets (Scene-15, Caltech101, and Caltech256) are conducted to validate the superiority of our algorithm by comparing it with baselines and recently published methods. Experimental results show that our method significantly improves upon the baselines and achieves comparable and even better performance than the state of the art.
Li, Xinbin; Zhang, Chenglin; Yan, Lei; Han, Song; Guan, Xinping
2017-12-21
Target localization, which aims to estimate the location of an unknown target, is one of the key issues in applications of underwater acoustic sensor networks (UASNs). However, the constrained property of an underwater environment, such as restricted communication capacity of sensor nodes and sensing noises, makes target localization a challenging problem. This paper relies on fractional sensor nodes to formulate a support vector learning-based particle filter algorithm for the localization problem in communication-constrained underwater acoustic sensor networks. A node-selection strategy is exploited to pick fractional sensor nodes with short-distance pattern to participate in the sensing process at each time frame. Subsequently, we propose a least-square support vector regression (LSSVR)-based observation function, through which an iterative regression strategy is used to deal with the distorted data caused by sensing noises, to improve the observation accuracy. At the same time, we integrate the observation to formulate the likelihood function, which effectively update the weights of particles. Thus, the particle effectiveness is enhanced to avoid "particle degeneracy" problem and improve localization accuracy. In order to validate the performance of the proposed localization algorithm, two different noise scenarios are investigated. The simulation results show that the proposed localization algorithm can efficiently improve the localization accuracy. In addition, the node-selection strategy can effectively select the subset of sensor nodes to improve the communication efficiency of the sensor network.
Tongue Images Classification Based on Constrained High Dispersal Network.
Meng, Dan; Cao, Guitao; Duan, Ye; Zhu, Minghua; Tu, Liping; Xu, Dong; Xu, Jiatuo
2017-01-01
Computer aided tongue diagnosis has a great potential to play important roles in traditional Chinese medicine (TCM). However, the majority of the existing tongue image analyses and classification methods are based on the low-level features, which may not provide a holistic view of the tongue. Inspired by deep convolutional neural network (CNN), we propose a novel feature extraction framework called constrained high dispersal neural networks (CHDNet) to extract unbiased features and reduce human labor for tongue diagnosis in TCM. Previous CNN models have mostly focused on learning convolutional filters and adapting weights between them, but these models have two major issues: redundancy and insufficient capability in handling unbalanced sample distribution. We introduce high dispersal and local response normalization operation to address the issue of redundancy. We also add multiscale feature analysis to avoid the problem of sensitivity to deformation. Our proposed CHDNet learns high-level features and provides more classification information during training time, which may result in higher accuracy when predicting testing samples. We tested the proposed method on a set of 267 gastritis patients and a control group of 48 healthy volunteers. Test results show that CHDNet is a promising method in tongue image classification for the TCM study.
Fan, Quan-Yong; Yang, Guang-Hong
2017-01-01
State inequality constraints have rarely been considered in the literature on solving the nonlinear optimal control problem based on the adaptive dynamic programming (ADP) method. In this paper, an actor-critic (AC) algorithm is developed to solve the optimal control problem with a discounted cost function for a class of state-constrained nonaffine nonlinear systems. To overcome the difficulties resulting from the inequality constraints and the nonaffine nonlinearities of the controlled systems, a novel transformation technique with redesigned slack functions and a pre-compensator method are introduced to convert the constrained optimal control problem into an unconstrained one for affine nonlinear systems. Then, based on the policy iteration (PI) algorithm, an online AC scheme is proposed to learn the nearly optimal control policy for the obtained affine nonlinear dynamics. Using the information of the nonlinear model, novel adaptive update laws are designed to guarantee the convergence of the neural network (NN) weights and the stability of the affine nonlinear dynamics without requiring a probing signal. Finally, the effectiveness of the proposed method is validated by simulation studies.
Proximity Navigation of Highly Constrained Spacecraft
NASA Technical Reports Server (NTRS)
Scarritt, S.; Swartwout, M.
2007-01-01
Bandit is a 3-kg automated spacecraft in development at Washington University in St. Louis. Bandit's primary mission is to demonstrate proximity navigation, including docking, around a 25-kg student-built host spacecraft. However, because of extreme constraints in mass, power and volume, traditional sensing and actuation methods are not available. In particular, Bandit carries only 8 fixed-magnitude cold-gas thrusters to control its 6 DOF motion. Bandit lacks true inertial sensing, and the ability to sense position relative to the host has error bounds that approach the size of the Bandit itself. Some of the navigation problems are addressed through an extremely robust, error-tolerant soft dock. In addition, we have identified a control methodology that performs well in this constrained environment: behavior-based velocity potential functions, which use a minimum-seeking method similar to Lyapunov functions. We have also adapted the discrete Kalman filter for use on Bandit for position estimation and have developed a similar measurement vs. propagation weighting algorithm for attitude estimation. This paper provides an overview of Bandit and describes the control and estimation approach. Results using our 6DOF flight simulator are provided, demonstrating that these methods show promise for flight use.
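The discrete Kalman filter mentioned above, which weights noisy relative-position measurements against propagated estimates, can be sketched as follows. The dynamics, noise levels and measurements are assumed values, not Bandit's actual sensor models.

```python
# Minimal sketch of a discrete Kalman filter for 1-D relative position and
# velocity, illustrating the measurement-vs-propagation weighting idea.
# Dynamics, noise levels and measurements below are assumed values.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity propagation
H = np.array([[1.0, 0.0]])                   # only relative position is measured
Q = 1e-4 * np.eye(2)                         # process noise (assumed)
R = np.array([[0.05 ** 2]])                  # coarse position sensor noise (assumed)

x = np.array([1.0, 0.0])                     # state estimate [position m, velocity m/s]
P = np.eye(2)

rng = np.random.default_rng(6)
true_pos = 1.0
for k in range(20):
    true_pos += -0.02 * dt                   # target closes at 2 cm/s (assumed truth)
    z = true_pos + rng.normal(0.0, 0.05)     # noisy relative-position measurement
    # Propagate.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: the Kalman gain weights measurement against propagation.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
print("estimated position, velocity:", np.round(x, 3))
```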
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
In the source-free mantle/frozen-flux core magnetic earth model, the non-linear inverse steady motional induction problem was solved using the method presented in Part 1B. How that method was applied to estimate steady, broad-scale fluid velocity fields near the top of Earth's core that induce the secular change indicated by the Definitive Geomagnetic Reference Field (DGRF) models from 1945 to 1980 is described. Special attention is given to the derivation of weight matrices for the DGRF models because the weights determine the apparent significance of the residual secular change. The derived weight matrices also enable estimation of the secular change signal-to-noise ratio characterizing the DGRF models. Two types of weights were derived in 1987-88: radial field weights for fitting the evolution of the broad-scale portion of the radial geomagnetic field component at Earth's surface implied by the DGRFs, and general weights for fitting the evolution of the broad-scale portion of the scalar potential specified by these models. The difference is non-trivial because not all the geomagnetic data represented by the DGRFs constrain the radial field component. For radial field weights (or general weights), a quantitatively acceptable explication of broad-scale secular change relative to the 1980 Magsat epoch must account for 99.94271 percent (or 99.98784 percent) of the total weighted variance accumulated therein. Tolerable normalized root-mean-square weighted residuals of 2.394 percent (or 1.103 percent) are less than the 7 percent errors expected in the source-free mantle/frozen-flux core approximation.
Chondronikola, Maria; Sidossis, Labros S.; Richardson, Lisa M.; Temple, Jeff R.; van den Berg, Patricia A.; Herndon, David N.; Meyer, Walter J.
2012-01-01
Objective Burn injury deformities and obesity have been associated with social integration difficulty and body image dissatisfaction. However, the combined effects of obesity and burn injury on social integration difficulty and body image dissatisfaction are unknown. Methods Adolescent and young adult burn injury survivors were categorized as normal weight (n=47) or overweight and obese (n=21). Burn-related and anthropometric information was obtained from patients' medical records, while validated questionnaires were used to assess the main outcomes and possible confounders. Analysis of covariance and multiple linear regressions were performed to evaluate the objectives of this study. Results Obese and overweight burn injury survivors did not experience increased body image dissatisfaction (12 ± 4.3 vs 13.1 ± 4.4, p=0.57) or social integration difficulty (17.5 ± 6.9 vs 15.5 ± 5.7, p=0.16) compared to normal weight burn injury survivors. Weight status was not a significant predictor of social integration difficulty or body image dissatisfaction (p=0.19 and p=0.24, respectively). However, mobility limitations predicted greater social integration difficulty (p=0.005) and body image dissatisfaction (p<0.001), while higher weight status at burn was a borderline significant predictor of body image dissatisfaction (p=0.05). Conclusions Obese and overweight adolescents and young adults, who sustained a major burn injury as children, do not experience greater social integration difficulty and body image dissatisfaction compared to normal weight burn injury survivors. Mobility limitations and higher weight status at burn are likely more important factors affecting the long-term social integration difficulty and body image dissatisfaction of these young people. PMID:23292577
DuRoss, Christopher B.; Personius, Stephen F.; Crone, Anthony J.; Olig, Susan S.; Lund, William R.
2011-01-01
We present a method to evaluate and integrate paleoseismic data from multiple sites into a single, objective measure of earthquake timing and recurrence on discrete segments of active faults. We apply this method to the Weber segment (WS) of the Wasatch fault zone using data from four fault-trench studies completed between 1981 and 2009. After systematically reevaluating the stratigraphic and chronologic data from each trench site, we constructed time-stratigraphic OxCal models that yield site probability density functions (PDFs) of the times of individual earthquakes. We next qualitatively correlated the site PDFs into a segment-wide earthquake chronology, which is supported by overlapping site PDFs, large per-event displacements, and prominent segment boundaries. For each segment-wide earthquake, we computed the product of the site PDF probabilities in common time bins, which emphasizes the overlap in the site earthquake times, and gives more weight to the narrowest, best-defined PDFs. The product method yields smaller earthquake-timing uncertainties compared to taking the mean of the site PDFs, but is best suited to earthquakes constrained by broad, overlapping site PDFs. We calculated segment-wide earthquake recurrence intervals and uncertainties using a Monte Carlo model. Five surface-faulting earthquakes occurred on the WS at about 5.9, 4.5, 3.1, 1.1, and 0.6 ka. With the exception of the 1.1-ka event, we used the product method to define the earthquake times. The revised WS chronology yields a mean recurrence interval of 1.3 kyr (0.7–1.9-kyr estimated two-sigma [2σ] range based on interevent recurrence). These data help clarify the paleoearthquake history of the WS, including the important question of the timing and rupture extent of the most recent earthquake, and are essential to the improvement of earthquake-probability assessments for the Wasatch Front region.
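The bin-by-bin product of site PDFs described above can be illustrated with a minimal sketch; the function and array names, time bins, and the two Gaussian "site" PDFs below are invented for illustration and are not data from the study.

```python
# Minimal sketch of the "product method": site earthquake-timing PDFs defined
# on a common set of time bins are multiplied bin-by-bin and renormalized, so
# narrow (well-constrained) site PDFs dominate the combined PDF.
import numpy as np

def combine_site_pdfs(site_pdfs, bin_width):
    """site_pdfs: (n_sites, n_bins) array of per-site probability densities
    evaluated on the same time bins. Returns the normalized product PDF."""
    combined = np.prod(site_pdfs, axis=0)
    area = combined.sum() * bin_width
    if area == 0.0:
        raise ValueError("site PDFs do not overlap on the common time bins")
    return combined / area

# Example: two hypothetical Gaussian site PDFs around ~3.1 ka
t = np.arange(2.0, 4.0, 0.01)                      # time in ka
pdf_a = np.exp(-0.5 * ((t - 3.15) / 0.20) ** 2)
pdf_b = np.exp(-0.5 * ((t - 3.05) / 0.10) ** 2)
pdfs = np.vstack([pdf_a / (pdf_a.sum() * 0.01), pdf_b / (pdf_b.sum() * 0.01)])

product_pdf = combine_site_pdfs(pdfs, bin_width=0.01)
mean_pdf = pdfs.mean(axis=0)                       # the alternative "mean" combination
print("product-method mode: %.2f ka" % t[np.argmax(product_pdf)])
```

In this toy case the product PDF is visibly narrower than the mean of the site PDFs, which is the behavior the abstract attributes to the product method.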
NASA Astrophysics Data System (ADS)
Tan, Zhukui; Xie, Baiming; Zhao, Yuanliang; Dou, Jinyue; Yan, Tong; Liu, Bin; Zeng, Ming
2018-06-01
This paper presents a new integrated planning framework for effectively accommodating electric vehicles (EVs) in smart distribution systems (SDS). The proposed method incorporates various investment options available to the utility collectively, including distributed generation (DG), capacitors and network reinforcement. Using a back-propagation algorithm combined with cost-benefit analysis, the optimal network upgrade plan and the allocation and sizing of the selected components are determined, with the purpose of minimizing the total system capital and operating costs of DG and EV accommodation. Furthermore, a new iterative reliability test method is proposed. It checks the optimization results by subsequently simulating the reliability level of the planning scheme, and modifies the generation reserve margin to guarantee acceptable adequacy levels for each year of the planning horizon. Numerical results based on a 32-bus distribution system verify the effectiveness of the proposed method.
Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results
NASA Astrophysics Data System (ADS)
Lu, Da; He, Zhihua; Zhang, Huan
2018-01-01
This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems in the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a parameter that is artificially set to a fixed value. Although volume scattering overestimation can be partly solved by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that the proposed method has better performance in urban areas.
Application of Numerical Integration and Data Fusion in Unit Vector Method
NASA Astrophysics Data System (ADS)
Zhang, J.
2012-01-01
The Unit Vector Method (UVM) is a series of orbit determination methods designed by Purple Mountain Observatory (PMO) that have been applied extensively. It obtains the conditional equations for different kinds of data by projecting the basic equation onto different unit vectors, and it is well suited to weighting different kinds of data. High-precision data can play a major role in orbit determination, and the accuracy of orbit determination is improved obviously. The improved UVM (PUVM2) extended the UVM from initial orbit determination to orbit improvement, and unified initial orbit determination and orbit improvement dynamically. The precision and efficiency are improved further. In this thesis, further research work has been done based on the UVM. Firstly, with the improvement of observation methods and techniques, the types and precision of observational data have improved substantially, which in turn demands higher precision in orbit determination. Analytical perturbation theory cannot meet this requirement, so numerical integration for calculating the perturbation has been introduced into the UVM. The accuracy of the dynamical model then matches the accuracy of the real data, and the condition equations of the UVM are modified accordingly. The accuracy of orbit determination is improved further. Secondly, a data fusion method has been introduced into the UVM. The convergence mechanism and the defects of the weighting strategy in the original UVM have been clarified. These problems are solved in the new method: the calculation of the approximate state transition matrix is simplified, and the weighting strategy is improved for data of different dimensions and different precision. Results of orbit determination with simulated and real data show that the work of this thesis is effective: (1) After numerical integration is introduced into the UVM, the accuracy of orbit determination is improved obviously, and the method suits the high-accuracy data of available observation apparatus. Compared with classical differential improvement with numerical integration, its calculation speed is also improved obviously. (2) After the data fusion method is introduced into the UVM, the weight distribution accords rationally with the accuracy of the different kinds of data, all data are fully used, and the new method also exhibits good numerical stability and a rational weight distribution.
Gallart, X; Gomez, J C; Fernández-Valencia, J A; Combalía, A; Bori, G; García, S; Rios, J; Riba, J
2014-01-01
To evaluate the short-term results of an ultra high molecular weight polyethylene retentive cup in patients at high risk of dislocation, in either primary or revision surgery. Retrospective review of 38 cases in order to determine the rate of survival and failure analysis of a constrained cemented cup, with a mean follow-up of 27 months. We studied demographic data and complications, especially re-dislocations of the prosthesis, and also analyzed the likely causes of system failure. 21.05% (8 cases) were primary surgeries and 78.95% (30 cases) were revision surgeries. The overall survival by the Kaplan-Meier method was 70.7 months. During follow-up 3 patients died due to causes unrelated to surgery and 2 infections occurred. Twelve hips had undergone at least two previous surgeries. There were no cases of aseptic loosening. Four patients presented dislocation, all with a 22 mm head (P=.008). Our statistical analysis did not find a relationship between the cup abduction angle and implant failure (P=.22). The ultra high molecular weight polyethylene retentive cup evaluated in this series has provided satisfactory short-term results in hip arthroplasty patients at high risk of dislocation. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.
NASA Astrophysics Data System (ADS)
Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji
2002-06-01
This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable, multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is found to be valid and has the feature of good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
GTX Reference Vehicle Structural Verification Methods and Weight Summary
NASA Technical Reports Server (NTRS)
Hunter, J. E.; McCurdy, D. R.; Dunn, P. W.
2002-01-01
The design of a single-stage-to-orbit air breathing propulsion system requires the simultaneous development of a reference launch vehicle in order to achieve the optimal mission performance. Accordingly, for the GTX study a 300-lb payload reference vehicle was preliminarily sized to a gross liftoff weight (GLOW) of 238,000 lb. A finite element model of the integrated vehicle/propulsion system was subjected to the trajectory environment and subsequently optimized for structural efficiency. This study involved the development of aerodynamic loads mapped to finite element models of the integrated system in order to assess vehicle margins of safety. Commercially available analysis codes were used in the process along with some internally developed spreadsheets and FORTRAN codes specific to the GTX geometry for mapping of thermal and pressure loads. A mass fraction of 0.20 for the integrated system dry weight has been the driver for a vehicle design consisting of state-of-the-art composite materials in order to meet the rigid weight requirements. This paper summarizes the methodology used for preliminary analyses and presents the current status of the weight optimization for the structural components of the integrated system.
Esfandiari, Kasra; Abdollahi, Farzaneh; Talebi, Heidar Ali
2017-09-01
In this paper, an identifier-critic structure is introduced to find an online near-optimal controller for continuous-time nonaffine nonlinear systems having saturated control signal. By employing two Neural Networks (NNs), the solution of Hamilton-Jacobi-Bellman (HJB) equation associated with the cost function is derived without requiring a priori knowledge about system dynamics. Weights of the identifier and critic NNs are tuned online and simultaneously such that unknown terms are approximated accurately and the control signal is kept between the saturation bounds. The convergence of NNs' weights, identification error, and system states is guaranteed using Lyapunov's direct method. Finally, simulation results are performed on two nonlinear systems to confirm the effectiveness of the proposed control strategy. Copyright © 2017 Elsevier Ltd. All rights reserved.
Relations between elliptic multiple zeta values and a special derivation algebra
NASA Astrophysics Data System (ADS)
Broedel, Johannes; Matthes, Nils; Schlotterer, Oliver
2016-04-01
We investigate relations between elliptic multiple zeta values (eMZVs) and describe a method to derive the number of indecomposable elements of given weight and length. Our method is based on representing eMZVs as iterated integrals over Eisenstein series and exploiting the connection with a special derivation algebra. Its commutator relations give rise to constraints on the iterated integrals over Eisenstein series relevant for eMZVs and thereby allow us to count the indecomposable representatives. Conversely, the above connection suggests apparently new relations in the derivation algebra. At https://tools.aei.mpg.de/emzv we provide relations for eMZVs over a wide range of weights and lengths.
CASE STUDY CRITIQUE; UPPER CLINCH CASE STUDY
Case study critique: Upper Clinch case study (from Research on Methods for Integrating Ecological Economics and Ecological Risk Assessment: A Trade-off Weighted Index Approach to Integrating Economics and Ecological Risk Assessment). This critique answers the questions: 1) does ...
NASA Technical Reports Server (NTRS)
Shimabukuro, Yosio Edemir; Smith, James A.
1991-01-01
Constrained-least-squares and weighted-least-squares mixing models for generating fraction images derived from remote sensing multispectral data are presented. An experiment was performed considering three components within the pixels: eucalyptus, soil (understory), and shade. The generated fraction images for shade (shade image) derived from these two methods were compared by considering the performance and computer time. The derived shade images are related to the observed variation in forest structure, i.e., the fraction of inferred shade in the pixel is related to different eucalyptus ages.
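A hedged sketch of constrained least-squares spectral unmixing of the kind described above follows; the endmember spectra, the penalty-based sum-to-one constraint, and the synthetic pixel are illustrative assumptions, not the authors' data or code.

```python
# Constrained least-squares unmixing: per-pixel fractions of eucalyptus,
# soil (understory) and shade are estimated subject to nonnegativity and an
# (approximately enforced) sum-to-one constraint.
import numpy as np
from scipy.optimize import lsq_linear

# columns = endmembers (eucalyptus, soil, shade), rows = spectral bands
E = np.array([[0.05, 0.25, 0.02],
              [0.08, 0.30, 0.02],
              [0.45, 0.35, 0.03],
              [0.30, 0.40, 0.04]])

def unmix_pixel(pixel, endmembers, penalty=1e3):
    """Enforce sum-to-one softly by appending a heavily weighted row,
    and nonnegativity (and an upper bound of 1) via bounds."""
    A = np.vstack([endmembers, penalty * np.ones(endmembers.shape[1])])
    b = np.append(pixel, penalty * 1.0)
    res = lsq_linear(A, b, bounds=(0.0, 1.0))
    return res.x

pixel = 0.6 * E[:, 0] + 0.1 * E[:, 1] + 0.3 * E[:, 2]   # synthetic mixed pixel
fractions = unmix_pixel(pixel, E)
print("eucalyptus, soil, shade fractions:", np.round(fractions, 3))
```

A weighted-least-squares variant would simply scale each band of `A` and `b` by per-band weights before the solve.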
Inference of reactive transport model parameters using a Bayesian multivariate approach
NASA Astrophysics Data System (ADS)
Carniato, Luca; Schoups, Gerrit; van de Giesen, Nick
2014-08-01
Parameter estimation of subsurface transport models from multispecies data requires the definition of an objective function that includes different types of measurements. Common approaches are weighted least squares (WLS), where weights are specified a priori for each measurement, and weighted least squares with weight estimation (WLS(we)) where weights are estimated from the data together with the parameters. In this study, we formulate the parameter estimation task as a multivariate Bayesian inference problem. The WLS and WLS(we) methods are special cases in this framework, corresponding to specific prior assumptions about the residual covariance matrix. The Bayesian perspective allows for generalizations to cases where residual correlation is important and for efficient inference by analytically integrating out the variances (weights) and selected covariances from the joint posterior. Specifically, the WLS and WLS(we) methods are compared to a multivariate (MV) approach that accounts for specific residual correlations without the need for explicit estimation of the error parameters. When applied to inference of reactive transport model parameters from column-scale data on dissolved species concentrations, the following results were obtained: (1) accounting for residual correlation between species provides more accurate parameter estimation for high residual correlation levels whereas its influence for predictive uncertainty is negligible, (2) integrating out the (co)variances leads to an efficient estimation of the full joint posterior with a reduced computational effort compared to the WLS(we) method, and (3) in the presence of model structural errors, none of the methods is able to identify the correct parameter values.
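To make the WLS versus WLS(we) distinction above concrete, here is a generic sketch (not the authors' code): in WLS(we) the per-data-type variances that define the weights are re-estimated from the residuals and the fit is iterated. The linear model, data types, and noise levels are purely illustrative.

```python
# WLS with weight estimation: per-measurement-type variances are updated from
# residuals, so noisy data types are automatically down-weighted.
import numpy as np

def wls_we(design_blocks, data_blocks, n_iter=20):
    """design_blocks/data_blocks: lists with one (A_k, y_k) pair per data type.
    Returns parameters and the estimated variance of each data type."""
    sigma2 = [1.0] * len(design_blocks)                  # initial equal weights
    for _ in range(n_iter):
        A = np.vstack([Ak / np.sqrt(s) for Ak, s in zip(design_blocks, sigma2)])
        y = np.concatenate([yk / np.sqrt(s) for yk, s in zip(data_blocks, sigma2)])
        theta, *_ = np.linalg.lstsq(A, y, rcond=None)
        sigma2 = [np.mean((yk - Ak @ theta) ** 2)        # update weights from residuals
                  for Ak, yk in zip(design_blocks, data_blocks)]
    return theta, sigma2

rng = np.random.default_rng(0)
A1, A2 = rng.normal(size=(30, 2)), rng.normal(size=(30, 2))
true = np.array([1.5, -0.7])
y1 = A1 @ true + rng.normal(scale=0.1, size=30)          # precise data type
y2 = A2 @ true + rng.normal(scale=1.0, size=30)          # noisy data type
theta, sigma2 = wls_we([A1, A2], [y1, y2])
print(theta, np.sqrt(sigma2))
```

The multivariate Bayesian approach in the abstract goes further by also accounting for residual correlation between species, which this simple iteration ignores.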
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, Zach S.; Suyu, Sherry H.; Treu, Tommaso
2013-05-01
In order to use strong gravitational lens time delays to measure precise and accurate cosmological parameters, the effects of mass along the line of sight must be taken into account. We present a method to achieve this by constraining the probability distribution function of the effective line-of-sight convergence κ_ext. The method is based on matching the observed overdensity in the weighted number of galaxies to that found in mock catalogs with κ_ext obtained by ray-tracing through structure formation simulations. We explore weighting schemes based on projected distance, mass, luminosity, and redshift. This additional information reduces the uncertainty of κ_ext from σ_κ ≈ 0.06 to ≈0.04 for very overdense lines of sight like that of the system B1608+656. For more common lines of sight, σ_κ is reduced to ≲0.03, corresponding to an uncertainty of ≲3% on distance. This uncertainty has comparable effects on cosmological parameters to that arising from the mass model of the deflector and its immediate environment. Photometric redshifts based on g, r, i and K photometry are sufficient to constrain κ_ext almost as well as spectroscopic redshifts. As an illustration, we apply our method to the system B1608+656. Our most reliable κ_ext estimator gives σ_κ = 0.047, down from 0.065 using only galaxy counts. Although deeper multiband observations of the field of B1608+656 are necessary to obtain a more precise estimate, we conclude that griK photometry, in addition to spectroscopy to characterize the immediate environment, is an effective way to increase the precision of time-delay cosmography.
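The matching step described above can be illustrated with a toy sketch; the mock catalog, the relation between overdensity and convergence, and the observed overdensity value are all synthetic stand-ins, not the B1608+656 analysis.

```python
# Toy illustration of constraining kappa_ext: compute the weighted galaxy-count
# overdensity of an observed line of sight, select mock lines of sight with a
# similar overdensity, and use their ray-traced kappa_ext values as the
# constrained probability distribution.
import numpy as np

rng = np.random.default_rng(1)
n_mock = 200_000
# Hypothetical mock catalog: weighted count ratio (relative to the mean) and
# the external convergence of each mock line of sight.
mock_overdensity = rng.lognormal(mean=0.0, sigma=0.3, size=n_mock)
mock_kappa_ext = 0.05 * (mock_overdensity - 1.0) + rng.normal(0, 0.03, n_mock)

observed_overdensity = 1.8            # e.g. a very overdense line of sight
tolerance = 0.05

match = np.abs(mock_overdensity - observed_overdensity) < tolerance
kappa_samples = mock_kappa_ext[match]
print("matched mock lines of sight:", match.sum())
print("kappa_ext = %.3f +/- %.3f" % (kappa_samples.mean(), kappa_samples.std()))
```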
Variable Weight Fractional Collisions for Multiple Species Mixtures
2017-08-28
Variable weights for dynamic range: representing many particles by a continuous distribution (a discretized velocity distribution function, VDF) yields the Vlasov equation, but the collision integral remains a problem. Particle methods reduce the VDF to a set of delta functions with collisions between discrete velocities, but the tail of the distribution is poorly resolved, and the tail is critical to inelastic collisions. Variable weights permit extra degrees of freedom in ...
Statistics of some atmospheric turbulence records relevant to aircraft response calculations
NASA Technical Reports Server (NTRS)
Mark, W. D.; Fischer, R. W.
1981-01-01
Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.
NASA Astrophysics Data System (ADS)
Shen, Lin; Xie, Liangxu; Yang, Mingjun
2017-04-01
Conformational sampling on a rugged energy landscape is always a challenge in computer simulations. The recently developed integrated tempering sampling, together with its selective variant (SITS), has emerged as a powerful tool for exploring the free energy landscape or functional motions of various systems. The estimation of weighting factors constitutes a critical step in these methods and requires accurate calculation of the partition function ratios between different thermodynamic states. In this work, we propose a new adaptive update algorithm to compute the weighting factors based on the weighted histogram analysis method (WHAM). The adaptive-WHAM algorithm with SITS is then applied to study the thermodynamic properties of several representative peptide systems solvated in an explicit water box. The performance of the new algorithm is validated in simulations of these solvated peptide systems. We anticipate more applications of this coupled optimisation and production algorithm to other complicated systems such as biochemical reactions in solution.
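For orientation, a generic WHAM self-consistency iteration is sketched below: it estimates the dimensionless free energies (log partition-function ratios) of several temperature states from pooled samples, which is the kind of quantity the SITS weighting factors require. This is a standard textbook iteration on a synthetic harmonic toy system, not the authors' adaptive update.

```python
# Generic WHAM iteration for free energies of states sampled at several
# inverse temperatures; f_k = -ln Z_k up to a common constant (f_0 fixed to 0).
import numpy as np

def wham(energies_per_state, betas, n_iter=2000, tol=1e-10):
    """energies_per_state: list of 1D arrays of potential energies sampled at
    each inverse temperature betas[k]."""
    N = np.array([len(e) for e in energies_per_state], dtype=float)
    U = np.concatenate(energies_per_state)          # pooled samples
    betas = np.asarray(betas)
    f = np.zeros(len(betas))
    for _ in range(n_iter):
        # log of sum_l N_l exp(f_l - beta_l * U_n), evaluated for every sample n
        log_den = np.logaddexp.reduce(
            np.log(N)[:, None] + f[:, None] - betas[:, None] * U[None, :], axis=0)
        f_new = -np.array([np.logaddexp.reduce(-b * U - log_den) for b in betas])
        f_new -= f_new[0]                           # fix the additive constant
        if np.max(np.abs(f_new - f)) < tol:
            return f_new
        f = f_new
    return f

rng = np.random.default_rng(2)
betas = [1.0, 0.8, 0.6]
# Harmonic toy system U = x^2/2, so x ~ N(0, 1/sqrt(beta)) at each temperature
samples = [0.5 * rng.normal(scale=1 / np.sqrt(b), size=5000) ** 2 for b in betas]
print(wham(samples, betas))       # should approach 0.5*ln(beta_k/beta_0)
```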
NASA Astrophysics Data System (ADS)
Mai, J.; Cuntz, M.; Zink, M.; Schaefer, D.; Thober, S.; Samaniego, L. E.; Shafii, M.; Tolson, B.
2015-12-01
Hydrologic models are traditionally calibrated against discharge. Recent studies have shown, however, that only a few global model parameters are constrained using the integral discharge measurements. It is therefore advisable to use additional information to calibrate those models. Snow pack data, for example, could improve the parametrization of snow-related processes, which might be underrepresented when using only discharge. One common approach is to combine these multiple objectives into one single objective function and allow the use of a single-objective algorithm. Another strategy is to consider the different objectives separately and apply a Pareto-optimizing algorithm. Both methods are challenging in the choice of appropriate multiple objectives with either conflicting interests or the focus on different model processes. A first aim of this study is to compare the two approaches employing the mesoscale Hydrologic Model mHM at several distinct river basins over Europe and North America. This comparison will allow the identification of the single-objective solution on the Pareto front. It is elucidated whether this position is determined by the weighting and scaling of the multiple objectives when combining them into the single objective. The principal second aim is to guide the selection of proper objectives employing sensitivity analyses. These analyses are used to determine whether additional information would help to constrain additional model parameters. The additional information is either multiple data sources or multiple signatures of one measurement. It is evaluated whether specific discharge signatures can inform different parts of the hydrologic model. The results show that an appropriate selection of discharge signatures increased the number of constrained parameters by more than 50% compared to using only the NSE of the discharge time series. It is further assessed whether the use of these signatures imposes conflicting objectives on the hydrologic model. The usage of signatures is furthermore contrasted with the use of additional observations such as soil moisture or snow height. The gain from using an auxiliary dataset is determined using the parametric sensitivity of the respective modeled variable.
Mass-based design and optimization of wave rotors for gas turbine engine enhancement
NASA Astrophysics Data System (ADS)
Chan, S.; Liu, H.
2017-03-01
An analytic method aiming at mass properties was developed for the preliminary design and optimization of wave rotors. In the present method, we introduce the mass balance principle into the design and thus can predict and optimize the mass qualities as well as the performance of wave rotors. A dedicated least-square method with artificial weighting coefficients was developed to solve the over-constrained system in the mass-based design. This method and the adoption of the coefficients were validated by numerical simulation. Moreover, the problem of fresh air exhaustion (FAE) was put forward and analyzed, and exhaust gas recirculation (EGR) was investigated. Parameter analyses and optimization elucidated which designs would not only achieve the best performance, but also operate with minimum EGR and no FAE.
Calculating Time-Integral Quantities in Depletion Calculations
Isotalo, Aarno
2016-06-02
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
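The sketch below shows one way to read the idea above: append a pseudo-nuclide whose production rate is the constant-weight sum of the real atomic densities, so the same depletion solve also yields the step integral (and hence the step average) of that weighted sum. The toy two-nuclide chain, decay constants, and weights are invented, and this is our interpretation of the abstract rather than the authors' implementation.

```python
# Tally-nuclide idea on a toy decay chain: augment the burnup matrix with a
# tally row, then a single matrix-exponential solve gives both end-of-step
# densities and the time integral of w.n over the step.
import numpy as np
from scipy.linalg import expm

lam1, lam2 = 1e-3, 5e-4                    # decay constants (1/s), illustrative
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])             # nuclide 1 decays into nuclide 2
w = np.array([2.0, 1.0])                   # constant weights (e.g. energy terms)

# Augmented matrix: d/dt [n; tally] = [[A, 0], [w, 0]] [n; tally]
A_aug = np.zeros((3, 3))
A_aug[:2, :2] = A
A_aug[2, :2] = w

n0 = np.array([1.0e20, 0.0])
dt = 3600.0                                # one-hour step
state = expm(A_aug * dt) @ np.append(n0, 0.0)

n_end, tally_integral = state[:2], state[2]
print("end-of-step densities:", n_end)
print("step average of w.n  :", tally_integral / dt)
```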
Helicopter Control Energy Reduction Using Moving Horizontal Tail
Oktay, Tugrul; Sal, Firat
2015-01-01
Helicopter moving horizontal tail (i.e., MHT) strategy is applied in order to save helicopter flight control system (i.e., FCS) energy. For this purpose, complex, physics-based, control-oriented nonlinear helicopter models are used. Equations of the MHT are integrated into these models and they are together linearized around the straight level flight condition. A specific variance constrained control strategy, namely, output variance constrained control (i.e., OVC), is utilized for the helicopter FCS. Control energy savings due to this MHT idea with respect to a conventional helicopter are calculated. Parameters of the helicopter FCS and dimensions of the MHT are simultaneously optimized using a stochastic optimization method, namely, simultaneous perturbation stochastic approximation (i.e., SPSA). In order to observe the improvement in behavior with respect to classical controls, closed-loop analyses are performed. PMID:26180841
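A minimal, generic SPSA sketch follows, since the abstract leans on that optimizer; the quadratic test objective and the gain-sequence constants are illustrative defaults, not the helicopter FCS/MHT design problem.

```python
# Simultaneous perturbation stochastic approximation (SPSA): each iteration
# estimates the full gradient from only two noisy loss evaluations taken along
# a random +/-1 perturbation direction.
import numpy as np

def spsa(loss, theta0, n_iter=500, a=0.1, c=0.1, alpha=0.602, gamma=0.101, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        ak = a / k ** alpha                                 # step-size gain
        ck = c / k ** gamma                                 # perturbation gain
        delta = rng.choice([-1.0, 1.0], size=theta.shape)   # Bernoulli +/-1
        g_hat = (loss(theta + ck * delta) - loss(theta - ck * delta)) / (2 * ck) / delta
        theta = theta - ak * g_hat
    return theta

# Example: minimize a noisy quadratic
target = np.array([1.0, -2.0, 0.5])
noisy_loss = lambda th: np.sum((th - target) ** 2) + np.random.normal(scale=0.01)
print(spsa(noisy_loss, np.zeros(3)))
```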
Option pricing, stochastic volatility, singular dynamics and constrained path integrals
NASA Astrophysics Data System (ADS)
Contreras, Mauricio; Hojman, Sergio A.
2014-01-01
Stochastic volatility models have been widely studied and used in the financial world. The Heston model (Heston, 1993) [7] is one of the best known models to deal with this issue. These stochastic volatility models are characterized by the fact that they explicitly depend on a correlation parameter ρ which relates the two Brownian motions that drive the stochastic dynamics associated to the volatility and the underlying asset. Solutions to the Heston model in the context of option pricing, using a path integral approach, are found in Lemmens et al. (2008) [21] while in Baaquie (2007,1997) [12,13] propagators for different stochastic volatility models are constructed. In all previous cases, the propagator is not defined for extreme cases ρ=±1. It is therefore necessary to obtain a solution for these extreme cases and also to understand the origin of the divergence of the propagator. In this paper we study in detail a general class of stochastic volatility models for extreme values ρ=±1 and show that in these two cases, the associated classical dynamics corresponds to a system with second class constraints, which must be dealt with using Dirac’s method for constrained systems (Dirac, 1958,1967) [22,23] in order to properly obtain the propagator in the form of a Euclidean Hamiltonian path integral (Henneaux and Teitelboim, 1992) [25]. After integrating over momenta, one gets an Euclidean Lagrangian path integral without constraints, which in the case of the Heston model corresponds to a path integral of a repulsive radial harmonic oscillator. In all the cases studied, the price of the underlying asset is completely determined by one of the second class constraints in terms of volatility and plays no active role in the path integral.
Propeller/fan-pitch feathering apparatus
NASA Technical Reports Server (NTRS)
Schilling, Jan C. (Inventor); Adamson, Arthur P. (Inventor); Bathori, Julius (Inventor); Walker, Neil (Inventor)
1990-01-01
A pitch feathering system for a gas turbine driven aircraft propeller having multiple variable pitch blades utilizes a counter-weight linked to the blades. The weight is constrained to move, when effecting a pitch change, only in a radial plane and about an axis which rotates about the propeller axis. The system includes a linkage allowing the weight to move through a larger angle than the associated pitch change of the blade.
Pedersen, Birgith; Groenkjaer, Mette; Falkmer, Ursula; Delmar, Charlotte
Changes in weight and body composition among women during and after adjuvant antineoplastic treatment for breast cancer may influence long-term survival and quality of life. Research on factual weight changes is diverse and contrasting, and their influence on women's perception of body and self seems to be insufficiently explored. The aim of this study was to expand the understanding of the association between changes in weight and body composition and the women's perception of body and selves. A mixed-methods research design was used. Data consisted of weight and body composition measures from 95 women with breast cancer during the 18 months past surgery. Twelve women from this cohort were interviewed individually at 12 months. A linear mixed model and logistic regression were used to estimate changes of repeated measures and odds ratios. Interviews were analyzed guided by existential phenomenology. Joint displays and integrative mixed-methods interpretation demonstrated that even small weight gains, extended waist, and weight loss were associated with fearing recurrence of breast cancer. Perceiving an ambiguous transforming body, the women moved between a unified body subject and the body as an object dissociated into "I" and "it" while fighting against or accepting the body changes. Integrating the findings demonstrated that factual weight changes do not correspond with the perceived changes and may trigger existential threats. Transition to a new habitual body demands that health practitioners enter into joint narrative work to reveal how the changes impact the women's body and self-perception, independent of how they are displayed quantitatively.
Design, fabrication and acceptance testing of a zero gravity whole body shower, volume 1
NASA Technical Reports Server (NTRS)
1973-01-01
The effort to design a whole body shower for the space station prototype is reported. Clothes and dish washer/dryer concepts were formulated with consideration given to integrating such a system with the overall shower design. Water recycling methods to effect vehicle weight savings were investigated, and it was concluded that reusing wash and/or rinse water resulted in weight savings that were not sufficient to outweigh the added degree of hardware complexity. The formulation of preliminary and final designs for the shower is described. A detailed comparison of the air drag vs. vacuum pickup method was prepared, which indicated that the air drag concept results in more severe space station weight penalties; therefore, the preliminary system design was based on utilizing the vacuum pickup method. Tests were performed to determine the optimum methods of storing, heating and sterilizing the cleansing agent utilized in the shower; it was concluded that individual packages of pre-sterilized cleansing agent should be used. Integration features with the space station prototype system were defined and incorporated into the shower design as necessary.
Modeling PSInSAR time series without phase unwrapping
Zhang, L.; Ding, X.; Lu, Z.
2011-01-01
In this paper, we propose a least-squares-based method for multitemporal synthetic aperture radar interferometry that allows one to estimate deformations without the need of phase unwrapping. The method utilizes a series of multimaster wrapped differential interferograms with short baselines and focuses on arcs at which there are no phase ambiguities. An outlier detector is used to identify and remove the arcs with phase ambiguities, and a pseudoinverse of the variance-covariance matrix is used as the weight matrix of the correlated observations. The deformation rates at coherent points are estimated with a least squares model constrained by reference points. The proposed approach is verified with a set of simulated data.
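The estimation step described above can be sketched as a weighted least-squares solve in which the pseudoinverse of the variance-covariance matrix of the correlated arc observations serves as the weight matrix; the design matrix, covariance, and data below are synthetic placeholders, not InSAR data.

```python
# Weighted least squares with W = pinv(Sigma) for correlated observations,
# as a generic stand-in for the deformation-rate estimation described above.
import numpy as np

def weighted_ls(A, y, Sigma):
    """Solve min (y - A x)^T W (y - A x) with W = pinv(Sigma)."""
    W = np.linalg.pinv(Sigma)                 # handles a rank-deficient covariance
    N = A.T @ W @ A
    return np.linalg.solve(N, A.T @ W @ y)

rng = np.random.default_rng(3)
A = rng.normal(size=(40, 3))                  # design matrix (e.g. rate terms per arc)
x_true = np.array([5.0, -2.0, 0.3])
L = rng.normal(scale=0.2, size=(40, 40))
Sigma = L @ L.T                               # correlated observation covariance
y = A @ x_true + rng.multivariate_normal(np.zeros(40), Sigma)
print(weighted_ls(A, y, Sigma))
```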
Fusion of Low-Cost Imaging and Inertial Sensors for Navigation
2007-01-01
... an Integrated GPS/MEMS Inertial Navigation Package. In Proceedings of ION GNSS 2004, pp. 825-832, September 2004. [3] R. G. Brown and P. Y. Hwang ... tracking, with no a priori knowledge, is provided in [13]. An online (Extended Kalman Filter-based) method for calculating a trajectory by tracking ... transformation, effectively constraining the resulting correspondence search space. The algorithm was incorporated into an extended Kalman filter and ...
NASA Astrophysics Data System (ADS)
Cao, Guangxi; Zhang, Minjia; Li, Qingchen
2017-04-01
This study focuses on multifractal detrended cross-correlation analysis of the different volatility intervals of Mainland China, US, and Hong Kong stock markets. A volatility-constrained multifractal detrended cross-correlation analysis (VC-MF-DCCA) method is proposed to study the volatility conductivity of Mainland China, US, and Hong Kong stock markets. Empirical results indicate that fluctuation may be related to important activities in real markets. The Hang Seng Index (HSI) stock market is more influential than the Shanghai Composite Index (SCI) stock market. Furthermore, the SCI stock market is more influential than the Dow Jones Industrial Average stock market. The conductivity between the HSI and SCI stock markets is the strongest. HSI was the most influential market in the large fluctuation interval of 1991 to 2014. The autoregressive fractionally integrated moving average method is used to verify the validity of VC-MF-DCCA. Results show that VC-MF-DCCA is effective.
NASA Technical Reports Server (NTRS)
Lee, Jeh Won
1990-01-01
The objective is the theoretical analysis and the experimental verification of the dynamics and control of a two link flexible manipulator with a flexible parallel link mechanism. Nonlinear equations of motion of the lightweight manipulator are derived by the Lagrangian method in symbolic form to better understand the structure of the dynamic model. The resulting equations of motion have a structure which is useful to reduce the number of terms calculated, to check correctness, or to extend the model to higher order. A manipulator with a flexible parallel link mechanism is a constrained dynamic system whose equations are sensitive to numerical integration error. This constrained system is solved using singular value decomposition of the constraint Jacobian matrix. Elastic motion is expressed by the assumed mode method. Mode shape functions of each link are chosen using the load interfaced component mode synthesis. The discrepancies between the analytical model and the experiment are explained using a simplified and a detailed finite element model.
Modeling laser beam diffraction and propagation by the mode-expansion method.
Snyder, James J
2007-08-01
In the mode-expansion method for modeling propagation of a diffracted beam, the beam at the aperture can be expanded as a weighted set of orthogonal modes. The parameters of the expansion modes are chosen to maximize the weighting coefficient of the lowest-order mode. As the beam propagates, its field distribution can be reconstructed from the set of weighting coefficients and the Gouy phase of the lowest-order mode. We have developed a simple procedure to implement the mode-expansion method for propagation through an arbitrary ABCD matrix, and we have demonstrated that it is accurate in comparison with direct calculations of diffraction integrals and much faster.
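A small sketch of the beam bookkeeping the mode-expansion method relies on is given below: the complex q parameter of the lowest-order Gaussian mode is propagated through an ABCD matrix and its Gouy phase is tracked, so the field can later be reconstructed from the fixed set of weighting coefficients. The wavelength, waist, and propagation distance are illustrative; the expansion and reconstruction steps themselves are not shown.

```python
# q-parameter propagation through an ABCD matrix and Gouy-phase tracking for
# the fundamental Gaussian mode (free-space example).
import numpy as np

def propagate_q(q, abcd):
    A, B, C, D = abcd.ravel()
    return (A * q + B) / (C * q + D)

def gouy_phase(q):
    # q = z + i*zR for the fundamental mode; Gouy phase = arctan(z / zR)
    return np.arctan2(q.real, q.imag)

wavelength = 633e-9
w0 = 0.5e-3                                   # waist radius at the aperture (m)
zR = np.pi * w0 ** 2 / wavelength
q0 = 1j * zR                                  # beam waist at z = 0

d = 2.0
abcd = np.array([[1.0, d], [0.0, 1.0]])       # free-space section of length d
q1 = propagate_q(q0, abcd)
print("q after propagation:", q1)
print("Gouy phase accumulated over d (rad):", gouy_phase(q1) - gouy_phase(q0))
```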
Neethling, Ian; Jelsma, Jennifer; Ramma, Lebogang; Schneider, Helen; Bradshaw, Debbie
2016-01-01
The global burden of disease (GBD) 2010 study used a universal set of disability weights to estimate disability adjusted life years (DALYs) by country. However, it is not clear whether these weights can be applied universally in calculating DALYs to inform local decision-making. This study derived disability weights for a resource-constrained community in Cape Town, South Africa, and interrogated whether the GBD 2010 disability weights necessarily represent the preferences of economically disadvantaged communities. A household survey was conducted in Lavender Hill, Cape Town, to assess the health state preferences of the general public. The responses from a paired comparison valuation method were assessed using a probit regression. The probit coefficients were anchored onto the 0 to 1 disability weight scale by running a lowess regression on the GBD 2010 disability weights and interpolating the coefficients between the upper and lower limit of the smoothed disability weights. Heroin and opioid dependence had the highest disability weight of 0.630, whereas intellectual disability had the lowest (0.040). Untreated injuries ranked higher than severe mental disorders. There were some counterintuitive results, such as moderate (15th) and severe vision impairment (16th) ranking higher than blindness (20th). A moderate correlation between the disability weights of the local study and those of the GBD 2010 study was observed (R(2)=0.440, p<0.05). This indicates that there was a relationship, although some conditions, such as untreated fracture of the radius or ulna, showed large variability in disability weights (0.488 in local study and 0.043 in GBD 2010). Respondents seemed to value physical mobility higher than cognitive functioning, which is in contrast to the GBD 2010 study. This study shows that not all health state preferences are universal. Studies estimating DALYs need to derive local disability weights using methods that are less cognitively demanding for respondents.
Panda, Rashmi; Puhan, N B; Panda, Ganapati
2018-02-01
Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.
Regional patterns of future runoff changes from Earth system models constrained by observation
NASA Astrophysics Data System (ADS)
Yang, Hui; Zhou, Feng; Piao, Shilong; Huang, Mengtian; Chen, Anping; Ciais, Philippe; Li, Yue; Lian, Xu; Peng, Shushi; Zeng, Zhenzhong
2017-06-01
In the recent Intergovernmental Panel on Climate Change assessment, multimodel ensembles (arithmetic model averaging, AMA) were constructed with equal weights given to Earth system models, without considering the performance of each model at reproducing current conditions. Here we use Bayesian model averaging (BMA) to construct a weighted model ensemble for runoff projections. Higher weights are given to models with better performance in estimating historical decadal mean runoff. Using the BMA method, we find that by the end of this century, the increase of global runoff (9.8 ± 1.5%) under Representative Concentration Pathway 8.5 is significantly lower than estimated from AMA (12.2 ± 1.3%). BMA presents a less severe runoff increase than AMA at northern high latitudes and a more severe decrease in Amazonia. Runoff decrease in Amazonia is stronger than the intermodel difference. The intermodel difference in runoff changes is mainly caused not only by precipitation differences among models, but also by evapotranspiration differences at the high northern latitudes.
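The contrast drawn above between equal-weight averaging (AMA) and performance-weighted averaging can be illustrated with a toy stand-in for BMA; the runoff numbers are invented, and the weights here come from a simple Gaussian likelihood of each model's historical bias, whereas a full BMA would fit weights and model variances jointly (e.g. with an EM algorithm).

```python
# Toy AMA vs performance-weighted ensemble for runoff projections.
import numpy as np

obs_hist = 100.0                                  # observed historical decadal mean runoff
model_hist = np.array([95.0, 104.0, 130.0, 98.0]) # models' historical estimates
model_future_change = np.array([8.0, 11.0, 20.0, 9.0])   # projected % change

sigma = 10.0                                      # assumed error scale of the historical fit
w = np.exp(-0.5 * ((model_hist - obs_hist) / sigma) ** 2)
w /= w.sum()                                      # higher weight for better historical fit

ama = model_future_change.mean()                  # equal weights
weighted = np.sum(w * model_future_change)        # performance-based weights
print("AMA projection: %.1f%%, weighted projection: %.1f%%" % (ama, weighted))
```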
Li, Lian-Hui; Mo, Rong
2015-01-01
The production task queue is of great significance for manufacturing resource allocation and scheduling decisions. Man-made, qualitative queue optimization methods perform poorly and are difficult to apply. A production task queue optimization method is proposed based on multi-attribute evaluation. According to the task attributes, a hierarchical multi-attribute model is established and the indicator quantization methods are given. To calculate the objective indicator weights, criteria importance through intercriteria correlation (CRITIC) is selected from three common methods. To calculate the subjective indicator weights, a BP neural network is used to determine the judge importance degree, and then a trapezoid fuzzy scale-rough AHP considering the judge importance degree is put forward. The balanced weight, which integrates the objective weight and the subjective weight, is calculated based on a multi-weight contribution balance model. The technique for order preference by similarity to an ideal solution (TOPSIS), improved by replacing the Euclidean distance with a relative entropy distance, is used to sequence the tasks and optimize the queue by the weighted indicator value. A case study is given to illustrate its correctness and feasibility.
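A sketch of the objective-weighting and ranking steps follows: CRITIC weights feed a TOPSIS step whose separation measure is a relative-entropy (generalized KL) distance rather than the Euclidean one. The decision matrix is invented, the subjective BP-neural-network weighting is omitted, and the entropy-distance variant used is one common formulation, not necessarily the authors'.

```python
# CRITIC objective weights + TOPSIS ranking with a relative-entropy distance.
import numpy as np

def critic_weights(X):
    """X: (alternatives, criteria) matrix of benefit-type indicator values."""
    Z = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)
    sigma = Z.std(axis=0)
    R = np.corrcoef(Z, rowvar=False)
    C = sigma * (1.0 - R).sum(axis=0)        # information content per criterion
    return C / C.sum()

def topsis_entropy(X, w, eps=1e-9):
    Z = X / np.sqrt((X ** 2).sum(axis=0))    # vector-normalized matrix
    V = np.clip(Z * w, eps, None)            # weighted values, kept positive
    best, worst = V.max(axis=0), V.min(axis=0)
    # generalized KL divergence D(p||q) = sum(p*log(p/q) - p + q)
    d_best = np.array([np.sum(best * np.log(best / v) + v - best) for v in V])
    d_worst = np.array([np.sum(worst * np.log(worst / v) + v - worst) for v in V])
    closeness = d_worst / (d_best + d_worst + eps)
    return np.argsort(-closeness), closeness

tasks = np.array([[7.0, 3.0, 9.0],           # rows: production tasks
                  [5.0, 8.0, 6.0],           # columns: benefit-type attributes
                  [9.0, 6.0, 4.0],
                  [6.0, 7.0, 7.0]])
w = critic_weights(tasks)
order, score = topsis_entropy(tasks, w)
print("CRITIC weights:", np.round(w, 3))
print("task queue (best first):", order, np.round(score, 3))
```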
Method of texturing a superconductive oxide precursor
DeMoranville, Kenneth L.; Li, Qi; Antaya, Peter D.; Christopherson, Craig J.; Riley, Jr., Gilbert N.; Seuntjens, Jeffrey M.
1999-01-01
A method of forming a textured superconductor wire includes constraining an elongated superconductor precursor between two constraining elongated members placed in contact therewith on opposite sides of the superconductor precursor, and passing the superconductor precursor with the two constraining members through flat rolls to form the textured superconductor wire. The method includes selecting constraining members of the desired cross-sectional shape and size to control the width of the formed superconductor wire. A textured superconductor wire formed by the method of the invention has regular-shaped, curved sides and is free of flashing. A rolling assembly for single-pass rolling of the elongated precursor superconductor includes two rolls, two constraining members, and a fixture for feeding the precursor superconductor and the constraining members between the rolls. In alternate embodiments of the invention, the rolls can have machined regions which will contact only the elongated constraining members and affect the lateral deformation and movement of those members during the rolling process.
76 FR 52229 - Establishment of Area Navigation Route Q-37; Texas
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-22
... route around potentially constrained airspace during convective weather events in west Texas. DATES... around potentially constrained airspace during convective weather events in west Texas. Additionally, the new route is being integrated into the existing severe weather national playbook routes to Houston, TX...
Cobb, G; Bland, R M
2013-01-01
To explore the financial implications of applying the WHO guidelines for the nutritional management of HIV-infected children in a rural South African HIV programme. WHO guidelines describe Nutritional Care Plans (NCPs) for three categories of HIV-infected children: NCP-A: growing adequately; NCP-B: weight-for-age z-score (WAZ) ≤-2 but no evidence of severe acute malnutrition (SAM), confirmed weight loss/growth curve flattening, or condition with increased nutritional needs (e.g. tuberculosis); NCP-C: SAM. In resource-constrained settings, children requiring NCP-B or NCP-C usually need supplementation to achieve the additional energy recommendation. We estimated the proportion of children initiating antiretroviral treatment (ART) in the Hlabisa HIV Programme who would have been eligible for supplementation in 2010. The cost of supplying 26 weeks of supplementation as a proportion of the cost of supplying ART to the same group was calculated. A total of 251 children aged 6 months to 14 years initiated ART. Eighty-eight required 6 months of NCP-B, including 41 with a WAZ ≤-2 (no evidence of SAM) and 47 with a WAZ >-2 with co-existent morbidities including tuberculosis. Additionally, 25 children had SAM and required 10 weeks of NCP-C followed by 16 weeks of NCP-B. Thus, 113 of 251 (45%) children were eligible for nutritional supplementation at an estimated overall cost of $11,136, using 2010 exchange rates. These costs are an estimated 11.6% addition to the cost of supplying 26 weeks of ART to the 251 children initiated. It is essential to address the nutritional needs of HIV-infected children to optimise their health outcomes. Nutritional supplementation should be integral to, and budgeted for in, HIV programmes. © 2012 Blackwell Publishing Ltd.
ERIC Educational Resources Information Center
Dadelo, Stanislav; Turskis, Zenonas; Zavadskas, Edmundas Kazimieras; Kacerauskas, Tomas; Dadeliene, Ruta
2016-01-01
To maximize the effectiveness of a decision, it is necessary to support decision-making with integrated methods. It can be assumed that subjective evaluation (considering only absolute values) is only remotely connected with the evaluation of real processes. Therefore, relying solely on these values in process management decision-making would be a…
Efficient approach to the free energy of crystals via Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Navascués, G.; Velasco, E.
2015-08-01
We present a general approach to compute the absolute free energy of a system of particles with constrained center of mass based on the Monte Carlo thermodynamic coupling integral method. The version of the Frenkel-Ladd approach [J. Chem. Phys. 81, 3188 (1984)], 10.1063/1.448024, which uses a harmonic coupling potential, is recovered. Also, we propose a different choice, based on one-particle square-well coupling potentials, which is much simpler, more accurate, and free from some of the difficulties of the Frenkel-Ladd method. We apply our approach to hard spheres and compare with the standard harmonic method.
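For reference, the thermodynamic coupling (switching) integral underlying both the harmonic Frenkel-Ladd variant and the square-well variant discussed above can be written generically as follows; the notation is ours, and the specific coupling potential and λ normalization differ between variants.

```latex
% Generic coupling integral for an absolute crystal free energy: the system of
% interest (lambda = 0) is connected to a reference of known free energy
% (lambda = 1) through a one-particle coupling potential U_c.
U_\lambda(\mathbf{r}^N) = U_0(\mathbf{r}^N) + \lambda\, U_{\mathrm{c}}(\mathbf{r}^N),
\qquad
F_{\lambda=0} \;=\; F_{\lambda=1} \;-\; \int_0^1 \bigl\langle U_{\mathrm{c}} \bigr\rangle_{\lambda}\, \mathrm{d}\lambda .
```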
Perualila-Tan, Nolen Joy; Shkedy, Ziv; Talloen, Willem; Göhlmann, Hinrich W H; Moerbeke, Marijke Van; Kasim, Adetayo
2016-08-01
The modern process of discovering candidate molecules in the early drug discovery phase includes a wide range of approaches to extract vital information from the intersection of biology and chemistry. A typical strategy in compound selection involves compound clustering based on chemical similarity to obtain representative, chemically diverse compounds (not incorporating potency information). In this paper, we propose an integrative clustering approach that makes use of both biological (compound efficacy) and chemical (structural features) data sources for the purpose of discovering a subset of compounds with aligned structural and biological properties. The datasets are integrated at the similarity level by assigning complementary weights to produce a weighted similarity matrix, serving as a generic input to any clustering algorithm. This new analysis workflow is a semi-supervised method since, after the determination of clusters, a secondary analysis is performed to find differentially expressed genes associated with the derived integrated cluster(s), to further explain the compound-induced biological effects inside the cell. In this paper, datasets from two drug development oncology projects are used to illustrate the usefulness of the weighted similarity-based clustering approach to integrate multi-source high-dimensional information to aid drug discovery. Compounds that are structurally and biologically similar to the reference compounds are discovered using this proposed integrative approach.
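A minimal sketch of the similarity-level integration described above follows; the similarity matrices, the weight value alpha, and the choice of average-linkage clustering are illustrative assumptions rather than the authors' pipeline.

```python
# Combine chemical and biological similarity matrices with complementary
# weights, then feed the weighted similarity to a standard clustering step.
import numpy as np
from scipy.cluster.hierarchy import average, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
n = 12                                           # compounds
S_chem = np.corrcoef(rng.normal(size=(n, 50)))   # stand-in structural similarity
S_bio = np.corrcoef(rng.normal(size=(n, 30)))    # stand-in efficacy similarity

alpha = 0.6                                      # complementary weights
S = alpha * S_chem + (1.0 - alpha) * S_bio

D = 1.0 - (S + S.T) / 2.0                        # similarity -> distance
np.fill_diagonal(D, 0.0)
labels = fcluster(average(squareform(D, checks=False)), t=3, criterion="maxclust")
print("cluster labels:", labels)
```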
Finite time step and spatial grid effects in δf simulation of warm plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sturdevant, Benjamin J., E-mail: benjamin.j.sturdevant@gmail.com; Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309; Parker, Scott E.
2016-01-15
This paper introduces a technique for analyzing time integration methods used with the particle weight equations in δf method particle-in-cell (PIC) schemes. The analysis applies to the simulation of warm, uniform, periodic or infinite plasmas in the linear regime and considers the collective behavior similar to the analysis performed by Langdon for full-f PIC schemes [1,2]. We perform both a time integration analysis and spatial grid analysis for a kinetic ion, adiabatic electron model of ion acoustic waves. An implicit time integration scheme is studied in detail for δf simulations using our weight equation analysis and for full-f simulations using the method of Langdon. It is found that the δf method exhibits a CFL-like stability condition for low temperature ions, which is independent of the parameter characterizing the implicitness of the scheme. The accuracy of the real frequency and damping rate due to the discrete time and spatial schemes is also derived using a perturbative method. The theoretical analysis of numerical error presented here may be useful for the verification of simulations and for providing intuition for the design of new implicit time integration schemes for the δf method, as well as understanding differences between δf and full-f approaches to plasma simulation.
Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E; Re, Matteo
2014-06-01
In the context of "network medicine", gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works proposed to integrate multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic studies focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim at providing an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the considered 708 medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, and the recently proposed kernelized score functions. The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different "informativeness" embedded in different functional networks, outperforms unweighted integration at 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further bio-medical investigation. Network integration is necessary to boost the performances of gene prioritization methods. Moreover the methods based on kernelized score functions can further enhance disease gene ranking results, by adopting both local and global learning strategies, able to exploit the overall topology of the network. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
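A generic random walk with restart (RWR) on a weighted combination of networks is sketched below as an illustration of the integration-plus-prioritization pipeline described above; it is not the authors' kernelized score functions, and the adjacency matrices, per-network weights, and seed genes are synthetic.

```python
# Weighted network integration followed by random walk with restart:
# scores rank unannotated genes by proximity to the seed (disease) genes.
import numpy as np

def rwr(W, seeds, restart=0.5, n_iter=1000, tol=1e-10):
    """W: (genes, genes) nonnegative adjacency; seeds: indices of known disease genes."""
    col = W.sum(axis=0)
    T = W / np.where(col == 0, 1.0, col)        # column-normalized transition matrix
    p0 = np.zeros(W.shape[0])
    p0[list(seeds)] = 1.0 / len(seeds)
    p = p0.copy()
    for _ in range(n_iter):
        p_new = (1 - restart) * T @ p + restart * p0
        if np.abs(p_new - p).max() < tol:
            break
        p = p_new
    return p

rng = np.random.default_rng(5)
n_genes, n_nets = 50, 3
nets = [np.abs(rng.normal(size=(n_genes, n_genes))) for _ in range(n_nets)]
nets = [(A + A.T) / 2 for A in nets]                 # make each network symmetric
weights = np.array([0.5, 0.3, 0.2])                  # per-network "informativeness"
W = sum(w * A for w, A in zip(weights, nets))        # weighted network integration

scores = rwr(W, seeds=[0, 1, 2])
candidates = [g for g in np.argsort(-scores) if g not in (0, 1, 2)]
print("top candidate genes:", candidates[:5])
```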
Flexible Energy Scheduling Tool for Integrating Variable Generation (FESTIV)
FESTIV combines commitment, security-constrained economic dispatch, and automatic generation control sub-models, so that different sub-model resolutions and operating strategies can be explored.
Optimizing Railroad Tank Car Safety Design to Reduce Hazardous Materials Transportation Risk
ERIC Educational Resources Information Center
Saat, Mohd Rapik
2009-01-01
The design of railroad tank cars is subject to structural and performance requirements and constrained by weight. They can be made safer by increasing tank thickness and adding various protective features, but these increase the weight and cost of the car and reduce its capacity and consequent transportation efficiency. Aircraft, automobiles and…
NASA Astrophysics Data System (ADS)
Druken, Bridget Kinsella
Lesson study, a teacher-led vehicle for inquiring into teacher practice through creating, enacting, and reflecting on collaboratively designed research lessons, has been shown to improve mathematics teacher practice in the United States, such as improving knowledge about mathematics, changing teacher practice, and developing communities of teachers. Though it has been described as a sustainable form of professional development, little research exists on what might support teachers in continuing to engage in lesson study after a grant ends. This qualitative and multi-case study investigates the sustainability of lesson study as mathematics teachers engage in a district scale-up lesson study professional experience after participating in a three-year California Mathematics Science Partnership (CaMSP) grant to improve algebraic instruction. To do so, I first provide a description of material (e.g. curricular materials and time), human (attending district trainings and interacting with mathematics coaches), and social (qualities like trust, shared values, common goals, and expectations developed through relationships with others) resources present in the context of two school districts as reported by participants. I then describe practices of lesson study reported to have continued. I also report on teachers' conceptions of what it means to engage in lesson study. I conclude by describing how these results suggest factors that supported and constrained teachers' in continuing lesson study. To accomplish this work, I used qualitative methods of grounded theory informed by a modified sustainability framework on interview, survey, and case study data about teachers, principals, and Teachers on Special Assignment (TOSAs). Four cases were selected to show the varying levels of lesson study practices that continued past the conclusion of the grant. Analyses reveal varying levels of integration, linkage, and synergy among both formally and informally arranged groups of teachers. High levels of integration and linkage among groups of teachers supported them in sustaining lesson study practices. Groups of teachers with low levels of integration but with linked individuals sustained some level of practices, whereas teachers with low levels of integration and linkage constrained them in continuing lesson study at their site. Additionally, teachers' visions of lesson study and its uses shaped the types of activities teachers engaged, with well-developed conceptions of lesson study supporting and limited visions constraining the ability to attract or align resources to continue lesson study practices. Principals' support, teacher autonomy, and cultures of collaboration or isolation were also factors that either supported or constrained teachers' ability to continue lesson study. These analyses provide practical implications on how to support mathematics teachers in continuing lesson study, and theoretical contributions on developing the construct of sustainability within mathematics education research.
Xu, Jiuping; Feng, Cuiying
2014-01-01
This paper presents an extension of the multimode resource-constrained project scheduling problem for a large scale construction project where multiple parallel projects and a fuzzy random environment are considered. By taking into account the most typical goals in project management, a cost/weighted makespan/quality trade-off optimization model is constructed. To deal with the uncertainties, a hybrid crisp approach is used to transform the fuzzy random parameters into fuzzy variables that are subsequently defuzzified using an expected value operator with an optimistic-pessimistic index. Then a combinatorial-priority-based hybrid particle swarm optimization algorithm is developed to solve the proposed model, where the combinatorial particle swarm optimization and priority-based particle swarm optimization are designed to assign modes to activities and to schedule activities, respectively. Finally, the results and analysis of a practical example at a large scale hydropower construction project are presented to demonstrate the practicality and efficiency of the proposed model and optimization method.
Canonical quantization of constrained systems and coadjoint orbits of Diff(S^1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scherer, W.M.
It is shown that Dirac's treatment of constrained Hamiltonian systems and Schwinger's action principle quantization lead to identical commutation relations. An explicit relation between the Lagrange multipliers in the action principle approach and the additional terms in the Dirac bracket is derived. The equivalence of the two methods is demonstrated in the case of the non-linear sigma model. Dirac's method is extended to superspace and this extension is applied to the chiral superfield. The Dirac brackets of the massive interacting chiral superfield are derived and shown to give the correct commutation relations for the component fields. The Hamiltonian of the theory is given and the Hamiltonian equations of motion are computed. They agree with the component field results. An infinite sequence of differential operators which are covariant under the coadjoint action of Diff(S^1) and analogous to Hill's operator is constructed. They map conformal fields of negative integer and half-integer weight to their dual space. Some properties of these operators are derived and possible applications are discussed. The Korteweg-de Vries equation is formulated as a coadjoint orbit of Diff(S^1).
Liu, Derong; Yang, Xiong; Wang, Ding; Wei, Qinglai
2015-07-01
The design of stabilizing controller for uncertain nonlinear systems with control constraints is a challenging problem. The constrained-input coupled with the inability to identify accurately the uncertainties motivates the design of stabilizing controller based on reinforcement-learning (RL) methods. In this paper, a novel RL-based robust adaptive control algorithm is developed for a class of continuous-time uncertain nonlinear systems subject to input constraints. The robust control problem is converted to the constrained optimal control problem with appropriately selecting value functions for the nominal system. Distinct from typical action-critic dual networks employed in RL, only one critic neural network (NN) is constructed to derive the approximate optimal control. Meanwhile, unlike initial stabilizing control often indispensable in RL, there is no special requirement imposed on the initial control. By utilizing Lyapunov's direct method, the closed-loop optimal control system and the estimated weights of the critic NN are proved to be uniformly ultimately bounded. In addition, the derived approximate optimal control is verified to guarantee the uncertain nonlinear system to be stable in the sense of uniform ultimate boundedness. Two simulation examples are provided to illustrate the effectiveness and applicability of the present approach.
NASA Astrophysics Data System (ADS)
Pathak, Savita; Mondal, Seema Sarkar
2010-10-01
A multi-objective inventory model of deteriorating items has been developed with a Weibull rate of decay, time-dependent demand, demand-dependent production, and time-varying holding cost, allowing shortages, in fuzzy environments for non-integrated and integrated businesses. Here the objective is to maximize the profit from different deteriorating items under a space constraint. The impreciseness of inventory parameters and goals for the non-integrated business has been expressed by linear membership functions. The compromised solutions are obtained by different fuzzy optimization methods. To incorporate the relative importance of the objectives, different cardinal weights (crisp/fuzzy) have been assigned. The models are illustrated with numerical examples and the results of models with crisp and fuzzy weights are compared. The result for the model assuming an integrated business is obtained using the Generalized Reduced Gradient (GRG) method. The fuzzy integrated model with imprecise inventory cost is formulated to optimize the possibility/necessity measure of the fuzzy goal of the objective function by using the credibility measure of a fuzzy event and taking the fuzzy expectation. The results of the crisp/fuzzy integrated model are illustrated with numerical examples and compared.
Moody, J.A.; Meade, R.H.
1994-01-01
The efficacy of the method is evaluated by comparing the particle size distributions of sediment collected by the discharge-weighted pumping method with the particle size distributions of sediment collected by depth integration and separated by gravitational settling. The pumping method was found to undersample the suspended sand sized particles (>63 μm) but to collect a representative sample of the suspended silt and clay sized particles (<63 μm). The success of the discharge-weighted pumping method depends on how homogeneously the silt and clay sized particles (<63 μm) are distributed in the vertical direction in the river. The degree of homogeneity depends on the composition and degree of aggregation of the suspended sediment particles. -from Authors
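For concreteness, a minimal sketch of discharge weighting with made-up numbers (not the authors' data): the cross-section mean concentration is the sum of each vertical's concentration weighted by the discharge it carries, divided by the total discharge.

    # Hypothetical verticals across a river cross-section
    discharge = [120.0, 340.0, 410.0, 250.0]      # m^3/s carried by each vertical (assumed)
    silt_clay = [0.82, 0.78, 0.75, 0.80]          # g/L of <63 um particles (assumed)

    # Discharge-weighted mean concentration
    c_dw = sum(q * c for q, c in zip(discharge, silt_clay)) / sum(discharge)
    print(f"discharge-weighted concentration: {c_dw:.3f} g/L")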
Leme, Ana Carolina Barco; Thompson, Debbe; Lenz Dunker, Karin Louise; Nicklas, Theresa; Tucunduva Philippi, Sonia; Lopez, Tabbetha; Baranowski, Tom
2018-01-01
Introduction Obesity and eating disorders are public health problems that have lifelong financial and personal costs and common risk factors, for example, body dissatisfaction, weight teasing and disordered eating. Obesity prevention interventions might lead to the development of an eating disorder since focusing on weight may contribute to excessive concern with diet and weight. Therefore, the proposed research will assess whether integrating obesity and eating disorder prevention procedures (‘integrated approach’) do better than single approach interventions in preventing obesity among adolescents, and if integrated approaches influence weight-related outcomes. Methods and analysis Integrated obesity and eating disorder prevention interventions will be identified. Randomised controlled trials and quasi-experimental trials reporting data on adolescents ranging from 10 to 19 years of age from both sexes will be included. Outcomes of interest include body composition, unhealthy weight control behaviours and body satisfaction measurements. MEDLINE/PubMed, PsycINFO, Web of Science and SciELO will be searched. Data will be extracted independently by two reviewers using a standardised data extraction form. Trial quality will be assessed using the Cochrane Collaboration criteria. The effects of integrated versus single approach intervention studies will be compared using systematic review procedures. If an adequate number of studies report data on integrated interventions among similar populations (k>5), a meta-analysis with random effects will be conducted. Sensitivity analyses and meta-regression will be performed only if between-study heterogeneity is high (I2 ≥75%). Ethics and dissemination Ethics approval will not be required as this is a systematic review of published studies. The findings will be disseminated through conference presentations and peer-reviewed journals. PMID:29674372
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wells, J; Zhang, L; Samei, E
Purpose: To develop and validate more robust methods for automated lung, spine, and hardware detection in AP/PA chest images. This work is part of a continuing effort to automatically characterize the perceptual image quality of clinical radiographs. [Y. Lin et al. Med. Phys. 39, 7019–7031 (2012)] Methods: Our previous implementation of lung/spine identification was applicable to only one vendor. A more generalized routine was devised based on three primary components: lung boundary detection, fuzzy c-means (FCM) clustering, and a clinically-derived lung pixel probability map. Boundary detection was used to constrain the lung segmentations. FCM clustering produced grayscale- and neighborhood-based pixel classification probabilities which are weighted by the clinically-derived probability maps to generate a final lung segmentation. Lung centerlines were set along the left-right lung midpoints. Spine centerlines were estimated as a weighted average of body contour, lateral lung contour, and intensity-based centerline estimates. Centerline estimation was tested on 900 clinical AP/PA chest radiographs which included inpatient/outpatient, upright/bedside, men/women, and adult/pediatric images from multiple imaging systems. Our previous implementation further did not account for the presence of medical hardware (pacemakers, wires, implants, staples, stents, etc.) potentially biasing image quality analysis. A hardware detection algorithm was developed using a gradient-based thresholding method. The training and testing paradigm used a set of 48 images from which 1920 51×51-pixel ROIs with and 1920 ROIs without hardware were manually selected. Results: Acceptable lung centerlines were generated in 98.7% of radiographs while spine centerlines were acceptable in 99.1% of radiographs. Following threshold optimization, the hardware detection software yielded average true positive and true negative rates of 92.7% and 96.9%, respectively. Conclusion: Updated segmentation and centerline estimation methods in addition to new gradient-based hardware detection software provide improved data integrity control and error-checking for automated clinical chest image quality characterization across multiple radiography systems.
Gillet, Natacha; Berstis, Laura; Wu, Xiaojing; Gajdos, Fruzsina; Heck, Alexander; de la Lande, Aurélien; Blumberger, Jochen; Elstner, Marcus
2016-10-11
In this article, four methods to calculate charge transfer integrals in the context of bridge-mediated electron transfer are tested. These methods are based on density functional theory (DFT). We consider two perturbative Green's function effective Hamiltonian methods (first, at the DFT level of theory, using localized molecular orbitals; second, applying a tight-binding DFT approach, using fragment orbitals) and two constrained DFT implementations with either plane-wave or local basis sets. To assess the performance of the methods for through-bond (TB)-dominated or through-space (TS)-dominated transfer, different sets of molecules are considered. For through-bond electron transfer (ET), several molecules that were originally synthesized by Paddon-Row and co-workers for the deduction of electronic coupling values from photoemission and electron transmission spectroscopies, are analyzed. The tested methodologies prove to be successful in reproducing experimental data, the exponential distance decay constant and the superbridge effects arising from interference among ET pathways. For through-space ET, dedicated π-stacked systems with heterocyclopentadiene molecules were created and analyzed on the basis of electronic coupling dependence on donor-acceptor distance, structure of the bridge, and ET barrier height. The inexpensive fragment-orbital density functional tight binding (FODFTB) method gives similar results to constrained density functional theory (CDFT) and both reproduce the expected exponential decay of the coupling with donor-acceptor distances and the number of bridging units. These four approaches appear to give reliable results for both TB and TS ET and present a good alternative to expensive ab initio methodologies for large systems involving long-range charge transfers.
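A brief sketch of one analysis step mentioned above, fitting the exponential distance decay of the electronic coupling, |H_DA| ~ A*exp(-beta*R/2); the distances and coupling values are invented for illustration and are not taken from the paper.

    import numpy as np

    # Hypothetical donor-acceptor distances (Angstrom) and couplings (meV)
    R = np.array([5.0, 7.0, 9.0, 11.0])
    H = np.array([120.0, 35.0, 10.0, 3.0])

    # |H_DA| ~ A * exp(-beta * R / 2)  =>  ln|H_DA| is linear in R
    slope, intercept = np.polyfit(R, np.log(H), 1)
    beta = -2.0 * slope
    print(f"decay constant beta ~ {beta:.2f} 1/Angstrom")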
Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C
2017-08-01
The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
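A minimal sketch of the frequency-to-time step described above, assuming the per-frequency scattered field is already available (here a made-up delay-and-damping transfer function stands in for the non-singular boundary integral solutions):

    import numpy as np

    n, dt = 1024, 1e-3
    t = np.arange(n) * dt
    pulse = np.exp(-((t - 0.1) / 0.01) ** 2)          # incident plane-wave pulse (assumed shape)

    P_in = np.fft.rfft(pulse)                         # spectrum of the incident pulse
    freqs = np.fft.rfftfreq(n, dt)

    # Stand-in for the per-frequency response at one field point; in the actual
    # method this would come from the non-singular boundary integral solver.
    H = 0.8 * np.exp(-2j * np.pi * freqs * 0.05)      # assumed response: delay plus damping

    scattered = np.fft.irfft(P_in * H, n)             # space-time response via inverse FFT
    print(scattered.shape)

Because H depends only on the scatterer configuration, the response to a different incident pulse is obtained by reusing H with a new P_in, which is the efficiency noted in the abstract.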
Cheng, Jian; Deriche, Rachid; Jiang, Tianzi; Shen, Dinggang; Yap, Pew-Thian
2014-11-01
Spherical Deconvolution (SD) is commonly used for estimating fiber Orientation Distribution Functions (fODFs) from diffusion-weighted signals. Existing SD methods can be classified into two categories: 1) Continuous Representation based SD (CR-SD), where typically Spherical Harmonic (SH) representation is used for convenient analytical solutions, and 2) Discrete Representation based SD (DR-SD), where the signal profile is represented by a discrete set of basis functions uniformly oriented on the unit sphere. A feasible fODF should be non-negative and should integrate to unity throughout the unit sphere S^2. However, to our knowledge, most existing SH-based SD methods enforce non-negativity only on discretized points and not the whole continuum of S^2. Maximum Entropy SD (MESD) and Cartesian Tensor Fiber Orientation Distributions (CT-FOD) are the only SD methods that ensure non-negativity throughout the unit sphere. They are, however, computationally intensive and are susceptible to errors caused by numerical spherical integration. Existing SD methods are also known to overestimate the number of fiber directions, especially in regions with low anisotropy. DR-SD introduces additional error in peak detection owing to the angular discretization of the unit sphere. This paper proposes an SD framework, called Non-Negative SD (NNSD), to overcome all the limitations above. NNSD is significantly less susceptible to false-positive peaks, uses SH representation for efficient analytical spherical deconvolution, and allows accurate peak detection throughout the whole unit sphere. We further show that NNSD and most existing SD methods can be extended to work on multi-shell data by introducing a three-dimensional fiber response function. We evaluated NNSD in comparison with Constrained SD (CSD), a quadratic programming variant of CSD, MESD, and an L1-norm regularized non-negative least-squares DR-SD. Experiments on synthetic and real single-/multi-shell data indicate that NNSD improves estimation performance in terms of mean difference of angles, peak detection consistency, and anisotropy contrast between isotropic and anisotropic regions. Copyright © 2014 Elsevier Inc. All rights reserved.
Multi-symplectic integrators: numerical schemes for Hamiltonian PDEs that conserve symplecticity
NASA Astrophysics Data System (ADS)
Bridges, Thomas J.; Reich, Sebastian
2001-06-01
The symplectic numerical integration of finite-dimensional Hamiltonian systems is a well established subject and has led to a deeper understanding of existing methods as well as to the development of new very efficient and accurate schemes, e.g., for rigid body, constrained, and molecular dynamics. The numerical integration of infinite-dimensional Hamiltonian systems or Hamiltonian PDEs is much less explored. In this Letter, we suggest a new theoretical framework for generalizing symplectic numerical integrators for ODEs to Hamiltonian PDEs in R2: time plus one space dimension. The central idea is that symplecticity for Hamiltonian PDEs is directional: the symplectic structure of the PDE is decomposed into distinct components representing space and time independently. In this setting PDE integrators can be constructed by concatenating uni-directional ODE symplectic integrators. This suggests a natural definition of multi-symplectic integrator as a discretization that conserves a discrete version of the conservation of symplecticity for Hamiltonian PDEs. We show that this approach leads to a general framework for geometric numerical schemes for Hamiltonian PDEs, which have remarkable energy and momentum conservation properties. Generalizations, including development of higher-order methods, application to the Euler equations in fluid mechanics, application to perturbed systems, and extension to more than one space dimension are also discussed.
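As a small, concrete illustration of the ODE building block referred to above (not the multi-symplectic PDE scheme itself), a Stormer-Verlet/leapfrog step is symplectic and shows the characteristic bounded energy error for a harmonic oscillator:

    import numpy as np

    def leapfrog(q, p, grad_V, dt, steps):
        """Stormer-Verlet integrator for H(q, p) = p^2/2 + V(q)."""
        for _ in range(steps):
            p = p - 0.5 * dt * grad_V(q)   # half kick
            q = q + dt * p                 # drift
            p = p - 0.5 * dt * grad_V(q)   # half kick
        return q, p

    # Harmonic oscillator V(q) = q^2/2: the energy error stays bounded over long times.
    q, p = leapfrog(1.0, 0.0, lambda q: q, dt=0.1, steps=10000)
    print("energy drift:", 0.5 * (p**2 + q**2) - 0.5)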
Graph Learning in Knowledge Bases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Sean; Wang, Daisy Zhe
The amount of text data has been growing exponentially in recent years, giving rise to automatic information extraction methods that store text annotations in a database. The current state-of-the-art structured prediction methods, however, are likely to contain errors, and it is important to be able to manage the overall uncertainty of the database. On the other hand, the advent of crowdsourcing has enabled humans to aid machine algorithms at scale. As part of this project we introduced pi-CASTLE, a system that optimizes and integrates human and machine computing as applied to a complex structured prediction problem involving conditional random fields (CRFs). We proposed strategies grounded in information theory to select a token subset, formulate questions for the crowd to label, and integrate these labelings back into the database using a method of constrained inference. On both a text segmentation task over academic citations and a named entity recognition task over tweets we showed an order of magnitude improvement in accuracy gain over baseline methods.
NASA Astrophysics Data System (ADS)
Zhang, Ziyu; Jiang, Wen; Dolbow, John E.; Spencer, Benjamin W.
2018-01-01
We present a strategy for the numerical integration of partial elements with the eXtended finite element method (X-FEM). The new strategy is specifically designed for problems with propagating cracks through a bulk material that exhibits inelasticity. Following a standard approach with the X-FEM, as the crack propagates new partial elements are created. We examine quadrature rules that have sufficient accuracy to calculate stiffness matrices regardless of the orientation of the crack with respect to the element. This permits the number of integration points within elements to remain constant as a crack propagates, and for state data to be easily transferred between successive discretizations. In order to maintain weights that are strictly positive, we propose an approach that blends moment-fitted weights with volume-fraction based weights. To demonstrate the efficacy of this simple approach, we present results from numerical tests and examples with both elastic and plastic material response.
NASA Astrophysics Data System (ADS)
Kot, V. A.
2017-11-01
The modern state of approximate integral methods used in applications, where the processes of heat conduction and heat and mass transfer are of first importance, is considered. Integral methods have found wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, temperature and thermal definition of nanoparticles and nanoliquids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol’nik methods, the method of integral relations, the Goodman integral method of heat balance, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than the accuracy of numerical solutions.
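For concreteness, a sketch of the heat-balance-integral idea in its simplest textbook setting (a unit surface-temperature step on a half-space with an assumed quadratic profile; this standard example is not taken from the paper): the integral balance gives a penetration depth delta = sqrt(12*a*t), and the resulting surface flux lies within a few percent of the exact similarity solution.

    import numpy as np

    # Goodman's heat-balance integral with a quadratic profile T = (1 - x/delta)^2
    # for a unit surface-temperature step on a half-space.
    a, t = 1.0e-6, 100.0                       # assumed diffusivity (m^2/s) and time (s)
    delta = np.sqrt(12.0 * a * t)              # penetration depth from the integral balance
    grad_hbim = 2.0 / delta                    # approximate surface gradient |dT/dx| at x = 0
    grad_exact = 1.0 / np.sqrt(np.pi * a * t)  # exact similarity (erfc) solution
    print(f"relative error of HBIM surface flux: {grad_hbim / grad_exact - 1:.3%}")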
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Peiffer, Loic; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher (1984, Geochim. Cosmochim. Acta 46, 513–528) into a stand-alone computer program, to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions, and the geothermometry computations, are all implemented in the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization using existing parameter estimation software, such as iTOUGH2, PEST, or UCODE. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss.
Performance Analysis of Constrained Loosely Coupled GPS/INS Integration Solutions
Falco, Gianluca; Einicke, Garry A.; Malos, John T.; Dovis, Fabio
2012-01-01
The paper investigates approaches for loosely coupled GPS/INS integration. Error performance is calculated using a reference trajectory. A performance improvement can be obtained by exploiting additional map information (for example, a road boundary). A constrained solution has been developed and its performance compared with an unconstrained one. The case of GPS outages is also investigated showing how a Kalman filter that operates on the last received GPS position and velocity measurements provides a performance benefit. Results are obtained by means of simulation studies and real data. PMID:23202241
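A small sketch of one common way to impose a map constraint on a loosely coupled estimate, via a covariance-weighted projection of the state onto a linear road constraint; the road model, covariance, and numbers are illustrative assumptions, not the paper's algorithm.

    import numpy as np

    # Fused 2D position estimate and covariance from a loosely coupled GPS/INS filter (assumed)
    x = np.array([10.3, 4.8])
    P = np.array([[4.0, 0.5],
                  [0.5, 1.0]])

    # Road constraint: positions must satisfy d @ x = c (a straight road segment, for illustration)
    d = np.array([[0.0, 1.0]])   # "stay at northing = 5 m"
    c = np.array([5.0])

    # Covariance-weighted projection of the estimate onto the constraint surface
    K = P @ d.T @ np.linalg.inv(d @ P @ d.T)
    x_constrained = x - K @ (d @ x - c)
    P_constrained = P - K @ d @ P
    print(x_constrained)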
NASA Astrophysics Data System (ADS)
Coudeyras, N.; Sinou, J.-J.; Nacivet, S.
2009-01-01
Brake squeal noise is still an issue since it generates high warranty costs for the automotive industry and irritation for customers. Key parameters must be known in order to reduce it. Stability analysis is a common method of studying nonlinear phenomena and has been widely used by the scientific and the engineering communities for solving disc brake squeal problems. This type of analysis provides areas of stability versus instability for driven parameters, thereby making it possible to define design criteria. Nevertheless, this technique does not permit obtaining the vibrating state of the brake system and nonlinear methods have to be employed. Temporal integration is a well-known method for computing the dynamic solution but as it is time consuming, nonlinear methods such as the Harmonic Balance Method (HBM) are preferred. This paper presents a novel nonlinear method called the Constrained Harmonic Balance Method (CHBM) that works for nonlinear systems subject to flutter instability. An additional constraint-based condition is proposed that omits the static equilibrium point (i.e. the trivial static solution of the nonlinear problem that would be obtained by applying the classical HBM) and therefore focuses on predicting both the Fourier coefficients and the fundamental frequency of the stationary nonlinear system. The effectiveness of the proposed nonlinear approach is illustrated by an analysis of disc brake squeal. The brake system under consideration is a reduced finite element model of a pad and a disc. Both stability and nonlinear analyses are performed and the results are compared with a classical variable order solver integration algorithm. Therefore, the objectives of the following paper are to present not only an extension of the HBM (CHBM) but also to demonstrate an application to the specific problem of disc brake squeal with extensively parametric studies that investigate the effects of the friction coefficient, piston pressure, nonlinear stiffness and structural damping.
The added value of remote sensing products in constraining hydrological models
NASA Astrophysics Data System (ADS)
Nijzink, Remko C.; Almeida, Susana; Pechlivanidis, Ilias; Capell, René; Gustafsson, David; Arheimer, Berit; Freer, Jim; Han, Dawei; Wagener, Thorsten; Sleziak, Patrik; Parajka, Juraj; Savenije, Hubert; Hrachowitz, Markus
2017-04-01
The calibration of a hydrological model still depends on the availability of streamflow data, even though more additional sources of information (i.e. remote sensed data products) have become more widely available. In this research, the model parameters of four different conceptual hydrological models (HYPE, HYMOD, TUW, FLEX) were constrained with remotely sensed products. The models were applied over 27 catchments across Europe to cover a wide range of climates, vegetation and landscapes. The fluxes and states of the models were correlated with the relevant products (e.g. MOD10A snow with modelled snow states), after which new a-posteriori parameter distributions were determined based on a weighting procedure using conditional probabilities. Briefly, each parameter was weighted with the coefficient of determination of the relevant regression between modelled states/fluxes and products. In this way, final feasible parameter sets were derived without the use of discharge time series. Initial results show that improvements in model performance, with regard to streamflow simulations, are obtained when the models are constrained with a set of remotely sensed products simultaneously. In addition, we present a more extensive analysis to assess a model's ability to reproduce a set of hydrological signatures, such as rising limb density or peak distribution. Eventually, this research will enhance our understanding and recommendations in the use of remotely sensed products for constraining conceptual hydrological modelling and improving predictive capability, especially for data sparse regions.
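A minimal sketch of the weighting idea described above (illustrative only): each sampled parameter set is weighted by the coefficient of determination between its simulated state and a remote-sensing product, and the normalized weights define an a-posteriori distribution without using discharge; the toy model and data are stand-ins.

    import numpy as np

    rng = np.random.default_rng(0)

    def r_squared(sim, obs):
        ss_res = np.sum((obs - sim) ** 2)
        ss_tot = np.sum((obs - obs.mean()) ** 2)
        return max(0.0, 1.0 - ss_res / ss_tot)

    obs_product = rng.normal(size=50)                 # stand-in for e.g. a MOD10A-derived series
    param_sets = rng.uniform(0.5, 2.0, size=20)       # one hypothetical model parameter

    def run_model(p):                                 # stand-in for a conceptual model state
        return p * obs_product + rng.normal(scale=0.5, size=50)

    weights = np.array([r_squared(run_model(p), obs_product) for p in param_sets])
    weights /= weights.sum()                          # a-posteriori weights over parameter sets
    print(param_sets[np.argmax(weights)])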
Prior-knowledge-based feedforward network simulation of true boiling point curve of crude oil.
Chen, C W; Chen, D Z
2001-11-01
Theoretical results and practical experience indicate that feedforward networks can approximate a wide class of functional relationships very well. This property is exploited in modeling chemical processes. Given finite and noisy training data, it is important to encode prior knowledge in neural networks to improve the fit precision and the prediction ability of the model. In this paper, for three-layer feedforward networks subject to a monotonicity constraint, the unconstrained method, Joerding's penalty function method, the interpolation method, and the constrained optimization method are analyzed first. Then two novel methods, the exponential weight method and the adaptive method, are proposed. These methods are applied to simulating the true boiling point curve of a crude oil under the condition of monotonic increase. The simulation results show that the network models trained by the novel methods approximate the actual process well. Finally, all these methods are discussed and compared with each other.
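A sketch of the general idea behind an exponential-weight parameterization (an assumption-level illustration, not necessarily the authors' exact formulation): writing every weight as the exponential of a free parameter keeps the weights positive, and with increasing activation functions the network output is then monotonically non-decreasing in its input.

    import numpy as np

    rng = np.random.default_rng(1)
    u1, u2 = rng.normal(size=(1, 8)), rng.normal(size=(8, 1))  # free parameters
    b1, b2 = rng.normal(size=8), rng.normal(size=1)

    def net(x):
        # exp(.) keeps every weight positive, so the output is non-decreasing in x
        w1, w2 = np.exp(u1), np.exp(u2)
        h = np.tanh(x @ w1 + b1)
        return h @ w2 + b2

    x = np.linspace(-3, 3, 200).reshape(-1, 1)
    y = net(x).ravel()
    print("monotonic:", bool(np.all(np.diff(y) >= 0)))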
ERIC Educational Resources Information Center
Ramey, Christopher H.; Chrysikou, Evangelia G.; Reilly, Jamie
2013-01-01
Word learning is a lifelong activity constrained by cognitive biases that people possess at particular points in development. Age of acquisition (AoA) is a psycholinguistic variable that may prove useful toward gauging the relative weighting of different phonological, semantic, and morphological factors at different phases of language acquisition…
NASA Astrophysics Data System (ADS)
Polcari, Marco; Fernández, José; Albano, Matteo; Bignami, Christian; Palano, Mimmo; Stramondo, Salvatore
2017-12-01
In this work, we propose an improved algorithm to constrain the 3D ground displacement field induced by fast surface deformations due to earthquakes or landslides. Based on the integration of different data, we estimate the three displacement components by solving a function minimization problem derived from Bayes theory. We exploit the outcomes from SAR Interferometry (InSAR), the Global Navigation Satellite System (GNSS) and Multiple Aperture Interferometry (MAI) to retrieve the 3D surface displacement field. Any other source of information can be added to the processing chain in a simple way, since the algorithm is computationally efficient. Furthermore, we use intensity Pixel Offset Tracking (POT) to locate the discontinuity produced on the surface by a sudden deformation phenomenon and then improve the GNSS data interpolation. This approach avoids dependence on other information such as in-situ investigations, tectonic studies or knowledge of the data covariance matrix. We applied the method to investigate the ground deformation field related to the 2014 Mw 6.0 Napa Valley earthquake, which occurred a few kilometers from the San Andreas fault system.
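A compact sketch of the data-integration step with illustrative geometry and noise levels (not the paper's algorithm): each InSAR/MAI observation is a known projection of the unknown east-north-up displacement, GNSS contributes direct component observations, and a weighted least-squares solve recovers the 3D vector at a pixel.

    import numpy as np

    d_true = np.array([0.12, -0.05, 0.30])            # east, north, up displacement (m), assumed

    # Unit projection vectors: ascending LOS, descending LOS, MAI azimuth, and GNSS east/north/up
    G = np.array([[ 0.62, -0.11, 0.78],
                  [-0.60, -0.12, 0.79],
                  [ 0.10,  0.99, 0.00],
                  [ 1.00,  0.00, 0.00],
                  [ 0.00,  1.00, 0.00],
                  [ 0.00,  0.00, 1.00]])
    sigma = np.array([0.005, 0.005, 0.02, 0.003, 0.003, 0.008])  # per-observation noise (assumed)

    rng = np.random.default_rng(2)
    obs = G @ d_true + rng.normal(scale=sigma)

    W = np.diag(1.0 / sigma**2)                        # inverse-variance weighting
    d_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ obs)
    print(d_hat)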
Integrative Analysis of Many Weighted Co-Expression Networks Using Tensor Computation
Li, Wenyuan; Liu, Chun-Chi; Zhang, Tong; Li, Haifeng; Waterman, Michael S.; Zhou, Xianghong Jasmine
2011-01-01
The rapid accumulation of biological networks poses new challenges and calls for powerful integrative analysis tools. Most existing methods capable of simultaneously analyzing a large number of networks were primarily designed for unweighted networks, and cannot easily be extended to weighted networks. However, it is known that transforming weighted into unweighted networks by dichotomizing the edges of weighted networks with a threshold generally leads to information loss. We have developed a novel, tensor-based computational framework for mining recurrent heavy subgraphs in a large set of massive weighted networks. Specifically, we formulate the recurrent heavy subgraph identification problem as a heavy 3D subtensor discovery problem with sparse constraints. We describe an effective approach to solving this problem by designing a multi-stage, convex relaxation protocol, and a non-uniform edge sampling technique. We applied our method to 130 co-expression networks, and identified 11,394 recurrent heavy subgraphs, grouped into 2,810 families. We demonstrated that the identified subgraphs represent meaningful biological modules by validating against a large set of compiled biological knowledge bases. We also showed that the likelihood for a heavy subgraph to be meaningful increases significantly with its recurrence in multiple networks, highlighting the importance of the integrative approach to biological network analysis. Moreover, our approach based on weighted graphs detects many patterns that would be overlooked using unweighted graphs. In addition, we identified a large number of modules that occur predominately under specific phenotypes. This analysis resulted in a genome-wide mapping of gene network modules onto the phenome. Finally, by comparing module activities across many datasets, we discovered high-order dynamic cooperativeness in protein complex networks and transcriptional regulatory networks. PMID:21698123
Committed to kids: an integrated, 4-level team approach to weight management in adolescents.
Sothern, Melinda S; Schumacher, Heidi; von Almen, T Kristian; Carlisle, Lauren Keely; Udall, John N
2002-03-01
The integrated, 4-level approach of Committed to Kids is successful because of several factors: The sessions are designed to entertain the adolescents and promote initial success; The program features parent-training methods in short, interactive, educational sessions; In severely obese adolescents, the diet intervention results in noticeable weight loss that motivates the patient to continue; also, the improved exercise tolerance resulting from the weight loss promotes increased physical activity; and The program team provides consistent feedback-patients and their families receive results and updates every 3 months. Most importantly, the program is conducted in groups of families. The adolescent group dynamics and peer modeling are primary components of the successful management of obesity in youth.
Mixed Integer PDE Constrained Optimization for the Control of a Wildfire Hazard
2017-01-01
We introduce a discretization of the time horizon [0, T] by the set of times {0, Δt, ..., n_t Δt = T} and replace the constraints and objective with discrete counterparts. The PDE is replaced by a linear system obtained from a convergent finite difference method [5] and the integral is replaced by a quadrature formula. The domain Ω is discretized with an equidistant grid of spacing Δx.
Zhang, Yong; Otani, Akihito; Maginn, Edward J
2015-08-11
Equilibrium molecular dynamics is often used in conjunction with a Green-Kubo integral of the pressure tensor autocorrelation function to compute the shear viscosity of fluids. This approach is computationally expensive and is subject to a large amount of variability because the plateau region of the Green-Kubo integral is difficult to identify unambiguously. Here, we propose a time decomposition approach for computing the shear viscosity using the Green-Kubo formalism. Instead of one long trajectory, multiple independent trajectories are run and the Green-Kubo relation is applied to each trajectory. The averaged running integral as a function of time is fit to a double-exponential function with a weighting function derived from the standard deviation of the running integrals. Such a weighting function minimizes the uncertainty of the estimated shear viscosity and provides an objective means of estimating the viscosity. While the formal Green-Kubo integral requires an integration to infinite time, we suggest an integration cutoff time tcut, which can be determined by the relative values of the running integral and the corresponding standard deviation. This approach for computing the shear viscosity can be easily automated and used in computational screening studies where human judgment and intervention in the data analysis are impractical. The method has been applied to the calculation of the shear viscosity of a relatively low-viscosity liquid, ethanol, and relatively high-viscosity ionic liquid, 1-n-butyl-3-methylimidazolium bis(trifluoromethane-sulfonyl)imide ([BMIM][Tf2N]), over a range of temperatures. These test cases show that the method is robust and yields reproducible and reliable shear viscosity values.
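A sketch of the time-decomposition fit, with synthetic running integrals standing in for the Green-Kubo integrals of independent trajectories; the particular double-exponential form, noise levels, and weighting by the standard deviation are illustrative choices consistent with the description above, not the authors' exact settings.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(3)
    t = np.linspace(0.01, 50.0, 500)                 # time (ps), assumed grid

    # Synthetic running Green-Kubo integrals from several independent trajectories
    eta_true = 1.0
    traj = [eta_true * (1 - 0.7 * np.exp(-t / 2.0) - 0.3 * np.exp(-t / 15.0))
            + np.cumsum(rng.normal(scale=0.002, size=t.size)) for _ in range(8)]
    mean, std = np.mean(traj, axis=0), np.std(traj, axis=0)

    def double_exp(t, A, alpha, tau1, tau2):
        return A * (1 - alpha * np.exp(-t / tau1) - (1 - alpha) * np.exp(-t / tau2))

    # Weight the fit by the standard deviation of the running integrals across trajectories
    popt, _ = curve_fit(double_exp, t, mean, p0=[1.0, 0.5, 1.0, 10.0],
                        sigma=std + 1e-6, maxfev=10000)
    print("estimated shear viscosity (plateau A):", popt[0])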
Song, Ruizhuo; Lewis, Frank L; Wei, Qinglai
2017-03-01
This paper establishes an off-policy integral reinforcement learning (IRL) method to solve nonlinear continuous-time (CT) nonzero-sum (NZS) games with unknown system dynamics. The IRL algorithm is presented to obtain the iterative control and off-policy learning is used to allow the dynamics to be completely unknown. Off-policy IRL is designed to do policy evaluation and policy improvement in the policy iteration algorithm. Critic and action networks are used to obtain the performance index and control for each player. The gradient descent algorithm makes the update of critic and action weights simultaneously. The convergence analysis of the weights is given. The asymptotic stability of the closed-loop system and the existence of Nash equilibrium are proved. The simulation study demonstrates the effectiveness of the developed method for nonlinear CT NZS games with unknown system dynamics.
Photo-Spectrometer Realized In A Standard Cmos Ic Process
Simpson, Michael L.; Ericson, M. Nance; Dress, William B.; Jellison, Gerald E.; Sitter, Jr., David N.; Wintenberg, Alan L.
1999-10-12
A spectrometer comprises: a semiconductor having a silicon substrate, the substrate having integrally formed thereon a plurality of layers forming photo diodes, each of the photo diodes having an independent spectral response to an input spectrum within a spectral range of the semiconductor and each of the photo diodes formed only from at least one of the plurality of layers of the semiconductor above the substrate; and a signal processing circuit for modifying signals from the photo diodes with respective weights, the weighted signals being representative of a specific spectral response. The photo diodes have different junction depths and different polycrystalline silicon and oxide coverings. The signal processing circuit applies the respective weights and sums the weighted signals. In a corresponding method, a spectrometer is manufactured by manipulating only the standard masks, materials and fabrication steps of standard semiconductor processing, and integrating the spectrometer with a signal processing circuit.
A comparison of two multi-variable integrator windup protection schemes
NASA Technical Reports Server (NTRS)
Mattern, Duane
1993-01-01
Two methods are examined for limit and integrator wind-up protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method to be examined is the multi-variable version of the single-input, single-output, high gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme to be examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed and the advantages and disadvantages of both of the IWP methods are presented.
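A minimal single-input sketch of the conventional anti-windup idea referenced above: the difference between the saturated and commanded actuator values is fed back into the integrator through a high gain; the PI gains and first-order plant are illustrative, not the turbofan engine example.

    import numpy as np

    dt, Kp, Ki, Kaw = 0.01, 2.0, 4.0, 10.0     # assumed gains; Kaw is the anti-windup gain
    u_min, u_max = -1.0, 1.0
    x, integ, r = 0.0, 0.0, 5.0                # plant state, integrator state, setpoint

    for _ in range(2000):
        e = r - x
        u_cmd = Kp * e + Ki * integ            # unconstrained controller output
        u = np.clip(u_cmd, u_min, u_max)       # actuator limit
        # Conventional anti-windup: drive the integrator with the saturation error as well
        integ += dt * (e + Kaw * (u - u_cmd))
        x += dt * (-x + u)                     # simple first-order plant
    print(x, integ)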
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parrish, Robert M.; Sherrill, C. David, E-mail: sherrill@gatech.edu; Hohenstein, Edward G.
2014-05-14
We apply orbital-weighted least-squares tensor hypercontraction decomposition of the electron repulsion integrals to accelerate the coupled cluster singles and doubles (CCSD) method. Using accurate and flexible low-rank factorizations of the electron repulsion integral tensor, we are able to reduce the scaling of the most vexing particle-particle ladder term in CCSD from O(N^6) to O(N^5), with remarkably low error. Combined with a T1-transformed Hamiltonian, this leads to substantial practical accelerations against an optimized density-fitted CCSD implementation.
Bardhan, Jaydeep P; Altman, Michael D; Tidor, B; White, Jacob K
2009-01-01
We present a partial-differential-equation (PDE)-constrained approach for optimizing a molecule's electrostatic interactions with a target molecule. The approach, which we call reverse-Schur co-optimization, can be more than two orders of magnitude faster than the traditional approach to electrostatic optimization. The efficiency of the co-optimization approach may enhance the value of electrostatic optimization for ligand-design efforts-in such projects, it is often desirable to screen many candidate ligands for their viability, and the optimization of electrostatic interactions can improve ligand binding affinity and specificity. The theoretical basis for electrostatic optimization derives from linear-response theory, most commonly continuum models, and simple assumptions about molecular binding processes. Although the theory has been used successfully to study a wide variety of molecular binding events, its implications have not yet been fully explored, in part due to the computational expense associated with the optimization. The co-optimization algorithm achieves improved performance by solving the optimization and electrostatic simulation problems simultaneously, and is applicable to both unconstrained and constrained optimization problems. Reverse-Schur co-optimization resembles other well-known techniques for solving optimization problems with PDE constraints. Model problems as well as realistic examples validate the reverse-Schur method, and demonstrate that our technique and alternative PDE-constrained methods scale very favorably compared to the standard approach. Regularization, which ordinarily requires an explicit representation of the objective function, can be included using an approximate Hessian calculated using the new BIBEE/P (boundary-integral-based electrostatics estimation by preconditioning) method.
NASA Astrophysics Data System (ADS)
Bruynooghe, Michel M.
1998-04-01
In this paper, we present a robust method for automatic object detection and delineation in noisy complex images. The proposed procedure is a three stage process that integrates image segmentation by multidimensional pixel clustering and geometrically constrained optimization of deformable contours. The first step is to enhance the original image by nonlinear unsharp masking. The second step is to segment the enhanced image by multidimensional pixel clustering, using our reducible neighborhoods clustering algorithm that has a very interesting theoretical maximal complexity. Then, candidate objects are extracted and initially delineated by an optimized region merging algorithm, that is based on ascendant hierarchical clustering with contiguity constraints and on the maximization of average contour gradients. The third step is to optimize the delineation of previously extracted and initially delineated objects. Deformable object contours have been modeled by cubic splines. An affine invariant has been used to control the undesired formation of cusps and loops. Non linear constrained optimization has been used to maximize the external energy. This avoids the difficult and non reproducible choice of regularization parameters, that are required by classical snake models. The proposed method has been applied successfully to the detection of fine and subtle microcalcifications in X-ray mammographic images, to defect detection by moire image analysis, and to the analysis of microrugosities of thin metallic films. The later implementation of the proposed method on a digital signal processor associated to a vector coprocessor would allow the design of a real-time object detection and delineation system for applications in medical imaging and in industrial computer vision.
Neural network-based systems for handprint OCR applications.
Ganis, M D; Wilson, C L; Blue, J L
1998-01-01
Over the last five years or so, neural network (NN)-based approaches have been steadily gaining performance and popularity for a wide range of optical character recognition (OCR) problems, from isolated digit recognition to handprint recognition. We present an NN classification scheme based on an enhanced multilayer perceptron (MLP) and describe an end-to-end system for form-based handprint OCR applications designed by the National Institute of Standards and Technology (NIST) Visual Image Processing Group. The enhancements to the MLP are based on (i) neuron activation functions that reduce the occurrence of singular Jacobians; (ii) successive regularization to constrain the volume of the weight space; and (iii) Boltzmann pruning to constrain the dimension of the weight space. Performance characterization studies of NN systems evaluated at the first OCR systems conference and the NIST form-based handprint recognition system are also summarized.
XY vs X Mixer in Quantum Alternating Operator Ansatz for Optimization Problems with Constraints
NASA Technical Reports Server (NTRS)
Wang, Zhihui; Rubin, Nicholas; Rieffel, Eleanor G.
2018-01-01
Quantum Approximate Optimization Algorithm, further generalized as Quantum Alternating Operator Ansatz (QAOA), is a family of algorithms for combinatorial optimization problems. It is a leading candidate to run on emerging universal quantum computers to gain insight into quantum heuristics. In constrained optimization, penalties are often introduced so that the ground state of the cost Hamiltonian encodes the solution (a standard practice in quantum annealing). An alternative is to choose a mixing Hamiltonian such that the constraint corresponds to a constant of motion and the quantum evolution stays in the feasible subspace. Better performance of the algorithm is speculated due to a much smaller search space. We consider problems with a constant Hamming weight as the constraint. We also compare different methods of generating the generalized W-state, which serves as a natural initial state for the Hamming-weight constraint. Using graph-coloring as an example, we compare the performance of using XY model as a mixer that preserves the Hamming weight with the performance of adding a penalty term in the cost Hamiltonian.
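A small numerical check, on three qubits with dense matrices, of the statement that an XY mixer preserves Hamming weight (illustrative, not a QAOA implementation): the ring-XY mixing Hamiltonian commutes with the total number operator, so evolution under it stays within a fixed-Hamming-weight subspace.

    import numpy as np

    I2 = np.eye(2)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    N1 = np.array([[0, 0], [0, 1]], dtype=complex)    # |1><1|, counts one qubit's excitation

    def op(single, site, n=3):
        """Embed a single-qubit operator at the given site of an n-qubit register."""
        mats = [single if k == site else I2 for k in range(n)]
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    n = 3
    H_xy = sum(op(X, i) @ op(X, (i + 1) % n) + op(Y, i) @ op(Y, (i + 1) % n) for i in range(n))
    N_tot = sum(op(N1, i) for i in range(n))          # total Hamming-weight operator

    print(np.allclose(H_xy @ N_tot, N_tot @ H_xy))    # True: the XY mixer conserves Hamming weight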
Improving robot arm control for safe and robust haptic cooperation in orthopaedic procedures.
Cruces, R A Castillo; Wahrburg, J
2007-12-01
This paper presents the ongoing results of an effort to achieve the integration of a navigated cooperative robotic arm into computer-assisted orthopaedic surgery. A seamless integration requires the system acting in direct cooperation with the surgeon instead of replacing him. Two technical issues are discussed to improve the haptic operating modes for interactive robot guidance. The concept of virtual fixtures is used to restrict the range of motion of the robot according to pre-operatively defined constraints, and methodologies to assure a robust and accurate motion through singular arm configurations are investigated. A new method for handling singularities is proposed, which is superior to the commonly used damped-least-squares method. It produces no deviations of the end-effector in relation to the virtually constrained path. A solution to assure a good performance of a hands-on robotic arm at singularity configurations is proposed. (c) 2007 John Wiley & Sons, Ltd.
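The paper argues that its singularity-handling scheme avoids the end-effector deviations of the commonly used damped-least-squares (DLS) approach. The sketch below shows only that standard DLS baseline on a two-link planar arm near a singular configuration, where the plain pseudo-inverse produces very large joint velocities while the damped solve stays bounded; the arm geometry and damping constant are arbitrary assumptions.

```python
import numpy as np

def dls_joint_velocities(J, x_dot, damping=0.05):
    """Damped-least-squares resolution of joint velocities from a task-space
    velocity near a singular Jacobian J (the baseline compared against in the
    paper, not the authors' proposed scheme)."""
    m = J.shape[0]
    # q_dot = J^T (J J^T + lambda^2 I)^-1 x_dot: bounded even when J is singular.
    return J.T @ np.linalg.solve(J @ J.T + damping ** 2 * np.eye(m), x_dot)

# Toy 2-link planar arm, almost fully stretched out (near a singular pose).
l1 = l2 = 0.4
q = np.array([0.0, 1e-3])
J = np.array([[-l1 * np.sin(q[0]) - l2 * np.sin(q.sum()), -l2 * np.sin(q.sum())],
              [ l1 * np.cos(q[0]) + l2 * np.cos(q.sum()),  l2 * np.cos(q.sum())]])
x_dot = np.array([0.05, 0.0])                  # desired end-effector velocity
print("pseudo-inverse:", np.linalg.pinv(J) @ x_dot)   # blows up near singularity
print("damped LS     :", dls_joint_velocities(J, x_dot))
```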
Reflected stochastic differential equation models for constrained animal movement
Hanks, Ephraim M.; Johnson, Devin S.; Hooten, Mevin B.
2017-01-01
Movement for many animal species is constrained in space by barriers such as rivers, shorelines, or impassable cliffs. We develop an approach for modeling animal movement constrained in space by considering a class of constrained stochastic processes, reflected stochastic differential equations. Our approach generalizes existing methods for modeling unconstrained animal movement. We present methods for simulation and inference based on augmenting the constrained movement path with a latent unconstrained path and illustrate this augmentation with a simulation example and an analysis of telemetry data from a Steller sea lion (Eumetopias jubatus) in southeast Alaska.
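A minimal sketch of simulating a one-dimensional reflected movement process with an Euler-Maruyama scheme that folds proposed excursions back across a lower barrier. The Ornstein-Uhlenbeck drift, parameter values, and barrier are illustrative assumptions; this shows the generic reflected-SDE idea rather than the authors' model or their latent-path inference.

```python
import numpy as np

def reflected_ou_path(n_steps=5000, dt=0.01, theta=0.5, mu=1.0, sigma=1.0,
                      x0=0.5, barrier=0.0, seed=1):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck process reflected
    at a lower barrier (e.g. a shoreline at x = 0)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = theta * (mu - x[k])                       # pull toward mu
        prop = x[k] + drift * dt + sigma * rng.normal(0.0, np.sqrt(dt))
        # Reflect any proposed excursion below the barrier back inside.
        x[k + 1] = prop if prop >= barrier else 2 * barrier - prop
    return x

path = reflected_ou_path()
print("minimum of reflected path:", path.min())   # never below the barrier
```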
The Orbits of Jupiter’s Irregular Satellites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brozović, Marina; Jacobson, Robert A., E-mail: marina.brozovic@jpl.nasa.gov, E-mail: raj@jpl.nasa.gov
2017-04-01
We report on the improved ephemerides for the irregular Jovian satellites. We used a combination of numerically integrated equations of motion and a weighted least-squares algorithm to fit the astrometric measurements. The orbital fits for 59 satellites are summarized in terms of state vectors, post-fit residuals, and mean orbital elements. The current data set appears to be sensitive to the mass of Himalia, which is constrained to the range of GM = 0.13–0.28 km³ s⁻². Here, GM is the product of the Newtonian constant of gravitation, G, and the body's mass, M. Our analysis of the orbital uncertainties indicates that 11 out of 59 satellites are lost owing to short data arcs. The lost satellites hold provisional International Astronomical Union (IAU) designations and will likely need to be rediscovered.
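A generic sketch of the weighting step in a weighted least-squares fit: each observation is weighted by its inverse variance, and the normal equations give both the estimate and its formal covariance. The toy line-fit data are hypothetical stand-ins for astrometric measurements; this is not the JPL orbit-determination code.

```python
import numpy as np

def weighted_least_squares(A, y, sigma):
    """Solve min_x sum_i ((y_i - A_i x) / sigma_i)^2 via the weighted normal
    equations; returns the estimate and its formal covariance."""
    W = np.diag(1.0 / np.asarray(sigma) ** 2)       # inverse-variance weights
    cov = np.linalg.inv(A.T @ W @ A)                # formal covariance
    x_hat = cov @ (A.T @ W @ y)
    return x_hat, cov

# Toy example: fit a line to observations with heterogeneous uncertainties.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 40)
sigma = np.where(t < 5, 0.05, 0.5)                  # early points are better
y = 1.3 + 0.7 * t + rng.normal(0, sigma)
A = np.c_[np.ones_like(t), t]
x_hat, cov = weighted_least_squares(A, y, sigma)
print("estimate:", x_hat, " 1-sigma:", np.sqrt(np.diag(cov)))
```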
Application of an optimized winglet configuration to an advanced commercial transport
NASA Technical Reports Server (NTRS)
Shollenberger, C. A.
1979-01-01
The design is presented of an aircraft which employs an integrated wing and winglet lift system. Comparison was made with a conventional baseline configuration employing a high-aspect-ratio supercritical wing. An optimized wing-winglet combination was selected from four proposed configurations for which aerodynamic, structural, and weight characteristics were evaluated. Each candidate wing-winglet configuration was constrained to the same induced drag coefficient as the baseline aircraft. The selected wing-winglet configuration was resized for a specific medium-range mission requirement, and operating costs were estimated for a typical mission. Study results indicated that the wing-winglet aircraft was lighter and could complete the specified mission at less cost than the conventional wing aircraft. These indications were sensitive to the impact of flutter characteristics and, to a lesser extent, to the performance of the high-lift system. Further study in these areas is recommended to reduce uncertainty in future development.
Macroevolutionary developmental biology: Embryos, fossils, and phylogenies.
Organ, Chris L; Cooper, Lisa Noelle; Hieronymus, Tobin L
2015-10-01
The field of evolutionary developmental biology is broadly focused on identifying the genetic and developmental mechanisms underlying morphological diversity. Connecting the genotype with the phenotype means that evo-devo research often considers a wide range of evidence, from genetics and morphology to fossils. In this commentary, we provide an overview and framework for integrating fossil ontogenetic data with developmental data using phylogenetic comparative methods to test macroevolutionary hypotheses. We survey the vertebrate fossil record of preserved embryos and discuss how phylogenetic comparative methods can integrate data from developmental genetics and paleontology. Fossil embryos provide limited, yet critical, developmental data from deep time. They help constrain when developmental innovations first appeared during the history of life and also reveal the order in which related morphologies evolved. Phylogenetic comparative methods provide a powerful statistical approach that allows evo-devo researchers to infer the presence of nonpreserved developmental traits in fossil species and to detect discordant evolutionary patterns and processes across levels of biological organization. © 2015 Wiley Periodicals, Inc.
Numerical integration of discontinuous functions: moment fitting and smart octree
NASA Astrophysics Data System (ADS)
Hubrich, Simeon; Di Stolfo, Paolo; Kudela, László; Kollmannsberger, Stefan; Rank, Ernst; Schröder, Andreas; Düster, Alexander
2017-11-01
A fast and simple grid generation can be achieved by non-standard discretization methods where the mesh does not conform to the boundary or the internal interfaces of the problem. However, this simplification leads to discontinuous integrands for intersected elements and, therefore, standard quadrature rules do not perform well anymore. Consequently, special methods are required for the numerical integration. To this end, we present two approaches to obtain quadrature rules for arbitrary domains. The first approach is based on an extension of the moment fitting method combined with an optimization strategy for the position and weights of the quadrature points. In the second approach, we apply the smart octree, which generates curved sub-cells for the integration mesh. To demonstrate the performance of the proposed methods, we consider several numerical examples, showing that the methods lead to efficient quadrature rules, resulting in fewer integration points and high accuracy.
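A small sketch of the moment-fitting idea for a cut background cell: weights at fixed quadrature points are chosen so that low-order monomial moments of the physical part of the cell are reproduced, and the resulting rule is then applied to a different integrand. The circular interface, basis degree, point layout, and the brute-force reference moments are illustrative assumptions (in practice the reference moments would come from boundary integration), not the optimized point placement or smart octree of the paper.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

inside = lambda x, y: x ** 2 + y ** 2 <= 0.8 ** 2      # cut domain in [0,1]^2

# Basis: monomials x^i y^j with i + j <= 2.
exps = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]
basis = lambda x, y: np.array([x ** i * y ** j for i, j in exps])

# Reference moments of the cut domain from a dense brute-force grid.
g = (np.arange(2000) + 0.5) / 2000
XX, YY = np.meshgrid(g, g)
mask = inside(XX, YY)
cell_area = 1.0 / 2000 ** 2
moments = np.array([(XX[mask] ** i * YY[mask] ** j).sum() * cell_area
                    for i, j in exps])

# Quadrature points: a fixed 4x4 tensor Gauss-Legendre grid on the cell.
xi, _ = leggauss(4)
pts = np.array([(0.5 * (a + 1), 0.5 * (b + 1)) for a in xi for b in xi])
P = np.array([basis(x, y) for x, y in pts]).T          # moment-fitting matrix
w, *_ = np.linalg.lstsq(P, moments, rcond=None)        # fitted weights

# Test the fitted rule on a function that is not in the basis.
f = lambda x, y: np.cos(x) * np.exp(y)
quad = sum(wk * f(x, y) for wk, (x, y) in zip(w, pts))
ref = f(XX[mask], YY[mask]).sum() * cell_area
print("moment-fitted quadrature:", quad, " brute-force reference:", ref)
```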
Xiao, Yangfan; Yi, Shanzhen; Tang, Zhongqian
2017-12-01
Flood is the most common natural hazard in the world and has caused serious loss of life and property. Assessment of flood prone areas is of great importance for watershed management and reduction of potential loss of life and property. In this study, a framework of multi-criteria analysis (MCA) incorporating geographic information system (GIS), fuzzy analytic hierarchy process (AHP) and spatial ordered weighted averaging (OWA) method was developed for flood hazard assessment. The factors associated with geographical, hydrological and flood-resistant characteristics of the basin were selected as evaluation criteria. The relative importance of the criteria was estimated through the fuzzy AHP method. The OWA method was utilized to analyze the effects of different risk attitudes of the decision maker on the assessment result. The spatial ordered weighted averaging method with spatially variable risk preference was implemented in the GIS environment to integrate the criteria. The advantage of the proposed method is that it considers spatial heterogeneity in assigning risk preference in the decision-making process. The presented methodology has been applied to the area including Hanyang, Caidian and Hannan of Wuhan, China, where flood events occur frequently. The resulting flood hazard distribution shows a tendency toward high risk in populated and developed areas, especially the northeast part of Hanyang city, which has suffered frequent floods in history. The result indicates where the enhancement projects should be carried out first under the condition of limited resources. Finally, sensitivity of the criteria weights was analyzed to measure the stability of results with respect to the variation of the criteria weights. The flood hazard assessment method presented in this paper is adaptable for hazard assessment of a similar basin, which is of great significance for establishing countermeasures to mitigate losses of life and property. Copyright © 2017 Elsevier B.V. All rights reserved.
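A compact sketch of the ordered weighted averaging (OWA) operator at the heart of the risk-attitude analysis: criterion scores for one map cell are sorted and combined with order weights, and moving from "orlike" weights (emphasizing the highest scores) to "andlike" weights (emphasizing the lowest) changes the aggregated hazard value. The scores and weight vectors are hypothetical; the paper's spatially variable, fuzzy-AHP-weighted implementation is not reproduced here.

```python
import numpy as np

def owa(scores, order_weights):
    """Ordered weighted averaging: apply order weights to the scores sorted
    from largest to smallest; the order weights encode the decision maker's
    attitude."""
    z = np.sort(np.asarray(scores, float))[::-1]          # descending order
    w = np.asarray(order_weights, float)
    return float(z @ (w / w.sum()))

# Hypothetical standardized hazard scores of one map cell for five criteria
# (e.g. rainfall, slope, drainage density, elevation, land cover).
cell_scores = [0.9, 0.7, 0.4, 0.3, 0.1]

attitudes = {
    "orlike  (emphasize highest)": [0.5, 0.3, 0.1, 0.07, 0.03],
    "neutral (plain average)    ": [0.2, 0.2, 0.2, 0.2, 0.2],
    "andlike (emphasize lowest) ": [0.03, 0.07, 0.1, 0.3, 0.5],
}
for name, w in attitudes.items():
    print(f"{name} -> {owa(cell_scores, w):.3f}")
```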
A Review on Methods of Risk Adjustment and their Use in Integrated Healthcare Systems
Juhnke, Christin; Bethge, Susanne
2016-01-01
Introduction: Effective risk adjustment is an aspect that is given more and more weight against the background of competitive health insurance systems and vital healthcare systems. The objective of this review was to obtain an overview of existing models of risk adjustment as well as of crucial weights in risk adjustment. Moreover, the predictive performance of selected methods in international healthcare systems should be analysed. Theory and methods: A comprehensive, systematic literature review on methods of risk adjustment was conducted in terms of an encompassing, interdisciplinary examination of the related disciplines. Results: In general, several distinctions can be made: in terms of risk horizons, in terms of risk factors or in terms of the combination of indicators included. Within these, another differentiation by three levels seems reasonable: methods based on mortality risks, methods based on morbidity risks as well as those based on information on (self-reported) health status. Conclusions and discussion: After the final examination of different methods of risk adjustment it was shown that the methodology used to adjust risks varies. The models differ greatly in terms of their included morbidity indicators. The findings of this review can be used in the evaluation of integrated healthcare delivery systems and can be integrated into quality- and patient-oriented reimbursement of care providers in the design of healthcare contracts. PMID:28316544
NASA Astrophysics Data System (ADS)
Williams, C. R.
2012-12-01
The NASA Global Precipitation Mission (GPM) raindrop size distribution (DSD) Working Group is composed of NASA PMM Science Team Members and is charged to "investigate the correlations between DSD parameters using Ground Validation (GV) data sets that support, or guide, the assumptions used in satellite retrieval algorithms." Correlations between DSD parameters can be used to constrain the unknowns and reduce the degrees-of-freedom in under-constrained satellite algorithms. Over the past two years, the GPM DSD Working Group has analyzed GV data and has found that the correlation between the mass-weighted mean raindrop diameter (Dm) and the mass distribution standard deviation (Sm) follows a power-law relationship. This Dm-Sm power-law relationship appears to be robust and has been observed in surface disdrometer and vertically pointing radar observations. One benefit of a Dm-Sm power-law relationship is that a three-parameter DSD can be modeled with just two parameters: Dm and Nw, which determines the DSD amplitude. In order to incorporate observed DSD correlations into satellite algorithms, the GPM DSD Working Group is developing scattering and integral tables that can be used by satellite algorithms. Scattering tables describe the interaction of electromagnetic waves on individual particles to generate cross sections of backscattering, extinction, and scattering. Scattering tables are independent of the distribution of particles. Integral tables combine scattering table outputs with DSD parameters and DSD correlations to generate integrated normalized reflectivity, attenuation, scattering, emission, and asymmetry coefficients. Integral tables contain both frequency dependent scattering properties and cloud microphysics. The GPM DSD Working Group has developed scattering tables for raindrops at both Dual-frequency Precipitation Radar (DPR) frequencies and at all GMI radiometer frequencies less than 100 GHz. Scattering tables include Mie and T-matrix scattering with H- and V-polarization at the instrument view angles of nadir to 17 degrees (for DPR) and 48 & 53 degrees off nadir (for GMI). The GPM DSD Working Group is generating integral tables with GV observed DSD correlations and is performing sensitivity and verification tests. One advantage of keeping scattering tables separate from integral tables is that research can progress on the electromagnetic scattering of particles independent of cloud microphysics research. Another advantage of keeping the tables separate is that multiple scattering tables will be needed for frozen precipitation. Scattering tables are being developed for individual frozen particles based on habit, density and operating frequency. And a third advantage of keeping scattering and integral tables separate is that this framework provides an opportunity to communicate GV findings about DSD correlations into integral tables, and thus, into satellite algorithms.
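A short numerical sketch of the two DSD moments discussed above: the mass-weighted mean diameter Dm and the mass-spectrum standard deviation Sm, computed directly from a sampled N(D). For a gamma DSD the identity Sm = Dm/sqrt(4+mu) holds, which is one way a Dm-Sm relation collapses the three-parameter DSD to two parameters; the grid and gamma parameters below are illustrative assumptions, not GV data.

```python
import numpy as np

def dm_sm(N_of_D, D):
    """Mass-weighted mean diameter Dm and mass-spectrum standard deviation Sm
    from a drop size distribution N(D) sampled on a uniform diameter grid D (mm)."""
    m = D ** 3 * N_of_D                        # mass spectrum (up to a constant)
    Dm = (D * m).sum() / m.sum()
    Sm = np.sqrt((((D - Dm) ** 2) * m).sum() / m.sum())
    return Dm, Sm

# Gamma DSD N(D) = N0 D^mu exp(-lam D); for it, Sm = Dm / sqrt(4 + mu), so
# fixing a Dm-Sm relation pins down the shape and leaves only an amplitude
# (e.g. Nw) and Dm as free parameters.
D = np.linspace(1e-3, 10.0, 4000)
for mu, lam in [(0.0, 2.0), (3.0, 3.5), (6.0, 5.0)]:
    N = D ** mu * np.exp(-lam * D)
    Dm, Sm = dm_sm(N, D)
    print(f"mu={mu:.0f}: Dm={Dm:.3f} mm, Sm={Sm:.3f}, Dm/sqrt(4+mu)={Dm/np.sqrt(4+mu):.3f}")
```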
NASA Astrophysics Data System (ADS)
Saito, S.; Lin, W.
2014-12-01
Core-log integration has been applied for rock mechanics studies in scientific ocean drilling since 2007 in plate subduction margins such as the Nankai Trough, the Costa Rica margin, and the Japan Trench. The state of stress in the subduction wedge is essential for controlling the dynamics of the plate boundary fault. One of the common methods to estimate the stress state is analysis of borehole breakouts (drilling-induced borehole wall compressive failures) recorded in borehole image logs to determine the maximum horizontal principal stress orientation. Borehole breakouts can also yield a possible range of stress magnitudes based on a rock compressive strength criterion. In this study, we constrained the stress magnitudes based on two different rock failure criteria, the Mohr-Coulomb (MC) criterion and the modified Wiebols-Cook (mWC) criterion. As the MC criterion is the same as that under the unconfined compression state, only one rock parameter, the unconfined compressive strength (UCS), is needed to constrain stress magnitudes. The mWC criterion needs the UCS, Poisson's ratio and the internal frictional coefficient determined by triaxial compression experiments to take the intermediate principal stress effects on rock strength into consideration. We conducted various strength experiments on samples taken during IODP Expeditions 334/344 (Costa Rica Seismogenesis Project) to evaluate a reliable method to estimate stress magnitudes. Our results show that the effects of the intermediate principal stress on rock compressive failure occurring at a borehole wall are not negligible.
Methane Leak Detection and Emissions Quantification with UAVs
NASA Astrophysics Data System (ADS)
Barchyn, T.; Fox, T. A.; Hugenholtz, C.
2016-12-01
Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, 'drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges. UAVs typically collect large volumes of data that are close to the source (due to limited range) and often lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position ('leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind direction data, distance downwind from the source, and the non-Gaussian shape of close range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with close range plumes as measured with UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
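A hedged sketch of the first step described above: a ground-level Gaussian plume forward model is fitted iteratively to crosswind transect measurements to recover the source location and emission rate. The dispersion coefficients, receptor layout, noise level, and optimizer settings are hypothetical assumptions, not the study's calibrated model.

```python
import numpy as np
from scipy.optimize import least_squares

def plume(xy_rec, x_s, y_s, Q, u=3.0, wind_dir_deg=0.0):
    """Ground-level Gaussian plume concentration at receptor locations,
    with hypothetical power-law dispersion coefficients."""
    th = np.deg2rad(wind_dir_deg)
    dx = xy_rec[:, 0] - x_s
    dy = xy_rec[:, 1] - y_s
    xd = dx * np.cos(th) + dy * np.sin(th)           # downwind distance
    yc = -dx * np.sin(th) + dy * np.cos(th)          # crosswind offset
    xd = np.maximum(xd, 1.0)                         # avoid upwind blow-up
    sig_y = 0.22 * xd ** 0.9                         # hypothetical coefficients
    sig_z = 0.20 * xd ** 0.85
    return Q / (np.pi * u * sig_y * sig_z) * np.exp(-0.5 * (yc / sig_y) ** 2)

# Synthetic crosswind "UAV transect" downwind of a leak at (15, 40) m.
rng = np.random.default_rng(3)
rec = np.column_stack([np.full(60, 120.0), np.linspace(-60, 140, 60)])
c_obs = plume(rec, 15.0, 40.0, 2.0) * rng.normal(1.0, 0.05, 60)

# Iterative fit of (x_s, y_s, Q) against the forward model.
res = least_squares(lambda p: plume(rec, *p) - c_obs, x0=[0.0, 0.0, 1.0],
                    bounds=([-50, -50, 0.0], [100, 100, 10.0]))
print("estimated source (x, y, Q):", np.round(res.x, 2))
```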
Structure Refinement of Protein Low Resolution Models Using the GNEIMO Constrained Dynamics Method
Park, In-Hee; Gangupomu, Vamshi; Wagner, Jeffrey; Jain, Abhinandan; Vaidehi, Nagarajan
2012-01-01
The challenge in protein structure prediction using homology modeling is the lack of reliable methods to refine the low resolution homology models. Unconstrained all-atom molecular dynamics (MD) does not serve well for structure refinement due to its limited conformational search. We have developed and tested the constrained MD method, based on the Generalized Newton-Euler Inverse Mass Operator (GNEIMO) algorithm for protein structure refinement. In this method, the high-frequency degrees of freedom are replaced with hard holonomic constraints and a protein is modeled as a collection of rigid body clusters connected by flexible torsional hinges. This allows larger integration time steps and enhances the conformational search space. In this work, we have demonstrated the use of a constraint free GNEIMO method for protein structure refinement that starts from low-resolution decoy sets derived from homology methods. In the eight proteins with three decoys for each, we observed an improvement of ~2 Å in the RMSD to the known experimental structures of these proteins. The GNEIMO method also showed enrichment in the population density of native-like conformations. In addition, we demonstrated structural refinement using a “Freeze and Thaw” clustering scheme with the GNEIMO framework as a viable tool for enhancing localized conformational search. We have derived a robust protocol based on the GNEIMO replica exchange method for protein structure refinement that can be readily extended to other proteins and possibly applicable for high throughput protein structure refinement. PMID:22260550
Simpson, John; Raith, Andrea; Rouse, Paul; Ehrgott, Matthias
2017-10-09
Purpose: The operations research method of data envelopment analysis (DEA) shows promise for assessing radiotherapy treatment plan quality. The purpose of this paper is to consider the technical requirements for using DEA for plan assessment. Design/methodology/approach: In total, 41 prostate treatment plans were retrospectively analysed using the DEA method. The authors investigate the impact of DEA weight restrictions with reference to the ability to differentiate plan performance at a level of clinical significance. Patient geometry influences plan quality and the authors compare differing approaches for managing patient geometry within the DEA method. Findings: The input-oriented DEA method is the method of choice when performing plan analysis using the key undesirable plan metrics as the DEA inputs. When considering multiple inputs, it is necessary to constrain the DEA input weights in order to identify potential plan improvements at a level of clinical significance. All tested approaches for the consideration of patient geometry yielded consistent results. Research limitations/implications: This work is based on prostate plans and individual recommendations would therefore need to be validated for other treatment sites. Notwithstanding, the method that requires both optimised DEA weights according to clinical significance and appropriate accounting for patient geometric factors is universally applicable. Practical implications: DEA can potentially be used during treatment plan development to guide the planning process or alternatively used retrospectively for treatment plan quality audit. Social implications: DEA is independent of the planning system platform and therefore has the potential to be used for multi-institutional quality audit. Originality/value: To the authors' knowledge, this is the first published examination of the optimal approach in the use of DEA for radiotherapy treatment plan assessment.
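A generic sketch of an input-oriented DEA model of the kind mentioned in the findings, solved as a linear program: each plan's undesirable dose metrics are the inputs, a coverage metric is the output, and the efficiency score is the largest uniform input contraction supported by the other plans. The toy metrics are hypothetical, and the clinically motivated weight restrictions discussed in the paper are not included.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_oriented(X, Y, o):
    """Input-oriented CCR DEA efficiency of unit o.
    X: (n_units, n_inputs) undesirable metrics (smaller is better),
    Y: (n_units, n_outputs) desirable metrics (larger is better)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision variables: [theta, lambda_1 ... lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[o], X.T]            # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros(s), -Y.T]    # -sum_j lam_j y_rj <= -y_ro
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Toy data: 5 plans, 2 "inputs" (e.g. rectum and bladder dose metrics) and
# 1 "output" (e.g. target coverage); all numbers are hypothetical.
X = np.array([[60, 55], [52, 50], [70, 62], [55, 65], [50, 48]], float)
Y = np.array([[0.95], [0.96], [0.94], [0.95], [0.97]], float)
for o in range(len(X)):
    print(f"plan {o}: efficiency = {dea_input_oriented(X, Y, o):.3f}")
```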
NASA Astrophysics Data System (ADS)
Tiwari, Vaibhav
2018-07-01
The population analysis and estimation of merger rates of compact binaries is one of the important topics in gravitational wave astronomy. The primary ingredient in these analyses is the population-averaged sensitive volume. Typically, the sensitive volume of a given search to a given simulated source population is estimated by drawing signals from the population model and adding them to the detector data as injections. Subsequently, injections, which are simulated gravitational waveforms, are searched for by the search pipelines and their signal-to-noise ratio (SNR) is determined. Sensitive volume is estimated, by using Monte-Carlo (MC) integration, from the total number of injections added to the data, the number of injections that cross a chosen threshold on SNR and the astrophysical volume in which the injections are placed. So far, only fixed population models have been used in the estimation of binary black hole (BBH) merger rates. However, as the scope of population analysis broadens in terms of the methodologies and source properties considered, due to an increase in the number of observed gravitational wave (GW) signals, the procedure will need to be repeated multiple times at a large computational cost. In this letter we address the problem by performing a weighted MC integration. We show how a single set of generic injections can be weighted to estimate the sensitive volume for multiple population models, thereby greatly reducing the computational cost. The weights in this MC integral are the ratios of the output probabilities, determined by the population model and standard cosmology, and the injection probability, determined by the distribution function of the generic injections. Unlike analytical/semi-analytical methods, which usually estimate sensitive volume using single detector sensitivity, the method is accurate within statistical errors, comes at no added cost and requires minimal computational resources.
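A toy sketch of the reweighting idea: a single generic injection set is reused for several population models by weighting each injection by the ratio of the population density to the injection density, and the reweighted estimate is checked against direct sampling from each population. The one-dimensional mass parameter, detection model, and power-law populations are hypothetical stand-ins for the gravitational-wave setting.

```python
import numpy as np

rng = np.random.default_rng(7)

# Generic injection set: masses drawn once from a broad reference
# distribution, then marked "found" by a toy detection model.
m_inj = rng.uniform(5.0, 50.0, 200_000)
p_inj = np.full_like(m_inj, 1.0 / 45.0)                 # injection density
p_det = 1.0 / (1.0 + np.exp(-(m_inj - 20.0) / 4.0))     # toy detection prob.
found = rng.random(m_inj.size) < p_det

def powerlaw_pdf(m, alpha, lo=5.0, hi=50.0):
    """Normalized power-law population model p(m) proportional to m^-alpha."""
    norm = (hi ** (1 - alpha) - lo ** (1 - alpha)) / (1 - alpha)
    return m ** (-alpha) / norm

# Population-averaged detection efficiency for several population models,
# all from the SAME injection set, via importance weights p_pop / p_inj.
for alpha in (1.5, 2.3, 3.0):
    w = powerlaw_pdf(m_inj, alpha) / p_inj
    eff_reweighted = w[found].sum() / w.sum()
    # Direct check: draw fresh samples from the population itself.
    lo, hi = 5.0, 50.0
    u = rng.random(200_000)
    m_pop = ((hi ** (1 - alpha) - lo ** (1 - alpha)) * u + lo ** (1 - alpha)) ** (1 / (1 - alpha))
    eff_direct = (1.0 / (1.0 + np.exp(-(m_pop - 20.0) / 4.0))).mean()
    print(f"alpha={alpha}: reweighted={eff_reweighted:.4f}  direct={eff_direct:.4f}")
```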
Nayhouse, Michael; Kwon, Joseph Sang-Il; Orkoulas, G
2012-05-28
In simulation studies of fluid-solid transitions, the solid phase is usually modeled as a constrained system in which each particle is confined to move in a single Wigner-Seitz cell. The constrained cell model has been used in the determination of fluid-solid coexistence via thermodynamic integration and other techniques. In the present work, the phase diagram of such a constrained system of Lennard-Jones particles is determined from constant-pressure simulations. The pressure-density isotherms exhibit inflection points which are interpreted as the mechanical stability limit of the solid phase. The phase diagram of the constrained system contains a critical and a triple point. The temperature and pressure at the critical and the triple point are both higher than those of the unconstrained system due to the reduction in the entropy caused by the single occupancy constraint.
ERIC Educational Resources Information Center
Collado-Rivera, Maria; Branscum, Paul; Larson, Daniel; Gao, Haijuan
2018-01-01
Objective: The objective of this study was to evaluate the determinants of sugary drink consumption among overweight and obese adults attempting to lose weight using the Integrative Model of Behavioural Prediction (IMB). Design: Cross-sectional design. Method: Determinants of behavioural intentions (attitudes, perceived norms and perceived…
Optimization of flexible wing structures subject to strength and induced drag constraints
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1977-01-01
An optimization procedure for designing wing structures subject to stress, strain, and drag constraints is presented. The optimization method utilizes an extended penalty function formulation for converting the constrained problem into a series of unconstrained ones. Newton's method is used to solve the unconstrained problems. An iterative analysis procedure is used to obtain the displacements of the wing structure including the effects of load redistribution due to the flexibility of the structure. The induced drag is calculated from the lift distribution. Approximate expressions for the constraints used during major portions of the optimization process enhance the efficiency of the procedure. A typical fighter wing is used to demonstrate the procedure. Aluminum and composite material designs are obtained. The tradeoff between weight savings and drag reduction is investigated.
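A simplified sketch of converting a constrained sizing problem into a sequence of unconstrained ones with a quadratic exterior penalty and an increasing penalty parameter. The paper's extended penalty function blends interior and exterior behaviour and uses Newton's method; here a generic quasi-Newton minimizer and a hypothetical two-variable "weight versus strength" problem illustrate only the overall structure.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):                    # stand-in for structural weight
    return x[0] + x[1]

def constraints(x):                  # g_i(x) <= 0 is feasible
    return np.array([2.0 - x[0] * x[1],      # strength/stiffness requirement
                     0.1 - x[0],             # minimum gauge
                     0.1 - x[1]])

def penalized(x, r):
    g = constraints(x)
    return objective(x) + r * np.sum(np.maximum(g, 0.0) ** 2)

x = np.array([3.0, 0.5])                       # infeasible starting design
for r in (1.0, 10.0, 100.0, 1000.0):           # growing penalty parameter
    x = minimize(lambda z: penalized(z, r), x, method="BFGS").x
    print(f"r={r:7.1f}  x={np.round(x, 4)}  max g={constraints(x).max():+.4f}")
# The iterates approach the true constrained optimum x1 = x2 = sqrt(2) ~ 1.414.
```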
The design of multirate digital control systems
NASA Technical Reports Server (NTRS)
Berg, M. C.
1986-01-01
The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems. A series of compensator pairs are synthesized for each example problem. The successive loop closures, optimal control, and constrained optimization synthesis methods are compared, in the context of the two design problems.
Light-weight cryptography for resource-constrained environments
NASA Astrophysics Data System (ADS)
Baier, Patrick; Szu, Harold
2006-04-01
We give a survey of "light-weight" encryption algorithms designed to maximise security within tight resource constraints (limited memory, power consumption, processor speed, chip area, etc.). The target applications of such algorithms are RFIDs, smart cards, mobile phones, etc., which may store, process and transmit sensitive data, but at the same time do not always support conventional strong algorithms. A survey of existing algorithms is given and a new proposal is introduced.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
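A scalar, single-stage sketch of the entropy-constrained idea (not the paper's multi-stage residual VQ): each sample is assigned to the codeword minimizing distortion plus lambda times its first-order code length, so larger lambda trades distortion for a lower entropy rate. The source, codebook, and lambda values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.laplace(0.0, 1.0, 50_000)                      # test source
codebook = np.linspace(-6, 6, 25)                      # fixed codewords

def encode(x, codebook, p, lam):
    """Choose, per sample, the codeword minimizing d(x, c) + lam * (-log2 p(c))."""
    cost = (x[:, None] - codebook[None, :]) ** 2 - lam * np.log2(p)[None, :]
    return cost.argmin(axis=1)

# Estimate codeword probabilities with a plain nearest-neighbour pass, then
# re-encode with the entropy-constrained rule for several lambda values.
idx0 = encode(x, codebook, np.full(codebook.size, 1 / codebook.size), 0.0)
p = np.bincount(idx0, minlength=codebook.size) / x.size
p = np.maximum(p, 1e-12)                               # avoid log(0)

for lam in (0.0, 0.05, 0.5):
    idx = encode(x, codebook, p, lam)
    mse = np.mean((x - codebook[idx]) ** 2)
    used = np.bincount(idx, minlength=codebook.size) / x.size
    rate = -(used[used > 0] * np.log2(used[used > 0])).sum()
    print(f"lambda={lam:4.2f}: rate={rate:.2f} bits/sample, mse={mse:.4f}")
```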
NASA Astrophysics Data System (ADS)
Khode, Urmi B.
High Altitude Long Endurance (HALE) airships are platforms of interest due to their persistent observation and persistent communication capabilities. A novel HALE airship design configuration incorporates a composite sandwich propulsive hull duct between the front and the back of the hull for significant drag reduction via blown wake effects. The sandwich composite shell duct is subjected to hull pressure on its outer walls and flow suction on its inner walls, which results in in-plane wall compressive stress that may cause duct buckling. An approach is utilized that combines finite element stability analysis with a weight-minimization search algorithm that determines the ply layup and foam thickness. Its goal is to achieve an optimized solution for the configuration of the sandwich composite as a solution to a constrained minimum weight design problem, for which the shell duct remains stable with a prescribed margin of safety under prescribed loading. The stability analysis methodology is first verified by comparing published analytical results for a number of simple cylindrical shell configurations with FEM counterpart solutions obtained using the commercially available code ABAQUS. Results show that the approach is effective in identifying minimum weight composite duct configurations for a number of representative combinations of duct geometry, composite material and foam properties, and propulsive duct applied pressure loading.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alam, Aftab; Khan, S. N.; Wilson, Brian G.
2011-07-06
A numerically efficient, accurate, and easily implemented integration scheme over convex Voronoi polyhedra (VP) is presented for use in ab initio electronic-structure calculations. We combine a weighted Voronoi tessellation with isoparametric integration via Gauss-Legendre quadratures to provide rapidly convergent VP integrals for a variety of integrands, including those with a Coulomb singularity. We showcase the capability of our approach by first applying it to an analytic charge-density model achieving machine-precision accuracy with expected convergence properties in milliseconds. For contrast, we compare our results to those using shape-functions and show our approach is greater than 10⁵ times faster and 10⁷ times more accurate. Furthermore, a weighted Voronoi tessellation also allows for a physics-based partitioning of space that guarantees convex, space-filling VP while reflecting accurate atomic size and site charges, as we show within KKR methods applied to Fe-Pd alloys.
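A two-dimensional sketch of the isoparametric Gauss-Legendre ingredient: an integral over a straight-sided quadrilateral is evaluated by mapping Gauss points from the reference square through bilinear shape functions and weighting by the Jacobian determinant. This is only the textbook 2-D analogue of the scheme; the vertices, integrands, and quadrature order are arbitrary choices, not the paper's 3-D Voronoi-polyhedron implementation.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def integrate_quad(f, verts, n=4):
    """Gauss-Legendre integration of f(x, y) over a straight-sided
    quadrilateral via a bilinear isoparametric map from [-1, 1]^2.
    verts: (4, 2) corner coordinates in counter-clockwise order."""
    xi, w = leggauss(n)
    total = 0.0
    for a, wa in zip(xi, w):
        for b, wb in zip(xi, w):
            # Bilinear shape functions and their derivatives at (a, b).
            N = 0.25 * np.array([(1 - a) * (1 - b), (1 + a) * (1 - b),
                                 (1 + a) * (1 + b), (1 - a) * (1 + b)])
            dN = 0.25 * np.array([[-(1 - b), -(1 - a)], [ (1 - b), -(1 + a)],
                                  [ (1 + b),  (1 + a)], [-(1 + b),  (1 - a)]])
            x, y = N @ verts
            J = dN.T @ verts                      # 2x2 Jacobian of the map
            total += wa * wb * f(x, y) * abs(np.linalg.det(J))
    return total

# A skewed quadrilateral; its area and the integral of x*y as simple checks.
quad = np.array([[0.0, 0.0], [2.0, 0.2], [2.3, 1.5], [0.1, 1.2]])
print("area          :", integrate_quad(lambda x, y: 1.0, quad))
print("integral of xy:", integrate_quad(lambda x, y: x * y, quad))
```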
Analysis of the influencing factors of global energy interconnection development
NASA Astrophysics Data System (ADS)
Zhang, Yi; He, Yongxiu; Ge, Sifan; Liu, Lin
2018-04-01
Against the background of building the global energy interconnection and achieving green and low-carbon development, this paper takes account of the new round of energy restructuring and the trend of energy technology change and, based on the present situation of global and Chinese energy interconnection development, establishes an index system of the factors influencing global energy interconnection development. Subjective and objective weight analyses of the factors affecting the development of the global energy interconnection were conducted separately by network-level analysis and the entropy method, and the weights were then combined by additive integration, which gives the comprehensive weight of each influencing factor and a ranking of their influence.
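A small sketch of the objective half of this weighting scheme: entropy weights are computed from a decision matrix and then combined with subjective weights by simple additive integration. The matrix entries, the subjective weight vector, and the mixing coefficient are all hypothetical placeholders.

```python
import numpy as np

def entropy_weights(M):
    """Objective indicator weights by the entropy method.
    M: (n_alternatives, n_indicators) matrix of positive, benefit-type
    indicator values (real data would be normalized/direction-adjusted first)."""
    P = M / M.sum(axis=0)                              # column proportions
    k = 1.0 / np.log(M.shape[0])
    E = -k * (P * np.log(P, where=P > 0, out=np.zeros_like(P))).sum(axis=0)
    d = 1.0 - E                                        # divergence degree
    return d / d.sum()

# Hypothetical scores of 5 scenarios on 4 influencing factors.
M = np.array([[0.6, 0.9, 0.3, 0.7],
              [0.8, 0.4, 0.5, 0.6],
              [0.7, 0.8, 0.9, 0.5],
              [0.5, 0.7, 0.6, 0.9],
              [0.9, 0.6, 0.4, 0.8]])
w_obj = entropy_weights(M)
w_sub = np.array([0.35, 0.25, 0.25, 0.15])             # subjective weights (hypothetical)
alpha = 0.5                                            # additive integration
w = alpha * w_sub + (1 - alpha) * w_obj
print("objective (entropy) weights:", np.round(w_obj, 3))
print("combined weights           :", np.round(w / w.sum(), 3))
```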
Information Integration in Multiple Cue Judgment: A Division of Labor Hypothesis
ERIC Educational Resources Information Center
Juslin, Peter; Karlsson, Linnea; Olsson, Henrik
2008-01-01
There is considerable evidence that judgment is constrained to additive integration of information. The authors propose an explanation of why serial and additive cognitive integration can produce accurate multiple cue judgment both in additive and non-additive environments in terms of an adaptive division of labor between multiple representations.…
Specification and Enforcement of Semantic Integrity Constraints in Microsoft Access
ERIC Educational Resources Information Center
Dadashzadeh, Mohammad
2007-01-01
Semantic integrity constraints are business-specific rules that limit the permissible values in a database. For example, a university rule dictating that an "incomplete" grade cannot be changed to an A constrains the possible states of the database. To maintain database integrity, business rules should be identified in the course of database…
NASA Astrophysics Data System (ADS)
Sun, Kai; Chen, Chao; Du, Jinsong; Wang, Limin; Lei, Binhua
2018-01-01
Thickness estimation of a sedimentary basin is a complex geological problem, especially in an orogenic environment. Intense and multiple tectonic movements and climate changes result in inhomogeneity of sedimentary layers and basement configurations, making sedimentary structure modelling difficult. In this study, integrated geophysical methods, including gravity, magnetotelluric (MT) sounding and electrical resistivity tomography (ERT), were used to estimate basement relief to understand the geological structure and evolution of the eastern Barkol Basin in China. This basin formed with the uplift of the eastern Tianshan during the Cenozoic. The gravity anomaly map revealed the framework of the entire area, and ERT as well as MT sections reflected the geoelectric features of the Cenozoic two-layer distribution. Therefore, gravity data, constrained by MT, ERT and boreholes, were utilized to estimate the spatial distribution of the Quaternary layer. The gravity effect of the Quaternary layer related to the Tertiary layer was later subtracted to obtain the residual anomaly for inversion. For the Tertiary layer, the study area was divided into several parts because of lateral differences in density contrasts. Gravity data were interpreted to determine the density contrast constrained by the MT results. The basement relief can be verified by geological investigation, including the uplift process and regional tectonic setting. The agreement between geophysical survey and prior information from geology emphasizes the importance of integrated geophysical survey as a complementary means of geological studies in this region.
Lu, Chao; Chelikani, Sudhakar; Papademetris, Xenophon; Knisely, Jonathan P.; Milosevic, Michael F.; Chen, Zhe; Jaffray, David A.; Staib, Lawrence H.; Duncan, James S.
2011-01-01
External beam radiotherapy (EBRT) has become the preferred option for non-surgical treatment of prostate cancer and cervix cancer. In order to deliver higher doses to cancerous regions within these pelvic structures (i.e. prostate or cervix) while maintaining or lowering the doses to surrounding non-cancerous regions, it is critical to account for setup variation, organ motion, anatomical changes due to treatment and intra-fraction motion. In previous work, manual segmentation of the soft tissues is performed and then images are registered based on the manual segmentation. In this paper, we present an integrated automatic approach to multiple organ segmentation and nonrigid constrained registration, which can achieve these two aims simultaneously. The segmentation and registration steps are both formulated using a Bayesian framework, and they constrain each other using an iterative conditional model strategy. We also propose a new strategy to assess cumulative actual dose for this novel integrated algorithm, in order to both determine whether the intended treatment is being delivered and, potentially, whether or not a plan should be adjusted for future treatment fractions. Quantitative results show that the automatic segmentation produced results that have an accuracy comparable to manual segmentation, while the registration part significantly outperforms both rigid and non-rigid registration. Clinical application and evaluation of dose delivery show the superiority of the proposed method to the procedure currently used in clinical practice, i.e. manual segmentation followed by rigid registration. PMID:21646038
Medicine Delivery Device with Integrated Sterilization and Detection
NASA Technical Reports Server (NTRS)
Sheam, Michael J.; Greer, Harold F.; Manohara, Harish
2013-01-01
Sterile delivery devices can be created by integrating a medicine delivery instrument with surfaces that are coated with germicidal and anti-fouling material. This requires that a large-surface-area template be developed within a constrained volume to ensure good contact between the delivered medicine and the germicidal material. Both of these can be integrated using JPL-developed silicon nanotip or cryo-etch black silicon technologies with atomic layer deposition (ALD) coating of specific germicidal layers. Nanofabrication techniques that are used to produce a microfluidics device are also capable of synthesizing extremely high-surface-area templates in precise locations, and coating those surfaces with conformal films to manipulate their surface properties. This methodology has been successfully applied at JPL to produce patterned and coated silicon nanotips (also known as black silicon) to manipulate the hydrophilicity of surfaces to direct the spreading of fluids in microdevices. JPL's ALD technique is an ideal method to produce the highly conformal coatings required for this type of application. Certain materials, such as TiO2, have germicidal and anti-fouling properties when they are illuminated with UV light. The proposed delivery device contacts medicine with this high-surface-area black silicon surface coated with a thin germicidal film deposited conformally with ALD. The coating can also be illuminated with ultraviolet light for the purpose of sterilization or identification of the medicine itself. This constrained volume, located immediately prior to delivery into a patient, ensures that the medicine delivery device is inherently sterile.
Integration of progressive hedging and dual decomposition in stochastic integer programs
Watson, Jean -Paul; Guo, Ge; Hackebeil, Gabriel; ...
2015-04-07
We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH to Lagrange multipliers in DD is found. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. As a result, we report computational results on server location and unit commitment instances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Isotalo, Aarno
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
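A toy sketch of the augmentation behind such a tally: one extra "nuclide" is appended to a small decay chain, its production rate set to the constant-weight sum of the real densities, so its end-of-step value equals the time integral of that weighted sum. The chain, decay constants, and weights are hypothetical, and a dense matrix exponential stands in for a real depletion solver (the paper uses the Chebyshev rational approximation method).

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

# Toy decay chain A -> B -> C with hypothetical data.
lam_A, lam_B = 0.30, 0.05                 # decay constants (1/s)
A = np.array([[-lam_A, 0.0,    0.0],
              [ lam_A, -lam_B, 0.0],
              [ 0.0,    lam_B, 0.0]])     # Bateman matrix, dn/dt = A n
w = np.array([2.0, 5.0, 0.0])             # constant weights (e.g. W per atom)

# Augmented system: d/dt [n, t] = [[A, 0], [w, 0]] [n, t], so the tally t
# accumulates the time integral of w . n(t).
M = np.zeros((4, 4))
M[:3, :3] = A
M[3, :3] = w

n0 = np.array([1.0e6, 0.0, 0.0])
dt = 20.0
end = expm(M * dt) @ np.r_[n0, 0.0]
integral_tally = end[3]                   # integral of w . n over the step
average_tally = integral_tally / dt       # step-average value

# Reference: dense time stepping with the exact matrix exponential.
ts = np.linspace(0.0, dt, 2001)
vals = np.array([w @ (expm(A * t) @ n0) for t in ts])
print("tally integral :", integral_tally, " reference:", trapezoid(vals, ts))
print("step average   :", average_tally)
```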
A feasible DY conjugate gradient method for linear equality constraints
NASA Astrophysics Data System (ADS)
LI, Can
2017-09-01
In this paper, we propose a feasible conjugate gradient method for solving linear equality constrained optimization problems. The method is an extension of the Dai-Yuan conjugate gradient method proposed by Dai and Yuan to linear equality constrained optimization problems. It can be applied to solve large linear equality constrained problems due to its lower storage requirements. An attractive property of the method is that the generated direction is always a feasible descent direction. Under mild conditions, the global convergence of the proposed method with exact line search is established. Numerical experiments are also given which show the efficiency of the method.
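A compact sketch of the feasibility mechanism described above: gradients are projected onto the null space of the constraint matrix so that every search direction keeps Ax = b, and the Dai-Yuan formula supplies the conjugate-gradient parameter. A backtracking Armijo line search with a descent-direction safeguard replaces the exact line search assumed in the paper, and the toy objective and constraint are hypothetical.

```python
import numpy as np

def dy_cg_equality(f, grad, A, b, x0, iters=500, tol=1e-8):
    """Feasible CG sketch for min f(x) s.t. Ax = b using projected gradients
    and the Dai-Yuan beta."""
    AAt_inv = np.linalg.inv(A @ A.T)
    proj = lambda v: v - A.T @ (AAt_inv @ (A @ v))      # null-space projection
    x = x0 + A.T @ (AAt_inv @ (b - A @ x0))             # nearest feasible start
    g = proj(grad(x)); d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                                  # safeguard: restart
            d = -g
        t, fx, slope = 1.0, f(x), g @ d
        for _ in range(60):                             # Armijo backtracking
            if f(x + t * d) <= fx + 1e-4 * t * slope:
                break
            t *= 0.5
        x_new = x + t * d
        g_new = proj(grad(x_new))
        den = d @ (g_new - g)
        beta = (g_new @ g_new) / den if abs(den) > 1e-14 else 0.0   # Dai-Yuan
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Toy problem: minimize sum((x - c)^4) + 0.5*||x||^2 subject to sum(x) = 1.
c = np.array([0.3, -0.2, 0.5, 0.1])
f = lambda x: np.sum((x - c) ** 4) + 0.5 * x @ x
grad = lambda x: 4 * (x - c) ** 3 + x
A, b = np.ones((1, 4)), np.array([1.0])
x_star = dy_cg_equality(f, grad, A, b, np.zeros(4))
print("solution:", np.round(x_star, 4), "  sum(x):", x_star.sum())
```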
Chen, Chuan; Hendriks, Gijs A G M; van Sloun, Ruud J G; Hansen, Hendrik H G; de Korte, Chris L
2018-05-01
In this paper, a novel processing framework is introduced for Fourier-domain beamforming of plane-wave ultrasound data, which incorporates coherent compounding and angular weighting in the Fourier domain. Angular weighting implies spectral weighting by a 2-D steering-angle-dependent filtering template. The design of this filter is also optimized as part of this paper. Two widely used Fourier-domain plane-wave ultrasound beamforming methods, i.e., Lu's f-k and Stolt's f-k methods, were integrated in the framework. To enable coherent compounding in Fourier domain for the Stolt's f-k method, the original Stolt's f-k method was modified to achieve alignment of the spectra for different steering angles in k-space. The performance of the framework was compared for both methods with and without angular weighting using experimentally obtained data sets (phantom and in vivo), and data sets (phantom) provided by the IEEE IUS 2016 plane-wave beamforming challenge. The addition of angular weighting enhanced the image contrast while preserving image resolution. This resulted in images of equal quality as those obtained by conventionally used delay-and-sum (DAS) beamforming with apodization and coherent compounding. Given the lower computational load of the proposed framework compared to DAS, to our knowledge it can, therefore, be concluded that it outperforms commonly used beamforming methods such as Stolt's f-k, Lu's f-k, and DAS.
The inverse problem of the calculus of variations for discrete systems
NASA Astrophysics Data System (ADS)
Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David
2018-05-01
We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally non variational but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.
Analytical Hierarchy Process modeling for malaria risk zones in Vadodara district, Gujarat
NASA Astrophysics Data System (ADS)
Bhatt, B.; Joshi, J. P.
2014-11-01
Malaria epidemics are among the complex spatial problems around the world. According to the WHO, an estimated 627,000 deaths occurred due to malaria in 2012. In many developing nations with diverse ecological regions, it is still a large cause of human mortality. Owing to the incompleteness of epidemiological data and their spatial origin, the quantification of the disease incidence burdening basic public health planning is a major constraint, especially in developing countries. The present study focuses on an integrated Geospatial and Multi-Criteria Evaluation (AHP) technique to determine malaria risk zones. The study is conducted in Vadodara district, comprising 12 Taluka, among which 4 Taluka are predominantly tribal. Climatic and physical environmental factors, viz. rainfall, hydro-geomorphology, drainage, elevation, and land cover, are used to score their share in the evaluation of malariogenic conditions. These were synthesized on the basis of preferences over the factors, and the total weights of each data layer were computed and visualized. The district was divided into three zones, viz. high, moderate and low risk. It was observed that a geographical area of 1885.2 sq. km, comprising 30.3%, falls in the high risk zone. The risk zones identified on the basis of these parameters and assigned weights show a close resemblance to ground conditions, as the API distribution for 2011, when overlaid, corresponds to the risk zones identified. The study demonstrates the significance and prospects of integrating Geospatial tools and the Analytical Hierarchy Process for malaria risk zoning and the dynamics of malaria transmission.
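A brief sketch of the AHP weighting step: criterion weights are taken from the principal eigenvector of a pairwise-comparison matrix and checked with the consistency ratio. The comparison values for the five factors below are illustrative placeholders, not the judgments elicited in the study.

```python
import numpy as np

def ahp_weights(P):
    """Criterion weights from an AHP pairwise-comparison matrix P via the
    principal eigenvector, plus the consistency ratio (CR) as a sanity check."""
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = P.shape[0]
    CI = (vals.real[k] - n) / (n - 1)                       # consistency index
    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}[n]   # Saaty random index
    return w, CI / RI

# Pairwise comparisons of five malaria-risk factors (rainfall, geomorphology,
# drainage, elevation, land cover) -- illustrative numbers only.
P = np.array([[1,   3,   4,   5,   3  ],
              [1/3, 1,   2,   3,   1  ],
              [1/4, 1/2, 1,   2,   1/2],
              [1/5, 1/3, 1/2, 1,   1/3],
              [1/3, 1,   2,   3,   1  ]], float)
w, cr = ahp_weights(P)
print("weights:", np.round(w, 3), " consistency ratio:", round(cr, 3))
```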
Integrated identification, modeling and control with applications
NASA Astrophysics Data System (ADS)
Shi, Guojun
This thesis deals with the integration of system design, identification, modeling and control. In particular, six interdisciplinary engineering problems are addressed and investigated. Theoretical results are established and applied to structural vibration reduction and engine control problems. First, the data-based LQG control problem is formulated and solved. It is shown that a state space model is not necessary to solve this problem; rather a finite sequence from the impulse response is the only model data required to synthesize an optimal controller. The new theory avoids unnecessary reliance on a model, required in the conventional design procedure. The infinite horizon model predictive control problem is addressed for multivariable systems. The basic properties of the receding horizon implementation strategy are investigated and the complete framework for solving the problem is established. The new theory allows the accommodation of hard input constraints and time delays. The developed control algorithms guarantee the closed loop stability. A closed loop identification and infinite horizon model predictive control design procedure is established for engine speed regulation. The developed algorithms are tested on the Cummins Engine Simulator and desired results are obtained. A finite signal-to-noise ratio model is considered for noise signals. An information quality index is introduced which measures the essential information precision required for stabilization. The problems of minimum variance control and covariance control are formulated and investigated. Convergent algorithms are developed for solving the problems of interest. The problem of the integrated passive and active control design is addressed in order to improve the overall system performance. A design algorithm is developed, which simultaneously finds: (i) the optimal values of the stiffness and damping ratios for the structure, and (ii) an optimal output variance constrained stabilizing controller such that the active control energy is minimized. A weighted q-Markov COVER method is introduced for identification with measurement noise. The result is used to develop an iterative closed loop identification/control design algorithm. The effectiveness of the algorithm is illustrated by experimental results.
NASA Astrophysics Data System (ADS)
Hyer, E. J.; Reid, J. S.; Kasischke, E. S.; Allen, D. J.
2005-12-01
The magnitude of trace gas and aerosol emissions from wildfires is a scientific problem with important implications for atmospheric composition, and is also integral to understanding carbon cycling in terrestrial ecosystems. Recent ecological research on modeling wildfire emissions has integrated theoretical advances derived from ecological fieldwork with improved spatial and temporal databases to produce "post facto" estimates of emissions with high spatial and temporal resolution. These advances have been shown to improve agreement with atmospheric observations at coarse scales, but can in principle be applied to applications, such as forecasting, at finer scales. However, several of the approaches employed in these forward models are incompatible with the requirements of real-time forecasting, requiring modification of data inputs and calculation methods. Because of the differences in data inputs used for real-time and "post-facto" emissions modeling, the key uncertainties in the forward problem are not necessarily the same for these two applications. However, adaptation of these advances in forward modeling to forecasting applications has the potential to improve air quality forecasts, and also to provide a large body of experimental data which can be used to constrain crucial uncertainties in current conceptual models of wildfire emissions. This talk describes a forward modeling method developed at the University of Maryland and its application to the Fire Locating and Modeling of Burning Emissions (FLAMBE) system at the Naval Research Laboratory. Methods for applying the outputs of the NRL aerosol forecasting system to the inverse problem of constraining emissions will also be discussed. The system described can use the feedback supplied by atmospheric observations to improve the emissions source description in the forecasting model, and can also be used for hypothesis testing regarding fire behavior and data inputs.
Sparse Poisson noisy image deblurring.
Carlavan, Mikael; Blanc-Féraud, Laure
2012-04-01
Deblurring noisy Poisson images has recently been the subject of an increasing number of works in many areas such as astronomy and biological imaging. In this paper, we focus on confocal microscopy, which is a very popular technique for 3-D imaging of biological living specimens that gives images with a very good resolution (several hundreds of nanometers), although degraded by both blur and Poisson noise. Deconvolution methods have been proposed to reduce these degradations, and in this paper, we focus on techniques that promote the introduction of an explicit prior on the solution. One difficulty of these techniques is to set the value of the parameter, which weights the tradeoff between the data term and the regularizing term. Only a few works have been devoted to the automatic selection of this regularizing parameter when considering Poisson noise; therefore, it is often set manually such that it gives the best visual results. We present here two recent methods to estimate this regularizing parameter, and we first propose an improvement of these estimators, which takes advantage of confocal images. Following these estimators, we secondly propose to express the problem of the deconvolution of Poisson noisy images as the minimization of a new constrained problem. The proposed constrained formulation is well suited to this application domain since it is directly expressed using the antilog likelihood of the Poisson distribution and therefore does not require any approximation. We show how to solve the unconstrained and constrained problems using the recent alternating-direction technique, and we present results on synthetic and real data using well-known priors, such as total variation and wavelet transforms. Among these wavelet transforms, we especially focus on the dual-tree complex wavelet transform and on the dictionary composed of curvelets and an undecimated wavelet transform.
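For orientation, the sketch below shows the classical unregularized Poisson maximum-likelihood deconvolution (Richardson-Lucy), i.e. the baseline that regularized and constrained formulations such as the one proposed here improve upon; it is not the paper's method. The toy bead image, PSF, and iteration count are arbitrary choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(y, psf, n_iter=50):
    """Unregularized Poisson maximum-likelihood deconvolution (Richardson-Lucy)."""
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    x = np.full_like(y, y.mean(), dtype=float)
    for _ in range(n_iter):
        est = fftconvolve(x, psf, mode="same")
        ratio = y / np.maximum(est, 1e-12)
        x *= fftconvolve(ratio, psf_flip, mode="same")
    return x

# Toy confocal-like example: blurred, Poisson-corrupted bead image.
rng = np.random.default_rng(0)
truth = np.zeros((96, 96))
truth[30:34, 40:44] = 200.0
truth[60:62, 20:26] = 150.0
g = np.exp(-((np.arange(-7, 8)[:, None]) ** 2 + (np.arange(-7, 8)[None, :]) ** 2) / (2 * 2.0 ** 2))
psf = g / g.sum()
y = rng.poisson(np.maximum(fftconvolve(truth, psf, mode="same"), 0.0)).astype(float)
x_hat = richardson_lucy(y, psf)
print("mean abs error, blurred data :", np.abs(y - truth).mean())
print("mean abs error, deconvolved  :", np.abs(x_hat - truth).mean())
```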
Testbed Experiment for SPIDER: A Photonic Integrated Circuit-based Interferometric imaging system
NASA Astrophysics Data System (ADS)
Badham, K.; Duncan, A.; Kendrick, R. L.; Wuchenich, D.; Ogden, C.; Chriqui, G.; Thurman, S. T.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.
The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process that involves precision alignment and test of optical components to form a diffraction limited telescope is, therefore, replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces associated schedule and cost. In this paper we describe the photonic integrated circuit design and the testbed used to create the first images of extended scenes. We summarize the image reconstruction steps and present the final images. We also describe our next generation PIC design for a larger (16x area, 4x field of view) image.
Analytic Method for Computing Instrument Pointing Jitter
NASA Technical Reports Server (NTRS)
Bayard, David
2003-01-01
A new method of calculating the root-mean-square (rms) pointing jitter of a scientific instrument (e.g., a camera, radar antenna, or telescope) is introduced based on a state-space concept. In comparison with the prior method of calculating the rms pointing jitter, the present method involves significantly less computation. The rms pointing jitter of an instrument (the square root of the jitter variance shown in the figure) is an important physical quantity which impacts the design of the instrument, its actuators, controls, sensory components, and sensor-output-sampling circuitry. Using the Sirlin, San Martin, and Lucke definition of pointing jitter, the prior method of computing the rms pointing jitter involves a frequency-domain integral of a rational polynomial multiplied by a transcendental weighting function, necessitating the use of numerical-integration techniques. In practice, numerical integration complicates the problem of calculating the rms pointing error. In contrast, the state-space method provides exact analytic expressions that can be evaluated without numerical integration.
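A small numerical sketch of the state-space idea in its simplest form (a plain steady-state output variance rather than the windowed jitter definition used in the article): the variance of a white-noise-driven linear model is obtained from a Lyapunov equation and cross-checked against the frequency-domain integral it replaces. The second-order pointing-loop parameters are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov
from scipy.integrate import trapezoid

# dx = A x dt + B dw with white-noise intensity W; steady-state covariance P
# solves A P + P A^T + B W B^T = 0, and var(y) = C P C^T for y = C x.
wn, zeta, W = 2 * np.pi * 1.5, 0.15, 1e-4      # hypothetical pointing loop
A = np.array([[0.0, 1.0], [-wn ** 2, -2 * zeta * wn]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

P = solve_continuous_lyapunov(A, -B @ B.T * W)
var_ss = float(C @ P @ C.T)

# Frequency-domain check: (1/pi) * integral over [0, inf) of |H(jw)|^2 W dw.
w_grid = np.linspace(0.0, 400.0, 400_001)
H = 1.0 / (-w_grid ** 2 + 2j * zeta * wn * w_grid + wn ** 2)
var_fd = trapezoid(np.abs(H) ** 2 * W, w_grid) / np.pi
print("state-space variance :", var_ss)
print("frequency-domain int.:", var_fd)
```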
NASA Astrophysics Data System (ADS)
Huang, Wei; Chen, Xiu; Wang, Yueyun
2018-03-01
Landsat data are widely used in various earth observations, but clouds interfere with the applications of the images. This paper proposes a weighted variational gradient-based fusion method (WVGBF) for high-fidelity thin cloud removal of Landsat images, which is an improvement of the variational gradient-based fusion (VGBF) method. The VGBF method integrates the gradient information from the reference band into the visible bands of the cloudy image to enhance spatial details and remove thin clouds. The VGBF method applies the same gradient constraints to the entire image, which causes color distortion in cloudless areas. In our method, a weight coefficient is introduced into the gradient approximation term to ensure the fidelity of the image. The distribution of the weight coefficient is related to the cloud thickness map. The map is built using Independent Component Analysis (ICA) with multi-temporal Landsat images. Quantitatively, we use the R value to evaluate the fidelity in the cloudless regions and the metric Q to evaluate the clarity in the cloud areas. The experimental results indicate that the proposed method has a better ability to remove thin clouds and achieve high fidelity.
Visible Wavelength Exoplanet Phase Curves from Global Albedo Maps
NASA Astrophysics Data System (ADS)
Webber, Matthew; Cahoy, Kerri Lynn
2015-01-01
To investigate the effect of three-dimensional global albedo maps we use an albedo model that: calculates albedo spectra for each point across a grid in longitude and latitude on the planetary disk, uses the appropriate angles for the source-observer geometry for each location, and then weights and sums these spectra using the Tschebychev-Gauss integration method. This structure permits detailed 3D modeling of an illuminated planetary disk and computes disk-integrated phase curves. Different pressure-temperature profiles are used for each location based on geometry and dynamics. We directly couple high-density pressure maps from global dynamic radiative-transfer models to compute global cloud maps. Cloud formation is determined from the correlation of the species condensation curves with the temperature-pressure profiles. We use the detailed cloud patterns, of spatially varying composition and temperature, to determine the observable albedo spectra and phase curves for exoplanets Kepler-7b and HD189733b. These albedo spectra are used to compute planet-star flux ratios using PHOENIX stellar models, exoplanet orbital parameters, and telescope transmission functions. Insights from the Earthshine spectrum and solid surface albedo functions (e.g. water, ice, snow, rocks) are used with our planetary grid to determine the phase curve and flux ratios of non-uniform Earth and Super Earth-like exoplanets with various rotation rates and stellar types. Predictions can be tailored to the visible and Near-InfraRed (NIR) spectral windows for the Kepler space telescope, Hubble space telescope, and future observatories (e.g. WFIRST, JWST, Exo-C, Exo-S). Additionally, we constrain the effect of exoplanet urban-light on the shape of the night-side phase curve for Earths and Super-Earths.
Weighted SGD for ℓ p Regression with Randomized Preconditioning.
Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W
2016-01-01
In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems, e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computation complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in O(log n·nnz(A)+poly(d)/ε^2) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints. In general this is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and the solution vector measured in prediction norm in O(log n·nnz(A)+poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than that of several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n and low dimension d satisfy d ≥ 1/ε and n ≥ d^2/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems. Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets, and the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10^-3, more quickly.
Weighted SGD for ℓp Regression with Randomized Preconditioning*
Yang, Jiyan; Chow, Yin-Lam; Ré, Christopher; Mahoney, Michael W.
2018-01-01
In recent years, stochastic gradient descent (SGD) methods and randomized linear algebra (RLA) algorithms have been applied to many large-scale problems in machine learning and data analysis. SGD methods are easy to implement and applicable to a wide range of convex optimization problems. In contrast, RLA algorithms provide much stronger performance guarantees but are applicable to a narrower class of problems. We aim to bridge the gap between these two methods in solving constrained overdetermined linear regression problems—e.g., ℓ2 and ℓ1 regression problems. We propose a hybrid algorithm named pwSGD that uses RLA techniques for preconditioning and constructing an importance sampling distribution, and then performs an SGD-like iterative process with weighted sampling on the preconditioned system. By rewriting a deterministic ℓp regression problem as a stochastic optimization problem, we connect pwSGD to several existing ℓp solvers including RLA methods with algorithmic leveraging (RLA for short). We prove that pwSGD inherits faster convergence rates that only depend on the lower dimension of the linear system, while maintaining low computation complexity. Such SGD convergence rates are superior to those of other related SGD algorithms such as the weighted randomized Kaczmarz algorithm. In particular, when solving ℓ1 regression of size n by d, pwSGD returns an approximate solution with ε relative error in the objective value in 𝒪(log n·nnz(A)+poly(d)/ε^2) time. This complexity is uniformly better than that of RLA methods in terms of both ε and d when the problem is unconstrained. In the presence of constraints, pwSGD only has to solve a sequence of much simpler and smaller optimization problems over the same constraints. In general this is more efficient than solving the constrained subproblem required in RLA. For ℓ2 regression, pwSGD returns an approximate solution with ε relative error in the objective value and the solution vector measured in prediction norm in 𝒪(log n·nnz(A)+poly(d) log(1/ε)/ε) time. We show that for unconstrained ℓ2 regression, this complexity is comparable to that of RLA and is asymptotically better than that of several state-of-the-art solvers in the regime where the desired accuracy ε, high dimension n and low dimension d satisfy d ≥ 1/ε and n ≥ d^2/ε. We also provide lower bounds on the coreset complexity for more general regression problems, indicating that new ideas will still be needed to extend similar RLA preconditioning ideas to weighted SGD algorithms for more general regression problems. Finally, the effectiveness of such algorithms is illustrated numerically on both synthetic and real datasets, and the results are consistent with our theoretical findings and demonstrate that pwSGD converges to a medium-precision solution, e.g., ε = 10−3, more quickly. PMID:29782626
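The two ingredients, randomized preconditioning and importance-weighted row sampling, can be illustrated with a toy least-squares solver that uses a Gaussian sketch as the preconditioner and a Kaczmarz-style update (a relative of pwSGD mentioned in the abstract). The sketch size, step rule, and sampling distribution below are simplifications of this sketch, not the paper's pwSGD algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def preconditioned_weighted_kaczmarz(A, b, n_iter=30000, sketch_size=None):
    """Toy illustration of RLA preconditioning plus importance-weighted row
       sampling, using a Kaczmarz-style update. Not the paper's algorithm."""
    n, d = A.shape
    s = sketch_size or 4 * d
    S = rng.normal(size=(s, n)) / np.sqrt(s)     # dense Gaussian sketch (stand-in for a fast transform)
    _, R = np.linalg.qr(S @ A)                   # preconditioner from the sketched matrix
    AR = np.linalg.solve(R.T, A.T).T             # AR = A @ inv(R)
    row_sq = np.sum(AR**2, axis=1)
    probs = row_sq / row_sq.sum()                # sampling weights ~ squared row norms
    y = np.zeros(d)
    for i in rng.choice(n, size=n_iter, p=probs):
        y += (b[i] - AR[i] @ y) / row_sq[i] * AR[i]
    return np.linalg.solve(R, y)                 # map back to the original coordinates

A = rng.normal(size=(5000, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=5000)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.linalg.norm(preconditioned_weighted_kaczmarz(A, b) - x_ls))   # small, set by the noise level
```

Because the preconditioned system is well conditioned, the squared row norms of A R^{-1} approximate the leverage scores, which is the importance-sampling distribution the full algorithm is built around.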
Freezing Transition Studies Through Constrained Cell Model Simulation
NASA Astrophysics Data System (ADS)
Nayhouse, Michael; Kwon, Joseph Sang-Il; Heng, Vincent R.; Amlani, Ankur M.; Orkoulas, G.
2014-10-01
In the present work, a simulation method based on cell models is used to deduce the fluid-solid transition of a system of particles that interact via a pair potential. The simulations are implemented under constant-pressure conditions on a generalized version of the constrained cell model. The constrained cell model is constructed by dividing the volume into Wigner-Seitz cells and confining each particle in a single cell. This model is a special case of a more general cell model which is formed by introducing an additional field variable that controls the number of particles per cell and, thus, the relative stability of the solid against the fluid phase. High field values force configurations with one particle per cell and thus favor the solid phase. Fluid-solid coexistence on the isotherm that corresponds to a reduced temperature of 2 is determined from constant-pressure simulations of the generalized cell model using tempering and histogram reweighting techniques. The entire fluid-solid phase boundary is determined through a thermodynamic integration technique based on histogram reweighting, using the previous coexistence point as a reference point. The vapor-liquid phase diagram is obtained from constant-pressure simulations of the unconstrained system using tempering and histogram reweighting. The phase diagram of the system is found to contain a stable critical point and a triple point. The phase diagram of the corresponding constrained cell model is also found to contain both a stable critical point and a triple point.
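The histogram reweighting step can be illustrated in its simplest single-histogram form: observables sampled at one temperature are reweighted to a nearby temperature with Boltzmann factors. The toy harmonic system and sample counts below are assumptions for demonstration; the study itself reweights constant-pressure cell-model histograms.

```python
import numpy as np

def reweight_observable(energies, observable, beta0, beta_new):
    """Single-histogram reweighting: estimate <O> at beta_new from samples
       drawn at beta0. Reliable only when beta_new is close to beta0, i.e.
       when the reweighting factors overlap the sampled energy range."""
    logw = -(beta_new - beta0) * energies
    logw -= logw.max()                       # avoid overflow
    w = np.exp(logw)
    return np.sum(w * observable) / np.sum(w)

# Toy harmonic system sampled at beta0 = 1.0, where E = x^2 / 2 and <E> = 1/(2*beta)
rng = np.random.default_rng(1)
x = rng.normal(scale=1.0, size=200_000)      # Boltzmann samples at beta0 = 1
E = 0.5 * x**2
print(reweight_observable(E, E, beta0=1.0, beta_new=1.1), 0.5 / 1.1)
```

The estimate degrades rapidly once the target state moves outside the sampled energy range, which is why tempering and multiple overlapping simulations are combined in practice.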
NASA Astrophysics Data System (ADS)
Zhang, Chenglong; Zhang, Fan; Guo, Shanshan; Liu, Xiao; Guo, Ping
2018-01-01
An inexact nonlinear mλ-measure fuzzy chance-constrained programming (INMFCCP) model is developed for irrigation water allocation under uncertainty. Techniques of inexact quadratic programming (IQP), mλ-measure, and fuzzy chance-constrained programming (FCCP) are integrated into a general optimization framework. The INMFCCP model can deal not only with nonlinearities in the objective function, but also with uncertainties expressed as discrete intervals in the objective function, variables, and left-hand side constraints, as well as fuzziness in the right-hand side constraints. Moreover, this model improves upon conventional fuzzy chance-constrained programming by introducing a linear combination of possibility measure and necessity measure with varying preference parameters. To demonstrate its applicability, the model is then applied to a case study in the middle reaches of the Heihe River Basin, northwest China. An interval regression analysis method is used to obtain interval crop water production functions over the whole growth period under uncertainty. Therefore, more flexible solutions can be generated for optimal irrigation water allocation. The variation of results can be examined by giving different confidence levels and preference parameters. In addition, the model reflects interrelationships among system benefits, preference parameters, confidence levels and the corresponding risk levels. Comparison between interval crop water production functions and deterministic ones based on the developed INMFCCP model indicates that the former is capable of reflecting more complexities and uncertainties in practical application. These results can provide a more reliable scientific basis for supporting irrigation water management in arid areas.
2012-01-01
Background Elementary mode (EM) analysis is ideally suited for metabolic engineering as it allows for an unbiased decomposition of metabolic networks in biologically meaningful pathways. Recently, constrained minimal cut sets (cMCS) have been introduced to derive optimal design strategies for strain improvement by using the full potential of EM analysis. However, this approach does not allow for the inclusion of regulatory information. Results Here we present an alternative, novel and simple method for the prediction of cMCS, which allows boolean transcriptional regulation to be taken into account. We use binary linear programming and show that the design of a regulated, optimal metabolic network of minimal functionality can be formulated as a standard optimization problem, where EMs and regulation enter as constraints. We validated our tool by optimizing ethanol production in E. coli. Our study showed that up to 70% of the predicted cMCS contained non-enzymatic, non-annotated reactions, which are difficult to engineer. These cMCS are automatically excluded by our approach utilizing simple weight functions. Finally, due to efficient preprocessing, the binary program remains computationally feasible. Conclusions We used integer programming to predict efficient deletion strategies to metabolically engineer a production organism. Our formulation utilizes the full potential of cMCS but adds additional flexibility to the design process. In particular, our method allows regulatory information to be integrated into the metabolic design process and explicitly favors experimentally feasible deletions. Our method remains manageable even if millions or potentially billions of EMs enter the analysis. We demonstrated that our approach is able to correctly predict the most efficient designs for ethanol production in E. coli. PMID:22898474
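The flavor of the binary program can be conveyed with a small weighted hitting-set sketch: knockouts must hit every undesired elementary mode, must leave protected modes intact, and reactions that are hard to engineer carry a larger weight. The reaction names, mode sets, weights, and the use of the PuLP/CBC solver are assumptions of this illustration, not the authors' E. coli formulation (which also encodes boolean regulation).

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

# Hypothetical elementary modes (as reaction sets); not from a real E. coli model.
undesired_ems = [{"r1", "r4"}, {"r2", "r4"}, {"r3", "r4"}]   # modes to block
protected_ems = [{"r5", "r6"}]                               # modes that must survive
weights = {"r1": 1, "r2": 1, "r3": 1, "r4": 5, "r5": 1, "r6": 1}  # r4 is hard to engineer

prob = LpProblem("constrained_minimal_cut_set", LpMinimize)
y = {r: LpVariable(f"cut_{r}", cat=LpBinary) for r in weights}
prob += lpSum(weights[r] * y[r] for r in weights)            # weighted knockout cost
for em in undesired_ems:                                     # every undesired EM must be hit
    prob += lpSum(y[r] for r in em) >= 1
for em in protected_ems:                                     # protected EMs stay intact
    prob += lpSum(y[r] for r in em) == 0
prob.solve(PULP_CBC_CMD(msg=0))
print(sorted(r for r in weights if y[r].value() > 0.5))
```

With the weights above, the solver prefers three easy knockouts over the single hard-to-engineer reaction r4, which is the role the weight function plays in excluding experimentally infeasible deletions.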
NASA Astrophysics Data System (ADS)
Bartell, Richard J.; Perram, Glen P.; Fiorino, Steven T.; Long, Scott N.; Houle, Marken J.; Rice, Christopher A.; Manning, Zachary P.; Bunch, Dustin W.; Krizo, Matthew J.; Gravley, Liesebet E.
2005-06-01
The Air Force Institute of Technology's Center for Directed Energy has developed a software model, the High Energy Laser End-to-End Operational Simulation (HELEEOS), under the sponsorship of the High Energy Laser Joint Technology Office (JTO), to facilitate worldwide comparisons of the expected performance of a diverse range of weight-constrained high energy laser system types across a broad range of expected engagement scenarios. HELEEOS has been designed to meet the JTO's goals of supporting a broad range of analyses applicable to the operational requirements of all the military services, constraining weapon effectiveness through accurate engineering performance assessments so that it can be used as an investment strategy tool, and establishing trust among military leaders. HELEEOS is anchored to respected wave optics codes, and all significant degradation effects, including thermal blooming and optical turbulence, are represented in the model. The model features operationally oriented performance metrics, e.g. dwell time required to achieve a prescribed probability of kill and effective range. Key features of HELEEOS include estimation of the level of uncertainty in the calculated Pk and generation of interactive nomographs to allow the user to further explore a desired parameter space. Worldwide analyses are enabled at five wavelengths via recently available databases capturing climatological, seasonal, diurnal, and geographical spatial-temporal variability in atmospheric parameters including molecular and aerosol absorption and scattering profiles and optical turbulence strength. Examples are provided of the impact of uncertainty in weight-power relationships, coupled with operating condition variability, on results of performance comparisons between chemical and solid state lasers.
Mwacalimba, Kennedy Kapala; Green, Judith
2015-03-01
'One World, One Health' has become a key rallying theme for the integration of public health and animal health priorities, particularly in the governance of pandemic-scale zoonotic infectious disease threats. However, the policy challenges of integrating public health and animal health priorities in the context of trade and development issues remain relatively unexamined, and few studies to date have explored the implications of global disease governance for resource-constrained countries outside the main centres of zoonotic outbreaks. This article draws on a policy study of national level avian and pandemic influenza preparedness between 2005 and 2009 across the sectors of trade, health and agriculture in Zambia. We highlight the challenges of integrating disease control interventions amidst trade and developmental realities in resource-poor environments. One Health prioritizes disease risk mitigation, sidelining those trade and development narratives which speak to broader public health concerns. We show how locally important trade and development imperatives were marginalized in Zambia, limiting the effectiveness of pandemic preparedness. Our findings are likely to be generalizable to other resource-constrained countries, and suggest that effective disease governance requires alignment with trade and development sectors, as well as integration of veterinary and public health sectors.
Bernard R. Parresol; Charles E. Thomas
1996-01-01
In the wood utilization industry, both stem profile and biomass are important quantities. The two have traditionally been estimated separately. The introduction of a density-integral method allows for coincident estimation of stem profile and biomass, based on the calculus of mass theory, and provides an alternative to weight-ratio methodology. In the initial...
Integrated GNSS Attitude Determination and Positioning for Direct Geo-Referencing
Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J. G.
2014-01-01
Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces position standard deviation by a factor of about 0.8, matching the theoretical gain of √(3/4) for two antennas on the rotating frame and a single antenna at the reference station. PMID:25036330
Integrated GNSS attitude determination and positioning for direct geo-referencing.
Nadarajah, Nandakumaran; Paffenholz, Jens-André; Teunissen, Peter J G
2014-07-17
Direct geo-referencing is an efficient methodology for the fast acquisition of 3D spatial data. It requires the fusion of spatial data acquisition sensors with navigation sensors, such as Global Navigation Satellite System (GNSS) receivers. In this contribution, we consider an integrated GNSS navigation system to provide estimates of the position and attitude (orientation) of a 3D laser scanner. The proposed multi-sensor system (MSS) consists of multiple GNSS antennas rigidly mounted on the frame of a rotating laser scanner and a reference GNSS station with known coordinates. Precise GNSS navigation requires the resolution of the carrier phase ambiguities. The proposed method uses the multivariate constrained integer least-squares (MC-LAMBDA) method for the estimation of rotating frame ambiguities and attitude angles. MC-LAMBDA makes use of the known antenna geometry to strengthen the underlying attitude model and, hence, to enhance the reliability of rotating frame ambiguity resolution and attitude determination. The reliable estimation of rotating frame ambiguities is consequently utilized to enhance the relative positioning of the rotating frame with respect to the reference station. This integrated (array-aided) method improves ambiguity resolution, as well as positioning accuracy between the rotating frame and the reference station. Numerical analyses of GNSS data from a real-data campaign confirm the improved performance of the proposed method over the existing method. In particular, the integrated method yields reliable ambiguity resolution and reduces position standard deviation by a factor of about 0.8, matching the theoretical gain of √(3/4) for two antennas on the rotating frame and a single antenna at the reference station.
NASA Astrophysics Data System (ADS)
Huffman, Katelyn A.
Understanding the orientation and magnitude of tectonic stress in active tectonic margins like subduction zones is important for understanding fault mechanics. In the Nankai Trough subduction zone, faults in the accretionary prism are thought to have historically slipped during or immediately following deep plate boundary earthquakes, often generating devastating tsunamis. I focus on quantifying stress at two locations of interest in the Nankai Trough accretionary prism, offshore Southwest Japan. I employ a method to constrain stress magnitude that combines observations of compressional borehole failure from logging-while-drilling resistivity-at-the-bit generated images (RAB) with estimates of rock strength and the relationship between tectonic stress and stress at the wall of a borehole. I use the method to constrain stress at Ocean Drilling Program (ODP) Site 808 and Integrated Ocean Drilling Program (IODP) Site C0002. At Site 808, I consider a range of parameters (assumed rock strength, friction coefficient, breakout width, and fluid pressure) in the method to constrain stress to explore uncertainty in stress magnitudes and discuss stress results in terms of the seismic cycle. I find a combination of increased fluid pressure and decreased friction along the frontal thrust or other weak faults could produce thrust-style failure, without the entire prism being at critical state failure, as other kinematic models of accretionary prism behavior during earthquakes imply. Rock strength is typically inferred using a failure criterion and unconfined compressive strength from empirical relations with P-wave velocity. I minimize uncertainty in rock strength by measuring rock strength in triaxial tests on Nankai core. I find that the strength of Nankai core is significantly less than empirical relations predict. I create a new empirical fit to our experiments and explore implications of this on stress magnitude estimates. I find using the new empirical fit can decrease stress predicted in the method by as much as 4 MPa at Site C0002. I constrain stress at Site C0002 using geophysical logging data from two adjacent boreholes drilled into the same sedimentary sequence with different drilling conditions in a forward model that predicts breakout width over a range of horizontal stresses (where SHmax is constrained by the ratio of stresses that would produce active faulting and Shmin is constrained from leak-off-tests) and rock strength. I then compare predicted breakout widths to observations of breakout widths from RAB images to determine the combination of stresses in the model that best matches real-world observations. This is the first published method to constrain both stress and strength simultaneously. Finally, I explore uncertainty in rock behavior during compressional breakout formation using a finite element model (FEM) that predicts Biot poroelastic changes in fluid pressure in rock adjacent to the borehole upon its excavation and explore the effect this has on rock failure. I test a range of permeability and rock stiffness. I find that when rock stiffness and permeability are in the range of what exists at Nankai, pore fluid pressure increases +/- 45° from Shmin and can lead to weakening of wall rock and a wider compressional failure zone than what would exist at equilibrium conditions. In a case example, we find this can lead to an overestimate of tectonic stress using compressional failures of ~2 MPa in the area of the borehole where fluid pressure increases.
In areas around the borehole where pore fluid pressure decreases (+/- 45° from SHmax), the wall rock can strengthen, which suppresses tensile failure. The implication of this research is that there are many potential pitfalls in the method to constrain stress using borehole breakouts in Nankai Trough mudstone, mostly due to uncertainty in parameters such as strength and to underlying assumptions regarding constitutive rock behavior. More laboratory measurements and/or models of rock properties and rock constitutive behavior are needed to ensure the method accurately provides constraints on stress magnitude. (Abstract shortened by ProQuest.)
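For context, the textbook link between stresses, strength, and breakout width for a vertical borehole follows from the Kirsch hoop-stress solution; a small sketch is given below. The stress and strength values and the neglect of thermal stresses are illustrative assumptions, and this is a simplified stand-in for the thesis's forward model and its poroelastic refinements.

```python
import numpy as np

def breakout_width_deg(SHmax, Shmin, Pp, dP, C0):
    """Predicted breakout angular width (degrees) for a vertical borehole from
       the Kirsch hoop-stress solution and a simple strength threshold C0.
       SHmax, Shmin: horizontal principal stresses; Pp: pore pressure;
       dP: mud-weight overbalance (Pm - Pp). Thermal stresses are neglected."""
    cos2t = (SHmax + Shmin - 2.0 * Pp - dP - C0) / (2.0 * (SHmax - Shmin))
    if cos2t >= 1.0:
        return 0.0        # hoop stress never reaches C0: no breakout
    if cos2t <= -1.0:
        return 180.0      # entire wall above C0 (flags an unrealistic input set)
    theta_edge = 0.5 * np.degrees(np.arccos(cos2t))   # breakout edge, measured from SHmax
    return 180.0 - 2.0 * theta_edge

# Illustrative values in MPa (hypothetical, not Site 808 or Site C0002 data)
print(breakout_width_deg(SHmax=38.0, Shmin=28.0, Pp=20.0, dP=2.0, C0=30.0))
```

Inverting the same relation for SHmax given an observed breakout width and an assumed C0 is the basic constraint exploited by breakout-based stress estimation, which is why the strength uncertainty discussed above propagates directly into the stress estimate.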
NASA Technical Reports Server (NTRS)
Griffin, Charles F.; Harvill, William E.
1988-01-01
Numerous design concepts, materials, and manufacturing methods were investigated for the covers and spars of a transport aircraft wing box. Cover panels and spar segments were fabricated and tested to verify the structural integrity of design concepts and fabrication techniques. Compression tests on stiffened panels demonstrated the ability of graphite/epoxy wing upper cover designs to achieve a 35 percent weight savings compared to the aluminum baseline. The impact damage tolerance of the designs and materials used for these panels limits the allowable compression strain and therefore the maximum achievable weight savings. Bending and shear tests on various spar designs verified an average weight savings of 37 percent compared to the aluminum baseline. Impact damage to spar webs did not significantly degrade structural performance. Predictions of spar web shear instability correlated well with measured performance. The structural integrity of spars manufactured by filament winding equalled or exceeded that of spars fabricated by hand lay-up. The information obtained will be applied to the design, fabrication, and test of a full-scale section of a wing box. When completed, the tests on the technology integration box beam will demonstrate the structural integrity of an advanced composite wing design which is 25 percent lighter than the metal baseline.
NASA Astrophysics Data System (ADS)
Olson, R.; An, S. I.
2016-12-01
Atlantic Meridional Overturning Circulation (AMOC) in the ocean might slow down in the future, which can lead to a host of climatic effects in North Atlantic and throughout the world. Despite improvements in climate models and availability of new observations, AMOC projections remain uncertain. Here we constrain CMIP5 multi-model ensemble output with observations of a recently developed AMOC index to provide improved Bayesian predictions of future AMOC. Specifically, we first calculate yearly AMOC index loosely based on Rahmstorf et al. (2015) for years 1880—2004 for both observations, and the CMIP5 models for which relevant output is available. We then assign a weight to each model based on a Bayesian Model Averaging method that accounts for differential model skill in terms of both mean state and variability. We include the temporal autocorrelation in climate model errors, and account for the uncertainty in the parameters of our statistical model. We use the weights to provide future weighted projections of AMOC, and compare them to un-weighted ones. Our projections use bootstrapping to account for uncertainty in internal AMOC variability. We also perform spectral and other statistical analyses to show that AMOC index variability, both in models and in observations, is consistent with red noise. Our results improve on and complement previous work by using a new ensemble of climate models, a different observational metric, and an improved Bayesian weighting method that accounts for differential model skill at reproducing internal variability. Reference: Rahmstorf, S., Box, J. E., Feulner, G., Mann, M. E., Robinson, A., Rutherford, S., & Schaffernicht, E. J. (2015). Exceptional twentieth-century slowdown in atlantic ocean overturning circulation. Nature Climate Change, 5(5), 475-480. doi:10.1038/nclimate2554
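A stripped-down version of the weighting step might look like the sketch below: Gaussian-likelihood weights computed from each model's fit to the observed index, then applied to the models' projections. The synthetic index, the noise level, and the omission of error autocorrelation and parameter uncertainty are all simplifications relative to the Bayesian Model Averaging scheme described above.

```python
import numpy as np

def bma_weights(obs, model_runs, sigma=0.1):
    """Toy Bayesian-model-averaging weights from a Gaussian likelihood of the
       observed index given each model's historical simulation. Error
       autocorrelation and variability skill are deliberately ignored here."""
    obs = np.asarray(obs)
    logL = np.array([-0.5 * np.sum(((obs - run) / sigma) ** 2) for run in model_runs])
    logL -= logL.max()                    # stabilise the exponentials
    w = np.exp(logL)
    return w / w.sum()

rng = np.random.default_rng(2)
years = np.arange(1880, 2005)
obs = -0.005 * (years - 1880) + 0.1 * rng.standard_normal(years.size)      # synthetic index
models_hist = [-0.005 * (years - 1880) + 0.1 * rng.standard_normal(years.size)
               for _ in range(5)]                                          # synthetic model runs
models_future = rng.normal(-1.0, 0.3, size=5)                              # synthetic projections

w = bma_weights(obs, models_hist)
print("weights:", np.round(w, 3))
print("weighted projection:", w @ models_future, " unweighted:", models_future.mean())
```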
Sequentially reweighted TV minimization for CT metal artifact reduction.
Zhang, Xiaomeng; Xing, Lei
2013-07-01
Metal artifact reduction has long been an important topic in x-ray CT image reconstruction. In this work, the authors propose an iterative method that sequentially minimizes a reweighted total variation (TV) of the image and produces substantially artifact-reduced reconstructions. A sequentially reweighted TV minimization algorithm is proposed to fully exploit the sparseness of image gradients (IG). The authors first formulate a constrained optimization model that minimizes a weighted TV of the image, subject to the constraint that the estimated projection data are within a specified tolerance of the available projection measurements, with image non-negativity enforced. The authors then solve a sequence of weighted TV minimization problems where weights used for the next iteration are computed from the current solution. Using the complete projection data, the algorithm first reconstructs an image from which a binary metal image can be extracted. Forward projection of the binary image identifies metal traces in the projection space. The metal-free background image is then reconstructed from the metal-trace-excluded projection data by employing a different set of weights. Each minimization problem is solved using a gradient method that alternates projection-onto-convex-sets and steepest descent. A series of simulation and experimental studies are performed to evaluate the proposed approach. Our study shows that the sequentially reweighted scheme, by altering a single parameter in the weighting function, flexibly controls the sparsity of the IG and reconstructs artifact-free images in a two-stage process. It successfully produces images with significantly reduced streak artifacts, suppressed noise and well-preserved contrast and edge properties. The sequentially reweighted TV minimization provides a systematic approach for suppressing CT metal artifacts. The technique can also be generalized to other "missing data" problems in CT image reconstruction.
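The reweighting idea can be sketched on a plain denoising problem: solve a smoothed, weighted TV problem, then recompute the weights from the current image gradients so that small gradients are penalised more strongly. The weight formula (a bounded, rescaled version of 1/(|∇u| + ε)), the parameter values, and the unconstrained denoising setting are assumptions of this sketch; the paper works with a projection-data-constrained CT model and a two-stage metal-trace scheme.

```python
import numpy as np

def grad(u):
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def divergence(px, py):
    dx = np.diff(px, axis=1, prepend=px[:, :1])
    dy = np.diff(py, axis=0, prepend=py[:1, :])
    return dx + dy

def reweighted_tv_denoise(f, lam=0.1, eps=0.1, outer=4, inner=200, step=0.1):
    """Sequentially reweighted (smoothed) TV denoising sketch. The inner loop is
       gradient descent on 0.5*|u-f|^2 + lam * sum w * sqrt(|grad u|^2 + eps^2);
       the outer loop recomputes w = eps / (|grad u| + eps), so nearly flat
       regions are penalised more strongly while strong edges are preserved."""
    u = f.astype(float).copy()
    w = np.ones_like(u)
    for _ in range(outer):
        for _ in range(inner):
            gx, gy = grad(u)
            mag = np.sqrt(gx**2 + gy**2 + eps**2)
            u -= step * ((u - f) - lam * divergence(w * gx / mag, w * gy / mag))
        gx, gy = grad(u)
        w = eps / (np.sqrt(gx**2 + gy**2) + eps)   # reweighting: promote sparser image gradients
    return u

# Tiny demonstration on a noisy step image: compare mean absolute errors
img = np.zeros((64, 64)); img[:, 32:] = 1.0
noisy = img + 0.1 * np.random.default_rng(3).standard_normal(img.shape)
print(np.abs(reweighted_tv_denoise(noisy) - img).mean(), np.abs(noisy - img).mean())
```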
NASA Astrophysics Data System (ADS)
Panda, Satyajit; Ray, M. C.
2008-04-01
In this paper, a geometrically nonlinear dynamic analysis has been presented for functionally graded (FG) plates integrated with a patch of active constrained layer damping (ACLD) treatment and subjected to a temperature field. The constraining layer of the ACLD treatment is considered to be made of the piezoelectric fiber-reinforced composite (PFRC) material. The temperature field is assumed to be spatially uniform over the substrate plate surfaces and varied through the thickness of the host FG plates. The temperature-dependent material properties of the FG substrate plates are assumed to be graded in the thickness direction of the plates according to a power-law distribution while the Poisson's ratio is assumed to be a constant over the domain of the plate. The constrained viscoelastic layer of the ACLD treatment is modeled using the Golla-Hughes-McTavish (GHM) method. Based on the first-order shear deformation theory, a three-dimensional finite element model has been developed to model the open-loop and closed-loop nonlinear dynamics of the overall FG substrate plates under the thermal environment. The analysis suggests the potential use of the ACLD treatment with its constraining layer made of the PFRC material for active control of geometrically nonlinear vibrations of FG plates in the absence or the presence of the temperature gradient across the thickness of the plates. It is found that the ACLD treatment is more effective in controlling the geometrically nonlinear vibrations of FG plates than in controlling their linear vibrations. The analysis also reveals that the ACLD patch is more effective for controlling the nonlinear vibrations of FG plates when it is attached to the softest surface of the FG plates than when it is bonded to the stiffest surface of the plates. The effect of piezoelectric fiber orientation in the active constraining PFRC layer on the damping characteristics of the overall FG plates is also discussed.
ATHENA: system studies and optics accommodation
NASA Astrophysics Data System (ADS)
Ayre, M.; Bavdaz, M.; Ferreira, I.; Wille, E.; Fransen, S.; Stefanescu, A.; Linder, M.
2016-07-01
ATHENA is currently in Phase A, with a view to adoption upon a successful Mission Adoption Review in 2019/2020. After a brief presentation of the reference spacecraft (SC) design, this paper will focus on the functional and environmental requirements, the thermo-mechanical design and the Assembly, Integration, Verification & Test (AIVT) considerations related to housing the Silicon Pore Optics (SPO) Mirror Modules (MM) in the very large Mirror Assembly Module (MAM). Initially functional requirements on the MM accommodation are presented, with the Effective Area and Half Energy Width (HEW) requirements leading to a MAM comprising (depending on final mirror size selected) between 700-1000 MMs, co-aligned with exquisite accuracy to provide a common focus. A preliminary HEW budget allocated across the main error-contributors is presented, and this is then used as a reference to derive subsequent requirements and engineering considerations, including: The procedures and technologies for MM-integration into the Mirror Structure (MS) to achieve the required alignment accuracies in a timely manner; stiffness requirements and handling scheme required to constrain deformation under gravity during x-ray testing; temperature control to constrain thermo-elastic deformation during flight; and the role of the Instrument Switching Mechanism (ISM) in constraining HEW and Effective Area errors. Next, we present the key environmental requirements of the MMs, and the need to minimise shock-loading of the MMs is stressed. Methods to achieve this are presented, including: Selection of a large clamp-band launch vehicle interface (LV I/F); lengthening of the shock-path from the LV I/F to the MAM I/F; modal-tuning of the MAM to act as a low-pass filter during launch shock events; use of low-shock HDRMs for the MAM; and the possibility to deploy a passive vibration solution at the LV I/F to reduce loads.
NASA Astrophysics Data System (ADS)
Niri, Mohammad Emami; Lumley, David E.
2017-10-01
Integration of 3D and time-lapse 4D seismic data into reservoir modelling and history matching processes poses a significant challenge due to the frequent mismatch between the initial reservoir model, the true reservoir geology, and the pre-production (baseline) seismic data. A fundamental step of a reservoir characterisation and performance study is the preconditioning of the initial reservoir model to equally honour both the geological knowledge and seismic data. In this paper we analyse the issues that have a significant impact on the (mis)match of the initial reservoir model with well logs and inverted 3D seismic data. These issues include the constraining methods for reservoir lithofacies modelling, the sensitivity of the results to the presence of realistic resolution and noise in the seismic data, the geostatistical modelling parameters, and the uncertainties associated with quantitative incorporation of inverted seismic data in reservoir lithofacies modelling. We demonstrate that in a geostatistical lithofacies simulation process, seismic constraining methods based on seismic litho-probability curves and seismic litho-probability cubes yield the best match to the reference model, even when realistic resolution and noise is included in the dataset. In addition, our analyses show that quantitative incorporation of inverted 3D seismic data in static reservoir modelling carries a range of uncertainties and should be cautiously applied in order to minimise the risk of misinterpretation. These uncertainties are due to the limited vertical resolution of the seismic data compared to the scale of the geological heterogeneities, the fundamental instability of the inverse problem, and the non-unique elastic properties of different lithofacies types.
Multidisciplinary optimization of a controlled space structure using 150 design variables
NASA Technical Reports Server (NTRS)
James, Benjamin B.
1992-01-01
A general optimization-based method for the design of large space platforms through integration of the disciplines of structural dynamics and control is presented. The method uses the global sensitivity equations approach and is especially appropriate for preliminary design problems in which the structural and control analyses are tightly coupled. The method is capable of coordinating general purpose structural analysis, multivariable control, and optimization codes, and thus, can be adapted to a variety of controls-structures integrated design projects. The method is used to minimize the total weight of a space platform while maintaining a specified vibration decay rate after slewing maneuvers.
McBride, J.H.; Stephenson, W.J.; Williams, R.A.; Odum, J.K.; Worley, D.M.; South, J.V.; Brinkerhoff, A.R.; Keach, R.W.; Okojie-Ayoro, A. O.
2010-01-01
Integrated vibroseis compressional and experimental hammer-source, shear-wave, seismic reflection profiles across the Provo segment of the Wasatch fault zone in Utah reveal near-surface and shallow bedrock structures caused by geologically recent deformation. Combining information from the seismic surveys, geologic mapping, terrain analysis, and previous seismic first-arrival modeling provides a well-constrained cross section of the upper ~500 m of the subsurface. Faults are mapped from the surface, through shallow, poorly consolidated deltaic sediments, and cutting through a rigid bedrock surface. The new seismic data are used to test hypotheses on changing fault orientation with depth, the number of subsidiary faults within the fault zone and the width of the fault zone, and the utility of integrating separate elastic methods to provide information on a complex structural zone. Although previous surface mapping has indicated only a few faults, the seismic section shows a wider and more complex deformation zone with both synthetic and antithetic normal faults. Our study demonstrates the usefulness of a combined shallow and deeper penetrating geophysical survey, integrated with detailed geologic mapping to constrain subsurface fault structure. Due to the complexity of the fault zone, accurate seismic velocity information is essential and was obtained from a first-break tomography model. The new constraints on fault geometry can be used to refine estimates of vertical versus lateral tectonic movements and to improve seismic hazard assessment along the Wasatch fault through an urban area. We suggest that earthquake-hazard assessments made without seismic reflection imaging may be biased by the previous mapping of too few faults. © 2010 Geological Society of America.
Peak fitting and integration uncertainties for the Aerodyne Aerosol Mass Spectrometer
NASA Astrophysics Data System (ADS)
Corbin, J. C.; Othman, A.; Haskins, J. D.; Allan, J. D.; Sierau, B.; Worsnop, D. R.; Lohmann, U.; Mensah, A. A.
2015-04-01
The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne High-Resolution Aerosol Mass Spectrometers (HR-AMS's) have not been previously addressed as a source of imprecision for these instruments. This manuscript evaluates the significance of these uncertainties and proposes a method for their estimation in routine data analysis. Peak-fitting uncertainties, the most complex source of integration uncertainties, are found to be dominated by errors in m/z calibration. These calibration errors comprise significant amounts of both imprecision and bias, and vary in magnitude from ion to ion. The magnitude of these m/z calibration errors is estimated for an exemplary data set, and used to construct a Monte Carlo model which reproduced well the observed trends in fits to the real data. The empirically-constrained model is used to show that the imprecision in the fitted height of isolated peaks scales linearly with the peak height (i.e., as n^1), thus contributing a constant-relative-imprecision term to the overall uncertainty. This constant relative imprecision term dominates the Poisson counting imprecision term (which scales as n^0.5) at high signals. The previous HR-AMS uncertainty model therefore underestimates the overall fitting imprecision. The constant relative imprecision in fitted peak height for isolated peaks in the exemplary data set was estimated as ~4% and the overall peak-integration imprecision was approximately 5%. We illustrate the importance of this constant relative imprecision term by performing Positive Matrix Factorization (PMF) on a synthetic HR-AMS data set with and without its inclusion. Finally, the ability of an empirically-constrained Monte Carlo approach to estimate the fitting imprecision for an arbitrary number of known overlapping peaks is demonstrated. Software is available upon request to estimate these error terms in new data sets.
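The proposed error behaviour can be written down in a few lines if one assumes, as done here, that the constant-relative fitting term and the Poisson counting term add in quadrature; the ~4% figure is taken from the abstract and is specific to that exemplary data set.

```python
import numpy as np

def peak_height_imprecision(height, rel_fit_error=0.04):
    """Imprecision of a fitted peak height, combining (in quadrature, an
       assumption of this sketch) Poisson counting noise (~ n^0.5) with the
       constant-relative fitting term (~ n^1) driven by m/z-calibration errors."""
    poisson = np.sqrt(height)            # counting term
    fitting = rel_fit_error * height     # calibration-driven fitting term
    return np.hypot(poisson, fitting)

for n in (1e1, 1e3, 1e5):
    print(f"signal {n:8.0f} ions -> relative imprecision {peak_height_imprecision(n) / n:.3f}")
```

The printout shows the relative imprecision approaching the constant 4% floor at high signal, which is the regime where a Poisson-only model underestimates the uncertainty.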
Constrained spectral clustering under a local proximity structure assumption
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie
2005-01-01
This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.
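One simple way to fold pairwise constraints into spectral clustering, shown below, is to write must-link and cannot-link pairs directly into the affinity matrix before the spectral step (in the spirit of earlier constrained-clustering work). This is a baseline sketch under assumed RBF affinities and scikit-learn's SpectralClustering, not the CSC algorithm of the abstract.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import rbf_kernel

def constrained_spectral_clustering(X, must_link, cannot_link, n_clusters=2, gamma=20.0):
    """Constraint injection into the affinity matrix followed by standard
       spectral clustering; a baseline, not the CSC method of the paper."""
    W = rbf_kernel(X, gamma=gamma)
    for i, j in must_link:                 # force high affinity for must-link pairs
        W[i, j] = W[j, i] = 1.0
    for i, j in cannot_link:               # sever affinity for cannot-link pairs
        W[i, j] = W[j, i] = 0.0
    return SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                              random_state=0).fit_predict(W)

X, y = make_moons(n_samples=200, noise=0.08, random_state=0)
pairs = list(zip(range(0, 20, 2), range(1, 21, 2)))
ml = [(i, j) for i, j in pairs if y[i] == y[j]]
cl = [(i, j) for i, j in pairs if y[i] != y[j]]
labels = constrained_spectral_clustering(X, ml, cl)
print("cluster sizes:", np.bincount(labels))
```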
A sampling and classification item selection approach with content balancing.
Chen, Pei-Hua
2015-03-01
Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.
NASA Technical Reports Server (NTRS)
1973-01-01
An improved method for estimating aircraft weight and cost using a unique and fundamental approach was developed. The results of this study were integrated into a comprehensive digital computer program, which is intended for use at the preliminary design stage of aircraft development. The program provides a means of computing absolute values for weight and cost, and enables the user to perform trade studies with a sensitivity to detail design and overall structural arrangement. Both batch and interactive graphics modes of program operation are available.
NASA Technical Reports Server (NTRS)
Blanchard, D. L.; Chan, F. K.
1973-01-01
For a time-dependent, n-dimensional, special diagonal Hamilton-Jacobi equation, a necessary and sufficient condition for the separation of variables to yield a complete integral of the assumed separable form was established by specifying the admissible forms in terms of arbitrary functions. A complete integral was then expressed in terms of these arbitrary functions and also the n irreducible constants. As an application of the results obtained for the two-dimensional Hamilton-Jacobi equation, an analysis was made for a comparatively wide class of dynamical problems involving a particle moving in Euclidean three-dimensional space under the action of external forces but constrained on a moving surface. All the possible cases in which this equation had a complete integral of the assumed form were obtained, and these are tabulated for reference.
Graphite/epoxy composite stiffened panel fabrication development
NASA Technical Reports Server (NTRS)
Palmer, R. J.
1984-01-01
This report describes the manufacturing development procedures used to fabricate a series of carbon/epoxy panels with integrally molded stiffeners. Panel size started at 6 by 18 inches with one stiffener and increased to 30 by 60 inches with six integral stiffeners. Stiffener concepts were optimized for minimum weight (or mass) to carry stress levels from 1500 lbs/inch to 25,000 lbs/inch compression load. Designs were created and manufactured with stiffener configurations of integrally molded hat, J, I, sine wave I, solid blade, and honeycomb blade shapes. Successful and unsuccessful detail methods of tooling, lay-up, and bagging are documented. Recommendations are made for the best state-of-the-art manufacturing technique developed for each type of stiffener construction.
Kalluri, Kesava S.; Mahd, Mufeed; Glick, Stephen J.
2013-01-01
Purpose: Breast CT is an emerging imaging technique that can portray the breast in 3D and improve visualization of important diagnostic features. Early clinical studies have suggested that breast CT has sufficient spatial and contrast resolution for accurate detection of masses and microcalcifications in the breast, reducing structural overlap that is often a limiting factor in reading mammographic images. For a number of reasons, image quality in breast CT may be improved by use of an energy resolving photon counting detector. In this study, the authors investigate the improvements in image quality obtained when using energy weighting with an energy resolving photon counting detector as compared to that with a conventional energy integrating detector. Methods: Using computer simulation, realistic CT images of multiple breast phantoms were generated. The simulation modeled a prototype breast CT system using an amorphous silicon (a-Si), CsI based energy integrating detector with different x-ray spectra, and a hypothetical, ideal CZT based photon counting detector with capability of energy discrimination. Three biological signals of interest were modeled as spherical lesions and inserted into breast phantoms; hydroxyapatite (HA) to represent microcalcification, infiltrating ductal carcinoma (IDC), and iodine enhanced infiltrating ductal carcinoma (IIDC). Signal-to-noise ratio (SNR) of these three lesions was measured from the CT reconstructions. In addition, a psychophysical study was conducted to evaluate observer performance in detecting microcalcifications embedded into a realistic anthropomorphic breast phantom. Results: In the energy range tested, the improvement in SNR with a photon counting detector using energy weighting, relative to the energy integrating detector, was 30%–63% and 4%–34% for the HA and IDC lesions, respectively, and 12%–30% (with Al filtration) and 32%–38% (with Ce filtration) for the IIDC lesion. The average area under the receiver operating characteristic curve (AUC) for detection of microcalcifications was higher by greater than 19% (for the different energy weighting methods tested) as compared to the AUC obtained with an energy integrating detector. Conclusions: This study showed that breast CT with a CZT photon counting detector using energy weighting can provide improvements in pixel SNR and detectability of microcalcifications as compared to that with a conventional energy integrating detector. Since a number of degrading physical factors were not modeled into the photon counting detector, this improvement should be considered as an upper bound on achievable performance. PMID:23927337
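The SNR comparison reduces to how detected counts are weighted across energy bins: an energy-integrating detector effectively weights each photon by its energy, while an energy-resolving photon-counting detector can apply arbitrary bin weights such as 1/E^3. The per-bin counts and lesion contrast below are synthetic assumptions, not the simulated breast CT data of the study.

```python
import numpy as np

# Synthetic per-bin expected counts (hypothetical values, for illustration only)
E   = np.array([20., 25., 30., 35., 40., 45., 50.])             # keV bin centres
N_b = np.array([4000, 9000, 12000, 11000, 8000, 5000, 2500.])    # behind background tissue
N_l = 0.97 * N_b * np.exp(-0.004 * (50 - E))                     # behind a slightly more attenuating lesion

def snr(weights):
    """SNR of the lesion-vs-background difference for a given set of bin weights."""
    signal = np.sum(weights * (N_b - N_l))
    noise = np.sqrt(np.sum(weights**2 * (N_b + N_l)))            # Poisson variance of the difference
    return signal / noise

print("energy integrating :", snr(E))                # each photon weighted by its energy
print("photon counting    :", snr(np.ones_like(E)))  # simple counting
print("1/E^3 weighting    :", snr(1.0 / E**3))       # low-energy emphasis
```

Because the lesion contrast is concentrated at low energies, down-weighting high-energy photons raises the SNR, which is the mechanism behind the improvements quoted above.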
NASA Astrophysics Data System (ADS)
Gao, Chen; Ding, Zhongan; Deng, Bofa; Yan, Shengteng
2017-10-01
Considering the characteristics of the electric energy data acquire system (EEDAS), the availability of each index, and the interconnections among indices, a performance evaluation index system for EEDAS is established covering three aspects: the master station system, the communication channel, and the terminal equipment. The comprehensive weight of each index is determined by combining the triangular fuzzy number analytic hierarchy process with the entropy weight method, so that both subjective preference and objective attributes are taken into account and the comprehensive performance evaluation becomes more reasonable and reliable. An example analysis shows that establishing a comprehensive index evaluation system by combining the analytic hierarchy process (AHP) and triangular fuzzy numbers (TFN) with the entropy method yields evaluation results that are not only convenient and practical but also more objective and accurate.
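The entropy-weight half of the combination is fully algorithmic and is sketched below for benefit-type indices; the index matrix and the simple multiplicative combination with subjective weights are assumptions for illustration, and the triangular-fuzzy AHP step that produces the subjective weights is not reproduced.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method for an (alternatives x indices) matrix of
       benefit-type indicators: indices that discriminate more between
       alternatives (lower information entropy) get larger objective weights."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                                  # column-wise proportions
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -k * np.sum(np.where(P > 0, P * np.log(P), 0.0), axis=0)
    d = 1.0 - e                                            # degree of diversification
    return d / d.sum()

# Hypothetical index matrix: four terminals scored on three indices
X = [[0.92, 120, 0.85],
     [0.95, 140, 0.80],
     [0.90, 300, 0.83],
     [0.93, 135, 0.84]]
w_objective = entropy_weights(X)
w_subjective = np.array([0.5, 0.3, 0.2])                   # stand-in for the fuzzy-AHP weights
w_combined = w_objective * w_subjective
w_combined /= w_combined.sum()                             # one simple way to combine the two
print(np.round(w_objective, 3), np.round(w_combined, 3))
```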
Eyler, Amy A.; Purnell, Jason Q.; Kinghorn, Anna M.; Herrick, Cynthia; Evanoff, Bradley A.
2015-01-01
Introduction The objective of this study was to examine workplace determinants of obesity and participation in employer-sponsored wellness programs among low-wage workers. Methods We conducted key informant interviews and focus groups with 2 partner organizations: a health care employer and a union representing retail workers. Interviews and focus groups discussed worksite factors that support or constrain healthy eating and physical activity and barriers that reduce participation in workplace wellness programs. Focus group discussions were transcribed and coded to identify main themes related to healthy eating, physical activity, and workplace factors that affect health. Results Although the union informants recognized the need for workplace wellness programs, very few programs were offered because informants did not know how to reach their widespread and diverse membership. Informants from the health care organization described various programs available to employees but noted several barriers to effective implementation. Workers discussed how their job characteristics contributed to their weight; irregular schedules, shift work, short breaks, physical job demands, and food options at work were among the most commonly discussed contributors to poor eating and exercise behaviors. Workers also described several general factors such as motivation, time, money, and conflicting responsibilities. Conclusion The workplace offers unique opportunities for obesity interventions that go beyond traditional approaches. Our results suggest that modifying the physical and social work environment by using participatory or integrated health and safety approaches may improve eating and physical activity behaviors. However, more research is needed about the methods best suited to the needs of low-wage workers. PMID:25950574
On Bernstein type inequalities and a weighted Chebyshev approximation problem on ellipses
NASA Technical Reports Server (NTRS)
Freund, Roland
1989-01-01
A classical inequality due to Bernstein which estimates the norm of polynomials on any given ellipse in terms of their norm on any smaller ellipse with the same foci is examined. For the uniform and a certain weighted uniform norm, and for the case that the two ellipses are not too close, sharp estimates of this type were derived and the corresponding extremal polynomials were determined. These Bernstein type inequalities are closely connected with certain constrained Chebyshev approximation problems on ellipses. Some new results were also presented for a weighted approximation problem of this type.
Halo effective field theory constrains the solar 7Be + p → 8B + γ rate
Zhang, Xilin; Nollett, Kenneth M.; Phillips, D. R.
2015-11-06
In this study, we report an improved low-energy extrapolation of the cross section for the process 7Be(p,γ) 8B, which determines the 8B neutrino flux from the Sun. Our extrapolant is derived from Halo Effective Field Theory (EFT) at next-to-leading order. We apply Bayesian methods to determine the EFT parameters and the low-energy S-factor, using measured cross sections and scattering lengths as inputs. Asymptotic normalization coefficients of 8B are tightly constrained by existing radiative capture data, and contributions to the cross section beyond external direct capture are detected in the data at E < 0.5 MeV. Most importantly, the S-factor at zero energy is constrained to be S(0) = 21.3 ± 0.7 eV b, which is an uncertainty smaller by a factor of two than previously recommended. That recommendation was based on the full range for S(0) obtained among a discrete set of models judged to be reasonable. In contrast, Halo EFT subsumes all models into a controlled low-energy approximant, where they are characterized by nine parameters at next-to-leading order. These are fit to data, and marginalized over via Monte Carlo integration to produce the improved prediction for S(E).
Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space.
Kalathil, Shaeen; Elias, Elizabeth
2015-11-01
This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB has an easy and efficient design approach. Non-uniform decomposition can be easily obtained by merging the appropriate filters of a uniform filter bank. Only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least square approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance of the filter bank is improved using suitably modified meta-heuristic algorithms. The different meta-heuristic algorithms which are modified and used in this paper are the Artificial Bee Colony algorithm, Gravitational Search algorithm, Harmony Search algorithm and Genetic algorithm, and they result in filter banks with less implementation complexity, power consumption and area requirements when compared with those of the conventional continuous coefficient non-uniform CMFB.
Non-uniform cosine modulated filter banks using meta-heuristic algorithms in CSD space
Kalathil, Shaeen; Elias, Elizabeth
2014-01-01
This paper presents an efficient design of non-uniform cosine modulated filter banks (CMFB) using canonic signed digit (CSD) coefficients. CMFB has an easy and efficient design approach. Non-uniform decomposition can be easily obtained by merging the appropriate filters of a uniform filter bank. Only the prototype filter needs to be designed and optimized. In this paper, the prototype filter is designed using the window method, weighted Chebyshev approximation and weighted constrained least square approximation. The coefficients are quantized into CSD using a look-up table. The finite-precision CSD rounding deteriorates the filter bank performance. The performance of the filter bank is improved using suitably modified meta-heuristic algorithms. The different meta-heuristic algorithms which are modified and used in this paper are the Artificial Bee Colony algorithm, Gravitational Search algorithm, Harmony Search algorithm and Genetic algorithm, and they result in filter banks with less implementation complexity, power consumption and area requirements when compared with those of the conventional continuous coefficient non-uniform CMFB. PMID:26644921
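The CSD quantisation step is a standard non-adjacent-form conversion and can be sketched directly; the 10-bit fractional word length and the example coefficient are arbitrary choices, and the meta-heuristic re-optimisation of the quantised prototype filter is not shown.

```python
def to_csd(value, frac_bits=10):
    """Convert a real coefficient to canonic signed digit (CSD) form, i.e. the
       non-adjacent form of round(value * 2**frac_bits). Returns the digits in
       {-1, 0, +1}, least-significant first, plus the quantised value."""
    m = int(round(value * (1 << frac_bits)))
    digits = []
    while m != 0:
        if m & 1:
            d = 2 - (m % 4)       # +1 or -1, chosen so the next bit becomes 0
            m -= d
        else:
            d = 0
        digits.append(d)
        m //= 2
    quantised = sum(d * (1 << i) for i, d in enumerate(digits)) / (1 << frac_bits)
    return digits, quantised

digits, q = to_csd(0.716, frac_bits=10)
print(digits, q, sum(d != 0 for d in digits), "nonzero digits")
```

The non-adjacent form guarantees that no two consecutive digits are nonzero, so each coefficient multiplies with the minimum number of shift-and-add (or subtract) operations, which is where the implementation-complexity savings come from.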
Utility of inverse probability weighting in molecular pathological epidemiology.
Liu, Li; Nevo, Daniel; Nishihara, Reiko; Cao, Yin; Song, Mingyang; Twombly, Tyler S; Chan, Andrew T; Giovannucci, Edward L; VanderWeele, Tyler J; Wang, Molin; Ogino, Shuji
2018-04-01
As a causal inference methodology, the inverse probability weighting (IPW) method has been utilized to address confounding and to account for missing data when subjects with missing data cannot be included in a primary analysis. The transdisciplinary field of molecular pathological epidemiology (MPE) integrates molecular pathological and epidemiological methods, and takes advantage of improved understanding of pathogenesis to generate stronger biological evidence of causality and optimize strategies for precision medicine and prevention. Disease subtyping based on biomarker analysis of biospecimens is essential in MPE research. However, there are nearly always cases that lack subtype information due to the unavailability or insufficiency of biospecimens. To address this missing subtype data issue, we incorporated inverse probability weights into Cox proportional cause-specific hazards regression. The weight was the inverse of the probability of biomarker data availability, estimated based on a model for biomarker data availability status. The strategy was illustrated in two example studies; each assessed alcohol intake or family history of colorectal cancer in relation to the risk of developing colorectal carcinoma subtypes classified by tumor microsatellite instability (MSI) status, using a prospective cohort study, the Nurses' Health Study. Logistic regression was used to estimate the probability of MSI data availability for each cancer case with covariates of clinical features and family history of colorectal cancer. This application of IPW can reduce selection bias caused by nonrandom variation in biospecimen data availability. The integration of causal inference methods into the MPE approach will likely have substantial potential to advance the field of epidemiology.
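A minimal synthetic sketch of the two-step strategy is shown below: model the probability that subtype (biomarker) data are available, then fit a Cox model to the complete cases weighted by the inverse of that probability. The covariates, the data-generating process, and the use of scikit-learn and lifelines (with robust variance) are assumptions of the sketch; the study itself uses cause-specific hazards for each tumour subtype.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 2000
df = pd.DataFrame({
    "alcohol": rng.binomial(1, 0.4, n),
    "age":     rng.normal(60, 8, n),
    "time":    rng.exponential(10, n),
    "event":   rng.binomial(1, 0.3, n),
})
# Biomarker (subtype) availability depends on covariates -> non-random missingness
p_avail = 1 / (1 + np.exp(-(-0.5 + 0.8 * df["alcohol"] - 0.02 * (df["age"] - 60))))
df["available"] = rng.binomial(1, p_avail)

# Step 1: model the probability of biomarker data availability
avail_model = LogisticRegression().fit(df[["alcohol", "age"]], df["available"])
df["w"] = 1.0 / avail_model.predict_proba(df[["alcohol", "age"]])[:, 1]

# Step 2: weighted Cox model restricted to cases with available subtype data
sub = df[df["available"] == 1]
cph = CoxPHFitter()
cph.fit(sub[["alcohol", "age", "time", "event", "w"]],
        duration_col="time", event_col="event", weights_col="w", robust=True)
print(cph.summary[["coef", "se(coef)"]])
```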
Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation
NASA Astrophysics Data System (ADS)
Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito
2014-02-01
A method for extracting onomatopoeia characters from comic images was developed based on the stroke width feature of characters, since they have a nearly constant stroke width in many cases. An image was segmented with a constrained Delaunay triangulation. Connected-component grouping was performed based on the triangles generated by the constrained Delaunay triangulation, and the stroke width of each connected component was calculated from the altitudes of those triangles. The experimental results proved the effectiveness of the proposed method.
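A minimal sketch of the stroke-width cue used above: for a triangle lying inside a character stroke, the altitude onto its longest edge approximates the local stroke width.

```python
# Stroke width estimate from a single triangle of the triangulation:
# altitude onto the longest edge = 2 * area / base.
import numpy as np

def stroke_width_from_triangle(p0, p1, p2):
    a, b, c = (np.asarray(p, float) for p in (p0, p1, p2))
    u, v = b - a, c - a
    area2 = abs(u[0] * v[1] - u[1] * v[0])                 # twice the triangle area
    longest = max(np.linalg.norm(b - a),
                  np.linalg.norm(c - b),
                  np.linalg.norm(a - c))
    return area2 / longest

print(stroke_width_from_triangle((0, 0), (10, 0), (5, 3)))  # ~3.0 for a 3-px-wide stroke
```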
Approximation of Engine Casing Temperature Constraints for Casing Mounted Electronics
NASA Technical Reports Server (NTRS)
Kratz, Jonathan L.; Culley, Dennis E.; Chapman, Jeffryes W.
2017-01-01
The performance of propulsion engine systems is sensitive to weight and volume considerations. This can severely constrain the configuration and complexity of the control system hardware. Distributed Engine Control technology is a response to these concerns by providing more flexibility in designing the control system, and by extension, more functionality leading to higher performing engine systems. Consequently, there can be a weight benefit to mounting modular electronic hardware on the engine core casing in a high temperature environment. This paper attempts to quantify the in-flight temperature constraints for engine casing mounted electronics. In addition, an attempt is made at studying heat soak back effects. The Commercial Modular Aero Propulsion System Simulation 40k (C-MAPSS40k) software is leveraged with real flight data as the inputs to the simulation. A two-dimensional (2-D) heat transfer model is integrated with the engine simulation to approximate the temperature along the length of the engine casing. This modification to the existing C-MAPSS40k software will provide tools and methodologies to develop a better understanding of the requirements for the embedded electronics hardware in future engine systems. Results of the simulations are presented, and their implications for temperature constraints on engine casing mounted electronics are discussed.
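A minimal sketch of the kind of 2-D finite-difference conduction update described above; the grid, material properties and boundary temperatures are illustrative assumptions and are not taken from C-MAPSS40k or the paper.

```python
# Explicit 2-D heat conduction update over a casing-like rectangular domain.
# All numbers here are illustrative placeholders.
import numpy as np

nx, ny, dx, dy = 60, 10, 0.02, 0.02          # grid points and spacing (m)
alpha = 1.2e-5                               # thermal diffusivity (m^2/s), assumed
dt = 0.2 * min(dx, dy) ** 2 / alpha          # well below the explicit stability limit
T = np.full((ny, nx), 300.0)                 # initial casing temperature (K)
T[:, 0] = 800.0                              # fixed hot boundary (e.g. near the combustor)

for _ in range(2000):
    Tn = T.copy()
    T[1:-1, 1:-1] = Tn[1:-1, 1:-1] + alpha * dt * (
        (Tn[1:-1, 2:] - 2 * Tn[1:-1, 1:-1] + Tn[1:-1, :-2]) / dx**2 +
        (Tn[2:, 1:-1] - 2 * Tn[1:-1, 1:-1] + Tn[:-2, 1:-1]) / dy**2)

print(T[ny // 2, ::10])                      # sampled axial temperature profile (K)
```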
A Geo-referenced 3D model of the Juan de Fuca Slab and associated seismicity
Blair, J.L.; McCrory, P.A.; Oppenheimer, D.H.; Waldhauser, F.
2011-01-01
We present a Geographic Information System (GIS) of a new 3-dimensional (3D) model of the subducted Juan de Fuca Plate beneath western North America and associated seismicity of the Cascadia subduction system. The geo-referenced 3D model was constructed from weighted control points that integrate depth information from hypocenter locations and regional seismic velocity studies. We used the 3D model to differentiate earthquakes that occur above the Juan de Fuca Plate surface from earthquakes that occur below the plate surface. This GIS project of the Cascadia subduction system supersedes the one previously published by McCrory and others (2006). Our new slab model updates the model with new constraints. The most significant updates to the model include: (1) weighted control points to incorporate spatial uncertainty, (2) an additional gridded slab surface based on the Generic Mapping Tools (GMT) Surface program which constructs surfaces based on splines in tension (see expanded description below), (3) double-differenced hypocenter locations in northern California to better constrain slab location there, and (4) revised slab shape based on new hypocenter profiles that incorporate routine depth uncertainties as well as data from new seismic-reflection and seismic-refraction studies. We also provide a 3D fly-through animation of the model for use as a visualization tool.
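The above/below-slab classification step can be sketched as follows, assuming synthetic control points and hypocentres; a real application would use the gridded slab surface and relocated hypocentres described above.

```python
# Sketch: interpolate slab-surface depth at each epicentre and compare with the
# hypocentre depth to split events into above-slab and below-slab populations.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
ctrl_lon = rng.uniform(-125, -120, 200)                  # synthetic control points
ctrl_lat = rng.uniform(40, 49, 200)
ctrl_depth = 20 + 8.0 * (ctrl_lon + 125)                 # slab depth (km), deepening eastward

hypo_lon = rng.uniform(-124.5, -120.5, 50)               # synthetic hypocentres
hypo_lat = rng.uniform(41, 48, 50)
hypo_depth = rng.uniform(5, 80, 50)

slab_depth_at_hypo = griddata(np.column_stack([ctrl_lon, ctrl_lat]),
                              ctrl_depth,
                              np.column_stack([hypo_lon, hypo_lat]),
                              method="linear")
above_slab = hypo_depth < slab_depth_at_hypo             # shallower than the plate surface
print(np.sum(above_slab), "hypocentres above the slab surface")
```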
Enriching Triangle Mesh Animations with Physically Based Simulation.
Li, Yijing; Xu, Hongyi; Barbic, Jernej
2017-10-01
We present a system to combine arbitrary triangle mesh animations with physically based Finite Element Method (FEM) simulation, enabling control over the combination both in space and time. The input is a triangle mesh animation obtained using any method, such as keyframed animation, character rigging, 3D scanning, or geometric shape modeling. The input may be non-physical, crude or even incomplete. The user provides weights, specified using a minimal user interface, for how much physically based simulation should be allowed to modify the animation in any region of the model, and in time. Our system then computes a physically-based animation that is constrained to the input animation to the amount prescribed by these weights. This permits smoothly turning physics on and off over space and time, making it possible for the output to strictly follow the input, to evolve purely based on physically based simulation, and anything in between. Achieving such results requires a careful combination of several system components. We propose and analyze these components, including proper automatic creation of simulation meshes (even for non-manifold and self-colliding undeformed triangle meshes), converting triangle mesh animations into animations of the simulation mesh, and resolving collisions and self-collisions while following the input.
Made-to-measure modelling of observed galaxy dynamics
NASA Astrophysics Data System (ADS)
Bovy, Jo; Kawata, Daisuke; Hunt, Jason A. S.
2018-01-01
Amongst dynamical modelling techniques, the made-to-measure (M2M) method for modelling steady-state systems is one of the most flexible, allowing non-parametric distribution functions in complex gravitational potentials to be modelled efficiently using N-body particles. Here, we propose and test various improvements to the standard M2M method for modelling observed data, illustrated using the simple set-up of a one-dimensional harmonic oscillator. We demonstrate that nuisance parameters describing the modelled system's orientation with respect to the observer - e.g. an external galaxy's inclination or the Sun's position in the Milky Way - as well as the parameters of an external gravitational field can be optimized simultaneously with the particle weights. We develop a method for sampling from the high-dimensional uncertainty distribution of the particle weights. We combine this in a Gibbs sampler with samplers for the nuisance and potential parameters to explore the uncertainty distribution of the full set of parameters. We illustrate our M2M improvements by modelling the vertical density and kinematics of F-type stars in Gaia DR1. The novel M2M method proposed here allows full probabilistic modelling of steady-state dynamical systems, allowing uncertainties on the non-parametric distribution function and on nuisance parameters to be taken into account when constraining the dark and baryonic masses of stellar systems.
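A heavily simplified sketch of an M2M-style weight update, in which particle weights are nudged multiplicatively until weighted model observables approach the data; the kernel, target observables and step size are illustrative, not the paper's set-up.

```python
# Toy M2M-style fit: adjust particle weights so that weighted observables
# approach target data. Everything here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_obs = 2000, 10
K = rng.uniform(0, 1, size=(n_particles, n_obs))   # K[i, j]: particle i's contribution to observable j
data = K.mean(axis=0) * n_particles * 1.05         # made-up target observables
w = np.ones(n_particles)                            # particle weights
eps = 1e-3                                          # step size of the weight update

for _ in range(500):
    model = w @ K                                   # current model observables
    delta = (model - data) / data                   # fractional mismatch
    w *= np.exp(-eps * K @ delta)                   # multiplicative update keeps w > 0

print(np.abs((w @ K - data) / data).max())          # residual mismatch after fitting
```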
NASA Astrophysics Data System (ADS)
Albus, J.; Oery, H.
1993-04-01
One of the main problems associated with the structural design of a hypersonic aircraft is the conception of the cryogenic tank. Two essential questions therefore have to be answered, taking into account structural weight, volumetric efficiency, the aspects of inspection, maintenance and repair, exchangeability in case of leakage (leak before burst), and safety in operation. These questions concern the choice of the tank integration concept and the tank cross section. To get an idea of how much the take-off weight depends on the tank integration concept, a program for weight estimation of hypersonic aircraft has been developed at the Institut fuer Leichtbau of the RWTH Aachen. The goal was to define well-suited substitute models that allow parametric studies over a wide range of parameters in a tolerable amount of time. In the following, the mass model and calculation methods used are briefly introduced, and finally the results achieved are presented and discussed. Comments on the structural efficiency of different tank cross sections are also given.
Stochastic Multi-Timescale Power System Operations With Variable Wind Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Hongyu; Krad, Ibrahim; Florita, Anthony
This paper describes a novel set of stochastic unit commitment and economic dispatch models that consider stochastic loads and variable generation at multiple operational timescales. The stochastic model includes four distinct stages: stochastic day-ahead security-constrained unit commitment (SCUC), stochastic real-time SCUC, stochastic real-time security-constrained economic dispatch (SCED), and deterministic automatic generation control (AGC). These sub-models are integrated together such that they are continually updated with decisions passed from one to another. The progressive hedging algorithm (PHA) is applied to solve the stochastic models to maintain the computational tractability of the proposed models. Comparative case studies with deterministic approaches are conducted in low wind and high wind penetration scenarios to highlight the advantages of the proposed methodology, one with perfect forecasts and the other with current state-of-the-art but imperfect deterministic forecasts. The effectiveness of the proposed method is evaluated with sensitivity tests using both economic and reliability metrics to provide a broader view of its impact.
Hierarchical Bayesian Model Averaging for Chance Constrained Remediation Designs
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Tsai, F. T.
2012-12-01
Groundwater remediation designs rely heavily on simulation models, which are subject to various sources of uncertainty in their predictions. To develop a robust remediation design, it is crucial to understand the effect of the uncertainty sources. In this research, we introduce a hierarchical Bayesian model averaging (HBMA) framework to segregate and prioritize sources of uncertainty in a multi-layer framework, where each layer targets a source of uncertainty. The HBMA framework provides insight into uncertainty priorities and propagation. In addition, HBMA allows evaluating model weights in different hierarchy levels and assessing the relative importance of models in each level. To account for uncertainty, we employ chance constrained (CC) programming for stochastic remediation design. Chance constrained programming has traditionally been implemented to account for parameter uncertainty. Recently, many studies have suggested that model structure uncertainty is not negligible compared to parameter uncertainty. Using chance constrained programming along with HBMA can provide a rigorous tool for groundwater remediation designs under uncertainty. In this research, HBMA-CC was applied to a remediation design in a synthetic aquifer. The design was to develop a scavenger well approach to mitigate saltwater intrusion toward production wells. HBMA was employed to assess uncertainties from model structure, parameter estimation and kriging interpolation. An improved harmony search optimization method was used to find the optimal location of the scavenger well. We evaluated prediction variances of chloride concentration at the production wells through the HBMA framework. The results showed that choosing the single best model may lead to a significant error in evaluating prediction variances, for two reasons. First, if only the single best model is considered, variances that stem from uncertainty in the model structure are ignored. Second, considering the best model with a non-dominant model weight may underestimate or overestimate prediction variances by ignoring other plausible propositions. Chance constraints allow developing a remediation design with a desirable reliability. However, when only the single best model is considered, the calculated reliability will differ from the desired reliability. We calculated the reliability of the design for the models at different levels of HBMA. The results showed that, moving toward the top layers of HBMA, the calculated reliability converges to the chosen reliability. We employed chance constrained optimization along with the HBMA framework to find the optimal location and pumpage for the scavenger well. The results showed that, using models at different levels in the HBMA framework, the optimal location of the scavenger well remained the same, but the optimal extraction rate was altered. Thus, we concluded that the optimal pumping rate is sensitive to the prediction variance. Also, the prediction variance changed with the extraction rate: a very high extraction rate will cause prediction variances of chloride concentration at the production wells to approach zero regardless of which HBMA models are used.
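The chance-constraint idea can be sketched with a normal approximation, under which "concentration below a limit with reliability beta" becomes a deterministic condition on the model-averaged mean and standard deviation; the limit and predictions below are illustrative.

```python
# Deterministic equivalent of a chance constraint under a normal approximation:
# require mean + z_beta * std <= limit so that P(concentration <= limit) >= beta.
import numpy as np
from scipy.stats import norm

beta = 0.95                       # desired reliability
limit = 250.0                     # chloride limit (mg/L), assumed
z = norm.ppf(beta)

def chance_constraint_ok(pred_mean, pred_std):
    return pred_mean + z * pred_std <= limit

# e.g. model-averaged prediction at a production well for a candidate pumping rate
print(chance_constraint_ok(pred_mean=210.0, pred_std=20.0))   # True: 210 + 1.645*20 = 242.9
```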
Energy harvesting from controlled buckling of piezoelectric beams
NASA Astrophysics Data System (ADS)
Ansari, M. H.; Karami, M. Amin
2015-11-01
A piezoelectric vibration energy harvester is presented that can generate electricity from the weight of passing cars or crowds. The energy harvester consists of a piezoelectric beam, which buckles when the device is stepped on. The energy harvester can have a horizontal or vertical configuration. In the vertical (direct) configuration, the piezoelectric beam is vertical and directly sustains the weight of the vehicles or people. In the horizontal (indirect) configuration, the vertical weight is transferred to a horizontal axial force through a scissor-like mechanism. Buckling of the beam results in significant stresses and, thus, large power production. However, if the beam’s buckling is not controlled, the beam will fracture. To prevent this, the axial deformation is constrained to limit the deformations of the beam. In this paper, the energy harvester is analytically modeled. The considered piezoelectric beam is a general non-uniform beam. The natural frequencies, mode shapes, and the critical buckling force corresponding to each mode shape are calculated. The electro-mechanical coupling and the geometric nonlinearities are included in the model. The design criteria for the device are discussed. It is demonstrated that a device, realized with commonly used piezoelectric patches, can generate tens of milliwatts of power from passing car traffic. The proposed device could also be implemented in the sidewalks or integrated in shoe soles for energy generation. One of the key features of the device is its frequency up-conversion characteristics. The piezoelectric beam undergoes free vibrations each time the weight is applied to or removed from the energy harvester. The frequency of the free vibrations is orders of magnitude larger than the frequency of the load. The device is, thus, both efficient and insensitive to the frequency of the force excitations.
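A quick, illustrative check of the critical buckling load for a pinned-pinned beam using the Euler formula; the material and dimensions are assumed, not the device in the paper.

```python
# Euler critical buckling load P_cr = n^2 * pi^2 * E * I / L^2 for a
# pinned-pinned beam; all numbers below are illustrative assumptions.
import math

E = 66e9                      # Young's modulus of a PZT-like material (Pa), assumed
b, h, L = 0.02, 0.0005, 0.06  # width, thickness, length (m), assumed
I = b * h**3 / 12             # second moment of area of the rectangular cross-section

for n in (1, 2, 3):           # critical load of the first three buckling modes
    P_cr = n**2 * math.pi**2 * E * I / L**2
    print(f"mode {n}: P_cr = {P_cr:.1f} N")
```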
Berge, Jerica M; Adamek, Margaret; Caspi, Caitlin; Grannon, Katherine Y; Loth, Katie A; Trofholz, Amanda; Nanney, Marilyn S
2018-06-01
In response to the limitations of siloed weight-related intervention approaches, scholars have called for greater integration that is intentional, strategic, and thoughtful between researchers, health care clinicians, community members, and policy makers as a way to more effectively address weight and weight-related (e.g., obesity, diabetes, cardiovascular disease, cancer) public health problems. The Mastery Matrix for Integration Praxis was developed by the Healthy Eating and Activity across the Lifespan (HEAL) team in 2017 to advance the science and praxis of integration across the domains of research, clinical practice, community, and policy to address weight-related public health problems. Integrator functions were identified and developmental stages were created to generate a rubric for measuring mastery of integration. Creating a means to systematically define and evaluate integration praxis and expertise will allow for more individuals and teams to master integration in order to work towards promoting a culture of health. Copyright © 2018 Elsevier Inc. All rights reserved.
Integrated Control Using the SOFFT Control Structure
NASA Technical Reports Server (NTRS)
Halyo, Nesim
1996-01-01
The need for integrated/constrained control systems has become clearer as advanced aircraft introduced new coupled subsystems such as new propulsion subsystems with thrust vectoring and new aerodynamic designs. In this study, we develop an integrated control design methodology which accommodates constraints among subsystem variables while using the Stochastic Optimal Feedforward/Feedback Control Technique (SOFFT), thus maintaining all the advantages of the SOFFT approach. The Integrated SOFFT Control methodology uses a centralized feedforward control and a constrained feedback control law. The control thus takes advantage of the known coupling among the subsystems while maintaining the identity of subsystems for validation purposes and the simplicity of the feedback law to understand the system response in complicated nonlinear scenarios. The Variable-Gain Output Feedback Control methodology (including constant gain output feedback) is extended to accommodate equality constraints. A gain computation algorithm is developed. The designer can set the cross-gains between two variables or subsystems to zero or another value and optimize the remaining gains subject to the constraint. An integrated control law is designed for a modified F-15 SMTD aircraft model with coupled airframe and propulsion subsystems using the Integrated SOFFT Control methodology to produce a set of desired flying qualities.
A person trade-off study to estimate age-related weights for health gains in economic evaluation.
Petrou, Stavros; Kandala, Ngianga-Bakwin; Robinson, Angela; Baker, Rachel
2013-10-01
An increasing body of literature is exploring whether the age of the recipient of health care should be a criterion in how health care resources are allocated. The existing literature is constrained both by the relatively small number of age comparison groups within preference-elicitation studies, and by a paucity of methodological robustness tests for order and framing effects and the reliability and transitivity of preferences that would strengthen confidence in the results. This paper reports the results of a study aimed at estimating granulated age-related weights for health gains across the age spectrum that can potentially inform health care decision-making. A sample of 2,500 participants recruited from the health care consumer panels of a social research company completed a person trade-off (or 'matching') study designed to estimate age-related weights for 5- and 10-year life extensions. The results are presented in terms of matrices for alternative age comparisons across the age spectrum. The results revealed a general, although not invariable, tendency to give more weight to health gains, expressed in terms of life extensions, in younger age groups. In over 85% of age comparisons, the person trade-off exercises revealed a preference for life extensions by the younger of the two age groups that were compared. This pattern held regardless of the method of aggregating responses across study participants. Moreover, the relative weight placed on life extensions by the younger of the two age groups was generally, although not invariably, found to increase as the age difference between the comparator age groups increased. Further analyses revealed that the highest mean relative weight placed on life extensions was estimated for 30-year-olds when the ratio of means method was used to aggregate person trade-off responses across study participants. The highest mean relative weight placed on life extensions was estimated for 10-year-olds for 5-year life extensions and for 30-year-olds for 10-year life extensions, when the median of individual ratios method was used to aggregate person trade-off responses across study participants. Methodological tests framed around alternative referents in the person trade-off questions and the stability of preferences had no discernible effects on the study results. This study has produced new evidence on age-related weights for health gains that can potentially inform health care decision-making.
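A small sketch of the two aggregation rules mentioned above (ratio of means versus median of individual ratios) applied to synthetic person trade-off responses for one pair of age groups.

```python
# Two ways of aggregating person trade-off responses across participants;
# the responses below are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
# Number of people in the older group each respondent treats as equivalent to
# 100 people in the younger group (values > 100 favour the younger group).
older_equivalents = rng.normal(130, 25, size=500).clip(60, 250)
younger_reference = 100.0

ratio_of_means = older_equivalents.mean() / younger_reference
median_of_ratios = np.median(older_equivalents / younger_reference)
print(f"ratio of means: {ratio_of_means:.2f}, "
      f"median of individual ratios: {median_of_ratios:.2f}")
```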
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W.T.; Siebers, J.V.
Purpose: To introduce quasi-constrained Multi-Criteria Optimization (qcMCO) for unsupervised radiation therapy optimization which generates alternative patient-specific plans emphasizing dosimetric tradeoffs and conformance to clinical constraints for multiple delivery techniques. Methods: For N Organs At Risk (OARs) and M delivery techniques, qcMCO generates M(N+1) alternative treatment plans per patient. Objective weight variations for OARs and targets are used to generate alternative qcMCO plans. For 30 locally advanced lung cancer patients, qcMCO plans were generated for dosimetric tradeoffs to four OARs: each lung, heart, and esophagus (N=4) and 4 delivery techniques (simple 4-field arrangements, 9-field coplanar IMRT, 27-field non-coplanar IMRT, and non-coplanar Arc IMRT). Quasi-constrained objectives included target prescription isodose to 95% (PTV-D95), maximum PTV dose (PTV-Dmax)< 110% of prescription, and spinal cord Dmax<45 Gy. The algorithm’s ability to meet these constraints while simultaneously revealing dosimetric tradeoffs was investigated. Statistically significant dosimetric tradeoffs were defined such that the coefficient of determination between dosimetric indices which varied by at least 5 Gy between different plans was >0.8. Results: The qcMCO plans varied mean dose by >5 Gy to ipsilateral lung for 24/30 patients, contralateral lung for 29/30 patients, esophagus for 29/30 patients, and heart for 19/30 patients. In the 600 plans computed without human interaction, average PTV-D95=67.4±3.3 Gy, PTV-Dmax=79.2±5.3 Gy, and spinal cord Dmax was >45 Gy in 93 plans (>50 Gy in 2/600 plans). Statistically significant dosimetric tradeoffs were evident in 19/30 plans, including multiple tradeoffs of at least 5 Gy between multiple OARs in 7/30 cases. The most common statistically significant tradeoff was increasing PTV-Dmax to reduce OAR dose (15/30 patients). Conclusion: The qcMCO method can conform to quasi-constrained objectives while revealing significant variations in OAR doses including mean dose reductions >5 Gy. Clinical implementation will facilitate patient-specific decision making based on achievable dosimetry as opposed to accept/reject models based on population derived objectives.
A Map/INS/Wi-Fi Integrated System for Indoor Location-Based Service Applications
Yu, Chunyang; Lan, Haiyu; Gu, Fuqiang; Yu, Fei; El-Sheimy, Naser
2017-01-01
In this research, a new Map/INS/Wi-Fi integrated system for indoor location-based service (LBS) applications based on a cascaded Particle/Kalman filter framework structure is proposed. Two-dimension indoor map information, together with measurements from an inertial measurement unit (IMU) and Received Signal Strength Indicator (RSSI) value, are integrated for estimating positioning information. The main challenge of this research is how to make effective use of various measurements that complement each other in order to obtain an accurate, continuous, and low-cost position solution without increasing the computational burden of the system. Therefore, to eliminate the cumulative drift caused by low-cost IMU sensor errors, the ubiquitous Wi-Fi signal and non-holonomic constraints are rationally used to correct the IMU-derived navigation solution through the extended Kalman Filter (EKF). Moreover, the map-aiding method and map-matching method are innovatively combined to constrain the primary Wi-Fi/IMU-derived position through an Auxiliary Value Particle Filter (AVPF). Different sources of information are incorporated through a cascaded structure EKF/AVPF filter algorithm. Indoor tests show that the proposed method can effectively reduce the accumulation of positioning errors of a stand-alone Inertial Navigation System (INS), and provide a stable, continuous and reliable indoor location service. PMID:28574471
Toward Overcoming the Local Minimum Trap in MFBD
2015-07-14
Publications during the first two years of this grant: A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Variable Projection Method for Blind Deconvolution; A. Cornelio, E. Loli Piccolomini, and J. G. Nagy, Constrained Numerical Optimization Methods for Blind Deconvolution, Numerical Algorithms, volume 65, issue 1.
A Constrained-Clustering Approach to the Analysis of Remote Sensing Data.
1983-01-01
One old and two new clustering methods were applied to the constrained-clustering problem of separating different agricultural fields based on multispectral remote sensing satellite data. (Constrained-clustering involves double clustering in multispectral measurement similarity and geographical location.) The results of applying the three methods are provided along with a discussion of their relative strengths and weaknesses and a detailed description of their algorithms.
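One simple way to realize the double (spectral plus geographical) clustering idea is to append weighted, standardized pixel coordinates to the spectral features before clustering; the sketch below illustrates that idea and is not a reimplementation of the report's algorithms.

```python
# Constrained-clustering sketch: cluster pixels on spectral features augmented
# with weighted geographic coordinates so clusters stay spatially compact.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
n_pixels = 1000
spectral = rng.normal(size=(n_pixels, 4))                 # 4 synthetic multispectral bands
rows, cols = rng.integers(0, 100, (2, n_pixels))          # pixel locations
location = np.column_stack([rows, cols]).astype(float)

spatial_weight = 0.5                                      # trade-off between spectra and geography
X = np.hstack([StandardScaler().fit_transform(spectral),
               spatial_weight * StandardScaler().fit_transform(location)])

labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))                                # cluster sizes
```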
Imaging volcanoes with gravity and muon tomography measurements
NASA Astrophysics Data System (ADS)
Jourde, Kevin; Gibert, Dominique; Marteau, Jacques; Deroussi, Sébastien; Dufour, Fabrice; de Bremond d'Ars, Jean; Ianigro, Jean-Christophe; Gardien, Serge; Girerd, Claude
2015-04-01
Both muon tomography and gravimetry are geophysical methods that provide information on the density structure of the Earth's subsurface. Muon tomography measures the natural flux of cosmic muons and its attenuation produced by the screening effect of the rock mass to be imaged. Gravimetry generally consists of measurements of the vertical component of the local gravity field. Both methods are linearly linked to density, but their spatial sensitivities are very different. Muon tomography essentially works like a medical X-ray scan and integrates density information along elongated narrow conical volumes, while gravimetry measurements are linked to density by a three-dimensional integral encompassing the whole studied domain. We show that gravity data are almost useless for constraining the density structure in regions sampled by more than two muon tomography acquisitions. Interestingly, the resolution in deeper regions not sampled by muon tomography is significantly improved by joining the two techniques. Examples taken from field experiments performed on La Soufrière of Guadeloupe volcano are discussed.
NASA Astrophysics Data System (ADS)
Yang, Y.; Zhao, Y.
2017-12-01
To understand the differences among emission inventories based on various estimation methods, and the origins of those differences, emissions of PM10, PM2.5, OC, BC, CH4, VOCs, CO, CO2, NOX, SO2 and NH3 from open biomass burning (OBB) in the Yangtze River Delta (YRD) are calculated for 2005-2012 using three approaches (bottom-up, FRP-based and constraining). The inter-annual trends in emissions with the FRP-based and constraining methods are similar to those of the fire counts in 2005-2012, while the trend with the bottom-up method is different. For most years, emissions of all species estimated with the constraining method are smaller than those with the bottom-up method (except for VOCs), while they are larger than those with the FRP-based method (except for EC, CH4 and NH3). Such discrepancies result mainly from the different masses of crop residues burned in the field (CRBF) estimated in the three methods. Among the three methods, the simulated concentrations from chemistry transport modeling with the constrained emissions are the closest to available observations, implying that the result from the constraining method is the best estimate of OBB emissions. CO emissions from the three methods are compared with other studies. Similar temporal variations were found for the constrained emissions, FRP-based emissions, GFASv1.0 and GFEDv4.1s, with the largest and the lowest emissions estimated for 2012 and 2006, respectively. The constrained CO emissions in this study are smaller than those in other studies based on the bottom-up method and larger than those based on burned area and FRP derived from satellite. The contributions of OBB to two particulate pollution events in 2010 and 2012 are analyzed with the brute-force method. The average contribution of OBB to PM10 mass concentrations during June 8-14, 2012 was estimated at 38.9% (74.8 μg m-3), larger than that during June 17-24, 2010 at 23.6% (38.5 μg m-3). Influences of diurnal profiles and meteorology on air pollution caused by OBB are also evaluated; the results suggest that air pollution caused by OBB becomes heavier when meteorological conditions are unfavorable, and that more attention should be paid to supervision at night. Quantified with Monte-Carlo simulation, the uncertainties of OBB emissions with the constraining method are significantly lower than those with the bottom-up or FRP-based methods.
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method can recover the true model better than the traditional single-station inversion, and for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent noise insensitive. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global searching algorithm of simulated annealing (SA) in the watershed shows that both methods deliver similarly good results but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
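The Jacobian preconditioning described above can be sketched as a diagonal column scaling followed by a damped least-squares step; the Jacobian here is random, whereas a real CSAMT Jacobian would come from forward modelling.

```python
# Sketch of balancing parameter sensitivities by scaling Jacobian columns
# before a damped least-squares update; the Jacobian below is synthetic.
import numpy as np

rng = np.random.default_rng(5)
J = rng.normal(size=(40, 12)) * np.logspace(0, -4, 12)   # badly scaled sensitivities
residual = rng.normal(size=40)

col_scale = 1.0 / np.linalg.norm(J, axis=0)              # diagonal weighting matrix
J_w = J * col_scale                                      # equivalent to J @ diag(col_scale)

lam = 1e-2                                               # damping factor (assumed)
step_scaled = np.linalg.solve(J_w.T @ J_w + lam * np.eye(12), J_w.T @ residual)
step = col_scale * step_scaled                           # map back to the original parameters
print(step)
```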
A power autonomous monopedal robot
NASA Astrophysics Data System (ADS)
Krupp, Benjamin T.; Pratt, Jerry E.
2006-05-01
We present the design and initial results of a power-autonomous planar monopedal robot. The robot is a gasoline powered, two degree of freedom robot that runs in a circle, constrained by a boom. The robot uses hydraulic Series Elastic Actuators, force-controllable actuators which provide high force fidelity, moderate bandwidth, and low impedance. The actuators are mounted in the body of the robot, with cable drives transmitting power to the hip and knee joints of the leg. A two-stroke, gasoline engine drives a constant displacement pump which pressurizes an accumulator. Absolute position and spring deflection of each of the Series Elastic Actuators are measured using linear encoders. The spring deflection is translated into force output and compared to desired force in a closed loop force-control algorithm implemented in software. The output signal of each force controller drives high performance servo valves which control flow to each of the pistons of the actuators. In designing the robot, we used a simulation-based iterative design approach. Preliminary estimates of the robot's physical parameters were based on past experience and used to create a physically realistic simulation model of the robot. Next, a control algorithm was implemented in simulation to produce planar hopping. Using the joint power requirements and range of motions from simulation, we worked backward specifying pulley diameter, piston diameter and stroke, hydraulic pressure and flow, servo valve flow and bandwidth, gear pump flow, and engine power requirements. Components that meet or exceed these specifications were chosen and integrated into the robot design. Using CAD software, we calculated the physical parameters of the robot design, replaced the original estimates with the CAD estimates, and produced new joint power requirements. We iterated on this process, resulting in a design which was prototyped and tested. The Monopod currently runs at approximately 1.2 m/s with the weight of all the power generating components, but powered from an off-board pump. On a test stand, the eventual on-board power system generates enough pressure and flow to meet the requirements of these runs and we are currently integrating the power system into the real robot. When operated from an off-board system without carrying the weight of the power generating components, the robot currently runs at approximately 2.25 m/s. Ongoing work is focused on integrating the power system into the robot, improving the control algorithm, and investigating methods for improving efficiency.
A New Family of Solvable Pearson-Dirichlet Random Walks
NASA Astrophysics Data System (ADS)
Le Caër, Gérard
2011-07-01
An n-step Pearson-Gamma random walk in ℝ^d starts at the origin and consists of n independent steps with gamma distributed lengths and uniform orientations. The gamma distribution of each step length has a shape parameter q > 0. Constrained random walks of n steps in ℝ^d are obtained from the latter walks by imposing that the sum of the step lengths is equal to a fixed value. Simple closed-form expressions were obtained in particular for the distribution of the endpoint of such constrained walks for any d ≥ d_0 and any n ≥ 2 when q is either q = d/2 - 1 (d_0 = 3) or q = d - 1 (d_0 = 2) (Le Caër in J. Stat. Phys. 140:728-751, 2010). When the total walk length is chosen, without loss of generality, to be equal to 1, then the constrained step lengths have a Dirichlet distribution whose parameters are all equal to q, and the associated walk is thus named a Pearson-Dirichlet random walk. The density of the endpoint position of an n-step planar walk of this type (n ≥ 2), with q = d = 2, was shown recently to be a weighted mixture of 1 + floor(n/2) endpoint densities of planar Pearson-Dirichlet walks with q = 1 (Beghin and Orsingher in Stochastics 82:201-229, 2010). The previous result is generalized to any walk space dimension and any number of steps n ≥ 2 when the parameter of the Pearson-Dirichlet random walk is q = d > 1. We rely on the connection between an unconstrained random walk and a constrained one, which both have the same n and the same q = d, to obtain a closed-form expression of the endpoint density. The latter is a weighted mixture of 1 + floor(n/2) densities with simple forms, equivalently expressed as a product of a power and a Gauss hypergeometric function. The weights are products of factors which depend both on d and n and Bessel numbers independent of d.
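The constrained walk is easy to simulate, which is useful for checking the closed-form endpoint density numerically; the sketch below samples gamma step lengths, normalizes them to total length 1 (a Dirichlet vector) and sums uniformly oriented steps.

```python
# Monte Carlo sketch of an n-step Pearson-Dirichlet random walk in R^d with q = d.
import numpy as np

def pearson_dirichlet_endpoints(n_steps, d, n_walks=100_000, rng=None):
    rng = rng or np.random.default_rng(0)
    lengths = rng.gamma(shape=d, size=(n_walks, n_steps))        # q = d
    lengths /= lengths.sum(axis=1, keepdims=True)                # Dirichlet(q, ..., q) step lengths
    dirs = rng.normal(size=(n_walks, n_steps, d))
    dirs /= np.linalg.norm(dirs, axis=2, keepdims=True)          # uniform orientations on the sphere
    return (lengths[..., None] * dirs).sum(axis=1)               # endpoint positions

r = np.linalg.norm(pearson_dirichlet_endpoints(n_steps=3, d=2), axis=1)
print(r.mean(), np.percentile(r, [50, 90]))                      # summary of |endpoint|
```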
Integrated air revitalization system for Space Station
NASA Technical Reports Server (NTRS)
Boyda, R. B.; Miller, C. W.; Schwartz, M. R.
1986-01-01
Fifty-one distinct functions are encompassed by the Space Station's Environmental Control and Life Support System, and most of these functions operate independently of one another; one exception to this noninteractivity is the regenerative air revitalization system that removes and reduces CO2 and generates O2. The integration of these interdependent functions, and of humidity control, into a single system furnishes opportunities for process simplification as well as for power, weight and volume requirement reductions by comparison with discrete subsystems. Attention is presently given to a system which quantifies these integration-related savings and identifies additional advantages that accrue to this integrating design method.
Apparatus and method for fabricating a microbattery
Shul, Randy J.; Kravitz, Stanley H.; Christenson, Todd R.; Zipperian, Thomas E.; Ingersoll, David
2002-01-01
An apparatus and method for fabricating a microbattery that uses silicon as the structural component, packaging component, and semiconductor to reduce the weight, size, and cost of thin film battery technology is described. When combined with advanced semiconductor packaging techniques, such a silicon-based microbattery enables the fabrication of autonomous, highly functional, integrated microsystems having broad applicability.
Fitting integrated enzyme rate equations to progress curves with the use of a weighting matrix.
Franco, R; Aran, J M; Canela, E I
1991-01-01
A method is presented for fitting pairs of values (product formed, time) taken from progress curves to the integrated rate equation. The procedure is applied to the estimation of the kinetic parameters of the adenosine deaminase system. Simulation studies demonstrate the capabilities of this strategy. A copy of the FORTRAN77 program used can be obtained from the authors by request. PMID:2006914
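A sketch of the weighted progress-curve fit, using the integrated Michaelis-Menten equation and simple diagonal weights as stand-ins; it is not the authors' FORTRAN77 program, and the data are synthetic.

```python
# Weighted fit of synthetic progress-curve data (product P vs time t) to the
# integrated Michaelis-Menten equation  Vmax*t = P + Km*ln(S0/(S0 - P)),
# using diagonal weights rather than a full weighting matrix.
import numpy as np
from scipy.optimize import least_squares

S0 = 100.0                                           # initial substrate (assumed units)
Vmax_true, Km_true = 5.0, 20.0
P = np.linspace(1.0, 95.0, 25)                       # product formed
t = (P + Km_true * np.log(S0 / (S0 - P))) / Vmax_true
t += np.random.default_rng(6).normal(0, 0.05, t.size)   # measurement noise
weights = 1.0 / np.maximum(t, 0.1)                   # illustrative diagonal weights

def residuals(params):
    Vmax, Km = params
    t_model = (P + Km * np.log(S0 / (S0 - P))) / Vmax
    return np.sqrt(weights) * (t_model - t)

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=(1e-6, np.inf))
print("Vmax, Km estimates:", fit.x)                  # should be close to (5, 20)
```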
Thermodynamic integration of the free energy along a reaction coordinate in Cartesian coordinates
NASA Astrophysics Data System (ADS)
den Otter, W. K.
2000-05-01
A generalized formulation of the thermodynamic integration (TI) method for calculating the free energy along a reaction coordinate is derived. Molecular dynamics simulations with a constrained reaction coordinate are used to sample conformations. These are then projected onto conformations with a higher value of the reaction coordinate by means of a vector field. The accompanying change in potential energy plus the divergence of the vector field constitute the derivative of the free energy. Any vector field meeting some simple requirements can be used as the basis of this TI expression. Two classes of vector fields are of particular interest here. The first recovers the conventional TI expression, with its cumbersome dependence on a full set of generalized coordinates. As the free energy is a function of the reaction coordinate only, it should in principle be possible to derive an expression depending exclusively on the definition of the reaction coordinate. This objective is met by the second class of vector fields to be discussed. The potential of mean constraint force (PMCF) method, after averaging over the unconstrained momenta, falls in this second class. The new method is illustrated by calculations on the isomerization of n-butane, and is compared with existing methods.
Artifact reduction in short-scan CBCT by use of optimization-based reconstruction
Zhang, Zheng; Han, Xiao; Pearson, Erik; Pelizzari, Charles; Sidky, Emil Y; Pan, Xiaochuan
2017-01-01
Interest in optimization-based reconstruction is increasing in research on, and applications of, cone-beam computed tomography (CBCT) because it has been shown to have the potential to reduce artifacts observed in reconstructions obtained with the Feldkamp–Davis–Kress (FDK) algorithm (or its variants), which is used extensively for image reconstruction in current CBCT applications. In this work, we carried out a study on optimization-based reconstruction for possible reduction of artifacts in FDK reconstruction, specifically from short-scan CBCT data. The investigation includes a set of optimization programs such as image-total-variation (TV)-constrained data-divergence minimization, data-weighting matrices such as the Parker weighting matrix, and objects of practical interest for demonstrating and assessing the degree of artifact reduction. Results of the investigative work reveal that appropriately designed optimization-based reconstruction, including image-TV-constrained reconstruction, can reduce significant artifacts observed in FDK reconstruction in CBCT with a short-scan configuration. PMID:27046218
scarlet: Source separation in multi-band images by Constrained Matrix Factorization
NASA Astrophysics Data System (ADS)
Melchior, Peter; Moolekamp, Fred; Jerdee, Maximilian; Armstrong, Robert; Sun, Ai-Lei; Bosch, James; Lupton, Robert
2018-03-01
SCARLET performs source separation (aka "deblending") on multi-band images. It is geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data. Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. The code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source. Because of its generic utility, this package provides a stand-alone implementation that contains the core components of the source separation algorithm. However, the development of this package is part of the LSST Science Pipeline; the meas_deblender package contains a wrapper to implement the algorithms here for the LSST stack.
Kuo, R J; Wu, P; Wang, C P
2002-09-01
Sales forecasting plays a very prominent role in business strategy. Numerous investigations addressing this problem have generally employed statistical methods, such as regression or autoregressive and moving average (ARMA) models. However, sales forecasting is very complicated owing to the influence of internal and external environments. Recently, artificial neural networks (ANNs) have also been applied in sales forecasting owing to their promising performance in the areas of control and pattern recognition. However, further improvement is still necessary since unique circumstances, e.g. promotions, cause sudden changes in the sales pattern. Thus, this study utilizes a proposed fuzzy neural network (FNN), which is able to eliminate unimportant weights, to learn fuzzy IF-THEN rules obtained from marketing experts with respect to promotion. The result from the FNN is further integrated with the time series data through an ANN. Both the simulated and real-world problem results show that the FNN with weight elimination can achieve lower training error compared with the regular FNN. Besides, the real-world problem results also indicate that the proposed estimation system outperforms the conventional statistical method and a single ANN in accuracy.
Investigation of alternate power source for Space Shuttle Orbiter hydraulic system
NASA Technical Reports Server (NTRS)
Simon, William E.; Young, Fred M.
1993-01-01
This investigation consists of a short-term feasibility study to determine whether or not an alternate electrical power source would trade favorably from a performance, reliability, safety, operation, and weight standpoint in replacing the current auxiliary power unit subsystems with its attendant components (water spray boiler, hydrazine fuel and tanks, feed and vent lines, controls, etc.), operating under current flight rules. Results of this feasibility study are used to develop recommendations for the next step (e.g., to determine if such an alternate electrical power source would show an advantage given that the current operational flight mode of the system could be modified in such a way as not to constrain the operational capability and safety of the vehicle). However, this next step is not within the scope of this investigation. This study does not include a cost analysis, nor does it include investigation of the integration aspects involved in such a trade, except in a qualitative sense for the determination of concept feasibility.
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework
Talluto, Matthew V.; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C. Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A.; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-01-01
Aim Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods. Location Eastern North America (as an example). Methods Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. Results For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. Main conclusions We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making. PMID:27499698
Scientific Decision Making, Policy Decisions, and the Obesity Pandemic
Hebert, James R.; Allison, David B.; Archer, Edward; Lavie, Carl J.; Blair, Steven N.
2013-01-01
Rising and epidemic rates of obesity in many parts of the world are leading to increased suffering and economic stress from diverting health care resources to treating a variety of serious, but preventable, chronic diseases etiologically linked to obesity, particularly type 2 diabetes mellitus and cardiovascular diseases. Despite decades of research into the causes of the obesity pandemic, we seem to be no nearer to a solution now than when the rise in body weights was first chronicled decades ago. The case is made that impediments to a clear understanding of the nature of the problem occur at many levels. These obstacles begin with defining obesity and include lax application of scientific standards of review, tenuous assumption making, flawed measurement and other methods, constrained discourse limiting examination of alternative explanations of cause, and policies that determine funding priorities. These issues constrain creativity and stifle expansive thinking that could otherwise advance the field in preventing and treating obesity and its complications. Suggestions are made to create a climate of open exchange of ideas and redirection of policies that can remove the barriers that prevent us from making material progress in solving a pressing major public health problem of the early 21st century. PMID:23726399
Sardella, Roccaldo; Ianni, Federica; Lisanti, Antonella; Scorzoni, Stefania; Marini, Francesca; Sternativo, Silvia; Natalini, Benedetto
2014-05-01
To the best of our knowledge enantioselective chromatographic protocols on β-amino acids with polysaccharide-based chiral stationary phases (CSPs) have not yet appeared in the literature. Therefore, the primary objective of this work was the development of chromatographic methods based on the use of an amylose derivative CSP (Lux Amylose-2), enabling the direct normal-phase (NP) enantioresolution of four fully constrained β-amino acids. Also, the results obtained with the glycopeptide-type Chirobiotic T column employed in the usual polar-ionic (PI) mode of elution are compared with those achieved with the polysaccharide-based phase. The Lux Amylose-2 column, in combination with alkyl sulfonic acid containing NP eluent systems, prevailed over the Chirobiotic T one, when used under the PI mode of elution, and hence can be considered as the elective choice for the enantioseparation of this class of rigid β-amino acids. Moreover, the extraordinarily high α (up to 4.60) and RS (up to 10.60) values provided by the polysaccharidic polymer, especially when used with camphor sulfonic acid containing eluent systems, make it also suitable for preparative-scale enantioisolations.
Nodal weighting factor method for ex-core fast neutron fluence evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiang, R. T.
The nodal weighting factor method is developed for evaluating ex-core fast neutron flux in a nuclear reactor by utilizing adjoint neutron flux, a fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV, the unit fission source, and relative assembly nodal powers. The method determines each nodal weighting factor for ex-core fast neutron flux evaluation by solving the steady-state adjoint neutron transport equation with a fictitious unit detector cross section for neutron energy above 1 or 0.1 MeV as the adjoint source, by integrating the unit fission source with a typical fission spectrum to the solved adjoint flux over all energies, all angles and given nodal volume, and by dividing it with the sum of all nodal weighting factors, which is a normalization factor. Then, the fast neutron flux can be obtained by summing the various relative nodal powers times the corresponding nodal weighting factors of the adjacent significantly contributed peripheral assembly nodes and times a proper fast neutron attenuation coefficient over an operating period. A generic set of nodal weighting factors can be used to evaluate neutron fluence at the same location for similar core design and fuel cycles, but the set of nodal weighting factors needs to be re-calibrated for a transition-fuel-cycle. This newly developed nodal weighting factor method should be a useful and simplified tool for evaluating fast neutron fluence at selected locations of interest in ex-core components of contemporary nuclear power reactors. (authors)
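The final evaluation step can be sketched as a weighted sum of relative nodal powers; the weighting factors, powers and attenuation coefficient below are placeholders, not plant data.

```python
# Sketch: fast flux at an ex-core location as the weighted sum of relative
# nodal powers of adjacent peripheral assembly nodes, accumulated into a
# cycle fluence. All numbers are illustrative placeholders.
import numpy as np

nodal_weights = np.array([0.35, 0.25, 0.20, 0.12, 0.08])    # pre-computed, normalized W_n
relative_powers = np.array([1.05, 0.98, 1.02, 0.95, 1.00])   # relative nodal powers P_n
attenuation = 2.4e8                                          # fast-flux scale/attenuation (n/cm^2-s), assumed

flux = attenuation * np.dot(nodal_weights, relative_powers)  # > 1 MeV flux at the location
seconds_per_cycle = 1.5 * 365.25 * 24 * 3600                 # ~18-month cycle at power
fluence = flux * seconds_per_cycle
print(f"flux = {flux:.3e} n/cm^2-s, cycle fluence = {fluence:.3e} n/cm^2")
```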
NASA Astrophysics Data System (ADS)
Tan, Z.; Zhuang, Q.; Henze, D. K.; Frankenberg, C.; Dlugokencky, E. J.; Sweeney, C.; Turner, A. J.
2015-12-01
Understanding CH4 emissions from wetlands and lakes is critical for the estimation of the Arctic carbon balance under fast-warming climatic conditions. To date, our knowledge about these two CH4 sources is almost solely built on the upscaling of discontinuous measurements in limited areas to the whole region. Many studies indicated that the controls of CH4 emissions from wetlands and lakes, including soil moisture, lake morphology and substrate content and quality, are notoriously heterogeneous; thus the accuracy of those simple estimates could be questionable. Here we apply a high spatial resolution atmospheric inverse model (nested-grid GEOS-Chem Adjoint) over the Arctic, integrating SCIAMACHY and NOAA/ESRL CH4 measurements to constrain the CH4 emissions estimated with process-based wetland and lake biogeochemical models. Our modeling experiments using different wetland CH4 emission schemes and satellite and surface measurements show that the total amount of CH4 emitted from the Arctic wetlands is well constrained, but the spatial distribution of CH4 emissions is sensitive to priors. For CH4 emissions from lakes, our high-resolution inversion shows that the models overestimate CH4 emissions in Alaskan coastal lowlands and East Siberian lowlands. Our study also indicates that the precision and coverage of measurements need to be improved to achieve more accurate high-resolution estimates.
An information diffusion technique to assess integrated hazard risks.
Huang, Chongfu; Huang, Yundong
2018-02-01
An integrated risk is a scene in the future associated with some adverse incident caused by multiple hazards. An integrated probability risk is the expected value of disaster. Due to the difficulty of assessing an integrated probability risk with a small sample, weighting methods and copulas are employed to avoid this obstacle. To resolve the problem, in this paper, we develop the information diffusion technique to construct a joint probability distribution and a vulnerability surface. Then, an integrated risk can be directly assessed by using a small sample. A case of an integrated risk caused by flood and earthquake is given to show how the suggested technique is used to assess the integrated risk of annual property loss. Copyright © 2017 Elsevier Inc. All rights reserved.
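A minimal sketch of the normal information diffusion idea for a small sample: each observation spreads one unit of information over a grid with a Gaussian kernel, and the resulting distribution gives an expected loss; the sample, grid and bandwidth are illustrative.

```python
# Normal information diffusion estimate of a loss distribution from a small sample.
import numpy as np

losses = np.array([1.2, 0.4, 2.8, 0.9, 1.7, 3.5])            # small sample of annual losses
grid = np.linspace(0.0, 5.0, 51)
h = 0.6                                                       # diffusion bandwidth (assumed)

spread = np.exp(-0.5 * ((losses[:, None] - grid[None, :]) / h) ** 2)
spread /= spread.sum(axis=1, keepdims=True)                   # each observation contributes 1 in total
prob = spread.sum(axis=0) / len(losses)                       # diffused probability over the grid

expected_loss = np.dot(prob, grid)                            # expected value of the loss
print(f"expected annual loss ~ {expected_loss:.2f}")
```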
NASA Astrophysics Data System (ADS)
Watkins, Stephen E.; Whittaker, Alexander C.; Bell, Rebecca E.; Brooke, Sam A. S.; McNeill, Lisa C.; Gawthorpe, Robert L.
2017-04-01
The volumes, grain sizes and characteristics of sediment supplied from source catchments fundamentally control basin stratigraphy. However, to date, few studies have constrained sediment budgets, including grain size, released into an active rift basin at a regional scale. The Gulf of Corinth, central Greece, is one of the most rapidly extending rifts in the world, with geodetically measured extension rates from 5 mm/yr in the East to 15 mm/yr in the West. It has well-constrained climatic and tectonic boundary conditions, and its bedrock lithologies are well characterised. It is therefore an ideal natural laboratory to study grain-size export from a rift. In the field, we visited the river mouths of 49 catchments draining into the Gulf of Corinth, which in total drain 83% of the rift. At each site, hydraulic geometries, the surface grain-size of channel bars and full-weighted grain-size distributions of river sediment were obtained. The surface grain-size was measured using the Wolman point count method and the full-weighted grain-size distribution of the bedload by in-situ sieving. In total, approximately 17,000 point counts and 3 tonnes of sediment were processed. The grain-size distributions show an overall increase from East to West on the southern coast of the gulf, with the largest grain sizes exported from the Western rift catchments; D84 ranges from 20 to 110 mm, but 50% of D84 grain sizes are less than 40 mm. Subsequently, we derived the full Holocene sediment budget for the Gulf of Corinth by combining our grain-size data with catchment sediment fluxes, constrained using the BQART model and calibrated to known Holocene sediment volumes in the basin from seismic data (cf. Watkins et al., in review). This is the first time such a budget has been derived for the Corinth Rift. Finally, our estimates of sediment budgets and grain sizes were compared to regional uplift constraints, fault distributions, slip rates and lithology to identify the relative importance of these controls on sediment supply to the basin.
Finite difference schemes for long-time integration
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1993-01-01
Finite difference schemes for the evaluation of first and second derivatives are presented. These second order compact schemes were designed for long-time integration of evolution equations by solving a quadratic constrained minimization problem. The quadratic cost function measures the global truncation error while taking into account the initial data. The resulting schemes are applicable for integration times fourfold, or more, longer than similar previously studied schemes. A similar approach was used to obtain improved integration schemes.
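For context, tridiagonal second-order compact approximations to the first and second derivatives are typically written in the generic form below (our notation, not the authors'); in the approach described above, the free coefficients are fixed by the quadratic constrained minimization of the global truncation error rather than by Taylor matching alone.

$$ \alpha\,f'_{i-1}+f'_i+\alpha\,f'_{i+1}=a\,\frac{f_{i+1}-f_{i-1}}{2h},\qquad \beta\,f''_{i-1}+f''_i+\beta\,f''_{i+1}=b\,\frac{f_{i+1}-2f_i+f_{i-1}}{h^{2}}. $$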
Hyle, Emily P; Naidoo, Kogieleum; Su, Amanda E; El-Sadr, Wafaa M; Freedberg, Kenneth A
2014-09-01
Unprecedented investments in health systems in low- and middle-income countries (LMICs) have resulted in more than 8 million individuals on antiretroviral therapy. Such individuals experience dramatically increased survival but are increasingly at risk of developing common noncommunicable diseases (NCDs). Integrating clinical care for HIV, other infectious diseases, and NCDs could make health services more effective and provide greater value. Cost-effectiveness analysis is a method to evaluate the clinical benefits and costs associated with different health care interventions and offers guidance for prioritization of investments and scale-up, especially as resources are increasingly constrained. We first examine tuberculosis and HIV as 1 example of integrated care already successfully implemented in several LMICs; we then review the published literature regarding cervical cancer and depression as 2 examples of NCDs for which integrating care with HIV services could offer excellent value. Direct evidence of the benefits of integrated services generally remains scarce; however, data suggest that improved effectiveness and reduced costs may be attained by integrating additional services with existing HIV clinical care. Further investigation into clinical outcomes and costs of care for NCDs among people living with HIV in LMICs will help to prioritize specific health care services by contributing to an understanding of the affordability and implementation of an integrated approach.
NASA Astrophysics Data System (ADS)
Yang, B. D.; Chu, M. L.; Menq, C. H.
1998-03-01
Mechanical systems in which moving components are mutually constrained through contacts often lead to complex contact kinematics involving tangential and normal relative motions. A friction contact model is proposed to characterize this type of contact kinematics, which imposes both friction non-linearity and intermittent separation non-linearity on the system. The stick-slip friction phenomenon is analyzed by establishing analytical criteria that predict the transition between stick, slip, and separation of the interface. The established analytical transition criteria are particularly important to the proposed friction contact model because the transition conditions of the contact kinematics are complicated by the effect of normal load variation and possible interface separation. With these transition criteria, the induced friction force on the contact plane and the variable normal load perpendicular to the contact plane can be predicted for any given cyclic relative motion at the contact interface, and hysteresis loops can be produced so as to characterize the equivalent damping and stiffness of the friction contact. These non-linear damping and stiffness methods, along with the harmonic balance method, are then used to predict the resonant response of a frictionally constrained two-degree-of-freedom oscillator. The predicted results are compared with those of the time integration method, and the damping effect, the resonant frequency shift, and the jump phenomenon are examined.
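As a much-simplified illustration of how such a contact element produces a hysteresis loop, the sketch below implements a one-dimensional elastic Coulomb (Jenkins) friction element with a constant normal load; the paper's model additionally handles normal-load variation and interface separation, which this sketch deliberately omits.

```python
import numpy as np

def jenkins_hysteresis(u, k_t, mu, N):
    """Tangential force of an elastic Coulomb (Jenkins) friction element.

    u   : array of imposed tangential displacements over one or more cycles
    k_t : tangential contact stiffness
    mu  : friction coefficient
    N   : constant normal load (a simplification of the variable-load model)
    Returns the friction force history, tracing a stick-slip hysteresis loop.
    """
    w = 0.0                      # slider (accumulated slip) displacement state
    f = np.zeros_like(u)
    for i, ui in enumerate(u):
        trial = k_t * (ui - w)   # stick assumption (elastic trial force)
        if abs(trial) > mu * N:  # slip: force saturates at the Coulomb limit
            trial = np.sign(trial) * mu * N
            w = ui - trial / k_t # update slider so the element sits on the limit
        f[i] = trial
    return f

# one displacement cycle; the enclosed loop area is the energy dissipated,
# which is what the equivalent damping of the contact characterizes
t = np.linspace(0.0, 2.0 * np.pi, 400)
u = 1e-3 * np.sin(t)
force = jenkins_hysteresis(u, k_t=1e6, mu=0.5, N=200.0)
```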
On advanced configuration enhance adaptive system optimization
NASA Astrophysics Data System (ADS)
Liu, Hua; Ding, Quanxin; Wang, Helong; Guo, Chunjie; Chen, Hongliang; Zhou, Liwei
2017-10-01
This work aims to find an effective way to structure and enhance adaptive systems with complex functions, and to establish a universally applicable solution for prototyping and optimization. As the most attractive component of an adaptive system, the wavefront corrector is constrained by conventional techniques and components, notably polarization dependence and a narrow working waveband. An advanced configuration based on a polarizing beam splitter, combined with an optimized energy-splitting method, is used to overcome these problems effectively. With the global algorithm, the bandwidth is amplified by more than five times compared with traditional designs. Simulation results show that the system can meet the application requirements in MTF and other related criteria. Compared with the conventional design, the system is significantly reduced in volume and weight. The determining factors are therefore the prototype selection and the system configuration; the results show their effectiveness.
Feizizadeh, Bakhtiar; Blaschke, Thomas
2014-03-04
GIS-based multicriteria decision analysis (MCDA) methods are increasingly being used in landslide susceptibility mapping. However, the uncertainties that are associated with MCDA techniques may significantly impact the results. This may sometimes lead to inaccurate outcomes and undesirable consequences. This article introduces a new GIS-based MCDA approach. We illustrate the consequences of applying different MCDA methods within a decision-making process through uncertainty analysis. Three GIS-MCDA methods in conjunction with Monte Carlo simulation (MCS) and Dempster-Shafer theory are analyzed for landslide susceptibility mapping (LSM) in the Urmia lake basin in Iran, which is highly susceptible to landslide hazards. The methodology comprises three stages. First, the LSM criteria are ranked and a sensitivity analysis is implemented to simulate error propagation based on the MCS. The resulting weights are expressed through probability density functions. Accordingly, within the second stage, three MCDA methods, namely analytical hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA), are used to produce the landslide susceptibility maps. In the third stage, accuracy assessments are carried out and the uncertainties of the different results are measured. We compare the accuracies of the three MCDA methods based on (1) the Dempster-Shafer theory and (2) a validation of the results using an inventory of known landslides and their respective coverage based on object-based image analysis of IRS-ID satellite images. The results of this study reveal that through the integration of GIS and MCDA models, it is possible to identify strategies for choosing an appropriate method for LSM. Furthermore, our findings indicate that the integration of MCDA and MCS can significantly improve the accuracy of the results. In LSM, the AHP method performed best, while the OWA reveals better performance in the reliability assessment. The WLC operation yielded poor results.
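As a minimal sketch of the weighted linear combination (WLC) step with Monte-Carlo-perturbed criterion weights, the snippet below uses invented criterion rasters, prior weights, perturbation scale and run count; it illustrates the mechanics only and is not the study's actual data or configuration.

```python
import numpy as np

rng = np.random.default_rng(42)

# standardized criterion rasters (values in [0, 1]); tiny 3x3 grids for illustration
slope     = rng.random((3, 3))
lithology = rng.random((3, 3))
land_use  = rng.random((3, 3))
criteria = np.stack([slope, lithology, land_use])        # shape (k, rows, cols)

base_w = np.array([0.5, 0.3, 0.2])                       # AHP-style prior weights

def wlc(criteria, w):
    """Weighted linear combination: susceptibility = sum_k w_k * criterion_k."""
    return np.tensordot(w, criteria, axes=1)

# Monte Carlo simulation of weight uncertainty: perturb, renormalize, recompute
runs = []
for _ in range(1000):
    w = np.clip(base_w + rng.normal(scale=0.05, size=3), 0, None)
    runs.append(wlc(criteria, w / w.sum()))
runs = np.stack(runs)

susceptibility_mean = runs.mean(axis=0)   # expected susceptibility map
susceptibility_std = runs.std(axis=0)     # per-cell uncertainty from the weights
```

The per-cell standard deviation is one simple way to express how the weight uncertainty propagates into the susceptibility map; the cited study additionally compares AHP, WLC and OWA and evaluates the results with Dempster-Shafer theory and a landslide inventory.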
Stress-Constrained Structural Topology Optimization with Design-Dependent Loads
NASA Astrophysics Data System (ADS)
Lee, Edmund
Topology optimization is commonly used to distribute a given amount of material to obtain the stiffest structure under predefined fixed loads. The present work investigates the result of applying stress constraints to topology optimization for problems with design-dependent loading, such as self-weight and pressure. In order to apply pressure loading, a material boundary identification scheme is proposed, iteratively connecting points of equal density. In previous research, design-dependent loading problems have been limited to compliance minimization. The present study employs a more practical approach by minimizing mass subject to failure constraints, and uses a stress relaxation technique to avoid stress constraint singularities. The results show that these design-dependent loading problems may converge to a local minimum when stress constraints are enforced. Comparisons between compliance minimization solutions and stress-constrained solutions are also given. The resulting topologies of these two solutions are usually vastly different, demonstrating the need for stress-constrained topology optimization.
Self-constrained inversion of potential fields
NASA Astrophysics Data System (ADS)
Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.
2013-11-01
We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate through the analysis of the gravity or magnetic field some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step is incorporating the information related to these constraints in the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential field-based constraints, for example, structural index, source boundaries and others, are usually enough to obtain substantial improvement in the density and magnetization models.
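One common way to embed such a priori estimates in an underdetermined inversion (shown here only as an illustration in the style of Li-and-Oldenburg depth weighting; the authors' exact weighting functions may differ) is through diagonal weighting matrices in the regularized objective:

$$ \Phi(\mathbf m)=\big\lVert \mathbf W_d(\mathbf G\mathbf m-\mathbf d)\big\rVert^{2}+\lambda\,\big\lVert \mathbf W_z\,\mathbf W_s\,\mathbf m\big\rVert^{2},\qquad (\mathbf W_z)_{jj}=(z_j+z_0)^{-\beta/2}, $$

where the exponent $\beta$ can be tied to the estimated field decay rate (structural index), and $\mathbf W_s$ down-weights cells outside the estimated horizontal source boundaries and dip.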
Real time simulation of computer-assisted sequencing of terminal area operations
NASA Technical Reports Server (NTRS)
Dear, R. G.
1981-01-01
A simulation was developed to investigate the utilization of computer-assisted decision making for the task of sequencing and scheduling aircraft in a high-density terminal area. The simulation incorporates a decision methodology termed Constrained Position Shifting. This methodology accounts for aircraft velocity profiles, routes, and weight classes in dynamically sequencing and scheduling arriving aircraft. A sample demonstration of Constrained Position Shifting is presented in which six aircraft types (including both light and heavy aircraft) are sequenced to land at Denver's Stapleton International Airport. A graphical display is utilized, and Constrained Position Shifting with a maximum shift of four positions (rearward or forward) is compared to first come, first served with respect to arrival at the runway. The implementation of computer-assisted sequencing and scheduling methodologies is investigated. A time-based control concept will be required, and design considerations for such a system are discussed.
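A minimal brute-force sketch of the Constrained Position Shifting idea is given below; the wake-separation values are hypothetical (not FAA or paper values) and the objective, the sum of successive separations, is a simplified stand-in for the full scheduling model with velocity profiles and earliest-arrival times.

```python
import itertools

def cps_schedule(fcfs, sep, max_shift=4):
    """Constrained Position Shifting, illustrative brute-force version.

    fcfs      : list of aircraft weight classes in first-come-first-served order
    sep       : dict (leader_class, follower_class) -> required time separation (s)
    max_shift : maximum number of positions any aircraft may move from its FCFS slot
    Returns the landing sequence minimizing the total of successive separations.
    """
    n = len(fcfs)
    best_seq, best_time = None, float("inf")
    for perm in itertools.permutations(range(n)):
        # enforce the position-shift constraint: aircraft originally at index
        # idx may land at position pos only if |pos - idx| <= max_shift
        if any(abs(pos - idx) > max_shift for pos, idx in enumerate(perm)):
            continue
        t = sum(sep[(fcfs[prev], fcfs[cur])] for prev, cur in zip(perm, perm[1:]))
        if t < best_time:
            best_time, best_seq = t, [fcfs[i] for i in perm]
    return best_seq, best_time

# hypothetical separation matrix (seconds) for heavy (H) and light (L) classes
sep = {("H", "H"): 90, ("H", "L"): 180, ("L", "H"): 70, ("L", "L"): 70}
print(cps_schedule(["L", "H", "L", "H", "L", "L"], sep, max_shift=2))
```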
An unusual mode of failure of a tripolar constrained acetabular liner: a case report.
Banks, Louisa N; McElwain, John P
2010-04-01
Dislocation after primary total hip arthroplasty (THA) is the most commonly encountered complication and is unpleasant for both the patient and the surgeon. Constrained acetabular components can be used to treat or prevent instability after primary total hip arthroplasty. We present the case of a 42-year-old female with a BMI of 41. At 18 months post-primary THA the patient underwent further revision hip surgery after numerous (more than 20) dislocations. She had a tripolar Trident acetabular cup (Stryker-Howmedica-Osteonics, Rutherford, New Jersey) inserted. Shortly afterwards an unusual mode of failure of the constrained acetabular liner was noted from radiographs, in that the inner liner had dissociated from the outer. The reinforcing ring remained intact and in place. We believe that the patient's weight, combined with poor abductor musculature, caused excessive demand on the device, leading to failure at this interface when the patient flexed forward. Constrained acetabular components are useful implants to treat instability but have been shown to have up to 42% long-term failure rates, with problems such as dissociated inserts, dissociated constraining rings and dissociated femoral rings being cited. Sometimes they may be the only option left in difficult cases such as the one illustrated here, but they still unfortunately have the capacity to fail in unusual ways.
Efficient Robust Regression via Two-Stage Generalized Empirical Likelihood
Bondell, Howard D.; Stefanski, Leonard A.
2013-01-01
Large- and finite-sample efficiency and resistance to outliers are the key goals of robust statistics. Although often not simultaneously attainable, we develop and study a linear regression estimator that comes close. Efficiency obtains from the estimator’s close connection to generalized empirical likelihood, and its favorable robustness properties are obtained by constraining the associated sum of (weighted) squared residuals. We prove maximum attainable finite-sample replacement breakdown point, and full asymptotic efficiency for normal errors. Simulation evidence shows that compared to existing robust regression estimators, the new estimator has relatively high efficiency for small sample sizes, and comparable outlier resistance. The estimator is further illustrated and compared to existing methods via application to a real data set with purported outliers. PMID:23976805
Quantum dynamics by the constrained adiabatic trajectory method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leclerc, A.; Jolicard, G.; Guerin, S.
2011-03-15
We develop the constrained adiabatic trajectory method (CATM), which allows one to solve the time-dependent Schroedinger equation constraining the dynamics to a single Floquet eigenstate, as if it were adiabatic. This constrained Floquet state (CFS) is determined from the Hamiltonian modified by an artificial time-dependent absorbing potential whose forms are derived according to the initial conditions. The main advantage of this technique for practical implementation is that the CFS is easy to determine even for large systems since its corresponding eigenvalue is well isolated from the others through its imaginary part. The properties and limitations of the CATM are explored through simple examples.
An Approach to the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.
1997-01-01
A design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed, while maintaining other aerodynamic and geometric constraints. After obtaining the initial airfoil's pressure distribution at the design lift coefficient using an Euler solver coupled with an integral turbulent boundary layer method, the calculations from a laminar boundary layer solver are used by a stability analysis code to obtain estimates of the transition location (using N-Factors) for the starting airfoil. A new design method then calculates a target pressure distribution that will increase the laminar flow toward the desired amount. An airfoil design method is then iteratively used to design an airfoil that possesses that target pressure distribution. The new airfoil's boundary layer stability characteristics are determined, and this iterative process continues until an airfoil is designed that meets the laminar flow requirement and as many of the other constraints as possible.
ERIC Educational Resources Information Center
Zhou, Xiaolin; Jiang, Xiaoming; Ye, Zheng; Zhang, Yaxu; Lou, Kaiyang; Zhan, Weidong
2010-01-01
An event-related potential (ERP) study was conducted to investigate the temporal neural dynamics of semantic integration processes at different levels of syntactic hierarchy during Chinese sentence reading. In a hierarchical structure, "subject noun" + "verb" + "numeral" + "classifier" + "object noun," the object noun is constrained by selectional…
Scoring of Side-Chain Packings: An Analysis of Weight Factors and Molecular Dynamics Structures.
Colbes, Jose; Aguila, Sergio A; Brizuela, Carlos A
2018-02-26
The protein side-chain packing problem (PSCPP) is a central task in computational protein design. The problem is usually modeled as a combinatorial optimization problem, which consists of searching for a set of rotamers, from a given rotamer library, that minimizes a scoring function (SF). The SF is a weighted sum of terms that can be decomposed into physics-based and knowledge-based terms. Although there are many methods to obtain approximate solutions for this problem, all of them have similar performances and there has not been a significant improvement in recent years. Studies on protein structure prediction and protein design have revealed the limitations of current SFs for achieving further improvements on these two problems. Along the same lines, a recent work reported a similar result for the PSCPP. In this work, we ask whether or not this negative result regarding further improvements in performance is due to (i) an incorrect weighting of the SF terms or (ii) the constrained conformation resulting from the protein crystallization process. To analyze these questions, we (i) modeled the PSCPP as a bi-objective combinatorial optimization problem, optimizing at the same time the two most important terms of two SFs from state-of-the-art algorithms, and (ii) performed a preprocessing relaxation of the crystal structure through molecular dynamics, to simulate the protein in the solvent, and evaluated the performance of these two state-of-the-art SFs under these conditions. Our results indicate that (i) no matter what combination of weight factors we use, the current SFs will not lead to better performance, and (ii) the evaluated SFs will not be able to improve performance on relaxed structures. Furthermore, the experiments revealed that the SFs and the methods are biased toward crystallized structures.
Mapping thunder sources by inverting acoustic and electromagnetic observations
NASA Astrophysics Data System (ADS)
Anderson, J. F.; Johnson, J. B.; Arechiga, R. O.; Thomas, R. J.
2014-12-01
We present a new method of locating current flow in lightning strikes by inversion of thunder recordings constrained by Lightning Mapping Array observations. First, radio frequency (RF) pulses are connected to reconstruct conductive channels created by leaders. Then, acoustic signals that would be produced by current flow through each channel are forward modeled. The recorded thunder is considered to consist of a weighted superposition of these acoustic signals. We calculate the posterior distribution of acoustic source energy for each channel with a Markov Chain Monte Carlo inversion that fits power envelopes of modeled and recorded thunder; these results show which parts of the flash carry current and produce thunder. We examine the effects of RF pulse location imprecision and atmospheric winds on quality of results and apply this method to several lightning flashes over the Magdalena Mountains in New Mexico, USA. This method will enable more detailed study of lightning phenomena by allowing researchers to map current flow in addition to leader propagation.
Choosing health, constrained choices.
Chee Khoon Chan
2009-12-01
In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.
On the state of crystallography at the dawn of the electron microscopy revolution.
Higgins, Matthew K; Lea, Susan M
2017-10-01
While protein crystallography has, for many years, been the most used method for structural analysis of macromolecular complexes, remarkable recent advances in high-resolution electron cryo-microscopy led to suggestions that 'the revolution will not be crystallised'. Here we highlight the current success rate, speed and ease of modern crystallographic structure determination and some recent triumphs of both 'classical' crystallography and the use of X-ray free electron lasers. We also outline fundamental differences between structure determination using X-ray crystallography and electron microscopy. We suggest that crystallography will continue to co-exist with electron microscopy as part of an integrated array of methods, allowing structural biologists to focus on fundamental biological questions rather than being constrained by the methods available. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Wohrer, Adrien; Machens, Christian K.
2015-01-01
All of our perceptual experiences arise from the activity of neural populations. Here we study the formation of such percepts under the assumption that they emerge from a linear readout, i.e., a weighted sum of the neurons’ firing rates. We show that this assumption constrains the trial-to-trial covariance structure of neural activities and animal behavior. The predicted covariance structure depends on the readout parameters, and in particular on the temporal integration window w and typical number of neurons K used in the formation of the percept. Using these predictions, we show how to infer the readout parameters from joint measurements of a subject’s behavior and neural activities. We consider three such scenarios: (1) recordings from the complete neural population, (2) recordings of neuronal sub-ensembles whose size exceeds K, and (3) recordings of neuronal sub-ensembles that are smaller than K. Using theoretical arguments and artificially generated data, we show that the first two scenarios allow us to recover the typical spatial and temporal scales of the readout. In the third scenario, we show that the readout parameters can only be recovered by making additional assumptions about the structure of the full population activity. Our work provides the first thorough interpretation of (feed-forward) percept formation from a population of sensory neurons. We discuss applications to experimental recordings in classic sensory decision-making tasks, which will hopefully provide new insights into the nature of perceptual integration. PMID:25793393
Gao, Zhouzheng; Zhang, Hongping; Ge, Maorong; Niu, Xiaoji; Shen, Wenbin; Wickert, Jens; Schuh, Harald
2015-03-10
The continuity and reliability of precise GNSS positioning can be seriously limited by severe user observation environments. The Inertial Navigation System (INS) can overcome such drawbacks, but its performance is clearly restricted by INS sensor errors over time. Accordingly, the tightly coupled integration of GPS and INS can overcome the disadvantages of each individual system and together form a new navigation system with a higher accuracy, reliability and availability. Recently, ionosphere-constrained (IC) precise point positioning (PPP) utilizing raw GPS observations was proven able to improve both the convergence and positioning accuracy of the conventional PPP using ionosphere-free combined observations (LC-PPP). In this paper, a new mode of tightly coupled integration, in which the IC-PPP instead of LC-PPP is employed, is implemented to further improve the performance of the coupled system. We present the detailed mathematical model and the related algorithm of the new integration of IC-PPP and INS. To evaluate the performance of the new tightly coupled integration, data of both airborne and vehicle experiments with a geodetic GPS receiver and tactical grade inertial measurement unit are processed and the results are analyzed. The statistics show that the new approach can further improve the positioning accuracy compared with both IC-PPP and the tightly coupled integration of the conventional PPP and INS.
Probability genotype imputation method and integrated weighted lasso for QTL identification.
Demetrashvili, Nino; Van den Heuvel, Edwin R; Wit, Ernst C
2013-12-30
Many QTL studies have two common features: (1) often there is missing marker information, and (2) among the many markers involved in the biological process only a few are causal. In statistics, the second issue falls under the headings "sparsity" and "causal inference". The goal of this work is to develop a two-step statistical methodology for QTL mapping for markers with binary genotypes. The first step introduces a novel imputation method for missing genotypes. The outcomes of the proposed imputation method are probabilities, which serve as weights in the second step, namely the weighted lasso. The sparse phenotype inference is employed to select a set of predictive markers for the trait of interest. Simulation studies validate the proposed methodology under a wide range of realistic settings. Furthermore, the methodology outperforms alternative imputation and variable selection methods in such studies. The methodology was applied to an Arabidopsis experiment, containing 69 markers for 165 recombinant inbred lines of an F8 generation. The results confirm previously identified regions; however, several new markers are also found. On the basis of the inferred ROC behavior these markers show good potential for being real, especially for the germination trait Gmax. Our imputation method shows higher accuracy in terms of sensitivity and specificity compared to the alternative imputation method. Also, the proposed weighted lasso outperforms commonly practiced multiple regression as well as the traditional lasso and the adaptive lasso with three weighting schemes. This means that under realistic missing-data settings this methodology can be used for QTL identification.
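A minimal sketch of how per-marker weights (for example, imputation probabilities) can be folded into a lasso fit by rescaling the design matrix is shown below; the data, the weights, the penalty value and the convention that a larger weight means a smaller penalty are all illustrative assumptions, and scikit-learn's Lasso is used only for convenience, not as the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 165, 69                      # lines x markers, mirroring an RIL-type design
X = rng.integers(0, 2, size=(n, p)).astype(float)   # binary genotypes
beta = np.zeros(p); beta[[3, 17, 42]] = [1.0, -0.8, 0.6]
y = X @ beta + rng.normal(scale=0.5, size=n)

w = rng.uniform(0.5, 1.0, size=p)   # stand-in for imputation probabilities

# Weighted lasso: min ||y - X b||^2 + alpha * sum_j |b_j| / w_j
# is equivalent to an ordinary lasso on X_tilde with columns X_j * w_j,
# with the original coefficients recovered as b_j = w_j * c_j.
X_tilde = X * w
fit = Lasso(alpha=0.05).fit(X_tilde, y)
beta_hat = fit.coef_ * w
print(np.flatnonzero(np.abs(beta_hat) > 1e-6))       # indices of selected markers
```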
NASA Astrophysics Data System (ADS)
Yi, Lei; Xu, Caijun; Wen, Yangmao; Zhang, Xu; Jiang, Guoyan
2018-01-01
The 2016 Ecuador earthquake ruptured the Ecuador-Colombia subduction interface, where several historic megathrust earthquakes had occurred. In order to determine a detailed rupture model, Interferometric Synthetic Aperture Radar (InSAR) images and teleseismic data sets were objectively weighted by using a modified Akaike's Bayesian Information Criterion (ABIC) method to jointly invert for the rupture process of the earthquake. In modeling the rupture process, a constrained waveform length method was used, unlike the traditional subjectively selected waveform length method, since the lengths of the inverted waveforms were strictly constrained by the rupture velocity and rise time (the slip duration time). The optimal rupture velocity and rise time of the earthquake were estimated from a grid search and determined to be 2.0 km/s and 20 s, respectively. The inverted model shows that the event is dominated by thrust movement and the released moment is 5.75 × 10^20 Nm (Mw 7.77). The slip distribution extends southward along the Ecuador coastline in an elongated stripe at depths between 10 and 25 km. The slip model is composed of two asperities with slip exceeding 4 m. The source time function is approximately 80 s long and is separated into two segments corresponding to the two asperities. The small slip in the updip section of the fault plane resulted in small tsunami waves, which is consistent with observations near the coast. We suggest a possible situation in which the rupture zone of the 2016 earthquake does not overlap with that of the 1942 earthquake.
A finite element-boundary integral method for cavities in a circular cylinder
NASA Technical Reports Server (NTRS)
Kempel, Leo C.; Volakis, John L.
1992-01-01
Conformal antenna arrays offer many cost and weight advantages over conventional antenna systems. However, due to a lack of rigorous mathematical models for conformal antenna arrays, antenna designers resort to measurement and planar antenna concepts for designing non-planar conformal antennas. Recently, we have found the finite element-boundary integral method to be very successful in modeling large planar arrays of arbitrary composition in a metallic plane. We extend this formulation to conformal arrays on large metallic cylinders. In this report, we develop the mathematical formulation. In particular, we discuss the shape functions, the resulting finite elements and the boundary integral equations, and the solution of the conformal finite element-boundary integral system. Some validation results are presented and we further show how this formulation can be applied with minimal computational and memory resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Patrício, João, E-mail: joao.patricio@chalmers.se; Kalmykova, Yuliya; Berg, Per E.O.
2015-05-15
Highlights: • The developed MFA method was validated against national statistics. • An exponential increase in EEE sales leads to an increase in integrated battery consumption. • Digital convergence is likely to be a cause of the decline in primary battery consumption. • Factors for the estimation of integrated batteries in EEE are provided. • Sweden reached the collection rates defined by the European Union. - Abstract: In this article, a new method based on Material Flow Accounting is proposed to study detailed material flows in battery consumption that can be replicated for other countries. The method uses regularly available statistics on the import, industrial production and export of batteries and battery-containing electric and electronic equipment (EEE). To promote use of the method by other scholars with no access to such data, several empirical results, and their trends over time, for the occurrence of different battery types among the EEE types are provided. The information provided by the method can be used to identify drivers of battery consumption and to study the dynamic behavior of battery flows due to technology development, policies, consumer behavior and infrastructure. The method is exemplified by the study of battery flows in Sweden for the years 1996–2013. The batteries were accounted for, both in units and by weight, as primary and secondary batteries; loose and integrated; by electrochemical composition; and by the share of battery use between different types of EEE. Results show that, despite a fivefold increase in the consumption of rechargeable batteries, they account for only about 14% of the total use of portable batteries. The recent increase in digital convergence has resulted in a sharp decline in the consumption of primary batteries, which has now stabilized at a fairly low level. Conversely, the consumption of integrated batteries has increased sharply. In 2013, 61% of the total weight of batteries sold in Sweden was collected, and for the particular case of alkaline manganese dioxide batteries, the figure reached 74%.
Fan, Yaxin; Zhu, Xinyan; Guo, Wei; Guo, Tao
2018-01-01
The analysis of traffic collisions is essential for urban safety and the sustainable development of the urban environment. Reducing the road traffic injuries and the financial losses caused by collisions is the most important goal of traffic management. In addition, traffic collisions are a major cause of traffic congestion, which is a serious issue that affects everyone in the society. Therefore, traffic collision analysis is essential for all parties, including drivers, pedestrians, and traffic officers, to understand the road risks at a finer spatio-temporal scale. However, traffic collisions in the urban context are dynamic and complex. Thus, it is important to detect how the collision hotspots evolve over time through spatio-temporal clustering analysis. In addition, traffic collisions are not isolated events in space. The characteristics of the traffic collisions and their surrounding locations also present an influence of the clusters. This work tries to explore the spatio-temporal clustering patterns of traffic collisions by combining a set of network-constrained methods. These methods were tested using the traffic collision data in Jianghan District of Wuhan, China. The results demonstrated that these methods offer different perspectives of the spatio-temporal clustering patterns. The weighted network kernel density estimation provides an intuitive way to incorporate attribute information. The network cross K-function shows that there are varying clustering tendencies between traffic collisions and different types of POIs. The proposed network differential Local Moran’s I and network local indicators of mobility association provide straightforward and quantitative measures of the hotspot changes. This case study shows that these methods could help researchers, practitioners, and policy-makers to better understand the spatio-temporal clustering patterns of traffic collisions. PMID:29672551
Phase derivative method for reconstruction of slightly off-axis digital holograms.
Guo, Cheng-Shan; Wang, Ben-Yi; Sha, Bei; Lu, Yu-Jie; Xu, Ming-Yuan
2014-12-15
A phase derivative (PD) method is proposed for the reconstruction of off-axis holograms. In this method, a phase distribution of the tested object wave constrained within 0 to pi radians is first worked out by a simple analytical formula; it is then corrected to its proper range from -pi to pi according to the sign characteristics of its first-order derivative. A theoretical analysis indicates that this PD method is particularly suitable for the reconstruction of slightly off-axis holograms because, in principle, it only requires the spatial frequency of the reference beam to be larger than the spatial frequency of the tested object wave. In addition, because the PD method is a purely local method with no need for any integral operation or phase-shifting algorithm in the process of phase retrieval, it could have some advantages in reducing the computational load and memory requirements of the image processing system. Some experimental results are given to demonstrate the feasibility of the method.
NASA Astrophysics Data System (ADS)
Wang, Jun; Xu, Xiaoguang; Henze, Daven K.; Zeng, Jing; Ji, Qiang; Tsay, Si-Chee; Huang, Jianping
2012-04-01
Predicting the influences of dust on atmospheric composition, climate, and human health requires accurate knowledge of dust emissions, but large uncertainties persist in quantifying mineral sources. This study presents a new method for combined use of satellite-measured radiances and inverse modeling to spatially constrain the amount and location of dust emissions. The technique is illustrated with a case study in May 2008; the dust emissions in Taklimakan and Gobi deserts are spatially optimized using the GEOS-Chem chemical transport model and its adjoint constrained by aerosol optical depth (AOD) that are derived over the downwind dark-surface region in China from MODIS (Moderate Resolution Imaging Spectroradiometer) reflectance with the aerosol single scattering properties consistent with GEOS-chem. The adjoint inverse modeling yields an overall 51% decrease in prior dust emissions estimated by GEOS-Chem over the Taklimakan-Gobi area, with more significant reductions south of the Gobi Desert. The model simulation with optimized dust emissions shows much better agreement with independent observations from MISR (Multi-angle Imaging SpectroRadiometer) AOD and MODIS Deep Blue AOD over the dust source region and surface PM10 concentrations. The technique of this study can be applied to global multi-sensor remote sensing data for constraining dust emissions at various temporal and spatial scales, and hence improving the quantification of dust effects on climate, air quality, and human health.
An Anatomically Constrained Model for Path Integration in the Bee Brain.
Stone, Thomas; Webb, Barbara; Adden, Andrea; Weddig, Nicolai Ben; Honkanen, Anna; Templin, Rachel; Wcislo, William; Scimeca, Luca; Warrant, Eric; Heinze, Stanley
2017-10-23
Path integration is a widespread navigational strategy in which directional changes and distance covered are continuously integrated on an outward journey, enabling a straight-line return to home. Bees use vision for this task (a celestial-cue-based visual compass and an optic-flow-based visual odometer), but the underlying neural integration mechanisms are unknown. Using intracellular electrophysiology, we show that polarized-light-based compass neurons and optic-flow-based speed-encoding neurons converge in the central complex of the bee brain, and through block-face electron microscopy, we identify potential integrator cells. Based on plausible output targets for these cells, we propose a complete circuit for path integration and steering in the central complex, with anatomically identified neurons suggested for each processing step. The resulting model circuit is thus fully constrained biologically and provides a functional interpretation for many previously unexplained architectural features of the central complex. Moreover, we show that the receptive fields of the newly discovered speed neurons can support path integration for the holonomic motion (i.e., a ground velocity that is not precisely aligned with body orientation) typical of bee flight, a feature not captured in any previously proposed model of path integration. In a broader context, the model circuit presented provides a general mechanism for producing steering signals by comparing current and desired headings, suggesting a more basic function for central complex connectivity, from which path integration may have evolved. Copyright © 2017 Elsevier Ltd. All rights reserved.
The covariance matrix for the solution vector of an equality-constrained least-squares problem
NASA Technical Reports Server (NTRS)
Lawson, C. L.
1976-01-01
Methods are given for computing the covariance matrix for the solution vector of an equality-constrained least squares problem. The methods are matched to the solution algorithms given in the book, 'Solving Least Squares Problems.'
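A minimal sketch of one standard way to do this, the null-space method, is shown below; it is an illustration of the general construction, not necessarily the specific algorithms of the referenced book, and the small line-fitting example is invented.

```python
import numpy as np

def constrained_lsq_with_cov(A, b, C, d, sigma2=1.0):
    """Solve min ||A x - b|| subject to C x = d and return Cov(x).

    Null-space method: write x = x_p + Z y, where C x_p = d and the columns
    of Z span the null space of C.  Only the free parameters y carry
    uncertainty, so Cov(x) = sigma2 * Z (Z^T A^T A Z)^{-1} Z^T.
    """
    x_p = np.linalg.lstsq(C, d, rcond=None)[0]    # particular solution of C x = d
    _, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-12 * s[0]))
    Z = Vt[rank:].T                               # columns span null(C)

    AZ = A @ Z
    y = np.linalg.lstsq(AZ, b - A @ x_p, rcond=None)[0]
    x = x_p + Z @ y
    cov = sigma2 * Z @ np.linalg.inv(AZ.T @ AZ) @ Z.T
    return x, cov

# small example: fit a line y = a + b*t but force it to pass through (0, 1)
t = np.array([0.0, 1.0, 2.0, 3.0])
A = np.column_stack([np.ones_like(t), t])
b = np.array([1.1, 2.9, 5.2, 6.8])
C = np.array([[1.0, 0.0]])                        # constraint: a = 1
d = np.array([1.0])
x, cov = constrained_lsq_with_cov(A, b, C, d)
```

Because the constrained directions carry no uncertainty, the resulting covariance matrix is rank-deficient along the rows of C, which is exactly the behavior one expects for an equality-constrained solution.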
A Constrained Linear Estimator for Multiple Regression
ERIC Educational Resources Information Center
Davis-Stober, Clintin P.; Dana, Jason; Budescu, David V.
2010-01-01
"Improper linear models" (see Dawes, Am. Psychol. 34:571-582, "1979"), such as equal weighting, have garnered interest as alternatives to standard regression models. We analyze the general circumstances under which these models perform well by recasting a class of "improper" linear models as "proper" statistical models with a single predictor. We…
Value, Cost, and Sharing: Open Issues in Constrained Clustering
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.
2006-01-01
Clustering is an important tool for data mining, since it can identify major patterns or trends without any supervision (labeled data). Over the past five years, semi-supervised (constrained) clustering methods have become very popular. These methods began with incorporating pairwise constraints and have developed into more general methods that can learn appropriate distance metrics. However, several important open questions have arisen about which constraints are most useful, how they can be actively acquired, and when and how they should be propagated to neighboring points. This position paper describes these open questions and suggests future directions for constrained clustering research.
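A compact sketch in the spirit of COP-KMeans (nearest-centroid assignment that respects must-link and cannot-link pairs) is given below; it is simplified, has no convergence check, and the toy data and constraints are invented for illustration.

```python
import numpy as np

def _partners(constraints, i):
    """Indices paired with point i in a list of (a, b) constraint pairs."""
    return [b if a == i else a for a, b in constraints if i in (a, b)]

def cop_kmeans(X, k, must_link, cannot_link, n_iter=20, seed=0):
    """Constrained k-means sketch: assign each point to the nearest centroid
    that does not violate a constraint given the assignments made so far;
    returns (None, None) if no consistent assignment is found."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.full(len(X), -1, dtype=int)
    for _ in range(n_iter):
        labels[:] = -1
        for i, x in enumerate(X):
            for c in np.argsort(np.linalg.norm(centers - x, axis=1)):
                ml_ok = all(labels[j] in (-1, c) for j in _partners(must_link, i))
                cl_ok = all(labels[j] != c for j in _partners(cannot_link, i))
                if ml_ok and cl_ok:
                    labels[i] = c
                    break
            if labels[i] == -1:
                return None, None          # constraints cannot be satisfied
        for c in range(k):                 # standard centroid update
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

X = [[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.1, 4.9], [0.2, 0.1], [4.8, 5.2]]
labels, centers = cop_kmeans(X, k=2, must_link=[(0, 4)], cannot_link=[(1, 2)])
```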
Nims, Robert J; Cigan, Alexander D; Durney, Krista M; Jones, Brian K; O'Neill, John D; Law, Wing-Sum A; Vunjak-Novakovic, Gordana; Hung, Clark T; Ateshian, Gerard A
2017-08-01
When cultured with sufficient nutrient supply, engineered cartilage synthesizes proteoglycans rapidly, producing an osmotic swelling pressure that destabilizes immature collagen and prevents the development of a robust collagen framework, a hallmark of native cartilage. We hypothesized that mechanically constraining the proteoglycan-induced tissue swelling would enhance construct functional properties through the development of a more stable collagen framework. To test this hypothesis, we developed a novel "cage" growth system to mechanically prevent tissue constructs from swelling while ensuring adequate nutrient supply to the growing construct. The effectiveness of constrained culture was examined by testing constructs embedded within two different scaffolds: agarose and cartilage-derived matrix hydrogel (CDMH). Constructs were seeded with immature bovine chondrocytes and cultured under free swelling (FS) conditions for 14 days with transforming growth factor-β before being placed into a constraining cage for the remainder of culture. Controls were cultured under FS conditions throughout. Agarose constructs cultured in cages did not expand after the day 14 caging while FS constructs expanded to 8 × their day 0 weight after 112 days of culture. In addition to the physical differences in growth, by day 56, caged constructs had higher equilibrium (agarose: 639 ± 179 kPa and CDMH: 608 ± 257 kPa) and dynamic compressive moduli (agarose: 3.4 ± 1.0 MPa and CDMH 2.8 ± 1.0 MPa) than FS constructs (agarose: 193 ± 74 kPa and 1.1 ± 0.5 MPa and CDMH: 317 ± 93 kPa and 1.8 ± 1.0 MPa for equilibrium and dynamic properties, respectively). Interestingly, when normalized to final day wet weight, cage and FS constructs did not exhibit differences in proteoglycan or collagen content. However, caged culture enhanced collagen maturation through the increased formation of pyridinoline crosslinks and improved collagen matrix stability as measured by α-chymotrypsin solubility. These findings demonstrate that physically constrained culture of engineered cartilage constructs improves functional properties through improved collagen network maturity and stability. We anticipate that constrained culture may benefit other reported engineered cartilage systems that exhibit a mismatch in proteoglycan and collagen synthesis.
Park, Yoon Soo; Lineberry, Matthew; Hyderi, Abbas; Bordage, Georges; Xing, Kuan; Yudkowsky, Rachel
2016-11-01
Medical schools administer locally developed graduation competency examinations (GCEs) following the structure of the United States Medical Licensing Examination Step 2 Clinical Skills that combine standardized patient (SP)-based physical examination and the patient note (PN) to create integrated clinical encounter (ICE) scores. This study examines how different subcomponent scoring weights in a locally developed GCE affect composite score reliability and pass-fail decisions for ICE scores, contributing to internal structure and consequential validity evidence. Data from two M4 cohorts (2014: n = 177; 2015: n = 182) were used. The reliability of SP encounter (history taking and physical examination), PN, and communication and interpersonal skills scores were estimated with generalizability studies. Composite score reliability was estimated for varying weight combinations. Faculty were surveyed for preferred weights on the SP encounter and PN scores. Composite scores based on Kane's method were compared with weighted mean scores. Faculty suggested weighting PNs higher (60%-70%) than the SP encounter scores (30%-40%). Statistically, composite score reliability was maximized when PN scores were weighted at 40% to 50%. Composite score reliability of ICE scores increased by up to 0.20 points when SP-history taking (SP-Hx) scores were included; excluding SP-Hx only increased composite score reliability by 0.09 points. Classification accuracy for pass-fail decisions between composite and weighted mean scores was 0.77; misclassification was < 5%. Medical schools and certification agencies should consider implications of assigning weights with respect to composite score reliability and consequences on pass-fail decisions.
Zhou, Zhengdong; Guan, Shaolin; Xin, Runchao; Li, Jianbo
2018-06-01
Contrast-enhanced subtracted breast computed tomography (CESBCT) images acquired using an energy-resolving photon counting detector can help enhance the visibility of breast tumors. In such technology, one challenge is the limited number of photons in each energy bin, which can lead to high noise in the separate images from each energy bin, the projection-based weighted image, and the subtracted image. In conventional low-dose CT imaging, iterative image reconstruction provides a superior signal-to-noise ratio compared with the filtered back projection (FBP) algorithm. In this paper, maximum a posteriori expectation maximization (MAP-EM) based on projection-based weighting imaging is proposed for the reconstruction of CESBCT images acquired using an energy-resolving photon counting detector, and its performance is investigated in terms of the contrast-to-noise ratio (CNR). The simulation study shows that MAP-EM based on projection-based weighting imaging can improve the CNR in CESBCT images by 117.7%-121.2% compared with FBP based on projection-based weighting imaging. When compared with energy-integrating imaging that uses the MAP-EM algorithm, projection-based weighting imaging that uses the MAP-EM algorithm can improve the CNR of CESBCT images by 10.5%-13.3%. In conclusion, MAP-EM based on projection-based weighting imaging shows a significant improvement in the CNR of the CESBCT image compared with FBP based on projection-based weighting imaging, and MAP-EM based on projection-based weighting imaging outperforms MAP-EM based on energy-integrating imaging for CESBCT imaging.
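For reference, a generic one-step-late MAP-EM update (a common form; the abstract does not specify the prior used, so the regularization term shown here is an assumption) for image $\lambda$, system matrix $a_{ij}$ and measured projections $y_i$ reads

$$ \lambda_j^{(n+1)}=\frac{\lambda_j^{(n)}}{\displaystyle\sum_i a_{ij}+\beta\,\frac{\partial U(\lambda^{(n)})}{\partial\lambda_j}}\;\sum_i a_{ij}\,\frac{y_i}{\sum_k a_{ik}\lambda_k^{(n)}}, $$

with $\beta=0$ recovering plain ML-EM; in the projection-based weighting scheme the $y_i$ would be the weighted combinations of the energy-bin projections rather than the energy-integrated counts.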
Invariant algebraic surfaces for a virus dynamics
NASA Astrophysics Data System (ADS)
Valls, Claudia
2015-08-01
In this paper, we provide a complete classification of the invariant algebraic surfaces and of the rational first integrals for a well-known virus system. In the proofs, we use the weight-homogeneous polynomials and the method of characteristic curves for solving linear partial differential equations.
Zhang, Y M; Huang, G; Lu, H W; He, Li
2015-08-15
A key issue facing integrated water resources management and water pollution control is how to address vague parametric information. A full credibility-based chance-constrained programming (FCCP) method is thus developed by introducing the new concept of credibility into the modeling framework. FCCP can not only deal with fuzzy parameters appearing concurrently in the objective and on both sides of the constraints of the model, but also provides a credibility level indicating how much confidence one can place in the optimal modeling solutions. The method is applied to the Heshui River watershed in south-central China for demonstration. Results from the case study showed that groundwater would make up for the water shortage caused by shrinking surface water availability and rising water demand, and the optimized total pumpage of groundwater from both alluvial and karst aquifers would exceed 90% of its maximum allowable level when the credibility level is higher than or equal to 0.9. It is also indicated that an increase in the credibility level would induce a reduction in the cost of surface water acquisition, a rise in the cost of groundwater withdrawal, and negligible variation in the cost of water pollution control. Copyright © 2015 Elsevier B.V. All rights reserved.
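As background (standard credibility theory in our notation, not necessarily the exact formulation used in the paper), for a triangular fuzzy parameter $\tilde\xi=(a,b,c)$ the credibility of the event $\{\tilde\xi\le x\}$ and the crisp equivalent of a chance constraint at confidence $\alpha>0.5$ are

$$ \mathrm{Cr}\{\tilde\xi\le x\}=\begin{cases}0,& x\le a,\\[2pt] \dfrac{x-a}{2(b-a)},& a\le x\le b,\\[6pt] \dfrac{x-2b+c}{2(c-b)},& b\le x\le c,\\[6pt] 1,& x\ge c,\end{cases}\qquad \mathrm{Cr}\{\tilde\xi\le x\}\ge\alpha\;\Longleftrightarrow\; x\ge(2-2\alpha)\,b+(2\alpha-1)\,c, $$

which is how a credibility level chosen by the decision maker translates a fuzzy constraint into a deterministic one that an ordinary solver can handle.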
Mang, Andreas; Biros, George
2017-01-01
We propose an efficient numerical algorithm for the solution of diffeomorphic image registration problems. We use a variational formulation constrained by a partial differential equation (PDE), where the constraints are a scalar transport equation. We use a pseudospectral discretization in space and second-order accurate semi-Lagrangian time stepping scheme for the transport equations. We solve for a stationary velocity field using a preconditioned, globalized, matrix-free Newton-Krylov scheme. We propose and test a two-level Hessian preconditioner. We consider two strategies for inverting the preconditioner on the coarse grid: a nested preconditioned conjugate gradient method (exact solve) and a nested Chebyshev iterative method (inexact solve) with a fixed number of iterations. We test the performance of our solver in different synthetic and real-world two-dimensional application scenarios. We study grid convergence and computational efficiency of our new scheme. We compare the performance of our solver against our initial implementation that uses the same spatial discretization but a standard, explicit, second-order Runge-Kutta scheme for the numerical time integration of the transport equations and a single-level preconditioner. Our improved scheme delivers significant speedups over our original implementation. As a highlight, we observe a 20 × speedup for a two dimensional, real world multi-subject medical image registration problem.
Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale
Diao, Yuzhu; Hu, Aqin
2018-01-01
Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to the grey language hesitant fuzzy group decision making, and the grey correlation degree is used to sort the schemes. The effectiveness and practicability of the decision-making method are further verified by the industry chain sustainable development ability evaluation example of a circular economy. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weight based on the grey correlation. PMID:29498699
Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.
Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin
2018-03-02
Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to the grey language hesitant fuzzy group decision making, and the grey correlation degree is used to sort the schemes. The effectiveness and practicability of the decision-making method are further verified by the industry chain sustainable development ability evaluation example of a circular economy. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weight based on the grey correlation.
Path optimization method for the sign problem
NASA Astrophysics Data System (ADS)
Ohnishi, Akira; Mori, Yuto; Kashiwa, Kouji
2018-03-01
We propose a path optimization method (POM) to evade the sign problem in Monte-Carlo calculations for complex actions. Among the many approaches to the sign problem, the Lefschetz-thimble path-integral method and the complex Langevin method are promising and extensively discussed. In these methods, real field variables are complexified and the integration manifold is determined by the flow equations or sampled stochastically. When we have singular points of the action or multiple critical points near the original integration surface, however, we risk encountering the residual and global sign problems or the singular drift term problem. One way to avoid the singular points is to optimize the integration path, designing it not to hit the singular points of the Boltzmann weight. By specifying the one-dimensional integration path as z = t + i f(t) (f ∈ R) and by optimizing f(t) to enhance the average phase factor, we demonstrate that we can avoid the sign problem in a one-variable toy model for which the complex Langevin method is found to fail. In these proceedings, we propose the POM and discuss how we can avoid the sign problem in a toy model. We also discuss the possibility of utilizing a neural network to optimize the path.
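A minimal numerical sketch of the idea, for a hypothetical one-variable Gaussian toy action (not the model studied in these proceedings): the path z(t) = t + i f(t) is parametrized by a constant imaginary shift, and that shift is tuned to maximize the average phase factor.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import minimize_scalar

def action(z, h=2.0):
    """Hypothetical complex action S(z) = z^2/2 + i*h*z (toy model only)."""
    return 0.5 * z**2 + 1j * h * z

def avg_phase_factor(c, t=np.linspace(-10.0, 10.0, 4001)):
    """|<exp(i*theta)>| on the shifted path z = t + i*c (Jacobian dz/dt = 1)."""
    w = np.exp(-action(t + 1j * c))
    num = trapezoid(w, t)             # full (complex) partition function
    den = trapezoid(np.abs(w), t)     # phase-quenched partition function
    return np.abs(num) / den

# tune the path parameter so the phase (sign) fluctuations are suppressed
res = minimize_scalar(lambda c: -avg_phase_factor(c),
                      bounds=(-5.0, 5.0), method="bounded")
print("optimal imaginary shift:", res.x, "average phase factor:", -res.fun)
```

For this quadratic toy the optimizer should drive the average phase factor toward 1 (the shift absorbs the linear imaginary term); in realistic models the shift function f(t) would carry many parameters, which is where the neural-network parametrization mentioned above comes in.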
Ehsan, Shoaib; Clark, Adrian F.; ur Rehman, Naveed; McDonald-Maier, Klaus D.
2015-01-01
The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms, such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computation and storage of integral image presents several design challenges due to strict timing and hardware limitations. Although calculation of the integral image only consists of simple addition operations, the total number of operations is large owing to the generally large size of image data. Recursive equations allow substantial decrease in the number of operations but require calculation in a serial fashion. This paper presents two new hardware algorithms that are based on the decomposition of these recursive equations, allowing calculation of up to four integral image values in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit to reduce the size of the required internal memory (nearly 35% for common HD video). Addressing the storage problem of integral image in embedded vision systems, the paper presents two algorithms which allow substantial decrease (at least 44.44%) in the memory requirements. Finally, the paper provides a case study that highlights the utility of the proposed architectures in embedded vision systems. PMID:26184211
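A minimal sketch of the serial recursive computation referred to above is given below; the row-parallel decomposition and the hardware memory-reduction schemes of the paper are not reproduced here.

```python
import numpy as np

def integral_image(img):
    """Serial recursive computation of the integral image.

    s(x, y)  = s(x, y-1)  + img(x, y)      (row-wise running sum)
    ii(x, y) = ii(x-1, y) + s(x, y)        (column-wise accumulation)
    """
    h, w = img.shape
    s = np.zeros((h, w), dtype=np.int64)
    ii = np.zeros((h, w), dtype=np.int64)
    for x in range(h):
        for y in range(w):
            s[x, y] = (s[x, y - 1] if y > 0 else 0) + img[x, y]
            ii[x, y] = (ii[x - 1, y] if x > 0 else 0) + s[x, y]
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum of img[x0:x1+1, y0:y1+1] from at most four integral-image lookups."""
    total = ii[x1, y1]
    if x0 > 0:
        total -= ii[x0 - 1, y1]
    if y0 > 0:
        total -= ii[x1, y0 - 1]
    if x0 > 0 and y0 > 0:
        total += ii[x0 - 1, y0 - 1]
    return total

img = np.arange(24, dtype=np.int64).reshape(4, 6)
ii = integral_image(img)
assert box_sum(ii, 1, 2, 3, 4) == img[1:4, 2:5].sum()
```

The constant-time box sum is what makes rectangular features in detectors such as SURF independent of filter size; the recursive form above is also the dependency chain that the proposed row-parallel hardware decomposition breaks apart.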
Material Distribution Optimization for the Shell Aircraft Composite Structure
NASA Astrophysics Data System (ADS)
Shevtsov, S.; Zhilyaev, I.; Oganesyan, P.; Axenov, V.
2016-09-01
One of the main goals in aircraft structure design is weight reduction and stiffness increase. Composite structures have recently become popular in aircraft because of their mechanical properties and wide range of optimization possibilities. Weight distribution and lay-up are key to creating lightweight, stiff structures. In this paper we discuss optimization of a specific structure that undergoes non-uniform air pressure at different flight conditions, with the goal of reducing the level of noise caused by airflow-induced vibrations at a constrained weight of the part. The initial model was created with the CAD tool Siemens NX; finite element analysis and post-processing were performed with COMSOL Multiphysics and MATLAB. Numerical solutions of the Reynolds-averaged Navier-Stokes (RANS) equations supplemented by a k-ω turbulence model provide the spatial distributions of air pressure applied to the shell surface. In the formulation of the optimization problem, the global strain energy calculated within the optimized shell was taken as the objective. The wall thickness was varied using a parametric approach by introducing an auxiliary sphere with varying radius and center coordinates, which were the design variables. To avoid local stress concentration, the wall thickness increment was defined as a smooth function on the shell surface dependent on the auxiliary sphere position and size. Our study consists of multiple steps: CAD/CAE transformation of the model, determining wind pressure for different flow angles, optimizing the wall thickness distribution for specific flow angles, and designing a lay-up for the optimal material distribution. The studied structure was improved in terms of maximum and average strain energy at a constrained expense of weight growth. The developed methods and tools can be applied to a wide range of shell-like structures made of multilayered quasi-isotropic laminates.
Fat-constrained 18F-FDG PET reconstruction using Dixon MR imaging and the origin ensemble algorithm
NASA Astrophysics Data System (ADS)
Wülker, Christian; Heinzer, Susanne; Börnert, Peter; Renisch, Steffen; Prevrhal, Sven
2015-03-01
Combined PET/MR imaging makes it possible to incorporate the high-resolution anatomical information delivered by MRI into the PET reconstruction algorithm to improve PET accuracy beyond standard corrections. We used the working hypothesis that glucose uptake in adipose tissue is low. Thus, our aim was to shift 18F-FDG PET signal into image regions with a low fat content. Dixon MR imaging can be used to generate fat-only images via the water/fat chemical shift difference. On the other hand, the Origin Ensemble (OE) algorithm, a novel Markov chain Monte Carlo method, allows PET data to be reconstructed without the use of forward- and back-projection operations. By adequate modifications to the Markov chain transition kernel, it is possible to include anatomical a priori knowledge in the OE algorithm. In this work, we used the OE algorithm to reconstruct PET data of a modified IEC/NEMA Body Phantom simulating body water/fat composition. Reconstruction was performed 1) natively, 2) informed with the Dixon MR fat image to down-weight 18F-FDG signal in fatty tissue compartments in favor of adjacent regions, and 3) informed with the fat image to up-weight 18F-FDG signal in fatty tissue compartments, for control purposes. Image intensity profiles confirmed the visibly improved contrast and reduced partial volume effect at water/fat interfaces. We observed a 17 ± 2% increase in SNR of hot lesions surrounded by fat, while image quality was almost completely retained in fat-free image regions. An additional in vivo experiment proved the applicability of the presented technique in practice, and again verified the beneficial impact of fat-constrained OE reconstruction on PET image quality.
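The way an anatomical prior can enter a Markov chain transition rule is sketched below in a deliberately simplified, generic form. The candidate lists, the origin-ensemble-style acceptance ratio, and the (1 - fat fraction) prior weight are all illustrative assumptions and not the actual OE kernel of the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    n_vox, n_events = 50, 2000
    fat = rng.random(n_vox)                          # Dixon-derived fat fraction per voxel (toy values)
    candidates = [rng.choice(n_vox, size=5, replace=False) for _ in range(n_events)]
    origin = np.array([c[0] for c in candidates])    # current origin voxel of each detected event
    counts = np.bincount(origin, minlength=n_vox)    # current image estimate (events per voxel)

    def prior_weight(v, alpha=1.0):
        # Down-weight fatty voxels; alpha = 0 recovers the uninformed reconstruction.
        return (1.0 - fat[v]) ** alpha

    for sweep in range(50):
        for e in range(n_events):
            old, new = origin[e], rng.choice(candidates[e])
            if new == old:
                continue
            # Accept moves toward well-populated, low-fat voxels more often.
            ratio = ((counts[new] + 1) * prior_weight(new)) / \
                    (counts[old] * prior_weight(old) + 1e-12)
            if rng.random() < min(1.0, ratio):
                counts[old] -= 1
                counts[new] += 1
                origin[e] = new

Because each event only ever moves between its own candidate voxels, no forward- or back-projection is needed, which is the property of origin-ensemble-type reconstruction that the paper exploits.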
THE IMPORTANCE OF ⁵⁶Ni IN SHAPING THE LIGHT CURVES OF TYPE II SUPERNOVAE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakar, Ehud; Poznanski, Dovi; Katz, Boaz
2016-06-01
What intrinsic properties shape the light curves of SNe II? To address this question we derive observational measures that are robust (i.e., insensitive to detailed radiative transfer) and constrain the contribution from ⁵⁶Ni as well as a combination of the envelope mass, progenitor radius, and explosion energy. By applying our methods to a sample of SNe II from the literature, we find that a ⁵⁶Ni contribution is often significant. In our sample, its contribution to the time-weighted integrated luminosity during the photospheric phase ranges between 8% and 72% with a typical value of 30%. We find that the ⁵⁶Ni relative contribution is anti-correlated with the luminosity decline rate. When added to other clues, this in turn suggests that the flat plateaus often observed in SNe II are not a generic feature of the cooling envelope emission, and that without ⁵⁶Ni many of the SNe that are classified as II-P would have shown a decline rate that is steeper by up to 1 mag/100 days. Nevertheless, we find that the cooling envelope emission, and not ⁵⁶Ni contribution, is the main driver behind the observed range of decline rates. Furthermore, contrary to previous suggestions, our findings indicate that fast decline rates are not driven by lower envelope masses. We therefore suggest that the difference in observed decline rates is mainly a result of different density profiles of the progenitors.
NASA Astrophysics Data System (ADS)
Ordóñez Cabrera, Manuel; Volodin, Andrei I.
2005-05-01
From the classical notion of uniform integrability of a sequence of random variables, a new concept of integrability (called h-integrability) is introduced for an array of random variables, concerning an array of constants. We prove that this concept is weaker than other previous related notions of integrability, such as Cesàro uniform integrability [Chandra, Sankhya Ser. A 51 (1989) 309-317], uniform integrability concerning the weights [Ordóñez Cabrera, Collect. Math. 45 (1994) 121-132] and Cesàro α-integrability [Chandra and Goswami, J. Theoret. Probab. 16 (2003) 655-669]. Under this condition of integrability and appropriate conditions on the array of weights, mean convergence theorems and weak laws of large numbers for weighted sums of an array of random variables are obtained when the random variables are subject to some special kinds of dependence: (a) rowwise pairwise negative dependence, (b) rowwise pairwise non-positive correlation, (c) when the sequence of random variables in every row is φ-mixing. Finally, we consider the general weak law of large numbers in the sense of Gut [Statist. Probab. Lett. 14 (1992) 49-52] under this new condition of integrability for a Banach space setting.
NASA Astrophysics Data System (ADS)
Asgari, Jamal; Mohammadloo, Tannaz H.; Amiri-Simkooei, Ali Reza
2015-09-01
GNSS kinematic techniques are capable of providing precise coordinates in extremely short observation time-spans. These methods usually determine the coordinates of an unknown station with respect to a reference one. To enhance the precision, accuracy, reliability and integrity of the estimated unknown parameters, the GNSS kinematic equations are to be augmented by possible constraints. Such constraints can be derived from the geometric relation of the receiver positions in motion. This contribution presents the formulation of constrained kinematic global navigation satellite system positioning. Constraints effectively restrict the definition domain of the unknown parameters from the three-dimensional space to a subspace defined by the equation of motion. To test the concept of the constrained kinematic positioning method, the equation of a circle is employed as a constraint. A device capable of moving on a circle was made and the observations from 11 positions on the circle were analyzed. Relative positioning was conducted by considering the center of the circle as the reference station. The equation of the receiver's motion was rewritten in the ECEF coordinate system. Special attention is drawn to how a constraint is applied to kinematic positioning. Implementing the constraint in the positioning process provides much more precise results compared to the unconstrained case. This is verified both by the covariance matrix of the estimated parameters and by empirical results from the kinematic positioning samples. The theoretical standard deviations of the horizontal components are reduced by a factor ranging from 1.24 to 2.64. The improvement in the empirical standard deviation of the horizontal components ranges from 1.08 to 2.2.
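A toy version of the core idea, restricting the estimate to the known equation of motion, can be written as a one-parameter nonlinear least-squares fit: instead of solving for two horizontal coordinates, only the angle on the circle is estimated. The circle centre, radius, noise level and 2-D simplification are invented for illustration and are not the paper's GNSS observation model.

    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(1)
    center, radius = np.array([10.0, -5.0]), 2.5

    theta_true = np.linspace(0.0, 2.0 * np.pi, 11, endpoint=False)
    pos_true = center + radius * np.c_[np.cos(theta_true), np.sin(theta_true)]
    fixes = pos_true + rng.normal(scale=0.3, size=pos_true.shape)   # unconstrained "positions"

    def project_to_circle(p):
        # One unknown (the angle) instead of two coordinates: the constraint shrinks
        # the definition domain of the parameters to the circle.
        residual = lambda th: center + radius * np.array([np.cos(th[0]), np.sin(th[0])]) - p
        th0 = np.arctan2(p[1] - center[1], p[0] - center[0])
        th = least_squares(residual, x0=[th0]).x[0]
        return center + radius * np.array([np.cos(th), np.sin(th)])

    constrained = np.array([project_to_circle(p) for p in fixes])
    print("RMS error, unconstrained:", np.sqrt(np.mean((fixes - pos_true) ** 2)))
    print("RMS error, constrained:  ", np.sqrt(np.mean((constrained - pos_true) ** 2)))

The radial error component is removed entirely, which is the same mechanism by which the paper's constraint reduces the theoretical and empirical standard deviations.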
NASA Astrophysics Data System (ADS)
Sturtz, Timothy M.
Source apportionment models attempt to untangle the relationship between pollution sources and the impacts at downwind receptors. Two frameworks of source apportionment models exist: source-oriented and receptor-oriented. Source-based apportionment models use presumed emissions and atmospheric processes to estimate the downwind source contributions. Conversely, receptor-based models leverage speciated concentration data from downwind receptors and apply statistical methods to predict source contributions. Integration of both source-oriented and receptor-oriented models could lead to a better understanding of the implications sources have for the environment and society. The research presented here investigated three different types of constraints applied to the Positive Matrix Factorization (PMF) receptor model within the framework of the Multilinear Engine (ME-2): element ratio constraints, spatial separation constraints, and chemical transport model (CTM) source attribution constraints. PM10-2.5 mass and trace element concentrations were measured in Winston-Salem, Chicago, and St. Paul at up to 60 sites per city during two different seasons in 2010. PMF was used to explore the underlying sources of variability. Information on previously reported PM10-2.5 tire and brake wear profiles was used to constrain these features in PMF by prior specification of selected species ratios. We also modified PMF to allow the measurements from all three cities to be combined into a single model while preserving city-specific soil features. Relatively minor differences were observed between model predictions with and without the prior ratio constraints, increasing confidence in our ability to identify separate brake wear and tire wear features. Using separate data, source contributions to total fine particle carbon predicted by a CTM were incorporated into the PMF receptor model to form a receptor-oriented hybrid model. The level of influence of the CTM versus traditional PMF was varied using a weighting parameter applied to an objective function as implemented in ME-2. The resulting hybrid model was used to quantify the contributions of total carbon from both wildfires and biogenic sources at two Interagency Monitoring of Protected Visual Environments monitoring sites, Monture and Sula Peak, Montana, from 2006 through 2008.
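The effect of a prior species-ratio constraint can be sketched with a toy least-squares factorization in which one factor profile is softly pulled toward a prescribed ratio between two species. This is not ME-2, and the ratio value, penalty weight and problem sizes are invented for illustration.

    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    n_samples, n_species, n_factors = 40, 6, 2
    G_true = rng.gamma(2.0, 1.0, size=(n_samples, n_factors))
    F_true = rng.gamma(2.0, 1.0, size=(n_factors, n_species))
    X = G_true @ F_true + rng.normal(scale=0.05, size=(n_samples, n_species))

    target_ratio, lam = 1.6, 50.0     # e.g. a published species ratio for brake wear (made-up value)

    def objective(v):
        G = v[:n_samples * n_factors].reshape(n_samples, n_factors)
        F = v[n_samples * n_factors:].reshape(n_factors, n_species)
        fit = np.sum((X - G @ F) ** 2)
        # Soft constraint: species 0 / species 1 ratio of factor 0 pulled toward target_ratio.
        penalty = lam * (F[0, 0] - target_ratio * F[0, 1]) ** 2
        return fit + penalty

    v0 = np.abs(rng.normal(size=n_samples * n_factors + n_factors * n_species)) + 0.1
    res = minimize(objective, v0, bounds=[(0.0, None)] * v0.size, method='L-BFGS-B')
    F_hat = res.x[n_samples * n_factors:].reshape(n_factors, n_species)
    print("ratio in constrained factor 0:", F_hat[0, 0] / F_hat[0, 1])

Increasing lam makes the constraint harder, while setting it to zero recovers an ordinary unconstrained nonnegative factorization; this mirrors how the weighting parameter in ME-2 trades off the auxiliary terms against the data fit.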
Numerical study of a matrix-free trust-region SQP method for equality constrained optimization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinkenschloss, Matthias; Ridzal, Denis; Aguilo, Miguel Antonio
2011-12-01
This is a companion publication to the paper 'A Matrix-Free Trust-Region SQP Algorithm for Equality Constrained Optimization' [11]. In [11], we develop and analyze a trust-region sequential quadratic programming (SQP) method that supports the matrix-free (iterative, inexact) solution of linear systems. In this report, we document the numerical behavior of the algorithm applied to a variety of equality constrained optimization problems, with constraints given by partial differential equations (PDEs).
Epoch of reionization 21 cm forecasting from MCMC-constrained semi-numerical models
NASA Astrophysics Data System (ADS)
Hassan, Sultan; Davé, Romeel; Finlator, Kristian; Santos, Mario G.
2017-06-01
The recent low value of Planck Collaboration XLVII integrated optical depth to Thomson scattering suggests that the reionization occurred fairly suddenly, disfavouring extended reionization scenarios. This will have a significant impact on the 21 cm power spectrum. Using a semi-numerical framework, we improve our model from instantaneous to include time-integrated ionization and recombination effects, and find that this leads to more sudden reionization. It also yields larger H II bubbles that lead to an order of magnitude more 21 cm power on large scales, while suppressing the small-scale ionization power. Local fluctuations in the neutral hydrogen density play the dominant role in boosting the 21 cm power spectrum on large scales, while recombinations are subdominant. We use a Monte Carlo Markov chain approach to constrain our model to observations of the star formation rate functions at z = 6, 7, 8 from Bouwens et al., the Planck Collaboration XLVII optical depth measurements and the Becker & Bolton ionizing emissivity data at z ˜ 5. We then use this constrained model to perform 21 cm forecasting for Low Frequency Array, Hydrogen Epoch of Reionization Array and Square Kilometre Array in order to determine how well such data can characterize the sources driving reionization. We find that the Mock 21 cm power spectrum alone can somewhat constrain the halo mass dependence of ionizing sources, the photon escape fraction and ionizing amplitude, but combining the Mock 21 cm data with other current observations enables us to separately constrain all these parameters. Our framework illustrates how the future 21 cm data can play a key role in understanding the sources and topology of reionization as observations improve.
SENS-5D trajectory and wind-sensitivity calculations for unguided rockets
NASA Technical Reports Server (NTRS)
Singh, R. P.; Huang, L. C. P.; Cook, R. A.
1975-01-01
A computational procedure is described which numerically integrates the equations of motion of an unguided rocket. Three translational and two angular (roll discarded) degrees of freedom are integrated through final burnout; from burnout through impact, only the three translational motions are considered. Input to the routine consists of the initial time, altitude and velocity, vehicle characteristics, and other defined options. The input format has a wide range of flexibility for special calculations. Output is geared mainly to the wind-weighting procedure, and includes a summary of the trajectory at burnout, apogee and impact, a summary of spent-stage trajectories, detailed position and vehicle data, unit-wind effects for head, tail and cross winds, Coriolis deflections, the range derivative, and the sensitivity curves (the so-called F(Z) and DF(Z) curves). The numerical integration procedure is a fourth-order, modified Adams-Bashforth predictor-corrector method. This method is supplemented by a fourth-order Runge-Kutta method to start the integration at t = 0 and whenever error criteria demand a change in step size.
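A generic sketch of the integrator family named here, a fourth-order Adams-Bashforth predictor with an Adams-Moulton corrector started by Runge-Kutta, is given below. The right-hand side (a point mass with simple drag), the step size and the corrector choice are illustrative assumptions and not the SENS-5D flight model or its error control.

    import numpy as np

    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    def abm4(f, t0, y0, h, n_steps):
        # 4th-order Adams-Bashforth predictor / Adams-Moulton corrector, RK4-started.
        t, y = [t0], [np.asarray(y0, dtype=float)]
        for _ in range(3):                              # RK4 starter for the first three steps
            y.append(rk4_step(f, t[-1], y[-1], h))
            t.append(t[-1] + h)
        fs = [f(ti, yi) for ti, yi in zip(t, y)]
        for _ in range(n_steps - 3):
            yp = y[-1] + h / 24 * (55 * fs[-1] - 59 * fs[-2] + 37 * fs[-3] - 9 * fs[-4])  # predictor
            fp = f(t[-1] + h, yp)
            yc = y[-1] + h / 24 * (9 * fp + 19 * fs[-1] - 5 * fs[-2] + fs[-3])            # corrector
            t.append(t[-1] + h)
            y.append(yc)
            fs.append(f(t[-1], yc))
        return np.array(t), np.array(y)

    def rhs(t, s):
        # Stand-in right-hand side: planar point mass with speed-proportional drag.
        x, z, vx, vz = s
        drag = 0.002 * np.hypot(vx, vz)
        return np.array([vx, vz, -drag * vx, -9.81 - drag * vz])

    t, y = abm4(rhs, 0.0, [0.0, 0.0, 100.0, 100.0], 0.05, 400)

Restarting with the Runge-Kutta stages whenever the step size changes, as the report describes, is necessary because the Adams formulas assume the last four samples are equally spaced.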
Quadrature imposition of compatibility conditions in Chebyshev methods
NASA Technical Reports Server (NTRS)
Gottlieb, D.; Streett, C. L.
1990-01-01
Often, in solving an elliptic equation with Neumann boundary conditions, a compatibility condition has to be imposed for well-posedness. This condition involves integrals of the forcing function. When pseudospectral Chebyshev methods are used to discretize the partial differential equation, these integrals have to be approximated by an appropriate quadrature formula. The Gauss-Chebyshev formula (or any variant of it, such as the Gauss-Lobatto formula) cannot be used here since the integrals under consideration do not include the weight function. A natural candidate for approximating the integrals is the Clenshaw-Curtis formula; however, it is shown that this is the wrong choice and that it may lead to divergence if time-dependent methods are used to march the solution to steady state. The correct quadrature formula is developed for these problems. This formula takes into account the degree of the polynomials involved. It is shown that this formula leads to a well-conditioned Chebyshev approximation to the differential equations and that the compatibility condition is automatically satisfied.
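For reference, the compatibility condition in question is the standard one for the Neumann problem: integrating the Poisson equation \(\nabla^2 u = f\) with boundary data \(\partial u/\partial n = g\) over the domain \(\Omega\) and applying the divergence theorem gives

\[
\int_{\Omega} f \, dV \;=\; \oint_{\partial\Omega} \frac{\partial u}{\partial n}\, dS \;=\; \oint_{\partial\Omega} g \, dS .
\]

It is this plain integral of the forcing function, taken without the Chebyshev weight \((1-x^{2})^{-1/2}\), that the quadrature rule must approximate, which is why the Gauss-Chebyshev family does not apply directly.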
Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors
Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech
2011-01-01
Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of the motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimates. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without using aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is an approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
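A minimal sketch of the band-limited Fourier-linear-combiner family the paper modifies: sine and cosine references spanning an assumed frequency band are combined with LMS-adapted weights. The band limits, adaptation gain, sampling rate and test signal are illustrative values, not the paper's tuned parameters or its full drift-removal pipeline.

    import numpy as np

    fs = 250.0                                   # sampling rate in Hz (illustrative)
    t = np.arange(0.0, 10.0, 1.0 / fs)
    signal = 0.8 * np.sin(2 * np.pi * 9.3 * t) + 0.05 * np.random.randn(t.size)

    freqs = np.arange(7.0, 13.0, 0.5)            # assumed band of the periodic motion (Hz)
    mu = 0.01                                    # LMS adaptation gain
    w = np.zeros(2 * freqs.size)
    estimate = np.zeros_like(signal)

    for k in range(t.size):
        # BMFLC-style reference vector: sines and cosines at the fixed band frequencies.
        x = np.concatenate([np.sin(2 * np.pi * freqs * t[k]),
                            np.cos(2 * np.pi * freqs * t[k])])
        estimate[k] = w @ x
        err = signal[k] - estimate[k]
        w += 2 * mu * err * x                    # LMS weight update

    skip = int(2 * fs)                           # discard the initial adaptation transient
    print("residual RMS:", np.sqrt(np.mean((signal[skip:] - estimate[skip:]) ** 2)))

Because the estimate is built from zero-mean sinusoids in the assumed band, it carries no integration drift, which is the property the proposed method combines with its linear filtering stage.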
NASA Astrophysics Data System (ADS)
Yeboah-Forson, Albert; Comas, Xavier; Whitman, Dean
2014-07-01
The limestone composing the Biscayne Aquifer in southeast Florida is characterized by cavities and solution features that are difficult to detect and quantify accurately because of their heterogeneous spatial distribution. Such heterogeneities have been shown by previous studies to exert a strong influence in the direction of groundwater flow. In this study we use an integrated array of geophysical methods to detect the lateral extent and distribution of solution features as indicative of anisotropy in the Biscayne Aquifer. Geophysical methods included azimuthal resistivity measurements, electrical resistivity imaging (ERI) and ground penetrating radar (GPR) and were constrained with direct borehole information from nearby wells. The geophysical measurements suggest the presence of a zone of low electrical resistivity (from ERI) and low electromagnetic wave velocity (from GPR) below the water table at depths of 4-9 m that corresponds to the depth of solution conduits seen in digital borehole images. Azimuthal electrical measurements at the site reported coefficients of electrical anisotropy as high as 1.36 suggesting the presence of an area of high porosity (most likely comprising different types of porosity) oriented in the E-W direction. This study shows how integrated geophysical methods can help detect the presence of areas of enhanced porosity which may influence the direction of groundwater flow in a complex anisotropic and heterogeneous karst system like the Biscayne Aquifer.
Robust audio-visual speech recognition under noisy audio-video conditions.
Stewart, Darryl; Seymour, Rowan; Pass, Adrian; Ming, Ji
2014-02-01
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements and can be used alongside many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances with corruption added to either or both the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream weighting approach and also compared to any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
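The underlying multi-stream combination can be written in the standard exponent-weighted form

\[
P(q \mid o^{a}_{t}, o^{v}_{t}) \;\propto\; p(o^{a}_{t}\mid q)^{\lambda_{t}}\; p(o^{v}_{t}\mid q)^{\,1-\lambda_{t}}\; P(q), \qquad 0 \le \lambda_{t} \le 1 ,
\]

where \(o^{a}_{t}\) and \(o^{v}_{t}\) are the audio and video observations at frame \(t\) and \(\lambda_{t}\) is the audio stream weight; the specific maximum-weighted-posterior rule by which MWSP chooses \(\lambda_{t}\) per frame is defined in the paper and is not reproduced here.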
ERIC Educational Resources Information Center
Gil, Laura; Braten, Ivar; Vidal-Abarca, Eduardo; Stromso, Helge I.
2010-01-01
One of the major challenges of a knowledge society is that students as well as other citizens must learn to understand and integrate information from multiple textual sources. Still, task and reader characteristics that may facilitate or constrain such intertextual processes are not well understood by researchers. In this study, we compare the…
NASA Technical Reports Server (NTRS)
Braslow, A. L.; Whitehead, A. H., Jr.
1973-01-01
The anticipated growth of air transportation is in danger of being constrained by increased prices and insecure sources of petroleum-based fuel. Fuel-conservation possibilities attainable through the application of advances in aeronautical technology to aircraft design are identified with the intent of stimulating NASA R and T and systems-study activities in the various disciplinary areas. The material includes drag reduction; weight reduction; increased efficiency of main and auxiliary power systems; unconventional air transport of cargo; and operational changes.
A Study of Penalty Function Methods for Constraint Handling with Genetic Algorithm
NASA Technical Reports Server (NTRS)
Ortiz, Francisco
2004-01-01
COMETBOARDS (Comparative Evaluation Testbed of Optimization and Analysis Routines for Design of Structures) is a design optimization test bed that can evaluate the performance of several different optimization algorithms. A few of these optimization algorithms are the sequence of unconstrained minimization techniques (SUMT), sequential linear programming (SLP) and sequential quadratic programming (SQP). A genetic algorithm (GA) is a search technique that is based on the principles of natural selection or "survival of the fittest". Instead of using gradient information, the GA uses the objective function directly in the search. The GA searches the solution space by maintaining a population of potential solutions. Then, using evolutionary operators such as recombination, mutation and selection, the GA creates successive generations of solutions that evolve and take on the positive characteristics of their parents, and thus gradually approach optimal or near-optimal solutions. By using the objective function directly in the search, genetic algorithms can be effectively applied to non-convex, highly nonlinear, complex problems. The genetic algorithm is not guaranteed to find the global optimum, but it is less likely to get trapped at a local optimum than traditional gradient-based search methods when the objective function is not smooth and generally well behaved. The purpose of this research is to assist in the integration of a genetic algorithm (GA) into COMETBOARDS. COMETBOARDS casts the design of structures as a constrained nonlinear optimization problem. One method used to solve a constrained optimization problem with a GA is to convert the constrained problem into an unconstrained one by developing a penalty function that penalizes infeasible solutions. Several penalty functions have been suggested in the literature, each with its own strengths and weaknesses. A statistical analysis of some suggested penalty functions is performed in this study. Also, a response surface approach to robust design is used to develop a new penalty function approach. This new penalty function approach is then compared with the other existing penalty functions.
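A minimal sketch of the exterior-penalty conversion described here; the quadratic violation term and the penalty factor are one common choice from the literature, not necessarily the formulations analyzed in this study.

    def penalized_fitness(x, objective, inequality_constraints, r=1e3):
        # Convert a constrained problem to an unconstrained one for a GA.
        # inequality_constraints: callables g_i with g_i(x) <= 0 when feasible;
        # infeasible designs are penalized by the squared constraint violation.
        violation = sum(max(0.0, g(x)) ** 2 for g in inequality_constraints)
        return objective(x) + r * violation

    # Example: minimize (x0 - 1)^2 + (x1 - 2)^2 subject to x0 + x1 <= 2.
    f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    g = [lambda x: x[0] + x[1] - 2.0]
    print(penalized_fitness([1.0, 2.0], f, g))   # infeasible: objective 0 plus penalty 1000
    print(penalized_fitness([0.5, 1.5], f, g))   # feasible: no penalty, value 0.5

The GA then simply minimizes penalized_fitness; how sensitive the search is to the penalty factor r and to the form of the violation term is exactly what the statistical comparison in this work examines.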
MONSS: A multi-objective nonlinear simplex search approach
NASA Astrophysics Data System (ADS)
Zapotecas-Martínez, Saúl; Coello Coello, Carlos A.
2016-01-01
This article presents a novel methodology for dealing with continuous box-constrained multi-objective optimization problems (MOPs). The proposed algorithm adopts a nonlinear simplex search scheme in order to obtain multiple elements of the Pareto optimal set. The search is directed by a well-distributed set of weight vectors, each of which defines a scalarization problem that is solved by deforming a simplex according to the movements described by Nelder and Mead's method. Considering an MOP with n decision variables, the simplex is constructed using n+1 solutions which minimize different scalarization problems defined by n+1 neighbor weight vectors. All solutions found in the search are used to update a set of solutions considered to be the minima for each separate problem. In this way, the proposed algorithm collectively obtains multiple trade-offs among the different conflicting objectives, while maintaining a proper representation of the Pareto optimal front. In this article, it is shown that a well-designed strategy using just mathematical programming techniques can be competitive with respect to the state-of-the-art multi-objective evolutionary algorithms against which it was compared.
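The weight-vector-driven scalarization can be sketched as follows with an off-the-shelf Nelder-Mead solver. The weighted Tchebycheff form, the two-objective test function and the crude clipping of the box constraints are illustrative choices; MONSS's actual construction of a simplex from n+1 neighbouring weight vectors is not reproduced.

    import numpy as np
    from scipy.optimize import minimize

    def objectives(x):
        # Simple bi-objective, box-constrained test problem (illustrative).
        f1 = x[0]
        f2 = 1.0 - np.sqrt(max(x[0], 0.0)) + np.sum(x[1:] ** 2)
        return np.array([f1, f2])

    z_star = np.array([0.0, 0.0])                 # ideal point, assumed known here

    def tchebycheff(x, w):
        x = np.clip(x, 0.0, 1.0)                  # crude handling of the box constraints
        return np.max(w * np.abs(objectives(x) - z_star))

    front = []
    for w1 in np.linspace(0.05, 0.95, 10):        # well-distributed weight vectors
        w = np.array([w1, 1.0 - w1])
        res = minimize(tchebycheff, x0=np.full(3, 0.5), args=(w,), method='Nelder-Mead')
        front.append(objectives(np.clip(res.x, 0.0, 1.0)))
    print(np.round(np.array(front), 3))           # approximate trade-off solutions

Each weight vector yields one point of the trade-off surface; sharing solutions between neighbouring scalarization problems, as MONSS does when it builds its simplices, is what keeps the number of function evaluations low.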
Robust head pose estimation via supervised manifold learning.
Wang, Chao; Song, Xubo
2014-05-01
Head poses can be automatically estimated using manifold learning algorithms, under the assumption that, with pose being the only variable, the face images should lie on a smooth and low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction as well as the graph weight by constraining them with the pose angle information. For Stage 3, we present a supervised neighborhood-based linear feature transformation algorithm to keep the data points with similar pose angles close together but the data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than the other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Thermomechanical Fatigue of Ductile Cast Iron and Its Life Prediction
NASA Astrophysics Data System (ADS)
Wu, Xijia; Quan, Guangchun; MacNeil, Ryan; Zhang, Zhong; Liu, Xiaoyang; Sloss, Clayton
2015-06-01
Thermomechanical fatigue (TMF) behaviors of ductile cast iron (DCI) were investigated under out-of-phase (OP), in-phase (IP), and constrained strain-control conditions with temperature hold in various temperature ranges: 573 K to 1073 K, 723 K to 1073 K, and 433 K to 873 K (300 °C to 800 °C, 450 °C to 800 °C, and 160 °C to 600 °C). The integrated creep-fatigue theory (ICFT) model was incorporated into the finite element method to simulate the hysteresis behavior and predict the TMF life of DCI under those test conditions. With the consideration of four deformation/damage mechanisms: (i) plasticity-induced fatigue, (ii) intergranular embrittlement, (iii) creep, and (iv) oxidation, as revealed from the previous study on low cycle fatigue of the material, the model delineates the contributions of these physical mechanisms in the asymmetrical hysteresis behavior and the damage accumulation process leading to final TMF failure. This study shows that the ICFT model can simulate the stress-strain response and life of DCI under complex TMF loading profiles (OP and IP, and constrained with temperature hold).
Liu, Yan; Ma, Jianhua; Zhang, Hao; Wang, Jing; Liang, Zhengrong
2014-01-01
Background: The negative effects of X-ray exposure, such as the induction of genetic and cancerous diseases, have attracted increasing attention. Objective: This paper aims to investigate a penalized re-weighted least-squares (PRWLS) strategy for low-mAs X-ray computed tomography image reconstruction by incorporating an adaptive weighted total variation (AwTV) penalty term and a noise variance model of the projection data. Methods: An AwTV penalty is introduced in the objective function by considering both the piecewise-constant property and the local nearby-intensity similarity of the desired image. Furthermore, the weight of the data fidelity term in the objective function is determined by our recent study on modeling variance estimation of projection data in the presence of electronic background noise. Results: The presented AwTV-PRWLS algorithm achieves the highest full-width-at-half-maximum (FWHM) measurement for data conditions of (1) full-view 10 mA acquisition and (2) sparse-view 80 mA acquisition. In a comparison between the AwTV/TV-PRWLS strategies and the previously reported AwTV/TV-projection onto convex sets (AwTV/TV-POCS) approaches, the former gains in terms of FWHM for data condition (1), but not for data condition (2). Conclusions: In the case of full-view 10 mA projection data, the presented AwTV-PRWLS shows potential improvement. However, in the case of sparse-view 80 mA projection data, the AwTV/TV-POCS shows an advantage over the PRWLS strategies. PMID:25080113
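Schematically, the reconstruction solves a problem of the form

\[
\hat{\mu} \;=\; \arg\min_{\mu}\; (y - A\mu)^{\mathsf T}\, \Sigma^{-1}\, (y - A\mu) \;+\; \beta \sum_{j}\sum_{k \in \mathcal{N}(j)} w_{j,k}\,\lvert \mu_j - \mu_k \rvert ,
\]

where \(y\) is the projection data, \(A\) the system matrix, \(\Sigma\) the diagonal matrix of estimated projection variances supplying the re-weighting of the data-fidelity term, \(\beta\) the regularization strength, and \(w_{j,k}\) the adaptive weights that decrease across large intensity differences so that edges are preserved. The exact form of the adaptive weights and of the variance model follows the cited studies and is not reproduced here.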
Combined Dynamic Time Warping with Multiple Sensors for 3D Gesture Recognition.
Choi, Hyo-Rim; Kim, TaeYong
2017-08-17
Cyber-physical systems, which closely integrate physical systems and humans, can be applied to a wider range of applications through user movement analysis. In three-dimensional (3D) gesture recognition, multiple sensors are required to recognize various natural gestures. Several studies have been undertaken in the field of gesture recognition; however, gesture recognition was conducted based on data captured from various independent sensors, which rendered the capture and combination of real-time data complicated. In this study, a 3D gesture recognition method using combined information obtained from multiple sensors is proposed. The proposed method can robustly perform gesture recognition regardless of a user's location and movement directions by providing viewpoint-weighted values and/or motion-weighted values. In the proposed method, the viewpoint-weighted dynamic time warping with multiple sensors has enhanced performance by preventing joint measurement errors and noise due to sensor measurement tolerance, which has resulted in the enhancement of recognition performance by comparing multiple joint sequences effectively.
NASA Astrophysics Data System (ADS)
Rödenbeck, Christian; Bakker, Dorothee; Gruber, Nicolas; Iida, Yosuke; Jacobson, Andy; Jones, Steve; Landschützer, Peter; Metzl, Nicolas; Nakaoka, Shin-ichiro; Olsen, Are; Park, Geun-Ha; Peylin, Philippe; Rodgers, Keith; Sasse, Tristan; Schuster, Ute; Shutler, James; Valsala, Vinu; Wanninkhof, Rik; Zeng, Jiye
2016-04-01
Using measurements of the surface-ocean CO2 partial pressure (pCO2) from the SOCAT and LDEO data bases and 14 different pCO2 mapping methods recently collated by the Surface Ocean pCO2 Mapping intercomparison (SOCOM) initiative, variations in regional and global sea-air CO2 fluxes are investigated. Though the available mapping methods use widely different approaches, we find relatively consistent estimates of regional pCO2 seasonality, in line with previous estimates. In terms of interannual variability (IAV), all mapping methods estimate the largest variations to occur in the Eastern equatorial Pacific. Despite considerable spread in the detailed variations, mapping methods that fit the data more closely also tend to agree more closely with each other in regional averages. Encouragingly, this includes mapping methods belonging to complementary types - taking variability either directly from the pCO2 data or indirectly from driver data via regression. From a weighted ensemble average, we find an IAV amplitude of the global sea-air CO2 flux of IAVampl (standard deviation over AnalysisPeriod), which is larger than simulated by biogeochemical process models. On a decadal perspective, the global ocean CO2 uptake is estimated to have gradually increased since about 2000, with little decadal change prior to that. The weighted mean net global ocean CO2 sink estimated by the SOCOM ensemble is -1.75 PgC yr-1 (AnalysisPeriod), consistent within uncertainties with estimates from ocean-interior carbon data or atmospheric oxygen trends. Using data-based sea-air CO2 fluxes in atmospheric CO2 inversions also helps to better constrain land-atmosphere CO2 fluxes.
Development of an integrated BEM approach for hot fluid structure interaction
NASA Technical Reports Server (NTRS)
Dargush, G. F.; Banerjee, P. K.; Shi, Y.
1990-01-01
A comprehensive boundary element method is presented for transient thermoelastic analysis of hot section Earth-to-Orbit engine components. This time-domain formulation requires discretization of only the surface of the component, and thus provides an attractive alternative to finite element analysis for this class of problems. In addition, steep thermal gradients, which often occur near the surface, can be captured more readily since with a boundary element approach there are no shape functions to constrain the solution in the direction normal to the surface. For example, the circular disc analysis indicates the high level of accuracy that can be obtained. In fact, on the basis of reduced modeling effort and improved accuracy, it appears that the present boundary element method should be the preferred approach for general problems of transient thermoelasticity.
Treatment of constraints in the stochastic quantization method and covariantized Langevin equation
NASA Astrophysics Data System (ADS)
Ikegami, Kenji; Kimura, Tadahiko; Mochizuki, Riuji
1993-04-01
We study the treatment of constraints in the stochastic quantization method. We improve the treatment of the stochastic consistency condition proposed by Namiki et al. by suitably taking into account the Ito calculus. We then obtain an improved Langevin equation and the Fokker-Planck equation which naturally leads to the correct path-integral quantization of the constrained system as the stochastic equilibrium state. This treatment is applied to an O(N) non-linear σ model, and it is shown that the singular terms appearing in the improved Langevin equation cancel out the δ^n(0) divergences at one-loop order. We also ascertain that the above Langevin equation, rewritten in terms of independent variables, is actually equivalent to the one in the general-coordinate-transformation-covariant and vielbein-rotation-invariant formalism.
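For orientation, the unconstrained Langevin equation of stochastic quantization, to which the constraint treatment of this paper adds further terms, reads (with fictitious time \(\tau\))

\[
\frac{\partial \phi(x,\tau)}{\partial \tau} \;=\; -\,\frac{\delta S[\phi]}{\delta \phi(x,\tau)} \;+\; \eta(x,\tau),
\qquad
\langle \eta(x,\tau)\,\eta(x',\tau') \rangle \;=\; 2\,\delta(x-x')\,\delta(\tau-\tau'),
\]

so that equal-time correlation functions relax toward Euclidean path-integral averages with weight \(e^{-S}\) as \(\tau \to \infty\); the modified drift and the Ito terms required by the constraints are derived in the paper itself.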
Imparting Desired Attributes by Optimization in Structural Design
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard
2003-01-01
Commonly available optimization methods typically produce a single optimal design as a constrained minimum of a particular objective function. However, in engineering design practice it is quite often important to explore as much of the design space as possible with respect to many attributes to find out what behaviors are possible and not possible within the initially adopted design concept. The paper shows that the very simple method of the sum of objectives is useful for such exploration. By geometrical argument it is demonstrated that if every weighting coefficient is allowed to change its magnitude and its sign then the method returns a set of designs that are all feasible, diverse in their attributes, and include the Pareto and non-Pareto solutions, at least for convex cases. Numerical examples in the paper include a case of an aircraft wing structural box with thousands of degrees of freedom and constraints, and over 100 design variables, whose attributes are structural mass, volume, displacement, and frequency. The method is inherently suitable for parallel, coarse-grained implementation that enables exploration of the design space in the elapsed time of a single structural optimization.
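A sketch of the signed-weight exploration described above: both the magnitudes and the signs of the weighting coefficients are scanned, and each weighted-sum minimization stands in for one structural optimization run. The two-attribute toy problem and the bounds are invented for illustration.

    import numpy as np
    from itertools import product
    from scipy.optimize import minimize

    def attributes(x):
        # Two competing attributes of a toy "design" x (not structural mass or frequency).
        a1 = (x[0] - 1.0) ** 2 + x[1] ** 2
        a2 = x[0] ** 2 + (x[1] - 1.0) ** 2
        return np.array([a1, a2])

    designs = []
    for s1, s2 in product([-1.0, 1.0], repeat=2):          # signs of the weighting coefficients
        for m in np.linspace(0.1, 0.9, 5):                 # relative magnitudes
            w = np.array([s1 * m, s2 * (1.0 - m)])
            res = minimize(lambda x: w @ attributes(x), x0=[0.0, 0.0],
                           bounds=[(-2.0, 2.0), (-2.0, 2.0)])
            designs.append(attributes(res.x))
    print(np.round(np.unique(np.round(np.array(designs), 3), axis=0), 3))

Weight pairs with both signs positive return Pareto points, while mixed or negative signs push the search toward other corners of the attainable attribute space, which is how the method maps out what behaviors are and are not possible.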
Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.
Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane
2017-07-01
This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematic data only. The proposed method also allows identification of the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as an input to solve an under-determined multi-contact problem. It is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted and a sensitivity analysis is performed in order to robustly reproduce the human external wrench distribution. The results showed that the proposed method could successfully estimate the external wrenches under the buttocks, feet, and hands during STS tasks (RMS error lower than 20 N and 6 N.m). The simplicity and generalization abilities of the proposed method pave the way for future diagnosis solutions and rehabilitation applications, including in-home use.
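A toy version of the under-determined distribution step: a known total vertical force is split among the three contacts by a constrained quadratic program that minimizes a weighted effort-like cost. The single-axis simplification, the weights and the bounds are invented for illustration and are not the paper's hybrid human-like energetic criterion.

    import numpy as np
    from scipy.optimize import minimize

    total_force = 700.0                          # N, total external vertical force from the identified model
    labels = ["buttocks", "feet", "hands"]
    w = np.array([1.0, 2.0, 8.0])                # effort weights: loading the hands "costs" more (made up)

    cost = lambda f: float(np.sum(w * f ** 2))   # quadratic effort surrogate
    cons = [{"type": "eq", "fun": lambda f: np.sum(f) - total_force}]
    bounds = [(0.0, None)] * 3                   # contacts can only push

    res = minimize(cost, x0=np.full(3, total_force / 3.0), bounds=bounds,
                   constraints=cons, method="SLSQP")
    for name, force in zip(labels, res.x):
        print(f"{name:9s} {force:7.1f} N")

Adjusting the weights shifts load between the contacts, which is the role of the sensitivity analysis in the paper: the weights are tuned until the predicted distribution robustly matches the measured human one.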
Analytic study of a rolling sphere on a rough surface
NASA Astrophysics Data System (ADS)
Florea, Olivia A.; Rosca, Ileana C.
2016-11-01
This paper presents an analytic study of a sphere rolling on a rough horizontal plane under the action of its own gravity. The need to integrate the system of dynamical equations of motion leads us to seek a reference frame in which the equations of motion take simpler forms and in which, under some reasonable hypotheses, original methods of analytical integration can be applied. In technical applications, bodies may roll freely or their motion may be constrained by geometrical relations in assemblies of parts and machine components. This study involves investigations in the fields of tribology and applied dynamics, accompanied by experiments. Multiple recordings of several trajectories of the sphere, together with image processing and statistical treatment of the experimental data, show very good agreement between the theoretical findings and the experimental results.
Integrating Entropy-Based Naïve Bayes and GIS for Spatial Evaluation of Flood Hazard.
Liu, Rui; Chen, Yun; Wu, Jianping; Gao, Lei; Barrett, Damian; Xu, Tingbao; Li, Xiaojuan; Li, Linyi; Huang, Chang; Yu, Jia
2017-04-01
Regional flood risk caused by intensive rainfall under extreme climate conditions has increasingly attracted global attention. Mapping and evaluation of flood hazard are vital parts in flood risk assessment. This study develops an integrated framework for estimating spatial likelihood of flood hazard by coupling weighted naïve Bayes (WNB), geographic information system, and remote sensing. The north part of Fitzroy River Basin in Queensland, Australia, was selected as a case study site. The environmental indices, including extreme rainfall, evapotranspiration, net-water index, soil water retention, elevation, slope, drainage proximity, and density, were generated from spatial data representing climate, soil, vegetation, hydrology, and topography. These indices were weighted using the statistics-based entropy method. The weighted indices were input into the WNB-based model to delineate a regional flood risk map that indicates the likelihood of flood occurrence. The resultant map was validated by the maximum inundation extent extracted from moderate resolution imaging spectroradiometer (MODIS) imagery. The evaluation results, including mapping and evaluation of the distribution of flood hazard, are helpful in guiding flood inundation disaster responses for the region. The novel approach presented consists of weighted grid data, image-based sampling and validation, cell-by-cell probability inferring and spatial mapping. It is superior to an existing spatial naive Bayes (NB) method for regional flood hazard assessment. It can also be extended to other likelihood-related environmental hazard studies. © 2016 Society for Risk Analysis.
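The entropy weighting step can be sketched for a generic index matrix as below; the toy numbers stand in for the gridded rainfall, evapotranspiration, elevation and other indices described above, and the min-max normalization is one common variant of the method.

    import numpy as np

    def entropy_weights(X):
        # Rows = spatial units, columns = indices; larger spread -> larger weight.
        X = np.asarray(X, dtype=float)
        span = X.max(axis=0) - X.min(axis=0)
        P = (X - X.min(axis=0)) / (span + 1e-12)         # min-max normalize each index
        P = (P + 1e-12) / (P + 1e-12).sum(axis=0)        # column-wise proportions
        k = 1.0 / np.log(X.shape[0])
        e = -k * np.sum(P * np.log(P), axis=0)           # entropy of each index
        d = 1.0 - e                                      # degree of diversification
        return d / d.sum()

    X = np.array([[120.0, 3.1, 12.0],
                  [300.0, 2.7,  4.0],
                  [180.0, 2.9,  8.0],
                  [260.0, 2.6,  5.0]])                   # toy values for three indices
    print(np.round(entropy_weights(X), 3))

The resulting weights then scale the index layers before they enter the naive Bayes inference, which is what distinguishes the weighted naive Bayes (WNB) model from the plain NB baseline.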
A precision analogue integrator system for heavy current measurement in MFDC resistance spot welding
NASA Astrophysics Data System (ADS)
Xia, Yu-Jun; Zhang, Zhong-Dian; Xia, Zhen-Xin; Zhu, Shi-Liang; Zhang, Rui
2016-02-01
In order to control and monitor the quality of middle frequency direct current (MFDC) resistance spot welding (RSW), precision measurement of the welding current up to 100 kA is required, for which Rogowski coils are at present the only viable current transducers. Thus, a highly accurate analogue integrator is the key to restoring the converted signals collected from the Rogowski coils. Previous studies emphasised that integration drift is a major factor influencing the performance of analogue integrators, but capacitive leakage error also has a significant impact on the result, especially in long-duration pulse integration. In this article, new methods of measuring and compensating capacitive leakage error are proposed to fabricate a precision analogue integrator system for MFDC RSW. A voltage holding test is carried out to measure the integration error caused by capacitive leakage, and an original integrator with a feedback adder is designed to compensate the capacitive leakage error in real time. The experimental results and statistical analysis show that the new analogue integrator system constrains both drift and capacitive leakage error, and its effect is robust to different voltage levels of the output signals. The total integration error is limited to within ±0.09 mV s-1 (0.005% s-1 of full scale) at a 95% confidence level, which makes it possible to achieve precision measurement of the welding current of MFDC RSW with Rogowski coils of 0.1% accuracy class.
Liu, Dong-jun; Li, Li
2015-01-01
PM2.5 is the main factor influencing haze-fog pollution in China. In this study, the trend of PM2.5 concentration was analyzed based on mathematical models and simulation. A comprehensive forecasting model (CFM) was developed based on combination forecasting ideas. The Autoregressive Integrated Moving Average (ARIMA) model, Artificial Neural Networks (ANNs) and the Exponential Smoothing Method (ESM) were used to predict the time series of PM2.5 concentration. The results of the comprehensive forecasting model were obtained by combining the results of the three methods with weights from the Entropy Weighting Method. The trend of PM2.5 concentration in Guangzhou, China was quantitatively forecasted with the comprehensive forecasting model. The results were compared with those of the three single models, and the PM2.5 concentrations for the next ten days were predicted. The comprehensive forecasting model balanced the deviations of the single prediction methods and had better applicability. It offers a new prediction method for the air quality forecasting field. PMID:26110332
NASA Astrophysics Data System (ADS)
Wu, Yunna; Chen, Kaifeng; Xu, Hu; Xu, Chuanbo; Zhang, Haobo; Yang, Meng
2017-12-01
There is insufficient research relating to offshore wind farm site selection in China. The current methods for site selection have some defects. First, information loss is caused by two aspects: the implicit assumption that the probability distribution on the interval number is uniform; and ignoring the value of decision makers' (DMs') common opinion on the criteria information evaluation. Secondly, the difference in DMs' utility function has failed to receive attention. An innovative method is proposed in this article to solve these drawbacks. First, a new form of interval number and its weighted operator are proposed to reflect the uncertainty and reduce information loss. Secondly, a new stochastic dominance degree is proposed to quantify the interval number with a probability distribution. Thirdly, a two-stage method integrating the weighted operator with stochastic dominance degree is proposed to evaluate the alternatives. Finally, a case from China proves the effectiveness of this method.
Mayhew, Susannah H.; Ploubidis, George B.; Sloggett, Andy; Church, Kathryn; Obure, Carol D.; Birdthistle, Isolde; Sweeney, Sedona; Warren, Charlotte E.; Watts, Charlotte; Vassall, Anna
2016-01-01
Background The body of knowledge on evaluating complex interventions for integrated healthcare lacks both common definitions of ‘integrated service delivery’ and standard measures of impact. Using multiple data sources in combination with statistical modelling the aim of this study is to develop a measure of HIV-reproductive health (HIV-RH) service integration that can be used to assess the degree of service integration, and the degree to which integration may have health benefits to clients, or reduce service costs. Methods and Findings Data were drawn from the Integra Initiative’s client flow (8,263 clients in Swaziland and 25,539 in Kenya) and costing tools implemented between 2008–2012 in 40 clinics providing RH services in Kenya and Swaziland. We used latent variable measurement models to derive dimensions of HIV-RH integration using these data, which quantified the extent and type of integration between HIV and RH services in Kenya and Swaziland. The modelling produced two clear and uncorrelated dimensions of integration at facility level leading to the development of two sub-indexes: a Structural Integration Index (integrated physical and human resource infrastructure) and a Functional Integration Index (integrated delivery of services to clients). The findings highlight the importance of multi-dimensional assessments of integration, suggesting that structural integration is not sufficient to achieve the integrated delivery of care to clients—i.e. “functional integration”. Conclusions These Indexes are an important methodological contribution for evaluating complex multi-service interventions. They help address the need to broaden traditional evaluations of integrated HIV-RH care through the incorporation of a functional integration measure, to avoid misleading conclusions on its ‘impact’ on health outcomes. This is particularly important for decision-makers seeking to promote integration in resource constrained environments. PMID:26800517
Kulovitz, Michelle G; Kolkmeyer, Deborah; Conn, Carole A; Cohen, Deborah A; Ferraro, Robert T
2014-01-01
The aim of this study was to investigate body composition changes in fat mass (FM) to lean body mass (LBM) ratios following 15% body weight loss (WL) in both integrated medical treatment and bariatric surgery groups. Obese patients (body mass index [BMI] 46.6 ± 6.5 kg/m(2)) who underwent laparoscopic gastric bypass surgery (BS), were matched with 24 patients undergoing integrated medical and behavioral treatment (MT). The BS and MT groups were evaluated for body weight, BMI, body composition, and waist circumference (WC) at baseline and after 15% WL. Following 15% body WL, there were significant decreases in %FM and increased %LBM (P < 0.0001). Additionally, both groups saw 76% of WL from FM, and 24% from LBM indicating a 3:1 ratio of FM to LBM loss during the first 15% reduction in body weight. Finally, no significant differences (P = 0.103) between groups for maintenance of WL at 1 y were found. For both groups, baseline FM was found to be negatively correlated with percentage of weight regained (%WR) at 1 y post-WL (r = -0.457; P = 0.007). Baseline WC and rate of WL to 15% were significant predictors of %WR only in the BS group (r = 0.713; P = 0.020). If followed closely by professionals during the first 15% body WL, patients losing 15% weight by either medical or surgical treatments can attain similar FM:LBM loss ratios and can maintain WL for 1 y. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Chan, Kai-Wing; Zhang, William W.; Schofield, Mark J.; Numata, Ai; Mazzarella, James R.; Saha, Timo T.; Biskach, Michael P.; McCelland, Ryan S.; Niemeyer, Jason; Sharpe, Marton V.;
2016-01-01
High-resolution, high throughput optics for x-ray astronomy requires fabrication of well-formed mirror segments and their integration with arc-second level precision. Recently, advances of fabrication of silicon mirrors developed at NASA/Goddard prompted us to develop a new method of mirror integration. The new integration scheme takes advantage of the stiffer, more thermally conductive, and lower-CTE silicon, compared to glass, to build a telescope of much lighter weight. In this paper, we address issues of aligning and bonding mirrors with this method. In this preliminary work, we demonstrated the basic viability of such scheme. Using glass mirrors, we demonstrated that alignment error of 1" and bonding error 2" can be achieved for mirrors in a single shell. We will address the immediate plan to demonstrate the bonding reliability and to develop technology to build up a mirror stack and a whole "meta-shell".
Portable Microfluidic Integrated Plasmonic Platform for Pathogen Detection
Tokel, Onur; Yildiz, Umit Hakan; Inci, Fatih; Durmus, Naside Gozde; Ekiz, Okan Oner; Turker, Burak; Cetin, Can; Rao, Shruthi; Sridhar, Kaushik; Natarajan, Nalini; Shafiee, Hadi; Dana, Aykutlu; Demirci, Utkan
2015-01-01
Timely detection of infectious agents is critical in early diagnosis and treatment of infectious diseases. Conventional pathogen detection methods, such as enzyme linked immunosorbent assay (ELISA), culturing or polymerase chain reaction (PCR) require long assay times, and complex and expensive instruments, which are not adaptable to point-of-care (POC) needs at resource-constrained as well as primary care settings. Therefore, there is an unmet need to develop simple, rapid, and accurate methods for detection of pathogens at the POC. Here, we present a portable, multiplex, inexpensive microfluidic-integrated surface plasmon resonance (SPR) platform that detects and quantifies bacteria, i.e., Escherichia coli (E. coli) and Staphylococcus aureus (S. aureus) rapidly. The platform presented reliable capture and detection of E. coli at concentrations ranging from ~105 to 3.2 × 107 CFUs/mL in phosphate buffered saline (PBS) and peritoneal dialysis (PD) fluid. The multiplexing and specificity capability of the platform was also tested with S. aureus samples. The presented platform technology could potentially be applicable to capture and detect other pathogens at the POC and primary care settings. PMID:25801042
iGeoT v1.0: Automatic Parameter Estimation for Multicomponent Geothermometry, User's Guide
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spycher, Nicolas; Finsterle, Stefan
GeoT implements the multicomponent geothermometry method developed by Reed and Spycher [1984] into a stand-alone computer program to ease the application of this method and to improve the prediction of geothermal reservoir temperatures using full and integrated chemical analyses of geothermal fluids. Reservoir temperatures are estimated from statistical analyses of mineral saturation indices computed as a function of temperature. The reconstruction of the deep geothermal fluid compositions, and geothermometry computations, are all implemented into the same computer program, allowing unknown or poorly constrained input parameters to be estimated by numerical optimization. This integrated geothermometry approach presents advantages over classical geothermometers for fluids that have not fully equilibrated with reservoir minerals and/or that have been subject to processes such as dilution and gas loss. This manual contains installation instructions for iGeoT, and briefly describes the input formats needed to run iGeoT in Automatic or Expert Mode. An example is also provided to demonstrate the use of iGeoT.
Dupont, Sara M; De Leener, Benjamin; Taso, Manuel; Le Troter, Arnaud; Nadeau, Sylvie; Stikov, Nikola; Callot, Virginie; Cohen-Adad, Julien
2017-04-15
The spinal cord white and gray matter can be affected by various pathologies such as multiple sclerosis, amyotrophic lateral sclerosis or trauma. Being able to precisely segment the white and gray matter could help with MR image analysis and hence be useful in further understanding these pathologies, helping with diagnosis/prognosis and supporting drug development. To date, white/gray matter segmentation has mostly been done manually, which is time consuming, introduces a rater-related bias and prevents large-scale multi-center studies. Recently, a few methods have been proposed to automatically segment the spinal cord white and gray matter. However, no single method exists that combines the following criteria: (i) fully automatic, (ii) works on various MRI contrasts, (iii) robust towards pathology and (iv) freely available and open source. In this study we propose a multi-atlas based method for the segmentation of the spinal cord white and gray matter that addresses the previous limitations. Moreover, to study the spinal cord morphology, atlas-based approaches are increasingly used. These approaches rely on the registration of a spinal cord template to an MR image; however, the registration usually does not take into account the spinal cord internal structure and thus lacks accuracy. In this study, we propose a new template registration framework that integrates the white and gray matter segmentation to account for the specific gray matter shape of each individual subject. Validation of segmentation was performed in 24 healthy subjects using T2*-weighted images, in 8 healthy subjects using diffusion-weighted images (exhibiting inverted white-to-gray matter contrast compared to T2*-weighted), and in 5 patients with spinal cord injury. The template registration was validated in 24 subjects using T2*-weighted data. Results of automatic segmentation on T2*-weighted images were in close correspondence with the manual segmentation (Dice coefficient in the white/gray matter of 0.91/0.71, respectively). Similarly, good results were obtained in data with inverted contrast (diffusion-weighted images) and in patients. When compared to the classical template registration framework, the proposed framework that accounts for gray matter shape significantly improved the quality of the registration (comparing Dice coefficient in gray matter: p = 9.5×10^-6). While further validation is needed to show the benefits of the new registration framework in large cohorts and in a variety of patients, this study provides a fully-integrated tool for quantitative assessment of white/gray matter morphometry and template-based analysis. All the proposed methods are implemented in the Spinal Cord Toolbox (SCT), an open-source software for processing spinal cord multi-parametric MRI data. Copyright © 2017 Elsevier Inc. All rights reserved.
Using Perturbation Theory to Reduce Noise in Diffusion Tensor Fields
Bansal, Ravi; Staib, Lawrence H.; Xu, Dongrong; Laine, Andrew F.; Liu, Jun; Peterson, Bradley S.
2009-01-01
We propose the use of Perturbation theory to reduce noise in Diffusion Tensor (DT) fields. Diffusion Tensor Imaging (DTI) encodes the diffusion of water molecules along different spatial directions in a positive-definite, 3 × 3 symmetric tensor. Eigenvectors and eigenvalues of DTs allow the in vivo visualization and quantitative analysis of white matter fiber bundles across the brain. The validity and reliability of these analyses are limited, however, by the low spatial resolution and low Signal-to-Noise Ratio (SNR) in DTI datasets. Our procedures can be applied to improve the validity and reliability of these quantitative analyses by reducing noise in the tensor fields. We model a tensor field as a three-dimensional Markov Random Field and then compute the likelihood and the prior terms of this model using Perturbation theory. The prior term constrains the tensor field to be smooth, whereas the likelihood term constrains the smoothed tensor field to be similar to the original field. Thus, the proposed method generates a smoothed field that is close in structure to the original tensor field. We evaluate the performance of our method both visually and quantitatively using synthetic and real-world datasets. We quantitatively assess the performance of our method by computing the SNR for eigenvalues and the coherence measures for eigenvectors of DTs across tensor fields. In addition, we quantitatively compare the performance of our procedures with the performance of one method that uses a Riemannian distance to compute the similarity between two tensors, and with another method that reduces noise in tensor fields by anisotropically filtering the diffusion weighted images that are used to estimate diffusion tensors. These experiments demonstrate that our method significantly increases the coherence of the eigenvectors and the SNR of the eigenvalues, while simultaneously preserving the fine structure and boundaries between homogeneous regions, in the smoothed tensor field. PMID:19540791
Inferring drug-disease associations based on known protein complexes.
Yu, Liang; Huang, Jianbin; Ma, Zhixin; Zhang, Jing; Zou, Yapeng; Gao, Lin
2015-01-01
Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as discovering novel functions of available drugs, or drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discards much important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where weights are assigned to drug-disease associations using probabilities. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases. The larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that our proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html.
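As a rough illustration of how indirect drug-disease weights can be read off such a tripartite network, the Python sketch below chains toy drug-complex and complex-disease weight matrices with a sum-product rule; the matrices, aggregation rule, and ranking step are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

# Hypothetical sketch (not the authors' exact formulation): indirect drug-disease
# weights obtained by chaining drug->complex and complex->disease edge weights
# of a tripartite network. Rows/columns are toy identifiers.
drug_complex = np.array([          # shape (n_drugs, n_complexes)
    [0.8, 0.0, 0.3],
    [0.0, 0.6, 0.5],
])
complex_disease = np.array([       # shape (n_complexes, n_diseases)
    [0.9, 0.1],
    [0.2, 0.7],
    [0.4, 0.4],
])

# Sum-product aggregation over intermediate protein complexes: a larger value
# indicates a better supported (more reliable) drug-disease association.
indirect = drug_complex @ complex_disease

# Rank candidate associations for each drug by decreasing weight.
for d in range(indirect.shape[0]):
    ranking = np.argsort(-indirect[d])
    print(f"drug {d}: diseases ranked {ranking.tolist()}, weights {indirect[d][ranking]}")
```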
Weighted small subdomain filtering technology
NASA Astrophysics Data System (ADS)
Tai, Zhenhua; Zhang, Fengxu; Zhang, Fengqin; Zhang, Xingzhou; Hao, Mengcheng
2017-09-01
A high-resolution method to define the horizontal edges of gravity sources is presented by improving the three-directional small subdomain filtering (TDSSF). The proposed method is the weighted small subdomain filtering (WSSF). The WSSF uses a numerical difference instead of the phase conversion in the TDSSF to reduce the computational complexity. To make the WSSF less sensitive to noise, the numerical difference is combined with an averaging algorithm. Unlike the TDSSF, the WSSF uses a weighted sum to integrate the numerical difference results along four directions into one contour map, making its interpretation more convenient and accurate. The locations of tightened gradient belts are used to define the edges of sources in the WSSF result. The proposed method is tested on synthetic data. The test results show that the WSSF delineates the horizontal edges of sources more clearly and correctly, even if the sources interfere with one another and the data are corrupted with random noise. Finally, the WSSF and two other known methods are applied to a real data set. The edges detected by the WSSF are sharper and more accurate.
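The sketch below illustrates the general idea of a weighted sum of window-averaged directional differences on a gridded anomaly; the choice of directions, window size, difference operators, and weights are assumptions for illustration, not the published WSSF implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wssf_like_edge_map(g, weights=(0.25, 0.25, 0.25, 0.25), win=3):
    """Toy sketch of a weighted directional-difference edge map.

    g       : 2-D gridded gravity anomaly, numpy array
    weights : weights for the four directional difference results
    win     : averaging window used to suppress random noise
    """
    # Numerical differences along four directions.
    d0   = np.abs(np.gradient(g, axis=1))                          # E-W
    d90  = np.abs(np.gradient(g, axis=0))                          # N-S
    d45  = np.abs(g - np.roll(np.roll(g, 1, axis=0), 1, axis=1))   # first diagonal
    d135 = np.abs(g - np.roll(np.roll(g, 1, axis=0), -1, axis=1))  # second diagonal

    # Average each difference over a small window to make it less noise sensitive.
    diffs = [uniform_filter(d, size=win) for d in (d0, d45, d90, d135)]

    # Weighted sum integrates the four directional results into one contour map;
    # tightened gradient belts in this map mark candidate source edges.
    return sum(w * d for w, d in zip(weights, diffs))

# Example on synthetic data: a buried prism produces a smooth anomaly high.
y, x = np.mgrid[0:128, 0:128]
anomaly = np.exp(-(((x - 64) / 20.0) ** 2 + ((y - 64) / 12.0) ** 2))
edges = wssf_like_edge_map(anomaly)
```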
Inferring drug-disease associations based on known protein complexes
2015-01-01
Inferring drug-disease associations is critical in unveiling disease mechanisms, as well as discovering novel functions of available drugs, or drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discards much important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where weights are assigned to drug-disease associations using probabilities. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases. The larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that our proposed method obtains higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html. PMID:26044949
ODE constrained mixture modelling: a method for unraveling subpopulation structures and dynamics.
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J
2014-07-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes while mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamic subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess high sensitivity.
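As a minimal sketch of an ODE-constrained mixture likelihood, the code below combines two subpopulations that follow the same toy one-state kinetic model with different rate constants, under Gaussian measurement noise; the model, parameters, and data are illustrative and unrelated to the authors' Erk1/2 pathway model.

```python
import numpy as np
from scipy.integrate import odeint
from scipy.stats import norm

# Toy kinetic model: d[P]/dt = k*(1 - P), i.e. saturating phosphorylation.
def traj(k, t):
    return odeint(lambda p, t: k * (1.0 - p), 0.0, t)[:, 0]

def neg_log_lik(params, t, y_cells):
    """ODE-constrained mixture: two subpopulations with rates k1, k2,
    mixture weight w, and Gaussian measurement noise sigma.
    y_cells has shape (n_cells, n_timepoints) of single-cell measurements."""
    k1, k2, w, sigma = params
    m1, m2 = traj(k1, t), traj(k2, t)          # mechanistic means per subpopulation
    # Per-cell likelihood: weighted sum of the two subpopulation likelihoods.
    l1 = np.prod(norm.pdf(y_cells, loc=m1, scale=sigma), axis=1)
    l2 = np.prod(norm.pdf(y_cells, loc=m2, scale=sigma), axis=1)
    return -np.sum(np.log(w * l1 + (1.0 - w) * l2 + 1e-300))

# Simulated data: 60% of cells respond fast (k=2.0), 40% slowly (k=0.4).
t = np.linspace(0.0, 5.0, 8)
rng = np.random.default_rng(0)
fast = traj(2.0, t) + 0.05 * rng.standard_normal((60, t.size))
slow = traj(0.4, t) + 0.05 * rng.standard_normal((40, t.size))
data = np.vstack([fast, slow])

print(neg_log_lik([2.0, 0.4, 0.6, 0.05], t, data))  # could be minimized with scipy.optimize
```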
Characterization of sediment trapped by macroalgae on a Hawaiian reef flat
Stamski, R.E.; Field, M.E.
2006-01-01
Reef researchers studying community shifts in the balance between corals and fleshy macroalgae have noted that algae are often covered with sediment. This study characterizes sediment trapping by macroalgae within a Hawaiian reef habitat and constrains the controls on this process. Sediment-laden macroalgae were sampled and macroalgal cover was assessed on a wide (~1 km) reef flat off south-central Molokai. Macroalgae trapped a mean of 1.26 (±0.91 SD) grams of sediment per gram of dry weight biomass and that sediment was dominantly terrigenous mud (59% by weight). It was determined that biomass, as a proxy for algal size, and morphology were not strict controls on the sediment trapping process. Over 300 metric tons of sediment were estimated to be retained by macroalgae across 5.75 km² of reef flat (54 g m⁻²), suggesting that this process is an important component of sediment budgets. In addition, understanding the character of sediment trapped by macroalgae may help constrain suspended sediment flux and has implications for nutrient dynamics in reef flat environments. © 2005 Elsevier Ltd. All rights reserved.
Characterization of sediment trapped by macroalgae on a Hawaiian reef flat
NASA Astrophysics Data System (ADS)
Stamski, Rebecca E.; Field, Michael E.
2006-01-01
Reef researchers studying community shifts in the balance between corals and fleshy macroalgae have noted that algae are often covered with sediment. This study characterizes sediment trapping by macroalgae within a Hawaiian reef habitat and constrains the controls on this process. Sediment-laden macroalgae were sampled and macroalgal cover was assessed on a wide (˜1 km) reef flat off south-central Molokai. Macroalgae trapped a mean of 1.26 (±0.91 SD) grams of sediment per gram of dry weight biomass and that sediment was dominantly terrigenous mud (59% by weight). It was determined that biomass, as a proxy for algal size, and morphology were not strict controls on the sediment trapping process. Over 300 metric tons of sediment were estimated to be retained by macroalgae across 5.75 km² of reef flat (54 g m⁻²), suggesting that this process is an important component of sediment budgets. In addition, understanding the character of sediment trapped by macroalgae may help constrain suspended sediment flux and has implications for nutrient dynamics in reef flat environments.
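A quick consistency check of the areal figures quoted above (values taken directly from the abstract; rounding explains the "over 300 metric tons" statement):

```latex
\[
  54~\mathrm{g\,m^{-2}} \times 5.75\times10^{6}~\mathrm{m^{2}}
  = 3.1\times10^{8}~\mathrm{g}
  \approx 310~\text{metric tons}.
\]
```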
Cross-scale integration of knowledge for predicting species ranges: a metamodeling framework.
Talluto, Matthew V; Boulangeat, Isabelle; Ameztegui, Aitor; Aubin, Isabelle; Berteaux, Dominique; Butler, Alyssa; Doyon, Frédérik; Drever, C Ronnie; Fortin, Marie-Josée; Franceschini, Tony; Liénard, Jean; McKenney, Dan; Solarik, Kevin A; Strigul, Nikolay; Thuiller, Wilfried; Gravel, Dominique
2016-02-01
Current interest in forecasting changes to species ranges has resulted in a multitude of approaches to species distribution models (SDMs). However, most approaches include only a small subset of the available information, and many ignore smaller-scale processes such as growth, fecundity, and dispersal. Furthermore, different approaches often produce divergent predictions with no simple method to reconcile them. Here, we present a flexible framework for integrating models at multiple scales using hierarchical Bayesian methods, with eastern North America as an example region. Our framework builds a metamodel that is constrained by the results of multiple sub-models and provides probabilistic estimates of species presence. We applied our approach to a simulated dataset to demonstrate the integration of a correlative SDM with a theoretical model. In a second example, we built an integrated model combining the results of a physiological model with presence-absence data for sugar maple (Acer saccharum), an abundant tree native to eastern North America. For both examples, the integrated models successfully included information from all data sources and substantially improved the characterization of uncertainty. For the second example, the integrated model outperformed the source models with respect to uncertainty when modelling the present range of the species. When projecting into the future, the model provided a consensus view of two models that differed substantially in their predictions. Uncertainty was reduced where the models agreed and was greater where they diverged, providing a more realistic view of the state of knowledge than either source model. We conclude by discussing the potential applications of our method and its accessibility to applied ecologists. In ideal cases, our framework can be easily implemented using off-the-shelf software. The framework has wide potential for use in species distribution modelling and can drive better integration of multi-source and multi-scale data into ecological decision-making.
High-authority smart material integrated electric actuator
NASA Astrophysics Data System (ADS)
Weisensel, G. N.; Pierce, Thomas D.; Zunkel, Gary
1997-05-01
For many current applications, hydraulic power is still the preferred method of gaining mechanical advantage. However, in many of these applications, this power comes with the penalties of high weight, size, cost, and maintenance due to the system's distributed nature and redundancy requirements. A high-authority smart material Integrated Electric Actuator (IEA) is a modular, self-contained linear motion device that is capable of producing dynamic output strokes similar to those of hydraulic actuators yet at significantly reduced weight and volume. It provides system simplification and miniaturization. This actuator concept has many innovative features, including a TERFENOL-D-based pump, TERFENOL-D-based active valves, control algorithms, a displacement amplification unit and integrated, unitized packaging. The IEA needs only electrical power and a control command signal as inputs to provide high-authority, high-response-rate actuation. This approach is directly compatible with distributed control strategies. Aircraft control, automotive brakes and fuel injection, and fluid power delivery are just some examples of the IEA's pervasive applications in aerospace, defense and commercial systems.
Medical image segmentation by combining graph cuts and oriented active appearance models.
Chen, Xinjian; Udupa, Jayaram K; Bagci, Ulas; Zhuge, Ying; Yao, Jianhua
2012-04-01
In this paper, we propose a novel method based on a strategic combination of the active appearance model (AAM), live wire (LW), and graph cuts (GCs) for abdominal 3-D organ segmentation. The proposed method consists of three main parts: model building, object recognition, and delineation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the recognition part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW methods, resulting in the oriented AAM (OAAM). A multiobject strategy is utilized to help in object initialization. We employ a pseudo-3-D initialization strategy and segment the organs slice by slice via a multiobject OAAM method. For the object delineation part, a 3-D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT data set and also on the MICCAI 2007 Grand Challenge liver data set. The results show the following: 1) an overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3% and false positive volume fraction (FPVF) < 0.2% can be achieved; 2) the initialization performance can be improved by combining the AAM and LW; 3) the multiobject strategy greatly facilitates initialization; 4) compared with the traditional 3-D AAM method, the pseudo-3-D OAAM method achieves comparable performance while running 12 times faster; and 5) the performance of the proposed method is comparable to that of the state-of-the-art liver segmentation algorithms. The executable version of the 3-D shape-constrained GC method with a user interface can be downloaded from http://xinjianchen.wordpress.com/research/.
Identification of Early Risk Factors for Developmental Delay
ERIC Educational Resources Information Center
Delgado, Christine E. F.; Vagi, Sara J.; Scott, Keith G.
2007-01-01
Statewide birth certificate and preschool exceptionality records were integrated to identify risk factors for developmental delay (DD). Epidemiological methods were used to investigate both individual-level and population-level risk for DD associated with a number of child and maternal factors. Infants born with very low birth weight were at the…
Characterization of fast-pyrolysis bio-oil distillation residues and their potential applications
USDA-ARS?s Scientific Manuscript database
A typical petroleum refinery makes use of the vacuum gas oil by cracking the large molecular weight compounds into light fuel hydrocarbons. For various types of fast pyrolysis bio-oil, successful analogous methods for processing heavy fractions could expedite integration into a petroleum refinery fo...
NASA Astrophysics Data System (ADS)
Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo
2017-12-01
A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and a smooth quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct a piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulting mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.
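As a hedged illustration of the spring-analogy relaxation attributed to Persson's algorithm, the 2-D Python sketch below relaxes random points on a unit-square "fracture" toward a prescribed size function using repulsive-only Delaunay-edge springs; the domain, size function, scaling factor, and iteration scheme are simplifying assumptions, and the PSLG/PLC construction and 3-D tetrahedralization steps are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

def relax_points(pts, size_fn, n_iter=60, dt=0.2):
    """Persson-style spring relaxation (2-D sketch): Delaunay edges act as
    compressive springs whose rest length follows a local size function."""
    for _ in range(n_iter):
        tri = Delaunay(pts)
        # Unique edges of the current triangulation.
        edges = np.vstack([tri.simplices[:, [0, 1]],
                           tri.simplices[:, [1, 2]],
                           tri.simplices[:, [2, 0]]])
        edges = np.unique(np.sort(edges, axis=1), axis=0)
        vec = pts[edges[:, 1]] - pts[edges[:, 0]]
        length = np.linalg.norm(vec, axis=1)
        mid = 0.5 * (pts[edges[:, 0]] + pts[edges[:, 1]])
        rest = 1.2 * size_fn(mid)                 # desired edge length (scaled)
        # Repulsive-only spring force: short edges push their endpoints apart.
        fmag = np.maximum(rest - length, 0.0) / np.maximum(length, 1e-12)
        force = fmag[:, None] * vec
        move = np.zeros_like(pts)
        np.add.at(move, edges[:, 0], -force)
        np.add.at(move, edges[:, 1],  force)
        pts = np.clip(pts + dt * move, 0.0, 1.0)  # keep points inside the unit fracture
    return pts

# Size function: finer mesh near the fracture trace x = 0.5 (illustrative only).
size = lambda p: 0.03 + 0.12 * np.abs(p[:, 0] - 0.5)
rng = np.random.default_rng(1)
points = relax_points(rng.random((400, 2)), size)
```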
Alves, Daniele S. M.; El Hedri, Sonia; Wacker, Jay G.
2016-03-21
We discuss the relevance of directional detection experiments in the post-discovery era and propose a method to extract the local dark matter phase space distribution from directional data. The first feature of this method is a parameterization of the dark matter distribution function in terms of integrals of motion, which can be analytically extended to infer properties of the global distribution if certain equilibrium conditions hold. The second feature of our method is a decomposition of the distribution function in moments of a model independent basis, with minimal reliance on the ansatz for its functional form. We illustrate our method using the Via Lactea II N-body simulation as well as an analytical model for the dark matter halo. Furthermore, we conclude that O(1000) events are necessary to measure deviations from the Standard Halo Model and constrain or measure the presence of anisotropies.
Hamra, Ghassan; Richardson, David; Maclehose, Richard; Wing, Steve
2013-01-01
Informative priors can be a useful tool for epidemiologists to handle problems of sparse data in regression modeling. It is sometimes the case that an investigator is studying a population exposed to two agents, X and Y, where Y is the agent of primary interest. Previous research may suggest that the exposures have different effects on the health outcome of interest, one being more harmful than the other. Such information may be derived from epidemiologic analyses; however, in the case where such evidence is unavailable, knowledge can be drawn from toxicologic studies or other experimental research. Unfortunately, using toxicologic findings to develop informative priors in epidemiologic analyses requires strong assumptions, with no established method for its utilization. We present a method to help bridge the gap between animal and cellular studies and epidemiologic research by specification of an order-constrained prior. We illustrate this approach using an example from radiation epidemiology.
Integrating Informative Priors from Experimental Research with Bayesian Methods
Hamra, Ghassan; Richardson, David; MacLehose, Richard; Wing, Steve
2013-01-01
Informative priors can be a useful tool for epidemiologists to handle problems of sparse data in regression modeling. It is sometimes the case that an investigator is studying a population exposed to two agents, X and Y, where Y is the agent of primary interest. Previous research may suggest that the exposures have different effects on the health outcome of interest, one being more harmful than the other. Such information may be derived from epidemiologic analyses; however, in the case where such evidence is unavailable, knowledge can be drawn from toxicologic studies or other experimental research. Unfortunately, using toxicologic findings to develop informative priors in epidemiologic analyses requires strong assumptions, with no established method for its utilization. We present a method to help bridge the gap between animal and cellular studies and epidemiologic research by specification of an order-constrained prior. We illustrate this approach using an example from radiation epidemiology. PMID:23222512
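A minimal sketch of such an order-constrained prior is given below: a random-walk Metropolis sampler for a two-exposure linear model assigns zero prior mass whenever the coefficient of the presumed more harmful agent X falls below that of Y. The model, priors, proposal scale, and data are illustrative assumptions, not the radiation-epidemiology example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative sparse data: outcome depends on exposures X and Y (true betas 1.5, 0.5).
n = 40
X = rng.normal(size=(n, 2))
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n)

def log_post(beta):
    """Gaussian likelihood, vague normal priors, plus the order constraint
    beta_X >= beta_Y (prior knowledge that X is the more harmful agent)."""
    if beta[0] < beta[1]:
        return -np.inf                      # order-constrained prior: zero mass here
    resid = y - X @ beta
    return -0.5 * np.sum(resid ** 2) - 0.5 * np.sum(beta ** 2) / 100.0

# Random-walk Metropolis sampler.
beta = np.array([0.0, 0.0])
samples = []
for _ in range(20000):
    prop = beta + 0.1 * rng.normal(size=2)
    if np.log(rng.random()) < log_post(prop) - log_post(beta):
        beta = prop
    samples.append(beta.copy())

samples = np.array(samples[5000:])          # discard burn-in
print("posterior means:", samples.mean(axis=0))
```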
Integrated feature extraction and selection for neuroimage classification
NASA Astrophysics Data System (ADS)
Fan, Yong; Shen, Dinggang
2009-02-01
Feature extraction and selection are of great importance in neuroimage classification for identifying informative features and reducing feature dimensionality, which are generally implemented as two separate steps. This paper presents an integrated feature extraction and selection algorithm with two iterative steps: constrained subspace learning based feature extraction and support vector machine (SVM) based feature selection. The subspace learning based feature extraction focuses on the brain regions with higher possibility of being affected by the disease under study, while the possibility of brain regions being affected by disease is estimated by the SVM based feature selection, in conjunction with SVM classification. This algorithm can not only take into account the inter-correlation among different brain regions, but also overcome the limitation of traditional subspace learning based feature extraction methods. To achieve robust performance and optimal selection of parameters involved in feature extraction, selection, and classification, a bootstrapping strategy is used to generate multiple versions of training and testing sets for parameter optimization, according to the classification performance measured by the area under the ROC (receiver operating characteristic) curve. The integrated feature extraction and selection method is applied to a structural MR image based Alzheimer's disease (AD) study with 98 non-demented and 100 demented subjects. Cross-validation results indicate that the proposed algorithm can improve performance of the traditional subspace learning based classification.
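A simplified sketch of the alternating extraction/selection loop is shown below, with plain PCA standing in for the constrained subspace learning, a linear SVM providing region re-weighting through its coefficient magnitudes, and AUC used for monitoring; the toy data, weighting update, and stopping rule are assumptions rather than the published algorithm.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Toy "regional" features: 200 subjects x 50 brain regions, 5 informative regions.
X = rng.normal(size=(200, 50))
y = (X[:, :5].sum(axis=1) + rng.normal(scale=1.0, size=200) > 0).astype(int)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

region_w = np.ones(X.shape[1])               # initial region weights (all equal)
for it in range(5):                          # iterate extraction <-> selection
    # Feature extraction: subspace learning on region-weighted data
    # (plain PCA here; the paper uses a constrained variant).
    pca = PCA(n_components=10).fit(Xtr * region_w)
    Ztr, Zte = pca.transform(Xtr * region_w), pca.transform(Xte * region_w)

    # Classification and AUC-based monitoring.
    clf = SVC(kernel="linear", probability=True, random_state=0).fit(Ztr, ytr)
    auc = roc_auc_score(yte, clf.predict_proba(Zte)[:, 1])

    # Feature selection: map SVM weights back to regions and re-weight,
    # emphasizing regions more likely affected by the disease.
    back = np.abs(clf.coef_ @ pca.components_).ravel()
    region_w = 0.5 * region_w + 0.5 * back / (back.max() + 1e-12)
    print(f"iteration {it}: test AUC = {auc:.3f}")
```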
Orthotic Body-Weight Support Through Underactuated Potential Energy Shaping with Contact Constraints
Lv, Ge; Gregg, Robert D.
2015-01-01
Body-weight support is an effective clinical tool for gait rehabilitation after neurological impairment. Body-weight supported training systems have been developed to help patients regain mobility and confidence during walking, but conventional systems confine the patient's treatment to clinical environments. We propose that this challenge could be addressed by virtually providing patients with body-weight support through the actuators of a powered orthosis (or exoskeleton) utilizing potential energy shaping control. However, the changing contact conditions and degrees of underactuation encountered during human walking present significant challenges to consistently matching a desired potential energy for the human in closed loop. We therefore introduce a generalized matching condition for shaping Lagrangian systems with holonomic contact constraints. By satisfying this matching condition for four phases of gait, we derive control laws to achieve virtual body-weight support through a powered knee-ankle orthosis. We demonstrate beneficial effects of virtual body-weight support in simulations of a human-like biped model, indicating the potential clinical value of the proposed control approach. PMID:26900254
Fractional order integration and fuzzy logic based filter for denoising of echocardiographic image.
Saadia, Ayesha; Rashdi, Adnan
2016-12-01
Ultrasound is widely used for imaging due to its cost effectiveness and safety. However, ultrasound images are inherently corrupted with speckle noise, which severely affects the quality of these images and creates difficulty for physicians in diagnosis. To get maximum benefit from ultrasound imaging, image denoising is an essential requirement. To perform image denoising, a two-stage methodology using a fuzzy weighted mean and a fractional integration filter has been proposed in this research work. In stage-1, image pixels are processed by applying a 3 × 3 window around each pixel, and fuzzy logic is used to assign weights to the pixels in each window, replacing the central pixel of the window with the weighted mean of all neighboring pixels present in the same window. Noise suppression is achieved by assigning weights to the pixels while preserving edges and other important features of an image. In stage-2, the resultant image is further improved by a fractional order integration filter. Effectiveness of the proposed methodology has been analyzed for standard test images artificially corrupted with speckle noise and real ultrasound B-mode images. Results of the proposed technique have been compared with different state-of-the-art techniques including Lsmv, Wiener, Geometric filter, Bilateral, Non-local means, Wavelet, Perona et al., Total variation (TV), Global Adaptive Fractional Integral Algorithm (GAFIA) and Improved Fractional Order Differential (IFD) model. Comparison has been done on a quantitative and qualitative basis. For quantitative analysis, different metrics like Peak Signal to Noise Ratio (PSNR), Speckle Suppression Index (SSI), Structural Similarity (SSIM), Edge Preservation Index (β) and Correlation Coefficient (ρ) have been used. Simulations have been done using Matlab. Simulation results of artificially corrupted standard test images and two real echocardiographic images reveal that the proposed method outperforms existing image denoising techniques reported in the literature. The proposed method for denoising of echocardiographic images is effective in noise suppression/removal. It not only removes noise from an image but also preserves edges and other important structures. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
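The sketch below implements only a stage-1-style fuzzy weighted mean over 3 × 3 windows, with a Gaussian membership of intensity differences standing in for the fuzzy weighting rule; the membership function, its parameter, and the synthetic speckle example are assumptions, and the stage-2 fractional-order integration filter is omitted.

```python
import numpy as np

def fuzzy_weighted_mean(img, sigma=20.0):
    """Stage-1 sketch: replace each pixel by a fuzzy weighted mean of its 3x3
    neighbourhood. Neighbours similar in intensity to the centre get weights
    near 1 (noise averaged out); dissimilar neighbours get small weights
    (edges preserved). The membership function is an illustrative Gaussian."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode="reflect")
    out = np.empty_like(img)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            win = padded[i:i + 3, j:j + 3]
            centre = img[i, j]
            w = np.exp(-((win - centre) ** 2) / (2.0 * sigma ** 2))  # fuzzy memberships
            out[i, j] = np.sum(w * win) / np.sum(w)
    return out

# Example: speckle-like multiplicative noise on a synthetic B-mode style image.
rng = np.random.default_rng(3)
clean = np.tile(np.linspace(50, 200, 128), (128, 1))
noisy = clean * (1.0 + 0.2 * rng.standard_normal(clean.shape))
denoised = fuzzy_weighted_mean(noisy)
```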
NASA Technical Reports Server (NTRS)
Stepner, D. E.; Mehra, R. K.
1973-01-01
A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. A method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.
Gao, Zhouzheng; Zhang, Hongping; Ge, Maorong; Niu, Xiaoji; Shen, Wenbin; Wickert, Jens; Schuh, Harald
2015-01-01
The continuity and reliability of precise GNSS positioning can be seriously limited by severe user observation environments. The Inertial Navigation System (INS) can overcome such drawbacks, but its performance is clearly restricted by INS sensor errors over time. Accordingly, the tightly coupled integration of GPS and INS can overcome the disadvantages of each individual system and together form a new navigation system with a higher accuracy, reliability and availability. Recently, ionosphere-constrained (IC) precise point positioning (PPP) utilizing raw GPS observations was proven able to improve both the convergence and positioning accuracy of the conventional PPP using ionosphere-free combined observations (LC-PPP). In this paper, a new mode of tightly coupled integration, in which the IC-PPP instead of LC-PPP is employed, is implemented to further improve the performance of the coupled system. We present the detailed mathematical model and the related algorithm of the new integration of IC-PPP and INS. To evaluate the performance of the new tightly coupled integration, data of both airborne and vehicle experiments with a geodetic GPS receiver and tactical grade inertial measurement unit are processed and the results are analyzed. The statistics show that the new approach can further improve the positioning accuracy compared with both IC-PPP and the tightly coupled integration of the conventional PPP and INS. PMID:25763647
NASA Astrophysics Data System (ADS)
Tang, Zhongqian; Zhang, Hua; Yi, Shanzhen; Xiao, Yangfan
2018-03-01
GIS-based multi-criteria decision analysis (MCDA) is increasingly used to support flood risk assessment. However, conventional GIS-MCDA methods fail to adequately represent spatial variability and are accompanied by considerable uncertainty. It is, thus, important to incorporate spatial variability and uncertainty into GIS-based decision analysis procedures. This research develops a spatially explicit, probabilistic GIS-MCDA approach for the delineation of potentially flood susceptible areas. The approach integrates the probabilistic and the local ordered weighted averaging (OWA) methods via Monte Carlo simulation, to take into account the uncertainty related to criteria weights, spatial heterogeneity of preferences and the risk attitude of the analyst. The approach is applied to a pilot study for Gucheng County, central China, heavily affected by the hazardous 2012 flood. A GIS database of six geomorphological and hydrometeorological factors for the evaluation of susceptibility was created. Moreover, uncertainty and sensitivity analyses were performed to investigate the robustness of the model. The results indicate that the ensemble method improves the robustness of the model outcomes with respect to variation in criteria weights and identifies which criteria weights are most responsible for the variability of model outcomes. Therefore, the proposed approach is an improvement over the conventional deterministic method and can provide a more rational, objective and unbiased tool for flood susceptibility evaluation.
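As a hedged sketch of a probabilistic OWA aggregation, the code below draws random criteria weights from a Dirichlet distribution in each Monte Carlo run, applies Yager-style order weights controlled by a risk-attitude parameter, and summarizes each cell by the ensemble mean and standard deviation; the criteria, weight distribution, and quantifier are illustrative and not the study's exact local OWA formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

def owa(values, alpha=1.0):
    """Ordered weighted averaging of standardized criterion values per cell.
    Order weights follow Yager's quantifier Q(r) = r**alpha:
    alpha < 1 leans risk-averse, alpha > 1 leans risk-taking."""
    n = values.shape[1]
    k = np.arange(1, n + 1)
    v = (k / n) ** alpha - ((k - 1) / n) ** alpha      # order weights, sum to 1
    ordered = np.sort(values, axis=1)[:, ::-1]          # descending per cell
    return ordered @ v

# Toy raster flattened to cells x 6 criteria (e.g. slope, elevation, rainfall),
# each already standardized to [0, 1] where 1 means "more flood susceptible".
cells = rng.random((5000, 6))

# Monte Carlo over uncertain criteria weights (Dirichlet) applied before ordering;
# scaling by n*w means equal weights reproduce the plain criterion values.
runs = []
for _ in range(500):
    w = rng.dirichlet(np.ones(6))                       # random criteria weights
    runs.append(owa(cells * (6 * w), alpha=0.7))
runs = np.array(runs)

susceptibility_mean = runs.mean(axis=0)                 # ensemble susceptibility map
susceptibility_std = runs.std(axis=0)                   # uncertainty per cell
```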
NASA Astrophysics Data System (ADS)
Lv, Z. H.; Li, Q.; Huang, R. W.; Liu, H. M.; Liu, D.
2016-08-01
Based on a discussion of the topology structure of integrated distributed photovoltaic (PV) power generation and energy storage (ES) systems in single or mixed configurations, this paper focuses on analyzing the grid-connected performance of integrated distributed photovoltaic and energy storage (PV-ES) systems and proposes a comprehensive evaluation index system. Then a multi-level fuzzy comprehensive evaluation method based on grey correlation degree is proposed, and the calculations of the weight matrix and fuzzy matrix are presented step by step. Finally, a distributed integrated PV-ES power generation system connected to a 380 V low-voltage distribution network is taken as an example, and some suggestions are made based on the evaluation results.
Ma, Jun; Yank, Veronica; Lv, Nan; Goldhaber-Fiebert, Jeremy D.; Lewis, Megan A.; Kramer, M. Kaye; Snowden, Mark B.; Rosas, Lisa G.; Xiao, Lan; Blonstein, Andrea C.
2015-01-01
Effective interventions targeting comorbid obesity and depression are critical given the increasing prevalence and worsened outcomes for patients with both conditions. RAINBOW is a type 1 hybrid design randomized controlled trial. The objective is to evaluate the clinical and cost effectiveness and implementation potential of an integrated, technology-enhanced, collaborative care model for treating comorbid obesity and depression in primary care. Obese and depressed adults (n=404) will be randomized to usual care enhanced with the provision of a pedometer and information about the health system’s services for mood or weight management (control) or with the Integrated Coaching for Better Mood and Weight (I-CARE) program (intervention). The 12-month I-CARE program synergistically integrates two proven behavioral interventions: problem-solving therapy with as-needed intensification of pharmacotherapy for depression (PEARLS) and standardized behavioral treatment for obesity (Group Lifestyle Balance™). It utilizes traditional (e.g., office visits and phone consults) and emerging care delivery modalities (e.g., patient web portal and mobile applications). Follow-up assessments will occur at 6, 12, 18, and 24 months. We hypothesize that compared with controls, I-CARE participants will have greater improvements in weight and depression severity measured by the 20-item Depression Symptom Checklist at 12 months, which will be sustained at 24 months. We will also assess I-CARE’s cost-effectiveness and use mixed methods to examine its potential for reach, adoption, implementation, and maintenance. This study offers the potential to change how obese and depressed adults are treated—through a new model of accessible and integrative lifestyle medicine and mental health expertise—in primary care. PMID:26096714
Ma, Jun; Yank, Veronica; Lv, Nan; Goldhaber-Fiebert, Jeremy D; Lewis, Megan A; Kramer, M Kaye; Snowden, Mark B; Rosas, Lisa G; Xiao, Lan; Blonstein, Andrea C
2015-07-01
Effective interventions targeting comorbid obesity and depression are critical given the increasing prevalence and worsened outcomes for patients with both conditions. RAINBOW is a type 1 hybrid design randomized controlled trial. The objective is to evaluate the clinical and cost effectiveness and implementation potential of an integrated, technology-enhanced, collaborative care model for treating comorbid obesity and depression in primary care. Obese and depressed adults (n = 404) will be randomized to usual care enhanced with the provision of a pedometer and information about the health system's services for mood or weight management (control) or with the Integrated Coaching for Better Mood and Weight (I-CARE) program (intervention). The 12-month I-CARE program synergistically integrates two proven behavioral interventions: problem-solving therapy with as-needed intensification of pharmacotherapy for depression (PEARLS) and standardized behavioral treatment for obesity (Group Lifestyle Balance(™)). It utilizes traditional (e.g., office visits and phone consults) and emerging care delivery modalities (e.g., patient web portal and mobile applications). Follow-up assessments will occur at 6, 12, 18, and 24 months. We hypothesize that compared with controls, I-CARE participants will have greater improvements in weight and depression severity measured by the 20-item Depression Symptom Checklist at 12 months, which will be sustained at 24 months. We will also assess I-CARE's cost-effectiveness and use mixed methods to examine its potential for reach, adoption, implementation, and maintenance. This study offers the potential to change how obese and depressed adults are treated-through a new model of accessible and integrative lifestyle medicine and mental health expertise-in primary care. Copyright © 2015 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Stewart, Mary K.; Hagood, Danielle; Ching, Cynthia Carter
2017-01-01
This article examines two communities of youth who play an online game that integrates physical activity into virtual game play. Participating youth from two research sites--an urban middle school and a suburban junior high school--wore FitBits that tracked their physical activity and then integrated their real-world energy into game-world…
ERIC Educational Resources Information Center
Borovsky, Arielle; Elman, Jeffrey L.; Kutas, Marta
2012-01-01
We investigated the impact of contextual constraint on the integration of novel word meanings into semantic memory. Adults read strongly or weakly constraining sentences ending in known or unknown (novel) words as scalp-recorded electrical brain activity was recorded. Word knowledge was assessed via a lexical decision task in which recently seen…
Itthipuripat, Sirawaj; Serences, John T
2016-06-01
Neuroscience is inherently interdisciplinary, rapidly expanding beyond its roots in biological sciences to many areas of the social and physical sciences. This expansion has led to more sophisticated ways of thinking about the links between brains and behavior and has inspired the development of increasingly advanced tools to characterize the activity of large populations of neurons. However, along with these advances comes a heightened risk of fostering confusion unless efforts are made to better integrate findings across different model systems and to develop a better understanding about how different measurement techniques provide mutually constraining information. Here we use selective visuospatial attention as a case study to highlight the importance of these issues, and we suggest that exploiting multiple measures can better constrain models that relate neural activity to animal behavior. © The Author(s) 2015.
Jiang, T; Jiang, C-Y; Shu, J-H; Xu, Y-J
2017-07-10
The molecular mechanism of nasopharyngeal carcinoma (NPC) is poorly understood and effective therapeutic approaches are needed. This research aimed to identify the attractor modules involved in the progression of NPC and provide further understanding of the underlying mechanism of NPC. Based on the gene expression data of NPC, two specific protein-protein interaction networks for NPC and control conditions were re-weighted using the Pearson correlation coefficient. Then, a systematic search for candidate modules was conducted on the re-weighted networks via a clique algorithm, and a total of 19 and 38 modules were identified from the NPC and control networks, respectively. Among them, 8 pairs of modules with similar gene composition were selected, and 2 attractor modules were identified via the attract method. Functional analysis indicated that these two attractor modules participate in one common bioprocess of cell division. Based on the strategy of integrating systemic module inference with the attract method, we successfully identified 2 attractor modules. These attractor modules might play important roles in the molecular pathogenesis of NPC by jointly affecting the bioprocess of cell division. Further research is needed to explore the correlations between cell division and NPC.
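The re-weighting step can be pictured with the short sketch below, which assigns each protein-protein interaction edge the absolute Pearson correlation of its endpoints' expression profiles under one condition; the gene names, edge list, and use of the absolute value are toy assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def reweight_ppi(edges, expr):
    """Assign each protein-protein interaction edge the Pearson correlation
    of its endpoints' expression profiles under one condition (e.g. NPC or control).

    edges : list of (gene_a, gene_b) pairs
    expr  : dict mapping gene name -> 1-D expression vector across samples
    """
    weighted = {}
    for a, b in edges:
        r, _p = pearsonr(expr[a], expr[b])
        weighted[(a, b)] = abs(r)           # strength of co-expression as edge weight
    return weighted

# Toy example with 3 genes over 10 samples under one condition.
rng = np.random.default_rng(5)
base = rng.normal(size=10)
expr = {"CDK1": base + 0.1 * rng.normal(size=10),
        "CCNB1": base + 0.2 * rng.normal(size=10),
        "TP53": rng.normal(size=10)}
print(reweight_ppi([("CDK1", "CCNB1"), ("CDK1", "TP53")], expr))
```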
NASA Astrophysics Data System (ADS)
Yang, Xudong; Sun, Lingyu; Zhang, Cheng; Li, Lijun; Dai, Zongmiao; Xiong, Zhenkai
2018-03-01
The application of polymer composites as a substitute for metal is an effective approach to reduce vehicle weight. However, the final performance of composite structures is determined not only by the material types, structural designs and manufacturing process, but also by their mutual constraints. Hence, an integrated "material-structure-process-performance" method is proposed for the conceptual and detailed design of composite components. The material selection is based on the principles of composite mechanics, such as the rule of mixtures for laminates. The design of component geometry, dimensions and stacking sequence is determined by parametric modeling and size optimization. The selection of process parameters is based on multi-physical-field simulation. The stiffness and modal constraint conditions were obtained from the numerical analysis of the metal benchmark under typical load conditions. The optimal design was found by multi-disciplinary optimization. Finally, the proposed method was validated by an application case of an automotive hatchback using carbon fiber reinforced polymer. Compared with the metal benchmark, the weight of the composite component is reduced by 38.8%, while its torsional and bending stiffness increase by 3.75% and 33.23%, respectively, and the first natural frequency increases by 44.78%.
Medical Image Segmentation by Combining Graph Cut and Oriented Active Appearance Models
Chen, Xinjian; Udupa, Jayaram K.; Bağcı, Ulaş; Zhuge, Ying; Yao, Jianhua
2017-01-01
In this paper, we propose a novel 3D segmentation method based on the effective combination of the active appearance model (AAM), live wire (LW), and graph cut (GC). The proposed method consists of three main parts: model building, initialization, and segmentation. In the model building part, we construct the AAM and train the LW cost function and GC parameters. In the initialization part, a novel algorithm is proposed for improving the conventional AAM matching method, which effectively combines the AAM and LW method, resulting in the Oriented AAM (OAAM). A multi-object strategy is utilized to help in object initialization. We employ a pseudo-3D initialization strategy and segment the organs slice by slice via the multi-object OAAM method. For the segmentation part, a 3D shape-constrained GC method is proposed. The object shape generated from the initialization step is integrated into the GC cost computation, and an iterative GC-OAAM method is used for object delineation. The proposed method was tested in segmenting the liver, kidneys, and spleen on a clinical CT dataset and also on the MICCAI 2007 grand challenge liver segmentation training dataset. The results show the following: (a) an overall segmentation accuracy of true positive volume fraction (TPVF) > 94.3% and false positive volume fraction (FPVF) < 0.2% can be achieved; (b) the initialization performance can be improved by combining AAM and LW; (c) the multi-object strategy greatly facilitates the initialization; (d) compared to the traditional 3D AAM method, the pseudo-3D OAAM method achieves comparable performance while running 12 times faster; (e) the performance of the proposed method is comparable to that of the state-of-the-art liver segmentation algorithms. The executable version of the 3D shape-constrained GC with a user interface can be downloaded from the website http://xinjianchen.wordpress.com/research/. PMID:22311862
Alimohammadzadeh, Khalil; Bahadori, Mohammadkarim; Hassani, Fariba
2016-01-01
Background: A radiology department, as a service provider organization, requires realization of the quality concept concerning service provisioning knowledge, satisfaction and all issues relating to the customer, as well as quality assurance and improvement. At present, radiology departments in hospitals are regarded as income-generating units and they should continuously seek performance improvement so that they can survive in the changing and competitive environment of the health care sector. Objectives: The aim of this study was to propose a method for ranking the radiology departments of selected hospitals of Tehran city using the analytic hierarchy process (AHP) and to evaluate the quality of their services in 2015. Materials and Methods: This was an applied, cross-sectional study, carried out in the radiology departments of 6 Tehran educational hospitals in 2015. The hospitals were selected using a non-probability, purposive sampling method. Data gathering was performed using customized Joint Commission International (JCI) standards. Expert Choice 10.0 software was used for data analysis, and the AHP method was used for prioritization. Results: "Management and empowerment of human resources" (weight = 0.465) and "requirements and facilities" (weight = 0.139) were of highest and lowest significance, respectively, in the overall ranking of the hospitals. MS (weight = 0.316), MD (weight = 0.259), AT (weight = 0.14), TS (weight = 0.108), MO (weight = 0.095), and LH (weight = 0.082) achieved the first to sixth rankings, respectively. Conclusion: The use of the AHP method is promising for fostering the evaluation process and subsequently promoting the efficiency and effectiveness of radiology departments. The present model can fill the gap in the accreditation system of the country's hospitals with respect to ranking and comparing them according to the significance and value of each individual criterion and standard. Accordingly, it can integrate the qualitative and quantitative criteria involved and thereby take a decisive step towards further efficiency and effectiveness of health care evaluation systems. PMID:27127577
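For readers unfamiliar with AHP, the sketch below derives criterion weights as the normalized principal eigenvector of a pairwise comparison matrix and reports Saaty's consistency ratio; the 4 × 4 matrix is purely illustrative and does not reproduce the study's expert judgments or criteria.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of the pairwise
    comparison matrix; the consistency ratio (CR) should be below 0.1."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)                  # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]       # Saaty's random index
    return w, ci / ri

# Illustrative 4x4 comparison of criteria such as human resources, equipment,
# safety, and facilities (Saaty 1-9 scale; not the study's actual judgments).
A = np.array([
    [1.0, 3.0, 5.0, 7.0],
    [1/3, 1.0, 3.0, 5.0],
    [1/5, 1/3, 1.0, 3.0],
    [1/7, 1/5, 1/3, 1.0],
])
weights, cr = ahp_weights(A)
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```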
NASA Astrophysics Data System (ADS)
Arsenault, Richard; Gatien, Philippe; Renaud, Benoit; Brissette, François; Martel, Jean-Luc
2015-10-01
This study aims to test whether a weighted combination of several hydrological models can simulate flows more accurately than the models taken individually. In addition, the project attempts to identify the most efficient model averaging method and the optimal number of models to include in the weighting scheme. In order to address the first objective, streamflow was simulated using four lumped hydrological models (HSAMI, HMETS, MOHYSE and GR4J-6), each of which was calibrated with three different objective functions on 429 watersheds. The resulting 12 hydrographs (4 models × 3 metrics) were weighted and combined with the help of 9 averaging methods, which are the simple arithmetic mean (SAM), Akaike information criterion (AICA), Bates-Granger (BGA), Bayes information criterion (BICA), Bayesian model averaging (BMA), Granger-Ramanathan average variants A, B and C (GRA, GRB and GRC) and the average by SCE-UA optimization (SCA). The same weights were then applied to the hydrographs in validation mode, and the Nash-Sutcliffe Efficiency metric was measured between the averaged and observed hydrographs. Statistical analyses were performed to compare the accuracy of the weighted methods to that of the individual models. A Kruskal-Wallis test and a multi-objective optimization algorithm were then used to identify the most efficient weighted method and the optimal number of models to integrate. Results suggest that the GRA, GRB, GRC and SCA weighted methods perform better than the individual members. Model averaging from these four methods was superior to the best of the individual members in 76% of the cases. Optimal combinations on all watersheds included at least one of each of the four hydrological models. None of the optimal combinations included all members of the ensemble of 12 hydrographs. The Granger-Ramanathan average variant C (GRC) is recommended as the best compromise between accuracy, speed of execution, and simplicity.
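A minimal sketch of Granger-Ramanathan-style averaging is given below: member weights come from a least-squares regression of observed flows on the simulated hydrographs over a calibration period, and the averaged hydrograph is scored with the Nash-Sutcliffe efficiency. The variant-specific details (intercept, non-negativity or sum-to-one constraints) distinguishing GRA, GRB and GRC are not reproduced, and the data are synthetic.

```python
import numpy as np

def granger_ramanathan_weights(sims, obs):
    """Least-squares weights for combining member hydrographs (calibration period).
    sims : array (n_timesteps, n_members), obs : array (n_timesteps,).
    This is the unconstrained least-squares flavour; the GRA/GRB/GRC variants
    differ in details not reproduced here."""
    w, *_ = np.linalg.lstsq(sims, obs, rcond=None)
    return w

def nse(sim, obs):
    """Nash-Sutcliffe efficiency of a simulated hydrograph."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Toy ensemble: 4 imperfect simulations of a synthetic observed hydrograph.
rng = np.random.default_rng(11)
t = np.arange(365)
obs = 10 + 8 * np.exp(-((t - 120) / 25.0) ** 2) + rng.normal(scale=0.5, size=t.size)
members = np.column_stack([obs * s + rng.normal(scale=1.0, size=t.size)
                           for s in (0.8, 0.9, 1.1, 1.2)])

w = granger_ramanathan_weights(members, obs)
averaged = members @ w
print("member NSE:", [round(nse(members[:, i], obs), 3) for i in range(4)])
print("averaged NSE:", round(nse(averaged, obs), 3))
```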
Structural weights analysis of advanced aerospace vehicles using finite element analysis
NASA Technical Reports Server (NTRS)
Bush, Lance B.; Lentz, Christopher A.; Rehder, John J.; Naftel, J. Chris; Cerro, Jeffrey A.
1989-01-01
A conceptual/preliminary level structural design system has been developed for structural integrity analysis and weight estimation of advanced space transportation vehicles. The system includes a three-dimensional interactive geometry modeler, a finite element pre- and post-processor, a finite element analyzer, and a structural sizing program. Inputs to the system include the geometry, surface temperature, material constants, construction methods, and aerodynamic and inertial loads. The results are a sized vehicle structure capable of withstanding the static loads incurred during assembly, transportation, operations, and missions, and a corresponding structural weight. An analysis of the Space Shuttle external tank is included in this paper as a validation and benchmark case of the system.
Predicting breast cancer using an expression values weighted clinical classifier.
Thomas, Minta; De Brabanter, Kris; Suykens, Johan A K; De Moor, Bart
2014-12-31
Clinical data, such as patient history, laboratory analysis and ultrasound parameters, which are the basis of day-to-day clinical decision support, are often used to guide the clinical management of cancer in the presence of microarray data. Several data fusion techniques are available to integrate genomics or proteomics data, but only a few studies have created a single prediction model using both gene expression and clinical data. These studies often remain inconclusive regarding an obtained improvement in prediction performance. To improve clinical management, these data should be fully exploited. This requires efficient algorithms to integrate these data sets and design a final classifier. LS-SVM classifiers and generalized eigenvalue/singular value decompositions are successfully used in many bioinformatics applications for prediction tasks. Building on the benefits of these two techniques, we propose a machine learning approach, a weighted LS-SVM classifier, to integrate two data sources: microarray and clinical parameters. We compared and evaluated the proposed methods on five breast cancer case studies. Compared to the LS-SVM classifier on individual data sets, generalized eigenvalue decomposition (GEVD) and kernel GEVD, the proposed weighted LS-SVM classifier offers good prediction performance, in terms of test area under the ROC curve (AUC), on all breast cancer case studies. Thus a clinical classifier weighted with the microarray data set results in significantly improved diagnosis, prognosis and prediction of responses to therapy. The proposed model has been shown to be a promising mathematical framework for both data fusion and non-linear classification problems.
Advances in endoscopic balloon therapy for weight loss and its limitations
Vyas, Dinesh; Deshpande, Kaivalya; Pandya, Yagnik
2017-01-01
The field of medical and surgical weight loss is undergoing an explosion of new techniques and devices. Many of these are geared towards endoscopic approaches rather than the conventional and more invasive laparoscopic or open approach. One such recent advance is the introduction of intragastric balloons. In this article, we discuss the following balloons, recently approved by the Food and Drug Administration for weight loss: the Orbera™ Intragastric Balloon System (Apollo Endosurgery Inc, Austin, TX, United States), the ReShape® Integrated Dual Balloon System (ReShape Medical, Inc., San Clemente, CA, United States), and the Obalon (Obalon® Therapeutics, Inc.). The individual features of each of these balloons, the method of introduction and removal, and the expected weight loss and possible complications are discussed. This review of the various balloons highlights the innovation in the field of weight loss. PMID:29209122
NASA Astrophysics Data System (ADS)
Wang, Shunguo; Kalscheuer, Thomas; Bastani, Mehrdad; Malehmir, Alireza; Pedersen, Laust B.; Dahlin, Torleif; Meqbel, Naser
2018-04-01
The electrical resistivity tomography (ERT) method provides moderately good constraints for both conductive and resistive structures, while the radio-magnetotelluric (RMT) method is well suited to constrain conductive structures. Additionally, RMT and ERT data may have different target coverage and are differently affected by various types of noise. Hence, joint inversion of RMT and ERT data sets may provide a better constrained model as compared to individual inversions. In this study, joint inversion of boat-towed RMT and lake-floor ERT data has for the first time been formulated and implemented. The implementation was tested on both synthetic and field data sets incorporating RMT transverse electric mode and ERT data. Results from synthetic data demonstrate that the joint inversion yields models with better resolution compared with individual inversions. A case study from an area adjacent to the Äspö Hard Rock Laboratory (HRL) in southeastern Sweden was used to demonstrate the implementation of the method. A 790-m-long profile comprising lake-floor ERT and boat-towed RMT data combined with partial land data was used for this purpose. Joint inversions with and without weighting (applied to different data sets, vertical and horizontal model smoothness) as well as constrained joint inversions incorporating bathymetry data and water resistivity measurements were performed. The resulting models delineate subsurface structures such as a major northeasterly directed fracture system, which is observed in the HRL facility underground and confirmed by boreholes. A previously uncertain weakness zone, likely a fracture system in the northern part of the profile, is inferred in this study. The fractures are highly saturated with saline water, which makes them good targets for resistivity-based geophysical methods. Nevertheless, conductive sediments overlain by the lake water make it more difficult to resolve these deep fracture zones. Therefore, the joint inversion of RMT and ERT data particularly helps to improve the resolution of the resistivity models in areas where the profile traverses shallow water and land sections. Our modification of the joint inversion of RMT and ERT data improves the study of geological units underneath shallow water bodies where underground infrastructures are planned. Thus, it allows better planning and mitigation of the risks and costs associated with conductive weakness zones.
Climate Change Impacts and Vulnerability Assessment in Industrial Complexes
NASA Astrophysics Data System (ADS)
Lee, H. J.; Lee, D. K.
2016-12-01
Climate change has recently caused frequent natural disasters, such as floods, droughts, and heat waves, and these disasters have also increased damage to industry. Climate change adaptation policies must be established to reduce such industrial damage, and accurate vulnerability assessment is essential for establishing them. Thus, this study aims at establishing a new index to assess the vulnerability level of industrial complexes. Most vulnerability indices have been developed with subjective approaches, such as the Delphi survey and the Analytic Hierarchy Process (AHP). These subjective approaches rely on the knowledge of a few experts, which limits the reliability of the indices. To alleviate this problem, we designed a vulnerability index that incorporates objective approaches. We investigated 42 industrial complex sites in the Republic of Korea (ROK). To calculate the weights of the variables, we used the entropy method as an objective approach, integrating it with the Delphi survey as a subjective approach. We found that the method integrating both subjective and objective approaches generated vulnerability assessment results for the study sites. The integration of the entropy method enables us to assess vulnerability objectively. Our method will be useful for establishing climate change adaptation policies by reducing the uncertainties of methods based purely on subjective approaches.
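A minimal sketch of the entropy weighting step (the generic entropy weight method; the variable names, example matrix, and the way objective and subjective weights are combined are illustrative assumptions, not the exact procedure of this study):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: X is an (n_sites, n_indicators) matrix of
    non-negative indicator values; returns one objective weight per indicator."""
    P = X / X.sum(axis=0, keepdims=True)        # share of each site per indicator
    k = 1.0 / np.log(X.shape[0])                # normalization constant
    logP = np.where(P > 0, np.log(P), 0.0)      # convention: 0 * log(0) = 0
    e = -k * np.sum(P * logP, axis=0)           # entropy of each indicator
    d = 1.0 - e                                 # degree of diversification
    return d / d.sum()                          # objective (entropy) weights

# Example: 42 industrial complexes x 5 exposure/sensitivity indicators
X = np.random.rand(42, 5) + 0.01
w_objective = entropy_weights(X)
# These objective weights could then be combined (e.g., averaged or multiplied
# and renormalized) with subjective Delphi/AHP weights.
```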
Integration of sensory force feedback is disturbed in CRPS-related dystonia.
Mugge, Winfred; van der Helm, Frans C T; Schouten, Alfred C
2013-01-01
Complex regional pain syndrome (CRPS) is characterized by pain and disturbed blood flow, temperature regulation and motor control. Approximately 25% of cases develop fixed dystonia. The origin of this movement disorder is poorly understood, although recent insights suggest involvement of disturbed force feedback. Assessment of sensorimotor integration may provide insight into the pathophysiology of fixed dystonia. Sensory weighting is the process of integrating and weighting sensory feedback channels in the central nervous system to improve the state estimate. It was hypothesized that patients with CRPS-related dystonia bias sensory weighting of force and position toward position due to the unreliability of force feedback. The current study provides experimental evidence for dysfunctional sensory integration in fixed dystonia, showing that CRPS patients with fixed dystonia weight force and position feedback differently than controls do. The study shows reduced force feedback weights in CRPS patients with fixed dystonia, making it the first to demonstrate disturbed integration of force feedback in fixed dystonia, an important step towards understanding the pathophysiology of this disorder.
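To make the notion of sensory weighting concrete, here is a hedged, minimal sketch of maximum-likelihood cue combination (inverse-variance weighting), which is one standard way such weights are modelled; it is not the specific analysis used in this study, and the example numbers are arbitrary:

```python
import numpy as np

def fuse(force_est, force_var, pos_est, pos_var):
    """Combine force- and position-based estimates of the same quantity,
    weighting each channel by the inverse of its noise variance."""
    w_force = (1 / force_var) / (1 / force_var + 1 / pos_var)
    w_pos = 1.0 - w_force
    return w_force * force_est + w_pos * pos_est, w_force

# If force feedback is unreliable (large variance), its weight drops:
_, w = fuse(force_est=10.0, force_var=4.0, pos_est=12.0, pos_var=1.0)
print(w)  # 0.2 -- force feedback contributes little, as hypothesized for CRPS
```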
Juhas, Mario; Ajioka, James W
2016-10-05
Escherichia coli K-12 is a frequently used host for a number of synthetic biology and biotechnology applications and a chassis for the development of minimal cell factories. Novel approaches for integrating high molecular weight DNA into the E. coli chromosome would therefore greatly facilitate engineering efforts in this bacterium. We developed a reliable and flexible lambda Red recombinase-based system that utilizes overlapping DNA fragments for integration of high molecular weight DNA into the E. coli chromosome. Our chromosomal integration strategy can be used to integrate high molecular weight DNA of variable length into any non-essential locus in the E. coli chromosome. Using this approach, we integrated 15 kb of DNA encoding the sucrose catabolism and lactose metabolism and transport operons into the fliK locus of flagellar region 3b in the E. coli K-12 MG1655 chromosome. Furthermore, with this system we integrated 50 kb of Bacillus subtilis 168 DNA into two target sites in the E. coli K-12 MG1655 chromosome. The chromosomal integrations into the fliK locus occurred with high efficiency, inhibited motility, and did not have a negative effect on the growth of E. coli. In addition to the rational design of synthetic biology devices, our high molecular weight DNA chromosomal integration system will facilitate metabolic and genome-scale engineering of E. coli.
NASA Astrophysics Data System (ADS)
Hasan, Mohammed A.
1997-11-01
In this dissertation, we present several novel approaches for detection and identification of targets of arbitrary shapes from acoustic backscattered data and the incident waveform. This problem is formulated as time-delay estimation and sinusoidal frequency estimation problems, both of which have applications in many other important areas of signal processing. Solving the time-delay estimation problem allows identification of the specular components in the signal backscattered from elastic and non-elastic targets. Thus, accurate estimation of these time delays helps determine the existence of certain clues for detecting targets. Several new methods for solving these two problems in the time, frequency and wavelet domains are developed. In the time domain, a new block fast transversal filter (BFTF) is proposed for a fast implementation of the least squares (LS) method. This BFTF algorithm is derived by using a data-related constrained block-LS cost function to guarantee global optimality. The new soft-constrained algorithm provides an efficient way of transferring weight information between blocks of data and is thus computationally very efficient compared with other LS-based schemes. Additionally, the tracking ability of the algorithm can be controlled by varying the block length and/or a soft-constraint parameter. The effectiveness of this algorithm is tested on several underwater acoustic backscattered data sets for elastic targets and non-elastic (cement chunk) objects. In the frequency domain, the time-delay estimation problem is converted to a sinusoidal frequency estimation problem by using the discrete Fourier transform. Then, the lagged sample covariance matrices of the resulting signal are computed and studied in terms of their eigenstructure. These matrices are shown to be robust and effective in extracting bases for the signal and noise subspaces. New MUSIC and matrix pencil-based methods are derived from these subspaces. The effectiveness of the method is demonstrated on the problem of detecting multiple specular components in acoustic backscattered data. Finally, a method for the estimation of time delays using wavelet decomposition is derived. The sub-band adaptive filtering uses the discrete wavelet transform for multi-resolution or sub-band decomposition. Joint time-delay estimation for identifying multi-specular components and subsequent adaptive filtering are performed on the signal in each sub-band. This provides multiple 'looks' at the signal at different resolution scales, which results in more accurate estimates of the delays associated with the specular components. Simulation results on simulated and real shallow-water data are provided, which show the promise of this new scheme for target detection in a heavily cluttered environment.
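A minimal sketch of subspace-based sinusoidal frequency estimation in the spirit of MUSIC (illustrative only; the dissertation's lagged-covariance construction and the matrix-pencil variant are not reproduced, and the function name and defaults are assumptions):

```python
import numpy as np

def music_spectrum(x, p, m=30, freqs=np.linspace(0, 0.5, 1000)):
    """MUSIC pseudospectrum for p complex exponentials in noise
    (use 2*p for real sinusoids). x: 1-D data record; m: snapshot length."""
    N = len(x)
    # Snapshot matrix and sample covariance from lagged segments
    X = np.array([x[i:i + m] for i in range(N - m + 1)]).T   # m x (N - m + 1)
    R = X @ X.conj().T / X.shape[1]
    # Eigendecomposition; the smallest eigenvalues span the noise subspace
    _, V = np.linalg.eigh(R)
    En = V[:, : m - p]                                       # noise subspace
    spectrum = []
    for f in freqs:
        a = np.exp(2j * np.pi * f * np.arange(m))            # steering vector
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return freqs, np.array(spectrum)

# Peaks of the pseudospectrum indicate the sinusoid frequencies, which map back
# to specular time delays after the DFT-based conversion described above.
```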
A Comparison of Four Item-Selection Methods for Severely Constrained CATs
ERIC Educational Resources Information Center
He, Wei; Diao, Qi; Hauser, Carl
2014-01-01
This study compared four item-selection procedures developed for use with severely constrained computerized adaptive tests (CATs). Severely constrained CATs refer to adaptive tests that seek to meet a complex set of constraints that are often not exclusive of each other (i.e., an item may contribute to the satisfaction of several…
Yu, Guan; Liu, Yufeng; Thung, Kim-Han; Shen, Dinggang
2014-01-01
Accurately identifying mild cognitive impairment (MCI) individuals who will progress to Alzheimer's disease (AD) is very important for making early interventions. Many classification methods focus on integrating multiple imaging modalities such as magnetic resonance imaging (MRI) and fluorodeoxyglucose positron emission tomography (FDG-PET). However, the main challenge for MCI classification using multiple imaging modalities is the large amount of missing data: for example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI) study, almost half of the subjects do not have PET images. In this paper, we propose a new and flexible binary classification method, namely Multi-task Linear Programming Discriminant (MLPD) analysis, for incomplete multi-source feature learning. Specifically, we decompose the classification problem into different classification tasks, i.e., one for each combination of available data sources. To solve all the classification tasks jointly, our proposed MLPD method links them together by constraining them to achieve a similar estimated mean difference between the two classes (under classification) for the shared features. Compared with the state-of-the-art incomplete Multi-Source Feature (iMSF) learning method, which constrains different classification tasks to choose a common feature subset for the shared features, MLPD can flexibly and adaptively choose different feature subsets for different classification tasks. Furthermore, our proposed MLPD method can be efficiently implemented by linear programming. To validate our MLPD method, we perform experiments on the ADNI baseline dataset with incomplete MRI and PET images from 167 progressive MCI (pMCI) subjects and 226 stable MCI (sMCI) subjects. We further compare our method with the iMSF method (using incomplete MRI and PET images) and also single-task classification methods (using only MRI, or only subjects with both MRI and PET images). Experimental results show very promising performance of our proposed MLPD method. PMID:24820966
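As a hedged sketch of the linear-programming flavour of such a discriminant (a generic L1-regularized LP classifier for a single task; the multi-task coupling through shared mean-difference constraints in MLPD is omitted, and all names and parameters are illustrative):

```python
import numpy as np
from scipy.optimize import linprog

def lp_discriminant(X, y, lam=0.1):
    """L1-regularized linear classifier solved as a linear program.
    X: (n, p) features; y: labels in {-1, +1}."""
    n, p = X.shape
    # Variables z = [u (p), v (p), b (1), xi (n)], with w = u - v, u, v >= 0
    c = np.concatenate([lam * np.ones(2 * p), [0.0], np.ones(n)])
    # Margin constraints y_i (x_i . (u - v) + b) >= 1 - xi_i  ->  A_ub z <= b_ub
    A_ub = np.hstack([-y[:, None] * X, y[:, None] * X, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * p) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    z = res.x
    return z[:p] - z[p:2 * p], z[2 * p]          # weights w, intercept b

# Toy usage on synthetic "MRI-only" features:
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 10))
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=80))
w, b = lp_discriminant(X, y)
print(np.mean(np.sign(X @ w + b) == y))          # training accuracy
```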
Constrained optimization via simulation models for new product innovation
NASA Astrophysics Data System (ADS)
Pujowidianto, Nugroho A.
2017-11-01
We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the possible methods and the reasons for using constrained optimization via simulation models. This is followed by a review of different simulation optimization approaches to constrained optimization, depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.
Gaining insight into the T_2^*-T_2 relationship in surface NMR free-induction decay measurements
NASA Astrophysics Data System (ADS)
Grombacher, Denys; Auken, Esben
2018-05-01
One of the primary shortcomings of the surface nuclear magnetic resonance (NMR) free-induction decay (FID) measurement is the uncertainty surrounding which mechanism controls the signal's time dependence. Ideally, the FID-estimated relaxation time T_2^* that describes the signal's decay carries an intimate link to the geometry of the pore space. In this limit T_2^* is closely linked to the related parameter T_2, which is more directly tied to pore geometry. If T_2^* ≈ T_2, the FID can provide valuable insight into relative pore size and can be used to make quantitative permeability estimates. However, given only FID measurements it is difficult to determine whether T_2^* is linked to pore geometry or whether it has been strongly influenced by background magnetic field inhomogeneity. If the link between an observed T_2^* and the underlying T_2 could be further constrained, the utility of the standard surface NMR FID measurement would be greatly improved. We hypothesize that an approach employing an updated surface NMR forward model that solves the full Bloch equations with appropriately weighted relaxation terms can be used to help constrain the T_2^*-T_2 relationship. Weighting the relaxation terms requires estimating the poorly constrained parameters T_2 and T_1; to deal with this uncertainty we propose to conduct a parameter search involving multiple inversions that employ a suite of forward models, each describing a distinct but plausible T_2^*-T_2 relationship. We hypothesize that forward models given poor T_2 estimates will produce poor data fits when using the complex inversion, while forward models given reliable T_2 estimates will produce satisfactory data fits. By examining the data fits produced by the suite of plausible forward models, the likely T_2^*-T_2 relationship can be constrained by identifying the range of T_2 estimates that produce reliable data fits. Synthetic and field results are presented to investigate the feasibility of the proposed technique.
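A toy sketch of the proposed parameter-search idea (grossly simplified: mono-exponential decays stand in for the full Bloch-equation forward model, the inhomogeneity term is assumed known, and all names and values are illustrative):

```python
import numpy as np

def fid_forward(t, T2, T2_inhom):
    """Toy FID envelope: relaxation (T2) and background-field inhomogeneity
    (T2_inhom) combine into an effective decay time T2*."""
    T2_star = 1.0 / (1.0 / T2 + 1.0 / T2_inhom)
    return np.exp(-t / T2_star)

t = np.linspace(0, 0.5, 200)
data = fid_forward(t, T2=0.25, T2_inhom=0.4) + 0.01 * np.random.randn(t.size)

# Inversions with a suite of assumed T2 values; reliable assumptions should
# produce acceptable data fits, poor ones should not.
for T2_trial in [0.05, 0.15, 0.25, 0.35]:
    pred = fid_forward(t, T2_trial, T2_inhom=0.4)
    amp = (pred @ data) / (pred @ pred)        # best-fit amplitude (least squares)
    misfit = np.linalg.norm(data - amp * pred)
    print(T2_trial, misfit)                    # the plausible T2 range has low misfit
```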
NASA Astrophysics Data System (ADS)
Brunner, Philip; Doherty, J.; Simmons, Craig T.
2012-07-01
The data set used for calibration of regional numerical models that simulate groundwater flow and vadose zone processes is often dominated by head observations. It is therefore to be expected that parameters describing vadose zone processes are poorly constrained. A number of studies on small spatial scales have explored how additional data types used in calibration constrain vadose zone parameters or reduce predictive uncertainty. However, available studies focused on subsets of observation types and did not jointly account for different measurement accuracies or different hydrologic conditions. In this study, parameter identifiability and predictive uncertainty are quantified in simulation of a 1-D vadose zone soil system driven by infiltration, evaporation and transpiration. The worth of different types of observation data (employed individually, in combination, and with different measurement accuracies) is evaluated by using a linear methodology and a nonlinear Pareto-based methodology under different hydrological conditions. Our main conclusions are: (1) linear analysis provides valuable information on comparative parameter and predictive uncertainty reduction accrued through acquisition of different data types, and its use can be supplemented by nonlinear methods; (2) measurements of water table elevation can support future water table predictions, even if such measurements inform the individual parameters of vadose zone models to only a small degree; (3) the benefits of including ET and soil moisture observations in the calibration data set are heavily dependent on depth to groundwater; and (4) measurements of groundwater levels, vadose zone ET or soil moisture poorly constrain regional groundwater system forcing functions.
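A minimal sketch of the kind of linear (first-order, second-moment) analysis used to compare data types (illustrative only; the Jacobian, noise levels, and prediction sensitivity vector below are placeholder numbers, not values from the study):

```python
import numpy as np

def predictive_variance(J, obs_var, prior_var, y):
    """Linear analysis: posterior parameter covariance and the variance of a
    prediction with sensitivity vector y, given the observation Jacobian J."""
    Cd_inv = np.diag(1.0 / obs_var)             # observation noise precision
    Cp_inv = np.diag(1.0 / prior_var)           # prior parameter precision
    C_post = np.linalg.inv(J.T @ Cd_inv @ J + Cp_inv)
    return y @ C_post @ y

# Compare data worth: heads only vs. heads + ET observations (toy numbers)
rng = np.random.default_rng(1)
J_heads = rng.normal(size=(20, 4))              # 20 head obs, 4 vadose-zone params
J_et = rng.normal(size=(10, 4))                 # 10 ET observations
prior_var = np.ones(4)
y = np.array([1.0, 0.5, 0.0, 0.2])              # prediction sensitivities

v1 = predictive_variance(J_heads, 0.1 * np.ones(20), prior_var, y)
v2 = predictive_variance(np.vstack([J_heads, J_et]),
                         np.concatenate([0.1 * np.ones(20), 0.05 * np.ones(10)]),
                         prior_var, y)
print(v1, v2)   # adding ET data cannot increase the linear predictive variance
```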
An Examination of Alternative Multidimensional Scaling Techniques
ERIC Educational Resources Information Center
Papazoglou, Sofia; Mylonas, Kostas
2017-01-01
The purpose of this study is to compare alternative multidimensional scaling (MDS) methods for constraining the stimuli on the circumference of a circle and on the surface of a sphere. Specifically, the existing MDS-T method for plotting the stimuli on the circumference of a circle is applied, and its extension is proposed for constraining the…
Micro-opto-mechanical devices and systems using epitaxial lift off
NASA Technical Reports Server (NTRS)
Camperi-Ginestet, C.; Kim, Young W.; Wilkinson, S.; Allen, M.; Jokerst, N. M.
1993-01-01
The integration of high quality, single crystal thin film gallium arsenide (GaAs) and indium phosphide (InP) based photonic and electronic materials and devices with host microstructures fabricated from materials such as silicon (Si), glass, and polymers will enable the fabrication of the next generation of micro-opto-mechanical systems (MOMS) and optoelectronic integrated circuits. Thin film semiconductor devices deposited onto arbitrary host substrates and structures create hybrid (more than one material) near-monolithic integrated systems which can be interconnected electrically using standard inexpensive microfabrication techniques such as vacuum metallization and photolithography. These integrated systems take advantage of the optical and electronic properties of compound semiconductor devices while still using host substrate materials such as silicon, polysilicon, glass and polymers in the microstructures. This type of materials optimization for specific tasks creates higher performance systems than systems that must trade off device performance to integrate all of the functionality in a single material system. The low weight of these thin film devices also makes them attractive for integration with micromechanical devices, which may have difficulty supporting and translating the full weight of a standard device. These thin film devices and integrated systems will be attractive for applications, however, only when the development of low cost, high yield fabrication and integration techniques makes their use economically feasible. In this paper, we discuss methods for alignment, selective deposition, and interconnection of thin film epitaxial GaAs and InP based devices onto host substrates and host microstructures.
Generating functions for weighted Hurwitz numbers
NASA Astrophysics Data System (ADS)
Guay-Paquet, Mathieu; Harnad, J.
2017-08-01
Double Hurwitz numbers enumerating weighted n-sheeted branched coverings of the Riemann sphere or, equivalently, weighted paths in the Cayley graph of S_n generated by transpositions are determined by an associated weight generating function. A uniquely determined 1-parameter family of 2D Toda τ-functions of hypergeometric type is shown to consist of generating functions for such weighted Hurwitz numbers. Four classical cases are detailed, in which the weighting is uniform: Okounkov's double Hurwitz numbers, for which the ramification is simple at all but two specified branch points; the case of Belyi curves, with three branch points, two with specified profiles; the general case, with a specified number of branch points, two with fixed profiles, the rest constrained only by the genus; and the signed enumeration case, with sign determined by the parity of the number of branch points. Using the exponentiated quantum dilogarithm function as a weight generator, three new types of weighted enumerations are introduced. These determine quantum Hurwitz numbers depending on a deformation parameter q. By suitable interpretation of q, the statistical mechanics of quantum weighted branched covers may be related to that of bosonic gases. The standard double Hurwitz numbers are recovered in the classical limit.
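Schematically, and as a hedged sketch only (this follows the general hypergeometric τ-function framework; normalizations, the expansion parameter, and the precise parametrization may differ from those adopted in the paper), the generating-function structure has the form

\tau^{G}(\beta;\mathbf{t},\mathbf{s}) \;=\; \sum_{\lambda}\Big(\prod_{(i,j)\in\lambda} G\big(\beta(j-i)\big)\Big)\, s_{\lambda}(\mathbf{t})\, s_{\lambda}(\mathbf{s}) \;=\; \sum_{d\ge 0}\beta^{d}\sum_{\mu,\nu} H^{d}_{G}(\mu,\nu)\, p_{\mu}(\mathbf{t})\, p_{\nu}(\mathbf{s}),

where s_\lambda are Schur functions, p_\mu are power-sum symmetric functions, the product runs over the cells of the partition \lambda, G is the weight generating function, and H^{d}_{G}(\mu,\nu) denote the weighted double Hurwitz numbers.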
The Influence of Synaptic Weight Distribution on Neuronal Population Dynamics
Buice, Michael; Koch, Christof; Mihalas, Stefan
2013-01-01
The manner in which different distributions of synaptic weights onto cortical neurons shape their spiking activity remains an open question. To characterize a homogeneous neuronal population, we use the master equation for generalized leaky integrate-and-fire neurons with shot-noise synapses. We develop fast semi-analytic numerical methods to solve this equation for either current or conductance synapses, with and without synaptic depression. We show that its solutions match simulations of equivalent neuronal networks better than those of the Fokker-Planck equation, and we compute bounds on the network response to non-instantaneous synapses. We apply these methods to study different synaptic weight distributions in feed-forward networks. We characterize the synaptic amplitude distributions using a set of measures, called tail weight numbers, designed to quantify the preponderance of very strong synapses. Even when synaptic amplitude distributions are equated for both total current and average synaptic weight, distributions with sparse but strong synapses produce higher responses for small inputs, leading to a larger operating range. Furthermore, despite their small number, such synapses enable the network to respond faster and with more stability in the face of external fluctuations. PMID:24204219
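As a hedged toy illustration of the comparison between weight distributions (a direct simulation, not the paper's master-equation solver; parameter values and the two example distributions are assumptions): a leaky integrate-and-fire neuron driven by Poisson shot-noise synapses, with a dense-weak versus a sparse-strong weight distribution equated for total mean input.

```python
import numpy as np

def lif_rate(weights, rate_per_syn, T=5.0, dt=1e-4, tau=0.02, v_th=1.0):
    """Firing rate of one LIF neuron receiving Poisson shot-noise input:
    each presynaptic spike causes an instantaneous jump by its synaptic weight."""
    rng = np.random.default_rng(0)
    v, spikes = 0.0, 0
    n_syn = len(weights)
    for _ in range(int(T / dt)):
        v += -v * dt / tau                                  # leak
        n_in = rng.poisson(rate_per_syn * dt, size=n_syn)   # presynaptic spikes
        v += np.dot(weights, n_in)                          # shot-noise jumps
        if v >= v_th:
            spikes += 1
            v = 0.0                                         # reset
    return spikes / T

# Equal total mean drive: many weak synapses vs. a few strong ones ("heavy tail")
dense_weak = np.full(100, 0.01)
sparse_strong = np.concatenate([np.full(95, 0.005), np.full(5, 0.105)])
print(dense_weak.sum(), sparse_strong.sum())                # both 1.0
print(lif_rate(dense_weak, rate_per_syn=5.0),
      lif_rate(sparse_strong, rate_per_syn=5.0))            # sparse-strong fires more
```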
MRM-Lasso: A Sparse Multiview Feature Selection Method via Low-Rank Analysis.
Yang, Wanqi; Gao, Yang; Shi, Yinghuan; Cao, Longbing
2015-11-01
Learning from multiview data arises in many applications, such as video understanding, image classification, and social media analysis. However, when the data dimension increases dramatically, it is important but very challenging to remove redundant features in multiview feature selection. In this paper, we propose a novel feature selection algorithm, multiview rank minimization-based Lasso (MRM-Lasso), which jointly utilizes Lasso for sparse feature selection and rank minimization for learning relevant patterns across views. Instead of simply integrating multiple Lassos at the view level, we focus on performance at the sample level (sample significance) and introduce pattern-specific weights into MRM-Lasso. The weights are utilized to measure the contribution of each sample to the labels in the current view. In addition, the latent correlation across different views is captured by learning a low-rank matrix consisting of pattern-specific weights. The alternating direction method of multipliers is applied to optimize the proposed MRM-Lasso. Experiments on four real-life data sets show that features selected by MRM-Lasso have better multiview classification performance than the baselines. Moreover, pattern-specific weights are demonstrated to be significant for learning about multiview data, compared with view-specific weights.
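A hedged sketch of the two key proximal operators that an ADMM solver for such a model would typically alternate between: soft-thresholding for the Lasso term and singular-value thresholding for the low-rank term on the matrix of pattern-specific weights. This is illustrative only and not the authors' exact MRM-Lasso updates; the variable names and toy data are assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 (Lasso) penalty."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def svt(M, tau):
    """Singular-value thresholding: proximal operator of the nuclear norm,
    applied here to a (samples x views) matrix of pattern-specific weights."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy use: sparsify per-view feature coefficients and low-rank-regularize
# the pattern-specific weight matrix
rng = np.random.default_rng(0)
beta_view = soft_threshold(rng.normal(size=50), lam=0.5)    # sparse feature weights
W = svt(rng.normal(size=(200, 3)), tau=5.0)                 # cross-view structure
print(np.count_nonzero(beta_view), np.linalg.matrix_rank(W))
```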
Integration of real-time mapping technology in disaster relief distribution.
DOT National Transportation Integrated Search
2013-02-01
Vehicle routing for disaster relief distribution involves many challenges that distinguish this problem from those in commercial settings, given the time sensitive and resource constrained nature of relief activities. While operations research approa...
SPIDER: Next Generation Chip Scale Imaging Sensor Update
NASA Astrophysics Data System (ADS)
Duncan, A.; Kendrick, R.; Ogden, C.; Wuchenich, D.; Thurman, S.; Su, T.; Lai, W.; Chun, J.; Li, S.; Liu, G.; Yoo, S. J. B.
2016-09-01
The Lockheed Martin Advanced Technology Center (LM ATC) and the University of California at Davis (UC Davis) are developing an electro-optical (EO) imaging sensor called SPIDER (Segmented Planar Imaging Detector for Electro-optical Reconnaissance) that seeks to provide a 10x to 100x size, weight, and power (SWaP) reduction alternative to the traditional bulky optical telescope and focal-plane detector array. The substantial reductions in SWaP would reduce cost and/or provide higher resolution by enabling a larger-aperture imager in a constrained volume. Our SPIDER imager replaces the traditional optical telescope and digital focal plane detector array with a densely packed interferometer array based on emerging photonic integrated circuit (PIC) technologies that samples the object being imaged in the Fourier domain (i.e., spatial frequency domain), and then reconstructs an image. Our approach replaces the large optics and structures required by a conventional telescope with PICs that are accommodated by standard lithographic fabrication techniques (e.g., complementary metal-oxide-semiconductor (CMOS) fabrication). The standard EO payload integration and test process that involves precision alignment and test of optical components to form a diffraction limited telescope is, therefore, replaced by in-process integration and test as part of the PIC fabrication, which substantially reduces associated schedule and cost. This paper provides an overview of performance data on the second-generation PIC for SPIDER developed under the Defense Advanced Research Projects Agency (DARPA)'s SPIDER Zoom research funding. We also update the design description of the SPIDER Zoom imaging sensor and the second-generation PIC (high- and low resolution versions).
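As a hedged, highly simplified illustration of Fourier-domain (spatial-frequency) sampling and image reconstruction (not the actual SPIDER PIC processing chain; the function name, object, and baseline count are illustrative): sample an object's 2-D spectrum at a limited set of baselines and invert the masked spectrum.

```python
import numpy as np

def sample_and_reconstruct(obj, n_baselines=400, seed=0):
    """Sample the object's 2-D Fourier spectrum at a limited set of
    spatial-frequency points ('baselines') and invert the masked spectrum."""
    F = np.fft.fftshift(np.fft.fft2(obj))
    mask = np.zeros_like(F, dtype=bool)
    rng = np.random.default_rng(seed)
    ny, nx = obj.shape
    ks = rng.integers(0, ny * nx, size=n_baselines)
    mask.flat[ks] = True                            # measured spatial frequencies
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real

obj = np.zeros((64, 64))
obj[28:36, 20:44] = 1.0                             # simple extended object
img = sample_and_reconstruct(obj)
# Sparse Fourier coverage yields a "dirty" image; denser baseline coverage
# (or a regularized reconstruction) improves fidelity.
```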
On geodynamo integrations conserving momentum flux
NASA Astrophysics Data System (ADS)
Wu, C.; Roberts, P. H.
2012-12-01
The equations governing the geodynamo are most often integrated by representing the magnetic field and fluid velocity by toroidal and poloidal scalars (for example, the MAG code [1]). This procedure does not automatically conserve the momentum flux. The results can, particularly for flows with large shear, contain significant errors unless the viscosity is artificially increased. We describe a method that evades this difficulty by solving the momentum equation directly while properly conserving momentum. It finds pressure by FFT and cyclic reduction, and integrates the governing equations on overlapping grids, so avoiding the pole problem. The number of operations per time step is proportional to N^3, where N is proportional to the number of grid points in each direction. This contrasts with the order N^4 operations of standard spectral transform methods. The method is easily parallelized. It can also be easily adapted to schemes such as the Weighted Essentially Non-Oscillatory (WENO) method [2], a flux-based procedure based on upwinding that is numerically stable even for zero explicit viscosity. The method has been successfully used to investigate the generation of magnetic fields by flows confined to spheroidal containers and driven by precessional and librational forcing [3, 4]. For spherical systems it satisfies dynamo benchmarks [5]. [1] MAG, http://www.geodynamics.org/cig/software/mag [2] Liu, X.-D., Osher, S. and Chan, T., Weighted Essentially Non-Oscillatory Schemes, J. Computational Physics, 115, 200-212, 1994. [3] Wu, C. C. and Roberts, P. H., On a dynamo driven by topographic precession, Geophysical & Astrophysical Fluid Dynamics, 103, 467-501 (DOI: 10.1080/03091920903311788), 2009. [4] Wu, C. C. and Roberts, P. H., On a dynamo driven topographically by longitudinal libration, Geophysical & Astrophysical Fluid Dynamics, DOI: 10.1080/03091929.2012.682990, 2012. [5] Christensen, U., et al., A numerical dynamo benchmark, Phys. Earth Planet. Int., 128, 25-34, 2001.
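A minimal sketch of a flux-conservative finite-volume update for 1-D advection (first-order upwind fluxes for brevity; the WENO scheme of [2] replaces this simple flux with a high-order weighted nonoscillatory reconstruction while retaining the same conservative update; the function and test case here are illustrative assumptions):

```python
import numpy as np

def advect_conservative(u, a, dx, dt, n_steps):
    """Conservative update u_i^{n+1} = u_i^n - dt/dx (F_{i+1/2} - F_{i-1/2})
    for u_t + (a u)_x = 0 with periodic boundaries and upwind fluxes (a > 0)."""
    for _ in range(n_steps):
        F = a * u                             # upwind flux at i+1/2 uses cell i (a > 0)
        u = u - dt / dx * (F - np.roll(F, 1))
        # The discrete integral sum(u) * dx is conserved to machine precision.
    return u

x = np.linspace(0, 1, 200, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)
u1 = advect_conservative(u0, a=1.0, dx=x[1] - x[0], dt=0.5 * (x[1] - x[0]), n_steps=400)
print(abs(u0.sum() - u1.sum()))               # ~0: discrete conservation holds
```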
ODE Constrained Mixture Modelling: A Method for Unraveling Subpopulation Structures and Dynamics
Hasenauer, Jan; Hasenauer, Christine; Hucho, Tim; Theis, Fabian J.
2014-01-01
Functional cell-to-cell variability is ubiquitous in multicellular organisms as well as bacterial populations. Even genetically identical cells of the same cell type can respond differently to identical stimuli. Methods have been developed to analyse heterogeneous populations, e.g., mixture models and stochastic population models. The available methods are, however, either incapable of simultaneously analysing different experimental conditions or are computationally demanding and difficult to apply. Furthermore, they do not account for biological information available in the literature. To overcome the disadvantages of existing methods, we combine mixture models and ordinary differential equation (ODE) models. The ODE models provide a mechanistic description of the underlying processes, while the mixture models provide an easy way to capture variability. In a simulation study, we show that the class of ODE constrained mixture models can unravel the subpopulation structure and determine the sources of cell-to-cell variability. In addition, the method provides reliable estimates for kinetic rates and subpopulation characteristics. We use ODE constrained mixture modelling to study NGF-induced Erk1/2 phosphorylation in primary sensory neurones, a process relevant in inflammatory and neuropathic pain. We propose a mechanistic pathway model for this process and reconstruct static and dynamical subpopulation characteristics across experimental conditions. We validate the model predictions experimentally, which verifies the capabilities of ODE constrained mixture models. These results illustrate that ODE constrained mixture models can reveal novel mechanistic insights and possess a high sensitivity. PMID:24992156
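A hedged toy sketch of the ODE-constrained mixture idea (two subpopulations whose mean responses follow the same ODE with different kinetic rates, and a Gaussian mixture likelihood around those trajectories; illustrative only, not the paper's Erk1/2 pathway model, and all rates and names are assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.stats import norm

def mean_response(k, t):
    """Mean signal of a subpopulation with activation rate k: dx/dt = k (1 - x)."""
    sol = solve_ivp(lambda _, x: k * (1.0 - x), (t[0], t[-1]), [0.0], t_eval=t)
    return sol.y[0]

def mixture_loglik(params, t, data):
    """params = (k1, k2, weight of subpopulation 1, noise std);
    data: (n_cells, n_timepoints), each cell drawn from one subpopulation."""
    k1, k2, w, sigma = params
    m1, m2 = mean_response(k1, t), mean_response(k2, t)
    l1 = norm.logpdf(data, m1, sigma).sum(axis=1)
    l2 = norm.logpdf(data, m2, sigma).sum(axis=1)
    return np.logaddexp(np.log(w) + l1, np.log(1 - w) + l2).sum()

t = np.linspace(0, 10, 20)
rng = np.random.default_rng(0)
data = np.vstack([mean_response(0.2, t) + 0.05 * rng.normal(size=(30, t.size)),
                  mean_response(1.0, t) + 0.05 * rng.normal(size=(70, t.size))])
print(mixture_loglik((0.2, 1.0, 0.3, 0.05), t, data))
# Maximizing this likelihood over (k1, k2, w, sigma) recovers the kinetic rates
# and the subpopulation proportions.
```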
NASA Technical Reports Server (NTRS)
Loewenstein, Michael
1992-01-01
An attempt is made to constrain the total mass distribution of the giant elliptical galaxy NGC 4472 by constructing simultaneous equilibrium models for the gas and stars. Emphasis is given to reconciling the emission-weighted average value of kT derived from the Ginga spectrum with the amount of dark matter needed to account for velocity dispersion observations.
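For reference, a standard definition of the emission-weighted temperature (hedged: the exact weighting adopted in the paper may differ) is

\langle kT \rangle_{\mathrm{ew}} \;=\; \frac{\int n_e\, n_H\, \Lambda(T)\, kT \, dV}{\int n_e\, n_H\, \Lambda(T)\, dV},

where n_e and n_H are the electron and hydrogen number densities and \Lambda(T) is the cooling (emissivity) function over the observed band.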
New Acoustic Treatment For Aircraft Sidewalls
NASA Technical Reports Server (NTRS)
Vaicaitis, Rimas
1988-01-01
New aircraft-sidewall acoustic treatment reduces interior noise to acceptable levels and minimizes the addition of weight to the aircraft. Transmission of noise through the aircraft sidewall is reduced by a stiffening device attached to the interior side of the aircraft skin, constrained-layer damping tape attached to the stiffening device, porous acoustic materials of high flow resistivity, and a relatively soft trim panel isolated from vibrations of the main fuselage structure.