Sample records for nonlinear ill-posed problems

  1. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

    We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula also provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  2. Regularization techniques for backward-in-time evolutionary PDE problems

    NASA Astrophysics Data System (ADS)

    Gustafsson, Jonathan; Protas, Bartosz

    2007-11-01

    Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated with a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.
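    The regularization-by-modification idea above can be illustrated on a simpler stand-in, the backward heat equation on a periodic 1D domain, where a fixed-magnitude Tikhonov-style spectral filter bounds the exponentially growing Fourier modes. The grid size, time horizon, noise level and filter strength below are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Regularized backward heat equation (a toy stand-in for the backward KSE):
# the naive backward march multiplies each Fourier mode by e^{k^2 T}, which
# blows up noise; a Tikhonov-style filter keeps the inverse bounded.
N, L, T, eps = 128, 2 * np.pi, 0.05, 1e-6
x = np.linspace(0, L, N, endpoint=False)
k = np.fft.fftfreq(N, d=L / N) * 2 * np.pi     # integer wavenumbers

u0_true = np.sin(x) + 0.5 * np.sin(3 * x)      # terminal-value problem setup
decay = np.exp(-k**2 * T)                      # forward heat propagator
uT = np.fft.ifft(np.fft.fft(u0_true) * decay).real
uT_noisy = uT + 1e-4 * np.random.default_rng(0).standard_normal(N)

# Regularized inverse: decay/(decay^2 + eps) ~ 1/decay for well-resolved
# modes, but stays bounded as decay -> 0 for high wavenumbers.
filt = decay / (decay**2 + eps)
u0_rec = np.fft.ifft(np.fft.fft(uT_noisy) * filt).real
err = np.linalg.norm(u0_rec - u0_true) / np.linalg.norm(u0_true)
```

    The parameter eps plays the role of the fixed regularization magnitude; the adaptive strategy mentioned in the abstract would adjust it during the backward march.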

  3. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations, utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two structural optimization problems of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large-scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.

  4. Image reconstruction

    NASA Astrophysics Data System (ADS)

    Vasilenko, Georgii Ivanovich; Taratorin, Aleksandr Markovich

    Linear, nonlinear, and iterative image-reconstruction (IR) algorithms are reviewed. Theoretical results are presented concerning controllable linear filters, the solution of ill-posed functional minimization problems, and the regularization of iterative IR algorithms. Attention is also given to the problem of superresolution and analytical spectrum continuation, the solution of the phase problem, and the reconstruction of images distorted by turbulence. IR in optical and optical-digital systems is discussed with emphasis on holographic techniques.

  5. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  6. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

    deconvolution, space surveillance, Gauss-Seidel iteration ... sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in ... alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero

  7. Solving ill-posed inverse problems using iterative deep neural networks

    NASA Astrophysics Data System (ADS)

    Adler, Jonas; Öktem, Ozan

    2017-12-01

    We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction, and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
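    The skeleton of such a gradient-like iterative scheme can be sketched without the learned component: below, the paper's convolutional-network update is replaced by a plain gradient step on a Tikhonov-regularized objective for a hypothetical linear 1D deblurring problem, not the paper's non-linear tomographic setup.

```python
import numpy as np

# Gradient-like iteration for A x = b: each step moves along the gradient
# of ||Ax - b||^2/2 + lam*||x||^2/2. In the partially learned scheme of the
# abstract, a CNN maps these gradient terms to the update instead.
rng = np.random.default_rng(1)
n = 50
A = np.exp(-0.5 * (np.arange(n)[:, None] - np.arange(n)[None, :])**2 / 4.0)
A /= A.sum(axis=1, keepdims=True)              # row-normalized Gaussian blur
x_true = np.zeros(n); x_true[15:25] = 1.0      # box-shaped object
b = A @ x_true + 1e-3 * rng.standard_normal(n)

lam, step = 1e-3, 1.0
x = np.zeros(n)
for _ in range(500):
    grad = A.T @ (A @ x - b) + lam * x         # data-discrepancy + regulariser gradients
    x -= step * grad
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

    The fixed step and quadratic regulariser are illustrative stand-ins; the learned scheme replaces the hand-chosen update with one trained from data.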

  8. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    PubMed

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with a restart strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and a high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
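    As a minimal stand-in for the restarted nonlinear conjugate gradient of the paper, the sketch below solves the same kind of L1-regularized least-squares problem with plain ISTA (iterative soft-thresholding) on hypothetical toy data, not an FMT forward model.

```python
import numpy as np

# L1-regularized least squares: min ||Ax - b||^2/2 + mu*||x||_1.
# The soft-threshold step is what gives sparse, edge-preserving solutions.
rng = np.random.default_rng(0)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)       # underdetermined system
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.0, -0.7, 0.5]             # sparse "fluorophore" map
b = A @ x_true + 1e-3 * rng.standard_normal(m)

mu = 0.01
L_lip = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - A.T @ (A @ x - b) / L_lip              # gradient step on the data term
    x = np.sign(z) * np.maximum(np.abs(z) - mu / L_lip, 0.0)  # soft threshold
```

    ISTA needs only matrix-vector products and one extra vector of storage, which is the same low-memory property the restarted nonlinear CG approach targets.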

  9. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

    Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware, without compromising the accuracy of the reconstructed images.

  10. Local well-posedness for dispersion generalized Benjamin-Ono equations in Sobolev spaces

    NASA Astrophysics Data System (ADS)

    Guo, Zihua

    We prove that the Cauchy problem for the dispersion generalized Benjamin-Ono equation ∂_t u + |∂_x|^(1+α) ∂_x u + u ∂_x u = 0, u(x,0) = u_0(x), is locally well-posed in the Sobolev spaces H^s for s > 1-α if 0⩽α⩽1. The new ingredient is that we generalize the methods of Ionescu, Kenig and Tataru (2008) [13] to approach the problem in a less perturbative way, in spite of the ill-posedness results of Molinet, Saut and Tzvetkov (2001) [21]. Moreover, as a by-product we prove that if 0<α⩽1 the corresponding modified equation (with the nonlinearity ±u² ∂_x u) is locally well-posed in H^s for s⩾1/2-α/4.

  11. Reconstruction of flaws from eddy-current sensor data with a differential forward model

    NASA Astrophysics Data System (ADS)

    Trillon, Adrien

    Eddy current tomography can be employed to characterize flaws in the metal plates of steam generators in nuclear power plants. Our goal is to evaluate a map of the relative conductivity that represents the flaw. This nonlinear ill-posed problem is difficult to solve and a forward model is needed. First, we studied existing forward models to choose the one best adapted to our case. Finite difference and finite element methods matched our application very well. We adapted contrast source inversion (CSI) type methods to the chosen model and proposed a new criterion. These methods are based on the minimization of the weighted errors of the model equations, coupling and observation, and they allow an error on the equations. It appeared that reconstruction quality improves as the error on the coupling equation decays. We resorted to augmented Lagrangian techniques to constrain the coupling equation and to avoid conditioning problems. In order to overcome the ill-posed character of the problem, prior information was introduced about the shape of the flaw and the values of the relative conductivity. The efficiency of the methods is illustrated with simulated flaws in the 2D case.

  12. A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.

    2016-02-01

    The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.

  13. Ill Posed Problems: Numerical and Statistical Methods for Mildly, Moderately and Severely Ill Posed Problems with Noisy Data.

    DTIC Science & Technology

    1980-02-01

    to estimate f well, moderately well, or poorly. The sensitivity of a regularized estimate of f to the noise is made explicit. After giving the ... estimate of f given z, we first define the intrinsic rank of the problem, where ∫ K(s,t) f(t) dt is known exactly. This definition is used to provide insight

  14. A truncated generalized singular value decomposition algorithm for moving force identification with ill-posed problems

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.

    2017-08-01

    This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error, often called ''noise''. Because of the ill-posedness inherent in the inverse problem, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations from the noise in this ill-posed setting. The illustrated results show that TGSVD has many advantages, such as higher precision, better adaptability and noise immunity, compared with TDM. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and handling the ill-posedness when the method is used to identify moving forces on a bridge.
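    The simpler cousin of TGSVD, plain truncated SVD, can be sketched as follows: small singular values of A are discarded so that the noise e in b is not amplified. The matrix and data below are synthetic, not bridge responses, and the regularization matrix L of the full TGSVD is omitted.

```python
import numpy as np

# Truncated SVD regularization for Ax = b with a noisy right-hand side.
rng = np.random.default_rng(0)
n = 20
U0, _ = np.linalg.qr(rng.standard_normal((n, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)          # rapidly decaying spectrum
A = U0 @ np.diag(s) @ V0.T                      # severely ill-conditioned matrix
x_true = V0 @ np.exp(-np.arange(n, dtype=float))  # smooth true solution
b = A @ x_true + 1e-6 * rng.standard_normal(n)  # measurement noise e

def tsvd_solve(A, b, k):
    """Invert A keeping only the k largest singular triplets."""
    Uf, sf, Vtf = np.linalg.svd(A)
    return Vtf[:k].T @ ((Uf[:, :k].T @ b) / sf[:k])

x_naive = np.linalg.solve(A, b)                 # noise amplified by 1/s_min
x_tsvd = tsvd_solve(A, b, k=5)                  # truncation parameter k
```

    The truncation parameter k plays the same role here as in TGSVD: too large and noise dominates, too small and solution detail is lost.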

  15. Control and System Theory, Optimization, Inverse and Ill-Posed Problems

    DTIC Science & Technology

    1988-09-14

    # AFOSR-87-0350, 1987-1988. CONTROL AND SYSTEM THEORY, OPTIMIZATION, INVERSE ... considerable variety of research investigations within the grant areas (Control and system theory, Optimization, and Ill-posed problems). The

  16. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE PAGES

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2017-07-10

    We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.

  17. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.

  18. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least-squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson's equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.
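    The algebraic core of Tikhonov regularization, which this paper compares against its stabilized finite element alternative, is the addition of a weighted least-squares term in the unknown. A generic sketch on a classical ill-conditioned test matrix, not the paper's finite element setting:

```python
import numpy as np

# Tikhonov regularization: replace min ||Ax - b||^2 by
# min ||Ax - b||^2 + alpha^2 ||x||^2, solved via an augmented least-squares
# system [A; alpha*I] x = [b; 0].
rng = np.random.default_rng(3)
n = 30
# Hilbert matrix: a standard severely ill-conditioned example.
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)

alpha = 1e-5                                    # regularization weight
A_aug = np.vstack([A, alpha * np.eye(n)])
b_aug = np.concatenate([b, np.zeros(n)])
x_tik, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
```

    Without the alpha term the least-squares solution is dominated by amplified noise; with it the solution norm stays bounded at the cost of some bias.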

  19. On the use of the Reciprocity Gap Functional in inverse scattering with near-field data: An application to mammography

    NASA Astrophysics Data System (ADS)

    Delbary, Fabrice; Aramini, Riccardo; Bozza, Giovanni; Brignone, Massimo; Piana, Michele

    2008-11-01

    Microwave tomography is a non-invasive approach to the early diagnosis of breast cancer. However, the problem of visualizing tumors from diffracted microwaves is a difficult nonlinear ill-posed inverse scattering problem. We propose a qualitative approach to the solution of such a problem, whereby the shape and location of cancerous tissues can be detected by means of a combination of the Reciprocity Gap Functional method and the Linear Sampling method. We validate this approach against synthetic near-field data produced by a finite element method for boundary integral equations, where the breast is mimicked by the axial view of two nested cylinders, the external one representing the skin and the internal one the fat tissue.

  20. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model.
    To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.

  1. Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography

    NASA Technical Reports Server (NTRS)

    Xu, Feng; Deshpande, Manohar

    2012-01-01

    Low frequency electromagnetic tomography such as electrical capacitance tomography (ECT) has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivity of the two phases on unknown pixels that exceed the physically reasonable range of permittivity. This strategy not only stabilizes the convergence process, but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
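    The constraint-enforcement step described above can be sketched schematically: after each regularized gradient update, pixels outside the physical permittivity range are clipped back to it. A random linear toy forward model stands in for the paper's FEM, and all values below are hypothetical.

```python
import numpy as np

# Projected, Tikhonov-regularized gradient iteration with box constraints
# enforcing the known permittivity range of the two phases.
rng = np.random.default_rng(2)
m, n = 60, 80
J = rng.standard_normal((m, n)) / np.sqrt(m)        # stand-in sensitivity matrix
eps_true = np.where(rng.random(n) < 0.3, 2.1, 1.0)  # two-phase permittivity map
c_meas = J @ eps_true + 1e-3 * rng.standard_normal(m)

lam, step = 1e-2, 0.1
eps_rec = np.full(n, 1.5)                           # start between the two phases
init_res = np.linalg.norm(J @ eps_rec - c_meas)
for _ in range(300):
    grad = J.T @ (J @ eps_rec - c_meas) + lam * (eps_rec - 1.5)
    eps_rec = np.clip(eps_rec - step * grad, 1.0, 2.1)  # enforce phase bounds
final_res = np.linalg.norm(J @ eps_rec - c_meas)
```

    The clip step is the schematic analogue of INTAC forcing out-of-range pixels back to the known permittivities, which both stabilizes the iteration and sharpens phase boundaries.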

  2. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, and approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.

  3. Assimilating data into open ocean tidal models

    NASA Astrophysics Data System (ADS)

    Kivman, Gennady A.

    Because every practically available data set is incomplete and imperfect, the problem of deriving tidal fields from observations has an infinitely large number of allowable solutions fitting the data within measurement errors, and hence can be treated as ill-posed. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that they all (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.

  4. Analysis and algorithms for a regularized Cauchy problem arising from a non-linear elliptic PDE for seismic velocity estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, M.K.; Fomel, S.B.; Sethian, J.A.

    2009-01-01

    In the present work we derive and study a nonlinear elliptic PDE arising from the problem of estimating the sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first approach is a finite difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics, and likewise the truncation of the Chebyshev series, and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.

  5. GEOPHYSICS, ASTRONOMY AND ASTROPHYSICS: A two scale nonlinear fractal sea surface model in a one dimensional deep sea

    NASA Astrophysics Data System (ADS)

    Xie, Tao; Zou, Guang-Hui; William, Perrie; Kuang, Hai-Lan; Chen, Wei

    2010-05-01

    Using the theory of nonlinear interactions between long and short waves, a nonlinear fractal sea surface model is presented for a one-dimensional deep sea. Numerical simulation results show that the spectral intensity changes at different locations (in both the wave number domain and the temporal-frequency domain), and that the system obeys the energy conservation principle. Finally, a method to limit the fractal parameters is also presented to ensure that the model system does not become ill-posed.

  6. A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software project

    NASA Astrophysics Data System (ADS)

    Polydorides, Nick; Lionheart, William R. B.

    2002-12-01

    The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional, inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist development, we have written and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions, based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.

  7. An ambiguity of information content and error in an ill-posed satellite inversion

    NASA Astrophysics Data System (ADS)

    Koner, Prabhat

    According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content in a stochastic inversion. In the deterministic approach this is referred to as the model resolution matrix (MRM, Menke 1989). The analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degree of freedom for signal (DFS; stochastic) or degree of freedom in retrieval (DFR; deterministic). There is no physical/mathematical explanation in the literature of why the trace of the matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES-13. The stochastic information content calculation is based on a linear assumption. The validity of such mathematics in satellite inversion will be questioned because the problem rests on nonlinear radiative transfer and ill-conditioned inversion. References: Menke, W., 1989: Geophysical data analysis: discrete inverse theory. San Diego: Academic Press. Rodgers, C.D., 2000: Inverse methods for atmospheric soundings: theory and practice. Singapore: World Scientific.
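    The trace quantity under discussion can be made concrete for a Tikhonov-type linear retrieval, where the averaging kernel / model resolution matrix is A = (JᵀS⁻¹J + R)⁻¹ JᵀS⁻¹J and DFS = tr A. The Jacobian and covariances below are hypothetical placeholders, not GOES-13 values.

```python
import numpy as np

# Averaging kernel and degrees of freedom for signal in a regularized
# linear retrieval x_hat = (J^T S^-1 J + R)^-1 J^T S^-1 y.
rng = np.random.default_rng(4)
m, n = 8, 5                        # channels x retrieved parameters
J = rng.standard_normal((m, n))    # Jacobian of the forward model
S_inv = np.eye(m)                  # inverse noise covariance (identity for simplicity)
R = 10.0 * np.eye(n)               # regularization / inverse prior covariance

JtJ = J.T @ S_inv @ J
A = np.linalg.solve(JtJ + R, JtJ)  # averaging kernel (model resolution matrix)
dfs = np.trace(A)                  # "degrees of freedom for signal"
```

    Each eigenvalue of A lies in (0, 1), so the trace interpolates between 0 (pure prior/regularization) and n (fully data-determined retrieval), which is the usual informal justification the abstract calls into question.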

  8. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  9. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
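
    For context, the generalized cross-validation (GCV) baseline that this entry compares MRM against can be sketched via the SVD. This is a generic textbook implementation, not the authors' code; the function name and the λ grid in the usage are illustrative:

```python
import numpy as np

def tikhonov_gcv(A, y, lambdas):
    """Pick a Tikhonov parameter by generalized cross-validation (GCV).

    x_lam solves min ||A x - y||^2 + lam ||x||^2; GCV(lam) is the squared
    residual divided by the squared trace of (I - influence matrix).
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    best = (np.inf, None, None)
    for lam in lambdas:
        f = s**2 / (s**2 + lam)           # Tikhonov filter factors
        x = Vt.T @ (f * beta / s)         # regularized solution
        resid = np.linalg.norm(A @ x - y)
        gcv = resid**2 / (len(y) - f.sum())**2
        if gcv < best[0]:
            best = (gcv, lam, x)
    return best[1], best[2]
```

A typical call scans a logarithmic grid, e.g. `tikhonov_gcv(A, y, np.logspace(-8, 2, 60))`, and uses the returned λ and solution directly.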

  10. Using informative priors in facies inversion: The case of C-ISR method

    NASA Astrophysics Data System (ADS)

    Valakas, G.; Modis, K.

    2016-08-01

    Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization, and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditional on facies observations and normal-scores-transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance-sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application to a synthetic underdetermined inverse problem in aquifer characterization.

  11. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimal regional regularizer, a priori knowledge of the local degree of nonlinearity of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs images of superior quality compared with two conventional Tikhonov regularization approaches. Its advantages in improving performance and reducing shape distortion are demonstrated with experimental data.

  12. Semiclassical regularization of Vlasov equations and wavepackets for nonlinear Schrödinger equations

    NASA Astrophysics Data System (ADS)

    Athanassoulis, Agissilaos

    2018-03-01

    We consider the semiclassical limit of nonlinear Schrödinger equations with initial data that are well localized in both position and momentum (non-parametric wavepackets). We recover the Wigner measure (WM) of the problem, a macroscopic phase-space density which controls the propagation of the physical observables such as mass, energy and momentum. WMs have been used to create effective models for wave propagation in: random media, quantum molecular dynamics, mean field limits, and the propagation of electrons in graphene. In nonlinear settings, the Vlasov-type equations obtained for the WM are often ill-posed on the physically interesting spaces of initial data. In this paper we are able to select the measure-valued solution of the 1+1 dimensional Vlasov-Poisson equation which correctly captures the semiclassical limit, thus finally resolving the non-uniqueness in the seminal result of Zhang et al (2012 Comm. Pure Appl. Math. 55 582-632). The same approach is also applied to the Vlasov-Dirac-Benney equation with small wavepacket initial data, extending several known results.

  13. Regularized finite element modeling of progressive failure in soils within nonlocal softening plasticity

    NASA Astrophysics Data System (ADS)

    Huang, Maosong; Qu, Xie; Lü, Xilin

    2017-11-01

    By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress-return iterative algorithm for a generalized over-nonlocal strain-softening plasticity was proposed, and the consistent tangent matrix was obtained. The proposed algorithm was embedded into existing finite element codes, and it enables nonlocal regularization of the ill-posed boundary value problems caused by pressure-independent and pressure-dependent strain-softening plasticity. The algorithm was verified by numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.

  14. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    This work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and software settings such as subset size and spacing.

  15. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE PAGES

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    2017-11-27

    This work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and software settings such as subset size and spacing.
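
    The cosine-of-angle quantity described in this entry can be illustrated schematically. The magnitude-weighted averaging below is an assumption, since the abstract does not specify the exact integral weighting; the function name is hypothetical:

```python
import numpy as np

def dic_angle_factor(grad_x, grad_y, motion):
    """Average cosine of the angle between a motion direction and the image
    gradients over a subset, weighted by gradient magnitude (assumed weighting).
    Returns 1.0 when motion is parallel to all gradients, 0.0 when orthogonal.
    """
    g = np.stack([grad_x.ravel(), grad_y.ravel()], axis=1)
    mag = np.linalg.norm(g, axis=1)
    u = np.asarray(motion, float)
    u = u / np.linalg.norm(u)
    # cos(theta) per pixel; zero-gradient pixels contribute zero weight anyway
    cos_theta = np.abs(g @ u) / np.where(mag > 0, mag, 1.0)
    return (mag * cos_theta).sum() / mag.sum()
```

Motion aligned with the gradients yields a factor near 1 (well-constrained), while motion orthogonal to them yields a factor near 0, matching the intuition that DIC cannot sense motion along directions of constant intensity.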

  16. Separation of irradiance and reflectance from observed color images by logarithmical nonlinear diffusion process

    NASA Astrophysics Data System (ADS)

    Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi

    2006-02-01

    The Retinex theory, first proposed by Land, deals with the separation of irradiance from reflectance in an observed image. The separation problem is ill-posed. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies previous Retinex algorithms, such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm based on the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, for the extension to color images, we present two approaches to treating the color channels: an independent approach that treats each color channel separately and a collective approach that treats all color channels together. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's.

  17. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    PubMed

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

    Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body from electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values into the solution, and this negativity produces artifacts in reconstructed images in the presence of noise. In this paper, a statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT. The mathematical model of EIT is transformed into a non-negativity-constrained likelihood minimization problem, whose solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. The paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method than by the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
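
    The paper's GPRN solver is not reproduced here; as a rough illustration of a non-negativity-preserving EM-type update for a linear model y ≈ Ax (a generic multiplicative, Richardson-Lucy-style iteration, assuming non-negative A and y), one can sketch:

```python
import numpy as np

def em_nonneg(A, y, n_iter=200):
    """Multiplicative EM update for y ≈ A x with x >= 0 (Poisson likelihood).
    Starting from a positive x, every iterate stays non-negative by
    construction, which is the property the abstract emphasizes."""
    x = np.ones(A.shape[1])
    col_sum = A.sum(axis=0)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # data / model prediction
        x *= (A.T @ ratio) / np.maximum(col_sum, 1e-12)
    return x
```

The multiplicative form is what guarantees non-negativity without explicit projection, in contrast to Tikhonov-type solvers, which must post-process negative values.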

  18. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, provided that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the choice of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic and real audio-magnetotelluric (AMT) and long-period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
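
    A minimal bound-constrained PSO loop of the kind described can be sketched generically; the paper's objective function with Occam-like constraints is not reproduced, and the swarm size, inertia, and acceleration coefficients below are illustrative defaults:

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize f over a box. Each particle keeps its personal best (pbest);
    the swarm shares a global best (gbest); velocities blend inertia,
    cognitive (pbest) and social (gbest) pulls."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)          # enforce box constraints
        val = np.array([f(p) for p in x])
        improved = val < pval
        pbest[improved], pval[improved] = x[improved], val[improved]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()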

  19. SAFE HANDLING OF FOODS

    EPA Science Inventory

    Microbial food-borne illnesses pose a significant health problem in Japan. In 1996 the world's largest outbreak of Escherichia coli food illness occurred in Japan. Since then, new regulatory measures were established, including strict hygiene practices in meat and food processi...

  20. Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.

    PubMed

    Hamilton, Sarah J; Mueller, J L; Alsaker, M

    2017-02-01

    Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise.

  1. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurements are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized in a multicore computer and a GPU respectively.

  2. Inverse statistical estimation via order statistics: a resolution of the ill-posed inverse problem of PERT scheduling

    NASA Astrophysics Data System (ADS)

    Pickard, William F.

    2004-10-01

    The classical PERT inverse statistics problem requires estimation of the mean, m̄, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order-statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, m̄ and s are computed using exact formulae.
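
    For reference, the conventional PERT point estimates whose statistical footing this paper re-examines are simple to state (these are the textbook formulas, not the paper's order-statistics resolution):

```python
def pert_estimates(a, m, b):
    """Classical PERT estimates from optimistic (a), most likely (m) and
    pessimistic (b) values: mean = (a + 4m + b)/6, sd = (b - a)/6."""
    mean = (a + 4 * m + b) / 6
    sd = (b - a) / 6
    return mean, sd
```

The paper's point is that these formulas hide an underdetermined inverse problem; the order-statistics approach adds the sample size N to make it well-posed.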

  3. Cone Beam X-Ray Luminescence Tomography Imaging Based on KA-FEM Method for Small Animals.

    PubMed

    Chen, Dongmei; Meng, Fanzhen; Zhao, Fengjun; Xu, Cao

    2016-01-01

    Cone beam X-ray luminescence tomography can realize fast X-ray luminescence tomography imaging with a relatively short scanning time compared with narrow beam X-ray luminescence tomography. However, it suffers from an ill-posed reconstruction problem. First, the feasibility of experiments with different penetration depths and multiple spectra in small animals was tested using nanophosphor materials. Then, a hybrid reconstruction algorithm based on the KA-FEM method, whose advantages have been demonstrated in fluorescence tomography imaging, was applied to cone beam X-ray luminescence tomography for small animals to overcome the ill-posed reconstruction problem. An in vivo mouse experiment proved the feasibility of the proposed method.

  4. Ill-posedness of the 3D incompressible hyperdissipative Navier–Stokes system in critical Fourier-Herz spaces

    NASA Astrophysics Data System (ADS)

    Nie, Yao; Zheng, Xiaoxin

    2018-07-01

    We study the Cauchy problem for the 3D incompressible hyperdissipative Navier–Stokes equations and consider the well-posedness and ill-posedness in critical Fourier-Herz spaces. We prove that if and , the system is locally well-posed for large initial data as well as globally well-posed for small initial data. Also, we obtain the same result for and . More importantly, we show that the system is ill-posed in the sense of norm inflation for and q > 2. The proof relies heavily on the particular structure of the initial data u_0 that we construct, which makes the first iteration of the solution inflate. Specifically, the special structure of u_0 transforms an infinite sum into a finite sum in the 'remainder term', which permits us to control the remainder.

  5. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different. While the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape, up to the smallest scale allowed by the design space, can develop in the direction perpendicular to the flow, implying that regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite-dimensional approximations for the Hessians are constructed and preconditioners for gradient-based methods are derived from these approximate Hessians.

  6. A modified conjugate gradient method based on the Tikhonov system for computerized tomography (CT).

    PubMed

    Wang, Qi; Wang, Huaxiang

    2011-04-01

    During the past few decades, computerized tomography (CT) has been widely used for non-destructive testing (NDT) and non-destructive examination (NDE) in industry because of its non-invasiveness and visibility. Recently, CT technology has been applied to multi-phase flow measurement: using radiation attenuation measurements along different directions through the investigated object, together with a special reconstruction algorithm, cross-sectional information of the scanned object can be worked out. This is a typical inverse problem and has always been a challenge because of its nonlinearity and ill-conditioning. The Tikhonov regularization method is widely used for such ill-posed problems. However, the conventional Tikhonov method does not provide reconstructions of sufficient quality; the relative errors between the reconstructed images and the real distribution need to be further reduced. In this paper, a modified conjugate gradient (CG) method is applied to a Tikhonov system (the MCGT method) for reconstructing CT images. The computational load is dominated by the number of independent measurements m, and a preconditioner is introduced to lower the condition number of the Tikhonov system. Both simulation and experimental results indicate that the proposed method can reduce the computational time and improve the quality of image reconstruction. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
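
    Plain conjugate gradients on the Tikhonov normal equations, the starting point that the MCGT method modifies and preconditions, can be sketched as follows (a generic sketch, not the authors' implementation):

```python
import numpy as np

def cg_tikhonov(A, y, lam, n_iter=100, tol=1e-10):
    """Conjugate gradients on the SPD Tikhonov system
    (A^T A + lam I) x = A^T y. The paper additionally applies a
    preconditioner to lower this system's condition number."""
    n = A.shape[1]
    M = A.T @ A + lam * np.eye(n)
    b = A.T @ y
    x = np.zeros(n)
    r = b - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

Since the regularized system matrix is symmetric positive definite for lam > 0, CG converges in at most n exact-arithmetic steps; preconditioning mainly helps when the Tikhonov system remains badly conditioned.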

  7. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.

  8. An efficient method for model refinement in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary-value optimization problem that necessitates regularization. Bayesian methods are also suitable because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stability and better convergence. These constraints lead to an extensive, overdetermined system of equations whose model error must be refined by model-retrieval criteria, especially total least squares (TLS). TLS, however, is limited to linear systems, which cannot be assumed when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves the image reconstruction performance and localizes the abnormality well.

  9. On the Problems of Construction and Statistical Inference Associated with a Generalization of Canonical Variables.

    DTIC Science & Technology

    1982-02-01

    of them are presented in this paper. As an application, important practical problems similar to the one posed by Gnanadesikan (1977), p. 77 can be... Gnanadesikan and Wilk (1969) to search for a non-linear combination, giving rise to a non-linear first principal component. So, a p-dimensional vector can...distribution, Gnanadesikan and Gupta (1970) and earlier Eaton (1967) have considered the problem of ranking the r underlying populations according to the

  10. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. 
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type but also of other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. 
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterated Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, and thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of the approximate inverse to the practically highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. 
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with a general convex regularization term and the Kullback-Leibler distance as data fidelity term, by combining a new result on Poisson-distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  11. Hysteresis and Phase Transitions in a Lattice Regularization of an Ill-Posed Forward-Backward Diffusion Equation

    NASA Astrophysics Data System (ADS)

    Helmers, Michael; Herrmann, Michael

    2018-03-01

    We consider a lattice regularization for an ill-posed diffusion equation with a trilinear constitutive law and study the dynamics of phase interfaces in the parabolic scaling limit. Our main result guarantees for a certain class of single-interface initial data that the lattice solutions satisfy asymptotically a free boundary problem with a hysteretic Stefan condition. The key challenge in the proof is to control the microscopic fluctuations that are inevitably produced by the backward diffusion when a particle passes the spinodal region.

  12. Multimodal, high-dimensional, model-based, Bayesian inverse problems with applications in biomechanics

    NASA Astrophysics Data System (ADS)

    Franck, I. M.; Koutsourelakis, P. S.

    2017-01-01

This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple ones, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling, which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography, where the identification of the mechanical properties of biological materials can inform non-invasive medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
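The bias check described above can be sketched in a few lines: a bimodal "posterior" is approximated by a two-component Gaussian mixture, and self-normalized importance sampling with the mixture as proposal corrects the approximation when estimating a posterior expectation. This is a minimal illustration, not the paper's adaptive inference scheme; the target, proposal, and sample size are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    """Unnormalized log-density of a bimodal 'posterior': an equal-weight
    mixture of N(-2, 1) and N(2, 1)."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

# mixture proposal fitted to the two modes: 0.5 N(-2, 1.2^2) + 0.5 N(2, 1.2^2)
def sample_proposal(n):
    comp = rng.integers(0, 2, n)
    return np.where(comp == 0, -2.0, 2.0) + 1.2 * rng.standard_normal(n)

def log_proposal(x):
    # shared normalizing constants are dropped; they cancel in the weights
    return np.logaddexp(-0.5 * ((x + 2.0) / 1.2) ** 2,
                        -0.5 * ((x - 2.0) / 1.2) ** 2) - np.log(1.2)

x = sample_proposal(200_000)
logw = log_target(x) - log_proposal(x)
w = np.exp(logw - logw.max())
w /= w.sum()                          # self-normalized importance weights
post_mean_x2 = np.sum(w * x ** 2)     # E[x^2] under the target (true value 5)
```

Because the weights are self-normalized, neither the target nor the proposal needs to be normalized, which is exactly the situation with an intractable Bayesian posterior.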

  13. Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow

    NASA Astrophysics Data System (ADS)

    Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar

    2014-09-01

We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a fourth-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable. 
Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
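The two regularization choices above can be contrasted on a 1-D toy example (not the paper's mantle-flow setting): Tikhonov regularization penalizes the squared gradient of the parameter field, favouring smooth fields, while TV penalizes the absolute gradient, so a sharp step is cheap under TV but expensive under Tikhonov.

```python
import numpy as np

n = 80
D = np.diff(np.eye(n), axis=0)        # first-difference operator, (n-1) x n

def tikhonov_penalty(m):
    """||D m||_2^2: penalizes squared slopes, favouring smooth fields."""
    return np.sum((D @ m) ** 2)

def tv_penalty(m, eps=1e-6):
    """Smoothed ||D m||_1: charges a jump only its height, so piecewise-
    constant fields with sharp edges remain cheap."""
    return np.sum(np.sqrt((D @ m) ** 2 + eps))

m_smooth = np.sin(np.linspace(0.0, np.pi, n))          # smooth profile
m_step = np.where(np.arange(n) < n // 2, 0.0, 1.0)     # single sharp jump
# Tikhonov charges the step's concentrated slope far more than the sine's
# gentle slope; TV charges the sine's total variation (about 2) more than
# the step's jump height (about 1).
```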

  14. The inverse problem of refraction travel times, part II: Quantifying refraction nonuniqueness using a three-layer model

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.

    2005-01-01

This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems, according to the amount of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error-free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms and, as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. 
Insufficient a priori information during inversion is the reason why refraction methods often do not produce the desired results, or even fail. This work also demonstrates that the application of smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems as well. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, and finite data and model parameters. © Birkhäuser Verlag, Basel, 2005.

  15. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

In this paper, we propose new randomization-based algorithms for large scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to x ∈ {x : ||Ax - b|| = min}, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We show how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
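The rank-k TRSVD building block can be sketched in the spirit of standard randomized range-finding; this is a generic sketch, not the full MTRSVD method (the regularization matrix L and the inner LSQR solves are omitted), and the test matrix below is invented.

```python
import numpy as np

def trsvd(A, k, q=10, seed=0):
    """Rank-k truncated randomized SVD: sample the range of A with a
    Gaussian test matrix of k+q columns, project, compute a small SVD,
    and truncate the rank-(k+q) approximation to rank k."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))      # oversampled test matrix
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal range basis
    B = Q.T @ A                                  # (k+q) x n projection
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k, :]     # truncate to rank k

# usage: a synthetic ill-posed matrix with geometrically decaying spectrum
n = 200
U0, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((n, n)))
s_true = 10.0 ** (-0.05 * np.arange(n))          # sigma_j ~ 10^(-j/20)
A = (U0 * s_true) @ U0.T
U, s, Vt = trsvd(A, k=20)
err = np.linalg.norm(A - (U * s) @ Vt, 2)        # near sigma_21 = 0.1
```

The spectral-norm error of any rank-20 approximation is bounded below by the 21st singular value, and the oversampling parameter q keeps the randomized error close to that optimum.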

  16. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, such problems are often ill-posed: the unknown parameters are sensitive to perturbations in the data. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds a stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
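Since the stabilized variant is not spelled out in the abstract, the following sketches plain Orthogonal Matching Pursuit, the greedy baseline that SOMP builds on; the test problem (random measurement matrix, 3-sparse signal) is invented.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-10):
    """Plain Orthogonal Matching Pursuit: greedily select the column most
    correlated with the residual, then least-squares refit on the support."""
    n = A.shape[1]
    residual, support = y.copy(), []
    x = np.zeros(n)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))    # best-matching atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# usage: recover a 3-sparse vector from 50 random measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 80))
A /= np.linalg.norm(A, axis=0)                        # unit-norm columns
x_true = np.zeros(80)
x_true[[5, 17, 42]] = [3.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, sparsity=3)
```

The sparsity level here plays the role of the regularization parameter; the abstract's "convergence point" corresponds to choosing it adaptively rather than fixing it in advance.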

  17. Preconditioner and convergence study for the Quantum Computer Aided Design (QCAD) nonlinear Poisson problem posed on the Ottawa Flat 270 design geometry.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina

    2012-05-01

A numerical study aimed at evaluating different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry is performed. This study led to some new development in Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite with successive mesh refinement is examined in two metrics, the mean value of the solution (an L^1 norm) and the field integral of the solution (an L^2 norm).

  18. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness, giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
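The idea of replacing a roughness penalty by upper and lower bounds on the model can be illustrated on a linear toy problem; a smoothing kernel stands in for the magnetotelluric physics, and PLTMG and Maxwell's equations are not involved.

```python
import numpy as np

def bounded_lsq(G, d, lo, hi, iters=3000):
    """Projected gradient descent for min ||G m - d||^2 s.t. lo <= m <= hi:
    the box constraint, rather than a penalty term, regularizes the fit."""
    step = 1.0 / np.linalg.norm(G, 2) ** 2        # safe step size
    m = np.full(G.shape[1], 0.5 * (lo + hi))      # feasible starting model
    for _ in range(iters):
        m = np.clip(m - step * (G.T @ (G @ m - d)), lo, hi)
    return m

# usage: smoothing forward operator mapping a conductivity-like model to data
rng = np.random.default_rng(0)
n = 60
s = np.linspace(0.0, 1.0, n)
G = np.exp(-np.abs(s[:, None] - s[None, :]) / 0.05) / n
m_true = 1.0 + 0.5 * np.sin(4 * s)                # lies within the bounds
d = G @ m_true + 1e-5 * rng.standard_normal(n)
m_hat = bounded_lsq(G, d, lo=0.5, hi=1.5)
```

The projection step (`np.clip`) enforces feasibility at every iteration, which is the simplest way to see how bound constraints can replace conventional regularization.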

  19. Lq-Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed, and in full generality nonlinear, problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function - Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney as the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
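The general Lq-Lp cost and its gradient are straightforward to write down; the sketch below (light-propagation model and multigrid machinery omitted, test problem invented) uses a small smoothing parameter so the p=1 regularization term stays differentiable, and checks the gradient by finite differences.

```python
import numpy as np

def lq_lp_cost_grad(x, A, b, lam, q=1.5, p=1.0, eps=1e-8):
    """Cost (1/q)||Ax-b||_q^q + (lam/p)||x||_p^p with its gradient; the
    smoothing eps keeps p=1 differentiable via |t| ~ sqrt(t^2 + eps)."""
    r = A @ x - b
    ar = np.sqrt(r ** 2 + eps)                # smoothed |residual|
    ax = np.sqrt(x ** 2 + eps)                # smoothed |x|
    cost = (ar ** q).sum() / q + lam * (ax ** p).sum() / p
    grad = A.T @ (ar ** (q - 2) * r) + lam * ax ** (p - 2) * x
    return cost, grad

# usage and finite-difference check on a random problem
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
b = rng.standard_normal(20)
x = rng.standard_normal(10)
c0, g = lq_lp_cost_grad(x, A, b, lam=0.1)
h = 1e-6
e = np.zeros(10); e[3] = h
c1, _ = lq_lp_cost_grad(x + e, A, b, lam=0.1)   # (c1-c0)/h approximates g[3]
```

With the cost and gradient in hand, any quasi-Newton routine can drive the minimization, mirroring the role of the abstract's lm-BFGS solver.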

  20. Rescuing Computerized Testing by Breaking Zipf's Law.

    ERIC Educational Resources Information Center

    Wainer, Howard

    2000-01-01

    Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…

  1. Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction

    NASA Astrophysics Data System (ADS)

    Liang, Guanghui; Ren, Shangjie; Dong, Feng

    2017-07-01

The free-interface detection problem is commonly seen in industrial and biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, due to its ill-posed and nonlinear characteristics, the spatial resolution of EIT is low. To address this issue, an ultrasound guided EIT is proposed to directly reconstruct the geometric configuration of the target free-interface. In this method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted at the opposite side of the objective domain, and this position measurement is then used as prior information to guide the EIT-based free-interface reconstruction. During the process, a constrained least squares framework is used to fuse the information from the different measurement modalities, and the Lagrange multiplier-based Levenberg-Marquardt method is adopted to provide the iterative solution of the constrained optimization problem. The numerical results show that the proposed ultrasound guided EIT method for free-interface reconstruction is more accurate than the single modality method, especially when the number of valid electrodes is limited.
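The constrained least squares fusion step can be sketched in its linear form: minimize an EIT-like misfit subject to an equality constraint encoding the second modality's measurement, solved via the Lagrange multiplier (KKT) system. The matrices below are random stand-ins, not an EIT forward model.

```python
import numpy as np

def constrained_lsq(A, b, C, e):
    """Minimize ||A x - b||^2 subject to C x = e by solving the KKT system
    [[A^T A, C^T], [C, 0]] [x; lambda] = [A^T b; e]."""
    n, k = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T], [C, np.zeros((k, k))]])
    rhs = np.concatenate([A.T @ b, e])
    return np.linalg.solve(K, rhs)[:n]

# usage: random stand-ins for the misfit (A, b) and the fused constraint (C, e)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 8))
b = rng.standard_normal(30)
C = rng.standard_normal((2, 8))
e = rng.standard_normal(2)
x = constrained_lsq(A, b, C, e)   # satisfies C @ x = e exactly
```

The Lagrange multiplier block enforces the constraint exactly, which is why a position measurement from a second modality can be imposed as hard prior information rather than blended as a soft penalty.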

  2. Retrieval of LAI and leaf chlorophyll content from remote sensing data by agronomy mechanism knowledge to solve the ill-posed inverse problem

    NASA Astrophysics Data System (ADS)

    Li, Zhenhai; Nie, Chenwei; Yang, Guijun; Xu, Xingang; Jin, Xiuliang; Gu, Xiaohe

    2014-10-01

Leaf area index (LAI) and leaf chlorophyll content (LCC), as the two most important crop growth variables, are major considerations in management decisions, agricultural planning and policy making. Estimation of canopy biophysical variables from remote sensing data was investigated using a radiative transfer model. However, such inversion is inherently ill-posed, owing to the non-uniqueness of the inverse solution and to uncertainty in measurements and model assumptions. This study focused on the use of agronomy mechanism knowledge to constrain and filter the ill-posed inversion results. For this purpose, the inversion results obtained using the PROSAIL model alone (NAMK) and linked with agronomic mechanism knowledge (AMK) were compared. The results showed that AMK did not significantly improve the accuracy of LAI inversion: LAI was estimated with high accuracy, and there was no significant improvement after considering AMK. The validation results of the determination coefficient (R2) and the corresponding root mean square error (RMSE) between measured LAI and estimated LAI were 0.635 and 1.022 for NAMK, and 0.637 and 0.999 for AMK, respectively. LCC estimation was significantly improved with agronomy mechanism knowledge; the R2 and RMSE values were 0.377 and 14.495 μg cm-2 for NAMK, and 0.503 and 10.661 μg cm-2 for AMK, respectively. Results of the comparison demonstrated the need for agronomy mechanism knowledge in radiative transfer model inversion.

  3. Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning

    ERIC Educational Resources Information Center

    Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan

    2009-01-01

    In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…

  4. Non-ambiguous recovery of Biot poroelastic parameters of cellular panels using ultrasonic waves

    NASA Astrophysics Data System (ADS)

    Ogam, Erick; Fellah, Z. E. A.; Sebaa, Naima; Groby, J.-P.

    2011-03-01

The inverse problem of the recovery of the poroelastic parameters of open-cell soft plastic foam panels is solved by employing transmitted ultrasonic waves (USW) and the Biot-Johnson-Koplik-Champoux-Allard (BJKCA) model. By constructing the objective functional as the total square of the difference between predictions from the BJKCA interaction model and experimental data obtained with transmitted USW, it is shown that the inverse problem is ill-posed, since the functional exhibits several local minima and maxima. To solve this problem, which is beyond the capability of most off-the-shelf iterative nonlinear least squares optimization algorithms (such as the Levenberg-Marquardt or Nelder-Mead simplex methods), simple strategies are developed. The recovered acoustic parameters are compared with those obtained using simpler interaction models and a method employing the asymptotic phase velocity of the transmitted USW. The retrieved elastic moduli are validated by solving an inverse vibration spectroscopy problem with data obtained from beam-like specimens cut from the panels, using an equivalent solid elastodynamic model as estimator. The phase velocities are reconstructed using computed and measured resonance frequencies and a time-frequency decomposition of transient waves induced in the beam specimen. These confirm that the elastic parameters recovered using vibration are valid over the frequency range of study.

  5. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill-posed. Therefore, the stability of reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. 
We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
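The source of the instability, fitting a sum of closely spaced exponentials, can be reproduced in a toy linear setting: fixed decay rates with fitted amplitudes, not the authors' two-level model or simulated-annealing reconstruction; the rates, amplitudes, and noise level are invented.

```python
import numpy as np

# Design matrix of decaying exponentials with closely spaced rates: the
# columns are nearly collinear, so the amplitude-fitting problem is badly
# conditioned and noise in the data destabilizes the naive fit.
t = np.linspace(0.0, 5.0, 50)
rates = np.array([0.9, 1.0, 1.1])                 # closely spaced decay rates
E = np.exp(-np.outer(t, rates))                   # 50 x 3 design matrix
condE = np.linalg.cond(E)                         # large condition number

c_true = np.array([1.0, 0.5, 0.8])
rng = np.random.default_rng(0)
y = E @ c_true + 1e-4 * rng.standard_normal(t.size)

c_ls = np.linalg.lstsq(E, y, rcond=None)[0]       # unregularized amplitudes
alpha = 1e-5                                      # Tikhonov parameter
c_reg = np.linalg.solve(E.T @ E + alpha * np.eye(3), E.T @ y)
```

The regularized normal equations damp the nearly null direction of the design matrix, which is the linear-algebra analogue of the variational stabilization described in the abstract.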

  6. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that include the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and of the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise, the conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
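The quasi-optimality principle itself is easy to state for ordinary Tikhonov regularization: over a geometric grid of parameters, pick the one where the regularized solution changes least between neighbouring grid points. Below is a minimal sketch on an invented diagonal problem, not the paper's linear functional strategy.

```python
import numpy as np

def quasi_optimality(A, y, alphas):
    """Heuristic quasi-optimality rule for Tikhonov regularization: pick the
    alpha minimizing ||x_{alpha_{i+1}} - x_{alpha_i}|| over a geometric grid."""
    n = A.shape[1]
    xs = [np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ y) for a in alphas]
    jumps = [np.linalg.norm(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    i = int(np.argmin(jumps))
    return alphas[i], xs[i]

# usage: a mildly ill-posed diagonal problem with additive noise
rng = np.random.default_rng(0)
n = 50
A = np.diag(1.0 / (1.0 + np.arange(n)))           # singular values ~ 1/j
x_true = np.ones(n)
y = A @ x_true + 1e-3 * rng.standard_normal(n)
alphas = np.geomspace(1e-10, 1.0, 40)
alpha, x_alpha = quasi_optimality(A, y, alphas)
```

The rule is heuristic: it needs no knowledge of the noise level, which is exactly why the paper has to impose structural noise conditions to prove convergence rates for it.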

  7. Numerical treatment of a geometrically nonlinear planar Cosserat shell model

    NASA Astrophysics Data System (ADS)

    Sander, Oliver; Neff, Patrizio; Bîrsan, Mircea

    2016-05-01

    We present a new way to discretize a geometrically nonlinear elastic planar Cosserat shell. The kinematical model is similar to the general six-parameter resultant shell model with drilling rotations. The discretization uses geodesic finite elements (GFEs), which leads to an objective discrete model which naturally allows arbitrarily large rotations. GFEs of any approximation order can be constructed. The resulting algebraic problem is a minimization problem posed on a nonlinear finite-dimensional Riemannian manifold. We solve this problem using a Riemannian trust-region method, which is a generalization of Newton's method that converges globally without intermediate loading steps. We present the continuous model and the discretization, discuss the properties of the discrete model, and show several numerical examples, including wrinkling of thin elastic sheets in shear.

  8. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg

We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. 
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution, particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  9. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    DOE PAGES

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; ...

    2016-07-13

We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection–diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov–Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. 
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems – i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian – we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. Here, we show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution, particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.

  10. Inversion of geothermal heat flux in a thermomechanically coupled nonlinear Stokes ice sheet model

    NASA Astrophysics Data System (ADS)

    Zhu, Hongyu; Petra, Noemi; Stadler, Georg; Isaac, Tobin; Hughes, Thomas J. R.; Ghattas, Omar

    2016-07-01

    We address the inverse problem of inferring the basal geothermal heat flux from surface velocity observations using a steady-state thermomechanically coupled nonlinear Stokes ice flow model. This is a challenging inverse problem since the map from basal heat flux to surface velocity observables is indirect: the heat flux is a boundary condition for the thermal advection-diffusion equation, which couples to the nonlinear Stokes ice flow equations; together they determine the surface ice flow velocity. This multiphysics inverse problem is formulated as a nonlinear least-squares optimization problem with a cost functional that includes the data misfit between surface velocity observations and model predictions. A Tikhonov regularization term is added to render the problem well posed. We derive adjoint-based gradient and Hessian expressions for the resulting partial differential equation (PDE)-constrained optimization problem and propose an inexact Newton method for its solution. As a consequence of the Petrov-Galerkin discretization of the energy equation, we show that discretization and differentiation do not commute; that is, the order in which we discretize the cost functional and differentiate it affects the correctness of the gradient. Using two- and three-dimensional model problems, we study the prospects for and limitations of the inference of the geothermal heat flux field from surface velocity observations. The results show that the reconstruction improves as the noise level in the observations decreases and that short-wavelength variations in the geothermal heat flux are difficult to recover. We analyze the ill-posedness of the inverse problem as a function of the number of observations by examining the spectrum of the Hessian of the cost functional. 
Motivated by the popularity of operator-split or staggered solvers for forward multiphysics problems - i.e., those that drop two-way coupling terms to yield a one-way coupled forward Jacobian - we study the effect on the inversion of a one-way coupling of the adjoint energy and Stokes equations. We show that taking such a one-way coupled approach for the adjoint equations can lead to an incorrect gradient and premature termination of optimization iterations. This is due to loss of a descent direction stemming from inconsistency of the gradient with the contours of the cost functional. Nevertheless, one may still obtain a reasonable approximate inverse solution particularly if important features of the reconstructed solution emerge early in optimization iterations, before the premature termination.
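The gradient-consistency failure described above (a gradient no longer aligned with the contours of the cost functional) is usually diagnosed with a finite-difference check. The sketch below is a generic illustration on a toy quadratic misfit-plus-Tikhonov cost, not the authors' Stokes-energy system; the names `cost`, `gradient`, and the regularization weight are hypothetical stand-ins:

```python
import numpy as np

ALPHA = 1e-2  # hypothetical Tikhonov weight

def cost(m, A, d):
    # Toy "data misfit + Tikhonov" cost: 0.5||Am - d||^2 + 0.5*alpha*||m||^2
    r = A @ m - d
    return 0.5 * r @ r + 0.5 * ALPHA * m @ m

def gradient(m, A, d):
    # Analytic (adjoint-based) gradient of the cost above
    return A.T @ (A @ m - d) + ALPHA * m

def gradient_check(m, A, d, eps=1e-6):
    """Relative error between the analytic gradient and central finite
    differences; a large value signals an inconsistent (e.g. one-way
    coupled) gradient."""
    g = gradient(m, A, d)
    g_fd = np.zeros_like(m)
    for i in range(m.size):
        e = np.zeros_like(m)
        e[i] = eps
        g_fd[i] = (cost(m + e, A, d) - cost(m - e, A, d)) / (2 * eps)
    return np.linalg.norm(g - g_fd) / np.linalg.norm(g_fd)

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
d = rng.standard_normal(8)
m = rng.standard_normal(5)
rel_err = gradient_check(m, A, d)
```

For a consistent gradient the relative error is at the finite-difference noise floor; a one-way coupled adjoint would leave an O(1) discrepancy here.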

  11. Solution of Nonlinear Systems

    NASA Technical Reports Server (NTRS)

    Turner, L. R.

    1960-01-01

    The problem of solving systems of nonlinear equations has been relatively neglected in the mathematical literature, especially in the textbooks, in comparison to the corresponding linear problem. Moreover, treatments that have an appearance of generality fail to discuss the nature of the solutions and the possible pitfalls of the methods suggested. Probably it is unrealistic to expect that a unified and comprehensive treatment of the subject will evolve, owing to the great variety of situations possible, especially in the applied field where some requirement of human or mechanical efficiency is always present. Therefore we attempt here simply to pose the problem and to describe and partially appraise the methods of solution currently in favor.
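A minimal illustration of the standard workhorse for such systems, Newton's method with an analytic Jacobian, applied to a hypothetical 2x2 example (the system, starting point, and tolerances below are illustrative, not from the report):

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, maxit=50):
    """Newton's method for F(x) = 0 with analytic Jacobian J.
    Converges quadratically near a root but, as the text warns,
    depends critically on the initial guess."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        dx = np.linalg.solve(J(x), -F(x))  # Newton step: J dx = -F
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example system: x^2 + y^2 = 4, x*y = 1 (four real solutions exist;
# which one is found depends on the starting point)
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
root = newton(F, J, [2.0, 0.3])
```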

  12. Proceedings of Colloquium on Stable Solutions of Some Ill-Posed Problems, October 9, 1979.

    DTIC Science & Technology

    1980-06-30

    4. In [24] iterative process (9) was applied for calculation of the magnetization of thin magnetic films . This problem is of interest for computer...equation ∫₁ (x−t)⁻¹ f(t) dt = g(x), x > 1. (1) Its multidimensional analogue ∫ |x−t|⁻¹ f(t) dt = g(x), x ∈ A, (2) can be interpreted as the problem of


  13. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with a sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
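A common way to realize Tikhonov-type regularization with a sparsity constraint is iterative soft-thresholding (ISTA). The sketch below recovers a synthetic sparse source vector from a random linear operator; the operator, dimensions, and regularization weight are illustrative stand-ins for an atmospheric transport model, not the authors' dictionary formulation:

```python
import numpy as np

def ista(A, y, lam, n_iter=5000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the misfit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L        # gradient step on the data misfit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))             # hypothetical observation operator
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [3.0, -2.0, 4.0]         # three sparse "point sources"
y = A @ x_true + 0.01 * rng.standard_normal(40)
x_hat = ista(A, y, lam=0.1)
```

The l1 penalty drives most coefficients exactly to zero, so the few localized sources survive while a Gaussian (l2) prior would smear them out.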

  14. A Tale of Three Cases: Examining Accuracy, Efficiency, and Process Differences in Diagnosing Virtual Patient Cases

    ERIC Educational Resources Information Center

    Doleck, Tenzin; Jarrell, Amanda; Poitras, Eric G.; Chaouachi, Maher; Lajoie, Susanne P.

    2016-01-01

    Clinical reasoning is a central skill in diagnosing cases. However, diagnosing a clinical case poses several challenges that are inherent to solving multifaceted ill-structured problems. In particular, when solving such problems, the complexity stems from the existence of multiple paths to arriving at the correct solution (Lajoie, 2003). Moreover,…

  15. Rapid optimization of multiple-burn rocket flights.

    NASA Technical Reports Server (NTRS)

    Brown, K. R.; Harrold, E. F.; Johnson, G. W.

    1972-01-01

    Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.

  16. Stabilization of the Inverse Laplace Transform of Multiexponential Decay through Introduction of a Second Dimension

    PubMed Central

    Celik, Hasan; Bouhrara, Mustapha; Reiter, David A.; Fishbein, Kenneth W.; Spencer, Richard G.

    2013-01-01

    We propose a new approach to stabilizing the inverse Laplace transform of a multiexponential decay signal, a classically ill-posed problem, in the context of nuclear magnetic resonance relaxometry. The method is based on extension to a second, indirectly detected, dimension, that is, use of the established framework of two-dimensional relaxometry, followed by projection onto the desired axis. Numerical results for signals comprised of discrete T1 and T2 relaxation components and experiments performed on agarose gel phantoms are presented. We find markedly improved accuracy, and stability with respect to noise, as well as insensitivity to regularization in quantifying underlying relaxation components through use of the two-dimensional as compared to the one-dimensional inverse Laplace transform. This improvement is demonstrated separately for two different inversion algorithms, nonnegative least squares and non-linear least squares, to indicate the generalizability of this approach. These results may have wide applicability in approaches to the Fredholm integral equation of the first kind. PMID:24035004
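For reference, the one-dimensional inverse Laplace transform via nonnegative least squares (one of the two inversion algorithms the abstract mentions) can be sketched as follows. The echo times, T2 grid, and component amplitudes are hypothetical, and the inversion is shown unregularized to expose the classical ill-posedness:

```python
import numpy as np
from scipy.optimize import nnls

# Simulated biexponential T2 decay: components at T2 = 30 ms and 120 ms
t = np.linspace(1e-3, 0.5, 64)                 # echo times (s)
signal = 0.6 * np.exp(-t / 0.030) + 0.4 * np.exp(-t / 0.120)

# Discretize the Laplace kernel on a log-spaced T2 grid
T2 = np.logspace(-2.5, 0, 80)                  # candidate T2 values (s)
K = np.exp(-t[:, None] / T2[None, :])          # kernel K[i, j] = exp(-t_i / T2_j)

# Nonnegative least-squares inversion; with noisy data the recovered
# spectrum becomes unstable without regularization or the second
# dimension advocated in the abstract
spectrum, resid = nnls(K, signal)
```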

  17. A combined reconstruction-classification method for diffuse optical tomography.

    PubMed

    Hiltunen, P; Prince, S J D; Arridge, S

    2009-11-07

    We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.

  18. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    PubMed

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.

  19. History of Physical Terms: "Pressure"

    ERIC Educational Resources Information Center

    Frontali, Clara

    2013-01-01

    Scientific terms drawn from common language are often charged with suggestions that may even be inconsistent with their restricted scientific meaning, thus posing didactic problems. The (non-linear) historical journey of the word "pressure" is illustrated here through original quotations from Stevinus, Torricelli, Pascal, Boyle,…

  20. Least Squares Computations in Science and Engineering

    DTIC Science & Technology

    1994-02-01

    iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct...optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The...engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory

  1. Regularized two-step brain activity reconstruction from spatiotemporal EEG data

    NASA Astrophysics Data System (ADS)

    Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry

    2004-10-01

    We aim to use EEG source localization in the framework of a Brain-Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.

  2. Time-Domain Impedance Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Auriault, Laurent

    1996-01-01

    It is an accepted practice in aeroacoustics to characterize the properties of an acoustically treated surface by a quantity known as impedance. Impedance is a complex quantity. As such, it is designed primarily for frequency-domain analysis. Time-domain boundary conditions that are the equivalent of the frequency-domain impedance boundary condition are proposed. Both single frequency and model broadband time-domain impedance boundary conditions are provided. It is shown that the proposed boundary conditions, together with the linearized Euler equations, form well-posed initial boundary value problems. Unlike ill-posed problems, they are free from spurious instabilities that would render time-marching computational solutions impossible.

  3. A genetic algorithm approach to estimate glacier mass variations from GRACE data

    NASA Astrophysics Data System (ADS)

    Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten

    2017-04-01

    The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method in terms of an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point-masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides from the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that the mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules like, e.g., the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by introducing regularization to the overall optimization problem. Based on this novel approach, estimations of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.
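A minimal real-coded genetic algorithm of the kind described, with the regularization term folded directly into the fitness function, might look as follows. The operators (tournament selection, blend crossover, Gaussian mutation, elitism) and the toy objective are illustrative, not the authors' point-mass formulation:

```python
import numpy as np

def genetic_minimize(f, bounds, pop_size=60, n_gen=120, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism (best individual always survives)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
    for _ in range(n_gen):
        fit = np.array([f(p) for p in pop])
        # Binary tournament selection of parents
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # Blend crossover between random parent pairs, then Gaussian mutation
        alpha = rng.uniform(size=pop.shape)
        children = alpha * parents + (1 - alpha) * parents[rng.permutation(pop_size)]
        children += 0.02 * (hi - lo) * rng.standard_normal(pop.shape)
        children = np.clip(children, lo, hi)
        children[0] = pop[np.argmin(fit)]      # elitism
        pop = children
    fit = np.array([f(p) for p in pop])
    return pop[np.argmin(fit)]

# Regularized toy objective: data misfit plus a penalty inside the fitness,
# mimicking "introducing regularization to the overall optimization problem"
target = np.array([1.5, -0.5, 2.0])
obj = lambda m: np.sum((m - target) ** 2) + 0.01 * np.sum(m ** 2)
best = genetic_minimize(obj, (np.full(3, -5.0), np.full(3, 5.0)))
```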

  4. Optimal control in adaptive optics modeling of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Herrmann, J.

    The problem of using an adaptive optics system to correct for nonlinear effects like thermal blooming is addressed using a model containing nonlinear lenses through which Gaussian beams are propagated. The best correction of this nonlinear system can be formulated as a deterministic open loop optimal control problem. This treatment gives a limit for the best possible correction. Aspects of adaptive control and servo systems are not included at this stage. An attempt is made to determine that control in the transmitter plane which minimizes the time averaged area or maximizes the fluence in the target plane. The standard minimization procedure leads to a two-point-boundary-value problem, which is ill-conditioned in this case. The optimal control problem was solved using an iterative gradient technique. An instantaneous correction is introduced and compared with the optimal correction. The results of the calculations show that for short times or weak nonlinearities the instantaneous correction is close to the optimal correction, but that for long times and strong nonlinearities a large difference develops between the two types of correction. For these cases the steady state correction becomes better than the instantaneous correction and approaches the optimum correction.

  5. Chemical approaches to solve mycotoxin problems and improve food safety

    USDA-ARS?s Scientific Manuscript database

    Foodborne illnesses are experienced by most of the population and are preventable. Agricultural produce can occasionally become contaminated with fungi capable of making mycotoxins that pose health risks and reduce values. Many strategies are employed to keep food safe from mycotoxin contamination. ...

  6. An ill-posed problem for the Black-Scholes equation for a profitable forecast of prices of stock options on real market data

    NASA Astrophysics Data System (ADS)

    Klibanov, Michael V.; Kuzhuget, Andrey V.; Golubnichiy, Kirill V.

    2016-01-01

    A new empirical mathematical model for the Black-Scholes equation is proposed to forecast option prices. This model includes a new interval for the price of the underlying stock and new initial and boundary conditions. Conventional notions of maturity time and strike price are not used. The Black-Scholes equation is solved as a parabolic equation with reversed time, which is an ill-posed problem; thus, a regularization method is used to solve it. To verify the validity of our model, real market data for 368 randomly selected liquid options are used. A new trading strategy is proposed. Our results indicate that our method is profitable on those options. Furthermore, it is shown that the performance of two simple extrapolation-based techniques is much worse. We conjecture that our method might lead to significant profits for those financial institutions which trade large amounts of options. We caution, however, that further studies are necessary to verify this conjecture.

  7. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

    When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.

  8. An overview of adaptive model theory: solving the problems of redundancy, resources, and nonlinear interactions in human movement control.

    PubMed

    Neilson, Peter D; Neilson, Megan D

    2005-09-01

    Adaptive model theory (AMT) is a computational theory that addresses the difficult control problem posed by the musculoskeletal system in interaction with the environment. It proposes that the nervous system creates motor maps and task-dependent synergies to solve the problems of redundancy and limited central resources. These lead to the adaptive formation of task-dependent feedback/feedforward controllers able to generate stable, noninteractive control and render nonlinear interactions unobservable in sensory-motor relationships. AMT offers a unified account of how the nervous system might achieve these solutions by forming internal models. This is presented as the design of a simulator consisting of neural adaptive filters based on cerebellar circuitry. It incorporates a new network module that adaptively models (in real time) nonlinear relationships between inputs with changing and uncertain spectral and amplitude probability density functions as is the case for sensory and motor signals.

  9. Asymptotic analysis of the local potential approximation to the Wetterich equation

    NASA Astrophysics Data System (ADS)

    Bender, Carl M.; Sarkar, Sarben

    2018-06-01

    This paper reports a study of the nonlinear partial differential equation that arises in the local potential approximation to the Wetterich formulation of the functional renormalization group equation. A cut-off-dependent shift of the potential in this partial differential equation is performed. This shift allows a perturbative asymptotic treatment of the differential equation for large values of the infrared cut-off. To leading order in perturbation theory the differential equation becomes a heat equation, where the sign of the diffusion constant changes as the space-time dimension D passes through 2. When D < 2, one obtains a forward heat equation whose initial-value problem is well-posed. However, for D > 2 one obtains a backward heat equation whose initial-value problem is ill-posed. For the special case D = 1 the asymptotic series for cubic and quartic models is extrapolated to the small infrared-cut-off limit by using Padé techniques. The effective potential thus obtained from the partial differential equation is then used in a Schrödinger-equation setting to study the stability of the ground state. For cubic potentials it is found that this Padé procedure distinguishes between a PT-symmetric theory and a conventional Hermitian theory (g real). For a PT-symmetric theory the effective potential is nonsingular and has a stable ground state, but for a conventional theory the effective potential is singular. For a conventional Hermitian theory and a PT-symmetric theory (g > 0) the results are similar; the effective potentials in both cases are nonsingular and possess stable ground states.
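The forward/backward dichotomy in the abstract has a one-line Fourier explanation: a heat-equation mode of wavenumber k evolves by exp(-D k^2 t), so reversing time turns damping into exponential amplification of high wavenumbers, which is the source of the ill-posedness. A numerical illustration (the D, t, and k values are arbitrary):

```python
import numpy as np

# Heat equation u_t = D * u_xx on a periodic domain: Fourier mode k
# evolves by exp(-D k^2 t). Forward in time (D > 0) high wavenumbers
# are damped; backward in time the factor becomes exp(+D k^2 t), so
# tiny high-frequency perturbations are amplified without bound.
D, t = 1.0, 0.1
k = np.arange(1, 6)
forward = np.exp(-D * k**2 * t)    # monotonically damped in k
backward = np.exp(D * k**2 * t)    # monotonically amplified in k
```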

  10. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

    The inverse problem of recovering the initial condition from boundary values of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule and obtain a severely ill-conditioned system of linear equations that is sensitive to perturbations of the data: a tiny error in the right-hand-side data causes large oscillations in the solution, so good results cannot be obtained by traditional methods. In this paper, we solve this problem by the Tikhonov regularization method, and numerical simulations demonstrate that this method is feasible and effective.
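The pipeline just described (trapezoidal discretization of a Fredholm first-kind equation, followed by Tikhonov regularization) can be sketched generically. The Gaussian kernel, noise level, and regularization parameter below are illustrative, not those of the chord vibration problem:

```python
import numpy as np

# Discretize a Fredholm first-kind equation  int K(x,t) f(t) dt = g(x)
# by the trapezoidal rule, then compare naive and Tikhonov-regularized solves.
n = 60
t = np.linspace(0, 1, n)
w = np.full(n, t[1] - t[0])
w[[0, -1]] *= 0.5                                       # trapezoidal weights
K = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.02) * w  # smooth, ill-conditioned kernel
f_true = np.sin(np.pi * t)
g = K @ f_true + 1e-6 * np.random.default_rng(2).standard_normal(n)

cond = np.linalg.cond(K)                                # severe ill-conditioning
f_naive = np.linalg.solve(K, g)                         # amplifies the tiny noise
lam = 1e-6
f_tik = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)  # Tikhonov solution
```

Even with noise at the 1e-6 level the direct solve is dominated by amplified error, while the Tikhonov solution stays close to f_true.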

  11. Designing Adaptive Low Dissipative High Order Schemes

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.; Parks, John W. (Technical Monitor)

    2002-01-01

    Proper control of the numerical dissipation/filter to accurately resolve all relevant multiscales of complex flow problems while still maintaining nonlinear stability and efficiency for long-time numerical integrations poses a great challenge to the design of numerical methods. The required type and amount of numerical dissipation/filter are not only physical problem dependent, but also vary from one flow region to another. This is particularly true for unsteady high-speed shock/shear/boundary-layer/turbulence/acoustics interactions and/or combustion problems since the dynamics of the nonlinear effect of these flows are not well-understood. Even with extensive grid refinement, it is of paramount importance to have proper control on the type and amount of numerical dissipation/filter in regions where it is needed.

  12. A Flexible and Efficient Method for Solving Ill-Posed Linear Integral Equations of the First Kind for Noisy Data

    NASA Astrophysics Data System (ADS)

    Antokhin, I. I.

    2017-06-01

    We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.

  13. Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography.

    PubMed

    Yalavarthy, Phaneendra K; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2007-06-01

    Diffuse optical tomography (DOT) involves estimation of tissue optical properties using noninvasive boundary measurements. The image reconstruction procedure is a nonlinear, ill-posed, and ill-determined problem, so overcoming these difficulties requires regularization of the solution. While the methods developed for solving the DOT image reconstruction problem have a long history, there is less direct evidence on optimal regularization methods or on a common theoretical framework for techniques that use least-squares (LS) minimization. A generalized least-squares (GLS) method is discussed here, which incorporates the variances and covariances among the individual data points and the optical properties in the image into a structured weight matrix. It is shown that most of the least-squares techniques applied in DOT can be considered special cases of this more general LS approach. The performance of three minimization techniques using the same implementation scheme is compared using test problems with increasing noise level and increasing complexity within the imaging field. Techniques that use spatial-prior information as constraints can also be incorporated into the GLS formalism. It is also illustrated that inclusion of spatial priors reduces the image error by at least a factor of 2. The improvement of GLS minimization is even more apparent when the noise level in the data is high (as high as 10%), indicating that the benefits of this approach are important for reconstruction of data in a routine setting where the data variance can be known based upon the signal-to-noise properties of the instruments.
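In the simplest diagonal case, the structured-weight-matrix idea reduces to the familiar GLS normal equations x = (J'WJ + lam*L'L)^(-1) J'Wy, with W the inverse data covariance. A sketch with hypothetical dimensions and noise levels (a random linear model, not the DOT forward operator):

```python
import numpy as np

rng = np.random.default_rng(3)
J = rng.standard_normal((50, 10))              # hypothetical Jacobian
x_true = rng.standard_normal(10)
sigma = np.linspace(0.01, 0.2, 50)             # heteroscedastic noise levels
y = J @ x_true + sigma * rng.standard_normal(50)

W = np.diag(1.0 / sigma**2)                    # inverse data covariance (weight matrix)
L = np.eye(10)                                 # zeroth-order (identity) regularizer
lam = 1e-3
# GLS normal equations: (J' W J + lam L' L) x = J' W y
x_gls = np.linalg.solve(J.T @ W @ J + lam * L.T @ L, J.T @ W @ y)
x_ols = np.linalg.lstsq(J, y, rcond=None)[0]   # ordinary LS ignores the variances
```

Weighting by the inverse data covariance downweights the noisiest measurements, which is the mechanism behind the noise robustness reported in the abstract.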

  14. On the Use of Nonlinear Regularization in Inverse Methods for the Solar Tachocline Profile Determination

    NASA Astrophysics Data System (ADS)

    Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.

    Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.

  15. Adaptive relative pose control of spacecraft with model couplings and uncertainties

    NASA Astrophysics Data System (ADS)

    Sun, Liang; Zheng, Zewei

    2018-02-01

    The spacecraft pose tracking control problem for an uncertain pursuer approaching a space target is studied in this paper. After modeling the nonlinearly coupled dynamics of the relative translational and rotational motions between the two spacecraft, position tracking and attitude synchronization controllers are developed independently using a robust adaptive control approach. The unknown kinematic couplings, parametric uncertainties, and bounded external disturbances are handled with adaptive updating laws. It is proved via the Lyapunov method that the pose tracking errors converge to zero asymptotically. Spacecraft close-range rendezvous and proximity operations are introduced as an example to validate the effectiveness of the proposed control approach.

  16. Determination of the Geometric Form of a Plane of a Tectonic Gap as the Inverse Ill-posed Problem of Mathematical Physics

    NASA Astrophysics Data System (ADS)

    Sirota, Dmitry; Ivanov, Vadim

    2017-11-01

    Any mining operations that influence the stability of natural and technogenic massifs give rise to sources of differences of mechanical tension. These sources generate a quasistationary electric field with a Newtonian potential. The paper reviews a method of determining the shape and size of a flat source of a field with this kind of potential. This problem is common in many fields of mining: geological exploration of mineral resources, ore deposits, control of underground mining, determining coal self-heating sources, localization of the sources of rock cracks, and other applied problems of practical physics. These problems are inverse and ill-posed, and are solved by conversion to a Fredholm-Urysohn integral equation of the first kind. This equation is then solved by A. N. Tikhonov's regularization method.

  17. A Lower Bound for the Norm of the Solution of a Nonlinear Volterra Equation in One-Dimensional Viscoelasticity.

    DTIC Science & Technology

    1980-12-09

    Symp. on Non-well-posed Problems and Logarithmic Convexity (Lecture Notes in Math. #316), pp. 31-54, Springer, 1973. 3. Greenberg, J.M., MacCamy, R.C., "Continuous Data Dependence for an Abstract Volterra Integro-Differential Equation in Hilbert Space with Applications to Viscoelasticity", Annali Scuola... Hilbert Space", to appear in the J. Applicable Analysis. 8. Slemrod, M., "Instability of Steady Shearing Flows in a Nonlinear Viscoelastic Fluid", Arch

  18. Nonlinear features for classification and pose estimation of machined parts from single views

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-10-01

    A new nonlinear feature extraction method is presented for classification and pose estimation of objects from single views. The feature extraction method is called the maximum representation and discrimination feature (MRDF) method. The nonlinear MRDF transformations are obtained in closed form and offer significant advantages over nonlinear neural-network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We consider MRDFs on image data, provide a new two-stage nonlinear MRDF solution, and show that it specializes to well-known linear and nonlinear image processing transforms under certain conditions. We show the use of the MRDF in estimating the class and pose of images of rendered solid CAD models of machine parts from single views using a feature-space trajectory neural network classifier. We show new results with better classification and pose estimation accuracy than are achieved by standard principal component analysis and Fukunaga-Koontz feature extraction methods.

  19. Joint Model and Parameter Dimension Reduction for Bayesian Inversion Applied to an Ice Sheet Flow Problem

    NASA Astrophysics Data System (ADS)

    Ghattas, O.; Petra, N.; Cui, T.; Marzouk, Y.; Benjamin, P.; Willcox, K.

    2016-12-01

    Model-based projections of the dynamics of the polar ice sheets play a central role in anticipating future sea level rise. However, a number of mathematical and computational challenges place significant barriers on improving predictability of these models. One such challenge is caused by the unknown model parameters (e.g., in the basal boundary conditions) that must be inferred from heterogeneous observational data, leading to an ill-posed inverse problem and the need to quantify uncertainties in its solution. In this talk we discuss the problem of estimating the uncertainty in the solution of (large-scale) ice sheet inverse problems within the framework of Bayesian inference. Computing the general solution of the inverse problem--i.e., the posterior probability density--is intractable with current methods on today's computers, due to the expense of solving the forward model (3D full Stokes flow with nonlinear rheology) and the high dimensionality of the uncertain parameters (which are discretizations of the basal sliding coefficient field). To overcome these twin computational challenges, it is essential to exploit problem structure (e.g., sensitivity of the data to parameters, the smoothing property of the forward model, and correlations in the prior). To this end, we present a data-informed approach that identifies low-dimensional structure in both parameter space and the forward model state space. This approach exploits the fact that the observations inform only a low-dimensional parameter space and allows us to construct a parameter-reduced posterior. Sampling this parameter-reduced posterior still requires multiple evaluations of the forward problem, therefore we also aim to identify a low dimensional state space to reduce the computational cost. 
To this end, we apply a proper orthogonal decomposition (POD) approach to approximate the state using a low-dimensional manifold constructed from "snapshots" drawn from the parameter-reduced posterior, and the discrete empirical interpolation method (DEIM) to approximate the nonlinearity in the forward problem. We show that using only a limited number of forward solves, the resulting subspaces lead to an efficient method to explore the high-dimensional posterior.
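    The POD step described above can be sketched in a few lines: collect forward-model states as snapshot columns, take an SVD, and keep the leading modes up to an energy threshold. The snapshot data and threshold below are synthetic assumptions, not the ice sheet model:

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Return the leading POD modes capturing the given energy fraction.
    snapshots: (n_state, n_snapshots) matrix of forward-model states."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1       # smallest r reaching the threshold
    return U[:, :r], s

# synthetic snapshots lying (up to tiny noise) in a 3-dimensional subspace
rng = np.random.default_rng(4)
modes = rng.standard_normal((500, 3))
coeffs = rng.standard_normal((3, 40))
X = modes @ coeffs + 1e-6 * rng.standard_normal((500, 40))
Phi, s = pod_basis(X)
# reduced-order approximation of any state x: x ≈ Phi @ (Phi.T @ x)
```

    Projecting states onto the columns of Phi gives the low-dimensional state space on which the reduced forward solves operate.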

  20. Moving force identification based on modified preconditioned conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck to passing vehicles, which are known to be sensitive to the ill-posedness of the underlying inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j strongly influence the identification accuracy and noise immunity of M-PCG. Compared with the conventional SVD-based time domain method (TDM) and the standard form of CG, M-PCG with a proper regularization matrix has advantages such as better adaptability and greater robustness to ill-posedness. More importantly, the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, which makes M-PCG a preferred choice for field MFI applications.
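    As context for the M-PCG variant, the textbook preconditioned conjugate gradient iteration with a Jacobi (diagonal) preconditioner can be sketched as follows. This is the standard PCG baseline, not the paper's modified Gram-Schmidt version, and the matrix sizes and values are arbitrary:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
    """Preconditioned conjugate gradient for SPD A; M_inv applies the
    preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k + 1                         # converged
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p                   # new conjugate direction
        rz = rz_new
    return x, max_iter

rng = np.random.default_rng(0)
B = rng.standard_normal((40, 40))
A = B @ B.T + 40.0 * np.eye(40)                     # SPD, modestly conditioned
b = rng.standard_normal(40)
M_inv = lambda r: r / np.diag(A)                    # Jacobi (diagonal) preconditioner
x, iters = pcg(A, b, M_inv)
```

    A good preconditioner clusters the eigenvalues and cuts the iteration count, which is the quantity the record reports M-PCG reducing by over 70%.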

  1. General methodology for simultaneous representation and discrimination of multiple object classes

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    We address a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem and for classification and pose estimation of two similar objects under 3D aspect angle variations.

  2. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem as small noises in input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
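    The variable-splitting idea in this record can be illustrated with a small sketch: writing x = u - v with u, v >= 0 turns the non-differentiable L1 penalty into a linear term over a bound-constrained quadratic program, which projected gradient steps can solve. Everything below (problem sizes, the step rule, lam) is an illustrative assumption, not the authors' ECG implementation:

```python
import numpy as np

def l1_gradient_projection(A, b, lam, iters=1500):
    """Solve min ||A x - b||^2 + lam * ||x||_1 by splitting x = u - v with
    u, v >= 0, then running projected gradient on the bound-constrained QP."""
    n = A.shape[1]
    step = 1.0 / (4 * np.linalg.norm(A, 2) ** 2)   # 1 / Lipschitz constant of split problem
    u = np.zeros(n)
    v = np.zeros(n)
    for _ in range(iters):
        g = 2 * A.T @ (A @ (u - v) - b)            # gradient of the data-fit term
        u = np.maximum(u - step * (g + lam), 0.0)  # gradient step + projection onto u >= 0
        v = np.maximum(v - step * (-g + lam), 0.0) # gradient step + projection onto v >= 0
    return u - v

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[5, 20, 44]] = [1.5, -2.0, 1.0]             # sparse signed "potential" pattern
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = l1_gradient_projection(A, b, lam=0.2)
```

    The split keeps both signs representable, mirroring the record's observation that both positive and negative potentials exist on the epicardial surface.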

  3. Dynamic Experiment Design Regularization Approach to Adaptive Imaging with Array Radar/SAR Sensor Systems

    PubMed Central

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, a Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with reproducing kernel structures adapted to the metric of this solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is illustrated via numerical simulations. PMID:22163859

  4. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lawrence, Chris C.; Flaska, Marek; Pozzi, Sara A.

    2016-08-14

    Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as in other treaty-verification challenges.

  5. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    NASA Astrophysics Data System (ADS)

    Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.

    2016-08-01

    Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as in other treaty-verification challenges.

  6. The use of the Kalman filter in the automated segmentation of EIT lung images.

    PubMed

    Zifan, A; Liatsis, P; Chapman, B E

    2013-06-01

    In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering the impedance itself constitutes a nonlinear ill-posed inverse problem; therefore, the problem is usually linearized, which produces impedance-change images rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we augment the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to track the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated using performance statistics such as misclassified area and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
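    The tracking role of the Kalman filter in such a pipeline can be illustrated with a minimal constant-velocity filter smoothing one noisy boundary coordinate over a breathing-like cycle. The state model, noise levels, and signal are hypothetical and far simpler than the paper's contour tracker:

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.05):
    """Constant-velocity Kalman filter tracking one boundary coordinate per frame."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])          # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                      # we observe position only
    Q = q * np.eye(2)                               # process noise covariance
    R = np.array([[r]])                             # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

t = np.linspace(0, 2 * np.pi, 100)
true_pos = 10 + 2 * np.sin(t)                       # breathing-like boundary motion
noisy = true_pos + 0.3 * np.random.default_rng(3).standard_normal(100)
smoothed = kalman_track(noisy)
```

    The filter's predict/update cycle is what lets the segmentation follow the deforming lung boundary frame to frame despite noisy contour measurements.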

  7. Poisson-Nernst-Planck equations for simulating biomolecular diffusion-reaction processes I: Finite element solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu Benzhuo; Holst, Michael J.; Center for Theoretical Biological Physics, University of California San Diego, La Jolla, CA 92093

    2010-09-20

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for simulating electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.

  8. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes I: Finite Element Solutions

    PubMed Central

    Lu, Benzhuo; Holst, Michael J.; McCammon, J. Andrew; Zhou, Y. C.

    2010-01-01

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems. PMID:21709855

  9. Poisson-Nernst-Planck Equations for Simulating Biomolecular Diffusion-Reaction Processes I: Finite Element Solutions.

    PubMed

    Lu, Benzhuo; Holst, Michael J; McCammon, J Andrew; Zhou, Y C

    2010-09-20

    In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.

  10. Dual Super-Systolic Core for Real-Time Reconstructive Algorithms of High-Resolution Radar/SAR Imaging Systems

    PubMed Central

    Atoche, Alejandro Castillo; Castillo, Javier Vázquez

    2012-01-01

    A high-speed dual super-systolic core for reconstructive signal processing (SP) operations consists of a double parallel systolic array (SA) machine in which each processing element of the array is also conceptualized as another SA in a bit-level fashion. In this study, we addressed the design of a high-speed dual super-systolic array (SSA) core for the enhancement/reconstruction of remote sensing (RS) imaging of radar/synthetic aperture radar (SAR) sensor systems. The selected reconstructive SP algorithms are efficiently transformed in their parallel representation and then, they are mapped into an efficient high performance embedded computing (HPEC) architecture in reconfigurable Xilinx field programmable gate array (FPGA) platforms. As an implementation test case, the proposed approach was aggregated in a HW/SW co-design scheme in order to solve the nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) from a remotely sensed scene. We show how such dual SSA core, drastically reduces the computational load of complex RS regularization techniques achieving the required real-time operational mode. PMID:22736964

  11. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Also assessed are the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself.
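    The diagnostic described here, counting significant singular values of the weight matrix, can be sketched for a toy parallel-beam geometry with a square-pulse basis; comparing a full-angle and a limited-angle view set shows how the spectrum reflects conditioning. The grid size, tolerance, and geometry details are illustrative assumptions, not the paper's configurations:

```python
import numpy as np

def significant_singular_values(W, rel_tol=1e-3):
    """Count singular values above rel_tol * s_max: a proxy for the
    effective rank (condition) of a tomographic weight matrix."""
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0])), s

def weight_matrix(n, angles, rays=None):
    """Parallel-beam weight matrix on an n x n grid with square-pulse basis."""
    rays = rays or n
    yy, xx = np.meshgrid(np.arange(n) - n / 2 + 0.5, np.arange(n) - n / 2 + 0.5)
    rows = []
    for th in angles:
        proj = xx * np.cos(th) + yy * np.sin(th)    # coordinate along view direction
        for k in np.linspace(proj.min(), proj.max(), rays):
            rows.append((np.abs(proj - k) < 0.5).ravel().astype(float))
    return np.array(rows)

full = weight_matrix(16, np.linspace(0, np.pi, 18, endpoint=False))
limited = weight_matrix(16, np.linspace(0, np.pi / 4, 18, endpoint=False))
n_full, _ = significant_singular_values(full)
n_lim, _ = significant_singular_values(limited)
```

    The limited-angle geometry yields fewer significant singular values, i.e. a worse-conditioned system, matching the role of the total observation angle in the analysis above.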

  12. Steady induction effects in geomagnetism. Part 1B: Geomagnetic estimation of steady surficial core motions: A non-linear inverse problem

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1993-01-01

    The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.

  13. Obstructions to Existence in Fast-Diffusion Equations

    NASA Astrophysics Data System (ADS)

    Rodriguez, Ana; Vazquez, Juan L.

    The study of nonlinear diffusion equations produces a number of peculiar phenomena not present in the standard linear theory. Thus, in the sub-field of very fast diffusion it is known that the Cauchy problem can be ill-posed, either because of non-uniqueness, or because of non-existence of solutions with small data. The equations we consider take the general form u_t = (D(u, u_x) u_x)_x or its several-dimensional analogue. Fast diffusion means that D → ∞ at some values of the arguments, typically as u → 0 or u_x → 0. Here, we describe two different types of non-existence phenomena. Some fast-diffusion equations with very singular D do not allow for solutions with sign changes, while other equations admit only monotone solutions, no oscillations being allowed. The examples we give for both types of anomaly are closely related. The most typical examples are v_t = (v_x/|v|)_x and u_t = u_xx/|u_x|. For these equations, we investigate what happens to the Cauchy problem when we take incompatible initial data and perform a standard regularization. It is shown that the limit gives rise to an initial layer where the data become admissible (positive or monotone, respectively), followed by a standard evolution for all t > 0, once the obstruction has been removed.

  14. Visualizing the ill-posedness of the inversion of a canopy radiative transfer model: A case study for Sentinel-2

    NASA Astrophysics Data System (ADS)

    Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.

    2015-12-01

    Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. 
This study supports both educational and on-going research activities on inversion algorithms and might be useful to evaluate the uncertainties of retrieved canopy biophysical and biochemical state variables.
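    A minimal SOM trainer illustrates the mapping step of the approach above (the paper used a 200 × 125 map trained on Sentinel-2 simulations; the tiny grid, learning schedule, and two-cluster stand-in data here are placeholder assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, rows, cols, iters=2000, lr0=0.5):
    """Train a tiny self-organizing map; returns the (rows, cols, dim) weight grid."""
    dim = data.shape[1]
    sigma0 = max(rows, cols) / 2.0
    w = rng.random((rows, cols, dim))
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    for step in range(iters):
        x = data[rng.integers(len(data))]
        d = ((w - x) ** 2).sum(axis=2)              # distance from x to every unit
        bi, bj = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
        frac = step / iters
        lr = lr0 * (1.0 - frac)                     # decaying learning rate
        sigma = sigma0 * (1.0 - frac) + 1e-9        # shrinking neighborhood radius
        h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2.0 * sigma**2))
        w += lr * h[..., None] * (x - w)
    return w

def bmu(w, x):
    """Grid coordinates of the best-matching unit for input x."""
    d = ((w - x) ** 2).sum(axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

# two well-separated clusters stand in for distinct spectral signatures
data = np.vstack([
    rng.normal([0.0, 0.0], 0.05, size=(100, 2)),
    rng.normal([1.0, 1.0], 0.05, size=(100, 2)),
])
w = train_som(data, rows=6, cols=6)
```

    Projecting each input to its best-matching unit and then coloring the grid by a model input variable is what produces the per-variable maps whose noisiness signals ill-posedness.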

  15. Sensitivity analysis as an aid in modelling and control of (poorly-defined) ecological systems. [closed ecological systems

    NASA Technical Reports Server (NTRS)

    Hornberger, G. M.; Rastetter, E. B.

    1982-01-01

    A literature review of the use of sensitivity analyses in modelling nonlinear, ill-defined systems, such as ecological interactions, is presented. Discussions of previous work and a proposed scheme for generalized sensitivity analysis applicable to ill-defined systems are included. This scheme considers classes of mathematical models, problem-defining behavior, analysis procedures (especially the use of Monte-Carlo methods), sensitivity ranking of parameters, and extension to control system design.

  16. Nonlinear Deformation of a Piecewise Homogeneous Cylinder Under the Action of Rotation

    NASA Astrophysics Data System (ADS)

    Akhundov, V. M.; Kostrova, M. M.

    2018-05-01

    Deformation of a piecewise homogeneous cylinder under the action of rotation is investigated. The cylinder consists of an elastic matrix with circular fibers of square cross section made of a more rigid elastic material and arranged doubly periodically in the cylinder. The behavior of the cylinder under large displacements and deformations is examined using the equations of nonlinear elasticity theory for the cylinder constituents. The problem posed is solved by the finite-difference method using continuation with respect to the rotational speed of the cylinder.

  17. Joint recognition and discrimination in nonlinear feature space

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1997-09-01

    A new general method for linear and nonlinear feature extraction is presented. It is novel since it provides both representation and discrimination while most other methods are concerned with only one of these issues. We call this approach the maximum representation and discrimination feature (MRDF) method and show that the Bayes classifier and the Karhunen-Loeve transform are special cases of it. We refer to our nonlinear feature extraction technique as nonlinear eigen-feature extraction. It is new since it has a closed-form solution and produces nonlinear decision surfaces with higher rank than do iterative methods. Results on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem (discrimination) and to the classification and pose estimation of two similar objects (representation and discrimination).

  18. Adaptive nonlinear robust relative pose control of spacecraft autonomous rendezvous and proximity operations.

    PubMed

    Sun, Liang; Huo, Wei; Jiao, Zongxia

    2017-03-01

    This paper studies relative pose control for a rigid spacecraft with parametric uncertainties approaching an unknown tumbling target in a disturbed space environment. State feedback controllers for relative translation and relative rotation are designed in an adaptive nonlinear robust control framework. Element-wise and norm-wise adaptive laws are utilized to compensate the parametric uncertainties of the chaser and target spacecraft, respectively. External disturbances acting on the two spacecraft are treated as a lumped and bounded perturbation input to the system. To achieve the prescribed disturbance attenuation performance index, the feedback gains of the controllers are designed by solving linear matrix inequality problems so that lumped disturbance attenuation with respect to the controlled output is ensured in the L2-gain sense. Moreover, in the absence of the lumped disturbance input, asymptotic convergence of the relative pose is proved using the Lyapunov method. Numerical simulations show that position tracking and attitude synchronization are accomplished despite the presence of couplings and uncertainties. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Open problems of magnetic island control by electron cyclotron current drive

    DOE PAGES

    Grasso, Daniela; Lazzaro, E.; Borgogno, D.; ...

    2016-11-17

    This study reviews key aspects of the problem of magnetic island control by electron cyclotron current drive in fusion devices. On the basis of the ordering of the basic spatial and time scales of magnetic reconnection physics, we present the established results, highlighting some of the open issues posed by the small-scale structures that typically accompany the nonlinear evolution of magnetic islands and constrain the effect of the control action.

  20. Asymptotic stability of a nonlinear Korteweg-de Vries equation with critical lengths

    NASA Astrophysics Data System (ADS)

    Chu, Jixun; Coron, Jean-Michel; Shang, Peipei

    2015-10-01

    We study an initial-boundary-value problem for a nonlinear Korteweg-de Vries equation posed on the finite interval (0, 2kπ), where k is a positive integer. The system has a Dirichlet boundary condition at the left end-point, and both homogeneous Dirichlet and Neumann boundary conditions at the right end-point. It is known that the origin is not asymptotically stable for the system linearized around the origin. We prove that the origin is (locally) asymptotically stable for the nonlinear system if the integer k is such that the kernel of the linear stationary Korteweg-de Vries equation is of dimension 1. This is, for example, the case if k = 1.

  1. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, e.g. Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction, and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve this large-scale sparse deconvolution model, in which minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, small-to-medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
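    The l1-versus-l2 contrast described above can be sketched on a toy deconvolution problem. The snippet below is only a minimal illustration under assumed data (a hypothetical Gaussian "force-to-response" operator), and it substitutes ISTA, a much simpler l1 solver, for the paper's PDIPM; the point is merely that the l1 penalty yields a sparse force history where the l2 (Tikhonov) solution smears it.

```python
import numpy as np

def ista_l1(H, y, lam, n_iter=3000):
    """Minimize 0.5*||H f - y||^2 + lam*||f||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(H, 2) ** 2          # 1 / Lipschitz constant of the gradient
    f = np.zeros(H.shape[1])
    for _ in range(n_iter):
        f = f - step * (H.T @ (H @ f - y))          # gradient step on the data term
        f = np.sign(f) * np.maximum(np.abs(f) - step * lam, 0.0)  # soft threshold
    return f

# hypothetical blurred 'force-to-response' operator: a Gaussian smoothing matrix
n = 200
idx = np.arange(n)
H = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / 18.0)
f_true = np.zeros(n); f_true[30] = 1.0; f_true[120] = 0.5    # two sparse impacts
rng = np.random.default_rng(0)
y = H @ f_true + 0.01 * rng.standard_normal(n)

f_l1 = ista_l1(H, y, lam=0.05)
f_l2 = np.linalg.solve(H.T @ H + 0.05 * np.eye(n), H.T @ y)  # Tikhonov (l2) baseline
print("entries above 0.05:", int((np.abs(f_l1) > 0.05).sum()),
      "(l1) vs", int((np.abs(f_l2) > 0.05).sum()), "(l2)")
```

    The l2 reconstruction is dense and smeared around each impact, while the l1 reconstruction concentrates on a few samples near the true impact instants.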

  2. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    NASA Technical Reports Server (NTRS)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid methods that are permitted by the implementation will be described. Results will be presented for a number of problems.

  3. TOPICAL REVIEW: The stability for the Cauchy problem for elliptic equations

    NASA Astrophysics Data System (ADS)

    Alessandrini, Giovanni; Rondi, Luca; Rosset, Edi; Vessella, Sergio

    2009-12-01

    We discuss the ill-posed Cauchy problem for elliptic equations, which is pervasive in inverse boundary value problems modeled by elliptic equations. We provide essentially optimal stability results, in wide generality and under substantially minimal assumptions. As a general scheme in our arguments, we show that all such stability results can be derived by the use of a single building brick, the three-spheres inequality. Due to the current absence of research funding from the Italian Ministry of University and Research, this work has been completed without any financial support.

  4. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    NASA Technical Reports Server (NTRS)

    Oliver, A Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of specifying boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation nuances will be discussed, and alternative hybrid methods that are permitted by the implementation will be described. Results will be presented for a number of one-dimensional and multi-dimensional problems.

  5. Planning nonlinear access paths for temporal bone surgery.

    PubMed

    Fauser, Johannes; Sakas, Georgios; Mukhopadhyay, Anirban

    2018-05-01

    Interventions at the otobasis operate in the narrow region of the temporal bone where several highly sensitive organs define obstacles with minimal clearance for surgical instruments. Nonlinear trajectories for potential minimally invasive interventions can provide larger distances to risk structures and optimized orientations of surgical instruments, thus improving clinical outcomes when compared to existing linear approaches. In this paper, we present fast and accurate planning methods for such nonlinear access paths. We define a specific motion planning problem in [Formula: see text] with notable constraints in computation time and goal pose that reflect the requirements of temporal bone surgery. We then present [Formula: see text]-RRT-Connect: two suitable motion planners based on bidirectional Rapidly-exploring Random Trees (RRT) to solve this problem efficiently. The benefits of [Formula: see text]-RRT-Connect are demonstrated on real CT data of patients. Their general performance is shown on a large set of realistic synthetic anatomies. We also show that these new algorithms outperform state-of-the-art methods based on circular arcs or Bézier splines when applied to this specific problem. With this work, we demonstrate that preoperative and intra-operative planning of nonlinear access paths is possible for minimally invasive surgeries at the otobasis.
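    As a concrete (if drastically simplified) picture of sampling-based planning, the sketch below grows a single RRT in a 2-D square world with one circular obstacle. It is not the paper's [Formula: see text]-RRT-Connect operating in the pose space of a surgical instrument; the world bounds, obstacle, step size and goal bias are all hypothetical choices made for illustration.

```python
import math
import random

def collides(p, q, obstacle=((5.0, 5.0), 1.5), steps=20):
    """Check the segment p-q against a single circular obstacle (centre, radius)."""
    (cx, cy), r = obstacle
    for s in range(steps + 1):
        t = s / steps
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if math.hypot(x - cx, y - cy) < r:
            return True
    return False

def rrt(start, goal, step=0.5, max_iter=20000, goal_bias=0.1, seed=0):
    """Minimal single-tree RRT in the square [0, 10] x [0, 10]."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        sample = goal if rng.random() < goal_bias else (rng.uniform(0, 10), rng.uniform(0, 10))
        i_near = min(range(len(nodes)), key=lambda i: math.dist(nodes[i], sample))
        near = nodes[i_near]
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(near, new):
            continue                      # extension would clip the obstacle
        parent[len(nodes)] = i_near
        nodes.append(new)
        if math.dist(new, goal) < step and not collides(new, goal):
            path, i = [goal], len(nodes) - 1
            while i is not None:          # walk back to the root
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0))
```

    With a fixed seed the search is deterministic. A surgical planner would instead sample six-dimensional instrument poses, grow two trees (one from each end), and impose curvature constraints on every extension.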

  6. Nonlinear model updating applied to the IMAC XXXII Round Robin benchmark system

    NASA Astrophysics Data System (ADS)

    Kurt, Mehmet; Moore, Keegan J.; Eriten, Melih; McFarland, D. Michael; Bergman, Lawrence A.; Vakakis, Alexander F.

    2017-05-01

    We consider the application of a new nonlinear model updating strategy to a computational benchmark system. The approach relies on analyzing system response time series in the frequency-energy domain by constructing both Hamiltonian and forced-and-damped frequency-energy plots (FEPs). The system parameters are then characterized and updated by matching the backbone branches of the FEPs with the frequency-energy wavelet transforms of experimental and/or computational time series. The main advantage of this method is that no nonlinearity model is assumed a priori, and the system model is updated solely on the basis of simulated and/or experimentally measured time series. By matching the frequency-energy plots of the benchmark system and its reduced-order model, we show that we are able to retrieve the global strongly nonlinear dynamics in the frequency and energy ranges of interest, identify bifurcations, characterize local nonlinearities, and accurately reconstruct time series. We apply the proposed methodology to a benchmark problem which was posed to the system identification community prior to the IMAC XXXII (2014) and XXXIII (2015) Conferences as a "Round Robin Exercise on Nonlinear System Identification". We show that we are able to identify the parameters of the nonlinear element in the problem with a priori knowledge about its position.

  7. Regularization of nonlinear decomposition of spectral x-ray projection images.

    PubMed

    Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise

    2017-09-01

    Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) can recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of the algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster. For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10^5 and when the marker concentration was equal to or larger than 0.03 g·cm^-3. The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework in which projections are solved in parallel. © 2017 American Association of Physicists in Medicine.
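    The Gauss-Newton iteration at the heart of such a decomposition step can be sketched generically. The example below is not the paper's RWLS-GN (no spectral forward model, no weighting, no regularizer); it applies a plain, undamped Gauss-Newton update to a hypothetical exponential-decay model from a reasonable initial guess, which is enough to show the structure of the iteration.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, n_iter=20):
    """Plain Gauss-Newton: x <- x - (J^T J)^{-1} J^T r(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# toy nonlinear model y = A*exp(-k*t), standing in for a nonlinear forward map
t = np.linspace(0.0, 5.0, 50)
A_true, k_true = 2.0, 0.7
rng = np.random.default_rng(1)
y = A_true * np.exp(-k_true * t) + 0.01 * rng.standard_normal(t.size)

def residual(p):
    A, k = p
    return A * np.exp(-k * t) - y

def jacobian(p):
    A, k = p
    e = np.exp(-k * t)
    return np.column_stack([e, -A * t * e])   # d r / d(A, k)

A_hat, k_hat = gauss_newton(residual, jacobian, x0=[1.5, 0.5])
print(A_hat, k_hat)   # close to (2.0, 0.7)
```

    Because each step only requires solving a small linear system, fast linear solvers (as the abstract notes) carry most of the computational load.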

  8. Assessment of thyroid function in dogs with low plasma thyroxine concentration.

    PubMed

    Diaz Espineira, M M; Mol, J A; Peeters, M E; Pollak, Y W E A; Iversen, L; van Dijk, J E; Rijnberk, A; Kooistra, H S

    2007-01-01

    Differentiation between hypothyroidism and nonthyroidal illness in dogs poses specific problems, because plasma total thyroxine (TT4) concentrations are often low in nonthyroidal illness, and plasma thyroid-stimulating hormone (TSH) concentrations are frequently not high in primary hypothyroidism. The serum concentrations of the common basal biochemical variables (TT4, free T4 [fT4], and TSH) overlap between dogs with hypothyroidism and dogs with nonthyroidal illness, but with stimulation tests and quantitative measurement of thyroidal 99mTcO4(-) uptake, differentiation is possible. In 30 dogs with low plasma TT4 concentration, the final diagnosis was based upon histopathologic examination of thyroid tissue obtained by biopsy. Fourteen dogs had primary hypothyroidism, and 13 dogs had nonthyroidal illness. Two dogs had secondary hypothyroidism, and 1 dog had metastatic thyroid cancer. The diagnostic value was assessed for (1) plasma concentrations of TT4, fT4, and TSH; (2) the TSH-stimulation test; (3) plasma TSH concentration after stimulation with TSH-releasing hormone (TRH); (4) occurrence of thyroglobulin antibodies (TgAbs); and (5) thyroidal 99mTcO4(-) uptake. Plasma concentrations of TT4, fT4, TSH, and the hormone pairs TT4/TSH and fT4/TSH overlapped in the 2 groups, whereas with TgAbs there was 1 false-negative result. Results of the TSH- and TRH-stimulation tests did not meet earlier established diagnostic criteria, overlapped, or both. With quantitative measurement of thyroidal 99mTcO4(-) uptake, there was no overlap between dogs with primary hypothyroidism and dogs with nonthyroidal illness. The results of this study confirm earlier observations that, in dogs, accurate biochemical diagnosis of primary hypothyroidism poses specific problems. Previous studies in which the TSH-stimulation test was used as the "gold standard" for the diagnosis of hypothyroidism may have suffered from misclassification. Quantitative measurement of thyroidal 99mTcO4(-) uptake has the highest discriminatory power with regard to the differentiation between primary hypothyroidism and nonthyroidal illness.

  9. On Algorithms for Nonlinear Minimax and Min-Max-Min Problems and Their Efficiency

    DTIC Science & Technology

    2011-03-01

    ...balance the accuracy of the approximation with problem ill-conditioning. The simplest smoothing algorithm creates an accurate smooth approximating... ...sizing in electronic circuit boards (Chen & Fan, 1998), obstacle avoidance for robots (Kirjner-Neto & Polak, 1998), optimal design centering

  10. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for two-dimensional electric field distribution is developed to deal with the nonlinear inverse problem of electric field distribution in relation to its boundary condition and the problem of divergence due to errors introduced by the ill-conditioned sensitivity matrix and the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
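    The abstract's central numerical idea, that capping the number of conjugate-gradient iterations tames the errors injected by an ill-conditioned sensitivity matrix, can be demonstrated on a generic ill-conditioned linear system. The blur matrix and signal below are hypothetical stand-ins, not an EIT model; the "semi-convergence" behaviour they exhibit (early stopping beats full convergence on noisy data) is the point.

```python
import numpy as np

def cgnr(A, b, n_iter):
    """Conjugate gradients on the normal equations A^T A x = A^T b.
    Stopping after n_iter iterations acts as the regularizer."""
    x = np.zeros(A.shape[1])
    r = A.T @ b                      # normal-equations residual at x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A.T @ (A @ p)
        pAp = p @ Ap
        if pAp <= 0.0:               # numerical breakdown guard
            break
        alpha = rs / pAp
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# generic ill-conditioned system (Gaussian blur), standing in for the sensitivity matrix
n = 100
i = np.arange(n)
A = np.exp(-(i[:, None] - i[None, :]) ** 2 / 8.0)
x_true = np.sin(2 * np.pi * i / n) + 0.5
rng = np.random.default_rng(4)
b = A @ x_true + 0.01 * rng.standard_normal(n)

err_early = np.linalg.norm(cgnr(A, b, 8) - x_true)
err_late = np.linalg.norm(cgnr(A, b, 400) - x_true)
print(err_early, err_late)   # early stopping gives the smaller error
```

    Running the iteration "to convergence" inverts the tiny singular values of A and amplifies the measurement noise, whereas a handful of iterations reconstructs the smooth part of the solution and stops before the noise takes over.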

  11. A Unified Development of Basis Reduction Methods for Rotor Blade Analysis

    NASA Technical Reports Server (NTRS)

    Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)

    2001-01-01

    The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades: the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they cannot accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means of accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.

  12. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed, based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization, to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
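    The conversion of an ill-posed problem into a well-posed one by regularization can be shown in its simplest algebraic form. The sketch below uses a Hilbert matrix as a deliberately ill-conditioned stand-in for the reservoir sensitivity operator; it omits the paper's spline parameterization and conjugate-gradient minimization entirely.

```python
import numpy as np

n = 12
A = np.array([[1.0 / (r + c + 1) for c in range(n)] for r in range(n)])  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-8 * rng.standard_normal(n)       # tiny observation noise

x_naive = np.linalg.solve(A, b)                      # unregularized: noise blows up
lam = 1e-6
M = A.T @ A + lam * np.eye(n)                        # Tikhonov normal equations
x_tik = np.linalg.solve(M, A.T @ b)

print("cond(A) =", np.linalg.cond(A))
print("errors:", np.linalg.norm(x_naive - x_true), "vs", np.linalg.norm(x_tik - x_true))
```

    Adding lam*I bounds the smallest eigenvalue of the normal-equations matrix away from zero, so the regularized solve is stable even though the unregularized system is numerically singular.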

  13. The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.

    1997-01-01

    We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low-order perturbations. This analysis explains the stability problems associated with the split-field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing, independent of frequency and angle of incidence of the wave, in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques are combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through computation of benchmark problems in aero-acoustics.

  14. Convex Relaxation For Hard Problem In Data Mining And Sensor Localization

    DTIC Science & Technology

    2017-04-13

    ...Drusvyatskiy, S.A. Vavasis, and H. Wolkowicz. Extreme point inequalities and geometry of the rank sparsity ball. Math. Program., 152(1-2, Ser. A):521–544, 2015. [3] M-H. Lin and H. Wolkowicz. Hiroshima's theorem and matrix norm inequalities. Acta Sci. Math. (Szeged), 81(1-2):45–53, 2015. [4] D... ...9867-4. [8] D. Drusvyatskiy, G. Li, and H. Wolkowicz. Alternating projections for ill-posed semidefinite feasibility problems. Math. Program., 2016

  15. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    ...exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much... ...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  16. Human pose tracking from monocular video by traversing an image motion mapped body pose manifold

    NASA Astrophysics Data System (ADS)

    Basu, Saurav; Poulin, Joshua; Acton, Scott T.

    2010-01-01

    Tracking human pose from monocular video sequences is a challenging problem due to the large number of independent parameters affecting image appearance and nonlinear relationships between generating parameters and the resultant images. Unlike the current practice of fitting interpolation functions to point correspondences between underlying pose parameters and image appearance, we exploit the relationship between pose parameters and image motion flow vectors in a physically meaningful way. Change in image appearance due to pose change is realized as navigating a low dimensional submanifold of the infinite dimensional Lie group of diffeomorphisms of the two dimensional sphere S2. For small changes in pose, image motion flow vectors lie on the tangent space of the submanifold. Any observed image motion flow vector field is decomposed into the basis motion vector flow fields on the tangent space and combination weights are used to update corresponding pose changes in the different dimensions of the pose parameter space. Image motion flow vectors are largely invariant to style changes in experiments with synthetic and real data where the subjects exhibit variation in appearance and clothing. The experiments demonstrate the robustness of our method (within +/-4° of ground truth) to style variance.

  17. Intrinsic nonlinearity and method of disturbed observations in inverse problems of celestial mechanics

    NASA Astrophysics Data System (ADS)

    Avdyushev, Victor A.

    2017-12-01

    Orbit determination from a small sample of observations over a very short observed orbital arc is a strongly nonlinear inverse problem. In such problems an evaluation of the orbital uncertainty due to random observation errors is greatly complicated, since the linear estimates conventionally used are no longer acceptable for describing the uncertainty even as a rough approximation. Nevertheless, if an inverse problem is weakly intrinsically nonlinear, one can resort to the so-called method of disturbed observations (also known as observational Monte Carlo). Previously, we showed that the weaker the intrinsic nonlinearity, the more efficient the method, i.e. the more accurately it enables one to simulate the orbital uncertainty stochastically, while it is strictly exact only when the problem is intrinsically linear. Moreover, as we ascertained experimentally, its efficiency was found to be higher than that of other stochastic methods widely applied in practice. In the present paper we investigate the intrinsic nonlinearity in complicated inverse problems of celestial mechanics in which orbits are determined from scarcely informative samples of observations, as typically occurs for recently discovered asteroids. To examine the question, we introduce an index of intrinsic nonlinearity. In asteroid problems it shows that the intrinsic nonlinearity can be strong enough to appreciably affect probabilistic estimates, especially for the very short observed orbital arcs that the asteroids travel in about a hundredth of their orbital periods or less. As is known from regression analysis, the source of intrinsic nonlinearity is the nonflatness of the estimation subspace specified by a dynamical model in the observation space. Our numerical results indicate that, when determining asteroid orbits, this nonflatness is actually very slight; in the parametric space, however, the effect of intrinsic nonlinearity is exaggerated mainly by the ill-conditioning of the inverse problem. Even so, we conclude that the method of disturbed observations should in practice still be entirely acceptable for describing the orbital uncertainty since, from a geometrical point of view, the efficiency of the method depends directly only on the nonflatness of the estimation subspace, and it increases as the nonflatness decreases.
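    The mechanics of the method of disturbed observations can be sketched on an intrinsically linear toy problem, where the method is exact. The "orbit model" below is just a straight-line fit (a hypothetical stand-in for a real orbit-determination model), so the Monte Carlo covariance can be checked against the linear-theory covariance sigma^2 (X^T X)^{-1}.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 30)
X = np.column_stack([np.ones_like(t), t])     # linear 'orbit model' (intercept, slope)
beta_true = np.array([1.0, 2.0])
sigma = 0.05                                  # observation-error standard deviation
y = X @ beta_true + sigma * rng.standard_normal(t.size)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

# method of disturbed observations: re-fit to noise-perturbed copies of the data
samples = np.array([
    np.linalg.lstsq(X, y + sigma * rng.standard_normal(t.size), rcond=None)[0]
    for _ in range(4000)
])
cov_mc = np.cov(samples.T)                    # empirical parameter covariance
cov_lin = sigma**2 * np.linalg.inv(X.T @ X)   # linear theory: exact for a linear model
print(cov_mc)
print(cov_lin)                                # the two covariances agree closely
```

    For a genuinely nonlinear orbit model the re-fits would be nonlinear least-squares solves, and the agreement with linear theory would degrade exactly as the intrinsic nonlinearity grows.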

  18. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving much fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point positions coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE

  19. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one often needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
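    A minimal Bayesian-regularization sketch in the spirit of this approach: Gaussian noise plus a Gaussian smoothness prior, whose MAP estimate solves a Tikhonov-type linear system. The apparatus matrix, signal, and fixed prior weight below are all hypothetical; Turchin's method additionally treats the regularization strength statistically rather than fixing it by hand, as is done here.

```python
import numpy as np

n = 80
i = np.arange(n)
K = np.exp(-(i[:, None] - i[None, :]) ** 2 / 10.0)   # apparatus (convolution) matrix
phi_true = np.exp(-((i - 40) / 8.0) ** 2)            # smooth 'true' signal
rng = np.random.default_rng(5)
f = K @ phi_true + 0.01 * rng.standard_normal(n)     # measured, convoluted signal

L = np.diff(np.eye(n), 2, axis=0)                    # second-difference (smoothness) operator
lam = 1e-3                                           # fixed prior weight (hand-picked here)
phi_map = np.linalg.solve(K.T @ K + lam * (L.T @ L), K.T @ f)   # MAP / Tikhonov estimate
phi_naive = np.linalg.solve(K, f)                    # direct inversion: ill-posed, unstable

print(np.linalg.norm(phi_map - phi_true), np.linalg.norm(phi_naive - phi_true))
```

    The direct inversion amplifies the noise through the near-zero singular values of the apparatus matrix, while the smoothness prior keeps the restored signal close to the truth.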

  20. Transition from the labor market: older workers and retirement.

    PubMed

    Peterson, Chris L; Murphy, Greg

    2010-01-01

    The new millennium has seen the projected growth of older populations as a source of many problems, not the least of which is how to sustain this increasingly aging population. Some decades ago, early retirement from work posed few problems for governments, but most nations are now trying to ensure that workers remain in the workforce longer. In this context, the role played by older employees can be affected by at least two factors: their productivity (or perceived productivity) and their acceptance by younger workers and management. If the goal of maintaining employees into older age is to be achieved and sustained, opportunities must be provided, for example, for more flexible work arrangements and more possibilities to pursue bridge employment (work after formal retirement). The retirement experience varies, depending on people's circumstances. Some people, for example, have retirement forced upon them by illness or injury at work, by ill-health (such as chronic illnesses), or by downsizing and associated redundancies. This article focuses on the problems and opportunities associated with working to an older age or leaving the workforce early, particularly due to factors beyond one's control.

  1. Potential challenges facing distributed leadership in health care: evidence from the UK National Health Service.

    PubMed

    Martin, Graeme; Beech, Nic; MacIntosh, Robert; Bushfield, Stacey

    2015-01-01

    The discourse of leaderism in health care has been a subject of much academic and practical debate. Recently, distributed leadership (DL) has been adopted as a key strand of policy in the UK National Health Service (NHS). However, there is some confusion over the meaning of DL and uncertainty over its application to clinical and non-clinical staff. This article examines the potential for DL in the NHS by drawing on qualitative data from three co-located health-care organisations that embraced DL as part of their organisational strategy. Recent theorising positions DL as a hybrid model combining focused and dispersed leadership; however, our data raise important challenges for policymakers and senior managers who are implementing such a leadership policy. We show that there are three distinct forms of disconnect and that these pose a significant problem for DL. However, we argue that instead of these disconnects posing a significant problem for the discourse of leaderism, they enable a fantasy of leadership that draws on and supports the discourse. © 2014 The Authors. Sociology of Health & Illness © 2014 Foundation for the Sociology of Health & Illness/John Wiley & Sons Ltd.

  2. Linear and nonlinear acoustic wave propagation in the atmosphere

    NASA Technical Reports Server (NTRS)

    Hariharan, S. I.; Yu, Ping

    1988-01-01

    The investigation of the acoustic wave propagation theory and its numerical implementation for the situation of an isothermal atmosphere is described. A one-dimensional model to validate an asymptotic theory and a 3-D model to relate to a realistic situation are considered. In addition, nonlinear wave propagation and its numerical treatment are included. It is known that gravitational effects play a crucial role in low-frequency acoustic wave propagation. Such waves propagate over large distances and, as such, the numerical treatment of these problems becomes difficult in terms of posing boundary conditions that are valid for all frequencies.

  3. Finite-horizon differential games for missile-target interception system using adaptive dynamic programming with input constraints

    NASA Astrophysics Data System (ADS)

    Sun, Jingliang; Liu, Chunsheng

    2018-01-01

In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via an adaptive dynamic programming technique. In addition, a suitable non-quadratic functional is utilised to encode the control constraints into the differential game problem. A single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of the associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the weight estimation error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated by using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.

  4. Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology

    NASA Astrophysics Data System (ADS)

    Barker, T.; Schaeffer, D. G.; Shearer, M.; Gray, J. M. N. T.

    2017-05-01

    Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities.

  5. Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology

    PubMed Central

    Schaeffer, D. G.; Shearer, M.; Gray, J. M. N. T.

    2017-01-01

    Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities. PMID:28588402

  6. Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology.

    PubMed

    Barker, T; Schaeffer, D G; Shearer, M; Gray, J M N T

    2017-05-01

Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities.

  7. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong anti-noise ability.
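    The key device in OWL-QN is the orthant-wise pseudo-gradient, which replaces the ordinary gradient wherever the ℓ1 term is non-differentiable. A minimal sketch of that component only (not the authors' FWI implementation; the gradient values `g` below are hypothetical):

    ```python
    import numpy as np

    def pseudo_gradient(x, grad_f, lam):
        """Orthant-wise pseudo-gradient of f(x) + lam * ||x||_1, as used by OWL-QN."""
        pg = np.zeros_like(x)
        pos, neg, zero = x > 0, x < 0, x == 0
        pg[pos] = grad_f[pos] + lam              # differentiable region: gradient of f + lam*sign(x)
        pg[neg] = grad_f[neg] - lam
        right = grad_f[zero] + lam               # directional derivative into the positive orthant
        left = grad_f[zero] - lam                # directional derivative into the negative orthant
        pg_zero = np.zeros(int(zero.sum()))
        pg_zero[right < 0] = right[right < 0]    # descent possible by moving positive
        pg_zero[left > 0] = left[left > 0]       # descent possible by moving negative
        pg[zero] = pg_zero                       # otherwise 0: x_i = 0 is already optimal
        return pg

    # Toy evaluation at a mixed-sign point with lam = 0.5.
    x = np.array([1.0, -1.0, 0.0, 0.0, 0.0])
    g = np.array([0.2, 0.2, -1.0, 1.0, 0.1])
    pg = pseudo_gradient(x, g, 0.5)
    ```

    The zero components illustrate the three cases: a coordinate is pushed positive, pushed negative, or left at zero when neither one-sided derivative offers descent.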

  8. Regularization strategies for hyperplane classifiers: application to cancer classification with gene expression data.

    PubMed

    Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl

    2007-02-01

    Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
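    The filter-factor representation the authors describe can be made concrete with Tikhonov regularization, whose SVD filter factors are f_i = σ_i²/(σ_i² + λ²). A small numerical sketch on a toy diagonal operator (not gene-expression data):

    ```python
    import numpy as np

    A = np.diag([1.0, 1e-1, 1e-4])                    # toy ill-posed operator: decaying singular values
    x_true = np.array([1.0, 1.0, 1.0])
    b = A @ x_true + np.array([1e-3, -1e-3, 1e-3])    # noisy right-hand side

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    lam = 1e-2
    f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors: ~1 for large sigma, ~0 for small
    x_reg = Vt.T @ (f * (U.T @ b) / s)  # filtered (regularized) solution
    x_naive = Vt.T @ ((U.T @ b) / s)    # unfiltered least squares: noise is amplified by 1/sigma
    ```

    Inspecting `f` against `s` shows directly which singular directions the regularization trusts, which is the diagnostic use the abstract refers to.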

  9. Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.

    PubMed

    Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti

    2006-02-01

Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment, and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. (2003). The prior model for dental structures consists of a weighted ℓ1 prior and a total variation (TV) prior, together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and with patient data. Tomosynthetic reconstructions are given as a reference for the proposed method.
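    The structure of such a MAP estimate (data misfit + weighted ℓ1 + TV, with positivity enforced by projection) can be sketched on a 1-D toy problem. This is only an illustration of the objective, not the parallelized algorithm of the paper; the operator, weights and signal below are all hypothetical:

    ```python
    import numpy as np

    def tv(x):
        """1-D total variation."""
        return np.sum(np.abs(np.diff(x)))

    def neg_log_posterior(x, A, y, alpha, beta):
        """Data misfit + weighted l1 + TV; positivity handled separately by projection."""
        return 0.5 * np.sum((A @ x - y) ** 2) + alpha * np.sum(np.abs(x)) + beta * tv(x)

    rng = np.random.default_rng(1)
    n = 30
    x_true = np.zeros(n)
    x_true[10:20] = 1.0                                # piecewise-constant "attenuation" profile
    A = rng.standard_normal((15, n)) / np.sqrt(n)      # few projections: underdetermined operator
    y = A @ x_true + 0.01 * rng.standard_normal(15)

    alpha, beta, step = 1e-3, 1e-2, 0.2
    x = np.zeros(n)
    for _ in range(800):                               # projected subgradient descent
        d = np.sign(np.diff(x))
        g_tv = np.zeros(n)
        g_tv[:-1] -= d                                 # subgradient of the TV term
        g_tv[1:] += d
        grad = A.T @ (A @ x - y) + alpha * np.sign(x) + beta * g_tv
        x = np.maximum(x - step * grad, 0.0)           # positivity projection
    ```

    The positivity projection plays the role of the positivity prior; in the paper a more efficient optimizer is used, but the MAP objective has this shape.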

  10. Load identification approach based on basis pursuit denoising algorithm

    NASA Astrophysics Data System (ADS)

    Ginsberg, D.; Ruby, M.; Fritzen, C. P.

    2015-07-01

The information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or the assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge is used to develop a more suitable force reconstruction method, which identifies the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals is demonstrated for two frequently occurring loading conditions, harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
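    BPDN solves min_x ½‖Ax − y‖² + λ‖x‖₁. A minimal solver sketch using iterative soft-thresholding (ISTA), not necessarily the solver used in the paper; the sparse excitation vector and measurement operator are synthetic:

    ```python
    import numpy as np

    def soft(z, t):
        """Soft-thresholding: the proximal operator of the l1 norm."""
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def bpdn_ista(A, y, lam, n_iter=500):
        """Basis pursuit denoising via iterative soft-thresholding (ISTA)."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft(x - A.T @ (A @ x - y) / L, lam / L)
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 60)) / np.sqrt(40)    # underdetermined measurement operator
    x_true = np.zeros(60)
    x_true[[5, 17, 40]] = [1.0, -0.8, 0.6]             # sparse excitation (few active loads)
    y = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = bpdn_ista(A, y, lam=0.02)
    ```

    Although the system is underdetermined, the sparsity penalty recovers the support of the excitation, which mirrors how the paper locates forces with few sensors.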

  11. Photometric theory for wide-angle phenomena

    NASA Technical Reports Server (NTRS)

    Usher, Peter D.

    1990-01-01

    An examination is made of the problem posed by wide-angle photographic photometry, in order to extract a photometric-morphological history of Comet P/Halley. Photometric solutions are presently achieved over wide angles through a generalization of an assumption-free moment-sum method. Standard stars in the field allow a complete solution to be obtained for extinction, sky brightness, and the characteristic curve. After formulating Newton's method for the solution of the general nonlinear least-square problem, an implementation is undertaken for a canonical data set. Attention is given to the problem of random and systematic photometric errors.

  12. Aerodynamics of an airfoil with a jet issuing from its surface

    NASA Technical Reports Server (NTRS)

    Tavella, D. A.; Karamcheti, K.

    1982-01-01

    A simple, two dimensional, incompressible and inviscid model for the problem posed by a two dimensional wing with a jet issuing from its lower surface is considered and a parametric analysis is carried out to observe how the aerodynamic characteristics depend on the different parameters. The mathematical problem constitutes a boundary value problem where the position of part of the boundary is not known a priori. A nonlinear optimization approach was used to solve the problem, and the analysis reveals interesting characteristics that may help to better understand the physics involved in more complex situations in connection with high lift systems.

  13. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    PubMed

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness implies that solutions to this problem are non-unique, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials be sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed with this method are compared to those from existing methods, and validated against actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing appears to be improved when sparsity of the reconstructed signals in the wavelet domain is pursued.
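    Wavelet-domain sparsity can be illustrated with a single-level orthonormal Haar transform and soft-thresholding of the detail coefficients. This is a toy stand-in for the orthogonal wavelet transform and signals of the paper:

    ```python
    import numpy as np

    def haar_forward(x):
        """One level of the orthonormal Haar wavelet transform (even-length input)."""
        a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (coarse) coefficients
        d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
        return a, d

    def haar_inverse(a, d):
        x = np.empty(2 * a.size)
        x[0::2] = (a + d) / np.sqrt(2)
        x[1::2] = (a - d) / np.sqrt(2)
        return x

    def soft(z, t):
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 256)
    clean = np.sin(2 * np.pi * 3 * t)                 # stand-in for a smooth potential trace
    noisy = clean + 0.2 * rng.standard_normal(t.size)
    a, d = haar_forward(noisy)
    denoised = haar_inverse(a, soft(d, 0.2))          # sparsify the detail coefficients only
    ```

    Smooth physiological signals concentrate in few wavelet coefficients while noise spreads across all of them, which is why favouring wavelet-sparse solutions regularizes the inverse problem.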

  14. Stiffness optimization of non-linear elastic structures

    DOE PAGES

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

    2017-11-13

Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness, designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.

  15. The Fisher-KPP problem with doubly nonlinear diffusion

    NASA Astrophysics Data System (ADS)

    Audrito, Alessandro; Vázquez, Juan Luis

    2017-12-01

The famous Fisher-KPP reaction-diffusion model combines linear diffusion with the typical KPP reaction term, and appears in a number of relevant applications in biology and chemistry. It is remarkable as a mathematical model since it possesses a family of travelling waves that describe the asymptotic behaviour of a large class of solutions 0 ≤ u(x, t) ≤ 1 of the problem posed on the real line. The existence of propagation waves with finite speed has been confirmed in some related models and disproved in others. We investigate here the corresponding theory when the linear diffusion is replaced by the "slow" doubly nonlinear diffusion, and we find travelling waves that represent the wave propagation of more general solutions even when we extend the study to several space dimensions. A similar study is performed in the critical case that we call "pseudo-linear", i.e., when the operator is still nonlinear but has homogeneity one. With respect to the classical model and the "pseudo-linear" case, the "slow" travelling waves exhibit free boundaries.

  16. Stiffness optimization of non-linear elastic structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel

Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness, designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.

  17. Inverse scattering transform and soliton solutions for square matrix nonlinear Schrödinger equations with non-zero boundary conditions

    NASA Astrophysics Data System (ADS)

    Prinari, Barbara; Demontis, Francesco; Li, Sitai; Horikis, Theodoros P.

    2018-04-01

The inverse scattering transform (IST) with non-zero boundary conditions at infinity is developed for an m × m matrix nonlinear Schrödinger-type equation which, in the case m = 2, has been proposed as a model to describe hyperfine spin F = 1 spinor Bose-Einstein condensates with either repulsive interatomic interactions and anti-ferromagnetic spin-exchange interactions (self-defocusing case), or attractive interatomic interactions and ferromagnetic spin-exchange interactions (self-focusing case). The IST for this system was first presented by Ieda et al. (2007), using a different approach. In our formulation, both the direct and the inverse problems are posed in terms of a suitable uniformization variable which allows one to develop the IST on the standard complex plane, instead of a two-sheeted Riemann surface or the cut plane with discontinuities along the cuts. Analyticity of the scattering eigenfunctions and scattering data, symmetries, properties of the discrete spectrum, and asymptotics are derived. The inverse problem is posed as a Riemann-Hilbert problem for the eigenfunctions, and the reconstruction formula for the potential in terms of eigenfunctions and scattering data is provided. In addition, the general behavior of the soliton solutions is analyzed in detail in the 2 × 2 self-focusing case, including some special solutions not previously discussed in the literature.

  18. Validating an artificial intelligence human proximity operations system with test cases

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    2013-05-01

An artificial intelligence-controlled robot (AICR) operating in close proximity to humans poses a risk to those humans. Validating the performance of an AICR is an ill-posed problem, due to the complexity introduced by the erratic (non-computer) actors. In order to prove the AICR's usefulness, test cases must be generated to simulate the actions of these actors. This paper discusses AICR performance validation in the context of a common human activity, moving through a crowded corridor, using test cases created by an AI use-case producer. This test is a two-dimensional simplification relevant to autonomous UAV navigation in the national airspace.

  19. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
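    The bias-variance decomposition of image MSE that the study estimates from repeated reconstructions can be reproduced on a toy Tikhonov problem (the diagonal forward model and noise level below are chosen purely for illustration):

    ```python
    import numpy as np

    def reconstruction_stats(lam, n_rep=100, seed=3):
        """Bias^2, variance and MSE over repeated Tikhonov reconstructions."""
        rng = np.random.default_rng(seed)
        A = np.diag([1.0, 0.3, 0.05])        # toy forward model with decaying sensitivity
        x_true = np.ones(3)
        recs = []
        for _ in range(n_rep):
            y = A @ x_true + 0.01 * rng.standard_normal(3)
            recs.append(np.linalg.solve(A.T @ A + lam**2 * np.eye(3), A.T @ y))
        recs = np.array(recs)
        bias2 = np.sum((recs.mean(axis=0) - x_true) ** 2)
        var = np.sum(recs.var(axis=0))
        mse = np.mean(np.sum((recs - x_true) ** 2, axis=1))
        return bias2, var, mse

    b_hi, v_hi, m_hi = reconstruction_stats(lam=1.0)    # strong regularization
    b_lo, v_lo, m_lo = reconstruction_stats(lam=1e-3)   # weak regularization
    ```

    The decomposition MSE = bias² + variance holds exactly over the ensemble, and the two runs reproduce the trend the abstract reports: bias dominates at large regularization parameter, variance near the unregularized limit.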

  20. Multimodal Deep Autoencoder for Human Pose Recovery.

    PubMed

    Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng

    2015-12-01

    Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieving process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationships are inherently non-linear, which limits recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error has been reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.

  1. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented through simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is demonstrated that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
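    Under uniform intensity the TIE reduces to a Poisson equation for the phase, which is commonly solved spectrally. A minimal periodic FFT Poisson solver, verified on a synthetic phase (this is a generic numerical building block, not the paper's experimental pipeline):

    ```python
    import numpy as np

    def poisson_fft(rhs, dx=1.0):
        """Solve laplacian(phi) = rhs on a periodic grid via FFT (zero-mean solution)."""
        n = rhs.shape[0]
        k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        kx, ky = np.meshgrid(k, k, indexing="ij")
        denom = -(kx**2 + ky**2)
        denom[0, 0] = 1.0                     # avoid division by zero; DC mode zeroed below
        phi_hat = np.fft.fft2(rhs) / denom
        phi_hat[0, 0] = 0.0                   # the mean of phi is unconstrained; fix it to 0
        return np.real(np.fft.ifft2(phi_hat))

    # Synthetic check: a phase built from exact Fourier modes is recovered exactly.
    n = 64
    x = np.arange(n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    phi_true = np.sin(2 * np.pi * X / n) * np.cos(4 * np.pi * Y / n)
    lap = -((2 * np.pi / n) ** 2 + (4 * np.pi / n) ** 2) * phi_true   # analytic laplacian
    phi_rec = poisson_fft(lap)
    ```

    The division by −(kx² + ky²) is where ill-conditioning enters in practice: low spatial frequencies of the measured axial intensity derivative are strongly amplified, which is why the noise sources analysed in the paper matter.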

  2. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) for estimating model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties.
While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices, we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
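    The resolution-matrix diagnostic can be sketched for Tikhonov regularization, where R = (AᵀA + λ²I)⁻¹AᵀA: diagonal entries near 1 mark well-resolved parameters (fast processes, large sensitivity) and entries near 0 mark parameters the data barely constrain (slow processes). The sensitivity values below are hypothetical:

    ```python
    import numpy as np

    # Hypothetical parameter sensitivities: fast processes (large) vs slow processes (tiny).
    A = np.diag([1.0, 0.5, 0.01, 0.001])
    lam = 0.05
    R = np.linalg.solve(A.T @ A + lam**2 * np.eye(4), A.T @ A)   # model resolution matrix
    resolved = np.diag(R)        # 1 = perfectly resolved, 0 = unconstrained by the data
    n_eff = np.trace(R)          # effective number of resolved parameters
    ```

    Here only the two "fast" parameters are effectively resolved (trace ≈ 2), which mirrors the narrow-versus-wide confidence intervals reported in the abstract.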

  3. A Nonrigid Kernel-Based Framework for 2D-3D Pose Estimation and 2D Image Segmentation

    PubMed Central

    Sandhu, Romeil; Dambreville, Samuel; Yezzi, Anthony; Tannenbaum, Allen

    2013-01-01

    In this work, we present a nonrigid approach to jointly solving the tasks of 2D-3D pose estimation and 2D image segmentation. In general, most frameworks that couple both pose estimation and segmentation assume that one has exact knowledge of the 3D object. However, under nonideal conditions, this assumption may be violated if only a general class to which a given shape belongs is given (e.g., cars, boats, or planes). Thus, we propose to solve the 2D-3D pose estimation and 2D image segmentation via nonlinear manifold learning of 3D embedded shapes for a general class of objects or deformations for which one may not be able to associate a skeleton model. Thus, the novelty of our method is threefold: First, we present and derive a gradient flow for the task of nonrigid pose estimation and segmentation. Second, due to the possible nonlinear structures of one’s training set, we evolve the preimage obtained through kernel PCA for the task of shape analysis. Third, we show that the derivation for shape weights is general. This allows us to use various kernels, as well as other statistical learning methodologies, with only minimal changes needing to be made to the overall shape evolution scheme. In contrast with other techniques, we approach the nonrigid problem, which is an infinite-dimensional task, with a finite-dimensional optimization scheme. More importantly, we do not explicitly need to know the interaction between various shapes such as that needed for skeleton models as this is done implicitly through shape learning. We provide experimental results on several challenging pose estimation and segmentation scenarios. PMID:20733218

  4. Classification and pose estimation of objects using nonlinear features

    NASA Astrophysics Data System (ADS)

    Talukder, Ashit; Casasent, David P.

    1998-03-01

    A new nonlinear feature extraction method called the maximum representation and discrimination feature (MRDF) method is presented for extraction of features from input image data. It implements transformations similar to the Sigma-Pi neural network. However, the weights of the MRDF are obtained in closed form, and offer advantages compared to nonlinear neural network implementations. The features extracted are useful for both object discrimination (classification) and object representation (pose estimation). We show its use in estimating the class and pose of images of real objects and rendered solid CAD models of machine parts from single views using a feature-space trajectory (FST) neural network classifier. We show more accurate classification and pose estimation results than are achieved by standard principal component analysis (PCA) and Fukunaga-Koontz (FK) feature extraction methods.

  5. Treatment of Nuclear Data Covariance Information in Sample Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
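    One common remedy, sketched here, is to factor the covariance through an eigendecomposition and clip eigenvalues at a floor, so sampling stays well defined even when round-off makes the matrix numerically singular or slightly indefinite. The method in the report may differ; the 3×3 covariance below is purely illustrative:

    ```python
    import numpy as np

    def sample_from_covariance(cov, n_samples, rng, eig_floor=0.0):
        """Draw correlated samples from a possibly ill-conditioned covariance matrix.

        Eigenvalues below eig_floor (including small negative values from round-off)
        are clipped so the factorization is always defined; a plain Cholesky
        factorization can fail on such matrices.
        """
        w, V = np.linalg.eigh(cov)
        w = np.clip(w, eig_floor, None)
        L = V * np.sqrt(w)                       # cov ~= L @ L.T
        z = rng.standard_normal((cov.shape[0], n_samples))
        return L @ z

    rng = np.random.default_rng(4)
    # Nearly rank-one covariance: rank-deficient up to round-off.
    u = np.array([1.0, 0.9, 0.8])
    cov = np.outer(u, u) + 1e-12 * np.eye(3)
    samples = sample_from_covariance(cov, 50000, rng)
    emp = samples @ samples.T / 50000            # empirical covariance of the draws
    ```

    The empirical covariance of the draws reproduces the target despite its near-singularity, which is the behaviour a cross-section sampler needs.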

  6. The determination of pair-distance distribution by double electron-electron resonance: regularization by the length of distance discretization with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Dzuba, Sergei A.

    2016-08-01

The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow for the determination of their distance distribution function, P(r). P(r) is derived as the solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of fulfilling the non-negativity constraint on P(r). Regularization by increasing the distance discretization length may, in the case of overlapping broad and narrow distributions, be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
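    The regularizing effect of the discretization length can be seen on a toy first-kind Fredholm problem. The smooth kernel K(t, r) = exp(−t/r) below is a generic stand-in, not the actual DEER kernel, and the Gaussian P(r) is synthetic; the point is only that an unregularized inversion is wildly unstable on a fine distance grid and tame on a coarse one:

    ```python
    import numpy as np

    def solve_on_grid(n_r, t, signal):
        """Unregularized least-squares inversion of a smooth first-kind Fredholm
        kernel on a grid of n_r distance points (K(t, r) = exp(-t/r), a generic
        stand-in for the DEER kernel)."""
        r = np.linspace(1.0, 5.0, n_r)
        K = np.exp(-np.outer(t, 1.0 / r))
        p, *_ = np.linalg.lstsq(K, signal, rcond=None)
        return r, p

    t = np.linspace(0.0, 3.0, 200)
    rng = np.random.default_rng(5)
    r_fine = np.linspace(1.0, 5.0, 80)
    p_true = np.exp(-((r_fine - 3.0) ** 2) / 0.5)      # model distance distribution
    signal = np.exp(-np.outer(t, 1.0 / r_fine)) @ p_true
    signal += 1e-3 * rng.standard_normal(t.size)       # measurement noise

    _, p_fine = solve_on_grid(80, t, signal)     # fine grid: ill-posed, huge oscillations
    _, p_coarse = solve_on_grid(8, t, signal)    # coarse grid: discretization regularizes
    ```

    Increasing the discretization length shrinks the solution space and suppresses the unstable components, which is the mechanism the abstract exploits (there combined with Monte Carlo trials that also enforce non-negativity).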

  7. Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media: II. The nonlinear theory

    NASA Astrophysics Data System (ADS)

    Bona, J. L.; Chen, M.; Saut, J.-C.

    2004-05-01

    In part I of this work (Bona J L, Chen M and Saut J-C 2002 Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media I: Derivation and the linear theory J. Nonlinear Sci. 12 283-318), a four-parameter family of Boussinesq systems was derived to describe the propagation of surface water waves. Similar systems are expected to arise in other physical settings where the dominant aspects of propagation are a balance between the nonlinear effects of convection and the linear effects of frequency dispersion. In addition to deriving these systems, we determined in part I exactly which of them are linearly well posed in various natural function classes. It was argued that linear well-posedness is a natural necessary requirement for the possible physical relevance of the model in question. In this paper, it is shown that the first-order correct models that are linearly well posed are in fact locally nonlinearly well posed. Moreover, in certain specific cases, global well-posedness is established for physically relevant initial data. In part I, higher-order correct models were also derived. A preliminary analysis of a promising subclass of these models shows them to be well posed.

  8. Sickness absence management: encouraging attendance or 'risk-taking' presenteeism in employees with chronic illness?

    PubMed

    Munir, Fehmidah; Yarker, Joanna; Haslam, Cheryl

    2008-01-01

    To investigate the organizational perspectives on the effectiveness of their attendance management policies for chronically ill employees. A mixed-method approach was employed, involving a questionnaire survey of employees and in-depth interviews with key stakeholders of the organizational policies. Participants reported that attendance management policies, and the point at which systems were triggered, posed problems for employees managing chronic illness. These systems presented a risk to health: employees were more likely to turn up for work despite feeling unwell (presenteeism) to avoid a disciplinary situation, but absence-related support was only provided once illness progressed to long-term sick leave. Attendance management policies also raised ethical concerns about 'forced' illness disclosure and placed immense pressure on line managers to manage attendance. Participants felt their current attendance management policies were unfavourable toward those managing a chronic illness. The policies heavily focused on attendance despite illness and on providing return to work support following long-term sick leave. Drawing on the results, the authors conclude that attendance management should promote job retention rather than merely prevent absence per se. They outline areas of improvement in the attendance management of employees with chronic illness.

  9. Neutrino tomography - Tevatron mapping versus the neutrino sky. [for X-rays of earth interior]

    NASA Technical Reports Server (NTRS)

    Wilson, T. L.

    1984-01-01

    The feasibility of neutrino tomography of the earth's interior is discussed, taking the 80-GeV W-boson mass determined by Arnison (1983) and Banner (1983) into account. The opacity of earth zones is calculated on the basis of the preliminary reference earth model of Dziewonski and Anderson (1981), and the results are presented in tables and graphs. Proposed tomography schemes are evaluated in terms of the well-posedness of the inverse-Radon-transform problems involved, the neutrino generators and detectors required, and practical and economic factors. The ill-posed schemes are shown to be infeasible; the well-posed schemes (using Tevatrons or the neutrino sky as sources) are considered feasible but impractical.

  10. Discrete Regularization for Calibration of Geologic Facies Against Dynamic Flow Data

    NASA Astrophysics Data System (ADS)

    Khaninezhad, Mohammad-Reza; Golmohammadi, Azarang; Jafarpour, Behnam

    2018-04-01

    Subsurface flow model calibration involves many more unknowns than measurements, leading to ill-posed problems with nonunique solutions. To alleviate nonuniqueness, the problem is regularized by constraining the solution space using prior knowledge. In certain sedimentary environments, such as fluvial systems, the contrast in hydraulic properties of different facies types tends to dominate the flow and transport behavior, making the effect of within-facies heterogeneity less significant. Hence, flow model calibration in those formations reduces to delineating the spatial structure and connectivity of different lithofacies types and their boundaries. A major difficulty in calibrating such models is honoring the discrete, or piecewise constant, nature of facies distribution. The problem becomes more challenging when complex spatial connectivity patterns with higher-order statistics are involved. This paper introduces a novel formulation for calibration of complex geologic facies by imposing appropriate constraints to recover plausible solutions that honor the spatial connectivity and discreteness of facies models. To incorporate prior connectivity patterns, plausible geologic features are learned from available training models. This is achieved by learning spatial patterns from training data, e.g., via k-SVD sparse learning or traditional principal component analysis. Discrete regularization is introduced as a penalty function to impose solution discreteness while minimizing the mismatch between observed and predicted data. An efficient gradient-based alternating directions algorithm is combined with variable splitting to minimize the resulting regularized nonlinear least squares objective function. Numerical results show that imposing learned facies connectivity and discreteness as regularization functions leads to geologically consistent solutions that improve facies calibration quality.
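
    The discrete-regularization idea can be caricatured in a few lines. Everything below is a hypothetical stand-in: a tiny smoothing matrix replaces the flow simulator, the facies may only take the two values {0, 1}, and the alternating-directions solver is reduced to a gradient step on the data misfit plus a relaxation of each cell toward its nearest facies value.

```python
n = 8
V = [0.0, 1.0]                        # hypothetical facies values (e.g., shale / sand)
m_true = [0, 0, 1, 1, 1, 0, 0, 1]     # true discrete facies map

# Toy linear forward model (stand-in for the flow simulator): a smoothing operator.
G = [[0.0] * n for _ in range(n)]
for i in range(n):
    G[i][i] += 0.6
    G[i][max(i - 1, 0)] += 0.2
    G[i][min(i + 1, n - 1)] += 0.2

def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

d_obs = matvec(G, m_true)
Gt = [[G[i][j] for i in range(n)] for j in range(n)]   # transpose of G

m = [0.5] * n                         # continuous initial model
lr, lam = 0.8, 0.1                    # step size and discreteness penalty weight
for _ in range(500):
    res = [a - b for a, b in zip(matvec(G, m), d_obs)]
    grad = matvec(Gt, res)            # gradient of the data misfit
    # discrete-regularization penalty: pull each cell toward its nearest facies value
    m = [mi - lr * gi - lr * lam * (mi - min(V, key=lambda v: abs(mi - v)))
         for mi, gi in zip(m, grad)]

m_hat = [min(V, key=lambda v: abs(mi - v)) for mi in m]   # final discrete map
```

The penalty leaves the data-misfit minimization continuous (so gradients exist) while continuously biasing each cell toward the discrete facies set, which is the spirit of the paper's penalty-function formulation.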

  11. "It Was Not Me That Was Sick, It Was the Building": Rhetorical Identity Management Strategies in the Context of Observed or Suspected Indoor Air Problems in Workplaces.

    PubMed

    Finell, Eerika; Seppälä, Tuija; Suoninen, Eero

    2018-07-01

    Suffering from a contested illness poses a serious threat to one's identity. We analyzed the rhetorical identity management strategies respondents used when depicting their health problems and lives in the context of observed or suspected indoor air (IA) problems in the workplace. The data consisted of essays collected by the Finnish Literature Society. We used discourse-oriented methods to interpret a variety of language uses in the construction of identity strategies. Six strategies were identified: respondents described themselves as normal and good citizens with strong characters, and as IA sufferers who received acknowledgement from others, offered positive meanings to their in-group, and demanded recognition. These identity strategies were located on two continua: (a) individual- and collective-level strategies and (b) dissolved and emphasized (sub)category boundaries. The practical conclusion is that professionals should be aware of these complex coping strategies when aiming to interact effectively with people suffering from contested illnesses.

  12. Level-set techniques for facies identification in reservoir modeling

    NASA Astrophysics Data System (ADS)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger (2002 Interfaces Free Bound. 5 301-29; 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by the reservoir model, a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.

  13. Identification of the population density of a species model with nonlocal diffusion and nonlinear reaction

    NASA Astrophysics Data System (ADS)

    Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel

    2017-05-01

    The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by the biology and ecology fields, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from this method not only depend continuously on the final data, but also strongly converge to the exact solution in the L^2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.
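
    The quasi-reversibility construction is easiest to see on the plain linear heat equation rather than the nonlocal logistic model of the paper. In Fourier sine modes, the backward problem multiplies each coefficient by exp(k^2 T), which amplifies data noise without bound; the quasi-reversibility filter exp((k^2 - eps*k^4) T) caps that amplification. All numbers below are illustrative.

```python
import math
import random

T, eps, K = 0.1, 0.01, 20                  # final time, regularization, number of modes
c0_true = [1.0, 0.5] + [0.0] * (K - 2)     # true initial spectrum (illustrative)

# forward heat semigroup: c_k(T) = c_k(0) * exp(-k^2 T)
cT = [c * math.exp(-(k + 1) ** 2 * T) for k, c in enumerate(c0_true)]

random.seed(1)
noisy = [c + random.gauss(0, 1e-6) for c in cT]   # small noise on the final data

# naive backward solution: unbounded amplification of high-mode noise
naive = [c * math.exp((k + 1) ** 2 * T) for k, c in enumerate(noisy)]

# quasi-reversibility: solve v_t = -A v + eps A^2 v backward -> bounded multiplier
qr = [c * math.exp(((k + 1) ** 2 - eps * (k + 1) ** 4) * T) for k, c in enumerate(noisy)]
```

The multiplier exp((k^2 - eps*k^4)T) agrees with exp(k^2 T) for low modes (so the low-frequency content of the initial state is recovered) but decays for large k, which is exactly the continuous dependence on the final data that the record proves for its regularized problems.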

  14. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John M.; Herren, Kenneth A.

    2008-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  15. Space Vehicle Pose Estimation via Optical Correlation and Nonlinear Estimation

    NASA Technical Reports Server (NTRS)

    Rakoczy, John; Herren, Kenneth

    2007-01-01

    A technique for 6-degree-of-freedom (6DOF) pose estimation of space vehicles is being developed. This technique draws upon recent developments in implementing optical correlation measurements in a nonlinear estimator, which relates the optical correlation measurements to the pose states (orientation and position). For the optical correlator, the use of both conjugate filters and binary, phase-only filters in the design of synthetic discriminant function (SDF) filters is explored. A static neural network is trained a priori and used as the nonlinear estimator. New commercial animation and image rendering software is exploited to design the SDF filters and to generate a large filter set with which to train the neural network. The technique is applied to pose estimation for rendezvous and docking of free-flying spacecraft and to terrestrial surface mobility systems for NASA's Vision for Space Exploration. Quantitative pose estimation performance will be reported. Advantages and disadvantages of the implementation of this technique are discussed.

  16. Solution of large nonlinear quasistatic structural mechanics problems on distributed-memory multiprocessor computers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blanford, M.

    1997-12-31

    Most commercially-available quasistatic finite element programs assemble element stiffnesses into a global stiffness matrix, then use a direct linear equation solver to obtain nodal displacements. However, for large problems (greater than a few hundred thousand degrees of freedom), the memory size and computation time required for this approach become prohibitive. Moreover, direct solution does not lend itself to the parallel processing needed for today's multiprocessor systems. This talk gives an overview of the iterative solution strategy of JAS3D, the nonlinear large-deformation quasistatic finite element program. Because its architecture is derived from an explicit transient-dynamics code, it never assembles a global stiffness matrix. The author describes the approach he used to implement the solver on multiprocessor computers, and shows examples of problems run on hundreds of processors and more than a million degrees of freedom. Finally, he describes some of the work he is presently doing to address the challenges of iterative convergence for ill-conditioned problems.

  17. The neural network approximation method for solving multidimensional nonlinear inverse problems of geophysics

    NASA Astrophysics Data System (ADS)

    Shimelevich, M. I.; Obornev, E. A.; Obornev, I. E.; Rodionov, E. A.

    2017-07-01

    The iterative approximation neural network method for solving conditionally well-posed nonlinear inverse problems of geophysics is presented. The method is based on the neural network approximation of the inverse operator. The inverse problem is solved in the class of grid (block) models of the medium on a regularized parameterization grid. The construction principle of this grid relies on using the calculated values of the continuity modulus of the inverse operator and its modifications determining the degree of ambiguity of the solutions. The method provides approximate solutions of inverse problems with the maximal degree of detail given the specified degree of ambiguity, with the total number of sought medium parameters of order n × 10^3. The a priori and a posteriori estimates of the degree of ambiguity of the approximated solutions are calculated. The method is illustrated by the example of the three-dimensional (3D) inversion of the synthesized 2D areal geoelectrical (audio magnetotelluric sounding, AMTS) data corresponding to the schematic model of a kimberlite pipe.
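
    The essence of approximating the inverse operator from precomputed forward solutions can be shown with a deliberately crude stand-in: a scalar monotone forward map, and piecewise-linear interpolation of sampled (data, model) pairs playing the role of the trained neural network. The forward map and grid below are hypothetical, not the geoelectrical forward problem.

```python
def F(m):
    # Toy monotone forward operator (stand-in for the geophysical forward problem)
    return m + 0.1 * m ** 3

# "Training set": forward-model every node of a parameterization grid m in [0, 3]
samples = [(F(i / 100), i / 100) for i in range(0, 301)]

def approx_inverse(d):
    # piecewise-linear interpolation of the sampled (data, model) pairs;
    # a trained neural network plays this role in the actual method
    for (d0, m0), (d1, m1) in zip(samples, samples[1:]):
        if d0 <= d <= d1:
            w = (d - d0) / (d1 - d0)
            return m0 + w * (m1 - m0)
    raise ValueError("datum outside the sampled range")

m_true = 1.234
m_rec = approx_inverse(F(m_true))      # invert noise-free synthetic data
```

The grid spacing of the training samples controls the attainable detail, which loosely mirrors the record's point that the parameterization grid is chosen from the continuity modulus of the inverse operator.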

  18. Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing

    NASA Astrophysics Data System (ADS)

    Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline

    2017-11-01

    Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.

  19. Estimation of the parameters of disturbances on long-range radio-communication paths

    NASA Astrophysics Data System (ADS)

    Gerasimov, Iu. S.; Gordeev, V. A.; Kristal, V. S.

    1982-09-01

    Radio propagation on long-range paths is disturbed by such phenomena as ionospheric density fluctuations, meteor trails, and the Faraday effect. In the present paper, the determination of the characteristics of such disturbances on the basis of received-signal parameters is considered as an inverse and ill-posed problem. A method for investigating the indeterminacy which arises in such determinations is proposed, and a quantitative analysis of this indeterminacy is made.

  20. Spotted star mapping by light curve inversion: Tests and application to HD 12545

    NASA Astrophysics Data System (ADS)

    Kolbin, A. I.; Shimansky, V. V.

    2013-06-01

    A code for mapping the surfaces of spotted stars is developed. The code analyzes rotationally modulated light curves. We simulate the reconstruction of the stellar surface and present the results of the simulation. Reconstruction artifacts caused by the ill-posed nature of the problem are identified. The surface of the spotted component of the system HD 12545 is mapped using this procedure.

  1. Using the Hilbert uniqueness method in a reconstruction algorithm for electrical impedance tomography.

    PubMed

    Dai, W W; Marsili, P M; Martinez, E; Morucci, J P

    1994-05-01

    This paper presents a new version of the layer stripping algorithm in the sense that it works essentially by repeatedly stripping away the outermost layer of the medium after having determined the conductivity value in this layer. In order to stabilize the ill-posed boundary value problem related to each layer, we base our algorithm on the Hilbert uniqueness method (HUM) and implement it with the boundary element method (BEM).

  2. Pattern-Based Inverse Modeling for Characterization of Subsurface Flow Models with Complex Geologic Heterogeneity

    NASA Astrophysics Data System (ADS)

    Golmohammadi, A.; Jafarpour, B.; M Khaninezhad, M. R.

    2017-12-01

    Calibration of heterogeneous subsurface flow models leads to ill-posed nonlinear inverse problems, where too many unknown parameters are estimated from limited response measurements. When the underlying parameters form complex (non-Gaussian) structured spatial connectivity patterns, classical variogram-based geostatistical techniques cannot describe the underlying connectivity patterns. Modern pattern-based geostatistical methods that incorporate higher-order spatial statistics are more suitable for describing such complex spatial patterns. Moreover, when the underlying unknown parameters are discrete (geologic facies distribution), conventional model calibration techniques that are designed for continuous parameters cannot be applied directly. In this paper, we introduce a novel pattern-based model calibration method to reconstruct discrete and spatially complex facies distributions from dynamic flow response data. To reproduce complex connectivity patterns during model calibration, we impose a feasibility constraint to ensure that the solution follows the expected higher-order spatial statistics. For model calibration, we adopt a regularized least-squares formulation, involving data mismatch, pattern connectivity, and feasibility constraint terms. Using an alternating directions optimization algorithm, the regularized objective function is divided into a continuous model calibration problem, followed by mapping the solution onto the feasible set. The feasibility constraint to honor the expected spatial statistics is implemented using a supervised machine learning algorithm. The two steps of the model calibration formulation are repeated until the convergence criterion is met. Several numerical examples are used to evaluate the performance of the developed method.

  3. Ill-posedness in modeling mixed sediment river morphodynamics

    NASA Astrophysics Data System (ADS)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics concerning its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than what was found in previous analyses, comprising not only cases of bed degradation into a substrate finer than the active layer but also aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment, for which we show that ill-posedness occurs in a wider range of conditions than for the active layer model.

  4. Discrete optimal control approach to a four-dimensional guidance problem near terminal areas

    NASA Technical Reports Server (NTRS)

    Nagarajan, N.

    1974-01-01

    Description of a computer-oriented technique to generate the necessary control inputs to guide an aircraft in a given time from a given initial state to a prescribed final state subject to the constraints on airspeed, acceleration, and pitch and bank angles of the aircraft. A discrete-time mathematical model requiring five state variables and three control variables is obtained, assuming steady wind and zero sideslip. The guidance problem is posed as a discrete nonlinear optimal control problem with a cost functional of Bolza form. A solution technique for the control problem is investigated, and numerical examples are presented. It is believed that this approach should prove to be useful in automated air traffic control schemes near large terminal areas.

  5. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems

    PubMed Central

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-01-01

    Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. 
This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289

  6. Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.

    PubMed

    Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R

    2006-11-02

    We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.

  7. Comparisons of linear and nonlinear pyramid schemes for signal and image processing

    NASA Astrophysics Data System (ADS)

    Morales, Aldo W.; Ko, Sung-Jea

    1997-04-01

    Linear filter banks are being used extensively in image and video applications. New research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, non-linear filter banks are also being used regularly in image pyramid algorithms. There are some inherent advantages in using non-linear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and non-linear filters. The conversion from the weights of order statistics (OS) filters to the coefficients of the impulse response is obtained through these probabilities. However, the reverse problem, the conversion from the coefficients of the impulse response to the weights of OS filters, is not yet fully understood. One of the reasons for this difficulty is the highly non-linear nature of the partitions and generating function used. In the present paper the problem is posed as an integer linear programming optimization subject to constraints obtained directly from the coefficients of the impulse response. Although the technique to be presented is not completely refined, it certainly appears to be promising. Some results will be shown.

  8. The algebraic-hyperbolic approach to the linearized gravitational constraints on a Minkowski background

    NASA Astrophysics Data System (ADS)

    Winicour, Jeffrey

    2017-08-01

    An algebraic-hyperbolic method for solving the Hamiltonian and momentum constraints has recently been shown to be well posed for general nonlinear perturbations of the initial data for a Schwarzschild black hole. This is a new approach to solving the constraints of Einstein’s equations which does not involve elliptic equations and has potential importance for the construction of binary black hole data. In order to shed light on the underpinnings of this approach, we consider its application to obtain solutions of the constraints for linearized perturbations of Minkowski space. In that case, we find the surprising result that there are no suitable Cauchy hypersurfaces in Minkowski space for which the linearized algebraic-hyperbolic constraint problem is well posed.

  9. Moving from pixel to object scale when inverting radiative transfer models for quantitative estimation of biophysical variables in vegetation (Invited)

    NASA Astrophysics Data System (ADS)

    Atzberger, C.

    2013-12-01

    The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. The contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results compared to the traditional pixel-based inversion. [Figure: principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-nIR feature space using PROSAIL. (A) A spectral trajectory ('soil trajectory') obtained for one leaf angle (ALA) and one soil brightness (αsoil) as LAI varies between 0 and 10; (B) 'soil trajectories' for five soil brightness values and three leaf angles; (C) the ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point; (D) object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding (3×3) window. The black dots (plus the rectangle, the central pixel) represent the hypothetical positions of nine pixels within the window. Assuming that variations in soil brightness can be neglected over short distances (about one pixel), the proposed object-based inversion searches for one common set of ALA × αsoil such that the resulting 'soil trajectory' best fits the nine measured pixels.] [Figure: ground-measured vs. retrieved LAI values for three crops. Left: proposed object-based approach. Right: pixel-based inversion.]

  10. [Multidisciplinary approach in public health research. The example of accidents and safety at work].

    PubMed

    Lert, F; Thebaud, A; Dassa, S; Goldberg, M

    1982-01-01

    This article critically analyses the various scientific approaches taken to industrial accidents, particularly in epidemiology, ergonomics and sociology, by attempting to outline the epistemological limitations in each respective field. An occupational accident is by its very nature not only a physical injury but also an economic, social and legal phenomenon, which, more so than illness, enables us to examine the problems posed by the need for a multidisciplinary approach in Public Health research.

  11. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
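
    The soft-thresholding operation referred to above can be sketched in a few lines (a generic NumPy illustration of the proximal operator of the ℓ¹ penalty, not the authors' implementation):

```python
import numpy as np

def soft_threshold(coeffs, lam):
    """Proximal operator of lam * ||.||_1: shrink each coefficient
    toward zero by lam, setting small coefficients exactly to zero."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

# Toy wavelet-coefficient vector: large entries survive (shrunk by lam),
# entries below the threshold vanish, which is what enforces sparsity.
c = np.array([3.0, -0.2, 0.5, -1.5])
print(soft_threshold(c, 0.4))  # entries 2.6, -0.0, 0.1, -1.1 (up to rounding)
```

    In iterative schemes of this kind, the threshold parameter plays the role of the regularization parameter: larger values give sparser, more heavily regularized reconstructions.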

  12. Model-based elastography: a survey of approaches to the inverse elasticity problem

    PubMed Central

    Doyley, M M

    2012-01-01

    Elastography is emerging as an imaging modality that can distinguish normal from diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas — quasi-static, harmonic, and transient — and describes inversion schemes for each elastographic imaging approach. Approaches include: first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper’s objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839

  13. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
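
    The Tikhonov-regularized output-error formulation and the L-curve data it produces can be sketched on a toy dense system (a hedged NumPy illustration; the paper's actual discretization uses a Sinc-Galerkin basis and a quasi-Newton/trust region solver, which are not reproduced here):

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Minimize ||A x - b||^2 + alpha * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Mildly ill-conditioned toy problem with noisy data.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag(10.0 ** -np.arange(10))
x_true = np.ones(10)
b = A @ x_true + 1e-6 * rng.standard_normal(20)

# L-curve data: residual norm vs. solution norm over a grid of alpha.
# The "corner" of this curve suggests a good regularization parameter.
for alpha in [1e-2, 1e-6, 1e-10]:
    x = tikhonov_solve(A, b, alpha)
    print(alpha, np.linalg.norm(A @ x - b), np.linalg.norm(x))
```

    As alpha decreases, the residual norm shrinks while the solution norm grows; the L-curve technique mentioned above balances the two.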

  14. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.

    PubMed

    Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn

    2016-01-01

    Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods degrades markedly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold whose dimension is lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
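
    The locally-low-rank observation that motivates the patch-wise approach is easy to reproduce numerically (a synthetic sketch, not the authors' algorithm; `numerical_rank` and the toy data cube are illustrative constructions):

```python
import numpy as np

def numerical_rank(X, tol=1e-3):
    """Number of singular values above tol * largest: the effective
    dimension of the subspace spanned by the columns of X."""
    s = np.linalg.svd(X, compute_uv=False)
    return int(np.sum(s > tol * s[0]))

# Synthetic "HSI": two spatial regions, each lying in its own 2-D spectral
# subspace. Globally the spectra span a 4-D subspace, but any single
# region (a "patch") spans only a 2-D one.
rng = np.random.default_rng(1)
bands, npix = 50, 100
left  = rng.standard_normal((bands, 2)) @ rng.standard_normal((2, npix))
right = rng.standard_normal((bands, 2)) @ rng.standard_normal((2, npix))
print(numerical_rank(np.hstack([left, right])))  # global rank: 4 for generic factors
print(numerical_rank(left))                      # local (patch) rank: 2
```

    Solving the fusion problem per patch therefore works in the low local dimension, which is what restores well-posedness in the paper's argument.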

  15. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, J.G.

    2011-07-01

    A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: the linear least-squares adjustment formula that has been used in the past remains valid when the covariance matrix of prior parameters is singular, and it provides a unique solution; statements in the literature to the effect that the problem is ill-posed are wrong, and no regularization is required. (author)
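
    The point can be illustrated with the standard gain-matrix form of the adjustment (a generic sketch assumed here for illustration, not the paper's derivation): only the combination A C Aᵀ + V is inverted, so a singular prior covariance C causes no difficulty.

```python
import numpy as np

def ls_adjust(x0, C, A, y, V):
    """Linear least-squares adjustment of prior parameters x0 (covariance C)
    against responses y = A x (measurement covariance V). Only A C A^T + V
    is inverted, so C itself may be singular."""
    K = C @ A.T @ np.linalg.inv(A @ C @ A.T + V)   # gain matrix
    return x0 + K @ (y - A @ x0)

# Singular prior covariance: the second parameter is known exactly
# (zero prior variance), making C rank-deficient.
x0 = np.array([1.0, 2.0])
C  = np.diag([0.5, 0.0])
A  = np.array([[1.0, 1.0]])
y  = np.array([4.0])
V  = np.array([[0.1]])
x  = ls_adjust(x0, C, A, y, V)
print(x)  # second component stays at its prior value 2.0
```

    The adjustment is unique and well-defined: the zero-variance parameter is simply left untouched, exactly as the abstract argues.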

  16. Perceptual asymmetry in texture perception.

    PubMed

    Williams, D; Julesz, B

    1992-07-15

    A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the roles of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.

  17. Refraction traveltime tomography based on damped wave equation for irregular topographic model

    NASA Astrophysics Data System (ADS)

    Park, Yunhui; Pyun, Sukjoon

    2018-03-01

    Land seismic data generally have time-static issues due to irregular topography and weathered layers at shallow depths. Unless the time static is handled appropriately, interpretation of the subsurface structures can easily be distorted. Therefore, static corrections are commonly applied to land seismic data. The near-surface velocity, which is required for static corrections, can be inferred from first-arrival traveltime tomography, which must account for the irregular topography under which land seismic data are generally acquired. This paper proposes a refraction traveltime tomography technique that is applicable to an irregular topographic model. The technique uses unstructured meshes to represent the irregular topography, and computes traveltimes from frequency-domain damped wavefields using the finite element method. The diagonal elements of the approximate Hessian matrix are adopted for preconditioning, and the principle of reciprocity is used to calculate the Fréchet derivative efficiently. We also include regularization to resolve the ill-posed inverse problem, and use the nonlinear conjugate gradient method to solve it. Because damped wavefields are used, there are no issues with artificial reflections caused by unstructured meshes. In addition, the shadow zone problem can be circumvented because the method is based on the exact wave equation, which does not require a high-frequency assumption. Furthermore, the proposed method is both robust to the initial velocity model and efficient compared to full wavefield inversions. Through synthetic and field data examples, our method is shown to successfully reconstruct shallow velocity structures. To verify the method, static corrections were roughly applied to the field data using the estimated near-surface velocity. By comparing common shot gathers and stack sections with and without static corrections, we confirmed that the proposed tomography algorithm can be used to correct the statics of land seismic data.

  18. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    PubMed

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements, instead of by the number of imaging parameters, which increases the computation speed up to four times per iteration in most under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form, and it is proven to be equivalent not only analytically but also numerically. Equivalent alternative forms for other minimization methods, such as Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, use of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. These new alternative forms become effective when the number of imaging parameters exceeds the number of measurements by a factor greater than 2.
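
    The equivalence of the parameter-space and measurement-space update forms, which the paper establishes via the Sherman-Morrison-Woodbury identity, can be checked numerically in a few lines (a generic sketch with a random Jacobian and a simple Tikhonov-style weight, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, lam = 8, 200, 0.1          # m measurements << n parameters
J = rng.standard_normal((m, n))  # Jacobian of the forward model
r = rng.standard_normal(m)       # data-model misfit

# Parameter-space form: requires inverting an n x n matrix.
dx_param = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

# Measurement-space form (push-through/Woodbury identity): m x m only,
# i.e., the matrix dimension is dictated by the number of measurements.
dx_meas = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), r)

print(np.allclose(dx_param, dx_meas))  # True
```

    For heavily under-determined problems (n much larger than m, as in the abstract's factor-of-2 rule of thumb), the second form is dramatically cheaper per iteration.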

  19. On regularization and error estimates for the backward heat conduction problem with time-dependent thermal diffusivity factor

    NASA Astrophysics Data System (ADS)

    Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba

    2018-10-01

    This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed because errors in the high-frequency components are amplified without bound. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, some new stable estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
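
    The unbounded amplification of high frequencies, and the effect of cutting them off, can be illustrated with a simple FFT-based sketch for the constant-diffusivity periodic case (spectral cutoff is used here as a crude stand-in for the Meyer-wavelet projection; all names and parameters are illustrative):

```python
import numpy as np

def backward_heat_cutoff(u_final, t, L, kmax):
    """Regularized backward solution of u_t = u_xx on a periodic interval of
    length L: multiply Fourier modes by exp(k^2 t), the exact backward
    factor, but only for |k| <= kmax, zeroing the catastrophically
    amplified high frequencies."""
    n = u_final.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    mult = np.where(np.abs(k) <= kmax, np.exp(k**2 * t), 0.0)
    return np.real(np.fft.ifft(mult * np.fft.fft(u_final)))

# Smooth initial state evolved forward with exp(-k^2 t), then recovered.
n, L, t = 256, 2 * np.pi, 0.1
x = np.linspace(0, L, n, endpoint=False)
u0 = np.sin(x) + 0.5 * np.sin(3 * x)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
u_final = np.real(np.fft.ifft(np.exp(-k**2 * t) * np.fft.fft(u0)))
u_rec = backward_heat_cutoff(u_final, t, L, kmax=10)
print(np.max(np.abs(u_rec - u0)))  # near machine precision for noise-free data
```

    Without the cutoff, the factor exp(k² t) magnifies even round-off noise at high wavenumbers beyond any bound, which is exactly the ill-posedness described above; the cutoff frequency plays the role of the regularization parameter.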

  20. [Ethical questions related to nutrition and hydration: basic aspects].

    PubMed

    Collazo Chao, E; Girela, E

    2011-01-01

    Conditions that pose ethical problems related to nutrition and hydration are very common nowadays, particularly in hospitals among terminally ill patients and other patients who require nutrition and hydration. In this article we intend to analyze some circumstances, according to widely accepted ethical values, in order to outline a clear action model to help clinicians in making such difficult decisions. The problematic situations analyzed include whether hydration and nutrition should be considered basic care or therapeutic measures, and the ethical aspects of enteral versus parenteral nutrition.

  1. Evaluation of global equal-area mass grid solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron

    2015-04-01

    The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. The solutions will be validated against multiple models and in-situ data sets.

  2. A kinetic study of jack-bean urease denaturation by a new dithiocarbamate bismuth compound

    NASA Astrophysics Data System (ADS)

    Menezes, D. C.; Borges, E.; Torres, M. F.; Braga, J. P.

    2012-10-01

    A kinetic study concerning the enzymatic inhibitory effect of a new bismuth dithiocarbamate complex on jack-bean urease is reported. A neural network approach is used to solve the ill-posed inverse problem arising from the numerical treatment of the subject. A reaction mechanism for the urease denaturation process is proposed, and the rate constants, relaxation time constants, equilibrium constants, activation Gibbs free energies for each reaction step and Gibbs free energies for the transition species are determined.

  3. Hydrological Parameter Estimations from a Conservative Tracer Test With Variable-Density Effects at the Boise Hydrogeophysical Research Site

    DTIC Science & Technology

    2011-12-15

    the measured porosity values can be taken as equivalent to effective porosity values for this aquifer with the risk of only very limited overestimation...information to constrain/control an increasingly ill-posed problem, and (3) risk estimation of a model with more heterogeneity than is needed to explain...coarse fluvial deposits: Boise Hydrogeophysical Research Site, Geological Society of America Bulletin, 116(9–10), 1059–1073. Barrash, W., T. Clemo

  4. Minimal norm constrained interpolation. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Irvine, L. D.

    1985-01-01

    In computational fluid dynamics and in CAD/CAM, a physical boundary is usually known only discretely and most often must be approximated. An acceptable approximation preserves the salient features of the data such as convexity and concavity. In this dissertation, a smooth interpolant which is locally concave where the data are concave and locally convex where the data are convex is described. The interpolant is found by posing and solving a minimization problem whose solution is a piecewise cubic polynomial. The problem is solved indirectly by using the Peano kernel theorem to recast it into an equivalent minimization problem having the second derivative of the interpolant as the solution. This approach leads to the solution of a nonlinear system of equations. It is shown that Newton's method is an exceptionally attractive and efficient method for solving the nonlinear system of equations. Examples of shape-preserving interpolants, as well as convergence results obtained by using Newton's method, are also shown. A FORTRAN program to compute these interpolants is listed. The problem of computing the interpolant of minimal norm from a convex cone in a normed dual space is also discussed. An extension of de Boor's work on minimal norm unconstrained interpolation is presented.

  5. Stigma and work.

    PubMed

    Stuart, Heather

    2004-01-01

    This paper addresses what is known about workplace stigma and employment inequity for people with mental and emotional problems. For people with serious mental disorders, studies show profound consequences of stigma, including diminished employability, lack of career advancement and poor quality of working life. People with serious mental illnesses are more likely to be unemployed or to be under-employed in inferior positions that are incommensurate with their skills or training. If they return to work following an illness, they often face hostility and reduced responsibilities. The result may be self-stigma and increased disability. Little is yet known about how workplace stigma affects those with less disabling psychological or emotional problems, even though these are likely to be more prevalent in workplace settings. Despite the heavy burden posed by poor mental health in the workplace, there is no regular source of population data relating to workplace stigma, and no evidence base to support the development of best-practice solutions for workplace anti-stigma programs. Suggestions for research are made in light of these gaps.

  6. Beyond Criminalization: Toward a Criminologically Informed Framework for Mental Health Policy and Services Research

    PubMed Central

    Silver, Eric; Wolff, Nancy

    2010-01-01

    The problems posed by persons with mental illness involved with the criminal justice system are vexing ones that have received attention at the local, state and national levels. The conceptual model currently guiding research and social action around these problems is shaped by the “criminalization” perspective and the associated belief that reconnecting individuals with mental health services will by itself reduce risk for arrest. This paper argues that such efforts are necessary but possibly not sufficient to achieve that reduction. Arguing for the need to develop a services research framework that identifies a broader range of risk factors for arrest, we describe three potentially useful criminological frameworks—the “life course,” “local life circumstances” and “routine activities” perspectives. Their utility as platforms for research in a population of persons with mental illness is discussed and suggestions are provided with regard to how services research guided by these perspectives might inform the development of community-based services aimed at reducing risk of arrest. PMID:16791518

  7. Evaluating model structure adequacy: The case of the Maggia Valley groundwater system, southern Switzerland

    USGS Publications Warehouse

    Hill, Mary C.; L. Foglia,; S. W. Mehl,; P. Burlando,

    2013-01-01

    Model adequacy is evaluated with alternative models rated using model selection criteria (AICc, BIC, and KIC) and three other statistics. Model selection criteria are tested with cross-validation experiments and insights for using alternative models to evaluate model structural adequacy are provided. The study is conducted using the computer codes UCODE_2005 and MMA (MultiModel Analysis). One recharge alternative is simulated using the TOPKAPI hydrological model. The predictions evaluated include eight heads and three flows located where ecological consequences and model precision are of concern. Cross-validation is used to obtain measures of prediction accuracy. Sixty-four models were designed deterministically and differ in representation of river, recharge, bedrock topography, and hydraulic conductivity. Results include: (1) What may seem like inconsequential choices in model construction may be important to predictions. Analysis of predictions from alternative models is advised. (2) None of the model selection criteria consistently identified models with more accurate predictions. This is a disturbing result that suggests reconsidering the utility of model selection criteria, and/or the cross-validation measures used in this work to measure model accuracy. (3) KIC displayed poor performance for the present regression problems; theoretical considerations suggest that the difficulties are associated with wide variations in the sensitivity term of KIC resulting from the models being nonlinear and the problems being ill-posed due to parameter correlations and insensitivity. The other criteria performed somewhat better, and similarly to each other. (4) Quantities with high leverage are more difficult to predict. The results are expected to be generally applicable to models of environmental systems.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    J.A. Krommes

    Fusion physics poses an extremely challenging, practically complex problem that does not yield readily to simple paradigms. Nevertheless, various of the theoretical tools and conceptual advances emphasized at the KaufmanFest 2007 have motivated and/or found application to the development of fusion-related plasma turbulence theory. A brief historical commentary is given on some aspects of that specialty, with emphasis on the role (and limitations) of Hamiltonian/symplectic approaches, variational methods, oscillation-center theory, and nonlinear dynamics. It is shown how to extract a renormalized ponderomotive force from the statistical equations of plasma turbulence, and the possibility of a renormalized K-χ theorem is discussed. An unusual application of quasilinear theory to the problem of plasma equilibria in the presence of stochastic magnetic fields is described. The modern problem of zonal-flow dynamics illustrates a confluence of several techniques, including (i) the application of nonlinear-dynamics methods, especially center-manifold theory, to the problem of the transition to plasma turbulence in the face of self-generated zonal flows; and (ii) the use of Hamiltonian formalism to determine the appropriate (Casimir) invariant to be used in a novel wave-kinetic analysis of systems of interacting zonal flows and drift waves. Recent progress in the theory of intermittent chaotic statistics and the generation of coherent structures from turbulence is mentioned, and an appeal is made for some new tools to cope with these interesting and difficult problems in nonlinear plasma physics. Finally, the important influence of the intellectually stimulating research environment fostered by Prof. Allan Kaufman on the author's thinking and teaching methodology is described.

  9. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures

    PubMed Central

    Chen, Yun; Yang, Hui

    2016-01-01

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering. PMID:27966581
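
    The core idea, using mutual information to detect nonlinear dependence that linear correlation misses, can be sketched with a simple histogram estimator (an illustrative stand-in; the paper's actual pipeline couples MI measures with Dirichlet process clustering and a group elastic net, which are not reproduced here):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats:
    sum over bins of p(x,y) * log[ p(x,y) / (p(x) p(y)) ]."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
x = rng.standard_normal(5000)
# x and x**2 are nearly uncorrelated (Pearson ~ 0) yet strongly dependent,
# so MI is clearly positive while MI for independent noise stays near zero.
print(mutual_information(x, x**2))                       # large
print(mutual_information(x, rng.standard_normal(5000)))  # near zero
```

    Note that simple histogram estimators carry a positive bias that grows with the number of bins; more refined kernel or k-nearest-neighbor estimators are common in practice.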

  10. A Novel Information-Theoretic Approach for Variable Clustering and Predictive Modeling Using Dirichlet Process Mixtures.

    PubMed

    Chen, Yun; Yang, Hui

    2016-12-14

    In the era of big data, there is increasing interest in clustering variables to minimize data redundancy and maximize variable relevancy. Existing clustering methods, however, depend on nontrivial assumptions about the data structure. Note that nonlinear interdependence among variables poses significant challenges on the traditional framework of predictive modeling. In the present work, we reformulate the problem of variable clustering from an information theoretic perspective that does not require the assumption of data structure for the identification of nonlinear interdependence among variables. Specifically, we propose the use of mutual information to characterize and measure nonlinear correlation structures among variables. Further, we develop Dirichlet process (DP) models to cluster variables based on the mutual-information measures among variables. Finally, orthonormalized variables in each cluster are integrated with group elastic-net model to improve the performance of predictive modeling. Both simulation and real-world case studies showed that the proposed methodology not only effectively reveals the nonlinear interdependence structures among variables but also outperforms traditional variable clustering algorithms such as hierarchical clustering.

  11. Impact of migration on illness experience and help-seeking strategies of patients from Turkey and Bosnia in primary health care in Basel.

    PubMed

    Gilgen, D; Maeusezahl, D; Salis Gross, C; Battegay, E; Flubacher, P; Tanner, M; Weiss, M G; Hatz, C

    2005-09-01

    Migration, particularly among refugees and asylum seekers, poses many challenges to the health system of host countries. This study examined the impact of migration history on illness experience, its meaning and help-seeking strategies of migrant patients from Bosnia and Turkey with a range of common health problems in general practice in Basel, Switzerland. The Explanatory Model Interview Catalogue, a data collection instrument for cross-cultural research which combines epidemiological and ethnographic research approaches, was used in semi-structured one-to-one patient interviews. Bosnian patients (n=36), who had more traumatic migration experiences than Turkish/Kurdish (n=62) or Swiss internal migrants (n=48), reported a larger number of health problems than the other groups. Psychological distress was reported most frequently by all three groups in response to focussed queries, but spontaneously reported symptoms indicated the prominence of somatic, rather than psychological or psychosocial, problems. Among Bosnians, 78% identified traumatic migration experiences as a cause of their illness, in addition to a range of psychological and biomedical causes. Help-seeking strategies for the current illness included a wide range of treatments, such as basic medical care at private practices, hospital outpatient departments, and alternative medical treatments, among all groups. The findings provide a useful guide to clinicians who work with migrants and should inform policy in medical care, information and health promotion for migrants in Switzerland, as well as further education of health professionals on issues concerning migrants' health.

  12. An ill-posed parabolic evolution system for dispersive deoxygenation-reaeration in water

    NASA Astrophysics Data System (ADS)

    Azaïez, M.; Ben Belgacem, F.; Hecht, F.; Le Bot, C.

    2014-01-01

    We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most physically relevant models used by engineers derive from various additions and corrections to enhance the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor’s dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. The particularity is that both Neumann and Dirichlet boundary conditions are available on the DO tracer while the BOD density is free of any conditions. In fact, for real-life concerns, measurements on the DO are easy to obtain and to save. On the contrary, collecting data on the BOD is a sensitive task and turns out to be a lengthy process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem plainly worth studying for its own interest but it could also be a mandatory step in other applications such as the identification of the location of pollution sources. The non-standard boundary conditions generate two difficulties on mathematical and computational grounds. They set up a severe coupling between both equations and they are the cause of the ill-posed data reconstruction problem. Existence and stability fail. Identifiability is therefore the only positive result one can search for; it is the central purpose of the paper. Finally, we have performed some computational experiments to assess the capability of the mixed finite element in missing data recovery.
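
    For orientation, the classical (non-dispersive) Streeter-Phelps balance that such models extend can be written, in a standard textbook form (notation assumed here for illustration: L the BOD concentration, D the DO deficit, k_d the deoxygenation rate, k_a the reaeration rate), as:

```latex
\frac{\mathrm{d}L}{\mathrm{d}t} = -k_d\,L, \qquad
\frac{\mathrm{d}D}{\mathrm{d}t} = k_d\,L - k_a\,D .
```

    The model studied in the paper augments this balance with Taylor dispersion terms in all space directions, which is what produces the coupled reaction-dispersion system described above.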

  13. Automated reverse engineering of nonlinear dynamical systems

    PubMed Central

    Bongard, Josh; Lipson, Hod

    2007-01-01

    Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated “reverse engineering” approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future. PMID:17553966

  14. Automated reverse engineering of nonlinear dynamical systems.

    PubMed

    Bongard, Josh; Lipson, Hod

    2007-06-12

    Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. Previous automated symbolic modeling approaches of coupled physical systems produced linear models or required a nonlinear model to be provided manually. The advance presented here is made possible by allowing the method to model each (possibly coupled) variable separately, intelligently perturbing and destabilizing the system to extract its less observable characteristics, and automatically simplifying the equations during modeling. We demonstrate this method on four simulated and two real systems spanning mechanics, ecology, and systems biology. Unlike numerical models, symbolic models have explanatory value, suggesting that automated "reverse engineering" approaches for model-free symbolic nonlinear system identification may play an increasing role in our ability to understand progressively more complex systems in the future.

  15. Local search heuristic for the discrete leader-follower problem with multiple follower objectives

    NASA Astrophysics Data System (ADS)

    Kochetov, Yury; Alekseeva, Ekaterina; Mezmaz, Mohand

    2016-10-01

    We study a discrete bilevel problem, also known as the leader-follower problem, with multiple objectives at the lower level. It is assumed that constraints at the upper level can include variables of both levels. For this ill-posed problem we define feasible and optimal solutions for the pessimistic case. A central point of this work is a two-stage method for obtaining a feasible solution in the pessimistic case, given a leader decision. The target of the first stage is a follower solution that violates the leader constraints. The target of the second stage is a pessimistic feasible solution. Each stage calls a heuristic and a solver for a series of particular mixed integer programs. The method is integrated into a local search based heuristic that is designed to find near-optimal leader solutions.

  16. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
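The regularized Kaczmarz idea can be sketched on a toy discretized Fredholm system. This is a minimal stand-in, not the paper's method: the kernel, sizes, relaxation value, and sweep count below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discretized Fredholm system standing in for the paper's integral
# equations (kernel, sizes and relaxation are invented for illustration).
n = 40
t = np.linspace(0, 1, n)
A = np.exp(-5.0 * np.abs(t[:, None] - t[None, :]))   # smoothing kernel
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)       # noisy data

def block_kaczmarz(A, b, n_blocks=4, sweeps=50, relax=0.5):
    """Cyclic block-Kaczmarz; under-relaxation (relax < 1) plus a limited
    number of sweeps act as the regularization for noisy right-hand sides."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for idx in np.array_split(np.arange(A.shape[0]), n_blocks):
            # relaxed least-squares projection onto the block's solution set
            x += relax * np.linalg.pinv(A[idx]) @ (b[idx] - A[idx] @ x)
    return x

x_rec = block_kaczmarz(A, b)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```

In the multi-frequency setting of the paper, each block would naturally correspond to one frequency's measurements; here the blocks are just row groups of one toy operator.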

  17. Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.

    PubMed

    Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D

    2017-11-01

    We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
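In the synthesis setting, sparse regularization reduces to an l1-penalized least-squares problem. A minimal iterative soft-thresholding (ISTA) sketch on a toy random operator follows; the operator, sizes, and weight lam are invented and stand in for the spherical wavelet setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy synthesis-setting problem: recover sparse coefficients alpha from noisy
# measurements y = Phi @ alpha + noise (random Phi stands in for the spherical
# wavelet operator; all sizes and the weight lam are invented).
m, n = 60, 100
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
alpha_true = np.zeros(n)
alpha_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = Phi @ alpha_true + 0.01 * rng.standard_normal(m)

def ista(Phi, y, lam=0.02, n_iter=500):
    """Iterative soft-thresholding for min_a 0.5*||Phi a - y||^2 + lam*||a||_1."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        a -= Phi.T @ (Phi @ a - y) / L           # gradient step on data term
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

a_rec = ista(Phi, y)
print("relative error:", np.linalg.norm(a_rec - alpha_true) / np.linalg.norm(alpha_true))
```

A weighted l1 norm, as mentioned in the abstract, would simply use a per-coefficient threshold in the soft-thresholding step.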

  18. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

    Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.

  19. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain allow creating projection mass density (PMD) per material. From decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to account for the photonic noise. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
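The two data fidelity terms can be compared on a toy one-parameter Beer-Lambert model with Poisson counts. The model and all numbers below are invented for illustration; the paper works with full projection-domain decompositions, not this scalar problem.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy one-material model: expected counts f(x) = I0*exp(-mu*x), Poisson data.
# (Invented stand-in for the projection-domain decomposition.)
I0, mu, x_true = 50.0, 0.5, 2.0              # low-count regime
counts = rng.poisson(I0 * np.exp(-mu * x_true), size=200)

xs = np.linspace(0.5, 4.0, 2000)             # grid search over the thickness x
f = I0 * np.exp(-mu * xs)[:, None]           # model prediction per candidate
y = counts[None, :].astype(float)

wls = ((y - f) ** 2 / np.maximum(y, 1.0)).sum(axis=1)            # Gaussian-noise fit
kl = (f - y + y * np.log(np.maximum(y, 1e-12) / f)).sum(axis=1)  # Poisson-noise fit

x_wls, x_kl = xs[np.argmin(wls)], xs[np.argmin(kl)]
print(f"WLS estimate: {x_wls:.3f}, KL estimate: {x_kl:.3f}, truth: {x_true}")
```

The KL minimizer coincides with the Poisson maximum-likelihood estimate, while data-dependent WLS weights introduce a bias at low counts, which is the effect the abstract reports.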

  20. Space structures insulating material's thermophysical and radiation properties estimation

    NASA Astrophysics Data System (ADS)

    Nenarokomov, A. V.; Alifanov, O. M.; Titov, D. M.

    2007-11-01

    In many practical situations in aerospace technology it is impossible to measure directly such properties of analyzed materials (for example, composites) as thermal and radiation characteristics. The only way that can often be used to overcome these difficulties is indirect measurements. This type of measurement is usually formulated as the solution of inverse heat transfer problems. Such problems are ill-posed in the mathematical sense, and their main feature shows itself in solution instabilities. That is why special regularizing methods are needed to solve them. Experimental methods for identifying the mathematical models of heat transfer based on solving inverse problems are among the most effective modern approaches. The objective of this paper is to estimate thermal and radiation properties of advanced materials using an approach based on inverse methods.

  1. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

    This paper proposes a new fractional-order total variation (TV) denoising method, which provides a more elegant and effective way of treating the problems of algorithm implementation, ill-posed inversion, regularization parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.

  2. Total variation superiorized conjugate gradient method for image reconstruction

    NASA Astrophysics Data System (ADS)

    Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.

    2018-03-01

    The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
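Superiorization interleaves objective-reducing perturbations of summable size with the steps of a basic iterative algorithm. A minimal sketch on a 1D deblurring problem follows; it uses plain gradient descent in place of CG to keep the code short, and the blur, signal, and step schedule are invented, so this is an illustration of the superiorization idea rather than the paper's S-CG.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1D deblurring: Gaussian blur A, piecewise-constant signal (all invented).
n = 64
t = np.arange(n)
A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)
x_true = np.where(t < n // 2, 1.0, 3.0)
b = A @ x_true + 0.01 * rng.standard_normal(n)

def tv(x):
    return np.abs(np.diff(x)).sum()

def tv_subgradient(x):
    s = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

# Superiorization: between the basic algorithm's steps (plain gradient descent
# on the least-squares term here, standing in for CG), take TV-reducing
# perturbations whose sizes are summable, so convergence is not destroyed.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for k in range(300):
    d = tv_subgradient(x)
    if np.linalg.norm(d) > 0:
        x -= (0.9 ** k) * d / np.linalg.norm(d)   # superiorization step
    x -= step * (A.T @ (A @ x - b))               # basic algorithm step

print("TV of reconstruction:", tv(x), " TV of truth:", tv(x_true))
```

The geometric schedule 0.9**k makes the total perturbation bounded, which is the key property that lets the basic algorithm retain its data-fitting behavior.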

  3. Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Chang, Lei; Liu, Yu-xin

    2017-07-01

    We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function as well as the corresponding light-front parton distribution amplitude (PDA) can be well determined. We confirm prior work on PDA computations, which was based on different methods.

  4. Chopping Time of the FPU α-Model

    NASA Astrophysics Data System (ADS)

    Carati, A.; Ponno, A.

    2018-03-01

    We study, both numerically and analytically, the time needed to observe the breaking of an FPU α-chain in two or more pieces, starting from an unbroken configuration at a given temperature. It is found that such a "chopping" time is given by a formula that, at low temperatures, is of the Arrhenius-Kramers form, so that the chain does not break up on an observable time-scale. The result explains why the study of the FPU problem is meaningful also in the ill-posed case of the α-model.

  5. A Toolbox for Imaging Stellar Surfaces

    NASA Astrophysics Data System (ADS)

    Young, John

    2018-04-01

    In this talk I will review the available algorithms for synthesis imaging at visible and infrared wavelengths, including both gray and polychromatic methods. I will explain state-of-the-art approaches to constraining the ill-posed image reconstruction problem, and selecting an appropriate regularisation function and strength of regularisation. The reconstruction biases that can follow from non-optimal choices will be discussed, including their potential impact on the physical interpretation of the results. This discussion will be illustrated with example stellar surface imaging results from real VLTI and COAST datasets.

  6. Mathematics and Measurement.

    PubMed

    Boisvert, R F; Donahue, M J; Lozier, D W; McMichael, R; Rust, B W

    2001-01-01

    In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST's current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, characterization of the accuracy of software for micromagnetic modeling, and in the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years.

  7. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

    Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.

  9. The Relationship between Students' Problem Posing and Problem Solving Abilities and Beliefs: A Small-Scale Study with Chinese Elementary School Children

    ERIC Educational Resources Information Center

    Limin, Chen; Van Dooren, Wim; Verschaffel, Lieven

    2013-01-01

    The goal of the present study is to investigate the relationship between pupils' problem posing and problem solving abilities, their beliefs about problem posing and problem solving, and their general mathematics abilities, in a Chinese context. Five instruments, i.e., a problem posing test, a problem solving test, a problem posing questionnaire,…

  10. Exploring equivalence domain in nonlinear inverse problems using Covariance Matrix Adaptation Evolution Strategy (CMAES) and random sampling

    NASA Astrophysics Data System (ADS)

    Grayver, Alexander V.; Kuvshinov, Alexey V.

    2016-05-01

    This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied the state-of-the-art stochastic optimization algorithm called Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem by using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested by using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
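The two-stage idea (global search for a low-misfit region, then random sampling of the equivalence domain) can be sketched on a toy nonlinear forward model. A crude random search stands in for CMAES here, and the forward model g, bounds, and misfit threshold are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy nonlinear inverse problem: three data values from a two-parameter model.
# The forward model g, bounds and thresholds are invented for illustration.
def g(m):
    return np.array([m[0] * np.exp(-m[1]), m[0] + m[1], m[0] * m[1]])

m_true = np.array([2.0, 0.5])
d_obs = g(m_true) + 0.01 * rng.standard_normal(3)

def misfit(m):
    return np.linalg.norm(g(m) - d_obs)

# Stage 1 (stand-in for CMAES): crude global search for a low-misfit region.
cands = rng.uniform([0.0, 0.0], [5.0, 2.0], size=(20000, 2))
best = cands[np.argmin([misfit(m) for m in cands])]

# Stage 2: sample the equivalence domain by perturbing around the low-misfit
# region and keeping every model whose misfit stays below a threshold.
samples = best + 0.3 * rng.standard_normal((20000, 2))
ensemble = np.array([m for m in samples if misfit(m) < 0.05])
print("ensemble size:", len(ensemble))
print("parameter spread:", ensemble.std(axis=0))
```

The spread of the accepted ensemble is a simple uncertainty estimate; CMAES would replace stage 1 with a far more efficient covariance-adapting search.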

  11. Obtaining the Bidirectional Transfer Distribution Function of Isotropically Scattering Materials Using an Integrating Sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonsson, Jacob C.; Branden, Henrik

    2006-10-19

    This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge of the illuminated area of the sample and the geometry of the sphere port, combined with the measured data, yields a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by applying Tikhonov regularization to the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterized using both two goniophotometers and the described method to verify the validity of the new method. The agreement is very good for the more diffuse samples. The solution for the low-scattering samples contains unphysical oscillations, but still gives the correct shape of the solution. The origin of the oscillations, and why they are more prominent in low-scattering samples, is discussed.
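The Tikhonov step can be illustrated on a toy smoothing-kernel system. The kernel, noise level, and regularization weight below are invented; the paper's actual system comes from the sphere-port geometry.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy ill-posed system: a smoothing kernel maps the unknown angle-dependent
# transmittance to measured signals (kernel and noise level are invented).
n = 50
s = np.linspace(0, 1, n)
A = 1.0 / (1.0 + 40.0 * (s[:, None] - s[None, :]) ** 2)
x_true = np.exp(-((s - 0.4) ** 2) / 0.01)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # unregularized: noise blows up
lam = 1e-3
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)   # Tikhonov

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"naive error {err(x_naive):.2e}, Tikhonov error {err(x_tik):.2e}")
```

The unregularized solution amplifies the measurement noise through the kernel's tiny singular values, which is exactly the "rarely gives a physical solution" behavior the abstract describes.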

  12. Lassa fever: the challenges of curtailing a deadly disease.

    PubMed

    Ibekwe, Titus

    2012-01-01

    Today Lassa fever is mainly a disease of the developing world; however, several imported cases have been reported in different parts of the world, and there are growing concerns about the potential of Lassa fever virus as a biological weapon. Yet no tangible solution to this problem has been developed nearly half a century after its identification. Hence, this paper is aimed at appraising the problems associated with the Lassa fever illness, the challenges in curbing the epidemic, and recommendations on important focal points. This is a review based on the documents from the EFAS conference 2011 and a literature search on PubMed, Scopus and ScienceDirect. The retrieval of relevant papers was via the University of British Columbia and University of Toronto libraries. The two major search engines returned 61 and 920 articles respectively. Out of these, the final 26 articles that met the criteria were selected. Relevant information on epidemiology, burden of management and control was obtained. Prompt and effective containment of the Lassa fever disease in Lassa village four decades ago could have saved the West African sub-region, and indeed the entire globe, from the devastating effects and threats posed by this illness. That was a hard lesson calling for much more proactive measures towards the eradication of the illness at primary, secondary and tertiary levels of health care.

  13. SU-E-T-398: Evaluation of Radiobiological Parameters Using Serial Tumor Imaging During Radiotherapy as an Inverse Ill-Posed Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A; Sandison, G; Schwartz, J

    Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an ill-posed inverse problem described by the Fredholm integral equation of the first kind because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. The variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data.
Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to development of more advanced algorithms which take into account tumor heterogeneity, for example, related to hypoxia.
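The ill-posedness of separating a sum of exponential processes shows up already in a two-exponential toy model. The rates and amplitudes below are invented for illustration (the rates are treated as known, unlike in the paper), and plain Tikhonov regularization stands in for the variational regularization used there.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy version of the identifiability issue: a volume curve that is a sum of
# two exponentials with nearby (assumed known) rates is nearly collinear, so
# the amplitude estimates are unstable without regularization.
t = np.linspace(0, 10, 30)
rates = np.array([0.30, 0.35])
amps_true = np.array([0.7, 0.3])
E = np.exp(-np.outer(t, rates))              # nearly collinear design matrix
y = E @ amps_true + 0.01 * rng.standard_normal(t.size)

amps_ls = np.linalg.lstsq(E, y, rcond=None)[0]                  # unstabilized
lam = 1e-2
amps_reg = np.linalg.solve(E.T @ E + lam * np.eye(2), E.T @ y)  # regularized

print("least squares:", amps_ls)
print("regularized:  ", amps_reg)
```

The sum of the two amplitudes is well determined by the data, while their difference is not; the regularization suppresses the noise-driven fluctuations in the poorly determined direction.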

  14. Pre-Service Teachers' Free and Structured Mathematical Problem Posing

    ERIC Educational Resources Information Center

    Silber, Steven; Cai, Jinfa

    2017-01-01

    This exploratory study examined how pre-service teachers (PSTs) pose mathematical problems for free and structured mathematical problem-posing conditions. It was hypothesized that PSTs would pose more complex mathematical problems under structured posing conditions, with increasing levels of complexity, than PSTs would pose under free posing…

  15. Data Assimilation on a Quantum Annealing Computer: Feasibility and Scalability

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.; Halem, M.; Chapman, D. R.; Pelissier, C. S.

    2014-12-01

    Data assimilation is one of the ubiquitous and computationally hard problems in the Earth Sciences. In particular, ensemble-based methods require a large number of model evaluations to estimate the prior probability density over system states, and variational methods require adjoint calculations and iteration to locate the maximum a posteriori solution in the presence of nonlinear models and observation operators. Quantum annealing computers (QAC) like the new D-Wave housed at the NASA Ames Research Center can be used for optimization and sampling, and therefore offer a new possibility for efficiently solving hard data assimilation problems. Coding on the QAC is not straightforward: a problem must be posed as a Quadratic Unconstrained Binary Optimization (QUBO) and mapped to a spherical Chimera graph. We have developed a method for compiling nonlinear 4D-Var problems on the D-Wave that consists of five steps: (1) emulating the nonlinear model and/or observation function using radial basis functions (RBF) or Chebyshev polynomials; (2) truncating a Taylor series around each RBF kernel; (3) reducing the Taylor polynomial to a quadratic using ancilla gadgets; (4) mapping the real-valued quadratic to a fixed-precision binary quadratic; and (5) mapping the fully coupled binary quadratic to a partially coupled spherical Chimera graph using ancilla gadgets. At present the D-Wave contains 512 qbits (with 1024 and 2048 qbit machines due in the next two years); this machine size allows us to estimate only 3 state variables at each satellite overpass. However, QACs solve optimization problems using a physical (quantum) system, and therefore do not require iterations or calculation of model adjoints. This has the potential to revolutionize our ability to efficiently perform variational data assimilation, as the size of these computers grows in the coming years.
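A QUBO simply asks for the binary vector minimizing a quadratic form. A brute-force solver on an invented 3-variable instance shows the target formulation; the annealer's job is to replace this exhaustive search at scales where it becomes infeasible.

```python
import itertools
import numpy as np

# A QUBO asks for the binary vector x minimizing x^T Q x.  This brute-force
# search stands in for the annealer on an invented 3-variable instance.
Q = np.array([[-1.0, 2.0, 0.0],
              [0.0, -1.0, 2.0],
              [0.0, 0.0, -1.0]])

best_x, best_e = None, np.inf
for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits)
    e = x @ Q @ x
    if e < best_e:
        best_x, best_e = x, e

print("minimizer:", best_x, "energy:", best_e)   # minimizer: [1 0 1] energy: -2.0
```

The compilation steps listed in the abstract are all about producing such a Q from a real-valued 4D-Var cost function within the hardware's connectivity constraints.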

  16. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) the locality making faraway anchors have smaller influences on current data and 2) the flexibility balancing well between the reconstruction of current data and the locality. In this paper, we address the problem from the theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local student coding, and propose local Laplacian coding (LPC) to achieve the locality and the flexibility. We apply LPC into locally linear classifiers to solve diverse classification tasks. The comparable or exceeded performances of state-of-the-art methods demonstrate the effectiveness of the proposed method.

  17. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
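The SVD filtering step can be sketched on a toy 1D smoothing kernel: small singular values are discarded before inversion. The kernel, noise level, and truncation threshold are invented for illustration and stand in for the 2D tensor-product setting of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy 1D analogue of the SVD filtering step: truncate small singular values
# of a smoothing kernel before inverting (kernel and threshold are invented).
n = 60
s = np.linspace(0, 1, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.005)
x_true = np.sin(np.pi * s)
b = A @ x_true + 1e-2 * rng.standard_normal(n)

U, sig, Vt = np.linalg.svd(A)
k = int(np.sum(sig > 1e-2 * sig[0]))     # keep only well-determined components
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / sig[:k])

err = np.linalg.norm(x_tsvd - x_true) / np.linalg.norm(x_true)
print(f"kept {k} of {n} singular values, relative error {err:.3f}")
```

In I2DUPEN this filtering also compresses the least-squares problem, since only the retained singular components enter the subsequent Tikhonov inversion.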

  18. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Chu, Pan; Lei, Jing

    2017-11-01

    Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By the introduction of the Tikhonov regularization (TR) methodology, in this paper a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets is put forward to convert the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.

  19. Regularized minimum I-divergence methods for the inverse blackbody radiation problem

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin

    2006-08-01

    This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
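The unregularized minimum I-divergence iteration is a multiplicative update that automatically preserves non-negativity. A minimal sketch on an invented smoothing kernel (not Planck's law) with noiseless data; the regularized variants in the paper modify this update with penalty terms.

```python
import numpy as np

# Minimum I-divergence reconstruction via multiplicative updates (the
# unregularized iteration; the kernel below is an invented smoothing kernel,
# not Planck's law, and the data are noiseless).
n = 40
u = np.linspace(0.1, 1.0, n)
K = np.exp(-np.outer(u, 1.0 / u))
K /= K.sum(axis=0)                   # nonnegative kernel, unit column sums
x_true = np.exp(-((u - 0.5) ** 2) / 0.02)
y = K @ x_true

x = np.ones(n)                       # positive initial estimate stays positive
for _ in range(2000):
    # x_j <- x_j * [sum_i K_ij * y_i / (Kx)_i] / [sum_i K_ij]
    x *= (K.T @ (y / (K @ x))) / K.T.sum(axis=1)
print("relative data misfit:", np.linalg.norm(K @ x - y) / np.linalg.norm(y))
```

Each update monotonically decreases the I-divergence between the data and the predicted spectrum, which is the EM interpretation the abstract relies on for Green's one-step-late regularization.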

  20. A Novel Face-on-Face Contact Method for Nonlinear Solid Mechanics

    NASA Astrophysics Data System (ADS)

    Wopschall, Steven Robert

    The implicit solution to contact problems in nonlinear solid mechanics poses many difficulties. Traditional node-to-segment methods may suffer from locking and experience contact force chatter in the presence of sliding. More recent developments include mortar-based methods, which resolve local contact interactions over face-pairs and feature a kinematic constraint in integral form that smoothes contact behavior, especially in the presence of sliding. These methods have been shown to perform well in the presence of geometric nonlinearities and are demonstrably more robust than node-to-segment methods. These methods are typically biased, however, interpolating contact tractions and gap equations on a designated non-mortar face, which leads to an asymmetry in the formulation. Another challenge is constraint enforcement. The general selection of the active set of constraints is fraught with difficulty, often leading to non-physical solutions and easily resulting in missed face-pair interactions. Details on reliable constraint enforcement methods are lacking in the greater contact literature. This work presents an unbiased contact formulation utilizing a median-plane methodology. Polynomials up to linear order are used for the discrete pressure representation, and integral gap constraints are enforced using a novel subcycling procedure. This procedure reliably determines the active set of contact constraints, leading to physical and kinematically admissible solutions free of heuristics and user action. The contact method presented herein successfully solves difficult quasi-static contact problems in the implicit computational setting. These problems feature finite deformations, material nonlinearity, and complex interface geometries, all of which are challenging characteristics for contact implementations and constraint enforcement algorithms.
The subcycling procedure is a key feature of this method, handling active constraint selection for complex interfaces and mesh geometries.

  1. Creativity of Field-dependent and Field-independent Students in Posing Mathematical Problems

    NASA Astrophysics Data System (ADS)

    Azlina, N.; Amin, S. M.; Lukito, A.

    2018-01-01

    This study aims at describing the creativity of elementary school students with different cognitive styles in mathematical problem-posing. The posed problems were assessed based on three components of creativity, namely fluency, flexibility, and novelty. The free-type problem posing was used in this study. This study is a descriptive research with a qualitative approach. Data were collected through a written task and task-based interviews. The subjects were two elementary students, one Field Dependent (FD) and the other Field Independent (FI), as measured by the GEFT (Group Embedded Figures Test). Further, the data were analyzed based on the creativity components. The results show that the FD student's posed problems fulfilled two components of creativity, namely fluency, in which the subject posed at least 3 mathematical problems, and flexibility, in which the subject posed problems with at least 3 different categories/ideas. Meanwhile, the FI student's posed problems fulfilled all three components of creativity, namely fluency, in which the subject posed at least 3 mathematical problems; flexibility, in which the subject posed problems with at least 3 different categories/ideas; and novelty, in which the subject posed problems that were purely the result of her own ideas and different from problems she had known.

  2. Phillips-Tikhonov regularization with a priori information for neutron emission tomographic reconstruction on Joint European Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bielecki, J.; Scholz, M.; Drozdowicz, K.

    A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. This work aims to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high-resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction on JET.
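The Phillips-Tikhonov step itself can be sketched generically as least squares with a curvature penalty. The geometry matrix and profile below are hypothetical, and the a priori profile-shape selection of the regularization parameter described in the abstract is omitted:

```python
import numpy as np

def phillips_tikhonov(A, b, alpha):
    """Minimize ||A x - b||^2 + alpha ||L x||^2, where L is the
    second-difference (curvature) operator -- the classical
    Phillips-Tikhonov penalty favouring smooth profiles."""
    n = A.shape[1]
    L = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    return np.linalg.solve(A.T @ A + alpha * (L.T @ L), A.T @ b)

# A smooth "emissivity" profile seen through only 19 lines of sight;
# both the profile and the geometry matrix are illustrative.
rng = np.random.default_rng(1)
x_grid = np.linspace(0.0, 1.0, 30)
x_true = np.exp(-((x_grid - 0.5) ** 2) / 0.02)   # peaked smooth profile
A = rng.random((19, 30))                          # hypothetical chord matrix
b = A @ x_true                                    # line-integrated data
x_rec = phillips_tikhonov(A, b, 1e-3)
```

Even though the system is underdetermined (19 equations, 30 unknowns), the curvature penalty selects a smooth solution without any post-processing, which mirrors the point made in the abstract about imposing constraints during regularization.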

  3. Skill Levels of Prospective Physics Teachers on Problem Posing

    ERIC Educational Resources Information Center

    Cildir, Sema; Sezen, Nazan

    2011-01-01

    Problem posing is one of the topics which educators thoroughly accentuate. Problem-posing skill is defined as an introverted activity of a student's learning. In this study, skill levels of prospective physics teachers on problem posing were determined and their views on problem posing were evaluated. To this end, prospective teachers were given…

  4. [Medicine at the "edge of chaos". Life, entropy and complexity].

    PubMed

    De Vito, Eduardo L

    2016-01-01

    The aim of this paper is to help physicians and health professionals, who constantly seek to improve their knowledge for the benefit of the ill, to incorporate new conceptual and methodological tools to understand the complexity inherent to the field of medicine. This article contains notions that are unfamiliar to these professionals and are intended to foster reflection and learning. It poses the need to define life from a thermodynamic point of view, linking it closely to complex systems, nonlinear dynamics and chaotic behavior, as well as to redefine conventional physiological control mechanisms based on the concept of homeostasis, and to travel the path that starts with the search for extraterrestrial life up to exposing medicine "near the edge of chaos". Complexity transcends the biological aspects; it includes a subjective and symbolic/social dimension. Viewing disease as a heterogeneous and multi-causal phenomenon can give rise to new approaches for the sick.

  5. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least-squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The method proposed allows one to control the instability of a numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstructing the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions for synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
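The truncated-SVD mechanics behind the r-solution can be illustrated on a synthetic operator. The matrix, data, and truncation rank below are hypothetical stand-ins, not the actual shallow-water propagation model:

```python
import numpy as np

def r_solution(A, d, r):
    """Truncated-SVD (r-solution) of the least-squares problem A q = d:
    keep only the r largest singular values, discarding the unstable
    small-singular-value components that amplify data noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T @ ((U[:, :r].T @ d) / s[:r])

# Illustrative operator with a rapidly decaying singular spectrum.
rng = np.random.default_rng(42)
n = 20
Q1, _ = np.linalg.qr(rng.standard_normal((n, n)))
Q2, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)                 # singular values 1, 0.1, ...
A = Q1 @ np.diag(s) @ Q2.T
q_true = Q2[:, :5].sum(axis=1)            # lies in the stable subspace
d = A @ q_true + 1e-8 * rng.standard_normal(n)   # noisy "records"

q_r = r_solution(A, d, r=5)               # stable reconstruction
q_naive = np.linalg.solve(A, d)           # noise blown up by 1/s_min
```

Inspecting the singular spectrum of `A` before inverting is exactly what lets one judge how much of the source is recoverable from a given set of recording stations.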

  6. On three dimensional object recognition and pose-determination: An abstraction based approach. Ph.D. Thesis - Michigan Univ. Final Report

    NASA Technical Reports Server (NTRS)

    Quek, Kok How Francis

    1990-01-01

    A method of computing reliable Gaussian and mean curvature sign-map descriptors from the polynomial approximation of surfaces was demonstrated. Such descriptors which are invariant under perspective variation are suitable for hypothesis generation. A means for determining the pose of constructed geometric forms whose algebraic surface descriptors are nonlinear in terms of their orienting parameters was developed. This was done by means of linear functions which are capable of approximating nonlinear forms and determining their parameters. It was shown that biquadratic surfaces are suitable companion linear forms for cylindrical approximation and parameter estimation. The estimates provided the initial parametric approximations necessary for a nonlinear regression stage to fine tune the estimates by fitting the actual nonlinear form to the data. A hypothesis-based split-merge algorithm for extraction and pose determination of cylinders and planes which merge smoothly into other surfaces was developed. It was shown that all split-merge algorithms are hypothesis-based. A finite-state algorithm for the extraction of the boundaries of run-length regions was developed. The computation takes advantage of the run list topology and boundary direction constraints implicit in the run-length encoding.

  7. A quasi-spectral method for Cauchy problem of 2/D Laplace equation on an annulus

    NASA Astrophysics Data System (ADS)

    Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei

    2005-01-01

    Real numbers are usually represented in the computer as hexadecimal floating-point numbers with a finite number of digits. Accordingly, numerical analysis often suffers from rounding errors, which particularly deteriorate the precision of numerical solutions to inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effects of rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we show the effectiveness of multi-precision arithmetic on two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well in resolving those numerical solutions, when combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.

  8. Explicit solution techniques for impact with contact constraints

    NASA Technical Reports Server (NTRS)

    Mccarty, Robert E.

    1993-01-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  9. Explicit solution techniques for impact with contact constraints

    NASA Astrophysics Data System (ADS)

    McCarty, Robert E.

    1993-08-01

    Modern military aircraft transparency systems, windshields and canopies, are complex systems which must meet a large and rapidly growing number of requirements. Many of these transparency system requirements are conflicting, presenting difficult balances which must be achieved. One example of a challenging requirements balance or trade is shaping for stealth versus aircrew vision. The large number of requirements involved may be grouped in a variety of areas including man-machine interface; structural integration with the airframe; combat hazards; environmental exposures; and supportability. Some individual requirements by themselves pose very difficult, severely nonlinear analysis problems. One such complex problem is that associated with the dynamic structural response resulting from high energy bird impact. An improved analytical capability for soft-body impact simulation was developed.

  10. Fundamentals of diffusion MRI physics.

    PubMed

    Kiselev, Valerij G

    2017-03-01

    Diffusion MRI is commonly considered the "engine" for probing the cellular structure of living biological tissues. The difficulty of this task is threefold. First, in structurally heterogeneous media, diffusion is related to structure in quite a complicated way. The challenge of finding diffusion metrics for a given structure is equivalent to other problems in physics that have been known for over a century. Second, in most cases the MRI signal is related to diffusion in an indirect way dependent on the measurement technique used. Third, finding the cellular structure given the MRI signal is an ill-posed inverse problem. This paper reviews well-established knowledge that forms the basis for responding to the first two challenges. The inverse problem is briefly discussed and the reader is warned about a number of pitfalls on the way. Copyright © 2017 John Wiley & Sons, Ltd.

  11. PAN AIR modeling studies. [higher order panel method for aircraft design

    NASA Technical Reports Server (NTRS)

    Towne, M. C.; Strande, S. M.; Erickson, L. L.; Kroo, I. M.; Enomoto, F. Y.; Carmichael, R. L.; Mcpherson, K. F.

    1983-01-01

    PAN AIR is a computer program that predicts subsonic or supersonic linear potential flow about arbitrary configurations. The code's versatility and generality afford numerous possibilities for modeling flow problems. Although this generality provides great flexibility, it also means that studies are required to establish the dos and don'ts of modeling. The purpose of this paper is to describe and evaluate a variety of methods for modeling flows with PAN AIR. The areas discussed are effects of panel density, internal flow modeling, forebody modeling in subsonic flow, propeller slipstream modeling, effect of wake length, wing-tail-wake interaction, effect of trailing-edge paneling on the Kutta condition, well- and ill-posed boundary-value problems, and induced-drag calculations. These nine topics address problems that are of practical interest to the users of PAN AIR.

  12. Nonlinear dynamic model for visual object tracking on Grassmann manifolds with partial occlusion handling.

    PubMed

    Khan, Zulfiqar Hasan; Gu, Irene Yu-Hua

    2013-12-01

    This paper proposes a novel Bayesian online learning and tracking scheme for video objects on Grassmann manifolds. Although manifold visual object tracking is promising, large and fast nonplanar (or out-of-plane) pose changes and long-term partial occlusions of deformable objects in video remain a challenge that limits tracking performance. The proposed method tackles these problems with four main novelties: 1) online estimation of object appearances on Grassmann manifolds; 2) optimal criterion-based occlusion handling for online updating of object appearances; 3) a nonlinear dynamic model for both the appearance basis matrix and its velocity; and 4) Bayesian formulations, separate for the tracking process and the online learning process, realized by employing two particle filters: one on the manifold for generating appearance particles and another on the linear space for generating affine box particles. Tracking and online updating are performed in an alternating fashion to mitigate tracking drift. Experiments using the proposed tracker on videos captured by a single dynamic/static camera have shown robust tracking performance, particularly for scenarios in which target objects contain significant nonplanar pose changes and long-term partial occlusions. Comparative evaluations against eight existing state-of-the-art and most relevant manifold/nonmanifold trackers provide further support for the proposed scheme.

  13. Health-based ingestion exposure guidelines for Vibrio cholerae: Technical basis for water reuse applications.

    PubMed

    Watson, Annetta P; Armstrong, Anthony Q; White, George H; Thran, Brandolyn H

    2018-02-01

    U.S. military and allied contingency operations are increasingly occurring in locations with limited, unstable or compromised fresh water supplies. Non-potable graywater reuse is currently under assessment as a viable means to increase mission sustainability while significantly reducing the resources, logistics and attack vulnerabilities posed by transport of fresh water. Development of health-based (non-potable) exposure guidelines for the potential microbial components of graywater would provide a logical and consistent human-health basis for water reuse strategies. Such health-based strategies will support not only improved water security for contingency operations, but also sustainable military operations. Dose-response assessment of Vibrio cholerae based on adult human oral exposure data was coupled with operational water exposure scenario parameters common to numerous military activities, and then used to derive health risk-based water concentrations. The microbial risk assessment approach utilized oral human exposure V. cholerae dose studies in open literature. Selected studies focused on gastrointestinal illness associated with experimental infection by specific V. cholerae serogroups most often associated with epidemics and pandemics (O1 and O139). Nonlinear dose-response model analyses estimated V. cholerae effective doses (EDs) aligned with gastrointestinal illness severity categories characterized by diarrheal purge volume. The EDs and water exposure assumptions were used to derive Risk-Based Water Concentrations (CFU/100 mL) for mission-critical illness severity levels over a range of water use activities common to military operations.
Human dose-response studies, data and analyses indicate that ingestion exposures at the estimated ED1 (50 CFU) are unlikely to be associated with diarrheal illness, while ingestion exposures at the lower limit (200 CFU) of the estimated ED10 are not expected to result in a level of diarrheal illness associated with degraded individual capability. The current analysis indicates that the estimated ED20 (approximately 1000 CFU) represents initiation of a more advanced stage of diarrheal illness associated with clinical care. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  14. WASP (Write a Scientific Paper): Special cases of selective non-treatment and/or DNR.

    PubMed

    Mallia, Pierre

    2018-05-03

    Fetuses at the lower gestational age limit of viability, neonates with life-threatening or life-limiting congenital anomalies, and deteriorating acutely ill newborn babies in intensive care pose taxing ethical questions on whether to forego or stop treatment and allow them to die naturally. Although there is essentially no ethical difference between end-of-life decisions for neonates and those for other children and adults, in the former the fact that we are dealing with a new life may pose greater problems for staff and parents. Good communication skills and involvement of the whole team and the parents should start from the beginning to determine which treatment can be foregone or stopped in the best interests of the child. This article deals with the importance of clinical ethics in avoiding legal and moral showdowns and discusses accepted moral practice in this difficult area. Copyright © 2018. Published by Elsevier B.V.

  15. Determining the Performances of Pre-Service Primary School Teachers in Problem Posing Situations

    ERIC Educational Resources Information Center

    Kilic, Cigdem

    2013-01-01

    This study examined the problem posing strategies of pre-service primary school teachers in different problem posing situations (PPSs) and analysed the issues they encounter while posing problems. A problem posing task consisting of six PPSs (two free, two structured, and two semi-structured situations) was delivered to 40 participants.…

  16. General design method for 3-dimensional, potential flow fields. Part 2: Computer program DIN3D1 for simple, unbranched ducts

    NASA Technical Reports Server (NTRS)

    Stanitz, J. D.

    1985-01-01

    The general design method for three-dimensional, potential, incompressible or subsonic-compressible flow developed in part 1 of this report is applied to the design of simple, unbranched ducts. A computer program, DIN3D1, is developed and five numerical examples are presented: a nozzle, two elbows, an S-duct, and the preliminary design of a side inlet for turbomachines. The two major inputs to the program are the upstream boundary shape and the lateral velocity distribution on the duct wall. As a result of these inputs, boundary conditions are overprescribed and the problem is ill posed. However, it appears that there are degrees of compatibility between these two major inputs and that, for reasonably compatible inputs, satisfactory solutions can be obtained. By not prescribing the shape of the upstream boundary, the problem presumably becomes well posed, but it is not clear how to formulate a practical design method under this circumstance. Nor does it appear desirable, because the designer usually needs to retain control over the upstream (or downstream) boundary shape. The problem is further complicated by the fact that, unlike the two-dimensional case, and irrespective of the upstream boundary shape, some prescribed lateral velocity distributions do not have proper solutions.

  17. Multistatic aerosol-cloud lidar in space: A theoretical perspective

    NASA Astrophysics Data System (ADS)

    Mishchenko, M. I.; Alexandrov, M. D.; Cairns, B.; Travis, L. D.

    2016-12-01

    Accurate aerosol and cloud retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. In this Perspective, we formulate in general terms an aerosol and aerosol-cloud interaction space mission concept intended to provide detailed horizontal and vertical profiles of aerosol physical characteristics as well as identify mutually induced changes in the properties of aerosols and clouds. We argue that a natural and feasible way of addressing the ill-posedness of the inverse scattering problem while having an exquisite vertical-profiling capability is to fly a multistatic (including bistatic) lidar system. We analyze theoretically the capabilities of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and one or more additional platforms each hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar; address the ill-posedness of the inverse problem caused by the highly limited information content of monostatic lidar measurements; address the ill-posedness of the inverse problem caused by vertical integration and surface reflection in passive photopolarimetric measurements; relax polarization accuracy requirements; eliminate the need for exquisite radiative-transfer modeling of the atmosphere-surface system in data analyses; yield the day-and-night observation capability; provide direct characterization of ground-level aerosols as atmospheric pollutants; and yield direct measurements of polarized bidirectional surface reflectance. 
We demonstrate, in particular, that supplementing the conventional backscattering lidar with just one additional receiver flown in formation at a scattering angle close to 170° can dramatically increase the information content of the measurements. Although the specific subject of this Perspective is the multistatic lidar concept, all our conclusions equally apply to a multistatic radar system intended to study from space the global distribution of cloud and precipitation characteristics.

  18. Multistatic Aerosol Cloud Lidar in Space: A Theoretical Perspective

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Alexandrov, Mikhail D.; Cairns, Brian; Travis, Larry D.

    2016-01-01

    Accurate aerosol and cloud retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. In this Perspective, we formulate in general terms an aerosol and aerosol-cloud interaction space mission concept intended to provide detailed horizontal and vertical profiles of aerosol physical characteristics as well as identify mutually induced changes in the properties of aerosols and clouds. We argue that a natural and feasible way of addressing the ill-posedness of the inverse scattering problem while having an exquisite vertical-profiling capability is to fly a multistatic (including bistatic) lidar system. We analyze theoretically the capabilities of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and one or more additional platforms each hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar; address the ill-posedness of the inverse problem caused by the highly limited information content of monostatic lidar measurements; address the ill-posedness of the inverse problem caused by vertical integration and surface reflection in passive photopolarimetric measurements; relax polarization accuracy requirements; eliminate the need for exquisite radiative-transfer modeling of the atmosphere-surface system in data analyses; yield the day-and-night observation capability; provide direct characterization of ground-level aerosols as atmospheric pollutants; and yield direct measurements of polarized bidirectional surface reflectance. 
We demonstrate, in particular, that supplementing the conventional backscattering lidar with just one additional receiver flown in formation at a scattering angle close to 170° can dramatically increase the information content of the measurements. Although the specific subject of this Perspective is the multistatic lidar concept, all our conclusions equally apply to a multistatic radar system intended to study from space the global distribution of cloud and precipitation characteristics.

  19. Embedding Game-Based Problem-Solving Phase into Problem-Posing System for Mathematics Learning

    ERIC Educational Resources Information Center

    Chang, Kuo-En; Wu, Lin-Jung; Weng, Sheng-En; Sung, Yao-Ting

    2012-01-01

    A problem-posing system is developed with four phases including posing problem, planning, solving problem, and looking back, in which the "solving problem" phase is implemented by game-scenarios. The system supports elementary students in the process of problem-posing, allowing them to fully engage in mathematical activities. In total, 92 fifth…

  20. Characteristics of Problem Posing of Grade 9 Students on Geometric Tasks

    ERIC Educational Resources Information Center

    Chua, Puay Huat; Wong, Khoon Yoong

    2012-01-01

    This is an exploratory study into the individual problem-posing characteristics of 480 Grade 9 Singapore students who were novice problem posers working on two geometric tasks. The students were asked to pose a problem for their friends to solve. Analyses of solvable posed problems were based on the problem type, problem information, solution type…

  1. Modeling the 16 September 2015 Chile tsunami source with the inversion of deep-ocean tsunami records by means of the r-solution method

    NASA Astrophysics Data System (ADS)

    Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem

    2017-04-01

    A key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of the original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave by deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on the least-squares and truncated singular value decomposition techniques. The tsunami wave propagation is considered within the scope of linear shallow-water theory. As in the inverse seismic problem, numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view, since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic used as a source (the unknown tsunami source is represented as a truncated series of spatial harmonics over the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can assess in advance how well a given observational system will perform in the inversion, which makes it possible to propose a more effective disposition of the tsunameters with the help of precomputations. In other words, the results obtained allow improving the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining the inversion results.
Implementation of the proposed methodology for the 16 September 2015 Chile tsunami has successfully produced a tsunami source model. The function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computer calculation of tsunami wave propagation.

  2. [Legal aspects of the use of footbaths for cattle and sheep].

    PubMed

    Kleiminger, E

    2012-04-24

    Claw diseases pose a major problem for dairy and sheep farms. In addition to the systemic treatment of these illnesses by means of drug injection, veterinarians discuss the application of footbaths for the local treatment of dermatitis digitalis or foot rot. On farms, footbaths are used with different substances and for various purposes. The author presents the requirements for veterinary medicinal products (marketing authorization and manufacturing authorization) and demonstrates the operation of the "cascade" in the event of a treatment emergency. In addition, the distinction between veterinary hygiene biocidal products, veterinary medicinal products and substances to care for claws is explained.

  3. Mathematics and Measurement

    PubMed Central

    Boisvert, Ronald F.; Donahue, Michael J.; Lozier, Daniel W.; McMichael, Robert; Rust, Bert W.

    2001-01-01

    In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST’s current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, characterization of the accuracy of software for micromagnetic modeling, and in the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years. PMID:27500024

  4. Antinauseants in Pregnancy: Teratogens or Not?

    PubMed Central

    Biringer, Anne

    1984-01-01

    Nausea and/or vomiting affect 50% of all pregnant women. For most women, this is a self-limited problem which responds well to conservative management. However, there are some situations where the risk to the mother and fetus posed by the illness is greater than the possible teratogenic risks of antinauseant drugs. Antihistamines have had the widest testing, and to date there has been no evidence linking doxylamine, dimenhydrinate or promethazine to congenital malformations. Since no available drugs have official approval for use in nausea and vomiting of pregnancy, the physician is left alone to make this difficult decision. PMID:21279128

  5. On the reconstruction of the surface structure of the spotted stars

    NASA Astrophysics Data System (ADS)

    Kolbin, A. I.; Shimansky, V. V.; Sakhibullin, N. A.

    2013-07-01

    We have developed and tested a light-curve inversion technique for photometric mapping of spotted stars. The surface of a spotted star is partitioned into small area elements, over which a search is carried out for the intensity distribution providing the best agreement between the observed and model light curves within a specified uncertainty. We have tested mapping techniques based on the use of both a single light curve and several light curves obtained in different photometric bands. Surface reconstruction artifacts due to the ill-posed nature of the problem have been identified.

  6. The well-posedness of the Kuramoto-Sivashinsky equation

    NASA Technical Reports Server (NTRS)

    Tadmor, E.

    1984-01-01

    The Kuramoto-Sivashinsky equation arises in a variety of applications, among which are the modeling of reaction-diffusion systems, flame propagation and viscous flow problems. It is considered here as a prototype of the larger class of generalized Burgers equations: these consist of a quadratic nonlinearity and an arbitrary linear parabolic part. It is shown that such equations are well posed, thus admitting a unique smooth solution that depends continuously on its initial data. As an attractive alternative to standard energy methods, existence and stability are derived in this case by patching in the large short-time solutions without loss of derivatives.

  7. The well-posedness of the Kuramoto-Sivashinsky equation

    NASA Technical Reports Server (NTRS)

    Tadmor, E.

    1986-01-01

    The Kuramoto-Sivashinsky equation arises in a variety of applications, among which are the modeling of reaction-diffusion systems, flame propagation and viscous flow problems. It is considered here as a prototype of the larger class of generalized Burgers equations: these consist of a quadratic nonlinearity and an arbitrary linear parabolic part. It is shown that such equations are well posed, thus admitting a unique smooth solution that depends continuously on its initial data. As an attractive alternative to standard energy methods, existence and stability are derived in this case by patching in the large short-time solutions without 'loss of derivatives'.

  8. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. 
We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
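The threshold dynamics that the lake problem abstracts can be illustrated with the classic shallow-lake recursion. The parameter values, loading policies, and noise model below are illustrative assumptions for a sketch, not the study's actual settings:

```python
import numpy as np

def lake_dynamics(a, b=0.42, q=2.0, n_years=100, sigma=0.02, seed=1):
    """Simulate lake phosphorus concentration X_t under loading a_t.

    X_{t+1} = X_t + a_t + X_t**q / (1 + X_t**q) - b * X_t + eps_t,
    where a_t is the town's controlled loading, b the natural removal
    rate, the sigmoid term internal recycling, and eps_t a stochastic
    (non-point-source) inflow.  All values here are illustrative.
    """
    rng = np.random.default_rng(seed)
    X = np.zeros(n_years + 1)
    for t in range(n_years):
        inflow = rng.lognormal(mean=np.log(sigma), sigma=0.5)
        X[t + 1] = X[t] + a[t] + X[t]**q / (1.0 + X[t]**q) - b * X[t] + inflow
    return X

# Low loading keeps the lake in the oligotrophic basin; high loading
# pushes it past the nonlinear threshold into the eutrophic state.
low = lake_dynamics(np.full(100, 0.01))
high = lake_dynamics(np.full(100, 0.10))
```

The bistability comes from the sigmoid recycling term: below a critical loading the system has a low stable equilibrium, while above it the only attractor is a high-phosphorus (eutrophic) state, which is what makes the management tradeoff hard for the optimizers.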

  9. Stabilization and robustness of non-linear unity-feedback system - Factorization approach

    NASA Technical Reports Server (NTRS)

    Desoer, C. A.; Kabuli, M. G.

    1988-01-01

    The paper is a self-contained discussion of a right factorization approach in the stability analysis of the nonlinear continuous-time or discrete-time, time-invariant or time-varying, well-posed unity-feedback system S1(P, C). It is shown that a well-posed stable feedback system S1(P, C) implies that P and C have right factorizations. In the case where C is stable, P has a normalized right-coprime factorization. The factorization approach is used in stabilization and simultaneous stabilization results.

  10. Problem Posing with the Multiplication Table

    ERIC Educational Resources Information Center

    Dickman, Benjamin

    2014-01-01

    Mathematical problem posing is an important skill for teachers of mathematics, and relates readily to mathematical creativity. This article gives a bit of background information on mathematical problem posing, lists further references to connect problem posing and creativity, and then provides 20 problems based on the multiplication table to be…

  11. Investigation of Problem-Solving and Problem-Posing Abilities of Seventh-Grade Students

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Ünal, Hasan

    2015-01-01

    This study aims to examine the effect of multiple problem-solving skills on the problem-posing abilities of gifted and non-gifted students and to assess whether the possession of such skills can predict giftedness or affect problem-posing abilities. Participants' metaphorical images of problem posing were also explored. Participants were 20 gifted…

  12. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    USGS Publications Warehouse

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. 
For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.

  13. Greedy algorithms for diffuse optical tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the tissue cross section. Conventional DOT imaging methods iteratively invoke a forward diffusion-equation solver, which makes the problem computationally expensive; these methods also fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within the compressive sensing framework. Various greedy algorithms, namely orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We have also studied conventional DOT methods, namely the least-squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of a small number of source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) have been used to evaluate the algorithms discussed in this paper.
Extensive simulation results confirm that CS-based DOT reconstruction outperforms conventional DOT imaging methods in terms of computational efficiency. The main advantage of this approach is that the forward diffusion-equation solver need not be invoked repeatedly.
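As a generic illustration of the greedy recovery step (not the paper's DOT pipeline), a minimal orthogonal matching pursuit on a toy sparse system might look like this; the sensing matrix and sparsity level are hypothetical:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy k-sparse solve of y ~ A x.

    Each step picks the column most correlated with the residual, then
    re-fits all selected columns by least squares.
    """
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy sparse recovery: 3 nonzeros from 40 noiseless measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
A /= np.linalg.norm(A, axis=0)       # unit-norm columns, as OMP assumes
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
x_hat = omp(A, A @ x_true, k=3)
```

The other greedy variants in the entry (CoSaMP, StOMP, ROMP, S-OMP) differ mainly in how many atoms are selected per iteration and how the support is pruned, but they share this correlate-select-refit structure.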

  14. Renal and urologic manifestations of pediatric condition falsification/Munchausen by proxy.

    PubMed

    Feldman, Kenneth W; Feldman, Marc D; Grady, Richard; Burns, Mark W; McDonald, Ruth

    2007-06-01

    Renal and urologic problems in pediatric condition falsification (PCF)/Munchausen by proxy (MBP) can pose frustrating diagnostic and management problems. Five previously unreported victims of PCF/MBP are described. Symptoms included artifactual hematuria, recalcitrant urinary infections, dysfunctional voiding, perineal irritation, glucosuria, and "nutcracker syndrome", in addition to alleged sexual abuse. Falsifications included false or exaggerated history, specimen contamination, and induced illness. Caretakers also intentionally withheld appropriately prescribed treatment. Children underwent invasive diagnostic and surgical procedures because of the falsifications. They developed iatrogenic complications as well as behavioral problems stemming from their abuse. A PCF/MBP database was started in 1995 and includes the characteristics of 135 PCF/MBP victims examined by the first author between 1974 and 2006. Analysis of the database revealed that 25% of the children had renal or urologic issues. They were the presenting/primary issue for five. Diagnosis of PCF/MBP was delayed an average of 4.5 years from symptom onset. Almost all patients were victimized by their mothers, and maternal health falsification and somatization were common. Thirty-one of 34 children had siblings who were also victimized, six of whom died. In conclusion, falsifications of childhood renal and urologic illness are relatively uncommon; however, the deceits are prolonged and tortuous. Early recognition and intervention might limit the harm.

  15. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.

  16. Application of Wavelet-Based Methods for Accelerating Multi-Time-Scale Simulation of Bistable Heterogeneous Catalysis

    DOE PAGES

    Gur, Sourav; Frantziskonis, George N.; Univ. of Arizona, Tucson, AZ; ...

    2017-02-16

    Here, we report results from a numerical study of multi-time-scale bistable dynamics for CO oxidation on a catalytic surface in a flowing, well-mixed gas stream. The problem is posed in terms of surface and gas-phase submodels that dynamically interact in the presence of stochastic perturbations, reflecting the impact of molecular-scale fluctuations on the surface and turbulence in the gas. Wavelet-based methods are used to encode and characterize the temporal dynamics produced by each submodel and detect the onset of sudden state shifts (bifurcations) caused by nonlinear kinetics. When impending state shifts are detected, a more accurate but computationally expensive integration scheme can be used. This appears to make it possible, at least in some cases, to decrease the net computational burden associated with simulating multi-time-scale, nonlinear reacting systems by limiting the amount of time in which the more expensive integration schemes are required. Critical to achieving this is being able to detect unstable temporal transitions such as the bistable shifts in the example problem considered here. Lastly, our results indicate that a unique wavelet-based algorithm based on the Lipschitz exponent is capable of making such detections, even under noisy conditions, and may find applications in critical transition detection problems beyond catalysis.

  17. Some Reflections on Problem Posing: A Conversation with Marion Walter

    ERIC Educational Resources Information Center

    Baxter, Juliet A.

    2005-01-01

    Marion Walter, an internationally acclaimed mathematics educator, discusses problem posing, focusing on both its merits and techniques to encourage it. She believes that a playful attitude toward problem variables is an essential part of an inquiring mind, and the more opportunities that learners have to change a…

  18. Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.

    NASA Astrophysics Data System (ADS)

    Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.

    2016-12-01

    Image reconstruction in electrical resistivity tomography (ERT) is highly nonlinear, sparse, and ill-posed. The inverse problem is much more severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. When favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool using the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transform (DCT) was used to induce model sparsity in an orthogonal basis. Two CS-based algorithms, viz. the interior-point method and two-step iterative shrinkage/thresholding (IST), were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS effectively reconstructed the sub-surface image at less computational cost, as observed by a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
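The two-step IST family builds on the basic iterative shrinkage/thresholding update. A minimal single-step ISTA sketch on a toy sparse system follows; the matrix, λ, and step size are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, n_iter=200):
    """ISTA for min 0.5 * ||A x - y||^2 + lam * ||x||_1.

    Each iteration is a gradient step on the data term followed by
    soft-thresholding, which promotes a sparse solution.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

# Toy sparse recovery: 4 nonzeros from 50 measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 120)) / np.sqrt(50)
x_true = np.zeros(120)
x_true[[3, 40, 77, 101]] = [2.0, -1.5, 1.0, 2.5]
x_hat = ista(A, A @ x_true, lam=0.05, n_iter=500)
```

Two-step variants reuse the previous two iterates to accelerate this fixed-point iteration, but the shrinkage operator that enforces sparsity is the same.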

  19. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

    Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitations imposed by atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on the locality properties of compactly supported wavelets, in both the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving for the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.
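The conjugate-gradient solve at the core of such a MAP reconstruction can be sketched generically; this is textbook CG on an assumed symmetric positive-definite toy system standing in for the MAP normal equations, not the authors' preconditioned algorithm:

```python
import numpy as np

def conjugate_gradient(A, b, n_iter=50, tol=1e-10):
    """Solve A x = b for symmetric positive-definite A by the
    conjugate gradient method (e.g. the MAP normal-equations system)."""
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    p = r.copy()                      # initial search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)         # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p     # conjugate next direction
        rs = rs_new
    return x

# Toy SPD system standing in for a (well-preconditioned) MAP solve.
rng = np.random.default_rng(0)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)         # SPD and well conditioned
b = rng.standard_normal(30)
x = conjugate_gradient(A, b)
```

The point of preconditioning, as in the accelerated variant mentioned above, is to make the effective matrix better conditioned so that this iteration converges in far fewer steps.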

  20. Fundamental concepts of problem-based learning for the new facilitator.

    PubMed Central

    Kanter, S L

    1998-01-01

    Problem-based learning (PBL) is a powerful small group learning tool that should be part of the armamentarium of every serious educator. Classic PBL uses ill-structured problems to simulate the conditions that occur in the real environment. Students play an active role and use an iterative process of seeking new information based on identified learning issues, restructuring the information in light of the new knowledge, gathering additional information, and so forth. Faculty play a facilitatory role, not a traditional instructional role, by posing metacognitive questions to students. These questions serve to assist in organizing, generalizing, and evaluating knowledge; to probe for supporting evidence; to explore faulty reasoning; to stimulate discussion of attitudes; and to develop self-directed learning and self-assessment skills. Professional librarians play significant roles in the PBL environment extending from traditional service provider to resource person to educator. Students and faculty usually find the learning experience productive and enjoyable. PMID:9681175

  1. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

    Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. We introduce sparse representation theory into the nearest-neighbor selection problem. Building on sparse-representation-based super-resolution reconstruction, we analyze a super-resolution reconstruction algorithm based on multi-class dictionaries. This method avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, non-local self-similarity regularization is introduced to handle the ill-posed problem. Experimental results show that the algorithm performs much better than state-of-the-art algorithms in terms of both PSNR and visual perception.

  2. A space-frequency multiplicative regularization for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

    Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilizing the reconstruction consists in using some prior information on the quantities to be identified. This is generally done by including a regularization term in the formulation of the inverse problem, as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of prior knowledge of the nature and location of the excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. In particular, it is pointed out that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.

  3. Calculation of susceptibility through multiple orientation sampling (COSMOS): a method for conditioning the inverse problem from measured magnetic field map to susceptibility source image in MRI.

    PubMed

    Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi

    2009-01-01

    Magnetic susceptibility differs among tissues based on their content of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field B0, and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal-void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantifying a susceptibility distribution using MRI.
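The weighted linear least-squares step that such reconstructions rely on can be sketched generically. This is an illustration of row-weighted least squares on a hypothetical system, not the COSMOS implementation; the weights play the role of down-weighting noisy or signal-void measurements:

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Solve min || W^(1/2) (A x - b) ||^2 by scaling rows, then lstsq.

    w holds per-measurement weights (e.g. inverse noise variance), so
    noisy or void-region measurements contribute less to the fit.
    """
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

# Toy system where the last 10 of 60 measurements are very noisy.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 5))
x_true = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
noise_sd = np.where(np.arange(60) < 50, 0.01, 5.0)
b = A @ x_true + noise_sd * rng.standard_normal(60)

x_w = weighted_lstsq(A, b, w=1.0 / noise_sd**2)   # weighted estimate
x_u, *_ = np.linalg.lstsq(A, b, rcond=None)       # unweighted, for contrast
```

With inverse-variance weights the noisy rows are effectively ignored, which is why the weighted estimate tracks the true parameters far more closely than the ordinary least-squares fit.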

  4. Simulation and optimization of an experimental membrane wastewater treatment plant using computational intelligence methods.

    PubMed

    Ludwig, T; Kern, P; Bongards, M; Wolf, C

    2011-01-01

    The optimization of relaxation and filtration times of submerged microfiltration flat modules in membrane bioreactors used for municipal wastewater treatment is essential for efficient plant operation. However, the optimization and control of such plants and their filtration processes is a challenging problem due to the underlying highly nonlinear and complex processes. This paper presents the use of genetic algorithms for this optimization problem in conjunction with a fully calibrated simulation model, as computational intelligence methods are perfectly suited to the nonconvex multi-objective nature of the optimization problems posed by these complex systems. The simulation model is developed and calibrated using membrane modules from the wastewater simulation software GPS-X based on the Activated Sludge Model No.1 (ASM1). Simulation results have been validated at a technical reference plant. They clearly show that filtration process costs for cleaning and energy can be reduced significantly by intelligent process optimization.

  5. Glimpse: Sparsity based weak lensing mass-mapping tool

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.

    2018-02-01

    Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.

  6. Investigation of learning environment for arithmetic word problems by problem posing as sentence integration in Indonesian language

    NASA Astrophysics Data System (ADS)

    Hasanah, N.; Hayashi, Y.; Hirashima, T.

    2017-02-01

    Arithmetic word problems remain one of the most difficult areas of mathematics teaching. Learning by problem posing has been suggested as an effective way to improve students' understanding. However, the practice is difficult in the usual classroom due to the extra time needed to assess and give feedback on students' posed problems. To address this issue, we have developed a tablet PC software named Monsakun for learning by posing arithmetic word problems based on the Triplet Structure Model. It uses a mechanism of sentence integration, an efficient implementation of problem posing that enables agent assessment of posed problems. The learning environment has been used in actual Japanese elementary school classrooms, and its effectiveness has been confirmed in previous research. In this study, ten Indonesian elementary school students living in Japan participated in a problem-posing learning session using Monsakun in the Indonesian language. We analyzed their learning activities and show that students were able to interact with the structure of simple word problems in this learning environment. The results of the data analysis and a questionnaire suggest that Monsakun provides an interactive and fun environment for learning by problem posing for Indonesian elementary school students.

  7. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
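Of the three deconvolution solvers mentioned, the Tikhonov-regularized inverse is the simplest to sketch, since in the Fourier domain the deconvolution becomes diagonal. The following 1-D toy (kernel, noise level, and λ are illustrative assumptions, not the article's 3-D implementation) shows the idea:

```python
import numpy as np

def tikhonov_deconv(y, kernel, lam=1e-3):
    """Tikhonov-regularized deconvolution via the FFT.

    Solves min ||k * x - y||^2 + lam * ||x||^2 (circular convolution);
    in Fourier space the minimizer is X = conj(K) Y / (|K|^2 + lam),
    which damps the frequencies where |K| is small -- exactly the
    directions that make the inverse problem ill-posed.
    """
    K = np.fft.fft(kernel, n=y.size)
    Y = np.fft.fft(y)
    X = np.conj(K) * Y / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft(X))

# Blur a spike train with a Gaussian kernel, add noise, then invert.
rng = np.random.default_rng(0)
n = 256
x_true = np.zeros(n)
x_true[[40, 128, 190]] = [1.0, 2.0, 1.5]
kernel = np.exp(-0.5 * (np.arange(-8, 9) / 2.0) ** 2)
kernel /= kernel.sum()
K = np.fft.fft(kernel, n=n)
y = np.real(np.fft.ifft(np.fft.fft(x_true) * K)) + 1e-3 * rng.standard_normal(n)
x_hat = tikhonov_deconv(y, kernel, lam=1e-3)
```

The TV-iteration solvers favored in the article replace the quadratic penalty with a total-variation term, which preserves edges better than this quadratic smoothing, at the cost of an iterative rather than closed-form solution.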

  8. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  9. Examining the Prevalence of Self-Reported Foodborne Illnesses and Food Safety Risks among International College Students in the United States

    ERIC Educational Resources Information Center

    Lyonga, Agnes Ngale; Eighmy, Myron A.; Garden-Robinson, Julie

    2010-01-01

    Foodborne illness and food safety risks pose health threats to everyone, including international college students who live in the United States and encounter new or unfamiliar foods. This study assessed the prevalence of self-reported foodborne illness among international college students by cultural regions and length of time in the United…

  10. GOSA, a simulated annealing-based program for global optimization of nonlinear problems, also reveals transyears

    PubMed Central

    Czaplicki, Jerzy; Cornélissen, Germaine; Halberg, Franz

    2009-01-01

    Summary Transyears in biology have been documented thus far by the extended cosinor approach, including linear-nonlinear rhythmometry. We here confirm the existence of transyears by simulated annealing, a method originally developed for a much broader use, but described and introduced herein for validating its application to time series. The method is illustrated both on an artificial test case with known components and on biological data. We provide a table comparing results by the two methods and trust that the procedure will serve the budding sciences of chronobiology (the study of mechanisms underlying biological time structure), chronomics (the mapping of time structures in and around us), and chronobioethics, using the foregoing disciplines to add to concern for illnesses of individuals, and to budding focus on diseases of nations and civilizations. PMID:20414480
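    The core of a simulated-annealing search such as the one validated above can be sketched generically; the multimodal test function, cooling schedule, and step size below are illustrative choices, not GOSA's actual settings:

```python
import math
import random

def f(x):
    # Multimodal test function: global minimum f(0) = 0, many local minima
    return x * x + 10.0 * (1.0 - math.cos(2.0 * math.pi * x))

random.seed(1)
x = 4.0                       # deliberately poor starting point
T, T_min, cooling = 10.0, 1e-3, 0.998
best_x, best_f = x, f(x)

while T > T_min:
    x_new = x + random.gauss(0.0, 0.5)       # random perturbation
    delta = f(x_new) - f(x)
    # Metropolis criterion: always accept improvements,
    # sometimes accept worse moves to escape local minima
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = x_new
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    T *= cooling

print(best_x, best_f)
```

    The occasional acceptance of uphill moves at high temperature is what distinguishes annealing from gradient-type local search and lets it escape the local minima that trap linear-nonlinear fitting.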

  11. Extended infusion of beta-lactam antibiotics: optimizing therapy in critically-ill patients in the era of antimicrobial resistance.

    PubMed

    Rizk, Nesrine A; Kanafani, Zeina A; Tabaja, Hussam Z; Kanj, Souha S

    2017-07-01

    Beta-lactams are at the cornerstone of therapy in critical care settings, but their clinical efficacy is challenged by the rise in bacterial resistance. Infections with multi-drug resistant organisms are frequent in intensive care units, posing significant therapeutic challenges. The problem is compounded by a dearth in the development of new antibiotics. In addition, critically-ill patients have unique physiologic characteristics that alter the drugs' pharmacokinetics and pharmacodynamics. Areas covered: The prolonged infusion of antibiotics (extended infusion [EI] and continuous infusion [CI]) has been the focus of research in the last decade. As beta-lactams have time-dependent killing characteristics that are altered in critically-ill patients, prolonged infusion is an attractive approach to maximize their drug delivery and efficacy. Several studies have compared traditional dosing to EI/CI of beta-lactams with regard to clinical efficacy. Clinical data are primarily composed of retrospective studies and some randomized controlled trials. Several reports show promising results. Expert commentary: Reviewing the currently available evidence, we conclude that EI/CI is probably beneficial in the treatment of critically-ill patients in whom an organism has been identified, particularly those with respiratory infections. Further studies are needed to evaluate the efficacy of EI/CI in the management of infections with resistant organisms.

  12. Are universities preparing nurses to meet the challenges posed by the Australian mental health care system?

    PubMed

    Wynaden, D; Orb, A; McGowan, S; Downie, J

    2000-09-01

    The preparedness of comprehensive nurses to work with the mentally ill is of concern to many mental health professionals. Discussion as to whether current undergraduate nursing programs in Australia prepare a graduate to work as a beginning practitioner in the mental health area has been the centre of debate for most of the 1990s. This, along with the apparent lack of interest and motivation of these nurses to work in the mental health area following graduation, remains a major problem for mental health care providers. With one in five Australians now experiencing the burden of a major mental illness, the preparation of a nurse who is competent to work with the mentally ill would appear to be a priority. The purpose of the present study was to determine third year undergraduate nursing students' perceived level of preparedness to work with mentally ill clients. The results suggested significant differences in students' perceived level of confidence, knowledge and skills prior to and following theoretical and clinical exposure to the mental health area. Pre-testing of students before entering their third year indicated that the philosophy of comprehensive nursing: integration, although aspired to in principle, does not appear to occur in reality.

  13. Problem Posing as a Pedagogical Strategy: A Teacher's Perspective

    ERIC Educational Resources Information Center

    Staebler-Wiseman, Heidi A.

    2011-01-01

    Student problem posing has been advocated for mathematics instruction, and it has been suggested that problem posing can be used to develop students' mathematical content knowledge. But, problem posing has rarely been utilized in university-level mathematics courses. The goal of this teacher-as-researcher study was to develop and investigate…

  14. Estimation of parameters of constant elasticity of substitution production functional model

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Sankar, J. Ravi

    2017-11-01

    Nonlinear model building has become an increasingly important and powerful tool in mathematical economics, and in recent years the popularity of applications of nonlinear models has risen dramatically. Several researchers in econometrics are very often interested in the inferential aspects of nonlinear regression models [6]. The present research study gives a distinct method of estimation for a more complicated and highly nonlinear model, viz. the Constant Elasticity of Substitution (CES) production functional model. Henningsen et al. [5] proposed three solutions in 2012 to avoid serious problems when estimating CES functions: (i) removing discontinuities by using the limits of the CES function and its derivative; (ii) circumventing large rounding errors by local linear approximations; (iii) handling ill-behaved objective functions by a multi-dimensional grid search. Joel Chongeh et al. [7] discussed the estimation of the impact of capital and labour inputs on the gross output of agri-food products using the constant elasticity of substitution production function in the Tanzanian context. Pol Antras [8] presented new estimates of the elasticity of substitution between capital and labour using data from the private sector of the U.S. economy for the period 1948-1998.
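    The grid-search remedy for ill-behaved CES objective functions can be sketched on synthetic data; the functional form is the standard two-input CES, but the sample size, grids, and parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic inputs and CES output: Q = gamma * (delta*K^-rho + (1-delta)*L^-rho)^(-1/rho)
K = rng.uniform(1.0, 10.0, 200)
L = rng.uniform(1.0, 10.0, 200)
gamma_true, delta_true, rho_true = 2.0, 0.4, 0.5
B = delta_true * K**(-rho_true) + (1 - delta_true) * L**(-rho_true)
Q = gamma_true * B**(-1.0 / rho_true)

best = (np.inf, None, None, None)
for rho in np.linspace(0.1, 1.0, 19):           # grid over substitution parameter
    for delta in np.linspace(0.1, 0.9, 17):     # grid over distribution parameter
        b = (delta * K**(-rho) + (1 - delta) * L**(-rho))**(-1.0 / rho)
        gamma = (Q @ b) / (b @ b)               # efficiency parameter by least squares
        sse = np.sum((Q - gamma * b)**2)
        if sse < best[0]:
            best = (sse, gamma, delta, rho)

sse, gamma_hat, delta_hat, rho_hat = best
print(rho_hat, delta_hat, gamma_hat)
```

    Concentrating gamma out by least squares at each grid point keeps the search two-dimensional, which is why the grid approach sidesteps the ill-behaved joint objective.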

  15. Advanced computational techniques for incompressible/compressible fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod

    2005-07-01

    Fluid-Structure Interaction (FSI) problems are of great importance to many fields of engineering and pose tremendous challenges to numerical analysts. This thesis addresses some of the hurdles faced for both 2D and 3D real-life time-dependent FSI problems with particular emphasis on parachute systems. The techniques developed here would help improve the design of parachutes and are of direct relevance to several other FSI problems. The fluid system is solved using the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) finite element formulation for the Navier-Stokes equations of incompressible and compressible flows. The structural dynamics solver is based on a total Lagrangian finite element formulation. The Newton-Raphson method is employed to linearize the otherwise nonlinear system resulting from the fluid and structure formulations. The fluid and structural systems are solved in a decoupled fashion at each nonlinear iteration. While rigorous coupling methods are desirable for FSI simulations, the decoupled solution techniques provide sufficient convergence in the time-dependent problems considered here. In this thesis, common problems in the FSI simulations of parachutes are discussed and possible remedies for a few of them are presented. Further, the effects of the porosity model on the aerodynamic forces of round parachutes are analyzed. Techniques for solving compressible FSI problems are also discussed. Subsequently, a better stabilization technique is proposed to efficiently capture and accurately predict the shocks in supersonic flows. The numerical examples simulated here require high performance computing. Therefore, numerical tools using distributed memory supercomputers with message passing interface (MPI) libraries were developed.
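    The decoupled (staggered) solution strategy described above can be illustrated on a scalar toy model: alternate a "fluid" solve with the structure frozen and a "structure" solve with the fluid load frozen until the coupled residual vanishes. The load function and stiffness below are invented for illustration and bear no relation to the parachute formulation:

```python
# Minimal staggered (decoupled) iteration on a scalar fluid-structure toy model.
# "Fluid": load u depends nonlinearly on displacement d; "structure": k*d = u.
def fluid_load(d):
    return 1.0 / (1.0 + d * d)   # load decreases as the structure deflects

k = 2.0          # structural stiffness
d = 0.0          # initial displacement
for it in range(50):
    u = fluid_load(d)        # fluid solve with frozen structure
    d_new = u / k            # structure solve with frozen fluid load
    if abs(d_new - d) < 1e-12:
        break
    d = d_new

residual = abs(k * d - fluid_load(d))
print(d, residual)
```

    The iteration converges here because the coupled map is a contraction; as the thesis notes, such decoupled sweeps suffice for many time-dependent problems even though monolithic coupling is more robust.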

  16. Deconvolution of mixing time series on a graph

    PubMed Central

    Blocker, Alexander W.; Airoldi, Edoardo M.

    2013-01-01

    In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = Ax_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
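    The per-time-step inverse problem y_t = Ax_t can be illustrated with a Tikhonov-regularized minimum-norm solve; the routing matrix and flows below are synthetic, and this sketch omits the paper's multilevel state-space machinery:

```python
import numpy as np

rng = np.random.default_rng(7)

# Routing matrix: 5 aggregate link measurements of 12 origin-destination flows
A = (rng.random((5, 12)) < 0.4).astype(float)
x_true = np.where(rng.random(12) < 0.3, rng.uniform(5, 10, 12), 0.0)  # bursty, sparse
y = A @ x_true

# Tikhonov-regularized minimum-norm solution: x = A^T (A A^T + lam I)^-1 y
lam = 1e-6
x_hat = A.T @ np.linalg.solve(A @ A.T + lam * np.eye(A.shape[0]), y)

rel_residual = np.linalg.norm(A @ x_hat - y) / max(np.linalg.norm(y), 1e-12)
print(rel_residual)
```

    Because the system is underdetermined (5 equations, 12 unknowns), many x reproduce y exactly; the regularizer picks one of them, which is why additional structure (sparsity, burstiness, temporal dynamics) is needed to recover the true flows.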

  17. On the Soil Roughness Parameterization Problem in Soil Moisture Retrieval of Bare Surfaces from Synthetic Aperture Radar

    PubMed Central

    Verhoest, Niko E.C; Lievens, Hans; Wagner, Wolfgang; Álvarez-Mozos, Jesús; Moran, M. Susan; Mattia, Francesco

    2008-01-01

    Synthetic Aperture Radar has shown its large potential for retrieving soil moisture maps at regional scales. However, since the backscattered signal is determined by several surface characteristics, the retrieval of soil moisture is an ill-posed problem when using single configuration imagery. Unless accurate surface roughness parameter values are available, retrieving soil moisture from radar backscatter usually provides inaccurate estimates. The characterization of soil roughness is not fully understood, and a large range of roughness parameter values can be obtained for the same surface when different measurement methodologies are used. In this paper, a literature review is made that summarizes the problems encountered when parameterizing soil roughness as well as the reported impact of the errors made on the retrieved soil moisture. A number of suggestions were made for resolving issues in roughness parameterization and studying the impact of these roughness problems on the soil moisture retrieval accuracy and scale. PMID:27879932

  18. Scene analysis in the natural environment

    PubMed Central

    Lewicki, Michael S.; Olshausen, Bruno A.; Surlykke, Annemarie; Moss, Cynthia F.

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals. PMID:24744740

  19. Students’ Creativity: Problem Posing in Structured Situation

    NASA Astrophysics Data System (ADS)

    Amalina, I. K.; Amirudin, M.; Budiarto, M. T.

    2018-01-01

    This is a qualitative study of students' creativity in problem-posing tasks. The study aimed at describing students' creative thinking ability to pose mathematics problems in structured situations with varied conditions of given problems. In order to find out the students' creative thinking ability, an analysis of a mathematics problem-posing test based on fluency, novelty, and flexibility, together with interviews, was applied to categorize students' responses on that task. The data analysis used the quality of problem posing and categorized responses into 4 levels of creativity. The results revealed that, of 29 grade-8 secondary students, a student in CTL (Creative Thinking Level) 1 met the fluency criterion, a student in CTL 2 met novelty, while a student in CTL 3 met both fluency and novelty, and no one reached CTL 4. These results are affected by students' mathematical experience. The findings of this study highlight that students' problem-posing creativity depends on their experience in mathematics learning and on the point of view from which students start to pose problems.

  20. Validating a UAV artificial intelligence control system using an autonomous test case generator

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Huber, Justin

    2013-05-01

    The validation of safety-critical applications, such as autonomous UAV operations in an environment which may include human actors, is an ill-posed problem. To build confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed and testing cost.

  1. CREKID: A computer code for transient, gas-phase combustion kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1984-01-01

    A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially-fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.

  2. Assessing Students' Mathematical Problem Posing

    ERIC Educational Resources Information Center

    Silver, Edward A.; Cai, Jinfa

    2005-01-01

    Specific examples are used to discuss assessment, an integral part of mathematics instruction, with problem posing and assessment of problem posing. General assessment criteria are suggested to evaluate student-generated problems in terms of their quantity, originality, and complexity.

  3. A Surrogate-based Adaptive Sampling Approach for History Matching and Uncertainty Quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Weixuan; Zhang, Dongxiao; Lin, Guang

    A critical procedure in reservoir simulations is history matching (or data assimilation in a broader sense), which calibrates model parameters such that the simulation results are consistent with field measurements, and hence improves the credibility of the predictions given by the simulations. Often there exist non-unique combinations of parameter values that all yield simulation results matching the measurements. For such ill-posed history matching problems, Bayes' theorem provides a theoretical foundation to represent different solutions and to quantify the uncertainty with the posterior PDF. Lacking an analytical solution in most situations, the posterior PDF may be characterized with a sample of realizations, each representing a possible scenario. A novel sampling algorithm is presented here for the Bayesian solutions to history matching problems. We aim to deal with two commonly encountered issues: 1) as a result of the nonlinear input-output relationship in a reservoir model, the posterior distribution could be in a complex form, such as multimodal, which violates the Gaussian assumption required by most of the commonly used data assimilation approaches; 2) a typical sampling method requires intensive model evaluations and hence may cause unaffordable computational cost. In the developed algorithm, we use a Gaussian mixture model as the proposal distribution in the sampling process, which is simple but also flexible enough to approximate non-Gaussian distributions and is particularly efficient when the posterior is multimodal. Also, a Gaussian process is utilized as a surrogate model to speed up the sampling process. Furthermore, an iterative scheme of adaptive surrogate refinement and re-sampling ensures sampling accuracy while keeping the computational cost at a minimum level. The developed approach is demonstrated with an illustrative example and shows its capability in handling the above-mentioned issues.
The multimodal posterior of the history matching problem is captured and used to give a reliable production prediction with uncertainty quantification. The new algorithm shows a great improvement in computational efficiency compared with previously studied approaches for the sample problem.
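    The idea of handling a multimodal posterior with a Gaussian-mixture proposal can be sketched via self-normalized importance sampling; the bimodal target and the proposal below are toy choices, and the Gaussian-process surrogate and adaptive refinement are omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def posterior_pdf(x):
    # Bimodal "posterior" (unnormalized): two well-separated modes,
    # mimicking non-unique history-matching solutions
    return (np.exp(-0.5 * ((x + 2.0) / 0.5)**2)
            + np.exp(-0.5 * ((x - 2.0) / 0.5)**2))

# Gaussian-mixture proposal roughly covering both modes
n = 20000
comp = rng.integers(0, 2, n)
samples = np.where(comp == 0,
                   rng.normal(-2.0, 1.0, n),
                   rng.normal(2.0, 1.0, n))

def proposal_pdf(x):
    return 0.5 * (np.exp(-0.5 * (x + 2.0)**2)
                  + np.exp(-0.5 * (x - 2.0)**2)) / np.sqrt(2.0 * np.pi)

# Self-normalized importance weights correct for the target/proposal mismatch
w = posterior_pdf(samples) / proposal_pdf(samples)
w /= w.sum()
posterior_mean = np.sum(w * samples)
print(posterior_mean)
```

    A single Gaussian proposal centered between the modes would waste nearly all its samples in a region of negligible posterior mass; the mixture proposal is what makes the weights well-behaved here.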

  4. Acoustic and elastic waveform inversion best practices

    NASA Astrophysics Data System (ADS)

    Modrak, Ryan T.

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence, one or two test cases are not enough to reliably inform such decisions. We identify best practices instead using two global, one regional and four near-surface acoustic test problems. To obtain meaningful quantitative comparisons, we carry out hundreds of acoustic inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that L-BFGS provides computational savings over nonlinear conjugate gradient methods in a wide variety of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization, and total variation regularization are effective in different contexts. Besides these issues, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details have a strong effect on computational cost, regardless of the chosen material parameterization or nonlinear optimization algorithm. Building on the acoustic inversion results, we carry out elastic experiments with four test problems, three objective functions, and four material parameterizations. The choice of parameterization for isotropic elastic media is found to be more complicated than previous studies suggest, with "wavespeed-like" parameters performing well with phase-based objective functions and Lame parameters performing well with amplitude-based objective functions.
Reliability and efficiency can be even harder to achieve in transversely isotropic elastic inversions because rotation angle parameters describing fast-axis direction are difficult to recover. Using Voigt or Chen-Tromp parameters avoids the need to include rotation angles explicitly and provides an effective strategy for anisotropic inversion. The need for flexible and portable workflow management tools for seismic inversion also poses a major challenge. In a final chapter, the software used to carry out the above experiments is described and instructions for reproducing experimental results are given.
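    A comparison of L-BFGS against a nonlinear conjugate-gradient method, in the spirit of the experiments above, can be run on a standard optimization test problem (the Rosenbrock function is a stand-in here, not one of the seismic test cases):

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# The Rosenbrock function as a stand-in for a misfit surface with a
# narrow curved valley; both methods use the same analytic gradient.
x0 = np.array([-1.2, 1.0])

res_lbfgs = minimize(rosen, x0, jac=rosen_der, method="L-BFGS-B")
res_cg = minimize(rosen, x0, jac=rosen_der, method="CG")

print("L-BFGS-B:", res_lbfgs.nfev, "function evaluations")
print("CG:      ", res_cg.nfev, "function evaluations")
```

    In waveform inversion each "function evaluation" is a full wavefield simulation, so the evaluation counts reported by the optimizer translate directly into the computational savings the abstract refers to.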

  5. A numerical model for modeling microstructure and THM couplings in fault gouges

    NASA Astrophysics Data System (ADS)

    Veveakis, M.; Rattez, H.; Stefanou, I.; Sulem, J.; Poulet, T.

    2017-12-01

    When materials are subjected to large deformations, most of them experience inelastic deformations, accompanied by a localization of these deformations into a narrow zone leading to failure. Localization is seen as an instability from the homogeneous state of deformation. Therefore a first approach to studying it consists of looking at the possible critical conditions under which the constitutive equations of the material allow a bifurcation point (Rudnicki & Rice 1975). But in some cases, we would like to know the evolution of the material after the onset of localization. For example, a fault in the crustal part of the lithosphere is a shear band, and the study of this localized zone makes it possible to extract information about seismic slip. For that, we need to approximate the solution of a nonlinear boundary value problem numerically. This is a challenging task due to the complications that arise when dealing with a softening behavior. Indeed, the classical continuum theory cannot be used because the governing system of equations is ill-posed (Vardoulakis 1985). This ill-posedness can be traced back to the fact that constitutive models do not contain material parameters with the dimension of a length. It leads to what is called "mesh dependency" in numerical simulations, as the deformations localize in only one element of the mesh and the behavior of the system thus depends on the mesh size. A way to regularize the problem is to resort to continuum models with microstructure, such as Cosserat continua (Sulem et al. 2011). Cosserat theory is particularly interesting as it can explicitly take into account the size of the microstructure in a fault gouge. Basically, it introduces 3 degrees of freedom of rotation on top of the 3 translations (Godio et al. 2016). The original work of (Mühlhaus & Vardoulakis 1987) is extended in 3D and thermo-hydro mechanical couplings are added to the model to study fault systems in the crustal part of the lithosphere.
The system of equations is approximated by Finite Element using Redback, an application based on the Moose software (Gaston et al. 2009; Poulet et al. 2016). It enables us to study the weakening effect of the couplings on a fault modelled as an infinite sheared layer and follow the evolution of the shear band thickness in the post-bifurcation regime.

  6. Engaging Pre-Service Middle-School Teacher-Education Students in Mathematical Problem Posing: Development of an Active Learning Framework

    ERIC Educational Resources Information Center

    Ellerton, Nerida F.

    2013-01-01

    Although official curriculum documents make cursory mention of the need for problem posing in school mathematics, problem posing rarely becomes part of the implemented or assessed curriculum. This paper provides examples of how problem posing can be made an integral part of mathematics teacher education programs. It is argued that such programs…

  7. Creativity and Mathematical Problem Posing: An Analysis of High School Students' Mathematical Problem Posing in China and the USA

    ERIC Educational Resources Information Center

    Van Harpen, Xianwei Y.; Sriraman, Bharath

    2013-01-01

    In the literature, problem-posing abilities are reported to be an important aspect/indicator of creativity in mathematics. The importance of problem-posing activities in mathematics is emphasized in educational documents in many countries, including the USA and China. This study was aimed at exploring high school students' creativity in…

  8. Interlocked Problem Posing and Children's Problem Posing Performance in Free Structured Situations

    ERIC Educational Resources Information Center

    Cankoy, Osman

    2014-01-01

    The aim of this study is to explore the mathematical problem posing performance of students in free structured situations. Two classes of fifth grade students (N = 30) were randomly assigned to experimental and control groups. The categories of the problems posed in free structured situations by the 2 groups of students were studied through…

  9. Problem-Posing Strategies Used by Years 8 and 9 Students

    ERIC Educational Resources Information Center

    Stoyanova, Elena

    2005-01-01

    According to Kilpatrick (1987), problem posing can be applied in mathematics classrooms as a "goal" or as a means of instruction. Using problem posing as a goal of instruction involves asking students to respond to a range of problem-posing prompts. The main goal of this article is a classification of mathematics questions created by Years 8…

  10. 2D deblending using the multi-scale shaping scheme

    NASA Astrophysics Data System (ADS)

    Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan

    2018-01-01

    Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, the signal is coherent, whereas the interference is incoherent in some domains (e.g., the common receiver domain and common offset domain). Due to this difference in sparsity, coefficients of signal and interference locate in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains the sparsity to separate the blended record. In the domain where the signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while, in the domain where the interference focuses, the multi-scale scheme suppresses the coefficients representing interference. Because the interference is suppressed evidently at each iteration, the constraints of the multi-scale shaping operator in all scale domains are weak, which guarantees the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.
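    A one-dimensional caricature of the shaping idea, with the Fourier basis standing in for the multi-scale curvelet domain, can be sketched as follows; the signal, interference, and threshold values are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 512
t = np.arange(n)

# Coherent "signal": a few Fourier components (sparse in the frequency domain)
signal = (np.sin(2 * np.pi * 5 * t / n)
          + 0.8 * np.sin(2 * np.pi * 12 * t / n)
          + 0.6 * np.sin(2 * np.pi * 23 * t / n))

# Blending "interference": incoherent spikes (spread out in the frequency domain)
interference = np.zeros(n)
idx = rng.choice(n, 20, replace=False)
interference[idx] = 2.0 * rng.choice([-1.0, 1.0], 20)

blended = signal + interference

# Shaping step: keep only large transform coefficients (where the signal lives)
S = np.fft.fft(blended)
S[np.abs(S) < 60.0] = 0.0
estimate = np.real(np.fft.ifft(S))

rel_err = np.linalg.norm(estimate - signal) / np.linalg.norm(signal)
print(rel_err)
```

    The separation works because the coherent signal concentrates its energy in a few large coefficients while the incoherent interference spreads thinly across all of them; the multi-scale scheme of the paper applies this thresholding scale by scale with scale-dependent strength.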

  11. When a Problem Is More than a Teacher's Question

    ERIC Educational Resources Information Center

    Olson, Jo Clay; Knott, Libby

    2013-01-01

    Not only are the problems teachers pose throughout their teaching of great importance but also the ways in which they use those problems make this a critical component of teaching. A problem-posing episode includes the problem setup, the statement of the problem, and the follow-up questions. Analysis of problem-posing episodes of precalculus…

  12. An Analysis of Secondary and Middle School Teachers' Mathematical Problem Posing

    ERIC Educational Resources Information Center

    Stickles, Paula R.

    2011-01-01

    This study identifies the kinds of problems teachers pose when they are asked to (a) generate problems from given information and (b) create new problems from ones given to them. To investigate teachers' problem posing, preservice and inservice teachers completed background questionnaires and four problem-posing instruments. Based on previous…

  13. Challenges of caring for children with mental disorders: Experiences and views of caregivers attending the outpatient clinic at Muhimbili National Hospital, Dar es Salaam - Tanzania.

    PubMed

    Ambikile, Joel Semel; Outwater, Anne

    2012-07-05

    It is estimated that world-wide up to 20% of children suffer from debilitating mental illness. Mental disorders that pose a significant concern include learning disorders, hyperkinetic disorders (ADHD), depression, psychosis, pervasive development disorders, attachment disorders, anxiety disorders, conduct disorder, substance abuse and eating disorders. Living with such children can be very stressful for caregivers in the family. Therefore, determination of the challenges of living with these children is important in the process of finding ways to help or support caregivers to provide proper care for their children. The purpose of this study was to explore the psychological and emotional, social, and economic challenges that parents or guardians experience when caring for mentally ill children and what they do to address or deal with them. A qualitative study design using in-depth interviews and focus group discussions was applied. The study was conducted at the psychiatric unit of Muhimbili National Hospital in Tanzania. Two focus group discussions (FGDs) and 8 in-depth interviews were conducted with caregivers who attended the psychiatric clinic with their children. Data analysis was done using content analysis. The study revealed psychological and emotional, social, and economic challenges caregivers endure while living with mentally ill children. Psychological and emotional challenges included being stressed by caring tasks and having worries about the present and future life of their children. They had feelings of sadness, and inner pain or bitterness due to the disturbing behaviour of the children. They also experienced some communication problems with their children due to their inability to talk. Social challenges were inadequate social services for their children, stigma, burden of caring task, lack of public awareness of mental illness, lack of social support, and problems with social life.
The economic challenges were poverty, child care interfering with various income generating activities in the family, and extra expenses associated with the child's illness. Caregivers of mentally ill children experience various psychological and emotional, social, and economic challenges. Professional assistance, public awareness of mental illnesses in children, social support by the government, private sector, and non-governmental organizations (NGOs) are important in addressing these challenges.

  14. Analysis of Problems Posed by Sixth-Grade Middle School Students for the Addition of Fractions in Terms of Semantic Structures

    ERIC Educational Resources Information Center

    Kar, Tugrul

    2015-01-01

    This study aimed to investigate how the semantic structures of problems posed by sixth-grade middle school students for the addition of fractions affect their problem-posing performance. The students were presented with symbolic operations involving the addition of fractions and asked to pose two different problems related to daily-life situations…

  15. PARTICIPANT BLINDING AND GASTROINTESTINAL ILLNESS IN A RANDOMIZED, CONTROLLED TRIAL OF AN IN-HOME DRINKING WATER INTERVENTION

    EPA Science Inventory


    Background. There is no consensus about the level of risk of gastrointestinal illness posed by consumption of drinking water that meets all regulatory requirements. Earlier drinking water intervention trials from Canada suggested that 14% - 40% of such gastrointestinal il...

  16. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

The ill-posed inverse problem of force reconstruction stems from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data-refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to mitigate the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to other configurations of the FARF method and to traditional regularization methods for dynamic force reconstruction.
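The iterative Tikhonov regularization this abstract builds on can be sketched generically. The following is a minimal illustration of iterated Tikhonov filtering for a linear ill-posed system, not the authors' FARF implementation; the blur matrix, noise level, and parameter values are invented for the demo:

```python
import numpy as np

def iterated_tikhonov(A, b, alpha, n_iter=20):
    """Iterated Tikhonov regularization for A x = b.

    Each pass solves (A^T A + alpha I) x_new = A^T b + alpha x_old,
    so the effective filter factors approach 1 as the iteration
    proceeds, recovering progressively less-damped solution components.
    """
    n = A.shape[1]
    AtA = A.T @ A
    Atb = A.T @ b
    x = np.zeros(n)
    for _ in range(n_iter):
        x = np.linalg.solve(AtA + alpha * np.eye(n), Atb + alpha * x)
    return x

# toy deconvolution: recover a smooth signal from a blurred, noisy observation
rng = np.random.default_rng(0)
n = 50
t = np.linspace(0.0, 1.0, n)
x_true = np.sin(2.0 * np.pi * t)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.005)  # Gaussian blur matrix
b = A @ x_true + 0.01 * rng.standard_normal(n)
x_rec = iterated_tikhonov(A, b, alpha=1e-2, n_iter=30)
```

Unlike a single Tikhonov solve, the iteration reduces the bias on well-determined components while still damping the noise-dominated ones.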

  17. Unraveling the Mystery of the Origin of Mathematical Problems: Using a Problem-Posing Framework with Prospective Mathematics Teachers

    ERIC Educational Resources Information Center

    Contreras, Jose

    2007-01-01

    In this article, I model how a problem-posing framework can be used to enhance our abilities to systematically generate mathematical problems by modifying the attributes of a given problem. The problem-posing model calls for the application of the following fundamental mathematical processes: proving, reversing, specializing, generalizing, and…

  18. Super-resolution method for face recognition using nonlinear mappings on coherent features.

    PubMed

    Huang, Hua; He, Huiting

    2011-01-01

The low resolution (LR) of face images significantly decreases the performance of face recognition. To address this problem, we present a super-resolution method that uses nonlinear mappings to infer coherent features that favor higher recognition rates by nearest neighbor (NN) classifiers for a single LR face image. Canonical correlation analysis is applied to establish the coherent subspaces between the principal component analysis (PCA) based features of high-resolution (HR) and LR face images. Then, a nonlinear mapping between HR/LR features can be built by radial basis functions (RBFs) with lower regression errors in the coherent feature space than in the PCA feature space. Thus, we can compute super-resolved coherent features corresponding to an input LR image efficiently and accurately according to the trained RBF model. Face identity can then be obtained by feeding these super-resolved features to a simple NN classifier. Extensive experiments on the Facial Recognition Technology, University of Manchester Institute of Science and Technology, and Olivetti Research Laboratory databases show that the proposed method outperforms state-of-the-art face recognition algorithms for a single LR image in terms of both recognition rate and robustness to facial variations of pose and expression.

  19. Robust and transferable quantification of NMR spectral quality using IROC analysis

    NASA Astrophysics Data System (ADS)

    Zambrello, Matthew A.; Maciejewski, Mark W.; Schuyler, Adam D.; Weatherby, Gerard; Hoch, Jeffrey C.

    2017-12-01

    Non-Fourier methods are increasingly utilized in NMR spectroscopy because of their ability to handle nonuniformly-sampled data. However, non-Fourier methods present unique challenges due to their nonlinearity, which can produce nonrandom noise and render conventional metrics for spectral quality such as signal-to-noise ratio unreliable. The lack of robust and transferable metrics (i.e. applicable to methods exhibiting different nonlinearities) has hampered comparison of non-Fourier methods and nonuniform sampling schemes, preventing the identification of best practices. We describe a novel method, in situ receiver operating characteristic analysis (IROC), for characterizing spectral quality based on the Receiver Operating Characteristic curve. IROC utilizes synthetic signals added to empirical data as "ground truth", and provides several robust scalar-valued metrics for spectral quality. This approach avoids problems posed by nonlinear spectral estimates, and provides a versatile quantitative means of characterizing many aspects of spectral quality. We demonstrate applications to parameter optimization in Fourier and non-Fourier spectral estimation, critical comparison of different methods for spectrum analysis, and optimization of nonuniform sampling schemes. The approach will accelerate the discovery of optimal approaches to nonuniform sampling experiment design and non-Fourier spectrum analysis for multidimensional NMR.

  20. DAISY: a new software tool to test global identifiability of biological and physiological systems.

    PubMed

    Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D'Angiò, Leontina

    2007-10-01

A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining whether the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, no software tools have so far been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator with a completely automated software tool, requiring minimum prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/.

  1. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffusive nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
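The MLEM core of this approach can be illustrated in isolation. The sketch below shows the standard multiplicative MLEM update for a nonnegative linear inverse problem; the system matrix and source are invented toy data, and the filtering and SP_N forward model of the paper are not reproduced:

```python
import numpy as np

def mlem(A, b, n_iter=500):
    """Maximum likelihood expectation maximization for b ≈ A x with x >= 0.

    Multiplicative update: x <- x * (A^T (b / (A x))) / (A^T 1).
    The update preserves nonnegativity, which is what makes MLEM
    attractive for source reconstruction problems.
    """
    m, n = A.shape
    x = np.ones(n)
    sens = A.T @ np.ones(m)  # column sums ("sensitivity")
    for _ in range(n_iter):
        ratio = b / np.maximum(A @ x, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy problem: a sparse nonnegative source seen through a positive system matrix
rng = np.random.default_rng(1)
A = rng.random((30, 10))
x_true = np.zeros(10)
x_true[3], x_true[7] = 2.0, 1.0
b = A @ x_true
x_rec = mlem(A, b)
```

The iteration monotonically decreases the Kullback-Leibler data mismatch, so the fit improves steadily even when the solution itself converges slowly.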

  2. A New Problem-Posing Approach Based on Problem-Solving Strategy: Analyzing Pre-Service Primary School Teachers' Performance

    ERIC Educational Resources Information Center

    Kiliç, Çigdem

    2017-01-01

This study examined pre-service primary school teachers' performance in posing problems that require knowledge of problem-solving strategies. Quantitative and qualitative methods were combined. The 120 participants were asked to pose a problem that could be solved by using find-a-pattern, a particular problem-solving strategy. After that,…

  3. Inverse modelling for real-time estimation of radiological consequences in the early stage of an accidental radioactivity release.

    PubMed

    Pecha, Petr; Šmídl, Václav

    2016-11-01

    A stepwise sequential assimilation algorithm is proposed based on an optimisation approach for recursive parameter estimation and tracking of radioactive plume propagation in the early stage of a radiation accident. Predictions of the radiological situation in each time step of the plume propagation are driven by an existing short-term meteorological forecast and the assimilation procedure manipulates the model parameters to match the observations incoming concurrently from the terrain. Mathematically, the task is a typical ill-posed inverse problem of estimating the parameters of the release. The proposed method is designated as a stepwise re-estimation of the source term release dynamics and an improvement of several input model parameters. It results in a more precise determination of the adversely affected areas in the terrain. The nonlinear least-squares regression methodology is applied for estimation of the unknowns. The fast and adequately accurate segmented Gaussian plume model (SGPM) is used in the first stage of direct (forward) modelling. The subsequent inverse procedure infers (re-estimates) the values of important model parameters from the actual observations. Accuracy and sensitivity of the proposed method for real-time forecasting of the accident propagation is studied. First, a twin experiment generating noiseless simulated "artificial" observations is studied to verify the minimisation algorithm. Second, the impact of the measurement noise on the re-estimated source release rate is examined. In addition, the presented method can be used as a proposal for more advanced statistical techniques using, e.g., importance sampling. Copyright © 2016 Elsevier Ltd. All rights reserved.
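The nonlinear least-squares regression step used for parameter re-estimation can be sketched with a minimal Gauss-Newton iteration. The one-pulse release model, parameter values, and synthetic observations below are hypothetical stand-ins; they are not the SGPM or the source-term model of the paper:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=30):
    """Gauss-Newton iteration for min_p ||r(p)||^2."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        # step solves the linearized least-squares problem J step = r
        step = np.linalg.lstsq(jacobian(p), residual(p), rcond=None)[0]
        p = p - step
    return p

# hypothetical stand-in model: a single Gaussian release pulse q * exp(-(t - t0)^2 / 2)
t = np.linspace(0.0, 10.0, 50)

def model(p):
    q, t0 = p
    return q * np.exp(-((t - t0) ** 2) / 2.0)

rng = np.random.default_rng(5)
y = model([3.0, 4.0]) + 0.01 * rng.standard_normal(t.size)  # noisy "observations"

def residual(p):
    return model(p) - y

def jacobian(p):
    q, t0 = p
    e = np.exp(-((t - t0) ** 2) / 2.0)
    return np.column_stack([e, q * e * (t - t0)])  # d/dq, d/dt0

p_hat = gauss_newton(residual, jacobian, p0=[2.0, 3.5])
```

In the twin-experiment spirit of the abstract, the estimate p_hat should recover the "true" release amplitude and timing used to generate the synthetic data.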

  4. Artifacts as Sources for Problem-Posing Activities

    ERIC Educational Resources Information Center

    Bonotto, Cinzia

    2013-01-01

    The problem-posing process represents one of the forms of authentic mathematical inquiry which, if suitably implemented in classroom activities, could move well beyond the limitations of word problems, at least as they are typically utilized. The two exploratory studies presented sought to investigate the impact of "problem-posing" activities when…

  5. The Art of Problem Posing. 3rd Edition

    ERIC Educational Resources Information Center

    Brown, Stephen I.; Walter, Marion I.

    2005-01-01

    The new edition of this classic book describes and provides a myriad of examples of the relationships between problem posing and problem solving, and explores the educational potential of integrating these two activities in classrooms at all levels. "The Art of Problem Posing, Third Edition" encourages readers to shift their thinking…

  6. An Investigation on Chinese Teachers' Realistic Problem Posing and Problem Solving Ability and Beliefs

    ERIC Educational Resources Information Center

    Chen, Limin; Van Dooren, Wim; Chen, Qi; Verschaffel, Lieven

    2011-01-01

    In the present study, which is a part of a research project about realistic word problem solving and problem posing in Chinese elementary schools, a problem solving and a problem posing test were administered to 128 pre-service and in-service elementary school teachers from Tianjin City in China, wherein the teachers were asked to solve 3…

  7. Enhancing students’ mathematical problem posing skill through writing in performance tasks strategy

    NASA Astrophysics Data System (ADS)

    Kadir; Adelina, R.; Fatma, M.

    2018-01-01

Many researchers have studied the Writing in Performance Task (WiPT) strategy in learning, but only a few have paid attention to its relation to the problem-posing skill in mathematics. The problem-posing skill in mathematics covers problem reformulation, reconstruction, and imitation. The purpose of the present study was to examine the effect of the WiPT strategy on students' mathematical problem-posing skill. The research was conducted at a public junior secondary school in Tangerang Selatan. It used a quasi-experimental method with a randomized control-group post-test design. The sample consisted of 64 students: 32 in the experimental group and 32 in the control group. A cluster random sampling technique was used. The research data were obtained by testing. The research shows that the problem-posing skill of students taught by the WiPT strategy is higher than that of students taught by a conventional strategy. The research concludes that the WiPT strategy is more effective in enhancing students' mathematical problem-posing skill than the conventional strategy.

  8. Decoupled Method for Reconstruction of Surface Conditions From Internal Temperatures On Ablative Materials With Uncertain Recession Model

    NASA Technical Reports Server (NTRS)

    Oliver, A. Brandon

    2017-01-01

    Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of equivalent accuracy to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion limit and kinetically limited recession models.

  9. The missions and means framework as an ontology

    NASA Astrophysics Data System (ADS)

    Deitz, Paul H.; Bray, Britt E.; Michaelis, James R.

    2016-05-01

The analysis of warfare frequently suffers from an absence of logical structure for a] specifying explicitly the military mission and b] quantitatively evaluating the mission utility of alternative products and services. In 2003, the Missions and Means Framework (MMF) was developed to redress these shortcomings. The MMF supports multiple combatants and levels of war and, in fact, is a formal embodiment of the Military Decision-Making Process (MDMP). A major effect of incomplete analytic discipline in military systems analyses is that they frequently fall into the category of ill-posed problems, in which they are under-specified, under-determined, or under-constrained. Critical context is often missing. This is frequently the result of incomplete materiel requirements analyses which have unclear linkages to higher levels of warfare, system-of-systems linkages, tactics, techniques and procedures, and the effect of opposition forces. In many instances the capabilities of materiel are assumed to be immutable. This is a result of not assessing how platform components morph over time due to damage, logistics, or repair. Though ill-posed issues can be found in many places in military analysis, probably the greatest challenge comes in the disciplines of C4ISR supported by ontologies, in which formal naming and definition of the types, properties, and interrelationships of the entities are fundamental to characterizing mission success. Though the MMF was not conceived as an ontology, over the past decade some workers, particularly in the field of communication, have labelled the MMF as such. This connection will be described and discussed.

  10. Binary optimization for source localization in the inverse problem of ECG.

    PubMed

    Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf

    2014-09-01

The goal of ECG-imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has however a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to the heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. The application affects only the choice of the binary values, while the core of the algorithms remains the same, making the approach easily adjustable to the application needs. Two methods, a hybrid metaheuristic approach and the difference of convex functions (DC) algorithm, were tested. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to the errors, while the analytical DC scheme can be efficiently applied to higher-dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.
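The Tikhonov baseline that this abstract contrasts with can be sketched in a few lines. The example below is a generic zeroth-order Tikhonov solve on an invented ill-conditioned test matrix (a Hilbert matrix), not an ECGI forward model:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zeroth-order Tikhonov: minimize ||A x - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# classic ill-posed demo: a Hilbert matrix, whose condition number is ~1e16 at n = 12
n = 12
i, j = np.mgrid[1:n + 1, 1:n + 1]
A = 1.0 / (i + j - 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-6 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)   # noise is amplified enormously
x_reg = tikhonov(A, b, lam=1e-8)  # penalty term restores a stable solution
```

The penalty term stabilizes the inversion, but, as the abstract notes, it also smooths the solution, which is exactly the drawback the binary optimization approach is meant to avoid.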

  11. Research in nonlinear structural and solid mechanics

    NASA Technical Reports Server (NTRS)

    Mccomb, H. G., Jr. (Compiler); Noor, A. K. (Compiler)

    1980-01-01

    Nonlinear analysis of building structures and numerical solution of nonlinear algebraic equations and Newton's method are discussed. Other topics include: nonlinear interaction problems; solution procedures for nonlinear problems; crash dynamics and advanced nonlinear applications; material characterization, contact problems, and inelastic response; and formulation aspects and special software for nonlinear analysis.

  12. Flow curve analysis of a Pickering emulsion-polymerized PEDOT:PSS/PS-based electrorheological fluid

    NASA Astrophysics Data System (ADS)

    Kim, So Hee; Choi, Hyoung Jin; Leong, Yee-Kwong

    2017-11-01

The steady shear electrorheological (ER) response of poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate)/polystyrene (PEDOT:PSS/PS) composite particles, which were initially fabricated by Pickering emulsion polymerization, was tested with a 10 vol% ER fluid dispersed in silicone oil. The model-independent shear rate and yield stress obtained from the raw torque-rotational speed data using a Couette-type rotational rheometer under an applied electric field strength were then analyzed by Tikhonov regularization, which is the most suitable technique for solving an ill-posed inverse problem. The shear stress-shear rate data also fitted well with the data extracted from the Bingham fluid model.

  13. An estimate for the thermal photon rate from lattice QCD

    NASA Astrophysics Data System (ADS)

    Brandt, Bastian B.; Francis, Anthony; Harris, Tim; Meyer, Harvey B.; Steinberg, Aman

    2018-03-01

We estimate the production rate of photons by the quark-gluon plasma in lattice QCD. We propose a new correlation function which provides better control over the systematic uncertainty in estimating the photon production rate at photon momenta in the range πT/2 to 2πT. The relevant Euclidean vector current correlation functions are computed with Nf = 2 Wilson clover fermions in the chirally-symmetric phase. In order to estimate the photon rate, an ill-posed problem for the vector-channel spectral function must be regularized. We use both a direct model for the spectral function and a model-independent estimate from the Backus-Gilbert method to give an estimate for the photon rate.

  14. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this class of ill-posed inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
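The GCV criterion itself is simple to state and compute for small problems. The sketch below evaluates the GCV function for a Tikhonov-regularized least-squares problem through one SVD; the test matrix and noise level are invented, and the Lanczos/Gauss-quadrature acceleration of the paper is not reproduced:

```python
import numpy as np

def gcv_lambda(A, b, lambdas):
    """Choose a Tikhonov parameter by generalized cross-validation.

    G(lam) = ||(I - H(lam)) b||^2 / trace(I - H(lam))^2, where
    H(lam) = A (A^T A + lam I)^{-1} A^T is the influence matrix.
    One SVD of A makes every candidate lam cheap to evaluate.
    """
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam)  # Tikhonov filter factors
        resid = np.sum(((1.0 - f) * beta) ** 2) + (b @ b - beta @ beta)
        scores.append(resid / (m - np.sum(f)) ** 2)
    return lambdas[int(np.argmin(scores))]

# demo: decaying singular spectrum plus noise
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 10)) @ np.diag(10.0 ** -np.arange(10.0))
x_true = np.ones(10)
b = A @ x_true + 1e-3 * rng.standard_normal(40)
lam = gcv_lambda(A, b, np.logspace(-12, 2, 60))
```

The appeal of GCV, as in the abstract, is that the parameter is chosen from the data alone, with no knowledge of the noise level required.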

  15. Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation

    NASA Astrophysics Data System (ADS)

    Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe

    2018-04-01

    In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.

  16. Large data well-posedness in the energy space of the Chern-Simons-Schrödinger system

    NASA Astrophysics Data System (ADS)

    Lim, Zhuo Min

    2018-02-01

We consider the initial-value problem for the Chern-Simons-Schrödinger system, which is a gauge-covariant Schrödinger system in R_t × R_x^2 with a long-range electromagnetic field. We show that, in the Coulomb gauge, it is locally well-posed in H^s for s ≥ 1, and the solution map satisfies a local-in-time weak Lipschitz bound. By energy conservation, we also obtain a global regularity result. The key is to retain the non-perturbative part of the derivative nonlinearity in the principal operator, and to exploit the dispersive properties of the resulting paradifferential-type principal operator using adapted U^p and V^p spaces.

  17. Application of identification techniques to remote manipulator system flight data

    NASA Technical Reports Server (NTRS)

    Shepard, G. D.; Lepanto, J. A.; Metzinger, R. W.; Fogel, E.

    1983-01-01

This paper addresses the application of identification techniques to flight data from the Space Shuttle Remote Manipulator System (RMS). A description of the remote manipulator, including structural and control system characteristics, sensors, and actuators, is given. A brief overview of system identification procedures is presented, and the practical aspects of implementing system identification algorithms are discussed. In particular, the problems posed by the data sampling rate, numerical error, and system nonlinearities are considered. Simulation predictions of damping, frequency, and system order are compared with values identified from flight data to support an evaluation of RMS structural and control system models. Finally, conclusions are drawn regarding the application of identification techniques to flight data obtained from a flexible space structure.

  18. Dissecting Success Stories on Mathematical Problem Posing: A Case of the Billiard Task

    ERIC Educational Resources Information Center

    Koichu, Boris; Kontorovich, Igor

    2013-01-01

    "Success stories," i.e., cases in which mathematical problems posed in a controlled setting are perceived by the problem posers or other individuals as interesting, cognitively demanding, or surprising, are essential for understanding the nature of problem posing. This paper analyzes two success stories that occurred with individuals of different…

  19. What Makes a Problem Mathematically Interesting? Inviting Prospective Teachers to Pose Better Problems

    ERIC Educational Resources Information Center

    Crespo, Sandra; Sinclair, Nathalie

    2008-01-01

School students of all ages, including those who subsequently become teachers, have limited experience posing their own mathematical problems. Yet problem posing, both as an act of mathematical inquiry and of mathematics teaching, is part of the mathematics education reform vision that seeks to promote mathematics as a worthy intellectual…

  20. Helping Young Students to Better Pose an Environmental Problem

    ERIC Educational Resources Information Center

    Pruneau, Diane; Freiman, Viktor; Barbier, Pierre-Yves; Langis, Joanne

    2009-01-01

    Grade 3 students were asked to solve a sedimentation problem in a local river. With scientists, students explored many aspects of the problem and proposed solutions. Graphic representation tools were used to help students to better pose the problem. Using questionnaires and interviews, researchers observed students' capacity to pose the problem…

  1. University Students' Problem Posing Abilities and Attitudes towards Mathematics.

    ERIC Educational Resources Information Center

    Grundmeier, Todd A.

    2002-01-01

    Explores the problem posing abilities and attitudes towards mathematics of students in a university pre-calculus class and a university mathematical proof class. Reports a significant difference in numeric posing versus non-numeric posing ability in both classes. (Author/MM)

  2. Effects of the Problem-Posing Approach on Students' Problem Solving Skills and Metacognitive Awareness in Science Education

    NASA Astrophysics Data System (ADS)

    Akben, Nimet

    2018-05-01

The interrelationship between mathematics and science education has frequently been emphasized, and common goals and approaches have often been adopted between the disciplines. Improving students' problem-solving skills in mathematics and science education has always been given special attention; however, the problem-posing approach, which plays a key role in mathematics education, has not been commonly utilized in science education. As a result, the purpose of this study was to better determine the effects of the problem-posing approach on students' problem-solving skills and metacognitive awareness in science education. This was a quasi-experimental study conducted with 61 chemistry and 40 physics students; a problem-solving inventory and a metacognitive awareness inventory were administered to participants both as a pre-test and a post-test. During the 2017-2018 academic year, problem-solving activities based on the problem-posing approach were performed with the participating students during their senior year in various university chemistry and physics departments throughout the Republic of Turkey. The study results suggested that structured, semi-structured, and free problem-posing activities improve students' problem-solving skills and metacognitive awareness. These findings indicated not only the usefulness of integrating problem-posing activities into science education programs but also the need for further research into this question.

  3. Optimal reorientation of asymmetric underactuated spacecraft using differential flatness and receding horizon control

    NASA Astrophysics Data System (ADS)

    Cai, Wei-wei; Yang, Le-ping; Zhu, Yan-wei

    2015-01-01

This paper presents a novel method integrating nominal trajectory optimization and tracking for the reorientation control of an underactuated spacecraft with only two available control torque inputs. By employing a pseudo input along the uncontrolled axis, the flatness property of a general underactuated spacecraft is extended explicitly, whereby the reorientation trajectory optimization problem is formulated in the flat output space with all the differential constraints eliminated. Ultimately, the flat output optimization problem is transformed into a nonlinear programming problem via the Chebyshev pseudospectral method, which is improved by conformal map and barycentric rational interpolation techniques to overcome the side effects of the differentiation matrix's ill-conditioning on numerical accuracy. Treating the trajectory tracking control as a state regulation problem, we develop a robust closed-loop tracking control law using the receding-horizon control method, and compute the feedback control at each control cycle rapidly via the differential transformation method. Numerical simulation results show that the proposed control scheme is feasible and effective for the reorientation maneuver.

  4. Pulse reflectometry as an acoustical inverse problem: Regularization of the bore reconstruction

    NASA Astrophysics Data System (ADS)

    Forbes, Barbara J.; Sharp, David B.; Kemp, Jonathan A.

    2002-11-01

    The theoretical basis of acoustic pulse reflectometry, a noninvasive method for the reconstruction of an acoustical duct from the reflections measured in response to an input pulse, is reviewed in terms of the inversion of the central Fredholm equation. It is known that this is an ill-posed problem in the context of finite-bandwidth experimental signals. Recent work by the authors has proposed the truncated singular value decomposition (TSVD) in the regularization of the transient input impulse response, a non-measurable quantity from which the spatial bore reconstruction is derived. In the present paper we further emphasize the relevance of the singular system framework to reflectometry applications, examining for the first time the transient bases of the system. In particular, by varying the truncation point for increasing condition numbers of the system matrix, it is found that the effects of out-of-bandwidth singular functions on the bore reconstruction can be systematically studied.
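    The TSVD regularization referred to above is simple to state: keep only the singular components whose singular values exceed the truncation point, and invert those alone. The sketch below is a generic numerical illustration of that idea, not the authors' reflectometry code; the test operator and truncation level are arbitrary choices.

    ```python
    import numpy as np

    def tsvd_solve(A, b, k):
        """Solve A x = b keeping only the k largest singular values."""
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        # Invert only the retained singular values; discard the rest.
        s_inv = np.zeros_like(s)
        s_inv[:k] = 1.0 / s[:k]
        return Vt.T @ (s_inv * (U.T @ b))

    # Mildly ill-conditioned system: truncation stabilizes the solution.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 10))
    A[:, -1] = A[:, 0] + 1e-8 * rng.standard_normal(20)  # near-dependent column
    x_true = np.ones(10)
    b = A @ x_true + 1e-6 * rng.standard_normal(20)
    x_k = tsvd_solve(A, b, k=9)  # drop the smallest singular value
    print(np.linalg.norm(A @ x_k - b) < 1e-3)
    ```

    The truncation point trades fidelity for stability: raising k fits more of the data but amplifies noise in the directions with small singular values.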

  5. A frequency-domain seismic blind deconvolution based on Gini correlations

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing

    2018-02-01

    In reflection seismic processing, blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the record is short. To solve this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. Owing to a unique feature, the GCs are robust, with a higher tolerance of low-SNR data and less dependence on record length. Applications of the seismic blind deconvolution based on the GCs show its capacity for estimating the unknown seismic wavelet and the reflectivity sequence, for both synthetic traces and field data, even with low SNR and short records.
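    For reference, one standard definition of the Gini correlation is the covariance of one variable with the rank of the other, normalized by the covariance of that variable with its own rank. The sketch below illustrates only that statistic, not the authors' frequency-domain deconvolution criterion; the sample sizes are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import rankdata

    def gini_corr(x, y):
        """Gini correlation of x with respect to y (asymmetric in general):
        cov(x, rank(y)) / cov(x, rank(x))."""
        return np.cov(x, rankdata(y))[0, 1] / np.cov(x, rankdata(x))[0, 1]

    rng = np.random.default_rng(0)
    x = rng.standard_normal(2000)
    z = rng.standard_normal(2000)           # independent of x
    print(gini_corr(x, x))                  # 1.0 by construction
    print(abs(gini_corr(x, z)) < 0.2)       # near zero for independent data
    ```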

  6. Quantitative imaging of aggregated emulsions.

    PubMed

    Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J

    2006-02-28

    Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.
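    The role of the Euclidean distance map in discriminating closely clustered droplets can be seen on a toy binary image: peaks of the distance map mark droplet centres even when the droplets touch. This generic `scipy.ndimage` sketch is not the authors' pipeline; the image and filter sizes are arbitrary.

    ```python
    import numpy as np
    from scipy import ndimage

    # Two overlapping circular "droplets" forming one connected blob.
    yy, xx = np.mgrid[0:60, 0:60]
    mask = ((yy - 30) ** 2 + (xx - 20) ** 2 < 12 ** 2) | \
           ((yy - 30) ** 2 + (xx - 40) ** 2 < 12 ** 2)

    # Euclidean distance map: each foreground pixel's distance to background.
    edm = ndimage.distance_transform_edt(mask)

    # Peaks of the distance map sit at droplet centres; a maximum filter
    # picks them out, and a depth threshold suppresses the saddle between them.
    peaks = (edm == ndimage.maximum_filter(edm, size=9)) & (edm > 10)
    labels, n = ndimage.label(peaks)
    print(n)  # two centres recovered from a single connected blob
    ```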

  7. Multicollinearity in hierarchical linear models.

    PubMed

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
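    A common single-level diagnostic underlying such analyses is the variance inflation factor (VIF), computed per predictor from the R² of regressing it on the remaining predictors. This is a minimal sketch on synthetic data, not the HLM-specific top-down procedure the study proposes; the cutoff of 10 is a conventional rule of thumb.

    ```python
    import numpy as np

    def vif(X):
        """Variance inflation factor of each column of the design matrix X."""
        X = (X - X.mean(0)) / X.std(0)          # standardize predictors
        n, p = X.shape
        out = []
        for j in range(p):
            others = np.delete(X, j, axis=1)
            # R^2 of regressing column j on the remaining predictors.
            beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
            resid = X[:, j] - others @ beta
            r2 = 1 - resid.var() / X[:, j].var()
            out.append(1.0 / (1.0 - r2))
        return np.array(out)

    rng = np.random.default_rng(1)
    x1 = rng.standard_normal(200)
    x2 = x1 + 0.05 * rng.standard_normal(200)   # nearly collinear with x1
    x3 = rng.standard_normal(200)
    v = vif(np.column_stack([x1, x2, x3]))
    print(v[0] > 10 and v[2] < 2)  # x1/x2 inflated, x3 unaffected
    ```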

  8. User-assisted video segmentation system for visual communication

    NASA Astrophysics Data System (ADS)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we decompose the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation tasks and allows a higher level of flexibility in the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, together with a point insertion process that provides the feature points for the next frame's tracking.

  9. SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space

    PubMed Central

    Lustig, Michael; Pauly, John M.

    2010-01-01

    A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with both the calibration and the acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented, based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
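    CG strategies for reconstruction problems of this kind are typically applied to the normal equations of the data-consistency system. The sketch below is a generic CGNR (CG on the normal equations) illustration on a small dense least-squares problem, not the SPIRiT operator itself; all dimensions are arbitrary.

    ```python
    import numpy as np

    def cg_normal(A, b, iters=50):
        """Conjugate gradient on the normal equations A^T A x = A^T b (CGNR)."""
        x = np.zeros(A.shape[1])
        r = A.T @ (b - A @ x)      # gradient residual
        p = r.copy()
        rs = r @ r
        for _ in range(iters):
            Ap = A.T @ (A @ p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if rs_new < 1e-20:     # converged
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x

    rng = np.random.default_rng(4)
    A = rng.standard_normal((30, 10))
    x_true = rng.standard_normal(10)
    x = cg_normal(A, A @ x_true)   # consistent data: exact recovery
    print(np.allclose(x, x_true, atol=1e-6))
    ```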

  10. Limbless undulatory propulsion on land.

    PubMed

    Guo, Z V; Mahadevan, L

    2008-03-04

    We analyze the lateral undulatory motion of a natural or artificial snake or other slender organism that "swims" on land by propagating retrograde flexural waves. The governing equations for the planar lateral undulation of a thin filament that interacts frictionally with its environment lead to an incomplete system. Closures accounting for the forces generated by the internal muscles and the interaction of the filament with its environment lead to a nonlinear boundary value problem, which we solve using a combination of analytical and numerical methods. We find that the primary determinant of the shape of the organism is its interaction with the external environment, whereas the speed of the organism is determined primarily by the internal muscular forces, consistent with prior qualitative observations. Our model also allows us to pose and solve a variety of optimization problems such as those associated with maximum speed and mechanical efficiency, thus defining the performance envelope of this mode of locomotion.

  11. Single photon emission computed tomography-guided Cerenkov luminescence tomography

    NASA Astrophysics Data System (ADS)

    Hu, Zhenhua; Chen, Xueli; Liang, Jimin; Qu, Xiaochao; Chen, Duofang; Yang, Weidong; Wang, Jing; Cao, Feng; Tian, Jie

    2012-07-01

    Cerenkov luminescence tomography (CLT) has become a valuable tool for preclinical imaging because of its ability to reconstruct the three-dimensional distribution and activity of radiopharmaceuticals. However, it is still far from a mature technology and suffers from relatively low spatial resolution due to the ill-posed inverse problem of the tomographic reconstruction. In this paper, we present a single photon emission computed tomography (SPECT)-guided reconstruction method for CLT, in which a priori information on the permissible source region (PSR) from SPECT imaging results is incorporated to effectively reduce the ill-posedness of the inverse reconstruction problem. The performance of the method was first validated with experimental reconstructions of an adult athymic nude mouse implanted with a Na131I radioactive source and of another that received an intravenous tail injection of Na131I. A tissue-mimicking phantom experiment was then conducted to illustrate the ability of the proposed method to resolve double sources. Compared with the traditional PSR strategy, in which the PSR is determined by the surface flux distribution, the proposed method obtained much more accurate and encouraging localization and resolution results. Preliminary results showed that the proposed SPECT-guided reconstruction method is insensitive to the choice of regularization method and can ignore the heterogeneity of tissues, which avoids the organ segmentation procedure.

  12. The inverse problems of wing panel manufacture processes

    NASA Astrophysics Data System (ADS)

    Oleinikov, A. I.; Bormotin, K. S.

    2013-12-01

    It is shown that inverse problems of steady-state creep bending of plates, in both the geometrically linear and nonlinear formulations, can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics in hot and cold working: a strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials in which the kind of stress state has an influence. The paper gives the basics of the developed computer-aided system of design, modeling, and electronic simulation targeting the processes of manufacturing wing integral panels. The modeling results can be used to calculate the die tooling, determine panel processability, and control panel rejection in the course of forming.

  13. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    NASA Astrophysics Data System (ADS)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

    At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimum, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and computational performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential for improving our ability to solve geophysical inversion problems.

  14. Fuel-optimal low-thrust formation reconfiguration via Radau pseudospectral method

    NASA Astrophysics Data System (ADS)

    Li, Jing

    2016-07-01

    This paper investigates fuel-optimal low-thrust formation reconfiguration near circular orbit. Based on the Clohessy-Wiltshire equations, first-order necessary optimality conditions are derived from Pontryagin's maximum principle. The fuel-optimal impulsive solution is utilized to divide the low-thrust trajectory into thrust and coast arcs. By introducing the switching times as optimization variables, the fuel-optimal low-thrust formation reconfiguration is posed as a nonlinear programming problem (NLP) via direct transcription using the multiple-phase Radau pseudospectral method (RPM), which is then solved by the sparse nonlinear optimization software SNOPT. To facilitate optimality verification and, if necessary, further refinement of the optimized solution of the NLP, formulas for mass costate estimation and initial costate scaling are presented. Numerical examples are given to show the application of the proposed optimization method. To simplify the problem, generic fuel-optimal low-thrust formation reconfiguration can be treated as reconfiguration without any initial and terminal coast arcs, whose optimal solutions can be efficiently obtained from the multiple-phase RPM at the cost of a slight fuel increment. Finally, the influence of the specific impulse and maximum thrust magnitude on the fuel-optimal low-thrust formation reconfiguration is analyzed. Numerical results show the links and differences between the fuel-optimal impulsive and low-thrust solutions.

  15. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be formulated mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary approach is to convert the 2D problem into 1D via the Kronecker product, which sharply increases the dictionary size and computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for the image reconstruction. It is shown that 2D-SL0 achieves results equivalent to those of 1D reconstruction methods, while the computational complexity and memory usage are reduced significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
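    The Kronecker flattening that 2D methods avoid can be checked directly: for a separable 2D model Y = A X Bᵀ, vectorizing column-major gives vec(Y) = (B ⊗ A) vec(X), so the 1D dictionary size is the product of the two per-dimension sizes. A small numerical check (dimensions chosen arbitrarily):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((8, 16))   # range-dimension dictionary
    B = rng.standard_normal((8, 16))   # azimuth-dimension dictionary
    X = np.zeros((16, 16))
    X[3, 5] = 1.0                      # a single sparse scatterer

    # 2D model: small matrices, cheap products.
    Y = A @ X @ B.T

    # Equivalent 1D model after vectorization: the dictionary is the
    # Kronecker product, 64x256 here, and grows quadratically in general.
    D = np.kron(B, A)                  # vec(A X B^T) = (B ⊗ A) vec(X)
    y = D @ X.flatten(order='F')       # column-major vectorization
    print(np.allclose(y, Y.flatten(order='F')))
    ```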

  16. Improved real-time dynamics from imaginary frequency lattice simulations

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jan M.; Rothkopf, Alexander

    2018-03-01

    The computation of real-time properties, such as transport coefficients or bound state spectra of strongly interacting quantum fields in thermal equilibrium, is a pressing matter. Since the sign problem prevents a direct evaluation of these quantities, lattice data need to be analytically continued from the Euclidean domain of the simulation to Minkowski time, in general an ill-posed inverse problem. Here we report on a novel approach to improve the determination of real-time information in the form of spectral functions by setting up a simulation prescription in imaginary frequencies. By carefully distinguishing between initial conditions and quantum dynamics one obtains access to correlation functions also outside the conventional Matsubara frequencies. In particular, the range between ω₀ and ω₁ = 2πT, which is most relevant for the inverse problem, may be resolved more finely. In combination with the fact that in imaginary frequencies the kernel of the inverse problem is not an exponential but only a rational function, we observe significant improvements in the reconstruction of spectral functions, demonstrated in a simple 0+1 dimensional scalar field theory toy model.

  17. Hessian Schatten-norm regularization for linear inverse problems.

    PubMed

    Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael

    2013-05-01

    We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
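    The fact exploited in the projection step above — a Schatten p-norm is simply the l_p norm of the singular-value vector — is easy to verify numerically. This is a minimal illustration of that identity, not the paper's primal-dual solver:

    ```python
    import numpy as np

    def schatten_norm(M, p):
        """Schatten p-norm: the l_p norm of the singular values of M."""
        s = np.linalg.svd(M, compute_uv=False)
        return np.sum(s ** p) ** (1.0 / p)

    H = np.array([[2.0, 1.0],
                  [1.0, 3.0]])
    # p = 1: nuclear norm (sum of singular values); for this symmetric
    # positive-definite matrix that equals the trace.
    print(abs(schatten_norm(H, 1) - np.trace(H)) < 1e-12)
    # p = 2: coincides with the Frobenius norm.
    print(abs(schatten_norm(H, 2) - np.linalg.norm(H, 'fro')) < 1e-12)
    ```

    Because the norm depends on the matrix only through its singular values, projecting a matrix onto a Schatten norm ball reduces to projecting its singular-value vector onto the corresponding l_q ball, which is the link the paper exploits.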

  18. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    NASA Astrophysics Data System (ADS)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging in several respects. Firstly, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180 . This approach combines the components of variational data assimilation and ensemble Kalman filtering, inheriting robustness from the former and ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).

  19. Data-driven reverse engineering of signaling pathways using ensembles of dynamic models.

    PubMed

    Henriques, David; Villaverde, Alejandro F; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R

    2017-02-01

    Despite significant efforts and remarkable progress, the inference of signaling networks from experimental data remains very challenging. The problem is particularly difficult when the objective is to obtain a dynamic model capable of predicting the effect of novel perturbations not considered during model training. The problem is ill-posed due to the nonlinear nature of these systems, the fact that only a fraction of the involved proteins and their post-translational modifications can be measured, and limitations on the technologies used for growing cells in vitro, perturbing them, and measuring their variations. As a consequence, there is a pervasive lack of identifiability. To overcome these issues, we present a methodology called SELDOM (enSEmbLe of Dynamic lOgic-based Models), which builds an ensemble of logic-based dynamic models, trains them to experimental data, and combines their individual simulations into an ensemble prediction. It also includes a model reduction step to prune spurious interactions and mitigate overfitting. SELDOM is a data-driven method, in the sense that it does not require any prior knowledge of the system: the interaction networks that act as scaffolds for the dynamic models are inferred from data using mutual information. We have tested SELDOM on a number of experimental and in silico signal transduction case-studies, including the recent HPN-DREAM breast cancer challenge. We found that its performance is highly competitive compared to state-of-the-art methods for the purpose of recovering network topology. More importantly, the utility of SELDOM goes beyond basic network inference (i.e. uncovering static interaction networks): it builds dynamic (based on ordinary differential equation) models, which can be used for mechanistic interpretations and reliable dynamic predictions in new experimental conditions (i.e. not used in the training). 
For this task, SELDOM's ensemble prediction is not only consistently better than predictions from individual models, but also often outperforms the state of the art represented by the methods used in the HPN-DREAM challenge.
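    The mutual-information scaffolding step mentioned above can be sketched with a simple histogram estimator. This is a generic illustration, not SELDOM's implementation; the bin count and the synthetic signals are arbitrary choices, and the estimator carries a small positive bias for independent data.

    ```python
    import numpy as np

    def mutual_info(x, y, bins=16):
        """Histogram estimate of mutual information between two signals (nats)."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px = pxy.sum(axis=1, keepdims=True)       # marginal of x
        py = pxy.sum(axis=0, keepdims=True)       # marginal of y
        nz = pxy > 0                              # avoid log(0)
        return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

    rng = np.random.default_rng(2)
    upstream = rng.standard_normal(5000)
    downstream = np.tanh(upstream) + 0.1 * rng.standard_normal(5000)  # dependent
    unrelated = rng.standard_normal(5000)
    # A dependent pair scores well above an independent one, which is how
    # candidate network edges can be ranked from data alone.
    print(mutual_info(upstream, downstream) > mutual_info(upstream, unrelated))
    ```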

  20. Data-driven reverse engineering of signaling pathways using ensembles of dynamic models

    PubMed Central

    Henriques, David; Villaverde, Alejandro F.; Banga, Julio R.

    2017-01-01

    Despite significant efforts and remarkable progress, the inference of signaling networks from experimental data remains very challenging. The problem is particularly difficult when the objective is to obtain a dynamic model capable of predicting the effect of novel perturbations not considered during model training. The problem is ill-posed due to the nonlinear nature of these systems, the fact that only a fraction of the involved proteins and their post-translational modifications can be measured, and limitations on the technologies used for growing cells in vitro, perturbing them, and measuring their variations. As a consequence, there is a pervasive lack of identifiability. To overcome these issues, we present a methodology called SELDOM (enSEmbLe of Dynamic lOgic-based Models), which builds an ensemble of logic-based dynamic models, trains them to experimental data, and combines their individual simulations into an ensemble prediction. It also includes a model reduction step to prune spurious interactions and mitigate overfitting. SELDOM is a data-driven method, in the sense that it does not require any prior knowledge of the system: the interaction networks that act as scaffolds for the dynamic models are inferred from data using mutual information. We have tested SELDOM on a number of experimental and in silico signal transduction case-studies, including the recent HPN-DREAM breast cancer challenge. We found that its performance is highly competitive compared to state-of-the-art methods for the purpose of recovering network topology. More importantly, the utility of SELDOM goes beyond basic network inference (i.e. uncovering static interaction networks): it builds dynamic (based on ordinary differential equation) models, which can be used for mechanistic interpretations and reliable dynamic predictions in new experimental conditions (i.e. not used in the training). 
For this task, SELDOM’s ensemble prediction is not only consistently better than predictions from individual models, but also often outperforms the state of the art represented by the methods used in the HPN-DREAM challenge. PMID:28166222

  1. Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.

    PubMed

    Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf

    2018-05-12

    We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry consisting of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of the quality of reconstructed epicardial potentials, estimated activation and recovery times, and estimated pacing locations, and compared with the performance of Tikhonov zeroth-order regularization. Representations in the wavelet domain achieved higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of the origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing the substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions. Graphical Abstract: The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed, and solving it requires additional constraints for regularization. 
We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing the substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions.
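    For comparison, the Tikhonov zeroth-order baseline mentioned above solves a ridge-penalized least-squares problem in closed form. The sketch below is a generic illustration on a synthetic smoothing operator, not the torso-heart transfer matrix; the kernel, noise level, and penalty weight are arbitrary.

    ```python
    import numpy as np

    def tikhonov0(A, b, lam):
        """Zeroth-order Tikhonov: argmin ||A x - b||^2 + lam ||x||^2."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

    # Ill-conditioned forward operator (a Gaussian smoothing kernel).
    n = 40
    A = np.array([[np.exp(-0.5 * ((i - j) / 2.0) ** 2) for j in range(n)]
                  for i in range(n)])
    x_true = np.zeros(n)
    x_true[10:15] = 1.0
    b = A @ x_true + 1e-3 * np.random.default_rng(3).standard_normal(n)

    x_reg = tikhonov0(A, b, lam=1e-2)
    x_naive = np.linalg.solve(A, b)   # unregularized inversion
    # Regularization keeps the measurement noise from being amplified.
    print(np.linalg.norm(x_reg - x_true) < np.linalg.norm(x_naive - x_true))
    ```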

  2. Analyzing Pre-Service Primary Teachers' Fraction Knowledge Structures through Problem Posing

    ERIC Educational Resources Information Center

    Kilic, Cigdem

    2015-01-01

    In this study it was aimed to determine pre-service primary teachers' knowledge structures of fractions through problem posing activities. A total of 90 pre-service primary teachers participated in this study. A problem posing test consisting of two questions was used, and the participants were asked to generate as many problems as possible based on the…

  3. Students’ Mathematical Creative Thinking through Problem Posing Learning

    NASA Astrophysics Data System (ADS)

    Ulfah, U.; Prabawanto, S.; Jupri, A.

    2017-09-01

    The research aims to investigate the differences in the enhancement of mathematical creative thinking ability between students who received a problem posing approach assisted by manipulative media and students who received a problem posing approach without manipulative media. This study was a quasi-experimental research with a non-equivalent control group design. The population of this research was third-grade students of a primary school in Bandung city in the 2016/2017 academic year. The sample was two classes, an experiment class and a control class. The instrument used was a test of mathematical creative thinking ability. Based on the results of the research, the enhancement of the mathematical creative thinking ability of students who received the problem posing approach with manipulative media aid is higher than that of students who received the problem posing approach without manipulative media aid. Students who learned through problem posing became accustomed to arranging mathematical sentences into story problems, which can facilitate their comprehension of story problems.

  4. An Interview Forum on Interlibrary Loan/Document Delivery with Lynn Wiley and Tom Delaney

    ERIC Educational Resources Information Center

    Hasty, Douglas F.

    2003-01-01

    The Virginia Boucher-OCLC Distinguished ILL Librarian Award is the most prestigious commendation given to practitioners in the field. The following questions about ILL were posed to the two most recent recipients of the Boucher Award: Tom Delaney (2002), Coordinator of Interlibrary Loan Services at Colorado State University and Lynn Wiley (2001),…

  5. Deinstitutionalization: Its Impact on Community Mental Health Centers and the Seriously Mentally Ill

    ERIC Educational Resources Information Center

    Kliewer, Stephen P.; McNally Melissa; Trippany, Robyn L.

    2009-01-01

    Deinstitutionalization has had a significant impact on the mental health system, including the client, the agency, and the counselor. For clients with serious mental illness, learning to live in a community setting poses challenges that are often difficult to overcome. Community mental health agencies must respond to these specific needs, thus…

  6. Application of Multi-Hypothesis Sequential Monte Carlo for Breakup Analysis

    NASA Astrophysics Data System (ADS)

    Faber, W. R.; Zaidi, W.; Hussein, I. I.; Roscoe, C. W. T.; Wilkins, M. P.; Schumacher, P. W., Jr.

    As more objects are launched into space, the potential for breakup events and space object collisions is ever increasing. These events create large clouds of debris that are extremely hazardous to space operations. Providing timely, accurate, and statistically meaningful Space Situational Awareness (SSA) data is crucial in order to protect assets and operations in space. The space object tracking problem, in general, is nonlinear in both state dynamics and observations, making it ill-suited to linear filtering techniques such as the Kalman filter. Additionally, given the multi-object, multi-scenario nature of the problem, space situational awareness requires multi-hypothesis tracking and management that is combinatorially challenging. In practice, assumptions of underlying linearity and/or Gaussianity are often used to provide tractable solutions to the multiple space object tracking problem. However, these assumptions are at times detrimental to tracking performance and provide statistically inconsistent solutions. This paper details a tractable solution to the multiple space object tracking problem applicable to space object breakup events. Within this solution, simplifying assumptions about the underlying probability density function are relaxed and heuristic methods for hypothesis management are avoided. This is done by implementing Sequential Monte Carlo (SMC) methods for both nonlinear filtering and hypothesis management. The goal of this paper is to detail the solution and use it as a platform to discuss computational limitations that hinder proper analysis of large breakup events.
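    A minimal bootstrap particle filter illustrates the SMC propagate-weight-resample cycle on a standard 1D nonlinear benchmark. This is an illustrative sketch only, not the paper's multi-hypothesis tracker: the model, noise levels, and particle count are arbitrary choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    N = 2000                      # number of particles
    x_true = 0.1
    particles = rng.standard_normal(N)
    for _ in range(30):
        # True state evolves through nonlinear dynamics; we observe it in noise.
        x_true = 0.5 * x_true + 25 * x_true / (1 + x_true**2) + rng.normal(0, 1)
        y = x_true + rng.normal(0, 1)
        # 1) Propagate every particle through the same nonlinear dynamics.
        particles = (0.5 * particles + 25 * particles / (1 + particles**2)
                     + rng.normal(0, 1, N))
        # 2) Weight by the Gaussian measurement likelihood.
        w = np.exp(-0.5 * (y - particles) ** 2)
        w /= w.sum()
        # 3) Resample to concentrate particles in high-likelihood regions.
        particles = rng.choice(particles, size=N, p=w)

    est = particles.mean()        # posterior-mean state estimate
    print(est, x_true)
    ```

    No linearization or Gaussian assumption appears anywhere: the particle cloud itself represents the (possibly multimodal) posterior, which is the property that makes SMC attractive for the breakup-tracking setting described above.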

  7. Process-based Assignment-Setting Change for Support of Overcoming Bottlenecks in Learning by Problem-Posing in Arithmetic Word Problems

    NASA Astrophysics Data System (ADS)

    Supianto, A. A.; Hayashi, Y.; Hirashima, T.

    2017-02-01

    Problem-posing is well known as an effective activity for learning problem-solving methods. Monsakun is an interactive problem-posing learning environment that facilitates learning of arithmetic word problems involving one operation of addition or subtraction. The characteristic of Monsakun is problem-posing as sentence integration, which has learners compose a problem from three sentences. Monsakun provides learners with five or six sentences, including dummies, carefully designed by an expert teacher as meaningful distractors that push learners to attend to the structure of arithmetic word problems. The results of practical use of Monsakun in elementary schools show that many learners have difficulty arranging the proper answer at the higher levels of assignments. Analysis of the problem-posing process of such learners found that their misconceptions about arithmetic word problems cause impasses in their thinking and mislead them into using dummies. This study proposes a method of changing assignments as a support for overcoming such bottlenecks of thinking. In Monsakun, a bottleneck is often detected as frequent, repeated use of a specific dummy. If such a dummy can be detected, it is the key to supporting learners in overcoming their difficulty. This paper discusses how to detect these bottlenecks and how to realize such support in learning by problem-posing.

  8. The Problems Posed and Models Employed by Primary School Teachers in Subtraction with Fractions

    ERIC Educational Resources Information Center

    Iskenderoglu, Tuba Aydogdu

    2017-01-01

    Students at almost all levels have difficulties in solving problems involving fractions, and in problem posing. Problem-posing skills influence the development of the behaviors observed at the level of comprehension. That is why it is crucial for teachers to develop activities that give students a conceptual comprehension of fractions and…

  9. Impact Hazard Monitoring: Theory and Implementation

    NASA Astrophysics Data System (ADS)

    Farnocchia, Davide

    2015-08-01

    Impact monitoring is a crucial component of the mitigation or elimination of the hazard posed by asteroid impacts. Once an asteroid is discovered, it is important to achieve an early detection and an accurate assessment of the risk posed by future Earth encounters. Here we review the most standard impact monitoring techniques. Linear methods are the fastest approach but their applicability regime is limited because of the chaotic dynamics of near-Earth asteroids, whose orbits are often scattered by planetary encounters. Among nonlinear methods, Monte Carlo algorithms are the most reliable ones. However, the large number of near-Earth asteroids and the computational load required to detect low probability impact events make Monte Carlo approaches impractical in the framework of monitoring all near-Earth asteroids. In the last 15 years, the Line of Variations (LOV) method has been the most successful technique as it strikes a remarkable compromise between computational efficiency and the capability of detecting low probability events deep in the nonlinear regime. As a matter of fact, the LOV method is the engine of JPL’s Sentry and University of Pisa’s NEODyS, which are the two fully automated impact monitoring systems that routinely search for potential impactors among known near-Earth asteroids. We also present some more recent techniques developed to deal with the new challenges arising in the impact hazard assessment problem. In particular, we describe how to use keyhole maps to go beyond strongly scattering encounters and push forward in time the impact prediction horizon. In these cases asteroids usually have a very well constrained orbit and we often need to account for the action of nongravitational perturbations, especially the Yarkovsky effect. Finally, we discuss the short-term hazard assessment problem for newly discovered asteroids, when only a short observed arc is available.
The limited amount of observational data generally leads to severe degeneracies in the orbit estimation process. We overcome these degeneracies by employing ranging techniques, which scan the poorly constrained space of topocentric range and range rate.
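
    The Monte Carlo approach the review calls reliable but computationally heavy reduces to sampling orbital states from the estimation covariance, propagating each sample to the encounter, and counting hits. The sketch below uses a hypothetical 2-D target plane and an identity "propagator" in place of a real orbit propagator and encounter geometry:

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_impact_probability(mean, cov, propagate, impact_radius, n_samples=50_000):
    """Monte Carlo impact probability: sample states from the orbit-determination
    covariance, map each to the encounter (target) plane, and count the fraction
    falling inside the impact cross-section."""
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    encounter = np.array([propagate(s) for s in samples])
    miss = np.linalg.norm(encounter, axis=1)  # miss distance from the target centre
    return float(np.mean(miss < impact_radius))

# hypothetical example: nominal miss distance of 2 units, unit uncertainty,
# impact radius 1, and a trivial linear "propagation"
mean = np.array([2.0, 0.0])
cov = np.eye(2)
p = mc_impact_probability(mean, cov, lambda s: s, impact_radius=1.0)
```

    The cost problem the review raises is visible here: resolving an impact probability of order 1e-6 requires on the order of 1e7 or more propagated samples, which is what makes the LOV compromise attractive.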

  10. Regularity of Solutions of the Nonlinear Sigma Model with Gravitino

    NASA Astrophysics Data System (ADS)

    Jost, Jürgen; Keßler, Enno; Tolksdorf, Jürgen; Wu, Ruijun; Zhu, Miaomiao

    2018-02-01

    We propose a geometric setup to study analytic aspects of a variant of the supersymmetric two-dimensional nonlinear sigma model. This functional extends the functional of Dirac-harmonic maps by gravitino fields. The system of Euler-Lagrange equations of the two-dimensional nonlinear sigma model with gravitino is calculated explicitly. The gravitino terms pose additional analytic difficulties in showing smoothness of weak solutions, which are overcome using Rivière's regularity theory and Riesz potential theory.

  11. Recursive recovery of Markov transition probabilities from boundary value data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patch, Sarah Kathyrn

    1994-04-01

    In an effort to mathematically describe the anisotropic diffusion of infrared radiation in biological tissue Gruenbaum posed an anisotropic diffusion boundary value problem in 1989. In order to accommodate anisotropy, he discretized the temporal as well as the spatial domain. The probabilistic interpretation of the diffusion equation is retained; radiation is assumed to travel according to a random walk (of sorts). In this random walk the probabilities with which photons change direction depend upon their previous as well as present location. The forward problem gives boundary value data as a function of the Markov transition probabilities. The inverse problem requires finding the transition probabilities from boundary value data. Problems in the plane are studied carefully in this thesis. Consistency conditions amongst the data are derived. These conditions have two effects: they prohibit inversion of the forward map but permit smoothing of noisy data. Next, a recursive algorithm which yields a family of solutions to the inverse problem is detailed. This algorithm takes advantage of all independent data and generates a system of highly nonlinear algebraic equations. Pluecker-Grassmann relations are instrumental in simplifying the equations. The algorithm is used to solve the 4 x 4 problem. Finally, the smallest nontrivial problem in three dimensions, the 2 x 2 x 2 problem, is solved.

  12. Problem-Posing Research in Mathematics Education: Looking Back, Looking Around, and Looking Ahead

    ERIC Educational Resources Information Center

    Silver, Edward A.

    2013-01-01

    In this paper, I comment on the set of papers in this special issue on mathematical problem posing. I offer some observations about the papers in relation to several key issues, and I suggest some productive directions for continued research inquiry on mathematical problem posing.

  13. Depression and decision-making capacity for treatment or research: a systematic review

    PubMed Central

    2013-01-01

    Background Psychiatric disorders can pose problems in the assessment of decision-making capacity (DMC). This is particularly so where psychopathology is seen as the extreme end of a dimension that includes normality. Depression is an example of such a psychiatric disorder. Four abilities (understanding, appreciating, reasoning and ability to express a choice) are commonly assessed when determining DMC in psychiatry, and uncertainty exists about the extent to which depression impacts capacity to make treatment or research participation decisions. Methods A systematic review of the medical-ethical and empirical literature concerning depression and DMC was conducted. Medline, EMBASE and PsycInfo databases were searched for studies of depression, consent, and DMC. Empirical studies and papers containing ethical analysis were extracted and analysed. Results 17 publications were identified. The clinical ethics studies highlighted appreciation of information as the ability that can be impaired in depression, indicating that emotional factors can impact on DMC. The empirical studies reporting decision-making ability scores also highlighted impairment of appreciation but without evidence of strong impact. Measurement problems, however, looked likely. The frequency of clinical judgements of lack of DMC in people with depression varied greatly according to acuity of illness and whether judgements were structured or unstructured. Conclusions Depression can impair DMC, especially if severe. Most evidence indicates appreciation as the ability primarily impaired by depressive illness. Understanding and measuring the appreciation ability in depression remains a problem in need of further research. PMID:24330745

  14. DAISY: a new software tool to test global identifiability of biological and physiological systems

    PubMed Central

    Bellu, Giuseppina; Saccomani, Maria Pia; Audoly, Stefania; D’Angiò, Leontina

    2009-01-01

    A priori global identifiability is a structural property of biological and physiological models. It is considered a prerequisite for well-posed estimation, since it concerns the possibility of recovering uniquely the unknown model parameters from measured input-output data, under ideal conditions (noise-free observations and error-free model structure). Of course, determining whether the parameters can be uniquely recovered from observed data is essential before investing resources, time and effort in performing actual biomedical experiments. Many interesting biological models are nonlinear, but identifiability analysis for nonlinear systems turns out to be a difficult mathematical problem. Different methods have been proposed in the literature to test identifiability of nonlinear models but, to the best of our knowledge, so far no software tools have been proposed for automatically checking identifiability of nonlinear models. In this paper, we describe a software tool implementing a differential algebra algorithm to perform parameter identifiability analysis for (linear and) nonlinear dynamic models described by polynomial or rational equations. Our goal is to provide the biological investigator with completely automated software, requiring minimal prior knowledge of mathematical modelling and no in-depth understanding of the mathematical tools. The DAISY (Differential Algebra for Identifiability of SYstems) software will potentially be useful in biological modelling studies, especially in physiology and clinical medicine, where research experiments are particularly expensive and/or difficult to perform. Practical examples of use of the software tool DAISY are presented. DAISY is available at the web site http://www.dei.unipd.it/~pia/. PMID:17707944
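
    What a priori non-identifiability means can be seen in a minimal textbook-style example, unrelated to DAISY's differential algebra machinery: in the toy model below only the sum k1 + k2 enters the input-output map, so no amount of noise-free data can separate the two rate constants:

```python
import numpy as np

def output(k1, k2, x0=1.0, ts=np.linspace(0.0, 5.0, 50)):
    """Noise-free output of the toy model  dx/dt = -(k1 + k2) x,  y = x,  x(0) = x0.
    Only the sum k1 + k2 enters the input-output behaviour."""
    return x0 * np.exp(-(k1 + k2) * ts)

# two different parameter pairs with the same sum produce identical data,
# so k1 and k2 are not globally identifiable -- only k1 + k2 is
y_a = output(0.3, 0.7)
y_b = output(0.9, 0.1)
```

    Detecting this structurally, before any experiment is run, is the symbolic check that DAISY automates for polynomial and rational models.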

  15. A Human Proximity Operations System test case validation approach

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity that is introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so that the HPOS can be shown to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.

  16. Chaos, creativity, and substance abuse: the nonlinear dynamics of choice.

    PubMed

    Zausner, Tobi

    2011-04-01

    Artists create their work in conditions of disequilibrium, states of creative chaos that may appear turbulent but are capable of bringing forth new order. By absorbing information from the environment and discharging it negentropically as new work, artists can be modeled as dissipative systems. A characteristic of chaotic systems is a heightened sensitivity to stimuli, which can generate either positive experiences or negative ones that can lead some artists to substance abuse and misguided searches for a creative chaos. Alcohol and drug use along with inadequately addressed co-occurring emotional disorders interfere with artists' quest for the nonlinearity of creativity. Instead, metaphorically modeled by a limit cycle of addiction and then a spiral to disorder, the joys of a creative chaos become an elusive chimera for them rather than a fulfilling experience. Untreated mental illness and addiction to substances have shortened the lives of artists such as Vincent Van Gogh, Frida Kahlo, Henri de Toulouse-Lautrec, and Jackson Pollock, all of whom committed suicide. In contrast Edvard Munch and John Callahan, who chose to address their emotional problems and substance abuse, continued to live and remain creative. Choosing to access previously avoided moments of pain can activate the nonlinear power of self-transformation.

  17. Stochastic hybrid delay population dynamics: well-posed models and extinction.

    PubMed

    Yuan, Chenggui; Mao, Xuerong; Lygeros, John

    2009-01-01

    Nonlinear differential equations have been used for decades for studying fluctuations in the populations of species, interactions of species with the environment, and competition and symbiosis between species. Over the years, the original non-linear models have been embellished with delay terms, stochastic terms and more recently discrete dynamics. In this paper, we investigate stochastic hybrid delay population dynamics (SHDPD), a very general class of population dynamics that comprises all of these phenomena. For this class of systems, we provide sufficient conditions to ensure that SHDPD have global positive, ultimately bounded solutions, a minimum requirement for a realistic, well-posed model. We then study the question of extinction and establish conditions under which an ecosystem modelled by SHDPD is doomed.
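
    A minimal member of this model class, a scalar stochastic delay logistic equation, can be simulated with an Euler-Maruyama scheme to observe numerically the positivity and boundedness the paper establishes analytically. All parameters are illustrative, and the zero clamp is a discretization safeguard rather than part of the continuous model:

```python
import numpy as np

rng = np.random.default_rng(7)

def stochastic_delay_logistic(a=1.0, b=1.0, sigma=0.2, tau=1.0,
                              T=50.0, dt=0.01, x0=0.5):
    """Euler-Maruyama path of  dx = x(t)(a - b*x(t - tau)) dt + sigma*x(t) dW,
    a scalar instance of stochastic delay population dynamics."""
    n = int(T / dt)
    lag = int(tau / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        delayed = x[k - lag] if k >= lag else x0   # constant initial history
        drift = x[k] * (a - b * delayed)
        x[k + 1] = x[k] + drift * dt + sigma * x[k] * np.sqrt(dt) * rng.normal()
        x[k + 1] = max(x[k + 1], 0.0)  # guard against discretization undershoot
    return x

path = stochastic_delay_logistic()
```

    The simulated population stays positive and settles near the carrying capacity a/b with noise-driven fluctuations, consistent with a well-posed model; the extinction results concern parameter regimes where such paths are instead driven to zero.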

  18. Challenges of caring for children with mental disorders: Experiences and views of caregivers attending the outpatient clinic at Muhimbili National Hospital, Dar es Salaam - Tanzania

    PubMed Central

    2012-01-01

    Background It is estimated that world-wide up to 20% of children suffer from debilitating mental illness. Mental disorders that pose a significant concern include learning disorders, hyperkinetic disorders (ADHD), depression, psychosis, pervasive development disorders, attachment disorders, anxiety disorders, conduct disorder, substance abuse and eating disorders. Living with such children can be very stressful for caregivers in the family. Therefore, determining the challenges of living with these children is important in the process of finding ways to help or support caregivers in providing proper care for their children. The purpose of this study was to explore the psychological and emotional, social, and economic challenges that parents or guardians experience when caring for mentally ill children and what they do to address or deal with them. Methodology A qualitative study design using in-depth interviews and focus group discussions was applied. The study was conducted at the psychiatric unit of Muhimbili National Hospital in Tanzania. Two focus group discussions (FGDs) and 8 in-depth interviews were conducted with caregivers who attended the psychiatric clinic with their children. Data analysis was done using content analysis. Results The study revealed psychological and emotional, social, and economic challenges caregivers endure while living with mentally ill children. Psychological and emotional challenges included being stressed by caring tasks and having worries about the present and future life of their children. They had feelings of sadness, and inner pain or bitterness due to the disturbing behaviour of the children. They also experienced some communication problems with their children due to their inability to talk. Social challenges were inadequate social services for their children, stigma, the burden of caregiving tasks, lack of public awareness of mental illness, lack of social support, and problems with social life. 
The economic challenges were poverty, child care interfering with various income generating activities in the family, and extra expenses associated with the child’s illness. Conclusion Caregivers of mentally ill children experience various psychological and emotional, social, and economic challenges. Professional assistance, public awareness of mental illnesses in children, social support by the government, private sector, and non-governmental organizations (NGOs) are important in addressing these challenges. PMID:22559084

  19. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

    Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
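
    The decoupling pattern described, a Tikhonov-like linear solve alternating with a total-variation step handled by split-Bregman, can be illustrated on the simplest TV-regularized problem, 1-D denoising. This is a toy sketch under assumed parameters (lam, mu), not the authors' 3-D tomography implementation:

```python
import numpy as np

def shrink(v, t):
    # soft-thresholding, the closed-form solution of the d-subproblem
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_split_bregman(f, lam=0.5, mu=2.0, n_iter=100):
    """1-D total-variation denoising  min_u 0.5||u - f||^2 + lam*||Du||_1
    via split-Bregman: the u-step is a Tikhonov-like linear solve, the d-step
    is soft-thresholding, and b accumulates the Bregman (constraint) error."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator
    A = np.eye(n) + mu * D.T @ D          # system matrix of the u-step
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))
        Du = D @ u
        d = shrink(Du + b, lam / mu)
        b = b + Du - d
    return u

# noisy step signal: TV keeps the sharp edge that Tikhonov smoothing would blur
f = np.concatenate([np.zeros(50), np.ones(50)])
f = f + 0.1 * np.random.default_rng(1).normal(size=100)
u = tv_denoise_split_bregman(f, lam=0.5)
```

    The recovered signal is flat on each side of the step yet preserves the jump, which is the edge-preserving behaviour motivating MTV regularization over plain Tikhonov.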

  20. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.

  1. On decoupling of volatility smile and term structure in inverse option pricing

    NASA Astrophysics Data System (ADS)

    Egger, Herbert; Hein, Torsten; Hofmann, Bernd

    2006-08-01

    Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oleinikov, A. I., E-mail: a.i.oleinikov@mail.ru; Bormotin, K. S., E-mail: cvmi@knastu.ru

    It is shown that inverse problems of steady-state creep bending of plates in both the geometrically linear and nonlinear formulations can be represented in a variational formulation. Steady-state values of the obtained functionals corresponding to the solutions of the problems of inelastic deformation and springback are determined by applying a finite element procedure to the functionals. Optimal laws of creep deformation are formulated using the criterion of minimizing damage in the functionals of the inverse problems. The formulated problems are reduced to problems solved by the finite element method using MSC.Marc software. Currently, forming of light metals poses tremendous challenges due to their low ductility at room temperature and their unusual deformation characteristics in hot-cold working: strong asymmetry between tensile and compressive behavior, and a very pronounced anisotropy. We used constitutive models of steady-state creep for initially transversely isotropic structural materials in which the kind of stress state has an influence. The paper gives basics of the developed computer-aided system of design, modeling, and electronic simulation targeting the processes of manufacture of wing integral panels. The modeling results can be used to calculate the die tooling, determine the panel processibility, and control panel rejection in the course of forming.

  3. An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups

    ERIC Educational Resources Information Center

    Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi

    2012-01-01

    The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…

  4. Problem Posing at All Levels in the Calculus Classroom

    ERIC Educational Resources Information Center

    Perrin, John Robert

    2007-01-01

    This article explores the use of problem posing in the calculus classroom using investigative projects. Specifically, four examples of student work are examined, each differing in the originality of the problem posed. By allowing students to explore actual questions that they have about calculus, coming from their own work or class discussion, or…

  5. Critical Inquiry across the Disciplines: Strategies for Student-Generated Problem Posing

    ERIC Educational Resources Information Center

    Nardone, Carroll Ferguson; Lee, Renee Gravois

    2011-01-01

    Problem posing is a higher-order, active-learning task that is important for students to develop. This article describes a series of interdisciplinary learning activities designed to help students strengthen their problem-posing skills, which requires that students become more responsible for their learning and that faculty move to a facilitator…

  6. Developing Teachers' Subject Didactic Competence through Problem Posing

    ERIC Educational Resources Information Center

    Ticha, Marie; Hospesova, Alena

    2013-01-01

    Problem posing (not only in lesson planning but also directly in teaching whenever needed) is one of the attributes of a teacher's subject didactic competence. In this paper, problem posing in teacher education is understood as an educational and a diagnostic tool. The results of the study were gained in pre-service primary school teacher…

  7. The Impact of Problem Posing on Elementary Teachers' Beliefs about Mathematics and Mathematics Teaching

    ERIC Educational Resources Information Center

    Barlow, Angela T.; Cates, Janie M.

    2006-01-01

    This study investigated the impact of incorporating problem posing in elementary classrooms on the beliefs held by elementary teachers about mathematics and mathematics teaching. Teachers participated in a year-long staff development project aimed at facilitating the incorporation of problem posing into their classrooms. Beliefs were examined via…

  8. The Posing of Arithmetic Problems by Mathematically Talented Students

    ERIC Educational Resources Information Center

    Espinoza González, Johan; Lupiáñez Gómez, José Luis; Segovia Alex, Isidoro

    2016-01-01

    Introduction: This paper analyzes the arithmetic problems posed by a group of mathematically talented students when given two problem-posing tasks, and compares these students' responses to those given by a standard group of public school students to the same tasks. Our analysis focuses on characterizing and identifying the differences between the…

  9. Posing Problems to Understand Children's Learning of Fractions

    ERIC Educational Resources Information Center

    Cheng, Lu Pien

    2013-01-01

    In this study, ways in which problem posing activities aid our understanding of children's learning of addition of unlike fractions and product of proper fractions was examined. In particular, how a simple problem posing activity helps teachers take a second, deeper look at children's understanding of fraction concepts will be discussed. The…

  10. Development of the Structured Problem Posing Skills and Using Metaphoric Perceptions

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Unal, Hasan

    2014-01-01

    The purpose of this study was to introduce problem posing activity to third grade students who have never met before. This study was also explored students' metaphorical images on problem posing process. Participants were from Public school in Marmara Region in Turkey. Data was analyzed both qualitatively (content analysis for difficulty and…

  11. Integrating Worked Examples into Problem Posing in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Hsiao, Ju-Yuan; Hung, Chun-Ling; Lan, Yu-Feng; Jeng, Yoau-Chau

    2013-01-01

    Most students lack experience with problem posing and perceive it as difficult. The study hypothesized that worked examples may support students' problem-posing activities. A quasi-experiment was conducted in the context of a business mathematics course to examine the effects of integrating worked examples into…

  12. A New Control Paradigm for Stochastic Differential Equations

    NASA Astrophysics Data System (ADS)

    Schmid, Matthias J. A.

    This study presents a novel comprehensive approach to the control of dynamic systems under uncertainty governed by stochastic differential equations (SDEs). Large Deviations (LD) techniques are employed to arrive at a control law for a large class of nonlinear systems minimizing sample path deviations. Thereby, a paradigm shift is suggested from point-in-time to sample path statistics on function spaces. A suitable formal control framework which leverages embedded Freidlin-Wentzell theory is proposed and described in detail. This includes the precise definition of the control objective and comprises an accurate discussion of the adaptation of the Freidlin-Wentzell theorem to the particular situation. The new control design is enabled by the transformation of an ill-posed control objective into a well-conditioned sequential optimization problem. A direct numerical solution process is presented using quadratic programming, but the emphasis is on the development of a closed-form expression reflecting the asymptotic deviation probability of a particular nominal path. This is identified as the key factor in the success of the new paradigm. An approach employing the second variation and the differential curvature of the effective action is suggested for small deviation channels leading to the Jacobi field of the rate function and the subsequently introduced Jacobi field performance measure. This closed-form solution is utilized in combination with the supplied parametrization of the objective space. For the first time, this allows for an LD based control design applicable to a large class of nonlinear systems. Thus, Minimum Large Deviations (MLD) control is effectively established in a comprehensive structured framework. The construction of the new paradigm is completed by an optimality proof for the Jacobi field performance measure, an interpretive discussion, and a suggestion for efficient implementation. 
The potential of the new approach is exhibited by its extension to scalar systems subject to state-dependent noise and to systems of higher order. The suggested control paradigm is further advanced when a sequential application of MLD control is considered. This technique yields a nominal path corresponding to the minimum total deviation probability on the entire time domain. It is demonstrated that this sequential optimization concept can be unified in a single objective function which is revealed to be the Jacobi field performance index on the entire domain subject to an endpoint deviation. The emerging closed-form term replaces the previously required nested optimization and, thus, results in a highly efficient application-ready control design. This effectively substantiates Minimum Path Deviation (MPD) control. The proposed control paradigm allows the specific problem of stochastic cost control to be addressed as a special case. This new technique is employed within this study for the stochastic cost problem giving rise to Cost Constrained MPD (CCMPD) as well as to Minimum Quadratic Cost Deviation (MQCD) control. An exemplary treatment of a generic scalar nonlinear system subject to quadratic costs is performed for MQCD control to demonstrate the elementary expandability of the new control paradigm. This work concludes with a numerical evaluation of both MPD and CCMPD control for three exemplary benchmark problems. Numerical issues associated with the simulation of SDEs are briefly discussed and illustrated. The numerical examples furnish proof of the successful design. This study is complemented by a thorough review of statistical control methods, stochastic processes, Large Deviations techniques and the Freidlin-Wentzell theory, providing a comprehensive, self-contained account. The presentation of the mathematical tools and concepts is of a unique character, specifically addressing an engineering audience.

  13. Application of infrared thermography in computer aided diagnosis

    NASA Astrophysics Data System (ADS)

    Faust, Oliver; Rajendra Acharya, U.; Ng, E. Y. K.; Hong, Tan Jen; Yu, Wenwei

    2014-09-01

    The invention of thermography, in the 1950s, posed a formidable problem to the research community: what is the relationship between disease and the heat radiation captured with Infrared (IR) cameras? The research community responded with a continuous effort to find this crucial relationship, aided by advances in processing techniques and by the improved sensitivity and spatial resolution of thermal sensors. Despite this progress, however, fundamental issues with this imaging modality remain. The main problem is that the link between disease and heat radiation is complex and in many cases even non-linear. Furthermore, the changes in heat radiation and in radiation pattern that indicate disease are minute. On a technical level, this places high demands on image capturing and processing. On a more abstract level, these problems lead to inter-observer variability, and at a still more abstract level they lead to a lack of trust in this imaging modality. In this review, we adopt the position that these problems can only be solved through a strict application of scientific principles and objective performance assessment. Computing machinery is inherently objective; this helps us to apply scientific principles in a transparent way and to assess the performance results. As a consequence, we aim to promote thermography-based Computer-Aided Diagnosis (CAD) systems. Another benefit of CAD systems comes from the fact that diagnostic accuracy is linked to the capability of the computing machinery and, in general, computers become ever more potent. We predict that a pervasive application of computers and networking technology in medicine will help us to overcome the shortcomings of any single imaging modality and will pave the way for integrated health care systems that maximize the quality of patient care.

  14. A hybrid credibility-based fuzzy multiple objective optimisation to differential pricing and inventory policies with arbitrage consideration

    NASA Astrophysics Data System (ADS)

    Ghasemy Yaghin, R.; Fatemi Ghomi, S. M. T.; Torabi, S. A.

    2015-10-01

    In most markets, price differentiation mechanisms enable manufacturers to offer different prices for their products or services in different customer segments; however, perfect price discrimination is usually impossible for manufacturers. The importance of accounting for uncertainty in such environments spurs an interest in developing appropriate decision-making tools to deal with uncertain and ill-defined parameters in joint pricing and lot-sizing problems. This paper proposes a hybrid bi-objective credibility-based fuzzy optimisation model, including both quantitative and qualitative objectives, to cope with these issues. Taking marketing and lot-sizing decisions into account simultaneously, the model aims to maximise the total profit of the manufacturer and to improve the service aspects of retailing while setting different prices with arbitrage consideration. After applying appropriate strategies to defuzzify the original model, the resulting non-linear multi-objective crisp model is solved by a fuzzy goal programming method. An efficient stochastic search procedure using particle swarm optimisation is also proposed to solve the non-linear crisp model.

  15. Giving Voice to Study Volunteers: Comparing views of mentally ill, physically ill, and healthy protocol participants on ethical aspects of clinical research

    PubMed Central

    Roberts, Laura Weiss; Kim, Jane Paik

    2014-01-01

    Motivation: Ethical controversy surrounds clinical research involving seriously ill participants. While many stakeholders have opinions, the extent to which protocol volunteers themselves see human research as ethically acceptable has not been documented. To address this knowledge gap, the authors sought to assess the views of healthy and ill clinical research volunteers regarding the ethical acceptability of human studies involving individuals who are ill or potentially vulnerable. Methods: Surveys and semi-structured interviews were used to query clinical research protocol participants and a comparison group of healthy individuals. A total of 179 respondents participated in this study: 150 in protocols (60 mentally ill, 43 physically ill, and 47 healthy clinical research protocol participants) and 29 healthy individuals not enrolled in protocols. Main outcome measures included responses regarding the ethical acceptability of clinical research when it presents significant burdens and risks, involves people with serious mental and physical illness, or enrolls people with other potential vulnerabilities in the research situation. Results: Respondents expressed decreasing levels of acceptance of participation in research that posed burdens of increasing severity. Participation in protocols with possibly life-threatening consequences was perceived as least acceptable (mean = 1.82, sd = 1.29). Research on serious illnesses, including HIV, cancer, schizophrenia, depression, and post-traumatic stress disorder, was seen as ethically acceptable across respondent groups (range of means = [4.0, 4.7]). Mentally ill volunteers rated the ethical acceptability of physical illness research and mental illness research as acceptable and similar, while physically ill volunteers expressed greater ethical acceptability for physical illness research than for mental illness research. Mentally ill, physically ill, and healthy participants expressed neutral to favorable perspectives regarding the ethical acceptability of clinical research participation by potentially vulnerable subpopulations (difference in acceptability perceived by mentally ill − healthy = −0.04, CI [−0.46, 0.39]; physically ill − healthy = −0.13, CI [−0.62, −0.36]). Conclusions: Clinical research volunteers and healthy, clinical research-"naive" individuals view studies involving ill people as ethically acceptable, and their responses reflect concern regarding research that poses considerable burdens and risks and research involving vulnerable subpopulations. Physically ill research volunteers may be more willing to see burdensome and risky research as acceptable. Mentally ill research volunteers and healthy individuals expressed similar perspectives in this study, helping to dispel a misconception that those with mental illness should be presumed to hold disparate views. PMID:24931849

  16. Giving voice to study volunteers: comparing views of mentally ill, physically ill, and healthy protocol participants on ethical aspects of clinical research.

    PubMed

    Roberts, Laura Weiss; Kim, Jane Paik

    2014-09-01

    Ethical controversy surrounds clinical research involving seriously ill participants. While many stakeholders have opinions, the extent to which protocol volunteers themselves see human research as ethically acceptable has not been documented. To address this knowledge gap, the authors sought to assess the views of healthy and ill clinical research volunteers regarding the ethical acceptability of human studies involving individuals who are ill or potentially vulnerable. Surveys and semi-structured interviews were used to query clinical research protocol participants and a comparison group of healthy individuals. A total of 179 respondents participated in this study: 150 in protocols (60 mentally ill, 43 physically ill, and 47 healthy clinical research protocol participants) and 29 healthy individuals not enrolled in protocols. Main outcome measures included responses regarding the ethical acceptability of clinical research when it presents significant burdens and risks, involves people with serious mental and physical illness, or enrolls people with other potential vulnerabilities in the research situation. Respondents expressed decreasing levels of acceptance of participation in research that posed burdens of increasing severity. Participation in protocols with possibly life-threatening consequences was perceived as least acceptable (mean = 1.82, sd = 1.29). Research on serious illnesses, including HIV, cancer, schizophrenia, depression, and post-traumatic stress disorder, was seen as ethically acceptable across respondent groups (range of means = [4.0, 4.7]). Mentally ill volunteers rated the ethical acceptability of physical illness research and mental illness research as acceptable and similar, while physically ill volunteers expressed greater ethical acceptability for physical illness research than for mental illness research. Mentally ill, physically ill, and healthy participants expressed neutral to favorable perspectives regarding the ethical acceptability of clinical research participation by potentially vulnerable subpopulations (difference in acceptability perceived by mentally ill - healthy = -0.04, CI [-0.46, 0.39]; physically ill - healthy = -0.13, CI [-0.62, -0.36]). Clinical research volunteers and healthy, clinical research-"naïve" individuals view studies involving ill people as ethically acceptable, and their responses reflect concern regarding research that poses considerable burdens and risks and research involving vulnerable subpopulations. Physically ill research volunteers may be more willing to see burdensome and risky research as acceptable. Mentally ill research volunteers and healthy individuals expressed similar perspectives in this study, helping to dispel a misconception that those with mental illness should be presumed to hold disparate views. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays

    PubMed Central

    Hughes, Alec; Hynynen, Kullervo

    2016-01-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323
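As an aside, the trade-off controlled by the Tikhonov parameter in this kind of array-focusing problem can be sketched in a few lines. The forward matrix, target pattern, and parameter values below are illustrative assumptions, not the authors' acoustic model: a larger regularization parameter lowers the element drive power (improving array efficiency) at the cost of focusing fidelity.

```python
import numpy as np

# Minimal Tikhonov-regularized focusing sketch (illustrative only). A maps
# complex element drives to pressures at control points; b is the desired
# focal pressure pattern (unit pressure at the focus, zero elsewhere).
rng = np.random.default_rng(0)
n_elements, n_points = 64, 16
A = rng.standard_normal((n_points, n_elements)) \
    + 1j * rng.standard_normal((n_points, n_elements))
b = np.zeros(n_points, dtype=complex)
b[0] = 1.0

def tikhonov_drives(A, b, lam):
    """Solve min ||A p - b||^2 + lam ||p||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

for lam in (1e-6, 1e-2, 1.0):
    p = tikhonov_drives(A, b, lam)
    residual = np.linalg.norm(A @ p - b)
    print(f"lam={lam:g}  fit error={residual:.3f}  drive power={np.linalg.norm(p):.3f}")
```

Sweeping `lam` makes the balance explicit: the fit error grows while the drive power shrinks, which is the efficiency/quality trade-off the abstract describes.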

  18. A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.

    PubMed

    Hughes, Alec; Hynynen, Kullervo

    2016-12-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.

  19. Applications of Electrical Impedance Tomography (EIT): A Short Review

    NASA Astrophysics Data System (ADS)

    Kanti Bera, Tushar

    2018-03-01

    Electrical Impedance Tomography (EIT) is a tomographic imaging method that solves an ill-posed inverse problem using boundary voltage-current data collected from the surface of the object under test. Though its spatial resolution is low compared to conventional tomographic imaging modalities, EIT offers several advantages and has therefore been studied for a number of applications, such as medical imaging, material engineering, civil engineering, biotechnology, chemical engineering, MEMS, and other fields of engineering and applied sciences. In this paper, the applications of EIT are reviewed and presented as a short summary. The working principle, instrumentation, and advantages are briefly discussed, followed by a detailed discussion of the applications of EIT technology in different areas of engineering, technology, and applied sciences.

  20. [Prevalence of patients with HIV infection in an emergency department].

    PubMed

    Greco, G M; Paparo, R; Ventura, R; Migliardi, C; Tallone, R; Moccia, F

    1995-01-01

    The activity at an ED, primarily aimed at providing rational and qualified support to critically ill patients, must manage very different nosographic entities, including infectious, often contagious, pathologies. In this context the diffusion of HIV infection poses a number of problems concerning both the kind of patients presenting to the ED and the professional risk of health-care workers. In the first four months of 1992, the incidence of patients with recognized or presumed HIV infection at the "Pronto Soccorso Medico" was 1.78% of the 2327 patients admitted. This study aims to contribute to the epidemiologic definition of the risk of HIV infection due to occupational exposure, stressing the peculiar conditions of urgency-emergency that often characterize activity within the ED.

  1. Two approaches to the care of an elder parent: a study of Robert Anderson's I Never Sang for My Father and Sawako Ariyoshi's Kokotsu no hito [The Twilight Years].

    PubMed

    Donow, H S

    1990-08-01

    Care of an elder parent is often regarded by the children as an unwanted burden. Anderson's 1968 play, I Never Sang for My Father, and Ariyoshi's 1972 novel, Kokotsu no hito [The Twilight Years], show how two families from two different cultures (American and Japanese) respond to this crisis. The two texts reach dramatically different conclusions: in one, the children, Gene and Alice, prove unwilling or unable to cope with the problems posed by their father's need; in the other, Akiko, though nearly overwhelmed by the burden of her father-in-law's illness, emerges richer for the experience.

  2. Improving chemical species tomography of turbulent flows using covariance estimation.

    PubMed

    Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J

    2017-05-01

    Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of this additional information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
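The joint-normal machinery this record refers to can be illustrated with a small maximum a posteriori (MAP) reconstruction. The 1-D geometry, covariance kernel, and noise level below are assumptions for the sketch, not taken from the paper; the point is only that a spatially correlated prior covariance regularizes a limited-data inversion.

```python
import numpy as np

# Sketch of a Bayesian (MAP) reconstruction with a joint-normal prior, as in
# limited-data tomography: measurement model b = A x + noise, prior x ~ N(mu, Cx).
rng = np.random.default_rng(1)
n = 32                                       # pixels along a 1-D profile
xs = np.linspace(0.0, 1.0, n)
x_true = np.exp(-((xs - 0.4) / 0.1) ** 2)    # smooth concentration profile

A = rng.uniform(size=(8, n))                 # 8 path-integral measurements (limited data)
noise_sd = 0.05
b = A @ x_true + noise_sd * rng.standard_normal(8)

# Spatially correlated prior covariance: nearby pixels vary together.
d = np.abs(xs[:, None] - xs[None, :])
Cx = np.exp(-d / 0.1)
mu = np.full(n, x_true.mean())

# MAP estimate: mu + (A^T Ce^-1 A + Cx^-1)^-1 A^T Ce^-1 (b - A mu)
Ce_inv = np.eye(8) / noise_sd**2
H = A.T @ Ce_inv @ A + np.linalg.inv(Cx)
x_map = mu + np.linalg.solve(H, A.T @ Ce_inv @ (b - A @ mu))

print("relative error:", np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))
```

With only 8 measurements for 32 unknowns the problem is badly underdetermined, but the correlated prior pulls the estimate toward smooth profiles, which is the mechanism behind the accuracy gains the abstract reports.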

  3. Locating an atmospheric contamination source using slow manifolds

    NASA Astrophysics Data System (ADS)

    Tang, Wenbo; Haller, George; Baik, Jong-Jin; Ryu, Young-Hee

    2009-04-01

    Finite-size particle motion in fluids obeys the Maxey-Riley equations, which become singular in the limit of infinitesimally small particle size. Because of this singularity, finding the source of a dispersed set of small particles is a numerically ill-posed problem that leads to exponential blowup. Here we use recent results on the existence of a slow manifold in the Maxey-Riley equations to overcome this difficulty in source inversion. Specifically, we locate the source of particles by projecting their dispersed positions on a time-varying slow manifold, and by advecting them on the manifold in backward time. We use this technique to locate the source of a hypothetical anthrax release in an unsteady three-dimensional atmospheric wind field in an urban street canyon.
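The backward-march idea can be illustrated on a fluid-tracer analogue of the slow-manifold reduction: once particle velocities are slaved to a smooth velocity field, positions can be integrated backward in time without the exponential blowup of the full Maxey-Riley equations. The velocity field (a steady solid-body vortex) and all parameters below are assumptions for the demo, not the paper's atmospheric model.

```python
import numpy as np

def u(x):
    """Steady solid-body-rotation velocity field about the origin."""
    return np.stack([-x[..., 1], x[..., 0]], axis=-1)

def rk4_advect(x0, dt, n_steps, sign=+1.0):
    """Advect positions through u with classic RK4; sign=-1 marches backward."""
    x = x0.copy()
    for _ in range(n_steps):
        k1 = sign * u(x)
        k2 = sign * u(x + 0.5 * dt * k1)
        k3 = sign * u(x + 0.5 * dt * k2)
        k4 = sign * u(x + dt * k3)
        x = x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

rng = np.random.default_rng(3)
source = np.array([1.0, 0.0])
cloud0 = source + 0.01 * rng.standard_normal((50, 2))   # release near the source
cloud_T = rk4_advect(cloud0, dt=0.01, n_steps=200)      # observed cloud at time T
recovered = rk4_advect(cloud_T, dt=0.01, n_steps=200, sign=-1.0)
print("estimated source:", recovered.mean(axis=0))
```

Marching the observed positions backward returns them to the release point, and their mean serves as the source estimate; the paper's contribution is showing that the reduced (slow-manifold) dynamics make this backward step numerically well behaved for inertial particles.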

  4. Developing Pre-Service Teachers Understanding of Fractions through Problem Posing

    ERIC Educational Resources Information Center

    Toluk-Ucar, Zulbiye

    2009-01-01

    This study investigated the effect of problem posing on the understanding of fraction concepts among pre-service primary teachers enrolled in two different versions of a methods course at a university in Turkey. In the experimental version, problem posing was used as a teaching strategy. At the beginning of the study, the pre-service teachers'…

  5. The Effects of Problem Posing on Student Mathematical Learning: A Meta-Analysis

    ERIC Educational Resources Information Center

    Rosli, Roslinda; Capraro, Mary Margaret; Capraro, Robert M.

    2014-01-01

    The purpose of the study was to meta-synthesize research findings on the effectiveness of problem posing and to investigate the factors that might affect the incorporation of problem posing in the teaching and learning of mathematics. The eligibility criteria for inclusion of literature in the meta-analysis were: published between 1989 and 2011,…

  6. Teachers Implementing Mathematical Problem Posing in the Classroom: Challenges and Strategies

    ERIC Educational Resources Information Center

    Leung, Shuk-kwan S.

    2013-01-01

    This paper reports a study about how a teacher educator shared knowledge with teachers when they worked together to implement mathematical problem posing (MPP) in the classroom. It includes feasible methods for getting practitioners to use research-based tasks aligned to the curriculum in order to encourage children to pose mathematical problems.…

  7. Problem-Posing in Education: Transformation of the Practice of the Health Professional.

    ERIC Educational Resources Information Center

    Casagrande, L. D. R.; Caron-Ruffino, M.; Rodrigues, R. A. P.; Vendrusculo, D. M. S.; Takayanagui, A. M. M.; Zago, M. M. F.; Mendes, M. D.

    1998-01-01

    Studied the use of a problem-posing model in health education. The model based on the ideas of Paulo Freire is presented. Four innovative experiences of teaching-learning in environmental and occupational health and patient education are reported. Notes that the problem-posing model has the capability to transform health-education practice.…

  8. Prospective Middle School Mathematics Teachers' Knowledge of Linear Graphs in Context of Problem-Posing

    ERIC Educational Resources Information Center

    Kar, Tugrul

    2016-01-01

    This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…

  9. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-12-01

    As an advanced measurement technique that is non-radiant, non-intrusive, rapid in response, and low in cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm owing to its dynamic imaging process, real-time response, and easy realization. However, the LBP algorithm suffers from low spatial resolution due to the inherent ‘soft field’ effect and the ‘ill-posed solution’ problem, which greatly limits its applicable range. In this paper, an original data decomposition method is proposed: each ET measurement is decomposed into two independent new data values based on the positive and negative sensing areas of the measurement. Consequently, the total number of measurements is doubled, effectively reducing the ‘ill-posed solution’ problem. In addition, an index to quantify the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect in the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improved spatial resolution.
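A minimal sketch of classic LBP together with the positive/negative sensing-area split can make the decomposition idea concrete. The sensitivity matrix and data here are random illustrative values; in a real ET system the decomposed data would be derived from the measured data and the known sensing areas, not from the true image as done here for the demo.

```python
import numpy as np

# Generic linear back projection (LBP) with a positive/negative sensitivity
# split (values are illustrative, not from a physical ET sensor model).
rng = np.random.default_rng(2)
n_meas, n_pix = 12, 36
S = rng.standard_normal((n_meas, n_pix))    # sensitivity matrix (both signs)
x_true = rng.uniform(size=n_pix)
b = S @ x_true                              # measured data

def lbp(S, b):
    """Classic LBP: back-project data through the sensitivity map, normalized."""
    return (S.T @ b) / np.abs(S).sum(axis=0)

# Decomposition: each measurement b_i = S_i+ x + S_i- x is split into the parts
# contributed by the positive and negative sensing areas, doubling the data set
# and removing the cross terms that mix the two areas in plain LBP.
S_pos, S_neg = np.clip(S, 0, None), np.clip(S, None, 0)
b_pos, b_neg = S_pos @ x_true, S_neg @ x_true
S2 = np.vstack([S_pos, S_neg])
b2 = np.concatenate([b_pos, b_neg])

x_lbp = lbp(S, b)       # reconstruction from the original data
x_dec = lbp(S2, b2)     # reconstruction from the decomposed data
```

Note that `S2.T @ b2` contains only the within-area products, whereas `S.T @ b` also contains cross products between positive and negative sensing areas; discarding those cross terms is what lets the decomposed data separate the contributions of different pixels.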

  10. An in-home video study and questionnaire survey of food preparation, kitchen sanitation, and hand washing practices.

    PubMed

    Scott, Elizabeth; Herbold, Nancie

    2010-06-01

    Foodborne illnesses pose a problem to all individuals but are especially significant for infants, the elderly, and individuals with compromised immune systems. Personal hygiene is recognized as the number-one way people can lower their risk. The majority of meals in the U.S. are eaten at home. Little is known, however, about the actual application of personal hygiene and sanitation behaviors in the home. The study discussed in this article assessed knowledge of hygiene practices compared to observed behaviors and determined whether knowledge equated to practice. It was a descriptive study involving a convenience sample of 30 households. Subjects were recruited from the Boston area and a researcher and/or a research assistant traveled to the homes of study participants to videotape a standard food preparation procedure preceded by floor mopping. The results highlight the differences between individuals' reported beliefs and actual practice. This information can aid food safety and other health professionals in targeting food safety education so that consumers understand their own critical role in decreasing their risk for foodborne illness.

  11. Under Control

    PubMed Central

    Payne, John

    1971-01-01

    The new film of David Mercer's Family Life poses some hard questions for psychiatry to answer and puts the Laingian case for 'schizophrenia' being an illness created within the family unit. PMID:27670980

  12. Mighty Mathematicians: Using Problem Posing and Problem Solving to Develop Mathematical Power

    ERIC Educational Resources Information Center

    McGatha, Maggie B.; Sheffield, Linda J.

    2006-01-01

    This article describes a year-long professional development institute combined with a summer camp for students. Both were designed to help teachers and students develop their problem-solving and problem-posing abilities.

  13. Adaptive Jacobian Fuzzy Attitude Control for Flexible Spacecraft Combined Attitude and Sun Tracking System

    NASA Astrophysics Data System (ADS)

    Chak, Yew-Chung; Varatharajoo, Renuganth

    2016-07-01

    Many spacecraft attitude control systems today use reaction wheels to deliver precise torques to achieve three-axis attitude stabilization. However, irrecoverable mechanical failure of reaction wheels could potentially lead to mission interruption or total loss. The electrically powered Solar Array Drive Assemblies (SADA), usually installed along the pitch axis to rotate the solar arrays to track the Sun, can produce torques to compensate for a pitch-axis wheel failure. In addition, the attitude control of a flexible spacecraft poses a difficult problem. The difficulties include the strongly nonlinear coupled dynamics between the rigid hub and the flexible solar arrays, and imprecisely known system parameters such as the inertia matrix, damping ratios, and flexible mode frequencies. To overcome these drawbacks, adaptive Jacobian tracking fuzzy control is proposed in this work for the combined attitude and sun-tracking control problem of a flexible spacecraft during attitude maneuvers. For the adaptation of kinematic and dynamic uncertainties, the proposed scheme uses an adaptive sliding vector based on the estimated attitude velocity via an approximate Jacobian matrix. The unknown nonlinearities are approximated by deriving fuzzy models with a set of linguistic If-Then rules, using the idea of sector nonlinearity and local approximation in fuzzy partition spaces. The uncertain parameters of the estimated nonlinearities and the Jacobian matrix are adjusted online by an adaptive law to realize feedback control. The attitude of the spacecraft can be directly controlled with the Jacobian feedback control when the attitude pointing trajectory is designed with respect to the spacecraft coordinate frame itself. 
A significant feature of this work is that the proposed adaptive Jacobian tracking scheme will result in not only the convergence of angular position and angular velocity tracking errors, but also the convergence of estimated angular velocity to the actual angular velocity. Numerical results are presented to demonstrate the effectiveness of the proposed scheme in tracking the desired attitude, as well as suppressing the elastic deflection effects of solar arrays during maneuver.

  14. The Nonlinear Steepest Descent Method to Long-Time Asymptotics of the Coupled Nonlinear Schrödinger Equation

    NASA Astrophysics Data System (ADS)

    Geng, Xianguo; Liu, Huan

    2018-04-01

    The Riemann-Hilbert problem for the coupled nonlinear Schrödinger equation is formulated on the basis of the corresponding 3 × 3 matrix spectral problem. Using the nonlinear steepest descent method, we obtain leading-order asymptotics for the Cauchy problem of the coupled nonlinear Schrödinger equation.

  15. Violence by Parents Against Their Children: Reporting of Maltreatment Suspicions, Child Protection, and Risk in Mental Illness.

    PubMed

    McEwan, Miranda; Friedman, Susan Hatters

    2016-12-01

    Psychiatrists in America are mandated to report suspicions of child abuse. The potential for harm to children should be considered when treating parents who are at risk. Although it is commonly held that mental illness itself is a major risk factor for child abuse, there are methodologic issues with the studies purporting to demonstrate this. Rather, the risk from an individual parent must be considered. Substance abuse and personality disorder pose a risk distinct from that of serious mental illness. Violence risk from mental illness is dynamic, rather than static. When severe mental illness is well treated, the risk is decreased. However, these families are in need of social support. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. An Analysis of Problem-Posing Tasks in Chinese and US Elementary Mathematics Textbooks

    ERIC Educational Resources Information Center

    Cai, Jinfa; Jiang, Chunlian

    2017-01-01

    This paper reports on 2 studies that examine how mathematical problem posing is integrated in Chinese and US elementary mathematics textbooks. Study 1 involved a historical analysis of the problem-posing (PP) tasks in 3 editions of the most widely used elementary mathematics textbook series published by People's Education Press in China over 3…

  17. Fraction Multiplication and Division Word Problems Posed by Different Years of Pre-Service Elementary Mathematics Teachers

    ERIC Educational Resources Information Center

    Aydogdu Iskenderoglu, Tuba

    2018-01-01

    It is important for pre-service teachers to know the conceptual difficulties they have experienced regarding multiplication and division of fractions, and problem posing is a way to reveal these conceptual difficulties. Problem posing is a synthetic activity that fundamentally has multiple answers. The purpose of this study is to…

  18. Generalizability Theory Research on Developing a Scoring Rubric to Assess Primary School Students' Problem Posing Skills

    ERIC Educational Resources Information Center

    Cankoy, Osman; Özder, Hasan

    2017-01-01

    The aim of this study is to develop a scoring rubric to assess primary school students' problem posing skills. A rubric including five dimensions, namely solvability, reasonability, mathematical structure, context, and language, was used. The raters scored the students' problem posing skills both with and without the scoring rubric to test the…

  19. An Investigation of Relationships between Students' Mathematical Problem-Posing Abilities and Their Mathematical Content Knowledge

    ERIC Educational Resources Information Center

    Van Harpen, Xianwei Y.; Presmeg, Norma C.

    2013-01-01

    The importance of students' problem-posing abilities in mathematics has been emphasized in the K-12 curricula in the USA and China. There are claims that problem-posing activities are helpful in developing creative approaches to mathematics. At the same time, there are also claims that students' mathematical content knowledge could be highly…

  20. An Investigation of Eighth Grade Students' Problem Posing Skills (Turkey Sample)

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Ünal, Hasan

    2015-01-01

    Posing a problem is a creative activity in mathematics education. The purpose of the study was to explore eighth grade students' problem posing ability. Three learning domains, namely the four operations, fractions, and geometry, were chosen for this purpose. There were two classes, coded as class A and class B. Class A…

  1. Mathematical Creative Process Wallas Model in Students Problem Posing with Lesson Study Approach

    ERIC Educational Resources Information Center

    Nuha, Muhammad 'Azmi; Waluya, S. B.; Junaedi, Iwan

    2018-01-01

    Creative thinking is very important in the modern era, so it should be fostered through efforts such as designing lessons that train students to pose their own problems. The purposes of this research are (1) to give an initial description of students' mathematical creative thinking levels in the Problem Posing Model with a Lesson Study approach…

  2. Problem Posing with Realistic Mathematics Education Approach in Geometry Learning

    NASA Astrophysics Data System (ADS)

    Mahendra, R.; Slamet, I.; Budiyono

    2017-09-01

    One of the difficulties students face in learning geometry is the topic of the plane, which requires them to understand abstract material. The aim of this research is to determine the effect of the Problem Posing learning model with a Realistic Mathematics Education approach on geometry learning. This quasi-experimental research was conducted in one of the junior high schools in Karanganyar, Indonesia. The sample was taken using a stratified cluster random sampling technique. The results of this research indicate that the Problem Posing learning model with a Realistic Mathematics Education approach can significantly improve students’ conceptual understanding in geometry learning, especially on plane topics. This is because students taught with Problem Posing and the Realistic Mathematics Education approach become active in constructing their knowledge, posing problems, and solving problems in realistic contexts, making it easier for them to understand concepts and solve problems. Therefore, the Problem Posing learning model with a Realistic Mathematics Education approach is appropriate for mathematics learning, especially for geometry material. Furthermore, it can improve student achievement.

  3. A global approach to kinematic path planning to robots with holonomic and nonholonomic constraints

    NASA Technical Reports Server (NTRS)

    Divelbiss, Adam; Seereeram, Sanjeev; Wen, John T.

    1993-01-01

Robots in applications may be subject to holonomic or nonholonomic constraints. Examples of holonomic constraints include a manipulator constrained through contact with the environment, e.g., inserting a part, turning a crank, etc., and multiple manipulators constrained through a common payload. Examples of nonholonomic constraints include no-slip constraints on mobile robot wheels, local normal rotation constraints for soft finger and rolling contacts in grasping, and conservation of angular momentum of in-orbit space robots. The above examples all involve equality constraints; in applications, there are usually additional inequality constraints such as robot joint limits, self-collision and environment collision avoidance constraints, steering angle constraints in mobile robots, etc. The problem of finding a kinematically feasible path that satisfies a given set of holonomic and nonholonomic constraints, of both equality and inequality types, is addressed. The path planning problem is first posed as a finite time nonlinear control problem. This problem is subsequently transformed to a static root finding problem in an augmented space which can then be iteratively solved. The algorithm has shown promising results in planning feasible paths for redundant arms satisfying Cartesian path following and goal endpoint specifications, and mobile vehicles with multiple trailers. In contrast to local approaches, this algorithm is less prone to problems such as singularities and local minima.

  4. New Interstellar Dust Models Consistent with Interstellar Extinction, Emission and Abundances Constraints

    NASA Technical Reports Server (NTRS)

    Zubko, V.; Dwek, E.; Arendt, R. G.; Oegerle, William (Technical Monitor)

    2001-01-01

We present new interstellar dust models that are consistent with both the FUV to near-IR extinction and the infrared (IR) emission measurements from the diffuse interstellar medium. The models are characterized by different dust compositions and abundances. The problem we solve consists of determining the size distribution of the various dust components of the model. This is a typical ill-posed inversion problem, which we solve using the regularization approach. We reproduce the Li & Draine (2001, ApJ, 554, 778) results; however, their model requires an excessive amount of interstellar silicon (48 ppM of hydrogen compared to the 36 ppM available for an ISM of solar composition) to be locked up in dust. We found that dust models consisting of PAHs, amorphous silicate, graphite, and composite grains made up of silicates, organic refractory material, and water ice provide an improved fit to the extinction and IR emission measurements, while still requiring a subsolar amount of silicon to be in the dust. This research was supported by NASA Astrophysical Theory Program NRA 99-OSS-01.

  5. Feasibility of inverse problem solution for determination of city emission function from night sky radiance measurements

    NASA Astrophysics Data System (ADS)

    Petržala, Jaromír

    2018-07-01

The knowledge of the emission function of a city is crucial for simulating sky glow in its vicinity. Indirect methods to recover this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving this inverse problem, in particular testing the fitness of various stabilizing functionals within Tikhonov's regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model for calculating the sky spectral radiance as a functional of the emission spectral radiance. All the mentioned approaches were then examined in numerical experiments with synthetic data generated for a fictitious city and contaminated with random errors. The results demonstrate that second-order Tikhonov regularization, together with choosing the regularization parameter by the L-curve maximum-curvature criterion, provides solutions in good agreement with the assumed model emission functions.
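The pipeline described in this abstract, Tikhonov regularization with the parameter chosen at the L-curve's point of maximum curvature, can be sketched on a generic discretized inverse problem. The smoothing-kernel forward model below is a hypothetical stand-in for the paper's sky-radiance functional, and a zeroth-order (identity) penalty is used instead of the paper's second-order one to keep the sketch short:

```python
import numpy as np

def tikhonov_l_curve(A, b, lambdas):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 over a grid of lam
    and pick lam at the L-curve corner (maximum curvature)."""
    n = A.shape[1]
    sols, res, nrm = [], [], []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        sols.append(x)
        res.append(np.linalg.norm(A @ x - b))
        nrm.append(np.linalg.norm(x))
    # The L-curve lives in log-log space; estimate its curvature
    # by finite differences along the lambda grid.
    u, v = np.log(res), np.log(nrm)
    du, dv = np.gradient(u), np.gradient(v)
    d2u, d2v = np.gradient(du), np.gradient(dv)
    kappa = np.abs(du * d2v - dv * d2u) / (du**2 + dv**2 + 1e-30) ** 1.5
    i = int(np.argmax(kappa[1:-1])) + 1   # skip the endpoints
    return sols[i], lambdas[i]

# Hypothetical smoothing-kernel forward model (NOT the paper's
# sky-radiance functional), discretized on n points.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = 1.5 + np.sin(2.0 * np.pi * t)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

x_rec, lam = tikhonov_l_curve(A, b, np.logspace(-8, 0, 60))
```

On real data the finite-difference curvature estimate can be noisy and may need smoothing before the corner is located.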

  6. Well-posedness, linear perturbations, and mass conservation for the axisymmetric Einstein equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dain, Sergio; Ortiz, Omar E.; Facultad de Matematica, Astronomia y Fisica, FaMAF, Universidad Nacional de Cordoba, Instituto de Fisica Enrique Gaviola, IFEG, CONICET, Ciudad Universitaria

    2010-02-15

For axially symmetric solutions of the Einstein equations there exists a gauge with the remarkable property that the total mass can be written as a conserved, positive definite integral on the spacelike slices. The mass integral provides a nonlinear control of the variables along the whole evolution. In this gauge, the Einstein equations reduce to a coupled hyperbolic-elliptic system which is formally singular at the axis. As a first step in analyzing this system of equations, we study linear perturbations on a flat background. We prove that the linear equations reduce to a very simple system of equations which provides, through the mass formula, useful insight into the structure of the full system. However, the singular behavior of the coefficients at the axis makes the study of this linear system difficult from the analytical point of view. In order to understand the behavior of the solutions, we study their numerical evolution. We provide strong numerical evidence that the system is well-posed and that its solutions have the expected behavior. Finally, this linear system allows us to formulate a model problem which is physically interesting in itself, since it is connected with the linear stability of black hole solutions in axial symmetry. This model can contribute significantly to solving the nonlinear problem, and at the same time it appears to be tractable.

  7. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using models other than Newton's method: a lower-order model, Broyden's method, and a higher-order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian, or that have an inaccurate Jacobian, to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, the modified Newton's method, the Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that Broyden's method converged in some cases where the Jacobian was inaccurate or could not be computed and Newton's method failed to converge. We identify conditions under which Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and compute a step based on a local quadratic model rather than a linear one. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long as Newton-GMRES to solve general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
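Broyden's method as discussed above maintains a secant approximation to the Jacobian via rank-one updates, so the Jacobian itself is never evaluated. A minimal dense sketch of the "good" Broyden update on a toy two-equation system (the report's limited-memory variant avoids storing J explicitly; this version does not):

```python
import numpy as np

def broyden(f, x0, tol=1e-10, max_iter=100):
    """Broyden's 'good' method: rank-one secant updates to an
    approximate Jacobian, so f's Jacobian is never evaluated."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    J = np.eye(x.size)                 # crude initial Jacobian guess
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(J, -fx)    # quasi-Newton step
        x_new = x + s
        fx_new = f(x_new)
        y = fx_new - fx
        # good Broyden update: J += (y - J s) s^T / (s^T s)
        J += np.outer(y - J @ s, s) / (s @ s)
        x, fx = x_new, fx_new
    return x

# Toy nonlinear system: x^2 + y^2 = 4, x*y = 1
def f(v):
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x * y - 1.0])

root = broyden(f, [2.0, 0.0])
```

A production solver would add a globalization strategy (line search or trust region) around the raw step, which is what the report's comparisons rely on.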

  8. A serial mediation model of workplace social support on work productivity: the role of self-stigma and job tenure self-efficacy in people with severe mental disorders.

    PubMed

    Villotti, Patrizia; Corbière, Marc; Dewa, Carolyn S; Fraccaroli, Franco; Sultan-Taïeb, Hélène; Zaniboni, Sara; Lecomte, Tania

    2017-09-12

    Compared to groups with other disabilities, people with a severe mental illness face the greatest stigma and barriers to employment opportunities. This study contributes to the understanding of the relationship between workplace social support and work productivity in people with severe mental illness working in Social Enterprises by taking into account the mediating role of self-stigma and job tenure self-efficacy. A total of 170 individuals with a severe mental disorder employed in a Social Enterprise filled out questionnaires assessing personal and work-related variables at Phase-1 (baseline) and Phase-2 (6-month follow-up). Process modeling was used to test for serial mediation. In the Social Enterprise workplace, social support yields better perceptions of work productivity through lower levels of internalized stigma and higher confidence in facing job-related problems. When testing serial multiple mediations, the specific indirect effect of high workplace social support on work productivity through both low internalized stigma and high job tenure self-efficacy was significant with a point estimate of 1.01 (95% CI = 0.42, 2.28). Continued work in this area can provide guidance for organizations in the open labor market addressing the challenges posed by the work integration of people with severe mental illness. Implications for Rehabilitation: Work integration of people with severe mental disorders is difficult because of limited access to supportive and nondiscriminatory workplaces. Social enterprise represents an effective model for supporting people with severe mental disorders to integrate the labor market. In the social enterprise workplace, social support yields better perceptions of work productivity through lower levels of internalized stigma and higher confidence in facing job-related problems.

  9. Thyroid Allostasis–Adaptive Responses of Thyrotropic Feedback Control to Conditions of Strain, Stress, and Developmental Programming

    PubMed Central

    Chatzitomaris, Apostolos; Hoermann, Rudolf; Midgley, John E.; Hering, Steffen; Urban, Aline; Dietrich, Barbara; Abood, Assjana; Klein, Harald H.; Dietrich, Johannes W.

    2017-01-01

    The hypothalamus–pituitary–thyroid feedback control is a dynamic, adaptive system. In situations of illness and deprivation of energy representing type 1 allostasis, the stress response operates to alter both its set point and peripheral transfer parameters. In contrast, type 2 allostatic load, typically effective in psychosocial stress, pregnancy, metabolic syndrome, and adaptation to cold, produces a nearly opposite phenotype of predictive plasticity. The non-thyroidal illness syndrome (NTIS) or thyroid allostasis in critical illness, tumors, uremia, and starvation (TACITUS), commonly observed in hospitalized patients, displays a historically well-studied pattern of allostatic thyroid response. This is characterized by decreased total and free thyroid hormone concentrations and varying levels of thyroid-stimulating hormone (TSH) ranging from decreased (in severe cases) to normal or even elevated (mainly in the recovery phase) TSH concentrations. An acute versus chronic stage (wasting syndrome) of TACITUS can be discerned. The two types differ in molecular mechanisms and prognosis. The acute adaptation of thyroid hormone metabolism to critical illness may prove beneficial to the organism, whereas the far more complex molecular alterations associated with chronic illness frequently lead to allostatic overload. The latter is associated with poor outcome, independently of the underlying disease. Adaptive responses of thyroid homeostasis extend to alterations in thyroid hormone concentrations during fetal life, periods of weight gain or loss, thermoregulation, physical exercise, and psychiatric diseases. The various forms of thyroid allostasis pose serious problems in differential diagnosis of thyroid disease. This review article provides an overview of physiological mechanisms as well as major diagnostic and therapeutic implications of thyroid allostasis under a variety of developmental and straining conditions. PMID:28775711

  10. Problem Posing and Solving with Mathematical Modeling

    ERIC Educational Resources Information Center

    English, Lyn D.; Fox, Jillian L.; Watters, James J.

    2005-01-01

    Mathematical modeling is explored as both problem posing and problem solving from two perspectives, that of the child and the teacher. Mathematical modeling provides rich learning experiences for elementary school children and their teachers.

  11. Scalar discrete nonlinear multipoint boundary value problems

    NASA Astrophysics Data System (ADS)

    Rodriguez, Jesus; Taylor, Padraic

    2007-06-01

    In this paper we provide sufficient conditions for the existence of solutions to scalar discrete nonlinear multipoint boundary value problems. By allowing more general boundary conditions and by imposing less restrictions on the nonlinearities, we obtain results that extend previous work in the area of discrete boundary value problems [Debra L. Etheridge, Jesus Rodriguez, Periodic solutions of nonlinear discrete-time systems, Appl. Anal. 62 (1996) 119-137; Debra L. Etheridge, Jesus Rodriguez, Scalar discrete nonlinear two-point boundary value problems, J. Difference Equ. Appl. 4 (1998) 127-144].

  12. Common mental health problems in immigrants and refugees: general approach in primary care

    PubMed Central

    Kirmayer, Laurence J.; Narasiah, Lavanya; Munoz, Marie; Rashid, Meb; Ryder, Andrew G.; Guzder, Jaswant; Hassan, Ghayda; Rousseau, Cécile; Pottie, Kevin

    2011-01-01

    Background: Recognizing and appropriately treating mental health problems among new immigrants and refugees in primary care poses a challenge because of differences in language and culture and because of specific stressors associated with migration and resettlement. We aimed to identify risk factors and strategies in the approach to mental health assessment and to prevention and treatment of common mental health problems for immigrants in primary care. Methods: We searched and compiled literature on prevalence and risk factors for common mental health problems related to migration, the effect of cultural influences on health and illness, and clinical strategies to improve mental health care for immigrants and refugees. Publications were selected on the basis of relevance, use of recent data and quality in consultation with experts in immigrant and refugee mental health. Results: The migration trajectory can be divided into three components: premigration, migration and postmigration resettlement. Each phase is associated with specific risks and exposures. The prevalence of specific types of mental health problems is influenced by the nature of the migration experience, in terms of adversity experienced before, during and after resettlement. Specific challenges in migrant mental health include communication difficulties because of language and cultural differences; the effect of cultural shaping of symptoms and illness behaviour on diagnosis, coping and treatment; differences in family structure and process affecting adaptation, acculturation and intergenerational conflict; and aspects of acceptance by the receiving society that affect employment, social status and integration. These issues can be addressed through specific inquiry, the use of trained interpreters and culture brokers, meetings with families, and consultation with community organizations. 
Interpretation: Systematic inquiry into patients’ migration trajectory and subsequent follow-up on culturally appropriate indicators of social, vocational and family functioning over time will allow clinicians to recognize problems in adaptation and undertake mental health promotion, disease prevention or treatment interventions in a timely way. PMID:20603342

  13. Sensitivity computation of the ℓ1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application for inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
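The baseline this paper builds on, sparse representation of a solution over a given prototype dictionary, is an ℓ1-regularized least-squares problem. A minimal sketch using ISTA (iterative soft-thresholding) with a hypothetical random dictionary; the paper's sensitivity analysis and dictionary optimization are not reproduced here:

```python
import numpy as np

def ista(D, y, lam, n_iter=1000):
    """Solve min_x 0.5 ||D x - y||^2 + lam ||x||_1 by iterative
    soft-thresholding (proximal gradient descent)."""
    L = np.linalg.norm(D, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)           # gradient of the smooth term
        z = x - g / L
        # proximal step: soft-threshold toward zero
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x

# Hypothetical unit-norm random dictionary and a 3-sparse signal
rng = np.random.default_rng(1)
D = rng.standard_normal((30, 80))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = D @ x_true                          # noiseless measurements

x_hat = ista(D, y, lam=0.01)
```

The small ℓ1 weight introduces a slight shrinkage bias on the recovered coefficients, which is the usual trade-off for sparsity.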

  14. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.

  15. Diagnosis of organic brain syndrome: an emergency department dilemma.

    PubMed

    Dubin, W R; Weiss, K J

    1984-01-01

    Delirium and dementia frequently pose a diagnostic dilemma for clinicians in the emergency department. The overlap of symptoms between organic brain syndrome and functional psychiatric illness, coupled with a dramatic presentation, often leads to a premature psychiatric diagnosis. In this paper, the authors discuss those symptoms of organic brain syndrome that most frequently generate diagnostic confusion in the emergency department and result in a misdiagnosis of functional illness.

  16. Coupled multiview autoencoders with locality sensitivity for three-dimensional human pose estimation

    NASA Astrophysics Data System (ADS)

    Yu, Jialin; Sun, Jifeng; Luo, Shasha; Duan, Bichao

    2017-09-01

    Estimating three-dimensional (3D) human poses from a single camera is usually implemented by searching pose candidates with image descriptors. Existing methods usually suppose that the mapping from feature space to pose space is linear, but in fact, their mapping relationship is highly nonlinear, which heavily degrades the performance of 3D pose estimation. We propose a method to recover 3D pose from a silhouette image. It is based on the multiview feature embedding (MFE) and the locality-sensitive autoencoders (LSAEs). On the one hand, we first depict the manifold regularized sparse low-rank approximation for MFE and then the input image is characterized by a fused feature descriptor. On the other hand, both the fused feature and its corresponding 3D pose are separately encoded by LSAEs. A two-layer back-propagation neural network is trained by parameter fine-tuning and then used to map the encoded 2D features to encoded 3D poses. Our LSAE ensures a good preservation of the local topology of data points. Experimental results demonstrate the effectiveness of our proposed method.

  17. Problem-posing in education: transformation of the practice of the health professional.

    PubMed

    Casagrande, L D; Caron-Ruffino, M; Rodrigues, R A; Vendrúsculo, D M; Takayanagui, A M; Zago, M M; Mendes, M D

    1998-02-01

    This study was developed by a group of professionals from different areas (nurses and educators) concerned with health education. It proposes the use of a problem-posing model for the transformation of professional practice. The concept and functions of the model and their relationships with the educative practice of health professionals are discussed. The model of problem-posing education is presented (compared to traditional, "banking" education), and four innovative experiences of teaching-learning are reported based on this model. These experiences, carried out in areas of environmental and occupational health and patient education have shown the applicability of the problem-posing model to the practice of the health professional, allowing transformation.

  18. Heat-related illness in China, summer of 2013

    NASA Astrophysics Data System (ADS)

    Gu, Shaohua; Huang, Cunrui; Bai, Li; Chu, Cordia; Liu, Qiyong

    2016-01-01

Extreme heat events have occurred more frequently in China in recent years, leading to serious impacts on human life and the health care system. To identify the characteristics of individuals with heat-related illnesses in China during the summer of 2013, we collected data from the Heat-related Illness Surveillance System of the Chinese Center for Disease Control and Prevention (China CDC). A total of 5758 cases were reported in the summer of 2013, mostly concentrated in urban areas around the middle and lower reaches of the Yangtze River. We found a difference between males and females in the age distribution of deaths from heat-related illness. Severe cases in males occurred mostly in the 45-74 age group, but in females mostly in the group over 75. A distributed lag non-linear model was used to identify population vulnerabilities in Ningbo and Chongqing. The results show a clear positive relationship between maximum temperature and heat-related illness, and the heat effect was nonlinear and could last for 3 days. The elderly, and males aged 45-64, may be the people most vulnerable to heat-related illness in China. We also highlighted some deficiencies of the surveillance system, such as reported data that were not sufficiently accurate, comprehensive, or timely at this stage.

  19. The inverse problem of estimating the gravitational time dilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.

    2016-11-15

Precise testing of the gravitational time dilation effect suggests comparing the clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via the electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations of a wide class of distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f)^γ, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.

  20. Spatially adapted second-order total generalized variational image deblurring model under impulse noise

    NASA Astrophysics Data System (ADS)

    Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen

    2018-04-01

Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. The L1-norm data-fidelity term and the total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts, leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced in place of TV to eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.

  1. Källén-Lehmann spectroscopy for (un)physical degrees of freedom

    NASA Astrophysics Data System (ADS)

    Dudal, David; Oliveira, Orlando; Silva, Paulo J.

    2014-01-01

    We consider the problem of "measuring" the Källén-Lehmann spectral density of a particle (be it elementary or bound state) propagator by means of 4D lattice data. As the latter are obtained from operations at (Euclidean momentum squared) p2≥0, we are facing the generically ill-posed problem of converting a limited data set over the positive real axis to an integral representation, extending over the whole complex p2 plane. We employ a linear regularization strategy, commonly known as the Tikhonov method with the Morozov discrepancy principle, with suitable adaptations to realistic data, e.g. with an unknown threshold. An important virtue over the (standard) maximum entropy method is the possibility to also probe unphysical spectral densities, for example, of a confined gluon. We apply our proposal here to "physical" mock spectral data as a litmus test and then to the lattice SU(3) Landau gauge gluon at zero temperature.
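The Tikhonov method with the Morozov discrepancy principle, as used in this paper, selects the regularization parameter so that the data misfit matches the estimated noise level. A minimal sketch on a hypothetical smoothing-kernel problem rather than lattice propagator data:

```python
import numpy as np

def tikhonov(A, b, lam):
    """min ||A x - b||^2 + lam ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def morozov(A, b, delta, lo=1e-14, hi=1e2, iters=100):
    """Choose lam so that ||A x_lam - b|| ~ delta (the noise level).
    The residual norm grows monotonically with lam, so bisection
    in log space converges."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        if np.linalg.norm(A @ tikhonov(A, b, mid) - b) < delta:
            lo = mid          # under-regularized: fitting the noise
        else:
            hi = mid
    return np.sqrt(lo * hi)

# Hypothetical smoothing-kernel test problem (stand-in only).
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-40.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.exp(-((t - 0.5) ** 2) / 0.02)
rng = np.random.default_rng(2)
noise = 1e-3 * rng.standard_normal(n)
b = A @ x_true + noise
delta = np.linalg.norm(noise)     # Morozov needs a noise estimate

lam = morozov(A, b, delta)
x_rec = tikhonov(A, b, lam)
```

In practice delta must itself be estimated from the data; the abstract's point about an unknown threshold is one example of the adaptations such estimates require.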

  2. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

In this paper, we propose a model-based scattering removal method for stereo vision in robot manipulation in indoor scattering media, where commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139

  3. Generation of intervention strategy for a genetic regulatory network represented by a family of Markov Chains.

    PubMed

    Berlow, Noah; Pal, Ranadip

    2011-01-01

Genetic Regulatory Networks (GRNs) are frequently modeled as Markov Chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inference of the Markov Chain from noisy and limited experimental data is an ill-posed problem and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov Chains. The purpose of intervention is to alter the steady state probability distribution of the GRN, as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with the best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
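The closing remark about efficiently updating the stationary distribution after a rank-one perturbation can be made concrete. Writing pi as the solution of pi (I - P + J) = 1^T, with J the all-ones matrix, turns a change to a single row of P into a rank-one change of the system matrix, which the Sherman-Morrison formula handles without refactorizing. A sketch assuming an ergodic chain (this is a standard identity, not necessarily the authors' exact update):

```python
import numpy as np

def stationary(P):
    """pi with pi P = pi and sum(pi) = 1, via pi (I - P + J) = 1^T,
    where J is the all-ones matrix (valid for ergodic chains)."""
    n = P.shape[0]
    M = np.eye(n) - P + np.ones((n, n))
    return np.linalg.solve(M.T, np.ones(n))

def stationary_after_row_update(P, i, new_row, M_inv):
    """Row i of P becomes new_row. With M = I - P + J this is the
    rank-one change M' = M - e_i d^T, d = new_row - P[i], so
    Sherman-Morrison yields pi' = 1^T M'^{-1} without refactorizing."""
    d = new_row - P[i]
    pi_old = np.ones(P.shape[0]) @ M_inv   # 1^T M^{-1} = old pi
    denom = 1.0 - d @ M_inv[:, i]          # 1 - d^T M^{-1} e_i
    return pi_old + (pi_old[i] / denom) * (d @ M_inv)

rng = np.random.default_rng(3)
P = rng.random((4, 4))
P /= P.sum(axis=1, keepdims=True)          # row-stochastic
M_inv = np.linalg.inv(np.eye(4) - P + np.ones((4, 4)))

new_row = np.array([0.1, 0.2, 0.3, 0.4])   # replacement for row 2
pi_fast = stationary_after_row_update(P, 2, new_row, M_inv)

P2 = P.copy()
P2[2] = new_row
pi_direct = stationary(P2)                 # recompute from scratch
```

The update costs O(n^2) per perturbed row once M_inv is available, versus O(n^3) for a fresh solve, which matters when many perturbed models in a family must be evaluated.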

  4. [Problem-posing as a nutritional education strategy with obese teenagers].

    PubMed

    Rodrigues, Erika Marafon; Boog, Maria Cristina Faber

    2006-05-01

    Obesity is a public health issue with relevant social determinants in its etiology and where interventions with teenagers encounter complex biopsychological conditions. This study evaluated intervention in nutritional education through a problem-posing approach with 22 obese teenagers, treated collectively and individually for eight months. Speech acts were collected through the use of word cards, observer recording, and tape-recording. The study adopted a qualitative methodology, and the approach involved content analysis. Problem-posing facilitated changes in eating behavior, triggering reflections on nutritional practices, family circumstances, social stigma, interaction with health professionals, and religion. Teenagers under individual care posed problems more effectively in relation to eating, while those under collective care posed problems in relation to family and psychological issues, with effective qualitative eating changes in both groups. The intervention helped teenagers understand their life history and determinants of eating behaviors, spontaneously implementing eating changes and making them aware of possibilities for maintaining the new practices and autonomously exercising their role as protagonists in their own health care.

  5. An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
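The "all-at-once" idea, treating flow and design variables as independent unknowns and imposing the flow equation only as an equality constraint, can be illustrated on a toy problem with an off-the-shelf SQP solver (SciPy's SLSQP, not the paper's reduced Hessian scheme; the model and numbers are our own):

```python
import numpy as np
from scipy.optimize import minimize

# All-at-once flavor of the problem class: treat the "flow" variable w and
# the "design" variable d as independent unknowns, impose the flow equation
# w = d**2 only as an equality constraint, and let an SQP method update both
# simultaneously, so w may be infeasible before convergence. Toy problem:
#   min_{d,w} (d - 2)**2 + (w - 1)**2   s.t.   w - d**2 = 0
obj = lambda z: (z[0] - 2.0) ** 2 + (z[1] - 1.0) ** 2
con = {"type": "eq", "fun": lambda z: z[1] - z[0] ** 2}

res = minimize(obj, x0=[0.0, 0.0], method="SLSQP", constraints=[con])
d, w = res.x
```

At the start the pair (d, w) = (0, 0) satisfies the constraint only trivially; intermediate SQP iterates generally violate w = d**2, exactly as the abstract describes for the flow variables.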

  6. Fostering Mathematical Creativity through Problem Posing and Modeling Using Dynamic Geometry: Viviani's Problem in the Classroom

    ERIC Educational Resources Information Center

    Contreras, José N.

    2013-01-01

    This paper discusses a classroom experience in which a group of prospective secondary mathematics teachers were asked to create, cooperatively (in class) and individually, problems related to Viviani's problem using a problem-posing framework. When appropriate, students used Sketchpad to explore the problem to better understand its attributes…

  7. Investigating Mathematics Teachers Candidates' Knowledge about Problem Solving Strategies through Problem Posing

    ERIC Educational Resources Information Center

    Ünlü, Melihan

    2017-01-01

    The aim of the study was to determine mathematics teacher candidates' knowledge about problem solving strategies through problem posing. This qualitative research was conducted with 95 mathematics teacher candidates studying at education faculty of a public university during the first term of the 2015-2016 academic year in Turkey. Problem Posing…

  8. The Chronically Ill Child in the School.

    ERIC Educational Resources Information Center

    Sexson, Sandra; Madan-Swain, Avi

    1995-01-01

    Examines the effects of chronic illness on the school-age population. Facilitating successful functioning of chronically ill youths is a growing problem. Focuses on problems encountered by the chronically ill student who has either been diagnosed with a chronic illness or who has survived such an illness. Discusses the role of the school…

  9. Sleep Problems in Children and Adolescents with Common Medical Conditions

    PubMed Central

    Lewandowski, Amy S.; Ward, Teresa M.; Palermo, Tonya M.

    2011-01-01

    Sleep is critically important to children’s health and well-being. Untreated sleep disturbances and sleep disorders pose significant adverse daytime consequences and place children at considerable risk for poor health outcomes. Sleep disturbances occur at a greater frequency in children with acute and chronic medical conditions compared to otherwise healthy peers. Sleep disturbances in medically ill children can be associated with sleep disorders (e.g., sleep disordered breathing, restless legs syndrome), co-morbid with acute and chronic conditions (e.g., asthma, arthritis, cancer), or secondary to underlying disease-related mechanisms (e.g., airway restriction, inflammation), treatment regimens, or hospitalization. Clinical management should include a multidisciplinary approach with particular emphasis on routine, regular sleep assessments, prevention of daytime consequences, and promotion of healthy sleep habits and health outcomes. PMID:21600350

  10. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, the classical Shannon entropy is replaced by the von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles are proposed to set statistical regularization and other hyperparameters, such as conservation of information and smoothness. ME provides an alternative to hierarchical Bayes methods.
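The classical ME mechanics that the paper generalizes, a Lagrange multiplier enforcing a moment constraint, can be shown in a few lines. This is the textbook Shannon-entropy case (our example: the maximum-entropy distribution on a die's faces with a prescribed mean), not the quantum-entropy extension:

```python
import numpy as np

# Maximum-entropy distribution on outcomes 1..6 subject to a mean constraint:
# the solution has the exponential form p_i ∝ exp(lam * i), with the
# Lagrange multiplier lam chosen so that sum_i i * p_i equals the target.
def maxent_dist(values, target_mean, lo=-10.0, hi=10.0):
    values = np.asarray(values, float)

    def mean_at(lam):
        w = np.exp(lam * values)
        p = w / w.sum()
        return (p * values).sum()

    # Bisection on lam (mean_at is monotone increasing in lam).
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * values)
    return w / w.sum()

# Loaded-die example: mean 4.5 instead of the uniform 3.5.
p = maxent_dist([1, 2, 3, 4, 5, 6], 4.5)
```

Since the target mean exceeds the uniform mean, lam > 0 and the resulting probabilities increase with the face value.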

  11. DLTPulseGenerator: A library for the simulation of lifetime spectra based on detector-output pulses

    NASA Astrophysics Data System (ADS)

    Petschke, Danny; Staab, Torsten E. M.

    2018-01-01

    The quantitative analysis of lifetime spectra, relevant in both the life and materials sciences, is an ill-posed inverse problem and, hence, places stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++ 11, which provides a simulation of lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output pulses. Those pulses require the constant fraction discrimination (CFD) principle for the determination of the exact timing signal and, thus, for the calculation of the time difference, i.e. the lifetime. To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin.
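The CFD timing idea the abstract relies on can be sketched in its simplest digital form: the timestamp is taken where a pulse crosses a fixed fraction of its own amplitude, which makes the timing largely independent of pulse height. The pulse model and numbers below are our own, not the library's:

```python
import numpy as np

def cfd_time(t, v, fraction=0.5):
    # Time at which the pulse first crosses `fraction` of its peak,
    # with linear interpolation between samples for sub-sample precision.
    thr = fraction * v.max()
    i = np.argmax(v >= thr)                       # first sample above threshold
    return t[i - 1] + (thr - v[i - 1]) * (t[i] - t[i - 1]) / (v[i] - v[i - 1])

t = np.linspace(0.0, 50.0, 5001)                  # ns, 0.01 ns sampling
pulse = lambda t0, A: A * np.exp(-(t - t0) ** 2 / (2 * 1.5 ** 2))

# Two detector pulses of different amplitude; the lifetime is the
# difference of their CFD timestamps.
t_start = cfd_time(t, pulse(10.0, 1.0))
t_stop = cfd_time(t, pulse(12.2, 0.4))
lifetime = t_stop - t_start
```

Because both crossings sit at the same fraction of their respective amplitudes, the 2.2 ns separation is recovered even though the stop pulse is much smaller.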

  12. Continuous properties of the data-to-solution map for a generalized μ-Camassa-Holm integrable equation

    NASA Astrophysics Data System (ADS)

    Yu, Shengqi

    2018-05-01

    This work studies a generalized μ-type integrable equation with both quadratic and cubic nonlinearities; the μ-Camassa-Holm and modified μ-Camassa-Holm equations are members of this family of equations. It has been shown that the Cauchy problem for this generalized μ-Camassa-Holm integrable equation is locally well-posed for initial data u0 ∈ Hs, s > 5/2. In this work, we further investigate the continuity properties of this equation. It is proved that the data-to-solution map of the proposed equation is not uniformly continuous. It is also found that the solution map is Hölder continuous in the Hr-topology when 0 ≤ r < s, with Hölder exponent α depending on both s and r.

  13. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.
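The ill-conditioning the abstract describes is easy to exhibit in one dimension. The toy problem below is our own (it is not from the paper): for an active bound constraint, the classical log-barrier Hessian blows up as the barrier parameter is driven to zero.

```python
import numpy as np

# 1-D illustration of the classical log-barrier method for
#   min (x + 1)**2  subject to  x >= 0   (optimum on the boundary, x* = 0).
# Barrier subproblem:  min B(x; mu) = (x + 1)**2 - mu * log(x),
# solved by damped Newton; the barrier Hessian B''(x) = 2 + mu / x**2
# grows without bound as mu -> 0, since the minimizer x(mu) ~ mu / 2.
def barrier_min(mu, x0=1.0, iters=100):
    x = x0
    for _ in range(iters):
        g = 2.0 * (x + 1.0) - mu / x      # gradient of B
        h = 2.0 + mu / x ** 2             # Hessian of B
        step = g / h
        while x - step <= 0.0:            # damp to stay strictly feasible
            step *= 0.5
        x -= step
    return x

mus = [1.0, 1e-2, 1e-4, 1e-6]
xs = [barrier_min(mu) for mu in mus]
conds = [2.0 + mu / x ** 2 for mu, x in zip(mus, xs)]   # Hessian values
```

The modified barrier approach of the paper avoids exactly this growth: with multiplier-scaled, shifted barrier terms the Hessian stays bounded along the path.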

  14. Adaptive Leadership Framework for Chronic Illness

    PubMed Central

    Anderson, Ruth A.; Bailey, Donald E.; Wu, Bei; Corazzini, Kirsten; McConnell, Eleanor S.; Thygeson, N. Marcus; Docherty, Sharron L.

    2015-01-01

    We propose the Adaptive Leadership Framework for Chronic Illness as a novel framework for conceptualizing, studying, and providing care. This framework is an application of the Adaptive Leadership Framework developed by Heifetz and colleagues for business. Our framework views health care as a complex adaptive system and addresses the intersection at which people with chronic illness interface with the care system. We shift the focus from symptoms to the challenges that symptoms pose for patients/families. We describe how providers and patients/families might collaborate to create shared meaning of symptoms and challenges to coproduce appropriate approaches to care. PMID:25647829

  15. Modified Taylor series method for solving nonlinear differential equations with mixed boundary conditions defined on finite intervals.

    PubMed

    Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel Antonio; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Marin-Hernandez, Antonio; Herrera-May, Agustin Leobardo; Diaz-Sanchez, Alejandro; Huerta-Chua, Jesus

    2014-01-01

    In this article, we propose the application of a modified Taylor series method (MTSM) for the approximation of nonlinear problems described on finite intervals. The issue of applying the Taylor series method to mixed boundary conditions is circumvented using shooting constants and extra derivatives of the problem. In order to show the benefits of this proposal, three different kinds of problems are solved: a three-point boundary value problem (BVP) of third order with a hyperbolic sine nonlinearity, a two-point BVP for a second-order nonlinear differential equation with an exponential nonlinearity, and a two-point BVP for a third-order nonlinear differential equation with a radical nonlinearity. The results show that the MTSM is capable of generating easily computable and highly accurate approximations for nonlinear equations.
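The shooting-constant idea, guessing the missing initial data and adjusting it until the far boundary condition is met, can be sketched with a classical shooting method (not the paper's MTSM) on a two-point BVP with an exponential nonlinearity. The Bratu-type problem below is our own choice:

```python
import numpy as np

# Shooting for the Bratu-type BVP  y'' + exp(y) = 0,  y(0) = y(1) = 0:
# guess the missing slope s = y'(0), integrate with RK4, and bisect on
# the boundary mismatch y(1; s).
def integrate(s, n=200):
    h = 1.0 / n
    y, v = 0.0, s
    f = lambda y, v: (v, -np.exp(y))
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h / 2 * k1[0], v + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], v + h / 2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y                                   # y(1; s)

lo, hi = 0.0, 1.0                              # integrate(lo) < 0 < integrate(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if integrate(mid) < 0 else (lo, mid)
s = 0.5 * (lo + hi)                            # recovered slope, roughly 0.549
```

The MTSM replaces the numerical integration with a Taylor series in which the shooting constants appear symbolically, but the boundary-matching logic is the same.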

  16. Nonlinear friction modelling and compensation control of hysteresis phenomena for a pair of tendon-sheath actuated surgical robots

    NASA Astrophysics Data System (ADS)

    Do, T. N.; Tjahjowidodo, T.; Lau, M. W. S.; Phee, S. J.

    2015-08-01

    Natural Orifice Transluminal Endoscopic Surgery (NOTES) is a special method that allows surgical operations via natural orifices like the mouth, anus, and vagina, without leaving visible scars. The use of the flexible tendon-sheath mechanism (TSM) is common in these systems because of its lightweight structure, flexibility, and easy transmission of power. However, nonlinear friction and backlash hysteresis pose many challenges to control of such systems; in addition, they do not provide haptic feedback to assist the surgeon in the operation of the systems. In this paper, we propose a new dynamic friction model and a backlash hysteresis model for a pair of TSMs to deal with these problems. The proposed friction model, unlike current approaches in the literature, is smooth and able to capture the force at near-zero velocity when the system is stationary or operates at small motion. This model can be used to estimate the friction force for haptic feedback purposes. To improve the system's tracking performance, a backlash hysteresis model is introduced, which can be used in a feedforward controller scheme. The controller involves a simple computation of the inverse hysteresis model. The proposed models are configuration independent and able to capture the nonlinearities for arbitrary tendon-sheath shapes. A representative experimental setup is used to validate the proposed models and to demonstrate the improvement in position tracking accuracy and the possibility of providing desired force information at the distal end of a pair of TSM-driven slave manipulators for haptic feedback to the surgeons.
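The feedforward compensation scheme, commanding the inverse of the hysteresis model, can be illustrated with the simplest backlash (play) operator. This is our own minimal sketch; the paper's models are richer and include tendon-sheath friction:

```python
import numpy as np

def play(u, w, y0=0.0):
    # Play (backlash) operator: the output moves only once the deadband
    # of half-width w has been traversed by the input.
    y, out = y0, []
    for ui in u:
        y = min(max(y, ui - w), ui + w)
        out.append(y)
    return np.array(out)

def play_inverse(yd, w):
    # Feedforward inverse: lead the command by +/- w in the direction of
    # motion, so the deadband is pre-compensated.
    u = np.empty_like(yd)
    u[0] = yd[0]
    for k in range(1, len(yd)):
        d = np.sign(yd[k] - yd[k - 1])
        u[k] = yd[k] + w * d if d != 0 else u[k - 1]
    return u

w = 0.1
yd = 0.5 * np.sin(np.linspace(0, 4 * np.pi, 400))   # desired trajectory
y = play(play_inverse(yd, w), w, y0=yd[0])          # compensated output
y_raw = play(yd, w, y0=yd[0])                       # uncompensated output
```

The uncompensated output lags the target by the deadband width on every monotone segment, while the inverse-compensated command tracks it exactly in this idealized setting.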

  17. Nonlinear dimension reduction and clustering by Minimum Curvilinearity unfold neuropathic pain and tissue embryological classes.

    PubMed

    Cannistraci, Carlo Vittorio; Ravasi, Timothy; Montevecchi, Franco Maria; Ideker, Trey; Alessio, Massimo

    2010-09-15

    Nonlinear small datasets, which are characterized by low numbers of samples and very high numbers of measures, occur frequently in computational biology, and pose problems in their investigation. Unsupervised hybrid-two-phase (H2P) procedures, specifically dimension reduction (DR) coupled with clustering, provide valuable assistance, not only for unsupervised data classification, but also for visualization of the patterns hidden in high-dimensional feature space. 'Minimum Curvilinearity' (MC) is a principle that, for small datasets, suggests the approximation of curvilinear sample distances in the feature space by pair-wise distances over their minimum spanning tree (MST), and thus avoids the introduction of any tuning parameter. MC is used to design two novel forms of nonlinear machine learning (NML): Minimum Curvilinear embedding (MCE) for DR, and Minimum Curvilinear affinity propagation (MCAP) for clustering. Compared with several other unsupervised and supervised algorithms, MCE and MCAP, whether individually or combined in H2P, overcome the limits of classical approaches. High performance was attained in the visualization and classification of: (i) pain patients (proteomic measurements) in peripheral neuropathy; (ii) human organ tissues (genomic transcription factor measurements) on the basis of their embryological origin. MC provides a valuable framework to estimate nonlinear distances in small datasets. Its extension to large datasets is prefigured for novel NMLs. Classification of neuropathic pain by proteomic profiles offers new insights for future molecular and systems biology characterization of pain. Improvements in tissue embryological classification refine results obtained in an earlier study, and suggest a possible reinterpretation of skin attribution as mesodermal. https://sites.google.com/site/carlovittoriocannistraci/home.
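The MC principle itself, measuring distances along the minimum spanning tree instead of straight lines, takes only a few lines with standard graph routines. The spiral dataset below is our own illustration, not the paper's data:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

# Minimum-Curvilinearity-style distances: approximate curvilinear sample
# distances by path lengths over the MST of the Euclidean distance graph.
# No tuning parameter is needed, in line with the MC principle.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 3 * np.pi, 60))
X = np.c_[t * np.cos(t), t * np.sin(t)]          # samples on a spiral

D = squareform(pdist(X))                          # Euclidean distances
mst = minimum_spanning_tree(D)                    # sparse MST (n-1 edges)
# Pairwise distances measured along the tree (edges treated as undirected).
D_mc = shortest_path(mst, method="D", directed=False)
```

The matrix `D_mc` can then be fed to an embedding or an affinity-propagation-style clustering step, which is how MCE and MCAP are assembled in the paper.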

  18. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled parameters of released mass are not dependent on prior information, which improves the identification efficiency. A hypothetical case study with a different number of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA.
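A toy analogue of the identification step (not the paper's LR-BPM) fits in a short script: recover the location, release time and mass of an instantaneous point release in a 1-D river from station concentrations, by minimizing the data misfit with a differential-evolution search. The advection-dispersion solution is the standard analytic one; all numbers are invented:

```python
import numpy as np
from scipy.optimize import differential_evolution

u, Dx, A = 0.5, 2.0, 10.0        # velocity (m/s), dispersion (m^2/s), area (m^2)

def conc(x, t, x0, t0, M):
    # Analytic solution for an instantaneous point release of mass M at
    # (x0, t0) in a uniform 1-D channel (zero before the release time).
    tau = np.maximum(t - t0, 1e-9)
    c = (M / (A * np.sqrt(4 * np.pi * Dx * tau))
         * np.exp(-(x - x0 - u * tau) ** 2 / (4 * Dx * tau)))
    return np.where(t > t0, c, 0.0)

xs = np.array([200.0, 400.0])                # gauging stations (m)
ts = np.linspace(100.0, 2000.0, 40)          # sampling times (s)
X, T = np.meshgrid(xs, ts)

true_p = (50.0, 30.0, 500.0)                 # x0 (m), t0 (s), M (g)
obs = conc(X, T, *true_p)                    # synthetic noise-free data

misfit = lambda p: np.sum((conc(X, T, *p) - obs) ** 2)
res = differential_evolution(misfit, [(0, 150), (0, 100), (100, 1000)],
                             seed=1, tol=1e-12)
```

The equifinality issue the paper mentions shows up here too: shifting the location upstream and the release time earlier produces nearly the same arrival times, so only the peak widths separate the two, which is what makes noisy real-world identification hard.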

  19. Folk concepts of mental disorders among Chinese-Australian patients and their caregivers.

    PubMed

    Hsiao, Fei-Hsiu; Klimidis, Steven; Minas, Harry I; Tan, Eng S

    2006-07-01

    This paper reports a study of (a) popular conceptions of mental illness throughout history, and (b) how current social and cultural knowledge about mental illness influences Chinese-Australian patients' and caregivers' understanding of mental illness, and the consequences of this for explaining and labelling patients' problems. According to traditional Chinese cultural knowledge about health and illness, Chinese people believe that psychotic illness is the only type of mental illness, and that non-psychotic illness is a physical illness. Regarding patients' problems as not being due to mental illness may result in delaying use of Western mental health services. Data collection took place in 2001. Twenty-eight Chinese-Australian patients with mental illness and their caregivers were interviewed at home, drawing on Kleinman's explanatory model and studies of cultural transmission. Interviews were tape-recorded and transcribed, and analysed for plots and themes. Chinese-Australians combined traditional knowledge with Western medical knowledge to develop their own labels for various kinds of mental disorders, including 'mental illness', 'physical illness', 'normal problems of living' and 'psychological problems'. As they learnt more about Western conceptions of psychology and psychiatry, their understanding of some disorders changed. What was previously ascribed to non-mental disorders was often re-labelled as 'mental illness' or 'psychological problems'. Educational programmes aimed at introducing Chinese immigrants to counselling and other psychiatric services could be made more effective if designers gave greater consideration to Chinese understanding of mental illness.

  20. A Problem-Solving Conceptual Framework and Its Implications in Designing Problem-Posing Tasks

    ERIC Educational Resources Information Center

    Singer, Florence Mihaela; Voica, Cristian

    2013-01-01

    The links between the mathematical and cognitive models that interact during problem solving are explored with the purpose of developing a reference framework for designing problem-posing tasks. When the process of solving is a successful one, a solver successively changes his/her cognitive stances related to the problem via transformations that…

  1. Opportunities to Pose Problems Using Digital Technology in Problem Solving Environments

    ERIC Educational Resources Information Center

    Aguilar-Magallón, Daniel Aurelio; Fernández, Willliam Enrique Poveda

    2017-01-01

    This article reports and analyzes different types of problems that nine students in a Master's Program in Mathematics Education posed during a course on problem solving. What opportunities (affordances) can a dynamic geometry system (GeoGebra) offer to allow in-service and in-training teachers to formulate and solve problems, and what type of…

  2. Do everyday problems of people with chronic illness interfere with their disease management?

    PubMed

    van Houtum, Lieke; Rijken, Mieke; Groenewegen, Peter

    2015-10-01

    Being chronically ill is a continuous process of balancing the demands of the illness and the demands of everyday life. Understanding how everyday life affects self-management might help to provide better professional support. However, little attention has been paid to the influence of everyday life on self-management. The purpose of this study is to examine to what extent problems in everyday life interfere with the self-management behaviour of people with chronic illness, i.e. their ability to manage their illness. To estimate the effects of having everyday problems on self-management, cross-sectional linear regression analyses with propensity score matching were conducted. Data was used from 1731 patients with chronic disease(s) who participated in a nationwide Dutch panel-study. One third of people with chronic illness encounter basic (e.g. financial, housing, employment) or social (e.g. partner, children, sexual or leisure) problems in their daily life. Younger people, people with poor health and people with physical limitations are more likely to have everyday problems. Experiencing basic problems is related to less active coping behaviour, while experiencing social problems is related to lower levels of symptom management and less active coping behaviour. The extent of everyday problems interfering with self-management of people with chronic illness depends on the type of everyday problems encountered, as well as on the type of self-management activities at stake. Healthcare providers should pay attention to the life context of people with chronic illness during consultations, as patients' ability to manage their illness is related to it.

  3. BOOK REVIEW: Chaos: A Very Short Introduction

    NASA Astrophysics Data System (ADS)

    Klages, R.

    2007-07-01

    This book is a new volume of a series designed to introduce the curious reader to anything from ancient Egypt and Indian philosophy to conceptual art and cosmology. Very handy in pocket size, Chaos promises an introduction to fundamental concepts of nonlinear science using mathematics that is `no more complicated than X=2'. Anyone who has ever tried to give a popular science account of research knows that this is a more challenging task than writing an ordinary research article. Lenny Smith brilliantly succeeds in explaining in words, in pictures and by using intuitive models the essence of mathematical dynamical systems theory and time series analysis as it applies to the modern world. In a more technical part he introduces the basic terms of nonlinear theory by means of simple mappings. He masterfully embeds this analysis in its social, historical and cultural context by using numerous examples, from poems and paintings through chess and rabbits to Olbers' paradox, card games and `phynance'. Fundamental problems in the modelling of nonlinear systems like the weather, sunspots or golf balls falling through an array of nails are discussed from the point of view of mathematics, physics and statistics, touching upon philosophical issues along the way. At variance with Laplace's demon, Smith's 21st-century demon makes `real world' observations only with limited precision. This poses a severe problem for predictions derived from complex chaotic models, where small variations of initial conditions typically yield totally different outcomes. As Smith argues, this difficulty has direct implications for decision-making in everyday modern life. However, it also calls for an inherently probabilistic theory, which somewhat reminds us of what we are used to in the microworld. There is little to criticise in this nice little book, except that some figures are of poor quality and thus do not really reflect the beauty of fractals and other wonderful objects in this field.
I feel that occasionally the book also gets a bit too intricate for the complete layman, and experts may not agree on all details of the more conceptual discussions. Altogether I thoroughly enjoyed reading this book. It was a happy companion while travelling and nice bedtime literature. It is furthermore an excellent reminder of the `big picture' underlying nonlinear science as it applies to the real world. I will gladly recommend this book as background literature for students in my introductory course on dynamical systems. However, the book will be of interest to anyone looking for a very short account of fundamental problems and principles in modern nonlinear science.

  4. Recent advances in reduction methods for nonlinear problems. [in structural mechanics

    NASA Technical Reports Server (NTRS)

    Noor, A. K.

    1981-01-01

    Status and some recent developments in the application of reduction methods to nonlinear structural mechanics problems are summarized. The aspects of reduction methods discussed herein include: (1) selection of basis vectors in nonlinear static and dynamic problems, (2) application of reduction methods in nonlinear static analysis of structures subjected to prescribed edge displacements, and (3) use of reduction methods in conjunction with mixed finite element models. Numerical examples are presented to demonstrate the effectiveness of reduction methods in nonlinear problems. Also, a number of research areas which have high potential for application of reduction methods are identified.
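The core reduction idea, approximating the full unknown with a few basis vectors and solving the projected (Rayleigh-Ritz/Galerkin) equations, can be sketched on a model nonlinear static problem. The model, basis choice and numbers below are our own, not from the paper:

```python
import numpy as np

# Reduction sketch for a nonlinear static problem K u + u**3 = lam * f:
# approximate u = Phi @ q with basis vectors from cheap full solutions,
# then run Newton in the small reduced space.
n = 50
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D stiffness matrix
f = np.ones(n)
residual = lambda u, lam: K @ u + u ** 3 - lam * f

def newton_full(lam, iters=100):
    u = np.zeros(n)
    for _ in range(iters):
        J = K + np.diag(3 * u ** 2)                    # full Jacobian
        u = u - np.linalg.solve(J, residual(u, lam))
    return u

# Basis vectors: orthonormalized full solutions at two small load levels.
Phi, _ = np.linalg.qr(np.c_[newton_full(0.5), newton_full(1.0)])

def newton_reduced(lam, iters=100):
    q = np.zeros(Phi.shape[1])
    for _ in range(iters):
        u = Phi @ q
        r = Phi.T @ residual(u, lam)                   # Galerkin residual
        Jr = Phi.T @ (K + np.diag(3 * u ** 2)) @ Phi   # reduced Jacobian
        q = q - np.linalg.solve(Jr, r)
    return Phi @ q

u_red = newton_reduced(2.0)    # 2 unknowns instead of 50
u_ref = newton_full(2.0)
```

The effectiveness of such schemes hinges on the basis-vector selection, which is exactly the first aspect the abstract highlights.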

  5. A Perturbation Analysis of Harmonics Generation from Saturated Elements in Power Systems

    NASA Astrophysics Data System (ADS)

    Kumano, Teruhisa

    Nonlinear phenomena such as saturation of magnetic flux have considerable effects in power system analysis. It is reported that a failure in a real 500 kV system triggered islanding operation, where the resulting even harmonics caused malfunctions in protective relays. It is also reported that the major origin of this wave distortion is nothing but unidirectional magnetization of the transformer iron core. Time simulation is widely used today to analyze this type of phenomenon, but it has two basic shortcomings. One is that time simulation takes too much computing time in the vicinity of inflection points of the saturation characteristic curve, because an iterative procedure such as Newton-Raphson (N-R) must be used, and such methods tend to be caught in ill-conditioned numerical hunting. The other is that such simulation methods sometimes do not aid intuitive understanding of the studied phenomenon, because the whole set of nonlinear equations is treated in matrix form and not divided into understandable parts as is done for linear systems. This paper proposes a new computation scheme based on the perturbation method. Magnetic saturation in the iron cores of a generator and a transformer is taken into account. The proposed method addresses the first shortcoming of the N-R based time simulation stated above: no iterative process is used to reduce the equation residual; instead, a perturbation series is computed, which avoids the ill-conditioning problem. Users only have to calculate the perturbation terms one by one until the necessary accuracy is reached. In the numerical example treated in the present paper, the first-order perturbation already achieves reasonably high accuracy, which means very fast computation. In the numerical study, three nonlinear elements are considered. The calculated results are almost identical to those of the conventional Newton-Raphson based time simulation, which shows the validity of the method.
The proposed method would be effective in screening studies where many cases must be analyzed.
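The term-by-term perturbation idea, no Newton iteration at all, can be shown on a weakly nonlinear algebraic equation. The equation below is our own toy stand-in for a saturation characteristic, not the paper's power-system model:

```python
import numpy as np

# Perturbation-series solution of x = a + eps * x**3, computed term by term.
# Substituting x = x0 + eps*x1 + eps**2*x2 + ... and matching powers of eps:
#   x0 = a,   x1 = x0**3,   x2 = 3 * x0**2 * x1
a, eps = 0.8, 0.05
x0 = a
x1 = x0 ** 3
x2 = 3 * x0 ** 2 * x1
x_pert = x0 + eps * x1 + eps ** 2 * x2        # second-order perturbation

# Reference: Newton iteration on the same equation (the approach the
# paper's method avoids).
x = a
for _ in range(100):
    x = x - (x - a - eps * x ** 3) / (1 - 3 * eps * x ** 2)
```

For small eps the low-order series already matches the iterated solution closely, which mirrors the paper's observation that first-order perturbation is often accurate enough for screening.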

  6. A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow

    NASA Technical Reports Server (NTRS)

    Baker, Gregory; Siegel, Michael; Tanveer, Saleh

    1995-01-01

    We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. This situation is disastrous for numerical computation, as small round-off errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out.

  7. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    NASA Astrophysics Data System (ADS)

    Kurien, Binoy George

    Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.
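The reason closure phase is atmosphere-proof, which underlies the classical observables the abstract generalizes, is a one-line identity: per-telescope phase errors cancel around any baseline triangle. A minimal sketch (numbers made up):

```python
import numpy as np

# Measured baseline phase is corrupted by per-telescope atmospheric terms:
#   phi_meas(i, j) = phi_true(i, j) + e_i - e_j.
# Summing around a triangle (0,1) + (1,2) + (2,0) cancels every e term.
rng = np.random.default_rng(3)
phi_true = {(0, 1): 0.4, (1, 2): -1.1, (2, 0): 0.7}   # radians (made up)
e = rng.normal(0.0, 2.0, 3)                            # telescope phase errors

phi_meas = {(i, j): p + e[i] - e[j] for (i, j), p in phi_true.items()}

closure_true = sum(phi_true.values())
closure_meas = sum(phi_meas.values())                  # equals closure_true
```

Individual baseline phases are badly corrupted, yet the closure sum is untouched; the cost, as the abstract notes, is that only such combinations survive, which is what makes the inverse problem of recovering the full Fourier phase set nontrivial.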

  8. The Structure of Ill-Structured (and Well-Structured) Problems Revisited

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2016-01-01

    In his 1973 article "The Structure of ill structured problems", Herbert Simon proposed that solving ill-structured problems could be modeled within the same information-processing framework developed for solving well-structured problems. This claim is reexamined within the context of over 40 years of subsequent research and theoretical…

  9. Performance of subjects with and without severe mental illness on a clinical test of problem solving.

    PubMed

    Marshall, R C; McGurk, S R; Karow, C M; Kairy, T J; Flashman, L A

    2006-06-01

    Severe mental illness is associated with impairments in executive functions, such as conceptual reasoning, planning, and strategic thinking, all of which impact problem solving. The present study examined the utility of a novel assessment tool for problem solving, the Rapid Assessment of Problem Solving Test (RAPS), in persons with severe mental illness. Subjects were 47 outpatients with severe mental illness and an equal number of healthy controls matched for age and gender. Results confirmed all hypotheses with respect to how subjects with severe mental illness would perform on the RAPS. Specifically, the severely mentally ill subjects (1) solved fewer problems on the RAPS, (2) when they did solve problems on the test, did so far less efficiently than their healthy counterparts, and (3) differed markedly from controls in the types of questions asked on the RAPS. The healthy control subjects tended to take a systematic, organized, but not always optimal approach to solving problems on the RAPS. The subjects with severe mental illness used some of the problem solving strategies of the healthy controls, but their performance was less consistent and tended to deteriorate when the complexity of the problem solving task increased. This was reflected by a high degree of guessing in lieu of asking constraint questions, particularly if a category-limited question was insufficient to continue the problem solving effort.

  10. Regolith thermal property inversion in the LUNAR-A heat-flow experiment

    NASA Astrophysics Data System (ADS)

    Hagermann, A.; Tanaka, S.; Yoshida, S.; Fujimura, A.; Mizutani, H.

    2001-11-01

    In 2003, two penetrators of the LUNAR-A mission of ISAS will investigate the internal structure of the Moon by conducting seismic and heat-flow experiments. Heat flow is the product of the thermal gradient ∂T/∂z and the thermal conductivity λ of the lunar regolith. To measure the thermal conductivity (or diffusivity), each penetrator will carry five thermal property sensors consisting of small disc heaters. The thermal response Ts(t) of the heater itself to a constant known power supply of approx. 50 mW serves as the data for the subsequent interpretation. Horai et al. (1991) found a forward analytical solution to the problem of determining the thermal inertia λρc of the regolith for constant thermal properties and a simplified geometry. In the inversion, the problem of deriving the unknown thermal properties of a medium from known heat sources and temperatures is an Identification Heat Conduction Problem (IDHCP), an ill-posed inverse problem. Assuming that the thermal conductivity λ and heat capacity ρc are linear functions of temperature (which is reasonable in most cases), one can apply a Kirchhoff transformation to linearize the heat conduction equation, which minimizes computing time. Then the error functional, i.e. the difference between the measured and predicted temperature responses of the heater, can be minimized, thus solving for the thermal diffusivity κ = λ/(ρc), which completes the set of parameters needed for a detailed description of the thermal properties of the lunar regolith. Results of model calculations will be presented, in which synthetic data and calibration data are used to invert the unknown thermal diffusivity of the medium by means of a modified Newton method. Due to the ill-posedness of the problem, the number of parameters to be solved for should be limited. As the model calculations reveal, a homogeneous regolith allows for a fast and accurate inversion.
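    The Kirchhoff transformation invoked above has a standard form; the following is a hedged sketch in our own notation, not necessarily the authors' exact formulation:

```latex
% Kirchhoff variable for temperature-dependent conductivity \lambda(T):
\theta(T) \;=\; \frac{1}{\lambda_0}\int_{T_0}^{T}\lambda(T')\,\mathrm{d}T' ,
\qquad\text{so that}\qquad
\nabla\cdot\bigl(\lambda(T)\,\nabla T\bigr) \;=\; \lambda_0\,\nabla^{2}\theta .
% Since \partial\theta/\partial t = (\lambda(T)/\lambda_0)\,\partial T/\partial t,
% the heat equation \rho c\,\partial T/\partial t = \nabla\cdot(\lambda\nabla T)
% transforms into
\frac{\partial\theta}{\partial t} \;=\; \kappa\,\nabla^{2}\theta ,
\qquad \kappa \;=\; \frac{\lambda}{\rho c} ,
% which is linear in \theta whenever the diffusivity \kappa is constant,
% consistent with the abstract's assumption of \lambda and \rho c linear in T.
```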

  11. The challenge of gun control for mental health advocates.

    PubMed

    Pandya, Anand

    2013-09-01

    Mass shootings, such as the 2012 Newtown massacre, have repeatedly led to political discourse about limiting access to guns for individuals with serious mental illness. Although the political climate after such tragic events poses a considerable challenge to mental health advocates who wish to minimize unsympathetic portrayals of those with mental illness, such media attention may be a rare opportunity to focus attention on risks of victimization of those with serious mental illness and barriers to obtaining psychiatric care. Current federal gun control laws may discourage individuals from seeking psychiatric treatment and describe individuals with mental illness using anachronistic, imprecise, and gratuitously stigmatizing language. This article lays out potential talking points that may be useful after future gun violence.

  12. Phase of Illness in palliative care: Cross-sectional analysis of clinical data from community, hospital and hospice patients.

    PubMed

    Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss Em

    2018-02-01

    Phase of Illness describes stages of advanced illness according to care needs of the individual, family and suitability of care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider strength of associations between these measures and Phase of Illness. Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function measured using Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs measured using items on Palliative Care Problem Severity Scale. Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in stable phase (65.9, 95% confidence interval = 63.4-68.3) and lowest in dying phase (16.6, 95% confidence interval = 15.3-17.8). Mean pain was highest in unstable phase (1.43, 95% confidence interval = 1.36-1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness (χ² = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in deteriorating phase than unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01-1.49). Forty-nine percent of the variance in Phase of Illness is explained by Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale.
Lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation.

  13. Full Waveform Inversion for Seismic Velocity And Anelastic Losses in Heterogeneous Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Askan, A.; /Carnegie Mellon U.; Akcelik, V.

    2009-04-30

    We present a least-squares optimization method for solving the nonlinear full waveform inverse problem of determining the crustal velocity and intrinsic attenuation properties of sedimentary valleys in earthquake-prone regions. Given a known earthquake source and a set of seismograms generated by the source, the inverse problem is to reconstruct the anelastic properties of a heterogeneous medium with possibly discontinuous wave velocities. The inverse problem is formulated as a constrained optimization problem, where the constraints are the partial and ordinary differential equations governing the anelastic wave propagation from the source to the receivers in the time domain. This leads to a variational formulation in terms of the material model plus the state variables and their adjoints. We employ a wave propagation model in which the intrinsic energy-dissipating nature of the soil medium is modeled by a set of standard linear solids. The least-squares optimization approach to inverse wave propagation presents the well-known difficulties of ill-posedness and multiple minima. To overcome ill-posedness, we include a total variation regularization functional in the objective function, which annihilates highly oscillatory material property components while preserving discontinuities in the medium. To treat multiple minima, we use a multilevel algorithm that solves a sequence of subproblems on increasingly finer grids with increasingly higher frequency source components to remain within the basin of attraction of the global minimum. We illustrate the methodology with high-resolution inversions for two-dimensional sedimentary models of the San Fernando Valley, under SH-wave excitation. We perform inversions for both the seismic velocity and the intrinsic attenuation using synthetic waveforms at the observer locations as pseudo-observed data.
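    The total variation regularizer described above can be sketched in one dimension: it penalizes oscillatory model components while tolerating a sharp discontinuity of the same total height. A toy comparison with invented numbers (the paper's actual functional is multidimensional and smoothed for differentiability):

```python
def tv(m, eps=1e-8):
    """Smoothed total variation of a 1-D material model m (a list of values).
    eps keeps the square root differentiable at zero, a common device."""
    return sum(((m[i + 1] - m[i]) ** 2 + eps) ** 0.5 for i in range(len(m) - 1))

def objective(m, data_misfit, alpha):
    """Least-squares misfit plus TV penalty, mirroring the abstract's objective."""
    return data_misfit(m) + alpha * tv(m)

# TV charges a single jump only its height, but charges oscillation for
# every swing: both profiles below start at 1 and end at 3.
step   = [1.0, 1.0, 1.0, 3.0, 3.0, 3.0]   # one discontinuity, TV ~ 2
wiggly = [1.0, 2.0, 1.0, 3.0, 2.0, 3.0]   # oscillatory, TV ~ 6
assert tv(step) < tv(wiggly)              # discontinuity preserved, wiggles punished
```

    This is why TV regularization suits media with "possibly discontinuous wave velocities": unlike a smoothness (gradient-squared) penalty, it does not blur interfaces.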

  14. New Nonlinear Multigrid Analysis

    NASA Technical Reports Server (NTRS)

    Xie, Dexuan

    1996-01-01

    The nonlinear multigrid is an efficient algorithm for solving the system of nonlinear equations arising from the numerical discretization of nonlinear elliptic boundary value problems. In this paper, we present a new nonlinear multigrid analysis as an extension of the linear multigrid theory presented by Bramble. In particular, we prove the convergence of the nonlinear V-cycle method for a class of mildly nonlinear second-order elliptic boundary value problems which do not have full elliptic regularity.

  15. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    NASA Astrophysics Data System (ADS)

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

    This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel that satisfies nonlinear boundary conditions, the standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method by combining the reproducing kernel Hilbert space method with a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature. In this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, probably for the first time. Some numerical results are given to demonstrate the applicability of the method.
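    The shooting idea behind such methods can be illustrated on a much simpler second-order toy problem (a generic shooting-plus-bisection sketch, not the paper's reproducing kernel method): integrate from x = 0 with a guessed initial slope and adjust the guess until a nonlinear condition at x = 1 is satisfied. Here y'' = y with y(0) = 0 and the nonlinear boundary condition y(1)³ + y(1) = 2, whose exact solution is y = sinh(x)/sinh(1):

```python
import math

def integrate(s, n=1000):
    """RK4 on y'' = y over [0,1] with y(0)=0, y'(0)=s; returns y(1)."""
    h, y, v = 1.0 / n, 0.0, s
    f = lambda y, v: (v, y)          # (y', v') with v = y'
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h/2 * k1[0], v + h/2 * k1[1])
        k3 = f(y + h/2 * k2[0], v + h/2 * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

def g(y1):
    """Nonlinear boundary condition at x = 1, satisfied exactly when y(1) = 1."""
    return y1 ** 3 + y1 - 2.0

# Bisect on the shooting parameter s = y'(0) until g(y(1; s)) = 0.
lo, hi = 0.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if g(integrate(lo)) * g(integrate(mid)) <= 0:
        hi = mid
    else:
        lo = mid
s = 0.5 * (lo + hi)
assert abs(s - 1.0 / math.sinh(1.0)) < 1e-6   # exact slope for y = sinh(x)/sinh(1)
```

    The paper replaces the inner ODE solve with a reproducing kernel representation and handles a fourth-order equation, but the outer logic of iterating on free initial data until the nonlinear boundary condition holds is the same.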

  16. Scarlet fever.

    PubMed

    2016-04-27

    Essential facts Scarlet fever is characterised by a rash that usually accompanies a sore throat and flushed cheeks. It is mainly a childhood illness. While this contagious disease rarely poses a danger to life today, outbreaks in the past led to many deaths.

  17. 28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...

  18. 28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...

  19. 28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...

  20. Ill-defined problem solving in amnestic mild cognitive impairment: linking episodic memory to effective solution generation.

    PubMed

    Sheldon, S; Vandermorris, S; Al-Haj, M; Cohen, S; Winocur, G; Moscovitch, M

    2015-02-01

    It is well accepted that the medial temporal lobes (MTL), and the hippocampus specifically, support episodic memory processes. Emerging evidence suggests that these processes also support the ability to effectively solve ill-defined problems which are those that do not have a set routine or solution. To test the relation between episodic memory and problem solving, we examined the ability of individuals with single domain amnestic mild cognitive impairment (aMCI), a condition characterized by episodic memory impairment, to solve ill-defined social problems. Participants with aMCI and age and education matched controls were given a battery of tests that included standardized neuropsychological measures, the Autobiographical Interview (Levine et al., 2002) that scored for episodic content in descriptions of past personal events, and a measure of ill-defined social problem solving. Corroborating previous findings, the aMCI group generated less episodically rich narratives when describing past events. Individuals with aMCI also generated less effective solutions when solving ill-defined problems compared to the control participants. Correlation analyses demonstrated that the ability to recall episodic elements from autobiographical memories was positively related to the ability to effectively solve ill-defined problems. The ability to solve these ill-defined problems was related to measures of activities of daily living. In conjunction with previous reports, the results of the present study point to a new functional role of episodic memory in ill-defined goal-directed behavior and other non-memory tasks that require flexible thinking. Our findings also have implications for the cognitive and behavioural profile of aMCI by suggesting that the ability to effectively solve ill-defined problems is related to sustained functional independence. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Legal protection of the right to work and employment for persons with mental health problems: a review of legislation across the world.

    PubMed

    Nardodkar, Renuka; Pathare, Soumitra; Ventriglio, Antonio; Castaldelli-Maia, João; Javate, Kenneth R; Torales, Julio; Bhugra, Dinesh

    2016-08-01

    The right to work and employment is indispensable for the social integration of persons with mental health problems. This study examined whether existing laws pose structural barriers to the realization of the right to work and employment of persons with mental health problems across the world. It reviewed disability-specific and human rights legislation, and labour laws, of all UN Member States in the context of Article 27 of the UN Convention on the Rights of Persons with Disabilities (CRPD). It was found that laws in 62% of countries explicitly mention mental disability/impairment/illness in the definition of disability. In 64% of countries, laws prohibit discrimination against persons with mental health problems during recruitment; in one-third of countries, laws prohibit discontinuation of employment. More than half (56%) of the countries have laws in place which offer access to reasonable accommodation in the workplace. In 59% of countries, laws promote employment of persons with mental health problems through different affirmative actions. Nearly 50 years after the adoption of the International Covenant on Economic, Social, and Cultural Rights and 10 years after the adoption of the CRPD by the UN General Assembly, legal discrimination against persons with mental health problems continues to exist globally. Countries and policy-makers need to implement legislative measures to ensure non-discrimination of persons with mental health problems during employment.

  2. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model, that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX) where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
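    The conventional fixed-regularization baseline that LS-APC generalizes can be sketched in a few lines. Here is a hypothetical two-component source term and a toy SRS matrix (numbers invented for illustration), solved by Tikhonov-regularized least squares with a hand-picked α, which is exactly the kind of tuning parameter the paper's variational Bayes scheme estimates from data instead:

```python
def tikhonov_solve(M, y, alpha):
    """Solve min ||M x - y||^2 + alpha ||x||^2 for a 2-parameter source term
    via the normal equations (M^T M + alpha I) x = M^T y, in closed 2x2 form."""
    a = sum(r[0] * r[0] for r in M) + alpha
    b = sum(r[0] * r[1] for r in M)
    d = sum(r[1] * r[1] for r in M) + alpha
    p = sum(r[0] * yi for r, yi in zip(M, y))
    q = sum(r[1] * yi for r, yi in zip(M, y))
    det = a * d - b * b
    return ((d * p - b * q) / det, (a * q - b * p) / det)

# Toy SRS matrix (3 observations, 2 source-term components), noiseless data.
M = [[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]]
x_true = (2.0, 1.0)
y = [r[0] * x_true[0] + r[1] * x_true[1] for r in M]

x = tikhonov_solve(M, y, alpha=1e-8)
assert abs(x[0] - 2.0) < 1e-6 and abs(x[1] - 1.0) < 1e-6
```

    With noisy data the recovered source depends strongly on α, which is why treating such uncertainties as quantities to be estimated, rather than tuned, is the paper's main point.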

  3. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and followed in three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding values predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the source estimation after minimizing the representativity errors.
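    The second and third steps above amount to an ordinary least-squares regression between measured and predicted concentrations, followed by a rescaling. A minimal sketch with invented numbers, not the Idaho experiment data:

```python
def linfit(pred, meas):
    """Least-squares fit meas ≈ a*pred + b (the step-2 regression)."""
    n = len(pred)
    mx, my = sum(pred) / n, sum(meas) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(pred, meas))
         / sum((x - mx) ** 2 for x in pred))
    return a, my - a * mx

# Predicted concentrations from the retrieved source vs. measurements that
# carry a hypothetical multiplicative representativity bias plus an offset.
pred = [1.0, 2.0, 3.0, 4.0]
meas = [0.9 * c + 0.1 for c in pred]

a, b = linfit(pred, meas)
# Step 3: apply the fitted relationship (equivalently, modify the adjoint
# functions) so predictions align with the measurements.
corrected = [a * c + b for c in pred]
assert all(abs(c - m) < 1e-12 for c, m in zip(corrected, meas))
```

    In the paper the correction is folded into the adjoint functions before re-running the inversion; the regression itself is this standard one-variable fit.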

  4. Three-dimensional ionospheric tomography reconstruction using the model function approach in Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu

    2016-12-01

    Ionospheric tomography is based on the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distributions. Due to the incomplete measurements provided by the satellite-receiver geometry, it is a typical ill-posed problem, and how to overcome the ill-posedness is still a crucial research topic. In this paper, the Tikhonov regularization method is used and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by the product of the background model variance and a location-dependent spatial correlation, and the correlation model is developed using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used to make independent validations. Both the test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.

  5. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    NASA Astrophysics Data System (ADS)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we will touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are to be shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of a tsunami and/or earthquake and includes the possibility of solving both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve this problem and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution.
The software system we developed is intended to create a «no frost» technology, realizing a steady stream of direct and inverse problem solving: solving the direct problem, visualizing and comparing with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of an operational workstation emergency tool that could be used by an emergency duty person in real time.

  6. Fuzzy distributed cooperative tracking for a swarm of unmanned aerial vehicles with heterogeneous goals

    NASA Astrophysics Data System (ADS)

    Kladis, Georgios P.; Menon, Prathyush P.; Edwards, Christopher

    2016-12-01

    This article proposes a systematic analysis for a tracking problem which ensures cooperation amongst a swarm of unmanned aerial vehicles (UAVs), modelled as nonlinear systems with linear and angular velocity constraints, in order to achieve different goals. A distributed Takagi-Sugeno (TS) framework design is adopted for the representation of the nonlinear model of the dynamics of the UAVs. The distributed control law which is introduced is composed of both node- and network-level information. Firstly, feedback gains are synthesised using a parallel distributed compensation (PDC) control law structure for a collection of isolated UAVs, ignoring communications among the swarm. Secondly, based on an alternation-like procedure, the resulting feedback gains are used to determine Lyapunov matrices which are utilised at network level to incorporate into the control law the relative differences in the states of the vehicles, and to induce cooperative behaviour. Eventually, stability is guaranteed for the entire swarm. The control synthesis is performed using tools from linear control theory: in particular, the design criteria are posed as linear matrix inequalities (LMIs). An example based on a UAV tracking scenario is included to outline the efficacy of the approach.

  7. Evaluation of the site effect with Heuristic Methods

    NASA Astrophysics Data System (ADS)

    Torres, N. N.; Ortiz-Aleman, C.

    2017-12-01

    The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimation of variations in ground motion can lead to significant contributions to seismic hazard assessment, in order to reduce human and economic losses. Site response estimation can be posed as a parameterized inversion approach which allows separating source and path effects. The generalized inversion (Field and Jacob, 1995) represents one of the alternative methods to estimate the local seismic response, and involves solving a strongly nonlinear multiparametric problem. In this work, the local seismic response was estimated using global optimization methods (genetic algorithms and simulated annealing), which allowed us to increase the range of explored solutions in a nonlinear search, as compared to other conventional linear methods. Using the VEOX Network velocity records collected from August 2007 to March 2009, we estimate the source, path, and site parameters corresponding to the amplitude spectra of the S wave of the velocity seismic records. We can establish that the inverted parameters resulting from this simultaneous inversion approach show excellent agreement, not only in terms of the fit between observed and calculated spectra, but also when compared with previous work by several authors.
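    As a hedged illustration of the global-optimization side of this approach, here is a generic simulated annealing loop applied to a toy two-parameter misfit surrogate with a known minimum (not the actual spectral inversion of the paper):

```python
import math, random

def anneal(cost, x0, step=0.5, t0=1.0, cooling=0.995, iters=4000, seed=1):
    """Minimise cost over R^n by simulated annealing with Metropolis acceptance:
    worse moves are accepted with probability exp(-delta/T), and the
    temperature T decays geometrically so the search turns greedy late on."""
    rng = random.Random(seed)
    x, fx, t = list(x0), cost(x0), t0
    for _ in range(iters):
        cand = [xi + rng.uniform(-step, step) for xi in x]
        fc = cost(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
        t *= cooling
    return x, fx

# Toy misfit surrogate with its global minimum at (2, -1).
cost = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
x, fx = anneal(cost, [0.0, 0.0])
assert fx < 0.25   # the anneal should land near the basin bottom
```

    The appeal for nonlinear multiparametric inversion is that acceptance of occasional uphill moves lets the search escape local minima that would trap a purely linearized or gradient-based method.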

  8. Compensation of significant parametric uncertainties using sliding mode online learning

    NASA Astrophysics Data System (ADS)

    Schnetter, Philipp; Kruger, Thomas

    An augmented nonlinear inverse dynamics (NID) flight control strategy using sliding mode online learning for a small unmanned aircraft system (UAS) is presented. Because parameters identified for this class of aircraft are often not valid throughout the complete flight envelope, aerodynamic parameters used for model-based control strategies may show significant deviations. For the concept of feedback linearization this leads to inversion errors, which, in combination with the distinctive susceptibility of small UAS to atmospheric turbulence, pose a demanding control task for these systems. In this work an adaptive flight control strategy using feedforward neural networks for counteracting such nonlinear effects is augmented with the concept of sliding mode control (SMC). SMC learning is derived from variable structure theory; it treats a neural network and its training as a control problem. It is shown that by dynamically calculating the learning rates, stability can be guaranteed, thus increasing robustness against external disturbances and system failures. With the resulting higher speed of convergence, a wide range of simultaneously occurring disturbances can be compensated. The SMC-based flight controller is tested and compared to the standard gradient descent (GD) backpropagation algorithm under the influence of significant model uncertainties and system failures.
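    The idea of computing the learning rate dynamically to guarantee stability can be illustrated by the much simpler normalized-LMS rule, in which the per-step rate is scaled by the input energy so the update stays stable for any input magnitude. This is a stand-in sketch, not the paper's SMC-derived law:

```python
def nlms(samples, n_weights, mu=0.5, eps=1e-6):
    """Online linear learner with a per-step learning rate mu / ||x||^2.
    Normalising by the input energy bounds each update, a much simpler
    stability device than the sliding-mode-derived rates in the paper."""
    w = [0.0] * n_weights
    for x, d in samples:
        y = sum(wi * xi for wi, xi in zip(w, x))     # prediction
        e = d - y                                    # instantaneous error
        rate = mu / (eps + sum(xi * xi for xi in x))  # dynamic learning rate
        w = [wi + rate * e * xi for wi, xi in zip(w, x)]
    return w

# Learn w_true = (1.5, -0.8) online from a noiseless input stream whose
# components cycle with different periods (persistent excitation).
w_true = (1.5, -0.8)
xs = [((i % 7) - 3.0, ((3 * i) % 5) - 2.0) for i in range(400)]
samples = [(x, w_true[0] * x[0] + w_true[1] * x[1]) for x in xs]

w = nlms(samples, 2)
assert abs(w[0] - 1.5) < 1e-3 and abs(w[1] + 0.8) < 1e-3
```

    With a fixed (unnormalized) rate the same loop can diverge when inputs are large; making the rate a function of the data is what buys the guarantee, which is the spirit of the SMC learning-rate calculation described above.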

  9. MEG and fMRI Fusion for Non-Linear Estimation of Neural and BOLD Signal Changes

    PubMed Central

    Plis, Sergey M.; Calhoun, Vince D.; Weisend, Michael P.; Eichele, Tom; Lane, Terran

    2010-01-01

    The combined analysis of magnetoencephalography (MEG)/electroencephalography and functional magnetic resonance imaging (fMRI) measurements can lead to improvement in the description of the dynamical and spatial properties of brain activity. In this paper we empirically demonstrate this improvement using simulated and recorded task-related MEG and fMRI activity. Neural activity estimates were derived using a dynamic Bayesian network with continuous real-valued parameters by means of a sequential Monte Carlo technique. In synthetic data, we show that MEG and fMRI fusion improves estimation of the indirectly observed neural activity and smooths tracking of the blood oxygenation level dependent (BOLD) response. In recordings of task-related neural activity, the combination of MEG and fMRI produces a result with greater signal-to-noise ratio, which confirms the expectation arising from the nature of the experiment. The highly non-linear model of the BOLD response poses a difficult inference problem for neural activity estimation; computational requirements are also high due to the time and space complexity. We show that joint analysis of the data improves the system's behavior by stabilizing the differential equations system and by requiring fewer computational resources. PMID:21120141
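    A minimal bootstrap particle filter conveys the sequential Monte Carlo machinery referenced above, here on a toy one-dimensional random-walk state-space model rather than the paper's dynamic Bayesian network of neural and BOLD dynamics:

```python
import math, random

def particle_filter(obs, n=2000, q=0.1, r=0.5, seed=0):
    """Bootstrap sequential Monte Carlo filter for the toy model
    x_t = x_{t-1} + N(0, q^2),  y_t = x_t + N(0, r^2).
    Returns the posterior-mean state track."""
    rng = random.Random(seed)
    parts = [rng.gauss(0.0, 1.0) for _ in range(n)]               # prior ensemble
    means = []
    for y in obs:
        parts = [p + rng.gauss(0.0, q) for p in parts]            # propagate
        w = [math.exp(-0.5 * ((y - p) / r) ** 2) for p in parts]  # likelihood
        s = sum(w)
        w = [wi / s for wi in w]                                  # normalise
        means.append(sum(wi * p for wi, p in zip(w, parts)))      # posterior mean
        parts = rng.choices(parts, weights=w, k=n)                # resample
    return means

# Synthetic slowly varying state observed through heavy noise.
rng = random.Random(42)
truth, x = [], 0.0
for _ in range(40):
    x += rng.gauss(0.0, 0.1)
    truth.append(x)
obs = [xt + rng.gauss(0.0, 0.5) for xt in truth]

est = particle_filter(obs)
rmse = (sum((e - xt) ** 2 for e, xt in zip(est, truth)) / len(truth)) ** 0.5
assert rmse < 0.45   # the filtered track beats the raw observation noise of 0.5
```

    Replacing the linear-Gaussian toy transition and observation models with nonlinear ones (such as a BOLD response model) leaves this propagate-weight-resample loop unchanged, which is exactly why sequential Monte Carlo suits the paper's highly non-linear inference problem.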

  10. Pre-Service Elementary Teachers' Motivation and Ill-Structured Problem Solving in Korea

    ERIC Educational Resources Information Center

    Kim, Min Kyeong; Cho, Mi Kyung

    2016-01-01

    This article examines the use and application of an ill-structured problem to pre-service elementary teachers in Korea in order to find implications of pre-service teacher education with regard to contextualized problem solving by analyzing experiences of ill-structured problem solving. Participants were divided into small groups depending on the…

  11. Implementation and Validation of an Anisotropic Plasticity Model for Clay and a Two-Scale Micropolar Constitutive Model for Sand

    NASA Astrophysics Data System (ADS)

    Yonten, Karma

    As a multi-phase material, soil exhibits highly nonlinear, anisotropic, and inelastic behavior. While it may be impractical for one constitutive model to address all features of soil behavior, one can identify the essential aspects of the soil's stress-strain-strength response for a particular class of problems and develop a suitable constitutive model that captures those aspects. Here, attention is given to two important features of the soil stress-strain-strength behavior: anisotropy and post-failure response. An anisotropic soil plasticity model is implemented to investigate the significance of initial and induced anisotropy on the response of geo-structures founded on cohesive soils. The model is shown to produce realistic responses for a variety of over-consolidation ratios. Moreover, the performance of the model is assessed in a boundary value problem in which a cohesive soil is subjected to the weight of a newly constructed soil embankment. The significance of incorporating anisotropy is clearly demonstrated by comparing the results of the simulation using the model with those obtained by using an isotropic plasticity model. To investigate the post-failure response of soils, the issue of strain localization in geo-structures is considered. Post-failure analysis of geo-structures using numerical techniques such as mesh-based or mesh-free methods is often faced with convergence issues which may, at times, lead to incorrect failure mechanisms. This is due to the fact that the majority of existing constitutive models are formulated within the framework of classical continuum mechanics, which leads to ill-posed governing equations at the onset of localization. To overcome this challenge, a critical state two-surface plasticity model is extended to incorporate the micro-structural mechanisms that become significant within the shear band. The extended model is implemented to study the strain localization of granular soils in drained and undrained conditions.
It is demonstrated that the extended model is capable of capturing salient features of soil behavior in pre- and post-failure regimes. The effects of soil particle size, initial density and confining pressure on the thickness and orientation of shear band are investigated and compared with the observed behavior of soils.
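    The two-surface critical-state model itself is not reproduced here, but the incremental structure shared by such plasticity implementations can be illustrated with a minimal, hypothetical 1D elastoplastic stress update (elastic predictor / plastic corrector with linear isotropic hardening; every material constant below is invented for illustration):

```python
# Hypothetical 1D rate-independent plasticity with linear isotropic hardening --
# a much-simplified stand-in for the critical-state two-surface model above.
E, H, sigma_y0 = 200e3, 10e3, 250.0  # elastic modulus, hardening modulus, yield stress

def return_map(strain_path):
    """Incremental stress update: elastic predictor followed by plastic corrector."""
    sigma, ep = 0.0, 0.0  # current stress and accumulated plastic strain
    history = []
    prev = 0.0
    for eps in strain_path:
        d_eps, prev = eps - prev, eps
        sigma_trial = sigma + E * d_eps              # elastic predictor
        f = abs(sigma_trial) - (sigma_y0 + H * ep)   # yield function
        if f <= 0.0:
            sigma = sigma_trial                      # step is purely elastic
        else:
            d_gamma = f / (E + H)                    # plastic multiplier
            sign = 1.0 if sigma_trial > 0.0 else -1.0
            sigma = sigma_trial - E * d_gamma * sign # plastic corrector
            ep += d_gamma
        history.append(sigma)
    return history

# monotonic loading to 0.4% strain in 40 equal increments
stresses = return_map([i * 1e-4 for i in range(1, 41)])
```

    After yield, the stress-strain slope drops from E to E*H/(E+H), the elastoplastic tangent; the two-surface model in the abstract replaces this scalar update with evolving yield and bounding surfaces.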

  12. Numerical solution of a coefficient inverse problem with multi-frequency experimental raw data by a globally convergent algorithm

    NASA Astrophysics Data System (ADS)

    Nguyen, Dinh-Liem; Klibanov, Michael V.; Nguyen, Loc H.; Kolesov, Aleksandr E.; Fiddy, Michael A.; Liu, Hui

    2017-09-01

    We analyze in this paper the performance of a newly developed globally convergent numerical method for a coefficient inverse problem, for the case of multi-frequency experimental backscatter data associated with a single incident wave. These data were collected using a microwave scattering facility at the University of North Carolina at Charlotte. The challenges for the inverse problem under consideration arise not only from its high nonlinearity and severe ill-posedness but also from the facts that the amount of measured data is minimal and that these raw data are contaminated by a significant amount of noise, due to a non-ideal experimental setup. This setup is motivated by our target application of detecting and identifying explosives. We show in this paper how the raw data can be preprocessed and successfully inverted using our inversion method. More precisely, we are able to reconstruct the dielectric constants and the locations of the scattering objects with good accuracy, without using any advanced a priori knowledge of their physical and geometrical properties.
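    The paper's globally convergent method is specific to this coefficient inverse problem and is not reproduced here, but the role of regularization in taming severe ill-posedness can be sketched with a standard Tikhonov example on a deliberately ill-conditioned 2x2 system (the matrix, noise, and regularization weight below are all illustrative assumptions, not taken from the paper):

```python
# A is nearly rank-deficient, so naive inversion of slightly noisy data b gives a
# wildly wrong answer, while a small Tikhonov penalty recovers x_true almost exactly.
A = [[1.0, 1.0], [1.0, 1.0001]]
x_true = [1.0, 1.0]
b_noisy = [2.001, 1.9995]  # noise-free data would be [2.0, 2.0001]

def solve2(M, rhs):
    """Direct solve of a 2x2 linear system by Cramer's rule."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

def tikhonov(A, b, lam):
    """Solve (A^T A + lam*I) x = A^T b, the normal equations of the penalized fit."""
    AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) + (lam if i == j else 0.0)
            for j in range(2)] for i in range(2)]
    Atb = [sum(A[k][i] * b[k] for k in range(2)) for i in range(2)]
    return solve2(AtA, Atb)

x_naive = solve2(A, b_noisy)        # direct inversion amplifies the noise
x_reg = tikhonov(A, b_noisy, 1e-3)  # regularized solution stays near x_true

err = lambda x: max(abs(xi - ti) for xi, ti in zip(x, x_true))
```

    For the nonlinear problem in the abstract the same trade-off appears, but choosing the regularization so that the iteration provably converges from an arbitrary starting guess is exactly what the globally convergent method addresses.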

  13. Examining the Preparatory Effects of Problem Generation and Solution Generation on Learning from Instruction

    ERIC Educational Resources Information Center

    Kapur, Manu

    2018-01-01

    The goal of this paper is to isolate the preparatory effects of problem generation from those of solution generation in problem-posing contexts, and their underlying mechanisms on learning from instruction. Using a randomized controlled design, students were assigned to one of two conditions: (a) problem posing with solution generation, where they…

  14. Examining Interactions between Problem Posing and Problem Solving with Prospective Primary Teachers: A Case of Using Fractions

    ERIC Educational Resources Information Center

    Xie, Jinxia; Masingila, Joanna O.

    2017-01-01

    Existing studies have quantitatively evidenced the relatedness between problem posing and problem solving, as well as the magnitude of this relationship. However, the nature and features of this relationship need further qualitative exploration. This paper focuses on exploring the interactions, i.e., mutual effects and supports, between problem…

  15. Adaptive leadership framework for chronic illness: framing a research agenda for transforming care delivery.

    PubMed

    Anderson, Ruth A; Bailey, Donald E; Wu, Bei; Corazzini, Kirsten; McConnell, Eleanor S; Thygeson, N Marcus; Docherty, Sharron L

    2015-01-01

    We propose the Adaptive Leadership Framework for Chronic Illness as a novel framework for conceptualizing, studying, and providing care. This framework is an application of the Adaptive Leadership Framework developed by Heifetz and colleagues for business. Our framework views health care as a complex adaptive system and addresses the intersection at which people with chronic illness interface with the care system. We shift the focus from symptoms alone to symptoms and the challenges they pose for patients/families. We describe how providers and patients/families might collaborate to create shared meaning of symptoms and challenges and to coproduce appropriate approaches to care.

  16. The influence of initial conditions on dispersion and reactions

    NASA Astrophysics Data System (ADS)

    Wood, B. D.

    2016-12-01

    In various generalizations of the reaction-dispersion problem, researchers have developed frameworks in which the apparent dispersion coefficient can be negative. Such dispersion coefficients raise several difficult questions. Most importantly, a negative dispersion coefficient at the macroscale leads to a macroscale representation that exhibits an apparent decrease in entropy with increasing time; this appears to violate basic thermodynamic principles. In addition, the proposition of a negative dispersion coefficient leads to an inherently ill-posed mathematical transport equation. The ill-posedness arises because no unique initial condition corresponds to a later-time concentration distribution (assuming discontinuous initial conditions are allowed). In this presentation, we explain how negative dispersion coefficients actually arise because the governing differential equation for early times should, when derived correctly, incorporate a term that depends upon the initial and boundary conditions. Reactions introduce a similar phenomenon, where the structure of the initial and boundary conditions influences the form of the macroscopic balance equations. When upscaling is done properly, new equations are developed that include source terms not present in the classical (late-time) reaction-dispersion equation. These source terms depend upon the structure of the initial condition of the reacting species, and they decrease exponentially in time (thus, the equations converge to the conventional equations at asymptotic times). With this formulation, the resulting dispersion tensor is always positive semi-definite, and the reaction terms directly incorporate information about the state of mixedness of the system. This formulation avoids many of the problems that would be engendered by negative-definite dispersion tensors, and it properly represents the effective rate of reaction at early times.
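    The ill-posedness of a negative dispersion coefficient can be seen directly in Fourier space: under u_t = D u_xx, a mode e^{ikx} is scaled by exp(-D k^2 t), so negative D amplifies short wavelengths without bound, which is the backward-heat-equation pathology. A minimal sketch (the values of D, k, and t are arbitrary illustrative choices):

```python
import math

def mode_amplification(D, k, t):
    """Amplitude factor of a Fourier mode e^{ikx} evolved under u_t = D*u_xx."""
    return math.exp(-D * k * k * t)

D_neg, t = -1e-3, 1.0
low = mode_amplification(D_neg, 1.0, t)    # long-wavelength mode: barely grows
high = mode_amplification(D_neg, 50.0, t)  # short-wavelength mode: grows strongly
```

    Because the growth factor is unbounded in k, arbitrarily small high-frequency noise in the data destroys the solution; a positive semi-definite dispersion tensor, as in the formulation above, keeps every factor at or below one.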

  17. Bayesian tomography by interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Romary, T.

    2017-12-01

    In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel-time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools for performing these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help chains enter the stationary regime. Besides, the sequential nature of MCMC makes it ill-suited to parallel implementation. Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not shared. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they exchange information only between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class makes it possible to design interacting schemes that can take advantage of the whole history of the chains, by allowing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first-arrival travel-time tomography.
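    A minimal parallel tempering sketch on a toy double-well target shows the mechanism described above: chains run at several temperatures, and adjacent chains occasionally swap states so the cold chain can reach both modes (the target density, temperature ladder, and step size are illustrative choices, not from the talk):

```python
import math, random

random.seed(0)

def log_target(x):
    # toy double-well density with modes near x = -2 and x = +2
    return -(x * x - 4.0) ** 2 / 2.0

temps = [1.0, 5.0, 25.0]   # temperature ladder; the cold chain (T=1) is the sampler
states = [0.5 for _ in temps]
cold_samples = []

for _ in range(20000):
    # Metropolis update within each chain at its own temperature
    for i, T in enumerate(temps):
        prop = states[i] + random.gauss(0.0, 0.5)
        if math.log(random.random()) < (log_target(prop) - log_target(states[i])) / T:
            states[i] = prop
    # propose swapping states between a random adjacent pair of temperatures
    j = random.randrange(len(temps) - 1)
    log_alpha = (log_target(states[j + 1]) - log_target(states[j])) \
                * (1.0 / temps[j] - 1.0 / temps[j + 1])
    if math.log(random.random()) < log_alpha:
        states[j], states[j + 1] = states[j + 1], states[j]
    cold_samples.append(states[0])
```

    The hot chain crosses the barrier between the wells easily, and accepted swaps hand those crossings down the ladder; the interacting schemes in the talk generalize this by allowing exchanges with past states as well.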

  18. Optimum oil production planning using infeasibility driven evolutionary algorithm.

    PubMed

    Singh, Hemant Kumar; Ray, Tapabrata; Sarker, Ruhul

    2013-01-01

    In this paper, we discuss a practical oil production planning optimization problem. For oil wells with insufficient reservoir pressure, gas is usually injected to artificially lift oil, a practice commonly referred to as enhanced oil recovery (EOR). The total gas that can be used for oil extraction is constrained by daily availability limits. The oil extracted from each well is known to be a nonlinear function of the gas injected into the well and varies between wells. The problem is to identify the optimal amount of gas to inject into each well to maximize the amount of oil extracted, subject to the constraint on total daily gas availability. The problem has long been of practical interest to all major oil exploration companies, as it has the potential to deliver large financial benefits. In this paper, an infeasibility driven evolutionary algorithm is used to solve a 56-well reservoir problem, which demonstrates its efficiency in solving constrained optimization problems. Furthermore, a multi-objective formulation of the problem is posed and solved using a number of algorithms, which eliminates the need to solve the (single-objective) problem on a regular basis. Lastly, a modified single-objective formulation of the problem is also proposed, which aims to maximize profit instead of the quantity of oil. It is shown that, even with less oil extracted, greater economic benefits can be achieved through the modified formulation.
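    The infeasibility driven evolutionary algorithm itself is not reproduced here, but the structure of the allocation problem can be illustrated with a toy greedy scheme on hypothetical concave gas-lift response curves (the well parameters, the exponential response form, and the gas limit are all invented for illustration; greedy allocation is optimal only because these toy curves are concave, which need not hold for real wells):

```python
import math

# Hypothetical gas-lift response curves: oil_i(g) = a_i * (1 - exp(-b_i * g)).
wells = [(100.0, 0.05), (80.0, 0.09), (120.0, 0.03)]  # (a_i, b_i) per well
GAS_LIMIT = 60.0  # total daily gas availability
STEP = 0.1        # gas increment handed out per greedy move

def oil(a, b, g):
    return a * (1.0 - math.exp(-b * g))

# Greedy allocation: repeatedly give the next unit of gas to the well with the
# largest marginal oil gain, until the daily gas budget is exhausted.
alloc = [0.0] * len(wells)
used = 0.0
while used + STEP <= GAS_LIMIT:
    gains = [oil(a, b, g + STEP) - oil(a, b, g) for (a, b), g in zip(wells, alloc)]
    i = max(range(len(wells)), key=gains.__getitem__)
    alloc[i] += STEP
    used += STEP

total_oil = sum(oil(a, b, g) for (a, b), g in zip(wells, alloc))
```

    The greedy rule equalizes marginal returns across wells, beating a naive equal split of the gas; real response curves need not be concave, which is why the paper resorts to an evolutionary algorithm.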

  19. Learning Inverse Rig Mappings by Nonlinear Regression.

    PubMed

    Holden, Daniel; Saito, Jun; Komura, Taku

    2017-03-01

    We present a framework for designing inverse rig functions: functions that map low-level representations of a character's pose, such as joint positions or surface geometry, to the representation used by animators, called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production that allows animators to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low-level representations such as joint angles, joint positions, or vertex coordinates. This difference often hinders the adoption of state-of-the-art techniques in animation production. Our framework addresses this issue by learning a mapping between the low-level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce the motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate choice depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples, including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation while retaining the flexibility and creativity of artistic input.
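    Of the two regressors mentioned, Gaussian process regression is the simpler to sketch. The toy below maps a scalar "skeleton-space" value to a single rig control with an RBF kernel; the training pairs and kernel length-scale are hypothetical, and a real rig mapping would be vector-valued on far more data:

```python
import math

def rbf(a, b, ls=0.5):
    """Squared-exponential kernel on scalars."""
    return math.exp(-((a - b) ** 2) / (2.0 * ls ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting, for tiny dense systems."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# hypothetical training pairs: skeleton-space pose value -> rig control value
poses = [0.0, 0.5, 1.0, 1.5, 2.0]
rig   = [0.0, 0.8, 1.0, 0.7, 0.1]

noise = 1e-6  # jitter for numerical stability
K = [[rbf(p, q) + (noise if i == j else 0.0) for j, q in enumerate(poses)]
     for i, p in enumerate(poses)]
alpha = solve(K, rig)  # GP weights: K * alpha = y

def predict(x):
    """Posterior mean of the rig control at a new skeleton-space value x."""
    return sum(a * rbf(x, p) for a, p in zip(alpha, poses))
```

    With a near-zero noise term the GP interpolates the examples exactly, which matches the animation use case: reproduce the animator's training poses and smoothly generalize between them.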

  20. An Efficient Method Coupling Kernel Principal Component Analysis with Adjoint-Based Optimal Control and Its Goal-Oriented Extensions

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.

    2016-12-01

    The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on machine learning methods, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems involving channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method in part depends on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, one may therefore accelerate the convergence of the method. To this end, we present a formulation of weighted PCA combined with a gradient-based method that uses automatic differentiation to iteratively re-weight observations concurrently with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in accuracy and computational efficiency over existing unweighted kernel methods can be achieved with the weighted linear method, and we discuss nonlinear extensions of the algorithm.
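    A toy version of the kernel PCA step can be sketched in isolation: build an RBF kernel matrix over a small 1D data set, double-center it, and extract the dominant component by power iteration (the data, length-scale, and iteration count are illustrative; the adjoint coupling and observation weighting described above are not shown):

```python
import math

# Two well-separated clusters in 1D; kernel PCA's top component should split them.
data = [0.0, 0.1, 0.2, 2.0, 2.1, 2.2]
n = len(data)

def rbf(a, b, ls=0.5):
    return math.exp(-((a - b) ** 2) / (2.0 * ls ** 2))

K = [[rbf(a, b) for b in data] for a in data]

# Double-centering: Kc_ij = K_ij - rowmean_i - rowmean_j + grandmean
row = [sum(K[i]) / n for i in range(n)]
grand = sum(row) / n
Kc = [[K[i][j] - row[i] - row[j] + grand for j in range(n)] for i in range(n)]

# Power iteration for the dominant eigenpair of the centered kernel matrix.
# (Start away from the all-ones vector, which centered kernels annihilate.)
v = [1.0] + [0.0] * (n - 1)
for _ in range(200):
    w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
    norm = math.sqrt(sum(x * x for x in w))
    v = [x / norm for x in w]
eigval = sum(v[i] * sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n))

# First-component scores: projection of each point onto the dominant direction.
scores = [sum(Kc[i][j] * v[j] for j in range(n)) / math.sqrt(eigval) for i in range(n)]
```

    In the reduced-order setting of the abstract, optimization is carried out over a handful of such component coordinates instead of the full parameter field, and the weighted PCA variant re-weights the rows of K before this decomposition.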
