NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error, commonly called ''noise''. Because the inverse problem is ill-posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations despite the ill-posedness. The illustrated results show that TGSVD has many advantages over TDM, such as higher precision, better adaptability and noise immunity. In addition, choosing a proper regularization matrix L and truncation parameter k is very useful for improving identification accuracy and handling the ill-posedness when the method is used to identify moving forces on a bridge.
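Lacking the paper's bridge model, the effect of truncation can be sketched in the special case L = I, where truncated GSVD reduces to an ordinary truncated SVD of Ax = b. The test matrix, noise level, and truncation level k below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def tsvd_solve(A, b, k):
    # Truncated-SVD solution of A x = b: keep only the k largest singular
    # values, discarding the noise-dominated small ones (the L = I special
    # case of truncated GSVD).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Ill-conditioned test problem: an 8 x 8 Hilbert matrix.
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b_noisy = A @ x_true + 1e-6 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b_noisy)  # noise amplified by tiny singular values
x_tsvd = tsvd_solve(A, b_noisy, k=4)   # truncation suppresses the amplification
```

Even at this noise level the direct solve is useless while the truncated solution stays close to the true force vector, which is the qualitative point of the abstract.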
Regularization techniques for backward-in-time evolutionary PDE problems
NASA Astrophysics Data System (ADS)
Gustafsson, Jonathan; Protas, Bartosz
2007-11-01
Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated by a less ill-posed one obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.
The quasi-optimality criterion in the linear functional strategy
NASA Astrophysics Data System (ADS)
Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey
2018-07-01
The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that include the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
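The paper's setting is abstract, but the basic quasi-optimality rule it builds on is easy to sketch for Tikhonov regularization. The test problem, noise level, and parameter grid below are illustrative assumptions:

```python
import numpy as np

def quasi_optimality(A, b, alphas):
    # Heuristic quasi-optimality rule: compute Tikhonov solutions over a
    # geometric grid of regularization parameters and pick the one that
    # changes least relative to its neighbor; no knowledge of the noise
    # level is needed, which is what makes the rule "heuristic".
    xs = [np.linalg.solve(A.T @ A + a * np.eye(A.shape[1]), A.T @ b)
          for a in alphas]
    diffs = [np.linalg.norm(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    i_best = int(np.argmin(diffs))
    return alphas[i_best], xs[i_best]

n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(1)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

alphas = np.geomspace(1e-12, 1e-2, 40)
alpha_qo, x_qo = quasi_optimality(A, b, alphas)
```

The selected solution is far more accurate than the unregularized solve, even though the rule never sees the noise level.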
NASA Astrophysics Data System (ADS)
Vasilenko, Georgii Ivanovich; Taratorin, Aleksandr Markovich
Linear, nonlinear, and iterative image-reconstruction (IR) algorithms are reviewed. Theoretical results are presented concerning controllable linear filters, the solution of ill-posed functional minimization problems, and the regularization of iterative IR algorithms. Attention is also given to the problem of superresolution and analytical spectrum continuation, the solution of the phase problem, and the reconstruction of images distorted by turbulence. IR in optical and optical-digital systems is discussed with emphasis on holographic techniques.
Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl
2007-02-01
Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
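The filter-factor representation mentioned above can be illustrated for Tikhonov/ridge filtering of an SVD; the spectrum below is a made-up example, not gene expression data:

```python
import numpy as np

def tikhonov_filter_factors(s, lam):
    # Filter factors f_i = s_i^2 / (s_i^2 + lam^2): near 1 for singular
    # values well above lam (component kept), near 0 for those below
    # (component damped). Plotting f against i shows at a glance how many
    # singular components a given regularization level actually uses.
    return s**2 / (s**2 + lam**2)

s = np.array([10.0, 1.0, 0.1, 1e-3, 1e-5])  # a typical decaying spectrum
f = tikhonov_filter_factors(s, lam=0.05)
effective_rank = f.sum()  # rough count of components actually used
```

For this spectrum roughly three components survive the filtering, which is the kind of diagnostic the abstract describes for guiding classifier parameters.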
Solving ill-posed inverse problems using iterative deep neural networks
NASA Astrophysics Data System (ADS)
Adler, Jonas; Öktem, Ozan
2017-12-01
We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction; the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2017-07-10
We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
Time-Domain Impedance Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Auriault, Laurent
1996-01-01
It is an accepted practice in aeroacoustics to characterize the properties of an acoustically treated surface by a quantity known as impedance. Impedance is a complex quantity. As such, it is designed primarily for frequency-domain analysis. Time-domain boundary conditions that are the equivalent of the frequency-domain impedance boundary condition are proposed. Both single frequency and model broadband time-domain impedance boundary conditions are provided. It is shown that the proposed boundary conditions, together with the linearized Euler equations, form well-posed initial boundary value problems. Unlike ill-posed problems, they are free from spurious instabilities that would render time-marching computational solutions impossible.
The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations
NASA Technical Reports Server (NTRS)
Hesthaven, J. S.
1997-01-01
We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low-order perturbations. This analysis explains the stability problems associated with the split-field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing independent of frequency and angle of incidence of the wave in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques are combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through computation of benchmark problems in aeroacoustics.
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach-spaces. The method is based on a stable inversion formula we explicitly compute by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation
NASA Astrophysics Data System (ADS)
Wang, Linjun; Han, Xu; Wei, Zhouchao
The inverse problem of the initial condition for the boundary value of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule, obtaining a severely ill-conditioned linear system that is sensitive to perturbations of the data: even a tiny error in the right-hand side causes large oscillations in the solution, and good results cannot be obtained by traditional methods. In this paper, we solve the problem by the Tikhonov regularization method, and numerical simulations demonstrate that this method is feasible and effective.
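The pipeline in the abstract (first-kind integral equation, trapezoidal discretization, Tikhonov solve) can be sketched with a generic smooth kernel standing in for the chord-vibration one; the kernel, noise level, and regularization parameter below are illustrative assumptions:

```python
import numpy as np

# Discretize a first-kind Fredholm equation g(t) = int K(t, s) f(s) ds on
# [0, 1] by the trapezoidal rule, then regularize the resulting severely
# ill-conditioned system with Tikhonov's method.
n = 40
t = np.linspace(0.0, 1.0, n)
h = t[1] - t[0]
K = np.exp(-(t[:, None] - t[None, :])**2 / 0.02)  # smoothing kernel (assumed)
w = np.full(n, h)
w[0] = w[-1] = h / 2.0                            # trapezoidal weights
A = K * w                                         # quadrature matrix

f_true = np.sin(np.pi * t)
rng = np.random.default_rng(1)
g_noisy = A @ f_true + 1e-4 * rng.standard_normal(n)

lam = 1e-3
f_tik = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ g_noisy)
rel_err = np.linalg.norm(f_tik - f_true) / np.linalg.norm(f_true)
```

Without the lam**2 term the solve amplifies the 1e-4 noise into a useless answer; with it, the recovery error stays small.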
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to ||Ax - b|| = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k + q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
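The RSVD/TRSVD building block can be sketched as follows. This is a textbook randomized SVD with oversampling, not the paper's full MTRSVD algorithm, and the test matrix is synthetic:

```python
import numpy as np

def rsvd(A, k, q):
    # Basic randomized SVD: sample the range of A with a Gaussian test
    # matrix of k + q columns (q = oversampling), project A onto that
    # range, and SVD the small projected matrix. Truncating the result
    # to rank k gives the TRSVD approximation.
    rng = np.random.default_rng(0)
    Omega = rng.standard_normal((A.shape[1], k + q))
    Q, _ = np.linalg.qr(A @ Omega)
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    U = Q @ U_small
    return U[:, :k], s[:k], Vt[:k]  # rank-k truncation

# Synthetic matrix of exact rank 8: the sketch recovers it essentially exactly.
rng = np.random.default_rng(2)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 80))
U, s, Vt = rsvd(A, k=8, q=5)
A_k = U @ np.diag(s) @ Vt
```

Because the sampled subspace captures the rank-8 range with probability one, the rank-k reconstruction is accurate to machine precision here; for genuinely full-rank ill-posed A, the accuracy follows the bounds the paper analyzes.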
Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.G.
2011-07-01
A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: the linear least-squares adjustment formula that has been used in the past remains valid when the covariance matrix of prior parameters is singular, and it provides a unique solution. Statements in the literature to the effect that the problem is ill-posed are wrong, and no regularization of the problem is required. This has been proved by the two methods above, in both cases explicitly assuming that the covariance matrix of prior parameters is singular.
Ill-Posed Problems: Numerical and Statistical Methods for Mildly... [title truncated in source] (University of Wisconsin-Madison, Department of Statistics)
1980-02-01
[Only fragments of this abstract survive extraction.] ...to estimate f well, moderately well, or poorly. The sensitivity of a regularized estimate of f to the noise is made explicit. ...to estimate f given z, we first define the intrinsic rank of the problem, in which ∫ K(t, τ) f(τ) dτ is known exactly; this definition is used to provide insight...
NASA Astrophysics Data System (ADS)
Antokhin, I. I.
2017-06-01
We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.
Control and System Theory, Optimization, Inverse and Ill-Posed Problems
1988-09-14
AFOSR-87-0350, 1987-1988. [Front matter garbled in source.] The report describes a considerable variety of research investigations within the grant areas (Control and System Theory, Optimization, and Ill-Posed Problems).
NASA Astrophysics Data System (ADS)
Burman, Erik; Hansbo, Peter; Larson, Mats G.
2018-03-01
Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectrum of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Items such as the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
ERIC Educational Resources Information Center
Kar, Tugrul
2016-01-01
This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
The problem of deriving tidal fields from observations is ill-posed: because every practically available data set is incomplete and imperfect, an infinitely large number of allowable solutions fit the data within measurement errors. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those (objective analysis and general inversion) originally formulated in stochastic terms, may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics
NASA Astrophysics Data System (ADS)
Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.
2016-02-01
The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
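The kind of smooth, well-posed initial condition the paper advocates can be sketched as follows. The profile shapes, domain, and amplitudes are assumptions in the spirit of the paper, not its exact setup:

```python
import numpy as np

# A doubly periodic Kelvin-Helmholtz initial condition: two tanh shear
# layers of finite, resolved thickness plus a small, smooth, localized
# velocity perturbation. Smooth initial data (together with explicit
# diffusion in the solver) is what makes the problem converge to a single
# reference solution as resolution increases.
nx = nz = 128
x = np.linspace(0.0, 1.0, nx, endpoint=False)
z = np.linspace(0.0, 2.0, nz, endpoint=False)
X, Z = np.meshgrid(x, z, indexing="ij")

a = 0.05  # shear-layer thickness; must be resolved by the grid
vx = 0.5 * (np.tanh((Z - 0.5) / a) - np.tanh((Z - 1.5) / a) - 1.0)
vz = 0.01 * np.sin(2.0 * np.pi * X) * (
    np.exp(-((Z - 0.5) ** 2) / 0.01) + np.exp(-((Z - 1.5) ** 2) / 0.01)
)
```

Contrast this with the discontinuous shear profiles common in code tests, whose growth rates diverge with wavenumber and which therefore never converge under grid refinement.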
An efficient method for model refinement in diffuse optical tomography
NASA Astrophysics Data System (ADS)
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary-value optimization problem that necessitates regularization. Bayesian methods are also suitable because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations whose model error must be refined by model-retrieval criteria, especially total least squares (TLS). The use of TLS is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) on the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image-reconstruction performance and localizes the abnormality well.
Sinc-Galerkin estimation of diffusivity in parabolic problems
NASA Technical Reports Server (NTRS)
Smith, Ralph C.; Bowers, Kenneth L.
1991-01-01
A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
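The L-curve technique mentioned above can be sketched with a simple corner detector: the point of the log-log residual-norm versus solution-norm curve farthest from the chord joining its endpoints. This corner proxy and the test problem are illustrative assumptions; practical implementations often use curvature instead:

```python
import numpy as np

def lcurve_corner(A, b, alphas):
    # For each alpha, record (log residual norm, log solution norm) of the
    # Tikhonov solution; pick the "corner" as the point with maximum
    # perpendicular distance from the line joining the curve's endpoints.
    pts, xs = [], []
    for a in alphas:
        x = np.linalg.solve(A.T @ A + a * np.eye(A.shape[1]), A.T @ b)
        xs.append(x)
        pts.append((np.log(np.linalg.norm(A @ x - b)),
                    np.log(np.linalg.norm(x))))
    P = np.array(pts)
    d = P[-1] - P[0]
    d = d / np.linalg.norm(d)
    dist = np.abs((P - P[0]) @ np.array([-d[1], d[0]]))  # distance to chord
    i = int(np.argmax(dist))
    return alphas[i], xs[i]

n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-4 * rng.standard_normal(n)
alpha_c, x_c = lcurve_corner(A, b, np.geomspace(1e-12, 1.0, 60))
```

The corner balances the two competing norms: to its left the solution norm blows up with amplified noise, to its right the residual grows from over-smoothing.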
NASA Astrophysics Data System (ADS)
Delbary, Fabrice; Aramini, Riccardo; Bozza, Giovanni; Brignone, Massimo; Piana, Michele
2008-11-01
Microwave tomography is a non-invasive approach to the early diagnosis of breast cancer. However, the problem of visualizing tumors from diffracted microwaves is a difficult nonlinear ill-posed inverse scattering problem. We propose a qualitative approach to the solution of such a problem, whereby the shape and location of cancerous tissues can be detected by means of a combination of the Reciprocity Gap Functional method and the Linear Sampling method. We validate this approach on synthetic near-field data produced by a finite element method for boundary integral equations, where the breast is mimicked by the axial view of two nested cylinders, the external one representing the skin and the internal one the fat tissue.
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
The information about the external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating the input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge is used to develop a more suitable force reconstruction method, which identifies the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals is demonstrated for two frequently occurring loading conditions, harmonic excitation and impact events, separately and combined. First, a simulation study of a simple plate structure is carried out; thereafter, an experimental investigation of a real beam is performed.
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
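A standard single-level diagnostic for the multicollinearity discussed above is the variance inflation factor; the data below are synthetic, and the HLM-specific top-down procedure of the paper is not reproduced:

```python
import numpy as np

def vif(X):
    # Variance inflation factors for the columns of a design matrix X:
    # VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    # on all the other columns. Values well above ~10 are a common flag
    # for problematic multicollinearity.
    X = (X - X.mean(axis=0)) / X.std(axis=0)
    out = []
    for j in range(X.shape[1]):
        others = np.delete(X, j, axis=1)
        beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ beta
        r2 = 1.0 - (resid @ resid) / (X[:, j] @ X[:, j])
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(4)
x1 = rng.standard_normal(200)
x2 = rng.standard_normal(200)
x3 = x1 + 0.05 * rng.standard_normal(200)  # nearly collinear with x1
v = vif(np.column_stack([x1, x2, x3]))
```

The two nearly collinear predictors receive large VIFs while the independent one stays near 1, which is the behavior that inflates coefficient standard errors in the simulation study.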
Model-based elastography: a survey of approaches to the inverse elasticity problem
Doyley, M M
2012-01-01
Elastography is emerging as an imaging modality that can distinguish normal from diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas (quasi-static, harmonic, and transient) and describes inversion schemes for each elastographic imaging approach. Approaches include first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to replace the ill-posed problem with a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839
Fractional-order TV-L2 model for image denoising
NASA Astrophysics Data System (ADS)
Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu
2013-10-01
This paper proposes a new fractional order total variation (TV) denoising method, which provides a much more elegant and effective way of treating problems of the algorithm implementation, ill-posed inverse, regularization parameter selection and blocky effect. Two fractional order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann
2011-11-01
Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of graphics hardware without compromising the accuracy of the reconstructed images.
The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.
2017-11-27
This work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and software settings such as subset size and spacing.
Fast reconstruction of optical properties for complex segmentations in near infrared imaging
NASA Astrophysics Data System (ADS)
Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador
2017-04-01
The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging, even with the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurement are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation, since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we show that a problem of practical interest can be successfully addressed by making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte Carlo simulations, parallelized on a multicore computer and a GPU, respectively.
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto ℓq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
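The vector-to-matrix projection link mentioned above can be sketched for the Schatten-1 (nuclear norm) case: project the singular values onto an ℓ1 ball and reconstruct. This is an illustrative implementation, not the authors' code; it relies on the singular values being nonnegative (signed vectors would need extra sign handling).

```python
import numpy as np

def project_l1_ball(v, r):
    """Euclidean projection of a NONNEGATIVE vector v onto the l1 ball
    of radius r (standard sort-and-threshold algorithm)."""
    if v.sum() <= r:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - r)[0][-1]
    theta = (css[k] - r) / (k + 1.0)
    return np.maximum(v - theta, 0.0)

def project_schatten1_ball(X, r):
    """Project X onto the Schatten-1 (nuclear) norm ball of radius r by
    projecting its singular values onto the l1 ball."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, r)) @ Vt

X = np.array([[3.0, 0.0], [0.0, 1.0]])        # nuclear norm 4
Y = project_schatten1_ball(X, 2.0)            # lands on the radius-2 ball
print(np.linalg.svd(Y, compute_uv=False))
```

The same pattern works for any Schatten-q ball, given a projector onto the corresponding ℓq ball.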
NASA Astrophysics Data System (ADS)
Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang
2015-12-01
As an advanced measurement technique that is radiation-free, non-intrusive, fast-responding, and low cost, electrical tomography (ET) has developed rapidly in recent decades. The imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of dynamic imaging, real-time response, and easy implementation. However, the LBP algorithm has low spatial resolution due to the inherent ‘soft field’ effect and the ‘ill-posed solution’ problem, which greatly limits its applicable range. In this paper, an original data decomposition method is proposed: every ET measurement is decomposed into two independent new data points based on the positive and negative sensing areas of the measurement. Consequently, the total number of measurements is doubled, effectively reducing the ‘ill-posed solution’ problem. In addition, an index to quantify the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect in the ET imaging process. Based on the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improved spatial resolution.
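For readers unfamiliar with LBP, a minimal sketch (plain LBP in one common normalized form; the paper's data decomposition is not reproduced) shows both the algorithm and its characteristic smearing:

```python
import numpy as np

def lbp(S, b):
    """Linear back projection: smear each measurement back over the
    pixels it senses, then normalize each pixel by its total sensitivity.
    S: (n_measurements, n_pixels) sensitivity matrix, b: measurements."""
    back = S.T @ b
    norm = S.T @ np.ones(S.shape[0])
    return back / np.where(norm > 0, norm, 1.0)

# Two overlapping hypothetical sensing regions over three pixels: an
# inclusion in pixel 0 is smeared into neighbouring pixel 1 -- the low
# spatial resolution the paper's data decomposition aims to mitigate.
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_rec = lbp(S, S @ np.array([1.0, 0.0, 0.0]))
print(x_rec)                                  # → [1.0, 0.5, 0.0]
```

The smeared value in pixel 1 illustrates why LBP is fast but blurry: each measurement is spread uniformly over every pixel it touches.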
Microbial food-borne illnesses pose a significant health problem in Japan. In 1996 the world's largest outbreak of Escherichia coli food illness occurred in Japan. Since then, new regulatory measures were established, including strict hygiene practices in meat and food processi...
PAN AIR modeling studies. [higher order panel method for aircraft design
NASA Technical Reports Server (NTRS)
Towne, M. C.; Strande, S. M.; Erickson, L. L.; Kroo, I. M.; Enomoto, F. Y.; Carmichael, R. L.; Mcpherson, K. F.
1983-01-01
PAN AIR is a computer program that predicts subsonic or supersonic linear potential flow about arbitrary configurations. The code's versatility and generality afford numerous possibilities for modeling flow problems. Although this generality provides great flexibility, it also means that studies are required to establish the dos and don'ts of modeling. The purpose of this paper is to describe and evaluate a variety of methods for modeling flows with PAN AIR. The areas discussed are effects of panel density, internal flow modeling, forebody modeling in subsonic flow, propeller slipstream modeling, effect of wake length, wing-tail-wake interaction, effect of trailing-edge paneling on the Kutta condition, well- and ill-posed boundary-value problems, and induced-drag calculations. These nine topics address problems that are of practical interest to the users of PAN AIR.
NASA Astrophysics Data System (ADS)
Pickard, William F.
2004-10-01
The classical PERT inverse statistics problem requires estimation of the mean, m̄, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order-statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, m̄ and s are computed using exact formulae.
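For context, the traditional PERT point estimates that this inverse problem generalizes are the familiar beta-motivated formulas, m̄ ≈ (a + 4m + b)/6 and s ≈ (b − a)/6:

```python
def pert_estimates(a, m, b):
    """Classical PERT point estimates -- the traditional, underdetermined
    answer the paper revisits: mean ≈ (a + 4m + b)/6, std ≈ (b - a)/6."""
    mean = (a + 4.0 * m + b) / 6.0
    std = (b - a) / 6.0
    return mean, std

print(pert_estimates(2.0, 5.0, 14.0))   # → (6.0, 2.0)
```

These formulas use only {a, m, b}; the paper's point is that without something like the sample size N the underlying beta distribution is not pinned down.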
Robust penalty method for structural synthesis
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1983-01-01
The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
Cone Beam X-Ray Luminescence Tomography Imaging Based on KA-FEM Method for Small Animals.
Chen, Dongmei; Meng, Fanzhen; Zhao, Fengjun; Xu, Cao
2016-01-01
Cone beam X-ray luminescence tomography can realize fast X-ray luminescence tomography imaging with relatively low scanning time compared with narrow beam X-ray luminescence tomography. However, it suffers from an ill-posed reconstruction problem. First, the feasibility of experiments with different penetration depths and multiple spectra in small animals was tested using nanophosphor materials. Then, a hybrid reconstruction algorithm based on the KA-FEM method, whose advantages have been demonstrated in fluorescence tomography imaging, was applied to cone beam X-ray luminescence tomography for small animals to overcome the ill-posed reconstruction problem. An in vivo mouse experiment proved the feasibility of the proposed method.
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content of a stochastic inversion. In the deterministic approach this is referred to as the model resolution matrix (MRM; Menke 1989). Analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix gives the so-called degrees of freedom for signal (DFS; stochastic) or degrees of freedom in retrieval (DFR; deterministic). There is no physical or mathematical explanation in the literature of why the trace of this matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES13. The stochastic information content calculation is based on a linear assumption; the validity of such mathematics in satellite inversion will be questioned because the underlying problem involves nonlinear radiative transfer and an ill-conditioned inverse. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C. D., 2000: Inverse Methods for Atmospheric Sounding: Theory and Practice. Singapore: World Scientific.
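The trace-based DFS computation under discussion can be sketched for a toy linear retrieval, assuming the standard optimal-estimation form of the averaging kernel (the Jacobian and covariances below are hypothetical, not GOES13 values):

```python
import numpy as np

# Averaging kernel A and degrees of freedom for signal (DFS) for a toy
# linear retrieval x_hat = G y. K: Jacobian, Se: measurement-error
# covariance, Sa: prior covariance (all invented for illustration).
rng = np.random.default_rng(1)
K = rng.standard_normal((8, 4))                    # 8 channels, 4 state elements
Se = 0.1 * np.eye(8)
Sa = np.eye(4)

G = np.linalg.solve(K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa),
                    K.T @ np.linalg.inv(Se))       # gain matrix
A = G @ K                                          # averaging kernel
dfs = np.trace(A)
print(dfs)
```

Because the eigenvalues of A lie strictly between 0 and 1, the DFS is bounded by the state dimension; the abstract's question is precisely why this trace should be read as "information".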
NASA Astrophysics Data System (ADS)
Nie, Yao; Zheng, Xiaoxin
2018-07-01
We study the Cauchy problem for the 3D incompressible hyperdissipative Navier–Stokes equations and consider the well-posedness and ill-posedness in critical Fourier-Herz spaces . We prove that if and , the system is locally well-posed for large initial data as well as globally well-posed for small initial data. Also, we obtain the same result for and . More importantly, we show that the system is ill-posed in the sense of norm inflation for and q > 2. The proof relies heavily on particular structure of initial data u 0 that we construct, which makes the first iteration of solution inflate. Specifically, the special structure of u 0 transforms an infinite sum into a finite sum in ‘remainder term’, which permits us to control the remainder.
Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi
2009-01-01
Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B(0), and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-12-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least squares inversion using the truncated singular value decomposition method. As a result of the numerical process, an r-solution is obtained. The method proposed allows one to control the instability of a numerical solution and to obtain an acceptable result despite the ill-posedness of the problem. Application of this methodology to reconstructing the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions drawn for synthetic data and a model tsunami source: the inversion result depends strongly on data noisiness and on the azimuthal and temporal coverage of recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of available recording stations to use in the inversion process.
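The truncated-SVD regularization underlying the r-solution can be sketched on a toy ill-conditioned system (a generic TSVD, not the authors' shallow-water implementation):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD (r-solution style) solution of A x = b: keep the k
    largest singular components and discard the noise-amplifying rest."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Toy ill-conditioned system: the tiny singular value turns a 1e-6
# data error into an O(100) error in the naive solution.
A = np.array([[1.0, 0.0], [0.0, 1e-8]])
b = A @ np.array([1.0, 1.0]) + np.array([0.0, 1e-6])
x_naive = np.linalg.solve(A, b)               # second component ≈ 101
x_tsvd = tsvd_solve(A, b, k=1)                # stable: recovers [1, 0]
print(x_naive, x_tsvd)
```

Truncation trades a controlled bias (the discarded component) for immunity to the noise amplification that makes the full inverse useless, which is exactly the trade-off the r-solution controls via its truncation level.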
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the distorted Born iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix also allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which can be an advantage when performing fast repeat modelling and time-lapse inversion. The T-matrix is also compatible with modern renormalization methods that can potentially reduce the sensitivity of the CSEM inversion results to the starting model.
To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data for 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow
NASA Technical Reports Server (NTRS)
Arian, Eyal; Ta'asan, Shlomo
1996-01-01
In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However the optimization problems in two and three dimensions are inherently different. While the two dimensional optimization problems are well-posed the three dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number which implies that the problems at hand are ill-conditioned. Infinite dimensional approximations for the Hessians are constructed and preconditioners for gradient based methods are derived from these approximate Hessians.
NASA Astrophysics Data System (ADS)
Helmers, Michael; Herrmann, Michael
2018-03-01
We consider a lattice regularization for an ill-posed diffusion equation with a trilinear constitutive law and study the dynamics of phase interfaces in the parabolic scaling limit. Our main result guarantees for a certain class of single-interface initial data that the lattice solutions satisfy asymptotically a free boundary problem with a hysteretic Stefan condition. The key challenge in the proof is to control the microscopic fluctuations that are inevitably produced by the backward diffusion when a particle passes the spinodal region.
Using informative priors in facies inversion: The case of C-ISR method
NASA Astrophysics Data System (ADS)
Valakas, G.; Modis, K.
2016-08-01
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of MCMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditional on facies observations and normal-score-transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
Stokes paradox in electronic Fermi liquids
NASA Astrophysics Data System (ADS)
Lucas, Andrew
2017-03-01
The Stokes paradox is the statement that in a viscous two-dimensional fluid, the "linear response" problem of fluid flow around an obstacle is ill posed. We present a simple consequence of this paradox in the hydrodynamic regime of a Fermi liquid of electrons in two-dimensional metals. Using hydrodynamics and kinetic theory, we estimate the contribution of a single cylindrical obstacle to the global electrical resistance of a material, within linear response. Momentum relaxation, present in any realistic electron liquid, resolves the classical paradox. Nonetheless, this paradox imprints itself in the resistance, which can be parametrically larger than predicted by Ohmic transport theory. We find a remarkably rich set of behaviors, depending on whether or not the quasiparticle dynamics in the Fermi liquid should be treated as diffusive, hydrodynamic, or ballistic on the length scale of the obstacle. We argue that all three types of behavior are observable in present day experiments.
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is in the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the stabilized orthogonal matching pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered in which the unknown parameters are sensitive to data perturbations; an appropriate regularization method is therefore needed to find a stabilized solution. The SOMP algorithm regularizes the norm of the solution vector while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds a stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity-based approaches.
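A plain orthogonal matching pursuit, the greedy scheme that SOMP stabilizes, can be sketched as follows (a generic OMP on synthetic data; the stabilization and sparsity-level selection of SOMP are not reproduced):

```python
import numpy as np

def omp(A, b, sparsity):
    """Plain orthogonal matching pursuit: greedily pick the dictionary
    column most correlated with the residual, then re-fit by least
    squares on the selected support."""
    residual, support = b.copy(), []
    coef = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 40))
A /= np.linalg.norm(A, axis=0)                  # unit-norm dictionary columns
x_true = np.zeros(40)
x_true[[5, 17, 33]] = [3.0, -3.0, 3.0]          # 3-sparse ground truth
x_hat = omp(A, A @ x_true, sparsity=3)
print(np.allclose(x_hat, x_true, atol=1e-8))
```

On noiseless data with a well-conditioned random dictionary, the greedy support selection recovers the sparse signal exactly; the ill-posed, noisy setting of the paper is where the extra stabilization earns its keep.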
Backward semi-linear parabolic equations with time-dependent coefficients and local Lipschitz source
NASA Astrophysics Data System (ADS)
Nho Hào, Dinh; Van Duc, Nguyen; Van Thang, Nguyen
2018-05-01
Let H be a Hilbert space with the inner product and the norm , a positive self-adjoint unbounded time-dependent operator on H and . We establish stability estimates of Hölder type and propose a regularization method with error estimates of Hölder type for the ill-posed backward semi-linear parabolic equation with the source function f satisfying a local Lipschitz condition.
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that explains observable quantities (e.g., concentrations or deposition values) as the product of a source-receptor sensitivity (SRS) matrix, obtained from an atmospheric transport model, and the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method with pre-specified uncertainties. Replacing the maximum likelihood solution with full Bayesian estimation also allows all tuning parameters to be estimated from the measurements. The estimation procedure is based on a variational Bayes approximation evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility of also estimating all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where the advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
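The conventional regularized estimate that the Bayesian method replaces can be sketched as Tikhonov-type least squares with a hand-tuned weight (the SRS matrix, data and weight below are all hypothetical; the actual variational Bayes algorithm is not reproduced):

```python
import numpy as np

# Conventional (non-Bayesian) source-term estimate: Tikhonov-regularized
# least squares with a manually chosen regularization weight alpha --
# exactly the kind of tuning parameter the paper estimates from data.
rng = np.random.default_rng(3)
M = np.abs(rng.standard_normal((50, 10)))        # toy SRS matrix (nonnegative)
x_true = np.abs(rng.standard_normal(10))         # toy source-term vector
y = M @ x_true + 0.01 * rng.standard_normal(50)  # observed concentrations

alpha = 0.1                                      # hand-tuned weight
x_hat = np.linalg.solve(M.T @ M + alpha * np.eye(10), M.T @ y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

The closed form above is the maximum likelihood solution under fixed Gaussian uncertainties; the paper's contribution is letting the data choose alpha (and its relatives) instead of the analyst.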
On the optimization of electromagnetic geophysical data: Application of the PSO algorithm
NASA Astrophysics Data System (ADS)
Godio, A.; Santilano, A.
2018-01-01
The particle swarm optimization (PSO) algorithm resolves constrained multi-parameter problems and is suitable for simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained according to external information for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic data as well as real audio-magnetotelluric (AMT) and long-period MT data. The method is able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
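A minimal, unconstrained PSO illustrates the mechanics (inertia plus cognitive and social pulls); the constrained, Occam-regularized variant applied to AMT/MT data is not reproduced, and the coefficients below are conventional textbook values:

```python
import numpy as np

def pso(f, dim, n_particles=30, n_iter=200, lo=-5.0, hi=5.0, seed=4):
    """Minimal particle swarm optimizer: each particle is pulled toward
    its own best position (cognitive term) and the swarm's best
    (social term), with inertia damping the velocity."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)]
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Sphere test function: global minimum 0 at the origin.
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=3)
print(best_f)
```

No gradient, and hence no linearized forward model, is needed, which is why PSO suits nonlinear geophysical objectives; constraints would enter by penalizing or clipping infeasible particles.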
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body from electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. Traditional regularization methods cannot avoid introducing negative values into the solution, and this negativity produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem, and the solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Zhenhai; Nie, Chenwei; Yang, Guijun; Xu, Xingang; Jin, Xiuliang; Gu, Xiaohe
2014-10-01
Leaf area index (LAI) and leaf chlorophyll content (LCC), the two most important crop growth variables, are major considerations in management decisions, agricultural planning and policy making. Estimation of canopy biophysical variables from remote sensing data was investigated using a radiative transfer model. However, the ill-posed nature of the inverse problem is unavoidable: the solution is non-unique and is affected by uncertainty in the measurements and in the model assumptions. This study focused on the use of agronomy mechanism knowledge to constrain and filter ill-posed inversion results. For this purpose, the inversion results obtained using the PROSAIL model alone (NAMK) and linked with agronomic mechanism knowledge (AMK) were compared. The results showed that AMK did not significantly improve the accuracy of LAI inversion: LAI was estimated with high accuracy, and there was no significant improvement after considering AMK. The validation results of the determination coefficient (R2) and the corresponding root mean square error (RMSE) between measured and estimated LAI were 0.635 and 1.022 for NAMK, and 0.637 and 0.999 for AMK, respectively. LCC estimation was significantly improved with agronomy mechanism knowledge; the R2 and RMSE values were 0.377 and 14.495 μg cm-2 for NAMK, and 0.503 and 10.661 μg cm-2 for AMK, respectively. Results of the comparison demonstrate the need for agronomy mechanism knowledge in radiative transfer model inversion.
NASA Astrophysics Data System (ADS)
Schaffrin, Burkhard
2008-02-01
In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that ISAR imaging can be expressed mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measurement strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the ordinary approach is to convert the 2D problem into 1D via the Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we instead introduce the 2D-SL0 algorithm for image reconstruction. It is shown that 2D-SL0 achieves results equivalent to 1D reconstruction methods while reducing the computational complexity and memory usage significantly. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
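The cost of the 1D conversion follows from the identity vec(A X Bᵀ) = (B ⊗ A) vec(X), which can be checked numerically (small random matrices stand in for the range and azimuth dictionaries):

```python
import numpy as np

# Why 1D conversion is costly: vec(A X B^T) = (B ⊗ A) vec(X), so a 2D
# problem with m x m and n x n dictionaries becomes a single
# (mn) x (mn) dictionary -- quadratically larger in memory.
rng = np.random.default_rng(5)
A, B, X = (rng.standard_normal((4, 4)) for _ in range(3))
lhs = (A @ X @ B.T).flatten(order="F")             # column-major vec()
rhs = np.kron(B, A) @ X.flatten(order="F")
print(np.allclose(lhs, rhs))
```

Algorithms like 2D-SL0 work directly with A and B on the left and right of X, never forming the Kronecker product, which is where the memory and complexity savings come from.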
Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning
ERIC Educational Resources Information Center
Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan
2009-01-01
In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…
NASA Astrophysics Data System (ADS)
Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters using serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, the tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an ill-posed inverse problem because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the reconstruction of radiobiological parameters is a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
Assigning uncertainties in the inversion of NMR relaxation data.
Parker, Robert L; Song, Yi-Qiao
2005-06-01
Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective, in the face of the consequent ambiguity in the solutions, is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and on ratios of linear functionals, can be calculated using optimization theory. The bounded quantities cover most of those commonly used in geophysical NMR, such as porosity, T(2) log-mean, and bound fluid volume fraction, and include averages of the density function itself over any finite interval. In the theory presented, statistical considerations account for the presence of significant noise in the signal, but not for a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere.
Regolith thermal property inversion in the LUNAR-A heat-flow experiment
NASA Astrophysics Data System (ADS)
Hagermann, A.; Tanaka, S.; Yoshida, S.; Fujimura, A.; Mizutani, H.
2001-11-01
In 2003, two penetrators of the LUNAR-A mission of ISAS will investigate the internal structure of the Moon by conducting seismic and heat-flow experiments. Heat flow is the product of the thermal gradient ∂T/∂z and the thermal conductivity λ of the lunar regolith. For measuring the thermal conductivity (or diffusivity), each penetrator will carry five thermal property sensors consisting of small disc heaters. The thermal response Ts(t) of the heater itself to a constant known power supply of approx. 50 mW serves as the data for the subsequent interpretation. Horai et al. (1991) found a forward analytical solution to the problem of determining the thermal inertia λρc of the regolith for constant thermal properties and a simplified geometry. In the inversion, the problem of deriving the unknown thermal properties of a medium from known heat sources and temperatures is an Identification Heat Conduction Problem (IDHCP), an ill-posed inverse problem. Assuming that the thermal conductivity λ and the heat capacity ρc are linear functions of temperature (which is reasonable in most cases), one can apply a Kirchhoff transformation to linearize the heat conduction equation, which minimizes computing time. Then the error functional, i.e. the difference between the measured temperature response of the heater and the predicted temperature response, can be minimized, thus solving for the thermal diffusivity κ = λ/(ρc), which completes the set of parameters needed for a detailed description of the thermal properties of the lunar regolith. Results of model calculations will be presented, in which synthetic data and calibration data are used to invert the unknown thermal diffusivity of the medium by means of a modified Newton method. Due to the ill-posedness of the problem, the number of parameters to be solved for should be limited. As the model calculations reveal, a homogeneous regolith allows for a fast and accurate inversion.
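The Kirchhoff transformation step can be illustrated with a minimal sketch, assuming a linear conductivity law λ(T) = λ0(1 + βT); the coefficient below is illustrative, not a LUNAR-A value:

```python
import numpy as np

# Kirchhoff transformation sketch: with an assumed linear law
# lambda(T) = lam0 * (1 + beta * T), the variable theta = T + beta*T**2/2
# satisfies lam0 * d(theta)/dz = lambda(T) * dT/dz, so the conductive flux
# term of the heat equation becomes linear in theta.
beta = 0.02                              # illustrative coefficient
T = np.linspace(0.0, 100.0, 1001)
theta = T + beta * T**2 / 2.0            # closed-form Kirchhoff variable
dtheta_dT = np.gradient(theta, T)        # numerical check of the chain rule
assert np.allclose(dtheta_dT[1:-1], 1.0 + beta * T[1:-1], rtol=1e-6)
```

Since dθ/dT = λ(T)/λ0 by construction, the nonlinear flux λ(T)∇T becomes the linear flux λ0∇θ, which is what makes the transformed equation cheap to solve repeatedly inside the inversion loop.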
Regularized two-step brain activity reconstruction from spatiotemporal EEG data
NASA Astrophysics Data System (ADS)
Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry
2004-10-01
We aim to use EEG source localization in the framework of a Brain-Computer Interface project. We propose here a new reconstruction procedure targeting source (or, equivalently, mental task) differentiation. EEG data can be thought of as a collection of time-continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources throughout the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure based on a two-scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only to the retained regions and makes use of a fine discretization of the space, aiming to detail the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum-energy and directional-consistency constraints.
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-01-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, we present a numerical optimization method for analyzing LR-NMR data that includes non-negativity constraints and L1 regularization and applies the convex optimization solver PDCO, a primal-dual interior method for convex objectives that allows general linear constraints to be treated as linear operators. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
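A minimal numpy sketch of L1-regularized, non-negative Laplace inversion (a projected-gradient stand-in for the PDCO solver used in the paper; the grids, noise level, and λ are assumed):

```python
import numpy as np

# Projected-gradient sketch (not PDCO): with f >= 0, ||f||_1 = sum(f),
# so L1-regularized Laplace inversion reduces to a simple iteration.
rng = np.random.default_rng(2)
t = np.linspace(0.01, 3.0, 120)            # decay-time grid (assumed)
T2 = np.logspace(-2, 1, 60)                # relaxation-time grid (assumed)
K = np.exp(-t[:, None] / T2[None, :])      # discretized Laplace kernel
f_true = np.zeros(60)
f_true[[20, 45]] = [1.0, 0.5]              # sparse "spectrum"
y = K @ f_true + 0.01 * rng.normal(size=120)
lam = 0.05                                 # L1 weight (assumed)
L = np.linalg.norm(K, 2) ** 2              # Lipschitz constant of the gradient
f = np.zeros(60)
for _ in range(20000):
    grad = K.T @ (K @ f - y) + lam         # data gradient + L1 term (f >= 0)
    f = np.maximum(f - grad / L, 0.0)      # gradient step + projection
assert np.linalg.norm(K @ f - y) < 0.5 * np.linalg.norm(y)  # fits the data
assert np.sum(f > 0.05) <= 15              # and stays sparse
```

An interior-point solver such as PDCO reaches the same optimum in far fewer iterations; the projected gradient version is shown only because it fits in a dozen lines.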
Proceedings of Colloquium on Stable Solutions of Some Ill-Posed Problems, October 9, 1979.
1980-06-30
4. In [24] the iterative process (9) was applied to the calculation of the magnetization of thin magnetic films. This problem is of interest for computer...equation ∫ K(x−t) f(t) dt = g(x), x > 1. (1) Its multidimensional analogue ∫ K(x−t) f(t) dt = g(x), x ∈ A, (2) can be interpreted as the problem of
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
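The contrast drawn above between a Gaussian (L2) prior and a sparsity prior can be sketched on a toy deconvolution problem (the smoothing operator and regularization weights are illustrative assumptions, not the study's transport model):

```python
import numpy as np

# Toy deconvolution: an L2 (Gaussian-prior) solution smears a point source,
# while an L1 (sparsity) solution keeps it localized.
rng = np.random.default_rng(3)
idx = np.arange(40)
G = np.exp(-np.abs(np.subtract.outer(idx, idx)) / 5.0)  # smoothing operator
x_true = np.zeros(40)
x_true[12] = 10.0                          # single point source ("hot spot")
y = G @ x_true + 0.05 * rng.normal(size=40)
# L2 / Tikhonov (Gaussian prior), weight assumed:
x_l2 = np.linalg.solve(G.T @ G + 5.0 * np.eye(40), G.T @ y)
# L1 via iterative soft thresholding (ISTA), weight assumed:
L = np.linalg.norm(G, 2) ** 2
x_l1 = np.zeros(40)
for _ in range(5000):
    z = x_l1 - G.T @ (G @ x_l1 - y) / L
    x_l1 = np.sign(z) * np.maximum(np.abs(z) - 2.0 / L, 0.0)
assert abs(np.argmax(np.abs(x_l1)) - 12) <= 1          # source stays localized
assert np.sum(np.abs(x_l1) > 0.3) <= np.sum(np.abs(x_l2) > 0.3)
```

The L1 solution concentrates near the true source cell, while the Gaussian prior spreads the same mass over neighboring cells, which is the behavior the abstract attributes to conventional atmospheric inversions.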
ERIC Educational Resources Information Center
Doleck, Tenzin; Jarrell, Amanda; Poitras, Eric G.; Chaouachi, Maher; Lajoie, Susanne P.
2016-01-01
Clinical reasoning is a central skill in diagnosing cases. However, diagnosing a clinical case poses several challenges that are inherent to solving multifaceted ill-structured problems. In particular, when solving such problems, the complexity stems from the existence of multiple paths to arriving at the correct solution (Lajoie, 2003). Moreover,…
NASA Astrophysics Data System (ADS)
Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem
2017-04-01
A key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave recorded by deep-water tsunameters is treated as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on the least squares and truncated singular value decomposition techniques. Tsunami wave propagation is considered within the scope of linear shallow-water theory. As in the inverse seismic problem, numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each spatial harmonic used as a source (the unknown tsunami source is represented as a series of spatial harmonics over the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can assess the future inversion by a given observational system, which allows a more effective disposition of the tsunameters to be proposed with the help of precomputations. In other words, the results obtained allow the inversion to be improved by selecting the most informative set of available recording stations. A case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in the inversion results. Application of the proposed methodology to the 16 September 2015 Chile tsunami successfully produced a tsunami source model. The recovered source function can find practical application both as an initial condition for various optimization approaches and in computing the propagation of the tsunami wave.
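The truncated-SVD "r-solution" idea can be sketched generically (the toy operator, spectrum, and noise are assumptions, not the tsunami waveform matrix):

```python
import numpy as np

# Truncated-SVD "r-solution": keep only the r largest singular values so
# that noise in the weak directions is not amplified.
rng = np.random.default_rng(6)
m, n = 80, 40
U0 = np.linalg.qr(rng.normal(size=(m, m)))[0][:, :n]
V0 = np.linalg.qr(rng.normal(size=(n, n)))[0]
s = 10.0 ** -np.linspace(0, 8, n)          # rapidly decaying spectrum
A = (U0 * s) @ V0.T                        # ill-conditioned "waveform" matrix
x_true = V0[:, :5] @ rng.normal(size=5)    # source lives in the leading modes
b = A @ x_true + 1e-6 * rng.normal(size=m)
U, sv, Vt = np.linalg.svd(A, full_matrices=False)

def r_solution(r):
    return Vt[:r].T @ ((U[:, :r].T @ b) / sv[:r])

err_full = np.linalg.norm(r_solution(n) - x_true)    # noise blows up
err_trunc = np.linalg.norm(r_solution(10) - x_true)  # stable
assert err_trunc < err_full
assert err_trunc < 1e-2
```

The singular spectrum of the precomputed matrix also tells, before any event occurs, how many harmonics a given station network can resolve, which is the basis for the station-selection argument above.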
Rapid optimization of multiple-burn rocket flights.
NASA Technical Reports Server (NTRS)
Brown, K. R.; Harrold, E. F.; Johnson, G. W.
1972-01-01
Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.
Källén-Lehmann spectroscopy for (un)physical degrees of freedom
NASA Astrophysics Data System (ADS)
Dudal, David; Oliveira, Orlando; Silva, Paulo J.
2014-01-01
We consider the problem of "measuring" the Källén-Lehmann spectral density of a particle (be it elementary or bound state) propagator by means of 4D lattice data. As the latter are obtained from operations at (Euclidean momentum squared) p2≥0, we are facing the generically ill-posed problem of converting a limited data set over the positive real axis to an integral representation, extending over the whole complex p2 plane. We employ a linear regularization strategy, commonly known as the Tikhonov method with the Morozov discrepancy principle, with suitable adaptations to realistic data, e.g. with an unknown threshold. An important virtue over the (standard) maximum entropy method is the possibility to also probe unphysical spectral densities, for example, of a confined gluon. We apply our proposal here to "physical" mock spectral data as a litmus test and then to the lattice SU(3) Landau gauge gluon at zero temperature.
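A minimal sketch of Tikhonov regularization combined with the Morozov discrepancy principle, on a toy smoothing kernel (the kernel and noise level are assumptions, not the lattice-propagator setup):

```python
import numpy as np

# Tikhonov regularization with the Morozov discrepancy principle:
# shrink alpha until the residual matches the known noise level delta.
rng = np.random.default_rng(4)
n = 60
s = np.linspace(0.0, 1.0, n)
d = np.subtract.outer(s, s)
K = 0.02 / (0.02**2 + d**2) / n            # toy smoothing kernel (assumed)
x_true = np.sin(2 * np.pi * s)
noise = 0.01 * rng.normal(size=n)
y = K @ x_true + noise
delta = np.linalg.norm(noise)              # noise magnitude, assumed known
alpha = 1.0
while alpha > 1e-12:
    x = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ y)
    if np.linalg.norm(K @ x - y) <= 1.1 * delta:   # discrepancy reached
        break
    alpha *= 0.5
assert np.linalg.norm(K @ x - y) <= 1.1 * delta
assert np.linalg.norm(x - x_true) < np.linalg.norm(x_true)  # useful recovery
```

Stopping when the residual first drops to the noise level avoids both over-smoothing (residual too large) and fitting the noise (residual pushed below delta).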
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, M.K.; Fomel, S.B.; Sethian, J.A.
2009-01-01
In the present work we derive and study a nonlinear elliptic PDE arising from the problem of estimating the sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first is a finite-difference time-marching numerical scheme inspired by the Lax-Friedrichs method; its key features are the Lax-Friedrichs averaging and a wide stencil in space. The second is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics (and, correspondingly, the truncation of the Chebyshev series), and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
Least Squares Computations in Science and Engineering
1994-02-01
iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct...optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The...engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select effective Gabor features, a subset of informative and nonredundant Gabor features. The RDAB algorithm uses RDA as a learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): through a regularization technique it solves the small-sample-size and ill-posed problems suffered by QDA and LDA. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate the optimal parameters of RDA. Experiment results demonstrate that our approach can accurately and robustly recognize facial expressions.
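The covariance regularization at the heart of RDA can be sketched as follows (the blending parameters and the stand-in pooled covariance are illustrative assumptions):

```python
import numpy as np

# RDA-style covariance regularization: blend the class covariance with a
# pooled covariance and the scaled identity so it stays invertible even
# when the sample size is smaller than the dimension.
rng = np.random.default_rng(9)
d, n_k = 20, 5                        # dimension exceeds per-class samples
Xk = rng.normal(size=(n_k, d))
Sk = np.cov(Xk, rowvar=False)         # rank <= n_k - 1: singular
assert np.linalg.matrix_rank(Sk) < d
lam, gamma = 0.5, 0.1                 # RDA blending parameters (assumed)
S_pooled = np.eye(d)                  # stand-in for the pooled covariance
S_rda = (1 - lam) * Sk + lam * S_pooled
S_rda = (1 - gamma) * S_rda + gamma * (np.trace(S_rda) / d) * np.eye(d)
assert np.all(np.linalg.eigvalsh(S_rda) > 0)   # now invertible
```

lam interpolates between QDA (per-class covariance) and LDA (pooled covariance), and gamma shrinks toward the identity; in the paper these two knobs are tuned by PSO.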
Chemical approaches to solve mycotoxin problems and improve food safety
USDA-ARS?s Scientific Manuscript database
Foodborne illnesses are experienced by most of the population and are preventable. Agricultural produce can occasionally become contaminated with fungi capable of making mycotoxins that pose health risks and reduce values. Many strategies are employed to keep food safe from mycotoxin contamination. ...
NASA Astrophysics Data System (ADS)
Klibanov, Michael V.; Kuzhuget, Andrey V.; Golubnichiy, Kirill V.
2016-01-01
A new empirical mathematical model based on the Black-Scholes equation is proposed to forecast option prices. This model includes a new interval for the price of the underlying stock and new initial and boundary conditions. Conventional notions of maturity time and strike price are not used. The Black-Scholes equation is solved as a parabolic equation with reversed time, which is an ill-posed problem; thus, a regularization method is used to solve it. To verify the validity of our model, real market data for 368 randomly selected liquid options are used. A new trading strategy is proposed. Our results indicate that our method is profitable on those options. Furthermore, it is shown that the performance of two simple extrapolation-based techniques is much worse. We conjecture that our method might lead to significant profits for financial institutions that trade large amounts of options. We caution, however, that further studies are necessary to verify this conjecture.
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent, from a Bayesian perspective, to adding constraint equations or prior information. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments with different magnitudes of imposed noise and different observation densities indicates that the ASC is superior to the LSC. With the proposed ASC, the Helmert variance component estimation method proves the best for selecting the regularization parameter, compared with other methods such as generalized cross-validation or the mean squared error criterion. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
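Appending a discrete Laplacian as pseudo-observations is a common way to implement the classic smoothness constraint; a minimal numpy sketch follows (the toy geometry and weight are assumptions, not a real slip model):

```python
import numpy as np

# Smoothness constraint as pseudo-observations: append kappa * D (discrete
# second differences) to the design so the ill-posed system is regularized.
rng = np.random.default_rng(5)
m, n = 30, 50                          # fewer data than unknowns
G = rng.normal(size=(m, n))            # toy "Green's function" matrix
i = np.arange(n)
slip = np.exp(-((i - 25.0) / 6.0) ** 2)          # smooth true "slip"
d = G @ slip + 0.01 * rng.normal(size=m)
D = np.diff(np.eye(n), n=2, axis=0)    # (n-2) x n discrete Laplacian
kappa = 1.0                            # smoothing weight (assumed)
A = np.vstack([G, kappa * D])
b = np.concatenate([d, np.zeros(n - 2)])
slip_smooth = np.linalg.lstsq(A, b, rcond=None)[0]   # regularized
slip_plain = np.linalg.lstsq(G, d, rcond=None)[0]    # minimum norm, no prior
assert np.linalg.norm(slip_smooth - slip) < np.linalg.norm(slip_plain - slip)
```

The adaptive variant of the paper replaces the fixed matrix D by one tuned to the data, but the stacked-system mechanics are the same.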
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
NASA Astrophysics Data System (ADS)
Provencher, Stephen W.
1982-09-01
CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be chosen automatically on the basis of an F-test and confidence region. The interpretation of the latter, and of error estimates based on the covariance matrix of the constrained regularized solution, are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
Tajti, Attila; Szalay, Péter G
2016-11-08
Describing electronically excited states of molecules accurately poses a challenging problem for theoretical methods. Popular second-order techniques like Linear Response CC2 (CC2-LR), Partitioned Equation-of-Motion MBPT(2) (P-EOM-MBPT(2)), or Equation-of-Motion CCSD(2) (EOM-CCSD(2)) often produce controversial results whose accuracy is ill-balanced between valence- and Rydberg-type states. In this study, we connect the theory of these methods and, to investigate the origin of their different behavior, establish a series of intermediate variants. Their accuracy for the excitation energies of singlet valence and Rydberg electronic states is benchmarked on a large sample against high-accuracy Linear Response CC3 references. The results reveal the role of individual terms of the second-order similarity-transformed Hamiltonian and the reason for the poor performance of CC2-LR in the description of Rydberg states. We also clarify the importance of the T̂1 transformation employed in the CC2 procedure, whose effect is found to be very small for vertical excitation energies.
NASA Astrophysics Data System (ADS)
Sirota, Dmitry; Ivanov, Vadim
2017-11-01
Mining operations affect the stability of natural and technogenic massifs and give rise to sources of differences in mechanical stress. These sources generate a quasi-stationary electric field with a Newtonian potential. The paper reviews a method for determining the shape and size of a flat source of a field with this kind of potential. This problem is common to many areas of mining: geological exploration of mineral resources, ore deposits, control of underground mining, locating coal self-heating sources, localizing the sources of rock cracks, and other applied problems of practical physics. These problems are ill-posed inverse problems and are solved by conversion to a Fredholm-Urysohn integral equation of the first kind, which is then solved by A. N. Tikhonov's regularization method.
The 2-D magnetotelluric inverse problem solved with optimization
NASA Astrophysics Data System (ADS)
van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven
2011-02-01
The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
Moving force identification based on modified preconditioned conjugate gradient method
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy
2018-06-01
This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from bridge-deck responses caused by passing vehicles, a task known to be sensitive to the ill-posedness of the underlying inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j are very important factors influencing the identification accuracy and noise immunity of M-PCG. Compared with the conventional SVD-based counterpart embedded in the time domain method (TDM) and the standard form of CG, M-PCG with a proper regularization matrix has many advantages, such as better adaptability and more robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, which makes M-PCG a preferred choice for field MFI applications.
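A plain (unpreconditioned) conjugate-gradient solver for regularized normal equations, as a simplified stand-in for M-PCG (the Gram-Schmidt-based preconditioner itself is not reproduced; L, λ, and the test problem are assumptions):

```python
import numpy as np

# Plain conjugate gradients on the regularized normal equations
# (A^T A + lam * L^T L) x = A^T b -- a simplified, unpreconditioned
# stand-in for M-PCG.
rng = np.random.default_rng(7)
m, n = 100, 40
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 0.01 * rng.normal(size=m)
Lmat = np.eye(n)                       # regularization matrix L (assumed)
lam = 1e-3
H = A.T @ A + lam * Lmat.T @ Lmat      # symmetric positive definite
g = A.T @ b
x = np.zeros(n)
r = g - H @ x
p = r.copy()
for _ in range(200):
    Hp = H @ p
    a = (r @ r) / (p @ Hp)             # CG step length
    x += a * p
    r_new = r - a * Hp
    if np.linalg.norm(r_new) < 1e-12:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new
assert np.linalg.norm(H @ x - g) < 1e-8   # normal equations solved
```

In the CG family the iteration count doubles as a regularization knob (early stopping), which is why the paper treats j alongside L as a tuning parameter.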
NASA Astrophysics Data System (ADS)
Trillon, Adrien
Eddy current tomography can be employed to characterize flaws in the metal plates of steam generators in nuclear power plants. Our goal is to evaluate a map of the relative conductivity that represents the flaw. This nonlinear ill-posed problem is difficult to solve, and a forward model is needed. First, we studied existing forward models to choose the one most adapted to our case; finite difference and finite element methods were well suited to our application. We adapted contrast source inversion (CSI) type methods to the chosen model, and a new criterion was proposed. These methods are based on the minimization of the weighted errors of the model equations, coupling and observation, and thus tolerate an error on the equations. It appeared that reconstruction quality improves as the error on the coupling equation decreases. We resorted to augmented Lagrangian techniques to constrain the coupling equation and to avoid conditioning problems. In order to overcome the ill-posed character of the problem, prior information was introduced about the shape of the flaw and the values of the relative conductivity. The efficiency of the methods is illustrated with simulated flaws in the 2D case.
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by the instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimates after minimizing the representativity errors.
Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann
2011-10-07
The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem as small noises in input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
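The key computational step above is solving a bound-constrained quadratic problem by gradient projection. The following is a generic sketch of that step (not the authors' implementation; the quadratic form, bounds, fixed step size and function name are illustrative assumptions):

```python
import numpy as np

def gradient_projection(Q, c, lb, ub, step, n_iters):
    """Projected gradient descent for  min 0.5 x'Qx + c'x  s.t. lb <= x <= ub.

    Each iteration takes a gradient step and projects back onto the box
    constraints; for a bound-constrained QP the projection is a simple clip.
    """
    x = np.clip(np.zeros_like(c), lb, ub)
    for _ in range(n_iters):
        grad = Q @ x + c             # gradient of the quadratic objective
        x = np.clip(x - step * grad, lb, ub)
    return x
```

In the ECG setting, the bounds would come from the variable-splitting of the epicardial potentials into their positive and negative parts, which is what makes the l1 penalty differentiable.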
1982-02-01
of them are presented in this paper. As an application, important practical problems similar to the one posed by Gnanadesikan (1977), p. 77 can be... Gnanadesikan and Wilk (1969) to search for a non-linear combination, giving rise to a non-linear first principal component. So, a p-dimensional vector can... distribution, Gnanadesikan and Gupta (1970) and earlier Eaton (1967) have considered the problem of ranking the r underlying populations according to the
NASA Astrophysics Data System (ADS)
Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.
2015-12-01
Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. 
This study supports both educational and on-going research activities on inversion algorithms and might be useful to evaluate the uncertainties of retrieved canopy biophysical and biochemical state variables.
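As a rough illustration of the SOM training described above, here is a minimal rectangular-grid self-organizing map in Python. All details (grid size, decay schedules, random initialization, function name) are simplified assumptions rather than the configuration used in the study, which trained a 200 × 125 map on 100,000 simulated Sentinel-2 spectra:

```python
import numpy as np

def train_som(data, grid_w, grid_h, n_steps=2000, lr0=0.5, sigma0=None, seed=0):
    """Minimal SOM: best-matching-unit search plus Gaussian neighbourhood
    update, with linearly decaying learning rate and neighbourhood radius."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    if sigma0 is None:
        sigma0 = max(grid_w, grid_h) / 2.0
    weights = rng.random((grid_h, grid_w, dim))
    yy, xx = np.mgrid[0:grid_h, 0:grid_w]       # grid coordinates of neurons
    for t in range(n_steps):
        frac = t / n_steps
        lr = lr0 * (1 - frac)                   # decaying learning rate
        sigma = sigma0 * (1 - frac) + 1e-3      # shrinking neighbourhood
        v = data[rng.integers(len(data))]       # random training sample
        d2 = ((weights - v) ** 2).sum(axis=2)
        bj, bi = np.unravel_index(np.argmin(d2), d2.shape)  # best unit
        g2 = (yy - bj) ** 2 + (xx - bi) ** 2
        h = np.exp(-g2 / (2 * sigma ** 2))      # Gaussian neighbourhood
        weights += lr * h[..., None] * (v - weights)
    return weights
```

Tracing which input variables produced the spectra mapped to each neuron, as the study does, then reveals where different variable combinations collapse onto the same spectral signature.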
Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao
2018-06-13
Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and a linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled released-mass parameter does not depend on prior information, which improves the identification efficiency. A hypothetical case study with varying numbers of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the number of gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA.
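The optimization step of an approach like the LR-BPM can be handled by differential evolution. The sketch below is a generic DE/rand/1/bin minimiser, not the authors' code; in source identification the objective function would be the misfit between observed and simulated concentrations over the candidate location, release time and mass:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=100, F=0.7, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimiser of f over box bounds [(lo, hi), ...]."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    dim = len(bounds)
    X = lo + rng.random((pop, dim)) * (hi - lo)     # initial population
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            # mutation: combine three distinct members other than i
            a, b, c = X[rng.choice([j for j in range(pop) if j != i],
                                   3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover with at least one mutant component
            cross = rng.random(dim) < CR
            if not cross.any():
                cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft <= fit[i]:                         # greedy selection
                X[i], fit[i] = trial, ft
    return X[np.argmin(fit)], fit.min()
```

Averaging the best candidates over repeated runs, as the paper suggests, mitigates the equifinality problem in which different source parameters fit the observations almost equally well.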
Local well-posedness for dispersion generalized Benjamin-Ono equations in Sobolev spaces
NASA Astrophysics Data System (ADS)
Guo, Zihua
We prove that the Cauchy problem for the dispersion generalized Benjamin-Ono equation ∂_t u + |∂_x|^{1+α} ∂_x u + u∂_x u = 0, u(x,0) = u_0(x), is locally well-posed in the Sobolev spaces H^s for s > 1-α if 0 ⩽ α ⩽ 1. The new ingredient is that we generalize the methods of Ionescu, Kenig and Tataru (2008) [13] to approach the problem in a less perturbative way, in spite of the ill-posedness results of Molinet, Saut and Tzvetkov (2001) [21]. Moreover, as a by-product we prove that if 0 < α ⩽ 1 the corresponding modified equation (with the nonlinearity ±u^2 ∂_x u) is locally well-posed in H^s for s ⩾ 1/2-α/4.
Do everyday problems of people with chronic illness interfere with their disease management?
van Houtum, Lieke; Rijken, Mieke; Groenewegen, Peter
2015-10-01
Being chronically ill is a continuous process of balancing the demands of the illness and the demands of everyday life. Understanding how everyday life affects self-management might help to provide better professional support. However, little attention has been paid to the influence of everyday life on self-management. The purpose of this study is to examine to what extent problems in everyday life interfere with the self-management behaviour of people with chronic illness, i.e. their ability to manage their illness. To estimate the effects of having everyday problems on self-management, cross-sectional linear regression analyses with propensity score matching were conducted. Data was used from 1731 patients with chronic disease(s) who participated in a nationwide Dutch panel-study. One third of people with chronic illness encounter basic (e.g. financial, housing, employment) or social (e.g. partner, children, sexual or leisure) problems in their daily life. Younger people, people with poor health and people with physical limitations are more likely to have everyday problems. Experiencing basic problems is related to less active coping behaviour, while experiencing social problems is related to lower levels of symptom management and less active coping behaviour. The extent of everyday problems interfering with self-management of people with chronic illness depends on the type of everyday problems encountered, as well as on the type of self-management activities at stake. Healthcare providers should pay attention to the life context of people with chronic illness during consultations, as patients' ability to manage their illness is related to it.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
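A primal-dual interior point solver is beyond the scope of a short sketch, but the l1 sparse-deconvolution model itself can be illustrated with a simpler solver. The following uses ISTA (iterative shrinkage-thresholding) purely as a stand-in for PDIPM; in the impact-force setting, A would be the convolution (transfer) matrix and x the sparse force history:

```python
import numpy as np

def ista(A, b, lam, n_iters=500):
    """ISTA for  min 0.5*||Ax - b||^2 + lam*||x||_1.

    Each step is a gradient step on the quadratic term followed by
    soft-thresholding, which is what promotes sparsity in x.
    """
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        g = A.T @ (A @ x - b)          # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

With A = I the method reduces to plain soft-thresholding of b, which makes the sparsifying effect of the l1 penalty easy to verify.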
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
TOPICAL REVIEW: The stability for the Cauchy problem for elliptic equations
NASA Astrophysics Data System (ADS)
Alessandrini, Giovanni; Rondi, Luca; Rosset, Edi; Vessella, Sergio
2009-12-01
We discuss the ill-posed Cauchy problem for elliptic equations, which is pervasive in inverse boundary value problems modeled by elliptic equations. We provide essentially optimal stability results, in wide generality and under substantially minimal assumptions. As a general scheme in our arguments, we show that all such stability results can be derived by the use of a single building brick, the three-spheres inequality. Due to the current absence of research funding from the Italian Ministry of University and Research, this work has been completed without any financial support.
Domain decomposition in time for PDE-constrained optimization
Barker, Andrew T.; Stoll, Martin
2015-08-28
PDE-constrained optimization problems have a wide range of applications, but they lead to very large and ill-conditioned linear systems, especially if the problems are time dependent. In this paper we outline an approach for dealing with such problems by decomposing them in time and applying an additive Schwarz preconditioner in time, so that we can take advantage of parallel computers to deal with the very large linear systems. We then illustrate the performance of our method on a variety of problems.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high-quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. 
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type but also other regularization methods in Banach spaces is assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. 
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. 
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
Assessment of thyroid function in dogs with low plasma thyroxine concentration.
Diaz Espineira, M M; Mol, J A; Peeters, M E; Pollak, Y W E A; Iversen, L; van Dijk, J E; Rijnberk, A; Kooistra, H S
2007-01-01
Differentiation between hypothyroidism and nonthyroidal illness in dogs poses specific problems, because plasma total thyroxine (TT4) concentrations are often low in nonthyroidal illness, and plasma thyroid stimulating hormone (TSH) concentrations are frequently not high in primary hypothyroidism. The serum concentrations of the common basal biochemical variables (TT4, freeT4 [fT4], and TSH) overlap between dogs with hypothyroidism and dogs with nonthyroidal illness, but, with stimulation tests and quantitative measurement of thyroidal 99mTcO4(-) uptake, differentiation will be possible. In 30 dogs with low plasma TT4 concentration, the final diagnosis was based upon histopathologic examination of thyroid tissue obtained by biopsy. Fourteen dogs had primary hypothyroidism, and 13 dogs had nonthyroidal illness. Two dogs had secondary hypothyroidism, and 1 dog had metastatic thyroid cancer. The diagnostic value was assessed for (1) plasma concentrations of TT4, fT4, and TSH; (2) TSH-stimulation test; (3) plasma TSH concentration after stimulation with TSH-releasing hormone (TRH); (4) occurrence of thyroglobulin antibodies (TgAbs); and (5) thyroidal 99mTcO4(-) uptake. Plasma concentrations of TT4, fT4, TSH, and the hormone pairs TT4/TSH and fT4/TSH overlapped in the 2 groups, whereas, with TgAbs, there was 1 false-negative result. Results of the TSH- and TRH-stimulation tests did not meet earlier established diagnostic criteria, overlapped, or both. With a quantitative measurement of thyroidal 99mTcO4(-) uptake, there was no overlap between dogs with primary hypothyroidism and dogs with nonthyroidal illness. The results of this study confirm earlier observations that, in dogs, accurate biochemical diagnosis of primary hypothyroidism poses specific problems. Previous studies, in which the TSH-stimulation test was used as the "gold standard" for the diagnosis of hypothyroidism may have suffered from misclassification. 
Quantitative measurement of thyroidal 99mTcO4(-) uptake has the highest discriminatory power with regard to the differentiation between primary hypothyroidism and nonthyroidal illness.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
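The regularization described above follows the standard Tikhonov form, in which the ill-posed least-squares fit is replaced by a penalized solve. As a generic sketch (not the history-matching code; function name and defaults are mine):

```python
import numpy as np

def tikhonov(A, b, mu, L=None):
    """Tikhonov-regularized least squares:  min ||Ax - b||^2 + mu*||Lx||^2.

    Closed-form solution x = (A'A + mu L'L)^{-1} A'b; the regularization
    matrix L defaults to the identity (standard-form Tikhonov).
    """
    n = A.shape[1]
    if L is None:
        L = np.eye(n)
    return np.linalg.solve(A.T @ A + mu * (L.T @ L), A.T @ b)
```

The choice of the regularization parameter mu is exactly the "quasi-optimal" selection problem the abstract mentions: too small and the noise dominates, too large and the estimate is over-smoothed.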
Convex Relaxation For Hard Problem In Data Mining And Sensor Localization
2017-04-13
Drusvyatskiy, S.A. Vavasis, and H. Wolkowicz. Extreme point inequalities and geometry of the rank sparsity ball. Math. Program., 152(1-2, Ser. A...521–544, 2015. [3] M-H. Lin and H. Wolkowicz. Hiroshima's theorem and matrix norm inequalities. Acta Sci. Math. (Szeged), 81(1-2):45–53, 2015. [4] D...9867-4. [8] D. Drusvyatskiy, G. Li, and H. Wolkowicz. Alternating projections for ill-posed semidefinite feasibility problems. Math. Program., 2016
1977-12-01
exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution
Pose-free structure from motion using depth from motion constraints.
Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G
2011-10-01
Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving much fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point positions coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE
Application of Turchin's method of statistical regularization
NASA Astrophysics Data System (ADS)
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convolved with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
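Turchin's method differs from plain Tikhonov regularization in that it treats the regularized solution as a Gaussian posterior, yielding an uncertainty estimate along with the point estimate. A minimal numerical sketch under simplifying assumptions (Gaussian noise of known level, a second-difference smoothness prior, and names of my own choosing, not the article's implementation):

```python
import numpy as np

def statreg_posterior(K, f, sigma, alpha):
    """Gaussian posterior for the model f = K*phi + noise with a smoothness
    prior alpha * ||D phi||^2, D being the second-difference operator.

    Returns the posterior mean and covariance of phi, so that the
    reconstruction comes with error bars, in the spirit of statistical
    regularization.
    """
    n = K.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)     # second-difference rows
    Omega = D.T @ D                          # smoothness (prior precision) matrix
    Sinv = K.T @ K / sigma**2 + alpha * Omega
    cov = np.linalg.inv(Sinv)                # posterior covariance
    mean = cov @ K.T @ f / sigma**2          # posterior mean
    return mean, cov
```

For a nearly exact apparatus function (K close to the identity) and a weak prior, the posterior mean reproduces the data, while stronger smoothing trades fidelity for stability.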
Transition from the labor market: older workers and retirement.
Peterson, Chris L; Murphy, Greg
2010-01-01
The new millennium has seen the projected growth of older populations as a source of many problems, not the least of which is how to sustain this increasingly aging population. Some decades ago, early retirement from work posed few problems for governments, but most nations are now trying to ensure that workers remain in the workforce longer. In this context, the role played by older employees can be affected by at least two factors: their productivity (or perceived productivity) and their acceptance by younger workers and management. If the goal of maintaining employees into older age is to be achieved and sustained, opportunities must be provided, for example, for more flexible work arrangements and more possibilities to pursue bridge employment (work after formal retirement). The retirement experience varies, depending on people's circumstances. Some people, for example, have retirement forced upon them by illness or injury at work, by ill-health (such as chronic illnesses), or by downsizing and associated redundancies. This article focuses on the problems and opportunities associated with working to an older age or leaving the workforce early, particularly due to factors beyond one's control.
Martin, Graeme; Beech, Nic; MacIntosh, Robert; Bushfield, Stacey
2015-01-01
The discourse of leaderism in health care has been a subject of much academic and practical debate. Recently, distributed leadership (DL) has been adopted as a key strand of policy in the UK National Health Service (NHS). However, there is some confusion over the meaning of DL and uncertainty over its application to clinical and non-clinical staff. This article examines the potential for DL in the NHS by drawing on qualitative data from three co-located health-care organisations that embraced DL as part of their organisational strategy. Recent theorising positions DL as a hybrid model combining focused and dispersed leadership; however, our data raise important challenges for policymakers and senior managers who are implementing such a leadership policy. We show that there are three distinct forms of disconnect and that these pose a significant problem for DL. However, we argue that instead of these disconnects posing a significant problem for the discourse of leaderism, they enable a fantasy of leadership that draws on and supports the discourse. © 2014 The Authors. Sociology of Health & Illness © 2014 Foundation for the Sociology of Health & Illness/John Wiley & Sons Ltd.
Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology
NASA Astrophysics Data System (ADS)
Barker, T.; Schaeffer, D. G.; Shearer, M.; Gray, J. M. N. T.
2017-05-01
Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities.
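The abstract's μ(I) law can be sketched numerically. The parameter values below (μ_s, μ_2, I_0) are not given in the abstract; they are commonly quoted glass-bead values from the literature, so treat this as an illustrative assumption rather than the paper's calibration.

```python
import math

# Illustrative mu(I) friction law. The constants are ASSUMED (commonly
# quoted glass-bead values), not taken from the paper.
MU_S = math.tan(math.radians(20.9))   # quasi-static friction coefficient
MU_2 = math.tan(math.radians(32.76))  # limiting friction at rapid flow
I_0 = 0.279                           # material constant

def mu(I):
    """Friction coefficient mu(I) for inertial number I > 0."""
    return MU_S + (MU_2 - MU_S) / (1.0 + I_0 / I)

# mu(I) increases monotonically from mu_s toward mu_2 -- boundedness and
# monotonicity are the kind of "minimal, physically natural" inequalities
# the well-posedness result constrains.
samples = [mu(10.0 ** k) for k in range(-6, 7)]
```

Monotonicity and boundedness of μ(I) are exactly the sort of inequalities the well-posedness conditions in the paper impose.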
NASA Astrophysics Data System (ADS)
Winicour, Jeffrey
2017-08-01
An algebraic-hyperbolic method for solving the Hamiltonian and momentum constraints has recently been shown to be well posed for general nonlinear perturbations of the initial data for a Schwarzschild black hole. This is a new approach to solving the constraints of Einstein’s equations which does not involve elliptic equations and has potential importance for the construction of binary black hole data. In order to shed light on the underpinnings of this approach, we consider its application to obtain solutions of the constraints for linearized perturbations of Minkowski space. In that case, we find the surprising result that there are no suitable Cauchy hypersurfaces in Minkowski space for which the linearized algebraic-hyperbolic constraint problem is well posed.
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi
2006-02-01
The Retinex theory, first proposed by Land, deals with the separation of irradiance from reflectance in an observed image. This separation is an ill-posed problem. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies previous Retinex algorithms, such as the Poisson-equation-type algorithms developed by Horn and others, and presented a Retinex separation algorithm based on the time evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time evolution. Moreover, for the extension to color images, we present two approaches to treating color channels: an independent approach that treats each color channel separately and a collective approach that treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to high-quality chroma keying, in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's.
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem that is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of both the ℓ1-regularized method and prior model information obtained from sonic logs and geological knowledge, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information. Numerical experiments show that this method not only improves the inversion results but also has strong noise resistance.
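As a rough illustration of ℓ1-regularized inversion (not of OWL-QN itself, which requires a dedicated implementation), the sketch below uses ISTA, a simpler proximal-gradient method, on a toy sparse linear problem; the operator, sizes, and λ are arbitrary assumptions.

```python
import numpy as np

# ISTA sketch for min_x 0.5*||Ax - b||^2 + lam*||x||_1 on a toy problem.
# A stand-in for OWL-QN; all sizes and lam are ASSUMPTIONS.
rng = np.random.default_rng(6)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[5, 30, 61]] = [1.0, -1.0, 0.7]          # sparse "model"
b = A @ x_true                                   # noiseless data

lam = 0.1
eta = 1.0 / np.linalg.norm(A, 2) ** 2            # step size 1/L, L = sigma_max^2
x = np.zeros(80)
for _ in range(2000):
    g = x - eta * A.T @ (A @ x - b)              # gradient step on the misfit
    x = np.sign(g) * np.maximum(np.abs(g) - eta * lam, 0.0)  # soft-threshold
```

The soft-thresholding step is what enforces sparsity; OWL-QN achieves the same ℓ1 geometry with quasi-Newton (L-BFGS-like) steps restricted to orthants.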
Celik, Hasan; Bouhrara, Mustapha; Reiter, David A.; Fishbein, Kenneth W.; Spencer, Richard G.
2013-01-01
We propose a new approach to stabilizing the inverse Laplace transform of a multiexponential decay signal, a classically ill-posed problem, in the context of nuclear magnetic resonance relaxometry. The method is based on extension to a second, indirectly detected, dimension, that is, use of the established framework of two-dimensional relaxometry, followed by projection onto the desired axis. Numerical results for signals comprised of discrete T1 and T2 relaxation components and experiments performed on agarose gel phantoms are presented. We find markedly improved accuracy, and stability with respect to noise, as well as insensitivity to regularization in quantifying underlying relaxation components through use of the two-dimensional as compared to the one-dimensional inverse Laplace transform. This improvement is demonstrated separately for two different inversion algorithms, nonnegative least squares and non-linear least squares, to indicate the generalizability of this approach. These results may have wide applicability in approaches to the Fredholm integral equation of the first kind. PMID:24035004
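A minimal sketch of the underlying 1-D problem (not the paper's 2-D method): a discretized inverse Laplace transform solved with non-negative least squares plus a small, ad hoc Tikhonov term. The time grid, T2 grid, and λ are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# 1-D inverse-Laplace-transform sketch via non-negative least squares.
# Grids and lambda are ASSUMPTIONS; the paper's 2-D method is not reproduced.
t = np.linspace(0.001, 2.0, 200)            # acquisition times (s)
T2 = np.logspace(-2, 0.5, 60)               # trial relaxation times (s)
K = np.exp(-t[:, None] / T2[None, :])       # discretized Laplace kernel

# Synthetic two-component decay: amplitudes 1.0 (T2=0.05s) and 0.5 (T2=0.5s)
y = 1.0 * np.exp(-t / 0.05) + 0.5 * np.exp(-t / 0.5)

# Weak Tikhonov term appended as extra rows to stabilize the ill-posed fit
lam = 1e-4
A = np.vstack([K, np.sqrt(lam) * np.eye(len(T2))])
b = np.concatenate([y, np.zeros(len(T2))])
x, _ = nnls(A, b)                           # non-negative T2 spectrum
```

With noise added to `y`, the recovered spectrum becomes strongly λ-dependent, which is the instability the paper's two-dimensional extension is designed to tame.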
On the breakdown of the curvature perturbation ζ during reheating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, Merve Tarman; Kaya, Ali; Kutluk, Emine Seyma, E-mail: merve.tarman@boun.edu.tr, E-mail: ali.kaya@boun.edu.tr, E-mail: seymakutluk@gmail.com
2015-04-01
It is known that in single scalar field inflationary models the standard curvature perturbation ζ, which is supposedly conserved at superhorizon scales, diverges during reheating at times when φ̇ = 0, i.e. when the time derivative of the background inflaton field vanishes. This happens because the comoving gauge ϕ = 0, where ϕ denotes the inflaton perturbation, breaks down when φ̇ = 0. The issue is usually bypassed by averaging out the inflaton oscillations, but strictly speaking the evolution of ζ is mathematically ill posed. We solve this problem in the free theory by introducing a family of smooth gauges that still eliminates the inflaton fluctuation ϕ in the Hamiltonian formalism and gives a well-behaved curvature perturbation ζ, which is now rigorously conserved at superhorizon scales. At the linearized level, this conserved variable can be used to unambiguously propagate the inflationary perturbations from the end of inflation to subsequent epochs. We discuss the implications of our results for the inflationary predictions.
Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.
Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti
2006-02-01
Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution, or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment, and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that reconstruction of the 3-D structure from a few projection images is an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology, partially based on Kolehmainen et al. (2003). The prior model for dental structures consists of a weighted ℓ1- and total variation (TV)-prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as a reference for the proposed method.
NASA Astrophysics Data System (ADS)
Vogelgesang, Jonas; Schorr, Christian
2016-12-01
We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to cone beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object, leading to a more accurate model of the X-rays through the object. Physical conditions of the scanning geometry, such as the flat detectors used in computerized tomography for non-destructive testing and the non-regular scanning curves appearing, e.g., in computed laminography (CL), are also taken directly into account during the modelling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to significantly increased image quality and superior reconstructions compared with standard iterative methods.
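A fully discrete Landweber iteration, the classical core of the Landweber-Kaczmarz family, can be sketched in a few lines; the paper's semi-discrete, weighted-subspace formulation is not reproduced here, and the operator below is a random stand-in.

```python
import numpy as np

# Classical (fully discrete) Landweber iteration for Ax = b.
# A is a random stand-in for a tomographic projection operator (ASSUMPTION).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = rng.standard_normal(10)
b = A @ x_true                             # noiseless "projection data"

omega = 1.0 / np.linalg.norm(A, 2) ** 2    # step size, must be < 2/sigma_max^2
x = np.zeros(10)
for _ in range(2000):
    x = x + omega * A.T @ (b - A @ x)      # gradient descent on ||Ax - b||^2
```

For genuinely ill-posed operators the iteration count acts as the regularization parameter (early stopping); here the toy problem is well-conditioned, so the iterates simply converge.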
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low-frequency electromagnetic tomography, such as electrical capacitance tomography (ECT), has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Because the ECT inverse problem is ill-posed, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high-resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivity of the two phases on those pixels that exceed the reasonable range of permittivity. This strategy not only stabilizes the convergence process but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
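The clipping strategy can be illustrated with a generic projected, Tikhonov-regularized gradient iteration; the FEM forward model is replaced by a random linear operator, and the permittivity bounds are invented for the example.

```python
import numpy as np

# Projected Tikhonov-regularized gradient iteration with a box constraint,
# in the spirit of INTAC's clipping of out-of-range permittivities.
# The linear operator and bounds are ASSUMPTIONS (no FEM model here).
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
x_true = rng.uniform(1.0, 3.0, 20)        # "permittivities" of the two phases
b = A @ x_true                            # simulated "capacitance" data

lam, lo, hi = 1e-3, 1.0, 3.0              # regularization and physical bounds
step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)
x = np.full(20, 2.0)
for _ in range(3000):
    grad = A.T @ (A @ x - b) + lam * x    # grad of 0.5||Ax-b||^2 + 0.5*lam||x||^2
    x = np.clip(x - step * grad, lo, hi)  # enforce the known permittivity range
```

The projection (`np.clip`) is the stabilizing ingredient the abstract emphasizes: it keeps every iterate inside the physically admissible set.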
ERIC Educational Resources Information Center
Martin, Robert
1981-01-01
Discusses the problems posed by a semantic analysis of the future tense in French, addressing particularly its double use as a tense and as a mood. The distinction between linear and branching time, or, certainty and possibility, central to this discussion, leads to a comparative analysis of future and conditional. (MES)
A genetic algorithm approach to estimate glacier mass variations from GRACE data
NASA Astrophysics Data System (ADS)
Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten
2017-04-01
The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method in terms of an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point-masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides from the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that the mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules like, e.g., the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by introducing regularization to the overall optimization problem. Based on this novel approach, estimations of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.
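At toy scale, casting a regularized inversion as a problem for a genetic algorithm looks like the following sketch; the selection/crossover/mutation scheme and all sizes are illustrative assumptions, not the authors' setup, and the point-mass parametrization and GRACE data are not modeled.

```python
import numpy as np

# Toy genetic algorithm minimizing a Tikhonov-regularized least-squares
# misfit. Operator, population scheme, and sizes are ASSUMPTIONS.
rng = np.random.default_rng(7)
A = rng.standard_normal((15, 6))
b = A @ rng.standard_normal(6)
lam = 1e-2

def fitness(x):
    """Regularized data misfit to be minimized."""
    return np.sum((A @ x - b) ** 2) + lam * np.sum(x ** 2)

pop = rng.standard_normal((50, 6))                  # initial population
initial_best = min(fitness(p) for p in pop)
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:25]]          # selection: keep best half
    mates = parents[rng.integers(0, 25, 25)]
    children = 0.5 * (parents + mates)              # arithmetic crossover
    children += 0.1 * rng.standard_normal(children.shape)  # mutation
    pop = np.vstack([parents, children])            # elitist replacement

best = pop[np.argmin([fitness(p) for p in pop])]
```

Because selection is elitist, the best fitness is monotone non-increasing across generations, which is what makes a GA usable on the non-linear, regularized objective the abstract describes.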
Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.
Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M
2013-01-01
Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness means that there are non-unique solutions to this problem, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials be sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed with this method are compared to those from existing methods, and validated against actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when sparsity of the reconstructed signals is pursued in the wavelet domain.
Validating an artificial intelligence human proximity operations system with test cases
NASA Astrophysics Data System (ADS)
Huber, Justin; Straub, Jeremy
2013-05-01
An artificial intelligence-controlled robot (AICR) operating in close proximity to humans poses a risk to those humans. Validating the performance of an AICR is an ill-posed problem, due to the complexity introduced by the erratic (non-computer) actors. In order to prove the AICR's usefulness, test cases must be generated to simulate the actions of these actors. This paper discusses AICR performance validation in the context of a common human activity, moving through a crowded corridor, using test cases created by an AI use-case producer. This test is a two-dimensional simplification relevant to autonomous UAV navigation in the national airspace.
Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution
2015-06-08
Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. […] sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of the iteration, which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero […]
Microwave imaging by three-dimensional Born linearization of electromagnetic scattering
NASA Astrophysics Data System (ADS)
Caorsi, S.; Gragnani, G. L.; Pastorino, M.
1990-11-01
An approach to microwave imaging is proposed that uses a three-dimensional vectorial form of the Born approximation to linearize the equation of electromagnetic scattering. The inverse scattering problem is numerically solved for three-dimensional geometries by means of the moment method. A pseudoinversion algorithm is adopted to overcome ill conditioning. Results show that the method is well suited for qualitative imaging purposes, while its capability for exactly reconstructing the complex dielectric permittivity is affected by the limitations inherent in the Born approximation and in ill conditioning.
NASA Technical Reports Server (NTRS)
Lee, Y. M.
1971-01-01
Using a linearized theory of a thermally and mechanically interacting mixture of a linear elastic solid and a viscous fluid, we derive a fundamental relation in integral form called a reciprocity relation. This reciprocity relation relates the solution of one initial-boundary value problem with a given set of initial and boundary data to the solution of a second initial-boundary value problem corresponding to different initial and boundary data for the same interacting mixture. From this general integral relation, reciprocity relations are derived for a heat-conducting linear elastic solid and for a heat-conducting viscous fluid. An initial-boundary value problem is posed and solved for the mixture of linear elastic solid and viscous fluid. With the aid of the Laplace transform and contour integration, a real integral representation for the displacement of the solid constituent is obtained as one of the principal results of the analysis.
Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation
Barbero, Sergio; Thibos, Larry N.
2007-01-01
Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. In practice, however, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is demonstrated that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y (including random noise) and a given model h. The problem is well posed if the following three conditions hold: 1) a solution exists, 2) the solution is unique, and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. Since the inverse problem is often ill-posed, a regularization method is required to replace the original problem with a well-posed one; a solution strategy then amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) for estimating model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated, with narrow confidence intervals, whereas those related to slow processes were poorly estimated, with very large uncertainties.
While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices, we study the nature of the inverse problems (solution existence, uniqueness, and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
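The resolution-matrix diagnostic mentioned above can be illustrated for a generic Tikhonov-regularized linear problem: R = (AᵀA + λI)⁻¹AᵀA, whose diagonal entries approach 1 for well-resolved parameters. The operator below is a random stand-in, not the DALEC adjoint model.

```python
import numpy as np

# Model-resolution matrix R = (A^T A + lam*I)^{-1} A^T A for Tikhonov
# regularization; diag(R) near 1 marks well-resolved parameters.
# A is a generic stand-in operator (ASSUMPTION).
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 12))

def resolution(A, lam):
    """Resolution matrix mapping true parameters to estimated ones."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ A)

R_weak = resolution(A, 1e-8)    # almost no regularization: R ~ identity
R_strong = resolution(A, 10.0)  # heavy regularization blurs resolution
```

The trace of R (the number of effectively resolved degrees of freedom) shrinks as λ grows, mirroring the fast-process/slow-process contrast described in the abstract.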
A general approach to regularizing inverse problems with regional data using Slepian wavelets
NASA Astrophysics Data System (ADS)
Michel, Volker; Simons, Frederik J.
2017-12-01
Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth’s surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated to an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable also to those inverse problems where the data are regionally given only. In particular, wavelet-based multiscale techniques can be used. An example for the latter case is elaborated theoretically and tested on two synthetic numerical examples.
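Truncation-based regularization of an svd, which the Slepian construction makes available for regional data, can be sketched in its simplest (global, plain-SVD) form on a standard ill-conditioned test matrix; the Hilbert matrix and noise level are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import hilbert

# Truncated-SVD regularization: keep only the k largest singular values
# when inverting an ill-conditioned operator. The Hilbert matrix is a
# standard ill-posed test problem (ASSUMPTION; not the paper's operator).
n, k = 12, 6
A = hilbert(n)
x_true = np.ones(n)
rng = np.random.default_rng(3)
b = A @ x_true + 1e-8 * rng.standard_normal(n)       # slightly noisy data

U, s, Vt = np.linalg.svd(A)
x_full = Vt.T @ ((U.T @ b) / s)                      # naive inversion: blows up
x_tsvd = Vt.T[:, :k] @ ((U[:, :k].T @ b) / s[:k])    # truncated, stable
```

The truncation index k plays the role of the regularization parameter; the paper's contribution is making an svd of this kind available when the data live only on a subregion.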
Treatment of Nuclear Data Covariance Information in Sample Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
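One standard way to generate correlated samples from a nearly singular covariance (not necessarily the report's method) is to eigendecompose, floor the spurious negative eigenvalues at zero, and sample through the resulting factor:

```python
import numpy as np

# Correlated sampling from an ill-conditioned covariance via eigenvalue
# clipping. The covariance is synthetic (ASSUMPTION): low-rank plus
# round-off-scale perturbation, mimicking cross-section covariance data.
rng = np.random.default_rng(4)
B = rng.standard_normal((8, 3))
cov = B @ B.T + 1e-14 * rng.standard_normal((8, 8))  # rank-deficient, noisy
cov = 0.5 * (cov + cov.T)                            # symmetrize

w, V = np.linalg.eigh(cov)
w_clipped = np.clip(w, 0.0, None)                    # drop negative modes
L = V * np.sqrt(w_clipped)                           # factor: cov ~ L @ L.T

samples = L @ rng.standard_normal((8, 20000))        # 20000 correlated draws
```

A plain Cholesky factorization would fail here because round-off makes the matrix indefinite; clipping restores positive semi-definiteness before sampling.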
NASA Astrophysics Data System (ADS)
Dzuba, Sergei A.
2016-08-01
The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow the determination of their distance distribution function, P(r). P(r) is derived as the solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, at which numerical integration still provides agreement with experiment. This upper limit is found to lie well above the lower limit at which solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of fulfilling the non-negativity constraint on P(r). For the case of overlapping broad and narrow distributions, the regularization by increasing the distance discretization length may be applied selectively, with this length differing between distance ranges. The approach is checked on model distance distributions and on experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
Primal Barrier Methods for Linear Programming
1989-06-01
A Theoretical Bound: concerning the difficulties introduced by an ill-conditioned H⁻¹, Dikin [Dik67] and Stewart [Stew87] show for a full-rank A … [Dik67] I. I. Dikin (1967). Iterative solution of problems of linear and quadratic programming, Doklady Akademii Nauk SSSR, Tom 174, No. 4. [Fia79] A. V. …
Munir, Fehmidah; Yarker, Joanna; Haslam, Cheryl
2008-01-01
To investigate organizational perspectives on the effectiveness of their attendance management policies for chronically ill employees. A mixed-method approach was employed, involving a questionnaire survey with employees and in-depth interviews with key stakeholders of the organizational policies. Participants reported that attendance management policies, and the point at which their systems were triggered, posed problems for employees managing chronic illness. These systems presented a risk to health: employees were more likely to turn up for work despite feeling unwell (presenteeism) to avoid a disciplinary situation, but absence-related support was only provided once illness progressed to long-term sick leave. Attendance management policies also raised ethical concerns about 'forced' illness disclosure and placed immense pressure on line managers to manage attendance. Participants felt their current attendance management policies were unfavourable toward those managing a chronic illness. The policies focused heavily on attendance despite illness and on providing return-to-work support following long-term sick leave. Drawing on the results, the authors conclude that attendance management should promote job retention rather than merely prevent absence per se. They outline areas for improvement in the attendance management of employees with chronic illness.
Neutrino tomography - Tevatron mapping versus the neutrino sky. [for X-rays of earth interior
NASA Technical Reports Server (NTRS)
Wilson, T. L.
1984-01-01
The feasibility of neutrino tomography of the earth's interior is discussed, taking the 80-GeV W-boson mass determined by Arnison (1983) and Banner (1983) into account. The opacity of earth zones is calculated on the basis of the preliminary reference earth model of Dziewonski and Anderson (1981), and the results are presented in tables and graphs. Proposed tomography schemes are evaluated in terms of the well-posedness of the inverse-Radon-transform problems involved, the neutrino generators and detectors required, and practical and economic factors. The ill-posed schemes are shown to be infeasible; the well-posed schemes (using Tevatrons or the neutrino sky as sources) are considered feasible but impractical.
Finell, Eerika; Seppälä, Tuija; Suoninen, Eero
2018-07-01
Suffering from a contested illness poses a serious threat to one's identity. We analyzed the rhetorical identity management strategies respondents used when depicting their health problems and lives in the context of observed or suspected indoor air (IA) problems in the workplace. The data consisted of essays collected by the Finnish Literature Society. We used discourse-oriented methods to interpret a variety of language uses in the construction of identity strategies. Six strategies were identified: respondents described themselves as normal and good citizens with strong characters, and as IA sufferers who received acknowledgement from others, offered positive meanings to their in-group, and demanded recognition. These identity strategies were located on two continua: (a) individual- and collective-level strategies and (b) dissolved and emphasized (sub)category boundaries. The practical conclusion is that professionals should be aware of these complex coping strategies when aiming to interact effectively with people suffering from contested illnesses.
The use of the Kalman filter in the automated segmentation of EIT lung images.
Zifan, A; Liatsis, P; Chapman, B E
2013-06-01
In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering the impedance itself constitutes a nonlinear ill-posed inverse problem, so the problem is usually linearized, which produces impedance-change images rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide mathematical reasoning for the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we augment the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours along which the Kalman filter tracks the conductivity changes as the lungs deform over a respiratory cycle. The proposed method has been validated using performance statistics such as misclassified area and false positive rate, and compared with previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
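The tracking step can be illustrated with a minimal linear Kalman filter. The sketch below tracks a single boundary coordinate with a constant-velocity state model; the state model and noise levels (q, r) are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Track a scalar boundary coordinate with a constant-velocity
    Kalman filter. Returns the filtered position estimates."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the new measurement z
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(float(x[0, 0]))
    return out
```

Feeding in noisy samples of a moving contour point yields a smoothed trajectory; in a pipeline like the one described, one such filter would run per tracked contour feature.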
History of Physical Terms: "Pressure"
ERIC Educational Resources Information Center
Frontali, Clara
2013-01-01
Scientific terms drawn from common language are often charged with suggestions that may even be inconsistent with their restricted scientific meaning, thus posing didactic problems. The (non-linear) historical journey of the word "pressure" is illustrated here through original quotations from Stevinus, Torricelli, Pascal, Boyle,…
Estimation of the parameters of disturbances on long-range radio-communication paths
NASA Astrophysics Data System (ADS)
Gerasimov, Iu. S.; Gordeev, V. A.; Kristal, V. S.
1982-09-01
Radio propagation on long-range paths is disturbed by such phenomena as ionospheric density fluctuations, meteor trails, and the Faraday effect. In the present paper, the determination of the characteristics of such disturbances on the basis of received-signal parameters is considered as an inverse and ill-posed problem. A method for investigating the indeterminacy which arises in such determinations is proposed, and a quantitative analysis of this indeterminacy is made.
Spotted star mapping by light curve inversion: Tests and application to HD 12545
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2013-06-01
A code for mapping the surfaces of spotted stars is developed. The code works by analyzing rotationally modulated light curves. We simulate the reconstruction process for the stellar surface and present the results of the simulation. The reconstruction artifacts caused by the ill-posed nature of the problem are identified. The surface of the spotted component of the system HD 12545 is mapped using this procedure.
Dai, W W; Marsili, P M; Martinez, E; Morucci, J P
1994-05-01
This paper presents a new version of the layer stripping algorithm, in the sense that it works essentially by repeatedly stripping away the outermost layer of the medium after having determined the conductivity value in this layer. In order to stabilize the ill-posed boundary value problem related to each layer, we base our algorithm on the Hilbert uniqueness method (HUM) and implement it with the boundary element method (BEM).
Hollaus, Karl; Rosell-Ferrer, Javier; Merwa, Robert
2006-01-01
Magnetic induction tomography (MIT) is a low-resolution imaging modality for reconstructing the changes of the complex conductivity in an object. MIT is based on determining the perturbation of an alternating magnetic field, which is coupled from several excitation coils to the object. The conductivity distribution is reconstructed from the corresponding voltage changes induced in several receiver coils. Potential medical applications comprise the continuous, non-invasive monitoring of tissue alterations which are reflected in the change of the conductivity, e.g. edema, ventilation disorders, wound healing and ischemic processes. MIT requires the solution of an ill-posed inverse eddy current problem. A linearized version of this problem was solved for 16 excitation coils and 32 receiver coils with a model of two spherical perturbations within a cylindrical phantom. The method was tested with simulated measurement data. Images were reconstructed with a regularized single-step Gauss–Newton approach. Theoretical limits for spatial resolution and contrast/noise ratio were calculated and compared with the empirical results from a Monte-Carlo study. The conductivity perturbations inside a homogeneous cylinder were localized for a SNR between 44 and 64 dB. The results prove the feasibility of difference imaging with MIT and give some quantitative data on the limitations of the method. PMID:17031597
Obstructions to Existence in Fast-Diffusion Equations
NASA Astrophysics Data System (ADS)
Rodriguez, Ana; Vazquez, Juan L.
The study of nonlinear diffusion equations produces a number of peculiar phenomena not present in the standard linear theory. Thus, in the sub-field of very fast diffusion it is known that the Cauchy problem can be ill-posed, either because of non-uniqueness, or because of non-existence of solutions with small data. The equations we consider take the general form u_t = (D(u, u_x) u_x)_x, or its several-dimensional analogue. Fast diffusion means that D → ∞ at some values of the arguments, typically as u → 0 or u_x → 0. Here, we describe two different types of non-existence phenomena. Some fast-diffusion equations with very singular D do not allow for solutions with sign changes, while other equations admit only monotone solutions, no oscillations being allowed. The examples we give for both types of anomaly are closely related. The most typical examples are v_t = (v_x/|v|)_x and u_t = u_xx/|u_x|. For these equations, we investigate what happens to the Cauchy problem when we take incompatible initial data and perform a standard regularization. It is shown that the limit gives rise to an initial layer during which the data become admissible (positive or monotone, respectively), followed by a standard evolution for all t > 0, once the obstruction has been removed.
Stochastic static fault slip inversion from geodetic data with non-negativity and bound constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-07-01
Although surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints, in terms of smoothing and/or damping, so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems provides a rigorous framework in which the a priori information about the sought parameters is combined with the observations to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian function, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modelling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a truncated multivariate normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulae for the single, 2-D or n-D marginal pdfs. The semi-analytical formula involves the product of a Gaussian with an integral term that can be evaluated using recent developments in TMVN probability calculations. Posterior mean and covariance can also be efficiently derived. I show that the maximum a posteriori (MAP) solution can be obtained using a non-negative least-squares algorithm in the single-truncated case, or using the bounded-variable least-squares algorithm in the double-truncated case. I show that the case of independent uniform priors can be approximated using TMVN.
The numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and for a real case of interseismic modelling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach using MCMC sampling. First, the computational cost is largely reduced. Second, unlike the MCMC-based Bayesian approach, marginal pdfs, means, variances and covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the MAP is extremely fast.
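As the abstract notes, the MAP solution in the single-truncation case reduces to a non-negative least-squares problem. A minimal sketch of that reduction, assuming a zero-mean Gaussian prior with standard deviation sigma_m and Gaussian data errors sigma_d; the toy design matrix G and data d used below are illustrative, not a real fault geometry.

```python
import numpy as np
from scipy.optimize import nnls

def map_slip(G, d, sigma_d=1.0, sigma_m=1.0):
    """MAP slip under a zero-mean truncated-Gaussian prior (slip >= 0).
    Stacking the weighted data misfit with the prior (damping) rows turns
    the truncated-Gaussian MAP into an ordinary NNLS problem."""
    n = G.shape[1]
    A = np.vstack([G / sigma_d, np.eye(n) / sigma_m])  # data rows + prior rows
    b = np.concatenate([d / sigma_d, np.zeros(n)])
    m, _residual = nnls(A, b)
    return m
```

With a weak prior (large sigma_m) the estimate approaches the ordinary least-squares solution while still respecting the positivity constraint.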
Ill-posedness in modeling mixed sediment river morphodynamics
NASA Astrophysics Data System (ADS)
Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid
2018-04-01
In this paper we analyze the Hirano active layer model used in mixed-sediment river morphodynamics with respect to its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause numerical simulations to fail. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than found in previous analyses, comprising not only cases of bed degradation into a substrate finer than the active layer but also aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of active layer sediment and substrate sediment acts as a regularization mechanism. Finally, we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment, for which we show that ill-posedness occurs over a wider range of conditions than for the active layer model.
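For quasi-linear models of the form q_t + A(q) q_x = 0, the kind of ill-posedness analyzed here shows up as complex eigenvalues of A: short-wave perturbations then grow without bound. A generic sketch of that eigenstructure test (the matrices below are arbitrary illustrations, not the Hirano model coefficients):

```python
import numpy as np

def is_well_posed(A, tol=1e-12):
    """A first-order quasi-linear system q_t + A q_x = 0 is well-posed
    (hyperbolic) iff all eigenvalues of A are real; complex eigenvalues
    mean short-wave perturbations grow without bound."""
    return bool(np.all(np.abs(np.linalg.eigvals(A).imag) < tol))

# hyperbolic example: eigenvalues +1 and -1 (real)
assert is_well_posed(np.array([[0.0, 1.0], [1.0, 0.0]]))
# elliptic (ill-posed) example: eigenvalues +i and -i
assert not is_well_posed(np.array([[0.0, 1.0], [-1.0, 0.0]]))
```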
Inverse solutions for electrical impedance tomography based on conjugate gradients methods
NASA Astrophysics Data System (ADS)
Wang, M.
2002-01-01
A multistep inverse solution for the two-dimensional electric field distribution is developed to deal with the nonlinear dependence of the electric field distribution on its boundary condition, and with the divergence caused by errors introduced by the ill-conditioned sensitivity matrix and by the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method, in which the change in mutual impedance is derived from the sensitivity theorem, and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and by the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
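Capping the number of conjugate-gradient iterations regularizes an ill-conditioned system because the early iterates are dominated by the well-resolved (large-singular-value) components. A minimal numpy sketch of CG with an iteration limit, on a toy symmetric positive-definite system rather than an EIT sensitivity matrix:

```python
import numpy as np

def cg(A, b, max_iter=10, tol=1e-10):
    """Conjugate gradients for a symmetric positive-definite A.
    Limiting max_iter acts as regularization on ill-conditioned systems."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

On a well-conditioned system CG converges to the exact solution; on a noisy ill-conditioned one, stopping early suppresses the noise-amplifying components.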
Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source
NASA Astrophysics Data System (ADS)
Mason, Jonathan H.; Perelli, Alessandro; Nailon, William H.; Davies, Mike E.
2017-11-01
Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single-source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which will either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data-centric approach that fits the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows higher accuracy in attenuation modelling, demonstrate its superior quantitative imaging with numerical chest and metal implant data, and validate it with real cone-beam CT measurements.
NASA Astrophysics Data System (ADS)
Tian, Yu-Kun; Zhou, Hui; Chen, Han-Ming; Zou, Ya-Ming; Guan, Shou-Jun
2013-12-01
Seismic inversion is a highly ill-posed problem, due to many factors such as the limited seismic frequency bandwidth and inappropriate forward modeling. To obtain a unique solution, smoothing constraints, e.g. Tikhonov regularization, are usually applied. The Tikhonov method can maintain a globally smooth solution, but it blurs structure edges. In this paper we use a Huber-Markov random-field edge-protection method in the procedure of inverting three parameters: P-velocity, S-velocity and density. The method avoids blurring structure edges and resists noise. For each parameter to be inverted, the Huber-Markov random field constructs a neighborhood system, which further acts as the vertical and lateral constraint. We use a quadratic Huber edge penalty function within layers to suppress noise, and a linear one at edges to avoid blurred results. The effectiveness of our method is demonstrated by inverting synthetic data with and without noise. The relationship between the adopted constraints and the inversion results is analyzed as well.
Sim, Juhyun; Kim, Eunmi; Yang, Wonkyung; Woo, Sanghee; In, Sangwhan
2017-05-01
In recent years, the inappropriate use of antipsychotics by young Korean men has become a social problem. As military service exemptions are given for mental illness, some men pose as mental health patients to avoid military service. In order to verify the authenticity of mental illnesses, we developed simultaneous analytical methods for the detection of 15 antipsychotics and 2 of their metabolites in hair using liquid chromatography-tandem mass spectrometry (LC-MS/MS) analysis. The target drugs were modafinil, atomoxetine, aripiprazole, benztropine, buspirone, duloxetine, gabapentin, oxcarbazepine, topiramate, escitalopram, paliperidone, ziprasidone, lamotrigine, clonazepam, levetiracetam, and metabolites of oxcarbazepine and clonazepam. To remove possible contaminants on the hair surface, hair samples were washed twice with methanol and distilled water, and then were extracted with methanol overnight at 38°C. Desipramine-d3 was used as an internal standard. LC-MS/MS analysis was performed on an Agilent 1290 Infinity UHPLC coupled to an AB Sciex Qtrap® 5500 MS/MS. The total chromatographic run time was 14 min. The following validation parameters were evaluated: selectivity, linearity, limit of detection (LOD), limit of quantification (LOQ), precision, accuracy, matrix effect, and recovery. The LOD and LOQ values for all analytes, except modafinil, ranged from 0.2 to 10 pg/mg hair and from 0.2 to 20 pg/mg hair, respectively. Good linearity was achieved for most of the analytes in the range of 20-200 pg/mg hair. The method showed acceptable precision and accuracy, which were less than 15%, as well as satisfactory matrix effects and recoveries. Furthermore, this method was also applied to the analysis of rat hair samples. The study in rats showed that the concentrations of atomoxetine and aripiprazole in pigmented hair were significantly higher than those in non-pigmented hair.
However, no significant difference was observed in the concentration of topiramate between pigmented and non-pigmented hair. This method will be useful in monitoring the inappropriate use of antipsychotics in suspects posing as mental health patients. However, further research is necessary before applying this method to authentic hair samples from mental health patients.
NASA Astrophysics Data System (ADS)
Atzberger, C.
2013-12-01
The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. This contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results compared with the traditional pixel-based inversion. Figure caption: principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-nIR feature space using PROSAIL. [A] Spectral trajectory ('soil trajectory') obtained for one leaf angle (ALA) and one soil brightness (αsoil), when LAI varies between 0 and 10. [B] 'Soil trajectories' for 5 soil brightness values and three leaf angles. [C] Ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point. [D] Object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding (3×3) window. The black dots (plus the rectangle = central pixel) represent the hypothetical positions of nine pixels within a 3×3 (gliding) window. Assuming that over short distances (±1 pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil so that the resulting 'soil trajectory' best fits the nine measured pixels. Figure caption: ground-measured vs. retrieved LAI values for three crops; left: proposed object-based approach; right: pixel-based inversion.
[Multidisciplinary approach in public health research. The example of accidents and safety at work].
Lert, F; Thebaud, A; Dassa, S; Goldberg, M
1982-01-01
This article critically analyses the various scientific approaches taken to industrial accidents, particularly in epidemiology, ergonomics and sociology, by attempting to outline the epistemological limitations of each respective field. An occupational accident is by its very nature not only a physical injury but also an economic, social and legal phenomenon, which, more so than illness, enables us to examine the problems posed by the need for a multidisciplinary approach in public health research.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
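Soft-thresholding is the proximal step associated with the ℓ1 wavelet penalty. The sketch below shows the operator and an ISTA-style iteration for min ½‖Ax−b‖² + λ‖x‖₁; for illustration the orthonormal wavelet transform is taken to be the identity, so the penalty acts on x directly (a simplifying assumption, not the paper's setting):

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam*||x||_1: shrink each entry toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def ista(A, b, lam, step, n_iter=200):
    """Iterative soft-thresholding for min 0.5||Ax-b||^2 + lam*||x||_1.
    step should satisfy step <= 1/||A^T A|| for convergence."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), lam * step)
    return x
```

Larger λ drives more coefficients exactly to zero, which is how the sparsity level is controlled by the thresholding parameter.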
NASA Technical Reports Server (NTRS)
Voorhies, Coerte V.
1993-01-01
The problem of estimating a steady fluid velocity field near the top of Earth's core which induces the secular variation (SV) indicated by models of the observed geomagnetic field is examined in the source-free mantle/frozen-flux core (SFM/FFC) approximation. This inverse problem is non-linear because solutions of the forward problem are deterministically chaotic. The SFM/FFC approximation is inexact, and neither the models nor the observations they represent are either complete or perfect. A method is developed for solving the non-linear inverse motional induction problem posed by the hypothesis of (piecewise, statistically) steady core surface flow and the supposition of a complete initial geomagnetic condition. The method features iterative solution of the weighted, linearized least-squares problem and admits optional biases favoring surficially geostrophic flow and/or spatially simple flow. Two types of weights are advanced: radial field weights for fitting the evolution of the broad-scale portion of the radial field component near Earth's surface implied by the models, and generalized weights for fitting the evolution of the broad-scale portion of the scalar potential specified by the models.
Siemonsma, Petra C; Stuvie, Ilse; Roorda, Leo D; Vollebregt, Joke A; Lankhorst, Gustaaf J; Lettinga, Ant T
2011-04-01
The aim of this study was to identify treatment-specific predictors of the effectiveness of a method of evidence-based treatment: cognitive treatment of illness perceptions. This study focuses on what treatment works for whom, whereas most prognostic studies focusing on chronic non-specific low back pain rehabilitation aim to reduce the heterogeneity of the population of patients who are suitable for rehabilitation treatment in general. Three treatment-specific predictors were studied in patients with chronic non-specific low back pain receiving cognitive treatment of illness perceptions: a rational approach to problem-solving, discussion skills and verbal skills. Hierarchical linear regression analysis was used to assess their predictive value. Short-term changes in physical activity, measured with the Patient-Specific Functioning List, were the outcome measure for cognitive treatment of illness perceptions effect. A total of 156 patients with chronic non-specific low back pain participated in the study. Rational problem-solving was found to be a significant predictor for the change in physical activity. Discussion skills and verbal skills were non-significant. Rational problem-solving explained 3.9% of the total variance. The rational problem-solving scale results are encouraging, because chronic non-specific low back pain problems are complex by nature and can be influenced by a variety of factors. A minimum score of 44 points on the rational problem-solving scale may assist clinicians in selecting the most appropriate candidates for cognitive treatment of illness perceptions.
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases markedly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
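The local low-rank premise can be checked numerically: spectra of pixels drawn from a small neighborhood should span far fewer dimensions than the full scene. A hedged sketch of that diagnostic on synthetic linear-mixing data (the endmember counts and array sizes are arbitrary illustrations):

```python
import numpy as np

def effective_rank(X, energy=0.99):
    """Smallest number of singular values capturing `energy` of ||X||_F^2."""
    s = np.linalg.svd(X, compute_uv=False)
    c = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(c, energy) + 1)

# synthetic patch: 100 pixels x 50 bands, each pixel a mixture of
# only 3 endmember spectra, so the patch spans a 3-D subspace
rng = np.random.default_rng(1)
endmembers = rng.random((8, 50))
abundances = rng.random((100, 3))
pixels = abundances @ endmembers[:3]
assert effective_rank(pixels) <= 3
```

A patch whose effective rank stays below the number of multispectral bands is exactly the situation in which the per-patch sparse regression remains well-posed.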
A globally well-posed finite element algorithm for aerodynamics applications
NASA Technical Reports Server (NTRS)
Iannelli, G. S.; Baker, A. J.
1991-01-01
A finite element CFD algorithm is developed for Euler and Navier-Stokes aerodynamic applications. For the linear basis, the resultant approximation is at least second-order-accurate in time and space for synergistic use of three procedures: (1) a Taylor weak statement, which provides for derivation of companion conservation law systems with embedded dispersion-error control mechanisms; (2) a stiffly stable second-order-accurate implicit Rosenbrock-Runge-Kutta temporal algorithm; and (3) a matrix tensor product factorization that permits efficient numerical linear algebra handling of the terminal large-matrix statement. Thorough analyses are presented regarding well-posed boundary conditions for inviscid and viscous flow specifications. Numerical solutions are generated and compared for critical evaluation of quasi-one- and two-dimensional Euler and Navier-Stokes benchmark test problems.
Tezaur, Irina K.; Tuminaro, Raymond S.; Perego, Mauro; ...
2015-01-01
We examine the scalability of the recently developed Albany/FELIX finite-element based code for the first-order Stokes momentum balance equations for ice flow. We focus our analysis on the performance of two possible preconditioners for the iterative solution of the sparse linear systems that arise from the discretization of the governing equations: (1) a preconditioner based on the incomplete LU (ILU) factorization, and (2) a recently-developed algebraic multigrid (AMG) preconditioner, constructed using the idea of semi-coarsening. A strong scalability study on a realistic, high resolution Greenland ice sheet problem reveals that, for a given number of processor cores, the AMG preconditioner results in faster linear solve times but the ILU preconditioner exhibits better scalability. In addition, a weak scalability study is performed on a realistic, moderate resolution Antarctic ice sheet problem, a substantial fraction of which contains floating ice shelves, making it fundamentally different from the Greenland ice sheet problem. We show that as the problem size increases, the performance of the ILU preconditioner deteriorates whereas the AMG preconditioner maintains scalability. This is because the linear systems are extremely ill-conditioned in the presence of floating ice shelves, and the ill-conditioning has a greater negative effect on the ILU preconditioner than on the AMG preconditioner.
Finding Strong Bridges and Strong Articulation Points in Linear Time
NASA Astrophysics Data System (ADS)
Italiano, Giuseppe F.; Laura, Luigi; Santaroni, Federico
Given a directed graph G, an edge is a strong bridge if its removal increases the number of strongly connected components of G. Similarly, we say that a vertex is a strong articulation point if its removal increases the number of strongly connected components of G. In this paper, we present linear-time algorithms for computing all the strong bridges and all the strong articulation points of directed graphs, solving an open problem posed in [2].
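The algorithms in the paper run in linear time; as a correctness oracle, the definition itself gives a direct O(E·(V+E)) check: an edge is a strong bridge iff deleting it increases the number of strongly connected components. A self-contained sketch with a Kosaraju-style SCC counter (brute force, not the linear-time algorithm):

```python
def scc_count(n, edges):
    """Number of strongly connected components (iterative Kosaraju)."""
    adj = [[] for _ in range(n)]
    radj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        radj[v].append(u)
    # pass 1: record DFS finish order on the forward graph
    order, seen = [], [False] * n
    for s in range(n):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(adj[s]))]
        while stack:
            u, it = stack[-1]
            for v in it:
                if not seen[v]:
                    seen[v] = True
                    stack.append((v, iter(adj[v])))
                    break
            else:
                order.append(u)
                stack.pop()
    # pass 2: DFS the reverse graph in reverse finish order
    comp, seen = 0, [False] * n
    for s in reversed(order):
        if seen[s]:
            continue
        comp += 1
        seen[s] = True
        stack = [s]
        while stack:
            u = stack.pop()
            for v in radj[u]:
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
    return comp

def strong_bridges(n, edges):
    """Edges whose removal increases the SCC count (brute force)."""
    base = scc_count(n, edges)
    return [e for i, e in enumerate(edges)
            if scc_count(n, edges[:i] + edges[i + 1:]) > base]
```

On a directed 3-cycle every edge is a strong bridge; adding a redundant chord makes the bypassed edge non-critical.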
NASA Astrophysics Data System (ADS)
Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba
2018-10-01
This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is severely ill-posed, owing to the unbounded amplification of high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, some new stable estimates are proposed of Hölder and logarithmic type, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
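The amplification mechanism is easiest to see in Fourier space: marching the heat equation backward multiplies mode k by e^{k²t}, so high-frequency noise explodes unless it is suppressed. The sketch below uses a hard spectral cutoff as a simple stand-in for the Meyer-wavelet truncation (constant diffusivity on a periodic domain, unlike the paper's time-dependent strip problem; the cutoff wavenumber plays the role of the regularization parameter):

```python
import numpy as np

def backward_heat_cutoff(u_T, t, L=2 * np.pi, k_max=8):
    """Recover u(x,0) from u(x,T) for u_t = u_xx on a periodic domain by
    inverting the factor e^{-k^2 t} in Fourier space, while zeroing the
    violently amplified modes with |k| > k_max."""
    n = len(u_T)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # integer wavenumbers
    amp = np.where(np.abs(k) <= k_max, np.exp(k**2 * t), 0.0)
    return np.real(np.fft.ifft(np.fft.fft(u_T) * amp))

# smooth data diffused forward, then recovered by the regularized inverse
n, t = 128, 0.05
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
u0 = np.sin(x) + 0.5 * np.cos(3 * x)
k = 2 * np.pi * np.fft.fftfreq(n, d=2 * np.pi / n)
uT = np.real(np.fft.ifft(np.fft.fft(u0) * np.exp(-k**2 * t)))
assert np.max(np.abs(backward_heat_cutoff(uT, t) - u0)) < 1e-8
```

With noisy data the cutoff trades a small truncation bias against the e^{k²t} noise blow-up, which is the trade-off any backward-heat regularization must make.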
[Ethical questions related to nutrition and hydration: basic aspects].
Collazo Chao, E; Girela, E
2011-01-01
Conditions that pose ethical problems related to nutrition and hydration are very common nowadays, particularly within hospitals among terminally ill patients and other patients who require nutrition and hydration. In this article we intend to analyze some circumstances, according to widely accepted ethical values, in order to outline a clear action model to help clinicians in making such difficult decisions. The problematic situations analyzed include: whether hydration and nutrition should be considered basic care or therapeutic measures, and the ethical aspects of enteral versus parenteral nutrition.
Evaluation of global equal-area mass grid solutions from GRACE
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron
2015-04-01
The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, cryosphere studies, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.
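Tikhonov regularization replaces the unstable least-squares inverse with x = (AᵀA + α²I)⁻¹Aᵀb. A toy numpy sketch (illustrative matrix and noise, not GRACE data) showing how the damping stabilizes an ill-conditioned inversion:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Damped least squares: minimizes ||A x - b||^2 + alpha^2 ||x||^2,
    i.e. x = (A^T A + alpha^2 I)^{-1} A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# ill-conditioned toy system: the tiny singular values amplify the noise
U, _, Vt = np.linalg.svd(np.random.default_rng(2).random((20, 5)))
A = U[:, :5] @ np.diag([1.0, 0.5, 0.1, 1e-5, 1e-8]) @ Vt
x_true = np.ones(5)
b = A @ x_true + 1e-3 * np.random.default_rng(3).standard_normal(20)
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]
x_reg = tikhonov(A, b, alpha=1e-2)
assert np.linalg.norm(x_reg - x_true) < np.linalg.norm(x_naive - x_true)
```

The damping parameter α sets the same bias-variance trade-off that the CSR processing tunes when stabilizing the mass grid inversion.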
A kinetic study of jack-bean urease denaturation by a new dithiocarbamate bismuth compound
NASA Astrophysics Data System (ADS)
Menezes, D. C.; Borges, E.; Torres, M. F.; Braga, J. P.
2012-10-01
A kinetic study concerning enzymatic inhibitory effect of a new bismuth dithiocarbamate complex on jack-bean urease is reported. A neural network approach is used to solve the ill-posed inverse problem arising from numerical treatment of the subject. A reaction mechanism for the urease denaturation process is proposed and the rate constants, relaxation time constants, equilibrium constants, activation Gibbs free energies for each reaction step and Gibbs free energies for the transition species are determined.
2011-12-15
the measured porosity values can be taken as equivalent to effective porosity values for this aquifer with the risk of only very limited overestimation...information to constrain/control an increasingly ill-posed problem, and (3) risk estimation of a model with more heterogeneity than is needed to explain...coarse fluvial deposits: Boise Hydrogeophysical Research Site, Geological Society of America Bulletin, 116(9–10), 1059–1073. Barrash, W., T. Clemo
Robust head pose estimation via supervised manifold learning.
Wang, Chao; Song, Xubo
2014-05-01
Head poses can be automatically estimated using manifold learning algorithms, under the assumption that, with pose being the only variable, face images should lie on a smooth, low-dimensional manifold. However, this estimation approach is challenging due to other appearance variations related to identity, head location in the image, background clutter, facial expression, and illumination. To address the problem, we propose to incorporate supervised information (pose angles of training samples) into the process of manifold learning. The process has three stages: neighborhood construction, graph weight computation and projection learning. For the first two stages, we redefine the inter-point distance for neighborhood construction, as well as the graph weight, by constraining them with the pose angle information. For Stage 3, we present a supervised neighborhood-based linear feature transformation algorithm to keep data points with similar pose angles close together and data points with dissimilar pose angles far apart. The experimental results show that our method has higher estimation accuracy than other state-of-the-art algorithms and is robust to identity and illumination variations. Copyright © 2014 Elsevier Ltd. All rights reserved.
Iterative updating of model error for Bayesian inversion
NASA Astrophysics Data System (ADS)
Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew
2018-02-01
In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
Stuart, Heather
2004-01-01
This paper addresses what is known about workplace stigma and employment inequity for people with mental and emotional problems. For people with serious mental disorders, studies show profound consequences of stigma, including diminished employability, lack of career advancement and poor quality of working life. People with serious mental illnesses are more likely to be unemployed or to be under-employed in inferior positions that are incommensurate with their skills or training. If they return to work following an illness, they often face hostility and reduced responsibilities. The result may be self-stigma and increased disability. Little is yet known about how workplace stigma affects those with less disabling psychological or emotional problems, even though these are likely to be more prevalent in workplace settings. Despite the heavy burden posed by poor mental health in the workplace, there is no regular source of population data relating to workplace stigma, and no evidence base to support the development of best-practice solutions for workplace anti-stigma programs. Suggestions for research are made in light of these gaps.
Silver, Eric; Wolff, Nancy
2010-01-01
The problems posed by persons with mental illness involved with the criminal justice system are vexing ones that have received attention at the local, state and national levels. The conceptual model currently guiding research and social action around these problems is shaped by the “criminalization” perspective and the associated belief that reconnecting individuals with mental health services will by itself reduce risk for arrest. This paper argues that such efforts are necessary but possibly not sufficient to achieve that reduction. Arguing for the need to develop a services research framework that identifies a broader range of risk factors for arrest, we describe three potentially useful criminological frameworks—the “life course,” “local life circumstances” and “routine activities” perspectives. Their utility as platforms for research in a population of persons with mental illness is discussed and suggestions are provided with regard to how services research guided by these perspectives might inform the development of community-based services aimed at reducing risk of arrest. PMID:16791518
Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.
Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang
2016-08-01
We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, estimations of body shape parameters and pose parameters, needed for reconstruction, are decoupled. Intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blending skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
Accommodation of practical constraints by a linear programming jet select. [for Space Shuttle
NASA Technical Reports Server (NTRS)
Bergmann, E.; Weiler, P.
1983-01-01
An experimental spacecraft control system will be incorporated into the Space Shuttle flight software and exercised during a forthcoming mission to evaluate its performance and handling qualities. The control system incorporates a 'phase space' control law to generate rate change requests and a linear programming jet select to compute jet firings. Posed as a linear programming problem, jet selection must represent the rate change request as a linear combination of jet acceleration vectors, where the coefficients are the jet firing times, while minimizing the fuel expended in satisfying that request. This problem is solved in real time using a revised Simplex algorithm. In order to implement the jet selection algorithm in the Shuttle flight control computer, it was modified to accommodate certain practical features of the Shuttle such as limited computer throughput, lengthy firing times, and a large number of control jets. To the authors' knowledge, this is the first such application of linear programming. It was made possible by careful consideration of the jet selection problem in terms of the properties of linear programming and the Simplex algorithm. These modifications to the jet select algorithm may be useful for the design of reaction controlled spacecraft.
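The core formulation can be sketched as a small linear program: represent the requested rate change as M·t with nonnegative firing times t, minimizing fuel. The toy solver below simply enumerates basic feasible solutions (the vertices a Simplex method would pivot through); the jet geometry and fuel rates are made-up numbers, not Shuttle data:

```python
import numpy as np
from itertools import combinations

def jet_select(M, fuel_rate, dv):
    """Min-fuel firing times t >= 0 with M @ t == dv, found by checking
    every basic feasible solution of the equality-constrained LP
    (practical only for a handful of jets; a revised Simplex method
    visits these same vertices selectively)."""
    m, n = M.shape
    best_t, best_cost = None, np.inf
    for basis in combinations(range(n), m):
        B = M[:, basis]
        if abs(np.linalg.det(B)) < 1e-12:
            continue                      # singular basis, skip
        tb = np.linalg.solve(B, dv)
        if np.all(tb >= -1e-12):          # feasible vertex
            t = np.zeros(n)
            t[list(basis)] = np.clip(tb, 0.0, None)
            cost = float(fuel_rate @ t)
            if cost < best_cost:
                best_t, best_cost = t, cost
    return best_t, best_cost
```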
Experimental and Theoretical Results in Output Trajectory Redesign for Flexible Structures
NASA Technical Reports Server (NTRS)
Dewey, J. S.; Leang, K.; Devasia, S.
1998-01-01
In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures because the input-state trajectories that achieve tracking of the required output may cause excessive vibrations in the structure. We pose and solve this problem, in the context of linear systems, as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.
Gilgen, D; Maeusezahl, D; Salis Gross, C; Battegay, E; Flubacher, P; Tanner, M; Weiss, M G; Hatz, C
2005-09-01
Migration, particularly among refugees and asylum seekers, poses many challenges to the health system of host countries. This study examined the impact of migration history on illness experience, its meaning and the help-seeking strategies of migrant patients from Bosnia and Turkey with a range of common health problems in general practice in Basel, Switzerland. The Explanatory Model Interview Catalogue, a data collection instrument for cross-cultural research which combines epidemiological and ethnographic research approaches, was used in semi-structured one-to-one patient interviews. Bosnian patients (n=36), who had more traumatic migration experiences than Turkish/Kurdish (n=62) or Swiss internal migrants (n=48), reported a larger number of health problems than the other groups. Psychological distress was reported most frequently by all three groups in response to focussed queries, but spontaneously reported symptoms indicated the prominence of somatic, rather than psychological or psychosocial, problems. Among Bosnians, 78% identified traumatic migration experiences as a cause of their illness, in addition to a range of psychological and biomedical causes. Help-seeking strategies for the current illness included a wide range of treatments, such as basic medical care at private surgeries, outpatient departments in hospitals, as well as alternative medical treatments, among all groups. The findings provide a useful guide for clinicians who work with migrants and should inform policy on medical care, information and health promotion for migrants in Switzerland, as well as further education of health professionals on issues concerning migrants' health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalashnikova, Irina
2012-05-01
A numerical study is performed to evaluate different preconditioners within the Trilinos Ifpack and ML packages for the Quantum Computer Aided Design (QCAD) non-linear Poisson problem, implemented within the Albany code base and posed on the Ottawa Flat 270 design geometry. This study led to new development in Albany that allows the user to select an ML preconditioner with Zoltan repartitioning based on nodal coordinates, which is summarized. Convergence of the numerical solutions computed within the QCAD computational suite under successive mesh refinement is examined in two metrics: the mean value of the solution (an L^1 norm) and the field integral of the solution (an L^2 norm).
An ill-posed parabolic evolution system for dispersive deoxygenation-reaeration in water
NASA Astrophysics Data System (ADS)
Azaïez, M.; Ben Belgacem, F.; Hecht, F.; Le Bot, C.
2014-01-01
We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most physically relevant models used by engineers derive from various additions and corrections to enhance the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor’s dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. The particularity is that both Neumann and Dirichlet boundary conditions are available on the DO tracer while the BOD density is free of any conditions. In fact, for real-life concerns, measurements on the DO are easy to obtain and to save. On the contrary, collecting data on the BOD is a sensitive task and turns out to be a lengthy process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem plainly worth studying for its own interest but it could also be a mandatory step in other applications such as the identification of the location of pollution sources. The non-standard boundary conditions generate two difficulties in mathematical and computational grounds. They set up a severe coupling between both equations and they are the cause of the ill-posed data reconstruction problem. Existence and stability fail. Identifiability is therefore the only positive result one can search for; it is the central purpose of the paper. Finally, we have performed some computational experiments to assess the capability of the mixed finite element in missing data recovery.
Locally linear regression for pose-invariant face recognition.
Chai, Xiujuan; Shan, Shiguang; Chen, Xilin; Gao, Wen
2007-07-01
The variation of facial appearance due to viewpoint (pose) degrades face recognition systems considerably, and is one of the bottlenecks in face recognition. One possible solution is to generate a virtual frontal view from any given nonfrontal view to obtain a virtual gallery/probe face. Following this idea, this paper proposes a simple but efficient novel locally linear regression (LLR) method, which generates the virtual frontal view from a given nonfrontal face image. We first justify the basic assumption of the paper that there exists an approximate linear mapping between a nonfrontal face image and its frontal counterpart. Then, by formulating the estimation of the linear mapping as a prediction problem, we present the regression-based solution, i.e., globally linear regression. To improve the prediction accuracy in the case of coarse alignment, LLR is further proposed. In LLR, we first perform dense sampling in the nonfrontal face image to obtain many overlapping local patches. Then, the linear regression technique is applied to each small patch for the prediction of its virtual frontal patch. Through the combination of all these patches, the virtual frontal view is generated. The experimental results on the CMU PIE database show a distinct advantage of the proposed method over the Eigen light-field method.
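The per-patch step reduces to ordinary least squares: learn a matrix W mapping vectorized nonfrontal patches to their frontal counterparts. A minimal sketch (a small ridge term is added for numerical stability; this is not the authors' exact estimator):

```python
import numpy as np

def learn_linear_map(X, Y, ridge=1e-8):
    """Fit W minimizing ||W X - Y||_F^2 + ridge ||W||_F^2, where each
    column of X is a vectorized nonfrontal patch and the matching
    column of Y is its frontal counterpart."""
    d = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + ridge * np.eye(d))
```

At test time the virtual frontal patch is simply `W @ x`; overlapping patch predictions are then blended to form the full virtual frontal view.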
Local search heuristic for the discrete leader-follower problem with multiple follower objectives
NASA Astrophysics Data System (ADS)
Kochetov, Yury; Alekseeva, Ekaterina; Mezmaz, Mohand
2016-10-01
We study a discrete bilevel problem, also called the leader-follower problem, with multiple objectives at the lower level. It is assumed that constraints at the upper level may include variables of both levels. For this ill-posed problem we define feasible and optimal solutions for the pessimistic case. The central point of this work is a two-stage method to obtain a feasible solution in the pessimistic case, given a leader decision. The target of the first stage is a follower solution that violates the leader constraints. The target of the second stage is a pessimistic feasible solution. Each stage calls a heuristic and a solver for a series of particular mixed integer programs. The method is integrated into a local search based heuristic designed to find near-optimal leader solutions.
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
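A plain (unblocked) Kaczmarz sweep illustrates the row-action idea behind such solvers; for ill-posed systems, stopping after a few sweeps acts as regularization. This is a generic sketch, not the paper's regularized block variant:

```python
import numpy as np

def kaczmarz(A, b, sweeps=50, relax=1.0):
    """Row-action Kaczmarz iteration for A x = b: project the current
    iterate onto each row's hyperplane in turn. Early stopping
    (few sweeps) regularizes noisy, ill-posed systems."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(m):
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x
```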
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescence molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm, based on nonlinear conjugate gradient with a restart strategy, is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and a high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
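The paper's restarted nonlinear conjugate gradient solver is specialized; as a generic illustration of what an L1-regularized reconstruction objective looks like, a basic ISTA (proximal gradient) iteration is sketched below. ISTA is a different, standard L1 solver, named here plainly as a stand-in:

```python
import numpy as np

def ista(A, b, lam, steps=200):
    """Basic ISTA for min 0.5*||A x - b||^2 + lam*||x||_1: a gradient
    step on the data term followed by soft-thresholding, which is the
    proximal operator of the L1 penalty."""
    L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L     # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x
```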
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as forward model. For a high computational efficiency, the gradient of objective function is calculated using an adjoint equation technique. SQP algorithm is employed to solve the inverse problem and the regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posed problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
Stochastic static fault slip inversion from geodetic data with non-negativity and bounds constraints
NASA Astrophysics Data System (ADS)
Nocquet, J.-M.
2018-04-01
Although the surface displacements observed by geodesy are linear combinations of slip on faults in an elastic medium, determining the spatial distribution of fault slip remains an ill-posed inverse problem. A widely used approach to circumvent the ill-posedness of the inversion is to add regularization constraints, in terms of smoothing and/or damping, so that the linear system becomes invertible. However, the choice of regularization parameters is often arbitrary and sometimes leads to significantly different results. Furthermore, the resolution analysis is usually empirical and cannot be made independently of the regularization. The stochastic approach to inverse problems (Tarantola & Valette 1982; Tarantola 2005) provides a rigorous framework in which the a priori information about the searched parameters is combined with the observations in order to derive posterior probabilities of the unknown parameters. Here, I investigate an approach where the prior probability density function (pdf) is a multivariate Gaussian, with single truncation to impose positivity of slip, or double truncation to impose positivity and upper bounds on slip for interseismic modeling. I show that the joint posterior pdf is similar to the linear untruncated Gaussian case and can be expressed as a Truncated Multi-Variate Normal (TMVN) distribution. The TMVN form can then be used to obtain semi-analytical formulas for the single, two-dimensional or n-dimensional marginal pdfs. The semi-analytical formula involves the product of a Gaussian with an integral term that can be evaluated using recent developments in TMVN probability calculations (e.g. Genz & Bretz 2009). The posterior mean and covariance can also be derived efficiently. I show that the Maximum A Posteriori (MAP) estimate can be obtained using a Non-Negative Least-Squares algorithm (Lawson & Hanson 1974) for the singly truncated case, or using the Bounded-Variable Least-Squares algorithm (Stark & Parker 1995) for the doubly truncated case.
I show that the case of independent uniform priors can be approximated using the TMVN. Numerical equivalence to Bayesian inversions using Markov chain Monte Carlo (MCMC) sampling is shown for a synthetic example and for a real case of interseismic modeling in Central Peru. The TMVN method overcomes several limitations of the Bayesian approach based on MCMC sampling. First, the required computing power is greatly reduced. Second, unlike the MCMC-based Bayesian approach, marginal pdfs, means, variances and covariances are obtained independently of one another. Third, the probability and cumulative density functions can be obtained with any density of points. Finally, determining the Maximum A Posteriori (MAP) estimate is extremely fast.
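The MAP computation in the singly truncated case is a non-negative least-squares problem. The classical solver is the Lawson-Hanson active-set NNLS algorithm cited in the abstract; the sketch below uses simple projected gradient instead, which reaches the same minimizer on small problems (a uniform, uncorrelated prior covariance is assumed for simplicity):

```python
import numpy as np

def nnls_pg(G, d, steps=5000):
    """Minimize ||G m - d||^2 subject to m >= 0 by projected gradient:
    a gradient step followed by clipping negative slip to zero. This is
    the MAP slip under a positivity-truncated Gaussian prior."""
    L = np.linalg.norm(G, 2) ** 2        # gradient Lipschitz constant
    m = np.zeros(G.shape[1])
    for _ in range(steps):
        m = np.maximum(m - G.T @ (G @ m - d) / L, 0.0)
    return m
```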
Arat, Seher; Verschueren, Patrick; De Langhe, Ellen; Smith, Vanessa; Vanthuyne, Marie; Diya, Luwis; Van den Heede, Koen; Blockmans, Daniel; De Keyser, Filip; Houssiau, Frédéric A; Westhovens, René
2012-03-01
The aim of the present study was to evaluate the association between illness perceptions and the ability to cope with physical and mental health problems in a large cohort of systemic sclerosis (SSc) patients. This was a cross-sectional study in 217 systemic sclerosis patients from the Belgian Systemic Sclerosis Cohort. Illness perception and coping were measured by the Revised Illness Perception Questionnaire and a coping questionnaire--the Coping Orientation of Problem Experience inventory (COPE). Physical and mental health-related quality of life was measured by the 36-item short-form health survey (SF-36), as were disease activity and several severity parameters. The relationship between illness perceptions and the ability to cope with physical/mental health problems was examined using multiple linear regression analysis. According to LeRoy's classification, 49 patients had limited SSc (lSSc), 129 had limited cutaneous SSc (lcSSc) and 39 had diffuse cutaneous SSc (dcSSc). Median disease duration was five years and the modified Rodnan skin score was 4. Good physical health was significantly associated with the lcSSc subtype and low disease activity (p < 0.01 and p < 0.05, respectively). The perception of 'serious consequences' and strong 'illness identity' correlated with poor physical health (p < 0.001). Good mental health was associated with low illness identity scores and low 'emotional response' scores (p < 0.001). Coping variables were less significantly correlated with physical and mental health compared with the illness perception items. Illness representations contribute more than classical disease characteristics to physical and mental health. Copyright © 2011 John Wiley & Sons, Ltd.
Space structures insulating material's thermophysical and radiation properties estimation
NASA Astrophysics Data System (ADS)
Nenarokomov, A. V.; Alifanov, O. M.; Titov, D. M.
2007-11-01
In many practical situations in aerospace technology it is impossible to measure directly such properties of the analyzed materials (for example, composites) as their thermal and radiation characteristics. Often the only way to overcome this difficulty is indirect measurement, usually formulated as the solution of an inverse heat transfer problem. Such problems are ill-posed in the mathematical sense, their main feature being instability of the solution, which is why special regularizing methods are needed to solve them. Experimental identification of mathematical models of heat transfer by solving inverse problems is one of the effective modern approaches. The objective of this paper is to estimate thermal and radiation properties of advanced materials using an approach based on inverse methods.
Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function
NASA Astrophysics Data System (ADS)
Gao, Fei; Chang, Lei; Liu, Yu-xin
2017-07-01
We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function, as well as the corresponding light-front PDA, can be well determined. We confirm prior work on PDA computations that was based on different methods.
Chopping Time of the FPU α-Model
NASA Astrophysics Data System (ADS)
Carati, A.; Ponno, A.
2018-03-01
We study, both numerically and analytically, the time needed to observe the breaking of an FPU α-chain into two or more pieces, starting from an unbroken configuration at a given temperature. It is found that such a "chopping" time is given by a formula that, at low temperatures, is of the Arrhenius-Kramers form, so that the chain does not break up on an observable time-scale. The result explains why the study of the FPU problem is meaningful also in the ill-posed case of the α-model.
A Toolbox for Imaging Stellar Surfaces
NASA Astrophysics Data System (ADS)
Young, John
2018-04-01
In this talk I will review the available algorithms for synthesis imaging at visible and infrared wavelengths, including both gray and polychromatic methods. I will explain state-of-the-art approaches to constraining the ill-posed image reconstruction problem, and selecting an appropriate regularisation function and strength of regularisation. The reconstruction biases that can follow from non-optimal choices will be discussed, including their potential impact on the physical interpretation of the results. This discussion will be illustrated with example stellar surface imaging results from real VLTI and COAST datasets.
Boisvert, R F; Donahue, M J; Lozier, D W; McMichael, R; Rust, B W
2001-01-01
In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST's current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, characterization of the accuracy of software for micromagnetic modeling, and in the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years.
Computing motion using resistive networks
NASA Technical Reports Server (NTRS)
Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James
1988-01-01
Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems, depending on the a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error-free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms and, as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer-model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as those in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information.
Insufficient a priori information during the inversion is the reason why refraction methods often may not produce the desired results, or even fail. This work also demonstrates that the application of smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems as well. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, and finite data and model parameters. © Birkhäuser Verlag, Basel, 2005.
Alignment of the Stanford Linear Collider Arcs: Concepts and results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pitthan, R.; Bell, B.; Friedsam, H.
1987-02-01
The alignment of the Arcs for the Stanford Linear Collider at SLAC has posed problems in accelerator survey and alignment not encountered before. These problems stem less from the tight tolerances of 0.1 mm, although reaching such a tight statistically defined accuracy in a controlled manner is difficult enough, than from the absence of a common reference plane for the Arcs. Traditional circular accelerators, including HERA and LEP, have been designed in one plane referenced to local gravity. For the SLC Arcs no such single plane exists. Methods and concepts developed to solve these and other problems connected with the unique design of SLC range from the first use of satellites for accelerator alignment, use of electronic laser theodolites for placement of components, computer control of the manual adjustment process, complete automation of the data flow incorporating the most advanced concepts of geodesy, and strict separation of survey and alignment, to linear principal component analysis for the final statistical smoothing of the mechanical components.
ERIC Educational Resources Information Center
Limin, Chen; Van Dooren, Wim; Verschaffel, Lieven
2013-01-01
The goal of the present study is to investigate the relationship between pupils' problem posing and problem solving abilities, their beliefs about problem posing and problem solving, and their general mathematics abilities, in a Chinese context. Five instruments, i.e., a problem posing test, a problem solving test, a problem posing questionnaire,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonsson, Jacob C.; Branden, Henrik
2006-10-19
This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge about the illuminated area of the sample and the geometry of the sphere port, combined with the measured data, leads to a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by applying Tikhonov regularization to the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterised using both two goniophotometers and the described method to verify the validity of the new method. The agreement is very good for the more diffuse samples. The solution for the low-scattering samples contains unphysical oscillations, but still gives the correct shape. The origin of the oscillations and why they are more prominent in low-scattering samples are discussed.
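Tikhonov regularization of an ill-posed linear system, as used above, can be sketched in a few lines. This is a generic illustration only, not the paper's actual BTDF system of equations: the smoothing kernel, the noise level, and the regularization parameter `lam` are assumptions for demonstration.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Discretized smoothing kernel: smooth, hence severely ill-conditioned.
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.01) / n
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # noise-amplified
x_reg = tikhonov_solve(A, b, lam=1e-3)          # damped, stable

# the regularized reconstruction error is far smaller than the naive one
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The parameter `lam` trades data fit against solution norm; in practice it is chosen by criteria such as the L-curve or generalized cross-validation.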
Lassa fever: the challenges of curtailing a deadly disease.
Ibekwe, Titus
2012-01-01
Today Lassa fever is mainly a disease of the developing world; however, several imported cases have been reported in different parts of the world, and there are growing concerns about the potential of Lassa fever virus as a biological weapon. Yet no tangible solution to this problem has been developed nearly half a century after its identification. Hence, this paper is aimed at appraising the problems associated with Lassa fever illness, the challenges in curbing the epidemic, and recommendations on important focal points. A review was based on documents from the EFAS conference 2011 and a literature search on PubMed, Scopus and ScienceDirect. The retrieval of relevant papers was via the University of British Columbia and University of Toronto libraries. The two major search engines returned 61 and 920 articles respectively. Out of these, the final 26 articles that met the criteria were selected. Relevant information on epidemiology, burden of management and control was obtained. Prompt and effective containment of the Lassa fever disease in Lassa village four decades ago could have saved the West African sub-region and indeed the entire globe from the devastating effect and threats posed by this illness. That was a hard lesson calling for much more proactive measures towards the eradication of the illness at primary, secondary and tertiary levels of health care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A; Sandison, G; Schwartz, J
Purpose: Combining serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment requires separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an ill-posed inverse problem described by a Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. The variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data.
Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to the development of more advanced algorithms which take into account tumor heterogeneity, for example, related to hypoxia.
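The core difficulty described here, that data governed by a sum of exponentials lead to a first-kind Fredholm problem, can be illustrated with a toy inversion. This sketch is not the authors' two-level model or their simulated-annealing code; the rate grid, noise level, and Tikhonov weight are illustrative assumptions.

```python
import numpy as np

# Serial data governed by a sum of exponentials: y(t) = sum_j a_j exp(-r_j t).
# Discretizing over a grid of candidate rates gives a first-kind Fredholm system.
t = np.linspace(0.0, 10.0, 60)
rates = np.linspace(0.05, 2.0, 80)
K = np.exp(-np.outer(t, rates))          # exponential kernel, near-singular
a_true = np.zeros(rates.size)
a_true[10], a_true[60] = 1.0, 0.5        # two hidden components
y = K @ a_true + 1e-3 * np.random.default_rng(1).standard_normal(t.size)

# Stabilized (Tikhonov) least squares; without the lam term the solve is useless.
lam = 1e-2
a_hat = np.linalg.solve(K.T @ K + lam * np.eye(rates.size), K.T @ y)

print(np.linalg.cond(K))                 # enormous: the raw problem is ill-posed
```

The huge condition number of `K` is exactly why unstabilized fitting of multi-exponential data produces the non-physical fluctuations described above.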
A Neural Network Aero Design System for Advanced Turbo-Engines
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1999-01-01
An inverse design method calculates the blade shape that produces a prescribed input pressure distribution. By controlling this input pressure distribution the aerodynamic design objectives can easily be met. Because of the intrinsic relationship between pressure distribution and airfoil physical properties, a neural network can be trained to choose the optimal pressure distribution that would meet a set of physical requirements. Neural network systems have been attempted in the context of direct design methods. From properties ascribed to a set of blades, the neural network is trained to infer the properties of an 'interpolated' blade shape. The problem is that, especially in transonic regimes where we deal with intrinsically nonlinear and ill-posed problems, small perturbations of the blade shape can produce very large variations of the flow parameters. It is very unlikely that, under these circumstances, a neural network will be able to find the proper solution. The unique situation in the present method is that the neural network can be trained to extract the required input pressure distribution from a database of pressure distributions while the inverse method will still compute the exact blade shape that corresponds to this 'interpolated' input pressure distribution. In other words, the interpolation process is transferred to a smoother problem, namely, finding what pressure distribution would produce the required flow conditions and, once this is done, the inverse method will compute the exact solution for this problem. The use of a neural network is, in this context, highly related to the use of proper optimization techniques. The optimization is used essentially as an automation procedure to force the input pressure distributions to achieve the required aero and structural design parameters. A multilayered feed-forward network with back-propagation is used to train the system for pattern association and classification.
Pre-Service Teachers' Free and Structured Mathematical Problem Posing
ERIC Educational Resources Information Center
Silber, Steven; Cai, Jinfa
2017-01-01
This exploratory study examined how pre-service teachers (PSTs) pose mathematical problems for free and structured mathematical problem-posing conditions. It was hypothesized that PSTs would pose more complex mathematical problems under structured posing conditions, with increasing levels of complexity, than PSTs would pose under free posing…
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least-squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in greater depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
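In one dimension, singular value decomposition filtering amounts to truncating the singular value expansion of the kernel, which suppresses the noise-dominated components. The generic TSVD sketch below is a stand-in only; it is not the I2DUPEN pipeline, which works with a 2-D tensor-product kernel, windowing, and multi-parameter Tikhonov regularization.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution: keep only the k largest singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# 1-D toy first-kind problem with a smooth (hence near-singular) kernel.
n = 60
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.02) / n
x_true = t * (1.0 - t)
b = A @ x_true + 1e-6 * np.random.default_rng(2).standard_normal(n)

errs = {k: np.linalg.norm(tsvd_solve(A, b, k) - x_true) for k in (10, n)}
# keeping all singular values lets the noise explode; truncation filters it
print(errs)
```

The truncation level plays the same role as a regularization parameter: too small discards signal, too large readmits amplified noise.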
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and finding an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. Building on the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.
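A split Bregman iteration can be sketched on a simplified sparsity-regularized least-squares problem, a stand-in for the paper's robust, low-rank-promoting loss. The matrix sizes and the parameters `mu` and `lam` are assumptions, and the splitting shown is the standard d = x form.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_l1(A, b, mu, lam=1.0, n_iter=500):
    """min_x mu*||x||_1 + 0.5*||A x - b||^2 via the splitting d = x."""
    n = A.shape[1]
    x, d, bb = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + lam * np.eye(n)
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(M, Atb + lam * (d - bb))  # quadratic subproblem
        d = shrink(x + bb, mu / lam)                  # l1 subproblem
        bb += x - d                                   # Bregman update
    return x

# Sparse ground truth, noiseless data.
rng = np.random.default_rng(3)
A = rng.standard_normal((100, 50)) / 10.0
x_true = np.zeros(50)
x_true[[3, 17, 31, 44]] = 1.0
b = A @ x_true
x_hat = split_bregman_l1(A, b, mu=1e-3)
```

Each outer iteration solves only an easy quadratic problem and a closed-form shrinkage, which is why SB-type schemes scale well to imaging problems.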
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
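Without penalties, the minimum I-divergence iteration takes a familiar multiplicative (EM-type) form. The sketch below uses an assumed generic nonnegative kernel rather than the Planck-law kernel, and omits the entropy, L1 and Good's-roughness penalties discussed above.

```python
import numpy as np

def min_idiv(K, g, n_iter=500):
    """Multiplicative iteration decreasing Csiszar's I-divergence between
    the data g and the model K @ f; non-negativity of f is automatic."""
    f = np.ones(K.shape[1])
    colsum = K.sum(axis=0)               # K^T 1
    for _ in range(n_iter):
        f = f * (K.T @ (g / (K @ f))) / colsum
    return f

# Assumed toy kernel (positive and smooth) with exact, noise-free data.
n = 40
s = np.linspace(0.1, 1.0, n)
K = 1.0 / (s[:, None] + s[None, :])
f_true = np.exp(-((s - 0.5) ** 2) / 0.02)
g = K @ f_true
f_hat = min_idiv(K, g)
```

With noisy data this unconstrained iteration eventually amplifies artefacts, which is precisely where the regularization penalties of the paper enter.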
Stiffness optimization of non-linear elastic structures
Wallin, Mathias; Ivarsson, Niklas; Tortorelli, Daniel
2017-11-13
Our paper revisits stiffness optimization of non-linear elastic structures. Due to the non-linearity, several possible stiffness measures can be identified, and in this work conventional compliance, i.e. secant stiffness, designs are compared to tangent stiffness designs. The optimization problem is solved by the method of moving asymptotes, and the sensitivities are calculated using the adjoint method. For the tangent cost function it is shown that, although the objective involves the third derivative of the strain energy, an efficient formulation for calculating the sensitivity can be obtained. Loss of convergence due to large deformations in void regions is addressed by using a fictitious strain energy such that small-strain linear elasticity is approached in the void regions. We formulate a well-posed topology optimization problem by using restriction, which is achieved via a Helmholtz-type filter. The numerical examples provided show that for low load levels the designs obtained from the different stiffness measures coincide, whereas for large deformations significant differences are observed.
The Fisher-KPP problem with doubly nonlinear diffusion
NASA Astrophysics Data System (ADS)
Audrito, Alessandro; Vázquez, Juan Luis
2017-12-01
The famous Fisher-KPP reaction-diffusion model combines linear diffusion with the typical KPP reaction term, and appears in a number of relevant applications in biology and chemistry. It is remarkable as a mathematical model since it possesses a family of travelling waves that describe the asymptotic behaviour of a large class of solutions 0 ≤ u(x, t) ≤ 1 of the problem posed on the real line. The existence of propagation waves with finite speed has been confirmed in some related models and disproved in others. We investigate here the corresponding theory when the linear diffusion is replaced by the "slow" doubly nonlinear diffusion and we find travelling waves that represent the wave propagation of more general solutions even when we extend the study to several space dimensions. A similar study is performed in the critical case that we call "pseudo-linear", i.e., when the operator is still nonlinear but has homogeneity one. With respect to the classical model and the "pseudo-linear" case, the "slow" travelling waves exhibit free boundaries.
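The finite front speed of the classical (linear-diffusion) Fisher-KPP model can be checked numerically with a simple explicit scheme; its fronts are known to approach the speed c* = 2. The grid, time window, and the u = 0.5 front threshold below are assumptions, and the doubly nonlinear case would require more careful numerics near the free boundary.

```python
import numpy as np

# Explicit finite differences for the classical Fisher-KPP equation
#   u_t = u_xx + u (1 - u),
# whose fronts approach the KPP speed c* = 2.
N = 2000
dx = 0.1
dt = 0.2 * dx**2                         # well within the stability limit dx^2/2
x = np.arange(N) * dx                    # domain [0, 200)
u = (x < 10.0).astype(float)             # step initial datum

def step(u):
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]    # crude zero-flux ends
    return u + dt * (lap + u * (1.0 - u))

def front(u):
    return x[np.searchsorted(-u, -0.5)]  # leftmost point with u <= 0.5

t1, t2 = 20.0, 40.0
for _ in range(round(t1 / dt)):
    u = step(u)
p1 = front(u)
for _ in range(round((t2 - t1) / dt)):
    u = step(u)
speed = (front(u) - p1) / (t2 - t1)
print(speed)                             # close to, slightly below, 2
```

The measured speed sits slightly below 2 because of the well-known logarithmic (Bramson) correction to the front position.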
Coupled variational formulations of linear elasticity and the DPG methodology
NASA Astrophysics Data System (ADS)
Fuentes, Federico; Keith, Brendan; Demkowicz, Leszek; Le Tallec, Patrick
2017-11-01
This article presents a general approach akin to domain-decomposition methods to solve a single linear PDE, but where each subdomain of a partitioned domain is associated to a distinct variational formulation coming from a mutually well-posed family of broken variational formulations of the original PDE. It can be exploited to solve challenging problems in a variety of physical scenarios where stability or a particular mode of convergence is desired in a part of the domain. The linear elasticity equations are solved in this work, but the approach can be applied to other equations as well. The broken variational formulations, which are essentially extensions of more standard formulations, are characterized by the presence of mesh-dependent broken test spaces and interface trial variables at the boundaries of the elements of the mesh. This allows necessary information to be naturally transmitted between adjacent subdomains, resulting in coupled variational formulations which are then proved to be globally well-posed. They are solved numerically using the DPG methodology, which is especially crafted to produce stable discretizations of broken formulations. Finally, expected convergence rates are verified in two different and illustrative examples.
Creativity of Field-dependent and Field-independent Students in Posing Mathematical Problems
NASA Astrophysics Data System (ADS)
Azlina, N.; Amin, S. M.; Lukito, A.
2018-01-01
This study aims at describing the creativity of elementary school students with different cognitive styles in mathematical problem-posing. The posed problems were assessed based on three components of creativity, namely fluency, flexibility, and novelty. Free-type problem posing was used in this study. This study is descriptive research with a qualitative approach. Data were collected through a written task and task-based interviews. The subjects were two elementary students, one Field Dependent (FD) and the other Field Independent (FI), as measured by the GEFT (Group Embedded Figures Test). Further, the data were analyzed based on the creativity components. The results show that the FD student's posed problems fulfilled two components of creativity, namely fluency, in which the subject posed at least 3 mathematical problems, and flexibility, in which the subject posed problems with at least 3 different categories/ideas. Meanwhile, the FI student's posed problems fulfilled all three components of creativity, namely fluency, in which the subject posed at least 3 mathematical problems, flexibility, in which the subject posed problems with at least 3 different categories/ideas, and novelty, in which the subject posed problems that are purely the result of her own ideas and different from problems she had known.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bielecki, J.; Scholz, M.; Drozdowicz, K.
A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. The aim of this work is to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction at JET.
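Phillips-Tikhonov regularization differs from standard Tikhonov mainly in penalizing ||Lx|| for a derivative-like operator L, which encodes a smoothness prior on the reconstructed profile. The sketch below is generic, with an assumed random ray-weight matrix; it does not reproduce the KN3 camera geometry or the density-profile-shape prior used in the paper.

```python
import numpy as np

def phillips_tikhonov(A, b, L, lam):
    """Generalized Tikhonov: min ||A x - b||^2 + lam^2 ||L x||^2."""
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

# Toy under-determined 'tomography': 19 lines of sight, 40 unknowns.
rng = np.random.default_rng(4)
n_pix, n_rays = 40, 19
A = rng.random((n_rays, n_pix))                           # assumed ray weights
x_true = np.exp(-np.linspace(-1.0, 1.0, n_pix)**2 / 0.1)  # smooth profile
b = A @ x_true

L2 = np.diff(np.eye(n_pix), n=2, axis=0)                  # second-difference prior
x_hat = phillips_tikhonov(A, b, L2, lam=0.1)
```

The smoothness term makes the severely under-determined system uniquely solvable by singling out, among the infinitely many data-consistent profiles, the one with small curvature.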
Skill Levels of Prospective Physics Teachers on Problem Posing
ERIC Educational Resources Information Center
Cildir, Sema; Sezen, Nazan
2011-01-01
Problem posing is one of the topics which the educators thoroughly accentuate. Problem posing skill is defined as an introvert activity of a student's learning. In this study, skill levels of prospective physics teachers on problem posing were determined and their views on problem posing were evaluated. To this end, prospective teachers were given…
A quasi-spectral method for Cauchy problem of 2-D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in the computer as hexadecimal floating-point numbers with a finite number of digits. Accordingly, numerical analysis often suffers from rounding errors, which particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effects of rounding error. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we try to show the effectiveness of multi-precision arithmetic by taking two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well in the resolution of those numerical solutions, as it is combined with the high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
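The effect the authors exploit can be reproduced with exact rational arithmetic standing in for a multi-precision system: double precision fails badly on a notoriously ill-conditioned system, while higher (here: exact) precision recovers the solution. The Hilbert matrix and the size n = 13 are illustrative assumptions, not the paper's test problems.

```python
import numpy as np
from fractions import Fraction

def hilbert(n, exact=False):
    """Hilbert matrix, in floats or exact rationals."""
    one = Fraction(1) if exact else 1.0
    return [[one / (i + j + 1) for j in range(n)] for i in range(n)]

def solve_exact(A, b):
    """Gaussian elimination over the rationals: no rounding error at all."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))  # partial pivoting
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [Fraction(0)] * n
    for i in reversed(range(n)):
        x[i] = (b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

n = 13
x_ref = [Fraction(1)] * n
A_exact = hilbert(n, exact=True)
b_exact = [sum(row[j] * x_ref[j] for j in range(n)) for row in A_exact]

x_float = np.linalg.solve(np.array(hilbert(n)),
                          np.array([float(v) for v in b_exact]))
err_float = max(abs(v - 1.0) for v in x_float)
x_rat = solve_exact(A_exact, b_exact)

print(err_float)   # double precision is far off; the rational solve is exact
```

A dedicated multi-precision library achieves the same effect with tunable precision and far better performance than exact rationals.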
Kirst, Maritt; Zerger, Suzanne; Misir, Vachan; Hwang, Stephen; Stergiopoulos, Vicky
2015-01-01
There is strong evidence that Housing First interventions are effective in improving housing stability and quality of life among homeless people with mental illness and addictions. However, there is very little evidence on the effectiveness of Housing First in improving substance use-related outcomes in this population. This study uses a randomized controlled design to examine the effects of scatter-site Housing First on substance use outcomes in a large urban centre. Substance use outcomes were compared between a Housing First intervention and a treatment-as-usual group in a sample of 575 individuals experiencing homelessness and mental illness, with or without a co-occurring substance use problem, in the At Home/Chez Soi trial in Toronto, Canada. Generalized linear models were used to compare study arms with respect to change in substance use outcomes over time (baseline, 6, 12, 18 and 24 months). At 24 months, participants in the Housing First intervention had significantly greater reductions in the number of days experiencing alcohol problems and the amount of money spent on alcohol than participants in the treatment-as-usual group. No differences between the study arms in illicit drug outcomes were found at 24 months. These findings show that a Housing First intervention can contribute to reductions in alcohol problems over time. However, the lack of effect of the intervention on illicit drug problems suggests that individuals experiencing homelessness, mental illness and drug problems may need additional supports to reduce use. Current controlled trials ISRCTN42520374. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Fundamentals of diffusion MRI physics.
Kiselev, Valerij G
2017-03-01
Diffusion MRI is commonly considered the "engine" for probing the cellular structure of living biological tissues. The difficulty of this task is threefold. First, in structurally heterogeneous media, diffusion is related to structure in quite a complicated way. The challenge of finding diffusion metrics for a given structure is equivalent to other problems in physics that have been known for over a century. Second, in most cases the MRI signal is related to diffusion in an indirect way dependent on the measurement technique used. Third, finding the cellular structure given the MRI signal is an ill-posed inverse problem. This paper reviews well-established knowledge that forms the basis for responding to the first two challenges. The inverse problem is briefly discussed and the reader is warned about a number of pitfalls on the way. Copyright © 2017 John Wiley & Sons, Ltd.
WASP (Write a Scientific Paper): Special cases of selective non-treatment and/or DNR.
Mallia, Pierre
2018-05-03
Fetuses at the low gestational age limit of viability, neonates with life-threatening or life-limiting congenital anomalies, and deteriorating acutely ill newborn babies in intensive care pose taxing ethical questions on whether to forego or stop treatment and allow them to die naturally. Although there is essentially no ethical difference between end-of-life decisions for neonates and those for other children and adults, in the former the fact that we are dealing with a new life may pose greater problems to staff and parents. Good communication skills and involvement of the whole team and the parents should start from the beginning to see which treatment can be foregone or stopped in the best interests of the child. This article deals with the importance of clinical ethics to avoid legal and moral showdowns and discusses accepted moral practice in this difficult area. Copyright © 2018. Published by Elsevier B.V.
Determining the Performances of Pre-Service Primary School Teachers in Problem Posing Situations
ERIC Educational Resources Information Center
Kilic, Cigdem
2013-01-01
This study examined the problem posing strategies of pre-service primary school teachers in different problem posing situations (PPSs) and analysed the issues they encounter while posing problems. A problem posing task consisting of six PPSs (two free, two structured, and two semi-structured situations) was delivered to 40 participants.…
Multimodal Deep Autoencoder for Human Pose Recovery.
Hong, Chaoqun; Yu, Jun; Wan, Jian; Tao, Dacheng; Wang, Meng
2015-12-01
Video-based human pose recovery is usually conducted by retrieving relevant poses using image features. In the retrieval process, the mapping between 2D images and 3D poses is assumed to be linear in most of the traditional methods. However, their relationship is inherently non-linear, which limits the recovery performance of these methods. In this paper, we propose a novel pose recovery method using non-linear mapping with a multi-layered deep neural network. It is based on feature extraction with multimodal fusion and back-propagation deep learning. In multimodal fusion, we construct a hypergraph Laplacian with low-rank representation. In this way, we obtain a unified feature description by standard eigen-decomposition of the hypergraph Laplacian matrix. In back-propagation deep learning, we learn a non-linear mapping from 2D images to 3D poses with parameter fine-tuning. The experimental results on three data sets show that the recovery error is reduced by 20%-25%, which demonstrates the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Stanitz, J. D.
1985-01-01
The general design method for three-dimensional, potential, incompressible or subsonic-compressible flow developed in part 1 of this report is applied to the design of simple, unbranched ducts. A computer program, DIN3D1, is developed and five numerical examples are presented: a nozzle, two elbows, an S-duct, and the preliminary design of a side inlet for turbomachines. The two major inputs to the program are the upstream boundary shape and the lateral velocity distribution on the duct wall. As a result of these inputs, boundary conditions are overprescribed and the problem is ill posed. However, it appears that there are degrees of compatibility between these two major inputs and that, for reasonably compatible inputs, satisfactory solutions can be obtained. By not prescribing the shape of the upstream boundary, the problem presumably becomes well posed, but it is not clear how to formulate a practical design method under this circumstance. Nor does it appear desirable, because the designer usually needs to retain control over the upstream (or downstream) boundary shape. The problem is further complicated by the fact that, unlike the two-dimensional case, and irrespective of the upstream boundary shape, some prescribed lateral velocity distributions do not have proper solutions.
Multistatic aerosol-cloud lidar in space: A theoretical perspective
NASA Astrophysics Data System (ADS)
Mishchenko, M. I.; Alexandrov, M. D.; Cairns, B.; Travis, L. D.
2016-12-01
Accurate aerosol and cloud retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. In this Perspective, we formulate in general terms an aerosol and aerosol-cloud interaction space mission concept intended to provide detailed horizontal and vertical profiles of aerosol physical characteristics as well as identify mutually induced changes in the properties of aerosols and clouds. We argue that a natural and feasible way of addressing the ill-posedness of the inverse scattering problem while having an exquisite vertical-profiling capability is to fly a multistatic (including bistatic) lidar system. We analyze theoretically the capabilities of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and one or more additional platforms each hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar; address the ill-posedness of the inverse problem caused by the highly limited information content of monostatic lidar measurements; address the ill-posedness of the inverse problem caused by vertical integration and surface reflection in passive photopolarimetric measurements; relax polarization accuracy requirements; eliminate the need for exquisite radiative-transfer modeling of the atmosphere-surface system in data analyses; yield the day-and-night observation capability; provide direct characterization of ground-level aerosols as atmospheric pollutants; and yield direct measurements of polarized bidirectional surface reflectance. 
We demonstrate, in particular, that supplementing the conventional backscattering lidar with just one additional receiver flown in formation at a scattering angle close to 170° can dramatically increase the information content of the measurements. Although the specific subject of this Perspective is the multistatic lidar concept, all our conclusions equally apply to a multistatic radar system intended to study from space the global distribution of cloud and precipitation characteristics.
Multistatic Aerosol Cloud Lidar in Space: A Theoretical Perspective
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Alexandrov, Mikhail D.; Cairns, Brian; Travis, Larry D.
2016-01-01
Embedding Game-Based Problem-Solving Phase into Problem-Posing System for Mathematics Learning
ERIC Educational Resources Information Center
Chang, Kuo-En; Wu, Lin-Jung; Weng, Sheng-En; Sung, Yao-Ting
2012-01-01
A problem-posing system is developed with four phases (posing problems, planning, solving problems, and looking back), in which the "solving problem" phase is implemented through game scenarios. The system supports elementary students in the process of problem posing, allowing them to fully engage in mathematical activities. In total, 92 fifth…
Characteristics of Problem Posing of Grade 9 Students on Geometric Tasks
ERIC Educational Resources Information Center
Chua, Puay Huat; Wong, Khoon Yoong
2012-01-01
This is an exploratory study into the individual problem-posing characteristics of 480 Grade 9 Singapore students who were novice problem posers working on two geometric tasks. The students were asked to pose a problem for their friends to solve. Analyses of solvable posed problems were based on the problem type, problem information, solution type…
[Legal aspects of the use of footbaths for cattle and sheep].
Kleiminger, E
2012-04-24
Claw diseases pose a major problem for dairy and sheep farms. In addition to systemic treatment of these illnesses by drug injection, veterinarians are discussing the use of footbaths for the local treatment of dermatitis digitalis or foot rot. On farms, footbaths are used with different substances and for various purposes. The author presents the requirements for veterinary medicinal products (marketing authorization and manufacturing authorization) and explains the operation of the "cascade" in the event of a treatment crisis. In addition, the distinction between veterinary-hygiene biocidal products, veterinary medicinal products, and substances for claw care is explained.
Boisvert, Ronald F.; Donahue, Michael J.; Lozier, Daniel W.; McMichael, Robert; Rust, Bert W.
2001-01-01
In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST’s current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, characterization of the accuracy of software for micromagnetic modeling, and in the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years. PMID:27500024
Antinauseants in Pregnancy: Teratogens or Not?
Biringer, Anne
1984-01-01
Nausea and/or vomiting affect 50% of all pregnant women. For most women, this is a self-limited problem which responds well to conservative management. However, there are some situations where the risk to the mother and fetus posed by the illness is greater than the possible risk of teratogenicity of antinauseant drugs. Antihistamines have had the widest testing, and to date, there has been no evidence linking doxylamine, dimenhydrinate or promethazine to congenital malformations. Since no available drug has official approval for use in nausea and vomiting of pregnancy, the physician is left alone to make this difficult decision. PMID:21279128
On the reconstruction of the surface structure of the spotted stars
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.; Sakhibullin, N. A.
2013-07-01
We have developed and tested a light-curve inversion technique for photometric mapping of spotted stars. The surface of a spotted star is partitioned into small area elements, over which a search is carried out for the intensity distribution providing the best agreement between the observed and model light curves within a specified uncertainty. We have tested mapping techniques based on the use of both a single light curve and several light curves obtained in different photometric bands. Surface reconstruction artifacts due to the ill-posed nature of the problem have been identified.
NASA Astrophysics Data System (ADS)
Baronian, Vahan; Bourgeois, Laurent; Chapuis, Bastien; Recoquillay, Arnaud
2018-07-01
This paper presents an application of the linear sampling method to ultrasonic non-destructive testing of an elastic waveguide. In particular, the NDT context implies that both the excitations and the measurements are located on the surface of the waveguide and are given in the time domain. Our strategy consists in using a modal formulation of the linear sampling method at multiple frequencies, such a modal formulation being justified theoretically in Bourgeois et al (2011 Inverse Problems 27 055001) for rigid obstacles and in Bourgeois and Lunéville (2013 Inverse Problems 29 025017) for cracks. Our strategy requires the inversion of some emission and reception matrices which deserve special attention due to potential ill-conditioning. The feasibility of our method is demonstrated with the help of artificial data as well as real data.
Problem Posing with the Multiplication Table
ERIC Educational Resources Information Center
Dickman, Benjamin
2014-01-01
Mathematical problem posing is an important skill for teachers of mathematics, and relates readily to mathematical creativity. This article gives a bit of background information on mathematical problem posing, lists further references to connect problem posing and creativity, and then provides 20 problems based on the multiplication table to be…
Investigation of Problem-Solving and Problem-Posing Abilities of Seventh-Grade Students
ERIC Educational Resources Information Center
Arikan, Elif Esra; Ünal, Hasan
2015-01-01
This study aims to examine the effect of multiple problem-solving skills on the problem-posing abilities of gifted and non-gifted students and to assess whether the possession of such skills can predict giftedness or affect problem-posing abilities. Participants' metaphorical images of problem posing were also explored. Participants were 20 gifted…
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. 
For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sun, Jingliang; Liu, Chunsheng
2018-01-01
In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via an adaptive dynamic programming technique. In addition, a suitable non-quadratic functional is utilised to encode the control constraints into the differential game problem. A single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of the associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the weight estimation error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.
A boundary-value problem for a first-order hyperbolic system in a two-dimensional domain
NASA Astrophysics Data System (ADS)
Zhura, N. A.; Soldatov, A. P.
2017-06-01
We consider a strictly hyperbolic first-order system of three equations with constant coefficients in a bounded piecewise-smooth domain. The boundary of the domain is assumed to consist of six smooth non-characteristic arcs. A boundary-value problem in this domain is posed by alternately prescribing one or two linear combinations of the components of the solution on these arcs. We show that this problem has a unique solution under certain additional conditions on the coefficients of these combinations, the boundary of the domain and the behaviour of the solution near the characteristics passing through the corner points of the domain.
NASA Technical Reports Server (NTRS)
Turner, L. R.
1960-01-01
The problem of solving systems of nonlinear equations has been relatively neglected in the mathematical literature, especially in the textbooks, in comparison to the corresponding linear problem. Moreover, treatments that have an appearance of generality fail to discuss the nature of the solutions and the possible pitfalls of the methods suggested. Probably it is unrealistic to expect that a unified and comprehensive treatment of the subject will evolve, owing to the great variety of situations possible, especially in the applied field where some requirement of human or mechanical efficiency is always present. Therefore we attempt here simply to pose the problem and to describe and partially appraise the methods of solution currently in favor.
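One of the methods still "currently in favor" for such systems is Newton's method. As a modern illustration in the spirit of the report (the example system, starting point, and code are our own sketch, not taken from the report):

```python
import numpy as np

def newton_system(f, jac, x0, tol=1e-10, max_iter=50):
    """Solve f(x) = 0 for a system of nonlinear equations by Newton's method."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        # Solve the linearized system J(x) dx = -f(x) and update the iterate.
        x = x + np.linalg.solve(jac(x), -fx)
    return x

# Example: intersection of the circle x^2 + y^2 = 5 with the hyperbola xy = 2.
f = lambda v: np.array([v[0]**2 + v[1]**2 - 5.0, v[0] * v[1] - 2.0])
jac = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton_system(f, jac, [2.5, 0.7])
# From this starting point the iteration converges to the root (2, 1).
```

As the abstract cautions, such a sketch glosses over the pitfalls: convergence depends on the starting point, and the Jacobian may be singular near tangential intersections.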
Total-variation based velocity inversion with Bregmanized operator splitting algorithm
NASA Astrophysics Data System (ADS)
Zand, Toktam; Gholami, Ali
2018-04-01
Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed to solve them and to generate a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which together allow efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
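The Dix conversion used in the second experiment can be written in a few lines; the two-layer values below are made up for illustration and are not from the paper:

```python
import numpy as np

def dix_interval_velocities(t, v_rms):
    """Interval velocities from RMS velocities via the Dix formula.

    t     : two-way traveltimes (s), strictly increasing
    v_rms : RMS velocities (m/s) at those times
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v_rms, dtype=float)
    # v_int^2 over each interval = (t2 v2^2 - t1 v1^2) / (t2 - t1)
    num = t[1:] * v[1:]**2 - t[:-1] * v[:-1]**2
    return np.sqrt(num / (t[1:] - t[:-1]))

# Two layers, 1 s of two-way time each, with interval velocities 2000 and 3000 m/s.
t = [1.0, 2.0]
v_rms = [2000.0, np.sqrt((1.0 * 2000.0**2 + 1.0 * 3000.0**2) / 2.0)]
v_int = dix_interval_velocities(t, v_rms)
# v_int[0] recovers the second layer's interval velocity, 3000 m/s.
```

Because the formula differences neighbouring RMS values, small errors in the RMS velocities are strongly amplified; this is exactly the kind of ill-conditioning the regularized inversion described in the abstract is designed to control.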
Ward, Earlise; Wiltshire, Jacqueline C.; Detry, Michelle A.; Brown, R. L.
2014-01-01
Background Although research focused on African Americans with mental illness has been increasing, few researchers have addressed gender and age differences in beliefs, attitudes, and coping. Objective To examine African Americans' beliefs about mental illness, attitudes toward seeking mental health services, preferred coping behaviors, and whether these variables differ by gender and age. Method An exploratory, cross-sectional survey design was used. Participants were 272 community-dwelling African Americans aged 25-72 years. Data analysis included descriptive statistics and general linear regression models. Results Depression was the most common mental illness and there were no gender differences in prevalence. Both men and women believed they knew some of the symptoms and causal factors of mental illness. Their attitudes suggested they are not very open to acknowledging psychological problems, are very concerned about stigma associated with mental illness, and are somewhat open to seeking mental health services, but they prefer religious coping. Significant gender and age differences were evident in attitudes and preferred coping. Discussion Our findings have implications for gender and age-specific psychoeducation interventions and future research. For instance, psychoeducation or community awareness programs designed to increase openness to psychological problems and reducing stigma are needed. Also, exploration of partnerships between faith-based organizations and mental health services could be helpful to African Americans. PMID:23328705
Renal and urologic manifestations of pediatric condition falsification/Munchausen by proxy.
Feldman, Kenneth W; Feldman, Marc D; Grady, Richard; Burns, Mark W; McDonald, Ruth
2007-06-01
Renal and urologic problems in pediatric condition falsification (PCF)/Munchausen by proxy (MBP) can pose frustrating diagnostic and management problems. Five previously unreported victims of PCF/MBP are described. Symptoms included artifactual hematuria, recalcitrant urinary infections, dysfunctional voiding, perineal irritation, glucosuria, and "nutcracker syndrome", in addition to alleged sexual abuse. Falsifications included false or exaggerated history, specimen contamination, and induced illness. Caretakers also intentionally withheld appropriately prescribed treatment. Children underwent invasive diagnostic and surgical procedures because of the falsifications. They developed iatrogenic complications as well as behavioral problems stemming from their abuse. A PCF/MBP database was started in 1995 and includes the characteristics of 135 PCF/MBP victims examined by the first author between 1974 and 2006. Analysis of the database revealed that 25% of the children had renal or urologic issues. They were the presenting/primary issue for five. Diagnosis of PCF/MBP was delayed an average of 4.5 years from symptom onset. Almost all patients were victimized by their mothers, and maternal health falsification and somatization were common. Thirty-one of 34 children had siblings who were also victimized, six of whom died. In conclusion, falsifications of childhood renal and urologic illness are relatively uncommon; however, the deceits are prolonged and tortuous. Early recognition and intervention might limit the harm.
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
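The inverse-filtering idea can be illustrated in a deliberately simplified non-blind setting: a known kernel and a plain Tikhonov-type damping term in the Fourier domain, rather than the star-norm/TV model of the paper. All names and values here are our own sketch:

```python
import numpy as np

def regularized_inverse_filter(y, kernel, lam=1e-3):
    """Non-blind, Tikhonov-damped inverse filtering in the Fourier domain.

    Recovers x from y = k (*) x (circular convolution) via
    X = conj(K) Y / (|K|^2 + lam), damping frequencies where K is small.
    """
    K = np.fft.fft(kernel, n=y.size)
    X = np.conj(K) * np.fft.fft(y) / (np.abs(K)**2 + lam)
    return np.real(np.fft.ifft(X))

# A 5-point moving-average blur applied to a spike train, then inverted.
x = np.zeros(64)
x[[10, 30, 45]] = 1.0
kernel = np.zeros(64)
kernel[:5] = 0.2
y = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(x)))
x_hat = regularized_inverse_filter(y, kernel, lam=1e-6)
# x_hat closely recovers the spikes at indices 10, 30 and 45.
```

The damping parameter plays the role of regularization: with lam = 0 the filter divides by near-zero spectral values of the kernel and amplifies any noise, which is the ill-posedness the abstract refers to.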
Life stress and illness: a systems approach.
Christie-Seely, J
1983-03-01
The link between stress and illness has been forged by researchers like Holmes and Rahe whose Social Readjustment Rating Scale can be used by family physicians to assess their patients' stress. The concept of stress has been clarified by the systems approach to illness. Stress and illness are embedded in a biopsychosocial matrix of several systems levels, each of which may be a source of stress as well as a support system. Stress is not the end result of a linear chain of causes and effects, but part of a feedback system in a community or family. The family is the major source of lifestyle and personality, the health belief system and modes of problem solving and coping, as well as of stress and support. The family physician can have a major role in educating the individual and family about stress and illness, and in altering the meaning of stress from catastrophe to challenge and source of growth. Anticipatory guidance for the normal crises of the life cycle and the crises of illness, loss and death can help prevent further family dysfunction and illness.
Some Reflections on Problem Posing: A Conversation with Marion Walter
ERIC Educational Resources Information Center
Baxter, Juliet A.
2005-01-01
Marion Walter, an internationally acclaimed mathematics educator, discusses problem posing, focusing on both the merits of problem posing and techniques to encourage it. She believes that a playful attitude toward problem variables is an essential part of an inquiring mind and the more opportunities that learners have, to change a…
Negative probability of random multiplier in turbulence
NASA Astrophysics Data System (ADS)
Bai, Xuan; Su, Weidong
2017-11-01
The random multiplicative process (RMP), which was proposed over 50 years ago, is a convenient phenomenological ansatz for the turbulence cascade. In the RMP, the fluctuation at a large scale is statistically mapped to the one at a small scale by the linear action of an independent random multiplier (RM). Simple as it is, the RMP is powerful enough that all of the known scaling laws can be included in this model. So far as we know, however, a direct extraction of the probability density function (PDF) of the RM has been absent, because the deconvolution involved is ill-posed. Nevertheless, with progress in the study of inverse problems, the situation has changed. By using some new regularization techniques, we recover for the first time the PDFs of the RMs in some turbulent flows. All of the consistent results from various methods point to an amazing observation: the PDFs can attain negative values in some intervals, which can also be justified by some properties of infinitely divisible distributions. Despite the conceptual unconventionality, the present study illustrates the implications of negative probability in turbulence in several aspects, with emphasis on its role in describing the interaction between fluctuations at different scales. This work is supported by the NSFC (No. 11221062 and No. 11521091).
Towards adjoint-based inversion for rheological parameters in nonlinear viscous mantle flow
NASA Astrophysics Data System (ADS)
Worthen, Jennifer; Stadler, Georg; Petra, Noemi; Gurnis, Michael; Ghattas, Omar
2014-09-01
We address the problem of inferring mantle rheological parameter fields from surface velocity observations and instantaneous nonlinear mantle flow models. We formulate this inverse problem as an infinite-dimensional nonlinear least squares optimization problem governed by nonlinear Stokes equations. We provide expressions for the gradient of the cost functional of this optimization problem with respect to two spatially-varying rheological parameter fields: the viscosity prefactor and the exponent of the second invariant of the strain rate tensor. The adjoint (linearized) Stokes equations, which are characterized by a 4th-order anisotropic viscosity tensor, facilitate efficient computation of the gradient. A quasi-Newton method for the solution of this optimization problem is presented, which requires the repeated solution of both nonlinear forward Stokes and linearized adjoint Stokes equations. For the solution of the nonlinear Stokes equations, we find that Newton's method is significantly more efficient than a Picard fixed point method. Spectral analysis of the inverse operator given by the Hessian of the optimization problem reveals that the numerical eigenvalues collapse rapidly to zero, suggesting a high degree of ill-posedness of the inverse problem. To overcome this ill-posedness, we employ Tikhonov regularization (favoring smooth parameter fields) or total variation (TV) regularization (favoring piecewise-smooth parameter fields). Solutions of two- and three-dimensional finite element-based model inverse problems show that a constant parameter in the constitutive law can be recovered well from surface velocity observations. Inverting for a spatially-varying parameter field leads to its reasonable recovery, in particular close to the surface. When inferring two spatially varying parameter fields, only an effective viscosity field and the total viscous dissipation are recoverable.
Finally, a model of a subducting plate shows that a localized weak zone at the plate boundary can be partially recovered, especially with TV regularization.
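The effect of Tikhonov regularization on a problem whose eigenvalues collapse to zero can be sketched on a generic ill-conditioned linear system (zeroth-order Tikhonov on a Hilbert matrix, our own toy example, not the paper's Stokes-based setup):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min_x ||Ax - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# A classic ill-conditioned test matrix (Hilbert matrix); its singular
# values collapse rapidly to zero, just like the Hessian spectrum above.
n = 12
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-8 * (-1.0) ** np.arange(n)   # tiny oscillatory "noise"

x_naive = np.linalg.solve(A, b)          # unregularized: noise wildly amplified
x_reg = tikhonov_solve(A, b, lam=1e-10)  # damped: stays close to x_true
```

The damping parameter trades bias for stability: components of the solution along small singular values, which the data cannot constrain, are suppressed instead of being filled with amplified noise.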
Good News for Borehole Climatology
NASA Astrophysics Data System (ADS)
Rath, Volker; Fidel Gonzalez-Rouco, J.; Goosse, Hugues
2010-05-01
Though the investigation of observed borehole temperatures has proved to be a valuable tool for the reconstruction of ground surface temperature histories, there are many open questions concerning the significance and accuracy of the reconstructions from these data. In particular, the temperature signal of the warming after the Last Glacial Maximum (LGM) is still present in borehole temperature profiles. It influences the relatively shallow boreholes used in current paleoclimate inversions to estimate temperature changes in the last centuries. This is shown using Monte Carlo experiments on past surface temperature change, using plausible distributions for the most important parameters, i.e., amplitude and timing of the glacial-interglacial transition, the prior average temperature, and petrophysical properties. It has been argued that the signature of the last glacial-interglacial transition could be responsible for the high amplitudes of millennial temperature reconstructions. However, in shallow boreholes the additional effect of past climate can be reasonably approximated by a linear variation of temperature with depth, and thus be accommodated by a "biased" background heat flow. This is good news for borehole climatology, but it implies that geological heat flow values have to be interpreted accordingly. Borehole climate reconstructions from these shallow boreholes most probably underestimate past variability because of the diffusive character of the heat conduction process and the smoothness constraints necessary for obtaining stable solutions of this ill-posed inverse problem. A simple correction based on subtracting an appropriate prior surface temperature history shows promising results, reducing these errors considerably, also with deeper boreholes, where the heat flow signal cannot be approximated linearly, and improves the comparisons with AOGCM modeling results.
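The claim that the post-LGM signal is nearly linear over shallow depths can be checked with the standard conductive half-space solution. The parameter values below (diffusivity, warming amplitude, timing) are assumed typical values for illustration, not figures from the abstract:

```python
import numpy as np
from math import erfc

# Present-day temperature anomaly at depth z from a surface warming of dT
# that occurred t seconds ago (conductive half-space, diffusivity kappa):
#   T(z) = dT * erfc(z / (2 * sqrt(kappa * t)))
kappa = 1.0e-6          # m^2/s, typical rock thermal diffusivity (assumed)
t = 15_000 * 3.15e7     # ~15 ka glacial-interglacial transition, in seconds
dT = 10.0               # K, assumed amplitude of the warming

z = np.linspace(0.0, 300.0, 61)   # shallow borehole depths (m)
anomaly = dT * np.array([erfc(zi / (2.0 * np.sqrt(kappa * t))) for zi in z])

# Over such shallow depths the signal is nearly linear in z, so it can be
# absorbed into an apparent ("biased") background temperature gradient:
coeffs = np.polyfit(z, anomaly, 1)
residual = anomaly - np.polyval(coeffs, z)
# max |residual| is only a few hundredths of a kelvin out of ~10 K
```

The diffusion length 2*sqrt(kappa*t) is over a kilometre, which is why the curvature of the erfc profile only becomes apparent in deep boreholes.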
NASA Astrophysics Data System (ADS)
Barnoud, Anne; Coutant, Olivier; Bouligand, Claire; Gunawan, Hendra; Deroussi, Sébastien
2016-04-01
We use a Bayesian formalism combined with a grid node discretization for the linear inversion of gravimetric data in terms of 3-D density distribution. The forward modelling and the inversion method are derived from seismological inversion techniques in order to facilitate joint inversion or interpretation of density and seismic velocity models. The Bayesian formulation introduces covariance matrices on model parameters to regularize the ill-posed problem and reduce the non-uniqueness of the solution. This formalism favours smooth solutions and allows us to specify a spatial correlation length and to perform inversions at multiple scales. We also extract resolution parameters from the resolution matrix to discuss how well our density models are resolved. This method is applied to the inversion of data from the volcanic island of Basse-Terre in Guadeloupe, Lesser Antilles. A series of synthetic tests are performed to investigate advantages and limitations of the methodology in this context. This study results in the first 3-D density models of the island of Basse-Terre for which we identify: (i) a southward decrease of densities parallel to the migration of volcanic activity within the island, (ii) three dense anomalies beneath Petite Plaine Valley, Beaugendre Valley and the Grande-Découverte-Carmichaël-Soufrière Complex that may reflect the trace of former major volcanic feeding systems, (iii) shallow low-density anomalies in the southern part of Basse-Terre, especially around La Soufrière active volcano, Piton de Bouillante edifice and along the western coast, reflecting the presence of hydrothermal systems and fractured and altered rocks.
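The Bayesian MAP estimate with a model covariance built from a correlation length can be sketched on a toy 1-D problem. The forward kernel and all parameter values below are hypothetical and stand in for the gravimetric operator of the paper:

```python
import numpy as np

def bayesian_map_inversion(G, d, sigma_d, sigma_m, positions, corr_len):
    """MAP solution of a linear inverse problem with Gaussian priors.

    The model covariance uses an exponential kernel with correlation
    length corr_len, which regularizes the problem and favours smooth models.
    """
    Cd_inv = np.eye(len(d)) / sigma_d**2
    dist = np.abs(positions[:, None] - positions[None, :])
    Cm = sigma_m**2 * np.exp(-dist / corr_len)
    Cm_inv = np.linalg.inv(Cm)
    H = G.T @ Cd_inv @ G + Cm_inv          # posterior precision matrix
    return np.linalg.solve(H, G.T @ Cd_inv @ d)

# Toy smoothing kernel: each datum is a weighted average of nearby model cells.
x = np.linspace(0.0, 1.0, 40)
G = np.exp(-((x[:, None] - x[None, :])**2) / 0.01)
G /= G.sum(axis=1, keepdims=True)
m_true = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)   # a dense block anomaly
d = G @ m_true
m_map = bayesian_map_inversion(G, d, sigma_d=0.01, sigma_m=1.0,
                               positions=x, corr_len=0.1)
# m_map is a smoothed version of the block anomaly
```

Changing corr_len reproduces the multiscale behaviour described in the abstract: a long correlation length yields a broad, smooth model, while a short one allows sharper but less stable features.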
Noordraven, Ernst L; Wierdsma, André I; Blanken, Peter; Bloemendaal, Anthony Ft; Mulder, Cornelis L
2016-01-01
Noncompliance is a major problem for patients with a psychotic disorder. Two important risk factors for noncompliance that have a severe negative impact on treatment outcomes are impaired illness insight and lack of motivation. Our cross-sectional study explored how they are related to each other and to compliance with depot medication. Interviews were conducted with 169 outpatients with a psychotic disorder taking depot medication. Four patient groups were defined based on low or high illness insight and on low or high motivation. The associations between depot-medication compliance, motivation, and insight were illustrated using generalized linear models. A generalized linear model showed a significant interaction effect between motivation and insight. Patients with poor insight and high motivation for treatment were more compliant (94%) (95% confidence interval [CI]: 1.821, 3.489) with their depot medication than patients with poor insight and low motivation (61%) (95% CI: 0.288, 0.615). Patients with both insight and high motivation for treatment were less compliant (73%) (95% CI: 0.719, 1.315) than those with poor insight and high motivation. Motivation for treatment was more strongly associated with depot-medication compliance than was illness insight. Being motivated to take medication, whether to get better or for other reasons, may be a more important factor than having illness insight in terms of improving depot-medication compliance. Possible implications for clinical practice are discussed.
Wavelet methods in multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Helin, T.; Yudytskiy, M.
2013-08-01
Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitation imposed by atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.
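For a Gaussian prior, the MAP reconstruction step mentioned above reduces to solving a symmetric positive-definite linear system with conjugate gradients. The sketch below is a generic illustration of that step with an invented dense operator and an identity prior precision; it does not reproduce the wavelet discretization or the preconditioning of the paper.

```python
import numpy as np

def conjugate_gradient(M, rhs, tol=1e-10, max_iter=500):
    """Plain conjugate gradients for a symmetric positive-definite M (a sketch)."""
    x = np.zeros_like(rhs)
    r = rhs - M @ x
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(rhs)
    for _ in range(max_iter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * b_norm:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
n_meas, n_layers = 40, 25
A = rng.standard_normal((n_meas, n_layers))        # stand-in tomography operator
x_true = rng.standard_normal(n_layers)
b = A @ x_true + 0.05 * rng.standard_normal(n_meas)

# Gaussian MAP estimate: minimize ||Ax - b||^2 / sigma^2 + x^T C^{-1} x,
# i.e. solve the SPD system (A^T A / sigma^2 + C^{-1}) x = A^T b / sigma^2
sigma2 = 0.05 ** 2
C_inv = np.eye(n_layers)                           # assumed prior precision
M = A.T @ A / sigma2 + C_inv
rhs = A.T @ b / sigma2
x_map = conjugate_gradient(M, rhs)
print(np.linalg.norm(M @ x_map - rhs) / np.linalg.norm(rhs))
```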
Fundamental concepts of problem-based learning for the new facilitator.
Kanter, S L
1998-01-01
Problem-based learning (PBL) is a powerful small group learning tool that should be part of the armamentarium of every serious educator. Classic PBL uses ill-structured problems to simulate the conditions that occur in the real environment. Students play an active role and use an iterative process of seeking new information based on identified learning issues, restructuring the information in light of the new knowledge, gathering additional information, and so forth. Faculty play a facilitatory role, not a traditional instructional role, by posing metacognitive questions to students. These questions serve to assist in organizing, generalizing, and evaluating knowledge; to probe for supporting evidence; to explore faulty reasoning; to stimulate discussion of attitudes; and to develop self-directed learning and self-assessment skills. Professional librarians play significant roles in the PBL environment extending from traditional service provider to resource person to educator. Students and faculty usually find the learning experience productive and enjoyable. PMID:9681175
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality and has important research significance in the field of image processing. However, the choice of the dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the problem of nearest neighbor selection. Building on sparse-representation-based super-resolution reconstruction, a super-resolution image reconstruction algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy problem of training only a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
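The role of sparse coding in sub-dictionary (nearest neighbor) selection can be illustrated with a toy example. The snippet below is a hedged sketch: the two class sub-dictionaries are random stand-ins, and ISTA is used as a generic sparse coder; the paper's actual multi-class dictionary training is not reproduced.

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=200):
    """Iterative soft-thresholding for min_a 0.5*||y - D a||^2 + lam*||a||_1."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2    # 1/L, L = squared spectral norm of D
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a + step * (D.T @ (y - D @ a))    # gradient step on the data term
        a = np.sign(a) * np.maximum(np.abs(a) - lam * step, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(2)
# Two hypothetical class sub-dictionaries with unit-norm atoms
D1 = rng.standard_normal((16, 32)); D1 /= np.linalg.norm(D1, axis=0)
D2 = rng.standard_normal((16, 8));  D2 /= np.linalg.norm(D2, axis=0)

# A patch generated from three atoms of class 1
y = D1[:, :3] @ np.array([1.0, -0.5, 0.8])

# Select the sub-dictionary whose sparse code reconstructs the patch best,
# replacing a plain Euclidean nearest-neighbor rule
errs = [np.linalg.norm(y - D @ ista(D, y)) for D in (D1, D2)]
best = int(np.argmin(errs))
print("selected sub-dictionary:", best)
```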
A space-frequency multiplicative regularization for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilize the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of the excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. It is more particularly pointed out that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.
Glimpse: Sparsity based weak lensing mass-mapping tool
NASA Astrophysics Data System (ADS)
Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.
2018-02-01
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
NASA Astrophysics Data System (ADS)
Hasanah, N.; Hayashi, Y.; Hirashima, T.
2017-02-01
Arithmetic word problems remain one of the most difficult areas of teaching mathematics. Learning by problem posing has been suggested as an effective way to improve students' understanding. However, the practice is difficult to implement in the usual classroom because of the extra time needed to assess and give feedback on students' posed problems. To address this issue, we have developed a tablet PC software named Monsakun for learning by posing arithmetic word problems based on the Triplet Structure Model. It uses the mechanism of sentence integration, an efficient implementation of problem posing that enables agent assessment of posed problems. The learning environment has been used in actual Japanese elementary school classrooms, and its effectiveness has been confirmed in previous research. In this study, ten Indonesian elementary school students living in Japan participated in a learning session of problem posing using Monsakun in the Indonesian language. We analyzed their learning activities and show that the students were able to interact with the structure of simple word problems using this learning environment. The results of the data analysis and a questionnaire suggest that the use of Monsakun provides a way of creating an interactive and fun environment for learning by problem posing for Indonesian elementary school students.
Computed inverse resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
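The Tikhonov-regularized inverse-filtering approach mentioned above can be sketched in one dimension. The following toy example uses a Gaussian blur as a stand-in for the 3-dimensional dipole kernel and an invented regularization weight; it shows only the regularized Fourier-domain deconvolution step, not the split Bregman TV solver.

```python
import numpy as np

# Hypothetical 1-D analogue of the field-map -> susceptibility deconvolution.
n = 128
x = np.arange(n)
chi = np.zeros(n); chi[40:60] = 1.0                 # toy susceptibility source

# Stand-in convolution kernel (the real problem uses the 3-D dipole kernel)
kernel = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)
kernel /= kernel.sum()
K = np.fft.fft(np.fft.ifftshift(kernel))            # transfer function

# Forward model: field map = kernel * chi, plus measurement noise
field = np.real(np.fft.ifft(K * np.fft.fft(chi)))
field += 1e-3 * np.random.default_rng(3).standard_normal(n)

# Tikhonov-regularized inverse filter: chi_hat = conj(K) F / (|K|^2 + lam)
lam = 1e-3
chi_hat = np.real(np.fft.ifft(np.conj(K) * np.fft.fft(field)
                              / (np.abs(K) ** 2 + lam)))

corr = np.corrcoef(chi, chi_hat)[0, 1]
print(round(corr, 3))
```

The regularization weight `lam` plays the same stabilizing role as the truncation level of a truncated inverse filter: it damps the frequencies at which the kernel (and hence the data) carries little information.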
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
ERIC Educational Resources Information Center
Lyonga, Agnes Ngale; Eighmy, Myron A.; Garden-Robinson, Julie
2010-01-01
Foodborne illness and food safety risks pose health threats to everyone, including international college students who live in the United States and encounter new or unfamiliar foods. This study assessed the prevalence of self-reported foodborne illness among international college students by cultural regions and length of time in the United…
Maximal likelihood correspondence estimation for face recognition across pose.
Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang
2014-10-01
Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in across-pose scenarios. To address this problem, many image matching-based methods have been proposed to estimate semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems in previous image matching-based correspondence learning methods: 1) failure to fully exploit face-specific structure information in correspondence estimation and 2) failure to learn personalized correspondence for each probe image. To this end, we first build a model, termed morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn personalized correspondence based on the maximal likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of the novel maximal likelihood objective, the proposed MLCE method can reliably learn correspondence between faces in different poses, even in a complex wild environment, i.e., the Labeled Faces in the Wild database.
Comparisons of linear and nonlinear pyramid schemes for signal and image processing
NASA Astrophysics Data System (ADS)
Morales, Aldo W.; Ko, Sung-Jea
1997-04-01
Linear filter banks are being used extensively in image and video applications. New research results in wavelet applications for compression and de-noising are constantly appearing in the technical literature. On the other hand, non-linear filter banks are also being used regularly in image pyramid algorithms. There are some inherent advantages in using non-linear filters instead of linear filters when non-Gaussian processes are present in images. However, a consistent way of comparing performance criteria between these two schemes has not been fully developed yet. In this paper a recently discovered tool, sample selection probabilities, is used to compare the behavior of linear and non-linear filters. The conversion from the weights of order statistics (OS) filters to the coefficients of the impulse response is obtained through these probabilities. However, the reverse problem, the conversion from the coefficients of the impulse response to the weights of OS filters, is not yet fully understood. One of the reasons for this difficulty is the highly non-linear nature of the partitions and the generating function used. In the present paper the problem is posed as an integer linear programming optimization subject to constraints obtained directly from the coefficients of the impulse response. Although the technique to be presented is not completely refined, it certainly appears to be promising. Some results will be shown.
NASA Astrophysics Data System (ADS)
Seo, Jongmin; Schiavazzi, Daniele; Marsden, Alison
2017-11-01
Cardiovascular simulations are increasingly used in clinical decision making, surgical planning, and disease diagnostics. Patient-specific modeling and simulation typically proceeds through a pipeline from anatomic model construction using medical image data to blood flow simulation and analysis. To provide confidence intervals on simulation predictions, we use an uncertainty quantification (UQ) framework to analyze the effects of numerous uncertainties that stem from clinical data acquisition, modeling, material properties, and boundary condition selection. However, UQ poses a computational challenge requiring multiple evaluations of the Navier-Stokes equations in complex 3-D models. To achieve efficiency in UQ problems with many function evaluations, we implement and compare a range of iterative linear solver and preconditioning techniques in our flow solver. We then discuss applications to patient-specific cardiovascular simulation and how the problem/boundary condition formulation in the solver affects the selection of the most efficient linear solver. Finally, we discuss performance improvements in the context of uncertainty propagation. Support from National Institute of Health (R01 EB018302) is greatly appreciated.
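The effect of preconditioning on iterative solver cost can be illustrated on a deliberately badly scaled symmetric positive-definite system. The sketch below compares plain conjugate gradients with a Jacobi (diagonal) preconditioner; the matrix is an invented stand-in, not a discretized Navier-Stokes operator, and the dimensions are chosen only to keep the example fast.

```python
import numpy as np

def cg(M, b, diag_precond=None, tol=1e-8, max_iter=2000):
    """Conjugate gradients with an optional diagonal (Jacobi) preconditioner.

    Returns the solution and the number of iterations used (a sketch only)."""
    x = np.zeros_like(b)
    r = b.copy()
    z = r / diag_precond if diag_precond is not None else r.copy()
    p = z.copy()
    rz = r @ z
    b_norm = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        Mp = M @ p
        alpha = rz / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        if np.linalg.norm(r) < tol * b_norm:
            return x, k
        z = r / diag_precond if diag_precond is not None else r.copy()
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Badly scaled SPD system: a stand-in for an ill-conditioned discretized operator
rng = np.random.default_rng(4)
n = 120
B = rng.standard_normal((n, n))
A_core = np.eye(n) + B @ B.T / n            # well-conditioned SPD core
s = np.logspace(0, 4, n)                    # row/column scaling over 4 decades
M = s[:, None] * A_core * s[None, :]        # badly conditioned after scaling
b = M @ np.ones(n)

x_plain, it_plain = cg(M, b)
x_jacobi, it_jacobi = cg(M, b, diag_precond=np.diag(M))
print("iterations:", it_plain, "plain vs", it_jacobi, "Jacobi")
```

Because the ill-conditioning here is purely a scaling effect, the cheap diagonal preconditioner removes most of it, which is the kind of trade-off a solver comparison for UQ workloads has to quantify.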
Rizk, Nesrine A; Kanafani, Zeina A; Tabaja, Hussam Z; Kanj, Souha S
2017-07-01
Beta-lactams are the cornerstone of therapy in critical care settings, but their clinical efficacy is challenged by the rise in bacterial resistance. Infections with multi-drug resistant organisms are frequent in intensive care units, posing significant therapeutic challenges. The problem is compounded by a dearth in the development of new antibiotics. In addition, critically-ill patients have unique physiologic characteristics that alter the drugs' pharmacokinetics and pharmacodynamics. Areas covered: The prolonged infusion of antibiotics (extended infusion [EI] and continuous infusion [CI]) has been the focus of research in the last decade. As beta-lactams have time-dependent killing characteristics that are altered in critically-ill patients, prolonged infusion is an attractive approach to maximize their drug delivery and efficacy. Several studies have compared traditional dosing to EI/CI of beta-lactams with regard to clinical efficacy. Clinical data are primarily composed of retrospective studies and some randomized controlled trials. Several reports show promising results. Expert commentary: Reviewing the currently available evidence, we conclude that EI/CI is probably beneficial in the treatment of critically-ill patients in whom an organism has been identified, particularly those with respiratory infections. Further studies are needed to evaluate the efficacy of EI/CI in the management of infections with resistant organisms.
Wynaden, D; Orb, A; McGowan, S; Downie, J
2000-09-01
The preparedness of comprehensive nurses to work with the mentally ill is of concern to many mental health professionals. Discussion as to whether current undergraduate nursing programs in Australia prepare a graduate to work as a beginning practitioner in the mental health area has been the centre of debate for most of the 1990s. This, along with the apparent lack of interest and motivation of these nurses to work in the mental health area following graduation, remains a major problem for mental health care providers. With one in five Australians now experiencing the burden of a major mental illness, the preparation of a nurse who is competent to work with the mentally ill would appear to be a priority. The purpose of the present study was to determine third year undergraduate nursing students' perceived level of preparedness to work with mentally ill clients. The results suggested significant differences in students' perceived level of confidence, knowledge and skills prior to and following theoretical and clinical exposure to the mental health area. Pre-testing of students before entering their third year indicated that the philosophy of comprehensive nursing: integration, although aspired to in principle, does not appear to occur in reality.
Problem Posing as a Pedagogical Strategy: A Teacher's Perspective
ERIC Educational Resources Information Center
Staebler-Wiseman, Heidi A.
2011-01-01
Student problem posing has been advocated for mathematics instruction, and it has been suggested that problem posing can be used to develop students' mathematical content knowledge. But, problem posing has rarely been utilized in university-level mathematics courses. The goal of this teacher-as-researcher study was to develop and investigate…
Performance issues for iterative solvers in device simulation
NASA Technical Reports Server (NTRS)
Fan, Qing; Forsyth, P. A.; Mcmacken, J. R. F.; Tang, Wei-Pai
1994-01-01
Due to memory limitations, iterative methods have become the method of choice for large scale semiconductor device simulation. However, it is well known that these methods still suffer from reliability problems. The linear systems which appear in numerical simulation of semiconductor devices are notoriously ill-conditioned. In order to produce robust algorithms for practical problems, careful attention must be given to many implementation issues. This paper concentrates on strategies for developing robust preconditioners. In addition, effective data structures and convergence check issues are also discussed. These algorithms are compared with a standard direct sparse matrix solver on a variety of problems.
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, y_t = A x_t, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate the regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
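A heavily simplified version of the inverse problem y_t = A x_t can be sketched with a ridge-regularized pseudo-inverse applied independently at each time step. The routing-style mixing matrix, dimensions, and sparsity pattern below are invented for illustration; the paper's multilevel state-space inference is far richer than this per-step baseline.

```python
import numpy as np

rng = np.random.default_rng(5)
n_links, n_flows, T = 8, 16, 30     # more flows than link measurements: ill-posed

# Routing-style 0/1 mixing matrix on a hypothetical network
A = rng.integers(0, 2, size=(n_links, n_flows)).astype(float)

# Bursty, sparse latent flows: mostly zero, occasionally active
X = rng.standard_normal((n_flows, T)) * (rng.random((n_flows, T)) < 0.2)
Y = A @ X + 0.01 * rng.standard_normal((n_links, T))  # aggregate link measurements

# Ridge-regularized inversion per time step: the penalty lam stabilizes the
# underdetermined system (a simple stand-in for the model-based regularization)
lam = 0.1
X_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_flows), A.T @ Y)

print(X_hat.shape)
```

The ridge penalty picks the minimum-norm compromise among the infinitely many flow configurations consistent with the aggregates; the state-space model in the paper instead shares information across time and exploits burstiness and sparsity.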
Verhoest, Niko E.C; Lievens, Hans; Wagner, Wolfgang; Álvarez-Mozos, Jesús; Moran, M. Susan; Mattia, Francesco
2008-01-01
Synthetic Aperture Radar has shown its large potential for retrieving soil moisture maps at regional scales. However, since the backscattered signal is determined by several surface characteristics, the retrieval of soil moisture is an ill-posed problem when using single configuration imagery. Unless accurate surface roughness parameter values are available, retrieving soil moisture from radar backscatter usually provides inaccurate estimates. The characterization of soil roughness is not fully understood, and a large range of roughness parameter values can be obtained for the same surface when different measurement methodologies are used. In this paper, a literature review is made that summarizes the problems encountered when parameterizing soil roughness as well as the reported impact of the errors made on the retrieved soil moisture. A number of suggestions were made for resolving issues in roughness parameterization and studying the impact of these roughness problems on the soil moisture retrieval accuracy and scale. PMID:27879932
Source localization in an ocean waveguide using supervised machine learning.
Niu, Haiqiang; Reeves, Emma; Gerstoft, Peter
2017-09-01
Source localization in ocean acoustics is posed as a machine learning problem in which data-driven methods learn source ranges directly from observed acoustic data. The pressure received by a vertical linear array is preprocessed by constructing a normalized sample covariance matrix and used as the input for three machine learning methods: feed-forward neural networks (FNN), support vector machines (SVM), and random forests (RF). The range estimation problem is solved both as a classification problem and as a regression problem by these three machine learning algorithms. The results of range estimation for the Noise09 experiment are compared for FNN, SVM, RF, and conventional matched-field processing and demonstrate the potential of machine learning for underwater source localization.
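The preprocessing step described above, forming a normalized sample covariance matrix as the learning input, can be sketched as follows. The snapshot model (a phase ramp whose slope stands in for a range-dependent arrival structure) and all dimensions are invented for illustration; the FNN/SVM/RF classifiers themselves are omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sensors, n_snapshots = 8, 64

def normalized_scm(p):
    """Feature vector from a normalized sample covariance matrix.

    p: complex array snapshots, shape (sensors, snapshots)."""
    p = p / np.linalg.norm(p, axis=0, keepdims=True)   # normalize each snapshot
    C = (p @ p.conj().T) / p.shape[1]                  # average outer products
    # stack real and imaginary parts of the upper triangle as the ML input
    iu = np.triu_indices(n_sensors)
    return np.concatenate([C[iu].real, C[iu].imag])

def snapshots(slope):
    """Hypothetical snapshots: a phase ramp across the array plus complex noise."""
    phase = np.outer(np.arange(n_sensors), slope * np.ones(n_snapshots))
    noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
                   + 1j * rng.standard_normal((n_sensors, n_snapshots)))
    return np.exp(1j * phase) + noise

# Two synthetic "ranges" produce clearly different covariance features
f1 = normalized_scm(snapshots(0.3))
f2 = normalized_scm(snapshots(1.2))
print(f1.shape, round(float(np.linalg.norm(f1 - f2)), 3))
```

Vectors like `f1` would then be labeled with their source range and fed to a classifier or regressor; normalizing the snapshots removes the unknown source amplitude from the features.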
Scene analysis in the natural environment
Lewicki, Michael S.; Olshausen, Bruno A.; Surlykke, Annemarie; Moss, Cynthia F.
2014-01-01
The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals. PMID:24744740
Students’ Creativity: Problem Posing in Structured Situation
NASA Astrophysics Data System (ADS)
Amalina, I. K.; Amirudin, M.; Budiarto, M. T.
2018-01-01
This is a qualitative study of students' creativity in problem-posing tasks. The study aimed to describe students' creative thinking ability in posing mathematics problems in structured situations with varied conditions of given problems. In order to assess the students' creative thinking ability, an analysis of a mathematics problem-posing test based on fluency, novelty, and flexibility, together with interviews, was applied to categorize students' responses on the task. The data analysis used the quality of the posed problems and categorized students into 4 levels of creativity. The results revealed that, of 29 secondary students in grade 8, a student at CTL (Creative Thinking Level) 1 met fluency. A student at CTL 2 met novelty, while a student at CTL 3 met both fluency and novelty, and no one reached CTL 4. These results are affected by students' mathematical experience. The findings of this study highlight that students' problem-posing creativity depends on their experience in mathematics learning and on the starting point from which they pose problems.
Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai
2015-12-01
Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current data, and 2) flexibility, balancing well between the reconstruction of the current data and the locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
Validating a UAV artificial intelligence control system using an autonomous test case generator
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Huber, Justin
2013-05-01
The validation of safety-critical applications, such as autonomous UAV operations in an environment that may include human actors, is an ill-posed problem. To build confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work on autonomous testing of robotic control algorithms in a two-dimensional plane to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed, and testing cost.
CREKID: A computer code for transient, gas-phase combustion kinetics
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1984-01-01
A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially-fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.
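The predictor-corrector ingredients mentioned in the abstract can be illustrated on a stiff scalar test equation. This sketch implements only a plain implicit trapezoidal step with an explicit Euler predictor and a Newton corrector; the exponential fitting and the Newton-Jacobi switching of CREKID are not reproduced, and the test problem is invented.

```python
import numpy as np

def trapezoidal_newton(f, dfdy, y0, t0, t1, n_steps=100, newton_tol=1e-12):
    """Implicit trapezoidal rule with Newton iteration for a scalar y' = f(t, y)."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        fy = f(t, y)
        z = y + h * fy                     # explicit Euler predictor
        for _ in range(50):                # Newton corrector on the residual
            r = z - y - 0.5 * h * (fy + f(t + h, z))
            dr = 1.0 - 0.5 * h * dfdy(t + h, z)
            z_new = z - r / dr
            if abs(z_new - z) < newton_tol:
                z = z_new
                break
            z = z_new
        t, y = t + h, z
    return y

# Stiff linear test problem: y' = -50 (y - cos t), y(0) = 0
lam_ = 50.0
f = lambda t, y: -lam_ * (y - np.cos(t))
dfdy = lambda t, y: -lam_
y_end = trapezoidal_newton(f, dfdy, 0.0, 0.0, 1.0, n_steps=100)
print(round(y_end, 4))
```

The trapezoidal rule is A-stable, so the fast initial transient is damped rather than amplified even when the step size is large relative to the stiff time constant; explicit methods would need far smaller steps here.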
Assessing Students' Mathematical Problem Posing
ERIC Educational Resources Information Center
Silver, Edward A.; Cai, Jinfa
2005-01-01
Specific examples are used to discuss assessment, an integral part of mathematics instruction, with problem posing and assessment of problem posing. General assessment criteria are suggested to evaluate student-generated problems in terms of their quantity, originality, and complexity.
NASA Astrophysics Data System (ADS)
Corbard, T.; Berthomieu, G.; Provost, J.; Blanc-Feraud, L.
Inferring the solar rotation from observed frequency splittings represents an ill-posed problem in the sense of Hadamard, and the traditional approach used to override this difficulty consists in regularizing the problem by adding some a priori information on the global smoothness of the solution, defined as the norm of its first or second derivative. Nevertheless, inversions of rotational splittings (e.g. Corbard et al., 1998; Schou et al., 1998) have shown that the surface layers and the so-called solar tachocline (Spiegel & Zahn 1992) at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. Therefore, the global smoothness a priori, which tends to smooth out every high gradient in the solution, may not be appropriate for the study of a zone like the tachocline, which is of particular interest for the study of solar dynamics (e.g. Elliot 1997). In order to infer the fine structure of such regions with high gradients by inverting helioseismic data, we have to find a way to preserve these zones in the inversion process. Setting a more adapted constraint on the solution leads to non-linear regularization methods that are in current use for edge-preserving regularization in computed imaging (e.g. Blanc-Feraud et al. 1995). In this work, we investigate their use in the helioseismic context of rotational inversions.
ERIC Educational Resources Information Center
Ellerton, Nerida F.
2013-01-01
Although official curriculum documents make cursory mention of the need for problem posing in school mathematics, problem posing rarely becomes part of the implemented or assessed curriculum. This paper provides examples of how problem posing can be made an integral part of mathematics teacher education programs. It is argued that such programs…
ERIC Educational Resources Information Center
Van Harpen, Xianwei Y.; Sriraman, Bharath
2013-01-01
In the literature, problem-posing abilities are reported to be an important aspect/indicator of creativity in mathematics. The importance of problem-posing activities in mathematics is emphasized in educational documents in many countries, including the USA and China. This study was aimed at exploring high school students' creativity in…
Interlocked Problem Posing and Children's Problem Posing Performance in Free Structured Situations
ERIC Educational Resources Information Center
Cankoy, Osman
2014-01-01
The aim of this study is to explore the mathematical problem posing performance of students in free structured situations. Two classes of fifth grade students (N = 30) were randomly assigned to experimental and control groups. The categories of the problems posed in free structured situations by the 2 groups of students were studied through…
Problem-Posing Strategies Used by Years 8 and 9 Students
ERIC Educational Resources Information Center
Stoyanova, Elena
2005-01-01
According to Kilpatrick (1987), in mathematics classrooms problem posing can be applied as a "goal" or as a means of instruction. Using problem posing as a goal of instruction involves asking students to respond to a range of problem-posing prompts. The main goal of this article is to classify the mathematics questions created by Years 8…
2D deblending using the multi-scale shaping scheme
NASA Astrophysics Data System (ADS)
Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan
2018-01-01
Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, the signal is coherent, whereas the interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Owing to their different sparsity, the coefficients of signal and interference are located in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity to separate the blended record. In the domain where the signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while in the domain where the interference focuses, the multi-scale scheme suppresses the coefficients representing interference. Because the interference is suppressed markedly at each iteration, the constraint of the multi-scale shaping operator in all scale domains is weak enough to guarantee the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic examples and one field data example.
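The scale-dependent shaping idea can be illustrated with per-scale soft thresholding. This is a toy sketch operating on plain arrays rather than actual curvelet coefficients; the function name and threshold values are assumptions, not the authors' implementation.

```python
import numpy as np

def scale_dependent_shaping(coeffs, thresholds):
    """Apply soft thresholding with a different threshold per scale:
    small thresholds pass signal-dominated scales, large thresholds
    suppress interference-dominated scales.

    coeffs: list of per-scale coefficient arrays
    thresholds: one threshold per scale
    """
    return [np.sign(c) * np.maximum(np.abs(c) - t, 0.0)
            for c, t in zip(coeffs, thresholds)]
```

In an iterative shaping scheme this operator would be applied after each data-consistency step; here it simply shrinks each scale's coefficients toward zero by its own threshold.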
When a Problem Is More than a Teacher's Question
ERIC Educational Resources Information Center
Olson, Jo Clay; Knott, Libby
2013-01-01
Not only are the problems teachers pose throughout their teaching of great importance, but the ways in which they use those problems also make problem posing a critical component of teaching. A problem-posing episode includes the problem setup, the statement of the problem, and the follow-up questions. Analysis of problem-posing episodes of precalculus…
An Analysis of Secondary and Middle School Teachers' Mathematical Problem Posing
ERIC Educational Resources Information Center
Stickles, Paula R.
2011-01-01
This study identifies the kinds of problems teachers pose when they are asked to (a) generate problems from given information and (b) create new problems from ones given to them. To investigate teachers' problem posing, preservice and inservice teachers completed background questionnaires and four problem-posing instruments. Based on previous…
Ambikile, Joel Semel; Outwater, Anne
2012-07-05
It is estimated that world-wide up to 20% of children suffer from debilitating mental illness. Mental disorders that pose a significant concern include learning disorders, hyperkinetic disorders (ADHD), depression, psychosis, pervasive developmental disorders, attachment disorders, anxiety disorders, conduct disorder, substance abuse and eating disorders. Living with such children can be very stressful for caregivers in the family. Therefore, determining the challenges of living with these children is important in the process of finding ways to help or support caregivers to provide proper care for their children. The purpose of this study was to explore the psychological and emotional, social, and economic challenges that parents or guardians experience when caring for mentally ill children, and what they do to address or deal with them. A qualitative study design using in-depth interviews and focus group discussions was applied. The study was conducted at the psychiatric unit of Muhimbili National Hospital in Tanzania. Two focus group discussions (FGDs) and 8 in-depth interviews were conducted with caregivers who attended the psychiatric clinic with their children. Data analysis was done using content analysis. The study revealed psychological and emotional, social, and economic challenges caregivers endure while living with mentally ill children. Psychological and emotional challenges included being stressed by caring tasks and having worries about the present and future life of their children. They had feelings of sadness, and inner pain or bitterness due to the disturbing behaviour of the children. They also experienced some communication problems with their children due to the children's inability to talk. Social challenges were inadequate social services for their children, stigma, the burden of caring tasks, lack of public awareness of mental illness, lack of social support, and problems with social life.
The economic challenges were poverty, child care interfering with various income generating activities in the family, and extra expenses associated with the child's illness. Caregivers of mentally ill children experience various psychological and emotional, social, and economic challenges. Professional assistance, public awareness of mental illnesses in children, social support by the government, private sector, and non-governmental organizations (NGOs) are important in addressing these challenges.
Lee, Sungkyu; Rothbard, Aileen; Choi, Sunha
2016-08-01
Little is known about the incremental cost burden associated with treating comorbid health conditions among people with severe mental illness (SMI). This study compares the extent to which each individual medical condition increases healthcare expenditures between people with SMI and people without mental illness. Data were obtained from the 2011 Medical Expenditure Panel Survey (MEPS; N = 17 764). Mental illness and physical health conditions were identified through ICD-9 codes. Guided by Andersen's behavioral model of health services utilization, generalized linear models were estimated. Total healthcare expenditures among individuals with SMI were approximately 3.3 times greater than expenditures by individuals without mental illness ($11 399 vs. $3449, respectively). Each additional physical health condition increased total healthcare expenditure by 17.4% for individuals with SMI, compared to a 44.8% increase for individuals without mental illness. The cost effect of additional health conditions on total healthcare expenditures among individuals with SMI is thus smaller than that among individuals without mental illness. Whether this is due to limited access to healthcare for the medical problems, or to better coordination between medical and mental health providers that reduces duplicated medical procedures or visits, requires future investigation.
Noordraven, Ernst L; Wierdsma, André I; Blanken, Peter; Bloemendaal, Anthony FT; Mulder, Cornelis L
2016-01-01
Background Noncompliance is a major problem for patients with a psychotic disorder. Two important risk factors for noncompliance that have a severe negative impact on treatment outcomes are impaired illness insight and lack of motivation. Our cross-sectional study explored how these two factors are related to each other and to compliance with depot medication. Methods Interviews were conducted with 169 outpatients with a psychotic disorder taking depot medication. Four patient groups were defined based on low or high illness insight and on low or high motivation. The associations between depot-medication compliance, motivation, and insight were assessed using generalized linear models. Results The generalized linear model showed a significant interaction effect between motivation and insight. Patients with poor insight and high motivation for treatment were more compliant (94%) (95% confidence interval [CI]: 1.821, 3.489) with their depot medication than patients with poor insight and low motivation (61%) (95% CI: 0.288, 0.615). Patients with both good insight and high motivation for treatment were less compliant (73%) (95% CI: 0.719, 1.315) than those with poor insight and high motivation. Conclusion Motivation for treatment was more strongly associated with depot-medication compliance than illness insight was. Being motivated to take medication, whether to get better or for other reasons, may be a more important factor than having illness insight in terms of improving depot-medication compliance. Possible implications for clinical practice are discussed. PMID:26893565
Regional regularization method for ECT based on spectral transformation of Laplacian
NASA Astrophysics Data System (ADS)
Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.
2016-10-01
Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to suppress noise when solving it. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify the capability of the new regularization algorithm to reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.
ERIC Educational Resources Information Center
Kar, Tugrul
2015-01-01
This study aimed to investigate how the semantic structures of problems posed by sixth-grade middle school students for the addition of fractions affect their problem-posing performance. The students were presented with symbolic operations involving the addition of fractions and asked to pose two different problems related to daily-life situations…
Background. There is no consensus about the level of risk of gastrointestinal illness posed by consumption of drinking water that meets all regulatory requirements. Earlier drinking water intervention trials from Canada suggested that 14% - 40% of such gastrointestinal il...
A fractional-order accumulative regularization filter for force reconstruction
NASA Astrophysics Data System (ADS)
Wensong, Jiang; Zhongyu, Wang; Jing, Lv
2018-02-01
The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses, and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data-refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to mitigate the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20, which achieves a PRE of 0.36% and an RE of 2.45%, is superior to other configurations of the FARF method and to traditional regularization methods for dynamic force reconstruction.
ERIC Educational Resources Information Center
Contreras, Jose
2007-01-01
In this article, I model how a problem-posing framework can be used to enhance our abilities to systematically generate mathematical problems by modifying the attributes of a given problem. The problem-posing model calls for the application of the following fundamental mathematical processes: proving, reversing, specializing, generalizing, and…
Anomaly General Circulation Models.
NASA Astrophysics Data System (ADS)
Navarra, Antonio
The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model have been formulated and constructed, whereas in the baroclinic case only the stationary, linear model is considered. Results from the barotropic model indicate that a relation exists between the stationary solution and the time-averaged non-linear solution. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because it is necessary to solve a gigantic linear system to obtain the solution. A new method for finding solutions of large linear systems, based on a projection onto the Krylov subspace, is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system onto the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class (non-zonally symmetric) of basic states can now be treated for the baroclinic primitive equations. These problems lead to large unsymmetric linear systems of order 10,000 and more, which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of the stationary waves in the basic state. The instability of the asymmetric flows, first pointed out by Simmons et al. (1983), is active also in the baroclinic case.
However, the presence of baroclinic processes modifies the dominant response. The most sensitive areas are identified; they correspond to northern Japan and the Pole and Greenland regions. A limited set of higher-resolution (R15) experiments indicates that this situation is still present and enhanced at higher resolution. The linear anomaly model is also applied to a realistic case. (Abstract shortened with permission of author.)
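The Krylov-subspace projection described above can be sketched with a small Arnoldi iteration followed by a projected least-squares solve (the idea behind GMRES-type schemes). This is a generic illustration, not the model's solver; names and dimensions are assumptions.

```python
import numpy as np

def krylov_solve(A, b, m):
    """Approximate the solution of A x = b by projecting onto the m-dimensional
    Krylov subspace span{b, Ab, ..., A^{m-1} b}: build an orthonormal basis with
    Arnoldi, then solve a small (m+1) x m least-squares problem."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        v = A @ Q[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = Q[:, i] @ v
            v -= H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(v)
        if H[j + 1, j] < 1e-12:             # breakdown: exact solution reached
            break
        Q[:, j + 1] = v / H[j + 1, j]
    # Solve the small projected least-squares problem min || beta*e1 - H y ||
    e1 = np.zeros(m + 1)
    e1[0] = beta
    y, *_ = np.linalg.lstsq(H, e1, rcond=None)
    return Q[:, :m] @ y
```

The payoff noted in the abstract is that only the small m x m (here (m+1) x m) projected system is ever solved, even when the original system has dimension in the tens of thousands.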
Total variation superiorized conjugate gradient method for image reconstruction
NASA Astrophysics Data System (ADS)
Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.
2018-03-01
The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; because of this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA, and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ɛ. It is proved that, for any given ɛ that is greater than the half-squared residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared residual is less than or equal to ɛ. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ɛ of the half-squared residual.
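The least-squares CG core that superiorization perturbs can be sketched as CGLS (CG applied to the normal equations). The TV-reducing perturbation steps of the paper's S-CG are omitted here; this is a generic sketch with assumed names, not the authors' implementation.

```python
import numpy as np

def cgls(A, b, n_iter):
    """Conjugate gradient on the normal equations A^T A x = A^T b (CGLS),
    the fast least-squares solver that a superiorized variant would
    interleave with small TV-decreasing perturbations."""
    x = np.zeros(A.shape[1])
    r = b - A @ x          # data-space residual
    s = A.T @ r            # normal-equations residual (gradient direction)
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

In exact arithmetic CGLS reaches the least-squares solution in at most rank(A) iterations, which is the efficiency the abstract credits to CG over non-linear alternatives.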
Oscillation criteria for half-linear dynamic equations on time scales
NASA Astrophysics Data System (ADS)
Hassan, Taher S.
2008-09-01
This paper is concerned with oscillation of the second-order half-linear dynamic equation (r(t)(x^Δ)^γ)^Δ + p(t)x^γ(t) = 0 on a time scale, where γ is a quotient of odd positive integers, and r(t) and p(t) are positive rd-continuous functions on the time scale. Our results solve a problem posed by [R.P. Agarwal, D. O'Regan, S.H. Saker, Philos-type oscillation criteria for second-order half-linear dynamic equations, Rocky Mountain J. Math. 37 (2007) 1085-1104; S.H. Saker, Oscillation criteria of second-order half-linear dynamic equations on time scales, J. Comput. Appl. Math. 177 (2005) 375-387], and in special cases our results involve and improve some oscillation results for second-order differential and difference equations; in other cases, our oscillation results are essentially new. Some examples illustrating the importance of our results are also included.
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the region of the PSR is usually hard to determine and can be easily affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP_N) was applied to characterize diffuse light propagation in the medium, and a statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method with regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract The filtered MLEM-based global reconstruction method for BLT.
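The statistical core of the method above is the MLEM iteration for a Poisson measurement model. The sketch below shows that multiplicative update only; the SP_N forward model and the paper's filter step are omitted, and all names and sizes are assumptions.

```python
import numpy as np

def mlem(A, y, n_iter):
    """Maximum likelihood expectation maximization for y ~ Poisson(A x):
    a multiplicative update that keeps x nonnegative at every iteration."""
    x = np.ones(A.shape[1])                  # flat nonnegative initial guess
    sens = A.sum(axis=0)                     # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                         # forward projection
        ratio = y / np.maximum(proj, 1e-12)  # measured / estimated data
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

The filtered variant in the paper would apply a filter function to the iterate between updates; the plain iteration above already enforces the nonnegativity that makes MLEM attractive for source reconstruction.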
NASA Astrophysics Data System (ADS)
Nourifar, Raheleh; Mahdavi, Iraj; Mahdavi-Amiri, Nezam; Paydar, Mohammad Mahdi
2017-09-01
Decentralized supply chain management is found to be significantly relevant in today's competitive markets. Production and distribution planning is posed as an important optimization problem in supply chain networks. Here, we propose a multi-period decentralized supply chain network model with uncertainty. The imprecision related to uncertain parameters, such as the demand and price of the final product, is represented with stochastic and fuzzy numbers. We provide a mathematical formulation of the problem as a bi-level mixed integer linear programming model. Owing to the problem's complexity, a solution structure is developed that incorporates a novel heuristic algorithm based on the Kth-best algorithm, a fuzzy approach and a chance constraint approach. Ultimately, a numerical example is constructed and worked through to demonstrate the applicability of the optimization model. A sensitivity analysis is also made.
ERIC Educational Resources Information Center
Kiliç, Çigdem
2017-01-01
This study examined pre-service primary school teachers' performance in posing problems that require knowledge of problem-solving strategies. Quantitative and qualitative methods were combined. The 120 participants were asked to pose a problem that could be solved by using find-a-pattern, a particular problem-solving strategy. After that,…
Approximations of thermoelastic and viscoelastic control systems
NASA Technical Reports Server (NTRS)
Burns, J. A.; Liu, Z. Y.; Miller, R. E.
1990-01-01
Well-posed models and computational algorithms are developed and analyzed for control of a class of partial differential equations that describe the motions of thermo-viscoelastic structures. An abstract (state space) framework and a general well-posedness result are presented that can be applied to a large class of thermo-elastic and thermo-viscoelastic models. This state space framework is used in the development of a computational scheme to be used in the solution of a linear quadratic regulator (LQR) control problem. A detailed convergence proof is provided for the viscoelastic model and several numerical results are presented to illustrate the theory and to analyze problems for which the theory is incomplete.
Artifacts as Sources for Problem-Posing Activities
ERIC Educational Resources Information Center
Bonotto, Cinzia
2013-01-01
The problem-posing process represents one of the forms of authentic mathematical inquiry which, if suitably implemented in classroom activities, could move well beyond the limitations of word problems, at least as they are typically utilized. The two exploratory studies presented sought to investigate the impact of "problem-posing" activities when…
The Art of Problem Posing. 3rd Edition
ERIC Educational Resources Information Center
Brown, Stephen I.; Walter, Marion I.
2005-01-01
The new edition of this classic book describes and provides a myriad of examples of the relationships between problem posing and problem solving, and explores the educational potential of integrating these two activities in classrooms at all levels. "The Art of Problem Posing, Third Edition" encourages readers to shift their thinking…
NASA Astrophysics Data System (ADS)
Guchhait, Shyamal; Banerjee, Biswanath
2018-04-01
In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Complying with this idea, an identification procedure is framed as an optimization problem where the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein a solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.
ERIC Educational Resources Information Center
Chen, Limin; Van Dooren, Wim; Chen, Qi; Verschaffel, Lieven
2011-01-01
In the present study, which is a part of a research project about realistic word problem solving and problem posing in Chinese elementary schools, a problem solving and a problem posing test were administered to 128 pre-service and in-service elementary school teachers from Tianjin City in China, wherein the teachers were asked to solve 3…
A trade-off solution between model resolution and covariance in surface-wave inversion
Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.
2010-01-01
Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
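The singular-value-plot idea above, i.e. truncating at the first singular value that approaches zero, can be sketched with a truncated SVD. This is a generic illustration with an assumed numerical tolerance, not the authors' surface-wave code.

```python
import numpy as np

def truncation_from_singular_values(A, tol=1e-10):
    """Sort singular values from large to small (numpy's svd already does)
    and pick the truncation level k as the count of singular values that
    have not yet (numerically) approached zero."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))   # number of 'significant' singular values
    return k, U, s, Vt

def tsvd_solve(A, b, tol=1e-10):
    """Truncated-SVD solution: keep only the k significant singular triplets,
    trading resolution against covariance as in the abstract."""
    k, U, s, Vt = truncation_from_singular_values(A, tol)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

With noisy data the cutoff `tol` would be chosen from the singular value plot rather than machine precision; smaller k lowers model covariance at the cost of resolution.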
NASA Astrophysics Data System (ADS)
Flynn, Brendan P.; DSouza, Alisha V.; Kanick, Stephen C.; Davis, Scott C.; Pogue, Brian W.
2013-04-01
Subsurface fluorescence imaging is desirable for medical applications, including protoporphyrin-IX (PpIX)-based skin tumor diagnosis, surgical guidance, and dosimetry in photodynamic therapy. While tissue optical properties and heterogeneities make true subsurface fluorescence mapping an ill-posed problem, ultrasound-guided fluorescence tomography (USFT) provides regional fluorescence mapping. Here USFT is implemented with spectroscopic decoupling of fluorescence signals (auto-fluorescence, PpIX, photoproducts) and bulk optical properties determined by white light spectroscopy. Segmented US images provide a priori spatial information for fluorescence reconstruction using region-based, diffuse FT. The method was tested in simulations, homogeneous and inclusion tissue phantoms, and an injected-inclusion animal model. Reconstructed fluorescence yield was linear with PpIX concentration, including the lowest concentration used, 0.025 μg/ml. Optical properties informed by white light spectroscopy improved fluorescence reconstruction accuracy compared with the use of fixed, literature-based optical properties, reducing reconstruction error and the standard deviation of the reconstructed fluorescence by factors of 8.9 and 2.0, respectively. Recovered contrast-to-background error was 25% and 74% for inclusion phantoms without and with a 2-mm skin-like layer, respectively. Preliminary mouse-model imaging demonstrated system feasibility for subsurface fluorescence measurement in vivo. These data suggest that this implementation of USFT is capable of regional PpIX mapping in human skin tumors during photodynamic therapy, to be used in dosimetric evaluations.
A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise
NASA Astrophysics Data System (ADS)
Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno
2017-09-01
While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain allow creating a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take the photon noise into account. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
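The two data fidelity terms compared above can be written down directly. This sketch shows only the cost terms, not the Gauss-Newton solver or the SPCT forward model; the function names are assumptions.

```python
import numpy as np

def wls_cost(y, yhat, w):
    """Weighted least squares data term, suited to Gaussian noise:
    0.5 * sum_i w_i * (y_i - yhat_i)^2."""
    return 0.5 * np.sum(w * (y - yhat)**2)

def kl_cost(y, yhat):
    """Kullback-Leibler data term, suited to Poisson noise:
    sum_i yhat_i - y_i + y_i * log(y_i / yhat_i),
    where terms with y_i == 0 reduce to yhat_i."""
    yhat = np.maximum(yhat, 1e-12)     # guard against log(0) / divide-by-zero
    mask = y > 0
    out = np.sum(yhat - y)
    out += np.sum(y[mask] * np.log(y[mask] / yhat[mask]))
    return out
```

For large photon counts the Poisson likelihood is well approximated by a Gaussian, so KL and WLS behave similarly there; the difference the paper reports shows up at low counts, where the Gaussian approximation breaks down.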
Enhancing students’ mathematical problem posing skill through writing in performance tasks strategy
NASA Astrophysics Data System (ADS)
Kadir; Adelina, R.; Fatma, M.
2018-01-01
Many researchers have studied the Writing in Performance Tasks (WiPT) strategy in learning, but only a few have paid attention to its relation to the problem-posing skill in mathematics. The problem-posing skill in mathematics covers problem reformulation, reconstruction, and imitation. The purpose of the present study was to examine the effect of the WiPT strategy on students' mathematical problem-posing skill. The research was conducted at a public junior secondary school in Tangerang Selatan. It used a quasi-experimental method with a randomized control-group post-test design. The sample consisted of 64 students: 32 in the experimental group and 32 in the control group. A cluster random sampling technique was used. The research data were obtained by testing. The research shows that the problem-posing skill of students taught by the WiPT strategy is higher than that of students taught by a conventional strategy. The research concludes that the WiPT strategy is more effective in enhancing students' mathematical problem-posing skill compared to the conventional strategy.
Asymptotic stability of a nonlinear Korteweg-de Vries equation with critical lengths
NASA Astrophysics Data System (ADS)
Chu, Jixun; Coron, Jean-Michel; Shang, Peipei
2015-10-01
We study an initial-boundary-value problem for a nonlinear Korteweg-de Vries equation posed on the finite interval (0, 2kπ), where k is a positive integer. The system has a Dirichlet boundary condition at the left end-point, and both Dirichlet and Neumann homogeneous boundary conditions at the right end-point. It is known that the origin is not asymptotically stable for the linearized system around the origin. We prove that the origin is (locally) asymptotically stable for the nonlinear system if the integer k is such that the kernel of the linear Korteweg-de Vries stationary equation is of dimension 1. This is, for example, the case if k = 1.
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon
2017-01-01
Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of equivalent accuracy to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion limit and kinetically limited recession models.
The missions and means framework as an ontology
NASA Astrophysics Data System (ADS)
Deitz, Paul H.; Bray, Britt E.; Michaelis, James R.
2016-05-01
The analysis of warfare frequently suffers from the absence of a logical structure for (a) specifying explicitly the military mission and (b) quantitatively evaluating the mission utility of alternative products and services. In 2003, the Missions and Means Framework (MMF) was developed to redress these shortcomings. The MMF supports multiple combatants and levels of war and is, in fact, a formal embodiment of the Military Decision-Making Process (MDMP). A major effect of incomplete analytic discipline in military systems analyses is that they frequently fall into the category of ill-posed problems: they are under-specified, under-determined, or under-constrained, and critical context is often missing. This is frequently the result of incomplete materiel requirements analyses with unclear linkages to higher levels of warfare, system-of-systems linkages, tactics, techniques and procedures, and the effect of opposing forces. In many instances the capabilities of materiel are assumed to be immutable, a result of not assessing how platform components morph over time due to damage, logistics, or repair. Though ill-posed issues can be found in many places in military analysis, probably the greatest challenge comes in the disciplines of C4ISR supported by ontologies, in which the formal naming and definition of the types, properties, and interrelationships of entities are fundamental to characterizing mission success. Though the MMF was not conceived as an ontology, over the past decade some workers, particularly in the field of communications, have labelled it as such. This connection will be described and discussed.
Estimating uncertainty of Full Waveform Inversion with Ensemble-based methods
NASA Astrophysics Data System (ADS)
Thurin, J.; Brossier, R.; Métivier, L.
2017-12-01
Uncertainty estimation is a key requirement of tomographic applications for robust interpretation. However, this information is often missing in the context of large-scale linearized inversions, and only the results at convergence are shown, despite the ill-posed nature of the problem. This issue is common in the Full Waveform Inversion (FWI) community. While a few methodologies have been proposed in the literature, standard FWI workflows do not yet include any systematic uncertainty quantification method, and instead often try to assess result quality through cross-comparison with other seismic results or with other geophysical data. With the development of large seismic networks and surveys, the increase in computational power, and the increasingly systematic application of FWI, it is crucial to tackle this problem and to propose robust and affordable workflows, in order to address the uncertainty quantification problem faced for near-surface targets and crustal exploration, as well as at regional and global scales. In this work (Thurin et al., 2017a,b), we propose an approach that takes advantage of the Ensemble Transform Kalman Filter (ETKF) proposed by Bishop et al. (2001) to estimate a low-rank approximation of the posterior covariance matrix of the FWI problem, allowing us to extract uncertainty information about the solution. Instead of solving the FWI problem through a Bayesian inversion with the ETKF, we combine a conventional FWI, based on local optimization, with ETKF strategies. This scheme combines the efficiency of local optimization for solving large-scale inverse problems with a sampling of the local solution space made possible by its embarrassingly parallel character. References: Bishop, C. H., Etherton, B. J. and Majumdar, S. J., 2001. Adaptive sampling with the ensemble transform Kalman filter. Part I: Theoretical aspects. Monthly Weather Review, 129(3), 420-436. Thurin, J., Brossier, R. and Métivier, L., 2017a. Ensemble-Based Uncertainty Estimation in Full Waveform Inversion. 79th EAGE Conference and Exhibition 2017 (12-15 June 2017). Thurin, J., Brossier, R. and Métivier, L., 2017b. An Ensemble-Transform Kalman Filter - Full Waveform Inversion scheme for Uncertainty estimation. SEG Technical Program Expanded Abstracts 2012.
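The central ETKF ingredient mentioned above, a low-rank approximation of the covariance built from ensemble anomalies, can be sketched in a few lines of NumPy. This is a generic illustration of the ensemble construction, not the authors' FWI code; all sizes are arbitrary:

```python
import numpy as np

def ensemble_anomalies(ensemble):
    """Return the scaled anomaly matrix A of an ensemble so that
    A @ A.T is the rank-(n_members - 1) sample covariance.

    ensemble: (n_params, n_members) array of model realizations.
    In ensemble methods the full covariance is never formed explicitly;
    A itself is the low-rank factor that is propagated and updated.
    """
    n_members = ensemble.shape[1]
    mean = ensemble.mean(axis=1, keepdims=True)
    return (ensemble - mean) / np.sqrt(n_members - 1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))   # 20 members, 1000 model parameters
A = ensemble_anomalies(X)
C = A @ A.T                       # low-rank posterior-covariance proxy
print(C.shape)                    # (1000, 1000), but rank at most 19
```

Because the rank is bounded by the ensemble size, the uncertainty estimate costs only one forward solve per member, which is what makes the approach affordable at FWI scale.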
Binary optimization for source localization in the inverse problem of ECG.
Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf
2014-09-01
The goal of ECG imaging (ECGI) is to reconstruct the heart's electrical activity from body surface potential maps. The problem is ill-posed, meaning that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which converts the original problem into a well-posed one by adding a penalty term. Despite all its practical advantages, the method has a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem, assuming the TMV to take one of two possible values according to the heart abnormality under consideration. We investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. The abnormality affects only the choice of the binary values, while the core of the algorithms remains the same, making the approach easily adjustable to the application needs. Two methods were tested: a hybrid metaheuristic approach and the difference-of-convex-functions (DC) algorithm. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the resulting ECG signals. Both methods enabled localization of the areas of interest, showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution insensitive to errors, while the analytical DC scheme can be efficiently applied to higher-dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.
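The Tikhonov baseline the paper argues against can be stated compactly: minimize ||Ax - b||^2 + lam^2 ||x||^2, solved through the regularized normal equations. The sketch below uses a synthetic ill-conditioned operator as a stand-in for the torso transfer matrix (all sizes and values are illustrative, not from the paper):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zeroth-order Tikhonov solution of the ill-posed system A x = b:
    minimize ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# Severely ill-conditioned toy forward operator (NOT an ECGI model).
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(60, 60)))
V, _ = np.linalg.qr(rng.normal(size=(40, 40)))
s = 10.0 ** -np.linspace(0, 8, 40)            # rapidly decaying spectrum
A = (U[:, :40] * s) @ V.T
x_true = np.sign(np.sin(np.linspace(0, 6, 40)))   # "binary" source pattern
b = A @ x_true + 1e-5 * rng.normal(size=60)       # noisy measurements

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]    # noise-amplified
x_reg = tikhonov(A, b, lam=1e-4)                  # stabilized
print(np.linalg.norm(x_naive - x_true) > np.linalg.norm(x_reg - x_true))
```

The penalty term is also what produces the over-smoothing the authors criticize: spectral components below lam are uniformly damped, blurring sharp boundaries such as an infarct border, which motivates the binary formulation instead.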
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results; Tikhonov-type regularization is the most popular choice. The choice of the regularization parameter dictates the reconstructed optical image quality and is typically made empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on a regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
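For contrast with the paper's MRM criterion, a generic automated selector can be sketched with Morozov's discrepancy principle, which picks the Tikhonov parameter whose residual matches an assumed known noise level. This is a textbook selector on a toy diagonal operator, not the authors' method:

```python
import numpy as np

def discrepancy_lambda(A, b, noise_norm, lams):
    """Morozov discrepancy principle: among candidate Tikhonov
    parameters, return the one whose residual norm best matches
    the noise level (a generic alternative to MRM or GCV)."""
    n = A.shape[1]
    best_lam, best_gap = lams[0], np.inf
    for lam in lams:
        x = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
        gap = abs(np.linalg.norm(A @ x - b) - noise_norm)
        if gap < best_gap:
            best_lam, best_gap = lam, gap
    return best_lam

rng = np.random.default_rng(2)
A = np.diag(10.0 ** -np.linspace(0, 6, 30))   # toy ill-conditioned operator
x_true = np.ones(30)
noise = 1e-3 * rng.normal(size=30)
b = A @ x_true + noise
lams = 10.0 ** np.linspace(-8, 0, 50)
lam_star = discrepancy_lambda(A, b, np.linalg.norm(noise), lams)
print(lam_star)   # lands between the extremes of the candidate range
```

Too small a parameter leaves the residual below the noise level (over-fitting the noise); too large a one pushes it far above (over-smoothing); the selected value sits at the crossover.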
Flow curve analysis of a Pickering emulsion-polymerized PEDOT:PSS/PS-based electrorheological fluid
NASA Astrophysics Data System (ADS)
Kim, So Hee; Choi, Hyoung Jin; Leong, Yee-Kwong
2017-11-01
The steady shear electrorheological (ER) response of poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate)/polystyrene (PEDOT:PSS/PS) composite particles, initially fabricated by Pickering emulsion polymerization, was tested with a 10 vol% ER fluid dispersed in silicone oil. The model-independent shear rate and yield stress, obtained from the raw torque-rotational speed data using a Couette-type rotational rheometer under an applied electric field, were then analyzed by Tikhonov regularization, a technique well suited to solving this ill-posed inverse problem. The shear stress-shear rate data also fitted well with the data extracted from the Bingham fluid model.
An estimate for the thermal photon rate from lattice QCD
NASA Astrophysics Data System (ADS)
Brandt, Bastian B.; Francis, Anthony; Harris, Tim; Meyer, Harvey B.; Steinberg, Aman
2018-03-01
We estimate the production rate of photons by the quark-gluon plasma in lattice QCD. We propose a new correlation function which provides better control over the systematic uncertainty in estimating the photon production rate at photon momenta in the range πT/2 to 2πT. The relevant Euclidean vector current correlation functions are computed with Nf = 2 Wilson clover fermions in the chirally symmetric phase. In order to estimate the photon rate, an ill-posed problem for the vector-channel spectral function must be regularized. We use both a direct model for the spectral function and a model-independent estimate from the Backus-Gilbert method to arrive at an estimate for the photon rate.
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
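For small problems, the GCV function can be evaluated directly from an SVD; the Lanczos and Gauss-quadrature machinery referred to above approximates exactly these trace and residual terms when an SVD is too expensive. A direct-evaluation sketch on a toy operator (illustrative values, not the paper's imaging model):

```python
import numpy as np

def gcv_lambda(A, b, lams):
    """Return the Tikhonov parameter minimizing the GCV score
    GCV(lam) = ||A x_lam - b||^2 / (m - sum of filter factors)^2,
    evaluated directly from the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    out_of_range = np.linalg.norm(b) ** 2 - np.linalg.norm(beta) ** 2
    scores = []
    for lam in lams:
        f = s**2 / (s**2 + lam**2)            # Tikhonov filter factors
        resid = np.linalg.norm((1 - f) * beta) ** 2 + out_of_range
        scores.append(resid / (len(b) - f.sum()) ** 2)
    return lams[int(np.argmin(scores))]

# Toy ill-conditioned problem.
A = np.diag(10.0 ** -np.linspace(0, 6, 30))
x_true = np.ones(30)
rng = np.random.default_rng(5)
b = A @ x_true + 1e-3 * rng.normal(size=30)
lams = 10.0 ** np.linspace(-8, 0, 60)
lam_star = gcv_lambda(A, b, lams)

def tik(lam):
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(30), A.T @ b)

err_star = np.linalg.norm(tik(lam_star) - x_true)
err_tiny = np.linalg.norm(tik(lams[0]) - x_true)
print(err_star < err_tiny)   # GCV choice beats near-zero regularization
```

Note that GCV needs no estimate of the noise level, which is why it is attractive when the measurement error is uncharacterized.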
Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation
NASA Astrophysics Data System (ADS)
Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe
2018-04-01
In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.
Dissecting Success Stories on Mathematical Problem Posing: A Case of the Billiard Task
ERIC Educational Resources Information Center
Koichu, Boris; Kontorovich, Igor
2013-01-01
"Success stories," i.e., cases in which mathematical problems posed in a controlled setting are perceived by the problem posers or other individuals as interesting, cognitively demanding, or surprising, are essential for understanding the nature of problem posing. This paper analyzes two success stories that occurred with individuals of different…
ERIC Educational Resources Information Center
Crespo, Sandra; Sinclair, Nathalie
2008-01-01
School students of all ages, including those who subsequently become teachers, have limited experience posing their own mathematical problems. Yet problem posing, both as an act of mathematical inquiry and of mathematics teaching, is part of the mathematics education reform vision that seeks to promote mathematics as an worthy intellectual…
Helping Young Students to Better Pose an Environmental Problem
ERIC Educational Resources Information Center
Pruneau, Diane; Freiman, Viktor; Barbier, Pierre-Yves; Langis, Joanne
2009-01-01
Grade 3 students were asked to solve a sedimentation problem in a local river. With scientists, students explored many aspects of the problem and proposed solutions. Graphic representation tools were used to help students to better pose the problem. Using questionnaires and interviews, researchers observed students' capacity to pose the problem…
van Houtum, L; Heijmans, M; Rijken, M; Groenewegen, P
2016-04-01
Healthcare providers are increasingly expected to help chronically ill patients understand their own central role in managing their illness. The aim of this study was to determine whether experiencing high-quality chronic illness care and having a nurse involved in their care relate to chronically ill people's self-management. Survey data from 699 people diagnosed with chronic diseases who participated in a nationwide Dutch panel study were analysed using linear regression, to estimate the association between chronic illness care and various aspects of patients' self-management, while controlling for their socio-demographic and illness characteristics. Chronically ill patients reported that the care they received was of high quality to some extent. Patients who had contact with a practice nurse or specialised nurse perceived the quality of the care they received as better than patients who only had contact with a GP or medical specialist. Patients' perceptions of the quality of care were positively related to all aspects of their self-management, whereas contact with a practice nurse or specialised nurse in itself was not. Chronically ill patients who experience high-quality chronic illness care that focusses on patient activation, decision support, goal setting, problem solving, and coordination of care are better self-managers. Having a nurse involved in their care seems to be positively valued by chronically ill patients, but does not automatically imply better self-management. Copyright © 2016. Published by Elsevier Ireland Ltd.
University Students' Problem Posing Abilities and Attitudes towards Mathematics.
ERIC Educational Resources Information Center
Grundmeier, Todd A.
2002-01-01
Explores the problem posing abilities and attitudes towards mathematics of students in a university pre-calculus class and a university mathematical proof class. Reports a significant difference in numeric posing versus non-numeric posing ability in both classes. (Author/MM)
NASA Astrophysics Data System (ADS)
Akben, Nimet
2018-05-01
The interrelationship between mathematics and science education has frequently been emphasized, and common goals and approaches have often been adopted across the disciplines. Improving students' problem-solving skills in mathematics and science education has always received special attention; however, the problem-posing approach, which plays a key role in mathematics education, has not been commonly utilized in science education. The purpose of this study was therefore to determine the effects of the problem-posing approach on students' problem-solving skills and metacognitive awareness in science education. This quasi-experimental study was conducted with 61 chemistry and 40 physics students; a problem-solving inventory and a metacognitive awareness inventory were administered to participants both as a pre-test and a post-test. During the 2017-2018 academic year, problem-solving activities based on the problem-posing approach were carried out with the participating students, who were in their senior year in various university chemistry and physics departments throughout the Republic of Turkey. The results suggested that structured, semi-structured, and free problem-posing activities improve students' problem-solving skills and metacognitive awareness. These findings indicate not only the usefulness of integrating problem-posing activities into science education programs but also the need for further research into this question.
Well-posedness, linear perturbations, and mass conservation for the axisymmetric Einstein equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dain, Sergio; Ortiz, Omar E.; Facultad de Matematica, Astronomia y Fisica, FaMAF, Universidad Nacional de Cordoba, Instituto de Fisica Enrique Gaviola, IFEG, CONICET, Ciudad Universitaria
2010-02-15
For axially symmetric solutions of the Einstein equations there exists a gauge with the remarkable property that the total mass can be written as a conserved, positive definite integral on the spacelike slices. The mass integral provides nonlinear control of the variables along the whole evolution. In this gauge, the Einstein equations reduce to a coupled hyperbolic-elliptic system which is formally singular at the axis. As a first step in analyzing this system of equations we study linear perturbations on a flat background. We prove that the linear equations reduce to a very simple system which provides, through the mass formula, useful insight into the structure of the full system. However, the singular behavior of the coefficients at the axis makes the study of this linear system difficult from the analytical point of view. In order to understand the behavior of the solutions, we study their numerical evolution. We provide strong numerical evidence that the system is well-posed and that its solutions have the expected behavior. Finally, this linear system allows us to formulate a model problem which is physically interesting in itself, since it is connected with the linear stability of black hole solutions in axial symmetry. This model can contribute significantly to solving the nonlinear problem while itself remaining tractable.
2012-01-01
Background Although brief intervention (BI) for alcohol and other drug problems has been associated with subsequent decreased levels of self-reported substance use, there is little information in the extant literature as to whether individuals with co-occurring hazardous substance use and mental illness would benefit from BI to the same extent as those without mental illness. This is an important question, as mental illness is estimated to co-occur in 37% of individuals with an alcohol use disorder and in more than 50% of individuals with a drug use disorder. The goal of this study was to explore differences in self-reported alcohol and/or drug use in patients with and without mental illness diagnoses six months after receiving BI in a hospital emergency department (ED). Methods This study took advantage of a naturalistic situation where a screening, brief intervention, and referral to treatment (SBIRT) program had been implemented in nine large EDs in the US state of Washington as part of a national SBIRT initiative. A subset of patients who received BI was interviewed six months later about current alcohol and drug use. Linear regression was used to assess whether change in substance use measures differed among patients with a mental illness diagnosis compared with those without. Data were analyzed for both a statewide (n = 828) and single-hospital (n = 536) sample. Results No significant differences were found between mentally ill and non-mentally ill subgroups in either sample with regard to self-reported hazardous substance use at six-month follow-up. Conclusion These results suggest that BI may not have a differing impact based on the presence of a mental illness diagnosis. Given the high prevalence of mental illness among individuals with alcohol and other drug problems, this finding may have important public health implications. PMID:23186062
Pulse reflectometry as an acoustical inverse problem: Regularization of the bore reconstruction
NASA Astrophysics Data System (ADS)
Forbes, Barbara J.; Sharp, David B.; Kemp, Jonathan A.
2002-11-01
The theoretical basis of acoustic pulse reflectometry, a noninvasive method for the reconstruction of an acoustical duct from the reflections measured in response to an input pulse, is reviewed in terms of the inversion of the central Fredholm equation. It is known that this is an ill-posed problem in the context of finite-bandwidth experimental signals. Recent work by the authors has proposed the truncated singular value decomposition (TSVD) for the regularization of the transient input impulse response, a non-measurable quantity from which the spatial bore reconstruction is derived. In the present paper we further emphasize the relevance of the singular system framework to reflectometry applications, examining for the first time the transient bases of the system. In particular, by varying the truncation point for increasing condition numbers of the system matrix, we find that the effects of out-of-bandwidth singular functions on the bore reconstruction can be systematically studied.
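The TSVD regularization discussed above has a compact form: expand the solution in singular vectors and keep only the k leading terms, so that varying the truncation point trades resolution against noise amplification, which is exactly the study the abstract describes. A minimal NumPy sketch on a toy ill-conditioned system (not the reflectometry operator itself):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b: invert only the k largest
    singular values, discarding the noise-amplifying tail."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

rng = np.random.default_rng(6)
A = np.diag(10.0 ** -np.linspace(0, 6, 30))   # toy ill-conditioned operator
x_true = np.ones(30)
b = A @ x_true + 1e-4 * rng.normal(size=30)   # band-limited, noisy data

err_trunc = np.linalg.norm(tsvd_solve(A, b, 15) - x_true)
err_full = np.linalg.norm(tsvd_solve(A, b, 30) - x_true)
print(err_trunc < err_full)   # truncation beats full-rank inversion
```

Moving the truncation point k upward admits singular functions whose singular values sit below the noise floor, which is the out-of-bandwidth effect examined in the paper.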
A frequency-domain seismic blind deconvolution based on Gini correlations
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing
2018-02-01
In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. Owing to a unique feature, the GCs are robust, with a higher tolerance of low-SNR data and less dependence on record length. Applications of the GC-based seismic blind deconvolution demonstrate its capacity to estimate the unknown seismic wavelet and the reflectivity sequence, for both synthetic traces and field data, even with low SNR and short records.
Quantitative imaging of aggregated emulsions.
Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J
2006-02-28
Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.
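The role of the Euclidean distance map in discriminating closely clustered droplets can be illustrated in two dimensions with SciPy: two touching disks form a single connected region, but thresholding their distance map yields one marker per droplet. The image, radii, and threshold below are arbitrary illustrations, not values from the paper:

```python
import numpy as np
from scipy import ndimage

# Two overlapping "droplets": a single connected binary region.
yy, xx = np.mgrid[0:100, 0:100]
mask = ((xx - 40) ** 2 + (yy - 50) ** 2 < 16 ** 2) | \
       ((xx - 68) ** 2 + (yy - 50) ** 2 < 16 ** 2)

# Euclidean distance map: each foreground pixel's distance to background.
edm = ndimage.distance_transform_edt(mask)

# Plain labelling cannot separate the touching droplets...
n_naive = ndimage.label(mask)[1]
# ...but the distance map peaks near each droplet centre, so a
# threshold above the neck ("saddle") value yields one marker each.
n_markers = ndimage.label(edm > 11)[1]
print(n_naive, n_markers)   # 1 2
```

In the paper's 3-D setting the same idea, combined with the a priori assumption of spherical droplets, is what allows closely packed droplets of very different sizes to be resolved individually.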
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of the ill-posed fully automatic segmentation problem and allows a higher level of flexibility. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, along with a point insertion process that provides the feature points for the next frame's tracking.
NASA Astrophysics Data System (ADS)
Huang, Maosong; Qu, Xie; Lü, Xilin
2017-11-01
By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress return iterative algorithm for a generalized over-nonlocal strain-softening plasticity is proposed, and the consistent tangent matrix is obtained. The proposed algorithm was embedded into existing finite element codes, enabling nonlocal regularization of the ill-posed boundary value problems caused by both pressure-independent and pressure-dependent strain-softening plasticity. The algorithm was verified by numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.
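The local building block of such an algorithm, an implicit (backward-Euler) stress return enforcing the consistency condition, can be sketched for the simplest 1-D case with linear softening. This is a minimal local-plasticity illustration under assumed material constants, not the paper's over-nonlocal formulation:

```python
import numpy as np

def return_map(eps, eps_p, alpha, E=200.0, H=-5.0, sigma_y=1.0):
    """Implicit stress return for 1-D plasticity with linear
    softening (H < 0). E, H, sigma_y are invented constants."""
    sig_tr = E * (eps - eps_p)                    # elastic trial stress
    f_tr = abs(sig_tr) - (sigma_y + H * alpha)    # trial yield function
    if f_tr <= 0.0:
        return sig_tr, eps_p, alpha               # elastic step
    dgamma = f_tr / (E + H)                       # closed-form consistency
    sign = np.sign(sig_tr)
    sig = sig_tr - E * dgamma * sign              # plastic corrector
    return sig, eps_p + dgamma * sign, alpha + dgamma

# Drive one strain increment past yield: the stress must return
# exactly onto the softened yield surface.
sig, ep, a = return_map(eps=0.02, eps_p=0.0, alpha=0.0)
print(abs(sig) - (1.0 - 5.0 * a))   # consistency residual, ~0
```

With softening (H < 0), this local model is precisely what renders the boundary value problem ill-posed and mesh-dependent; the over-nonlocal averaging in the paper regularizes the problem at the structural level while a return map of this kind is reused pointwise.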
A modified conjugate gradient method based on the Tikhonov system for computerized tomography (CT).
Wang, Qi; Wang, Huaxiang
2011-04-01
During the past few decades, computerized tomography (CT) has been widely used for non-destructive testing (NDT) and non-destructive examination (NDE) in industry because of its non-invasiveness and visibility. Recently, CT technology has been applied to multi-phase flow measurement: using radiation attenuation measurements along different directions through the investigated object, together with a special reconstruction algorithm, cross-sectional information of the scanned object can be worked out. It is a typical inverse problem and has always been a challenge because of its nonlinearity and ill-conditioning. The Tikhonov regularization method is widely used for similar ill-posed problems. However, the conventional Tikhonov method does not provide reconstructions of sufficient quality; the relative errors between the reconstructed images and the real distribution should be further reduced. In this paper, a modified conjugate gradient (CG) method is applied to a Tikhonov system (the MCGT method) for reconstructing CT images. The computational load is dominated by the number of independent measurements m, and a preconditioner is introduced to lower the condition number of the Tikhonov system. Both simulation and experimental results indicate that the proposed method can reduce the computational time and improve the quality of image reconstruction. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
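The structure of such a scheme, conjugate gradients applied to the symmetric positive definite Tikhonov normal equations with a preconditioner to lower the condition number, can be sketched with SciPy. This is a generic Jacobi-preconditioned sketch, not the paper's MCGT implementation:

```python
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def tikhonov_cg(A, b, lam):
    """CG solve of the Tikhonov normal equations
    (A^T A + lam^2 I) x = A^T b, with a Jacobi preconditioner."""
    m, n = A.shape
    op = LinearOperator((n, n),
                        matvec=lambda v: A.T @ (A @ v) + lam**2 * v)
    d = (A * A).sum(axis=0) + lam**2      # diagonal of A^T A + lam^2 I
    M = LinearOperator((n, n), matvec=lambda v: v / d)
    x, info = cg(op, A.T @ b, M=M, maxiter=1000)
    assert info == 0                      # converged
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(50, 40))             # toy projection matrix
b = A @ rng.normal(size=40)
lam = 1e-1
x = tikhonov_cg(A, b, lam)
x_direct = np.linalg.solve(A.T @ A + lam**2 * np.eye(40), A.T @ b)
print(np.allclose(x, x_direct, atol=1e-2))
```

The lam^2 shift bounds the smallest eigenvalue of the system away from zero, and the preconditioner clusters the spectrum further; both effects cut the CG iteration count, which is where the claimed reduction in computational time comes from.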
Single photon emission computed tomography-guided Cerenkov luminescence tomography
NASA Astrophysics Data System (ADS)
Hu, Zhenhua; Chen, Xueli; Liang, Jimin; Qu, Xiaochao; Chen, Duofang; Yang, Weidong; Wang, Jing; Cao, Feng; Tian, Jie
2012-07-01
Cerenkov luminescence tomography (CLT) has become a valuable tool for preclinical imaging because of its ability to reconstruct the three-dimensional distribution and activity of radiopharmaceuticals. However, it is still far from a mature technology and suffers from relatively low spatial resolution due to the ill-posed inverse problem of the tomographic reconstruction. In this paper, we present a single photon emission computed tomography (SPECT)-guided reconstruction method for CLT, in which a priori information on the permissible source region (PSR) from SPECT imaging results is incorporated to effectively reduce the ill-posedness of the inverse reconstruction problem. The performance of the method was first validated by reconstructions for an adult athymic nude mouse implanted with a Na131I radioactive source and an adult athymic nude mouse that received an intravenous tail injection of Na131I. A tissue-mimicking phantom experiment was then conducted to illustrate the ability of the proposed method to resolve double sources. Compared with the traditional PSR strategy, in which the PSR is determined from the surface flux distribution, the proposed method obtained much more accurate and encouraging localization and resolution results. Preliminary results showed that the proposed SPECT-guided reconstruction method is insensitive to the choice of regularization method and can ignore the heterogeneity of tissues, which avoids the segmentation procedure for the organs.
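The benefit of a permissible source region can be seen in a small linear-algebra sketch: restricting the unknowns to a prior support turns an underdetermined system into an overdetermined one. Everything below (sizes, support, operator) is an illustrative toy, not the CLT forward model:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 50, 200                  # far fewer measurements than voxels
A = rng.normal(size=(m, n))     # toy light-propagation operator
x_true = np.zeros(n)
x_true[95:100] = 1.0            # a compact source
b = A @ x_true

# Unconstrained minimum-norm solution: smeared over the null space.
x_full = np.linalg.lstsq(A, b, rcond=None)[0]

# PSR prior (e.g. from SPECT): solve only inside the permitted support.
psr = np.arange(80, 120)
x_psr = np.zeros(n)
x_psr[psr] = np.linalg.lstsq(A[:, psr], b, rcond=None)[0]

print(np.linalg.norm(x_psr - x_true) < np.linalg.norm(x_full - x_true))
```

This is the sense in which the SPECT-derived PSR "reduces the ill-posedness": the effective number of unknowns drops below the number of measurements, so the restricted problem has a unique, stable solution.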
Improved real-time dynamics from imaginary frequency lattice simulations
NASA Astrophysics Data System (ADS)
Pawlowski, Jan M.; Rothkopf, Alexander
2018-03-01
The computation of real-time properties, such as transport coefficients or bound-state spectra of strongly interacting quantum fields in thermal equilibrium, is a pressing matter. Since the sign problem prevents a direct evaluation of these quantities, lattice data need to be analytically continued from the Euclidean domain of the simulation to Minkowski time, in general an ill-posed inverse problem. Here we report on a novel approach to improve the determination of real-time information, in the form of spectral functions, by setting up a simulation prescription in imaginary frequencies. By carefully distinguishing between initial conditions and quantum dynamics, one obtains access to correlation functions also outside the conventional Matsubara frequencies. In particular, the range between ω0 = 0 and ω1 = 2πT, which is most relevant for the inverse problem, may be more highly resolved. In combination with the fact that in imaginary frequencies the kernel of the inverse problem is not an exponential but only a rational function, we observe significant improvements in the reconstruction of spectral functions, demonstrated in a simple 0+1-dimensional scalar field theory toy model.
Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction
NASA Astrophysics Data System (ADS)
Mons, Vincent; Wang, Qi; Zaki, Tamer
2017-11-01
Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging for several reasons. First, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Second, in practice only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, inheriting the robustness of the former and the ease of implementation of the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.
Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf
2018-05-12
We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry consisting of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of the quality of reconstructed epicardial potentials, estimated activation and recovery times, and estimated pacing locations, and compared with the performance of Tikhonov zeroth-order regularization. Representations in the wavelet domain were sparser than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of the origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing the substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application by allowing the choice of wavelet bases optimized for specific clinical questions. Graphical Abstract The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed and solving it requires additional constraints for regularization.
We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing the substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application by allowing the choice of wavelet bases optimized for specific clinical questions.
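The core idea of an elastic-net-regularized linear inverse problem with sparsity in a transform domain can be sketched with proximal gradient descent (ISTA). Everything here is an illustrative assumption: a random orthonormal basis stands in for the wavelet transform, and the random matrix `A` stands in for the torso-heart transfer operator; this is not the authors' multitask formulation.

```python
import numpy as np

# Sketch: solve  min_c  0.5*||A W.T c - b||^2 + lam1*||c||_1 + 0.5*lam2*||c||^2
# by ISTA, where the unknown x = W.T c is assumed sparse in the orthonormal
# transform W (a stand-in for a wavelet basis).
rng = np.random.default_rng(1)

def elastic_net_ista(A, b, W, lam1=0.05, lam2=0.01, n_iter=500):
    M = A @ W.T
    L = np.linalg.norm(M, 2) ** 2 + lam2       # Lipschitz constant of smooth part
    c = np.zeros(M.shape[1])
    for _ in range(n_iter):
        grad = M.T @ (M @ c - b) + lam2 * c
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam1 / L, 0.0)  # soft threshold
    return W.T @ c                             # back to the signal domain

# synthetic test: the true signal is 3-sparse in the basis W
n = 32
W, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal basis
c_true = np.zeros(n); c_true[:3] = [2.0, -1.5, 1.0]
x_true = W.T @ c_true
A = rng.standard_normal((24, n))                   # underdetermined forward map
b = A @ x_true + 0.01 * rng.standard_normal(24)
x_hat = elastic_net_ista(A, b, W)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The L1 term promotes sparsity in the transform coefficients while the small L2 term stabilizes correlated columns, which is the qualitative trade-off the elastic net is chosen for.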
Analyzing Pre-Service Primary Teachers' Fraction Knowledge Structures through Problem Posing
ERIC Educational Resources Information Center
Kilic, Cigdem
2015-01-01
This study aimed to determine pre-service primary teachers' knowledge structures of fractions through problem posing activities. A total of 90 pre-service primary teachers participated in this study. A problem posing test consisting of two questions was used, and the participants were asked to generate as many problems as possible based on the…
Students’ Mathematical Creative Thinking through Problem Posing Learning
NASA Astrophysics Data System (ADS)
Ulfah, U.; Prabawanto, S.; Jupri, A.
2017-09-01
The research aims to investigate the differences in the enhancement of mathematical creative thinking ability between students who received a problem posing approach assisted by manipulative media and students who received a problem posing approach without manipulative media. This study was a quasi-experimental study with a non-equivalent control group design. The population of this research was third-grade students of a primary school in Bandung city in the 2016/2017 academic year. The sample consisted of two classes, an experiment class and a control class. The instrument used was a test of mathematical creative thinking ability. Based on the results of the research, the enhancement of mathematical creative thinking ability was higher for students who received the problem posing approach with manipulative media than for those who received the approach without manipulative media. Students who learn through problem posing become accustomed to arranging mathematical sentences into story problems, which can facilitate their comprehension of such stories.
An Interview Forum on Interlibrary Loan/Document Delivery with Lynn Wiley and Tom Delaney
ERIC Educational Resources Information Center
Hasty, Douglas F.
2003-01-01
The Virginia Boucher-OCLC Distinguished ILL Librarian Award is the most prestigious commendation given to practitioners in the field. The following questions about ILL were posed to the two most recent recipients of the Boucher Award: Tom Delaney (2002), Coordinator of Interlibrary Loan Services at Colorado State University and Lynn Wiley (2001),…
Deinstitutionalization: Its Impact on Community Mental Health Centers and the Seriously Mentally Ill
ERIC Educational Resources Information Center
Kliewer, Stephen P.; McNally, Melissa; Trippany, Robyn L.
2009-01-01
Deinstitutionalization has had a significant impact on the mental health system, including the client, the agency, and the counselor. For clients with serious mental illness, learning to live in a community setting poses challenges that are often difficult to overcome. Community mental health agencies must respond to these specific needs, thus…
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging, which we then apply to small-animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using a limited-memory BFGS (lm-BFGS) quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney as the target organ. The reconstructed images are evaluated both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
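A generic Lq-Lp objective and its gradient can be written down for any linear(ized) forward model. The sketch below is a plain illustration of that structure with a smoothing parameter near zero so that p, q < 2 remain differentiable; the matrix `F`, data `y`, weight `alpha`, and smoothing `eps` are placeholders, not the paper's fluorescence forward operator or settings.

```python
import numpy as np

# Illustrative Lq-Lp objective for a linear forward model F x ≈ y:
#   J(x) = (1/q) * sum |F x - y|^q  +  (alpha/p) * sum |x|^p
# with |t| smoothed as sqrt(t^2 + eps) so the gradient exists everywhere.
def lq_lp_cost_grad(x, F, y, q=1.5, p=1.0, alpha=1e-2, eps=1e-8):
    r = F @ x - y
    rq = np.sqrt(r * r + eps)          # smoothed |residual|
    xq = np.sqrt(x * x + eps)          # smoothed |x|
    cost = np.sum(rq ** q) / q + alpha * np.sum(xq ** p) / p
    grad = F.T @ (rq ** (q - 2) * r) + alpha * (xq ** (p - 2) * x)
    return cost, grad

# quick correctness check of the gradient against central finite differences
rng = np.random.default_rng(2)
F = rng.standard_normal((10, 6)); y = rng.standard_normal(10)
x = rng.standard_normal(6)
_, g = lq_lp_cost_grad(x, F, y)
h = 1e-6
g_fd = np.array([(lq_lp_cost_grad(x + h * e, F, y)[0]
                  - lq_lp_cost_grad(x - h * e, F, y)[0]) / (2 * h)
                 for e in np.eye(6)])
print("max gradient error:", np.abs(g - g_fd).max())
```

With `cost` and `grad` in hand, any quasi-Newton routine (such as an lm-BFGS implementation) can minimize the objective for chosen (q, p).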
NASA Astrophysics Data System (ADS)
Supianto, A. A.; Hayashi, Y.; Hirashima, T.
2017-02-01
Problem-posing is well known as an effective activity for learning problem-solving methods. Monsakun is an interactive problem-posing learning environment that facilitates learning of arithmetic word problems involving one operation of addition or subtraction. The characteristic of Monsakun is problem-posing as sentence-integration, which lets learners build a problem from three sentences. Monsakun provides learners with five or six sentences including dummies, which are designed through careful consideration by an expert teacher as meaningful distractions that help learners grasp the structure of arithmetic word problems. The results of the practical use of Monsakun in elementary schools show that many learners have difficulty arranging the correct answer in high-level assignments. Analysis of the problem-posing process of such learners found that their misconceptions about arithmetic word problems cause impasses in their thinking and mislead them into using dummies. This study proposes a method of changing assignments as a support for overcoming such bottlenecks of thinking. In Monsakun, a bottleneck is often detected as frequently repeated use of a specific dummy. If such a dummy can be detected, it is the key factor in supporting learners to overcome their difficulty. This paper discusses how to detect these bottlenecks and how to realize such support in learning by problem-posing.
The Problems Posed and Models Employed by Primary School Teachers in Subtraction with Fractions
ERIC Educational Resources Information Center
Iskenderoglu, Tuba Aydogdu
2017-01-01
Students at almost all levels have difficulty solving problems with fractions, and also in problem posing. Problem posing skills influence the development of the behaviors observed at the level of comprehension. That is why it is crucial for teachers to develop activities for students to gain a conceptual comprehension of fractions and…
Covariance Matrix Estimation for the Cryo-EM Heterogeneity Problem*
Katsevich, E.; Katsevich, A.; Singer, A.
2015-01-01
In cryo-electron microscopy (cryo-EM), a microscope generates a top view of a sample of randomly oriented copies of a molecule. The problem of single particle reconstruction (SPR) from cryo-EM is to use the resulting set of noisy two-dimensional projection images taken at unknown directions to reconstruct the three-dimensional (3D) structure of the molecule. In some situations, the molecule under examination exhibits structural variability, which poses a fundamental challenge in SPR. The heterogeneity problem is the task of mapping the space of conformational states of a molecule. It has been previously suggested that the leading eigenvectors of the covariance matrix of the 3D molecules can be used to solve the heterogeneity problem. Estimating the covariance matrix is challenging, since only projections of the molecules are observed, but not the molecules themselves. In this paper, we formulate a general problem of covariance estimation from noisy projections of samples. This problem has intimate connections with matrix completion problems and high-dimensional principal component analysis. We propose an estimator and prove its consistency. When there are finitely many heterogeneity classes, the spectrum of the estimated covariance matrix reveals the number of classes. The estimator can be found as the solution to a certain linear system. In the cryo-EM case, the linear operator to be inverted, which we term the projection covariance transform, is an important object in covariance estimation for tomographic problems involving structural variation. Inverting it involves applying a filter akin to the ramp filter in tomography. We design a basis in which this linear operator is sparse and thus can be tractably inverted despite its large size. We demonstrate via numerical experiments on synthetic datasets the robustness of our algorithm to high levels of noise. PMID:25699132
Low-sensitivity H ∞ filter design for linear delta operator systems with sampling time jitter
NASA Astrophysics Data System (ADS)
Guo, Xiang-Gui; Yang, Guang-Hong
2012-04-01
This article is concerned with the problem of designing H ∞ filters for a class of linear discrete-time systems with low sensitivity to sampling time jitter via the delta operator approach. A delta-domain model is used to avoid the inherent numerical ill-conditioning resulting from the use of the standard shift-domain model at high sampling rates. Based on the projection lemma, in combination with the descriptor system approach often used to solve delay-related problems, a novel bounded real lemma with three slack variables for delta operator systems is presented. A sensitivity approach based on this novel lemma is proposed to mitigate the effects of sampling time jitter on system performance. The problem of designing a low-sensitivity filter can then be reduced to a convex optimisation problem. An important consideration in the design is the optimal trade-off between the standard H ∞ criterion and the sensitivity of the transfer function with respect to sampling time jitter. Finally, a numerical example demonstrating the validity of the proposed design method is given.
Problem-Posing Research in Mathematics Education: Looking Back, Looking Around, and Looking Ahead
ERIC Educational Resources Information Center
Silver, Edward A.
2013-01-01
In this paper, I comment on the set of papers in this special issue on mathematical problem posing. I offer some observations about the papers in relation to several key issues, and I suggest some productive directions for continued research inquiry on mathematical problem posing.
Depression and decision-making capacity for treatment or research: a systematic review
2013-01-01
Background Psychiatric disorders can pose problems in the assessment of decision-making capacity (DMC). This is particularly so where psychopathology is seen as the extreme end of a dimension that includes normality. Depression is an example of such a psychiatric disorder. Four abilities (understanding, appreciating, reasoning and the ability to express a choice) are commonly assessed when determining DMC in psychiatry, and uncertainty exists about the extent to which depression impacts the capacity to make treatment or research participation decisions. Methods A systematic review of the medical, ethical and empirical literature concerning depression and DMC was conducted. The Medline, EMBASE and PsycInfo databases were searched for studies of depression, consent and DMC. Empirical studies and papers containing ethical analysis were extracted and analysed. Results 17 publications were identified. The clinical ethics studies highlighted appreciation of information as the ability that can be impaired in depression, indicating that emotional factors can impact DMC. The empirical studies reporting decision-making ability scores also highlighted impairment of appreciation, but without evidence of strong impact; measurement problems, however, appeared likely. The frequency of clinical judgements of lack of DMC in people with depression varied greatly according to acuity of illness and whether judgements were structured or unstructured. Conclusions Depression can impair DMC, especially if severe. Most evidence indicates appreciation as the ability primarily impaired by depressive illness. Understanding and measuring the appreciation ability in depression remains a problem in need of further research. PMID:24330745
A Human Proximity Operations System test case validation approach
NASA Astrophysics Data System (ADS)
Huber, Justin; Straub, Jeremy
A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so the HPOS can be shown to be able to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.
2012-01-01
Background It is estimated that worldwide up to 20% of children suffer from debilitating mental illness. Mental disorders that pose a significant concern include learning disorders, hyperkinetic disorders (ADHD), depression, psychosis, pervasive developmental disorders, attachment disorders, anxiety disorders, conduct disorder, substance abuse and eating disorders. Living with such children can be very stressful for caregivers in the family. Therefore, determining the challenges of living with these children is important in the process of finding ways to help or support caregivers in providing proper care for their children. The purpose of this study was to explore the psychological and emotional, social, and economic challenges that parents or guardians experience when caring for mentally ill children and what they do to address or deal with them. Methodology A qualitative study design using in-depth interviews and focus group discussions was applied. The study was conducted at the psychiatric unit of Muhimbili National Hospital in Tanzania. Two focus group discussions (FGDs) and 8 in-depth interviews were conducted with caregivers who attended the psychiatric clinic with their children. Data analysis was done using content analysis. Results The study revealed psychological and emotional, social, and economic challenges caregivers endure while living with mentally ill children. Psychological and emotional challenges included being stressed by caring tasks and having worries about the present and future life of their children. They had feelings of sadness, and inner pain or bitterness due to the disturbing behaviour of the children. They also experienced some communication problems with their children due to their inability to talk. Social challenges were inadequate social services for their children, stigma, the burden of caring tasks, lack of public awareness of mental illness, lack of social support, and problems with social life.
The economic challenges were poverty, child care interfering with various income generating activities in the family, and extra expenses associated with the child’s illness. Conclusion Caregivers of mentally ill children experience various psychological and emotional, social, and economic challenges. Professional assistance, public awareness of mental illnesses in children, social support by the government, private sector, and non-governmental organizations (NGOs) are important in addressing these challenges. PMID:22559084
Radiation Source Mapping with Bayesian Inverse Methods
Hykes, Joshua M.; Azmy, Yousry Y.
2017-03-22
In this work, we present a method to map the spectral and spatial distributions of radioactive sources using a limited number of detectors. Locating and identifying radioactive materials is important for border monitoring, in accounting for special nuclear material in processing facilities, and in cleanup operations following a radioactive material spill. Most methods to analyze these types of problems make restrictive assumptions about the distribution of the source. In contrast, the source mapping method presented here allows an arbitrary three-dimensional distribution in space and a gamma peak distribution in energy. To apply the method, the problem is cast as an inverse problem where the system's geometry and material composition are known and fixed, while the radiation source distribution is sought. A probabilistic Bayesian approach is used to solve the resulting inverse problem since the system of equations is ill-posed. The posterior is maximized with a Newton optimization method. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint, discrete ordinates flux solutions, obtained in this work by the Denovo code, is required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes form the linear mapping from the state space to the response space. The test of the method's success is simultaneously locating a set of 137Cs and 60Co gamma sources in a room. This test problem is solved using experimental measurements that we collected for this purpose. Because of the weak sources available for use in the experiment, some of the expected photopeaks were not distinguishable from the Compton continuum. However, by supplanting 14 flawed measurements (out of a total of 69) with synthetic responses computed by MCNP, the proof-of-principle source mapping was successful.
The locations of the sources were predicted within 25 cm for two of the sources and 90 cm for the third, in a room with an ~4 × 4 m floor plan. Finally, the predicted source intensities were within a factor of ten of their true values.
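The Bayesian machinery described above simplifies considerably in the linear-Gaussian case, which is useful for building intuition: with a Gaussian prior and Gaussian measurement error, the posterior maximum has a closed form and the posterior covariance quantifies confidence, as in the paper. In the sketch below the adjoint-flux response matrix is replaced by a random nonnegative stand-in, and the one-step solve plays the role of the Newton iteration (to which the Gaussian case reduces).

```python
import numpy as np

# MAP estimate for a linear detector model  d = A s + noise,
# Gaussian prior s ~ N(0, prior_var * I), noise ~ N(0, sigma^2 * I).
rng = np.random.default_rng(3)
n_src, n_det = 20, 8                                  # source voxels, detectors
A = np.abs(rng.standard_normal((n_det, n_src)))       # stand-in response matrix
s_true = np.zeros(n_src); s_true[[4, 13]] = [5.0, 3.0]  # two point sources
sigma = 0.05
d = A @ s_true + sigma * rng.standard_normal(n_det)

prior_var = 10.0
# MAP solves  (A^T A / sigma^2 + I / prior_var) s = A^T d / sigma^2
lhs = A.T @ A / sigma ** 2 + np.eye(n_src) / prior_var
s_map = np.linalg.solve(lhs, A.T @ d / sigma ** 2)

# posterior covariance gives the confidence in the reconstruction
post_cov = np.linalg.inv(lhs)
print("reconstruction at true source voxels:", s_map[[4, 13]])
print("posterior std there:", np.sqrt(np.diag(post_cov))[[4, 13]])
```

With only 8 detectors for 20 voxels the problem is underdetermined, so the prior (here, the regularization strength `prior_var`) controls how the unresolved null space is filled in, mirroring the ill-posedness discussed in the abstract.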
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, Tikhonov regularization recovers smooth local structures while blurring sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
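The second of the two decoupled subproblems, a total variation problem solved by split-Bregman iteration, can be illustrated in one dimension. This is a generic TV-denoising sketch (the Tikhonov-regularized tomography step is omitted, and `mu`, `lam`, and the signal are illustrative), showing why TV preserves the sharp contrasts that Tikhonov smoothing blurs.

```python
import numpy as np

# Split-Bregman iteration for the 1D TV problem
#   min_u 0.5*||u - f||^2 + mu*||D u||_1,   D = forward difference.
def tv_denoise_1d(f, mu=0.5, lam=2.0, n_iter=100):
    n = f.size
    D = np.diff(np.eye(n), axis=0)                 # (n-1, n) difference matrix
    u = f.copy()
    d = np.zeros(n - 1); b = np.zeros(n - 1)
    lhs = np.eye(n) + lam * D.T @ D
    for _ in range(n_iter):
        u = np.linalg.solve(lhs, f + lam * D.T @ (d - b))        # quadratic u-step
        v = D @ u + b
        d = np.sign(v) * np.maximum(np.abs(v) - mu / lam, 0.0)   # shrinkage d-step
        b = b + D @ u - d                                        # Bregman update
    return u

# piecewise-constant signal + noise: TV keeps the sharp jump
rng = np.random.default_rng(4)
f_clean = np.concatenate([np.zeros(50), np.ones(50)])
f_noisy = f_clean + 0.1 * rng.standard_normal(100)
u = tv_denoise_1d(f_noisy)
print("denoised error:", np.linalg.norm(u - f_clean))
print("noisy error:   ", np.linalg.norm(f_noisy - f_clean))
```

The shrinkage step is cheap and the quadratic step is a fixed sparse solve, which is what makes split-Bregman attractive inside a larger tomography loop.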
NASA Astrophysics Data System (ADS)
Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.
2010-03-01
Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth against which to validate the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
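The trade-off between the optical flow constraint and imposed smoothness is concrete in the classic Horn-Schunck formulation, where a single weight `alpha` balances the two. The sketch below is a minimal Horn-Schunck estimator on a synthetic image pair; the parameter-optimization loop the paper describes is not shown, and `alpha`, the iteration count, and the test images are illustrative.

```python
import numpy as np

# Minimal Horn-Schunck optical flow: iteratively trade off the brightness
# constancy term (Ix*u + Iy*v + It = 0) against smoothness of (u, v),
# weighted by alpha.
def horn_schunck(I1, I2, alpha=0.1, n_iter=200):
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)

    def avg(w):                      # 4-neighbour average with edge padding
        wp = np.pad(w, 1, mode="edge")
        return (wp[:-2, 1:-1] + wp[2:, 1:-1] + wp[1:-1, :-2] + wp[1:-1, 2:]) / 4.0

    for _ in range(n_iter):
        ua, va = avg(u), avg(v)
        common = (Ix * ua + Iy * va + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = ua - Ix * common
        v = va - Iy * common
    return u, v

# synthetic pair: a Gaussian blob translated by one pixel in +x
yy, xx = np.mgrid[0:40, 0:40].astype(float)
I1 = np.exp(-((xx - 19) ** 2 + (yy - 20) ** 2) / 20.0)
I2 = np.exp(-((xx - 20) ** 2 + (yy - 20) ** 2) / 20.0)
u, v = horn_schunck(I1, I2)
weight = np.gradient(I1, axis=1) ** 2     # trust flow where gradients exist
print("gradient-weighted horizontal flow:", (u * weight).sum() / weight.sum())
```

Increasing `alpha` smooths the field and damps the recovered motion, which is exactly the resolution loss the abstract attributes to regularization; on speckle-heavy US data the optimal weight differs per sequence, motivating a global optimization over such parameters.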
On decoupling of volatility smile and term structure in inverse option pricing
NASA Astrophysics Data System (ADS)
Egger, Herbert; Hein, Torsten; Hofmann, Bernd
2006-08-01
Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.
Perceptual asymmetry in texture perception.
Williams, D; Julesz, B
1992-07-15
A fundamental property of human visual perception is our ability to distinguish between textures. A concerted effort has been made to account for texture segregation in terms of linear spatial filter models and their nonlinear extensions. However, for certain texture pairs the ease of discrimination changes when the role of figure and ground are reversed. This asymmetry poses a problem for both linear and nonlinear models. We have isolated a property of texture perception that can account for this asymmetry in discrimination: subjective closure. This property, which is also responsible for visual illusions, appears to be explainable by early visual processes alone. Our results force a reexamination of the process of human texture segregation and of some recent models that were introduced to explain it.
An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups
ERIC Educational Resources Information Center
Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi
2012-01-01
The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…
Problem Posing at All Levels in the Calculus Classroom
ERIC Educational Resources Information Center
Perrin, John Robert
2007-01-01
This article explores the use of problem posing in the calculus classroom using investigative projects. Specifically, four examples of student work are examined, each differing in the originality of the problem posed. By allowing students to explore actual questions that they have about calculus, coming from their own work or class discussion, or…
Critical Inquiry across the Disciplines: Strategies for Student-Generated Problem Posing
ERIC Educational Resources Information Center
Nardone, Carroll Ferguson; Lee, Renee Gravois
2011-01-01
Problem posing is a higher-order, active-learning task that is important for students to develop. This article describes a series of interdisciplinary learning activities designed to help students strengthen their problem-posing skills, which requires that students become more responsible for their learning and that faculty move to a facilitator…
Developing Teachers' Subject Didactic Competence through Problem Posing
ERIC Educational Resources Information Center
Ticha, Marie; Hospesova, Alena
2013-01-01
Problem posing (not only in lesson planning but also directly in teaching whenever needed) is one of the attributes of a teacher's subject didactic competence. In this paper, problem posing in teacher education is understood as an educational and a diagnostic tool. The results of the study were gained in pre-service primary school teacher…
ERIC Educational Resources Information Center
Barlow, Angela T.; Cates, Janie M.
2006-01-01
This study investigated the impact of incorporating problem posing in elementary classrooms on the beliefs held by elementary teachers about mathematics and mathematics teaching. Teachers participated in a year-long staff development project aimed at facilitating the incorporation of problem posing into their classrooms. Beliefs were examined via…
The Posing of Arithmetic Problems by Mathematically Talented Students
ERIC Educational Resources Information Center
Espinoza González, Johan; Lupiáñez Gómez, José Luis; Segovia Alex, Isidoro
2016-01-01
Introduction: This paper analyzes the arithmetic problems posed by a group of mathematically talented students when given two problem-posing tasks, and compares these students' responses to those given by a standard group of public school students to the same tasks. Our analysis focuses on characterizing and identifying the differences between the…
Posing Problems to Understand Children's Learning of Fractions
ERIC Educational Resources Information Center
Cheng, Lu Pien
2013-01-01
In this study, the ways in which problem posing activities aid our understanding of children's learning of addition of unlike fractions and the product of proper fractions were examined. In particular, how a simple problem posing activity helps teachers take a second, deeper look at children's understanding of fraction concepts will be discussed. The…
Development of the Structured Problem Posing Skills and Using Metaphoric Perceptions
ERIC Educational Resources Information Center
Arikan, Elif Esra; Unal, Hasan
2014-01-01
The purpose of this study was to introduce a problem posing activity to third grade students who had never encountered one before. This study also explored students' metaphorical images of the problem posing process. Participants were from a public school in the Marmara Region of Turkey. Data were analyzed both qualitatively (content analysis for difficulty and…
Integrating Worked Examples into Problem Posing in a Web-Based Learning Environment
ERIC Educational Resources Information Center
Hsiao, Ju-Yuan; Hung, Chun-Ling; Lan, Yu-Feng; Jeng, Yoau-Chau
2013-01-01
Most students lack experience with problem posing and perceive it as difficult. The study hypothesized that worked examples may benefit students' problem posing activities. A quasi-experiment was conducted in the context of a business mathematics course to examine the effects of integrating worked examples into…
Modified Chapman-Enskog moment approach to diffusive phonon heat transport.
Banach, Zbigniew; Larecki, Wieslaw
2008-12-01
A detailed treatment of the Chapman-Enskog method for a phonon gas is given within the framework of an infinite system of moment equations obtained from Callaway's model of the Boltzmann-Peierls equation. Introducing no limitations on the magnitudes of the individual components of the drift velocity or the heat flux, this method is used to derive various systems of hydrodynamic equations for the energy density and the drift velocity. For one-dimensional flow problems, assuming that normal processes dominate over resistive ones, it is found that the first three levels of the expansion (i.e., the zeroth-, first-, and second-order approximations) yield the equations of hydrodynamics which are linearly stable at all wavelengths. This result can be achieved either by examining the dispersion relations for linear plane waves or by constructing the explicit quadratic Lyapunov entropy functionals for the linear perturbation equations. The next order in the Chapman-Enskog expansion leads to equations which are unstable to some perturbations. Precisely speaking, the linearized equations of motion that describe the propagation of small disturbances in the flow have unstable plane-wave solutions in the short-wavelength limit of the dispersion relations. This poses no problem if the equations are used in their proper range of validity.
Quantum Linear System Algorithm for Dense Matrices.
Wossnig, Leonard; Zhao, Zhikuan; Prakash, Anupam
2018-02-02
Solving linear systems of equations is a frequently encountered problem in machine learning and optimization. Given a matrix A and a vector b, the task is to find the vector x such that Ax = b. We describe a quantum algorithm that achieves a sparsity-independent runtime scaling of O(κ²√n · polylog(n)/ε) for an n×n dimensional A with bounded spectral norm, where κ denotes the condition number of A, and ε is the desired precision parameter. This amounts to a polynomial improvement over known quantum linear system algorithms when applied to dense matrices, and poses a new state of the art for solving dense linear systems on a quantum computer. Furthermore, an exponential improvement is achievable if the rank of A is polylogarithmic in the matrix dimension. Our algorithm is built upon a singular value estimation subroutine, which makes use of a memory architecture that allows for efficient preparation of quantum states that correspond to the rows of A and the vector of Euclidean norms of the rows of A.
NASA Astrophysics Data System (ADS)
Bona, J. L.; Chen, M.; Saut, J.-C.
2004-05-01
In part I of this work (Bona J L, Chen M and Saut J-C 2002 Boussinesq equations and other systems for small-amplitude long waves in nonlinear dispersive media I: Derivation and the linear theory J. Nonlinear Sci. 12 283-318), a four-parameter family of Boussinesq systems was derived to describe the propagation of surface water waves. Similar systems are expected to arise in other physical settings where the dominant aspects of propagation are a balance between the nonlinear effects of convection and the linear effects of frequency dispersion. In addition to deriving these systems, we determined in part I exactly which of them are linearly well posed in various natural function classes. It was argued that linear well-posedness is a natural necessary requirement for the possible physical relevance of the model in question. In this paper, it is shown that the first-order correct models that are linearly well posed are in fact locally nonlinearly well posed. Moreover, in certain specific cases, global well-posedness is established for physically relevant initial data. In part I, higher-order correct models were also derived. A preliminary analysis of a promising subclass of these models shows them to be well posed.
Sun, Liang; Huo, Wei; Jiao, Zongxia
2017-03-01
This paper studies relative pose control for a rigid spacecraft with parametric uncertainties approaching an unknown tumbling target in a disturbed space environment. State feedback controllers for relative translation and relative rotation are designed in an adaptive nonlinear robust control framework. The element-wise and norm-wise adaptive laws are utilized to compensate for the parametric uncertainties of the chaser and target spacecraft, respectively. External disturbances acting on the two spacecraft are treated as a lumped and bounded perturbation input for the system. To achieve the prescribed disturbance attenuation performance index, feedback gains of the controllers are designed by solving linear matrix inequality problems so that lumped disturbance attenuation with respect to the controlled output is ensured in the L2-gain sense. Moreover, in the absence of the lumped disturbance input, asymptotic convergence of the relative pose is proved by using the Lyapunov method. Numerical simulations are performed to show that position tracking and attitude synchronization are accomplished in spite of the presence of couplings and uncertainties. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Roberts, Laura Weiss; Kim, Jane Paik
2014-01-01
Motivation Ethical controversy surrounds clinical research involving seriously ill participants. While many stakeholders have opinions, the extent to which protocol volunteers themselves see human research as ethically acceptable has not been documented. To address this gap of knowledge, authors sought to assess views of healthy and ill clinical research volunteers regarding the ethical acceptability of human studies involving individuals who are ill or are potentially vulnerable. Methods Surveys and semi-structured interviews were used to query clinical research protocol participants and a comparison group of healthy individuals. A total of 179 respondents participated in this study: 150 in protocols (60 mentally ill, 43 physically ill, and 47 healthy clinical research protocol participants) and 29 healthy individuals not enrolled in protocols. Main outcome measures included responses regarding ethical acceptability of clinical research when it presents significant burdens and risks, involves people with serious mental and physical illness, or enrolls people with other potential vulnerabilities in the research situation. Results Respondents expressed decreasing levels of acceptance of participation in research that posed burdens of increasing severity. Participation in protocols with possibly life-threatening consequences was perceived as least acceptable (mean = 1.82, sd = 1.29). Research on serious illnesses, including HIV, cancer, schizophrenia, depression, and post-traumatic stress disorder, was seen as ethically acceptable across respondent groups (range of means = [4.0, 4.7]). Mentally ill volunteers expressed levels of ethical acceptability for physical illness research and mental illness research as acceptable and similar, while physically ill volunteers expressed greater ethical acceptability for physical illness research than for mental illness research. 
Mentally ill, physically ill, and healthy participants expressed neutral to favorable perspectives regarding the ethical acceptability of clinical research participation by potentially vulnerable subpopulations (difference in acceptability perceived by mentally ill − healthy = −0.04, CI [−0.46, 0.39]; physically ill − healthy = −0.13, CI [−0.62, −0.36]). Conclusions Clinical research volunteers and healthy clinical research-"naive" individuals view studies involving ill people as ethically acceptable, and their responses reflect concern regarding research that poses considerable burdens and risks and research involving vulnerable subpopulations. Physically ill research volunteers may be more willing to see burdensome and risky research as acceptable. Mentally ill research volunteers and healthy individuals expressed similar perspectives in this study, helping to dispel a misconception that those with mental illness should be presumed to hold disparate views. PMID:24931849
Linear solutions to metamaterial volume hologram design using a variational approach.
Marks, Daniel L; Smith, David R
2018-04-01
Multiplex volume holograms are conventionally constructed by the repeated exposure of a photosensitive medium to a sequence of external fields, each field typically being the superposition of a reference wave that reconstructs the hologram and a desired signal wave. Because there are no sources of radiation internal to the hologram, the pattern of material modulation is limited to solutions of Helmholtz's equation in the medium. If the three-dimensional structure of the medium could be engineered at each point, rather than limited to the patterns produced by standing waves, more versatile structures may result that can overcome the typical limitations on hologram dynamic range imposed by sequentially superimposing holograms. Metamaterial structures and other synthetic electromagnetic materials offer the possibility of achieving high medium contrast engineered at the subwavelength scale. By posing the multiplex volume holography problem as a linear medium design problem, we explore the potential improvements that such engineered synthetic media may provide over conventional multiplex volume holograms.
A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays
Hughes, Alec; Hynynen, Kullervo
2016-01-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323
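The trade-off this abstract describes can be sketched generically. The snippet below is a minimal illustration of Tikhonov-regularized least squares with a random complex matrix standing in for the acoustic forward operator; all dimensions and names are invented, not taken from the paper. It shows how the regularization parameter balances the residual (focus quality) against the drive norm (array efficiency):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward operator: complex pressure at m control points
# produced by n transducer element drives (illustrative stand-in only).
m, n = 40, 64
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = rng.standard_normal(m) + 1j * rng.standard_normal(m)  # desired field

def tikhonov_solve(A, b, lam):
    """Minimise ||A x - b||^2 + lam^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.conj().T @ A + lam**2 * np.eye(n), A.conj().T @ b)

# Larger lam trades focusing quality (residual grows) for drive
# efficiency (element drive norm shrinks).
for lam in (1e-3, 1e-1, 1e1):
    x = tikhonov_solve(A, b, lam)
    res = np.linalg.norm(A @ x - b)
    print(f"lam={lam:g}  residual={res:.3f}  drive norm={np.linalg.norm(x):.3f}")
```

The same parameter sweep is how one would visualize the quality/efficiency balance the paper discusses.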
Applications of Electrical Impedance Tomography (EIT): A Short Review
NASA Astrophysics Data System (ADS)
Kanti Bera, Tushar
2018-03-01
Electrical Impedance Tomography (EIT) is a tomographic imaging method which solves an ill-posed inverse problem using the boundary voltage-current data collected from the surface of the object under test. Though its spatial resolution is low compared to conventional tomographic imaging modalities, EIT's several advantages have led to its study for a number of applications such as medical imaging, material engineering, civil engineering, biotechnology, chemical engineering, MEMS and other fields of engineering and applied sciences. In this paper, the applications of EIT have been reviewed and presented as a short summary. The working principle, instrumentation and advantages are briefly discussed, followed by a detailed discussion of the applications of EIT technology in different areas of engineering, technology and applied sciences.
[Prevalence of patients with HIV infection in an emergency department].
Greco, G M; Paparo, R; Ventura, R; Migliardi, C; Tallone, R; Moccia, F
1995-01-01
The activity at an ED, primarily aimed at providing rational and qualified support to critically ill patients, is forced to manage very different nosographic entities, including infectious, often contagious, pathologies. In this context the diffusion of HIV infection poses a number of problems concerning both the kind of patients presenting to the ED and the professional risk of health-care workers. In the first four months of 1992 the incidence of patients with recognized or presumed HIV infection at the "Pronto Soccorso Medico" was 1.78% of the 2327 patients admitted. This study aims to contribute to the epidemiologic definition of the risk of HIV infection due to occupational exposure, stressing the peculiar conditions of urgency-emergency often characterizing the activity within the ED.
Donow, H S
1990-08-01
Care of an elder patient is often regarded by the children as an unwanted burden. Anderson's 1968 play, I Never Sang for My Father, and Ariyoshi's 1972 novel, Kokotsu no hito [The Twilight Years], show how two families from two different cultures (American and Japanese) respond to this crisis. The two texts arrive at dramatically different conclusions: in one the children, Gene and Alice, prove unwilling or unable to cope with the problems posed by their father's need; in the other Akiko, though nearly overwhelmed by the burden of her father-in-law's illness, emerges richer for the experience.
Improving chemical species tomography of turbulent flows using covariance estimation.
Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J
2017-05-01
Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of additional information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
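The joint-normal prior update can be sketched on a toy linear-Gaussian model. Everything below (the ray operator, the squared-exponential covariance, the dimensions) is invented for illustration; the paper's actual covariance-estimation techniques are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "tomography": each of m rays integrates a subset of the n pixels.
n, m = 50, 15
A = (rng.random((m, n)) < 0.2).astype(float)

# Ground-truth field: a smooth bump, observed with additive noise.
xs = np.linspace(0, 1, n)
x_true = np.exp(-((xs - 0.4) ** 2) / 0.02)
sigma = 0.05
b = A @ x_true + sigma * rng.standard_normal(m)

# Joint-normal prior: zero mean, squared-exponential spatial covariance.
ell = 0.1
C = np.exp(-((xs[:, None] - xs[None, :]) ** 2) / (2 * ell**2))

# Posterior mean (MAP) for the linear-Gaussian model.
G = np.linalg.solve(A @ C @ A.T + sigma**2 * np.eye(m), b)
x_map = C @ A.T @ G

# Naive minimum-norm (pseudo-inverse) reconstruction for comparison.
x_naive = np.linalg.pinv(A) @ b
print("prior-based error:", np.linalg.norm(x_map - x_true))
print("naive error:      ", np.linalg.norm(x_naive - x_true))
```

The prior covariance supplies the spatial-smoothness information that the limited ray data cannot, which is the mechanism behind the accuracy gains the abstract reports.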
Locating an atmospheric contamination source using slow manifolds
NASA Astrophysics Data System (ADS)
Tang, Wenbo; Haller, George; Baik, Jong-Jin; Ryu, Young-Hee
2009-04-01
Finite-size particle motion in fluids obeys the Maxey-Riley equations, which become singular in the limit of infinitesimally small particle size. Because of this singularity, finding the source of a dispersed set of small particles is a numerically ill-posed problem that leads to exponential blowup. Here we use recent results on the existence of a slow manifold in the Maxey-Riley equations to overcome this difficulty in source inversion. Specifically, we locate the source of particles by projecting their dispersed positions on a time-varying slow manifold, and by advecting them on the manifold in backward time. We use this technique to locate the source of a hypothetical anthrax release in an unsteady three-dimensional atmospheric wind field in an urban street canyon.
Developing Pre-Service Teachers Understanding of Fractions through Problem Posing
ERIC Educational Resources Information Center
Toluk-Ucar, Zulbiye
2009-01-01
This study investigated the effect of problem posing on the understanding of fraction concepts among pre-service primary teachers enrolled in two different versions of a methods course at a university in Turkey. In the experimental version, problem posing was used as a teaching strategy. At the beginning of the study, the pre-service teachers'…
The Effects of Problem Posing on Student Mathematical Learning: A Meta-Analysis
ERIC Educational Resources Information Center
Rosli, Roslinda; Capraro, Mary Margaret; Capraro, Robert M.
2014-01-01
The purpose of the study was to meta-synthesize research findings on the effectiveness of problem posing and to investigate the factors that might affect the incorporation of problem posing in the teaching and learning of mathematics. The eligibility criteria for inclusion of literature in the meta-analysis was: published between 1989 and 2011,…
Teachers Implementing Mathematical Problem Posing in the Classroom: Challenges and Strategies
ERIC Educational Resources Information Center
Leung, Shuk-kwan S.
2013-01-01
This paper reports a study about how a teacher educator shared knowledge with teachers when they worked together to implement mathematical problem posing (MPP) in the classroom. It includes feasible methods for getting practitioners to use research-based tasks aligned to the curriculum in order to encourage children to pose mathematical problems.…
Problem-Posing in Education: Transformation of the Practice of the Health Professional.
ERIC Educational Resources Information Center
Casagrande, L. D. R.; Caron-Ruffino, M.; Rodrigues, R. A. P.; Vendrusculo, D. M. S.; Takayanagui, A. M. M.; Zago, M. M. F.; Mendes, M. D.
1998-01-01
Studied the use of a problem-posing model in health education. The model based on the ideas of Paulo Freire is presented. Four innovative experiences of teaching-learning in environmental and occupational health and patient education are reported. Notes that the problem-posing model has the capability to transform health-education practice.…
Scott, Elizabeth; Herbold, Nancie
2010-06-01
Foodborne illnesses pose a problem to all individuals but are especially significant for infants, the elderly, and individuals with compromised immune systems. Personal hygiene is recognized as the number-one way people can lower their risk. The majority of meals in the U.S. are eaten at home. Little is known, however, about the actual application of personal hygiene and sanitation behaviors in the home. The study discussed in this article assessed knowledge of hygiene practices compared to observed behaviors and determined whether knowledge equated to practice. It was a descriptive study involving a convenience sample of 30 households. Subjects were recruited from the Boston area and a researcher and/or a research assistant traveled to the homes of study participants to videotape a standard food preparation procedure preceded by floor mopping. The results highlight the differences between individuals' reported beliefs and actual practice. This information can aid food safety and other health professionals in targeting food safety education so that consumers understand their own critical role in decreasing their risk for foodborne illness.
Payne, John
1971-01-01
The new film of David Mercer's Family life poses some hard questions for psychiatry to answer and puts the Laingian case for 'schizophrenia' being an illness created within the family unit. PMID:27670980
Mighty Mathematicians: Using Problem Posing and Problem Solving to Develop Mathematical Power
ERIC Educational Resources Information Center
McGatha, Maggie B.; Sheffield, Linda J.
2006-01-01
This article describes a year-long professional development institute combined with a summer camp for students. Both were designed to help teachers and students develop their problem-solving and problem-posing abilities.
Experimental and Theoretical Results in Output-Trajectory Redesign for Flexible Structures
NASA Technical Reports Server (NTRS)
Dewey, J. S.; Devasia, Santosh
1996-01-01
In this paper we study the optimal redesign of output trajectories for linear invertible systems. This is particularly important for tracking control of flexible structures because the input-state trajectories that achieve the required output may cause excessive vibrations in the structure. A trade-off is then required between tracking and vibration reduction. We pose and solve this problem as the minimization of a quadratic cost function. The theory is developed and applied to the output tracking of a flexible structure, and experimental results are presented.
Microwave inversion of leaf area and inclination angle distributions from backscattered data
NASA Technical Reports Server (NTRS)
Lang, R. H.; Saleh, H. A.
1985-01-01
The backscattering coefficient from a slab of thin randomly oriented dielectric disks over a flat lossy ground is used to reconstruct the inclination angle and area distributions of the disks. The disks are employed to model a leafy agricultural crop, such as soybeans, in the L-band microwave region of the spectrum. The distorted Born approximation, along with a thin disk approximation, is used to obtain a relationship between the horizontal-like polarized backscattering coefficient and the joint probability density of disk inclination angle and disk radius. Assuming large skin depth reduces the relationship to a linear Fredholm integral equation of the first kind. Due to the ill-posed nature of this equation, a Phillips-Twomey regularization method with a second difference smoothing condition is used to find the inversion. Results are obtained in the presence of 1 and 10 percent noise for both leaf inclination angle and leaf radius densities.
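A minimal sketch of the Phillips-Twomey regularization with a second-difference smoothing matrix, applied to a generic discretized Fredholm equation of the first kind. The kernel and density below are invented stand-ins, not the paper's vegetation scattering model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Discretised Fredholm equation of the first kind with a smooth Gaussian
# kernel: severely ill-conditioned, like the inversion in the abstract.
n = 60
s = np.linspace(0, 1, n)
K = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01)
f_true = np.sin(np.pi * s) ** 2               # e.g. an inclination-angle density
g = K @ f_true + 0.01 * rng.standard_normal(n)  # 1% measurement noise

# Second-difference smoothing matrix (the Phillips-Twomey choice).
L = np.diff(np.eye(n), n=2, axis=0)

def phillips_twomey(K, g, L, gamma):
    """Minimise ||K f - g||^2 + gamma ||L f||^2."""
    return np.linalg.solve(K.T @ K + gamma * L.T @ L, K.T @ g)

f_reg = phillips_twomey(K, g, L, 1e-4)
# Nearly unregularised solve for comparison: noise is wildly amplified.
f_unreg = np.linalg.solve(K.T @ K + 1e-12 * np.eye(n), K.T @ g)
print("regularised error:  ", np.linalg.norm(f_reg - f_true))
print("unregularised error:", np.linalg.norm(f_unreg - f_true))
```

The second-difference penalty encodes the same smoothness condition the abstract invokes to stabilize the inversion.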
Least Squares Shadowing sensitivity analysis of chaotic limit cycle oscillations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Qiqi, E-mail: qiqi@mit.edu; Hu, Rui, E-mail: hurui@mit.edu; Blonigan, Patrick, E-mail: blonigan@mit.edu
2014-06-15
The adjoint method, among other sensitivity analysis methods, can fail in chaotic dynamical systems. The result from these methods can be too large, often by orders of magnitude, when the result is the derivative of a long time averaged quantity. This failure is known to be caused by ill-conditioned initial value problems. This paper overcomes this failure by replacing the initial value problem with the well-conditioned “least squares shadowing (LSS) problem”. The LSS problem is then linearized in our sensitivity analysis algorithm, which computes a derivative that converges to the derivative of the infinitely long time average. We demonstrate our algorithm in several dynamical systems exhibiting both periodic and chaotic oscillations.
McEwan, Miranda; Friedman, Susan Hatters
2016-12-01
Psychiatrists are mandated to report suspicions of child abuse in America. Potential for harm to children should be considered when one is treating parents who are at risk. Although it is commonly held that mental illness itself is a major risk factor for child abuse, there are methodologic issues with studies purporting to demonstrate this. Rather, the risk from an individual parent must be considered. Substance abuse and personality disorder pose a risk separate from that of serious mental illness. Violence risk from mental illness is dynamic, rather than static. When severe mental illness is well treated, the risk is decreased. However, these families are in need of social support. Copyright © 2016 Elsevier Inc. All rights reserved.
An Analysis of Problem-Posing Tasks in Chinese and US Elementary Mathematics Textbooks
ERIC Educational Resources Information Center
Cai, Jinfa; Jiang, Chunlian
2017-01-01
This paper reports on 2 studies that examine how mathematical problem posing is integrated in Chinese and US elementary mathematics textbooks. Study 1 involved a historical analysis of the problem-posing (PP) tasks in 3 editions of the most widely used elementary mathematics textbook series published by People's Education Press in China over 3…
ERIC Educational Resources Information Center
Aydogdu Iskenderoglu, Tuba
2018-01-01
It is important for pre-service teachers to know the conceptual difficulties they have experienced regarding multiplication and division of fractions, and problem posing is one way to reveal these conceptual difficulties. Problem posing is a synthetic activity that fundamentally admits multiple answers. The purpose of this study is to…
ERIC Educational Resources Information Center
Cankoy, Osman; Özder, Hasan
2017-01-01
The aim of this study is to develop a scoring rubric to assess primary school students' problem posing skills. The rubric including five dimensions namely solvability, reasonability, mathematical structure, context and language was used. The raters scored the students' problem posing skills both with and without the scoring rubric to test the…
ERIC Educational Resources Information Center
Van Harpen, Xianwei Y.; Presmeg, Norma C.
2013-01-01
The importance of students' problem-posing abilities in mathematics has been emphasized in the K-12 curricula in the USA and China. There are claims that problem-posing activities are helpful in developing creative approaches to mathematics. At the same time, there are also claims that students' mathematical content knowledge could be highly…
An Investigation of Eighth Grade Students' Problem Posing Skills (Turkey Sample)
ERIC Educational Resources Information Center
Arikan, Elif Esra; Ünal, Hasan
2015-01-01
Posing a problem is a creative activity in mathematics education. The purpose of the study was to explore eighth grade students' problem posing ability. Three learning domains (problems requiring the four operations, fractions, and geometry) were chosen for this reason. There were two classes, coded as class A and class B. Class A…
Mathematical Creative Process Wallas Model in Students Problem Posing with Lesson Study Approach
ERIC Educational Resources Information Center
Nuha, Muhammad 'Azmi; Waluya, S. B.; Junaedi, Iwan
2018-01-01
Creative thinking is very important in the modern era, and it can be improved through efforts such as designing lessons that train students to pose their own problems. The purposes of this research are (1) to give an initial description of students' mathematical creative thinking levels in the Problem Posing Model with a Lesson Study approach…
Problem Posing with Realistic Mathematics Education Approach in Geometry Learning
NASA Astrophysics Data System (ADS)
Mahendra, R.; Slamet, I.; Budiyono
2017-09-01
One of the difficulties of students in the learning of geometry is the subject of the plane, which requires students to understand abstract matter. The aim of this research is to determine the effect of the Problem Posing learning model with a Realistic Mathematics Education Approach in geometry learning. This quasi-experimental research was conducted in one of the junior high schools in Karanganyar, Indonesia. The sample was taken using a stratified cluster random sampling technique. The results indicate that the Problem Posing learning model with a Realistic Mathematics Education Approach can significantly improve students' conceptual understanding in geometry learning, especially on plane topics. This is because, under Problem Posing with a Realistic Mathematics Education Approach, students become active in constructing their knowledge, posing problems, and solving them in realistic contexts, which makes it easier for them to understand concepts and solve problems. Therefore, the Problem Posing learning model with a Realistic Mathematics Education Approach is appropriate for mathematics learning, especially for geometry material. Furthermore, it can improve student achievement.
NASA Technical Reports Server (NTRS)
Zubko, V.; Dwek, E.; Arendt, R. G.; Oegerle, William (Technical Monitor)
2001-01-01
We present new interstellar dust models that are consistent with both the FUV to near-IR extinction and the infrared (IR) emission measurements from the diffuse interstellar medium. The models are characterized by different dust compositions and abundances. The problem we solve consists of determining the size distribution of the various dust components of the model. This is a typical ill-posed inversion problem, which we solve using the regularization approach. We reproduce the Li & Draine (2001, ApJ, 554, 778) results; however, their model requires an excessive amount of interstellar silicon (48 ppM of hydrogen compared to the 36 ppM available for an ISM of solar composition) to be locked up in dust. We found that dust models consisting of PAHs, amorphous silicate, graphite, and composite grains made up of silicates, organic refractory material, and water ice provide an improved fit to the extinction and IR emission measurements, while still requiring a subsolar amount of silicon to be in the dust. This research was supported by NASA Astrophysical Theory Program NRA 99-OSS-01.
NASA Astrophysics Data System (ADS)
Petržala, Jaromír
2018-07-01
The knowledge of the emission function of a city is crucial for simulating sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper deals with a theoretical feasibility study of various approaches to solving this inverse problem. In particular, it tests the fitness of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model for calculating sky spectral radiance as a functional of the emission spectral radiance. Consequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and corrupted by random errors. The results demonstrate that the second-order Tikhonov regularization method, together with choosing the regularization parameter by the L-curve maximum-curvature criterion, provides solutions in good agreement with the assumed model emission functions.
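The L-curve maximum-curvature criterion can be sketched on a generic ill-posed system. The forward model below is an invented stand-in for the sky-radiance functional; the corner is located by numerically maximizing the curvature of the log residual norm vs. log solution norm curve:

```python
import numpy as np

rng = np.random.default_rng(3)

# Ill-posed toy system: smooth Gaussian kernel, noisy data.
n = 40
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.005)
x_true = np.exp(-((t - 0.5) ** 2) / 0.05)
b = A @ x_true + 0.01 * rng.standard_normal(n)

lams = np.logspace(-6, 1, 50)
rho, eta = [], []                      # log residual norm, log solution norm
for lam in lams:
    x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    rho.append(np.log(np.linalg.norm(A @ x - b)))
    eta.append(np.log(np.linalg.norm(x)))
rho, eta = np.array(rho), np.array(eta)

# Corner of the L-curve = point of maximum curvature of the (rho, eta) curve.
drho, deta = np.gradient(rho), np.gradient(eta)
d2rho, d2eta = np.gradient(drho), np.gradient(deta)
curvature = np.abs(drho * d2eta - deta * d2rho) / (drho**2 + deta**2) ** 1.5
idx = 1 + np.argmax(curvature[1:-1])   # skip one-sided endpoint estimates
lam_opt = lams[idx]
print(f"L-curve corner at lambda ~ {lam_opt:.2e}")
```

This is the same parameter-choice logic the abstract credits: the corner separates the under-regularized (noise-dominated) branch from the over-smoothed branch.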
Locating the source of projectile fluid droplets
NASA Astrophysics Data System (ADS)
Varney, Christopher R.; Gittes, Fred
2011-08-01
The ill-posed projectile problem of finding the source height from spattered droplets of viscous fluid is a longstanding obstacle to accident reconstruction and crime-scene analysis. It is widely known how to infer the impact angle of droplets on a surface from the elongation of their impact profiles. However, the lack of velocity information makes it impossible to find the height of the origin from the impact position and angle of individual drops. From aggregate statistics of the spatter and basic equations of projectile motion, we introduce a reciprocal correlation plot that is effective when the polar launch angle is concentrated in a narrow range. The vertical coordinate depends on the orientation of the spattered surface and equals the tangent of the impact angle for a level surface. When the horizontal plot coordinate is twice the reciprocal of the impact distance, we can infer the source height as the slope of the data points in the reciprocal correlation plot. If the distribution of launch angles is not narrow, failure of the method is evident in the lack of linear correlation. We perform a number of experimental trials, as well as numerical calculations, and show that the height estimate is relatively insensitive to aerodynamic drag. Besides its possible relevance for crime investigation, reciprocal-plot analysis of spatter may find application to volcanism and other topics, and is most immediately applicable for undergraduate science and engineering students in the context of crime-scene analysis.
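The reciprocal correlation plot can be checked with a drag-free simulation. For a drop launched from height h at polar angle θ, projectile kinematics give tan(impact angle) = h·(2/d) + tan θ, where d is the impact distance; so a linear fit of tan(impact angle) against 2/d has slope h when θ is concentrated in a narrow range. The parameter values below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Drag-free drops launched from height h_true with a narrow band of
# launch angles and freely varying speeds.
g, h_true = 9.81, 1.5
n = 200
theta = np.deg2rad(rng.normal(20.0, 2.0, n))
v = rng.uniform(2.0, 6.0, n)

# Time of flight: positive root of h + v sin(theta) t - g t^2 / 2 = 0.
vy0 = v * np.sin(theta)
t = (vy0 + np.sqrt(vy0**2 + 2 * g * h_true)) / g
d = v * np.cos(theta) * t                      # impact distance
tan_phi = (g * t - vy0) / (v * np.cos(theta))  # tangent of impact angle

# Reciprocal correlation plot: tan(phi) = h * (2/d) + tan(theta).
slope, intercept = np.polyfit(2.0 / d, tan_phi, 1)
print(f"estimated source height: {slope:.2f} m (true {h_true} m)")
```

Widening the launch-angle spread destroys the linear correlation, which is exactly the failure signature the abstract describes.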
Comparison of Compressed Sensing Algorithms for Inversion of 3-D Electrical Resistivity Tomography.
NASA Astrophysics Data System (ADS)
Peddinti, S. R.; Ranjan, S.; Kbvn, D. P.
2016-12-01
Image reconstruction problems arising in electrical resistivity tomography (ERT) are highly non-linear, sparse, and ill-posed. The inverse problem is even more severe when dealing with 3-D datasets, which result in large matrices. Conventional gradient-based techniques using L2-norm minimization with some form of regularization can impose a smoothness constraint on the solution. Compressed sensing (CS) is a relatively new technique that takes advantage of the inherent sparsity of the parameter space in one form or another. When favorable conditions are met, CS has been proven to be an efficient image reconstruction technique that uses limited observations without losing edge sharpness. This paper deals with the development of an open-source 3-D resistivity inversion tool within the CS framework. The forward model was adopted from RESINVM3D (Pidlisecky et al., 2007), with CS as the inverse code. A discrete cosine transform (DCT) was used to induce model sparsity in orthogonal form. Two CS-based algorithms, the interior-point method and two-step IST, were evaluated on a synthetic layered model with surface electrode observations. The algorithms were tested (in terms of quality and convergence) under varying degrees of parameter heterogeneity, model refinement, and reduced observation data space. In comparison to conventional gradient algorithms, CS was shown to reconstruct the sub-surface image effectively at lower computational cost. This was observed as a general increase in NRMSE from 0.5 in 10 iterations using the gradient algorithm to 0.8 in 5 iterations using the CS algorithms.
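A minimal sketch, with illustrative sizes, of sparse recovery in a DCT basis via iterative soft thresholding, which is a simplified stand-in for the two-step IST solver evaluated in the paper (not the paper's code):

```python
import numpy as np
from scipy.fft import dct, idct

# Signal sparse in the DCT domain, observed through an underdetermined
# Gaussian measurement matrix; sizes and the penalty weight are illustrative.
rng = np.random.default_rng(0)
n, m, k = 128, 64, 5

c_true = np.zeros(n)
c_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k) + 2.0
x_true = idct(c_true, norm="ortho")            # sparse in the DCT basis

A = rng.standard_normal((m, n)) / np.sqrt(m)
b = A @ x_true

# Effective operator on DCT coefficients: M c = A idct(c)
M = A @ idct(np.eye(n), axis=0, norm="ortho")
L = np.linalg.norm(M, 2) ** 2                  # Lipschitz constant of the gradient
lam = 0.01

def soft(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

# Iterative soft thresholding on min 0.5||Mc - b||^2 + lam ||c||_1
c = np.zeros(n)
obj = []
for _ in range(400):
    c = soft(c - (M.T @ (M @ c - b)) / L, lam / L)
    obj.append(0.5 * np.linalg.norm(M @ c - b) ** 2 + lam * np.abs(c).sum())
```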
NASA Astrophysics Data System (ADS)
Meric, Ilker; Johansen, Geir A.; Holstad, Marie B.; Mattingly, John; Gardner, Robin P.
2012-05-01
Prompt gamma-ray neutron activation analysis (PGNAA) has been, and still is, one of the major methods of choice for the elemental analysis of various bulk samples, mostly because PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of prompt gamma-ray data can be performed either through single-peak analysis or through the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents, or libraries. Through minimization of the chi-square value, this assumption leads to a set of linear equations that must be solved to obtain the library multipliers, a process that involves inversion of the covariance matrix. The least-squares solution may be extremely uncertain owing to ill-conditioning of the covariance matrix, which becomes ill-conditioned whenever two or more libraries are highly correlated. The ill-conditioning is also unavoidable whenever the sample contains trace amounts of certain elements, or elements with significantly low thermal neutron capture cross-sections. In this work, a new iterative approach that can handle the ill-conditioning of the covariance matrix is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with those obtained through a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately.
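The stabilizing effect of truncated SVD on an ill-conditioned least-squares system, such as the library-multiplier equations above, can be illustrated with a small synthetic example; the matrix, noise level, and truncation level are invented for illustration:

```python
import numpy as np

# Build an ill-conditioned system: two tiny singular values mimic highly
# correlated libraries, and a small noise term corrupts the data.
rng = np.random.default_rng(3)
n = 6
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.array([1.0, 0.5, 0.2, 0.1, 1e-8, 1e-10])   # near-singular directions
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)    # small measurement noise

def tsvd_solve(A, b, k):
    # Keep only the k largest singular values; discard the unstable directions.
    U, s, Vt = np.linalg.svd(A)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

x_naive = np.linalg.solve(A, b)    # noise blows up along tiny singular values
x_tsvd = tsvd_solve(A, b, k=4)     # drop the two ill-conditioned directions
```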
Villotti, Patrizia; Corbière, Marc; Dewa, Carolyn S; Fraccaroli, Franco; Sultan-Taïeb, Hélène; Zaniboni, Sara; Lecomte, Tania
2017-09-12
Compared to groups with other disabilities, people with a severe mental illness face the greatest stigma and barriers to employment opportunities. This study contributes to the understanding of the relationship between workplace social support and work productivity in people with severe mental illness working in Social Enterprises by taking into account the mediating role of self-stigma and job tenure self-efficacy. A total of 170 individuals with a severe mental disorder employed in a Social Enterprise filled out questionnaires assessing personal and work-related variables at Phase-1 (baseline) and Phase-2 (6-month follow-up). Process modeling was used to test for serial mediation. In the Social Enterprise workplace, social support yields better perceptions of work productivity through lower levels of internalized stigma and higher confidence in facing job-related problems. When testing serial multiple mediations, the specific indirect effect of high workplace social support on work productivity through both low internalized stigma and high job tenure self-efficacy was significant, with a point estimate of 1.01 (95% CI = 0.42, 2.28). Continued work in this area can provide guidance for organizations in the open labor market addressing the challenges posed by the work integration of people with severe mental illness. Implications for Rehabilitation: Work integration of people with severe mental disorders is difficult because of limited access to supportive and nondiscriminatory workplaces. Social enterprise represents an effective model for supporting people with severe mental disorders to integrate into the labor market. In the social enterprise workplace, social support yields better perceptions of work productivity through lower levels of internalized stigma and higher confidence in facing job-related problems.
Chatzitomaris, Apostolos; Hoermann, Rudolf; Midgley, John E.; Hering, Steffen; Urban, Aline; Dietrich, Barbara; Abood, Assjana; Klein, Harald H.; Dietrich, Johannes W.
2017-01-01
The hypothalamus–pituitary–thyroid feedback control is a dynamic, adaptive system. In situations of illness and deprivation of energy representing type 1 allostasis, the stress response operates to alter both its set point and peripheral transfer parameters. In contrast, type 2 allostatic load, typically effective in psychosocial stress, pregnancy, metabolic syndrome, and adaptation to cold, produces a nearly opposite phenotype of predictive plasticity. The non-thyroidal illness syndrome (NTIS) or thyroid allostasis in critical illness, tumors, uremia, and starvation (TACITUS), commonly observed in hospitalized patients, displays a historically well-studied pattern of allostatic thyroid response. This is characterized by decreased total and free thyroid hormone concentrations and varying levels of thyroid-stimulating hormone (TSH) ranging from decreased (in severe cases) to normal or even elevated (mainly in the recovery phase) TSH concentrations. An acute versus chronic stage (wasting syndrome) of TACITUS can be discerned. The two types differ in molecular mechanisms and prognosis. The acute adaptation of thyroid hormone metabolism to critical illness may prove beneficial to the organism, whereas the far more complex molecular alterations associated with chronic illness frequently lead to allostatic overload. The latter is associated with poor outcome, independently of the underlying disease. Adaptive responses of thyroid homeostasis extend to alterations in thyroid hormone concentrations during fetal life, periods of weight gain or loss, thermoregulation, physical exercise, and psychiatric diseases. The various forms of thyroid allostasis pose serious problems in differential diagnosis of thyroid disease. This review article provides an overview of physiological mechanisms as well as major diagnostic and therapeutic implications of thyroid allostasis under a variety of developmental and straining conditions. PMID:28775711
NASA Astrophysics Data System (ADS)
Berger, Marsha; Goodman, Jonathan
2018-04-01
This paper examines the question of whether smaller asteroids that burst in the air over water can generate tsunamis that could pose a threat to distant locations. Such airburst-generated tsunamis are qualitatively different from the more frequently studied earthquake-generated tsunamis, and differ as well from tsunamis generated by asteroids that strike the ocean. Numerical simulations using the shallow water equations are presented in several settings, demonstrating very little tsunami threat from this scenario. We analyze a model problem with an explicit solution that demonstrates and explains the same phenomena found in the computations. We discuss the question of whether compressibility and dispersion are important effects that should be included, and show results from a more sophisticated model problem, using the linearized Euler equations, that begins to address this.
Problem Posing and Solving with Mathematical Modeling
ERIC Educational Resources Information Center
English, Lyn D.; Fox, Jillian L.; Watters, James J.
2005-01-01
Mathematical modeling is explored as both problem posing and problem solving from two perspectives, that of the child and the teacher. Mathematical modeling provides rich learning experiences for elementary school children and their teachers.
An incremental strategy for calculating consistent discrete CFD sensitivity derivatives
NASA Technical Reports Server (NTRS)
Korivi, Vamshi Mohan; Taylor, Arthur C., III; Newman, Perry A.; Hou, Gene W.; Jones, Henry E.
1992-01-01
In this preliminary study involving advanced computational fluid dynamics (CFD) codes, an incremental formulation, also known as the 'delta' or 'correction' form, is presented for solving the very large sparse systems of linear equations that are associated with aerodynamic sensitivity analysis. For typical problems in 2D, a direct solution method can be applied to these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods appear to be needed for future 3D applications, however, because direct solution methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form result in certain difficulties, such as ill-conditioning of the coefficient matrix, which can be overcome when the equations are cast in the incremental form; these and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite-volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two laminar sample problems: (1) transonic flow through a double-throat nozzle; and (2) flow over an isolated airfoil.
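A toy Python sketch of the incremental (delta) form: rather than solving A x = b directly, one repeatedly solves a cheaper approximate system for a correction driven by the current residual. The diagonal approximation below is purely illustrative, not the paper's CFD operator:

```python
import numpy as np

# Incremental ("delta"/correction) iteration:
#     A_approx * dx = b - A @ x_k,   x_{k+1} = x_k + dx
# with A_approx a cheaper, better-conditioned approximation of A.
rng = np.random.default_rng(0)
n = 50
A = np.eye(n) * 4.0 + 0.1 * rng.standard_normal((n, n))  # diagonally dominant
b = rng.standard_normal(n)

A_approx = np.diag(np.diag(A))    # e.g. keep only the diagonal
x = np.zeros(n)
for _ in range(100):
    residual = b - A @ x          # defect of the current iterate
    dx = np.linalg.solve(A_approx, residual)
    x = x + dx

x_direct = np.linalg.solve(A, b)  # reference direct solution
```

Convergence requires the iteration matrix I - A_approx^{-1} A to be contractive, which the diagonal dominance above guarantees.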
Common mental health problems in immigrants and refugees: general approach in primary care
Kirmayer, Laurence J.; Narasiah, Lavanya; Munoz, Marie; Rashid, Meb; Ryder, Andrew G.; Guzder, Jaswant; Hassan, Ghayda; Rousseau, Cécile; Pottie, Kevin
2011-01-01
Background: Recognizing and appropriately treating mental health problems among new immigrants and refugees in primary care poses a challenge because of differences in language and culture and because of specific stressors associated with migration and resettlement. We aimed to identify risk factors and strategies in the approach to mental health assessment and to prevention and treatment of common mental health problems for immigrants in primary care. Methods: We searched and compiled literature on prevalence and risk factors for common mental health problems related to migration, the effect of cultural influences on health and illness, and clinical strategies to improve mental health care for immigrants and refugees. Publications were selected on the basis of relevance, use of recent data and quality in consultation with experts in immigrant and refugee mental health. Results: The migration trajectory can be divided into three components: premigration, migration and postmigration resettlement. Each phase is associated with specific risks and exposures. The prevalence of specific types of mental health problems is influenced by the nature of the migration experience, in terms of adversity experienced before, during and after resettlement. Specific challenges in migrant mental health include communication difficulties because of language and cultural differences; the effect of cultural shaping of symptoms and illness behaviour on diagnosis, coping and treatment; differences in family structure and process affecting adaptation, acculturation and intergenerational conflict; and aspects of acceptance by the receiving society that affect employment, social status and integration. These issues can be addressed through specific inquiry, the use of trained interpreters and culture brokers, meetings with families, and consultation with community organizations. 
Interpretation: Systematic inquiry into patients’ migration trajectory and subsequent follow-up on culturally appropriate indicators of social, vocational and family functioning over time will allow clinicians to recognize problems in adaptation and undertake mental health promotion, disease prevention or treatment interventions in a timely way. PMID:20603342
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The ℓ1-minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse-coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
Inverse analysis and regularisation in conditional source-term estimation modelling
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture-fraction grid for calculating the mixture-fraction PDF. The magnitude of the perturbations is estimated using a box filter, and the uncertainty in the solution is determined from the width of the credible intervals. The width of the credible intervals is significantly reduced by the inclusion of a smoothing prior, and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step, and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals do not depend on a previous solution and better predict characteristics at higher mixture-fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
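A minimal sketch of zeroth-order versus first-order Tikhonov regularisation on a discretized Fredholm integral of the first kind; the kernel, noise level, and regularisation parameters below are illustrative, not the CSE model:

```python
import numpy as np

# Toy first-kind Fredholm problem: b(x) = \int K(x,y) f(y) dy, discretized.
rng = np.random.default_rng(2)
n = 60
y = np.linspace(0.0, 1.0, n)
K = np.exp(-((y[:, None] - y[None, :]) ** 2) / 0.01) / n
f_true = np.sin(np.pi * y) ** 2
b = K @ f_true + 1e-4 * rng.standard_normal(n)   # noisy data

def solve(L, lam):
    # Tikhonov: minimize ||K f - b||^2 + lam ||L f||^2 via normal equations.
    return np.linalg.solve(K.T @ K + lam * L.T @ L, K.T @ b)

L0 = np.eye(n)                        # zeroth-order penalty
L1 = np.diff(np.eye(n), axis=0)       # first-difference (smoothing) penalty
f0 = solve(L0, 1e-6)
f1 = solve(L1, 1e-6)
f_raw = np.linalg.lstsq(K, b, rcond=None)[0]     # unregularized, noise-dominated
```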
Cerenkov luminescence tomography based on preconditioning orthogonal matching pursuit
NASA Astrophysics Data System (ADS)
Liu, Haixiao; Hu, Zhenhua; Wang, Kun; Tian, Jie; Yang, Xin
2015-03-01
Cerenkov luminescence imaging (CLI) is a novel optical imaging method that has been shown to be a potential substitute for traditional radionuclide imaging such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT). This imaging method inherits the high sensitivity of nuclear medicine and the low cost of optical molecular imaging. To obtain depth information about the radioactive isotope, Cerenkov luminescence tomography (CLT) is established and the 3D distribution of the isotope is reconstructed. However, because of strong absorption and scattering, reconstruction of the CLT sources is always converted into an ill-posed linear system that is difficult to solve. In this work, the sparse nature of the light source was taken into account, and a preconditioning orthogonal matching pursuit (POMP) method was established to effectively reduce the ill-posedness and obtain better reconstruction accuracy. To assess the accuracy and speed of this algorithm, a heterogeneous numerical phantom experiment and an in vivo mouse experiment were conducted. Both the simulation and the mouse experiment showed that our reconstruction method provides more accurate results than the traditional Tikhonov regularization method and the ordinary orthogonal matching pursuit (OMP) method. Our reconstruction method will provide technical support for biological applications of Cerenkov luminescence.
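For reference, ordinary OMP (the baseline the paper improves upon) fits in a few lines; the preconditioned variant additionally transforms the system before this greedy loop. Problem sizes below are illustrative:

```python
import numpy as np

# Orthogonal matching pursuit: greedily select the column most correlated with
# the current residual, then re-fit all selected coefficients by least squares.
rng = np.random.default_rng(0)
m, n, k = 60, 100, 4
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)                  # unit-norm columns

x_true = np.zeros(n)
idx = rng.choice(n, k, replace=False)
x_true[idx] = rng.uniform(1.0, 2.0, k) * rng.choice([-1.0, 1.0], k)
b = A @ x_true                                  # noiseless measurements

def omp(A, b, k):
    support, r, coef = [], b.copy(), np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ coef            # residual after re-fit
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

x_hat = omp(A, b, k)
```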
Diagnosis of organic brain syndrome: an emergency department dilemma.
Dubin, W R; Weiss, K J
1984-01-01
Delirium and dementia frequently pose a diagnostic dilemma for clinicians in the emergency department. The overlap of symptoms between organic brain syndrome and functional psychiatric illness, coupled with a dramatic presentation, often leads to a premature psychiatric diagnosis. In this paper, the authors discuss those symptoms of organic brain syndrome that most frequently generate diagnostic confusion in the emergency department and result in a misdiagnosis of functional illness.
Multichannel myopic deconvolution in underwater acoustic channels via low-rank recovery
Tian, Ning; Byun, Sung-Hoon; Sabra, Karim; Romberg, Justin
2017-01-01
This paper presents a technique for solving the multichannel blind deconvolution problem. The authors observe the convolution of a single (unknown) source with K different (unknown) channel responses; from these channel outputs, the authors want to estimate both the source and the channel responses. The authors show how this classical signal processing problem can be viewed as solving a system of bilinear equations, and in turn can be recast as recovering a rank-1 matrix from a set of linear observations. Results of prior studies in the area of low-rank matrix recovery have identified effective convex relaxations for problems of this type and efficient, scalable heuristic solvers that enable these techniques to work with thousands of unknown variables. The authors show how a priori information about the channels can be used to build a linear model for the channels, which in turn makes solving these systems of equations well-posed. This study demonstrates the robustness of this methodology to measurement noises and parametrization errors of the channel impulse responses with several stylized and shallow water acoustic channel simulations. The performance of this methodology is also verified experimentally using shipping noise recorded on short bottom-mounted vertical line arrays. PMID:28599565
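A two-channel version of the bilinear system above can be sketched with the classical cross-relation trick, one concrete way to see how the problem becomes linear in the stacked channel vector; the rank-1 lifting described in the paper generalizes this idea. The signal and channels below are synthetic:

```python
import numpy as np
from scipy.linalg import toeplitz

# y1 = s * h1 and y2 = s * h2 imply the cross-relation y1 * h2 = y2 * h1,
# which is linear in [h2; h1]; its one-dimensional nullspace gives the
# channels up to a single scalar.
rng = np.random.default_rng(4)
ns, L = 60, 4
s = rng.standard_normal(ns)                   # unknown source
h1 = rng.standard_normal(L)                   # unknown channels
h2 = rng.standard_normal(L)
y1 = np.convolve(s, h1)
y2 = np.convolve(s, h2)

def conv_matrix(y, L):
    # (len(y)+L-1) x L matrix C with C @ h == np.convolve(y, h)
    col = np.concatenate([y, np.zeros(L - 1)])
    row = np.concatenate([[y[0]], np.zeros(L - 1)])
    return toeplitz(col, row)

# y1 * h2 - y2 * h1 = 0  =>  [C(y1), -C(y2)] @ [h2; h1] = 0
T = np.hstack([conv_matrix(y1, L), -conv_matrix(y2, L)])
_, _, Vt = np.linalg.svd(T)
null = Vt[-1]                                 # singular vector of smallest sigma
h2_hat, h1_hat = null[:L], null[L:]

truth = np.concatenate([h2, h1])
est = np.concatenate([h2_hat, h1_hat])
cos = abs(truth @ est) / (np.linalg.norm(truth) * np.linalg.norm(est))
```

Identifiability requires the channels to share no common zeros, which holds for generic random channels.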
Problem-posing in education: transformation of the practice of the health professional.
Casagrande, L D; Caron-Ruffino, M; Rodrigues, R A; Vendrúsculo, D M; Takayanagui, A M; Zago, M M; Mendes, M D
1998-02-01
This study was developed by a group of professionals from different areas (nurses and educators) concerned with health education. It proposes the use of a problem-posing model for the transformation of professional practice. The concept and functions of the model and their relationships with the educative practice of health professionals are discussed. The model of problem-posing education is presented (compared to traditional, "banking" education), and four innovative experiences of teaching-learning are reported based on this model. These experiences, carried out in areas of environmental and occupational health and patient education have shown the applicability of the problem-posing model to the practice of the health professional, allowing transformation.
An interior-point method-based solver for simulation of aircraft parts riveting
NASA Astrophysics Data System (ADS)
Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael
2018-05-01
The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(1/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations, because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
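The benefit of preconditioning an ill-conditioned SPD system can be sketched with a hand-rolled conjugate-gradient solver and simple Jacobi (diagonal) scaling; the badly scaled matrix below is a synthetic stand-in, not the riveting contact system, and this is not the authors' preconditioner:

```python
import numpy as np

def cg(A, b, M_inv=None, tol=1e-8, maxiter=5000):
    """Conjugate gradient; M_inv applies the inverse preconditioner."""
    x = np.zeros(len(b))
    r = b.copy()
    z = M_inv(r) if M_inv else r
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, it
        z = M_inv(r) if M_inv else r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

rng = np.random.default_rng(0)
n = 200
scales = np.logspace(0, 3, n)                 # widely spread scales
W = rng.standard_normal((n, n))
B = W @ W.T / n + np.eye(n)                   # well-conditioned SPD core
A = np.sqrt(scales)[:, None] * B * np.sqrt(scales)[None, :]
b = rng.standard_normal(n)

diag_A = np.diag(A).copy()
x_plain, it_plain = cg(A, b)
x_prec, it_prec = cg(A, b, M_inv=lambda r: r / diag_A)  # Jacobi preconditioner
```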
The inverse problem of estimating the gravitational time dilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.
2016-11-15
Precise testing of the gravitational time dilation effect suggests comparing the clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via the electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations of a wide class of distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f)^γ, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem that requires regularization methods to guarantee high-quality imaging. An L1-norm data-fidelity term combined with a total variation (TV) regularizer forms a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts that degrade image quality. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced in place of TV to eliminate these undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse-noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
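A scaled-down analogue of the ADMM splitting used for such problems, applied here to 1-D TV denoising (plain TV rather than TGV, denoising rather than deblurring, and hand-picked scalar parameters instead of the spatially adapted ones proposed in the paper):

```python
import numpy as np

# ADMM on min_x 0.5||x - y||^2 + lam ||D x||_1 with the split z = D x.
rng = np.random.default_rng(5)
x_true = np.concatenate([np.zeros(80), np.ones(70), 0.4 * np.ones(50)])
n = len(x_true)
y = x_true + 0.1 * rng.standard_normal(n)       # noisy observation

D = np.diff(np.eye(n), axis=0)                  # first-difference operator
lam, rho = 0.3, 1.0

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = y.copy()
z = D @ x
u = np.zeros(n - 1)
P = np.linalg.inv(np.eye(n) + rho * D.T @ D)    # x-update matrix, factored once
for _ in range(300):
    x = P @ (y + rho * D.T @ (z - u))           # quadratic x-update
    z = soft(D @ x + u, lam / rho)              # shrinkage on the differences
    u = u + D @ x - z                           # dual (scaled) ascent
```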
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult for images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination source close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then used as the input images for stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out a real robot manipulation task. PMID:28629139
Berlow, Noah; Pal, Ranadip
2011-01-01
Genetic Regulatory Networks (GRNs) are frequently modeled as Markov chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov chain from noisy and limited experimental data is ill-posed and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN, as the steady states are considered representative of the phenotypes. We consider robust stationary control policies with best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and by utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
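The rank-one stationary-distribution update mentioned above can be sketched with the Sherman-Morrison formula; the chain, the perturbed row, and the perturbation vector below are arbitrary synthetic choices, not a GRN inferred from data:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 8
P = rng.random((n, n))
P /= P.sum(axis=1, keepdims=True)               # row-stochastic transition matrix

def stationary_system(P):
    # pi solves pi (I - P) = 0 with sum(pi) = 1: replace the last equation
    # of (I - P^T) x = 0 by the normalization row of ones.
    n = len(P)
    A = np.eye(n) - P.T
    A[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return A, b

A, b = stationary_system(P)
pi = np.linalg.solve(A, b)                      # stationary distribution

# Perturb row i of P by v with sum(v) = 0, so the rows of P' still sum to one.
i = 2
v = np.zeros(n)
v[0], v[1] = 0.02, -0.02
P_new = P.copy()
P_new[i] += v

# P' = P + e_i v^T  =>  A' = A - w e_i^T, with w equal to v except on the
# (unchanged) normalization row of A.
w = v.copy()
w[-1] = 0.0
Ainv_w = np.linalg.solve(A, w)
# Sherman-Morrison: (A - w e_i^T)^{-1} b = pi + pi[i] / (1 - (A^{-1} w)[i]) * A^{-1} w
pi_new = pi + (pi[i] / (1.0 - Ainv_w[i])) * Ainv_w
```

Only one extra linear solve against the unperturbed matrix is needed, instead of refactoring the perturbed system.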
[Problem-posing as a nutritional education strategy with obese teenagers].
Rodrigues, Erika Marafon; Boog, Maria Cristina Faber
2006-05-01
Obesity is a public health issue with relevant social determinants in its etiology and where interventions with teenagers encounter complex biopsychological conditions. This study evaluated intervention in nutritional education through a problem-posing approach with 22 obese teenagers, treated collectively and individually for eight months. Speech acts were collected through the use of word cards, observer recording, and tape-recording. The study adopted a qualitative methodology, and the approach involved content analysis. Problem-posing facilitated changes in eating behavior, triggering reflections on nutritional practices, family circumstances, social stigma, interaction with health professionals, and religion. Teenagers under individual care posed problems more effectively in relation to eating, while those under collective care posed problems in relation to family and psychological issues, with effective qualitative eating changes in both groups. The intervention helped teenagers understand their life history and determinants of eating behaviors, spontaneously implementing eating changes and making them aware of possibilities for maintaining the new practices and autonomously exercising their role as protagonists in their own health care.
Martins Alho, Miriam A; Marrero-Ponce, Yovani; Barigye, Stephen J; Meneses-Marcel, Alfredo; Machado Tugores, Yanetsy; Montero-Torres, Alina; Gómez-Barrio, Alicia; Nogal, Juan J; García-Sánchez, Rory N; Vega, María Celeste; Rolón, Miriam; Martínez-Fernández, Antonio R; Escario, José A; Pérez-Giménez, Facundo; Garcia-Domenech, Ramón; Rivera, Norma; Mondragón, Ricardo; Mondragón, Mónica; Ibarra-Velarde, Froylán; Lopez-Arencibia, Atteneri; Martín-Navarro, Carmen; Lorenzo-Morales, Jacob; Cabrera-Serra, Maria Gabriela; Piñero, Jose; Tytgat, Jan; Chicharro, Roberto; Arán, Vicente J
2014-03-01
Protozoan parasites have been one of the most significant public health problems for centuries, and several human infections caused by them have massive global impact. Most of the current drugs used to treat these illnesses have been used for decades and have many limitations, such as the emergence of drug resistance, severe side-effects, low-to-medium drug efficacy, administration routes, cost, etc. These drugs have been largely neglected as models for drug development because they are mostly used in countries with limited resources and, as a consequence, with scarce marketing possibilities. Nowadays, there is a pressing need to identify and develop new drug-based antiprotozoan therapies. In an effort to overcome this problem, the main purpose of this study is to develop a QSARs-based ensemble classifier for antiprotozoan drug-like entities from a heterogeneous compound collection. Here, we use some of the TOMOCOMD-CARDD molecular descriptors and linear discriminant analysis (LDA) to derive individual linear classification functions in order to discriminate between antiprotozoan and non-antiprotozoan compounds, as a way to enable the computational screening of virtual combinatorial datasets and/or drugs already approved. Firstly, we constructed a wide-spectrum benchmark database comprising 680 organic chemicals with great structural variability (254 of them antiprotozoan agents and 426 drugs having other clinical uses). This series of compounds was processed by a k-means cluster analysis in order to design training and predicting sets. In total, seven discriminant functions were obtained using the whole set of atom-based linear indices. All the LDA-based QSAR models show accuracies above 85% in the training set, and values of the Matthews correlation coefficient (C) vary from 0.70 to 0.86. The external validation set shows rather good global classifications of around 80% (92.05% for the best equation).
Later, we developed a multi-agent QSAR classification system, in which the individual QSAR outputs are the inputs of the aforementioned fusion approach. Finally, the fusion model was used for the identification of a novel generation of lead-like antiprotozoan compounds by using ligand-based virtual screening of 'available' small molecules (with synthetic feasibility) in our 'in-house' library. A new molecular subsystem (quinoxalinones) was then theoretically selected as a promising lead series, and its derivatives subsequently synthesized, structurally characterized, and experimentally assayed by using in vitro screening that took into consideration a battery of five parasite-based assays. The chemicals 11(12) and 16 are the most active (hits) against apicomplexa (sporozoa) and mastigophora (flagellata) subphylum parasites, respectively. Both compounds depicted good activity in every protozoan in vitro panel and they did not show unspecific cytotoxicity on the host cells. The described technical framework seems to be a promising QSAR-classifier tool for the molecular discovery and development of novel classes of broad-antiprotozoan-spectrum drugs, which may meet the dual challenges posed by drug-resistant parasites and the rapid progression of protozoan illnesses. Copyright © 2014 Elsevier Ltd. All rights reserved.
Reverse Flood Routing with the Lag-and-Route Storage Model
NASA Astrophysics Data System (ADS)
Mazi, K.; Koussis, A. D.
2010-09-01
This work presents a method for reverse routing of flood waves in open channels, which is an inverse problem of the signal identification type. Inflow determination from outflow measurements is useful in hydrologic forensics and in optimal reservoir control, but has seldom been studied. Such problems are ill posed, and their solution is sensitive to small perturbations present in the data, or to any related uncertainty. Therefore, the major difficulty in solving this inverse problem consists in controlling the amplification of the errors that inevitably befall flow measurements, from which the inflow signal is to be determined. The lag-and-route model offers a convenient framework for reverse routing, because not only is formal deconvolution not required, but reverse routing also proceeds through a single linear reservoir. In addition, this inversion degenerates to calculating the intermediate inflow (prior to the lag step) simply as the sum of the outflow and of its time derivative multiplied by the reservoir's time constant. The remaining time shifting (lag) of the intermediate, reversed flow presents no complications, as pure translation causes no error amplification. Note that reverse routing with the inverted Muskingum scheme (Koussis et al., submitted to the 12th Plinius Conference) fails when that scheme is specialised to the Kalinin-Miljukov model (linear reservoirs in series). The principal functioning of the reverse routing procedure was verified first with perfect field data (an outflow hydrograph generated by forward routing of a known inflow hydrograph). The field data were then seeded with random error. To smooth the oscillations caused by the imperfect (measured) outflow data, we applied a multipoint Savitzky-Golay low-pass filter. The combination of reverse routing and filtering recovered the inflow signal effectively and extremely efficiently.
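The single-reservoir inversion at the heart of this approach, I(t) = O(t) + K dO/dt, combined with Savitzky-Golay smoothing of the noisy outflow, might be sketched as follows. The hydrograph shape, time constant K, noise level, and filter settings are illustrative assumptions, not values from the study:

```python
import numpy as np
from scipy.signal import savgol_filter

K, dt = 2.0, 0.01                                 # reservoir time constant, time step
t = np.arange(0.0, 20.0, dt)
inflow = 5.0 * np.exp(-((t - 5.0) / 1.5) ** 2)    # known test inflow hydrograph

# Forward routing through one linear reservoir (storage S = K*O): exact update
# for inflow held constant over each step.
outflow = np.zeros_like(inflow)
decay = np.exp(-dt / K)
for i in range(1, len(t)):
    i_avg = 0.5 * (inflow[i - 1] + inflow[i])
    outflow[i] = outflow[i - 1] * decay + i_avg * (1.0 - decay)

# Seed the "measured" outflow with random error, then smooth it with a
# multipoint Savitzky-Golay low-pass filter before inverting.
rng = np.random.default_rng(1)
measured = outflow + rng.normal(0.0, 0.02, size=outflow.shape)
smoothed = savgol_filter(measured, window_length=101, polyorder=3)

# Reverse routing: I(t) = O(t) + K * dO/dt -- no formal deconvolution needed.
recovered = smoothed + K * np.gradient(smoothed, dt)
```

Without the filtering step, the derivative term amplifies the measurement noise; with it, the recovered hydrograph stays close to the true inflow.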
Specifically, we compared the reverse routing results of the inverted lag-and-route model and of the inverted Kalinin-Miljukov model. The latter applies the lag-and-route model’s single-reservoir inversion scheme sequentially to its cascade of linear reservoirs, the number of which is related to the stream's hydromorphology. For this purpose, we used the example of Bruen & Dooge (2007), who back-routed flow hydrographs in a 100-km long prismatic channel using a scheme for the reverse solution of the St. Venant equations of flood wave motion. The lag-and-route reverse routing model recovered the inflow hydrograph with comparable accuracy to that of the multi-reservoir, inverted Kalinin-Miljukov model, both performing as well as the box-scheme for reverse routing with the St. Venant equations. In conclusion, the success in the regaining of the inflow signal by the devised single-reservoir reverse routing procedure, with multipoint low-pass filtering, can be attributed to its simple computational structure that endows it with remarkable robustness and exceptional efficiency.
ERIC Educational Resources Information Center
Contreras, José N.
2013-01-01
This paper discusses a classroom experience in which a group of prospective secondary mathematics teachers were asked to create, cooperatively (in class) and individually, problems related to Viviani's problem using a problem-posing framework. When appropriate, students used Sketchpad to explore the problem to better understand its attributes…
ERIC Educational Resources Information Center
Ünlü, Melihan
2017-01-01
The aim of the study was to determine mathematics teacher candidates' knowledge about problem solving strategies through problem posing. This qualitative research was conducted with 95 mathematics teacher candidates studying at education faculty of a public university during the first term of the 2015-2016 academic year in Turkey. Problem Posing…
The Chronically Ill Child in the School.
ERIC Educational Resources Information Center
Sexson, Sandra; Madan-Swain, Avi
1995-01-01
Examines the effects of chronic illness on the school-age population. Facilitating successful functioning of chronically ill youths is a growing problem. Focuses on problems encountered by the chronically ill student who has either been diagnosed with a chronic illness or who has survived such an illness. Discusses the role of the school…
NASA Astrophysics Data System (ADS)
Parshin, D. A.
2017-09-01
We study the processes of additive formation of spherically shaped rigid bodies due to the uniform accretion of additional matter to their surface in an arbitrary centrally symmetric force field. A special case of such a field can be the gravitational or electrostatic force field. We consider the elastic deformation of the formed body. The body is assumed to be isotropic, with elastic moduli arbitrarily varying along the radial coordinate. We assume that arbitrary initial circular stresses can arise in the additional material added to the body in the process of its formation. In the framework of the linear mechanics of growing bodies, the mathematical model of the processes under study is constructed in the quasistatic approximation. The boundary value problems describing the development of the stress-strain state of the object under study before the beginning of the process and during the entire process of its formation are posed. Closed analytic solutions of the posed problems are constructed by quadratures for some general types of material inhomogeneity. Important typical characteristics of the mechanical behavior of spherical bodies additively formed in a central force field are revealed. These characteristics substantially distinguish such bodies from already completely composed bodies of similar dimensions and properties which are placed in the force field and are described by problems of the mechanics of deformable solids in the classical statement, disregarding the mechanical aspects of additive processes.
Sleep Problems in Children and Adolescents with Common Medical Conditions
Lewandowski, Amy S.; Ward, Teresa M.; Palermo, Tonya M.
2011-01-01
Sleep is critically important to children's health and well-being. Untreated sleep disturbances and sleep disorders pose significant adverse daytime consequences and place children at considerable risk for poor health outcomes. Sleep disturbances occur at a greater frequency in children with acute and chronic medical conditions compared to otherwise healthy peers. Sleep disturbances in medically ill children can be associated with sleep disorders (e.g., sleep disordered breathing, restless legs syndrome), co-morbid with acute and chronic conditions (e.g., asthma, arthritis, cancer), or secondary to underlying disease-related mechanisms (e.g., airway restriction, inflammation), treatment regimens, or hospitalization. Clinical management should include a multidisciplinary approach with particular emphasis on routine, regular sleep assessments, prevention of daytime consequences, and promotion of healthy sleep habits and health outcomes. PMID:21600350
Applications of quantum entropy to statistics
NASA Astrophysics Data System (ADS)
Silver, R. N.; Martz, H. F.
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
DLTPulseGenerator: A library for the simulation of lifetime spectra based on detector-output pulses
NASA Astrophysics Data System (ADS)
Petschke, Danny; Staab, Torsten E. M.
2018-01-01
The quantitative analysis of lifetime spectra, relevant in both the life and materials sciences, is an ill-posed inverse problem and, hence, places most stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++ 11, which provides a simulation of lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output pulses. Those pulses require constant fraction discrimination (CFD) for the determination of the exact timing signal and, thus, the calculation of the time difference, i.e. the lifetime. To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin.
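A minimal sketch of constant-fraction timing on a pair of simulated detector pulses follows. This threshold-at-a-fraction-of-peak variant with linear interpolation is a simplification of a full delay-and-subtract CFD stage, and the pulse shapes and parameters are assumptions, not the library's:

```python
import numpy as np

def cfd_timestamp(t, pulse, fraction=0.25):
    """Leading-edge timestamp where the pulse first crosses `fraction` of its
    peak amplitude, refined by linear interpolation between samples."""
    threshold = fraction * pulse.max()
    i = int(np.argmax(pulse >= threshold))   # first sample at/above threshold
    t0, t1 = t[i - 1], t[i]
    y0, y1 = pulse[i - 1], pulse[i]
    return t0 + (threshold - y0) * (t1 - t0) / (y1 - y0)

# Two identical start/stop detector pulses separated by a lifetime of 2.30 ns.
t = np.arange(0.0, 100.0, 0.1)               # time axis in ns
shape = lambda t0: np.exp(-0.5 * ((t - t0) / 3.0) ** 2)
start, stop = shape(40.0), shape(42.3)

# The lifetime is the time difference between the two CFD timing signals.
lifetime = cfd_timestamp(t, stop) - cfd_timestamp(t, start)
```

Because the timing is taken at a fixed fraction of each pulse's own amplitude, the result is insensitive to amplitude variations between pulses, which is the point of the CFD principle.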
Adaptive Leadership Framework for Chronic Illness
Anderson, Ruth A.; Bailey, Donald E.; Wu, Bei; Corazzini, Kirsten; McConnell, Eleanor S.; Thygeson, N. Marcus; Docherty, Sharron L.
2015-01-01
We propose the Adaptive Leadership Framework for Chronic Illness as a novel framework for conceptualizing, studying, and providing care. This framework is an application of the Adaptive Leadership Framework developed by Heifetz and colleagues for business. Our framework views health care as a complex adaptive system and addresses the intersection at which people with chronic illness interface with the care system. We shift focus from symptoms to symptoms and the challenges they pose for patients/families. We describe how providers and patients/families might collaborate to create shared meaning of symptoms and challenges to coproduce appropriate approaches to care. PMID:25647829
NASA Astrophysics Data System (ADS)
Polydorides, Nick; Lionheart, William R. B.
2002-12-01
The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional, inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al. in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist development, we have created and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions, based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.
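Regularized nonlinear solvers of the kind mentioned above typically iterate Tikhonov-regularized Gauss-Newton updates. A sketch of one such step is given below; the toy Jacobian with rapidly decaying singular values is a hypothetical stand-in for an EIT sensitivity matrix, and `lam` and `L` are assumed regularization choices:

```python
import numpy as np

def regularized_gn_step(J, residual, lam, L):
    """One Gauss-Newton update for a regularized nonlinear inverse problem:
    solve (J^T J + lam^2 L^T L) dx = J^T r, with Jacobian (sensitivity)
    matrix J, data residual r, and regularization matrix L."""
    A = J.T @ J + lam**2 * (L.T @ L)
    return np.linalg.solve(A, J.T @ residual)

# Toy ill-conditioned problem: 40 boundary measurements, 20 parameters.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(40, 40)))
V, _ = np.linalg.qr(rng.normal(size=(20, 20)))
s = np.logspace(0, -8, 20)                 # rapidly decaying singular values
J = (U[:, :20] * s) @ V.T

x_true = rng.normal(size=20)
r = J @ x_true + rng.normal(0.0, 1e-6, size=40)   # residual at x = 0
dx = regularized_gn_step(J, r, lam=1e-4, L=np.eye(20))
```

The regularization term stabilizes the small singular directions that would otherwise amplify measurement noise, at the cost of not updating the poorly determined components.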
Folk concepts of mental disorders among Chinese-Australian patients and their caregivers.
Hsiao, Fei-Hsiu; Klimidis, Steven; Minas, Harry I; Tan, Eng S
2006-07-01
This paper reports a study of (a) popular conceptions of mental illness throughout history and (b) how current social and cultural knowledge about mental illness influences Chinese-Australian patients' and caregivers' understanding of mental illness, and the consequences of this for explaining and labelling patients' problems. According to traditional Chinese cultural knowledge about health and illness, Chinese people believe that psychotic illness is the only type of mental illness, and that non-psychotic illness is a physical illness. Regarding patients' problems as not being due to mental illness may result in delayed use of Western mental health services. Data collection took place in 2001. Twenty-eight Chinese-Australian patients with mental illness and their caregivers were interviewed at home, drawing on Kleinman's explanatory model and studies of cultural transmission. Interviews were tape-recorded, transcribed, and analysed for plots and themes. Chinese-Australians combined traditional knowledge with Western medical knowledge to develop their own labels for various kinds of mental disorders, including 'mental illness', 'physical illness', 'normal problems of living' and 'psychological problems'. As they learnt more about Western conceptions of psychology and psychiatry, their understanding of some disorders changed. What was previously ascribed to non-mental disorders was often re-labelled as 'mental illness' or 'psychological problems'. Educational programmes aimed at introducing Chinese immigrants to counselling and other psychiatric services could be made more effective if designers gave greater consideration to Chinese understandings of mental illness.
Avidan, Michael S; Searleman, Adam C; Storandt, Martha; Barnett, Kara; Vannucci, Andrea; Saager, Leif; Xiong, Chengjie; Grant, Elizabeth A; Kaiser, Dagmar; Morris, John C; Evers, Alex S
2009-01-01
Background Persistent postoperative cognitive decline is thought to be a public health problem, but its severity may have been overestimated because of limitations in statistical methodology. This study assessed whether long-term cognitive decline occurred after surgery or illness by using an innovative approach and including participants with early Alzheimer's disease to overcome some limitations. Methods In this retrospective cohort study, three groups were identified from participants tested annually at Washington University's Alzheimer Disease Research Center in St. Louis: those with non-cardiac surgery, illness, or neither. This enabled long-term tracking of cognitive function before and after surgery and illness. The effect of surgery and illness on longitudinal cognitive course was analyzed using a general linear mixed effects model. For participants without initial dementia, time to dementia onset was analyzed using sequential Cox proportional hazards regression. Results Of the 575 participants, 214 were nondemented and 361 had very mild or mild dementia at enrollment. Cognitive trajectories did not differ among the three groups (surgery, illness, control), although demented participants declined more markedly than nondemented. Of the initially nondemented participants, 23% progressed to a clinical dementia rating greater than zero, but this was not more common following surgery or illness. Conclusions The study did not detect long-term cognitive decline independently attributable to surgery or illness nor were these events associated with accelerated progression to dementia. The decision to proceed with surgery in elderly people, including those with early Alzheimer's disease, may presently be made without factoring in the specter of persistent cognitive deterioration. PMID:19786858
Avidan, Michael S; Searleman, Adam C; Storandt, Martha; Barnett, Kara; Vannucci, Andrea; Saager, Leif; Xiong, Chengjie; Grant, Elizabeth A; Kaiser, Dagmar; Morris, John C; Evers, Alex S
2009-11-01
Persistent postoperative cognitive decline is thought to be a public health problem, but its severity may have been overestimated because of limitations in statistical methodology. This study assessed whether long-term cognitive decline occurred after surgery or illness by using an innovative approach and including participants with early Alzheimer disease to overcome some limitations. In this retrospective cohort study, three groups were identified from participants tested annually at the Washington University Alzheimer's Disease Research Center in St. Louis, Missouri: those with noncardiac surgery, illness, or neither. This enabled long-term tracking of cognitive function before and after surgery and illness. The effect of surgery and illness on longitudinal cognitive course was analyzed using a general linear mixed effects model. For participants without initial dementia, time to dementia onset was analyzed using sequential Cox proportional hazards regression. Of the 575 participants, 214 were nondemented and 361 had very mild or mild dementia at enrollment. Cognitive trajectories did not differ among the three groups (surgery, illness, control), although demented participants declined more markedly than nondemented participants. Of the initially nondemented participants, 23% progressed to a clinical dementia rating greater than zero, but this was not more common after surgery or illness. The study did not detect long-term cognitive decline independently attributable to surgery or illness, nor were these events associated with accelerated progression to dementia. The decision to proceed with surgery in elderly people, including those with early Alzheimer disease, may be made without factoring in the specter of persistent cognitive deterioration.
A Problem-Solving Conceptual Framework and Its Implications in Designing Problem-Posing Tasks
ERIC Educational Resources Information Center
Singer, Florence Mihaela; Voica, Cristian
2013-01-01
The links between the mathematical and cognitive models that interact during problem solving are explored with the purpose of developing a reference framework for designing problem-posing tasks. When the process of solving is a successful one, a solver successively changes his/her cognitive stances related to the problem via transformations that…
Opportunities to Pose Problems Using Digital Technology in Problem Solving Environments
ERIC Educational Resources Information Center
Aguilar-Magallón, Daniel Aurelio; Fernández, Willliam Enrique Poveda
2017-01-01
This article reports and analyzes different types of problems that nine students in a Master's Program in Mathematics Education posed during a course on problem solving. What opportunities (affordances) can a dynamic geometry system (GeoGebra) offer to allow in-service and in-training teachers to formulate and solve problems, and what type of…
Bayesian inversion of refraction seismic traveltime data
NASA Astrophysics Data System (ADS)
Ryberg, T.; Haberland, Ch
2018-03-01
We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known to have experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques based on regularization potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge of (or assumptions about) the inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows one to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates for the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow one to derive a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values in regions of poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant-slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey in Northern Namibia and compared to conventional tomography.
An inversion test for a synthetic data set from a known model is also presented.
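The McMC sampling idea can be illustrated on a deliberately tiny problem: a single slowness parameter estimated from noisy traveltimes with a Metropolis sampler. This is a stand-in for the authors' 2-D Voronoi/triangulated parametrization, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy traveltime "experiment": one homogeneous medium with slowness s_true,
# receivers at offsets d, Gaussian picking noise of standard deviation sigma.
s_true, sigma = 0.5, 0.01                    # slowness (s/km), noise std (s)
d = np.linspace(1.0, 10.0, 20)               # source-receiver offsets (km)
t_obs = s_true * d + rng.normal(0.0, sigma, size=d.size)

def log_likelihood(s):
    misfit = t_obs - s * d
    return -0.5 * np.sum((misfit / sigma) ** 2)

# Metropolis sampling with a uniform prior on [0, 1] s/km.
n_steps, step = 20000, 0.005
samples = np.empty(n_steps)
s = 0.3                                      # arbitrary starting model
ll = log_likelihood(s)
for i in range(n_steps):
    prop = s + rng.normal(0.0, step)
    if 0.0 <= prop <= 1.0:                   # reject proposals outside prior
        ll_prop = log_likelihood(prop)
        if np.log(rng.uniform()) < ll_prop - ll:
            s, ll = prop, ll_prop
    samples[i] = s

posterior = samples[5000:]                   # discard burn-in
```

The mean and standard deviation of `posterior` play the roles of the reference solution and its uncertainty estimate; in a real inversion, the same statistics are taken cell by cell over the ensemble of sampled velocity models.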
Quantum Linear System Algorithm for Dense Matrices
NASA Astrophysics Data System (ADS)
Wossnig, Leonard; Zhao, Zhikuan; Prakash, Anupam
2018-02-01
Solving linear systems of equations is a frequently encountered problem in machine learning and optimization. Given a matrix A and a vector b, the task is to find the vector x such that Ax = b. We describe a quantum algorithm that achieves a sparsity-independent runtime scaling of O(κ² √n polylog(n)/ε) for an n × n matrix A with bounded spectral norm, where κ denotes the condition number of A and ε is the desired precision parameter. This amounts to a polynomial improvement over known quantum linear system algorithms when applied to dense matrices, and poses a new state of the art for solving dense linear systems on a quantum computer. Furthermore, an exponential improvement is achievable if the rank of A is polylogarithmic in the matrix dimension. Our algorithm is built upon a singular value estimation subroutine, which makes use of a memory architecture that allows for efficient preparation of quantum states that correspond to the rows of A and the vector of Euclidean norms of the rows of A.
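As a classical point of reference, the quantities appearing in the runtime bound (the condition number κ and the precision of the solution) can be computed directly for a small dense system with NumPy; this is only a baseline illustration, not the quantum algorithm itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense n x n system Ax = b; the quantum runtime bound depends on the
# condition number kappa = sigma_max / sigma_min and the target precision.
n = 64
M = rng.normal(size=(n, n)) / np.sqrt(n)
A = M @ M.T + 0.1 * np.eye(n)          # symmetric positive definite, dense
b = rng.normal(size=n)

kappa = np.linalg.cond(A)              # condition number kappa of A
x = np.linalg.solve(A, b)              # classical O(n^3) direct solve
residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Note that κ enters the quantum bound quadratically, so ill-conditioned dense systems erode the quantum advantage just as they degrade classical iterative solvers.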
Robust Head-Pose Estimation Based on Partially-Latent Mixture of Linear Regressions.
Drouard, Vincent; Horaud, Radu; Deleforge, Antoine; Ba, Sileye; Evangelidis, Georgios
2017-03-01
Head-pose estimation has many applications, such as social event analysis, human-robot and human-computer interaction, driving assistance, and so forth. Head-pose estimation is challenging, because it must cope with changing illumination conditions, variabilities in face orientation and in appearance, partial occlusions of facial landmarks, as well as bounding-box-to-face alignment errors. We propose to use a mixture of linear regressions with partially-latent output. This regression method learns to map high-dimensional feature vectors (extracted from bounding boxes of faces) onto the joint space of head-pose angles and bounding-box shifts, such that they are robustly predicted in the presence of unobservable phenomena. We describe in detail the mapping method that combines the merits of unsupervised manifold learning techniques and of mixtures of regressions. We validate our method with three publicly available data sets and we thoroughly benchmark four variants of the proposed algorithm with several state-of-the-art head-pose estimation methods.
The well-posedness of the Kuramoto-Sivashinsky equation
NASA Technical Reports Server (NTRS)
Tadmor, E.
1984-01-01
The Kuramoto-Sivashinsky equation arises in a variety of applications, among which are modeling reaction-diffusion systems, flame propagation and viscous flow problems. It is considered here as a prototype for the larger class of generalized Burgers equations: these consist of a quadratic nonlinearity and an arbitrary linear parabolic part. It is shown that such equations are well posed, thus admitting a unique smooth solution, continuously dependent on its initial data. As an attractive alternative to standard energy methods, existence and stability are derived in this case by patching in the large short-time solutions without loss of derivatives.
Linear and nonlinear acoustic wave propagation in the atmosphere
NASA Technical Reports Server (NTRS)
Hariharan, S. I.; Yu, Ping
1988-01-01
The investigation of acoustic wave propagation theory and its numerical implementation for the situation of an isothermal atmosphere is described. A one-dimensional model to validate an asymptotic theory and a 3-D situation to relate to a realistic situation are considered. In addition, nonlinear wave propagation and its numerical treatment are included. It is known that gravitational effects play a crucial role in low-frequency acoustic wave propagation. Such waves propagate large distances and, as such, the numerical treatment of these problems becomes difficult in terms of posing boundary conditions which are valid for all frequencies.
The well-posedness of the Kuramoto-Sivashinsky equation
NASA Technical Reports Server (NTRS)
Tadmor, E.
1986-01-01
The Kuramoto-Sivashinsky equation arises in a variety of applications, among which are modeling reaction-diffusion systems, flame propagation and viscous flow problems. It is considered here as a prototype for the larger class of generalized Burgers equations: these consist of a quadratic nonlinearity and an arbitrary linear parabolic part. It is shown that such equations are well posed, thus admitting a unique smooth solution, continuously dependent on its initial data. As an attractive alternative to standard energy methods, existence and stability are derived in this case by patching in the large short-time solutions without 'loss of derivatives'.
NASA Astrophysics Data System (ADS)
Giudici, Mauro; Baratelli, Fulvia; Vassena, Chiara; Cattaneo, Laura
2014-05-01
Numerical modelling of the dynamic evolution of ice sheets and glaciers requires the solution of discrete equations which are based on physical principles (e.g. conservation of mass, linear momentum and energy) and phenomenological constitutive laws (e.g. Glen's and Fourier's laws). These equations must be accompanied by information on the forcing term and by initial and boundary conditions (IBC) on ice velocity, stress and temperature; on the other hand, the constitutive laws involve many physical parameters, which possibly depend on the ice thermodynamical state. The proper forecast of the dynamics of ice sheets and glaciers (forward problem, FP) requires a precise knowledge of several quantities which appear in the IBCs, in the forcing terms and in the phenomenological laws, and which cannot be easily measured at the study scale in the field. Therefore these quantities can be obtained through model calibration, i.e. by the solution of an inverse problem (IP). Roughly speaking, the IP aims at finding the optimal values of the model parameters that yield the best agreement of the model output with the field observations and data. The practical application of IPs is usually formulated as a generalised least squares approach, which can be cast in the framework of Bayesian inference. IPs are well developed in several areas of science and geophysics, and several applications have been proposed also in glaciology. The objective of this paper is to provide a further step towards a thorough and rigorous theoretical framework in cryospheric studies. Although the IP is often claimed to be ill-posed, this is rigorously true for continuous-domain models, whereas for numerical models, which require the solution of algebraic equations, the properties of the IP must be analysed with more care.
First of all, it is necessary to clarify the role of experimental and monitoring data to determine the calibration targets and the values of the parameters that can be considered to be fixed, whereas only the model output should depend on the subset of the parameters that can be identified with the calibration procedure and the solution to the IP. It is actually difficult to guarantee the existence and uniqueness of a solution to the IP for complex non-linear models. Also identifiability, a property related to the solution to the FP, and resolution should be carefully considered. Moreover, instability of the IP should not be confused with ill-conditioning and with the properties of the method applied to compute a solution. Finally, sensitivity analysis is of paramount importance to assess the reliability of the estimated parameters and of the model output, but it is often based on the one-at-a-time approach, through the application of the adjoint-state method, to compute local sensitivity, i.e. the uncertainty on the model output due to small variations of the input parameters, whereas first-order approaches that consider the whole possible variability of the model parameters should be considered. This theoretical framework and the relevant properties are illustrated by means of a simple numerical example of isothermal ice flow, based on the shallow ice approximation.
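The generalised least squares view of calibration can be illustrated with a toy forward model. The power-law velocity-thickness relation below and its parameters are hypothetical stand-ins for a real ice-flow simulator and its rheological parameters (e.g. Glen's-law coefficients); everything is nondimensional and illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy forward model (FP) standing in for an ice-flow simulator: predicted
# surface velocity u as a power law of ice thickness h, u = a * h**b.
def forward(params, h):
    a, b = params
    return a * h ** b

rng = np.random.default_rng(0)
h = np.linspace(1.0, 2.0, 30)                  # nondimensional thicknesses
a_true, b_true = 3.0, 2.0
u_obs = forward((a_true, b_true), h) + rng.normal(0.0, 0.01, size=h.size)

# Calibration (IP) as a least-squares fit of the model output to the
# observed calibration targets.
fit = least_squares(lambda p: forward(p, h) - u_obs, x0=[1.0, 1.0])
a_est, b_est = fit.x
```

Even in this two-parameter toy, the points raised above apply: identifiability depends on how the observations constrain each parameter, and the sensitivity of the fit (e.g. via the Jacobian returned by the optimizer) should be checked before trusting the estimates.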
A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow
NASA Technical Reports Server (NTRS)
Baker, Gregory; Siegel, Michael; Tanveer, Saleh
1995-01-01
We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. This situation is disastrous for numerical computation, as small round-off errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out.
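The role of map singularities can be illustrated directly: the interface is the image of the unit circle under the conformal map, and a pole approaching the disk produces a growing, localized deformation of the interface. The specific map z(ζ) = ζ + a/(ζ − ζ0) and its parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def interface(a, zeta0, n=2000):
    """Fluid interface as the image of the unit circle under the conformal map
    z(zeta) = zeta + a / (zeta - zeta0), with a simple pole at zeta0, |zeta0| > 1."""
    zeta = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
    return zeta + a / (zeta - zeta0)

# As the pole approaches the unit disk from outside, the interface develops
# a localized finger-like deviation from the circle near the closest point.
far = interface(0.05, 1.5)     # pole well outside the disk
near = interface(0.05, 1.1)    # pole approaching the disk

dev_far = np.max(np.abs(np.abs(far) - 1.0))
dev_near = np.max(np.abs(np.abs(near) - 1.0))
```

Tracking ζ0(t) under the evolution equations, rather than the interface points themselves, is what keeps the extended problem well posed in the method described above.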
Algorithms and Array Design Criteria for Robust Imaging in Interferometry
NASA Astrophysics Data System (ADS)
Kurien, Binoy George
Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.
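The turbulence immunity of the closure phase mentioned above is easy to verify numerically: station-based piston errors e_i enter each baseline phase as a difference e_i − e_j and cancel identically around a triangle. A minimal sketch with assumed phases:

```python
import numpy as np

rng = np.random.default_rng(0)

# Object visibility phases on the three baselines of a telescope triangle:
# phi_12, phi_23, phi_31.
phi_obj = rng.uniform(-np.pi, np.pi, size=3)

# Atmospheric turbulence adds an unknown piston error e_i at each telescope;
# each measured baseline phase is corrupted by the difference e_i - e_j.
e = rng.uniform(-np.pi, np.pi, size=3)
phi_meas = phi_obj + np.array([e[0] - e[1], e[1] - e[2], e[2] - e[0]])

def wrap(x):
    return np.angle(np.exp(1j * x))          # wrap angle to (-pi, pi]

# The closure phase around the triangle cancels all station-based errors,
# leaving a turbulence-immune observable of the object alone.
closure_meas = wrap(phi_meas.sum())
closure_obj = wrap(phi_obj.sum())
```

The cost of this immunity, as the text notes, is sensitivity: only phase combinations invariant to station errors survive, which is what motivates the generalized observables developed in the thesis.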
The Structure of Ill-Structured (and Well-Structured) Problems Revisited
ERIC Educational Resources Information Center
Reed, Stephen K.
2016-01-01
In his 1973 article "The Structure of ill structured problems", Herbert Simon proposed that solving ill-structured problems could be modeled within the same information-processing framework developed for solving well-structured problems. This claim is reexamined within the context of over 40 years of subsequent research and theoretical…
Marshall, R C; McGurk, S R; Karow, C M; Kairy, T J; Flashman, L A
2006-06-01
Severe mental illness is associated with impairments in executive functions, such as conceptual reasoning, planning, and strategic thinking, all of which impact problem solving. The present study examined the utility of a novel assessment tool for problem solving, the Rapid Assessment of Problem Solving Test (RAPS), in persons with severe mental illness. Subjects were 47 outpatients with severe mental illness and an equal number of healthy controls matched for age and gender. Results confirmed all hypotheses with respect to how subjects with severe mental illness would perform on the RAPS. Specifically, the severely mentally ill subjects (1) solved fewer problems on the RAPS, (2) when they did solve problems on the test, did so far less efficiently than their healthy counterparts, and (3) differed markedly from controls in the types of questions they asked on the RAPS. The healthy control subjects tended to take a systematic, organized, but not always optimal approach to solving problems on the RAPS. The subjects with severe mental illness used some of the problem solving strategies of the healthy controls, but their performance was less consistent and tended to deteriorate as the complexity of the problem solving task increased. This was reflected in a high degree of guessing in lieu of asking constraint questions, particularly when a category-limited question was insufficient to continue the problem solving effort.
The challenge of gun control for mental health advocates.
Pandya, Anand
2013-09-01
Mass shootings, such as the 2012 Newtown massacre, have repeatedly led to political discourse about limiting access to guns for individuals with serious mental illness. Although the political climate after such tragic events poses a considerable challenge to mental health advocates who wish to minimize unsympathetic portrayals of those with mental illness, such media attention may be a rare opportunity to focus attention on risks of victimization of those with serious mental illness and barriers to obtaining psychiatric care. Current federal gun control laws may discourage individuals from seeking psychiatric treatment and describe individuals with mental illness using anachronistic, imprecise, and gratuitously stigmatizing language. This article lays out potential talking points that may be useful after future gun violence.
Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss Em
2018-02-01
Phase of Illness describes stages of advanced illness according to the care needs of the individual and family and the suitability of the care plan. There is limited evidence on its association with other measures of symptoms and health-related needs in palliative care. The aims of the study are as follows: (1) describe function, pain, other physical problems, psycho-spiritual problems, and family and carer support needs by Phase of Illness; (2) consider the strength of associations between these measures and Phase of Illness. Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function was measured using the Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs were measured using items on the Palliative Care Problem Severity Scale. Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in the stable phase (65.9, 95% confidence interval = 63.4-68.3) and lowest in the dying phase (16.6, 95% confidence interval = 15.3-17.8). Mean pain was highest in the unstable phase (1.43, 95% confidence interval = 1.36-1.51). In multinomial regression, psycho-spiritual problems were not associated with Phase of Illness (χ2 = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in the deteriorating phase than in the unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01-1.49). Forty-nine percent of the variance in Phase of Illness is explained by the Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond the Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. The lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation.
Estimation of Faults in DC Electrical Power System
NASA Technical Reports Server (NTRS)
Gorinevsky, Dimitry; Boyd, Stephen; Poll, Scott
2009-01-01
This paper demonstrates a novel optimization-based approach to estimating fault states in a DC power system. Potential faults changing the circuit topology are included along with faulty measurements. Our approach can be considered as a relaxation of the mixed estimation problem. We develop a linear model of the circuit and pose a convex problem for estimating the faults and other hidden states. A sparse fault vector solution is computed by using ℓ1 regularization. The solution is computed reliably and efficiently, and gives accurate diagnostics on the faults. We demonstrate a real-time implementation of the approach for an instrumented electrical power system testbed, the ADAPT testbed at NASA ARC. The estimates are computed in milliseconds on a PC. The approach performs well despite unmodeled transients and other modeling uncertainties present in the system.
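The core idea, recovering a sparse fault vector from a linear measurement model via ℓ1 regularization, can be sketched with iterative soft-thresholding (ISTA). This is a minimal stand-in, not the authors' ADAPT implementation: the matrix, fault indices, noise level, and penalty weight below are all invented for the demo:

```python
import numpy as np

def ista(A, b, lam, iters=2000):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = largest eigenvalue of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))   # gradient step on the smooth term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

# Toy example: 3 "faults" among 50 hidden states, observed via 30 linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[4, 17, 33]] = [2.0, -1.5, 3.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)

x_hat = ista(A, b, lam=0.2)
support = np.flatnonzero(np.abs(x_hat) > 0.5)
print(support)  # indices of the detected faults
```

The ℓ1 penalty drives most entries of the estimate exactly to zero, so the few nonzero entries directly flag the faulted components.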
Well-posedness of the free boundary problem in compressible elastodynamics
NASA Astrophysics Data System (ADS)
Trakhinin, Yuri
2018-02-01
We study the free boundary problem for the flow of a compressible isentropic inviscid elastic fluid. At the free boundary, which moves with the velocity of the fluid particles, the columns of the deformation gradient are tangent to the boundary and the pressure vanishes outside the flow domain. We prove the local-in-time existence of a unique smooth solution of the free boundary problem provided that, among the three columns of the deformation gradient, there are two which are non-collinear at each point of the initial free boundary. If this non-collinearity condition fails, the local-in-time existence is proved under the classical Rayleigh-Taylor sign condition satisfied at the initial time. By constructing a Hadamard-type ill-posedness example for the frozen-coefficients linearized problem, we show that the simultaneous failure of the non-collinearity condition and the Rayleigh-Taylor sign condition leads to Rayleigh-Taylor instability.
Optimal aeroassisted coplanar orbital transfer using an energy model
NASA Technical Reports Server (NTRS)
Halyo, Nesim; Taylor, Deborah B.
1989-01-01
The atmospheric portion of the trajectories for aeroassisted coplanar orbit transfer was investigated. The equations of motion for the problem are expressed using a reduced-order model, with total vehicle energy (kinetic plus potential) as the independent variable rather than time. The order reduction is achieved analytically without approximating the vehicle dynamics. In this model, the problem of coplanar orbit transfer is seen as one in which a given amount of energy must be transferred from the vehicle to the atmosphere during the trajectory without overheating the vehicle. An optimal control problem is posed in which a linear combination of the integrated square of the heating rate and the vehicle drag is the cost function to be minimized. The necessary conditions for optimality are obtained. These result in a 4th-order two-point boundary-value problem. A parametric study of the optimal guidance trajectory is made in which the relative weighting of the heating-rate term versus the drag term is varied. Simulations of the guidance trajectories are presented.
Levels of arithmetic reasoning in solving an open-ended problem
NASA Astrophysics Data System (ADS)
Kosyvas, Georgios
2016-04-01
This paper presents the results of a teaching experiment carried out with 12-year-old students. An open-ended task was given to them, and they had not been taught the algorithmic process leading to the solution. The formal solution to the problem refers to a system of two linear equations with two unknown quantities. In this mathematical activity, students worked cooperatively. They discussed their discoveries in groups of four and then presented their answers to the whole class, developing rich communication. This study describes the characteristic arguments that represent certain different forms of reasoning that emerged during the process of justifying the solutions to the problem. The findings of this research show that within an environment conducive to creativity, which encourages collaboration, exploration and the sharing of ideas, students can be engaged in developing multiple mathematical strategies, posing new questions, creating informal proofs, appreciating beauty and elegance, and discovering that problem solving is a powerful way of learning mathematics.
2016-04-27
Essential facts Scarlet fever is characterised by a rash that usually accompanies a sore throat and flushed cheeks. It is mainly a childhood illness. While this contagious disease rarely poses a danger to life today, outbreaks in the past led to many deaths.
28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.
Code of Federal Regulations, 2014 CFR
2014-07-01
... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...
28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.
Code of Federal Regulations, 2012 CFR
2012-07-01
... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...
28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.
Code of Federal Regulations, 2013 CFR
2013-07-01
... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...
Sheldon, S; Vandermorris, S; Al-Haj, M; Cohen, S; Winocur, G; Moscovitch, M
2015-02-01
It is well accepted that the medial temporal lobes (MTL), and the hippocampus specifically, support episodic memory processes. Emerging evidence suggests that these processes also support the ability to effectively solve ill-defined problems, that is, those that do not have a set routine or solution. To test the relation between episodic memory and problem solving, we examined the ability of individuals with single-domain amnestic mild cognitive impairment (aMCI), a condition characterized by episodic memory impairment, to solve ill-defined social problems. Participants with aMCI and age- and education-matched controls were given a battery of tests that included standardized neuropsychological measures, the Autobiographical Interview (Levine et al., 2002), scored for episodic content in descriptions of past personal events, and a measure of ill-defined social problem solving. Corroborating previous findings, the aMCI group generated less episodically rich narratives when describing past events. Individuals with aMCI also generated less effective solutions when solving ill-defined problems compared to the control participants. Correlation analyses demonstrated that the ability to recall episodic elements from autobiographical memories was positively related to the ability to effectively solve ill-defined problems. The ability to solve these ill-defined problems was also related to measures of activities of daily living. In conjunction with previous reports, the results of the present study point to a new functional role of episodic memory in ill-defined goal-directed behavior and other non-memory tasks that require flexible thinking. Our findings also have implications for the cognitive and behavioural profile of aMCI by suggesting that the ability to effectively solve ill-defined problems is related to sustained functional independence.
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felicio Bruzzi
2016-11-01
Using a locally enriched strategy to enrich a small/local part of the problem with the generalized/extended finite element method (G/XFEM) leads to non-optimal convergence rates and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and computational cost. Numerical experiments show that using geometric enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and poor convergence rates.
Nardodkar, Renuka; Pathare, Soumitra; Ventriglio, Antonio; Castaldelli-Maia, João; Javate, Kenneth R; Torales, Julio; Bhugra, Dinesh
2016-08-01
The right to work and employment is indispensable for the social integration of persons with mental health problems. This study examined whether existing laws pose structural barriers to the realization of the right to work and employment of persons with mental health problems across the world. It reviewed the disability-specific and human rights legislation and labour laws of all UN Member States in the context of Article 27 of the UN Convention on the Rights of Persons with Disabilities (CRPD). It was found that laws in 62% of countries explicitly mention mental disability/impairment/illness in the definition of disability. In 64% of countries, laws prohibit discrimination against persons with mental health problems during recruitment; in one-third of countries, laws prohibit discontinuation of employment. More than half (56%) of the countries have laws in place which offer access to reasonable accommodation in the workplace. In 59% of countries, laws promote employment of persons with mental health problems through different affirmative actions. Nearly 50 years after the adoption of the International Covenant on Economic, Social, and Cultural Rights and 10 years after the adoption of the CRPD by the UN General Assembly, legal discrimination against persons with mental health problems continues to exist globally. Countries and policy-makers need to implement legislative measures to ensure non-discrimination of persons with mental health problems during employment.
NASA Astrophysics Data System (ADS)
Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu
2016-12-01
Ionospheric tomography uses observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distribution. Because the satellite-receiver geometry provides incomplete measurements, this is a typical ill-posed problem, and how to overcome the ill-posedness remains a central research question. In this paper, the Tikhonov regularization method is used, and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between the sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance by a location-dependent spatial correlation, and the correlation model is developed using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used for independent validation. Both the test cases using artificial sTEC observations and those using actual GNSS sTEC measurements show that the regularization method can effectively improve on the background model outputs.
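The stabilizing effect of Tikhonov regularization on an ill-posed linear problem can be sketched as follows. This is a generic toy smoothing operator, not the tomography model; the kernel, noise level, and regularization parameter are assumptions for illustration:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Minimize ||A x - b||^2 + alpha^2 ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T @ b)

# Toy ill-posed problem: a Gaussian smoothing kernel with rapidly decaying spectrum.
n = 40
x_true = np.sin(np.linspace(0, np.pi, n))
i = np.arange(n)
A = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)   # severely ill-conditioned
rng = np.random.default_rng(1)
b = A @ x_true + 0.01 * rng.standard_normal(n)

x_naive = np.linalg.solve(A, b)        # direct inversion amplifies the noise
x_reg = tikhonov(A, b, alpha=0.05)     # regularized, stable reconstruction

print(np.linalg.norm(x_naive - x_true) > np.linalg.norm(x_reg - x_true))  # True
```

The penalty term damps the components of the solution associated with small singular values, which is where measurement noise would otherwise blow up.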
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake- and tsunami-related problems, in particular inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are presented. Long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of the propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and can solve both the direct and the inverse problem. It becomes possible to apply advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors, as well as optimal computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution.
The software system we developed is intended to realize a «no frost» technology: a steady stream of direct and inverse problems, solving the direct problem, visualizing and comparing the results with observed data, and solving the inverse problem (correcting the model parameters). The main objective of further work is the creation of an operational workstation tool that could be used by an emergency duty officer in real time.
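The SVD analysis mentioned above, reading the degree of ill-posedness off the singular-value decay and forming a quasi-solution by truncation, can be sketched on a synthetic matrix (not the tsunami-source operator; the spectrum, dimensions, and truncation level are assumptions for the demo):

```python
import numpy as np

def tsvd_solution(A, b, k):
    """Quasi-solution from the k largest singular values; the rest are discarded."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]     # safe divisions: only the large s are used
    return Vt[:k].T @ coeffs

# Build a matrix with a rapidly decaying spectrum: a severely ill-posed operator.
rng = np.random.default_rng(2)
n = 30
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)              # singular values 1, 1e-1, ..., 1e-29
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-8 * rng.standard_normal(n)

# Components with s_i below the noise level are unrecoverable; truncate there.
x_k = tsvd_solution(A, b, k=8)
print(np.linalg.norm(A @ x_k - b))     # small residual from the retained subspace
```

The decay rate of `s` quantifies the ill-posedness, and the truncation index plays the role of a regularization parameter, exactly the trade-off the quasi-solution approach exploits.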
Pre-Service Elementary Teachers' Motivation and Ill-Structured Problem Solving in Korea
ERIC Educational Resources Information Center
Kim, Min Kyeong; Cho, Mi Kyung
2016-01-01
This article examines the use and application of an ill-structured problem to pre-service elementary teachers in Korea in order to find implications of pre-service teacher education with regard to contextualized problem solving by analyzing experiences of ill-structured problem solving. Participants were divided into small groups depending on the…
ERIC Educational Resources Information Center
Kapur, Manu
2018-01-01
The goal of this paper is to isolate the preparatory effects of problem-generation from solution generation in problem-posing contexts, and their underlying mechanisms on learning from instruction. Using a randomized-controlled design, students were assigned to one of two conditions: (a) problem-posing with solution generation, where they…
ERIC Educational Resources Information Center
Xie, Jinxia; Masingila, Joanna O.
2017-01-01
Existing studies have quantitatively evidenced the relatedness between problem posing and problem solving, as well as the magnitude of this relationship. However, the nature and features of this relationship need further qualitative exploration. This paper focuses on exploring the interactions, i.e., mutual effects and supports, between problem…
Anderson, Ruth A; Bailey, Donald E; Wu, Bei; Corazzini, Kirsten; McConnell, Eleanor S; Thygeson, N Marcus; Docherty, Sharron L
2015-01-01
We propose the Adaptive Leadership Framework for Chronic Illness as a novel framework for conceptualizing, studying, and providing care. This framework is an application of the Adaptive Leadership Framework developed by Heifetz and colleagues for business. Our framework views health care as a complex adaptive system and addresses the intersection at which people with chronic illness interface with the care system. We shift the focus from symptoms alone to symptoms and the challenges they pose for patients/families. We describe how providers and patients/families might collaborate to create shared meaning of symptoms and challenges to coproduce appropriate approaches to care.
Reducing errors in the GRACE gravity solutions using regularization
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformations in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that constrains the geopotential coefficients as a function of their degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution.
A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
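The L-curve idea can be sketched directly on a small test problem (a Hilbert matrix, a classic discrete ill-posed example): plot log residual norm against log solution norm over a range of regularization parameters and pick the corner of maximum curvature. Note that the GRACE study approximates this curve cheaply via Lanczos bidiagonalization, which this brute-force sketch does not reproduce; all matrices and parameter grids below are assumptions for the demo:

```python
import numpy as np

def lcurve_corner(A, b, alphas):
    """Pick the Tikhonov parameter at the L-curve corner (max discrete curvature)."""
    n = A.shape[1]
    rho, eta = [], []
    for a in alphas:
        x = np.linalg.solve(A.T @ A + a**2 * np.eye(n), A.T @ b)
        rho.append(np.log(np.linalg.norm(A @ x - b)))   # log residual norm
        eta.append(np.log(np.linalg.norm(x)))           # log solution norm
    rho, eta = np.array(rho), np.array(eta)
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    kappa[0] = kappa[-1] = 0.0                          # ignore the endpoints
    return alphas[np.argmax(np.abs(kappa))]

# Test problem: 20x20 Hilbert matrix with a smooth true solution and small noise.
n = 20
i = np.arange(n)
A = 1.0 / (i[:, None] + i[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(5)
b = A @ x_true + 1e-4 * rng.standard_normal(n)

alphas = np.logspace(-6, 0, 40)
a_star = lcurve_corner(A, b, alphas)

def solve(a):
    return np.linalg.solve(A.T @ A + a**2 * np.eye(n), A.T @ b)

err_corner = np.linalg.norm(solve(a_star) - x_true)
err_tiny = np.linalg.norm(solve(alphas[0]) - x_true)
print(err_corner < err_tiny)   # corner choice beats near-unregularized: True
```

The corner marks the transition between under-regularization (the steep branch, where the solution norm explodes) and over-regularization (the flat branch, where the residual grows), which is why it is a popular heuristic for the parameter choice.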
Computational method for analysis of polyethylene biodegradation
NASA Astrophysics Data System (ADS)
Watanabe, Masaji; Kawai, Fusako; Shibata, Masaru; Yokoyama, Shigeo; Sudate, Yasuhiro
2003-12-01
In a previous study concerning the biodegradation of polyethylene, we proposed a mathematical model based on two primary factors: the direct consumption or absorption of small molecules and the successive weight loss of large molecules due to β-oxidation. Our model is an initial value problem consisting of a differential equation whose independent variable is time. Its unknown variable represents the total weight of all the polyethylene molecules that belong to a molecular-weight class specified by a parameter. In this paper, we describe a numerical technique to introduce experimental results into the analysis of our model. We first establish its mathematical foundation, in order to guarantee its validity, by showing that the initial value problem associated with the differential equation has a unique solution. Our computational technique is based on a linear system of differential equations derived from the original problem. We introduce some numerical results to illustrate our technique as a practical application of the linear approximation. In particular, we show how to determine the consumption rate and the β-oxidation rate numerically by solving the inverse problem, and we illustrate our numerical technique by analyzing the GPC patterns of polyethylene wax obtained before and after 5 weeks of cultivation of the fungus Aspergillus sp. AK-3. A numerical simulation based on these degradation rates confirms that the primary factors posed in modeling polyethylene biodegradation are indeed appropriate.
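The structure of such a model, a linear system of ODEs in which each molecular-weight class loses weight to direct consumption and β-oxidation while gaining weight from the class above, can be sketched as follows. The class count and rate constants are hypothetical placeholders, not the fitted values from the paper:

```python
import numpy as np

# Hypothetical rates for illustration: a[i] is the direct-consumption rate of
# class i, beta[i] its beta-oxidation rate (class i+1 feeds class i).
n = 10                     # molecular-weight classes (largest = index n-1)
a = np.full(n, 0.02)
beta = np.full(n, 0.10)

def rhs(w):
    dw = -(a + beta) * w            # each class loses weight to both processes
    dw[:-1] += beta[1:] * w[1:]     # beta-oxidation of class i+1 feeds class i
    return dw

def rk4(w, dt, steps):
    # classic 4th-order Runge-Kutta integration of the linear system
    for _ in range(steps):
        k1 = rhs(w); k2 = rhs(w + dt/2*k1); k3 = rhs(w + dt/2*k2); k4 = rhs(w + dt*k3)
        w = w + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
    return w

w0 = np.ones(n)                          # initial GPC-like weight distribution
w5 = rk4(w0.copy(), dt=0.05, steps=100)  # distribution after 5 time units
print(w5.sum() < w0.sum())               # total weight decreases: True
```

Fitting the rate functions `a` and `beta` so that the simulated distribution matches the measured GPC pattern is then the inverse problem the paper solves.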
The influence of initial conditions on dispersion and reactions
NASA Astrophysics Data System (ADS)
Wood, B. D.
2016-12-01
In various generalizations of the reaction-dispersion problem, researchers have developed frameworks in which the apparent dispersion coefficient can be negative. Such dispersion coefficients raise several difficult questions. Most importantly, the presence of a negative dispersion coefficient at the macroscale leads to a macroscale representation that exhibits an apparent decrease in entropy with increasing time; this, then, appears to violate basic thermodynamic principles. In addition, the proposition of a negative dispersion coefficient leads to an inherently ill-posed mathematical transport equation. The ill-posedness arises because there is no unique initial condition that corresponds to a later-time concentration distribution (assuming that discontinuous initial conditions are allowed). In this presentation, we explain how the phenomenon of negative dispersion coefficients actually arises because the governing differential equation for early times should, when derived correctly, incorporate a term that depends upon the initial and boundary conditions. The process of reaction introduces a similar phenomenon, where the structure of the initial and boundary conditions influences the form of the macroscopic balance equations. When upscaling is done properly, new equations are developed that include source terms that are not present in the classical (late-time) reaction-dispersion equation. These source terms depend upon the structure of the initial condition of the reacting species, and they decrease exponentially in time (thus converging to the conventional equations at asymptotic times). With this formulation, the resulting dispersion tensor is always positive semi-definite, and the reaction terms directly incorporate information about the state of mixedness of the system.
This formulation avoids many of the problems that would be engendered by defining negative-definite dispersion tensors, and properly represents the effective rate of reaction at early times.
Bayesian tomography by interacting Markov chains
NASA Astrophysics Data System (ADS)
Romary, T.
2017-12-01
In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes these methods ill-suited to parallel implementation. Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not shared. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they exchange information only between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class makes it possible to design interacting schemes that can take advantage of the whole history of the chains, by allowing exchanges with already-visited states. The algorithms will be illustrated with toy examples and an application to first-arrival traveltime tomography.
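A minimal parallel-tempering sketch on a bimodal toy posterior illustrates the temperature ladder and neighbour swaps described above. The target, the ladder, and the proposal scale are assumptions for the example, not the tomography posterior:

```python
import numpy as np

rng = np.random.default_rng(3)

def log_post(x):
    """Bimodal toy posterior: equal-weight modes at -3 and +3."""
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

temps = [1.0, 3.0, 9.0]        # temperature ladder; the hottest chain mixes freely
chains = [0.0, 0.0, 0.0]
samples = []

for it in range(20000):
    # within-chain random-walk Metropolis at each temperature
    for i, T in enumerate(temps):
        prop = chains[i] + rng.normal(0.0, 1.5)
        if np.log(rng.random()) < (log_post(prop) - log_post(chains[i])) / T:
            chains[i] = prop
    # propose a state swap between a random pair of neighbouring temperatures
    j = rng.integers(0, len(temps) - 1)
    delta = (log_post(chains[j + 1]) - log_post(chains[j])) \
            * (1.0 / temps[j] - 1.0 / temps[j + 1])
    if np.log(rng.random()) < delta:
        chains[j], chains[j + 1] = chains[j + 1], chains[j]
    samples.append(chains[0])   # only the T=1 chain targets the true posterior

samples = np.array(samples[5000:])
print((samples < 0).mean())     # fraction of mass in the negative mode; both modes visited
```

Without the swap moves, the cold chain would rarely cross the barrier between the modes; the hot chains cross easily and pass mode-hopping states down the ladder.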
Psychiatric diagnostic dilemmas in the medical setting.
Strain, James J
2005-09-01
To review the problems posed for doctors by the failure of existing taxonomies to provide a satisfactory method for deriving diagnoses in cases of physical/psychiatric comorbidity, and for relating diagnoses on multiple axes. Review of existing taxonomies and key criticisms. The author was guided in his selection by his experience as a member of the working parties involved in the creation of the American Psychiatric Association's DSM-IV. The attempts of the two major taxonomies, the ICD-10 and the American Psychiatric Association's DSM-IV, to address the problem by use of glossaries and multiple axes are described, and found wanting. Novel approaches, including McHugh and Slavney's perspectives of disease, dimensions, behaviour and life story, are described and evaluated. The problem of developing valid and reliable measures of physical/psychiatric comorbidity is addressed, including a discussion of genetic factors, neurobiological variables, target markers and other pathophysiological indicators. Finally, the concept of depression as a systemic illness involving brain, mind and body is raised and its implications discussed. Taxonomies require major revision in order to provide a useful basis for communication and research about one of the most frequent presentations in the community: physical/psychiatric comorbidity.
NASA Astrophysics Data System (ADS)
Alifanov, O. M.; Budnik, S. A.; Nenarokomov, A. V.; Netelev, A. V.; Titov, D. M.
2013-04-01
In many practical situations it is impossible to measure the thermal and thermokinetic properties of composite materials directly. Often the only way to overcome this difficulty is indirect measurement. This type of measurement is usually formulated as the solution of an inverse heat transfer problem. Such problems are ill-posed in the mathematical sense, and their main feature shows itself in solution instabilities; that is why special regularizing methods are needed to solve them. Here the general method of iterative regularization is applied to the estimation of material properties. The objective of this paper is to estimate the thermal and thermokinetic properties of advanced materials using an approach based on inverse methods. An experimental-computational system, developed at the Thermal Laboratory of the Department of Space Systems Engineering, Moscow Aviation Institute (MAI), is presented for investigating the thermal and kinetic properties of composite materials by the methods of inverse heat transfer problems. The system is aimed at investigating materials under conditions of unsteady contact and/or radiation heating over a wide range of temperature changes and heating rates in vacuum, air and inert gas media.
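The core idea of iterative regularization, that the iteration count itself acts as the regularization parameter, can be sketched for a generic linear ill-posed problem with Landweber iteration; the smoothing operator, noise level and iteration budget below are illustrative assumptions, not the paper's thermal model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-conditioned forward operator (a smoothing kernel) and exact solution.
n = 50
grid = np.linspace(0.0, 1.0, n)
A = np.exp(-((grid[:, None] - grid[None, :]) ** 2) / 0.01) / n
x_true = np.sin(2 * np.pi * grid)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

# Landweber iteration x_{k+1} = x_k + w * A^T (b - A x_k); stopping early
# acts as regularization, which is the essence of iterative regularization.
w = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
errors = []
for k in range(500):
    x = x + w * A.T @ (b - A @ x)
    errors.append(np.linalg.norm(x - x_true))

# Semi-convergence: the error decreases, then (eventually) rises as noise
# is amplified; a stopping rule would pick an iterate near the minimum.
best_k = int(np.argmin(errors))
```

In practice `best_k` is unknown and is chosen by a stopping rule such as the discrepancy principle.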
Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope
NASA Astrophysics Data System (ADS)
Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng
2017-10-01
A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurements, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally to show that it performs well in estimating both the pseudo-static and dynamic tilt measurements.
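A common, simpler relative of the frequency-domain fusion described above is the first-order complementary filter: low-pass the acceleration-based tilt, high-pass the integrated gyro rate, with a single crossover constant. The sketch below is that classical filter on synthetic signals (the sampling rate, time constant, gyro bias and vibration noise are assumed for illustration), not the paper's Tikhonov/FIR scheme.

```python
import math

dt = 0.01                      # 100 Hz sampling (assumed)
tau = 0.5                      # crossover time constant (assumed)
alpha = tau / (tau + dt)

def fuse(acc_tilt, gyro_rate):
    """First-order complementary filter: the integrated gyro rate dominates
    at high frequencies, the acceleration-based tilt at low frequencies."""
    theta = acc_tilt[0]
    out = []
    for a, w in zip(acc_tilt, gyro_rate):
        theta = alpha * (theta + w * dt) + (1.0 - alpha) * a
        out.append(theta)
    return out

# Synthetic true tilt: slow ramp plus a 5 Hz wiggle.
n = 2000
t = [i * dt for i in range(n)]
truth = [0.1 * ti + 0.05 * math.sin(2 * math.pi * 5 * ti) for ti in t]
# Gyro: true angular rate plus a constant bias (pure integration drifts).
rate = [(truth[min(i + 1, n - 1)] - truth[i]) / dt + 0.02 for i in range(n)]
# Tilt sensor: true tilt plus high-frequency vibration-induced error.
acc = [truth[i] + 0.05 * math.sin(2 * math.pi * 40 * t[i]) for i in range(n)]

est = fuse(acc, rate)
rms = math.sqrt(sum((e - x) ** 2 for e, x in zip(est, truth)) / n)

# Baseline: gyro integration alone, which drifts because of the rate bias.
theta_g, gyro_only = truth[0], []
for w in rate:
    theta_g += w * dt
    gyro_only.append(theta_g)
rms_gyro = math.sqrt(sum((e - x) ** 2 for e, x in zip(gyro_only, truth)) / n)
```

The fused estimate suppresses both the gyro drift and the vibration noise that each sensor suffers from alone.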
Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction
NASA Astrophysics Data System (ADS)
Liang, Guanghui; Ren, Shangjie; Dong, Feng
2017-07-01
The free-interface detection problem commonly arises in industrial and biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, due to its ill-posed and nonlinear characteristics, the spatial resolution of EIT is low. To deal with this issue, an ultrasound-guided EIT method is proposed to directly reconstruct the geometric configuration of the target free interface. In the method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted on opposite sides of the objective domain, and this position measurement is then used as prior information for guiding the EIT-based free-interface reconstruction. During the process, a constrained least squares framework is used to fuse the information from the different measurement modalities, and the Lagrange multiplier-based Levenberg-Marquardt method is adopted to provide the iterative solution of the constrained optimization problem. The numerical results show that the proposed ultrasound-guided EIT method for free-interface reconstruction is more accurate than the single-modality method, especially when the number of valid electrodes is limited.
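The idea of folding a hard measurement from a second modality into a least-squares reconstruction as an equality constraint can be sketched with a linear toy problem solved through the KKT (Lagrange multiplier) system. The matrices below are random stand-ins, not an EIT sensitivity model, and the real method handles a nonlinear problem with Levenberg-Marquardt iterations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: min ||A x - b||^2 subject to C x = d,
# mimicking "the extra modality pins down part of the solution".
m, n = 30, 10
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.01 * rng.standard_normal(m)

# Constraint from the second modality: the first component is known.
C = np.zeros((1, n)); C[0, 0] = 1.0
d = np.array([x_true[0]])

# KKT system for L(x, lam) = ||Ax - b||^2 + lam^T (Cx - d):
#   [2 A^T A   C^T] [x  ]   [2 A^T b]
#   [C         0  ] [lam] = [d      ]
K = np.block([[2 * A.T @ A, C.T], [C, np.zeros((1, 1))]])
rhs = np.concatenate([2 * A.T @ b, d])
sol = np.linalg.solve(K, rhs)
x_hat, lam = sol[:n], sol[n:]
```

The constrained component is reproduced exactly while the remaining components are fit in the least-squares sense.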
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors have applied mean field theory to image segmentation and image restoration problems. It provides results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. This approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
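The "hard decision" flavor of ICM is easy to see on a toy problem: a one-dimensional binary MRF denoised by coordinate-wise energy minimization. The chain model, coupling constants and noise rate below are invented for illustration and are far simpler than a motion field.

```python
import random

random.seed(0)

# Binary MRF denoising by ICM: states +/-1 on a 1-D chain; the data term
# pulls toward the noisy observations, the smoothness term toward neighbours.
n, beta, lam = 200, 1.0, 1.5
truth = [1 if (i // 50) % 2 == 0 else -1 for i in range(n)]   # piecewise constant
obs = [v if random.random() > 0.2 else -v for v in truth]     # ~20% label flips

x = list(obs)                        # ICM starts from the data
for _ in range(10):                  # a few sweeps; ICM converges quickly
    for i in range(n):
        def local_energy(v):
            e = -lam * v * obs[i]                       # data fidelity
            if i > 0:
                e -= beta * v * x[i - 1]                # smoothness (left)
            if i < n - 1:
                e -= beta * v * x[i + 1]                # smoothness (right)
            return e
        # Hard decision: commit to whichever state has lower local energy.
        x[i] = 1 if local_energy(1) <= local_energy(-1) else -1

errors = sum(1 for a, c in zip(x, truth) if a != c)
```

Isolated flipped labels are corrected, but clustered flips can survive as local minima, which is exactly the weakness mean-field (soft) updates mitigate.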
Recursive linearization of multibody dynamics equations of motion
NASA Technical Reports Server (NTRS)
Lin, Tsung-Chieh; Yae, K. Harold
1989-01-01
The equations of motion of a multibody system are nonlinear in nature, and thus pose a difficult problem for linear control design. One approach is to obtain a first-order approximation through numerical perturbations at a given configuration, and to design a control law based on the linearized model. Here, a linearized model is generated analytically by following the steps of the recursive derivation of the equations of motion. The equations of motion are first written in a Newton-Euler form, which is systematic and easy to construct; then they are transformed into a relative coordinate representation, which is more efficient in computation. A new computational method for linearization is obtained by applying a series of first-order analytical approximations to the recursive kinematic relationships. The method has proved to be computationally more efficient because of its recursive nature. It has also turned out to be more accurate, because analytical perturbation circumvents numerical differentiation and the associated numerical operations that may accumulate computational error, requiring only analytical operations on matrices and vectors. The power of the proposed linearization algorithm is demonstrated, in comparison with a numerical perturbation method, on a two-link manipulator and a seven-degree-of-freedom robotic manipulator. Its application to control design is also demonstrated.
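The contrast between analytic and numerical-perturbation linearization can be seen on a much smaller system than a multibody chain: a damped pendulum, where the closed-form Jacobian is compared against a central-difference approximation. The model and step size are illustrative choices, not from the paper.

```python
import math

# Nonlinear pendulum: state (theta, omega), dynamics
# f(x) = (omega, -(g/l) sin(theta) - c omega).
g_over_l, c = 9.81, 0.1

def f(x):
    th, om = x
    return (om, -g_over_l * math.sin(th) - c * om)

def jac_analytic(x):
    th, _ = x
    # d f / d x computed symbolically, as in the recursive analytic approach.
    return [[0.0, 1.0],
            [-g_over_l * math.cos(th), -c]]

def jac_numeric(x, h=1e-5):
    # Numerical perturbation: central differences, subject to
    # truncation and round-off error.
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = f(xp), f(xm)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

x0 = (0.3, 0.0)   # configuration about which to linearize
Ja, Jn = jac_analytic(x0), jac_numeric(x0)
max_diff = max(abs(Ja[i][j] - Jn[i][j]) for i in range(2) for j in range(2))
```

The analytic Jacobian is exact; the finite-difference one carries a small discretization error that grows with model stiffness and perturbation-size tuning.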
Full cycle rapid scan EPR deconvolution algorithm.
Tseytlin, Mark
2017-08-01
Rapid scan electron paramagnetic resonance (RS EPR) is a continuous-wave (CW) method that combines narrowband excitation and broadband detection. Sinusoidal magnetic field scans that span the entire EPR spectrum cause electron spin excitations twice during the scan period. Periodic transient RS signals are digitized and time-averaged. Deconvolution of the absorption spectrum from the measured full-cycle signal is an ill-posed problem that does not have a stable solution because the magnetic field passes the same EPR line twice per sinusoidal scan, during the up- and down-field passages. As a result, RS signals consist of two contributions that need to be separated and postprocessed individually. Deconvolution of either of the contributions is a well-posed problem that has a stable solution. The current version of the RS EPR algorithm solves the separation problem by cutting the full-scan signal into two half-period pieces. This imposes a constraint on the experiment; the EPR signal must completely decay by the end of each half-scan in order not to be truncated. The constraint limits the maximum scan frequency and, therefore, the RS signal-to-noise gain. Faster scans permit the use of higher excitation powers without saturating the spin system, translating into a higher EPR sensitivity. A stable, full-scan algorithm is described in this paper that does not require truncation of the periodic response. This algorithm utilizes the additive property of linear systems: the response to a sum of two inputs is equal to the sum of the responses to each of the inputs separately. Based on this property, the mathematical model for CW RS EPR can be replaced by that of a sum of two independent full-cycle pulsed field-modulated experiments. In each of these experiments, the excitation power equals zero during either the up- or the down-field scan.
The full-cycle algorithm permits approaching the upper theoretical scan frequency limit; the transient spin system response must decay within the scan period. Separation of the interfering up- and down-field scan responses remains a challenge for reaching the full potential of this new method. For this reason, only a factor of two increase in the scan rate was achieved in comparison with the standard half-scan RS EPR algorithm. It is important for practical use that faster scans do not necessarily increase the signal bandwidth, because the acceleration of the Larmor frequency driven by the changing magnetic field changes sign after passing the inflection points of the scan. The half-scan and full-scan algorithms are compared using a LiNC-BuO spin probe of known line shape, demonstrating that the new method produces stable solutions when RS signals do not completely decay by the end of each half-scan. Copyright © 2017 Elsevier Inc. All rights reserved.
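The additivity property the algorithm relies on holds for any linear time-invariant model, which a few lines can demonstrate with an invented damped-oscillation impulse response (a generic stand-in, not an actual EPR spin response).

```python
import numpy as np

rng = np.random.default_rng(3)

# Impulse response of some linear, spin-system-like model: a damped oscillation.
t = np.arange(256)
h = np.exp(-t / 40.0) * np.cos(0.3 * t)

def respond(x):
    """Response of the linear system: circular convolution with h via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# Two excitation inputs, e.g. an up-field-only and a down-field-only drive.
u1 = rng.standard_normal(256)
u2 = rng.standard_normal(256)

# Additivity: the response to the sum equals the sum of the responses,
# so the full-cycle signal can be modelled as two independent experiments.
lhs = respond(u1 + u2)
rhs = respond(u1) + respond(u2)
err = np.max(np.abs(lhs - rhs))
```

The discrepancy is at round-off level, which is what licenses splitting the full-cycle model into two single-drive experiments.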
Improving attitudes toward mathematics learning with problem posing in class VIII
NASA Astrophysics Data System (ADS)
Vionita, Alfha; Purboningsih, Dyah
2017-08-01
This research is collaborative classroom action research aimed at improving students' attitudes toward mathematics and mathematics learning in class VIII by using a problem posing approach. The subjects of the research are all 32 students of grade VIIIA. The research was conducted over two cycles: the first cycle comprised 3 meetings and the second cycle 4 meetings. The instruments of this research are an observation guide for the implementation of learning with the problem posing approach, cycle tests to measure cognitive competence, and a questionnaire to measure students' attitudes in the mathematics learning process. The results show that students' attitudes improved after using the problem posing approach, as indicated by the attitude criteria of the students rising from average in the first cycle to high in the second cycle. Furthermore, the percentage of passing test results also improved, from 68.75% in the first cycle to 78.13% in the second cycle. In addition, the implementation of learning with the problem posing approach also improved, as shown by the average achievement percentages in the first cycle of 89.2% for the teacher and 85.8% for the students, which increased in the second cycle to 94.4% and 91.11%, respectively. As a result, students' attitudes toward the mathematics learning process in class VIII improved with the problem posing approach.
Human health effects and remotely sensed cyanobacteria
Cyanobacteria blooms (HAB) pose a potential health risk to beachgoers, including HAB-associated gastrointestinal, respiratory and dermal illness. We conducted a prospective study of beachgoers at a Great Lakes beach during July – September, 2003. We recorded each participan...
A constrained robust least squares approach for contaminant release history identification
NASA Astrophysics Data System (ADS)
Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.
2006-04-01
Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solver. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS is found to give much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
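The value of the nonnegativity constraint in this setting can be sketched with a plain projected-gradient NNLS on a toy ill-conditioned source-recovery problem. This is the classical constrained estimator used as the baseline above, not CRLS itself, and the smoothing operator and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ill-conditioned source-recovery surrogate: a smoothing (transport-like)
# matrix acting on a nonnegative release history.
n = 40
s = np.linspace(0, 1, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.005)
x_true = np.maximum(0.0, np.sin(np.pi * s)) ** 4      # nonnegative source
b = A @ x_true + 0.01 * rng.standard_normal(n)

# Projected-gradient NNLS: gradient step on ||Ax - b||^2, then clip at zero.
step = 1.0 / np.linalg.norm(A, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    x = np.maximum(0.0, x - step * A.T @ (A @ x - b))

# Classical unconstrained least squares for comparison: on an
# ill-conditioned system it amplifies the noise enormously.
unconstrained = np.linalg.lstsq(A, b, rcond=None)[0]
```

Clipping at zero acts as a strong physical prior: the constrained solution stays bounded while the unconstrained one is dominated by amplified noise.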
Non-ambiguous recovery of Biot poroelastic parameters of cellular panels using ultrasonic waves
NASA Astrophysics Data System (ADS)
Ogam, Erick; Fellah, Z. E. A.; Sebaa, Naima; Groby, J.-P.
2011-03-01
The inverse problem of the recovery of the poroelastic parameters of open-cell soft plastic foam panels is solved by employing transmitted ultrasonic waves (USW) and the Biot-Johnson-Koplik-Champoux-Allard (BJKCA) model. By constructing the objective functional given by the total square of the difference between predictions of the BJKCA interaction model and experimental data obtained with transmitted USW, it is shown that the inverse problem is ill-posed, since the functional exhibits several local minima and maxima. In order to solve this problem, which is beyond the capability of most off-the-shelf iterative nonlinear least squares optimization algorithms (such as the Levenberg-Marquardt or Nelder-Mead simplex methods), simple strategies are developed. The recovered acoustic parameters are compared with those obtained using simpler interaction models and a method employing the asymptotic phase velocity of the transmitted USW. The retrieved elastic moduli are validated by solving an inverse vibration spectroscopy problem with data obtained from beam-like specimens cut from the panels, using an equivalent solid elastodynamic model as estimator. The phase velocities are reconstructed using computed and measured resonance frequencies and a time-frequency decomposition of transient waves induced in the beam specimen. These confirm that the elastic parameters recovered using vibration are valid over the frequency range of study.
A well-posed optimal spectral element approximation for the Stokes problem
NASA Technical Reports Server (NTRS)
Maday, Y.; Patera, A. T.; Ronquist, E. M.
1987-01-01
A method is proposed for the spectral element simulation of incompressible flow. This method constitutes a well-posed optimal approximation of the steady Stokes problem with no spurious modes in the pressure. The resulting method is analyzed, and numerical results are presented for a model problem.
Pose and Solve Varignon Converse Problems
ERIC Educational Resources Information Center
Contreras, José N.
2014-01-01
The activity of posing and solving problems can enrich learners' mathematical experiences because it fosters a spirit of inquisitiveness, cultivates their mathematical curiosity, and deepens their views of what it means to do mathematics. To achieve these goals, a mathematical problem needs to be at the appropriate level of difficulty,…
Applications: Students, the Mathematics Curriculum and Mathematics Textbooks
ERIC Educational Resources Information Center
Kilic, Cigdem
2013-01-01
Problem posing is one of the most important topics in mathematics education. Through problem posing, students gain mathematical abilities and concepts, and teachers can evaluate their students and arrange adequate learning environments. The aim of the present study is to investigate Turkish primary school teachers' opinions about problem posing…
Investigating the Impact of Field Trips on Teachers' Mathematical Problem Posing
ERIC Educational Resources Information Center
Courtney, Scott A.; Caniglia, Joanne; Singh, Rashmi
2014-01-01
This study examines the impact of field trip experiences on teachers' mathematical problem posing. Teachers from a large urban public school system in the Midwest participated in a professional development program that incorporated experiential learning with mathematical problem formulation experiences. During 2 weeks of summer 2011, 68 teachers…
The role of service areas in the optimization of FSS orbital and frequency assignments
NASA Technical Reports Server (NTRS)
Levis, C. A.; Wang, C. W.; Yamamura, Y.; Reilly, C. H.; Gonsalvez, D. J.
1985-01-01
A relationship is derived, on a single-entry interference basis, for the minimum allowable spacing between two satellites as a function of electrical parameters and service-area geometries. For circular beams, universal curves relate the topocentric satellite spacing angle to the service-area separation angle measured at the satellite. The corresponding geocentric spacing depends only weakly on the mean longitude of the two satellites, and this is true also for elliptical antenna beams. As a consequence, if frequency channels are preassigned, the orbital assignment synthesis of a satellite system can be formulated as a mixed-integer programming (MIP) problem or approximated by a linear programming (LP) problem, with the interference protection requirements enforced by constraints while some linear function is optimized. Possible objective-function choices are discussed and explicit formulations are presented for the choice of the sum of the absolute deviations of the orbital locations from some prescribed ideal location set. A test problem is posed consisting of six service areas, each served by one satellite, all using elliptical antenna beams and the same frequency channels. Numerical results are given for the three ideal location prescriptions for both the MIP and LP formulations. The resulting scenarios also satisfy reasonable aggregate interference protection requirements.
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
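A classical heuristic that likewise needs no noise-level information is the quasi-optimality rule. The sketch below applies it to plain Tikhonov regularization on a linear toy problem, not to the iteratively regularized Gauss-Newton setting of the paper; the operator, noise level and parameter grid are chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Linear toy problem whose noise level we pretend not to know.
n = 60
s = np.linspace(0, 1, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.02)
x_true = np.exp(-((s - 0.5) ** 2) / 0.01)
b = A @ x_true + 5e-3 * rng.standard_normal(n)

def tikhonov(alpha):
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Quasi-optimality: over a geometric grid of alphas, pick the one where
# consecutive regularized solutions change least -- no noise level needed.
alphas = np.logspace(-10, 0, 41)
xs = [tikhonov(a) for a in alphas]
jumps = [np.linalg.norm(xs[k + 1] - xs[k]) for k in range(len(alphas) - 1)]
k_star = int(np.argmin(jumps))
x_qo = xs[k_star]
```

Heuristic rules like this can fail on adversarial noise (as the theory behind noise-level-free rules warns), but on benign problems they often land near the optimal parameter.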
Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian
2015-01-01
The insufficiency of data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method of integrating multi-source data is proposed. Currently, the multiple satellite navigation systems and various ionospheric observing instruments provide abundant data which can be employed to reconstruct ionospheric electron density (IED). In order to improve the vertical resolution of IED, we study IED reconstruction by integrating ground-based GPS data, occultation data from LEO satellites, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations, and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone. PMID:26266764
NASA Astrophysics Data System (ADS)
Jiang, Peng; Peng, Lihui; Xiao, Deyun
2007-06-01
This paper presents a regularization method for electrical capacitance tomography (ECT) image reconstruction that uses different window functions as regularization filters. Image reconstruction for ECT is a typical ill-posed inverse problem. Because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. Window functions, such as the Hanning window, the cosine window and so on, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours are clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularization.
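The spectral-filtering view can be sketched directly: factor the system with an SVD and compare a sharp truncation against a Hanning-tapered window that rolls the inverted singular values off smoothly. The toy matrix, noise level and window length below are assumptions, not the ECT sensitivity model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Ill-posed toy system standing in for the ECT sensitivity matrix.
n = 50
s = np.linspace(0, 1, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01)
x_true = np.sin(np.pi * s) ** 2
b = A @ x_true + 1e-3 * rng.standard_normal(n)

U, sig, Vt = np.linalg.svd(A)

def filtered_solution(weights):
    """Spectral-filtered inverse: x = V diag(w_i / sigma_i) U^T b."""
    return Vt.T @ ((weights / sig) * (U.T @ b))

k = 15                                                  # retained components (assumed)
w_trunc = np.where(np.arange(n) < k, 1.0, 0.0)          # sharp cutoff (TSVD)
w_hann = np.zeros(n)
w_hann[:k] = 0.5 * (1 + np.cos(np.pi * np.arange(k) / k))  # Hanning taper

x_naive = filtered_solution(np.ones(n))   # unfiltered inverse: noise-dominated
x_tsvd = filtered_solution(w_trunc)
x_hann = filtered_solution(w_hann)
```

The tapered window trades a little extra bias for smoother suppression of the noisy small-singular-value components, which is the role the paper's window functions play.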
Thermal diffusion of Boussinesq solitons.
Arévalo, Edward; Mertens, Franz G
2007-10-01
We consider the problem of the soliton dynamics in the presence of an external noisy force for the Boussinesq type equations. A set of ordinary differential equations (ODEs) of the relevant coordinates of the system is derived. We show that for the improved Boussinesq (IBq) equation the set of ODEs has limiting cases leading to a set of ODEs which can be directly derived either from the ill-posed Boussinesq equation or from the Korteweg-de Vries (KdV) equation. The case of a soliton propagating in the presence of damping and thermal noise is considered for the IBq equation. A good agreement between theory and simulations is observed showing the strong robustness of these excitations. The results obtained here generalize previous results obtained in the frame of the KdV equation for lattice solitons in the monatomic chain of atoms.
NASA Astrophysics Data System (ADS)
Huber, Franz J. T.; Will, Stefan; Daun, Kyle J.
2016-11-01
Inferring the size distribution of aerosolized fractal aggregates from the angular distribution of elastically scattered light is a mathematically ill-posed problem. This paper presents a procedure for analyzing Wide-Angle Light Scattering (WALS) data using Bayesian inference. The outcome is probability densities for the recovered size distribution and aggregate morphology parameters. This technique is applied to both synthetic data and experimental data collected on soot-laden aerosols, using a measurement equation derived from Rayleigh-Debye-Gans fractal aggregate (RDG-FA) theory. In the case of experimental data, the recovered aggregate size distribution parameters are generally consistent with TEM-derived values, but the accuracy is impaired by the well-known limited accuracy of RDG-FA theory. Finally, we show how this bias could potentially be avoided using the approximation error technique.
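The essence of the Bayesian treatment, posterior densities over morphology parameters rather than point estimates, can be shown with a one-parameter grid sketch. The power-law forward model, noise model and parameter range below are invented stand-ins for the RDG-FA machinery.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "scattering" forward model: the angular signal depends on one
# morphology parameter D (a stand-in; RDG-FA has several parameters).
angles = np.linspace(0.1, 1.0, 20)

def forward(D):
    return angles ** (-D)          # power-law decay vs angle, slope set by D

D_true, sigma = 1.8, 0.05
y = forward(D_true) * (1 + sigma * rng.standard_normal(angles.size))

# Grid-based Bayesian inference: flat prior on D, Gaussian (relative) noise.
D_grid = np.linspace(1.0, 2.5, 301)
log_post = np.array([-0.5 * np.sum((y - forward(D)) ** 2 / (sigma * forward(D)) ** 2)
                     for D in D_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum() * (D_grid[1] - D_grid[0])   # normalized posterior density

D_map = D_grid[np.argmax(post)]
```

The output is a full density over D, so credible intervals come for free, which is the practical appeal of the Bayesian route over a single least-squares fit.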
Robotic disaster recovery efforts with ad-hoc deployable cloud computing
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.
2013-06-01
Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, complicated by the dynamic disaster recovery environment. In a typical SaR response scenario, responder robots will require different levels of processing capability during various parts of the response effort and will need to utilize multiple algorithms. Placing all of these capabilities onboard the robot precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad-hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0) compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.
The mean field theory in EM procedures for blind Markov random field image restoration.
Zhang, J
1993-01-01
A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one, which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem, which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are more visually pleasing.
London, L
2009-11-01
Little research into neurobehavioural methods and effects occurs in developing countries, where established neurotoxic chemicals continue to pose significant occupational and environmental burdens, and where agents newly identified as neurotoxic are also widespread. Much of the morbidity and mortality associated with neurotoxic agents remains hidden in developing countries as a result of poor case detection, lack of skilled personnel, facilities and equipment for diagnosis, inadequate information systems, limited resources for research and significant competing causes of ill-health, such as HIV/AIDS and malaria. Placing the problem in a human rights context enables researchers and scientists in developing countries to make a strong case for why the field of neurobehavioural methods and effects matters because there are numerous international human rights commitments that make occupational and environmental health and safety a human rights obligation.
A microwave tomography strategy for structural monitoring
NASA Astrophysics Data System (ADS)
Catapano, I.; Crocco, L.; Isernia, T.
2009-04-01
The capability of electromagnetic waves to penetrate optically dense regions can be conveniently exploited to provide highly informative images of the internal status of man-made structures in a non-destructive and minimally invasive way. In this framework, as an alternative to the widely adopted radar techniques, Microwave Tomography approaches are worth considering. As a matter of fact, they may accurately reconstruct the permittivity and conductivity distributions of a given region from the knowledge of a set of incident fields and measurements of the corresponding scattered fields. As far as cultural heritage conservation is concerned, this allows not only detection of the anomalies which can possibly damage the integrity and stability of the structure, but also characterization of their morphology and electrical features, which is useful information for properly addressing repair actions. However, since a nonlinear and ill-posed inverse scattering problem has to be solved, proper regularization strategies and sophisticated data processing tools have to be adopted to ensure the reliability of the results. To pursue this aim, in recent years much attention has been focused on the advantages introduced by diversity in data acquisition (multi-frequency/static/view data) [1,2], as well as on the analysis of the factors affecting the solution of an inverse scattering problem [3]. Moreover, it has been shown in [4] how the degree of nonlinearity of the relationship between the scattered field and the electromagnetic parameters of the targets can be changed by properly choosing the mathematical model adopted to formulate the scattering problem.
Exploiting the above results, in this work we propose an imaging procedure in which the inverse scattering problem is formulated as an optimization problem: the mathematical relationship between data and unknowns is expressed by means of a convenient integral-equation model, and the sought solution is defined as the global minimum of a cost functional. In particular, a local minimization scheme is adopted, preceded by a pre-processing step devoted to preliminarily assessing the location and shape of the anomalies. The effectiveness of the proposed strategy has been preliminarily assessed by means of numerical examples concerning the diagnostics of masonry structures, which will be shown in the Conference. [1] O. M. Bucci, L. Crocco, T. Isernia, and V. Pascazio, "Subsurface inverse scattering problems: Quantifying, qualifying and achieving the available information," IEEE Trans. Geosci. Remote Sens., vol. 39, no. 5, pp. 2527-2538, 2001. [2] R. Persico, R. Bernini, and F. Soldovieri, "The role of the measurement configuration in inverse scattering from buried objects under the distorted Born approximation," IEEE Trans. Antennas Propag., vol. 53, no. 6, pp. 1875-1887, Jun. 2005. [3] I. Catapano, L. Crocco, M. D'Urso, and T. Isernia, "On the effect of support estimation and of a new model in 2-D inverse scattering problems," IEEE Trans. Antennas Propag., vol. 55, no. 6, pp. 1895-1899, 2007. [4] M. D'Urso, I. Catapano, L. Crocco, and T. Isernia, "Effective solution of 3-D scattering problems via series expansions: Applicability and a new hybrid scheme," IEEE Trans. Geosci. Remote Sens., vol. 45, no. 3, pp. 639-648, 2007.
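The regularised cost-functional formulation described above can be sketched in a few lines. Note that the forward operator, data, regularisation weight and step size below are illustrative stand-ins, not the paper's scattering model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_unk = 60, 40
A = rng.standard_normal((n_data, n_unk))          # stand-in linearised forward operator
x_true = np.zeros(n_unk)
x_true[10:15] = 1.0                               # a localised "anomaly"
d = A @ x_true + 0.01 * rng.standard_normal(n_data)   # noisy scattered-field data

alpha = 1e-2                                      # regularisation weight

def cost(x):
    # regularised cost functional: data misfit plus Tikhonov penalty
    return np.sum((A @ x - d) ** 2) + alpha * np.sum(x ** 2)

x = np.zeros(n_unk)                               # initial guess for local minimisation
step = 0.5 / (np.linalg.norm(A, 2) ** 2 + alpha)  # safe gradient-descent step
for _ in range(2000):
    grad = 2 * A.T @ (A @ x - d) + 2 * alpha * x
    x -= step * grad

print(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```

The local scheme finds the minimiser of the regularised functional; for this well-conditioned toy operator that minimiser lies close to the true contrast.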
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1993-01-01
In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to ill-conditioning of the coefficient matrix; this problem can be overcome when the equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite-volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low-Reynolds-number laminar flow; and (2) transonic high-Reynolds-number turbulent flow.
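The incremental (delta/correction) form can be illustrated on a toy linear system: each pass solves an approximate correction system M Δx = b − Ax and updates x. The matrices below are small diagonally dominant stand-ins, not a flow-solver Jacobian:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# diagonally dominant stand-in for the sensitivity-equation matrix
A = 4.0 * np.eye(n) + 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)
b = rng.standard_normal(n)

M = np.diag(np.diag(A))            # cheap approximate operator (Jacobi-like)
x = np.zeros(n)
for _ in range(100):
    r = b - A @ x                  # residual of the standard form A x = b
    dx = np.linalg.solve(M, r)     # incremental (correction) system M dx = r
    x += dx

print(float(np.linalg.norm(b - A @ x)))
```

Because the residual is always formed with the exact operator A, the converged x solves the original system even though only the cheap approximate operator M is ever inverted.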
ERIC Educational Resources Information Center
Darvin, Jacqueline
2009-01-01
One way to merge imagination with problem-posing and problem-solving in the English classroom is by asking students to respond to "cultural and political vignettes" (CPVs). CPVs are cultural and political situations that are presented to students so that they can practice the creative and essential decision-making skills that they will need to use…
ERIC Educational Resources Information Center
Huntley, Mary Ann; Davis, Jon D.
2008-01-01
A cross-curricular structured-probe task-based clinical interview study with 44 pairs of third year high-school mathematics students, most of whom were high achieving, was conducted to investigate their approaches to a variety of algebra problems. This paper presents results from three problems that were posed in symbolic form. Two problems are…
Management Issues in Critically Ill Pediatric Patients with Trauma.
Ahmed, Omar Z; Burd, Randall S
2017-10-01
The management of critically ill pediatric patients with trauma poses many challenges because of the infrequency and diversity of severe injuries and a paucity of high-level evidence to guide care for these uncommon events. This article discusses recent recommendations for early resuscitation and blood component therapy for hypovolemic pediatric patients with trauma. It also highlights the specific types of injuries that lead to severe injury in children and presents challenges related to their management. Copyright © 2017 Elsevier Inc. All rights reserved.
Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss EM
2017-01-01
Background: Phase of Illness describes stages of advanced illness according to care needs of the individual, family and suitability of care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. Aims: The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider strength of associations between these measures and Phase of Illness. Design and setting: Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function measured using Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs measured using items on Palliative Care Problem Severity Scale. Results: Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in stable phase (65.9, 95% confidence interval = 63.4–68.3) and lowest in dying phase (16.6, 95% confidence interval = 15.3–17.8). Mean pain was highest in unstable phase (1.43, 95% confidence interval = 1.36–1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness (χ2 = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in deteriorating phase than unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01–1.49). Forty-nine percent of the variance in Phase of Illness is explained by Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Conclusion: Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. 
Lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation. PMID:28812945
Shear-stress fluctuations and relaxation in polymer glasses
NASA Astrophysics Data System (ADS)
Kriuchevskyi, I.; Wittmer, J. P.; Meyer, H.; Benzerara, O.; Baschnagel, J.
2018-01-01
We investigate by means of molecular dynamics simulation a coarse-grained polymer glass model, focusing on (quasistatic and dynamical) shear-stress fluctuations as a function of temperature T and sampling time Δt. The linear response is characterized using (ensemble-averaged) expectation values of the contributions (time averaged for each shear plane) to the stress-fluctuation relation μsf for the shear modulus and the shear-stress relaxation modulus G(t). Using 100 independent configurations, we pay attention to the respective standard deviations. While the ensemble-averaged modulus μsf(T) decreases continuously with increasing T for all Δt sampled, its standard deviation δμsf(T) is nonmonotonic, with a striking peak at the glass transition. The question of whether the shear modulus is continuous or has a jump singularity at the glass transition is thus ill posed. Confirming the effective time-translational invariance of our systems, the Δt dependence of μsf and related quantities can be understood using a weighted integral over G(t).
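A stress-fluctuation estimate of the kind used above can be sketched numerically as μsf = μa − βV·Var(σ). The stress series, β, V and the affine term μa below are synthetic assumptions for illustration, not simulation output:

```python
import numpy as np

rng = np.random.default_rng(2)
beta, V = 1.0, 1000.0              # inverse temperature and volume (arbitrary units)
mu_a = 20.0                        # assumed affine (Born) contribution
# synthetic instantaneous shear stresses with variance 15/(beta*V)
sigma = rng.normal(0.0, np.sqrt(15.0 / (beta * V)), size=100_000)

mu_f = beta * V * np.var(sigma)    # fluctuation term, approx. 15 for this synthetic data
mu_sf = mu_a - mu_f                # stress-fluctuation estimate of the shear modulus
print(round(float(mu_sf), 1))
```

In practice both terms fluctuate between configurations, which is why the abstract tracks the standard deviation of μsf across an ensemble as well as its mean.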
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
NASA Astrophysics Data System (ADS)
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
According to the direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse-reconstruction technique orthogonal matching pursuit (OMP) is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. The solver has three features: (1) AutoCAD is employed as a geometry preprocessor owing to its powerful graphics capabilities. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) Fourier and Daubechies wavelet transforms are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
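Orthogonal matching pursuit, the sparse-recovery step named above, can be sketched as follows. The sensing matrix and sparse signal here are random stand-ins for the paper's projection matrix and absorption coefficients:

```python
import numpy as np

def omp(A, y, n_nonzero):
    """Greedy OMP: repeatedly pick the column most correlated with the
    residual, then re-fit by least squares on the chosen support."""
    residual, support = y.copy(), []
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((60, 100))
A /= np.linalg.norm(A, axis=0)                    # unit-norm columns
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.0, -2.0, 1.5]            # 3-sparse unknown
y = A @ x_true                                    # noiseless measurements

x_hat = omp(A, y, 3)
print(sorted(np.nonzero(x_hat)[0].tolist()))
```

With far fewer measurements (60) than unknowns (100), the greedy support search plus least-squares refit recovers the sparse vector exactly in the noiseless case.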
NASA Astrophysics Data System (ADS)
Ahmed, A. Soueid; Jardani, A.; Revil, A.; Dupont, J. P.
2016-03-01
Transient hydraulic tomography is used to image the heterogeneous hydraulic conductivity and specific storage fields of shallow aquifers using time series of hydraulic head data. Such an ill-posed and non-unique inverse problem can be regularized using spatial geostatistical characteristics of the two fields. In addition to hydraulic head changes, the flow of water during pumping tests generates an electrical field of electrokinetic nature. These electrical field fluctuations can be passively recorded at the ground surface using a network of non-polarizing electrodes connected to a high-impedance (> 10 MOhm), sensitive (0.1 mV) voltmeter, a method known in geophysics as the self-potential method. We perform a joint inversion of the self-potential and hydraulic head data to image the hydraulic conductivity and specific storage fields. We work on a 3D synthetic confined aquifer and use the adjoint-state method to compute the sensitivities of the hydraulic parameters to the hydraulic head and self-potential data in both steady-state and transient conditions. The inverse problem is solved using the geostatistical quasi-linear algorithm framework of Kitanidis. When the number of piezometers is small, the record of the transient self-potential signals provides useful information to characterize the hydraulic conductivity and specific storage fields. These results show that the self-potential method reveals heterogeneities in some areas of the aquifer that could not be captured by tomography based on the hydraulic heads alone. In our analysis, the improvement in the hydraulic conductivity and specific storage estimates was based on perfect knowledge of the electrical resistivity field. This implies that electrical resistivity will need to be jointly inverted with the hydraulic parameters in future studies, and the impact of its uncertainty assessed with respect to the final tomograms of the hydraulic parameters.
NASA Astrophysics Data System (ADS)
Zaroli, C.; Sambridge, M.; Lévêque, J.-J.; Debayle, E.; Nolet, G.
2013-06-01
In a linear ill-posed inverse problem, the regularisation parameter (damping) controls the balance between minimising both the residual data misfit and the model norm. Poor knowledge of data uncertainties often makes the selection of damping rather arbitrary. To go beyond that subjectivity, an objective rationale for the choice of damping is presented, which is based on the coherency of delay-time estimates in different frequency bands. Our method is tailored to the problem of global Multiple-Frequency Tomography (MFT), using a data set of 287 078 S-wave delay-times measured in five frequency bands (10, 15, 22, 34, 51 s central periods). Whereas for each ray path the delay-time estimates should vary coherently from one period to the other, the noise most likely is not coherent. Thus, the lack of coherency of the information in different frequency bands is exploited, using an analogy with the cross-validation method, to identify models dominated by noise. In addition, a sharp change of behaviour of the model ℓ∞-norm, as the damping becomes lower than a threshold value, is interpreted as the signature of data noise starting to significantly pollute at least one model component. Models with damping larger than this threshold are diagnosed as being constructed with poor data exploitation. Finally, a preferred model is selected from the remaining range of permitted model solutions. This choice is quasi-objective in terms of model interpretation, as the selected model shows a high degree of similarity with almost all other permitted models (correlation superior to 98% up to spherical harmonic degree 80). The obtained tomographic model is displayed in the mid lower-mantle (660-1910 km depth), and is shown to be compatible with three other recent global shear-velocity models. A wider application of the presented rationale should permit us to converge towards more objective seismic imaging of the Earth's mantle.
NASA Astrophysics Data System (ADS)
Zaroli, C.; Sambridge, M.; Lévêque, J.-J.; Debayle, E.; Nolet, G.
2013-10-01
In a linear ill-posed inverse problem, the regularisation parameter (damping) controls the balance between minimising both the residual data misfit and the model norm. Poor knowledge of data uncertainties often makes the selection of damping rather arbitrary. To go beyond that subjectivity, an objective rationale for the choice of damping is presented, which is based on the coherency of delay-time estimates in different frequency bands. Our method is tailored to the problem of global multiple-frequency tomography (MFT), using a data set of 287 078 S-wave delay times measured in five frequency bands (10, 15, 22, 34, and 51 s central periods). Whereas for each ray path the delay-time estimates should vary coherently from one period to the other, the noise most likely is not coherent. Thus, the lack of coherency of the information in different frequency bands is exploited, using an analogy with the cross-validation method, to identify models dominated by noise. In addition, a sharp change of behaviour of the model ℓ∞-norm, as the damping becomes lower than a threshold value, is interpreted as the signature of data noise starting to significantly pollute at least one model component. Models with damping larger than this threshold are diagnosed as being constructed with poor data exploitation. Finally, a preferred model is selected from the remaining range of permitted model solutions. This choice is quasi-objective in terms of model interpretation, as the selected model shows a high degree of similarity with almost all other permitted models (correlation superior to 98% up to spherical harmonic degree 80). The obtained tomographic model is displayed in the mid lower-mantle (660-1910 km depth), and is shown to be compatible with three other recent global shear-velocity models. A wider application of the presented rationale should permit us to converge towards more objective seismic imaging of Earth's mantle.
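The cross-validation analogy can be illustrated with a simplified damping scan (not the authors' coherency criterion): invert one half of a synthetic data set over a range of damping values and score each model on the held-out half, keeping the damping that predicts the held-out data best. The kernel, model and noise level are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 120, 60
G = rng.standard_normal((n, m))                    # stand-in tomographic kernel
m_true = rng.standard_normal(m)
d = G @ m_true + 0.5 * rng.standard_normal(n)      # noisy delay-time data

train, test = np.arange(0, n, 2), np.arange(1, n, 2)   # two disjoint "bands"

def invert(damping):
    # damped least-squares (Tikhonov) inversion of the training subset
    A = G[train].T @ G[train] + damping * np.eye(m)
    return np.linalg.solve(A, G[train].T @ d[train])

dampings = 10.0 ** np.arange(-4, 4)
scores = [np.linalg.norm(G[test] @ invert(lam) - d[test]) for lam in dampings]
best = dampings[int(np.argmin(scores))]
print(best)
```

Too little damping fits the training noise and predicts the held-out subset poorly; too much damping shrinks the model toward zero; the held-out misfit is minimised at an intermediate value.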
NASA Astrophysics Data System (ADS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-04-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to linearize the nonlinear problem. The analytic formulation of this Jacobian matrix is derived and its effectiveness is verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results.
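Linear back-projection, the reconstruction step named above, can be sketched in one dimension. The smooth synthetic Jacobian below is a stand-in for the power-density sensitivity matrix, not a UMEIT model:

```python
import numpy as np

pix = np.arange(200)                              # 1-D "conductivity" grid
centers = np.linspace(0, 199, 30)                 # 30 measurement sensitivities
# Gaussian sensitivity profiles as a stand-in Jacobian J (measurements x pixels)
J = np.exp(-0.5 * ((pix[None, :] - centers[:, None]) / 15.0) ** 2)

delta_x = np.zeros(200)
delta_x[90:110] = 1.0                             # conductivity perturbation
delta_b = J @ delta_x                             # linearised measurement change

# LBP: smear each measurement change back along its sensitivity row,
# normalised by the summed sensitivity of each pixel
x_lbp = (J.T @ delta_b) / (J.T @ np.ones(len(delta_b)))
print(int(np.argmax(x_lbp)))
```

LBP is a single transpose operation rather than a full inversion, so it is fast and noise-tolerant but blurry: the reconstruction peaks inside the perturbed zone without recovering its sharp edges.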
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, which captures surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target as it performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
ERIC Educational Resources Information Center
Aguilar-Magallón, Daniel Aurelio; Reyes-Martìnez, Isaid
2016-01-01
We analyze and discuss ways in which prospective high school teachers pose and pursue questions or problems during the process of reconstructing dynamic configurations of figures given in problem statements. To what extent does the systematic use of a Dynamic Geometry System (DGS) help the participants engage in problem posing activities…
Luyckx, Koen; Rassart, Jessica; Aujoulat, Isabelle; Goubert, Liesbet; Weets, Ilse
2016-04-01
This long-term prospective study examined whether illness self-concept (or the degree to which chronic illness becomes integrated in the self) mediated the pathway from self-esteem to problem areas in diabetes in emerging adults with Type 1 diabetes. Having a central illness self-concept (i.e. feeling overwhelmed by diabetes) was found to relate to lower self-esteem, and more treatment, food, emotional, and social support problems. Furthermore, path analyses indicated that self-esteem was negatively related to both levels and relative changes in these problem areas in diabetes over a period of 5 years. Illness self-concept fully mediated these associations. © The Author(s) 2014.
Marker optimization for facial motion acquisition and deformation.
Le, Binh H; Zhu, Mingyang; Deng, Zhigang
2013-11-01
A long-standing problem in marker-based facial motion capture is determining the optimal facial mocap marker layout. Despite its wide range of potential applications, this problem has not yet been systematically explored. This paper describes an approach to compute optimized marker layouts for facial motion acquisition as an optimization of characteristic control points from a set of high-resolution, ground-truth facial mesh sequences. Specifically, the thin-shell linear deformation model is imposed onto the example pose reconstruction process via optional hard constraints such as symmetry and multiresolution constraints. Through our experiments and comparisons, we validate the effectiveness, robustness and accuracy of our approach. Besides guiding minimal yet effective placement of facial mocap markers, we also describe and demonstrate two selected applications: marker-based facial mesh skinning and multiresolution facial performance capture.
Concepts, Structures, and Goals: Redefining Ill-Definedness
ERIC Educational Resources Information Center
Lynch, Collin; Ashley, Kevin D.; Pinkwart, Niels; Aleven, Vincent
2009-01-01
In this paper we consider prior definitions of the terms "ill-defined domain" and "ill-defined problem". We then present alternate definitions that better support research at the intersection of Artificial Intelligence and Education. In our view both problems and domains are ill-defined when essential concepts, relations, or criteria are un- or…
General methodology for simultaneous representation and discrimination of multiple object classes
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Casasent, David P.
1998-03-01
We present a new general method for linear and nonlinear feature extraction for simultaneous representation and classification. We call this approach the maximum representation and discrimination feature (MRDF) method. We develop a novel nonlinear eigenfeature extraction technique to represent data with closed-form solutions and use it to derive a nonlinear MRDF algorithm. Results of the MRDF method on synthetic databases are shown and compared with results from standard Fukunaga-Koontz transform and Fisher discriminant function methods. The method is also applied to an automated product inspection problem and to classification and pose estimation of two similar objects under 3D aspect angle variations.
Asymptotic analysis of the local potential approximation to the Wetterich equation
NASA Astrophysics Data System (ADS)
Bender, Carl M.; Sarkar, Sarben
2018-06-01
This paper reports a study of the nonlinear partial differential equation that arises in the local potential approximation to the Wetterich formulation of the functional renormalization group equation. A cut-off-dependent shift of the potential in this partial differential equation is performed. This shift allows a perturbative asymptotic treatment of the differential equation for large values of the infrared cut-off. To leading order in perturbation theory the differential equation becomes a heat equation, where the sign of the diffusion constant changes as the space-time dimension D passes through 2. When D < 2, one obtains a forward heat equation whose initial-value problem is well-posed. However, for D > 2 one obtains a backward heat equation whose initial-value problem is ill-posed. For the special case D = 1 the asymptotic series for cubic and quartic models is extrapolated to the small infrared-cut-off limit by using Padé techniques. The effective potential thus obtained from the partial differential equation is then used in a Schrödinger-equation setting to study the stability of the ground state. For cubic potentials it is found that this Padé procedure distinguishes between a PT-symmetric theory and a conventional Hermitian theory (g real). For a PT-symmetric theory the effective potential is nonsingular and has a stable ground state, but for a conventional theory the effective potential is singular. For a conventional Hermitian theory and a PT-symmetric theory (g > 0) the results are similar: the effective potentials in both cases are nonsingular and possess stable ground states.
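The forward/backward heat-equation contrast described above can be demonstrated numerically: marching forward damps high-frequency noise, while marching with a diffusion constant of the opposite sign amplifies it exponentially. The grid, time step and noise amplitude below are illustrative choices:

```python
import numpy as np

n, steps = 64, 200
dx, dt = 1.0 / n, 1e-5             # dt / dx**2 ~ 0.041 < 0.5: forward-stable
x = np.arange(n) * dx
# a smooth mode plus a tiny short-wavelength perturbation ("noise")
u0 = np.sin(2 * np.pi * x) + 1e-6 * np.sin(2 * np.pi * 20 * x)

def march(u, diffusion):
    # explicit time stepping on a periodic grid
    for _ in range(steps):
        lap = (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
        u = u + dt * diffusion * lap
    return u

forward = march(u0, +1.0)          # well-posed: the noise mode is damped
backward = march(u0, -1.0)         # ill-posed: the noise mode explodes
print(float(np.max(np.abs(forward))), float(np.max(np.abs(backward))))
```

The same scheme, with only the sign of the diffusion constant flipped, takes the 10⁻⁶-amplitude perturbation to a magnitude of thousands within 200 steps, which is exactly the disturbance amplification that forces regularization of backward problems.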
NASA Astrophysics Data System (ADS)
Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn
EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, raw EIT conductivity estimates cannot be used to correctly determine the shape and size of complex-shaped regional anomalies; an efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms used for this purpose, such as the Newton-Raphson method, is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the features desired to solve global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
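A minimal DE/rand/1/bin sketch of such a boundary search follows, with a toy radius-sampling forward model in place of an EIT solver; the Fourier parameterisation, bounds and DE settings are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)

def radius(c):
    # truncated Fourier series boundary: r(theta) = a0 + a1 cos(theta) + b1 sin(theta)
    return c[0] + c[1] * np.cos(theta) + c[2] * np.sin(theta)

true_c = np.array([1.0, 0.2, -0.1])
data = radius(true_c)                       # stand-in "measured" boundary radii

def misfit(c):
    return float(np.sum((radius(c) - data) ** 2))

lo = np.array([0.5, -0.5, -0.5])
hi = np.array([1.5, 0.5, 0.5])
NP, F, CR = 30, 0.7, 0.9                    # population size, scale factor, crossover rate
pop = lo + (hi - lo) * rng.random((NP, 3))
fit = np.array([misfit(p) for p in pop])

for _ in range(200):                        # DE/rand/1/bin generations
    for i in range(NP):
        r1, r2, r3 = pop[rng.choice(NP, 3, replace=False)]
        trial = np.where(rng.random(3) < CR, r1 + F * (r2 - r3), pop[i])
        trial = np.clip(trial, lo, hi)
        f = misfit(trial)
        if f <= fit[i]:                     # greedy selection
            pop[i], fit[i] = trial, f

best = pop[int(np.argmin(fit))]
print(np.round(best, 3))
```

Note that DE needs only misfit evaluations, no gradient and no careful initial guess, which is the property the abstract contrasts with Newton-Raphson-type schemes.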
Person Authentication Using Learned Parameters of Lifting Wavelet Filters
NASA Astrophysics Data System (ADS)
Niijima, Koichi
2006-10-01
This paper proposes a method for identifying persons by the use of lifting wavelet parameters learned by kurtosis minimization. Our learning method relies on desirable properties of the kurtosis and of the wavelet coefficients of a facial image. Exploiting these properties, the lifting parameters are trained so as to minimize the kurtosis of the lifting wavelet coefficients computed for the facial image. Since this minimization problem is ill-posed, it is solved with the aid of Tikhonov's regularization method. Our learning algorithm is applied to each of the faces to be identified to generate a feature vector whose components consist of the learned parameters. The constructed feature vectors are stored together with the corresponding faces in a feature-vector database. Person authentication is performed by comparing the feature vector of a query face with those stored in the database. In numerical experiments, the lifting parameters are trained for each of the neutral faces of 132 persons (74 males and 58 females) in the AR face database, and person authentication is executed using the smile and anger faces of the same persons in the database.
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-12-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
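A toy discretisation of the Abel equation F(y) = 2 ∫ f(r) r / sqrt(r² − y²) dr with second-order Tikhonov smoothing can illustrate the regularised solve; the grid, test profile, noise level and penalty weight are assumptions, and this sketch omits the compact-set constraints the paper describes:

```python
import numpy as np

n, R = 80, 1.0
dr = R / n
r = (np.arange(n) + 0.5) * dr        # quadrature nodes
y = np.arange(n) * dr                # measurement ordinates
f_true = np.exp(-4 * r**2)           # unknown radial profile (test case)

# midpoint-rule Abel operator: F(y) = 2 * integral over r > y of f(r) r / sqrt(r^2 - y^2)
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if r[j] > y[i]:
            A[i, j] = 2 * r[j] * dr / np.sqrt(r[j] ** 2 - y[i] ** 2)

rng = np.random.default_rng(7)
F = A @ f_true + 1e-3 * rng.standard_normal(n)   # noisy projected data

L = np.diff(np.eye(n), 2, axis=0)    # second-difference penalty (favours smooth f)
lam = 1e-3
f_rec = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ F)

print(float(np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true)))
```

Without the smoothness penalty the inversion amplifies the data noise; with it, the recovered profile stays close to the true one while still fitting the data to within the noise level.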
Liao, Yu-Kai; Tseng, Sheng-Hao
2014-01-01
Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and can be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at the various SDSs were mutually referenced to complete one round of iteration, and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency-domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties under various layer thickness and optical property settings. It is expected that this algorithm can work with photon transport models in the frequency and time domains for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring of muscle hemodynamics. PMID:24688828