A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
1980-02-01
to estimate f well, moderately well, or poorly. The sensitivity of a regularized estimate of f to the noise is made explicit. After giving the... (AD-A092 925, Wisconsin Univ-Madison Dept of Statistics, "Ill Posed Problems: Numerical and Statistical Methods for Mildly...", Feb 80) ...estimate f given z. We first define the intrinsic rank of the problem where ∫_0^1 K(t,s) f(s) ds is known exactly. This definition is used to provide insight
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.
2017-08-01
This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem eventually reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement error, often referred to as "noise". Because the inverse problem is ill-posed, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations despite the ill-posedness. The illustrated results show that TGSVD has many advantages over TDM, such as higher precision, better adaptability and noise immunity. In addition, choosing a proper regularization matrix L and a truncation parameter k is very useful for improving the identification accuracy and overcoming ill-posedness when the method is used to identify moving forces on a bridge.
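As background for the entry above, here is a minimal numpy sketch of plain truncated-SVD regularization for Ax = b; it is a simpler relative of the TGSVD method described in the abstract (the regularization matrix L and the bridge/vehicle model are omitted, and the toy system below is an assumption for illustration only):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b keeping only the k largest singular values.

    Small singular values amplify the noise e in b, so dropping them
    stabilizes the solution at the cost of some bias.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T   # dominant singular triplets
    return Vk @ ((Uk.T @ b) / sk)

# toy ill-conditioned system standing in for the MFI equation A x = b
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 60), 12, increasing=True)   # nearly rank-deficient
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-6 * rng.standard_normal(60)             # data contaminated by noise e
x_k = tsvd_solve(A, b, k=8)                                 # k plays the role of the truncation parameter
```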
Control and System Theory, Optimization, Inverse and Ill-Posed Problems
1988-09-14
AFOSR-87-0350, 1987-1988. CONTROL AND SYSTEM THEORY, OPTIMIZATION, INVERSE... considerable variety of research investigations within the grant areas (control and system theory, optimization, and ill-posed problems). The
NASA Astrophysics Data System (ADS)
Burman, Erik; Hansbo, Peter; Larson, Mats G.
2018-03-01
Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.
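For orientation, a generic Tikhonov-regularized control problem of the kind discussed above can be written schematically as follows; this is a textbook form, not the authors' exact functional (q is the control, u(q) the state, u_m the measured data, L a derivative operator and α the regularization weight):

```latex
\min_{q}\; \tfrac12\,\| u(q) - u_m \|_{L^2(\Omega)}^{2}
  \;+\; \tfrac{\alpha}{2}\,\| L q \|_{L^2(\Omega)}^{2}
\quad \text{subject to}\quad -\Delta u = q \ \text{in } \Omega .
```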
Regularization techniques for backward--in--time evolutionary PDE problems
NASA Astrophysics Data System (ADS)
Gustafsson, Jonathan; Protas, Bartosz
2007-11-01
Backward--in--time evolutionary PDE problems have applications in the recently--proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto--Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward--in--time problems are typical examples of ill--posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill--posed problem is approximated by a less ill--posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well--understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.
Assimilating data into open ocean tidal models
NASA Astrophysics Data System (ADS)
Kivman, Gennady A.
The problem of deriving tidal fields from observations is ill-posed: because every practically available data set is incomplete and imperfect, an infinitely large number of allowable solutions fit the data within measurement errors. Interpolating the data therefore always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those originally formulated in stochastic terms (objective analysis and general inversion), may be considered as utilizations of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.
The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.
2017-11-27
Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
Microbial food-borne illnesses pose a significant health problem in Japan. In 1996 the world's largest outbreak of Escherichia coli food illness occurred in Japan. Since then, new regulatory measures have been established, including strict hygiene practices in meat and food processi...
NASA Astrophysics Data System (ADS)
Pickard, William F.
2004-10-01
The classical PERT inverse statistics problem requires estimation of the mean, $\bar{m}$, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order-statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, $\bar{m}$ and s are computed using exact formulae.
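For context, the traditional PERT point estimates that this inverse problem refines are the standard textbook formulas (not quoted from the abstract):

```latex
\bar{m} \approx \frac{a + 4m + b}{6}, \qquad s \approx \frac{b - a}{6}.
```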
Robust penalty method for structural synthesis
NASA Technical Reports Server (NTRS)
Kamat, M. P.
1983-01-01
The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.
Cone Beam X-Ray Luminescence Tomography Imaging Based on KA-FEM Method for Small Animals.
Chen, Dongmei; Meng, Fanzhen; Zhao, Fengjun; Xu, Cao
2016-01-01
Cone beam X-ray luminescence tomography can realize fast X-ray luminescence tomography imaging with relatively low scanning time compared with narrow beam X-ray luminescence tomography. However, cone beam X-ray luminescence tomography suffers from an ill-posed reconstruction problem. First, the feasibility of experiments with different penetration depths and multiple spectra in small animals was tested using nanophosphor material. Then, the hybrid reconstruction algorithm with the KA-FEM method, whose advantages have previously been demonstrated in fluorescence tomography imaging, was applied in cone beam X-ray luminescence tomography for small animals to overcome the ill-posed reconstruction problem. The in vivo mouse experiment proved the feasibility of the proposed method.
NASA Astrophysics Data System (ADS)
Nie, Yao; Zheng, Xiaoxin
2018-07-01
We study the Cauchy problem for the 3D incompressible hyperdissipative Navier–Stokes equations and consider the well-posedness and ill-posedness in critical Fourier-Herz spaces . We prove that if and , the system is locally well-posed for large initial data as well as globally well-posed for small initial data. Also, we obtain the same result for and . More importantly, we show that the system is ill-posed in the sense of norm inflation for and q > 2. The proof relies heavily on the particular structure of the initial data u0 that we construct, which makes the first iteration of the solution inflate. Specifically, the special structure of u0 transforms an infinite sum into a finite sum in the 'remainder term', which permits us to control the remainder.
Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow
NASA Technical Reports Server (NTRS)
Arian, Eyal; Ta'asan, Shlomo
1996-01-01
In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different. While the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite dimensional approximations for the Hessians are constructed and preconditioners for gradient based methods are derived from these approximate Hessians.
NASA Astrophysics Data System (ADS)
Helmers, Michael; Herrmann, Michael
2018-03-01
We consider a lattice regularization for an ill-posed diffusion equation with a trilinear constitutive law and study the dynamics of phase interfaces in the parabolic scaling limit. Our main result guarantees for a certain class of single-interface initial data that the lattice solutions satisfy asymptotically a free boundary problem with a hysteretic Stefan condition. The key challenge in the proof is to control the microscopic fluctuations that are inevitably produced by the backward diffusion when a particle passes the spinodal region.
NASA Astrophysics Data System (ADS)
Jia, Zhongxiao; Yang, Yanfei
2018-05-01
In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: minimize ||Lx|| subject to a bound on the residual norm ||Ax - b||, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small to medium scale problems, and by randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
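A minimal sketch of the basic randomized SVD with oversampling that the MTRSVD algorithms build on is given below; it shows only the rank-(k+q) range finding and the rank-k truncation, not the authors' full MTRSVD/LSQR machinery:

```python
import numpy as np

def truncated_randomized_svd(A, k, q=10, rng=None):
    """Rank-k TRSVD of A obtained by truncating a rank-(k+q) randomized SVD.

    q is the oversampling parameter: the extra random test vectors make the
    range finder capture the dominant subspace of A more reliably.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + q))      # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal basis for the sampled range of A
    B = Q.T @ A                                  # small (k+q) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]            # truncate to rank k
```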
NASA Astrophysics Data System (ADS)
Saadat, S. A.; Safari, A.; Needell, D.
2016-06-01
The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth, especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered in which the unknown parameters are sensitive to data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies the optimal sparsity level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity level, improving upon existing sparsity based approaches.
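For reference, a compact numpy sketch of plain orthogonal matching pursuit, the classical greedy algorithm that SOMP stabilizes, is shown below; the stopping rule and stabilization steps of SOMP itself are not reproduced here:

```python
import numpy as np

def omp(A, b, sparsity):
    """Plain orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit by least squares on the support."""
    residual = b.copy()
    support = []
    x_s = np.zeros(0)
    for _ in range(sparsity):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x
```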
NASA Astrophysics Data System (ADS)
Li, Zhenhai; Nie, Chenwei; Yang, Guijun; Xu, Xingang; Jin, Xiuliang; Gu, Xiaohe
2014-10-01
Leaf area index (LAI) and leaf chlorophyll content (LCC), as the two most important crop growth variables, are major considerations in management decisions, agricultural planning and policy making. Estimation of canopy biophysical variables from remote sensing data was investigated using a radiative transfer model. However, the inversion is inherently ill-posed: a unique solution is not guaranteed, owing to the uncertainty of measurements and model assumptions. This study focused on the use of agronomy mechanism knowledge to restrict and remove ill-posed inversion results. For this purpose, the inversion results obtained using the PROSAIL model alone (NAMK) and linked with agronomic mechanism knowledge (AMK) were compared. The results showed that AMK did not significantly improve the accuracy of LAI inversion: LAI was estimated with high accuracy, and there was no significant improvement after considering AMK. The validation results for the determination coefficient (R2) and the corresponding root mean square error (RMSE) between measured and estimated LAI were 0.635 and 1.022 for NAMK, and 0.637 and 0.999 for AMK, respectively. LCC estimation was significantly improved with agronomy mechanism knowledge; the R2 and RMSE values were 0.377 and 14.495 μg cm-2 for NAMK, and 0.503 and 10.661 μg cm-2 for AMK, respectively. Results of the comparison demonstrated the need for agronomy mechanism knowledge in radiative transfer model inversion.
Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning
ERIC Educational Resources Information Center
Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan
2009-01-01
In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…
Solving ill-posed inverse problems using iterative deep neural networks
NASA Astrophysics Data System (ADS)
Adler, Jonas; Öktem, Ozan
2017-12-01
We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction, and the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).
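A schematic, assumption-laden sketch of such a partially learned gradient scheme is given below; Lambda_theta stands in for the trained convolutional update, and T, T_adjoint and grad_regulariser are placeholders for the forward operator, its adjoint and the regulariser gradient (none of these names come from the paper):

```python
import numpy as np

def learned_gradient_reconstruction(y, T, T_adjoint, grad_regulariser,
                                    Lambda_theta, x0, n_iter=10):
    """Gradient-like iterative scheme x <- x + Lambda_theta(x, grad_data, grad_reg),
    where the update is produced by a trained network fed with the gradients of
    the data discrepancy and of the regularising functional (schematic only)."""
    x = x0.copy()
    for _ in range(n_iter):
        grad_data = T_adjoint(T(x) - y)               # gradient of 0.5 * ||T(x) - y||^2
        grad_reg = grad_regulariser(x)                # gradient of the regulariser
        x = x + Lambda_theta(x, grad_data, grad_reg)  # learned update step
    return x

# untrained stand-in for the learned component, just to make the sketch runnable
Lambda_theta = lambda x, gd, gr: -0.1 * (gd + 0.01 * gr)

# toy usage: "denoising" with the identity as forward operator
y = np.random.default_rng(1).standard_normal(64)
x_rec = learned_gradient_reconstruction(
    y, T=lambda x: x, T_adjoint=lambda r: r,
    grad_regulariser=lambda x: x, Lambda_theta=Lambda_theta,
    x0=np.zeros_like(y))
```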
NASA Astrophysics Data System (ADS)
Chvetsov, Alevei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh
2015-11-01
The main objective of this article is to improve the stability of reconstruction algorithms for estimating radiobiological parameters from serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth rates, and the rate of cell loss. Accurate assessment of treatment response requires separation of these processes because they define the radiobiological determinants of treatment response and, correspondingly, tumor control probability. However, the estimation of radiobiological parameters from imaging data can be considered an inverse ill-posed problem, because a sum of several exponentials produces a Fredholm integral equation of the first kind, which is ill-posed. Therefore, the stability of the reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. Results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, where only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
The quasi-optimality criterion in the linear functional strategy
NASA Astrophysics Data System (ADS)
Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey
2018-07-01
The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that take into account the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and of the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
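As a concrete illustration of the heuristic, here is a small sketch of the classical quasi-optimality rule on a geometric grid of regularization parameters, applied to ordinary Tikhonov solutions rather than the linear-functional variant studied in the paper:

```python
import numpy as np

def quasi_optimality_tikhonov(A, b, alphas):
    """Pick alpha on a grid by the quasi-optimality criterion: minimise the
    difference between Tikhonov solutions at neighbouring parameter values."""
    sols = [np.linalg.solve(A.T @ A + a * np.eye(A.shape[1]), A.T @ b) for a in alphas]
    diffs = [np.linalg.norm(sols[i + 1] - sols[i]) for i in range(len(sols) - 1)]
    i_best = int(np.argmin(diffs))
    return alphas[i_best], sols[i_best]

# geometric grid of candidate regularization parameters
alphas = np.logspace(-8, 0, 30)
```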
Proceedings of Colloquium on Stable Solutions of Some Ill-Posed Problems, October 9, 1979.
1980-06-30
In [24] the iterative process (9) was applied to calculate the magnetization of thin magnetic films. This problem is of interest for computer... equation ∫ K(x−t) f(t) dt = g(x), x > 1. (1) Its multidimensional analogue ∫ K(x−t) f(t) dt = g(x), x ∈ A, (2) can be interpreted as the problem of
Atmospheric inverse modeling via sparse reconstruction
NASA Astrophysics Data System (ADS)
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
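One standard solver for a sparsity-constrained Tikhonov-type problem of this kind is iterative soft thresholding (ISTA); the sketch below illustrates the idea on min ||Ax − b||² + λ||x||₁ and is not the authors' implementation (the dictionary representation and the bound constraints are omitted):

```python
import numpy as np

def ista(A, b, lam, n_iter=500):
    """Iterative soft thresholding for min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))            # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x
```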
ERIC Educational Resources Information Center
Doleck, Tenzin; Jarrell, Amanda; Poitras, Eric G.; Chaouachi, Maher; Lajoie, Susanne P.
2016-01-01
Clinical reasoning is a central skill in diagnosing cases. However, diagnosing a clinical case poses several challenges that are inherent to solving multifaceted ill-structured problems. In particular, when solving such problems, the complexity stems from the existence of multiple paths to arriving at the correct solution (Lajoie, 2003). Moreover,…
Rapid optimization of multiple-burn rocket flights.
NASA Technical Reports Server (NTRS)
Brown, K. R.; Harrold, E. F.; Johnson, G. W.
1972-01-01
Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.
NASA Astrophysics Data System (ADS)
Vasilenko, Georgii Ivanovich; Taratorin, Aleksandr Markovich
Linear, nonlinear, and iterative image-reconstruction (IR) algorithms are reviewed. Theoretical results are presented concerning controllable linear filters, the solution of ill-posed functional minimization problems, and the regularization of iterative IR algorithms. Attention is also given to the problem of superresolution and analytical spectrum continuation, the solution of the phase problem, and the reconstruction of images distorted by turbulence. IR in optical and optical-digital systems is discussed with emphasis on holographic techniques.
Least Squares Computations in Science and Engineering
1994-02-01
iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem in the presence of noise, direct... optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The... engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory
Time-Domain Impedance Boundary Conditions for Computational Aeroacoustics
NASA Technical Reports Server (NTRS)
Tam, Christopher K. W.; Auriault, Laurent
1996-01-01
It is an accepted practice in aeroacoustics to characterize the properties of an acoustically treated surface by a quantity known as impedance. Impedance is a complex quantity. As such, it is designed primarily for frequency-domain analysis. Time-domain boundary conditions that are the equivalent of the frequency-domain impedance boundary condition are proposed. Both single frequency and model broadband time-domain impedance boundary conditions are provided. It is shown that the proposed boundary conditions, together with the linearized Euler equations, form well-posed initial boundary value problems. Unlike ill-posed problems, they are free from spurious instabilities that would render time-marching computational solutions impossible.
Chemical approaches to solve mycotoxin problems and improve food safety
USDA-ARS?s Scientific Manuscript database
Foodborne illnesses are experienced by most of the population and are preventable. Agricultural produce can occasionally become contaminated with fungi capable of making mycotoxins that pose health risks and reduce values. Many strategies are employed to keep food safe from mycotoxin contamination. ...
NASA Astrophysics Data System (ADS)
Klibanov, Michael V.; Kuzhuget, Andrey V.; Golubnichiy, Kirill V.
2016-01-01
A new empirical mathematical model for the Black-Scholes equation is proposed to forecast option prices. This model includes a new interval for the price of the underlying stock, new initial conditions and new boundary conditions. Conventional notions of maturity time and strike price are not used. The Black-Scholes equation is solved as a parabolic equation with reversed time, which is an ill-posed problem; thus, a regularization method is used to solve it. To verify the validity of our model, real market data for 368 randomly selected liquid options are used. A new trading strategy is proposed. Our results indicate that our method is profitable on those options. Furthermore, it is shown that the performance of two simple extrapolation-based techniques is much worse. We conjecture that our method might lead to significant profits for those financial institutions which trade large amounts of options. We caution, however, that further studies are necessary to verify this conjecture.
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
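For readers unfamiliar with smoothness constraints, the sketch below shows how a classic Laplacian smoothness constraint enters a regularized least-squares slip inversion; the names G, d, R and beta are generic, and the adaptive reweighting that defines the ASC itself is not reproduced:

```python
import numpy as np

def second_difference_matrix(n):
    """1-D discrete Laplacian used as a smoothness-constraint (regularization) matrix R."""
    R = np.zeros((n - 2, n))
    for i in range(n - 2):
        R[i, i:i + 3] = [1.0, -2.0, 1.0]
    return R

def smoothed_least_squares(G, d, R, beta):
    """Solve min_m ||G m - d||^2 + beta^2 ||R m||^2 via the augmented normal equations."""
    return np.linalg.solve(G.T @ G + beta ** 2 * (R.T @ R), G.T @ d)
```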
Inverse problems and optimal experiment design in unsteady heat transfer processes identification
NASA Technical Reports Server (NTRS)
Artyukhin, Eugene A.
1991-01-01
Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.
Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation
NASA Astrophysics Data System (ADS)
Wang, Linjun; Han, Xu; Wei, Zhouchao
The inverse problem of determining the initial condition from boundary-value data for the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule and obtain a severely ill-conditioned linear system that is sensitive to disturbances in the data: a tiny error in the right-hand side causes huge oscillations in the solution. Good results cannot be obtained by traditional methods. In this paper, we solve this problem by the Tikhonov regularization method, and numerical simulations demonstrate that this method is feasible and effective.
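A generic illustration of the two steps described above, trapezoidal discretization of a first-kind Fredholm equation followed by a Tikhonov-regularized solve, is sketched below; the Gaussian kernel and the data are toy assumptions, not the chord-vibration problem itself:

```python
import numpy as np

def fredholm_trapezoid_matrix(kernel, s, t):
    """Trapezoidal-rule discretization of the operator f -> int kernel(s, t) f(t) dt."""
    w = np.full(t.size, t[1] - t[0])     # trapezoid weights on a uniform grid
    w[0] *= 0.5
    w[-1] *= 0.5
    return kernel(s[:, None], t[None, :]) * w[None, :]

def tikhonov(K, g, alpha):
    """Tikhonov-regularized solution of the ill-conditioned system K f = g."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ g)

# toy example: a smoothing Gaussian kernel gives a severely ill-conditioned K
t = np.linspace(0.0, 1.0, 100)
K = fredholm_trapezoid_matrix(lambda s, tt: np.exp(-50.0 * (s - tt) ** 2), t, t)
f_true = np.sin(2 * np.pi * t)
g = K @ f_true + 1e-4 * np.random.default_rng(2).standard_normal(t.size)  # perturbed data
f_rec = tikhonov(K, g, alpha=1e-6)
```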
NASA Astrophysics Data System (ADS)
Antokhin, I. I.
2017-06-01
We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.
NASA Astrophysics Data System (ADS)
Sirota, Dmitry; Ivanov, Vadim
2017-11-01
Mining operations influence the stability of natural and technogenic massifs and give rise to sources of differences in mechanical stress. These sources generate a quasistationary electric field with a Newtonian potential. The paper reviews a method for determining the shape and size of a flat source of a field with this kind of potential. This problem arises in many areas of mining: geological exploration of mineral resources, ore deposits, control of underground mining, locating coal self-heating sources, localizing the sources of rock cracks, and other applied problems of practical physics. These problems are inverse and ill-posed and are solved by conversion to a Fredholm-Urysohn integral equation of the first kind. This equation is solved by A. N. Tikhonov's regularization method.
Moving force identification based on modified preconditioned conjugate gradient method
NASA Astrophysics Data System (ADS)
Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy
2018-06-01
This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of a bridge deck caused by passing vehicles, which are known to be sensitive to the ill-posedness of the inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j are very important factors influencing the identification accuracy and noise immunity of M-PCG. Compared with the conventional SVD-based counterpart embedded in the time domain method (TDM) and the standard form of CG, M-PCG with a proper regularization matrix has many advantages such as better adaptability and more robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, and this apparently makes M-PCG a preferred choice for field MFI applications.
NASA Astrophysics Data System (ADS)
Trillon, Adrien
Eddy current tomography can be employed to characterize flaws in metal plates in steam generators of nuclear power plants. Our goal is to evaluate a map of the relative conductivity that represents the flaw. This nonlinear ill-posed problem is difficult to solve, and a forward model is needed. First, we studied existing forward models to choose the one that is best adapted to our case. Finite difference and finite element methods matched our application very well. We adapted contrast source inversion (CSI) type methods to the chosen model, and a new criterion was proposed. These methods are based on the minimization of the weighted errors of the model equations, coupling and observation. They allow an error on the equations. It appeared that reconstruction quality improves as the error on the coupling equation decreases. We resorted to augmented Lagrangian techniques to constrain the coupling equation and to avoid conditioning problems. In order to overcome the ill-posed character of the problem, prior information was introduced about the shape of the flaw and the values of the relative conductivity. The efficiency of the methods is illustrated with simulated flaws in the 2D case.
Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann
2011-10-07
The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem as small noises in input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
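The variable-splitting idea can be sketched as follows: writing x = u − v with u, v ≥ 0 turns the L1-regularized least-squares problem into a bound-constrained quadratic program that a simple gradient-projection loop can solve; this is a generic sketch of the splitting idea, not the paper's ECG-specific algorithm:

```python
import numpy as np

def l1_via_splitting(A, b, lam, n_iter=2000):
    """Gradient projection for min 0.5*||A(u - v) - b||^2 + lam*(sum(u) + sum(v)), u, v >= 0.

    With x = u - v this is a bound-constrained quadratic reformulation of
    L1-regularized least squares."""
    n = A.shape[1]
    u = np.zeros(n)
    v = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        r = A @ (u - v) - b
        grad_u = A.T @ r + lam
        grad_v = -A.T @ r + lam
        u = np.maximum(u - step * grad_u, 0.0)    # project back onto u >= 0
        v = np.maximum(v - step * grad_v, 0.0)    # project back onto v >= 0
    return u - v
```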
NASA Astrophysics Data System (ADS)
Lanen, Theo A.; Watt, David W.
1995-10-01
Singular value decomposition has served as a diagnostic tool in optical computed tomography by using its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. Also assessed are the presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself.
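A minimal sketch of the diagnostic described above, counting the "significant" singular values of a tomographic weight matrix, is shown below; the relative threshold rel_tol is an assumption, since the paper compares full singular value spectra rather than a single cutoff:

```python
import numpy as np

def significant_singular_values(W, rel_tol=1e-3):
    """Return the number of singular values of the weight matrix W above
    rel_tol * s_max, a simple proxy for the condition of a tomographic geometry."""
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0])), s
```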
NASA Astrophysics Data System (ADS)
Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.
2015-12-01
Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. This study supports both educational and on-going research activities on inversion algorithms and might be useful to evaluate the uncertainties of retrieved canopy biophysical and biochemical state variables.
Local well-posedness for dispersion generalized Benjamin-Ono equations in Sobolev spaces
NASA Astrophysics Data System (ADS)
Guo, Zihua
We prove that the Cauchy problem for the dispersion generalized Benjamin-Ono equation $\partial_t u + |\partial_x|^{1+\alpha}\partial_x u + u\partial_x u = 0$, $u(x,0)=u_0(x)$, is locally well-posed in the Sobolev spaces $H^s$ for s>1-α if 0⩽α⩽1. The new ingredient is that we generalize the methods of Ionescu, Kenig and Tataru (2008) [13] to approach the problem in a less perturbative way, in spite of the ill-posedness results of Molinet, Saut and Tzvetkov (2001) [21]. Moreover, as a by-product we prove that if 0<α⩽1 the corresponding modified equation (with the nonlinearity $\pm u^2\partial_x u$) is locally well-posed in $H^s$ for s⩾1/2-α/4.
Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction
NASA Astrophysics Data System (ADS)
Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng
2017-01-01
Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, these traditional regularization methods, such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse characteristic of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, including small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust, whether in single impact force reconstruction or in consecutive impact force reconstruction.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
TOPICAL REVIEW: The stability for the Cauchy problem for elliptic equations
NASA Astrophysics Data System (ADS)
Alessandrini, Giovanni; Rondi, Luca; Rosset, Edi; Vessella, Sergio
2009-12-01
We discuss the ill-posed Cauchy problem for elliptic equations, which is pervasive in inverse boundary value problems modeled by elliptic equations. We provide essentially optimal stability results, in wide generality and under substantially minimal assumptions. As a general scheme in our arguments, we show that all such stability results can be derived by the use of a single building brick, the three-spheres inequality. Due to the current absence of research funding from the Italian Ministry of University and Research, this work has been completed without any financial support.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
NASA Technical Reports Server (NTRS)
Oliver, A Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of specifying boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation nuances will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of one-dimensional and multi-dimensional problems.
Assessment of thyroid function in dogs with low plasma thyroxine concentration.
Diaz Espineira, M M; Mol, J A; Peeters, M E; Pollak, Y W E A; Iversen, L; van Dijk, J E; Rijnberk, A; Kooistra, H S
2007-01-01
Differentiation between hypothyroidism and nonthyroidal illness in dogs poses specific problems, because plasma total thyroxine (TT4) concentrations are often low in nonthyroidal illness, and plasma thyroid stimulating hormone (TSH) concentrations are frequently not high in primary hypothyroidism. The serum concentrations of the common basal biochemical variables (TT4, free T4 [fT4], and TSH) overlap between dogs with hypothyroidism and dogs with nonthyroidal illness, but, with stimulation tests and quantitative measurement of thyroidal 99mTcO4(-) uptake, differentiation will be possible. In 30 dogs with low plasma TT4 concentration, the final diagnosis was based upon histopathologic examination of thyroid tissue obtained by biopsy. Fourteen dogs had primary hypothyroidism, and 13 dogs had nonthyroidal illness. Two dogs had secondary hypothyroidism, and 1 dog had metastatic thyroid cancer. The diagnostic value was assessed for (1) plasma concentrations of TT4, fT4, and TSH; (2) TSH-stimulation test; (3) plasma TSH concentration after stimulation with TSH-releasing hormone (TRH); (4) occurrence of thyroglobulin antibodies (TgAbs); and (5) thyroidal 99mTcO4(-) uptake. Plasma concentrations of TT4, fT4, TSH, and the hormone pairs TT4/TSH and fT4/TSH overlapped in the 2 groups, whereas, with TgAbs, there was 1 false-negative result. Results of the TSH- and TRH-stimulation tests did not meet earlier established diagnostic criteria, overlapped, or both. With a quantitative measurement of thyroidal 99mTcO4(-) uptake, there was no overlap between dogs with primary hypothyroidism and dogs with nonthyroidal illness. The results of this study confirm earlier observations that, in dogs, accurate biochemical diagnosis of primary hypothyroidism poses specific problems. Previous studies, in which the TSH-stimulation test was used as the "gold standard" for the diagnosis of hypothyroidism, may have suffered from misclassification. Quantitative measurement of thyroidal 99mTcO4(-) uptake has the highest discriminatory power with regard to the differentiation between primary hypothyroidism and nonthyroidal illness.
History matching by spline approximation and regularization in single-phase areal reservoirs
NASA Technical Reports Server (NTRS)
Lee, T. Y.; Kravaris, C.; Seinfeld, J.
1986-01-01
An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization method. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasioptimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.
The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations
NASA Technical Reports Server (NTRS)
Hesthaven, J. S.
1997-01-01
We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low-order perturbations. This analysis explains the stability problems associated with the split-field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing, independent of frequency and angle of incidence of the wave, in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques are combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through the computation of benchmark problems in aeroacoustics.
Convex Relaxation For Hard Problem In Data Mining And Sensor Localization
2017-04-13
Drusvyatskiy, S.A. Vavasis, and H. Wolkowicz. Extreme point inequalities and geometry of the rank sparsity ball. Math. Program., 152(1-2, Ser. A)...521-544, 2015. [3] M-H. Lin and H. Wolkowicz. Hiroshima's theorem and matrix norm inequalities. Acta Sci. Math. (Szeged), 81(1-2):45-53, 2015. [4] D...9867-4. [8] D. Drusvyatskiy, G. Li, and H. Wolkowicz. Alternating projections for ill-posed semidefinite feasibility problems. Math. Program., 2016
1977-12-01
exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much...functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect...to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution
Pose-free structure from motion using depth from motion constraints.
Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G
2011-10-01
Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving much fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point positions coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE
Application of Turchin's method of statistical regularization
NASA Astrophysics Data System (ADS)
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of the Turchin's method of statistical regularization based on the Bayesian approach to the regularization strategy.
Transition from the labor market: older workers and retirement.
Peterson, Chris L; Murphy, Greg
2010-01-01
The new millennium has seen the projected growth of older populations as a source of many problems, not the least of which is how to sustain this increasingly aging population. Some decades ago, early retirement from work posed few problems for governments, but most nations are now trying to ensure that workers remain in the workforce longer. In this context, the role played by older employees can be affected by at least two factors: their productivity (or perceived productivity) and their acceptance by younger workers and management. If the goal of maintaining employees into older age is to be achieved and sustained, opportunities must be provided, for example, for more flexible work arrangements and more possibilities to pursue bridge employment (work after formal retirement). The retirement experience varies, depending on people's circumstances. Some people, for example, have retirement forced upon them by illness or injury at work, by ill-health (such as chronic illnesses), or by downsizing and associated redundancies. This article focuses on the problems and opportunities associated with working to an older age or leaving the workforce early, particularly due to factors beyond one's control.
Martin, Graeme; Beech, Nic; MacIntosh, Robert; Bushfield, Stacey
2015-01-01
The discourse of leaderism in health care has been a subject of much academic and practical debate. Recently, distributed leadership (DL) has been adopted as a key strand of policy in the UK National Health Service (NHS). However, there is some confusion over the meaning of DL and uncertainty over its application to clinical and non-clinical staff. This article examines the potential for DL in the NHS by drawing on qualitative data from three co-located health-care organisations that embraced DL as part of their organisational strategy. Recent theorising positions DL as a hybrid model combining focused and dispersed leadership; however, our data raise important challenges for policymakers and senior managers who are implementing such a leadership policy. We show that there are three distinct forms of disconnect and that these pose a significant problem for DL. However, we argue that instead of these disconnects posing a significant problem for the discourse of leaderism, they enable a fantasy of leadership that draws on and supports the discourse. © 2014 The Authors. Sociology of Health & Illness © 2014 Foundation for the Sociology of Health & Illness/John Wiley & Sons Ltd.
Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology
NASA Astrophysics Data System (ADS)
Barker, T.; Schaeffer, D. G.; Shearer, M.; Gray, J. M. N. T.
2017-05-01
Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities.
Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology
Schaeffer, D. G.; Shearer, M.; Gray, J. M. N. T.
2017-01-01
Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities. PMID:28588402
Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology.
Barker, T; Schaeffer, D G; Shearer, M; Gray, J M N T
2017-05-01
Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities.
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-Memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of ℓ1 regularization and of prior model information obtained from sonic logs and geological knowledge, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has a strong anti-noise ability.
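The core of OWL-QN is to replace the gradient of the smooth misfit with an orthant-wise pseudo-gradient of the ℓ1-regularized objective. The sketch below shows one common way this pseudo-gradient is formed; it is plain NumPy with illustrative names, not the authors' FWI code.

```python
import numpy as np

def owlqn_pseudo_gradient(x, grad_smooth, lam):
    """Pseudo-gradient of f(x) + lam * ||x||_1 used by OWL-QN.

    x            current model (e.g. a flattened velocity perturbation)
    grad_smooth  gradient of the smooth misfit f at x
    lam          l1 regularization weight
    """
    pg = np.zeros_like(x)
    nonzero = x != 0
    # where x_i != 0 the l1 term is differentiable
    pg[nonzero] = grad_smooth[nonzero] + lam * np.sign(x[nonzero])
    # where x_i == 0 take the one-sided derivative that points downhill, if any
    zero = ~nonzero
    right = grad_smooth[zero] + lam   # derivative moving into x_i > 0
    left = grad_smooth[zero] - lam    # derivative moving into x_i < 0
    pg_zero = np.zeros(zero.sum())
    pg_zero[right < 0] = right[right < 0]
    pg_zero[left > 0] = left[left > 0]
    pg[zero] = pg_zero
    return pg
```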
Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl
2007-02-01
Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
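For a linear system Ax = b, the filter-factor representation of Tikhonov regularization follows directly from the SVD. The sketch below is a generic NumPy illustration of that representation (the operator, data and λ are placeholders), not the classifier used in the paper.

```python
import numpy as np

def tikhonov_filter_solution(A, b, lam):
    """Regularized solution expressed through SVD filter factors.

    x_reg = sum_i f_i * (u_i . b) / s_i * v_i, with f_i = s_i^2 / (s_i^2 + lam^2).
    Small singular values get f_i ~ 0, damping the noise-dominated directions
    that make the discrimination problem ill-posed.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)            # filter factors
    coeffs = f * (U.T @ b) / s
    return Vt.T @ coeffs, f
```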
Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.
Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti
2006-02-01
Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of the high radiation dose, low resolution or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. (2003). The prior model for dental structures consists of a weighted ℓ1- and total variation (TV)-prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as reference for the proposed method.
Load identification approach based on basis pursuit denoising algorithm
NASA Astrophysics Data System (ADS)
Ginsberg, D.; Ruby, M.; Fritzen, C. P.
2015-07-01
The information of the external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution this knowledge is used to develop a more suitable force reconstruction method, which allows identifying the time history and the force location simultaneously while employing significantly fewer sensors compared to other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads based on noisy structural measurement signals is demonstrated by considering two frequently occurring loading conditions: harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out and thereafter an experimental investigation of a real beam is performed.
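BPDN solves min ½‖Ax − b‖² + λ‖x‖₁. The minimal iterative shrinkage-thresholding (ISTA) sketch below is shown only as a stand-in for the BPDN solver used in the study; the matrix of impulse-response functions, the measured response and λ are assumed inputs.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bpdn_ista(A, b, lam, n_iter=500):
    """Basis pursuit denoising, min 0.5*||Ax - b||^2 + lam*||x||_1, via ISTA.

    A: matrix of impulse-response functions (columns = candidate load
       locations / time shifts); b: measured structural response.
    """
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```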
Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.
Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M
2013-01-01
Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Because of this ill-posedness, solutions to the problem are non-unique, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials is sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed from this method are compared to those from existing methods, and validated with actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when pursuing sparsity of the reconstructed signals in the wavelet domain.
Validating an artificial intelligence human proximity operations system with test cases
NASA Astrophysics Data System (ADS)
Huber, Justin; Straub, Jeremy
2013-05-01
An artificial intelligence-controlled robot (AICR) operating in close proximity to humans poses risk to these humans. Validating the performance of an AICR is an ill posed problem, due to the complexity introduced by the erratic (noncomputer) actors. In order to prove the AICR's usefulness, test cases must be generated to simulate the actions of these actors. This paper discusses AICR's performance validation in the context of a common human activity, moving through a crowded corridor, using test cases created by an AI use case producer. This test is a two-dimensional simplification relevant to autonomous UAV navigation in the national airspace.
Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution
2015-06-08
Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. ... sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of an iteration which consists in alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero ...
Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation
Barbero, Sergio; Thibos, Larry N.
2007-01-01
Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. However, in practice experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302
A practical method to assess model sensitivity and parameter uncertainty in C cycle models
NASA Astrophysics Data System (ADS)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2015-04-01
The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists in finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) there exists a solution, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed problem, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties. While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allows us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
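A hedged NumPy sketch of how the resolution matrices mentioned above diagnose ill-posedness for a linearized problem y = Hx; the Jacobian below is arbitrary and illustrative (two parameters with identical signatures), not the DALEC model.

```python
import numpy as np

# Diagnosing ill-posedness with the model resolution matrix R = pinv(H) @ H.
# R = I means every parameter is resolved by the data; diagonal entries well
# below 1 flag parameter combinations the observations cannot separate
# (e.g. turnover rates of slow pools). H is an illustrative Jacobian only.

rng = np.random.default_rng(0)
H = rng.standard_normal((50, 10))
H[:, 9] = H[:, 0]                                   # two indistinguishable parameters

s = np.linalg.svd(H, compute_uv=False)
print("condition number:", s[0] / s[-1])            # effectively infinite here

R = np.linalg.pinv(H, rcond=1e-10) @ H              # model resolution matrix
print("diag(R):", np.round(np.diag(R), 2))          # ~0.5 for the two mixed parameters
```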
Treatment of Nuclear Data Covariance Information in Sample Generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William
This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
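One common remedy, shown here only as an illustrative sketch and not necessarily the method of the report, is to eigendecompose the covariance, clip negative or near-zero eigenvalues, and draw samples through the resulting factor.

```python
import numpy as np

def sample_correlated(mean, cov, n_samples, eig_floor=1e-12, seed=0):
    """Draw samples from N(mean, cov) when cov is ill-conditioned or singular.

    Eigenvalues below `eig_floor` (including small negative values that arise
    from round-off in evaluated covariance data) are clipped to zero before
    building the sampling transform.
    """
    rng = np.random.default_rng(seed)
    w, V = np.linalg.eigh(cov)              # cov is symmetric
    w = np.clip(w, 0.0, None)
    w[w < eig_floor] = 0.0
    L = V @ np.diag(np.sqrt(w))             # cov ~= L @ L.T
    z = rng.standard_normal((n_samples, len(mean)))
    return mean + z @ L.T
```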
NASA Astrophysics Data System (ADS)
Dzuba, Sergei A.
2016-08-01
The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow for the determination of their distance distribution function, P(r). P(r) is derived as the solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, at which numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage of fulfilling the non-negativity constraint for P(r). Regularization by increasing the distance discretization length may, for the case of overlapping broad and narrow distributions, be employed selectively, with this length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
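A hedged sketch of the discretized first-kind Fredholm inversion, using non-negative least squares as a simple stand-in for the Monte Carlo trials described above; the kernel, grids and step `dr` are assumed inputs, and coarsening `dr` plays the regularizing role discussed in the abstract.

```python
import numpy as np
from scipy.optimize import nnls

def invert_fredholm(kernel, t, signal, r_min, r_max, dr):
    """Recover a non-negative distance distribution P(r) from
    V(t) = sum_r kernel(t, r) * P(r) * dr at discretization step `dr`.

    Increasing `dr` (a coarser grid) acts as the regularization discussed
    above: fewer unknowns stabilize the first-kind Fredholm inversion.
    `kernel` is assumed to be a callable kernel(t, r); NNLS enforces P(r) >= 0.
    """
    r = np.arange(r_min, r_max, dr)
    K = kernel(t[:, None], r[None, :]) * dr     # discretized integral operator
    P, residual = nnls(K, signal)
    return r, P, residual
```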
Munir, Fehmidah; Yarker, Joanna; Haslam, Cheryl
2008-01-01
To investigate the organizational perspectives on the effectiveness of their attendance management policies for chronically ill employees. A mixed-method approach was employed involving a questionnaire survey with employees and in-depth interviews with key stakeholders of the organizational policies. Participants reported that attendance management policies, and the point at which systems were triggered, posed problems for employees managing chronic illness. These systems presented risks to health: employees were more likely to turn up for work despite feeling unwell (presenteeism) to avoid a disciplinary situation, but absence-related support was only provided once illness progressed to long-term sick leave. Attendance management policies also raised ethical concerns about 'forced' illness disclosure and placed immense pressure on line managers to manage attendance. Participants felt their current attendance management policies were unfavourable toward those managing a chronic illness. The policies focused heavily on attendance despite illness and on providing return-to-work support following long-term sick leave. Drawing on the results, the authors conclude that attendance management should promote job retention rather than merely prevent absence per se. They outline areas of improvement in the attendance management of employees with chronic illness.
Neutrino tomography - Tevatron mapping versus the neutrino sky. [for X-rays of earth interior
NASA Technical Reports Server (NTRS)
Wilson, T. L.
1984-01-01
The feasibility of neutrino tomography of the earth's interior is discussed, taking the 80-GeV W-boson mass determined by Arnison (1983) and Banner (1983) into account. The opacity of earth zones is calculated on the basis of the preliminary reference earth model of Dziewonski and Anderson (1981), and the results are presented in tables and graphs. Proposed tomography schemes are evaluated in terms of the well-posedness of the inverse-Radon-transform problems involved, the neutrino generators and detectors required, and practical and economic factors. The ill-posed schemes are shown to be infeasible; the well-posed schemes (using Tevatrons or the neutrino sky as sources) are considered feasible but impractical.
Finell, Eerika; Seppälä, Tuija; Suoninen, Eero
2018-07-01
Suffering from a contested illness poses a serious threat to one's identity. We analyzed the rhetorical identity management strategies respondents used when depicting their health problems and lives in the context of observed or suspected indoor air (IA) problems in the workplace. The data consisted of essays collected by the Finnish Literature Society. We used discourse-oriented methods to interpret a variety of language uses in the construction of identity strategies. Six strategies were identified: respondents described themselves as normal and good citizens with strong characters, and as IA sufferers who received acknowledgement from others, offered positive meanings to their in-group, and demanded recognition. These identity strategies were located on two continua: (a) individual- and collective-level strategies and (b) dissolved and emphasized (sub)category boundaries. The practical conclusion is that professionals should be aware of these complex coping strategies when aiming to interact effectively with people suffering from contested illnesses.
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
2017-07-10
We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy
We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints which facilitates the variational approach.
Estimation of the parameters of disturbances on long-range radio-communication paths
NASA Astrophysics Data System (ADS)
Gerasimov, Iu. S.; Gordeev, V. A.; Kristal, V. S.
1982-09-01
Radio propagation on long-range paths is disturbed by such phenomena as ionospheric density fluctuations, meteor trails, and the Faraday effect. In the present paper, the determination of the characteristics of such disturbances on the basis of received-signal parameters is considered as an inverse and ill-posed problem. A method for investigating the indeterminacy which arises in such determinations is proposed, and a quantitative analysis of this indeterminacy is made.
Spotted star mapping by light curve inversion: Tests and application to HD 12545
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.
2013-06-01
A code for mapping the surfaces of spotted stars is developed. The concept of the code is to analyze rotationally modulated light curves. We simulate the reconstruction process for the stellar surface and present the results of the simulation. The reconstruction artifacts caused by the ill-posed nature of the problem are deduced. The surface of the spotted component of the system HD 12545 is mapped using the procedure.
Dai, W W; Marsili, P M; Martinez, E; Morucci, J P
1994-05-01
This paper presents a new version of the layer stripping algorithm, in the sense that it works essentially by repeatedly stripping away the outermost layer of the medium after having determined the conductivity value in this layer. In order to stabilize the ill-posed boundary value problem related to each layer, we base our algorithm on the Hilbert uniqueness method (HUM) and implement it with the boundary element method (BEM).
Ill-posedness in modeling mixed sediment river morphodynamics
NASA Astrophysics Data System (ADS)
Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid
2018-04-01
In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics concerning its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these we show that the ill-posed domain is larger than what was found in previous analyses, not only comprising cases of bed degradation into a substrate finer than the active layer but also in aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment for which we show that ill-posedness occurs in a wider range of conditions than the active layer model.
A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics
NASA Astrophysics Data System (ADS)
Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.
2016-02-01
The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.
NASA Astrophysics Data System (ADS)
Atzberger, C.
2013-12-01
The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. The contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results compared to the traditional pixel-based inversion.
[Figure: Principle of the ill-posed inverse problem and the proposed solution, illustrated in the red-nIR feature space using PROSAIL. (A) A spectral "soil trajectory" for one leaf angle (ALA) and one soil brightness (αsoil) as LAI varies from 0 to 10; (B) soil trajectories for five soil brightness values and three leaf angles; (C) the ill-posed inverse problem: different ALA × αsoil combinations yield an identical crossing point; (D) the object-based RTM inversion, in which a single soil trajectory fits all nine pixels within a gliding 3×3 window, assuming soil brightness varies negligibly over short distances.]
[Figure: Ground-measured vs. retrieved LAI values for three crops; left: proposed object-based approach, right: pixel-based inversion.]
[Multidisciplinary approach in public health research. The example of accidents and safety at work].
Lert, F; Thebaud, A; Dassa, S; Goldberg, M
1982-01-01
This article critically analyses the various scientific approaches taken to industrial accidents, particularly in epidemiology, ergonomics and sociology, by attempting to outline the epistemological limitations of each respective field. An occupational accident is by its very nature not only a physical injury but also an economic, social and legal phenomenon, which, more so than illness, enables us to examine the problems posed by the need for a multidisciplinary approach in public health research.
Controlled wavelet domain sparsity for x-ray tomography
NASA Astrophysics Data System (ADS)
Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli
2018-01-01
Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. With the primal-dual fixed point algorithm, the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter …
Model-based elastography: a survey of approaches to the inverse elasticity problem
Doyley, M M
2012-01-01
Elastography is emerging as an imaging modality that can distinguish normal from diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas — quasi-static, harmonic, and transient — and describes inversion schemes for each elastographic imaging approach. Approaches include: first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper's objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839
Sinc-Galerkin estimation of diffusivity in parabolic problems
NASA Technical Reports Server (NTRS)
Smith, Ralph C.; Bowers, Kenneth L.
1991-01-01
A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
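The L-curve idea referred to above can be sketched generically: sweep the regularization parameter, record residual and solution norms, and look for the corner when both are plotted on log scales. The operator, data and λ grid below are placeholders, not the Sinc-Galerkin discretization.

```python
import numpy as np

def l_curve(A, b, lambdas):
    """Residual and solution norms of Tikhonov solutions over a lambda grid.

    Plotting log(res) against log(sol) traces the L-curve; the lambda at the
    corner of the 'L' is taken as the approximate regularization parameter.
    A and b are a generic discretized forward operator and data vector.
    """
    res_norms, sol_norms = [], []
    n = A.shape[1]
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
        res_norms.append(np.linalg.norm(A @ x - b))
        sol_norms.append(np.linalg.norm(x))
    return np.array(res_norms), np.array(sol_norms)
```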
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods mainly decreases because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough, such that the problem is not ill-posed anymore. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, J.G.
2011-07-01
A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It will be proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: The linear least-squares adjustment formula that has been used in the past is valid in the case of a singular covariance matrix of prior parameters. Furthermore, it provides a unique solution. Statements in the literature, to the effect that the problem is ill-posed, are wrong. No regularization of the problem is required. This has been proved in the present paper by two methods, while explicitly assuming that the covariance matrix of prior parameters is singular: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. No modification is needed to the adjustment formulae that have been used in the past. (author)
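For context, the usual generalized least-squares adjustment update inverts only the combination G C Gᵀ + V, so a singular prior covariance C causes no difficulty, consistent with the report's conclusion. The sketch below is a generic illustration with assumed names (sensitivity matrix G, prior covariance C, data covariance V), not the report's derivation.

```python
import numpy as np

def glls_adjust(x0, C, G, y, V):
    """Generalized least-squares adjustment of prior parameters x0.

    x_adj = x0 + C G^T (G C G^T + V)^{-1} (y - G x0)
    Only G C G^T + V is inverted, so C may be singular. G maps parameters
    to responses, y are measured responses with covariance V.
    """
    S = G @ C @ G.T + V
    gain = C @ G.T @ np.linalg.inv(S)
    x_adj = x0 + gain @ (y - G @ x0)
    C_adj = C - gain @ G @ C          # adjusted parameter covariance
    return x_adj, C_adj
```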
NASA Astrophysics Data System (ADS)
Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba
2018-10-01
This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is drastically ill-posed owing to the unbounded amplification of the high-frequency components. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, some new stable estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.
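The severe ill-posedness is easiest to see in Fourier space: inverting the heat semigroup multiplies each mode by exp(a k² T), which explodes at high frequency unless those modes are suppressed. The crude frequency cut-off below is only a stand-in for the Meyer-wavelet regularization of the paper; grid, diffusivity and cut-off are illustrative assumptions.

```python
import numpy as np

n, L_dom, a, T = 256, 2 * np.pi, 1.0, 0.1
x = np.linspace(0.0, L_dom, n, endpoint=False)
k = np.fft.fftfreq(n, d=L_dom / n) * 2 * np.pi    # angular wavenumbers

u0 = np.exp(-10 * (x - np.pi) ** 2)               # initial temperature
uT = np.fft.ifft(np.fft.fft(u0) * np.exp(-a * k**2 * T)).real
uT_noisy = uT + 1e-4 * np.random.randn(n)         # measured final state

amplifier = np.exp(a * k**2 * T)                  # grows without bound in |k|
cutoff = np.abs(k) <= 20                          # regularization: drop high modes
u0_rec = np.fft.ifft(np.fft.fft(uT_noisy) * amplifier * cutoff).real
```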
[Ethical questions related to nutrition and hydration: basic aspects].
Collazo Chao, E; Girela, E
2011-01-01
Conditions that pose ethical problems related to nutrition and hydration are very common nowadays, particularly within hospitals, among terminally ill patients and other patients who require nutrition and hydration. In this article we intend to analyze some circumstances, according to widely accepted ethical values, in order to outline a clear action model to help clinicians in making such difficult decisions. The problematic situations analyzed include whether hydration and nutrition should be considered basic care or therapeutic measures, and the ethical aspects of enteral versus parenteral nutrition.
Evaluation of global equal-area mass grid solutions from GRACE
NASA Astrophysics Data System (ADS)
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron
2015-04-01
The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.
A kinetic study of jack-bean urease denaturation by a new dithiocarbamate bismuth compound
NASA Astrophysics Data System (ADS)
Menezes, D. C.; Borges, E.; Torres, M. F.; Braga, J. P.
2012-10-01
A kinetic study concerning the enzymatic inhibitory effect of a new bismuth dithiocarbamate complex on jack-bean urease is reported. A neural network approach is used to solve the ill-posed inverse problem arising from the numerical treatment of the subject. A reaction mechanism for the urease denaturation process is proposed, and the rate constants, relaxation time constants, equilibrium constants, activation Gibbs free energies for each reaction step and Gibbs free energies for the transition species are determined.
2011-12-15
... the measured porosity values can be taken as equivalent to effective porosity values for this aquifer with the risk of only very limited overestimation ... information to constrain/control an increasingly ill-posed problem, and (3) risk estimation of a model with more heterogeneity than is needed to explain ...
NASA Astrophysics Data System (ADS)
Delbary, Fabrice; Aramini, Riccardo; Bozza, Giovanni; Brignone, Massimo; Piana, Michele
2008-11-01
Microwave tomography is a non-invasive approach to the early diagnosis of breast cancer. However, the problem of visualizing tumors from diffracted microwaves is a difficult nonlinear ill-posed inverse scattering problem. We propose a qualitative approach to the solution of such a problem, whereby the shape and location of cancerous tissues can be detected by means of a combination of the Reciprocity Gap Functional method and the Linear Sampling method. We validate this approach on synthetic near-fields produced by a finite element method for boundary integral equations, where the breast is mimicked by the axial view of two nested cylinders, the external one representing the skin and the internal one representing the fat tissue.
Stuart, Heather
2004-01-01
This paper addresses what is known about workplace stigma and employment inequity for people with mental and emotional problems. For people with serious mental disorders, studies show profound consequences of stigma, including diminished employability, lack of career advancement and poor quality of working life. People with serious mental illnesses are more likely to be unemployed or to be under-employed in inferior positions that are incommensurate with their skills or training. If they return to work following an illness, they often face hostility and reduced responsibilities. The result may be self-stigma and increased disability. Little is yet known about how workplace stigma affects those with less disabling psychological or emotional problems, even though these are likely to be more prevalent in workplace settings. Despite the heavy burden posed by poor mental health in the workplace, there is no regular source of population data relating to workplace stigma, and no evidence base to support the development of best-practice solutions for workplace anti-stigma programs. Suggestions for research are made in light of these gaps.
Silver, Eric; Wolff, Nancy
2010-01-01
The problems posed by persons with mental illness involved with the criminal justice system are vexing ones that have received attention at the local, state and national levels. The conceptual model currently guiding research and social action around these problems is shaped by the “criminalization” perspective and the associated belief that reconnecting individuals with mental health services will by itself reduce risk for arrest. This paper argues that such efforts are necessary but possibly not sufficient to achieve that reduction. Arguing for the need to develop a services research framework that identifies a broader range of risk factors for arrest, we describe three potentially useful criminological frameworks—the “life course,” “local life circumstances” and “routine activities” perspectives. Their utility as platforms for research in a population of persons with mental illness is discussed and suggestions are provided with regard to how services research guided by these perspectives might inform the development of community-based services aimed at reducing risk of arrest. PMID:16791518
Gilgen, D; Maeusezahl, D; Salis Gross, C; Battegay, E; Flubacher, P; Tanner, M; Weiss, M G; Hatz, C
2005-09-01
Migration, particularly among refugees and asylum seekers, poses many challenges to the health system of host countries. This study examined the impact of migration history on illness experience, its meaning and help-seeking strategies of migrant patients from Bosnia and Turkey with a range of common health problems in general practice in Basel, Switzerland. The Explanatory Model Interview Catalogue, a data collection instrument for cross-cultural research which combines epidemiological and ethnographic research approaches, was used in semi-structured one-to-one patient interviews. Bosnian patients (n=36), who had more traumatic migration experiences than Turkish/Kurdish (n=62) or Swiss internal migrants (n=48), reported a larger number of health problems than the other groups. Psychological distress was reported most frequently by all three groups in response to focussed queries, but spontaneously reported symptoms indicated the prominence of somatic, rather than psychological or psychosocial, problems. Among Bosnians, 78% identified traumatic migration experiences as a cause of their illness, in addition to a range of psychological and biomedical causes. Help-seeking strategies for the current illness included a wide range of treatments, such as basic medical care at private surgeries and outpatient departments in hospitals, as well as alternative medical treatments, among all groups. Findings provide a useful guide to clinicians who work with migrants and should inform policy in medical care, information and health promotion for migrants in Switzerland as well as further education of health professionals on issues concerning migrants' health.
An ill-posed parabolic evolution system for dispersive deoxygenation-reaeration in water
NASA Astrophysics Data System (ADS)
Azaïez, M.; Ben Belgacem, F.; Hecht, F.; Le Bot, C.
2014-01-01
We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most physically relevant models used by engineers derive from various additions and corrections to enhance the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The one we deal with includes Taylor's dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. The particularity is that both Neumann and Dirichlet boundary conditions are available on the DO tracer while the BOD density is free of any conditions. In fact, for real-life concerns, measurements on the DO are easy to obtain and to save. On the contrary, collecting data on the BOD is a sensitive task and turns out to be a lengthy process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem plainly worth studying for its own interest but it could also be a mandatory step in other applications such as the identification of the location of pollution sources. The non-standard boundary conditions generate two difficulties on mathematical and computational grounds. They set up a severe coupling between both equations and they are the cause of the ill-posed data reconstruction problem. Existence and stability fail. Identifiability is therefore the only positive result one can search for; it is the central purpose of the paper. Finally, we have performed some computational experiments to assess the capability of the mixed finite element method in missing data recovery.
Local search heuristic for the discrete leader-follower problem with multiple follower objectives
NASA Astrophysics Data System (ADS)
Kochetov, Yury; Alekseeva, Ekaterina; Mezmaz, Mohand
2016-10-01
We study a discrete bilevel problem, also known as the leader-follower problem, with multiple objectives at the lower level. It is assumed that constraints at the upper level can include variables of both levels. For such an ill-posed problem we define feasible and optimal solutions for the pessimistic case. A central point of this work is a two-stage method to obtain a feasible solution in the pessimistic case, given a leader decision. The target of the first stage is a follower solution that violates the leader constraints. The target of the second stage is a pessimistic feasible solution. Each stage calls a heuristic and a solver for a series of particular mixed integer programs. The method is integrated inside a local-search-based heuristic that is designed to find near-optimal leader solutions.
Inverse random source scattering for the Helmholtz equation in inhomogeneous media
NASA Astrophysics Data System (ADS)
Li, Ming; Chen, Chuchu; Li, Peijun
2018-01-01
This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
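For reference, the classical Kaczmarz iteration on which block variants build sweeps the rows of the discretized integral equation and projects onto one hyperplane at a time; early stopping then acts as regularization. The sketch below is this scalar-row version only, not the regularized block method of the paper.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0, x0=None):
    """Classical (relaxed) Kaczmarz iteration for A x = b.

    Each step projects the iterate onto the hyperplane defined by one row of A;
    cycling through the rows gives a simple regularizing iteration for
    ill-posed discretized integral equations, with early stopping playing the
    role of the regularization parameter.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(n_sweeps):
        for i in range(m):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```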
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the ℓ1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing
2013-09-15
For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can protect high-frequency information such as edges while effectively reducing the image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm based on nonlinear conjugate gradient with a restart strategy is proposed to increase the computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and a high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
Space structures insulating material's thermophysical and radiation properties estimation
NASA Astrophysics Data System (ADS)
Nenarokomov, A. V.; Alifanov, O. M.; Titov, D. M.
2007-11-01
In many practical situations in aerospace technology it is impossible to directly measure such properties of the materials under analysis (for example, composites) as their thermal and radiation characteristics. The only way that can often be used to overcome these difficulties is indirect measurement. This type of measurement is usually formulated as the solution of an inverse heat transfer problem. Such problems are ill-posed in the mathematical sense, and their main feature shows itself in solution instabilities. That is why special regularizing methods are needed to solve them. Experimental identification of mathematical models of heat transfer based on solving inverse problems is one of the most effective modern approaches. The objective of this paper is to estimate thermal and radiation properties of advanced materials using an approach based on inverse methods.
Fractional-order TV-L2 model for image denoising
NASA Astrophysics Data System (ADS)
Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu
2013-10-01
This paper proposes a new fractional-order total variation (TV) denoising method, which provides a much more elegant and effective way of treating the problems of algorithm implementation, the ill-posed inverse, regularization-parameter selection and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.
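A hedged sketch of the MM idea for ordinary (integer-order) TV-L2 denoising in 1-D: each majorization step yields a weighted quadratic problem whose linear system is solved by conjugate gradients. The fractional-order difference operator of the paper would replace the first-difference matrix used here; all parameter values are illustrative.

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

def tv_denoise_mm(y, lam, n_mm=20, eps=1e-8):
    """1-D TV-L2 denoising, min 0.5*||x - y||^2 + lam*TV(x), via MM + CG.

    At each MM step |Dx| is majorized by a quadratic with weights 1/|Dx_k|,
    giving the linear system (I + lam * D^T W D) x = y, solved by CG.
    """
    n = len(y)
    D = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
    x = y.copy()
    for _ in range(n_mm):
        w = 1.0 / (np.abs(D @ x) + eps)      # majorization weights
        A = identity(n) + lam * (D.T @ diags(w) @ D)
        x, _ = cg(A, y, x0=x)
    return x
```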
Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann
2011-11-01
Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation allows this large-scale inverse problem to be solved in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the non-linear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.
Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function
NASA Astrophysics Data System (ADS)
Gao, Fei; Chang, Lei; Liu, Yu-xin
2017-07-01
We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function as well as the corresponding light-front parton distribution amplitude (PDA) can be well determined. We confirm prior work on PDA computations, which was based on different methods.
Chopping Time of the FPU {α }-Model
NASA Astrophysics Data System (ADS)
Carati, A.; Ponno, A.
2018-03-01
We study, both numerically and analytically, the time needed to observe the breaking of an FPU α -chain in two or more pieces, starting from an unbroken configuration at a given temperature. It is found that such a "chopping" time is given by a formula that, at low temperatures, is of the Arrhenius-Kramers form, so that the chain does not break up on an observable time-scale. The result explains why the study of the FPU problem is meaningful also in the ill-posed case of the α -model.
A Toolbox for Imaging Stellar Surfaces
NASA Astrophysics Data System (ADS)
Young, John
2018-04-01
In this talk I will review the available algorithms for synthesis imaging at visible and infrared wavelengths, including both gray and polychromatic methods. I will explain state-of-the-art approaches to constraining the ill-posed image reconstruction problem, and selecting an appropriate regularisation function and strength of regularisation. The reconstruction biases that can follow from non-optimal choices will be discussed, including their potential impact on the physical interpretation of the results. This discussion will be illustrated with example stellar surface imaging results from real VLTI and COAST datasets.
Boisvert, R F; Donahue, M J; Lozier, D W; McMichael, R; Rust, B W
2001-01-01
In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST's current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, characterization of the accuracy of software for micromagnetic modeling, and the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years.
Computing motion using resistive networks
NASA Technical Reports Server (NTRS)
Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James
1988-01-01
Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
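As a rough illustration of the cost-function/network analogy (a hypothetical 1D chain, not the paper's 2D optical-flow circuit), the Python sketch below solves the Kirchhoff node equations of a resistive ladder: conductances to the data values play the role of current injection, neighbor conductances implement the smoothness term, and the stationary node voltages are the regularized estimate.

import numpy as np

def resistive_chain_solve(d, g, s):
    # Minimize sum_i g_i*(v_i - d_i)^2 + s*sum_i (v_{i+1} - v_i)^2.
    # The stationarity conditions are the node (Kirchhoff) equations of a
    # resistive chain; solving them gives the stationary voltages v.
    n = len(d)
    A = np.zeros((n, n))
    b = g * d                                  # injected currents
    for i in range(n):
        A[i, i] += g[i]                        # conductance from node i to its data value
        if i > 0:
            A[i, i] += s
            A[i, i - 1] -= s                   # coupling to the left neighbor
        if i < n - 1:
            A[i, i] += s
            A[i, i + 1] -= s                   # coupling to the right neighbor
    return np.linalg.solve(A, b)

# sparse, noisy "measurements" interpolated by the network's smoothness term
d = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0])
g = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0])   # data confidence (conductance)
v = resistive_chain_solve(d, g, s=0.5)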
Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.
2005-01-01
This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.
An ambiguity of information content and error in an ill-posed satellite inversion
NASA Astrophysics Data System (ADS)
Koner, Prabhat
According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content of a stochastic inversion. In the deterministic approach, the same object is referred to as the model resolution matrix (MRM, Menke 1989). Analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degrees of freedom for signal (DFS; stochastic) or degrees of freedom in retrieval (DFR; deterministic). The literature offers no physical or mathematical explanation of why the trace of this matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES-13. The stochastic information content calculation is based on a linear assumption; its validity for satellite inversion will be questioned, because the underlying radiative transfer is nonlinear and the inverse problem is ill-conditioned. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C.D., 2000: Inverse Methods for Atmospheric Soundings: Theory and Practice. Singapore: World Scientific.
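For readers who want to reproduce the quantity under discussion, a minimal Python sketch of the standard linear-Gaussian formulas (Rodgers 2000) follows; the Jacobian and covariance matrices are random stand-ins, not GOES-13 values.

import numpy as np

def averaging_kernel_and_dfs(K, Se, Sa):
    # Linear-Gaussian retrieval: gain G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1,
    # averaging kernel (model resolution matrix) A = G K, and the debated
    # scalar is its trace, trace(A) = DFS (stochastic) or DFR (deterministic).
    Se_inv = np.linalg.inv(Se)
    G = np.linalg.solve(K.T @ Se_inv @ K + np.linalg.inv(Sa), K.T @ Se_inv)
    A = G @ K
    return A, np.trace(A)

# toy example: 3 channels, 4 state elements (under-determined, hence regularized)
rng = np.random.default_rng(1)
K = rng.standard_normal((3, 4))
A, dfs = averaging_kernel_and_dfs(K, Se=0.1 * np.eye(3), Sa=np.eye(4))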
ERIC Educational Resources Information Center
Limin, Chen; Van Dooren, Wim; Verschaffel, Lieven
2013-01-01
The goal of the present study is to investigate the relationship between pupils' problem posing and problem solving abilities, their beliefs about problem posing and problem solving, and their general mathematics abilities, in a Chinese context. Five instruments, i.e., a problem posing test, a problem solving test, a problem posing questionnaire,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jonsson, Jacob C.; Branden, Henrik
2006-10-19
This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge about the illuminated area of the sample and the geometry of the sphere port, in combination with the measured data, leads to a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by using Tikhonov regularization on the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterised using both two goniophotometers and the described method to verify the validity of the new method. The agreement is very good for the more diffuse samples. The solution for the low-scattering samples contains unphysical oscillations, but still gives the correct shape of the solution. The origin of the oscillations and why they are more prominent in low-scattering samples are discussed.
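The regularization step described above can be sketched generically in Python as follows; the matrix is an arbitrary ill-conditioned example, not the actual system assembled from the sphere-port geometry and illuminated area.

import numpy as np

def tikhonov_solve(A, b, alpha):
    # Tikhonov-regularized least squares: minimize ||A x - b||^2 + alpha*||x||^2,
    # i.e. solve the normal equations (A^T A + alpha I) x = A^T b.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# illustrative ill-conditioned kernel and noisy data
A = np.vander(np.linspace(0.0, 1.0, 20), 10, increasing=True)
x_true = np.ones(10)
b = A @ x_true + 1e-3 * np.random.default_rng(2).standard_normal(20)
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]   # typically oscillatory / unphysical
x_reg = tikhonov_solve(A, b, alpha=1e-4)         # damped, physically more plausible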
Lassa fever: the challenges of curtailing a deadly disease.
Ibekwe, Titus
2012-01-01
Today Lassa fever is mainly a disease of the developing world; however, several imported cases have been reported in different parts of the world and there are growing concerns about the potential of Lassa fever virus as a biological weapon. Yet no tangible solution to this problem has been developed nearly half a century after its identification. Hence, this paper is aimed at appraising the problems associated with LAF illness, the challenges in curbing the epidemic, and recommendations on important focal points. A review based on the documents from the EFAS conference 2011 and a literature search on PubMed, Scopus and ScienceDirect. The retrieval of relevant papers was via the University of British Columbia and University of Toronto libraries. The two major search engines returned 61 and 920 articles respectively. Out of these, the final 26 articles that met the criteria were selected. Relevant information on epidemiology, burden of management and control was obtained. Prompt and effective containment of the Lassa fever disease in Lassa village four decades ago could have saved the West African sub-region and indeed the entire globe from the devastating effect and threats posed by this illness. That was a hard lesson calling for much more proactive measures towards the eradication of the illness at primary, secondary and tertiary levels of health care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chvetsov, A; Sandison, G; Schwartz, J
Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an ill-posed inverse problem described by a Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for the radiobiological parameters as reference data, we showed that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. Variational regularization allowed us to obtain statistical distributions of cell surviving fractions and cell number doubling times comparable to the in vitro data. Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to the development of more advanced algorithms which take into account tumor heterogeneity, for example related to hypoxia.
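A minimal sketch of this kind of regularized fit follows, assuming a hypothetical two-exponential response model and illustrative data; scipy's dual_annealing stands in for the simulated annealing step and a simple quadratic penalty stands in for the variational regularization.

import numpy as np
from scipy.optimize import dual_annealing

# serial "imaging" time points (days) and relative tumor volumes (hypothetical)
t = np.array([0.0, 7.0, 14.0, 21.0, 28.0, 35.0])
v_obs = np.array([1.00, 0.85, 0.66, 0.52, 0.44, 0.40])

def model(p, t):
    # two-level population: a viable fraction f evolving at rate a and a
    # lethally damaged fraction (1 - f) cleared at rate b
    f, a, b = p
    return f * np.exp(a * t) + (1.0 - f) * np.exp(-b * t)

def objective(p, lam=1e-2, p_ref=(0.5, 0.0, 0.05)):
    # least-squares misfit plus a quadratic penalty toward reference values,
    # standing in for the paper's variational regularization
    resid = model(p, t) - v_obs
    return np.sum(resid**2) + lam * np.sum((np.asarray(p) - np.asarray(p_ref))**2)

bounds = [(0.0, 1.0), (-0.1, 0.1), (0.0, 0.5)]
result = dual_annealing(objective, bounds, seed=3)   # annealing-type global search
f_hat, a_hat, b_hat = result.x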
Pre-Service Teachers' Free and Structured Mathematical Problem Posing
ERIC Educational Resources Information Center
Silber, Steven; Cai, Jinfa
2017-01-01
This exploratory study examined how pre-service teachers (PSTs) pose mathematical problems for free and structured mathematical problem-posing conditions. It was hypothesized that PSTs would pose more complex mathematical problems under structured posing conditions, with increasing levels of complexity, than PSTs would pose under free posing…
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
NASA Astrophysics Data System (ADS)
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing more deeply the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography
NASA Astrophysics Data System (ADS)
Chu, Pan; Lei, Jing
2017-11-01
Electrical capacitance tomography (ECT) is considered a powerful visualization and measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and finding an efficient numerical method that improves the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves the robustness.
Regularized minimum I-divergence methods for the inverse blackbody radiation problem
NASA Astrophysics Data System (ADS)
Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin
2006-08-01
This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
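For orientation, the unpenalized minimum I-divergence iteration for a discretized non-negative linear model has the familiar multiplicative (expectation-maximization/Richardson-Lucy-type) form sketched below in Python; the temperature grid and Planck-law kernel are illustrative, and the entropy, L1 and Good's-roughness penalties with the one-step-late correction are omitted.

import numpy as np

def min_idiv_iterations(K, b, n_iter=200):
    # Unregularized minimum Csiszar I-divergence iteration for b ~ K x with
    # K, x, b >= 0: a multiplicative update that preserves non-negativity.
    x = np.ones(K.shape[1])
    col_sums = K.sum(axis=0)
    for _ in range(n_iter):
        ratio = b / np.maximum(K @ x, 1e-30)
        x = x * (K.T @ ratio) / np.maximum(col_sums, 1e-30)
    return x

# toy discretized kernel from Planck's law: power spectrum = K @ (area-temperature distribution)
T = np.linspace(300.0, 1500.0, 40)            # temperature bins (K)
nu = np.linspace(1e13, 3e14, 60)              # frequency samples (Hz)
h, c, kB = 6.626e-34, 3.0e8, 1.381e-23
K = 2 * h * nu[:, None]**3 / c**2 / np.expm1(h * nu[:, None] / (kB * T[None, :]))
a_true = np.exp(-0.5 * ((T - 900.0) / 120.0)**2)   # smooth area-temperature profile
b = K @ a_true
a_est = min_idiv_iterations(K, b)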
Creativity of Field-dependent and Field-independent Students in Posing Mathematical Problems
NASA Astrophysics Data System (ADS)
Azlina, N.; Amin, S. M.; Lukito, A.
2018-01-01
This study aims at describing the creativity of elementary school students with different cognitive styles in mathematical problem-posing. The posed problems were assessed based on three components of creativity, namely fluency, flexibility, and novelty. The free-type problem posing was used in this study. This study is descriptive research with a qualitative approach. Data were collected through a written task and task-based interviews. The subjects were two elementary school students, one of them Field Dependent (FD) and the other Field Independent (FI), as measured by the GEFT (Group Embedded Figures Test). Further, the data were analyzed based on the creativity components. The results show that the FD student's posed problems fulfilled two components of creativity, namely fluency, in which the subject posed at least 3 mathematical problems, and flexibility, in which the subject posed problems with at least 3 different categories/ideas. Meanwhile, the FI student's posed problems fulfilled all three components of creativity, namely fluency, in which the subject posed at least 3 mathematical problems, flexibility, in which the subject posed problems with at least 3 different categories/ideas, and novelty, in which the subject posed problems that are purely the result of her own ideas and different from problems they have known.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bielecki, J.; Scholz, M.; Drozdowicz, K.
A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. The aim of this work is to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that this method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction at JET.
Skill Levels of Prospective Physics Teachers on Problem Posing
ERIC Educational Resources Information Center
Cildir, Sema; Sezen, Nazan
2011-01-01
Problem posing is one of the topics which the educators thoroughly accentuate. Problem posing skill is defined as an introvert activity of a student's learning. In this study, skill levels of prospective physics teachers on problem posing were determined and their views on problem posing were evaluated. To this end, prospective teachers were given…
The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method
NASA Astrophysics Data System (ADS)
Voronina, T. A.; Romanenko, A. A.
2016-12-01
Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. This methodology is based on the inversion of remote measurements of water-level data. The wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by means of a least squares inversion using the truncated Singular Value Decomposition method. As a result of the numerical process, an r-solution is obtained. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Application of this methodology to reconstructing the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions for synthetic data and a model tsunami source: the inversion result strongly depends on the noisiness of the data and the azimuthal and temporal coverage of recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of the available recording stations used in the inversion process.
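A generic Python sketch of the truncated-SVD (r-solution) step is given below; the matrix is a random stand-in for the precomputed waveform matrix, not the output of a shallow-water simulation.

import numpy as np

def r_solution(A, b, r):
    # Truncated-SVD least squares: keep only the r largest singular values so
    # that noise amplified by the small ones is suppressed.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U[:, :r].T @ b) / s[:r]
    return Vt[:r].T @ coeffs

# usage: inspect the singular spectrum to choose r, then invert
rng = np.random.default_rng(4)
A = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -6, 20))   # rapidly decaying spectrum
b = A @ np.ones(20) + 1e-4 * rng.standard_normal(50)
print(np.linalg.svd(A, compute_uv=False))   # look for the gap / noise floor
x_r = r_solution(A, b, r=8)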
A quasi-spectral method for Cauchy problem of 2/D Laplace equation on an annulus
NASA Astrophysics Data System (ADS)
Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei
2005-01-01
Real numbers are usually represented in the computer as floating-point numbers with a finite number of hexadecimal digits. Accordingly, numerical analysis often suffers from rounding errors. Rounding errors particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effect of rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we try to show the effectiveness of multi-precision arithmetic by taking two typical examples: the Cauchy problem of the Laplace equation in two dimensions and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well in resolving those numerical solutions when it is combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
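The effect the authors exploit can be illustrated with any multi-precision package; the Python sketch below (using mpmath, not Dr Fujiwara's system) solves an ill-conditioned Hilbert system in double precision and again in 50-digit arithmetic.

import numpy as np
from mpmath import mp, matrix, lu_solve

n = 12
# double precision: the Hilbert matrix is so ill-conditioned that rounding
# errors dominate the computed solution
H64 = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b64 = H64 @ np.ones(n)
x64 = np.linalg.solve(H64, b64)

# 50 significant decimal digits: the same system solved far more accurately
mp.dps = 50
H = matrix([[mp.mpf(1) / (i + j + 1) for j in range(n)] for i in range(n)])
b = H * matrix([mp.mpf(1)] * n)
x_mp = lu_solve(H, b)

print(np.max(np.abs(x64 - 1.0)))                          # large error
print(max(abs(float(x_mp[i]) - 1.0) for i in range(n)))   # tiny error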
Fundamentals of diffusion MRI physics.
Kiselev, Valerij G
2017-03-01
Diffusion MRI is commonly considered the "engine" for probing the cellular structure of living biological tissues. The difficulty of this task is threefold. First, in structurally heterogeneous media, diffusion is related to structure in quite a complicated way. The challenge of finding diffusion metrics for a given structure is equivalent to other problems in physics that have been known for over a century. Second, in most cases the MRI signal is related to diffusion in an indirect way dependent on the measurement technique used. Third, finding the cellular structure given the MRI signal is an ill-posed inverse problem. This paper reviews well-established knowledge that forms the basis for responding to the first two challenges. The inverse problem is briefly discussed and the reader is warned about a number of pitfalls on the way. Copyright © 2017 John Wiley & Sons, Ltd.
PAN AIR modeling studies. [higher order panel method for aircraft design
NASA Technical Reports Server (NTRS)
Towne, M. C.; Strande, S. M.; Erickson, L. L.; Kroo, I. M.; Enomoto, F. Y.; Carmichael, R. L.; Mcpherson, K. F.
1983-01-01
PAN AIR is a computer program that predicts subsonic or supersonic linear potential flow about arbitrary configurations. The code's versatility and generality afford numerous possibilities for modeling flow problems. Although this generality provides great flexibility, it also means that studies are required to establish the dos and don'ts of modeling. The purpose of this paper is to describe and evaluate a variety of methods for modeling flows with PAN AIR. The areas discussed are effects of panel density, internal flow modeling, forebody modeling in subsonic flow, propeller slipstream modeling, effect of wake length, wing-tail-wake interaction, effect of trailing-edge paneling on the Kutta condition, well- and ill-posed boundary-value problems, and induced-drag calculations. These nine topics address problems that are of practical interest to the users of PAN AIR.
WASP (Write a Scientific Paper): Special cases of selective non-treatment and/or DNR.
Mallia, Pierre
2018-05-03
Fetuses at the low gestational age limit of viability, neonates with life-threatening or life-limiting congenital anomalies, and deteriorating acutely ill newborn babies in intensive care pose taxing ethical questions about whether to forgo or stop treatment and allow them to die naturally. Although there is essentially no ethical difference between end-of-life decisions for neonates and those for other children and adults, in the former the fact that we are dealing with a new life may pose greater problems for staff and parents. Good communication skills and involvement of the whole team and the parents should start from the beginning, to determine which treatment can be forgone or stopped in the best interests of the child. This article deals with the importance of clinical ethics in avoiding legal and moral showdowns and discusses accepted moral practice in this difficult area. Copyright © 2018. Published by Elsevier B.V.
Determining the Performances of Pre-Service Primary School Teachers in Problem Posing Situations
ERIC Educational Resources Information Center
Kilic, Cigdem
2013-01-01
This study examined the problem posing strategies of pre-service primary school teachers in different problem posing situations (PPSs) and analysed the issues they encounter while posing problems. A problem posing task consisting of six PPSs (two free, two structured, and two semi-structured situations) was delivered to 40 participants.…
NASA Technical Reports Server (NTRS)
Stanitz, J. D.
1985-01-01
The general design method for three-dimensional, potential, incompressible or subsonic-compressible flow developed in part 1 of this report is applied to the design of simple, unbranched ducts. A computer program, DIN3D1, is developed and five numerical examples are presented: a nozzle, two elbows, an S-duct, and the preliminary design of a side inlet for turbomachines. The two major inputs to the program are the upstream boundary shape and the lateral velocity distribution on the duct wall. As a result of these inputs, boundary conditions are overprescribed and the problem is ill posed. However, it appears that there are degrees of compatibility between these two major inputs and that, for reasonably compatible inputs, satisfactory solutions can be obtained. By not prescribing the shape of the upstream boundary, the problem presumably becomes well posed, but it is not clear how to formulate a practical design method under this circumstance. Nor does it appear desirable, because the designer usually needs to retain control over the upstream (or downstream) boundary shape. The problem is further complicated by the fact that, unlike the two-dimensional case, and irrespective of the upstream boundary shape, some prescribed lateral velocity distributions do not have proper solutions.
Multistatic aerosol-cloud lidar in space: A theoretical perspective
NASA Astrophysics Data System (ADS)
Mishchenko, M. I.; Alexandrov, M. D.; Cairns, B.; Travis, L. D.
2016-12-01
Accurate aerosol and cloud retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. In this Perspective, we formulate in general terms an aerosol and aerosol-cloud interaction space mission concept intended to provide detailed horizontal and vertical profiles of aerosol physical characteristics as well as identify mutually induced changes in the properties of aerosols and clouds. We argue that a natural and feasible way of addressing the ill-posedness of the inverse scattering problem while having an exquisite vertical-profiling capability is to fly a multistatic (including bistatic) lidar system. We analyze theoretically the capabilities of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and one or more additional platforms each hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar; address the ill-posedness of the inverse problem caused by the highly limited information content of monostatic lidar measurements; address the ill-posedness of the inverse problem caused by vertical integration and surface reflection in passive photopolarimetric measurements; relax polarization accuracy requirements; eliminate the need for exquisite radiative-transfer modeling of the atmosphere-surface system in data analyses; yield the day-and-night observation capability; provide direct characterization of ground-level aerosols as atmospheric pollutants; and yield direct measurements of polarized bidirectional surface reflectance. We demonstrate, in particular, that supplementing the conventional backscattering lidar with just one additional receiver flown in formation at a scattering angle close to 170° can dramatically increase the information content of the measurements. Although the specific subject of this Perspective is the multistatic lidar concept, all our conclusions equally apply to a multistatic radar system intended to study from space the global distribution of cloud and precipitation characteristics.
Multistatic Aerosol Cloud Lidar in Space: A Theoretical Perspective
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Alexandrov, Mikhail D.; Cairns, Brian; Travis, Larry D.
2016-01-01
Accurate aerosol and cloud retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. In this Perspective, we formulate in general terms an aerosol and aerosol-cloud interaction space mission concept intended to provide detailed horizontal and vertical profiles of aerosol physical characteristics as well as identify mutually induced changes in the properties of aerosols and clouds. We argue that a natural and feasible way of addressing the ill-posedness of the inverse scattering problem while having an exquisite vertical-profiling capability is to fly a multistatic (including bistatic) lidar system. We analyze theoretically the capabilities of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and one or more additional platforms each hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar; address the ill-posedness of the inverse problem caused by the highly limited information content of monostatic lidar measurements; address the ill-posedness of the inverse problem caused by vertical integration and surface reflection in passive photopolarimetric measurements; relax polarization accuracy requirements; eliminate the need for exquisite radiative-transfer modeling of the atmosphere-surface system in data analyses; yield the day-and-night observation capability; provide direct characterization of ground-level aerosols as atmospheric pollutants; and yield direct measurements of polarized bidirectional surface reflectance. We demonstrate, in particular, that supplementing the conventional backscattering lidar with just one additional receiver flown in formation at a scattering angle close to 170deg can dramatically increase the information content of the measurements. Although the specific subject of this Perspective is the multistatic lidar concept, all our conclusions equally apply to a multistatic radar system intended to study from space the global distribution of cloud and precipitation characteristics.
Embedding Game-Based Problem-Solving Phase into Problem-Posing System for Mathematics Learning
ERIC Educational Resources Information Center
Chang, Kuo-En; Wu, Lin-Jung; Weng, Sheng-En; Sung, Yao-Ting
2012-01-01
A problem-posing system is developed with four phases including posing problem, planning, solving problem, and looking back, in which the "solving problem" phase is implemented by game-scenarios. The system supports elementary students in the process of problem-posing, allowing them to fully engage in mathematical activities. In total, 92 fifth…
Characteristics of Problem Posing of Grade 9 Students on Geometric Tasks
ERIC Educational Resources Information Center
Chua, Puay Huat; Wong, Khoon Yoong
2012-01-01
This is an exploratory study into the individual problem-posing characteristics of 480 Grade 9 Singapore students who were novice problem posers working on two geometric tasks. The students were asked to pose a problem for their friends to solve. Analyses of solvable posed problems were based on the problem type, problem information, solution type…
NASA Astrophysics Data System (ADS)
Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem
2017-04-01
The key point in the state of the art in tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is considered as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on the least squares and truncated singular value decomposition techniques. The tsunami wave propagation is considered within the scope of the linear shallow-water theory. As in the inverse seismic problem, the numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. This method is attractive from the computational point of view since the main effort is required only once, for calculating the matrix whose columns consist of computed waveforms for each harmonic used as a source (an unknown tsunami source is represented as a series of spatial harmonics over the source area). Furthermore, by analyzing the singular spectrum of the matrix obtained in the course of the numerical calculations, one can estimate the future inversion by a given observational system, which allows a more effective disposition of the tsunameters to be proposed with the help of precomputations. In other words, the results obtained allow finding a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role of the arrangement of deep-water tsunameters in obtaining the inversion results. Implementation of the proposed methodology for the 16 September 2015 Chile tsunami has successfully produced a tsunami source model. The function recovered by the proposed method can find practical applications both as an initial condition for various optimization approaches and for computer calculation of the tsunami wave propagation.
[Legal aspects of the use of footbaths for cattle and sheep].
Kleiminger, E
2012-04-24
Claw diseases pose a major problem for dairy and sheep farms. As well as systemic treatments of these illnesses by means of drug injection, veterinarians discuss the application of footbaths for the local treatment of dermatitis digitalis or foot rot. On farms footbaths are used with different substances and for various purposes. The author presents the requirements for veterinary medicinal products (marketing authorization and manufacturing authorization) and demonstrates the operation of the "cascade in case of a treatment crisis". In addition, the distinction between veterinary hygiene biocidal products and veterinary medicinal products and substances to care for claws is explained.
Boisvert, Ronald F.; Donahue, Michael J.; Lozier, Daniel W.; McMichael, Robert; Rust, Bert W.
2001-01-01
In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST’s current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, characterization of the accuracy of software for micromagnetic modeling, and the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years. PMID:27500024
Antinauseants in Pregnancy: Teratogens or Not?
Biringer, Anne
1984-01-01
Nausea and/or vomiting affect 50% of all pregnant women. For most women, this is a self-limited problem which responds well to conservative management. However, there are some situations where the risk to the mother and fetus posed by the illness is greater than the possible risks of teratogenicity of antinauseant drugs. Antihistamines have had the widest testing, and to date there has been no evidence linking doxylamine, dimenhydrinate or promethazine to congenital malformations. Since no available drugs have official approval for use in nausea and vomiting of pregnancy, the physician is left alone to make this difficult decision. PMID:21279128
On the reconstruction of the surface structure of the spotted stars
NASA Astrophysics Data System (ADS)
Kolbin, A. I.; Shimansky, V. V.; Sakhibullin, N. A.
2013-07-01
We have developed and tested a light-curve inversion technique for photometric mapping of spotted stars. The surface of a spotted star is partitioned into small area elements, over which a search is carried out for the intensity distribution providing the best agreement between the observed and model light curves within a specified uncertainty. We have tested mapping techniques based on the use of both a single light curve and several light curves obtained in different photometric bands. Surface reconstruction artifacts due to the ill-posed nature of the problem have been identified.
Problem Posing with the Multiplication Table
ERIC Educational Resources Information Center
Dickman, Benjamin
2014-01-01
Mathematical problem posing is an important skill for teachers of mathematics, and relates readily to mathematical creativity. This article gives a bit of background information on mathematical problem posing, lists further references to connect problem posing and creativity, and then provides 20 problems based on the multiplication table to be…
Investigation of Problem-Solving and Problem-Posing Abilities of Seventh-Grade Students
ERIC Educational Resources Information Center
Arikan, Elif Esra; Ünal, Hasan
2015-01-01
This study aims to examine the effect of multiple problem-solving skills on the problem-posing abilities of gifted and non-gifted students and to assess whether the possession of such skills can predict giftedness or affect problem-posing abilities. Participants' metaphorical images of problem posing were also explored. Participants were 20 gifted…
Sanz, E.; Voss, C.I.
2006-01-01
Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.
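The covariance analysis referred to above follows the standard linearized-regression formula; a short Python sketch with a hypothetical sensitivity (Jacobian) matrix is given below to show how the observation weights change the parameter covariances.

import numpy as np

def parameter_covariance(J, w, sigma2=1.0):
    # Linearized estimation covariance Cov(p) ~ sigma2 * (J^T W J)^-1, where
    # J[i, k] is the sensitivity of observation i to parameter k and W holds
    # the observation weights.
    W = np.diag(w)
    return sigma2 * np.linalg.inv(J.T @ W @ J)

# hypothetical sensitivities of 6 observations to 2 parameters
J = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9],
              [0.2, 0.8], [0.5, 0.5], [0.6, 0.4]])
cov_uniform = parameter_covariance(J, w=np.ones(6))
cov_reweighted = parameter_covariance(J, w=[1.0, 1.0, 0.1, 0.1, 1.0, 1.0])   # weights change the picture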
Renal and urologic manifestations of pediatric condition falsification/Munchausen by proxy.
Feldman, Kenneth W; Feldman, Marc D; Grady, Richard; Burns, Mark W; McDonald, Ruth
2007-06-01
Renal and urologic problems in pediatric condition falsification (PCF)/Munchausen by proxy (MBP) can pose frustrating diagnostic and management problems. Five previously unreported victims of PCF/MBP are described. Symptoms included artifactual hematuria, recalcitrant urinary infections, dysfunctional voiding, perineal irritation, glucosuria, and "nutcracker syndrome", in addition to alleged sexual abuse. Falsifications included false or exaggerated history, specimen contamination, and induced illness. Caretakers also intentionally withheld appropriately prescribed treatment. Children underwent invasive diagnostic and surgical procedures because of the falsifications. They developed iatrogenic complications as well as behavioral problems stemming from their abuse. A PCF/MBP database was started in 1995 and includes the characteristics of 135 PCF/MBP victims examined by the first author between 1974 and 2006. Analysis of the database revealed that 25% of the children had renal or urologic issues. They were the presenting/primary issue for five. Diagnosis of PCF/MBP was delayed an average of 4.5 years from symptom onset. Almost all patients were victimized by their mothers, and maternal health falsification and somatization were common. Thirty-one of 34 children had siblings who were also victimized, six of whom died. In conclusion, falsifications of childhood renal and urologic illness are relatively uncommon; however, the deceits are prolonged and tortuous. Early recognition and intervention might limit the harm.
Convex blind image deconvolution with inverse filtering
NASA Astrophysics Data System (ADS)
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
Some Reflections on Problem Posing: A Conversation with Marion Walter
ERIC Educational Resources Information Center
Baxter, Juliet A.
2005-01-01
Marion Walter, an internationally acclaimed mathematics educator, discusses problem posing, focusing on both the merits of problem posing and techniques to encourage it. She believes that a playful attitude toward problem variables is an essential part of an inquiring mind and the more opportunities that learners have, to change a…
Wavelet methods in multi-conjugate adaptive optics
NASA Astrophysics Data System (ADS)
Helin, T.; Yudytskiy, M.
2013-08-01
Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitations of atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory.
Fundamental concepts of problem-based learning for the new facilitator.
Kanter, S L
1998-01-01
Problem-based learning (PBL) is a powerful small group learning tool that should be part of the armamentarium of every serious educator. Classic PBL uses ill-structured problems to simulate the conditions that occur in the real environment. Students play an active role and use an iterative process of seeking new information based on identified learning issues, restructuring the information in light of the new knowledge, gathering additional information, and so forth. Faculty play a facilitatory role, not a traditional instructional role, by posing metacognitive questions to students. These questions serve to assist in organizing, generalizing, and evaluating knowledge; to probe for supporting evidence; to explore faulty reasoning; to stimulate discussion of attitudes; and to develop self-directed learning and self-assessment skills. Professional librarians play significant roles in the PBL environment extending from traditional service provider to resource person to educator. Students and faculty usually find the learning experience productive and enjoyable. PMID:9681175
NASA Astrophysics Data System (ADS)
Wu, Wei; Zhao, Dewei; Zhang, Huan
2015-12-01
Super-resolution image reconstruction is an effective method to improve image quality, and it has important research significance in the field of image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbor selection problem. Building on sparse-representation-based super-resolution image reconstruction, a super-resolution reconstruction algorithm based on a multi-class dictionary is analyzed. This method avoids the redundancy problem of training only a single over-complete dictionary, makes the sub-dictionaries more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole image reconstruction. In addition, non-local self-similarity regularization is introduced for the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.
A space-frequency multiplicative regularization for force reconstruction problems
NASA Astrophysics Data System (ADS)
Aucejo, M.; De Smet, O.
2018-05-01
Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilizing the reconstruction consists in using some prior information on the quantities to identify. This is generally done by including in the formulation of the inverse problem a regularization term, as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of one's prior knowledge of the nature and the location of the excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. It is pointed out in particular that properly exploiting the space-frequency characteristics of the excitation field to identify can improve the quality of the force reconstruction.
Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi
2009-01-01
Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B0, and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
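The essence of the multiple-orientation stabilization can be sketched as a per-spatial-frequency least-squares combination of dipole kernels; the simplified Python sketch below omits the noise weighting and signal-void masking of the actual COSMOS reconstruction, and the orientations and fieldmaps are placeholders.

import numpy as np

def dipole_kernel(shape, h):
    # k-space dipole kernel D(k) = 1/3 - (k . h)^2 / |k|^2 for unit field direction h
    kx, ky, kz = np.meshgrid(*(np.fft.fftfreq(n) for n in shape), indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                          # avoid division by zero at k = 0
    return 1.0 / 3.0 - (kx * h[0] + ky * h[1] + kz * h[2])**2 / k2

def cosmos_like_recon(fields, directions):
    # Per-frequency least-squares combination of fieldmaps acquired at several
    # orientations: chi(k) = sum_i D_i(k) F_i(k) / sum_i D_i(k)^2.
    num, den = 0.0, 0.0
    for f, h in zip(fields, directions):
        D = dipole_kernel(f.shape, np.asarray(h, float) / np.linalg.norm(h))
        num = num + D * np.fft.fftn(f)
        den = den + D**2
    chi_k = num / np.maximum(den, 1e-6)        # small floor in place of proper weighting
    return np.real(np.fft.ifftn(chi_k))

# placeholder usage with three orientations and all-zero fieldmaps
shape = (32, 32, 32)
dirs = [(0.0, 0.0, 1.0), (0.0, np.sin(0.4), np.cos(0.4)), (np.sin(0.4), 0.0, np.cos(0.4))]
fields = [np.zeros(shape) for _ in dirs]
chi = cosmos_like_recon(fields, dirs)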
Glimpse: Sparsity based weak lensing mass-mapping tool
NASA Astrophysics Data System (ADS)
Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.
2018-02-01
Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
NASA Astrophysics Data System (ADS)
Hasanah, N.; Hayashi, Y.; Hirashima, T.
2017-02-01
Arithmetic word problems remain one of the most difficult areas of teaching mathematics. Learning by problem posing has been suggested as an effective way to improve students' understanding. However, the practice is difficult in the usual classroom due to the extra time needed for assessing and giving feedback on students' posed problems. To address this issue, we have developed a tablet PC software named Monsakun for learning by posing arithmetic word problems based on the Triplet Structure Model. It uses the mechanism of sentence integration, an efficient implementation of problem posing that enables agent-assessment of posed problems. The learning environment has been used in actual Japanese elementary school classrooms and its effectiveness has been confirmed in previous research. In this study, ten Indonesian elementary school students living in Japan participated in a learning session of problem posing using Monsakun in the Indonesian language. We analyzed their learning activities and show that the students were able to interact with the structure of simple word problems using this learning environment. The results of the data analysis and a questionnaire suggest that the use of Monsakun provides a way of creating an interactive and fun environment for learning by problem posing for Indonesian elementary school students.
Computed inverse resonance imaging for magnetic susceptibility map reconstruction.
Chen, Zikuan; Calhoun, Vince
2012-01-01
This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
Computed inverse MRI for magnetic susceptibility map reconstruction
Chen, Zikuan; Calhoun, Vince
2015-01-01
Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372
ERIC Educational Resources Information Center
Lyonga, Agnes Ngale; Eighmy, Myron A.; Garden-Robinson, Julie
2010-01-01
Foodborne illness and food safety risks pose health threats to everyone, including international college students who live in the United States and encounter new or unfamiliar foods. This study assessed the prevalence of self-reported foodborne illness among international college students by cultural regions and length of time in the United…
Rizk, Nesrine A; Kanafani, Zeina A; Tabaja, Hussam Z; Kanj, Souha S
2017-07-01
Beta-lactams are the cornerstone of therapy in critical care settings, but their clinical efficacy is challenged by the rise in bacterial resistance. Infections with multi-drug resistant organisms are frequent in intensive care units, posing significant therapeutic challenges. The problem is compounded by a dearth in the development of new antibiotics. In addition, critically-ill patients have unique physiologic characteristics that alter the drugs' pharmacokinetics and pharmacodynamics. Areas covered: The prolonged infusion of antibiotics (extended infusion [EI] and continuous infusion [CI]) has been the focus of research in the last decade. As beta-lactams have time-dependent killing characteristics that are altered in critically-ill patients, prolonged infusion is an attractive approach to maximize their drug delivery and efficacy. Several studies have compared traditional dosing to EI/CI of beta-lactams with regard to clinical efficacy. Clinical data are primarily composed of retrospective studies and some randomized controlled trials. Several reports show promising results. Expert commentary: Reviewing the currently available evidence, we conclude that EI/CI is probably beneficial in the treatment of critically-ill patients in whom an organism has been identified, particularly those with respiratory infections. Further studies are needed to evaluate the efficacy of EI/CI in the management of infections with resistant organisms.
Wynaden, D; Orb, A; McGowan, S; Downie, J
2000-09-01
The preparedness of comprehensive nurses to work with the mentally ill is of concern to many mental health professionals. Discussion as to whether current undergraduate nursing programs in Australia prepare a graduate to work as a beginning practitioner in the mental health area has been the centre of debate for most of the 1990s. This, along with the apparent lack of interest and motivation of these nurses to work in the mental health area following graduation, remains a major problem for mental health care providers. With one in five Australians now experiencing the burden of a major mental illness, the preparation of a nurse who is competent to work with the mentally ill would appear to be a priority. The purpose of the present study was to determine third year undergraduate nursing students' perceived level of preparedness to work with mentally ill clients. The results suggested significant differences in students' perceived level of confidence, knowledge and skills prior to and following theoretical and clinical exposure to the mental health area. Pre-testing of students before entering their third year indicated that the philosophy of comprehensive nursing: integration, although aspired to in principle, does not appear to occur in reality.
Problem Posing as a Pedagogical Strategy: A Teacher's Perspective
ERIC Educational Resources Information Center
Staebler-Wiseman, Heidi A.
2011-01-01
Student problem posing has been advocated for mathematics instruction, and it has been suggested that problem posing can be used to develop students' mathematical content knowledge. But, problem posing has rarely been utilized in university-level mathematics courses. The goal of this teacher-as-researcher study was to develop and investigate…
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
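A minimal, hedged sketch of the kind of single-slice regularized solve that underlies problems of this form (estimating nonnegative flows x from aggregate link loads y = Ax); it uses a plain nonnegative ridge formulation rather than the authors' multilevel state-space model, and the routing matrix is a toy example.

```python
# Nonnegative ridge solution of one time slice: min ||A x - y||^2 + lam ||x||^2, x >= 0.
import numpy as np
from scipy.optimize import nnls

def estimate_flows(A, y, lam=1e-2):
    n = A.shape[1]
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])   # stack Tikhonov rows
    y_aug = np.concatenate([y, np.zeros(n)])
    x, _ = nnls(A_aug, y_aug)
    return x

# Toy routing matrix: 2 observed links aggregating 3 origin-destination flows.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true + 0.05 * np.random.randn(2)
print(estimate_flows(A, y))
```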
Verhoest, Niko E.C; Lievens, Hans; Wagner, Wolfgang; Álvarez-Mozos, Jesús; Moran, M. Susan; Mattia, Francesco
2008-01-01
Synthetic Aperture Radar has shown its large potential for retrieving soil moisture maps at regional scales. However, since the backscattered signal is determined by several surface characteristics, the retrieval of soil moisture is an ill-posed problem when using single configuration imagery. Unless accurate surface roughness parameter values are available, retrieving soil moisture from radar backscatter usually provides inaccurate estimates. The characterization of soil roughness is not fully understood, and a large range of roughness parameter values can be obtained for the same surface when different measurement methodologies are used. In this paper, a literature review is made that summarizes the problems encountered when parameterizing soil roughness as well as the reported impact of the errors made on the retrieved soil moisture. A number of suggestions were made for resolving issues in roughness parameterization and studying the impact of these roughness problems on the soil moisture retrieval accuracy and scale. PMID:27879932
Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media
NASA Astrophysics Data System (ADS)
Jakobsen, Morten; Tveit, Svenn
2018-05-01
We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, which is based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help to reduce the sensitivity of the CSEM inversion results to the starting model. To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.
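The following schematic sketch shows the generic structure of a distorted-Born iterative loop of the kind referred to above; it is not the authors' T-matrix implementation. The callables `predict`, `jacobian`, and `update_greens` are placeholders for the problem-specific operators, and plain Tikhonov is used for the regularized linearized step.

```python
# Schematic distorted-Born iterative loop (generic, heavily simplified).
import numpy as np

def tikhonov_solve(J, r, lam):
    """Linearized step: min ||J dm - r||^2 + lam ||dm||^2."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ r)

def distorted_born_iterations(d_obs, m0, jacobian, predict, update_greens,
                              lam=1e-2, n_iter=10):
    m, greens = m0.copy(), None
    for _ in range(n_iter):
        greens = update_greens(m, greens)    # refresh background Green's functions
        r = d_obs - predict(m, greens)       # data residual in current background
        J = jacobian(m, greens)              # sensitivity of data to the model
        m = m + tikhonov_solve(J, r, lam)    # regularized linearized update
    return m
```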
Scene analysis in the natural environment
Lewicki, Michael S.; Olshausen, Bruno A.; Surlykke, Annemarie; Moss, Cynthia F.
2014-01-01
The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals. PMID:24744740
Students’ Creativity: Problem Posing in Structured Situation
NASA Astrophysics Data System (ADS)
Amalina, I. K.; Amirudin, M.; Budiarto, M. T.
2018-01-01
This is a qualitative study of students' creativity in a problem-posing task. The study aimed at describing students' creative thinking ability to pose mathematics problems in structured situations with varied conditions of given problems. To assess the students' creative thinking ability, an analysis of a mathematics problem-posing test based on fluency, novelty, and flexibility, together with interviews, was applied to categorize students' responses to the task. The data analysis used the quality of the posed problems and categorized responses into four levels of creativity. The results, from 29 grade-8 secondary students, revealed that a student at CTL (Creative Thinking Level) 1 met fluency, a student at CTL 2 met novelty, a student at CTL 3 met both fluency and novelty, and no one reached CTL 4. These results are affected by the students' mathematical experience. The findings of this study highlight that students' problem-posing creativity depends on their experience in mathematics learning and on the point of view from which they start to pose problems.
Validating a UAV artificial intelligence control system using an autonomous test case generator
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Huber, Justin
2013-05-01
The validation of safety-critical applications, such as autonomous UAV operations in an environment which may include human actors, is an ill-posed problem. To gain confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed and testing cost.
CREKID: A computer code for transient, gas-phase combustion kinetics
NASA Technical Reports Server (NTRS)
Pratt, D. T.; Radhakrishnan, K.
1984-01-01
A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially-fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.
Assessing Students' Mathematical Problem Posing
ERIC Educational Resources Information Center
Silver, Edward A.; Cai, Jinfa
2005-01-01
Specific examples are used to discuss assessment, an integral part of mathematics instruction, with problem posing and assessment of problem posing. General assessment criteria are suggested to evaluate student-generated problems in terms of their quantity, originality, and complexity.
ERIC Educational Resources Information Center
Ellerton, Nerida F.
2013-01-01
Although official curriculum documents make cursory mention of the need for problem posing in school mathematics, problem posing rarely becomes part of the implemented or assessed curriculum. This paper provides examples of how problem posing can be made an integral part of mathematics teacher education programs. It is argued that such programs…
ERIC Educational Resources Information Center
Van Harpen, Xianwei Y.; Sriraman, Bharath
2013-01-01
In the literature, problem-posing abilities are reported to be an important aspect/indicator of creativity in mathematics. The importance of problem-posing activities in mathematics is emphasized in educational documents in many countries, including the USA and China. This study was aimed at exploring high school students' creativity in…
Interlocked Problem Posing and Children's Problem Posing Performance in Free Structured Situations
ERIC Educational Resources Information Center
Cankoy, Osman
2014-01-01
The aim of this study is to explore the mathematical problem posing performance of students in free structured situations. Two classes of fifth grade students (N = 30) were randomly assigned to experimental and control groups. The categories of the problems posed in free structured situations by the 2 groups of students were studied through…
Problem-Posing Strategies Used by Years 8 and 9 Students
ERIC Educational Resources Information Center
Stoyanova, Elena
2005-01-01
According to Kilpatrick (1987), in the mathematics classrooms problem posing can be applied as a "goal" or as a means of instruction. Using problem posing as a goal of instruction involves asking students to respond to a range of problem-posing prompts. The main goal of this article is a classification of mathematics questions created by Years 8…
2D deblending using the multi-scale shaping scheme
NASA Astrophysics Data System (ADS)
Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan
2018-01-01
Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, the signal is coherent, whereas the interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Owing to this difference in sparsity, the coefficients of signal and interference lie in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity in order to separate the blended record. In the domain where the signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while, in the domain where the interference focuses, the multi-scale scheme suppresses the coefficients representing interference. Because the interference is clearly suppressed at each iteration, the constraint of the multi-scale shaping operator in all scale domains can be kept weak to guarantee convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme by using two synthetic examples and one field data example.
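A hedged, generic sketch of iterative shaping deblending in the spirit described above: the estimate is updated with the blending residual and then passed through a sparsity-promoting shaping operator, with soft thresholding of 2D FFT coefficients standing in for the paper's multi-scale curvelet-domain shaping. The operators `blend` and `adjoint` and the threshold value are illustrative assumptions.

```python
# Generic shaping-regularized deblending: m_{k+1} = S[ m_k + adjoint(d - blend(m_k)) ].
import numpy as np

def soft_threshold(c, lam):
    """Soft thresholding of (complex) transform coefficients."""
    mag = np.abs(c)
    return c * np.maximum(mag - lam, 0.0) / np.where(mag > 0, mag, 1.0)

def shaping_deblend(d_blended, blend, adjoint, lam=0.02, n_iter=50):
    m = np.zeros_like(d_blended)
    for _ in range(n_iter):
        m = m + adjoint(d_blended - blend(m))                   # Landweber-type update
        m = np.fft.ifft2(soft_threshold(np.fft.fft2(m), lam)).real  # shaping step
    return m
```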
When a Problem Is More than a Teacher's Question
ERIC Educational Resources Information Center
Olson, Jo Clay; Knott, Libby
2013-01-01
Not only are the problems teachers pose throughout their teaching of great importance but also the ways in which they use those problems make this a critical component of teaching. A problem-posing episode includes the problem setup, the statement of the problem, and the follow-up questions. Analysis of problem-posing episodes of precalculus…
An Analysis of Secondary and Middle School Teachers' Mathematical Problem Posing
ERIC Educational Resources Information Center
Stickles, Paula R.
2011-01-01
This study identifies the kinds of problems teachers pose when they are asked to (a) generate problems from given information and (b) create new problems from ones given to them. To investigate teachers' problem posing, preservice and inservice teachers completed background questionnaires and four problem-posing instruments. Based on previous…
Ambikile, Joel Semel; Outwater, Anne
2012-07-05
It is estimated that world-wide up to 20 % of children suffer from debilitating mental illness. Mental disorders that pose a significant concern include learning disorders, hyperkinetic disorders (ADHD), depression, psychosis, pervasive development disorders, attachment disorders, anxiety disorders, conduct disorder, substance abuse and eating disorders. Living with such children can be very stressful for caregivers in the family. Therefore, determination of challenges of living with these children is important in the process of finding ways to help or support caregivers to provide proper care for their children. The purpose of this study was to explore the psychological and emotional, social, and economic challenges that parents or guardians experience when caring for mentally ill children and what they do to address or deal with them. A qualitative study design using in-depth interviews and focus group discussions was applied. The study was conducted at the psychiatric unit of Muhimbili National Hospital in Tanzania. Two focus groups discussions (FGDs) and 8 in-depth interviews were conducted with caregivers who attended the psychiatric clinic with their children. Data analysis was done using content analysis. The study revealed psychological and emotional, social, and economic challenges caregivers endure while living with mentally ill children. Psychological and emotional challenges included being stressed by caring tasks and having worries about the present and future life of their children. They had feelings of sadness, and inner pain or bitterness due to the disturbing behaviour of the children. They also experienced some communication problems with their children due to their inability to talk. Social challenges were inadequate social services for their children, stigma, burden of caring task, lack of public awareness of mental illness, lack of social support, and problems with social life. The economic challenges were poverty, child care interfering with various income generating activities in the family, and extra expenses associated with the child's illness. Caregivers of mentally ill children experience various psychological and emotional, social, and economic challenges. Professional assistance, public awareness of mental illnesses in children, social support by the government, private sector, and non-governmental organizations (NGOs) are important in addressing these challenges.
Using informative priors in facies inversion: The case of C-ISR method
NASA Astrophysics Data System (ADS)
Valakas, G.; Modis, K.
2016-08-01
Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of McMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally to facies observations and normal scores transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
Regional regularization method for ECT based on spectral transformation of Laplacian
NASA Astrophysics Data System (ADS)
Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.
2016-10-01
Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to solve the problem and suppress noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local nonlinearity degree of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm can reconstruct images of superior quality compared with two conventional Tikhonov regularization approaches. The advantage of the new algorithm in improving performance and reducing shape distortion is demonstrated with the experimental data.
ERIC Educational Resources Information Center
Kar, Tugrul
2015-01-01
This study aimed to investigate how the semantic structures of problems posed by sixth-grade middle school students for the addition of fractions affect their problem-posing performance. The students were presented with symbolic operations involving the addition of fractions and asked to pose two different problems related to daily-life situations…
Background. There is no consensus about the level of risk of gastrointestinal illness posed by consumption of drinking water that meets all regulatory requirements. Earlier drinking water intervention trials from Canada suggested that 14% - 40% of such gastrointestinal il...
A fractional-order accumulative regularization filter for force reconstruction
NASA Astrophysics Data System (ADS)
Wensong, Jiang; Zhongyu, Wang; Jing, Lv
2018-02-01
The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper, the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to improve the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental result shows that the FARF method with r = 0.1 and α = 20 has a PRE of 0.36% and an RE of 2.45%, and is superior to other cases of the FARF method and to the traditional regularization methods for dynamic force reconstruction.
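To make the fractional-order accumulation idea concrete, the sketch below implements a grey-system-style r-order accumulative generating operator with generalized binomial weights; it is offered only as an illustration of the filtering step and is not the authors' FARF code. The weight formula and r = 0.1 are assumptions consistent with the abstract.

```python
# Fractional-order (r-AGO style) accumulation of a measured response sequence.
import numpy as np
from scipy.special import gammaln

def frac_binom(n, m):
    """Generalized binomial coefficient C(n, m) computed via log-gamma."""
    return np.exp(gammaln(n + 1) - gammaln(m + 1) - gammaln(n - m + 1))

def fractional_accumulation(x, r=0.1):
    """y[k] = sum_i C(k - i + r - 1, k - i) * x[i] for i = 0..k."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for k in range(len(x)):
        i = np.arange(k + 1)
        w = frac_binom(k - i + r - 1, k - i)   # accumulation weights
        y[k] = np.sum(w * x[i])
    return y

print(fractional_accumulation([1.0, 2.0, 1.5, 3.0], r=0.1))
```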
On the optimization of electromagnetic geophysical data: Application of the PSO algorithm
NASA Astrophysics Data System (ADS)
Godio, A.; Santilano, A.
2018-01-01
The particle swarm optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling is based on a good understanding of the ill-posed problem of geophysical inversion. We apply PSO to solve the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can easily be constrained according to external information for each single sounding. The optimization process to estimate the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic, real Audio-Magnetotelluric (AMT) and Long Period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
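A minimal particle swarm optimizer sketch of the generic kind described above (inertia, cognitive, and social velocity terms); in practice the objective would be the misfit between observed and modelled electromagnetic soundings, possibly with Occam-like penalty terms. All parameter values and names here are illustrative, not the authors' settings.

```python
# Generic bound-constrained PSO minimizer (illustrative, not the authors' code).
import numpy as np

def pso_minimize(objective, bounds, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, rng=np.random.default_rng(0)):
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)                     # keep particles in bounds
        val = np.array([objective(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

# Toy usage: recover two "resistivity" parameters from a quadratic misfit.
truth = np.array([10.0, 100.0])
sol, misfit = pso_minimize(lambda m: np.sum((m - truth) ** 2),
                           bounds=[(1.0, 50.0), (10.0, 500.0)])
print(sol, misfit)
```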
ERIC Educational Resources Information Center
Contreras, Jose
2007-01-01
In this article, I model how a problem-posing framework can be used to enhance our abilities to systematically generate mathematical problems by modifying the attributes of a given problem. The problem-posing model calls for the application of the following fundamental mathematical processes: proving, reversing, specializing, generalizing, and…
Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli
2018-05-17
The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to insufficient measurements and the diffuse nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method can avoid predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SP N ) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method through regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
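An illustrative MLEM iteration with a smoothing filter applied between updates, in the spirit of the filtered MLEM described above; this is a generic emission-tomography-style multiplicative update on a linear system, not the authors' SPN-based bioluminescence code, and the Gaussian filter is only a stand-in for their filter function.

```python
# Generic filtered MLEM: x <- x * A^T(y / Ax) / A^T 1, then smooth.
import numpy as np
from scipy.ndimage import gaussian_filter

def filtered_mlem(A, y, n_iter=100, sigma=0.5):
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity (column sums of A)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # guard against division by zero
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
        x = gaussian_filter(x, sigma)            # stand-in for the filter function
    return x
```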
ERIC Educational Resources Information Center
Kiliç, Çigdem
2017-01-01
This study examined pre-service primary school teachers' performance in posing problems that require knowledge of problem-solving strategies. Quantitative and qualitative methods were combined. The 120 participants were asked to pose a problem that could be solved by using find-a-pattern, a particular problem-solving strategy. After that,…
Artifacts as Sources for Problem-Posing Activities
ERIC Educational Resources Information Center
Bonotto, Cinzia
2013-01-01
The problem-posing process represents one of the forms of authentic mathematical inquiry which, if suitably implemented in classroom activities, could move well beyond the limitations of word problems, at least as they are typically utilized. The two exploratory studies presented sought to investigate the impact of "problem-posing" activities when…
The Art of Problem Posing. 3rd Edition
ERIC Educational Resources Information Center
Brown, Stephen I.; Walter, Marion I.
2005-01-01
The new edition of this classic book describes and provides a myriad of examples of the relationships between problem posing and problem solving, and explores the educational potential of integrating these two activities in classrooms at all levels. "The Art of Problem Posing, Third Edition" encourages readers to shift their thinking…
ERIC Educational Resources Information Center
Chen, Limin; Van Dooren, Wim; Chen, Qi; Verschaffel, Lieven
2011-01-01
In the present study, which is a part of a research project about realistic word problem solving and problem posing in Chinese elementary schools, a problem solving and a problem posing test were administered to 128 pre-service and in-service elementary school teachers from Tianjin City in China, wherein the teachers were asked to solve 3…
Enhancing students’ mathematical problem posing skill through writing in performance tasks strategy
NASA Astrophysics Data System (ADS)
Kadir; Adelina, R.; Fatma, M.
2018-01-01
Many researchers have studied the Writing in Performance Task (WiPT) strategy in learning, but only a few have paid attention to its relation to the problem-posing skill in mathematics. The problem-posing skill in mathematics covers problem reformulation, reconstruction, and imitation. The purpose of the present study was to examine the effect of the WiPT strategy on students' mathematical problem-posing skill. The research was conducted at a public junior secondary school in Tangerang Selatan. It used a quasi-experimental method with a randomized control group post-test. The sample comprised 64 students: 32 in the experimental group and 32 in the control group. A cluster random sampling technique was used. The research data were obtained by testing. The research shows that the problem-posing skill of students taught with the WiPT strategy is higher than that of students taught with a conventional strategy. The research concludes that the WiPT strategy is more effective than the conventional strategy in enhancing students' mathematical problem-posing skill.
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon
2017-01-01
Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of equivalent accuracy to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion limit and kinetically limited recession models.
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
The missions and means framework as an ontology
NASA Astrophysics Data System (ADS)
Deitz, Paul H.; Bray, Britt E.; Michaelis, James R.
2016-05-01
The analysis of warfare frequently suffers from an absence of logical structure for (a) specifying explicitly the military mission and (b) quantitatively evaluating the mission utility of alternative products and services. In 2003, the Missions and Means Framework (MMF) was developed to redress these shortcomings. The MMF supports multiple combatants and levels of war and, in fact, is a formal embodiment of the Military Decision-Making Process (MDMP). A major effect of incomplete analytic discipline in military systems analyses is that they frequently fall into the category of ill-posed problems, in which they are under-specified, under-determined, or under-constrained. Critical context is often missing. This is frequently the result of incomplete materiel requirements analyses which have unclear linkages to higher levels of warfare, system-of-systems linkages, tactics, techniques and procedures, and the effect of opposition forces. In many instances the capabilities of materiel are assumed to be immutable. This is a result of not assessing how platform components morph over time due to damage, logistics, or repair. Though ill-posed issues can be found in many places in military analysis, probably the greatest challenge comes in the disciplines of C4ISR supported by ontologies, in which formal naming and definition of the types, properties, and interrelationships of the entities are fundamental to characterizing mission success. Though the MMF was not conceived as an ontology, over the past decade some workers, particularly in the field of communication, have labelled the MMF as such. This connection will be described and discussed.
Binary optimization for source localization in the inverse problem of ECG.
Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf
2014-09-01
The goal of ECG imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has however a serious drawback: the obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to a heart abnormality under consideration. In this work, we investigate the localization of simulated ischemic areas and ectopic foci and one clinical infarction case. This affects only the choice of the binary values, while the core of the algorithms remains the same, making the approximation easily adjustable to the application needs. Two methods, a hybrid metaheuristic approach and the difference-of-convex-functions (DC) algorithm, were tested. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to the errors, while the analytical DC scheme can be efficiently applied to higher dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.
Minimal residual method provides optimal regularization parameter for diffuse optical tomography
NASA Astrophysics Data System (ADS)
Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.
2012-10-01
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
Minimal residual method provides optimal regularization parameter for diffuse optical tomography.
Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K
2012-10-01
The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular one. The choice of this regularization parameter dictates the reconstructed optical image quality and is typically chosen empirically or based on prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.
Flow curve analysis of a Pickering emulsion-polymerized PEDOT:PSS/PS-based electrorheological fluid
NASA Astrophysics Data System (ADS)
Kim, So Hee; Choi, Hyoung Jin; Leong, Yee-Kwong
2017-11-01
The steady shear electrorheological (ER) response of poly(3, 4-ethylenedioxythiophene): poly(styrene sulfonate)/polystyrene (PEDOT:PSS/PS) composite particles, which were initially fabricated from Pickering emulsion polymerization, was tested with a 10 vol% ER fluid dispersed in a silicone oil. The model independent shear rate and yield stress obtained from the raw torque-rotational speed data using a Couette type rotational rheometer under an applied electric field strength were then analyzed by Tikhonov regularization, which is the most suitable technique for solving an ill-posed inverse problem. The shear stress-shear rate data also fitted well with the data extracted from the Bingham fluid model.
An estimate for the thermal photon rate from lattice QCD
NASA Astrophysics Data System (ADS)
Brandt, Bastian B.; Francis, Anthony; Harris, Tim; Meyer, Harvey B.; Steinberg, Aman
2018-03-01
We estimate the production rate of photons by the quark-gluon plasma in lattice QCD. We propose a new correlation function which provides better control over the systematic uncertainty in estimating the photon production rate at photon momenta in the range πT/2 to 2πT. The relevant Euclidean vector current correlation functions are computed with Nf = 2 Wilson clover fermions in the chirally-symmetric phase. In order to estimate the photon rate, an ill-posed problem for the vector-channel spectral function must be regularized. We use both a direct model for the spectral function and a model-independent estimate from the Backus-Gilbert method to give an estimate for the photon rate.
Nguyen, N; Milanfar, P; Golub, G
2001-01-01
In many image restoration/resolution enhancement applications, the blurring process, i.e., point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this ill-posed class of inverse problem from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation method (GCV). We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
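For orientation, the sketch below evaluates the GCV function for Tikhonov regularization by brute force via the SVD; the paper's point is to approximate exactly these quantities cheaply with Lanczos and Gauss quadrature, so this direct version is only an illustrative baseline.

```python
# Brute-force GCV selection of a Tikhonov regularization parameter via the SVD.
import numpy as np

def gcv_choose_lambda(A, b, lambdas):
    """Return the lambda minimizing G(l) = ||(I - A A_l^+) b||^2 / trace(I - A A_l^+)^2."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    Utb = U.T @ b
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)                       # Tikhonov filter factors
        resid = (np.linalg.norm((1.0 - f) * Utb) ** 2
                 + np.linalg.norm(b - U @ Utb) ** 2)     # out-of-range part (m > n)
        trace = A.shape[0] - np.sum(f)                   # effective residual dof
        scores.append(resid / trace**2)
    return lambdas[int(np.argmin(scores))]
```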
Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation
NASA Astrophysics Data System (ADS)
Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe
2018-04-01
In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.
Dissecting Success Stories on Mathematical Problem Posing: A Case of the Billiard Task
ERIC Educational Resources Information Center
Koichu, Boris; Kontorovich, Igor
2013-01-01
"Success stories," i.e., cases in which mathematical problems posed in a controlled setting are perceived by the problem posers or other individuals as interesting, cognitively demanding, or surprising, are essential for understanding the nature of problem posing. This paper analyzes two success stories that occurred with individuals of different…
ERIC Educational Resources Information Center
Crespo, Sandra; Sinclair, Nathalie
2008-01-01
School students of all ages, including those who subsequently become teachers, have limited experience posing their own mathematical problems. Yet problem posing, both as an act of mathematical inquiry and of mathematics teaching, is part of the mathematics education reform vision that seeks to promote mathematics as a worthy intellectual…
Helping Young Students to Better Pose an Environmental Problem
ERIC Educational Resources Information Center
Pruneau, Diane; Freiman, Viktor; Barbier, Pierre-Yves; Langis, Joanne
2009-01-01
Grade 3 students were asked to solve a sedimentation problem in a local river. With scientists, students explored many aspects of the problem and proposed solutions. Graphic representation tools were used to help students to better pose the problem. Using questionnaires and interviews, researchers observed students' capacity to pose the problem…
University Students' Problem Posing Abilities and Attitudes towards Mathematics.
ERIC Educational Resources Information Center
Grundmeier, Todd A.
2002-01-01
Explores the problem posing abilities and attitudes towards mathematics of students in a university pre-calculus class and a university mathematical proof class. Reports a significant difference in numeric posing versus non-numeric posing ability in both classes. (Author/MM)
NASA Astrophysics Data System (ADS)
Akben, Nimet
2018-05-01
The interrelationship between mathematics and science education has frequently been emphasized, and common goals and approaches have often been adopted between the disciplines. Improving students' problem-solving skills in mathematics and science education has always been given special attention; however, the problem-posing approach, which plays a key role in mathematics education, has not been commonly utilized in science education. As a result, the purpose of this study was to better determine the effects of the problem-posing approach on students' problem-solving skills and metacognitive awareness in science education. This was a quasi-experimental study conducted with 61 chemistry and 40 physics students; a problem-solving inventory and a metacognitive awareness inventory were administered to participants both as a pre-test and a post-test. During the 2017-2018 academic year, problem-solving activities based on the problem-posing approach were performed with the participating students during their senior year in various university chemistry and physics departments throughout the Republic of Turkey. The study results suggested that structured, semi-structured, and free problem-posing activities improve students' problem-solving skills and metacognitive awareness. These findings indicate not only the usefulness of integrating problem-posing activities into science education programs but also the need for further research into this question.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, M.K.; Fomel, S.B.; Sethian, J.A.
2009-01-01
In the present work we derive and study a nonlinear elliptic PDE coming from the problem of estimation of sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, and hence it is ill-posed. However we are still able to solve it numerically on a long enough time interval to be of practical use. We used two approaches. The first approach is a finite-difference time-marching numerical scheme inspired by the Lax-Friedrichs method. The key features of this scheme are the Lax-Friedrichs averaging and the wide stencil in space. The second approach is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics, and the truncation of the Chebyshev series, and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
Pulse reflectometry as an acoustical inverse problem: Regularization of the bore reconstruction
NASA Astrophysics Data System (ADS)
Forbes, Barbara J.; Sharp, David B.; Kemp, Jonathan A.
2002-11-01
The theoretical basis of acoustic pulse reflectometry, a noninvasive method for the reconstruction of an acoustical duct from the reflections measured in response to an input pulse, is reviewed in terms of the inversion of the central Fredholm equation. It is known that this is an ill-posed problem in the context of finite-bandwidth experimental signals. Recent work by the authors has proposed the truncated singular value decomposition (TSVD) in the regularization of the transient input impulse response, a non-measurable quantity from which the spatial bore reconstruction is derived. In the present paper we further emphasize the relevance of the singular system framework to reflectometry applications, examining for the first time the transient bases of the system. In particular, by varying the truncation point for increasing condition numbers of the system matrix, it is found that the effects of out-of-bandwidth singular functions on the bore reconstruction can be systematically studied.
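A minimal sketch of the TSVD regularization step mentioned above, applied to a generic discretized Fredholm system: only the k largest singular triplets are kept, and choosing the truncation point is the regularization decision studied in the paper. The function is generic, not the authors' reflectometry code.

```python
# Truncated SVD solution of A x = b using the k largest singular triplets.
import numpy as np

def tsvd_solve(A, b, k):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = (U[:, :k].T @ b) / s[:k]     # damp nothing, simply discard small s
    return Vt[:k].T @ coeff
```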
A frequency-domain seismic blind deconvolution based on Gini correlations
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing
2018-02-01
In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the length of the seismic record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. Owing to a unique feature, the GCs are robust, with a higher tolerance of low-SNR data, and are less dependent on record length. Applications of the seismic blind deconvolution based on the GCs show its capacity to estimate the unknown seismic wavelet and the reflectivity sequence, for both synthetic traces and field data, even with low SNR and a short sample record.
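As a rough illustration, the sketch below computes a sample Gini correlation as the covariance of one series with the ranks of the other, normalized by the covariance of the series with its own ranks; this estimator and the pairing of values versus ranks are assumptions for illustration, not necessarily the exact criterion used in the paper.

```python
# Sample Gini correlation: cov(x, rank(y)) / cov(x, rank(x)).
import numpy as np

def gini_correlation(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    rank = lambda z: np.argsort(np.argsort(z))      # empirical ranks
    return np.cov(x, rank(y))[0, 1] / np.cov(x, rank(x))[0, 1]

rng = np.random.default_rng(0)
a = rng.standard_normal(500)
print(gini_correlation(a, 0.8 * a + 0.2 * rng.standard_normal(500)))
```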
Quantitative imaging of aggregated emulsions.
Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J
2006-02-28
Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.
Multicollinearity in hierarchical linear models.
Yu, Han; Jiang, Shanhe; Land, Kenneth C
2015-09-01
This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
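Simple collinearity diagnostics of the kind the study recommends checking at each level of the hierarchy, sketched generically below: variance inflation factors from the inverse correlation matrix and the condition number of the standardized predictor matrix. This is not the authors' procedure, only a common baseline check.

```python
# Baseline multicollinearity diagnostics for a predictor matrix X (rows = cases).
import numpy as np

def vif_and_condition_number(X):
    Z = (X - X.mean(axis=0)) / X.std(axis=0)                 # standardize columns
    vifs = np.diag(np.linalg.inv(np.corrcoef(Z, rowvar=False)))  # VIF_j = (R^-1)_jj
    cond = np.linalg.cond(Z)                                 # condition number
    return vifs, cond
```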
User-assisted video segmentation system for visual communication
NASA Astrophysics Data System (ADS)
Wu, Zhengping; Chen, Chun
2002-01-01
Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we divide the video segmentation problem into three separate phases: user-assisted feature point selection, automatic tracking of the feature points, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility of the method. First, precise feature points can be found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, plus a point insertion process to provide the feature points for the next frame's tracking.
NASA Astrophysics Data System (ADS)
Huang, Maosong; Qu, Xie; Lü, Xilin
2017-11-01
By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress return iterative algorithm for a generalized over-nonlocal strain-softening plasticity was proposed, and the consistent tangent matrix was obtained. The proposed algorithm was embedded into existing finite element codes, and it enables the nonlocal regularization of the ill-posed boundary value problems caused by pressure-independent and pressure-dependent strain-softening plasticity. The algorithm was verified by the numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.
A modified conjugate gradient method based on the Tikhonov system for computerized tomography (CT).
Wang, Qi; Wang, Huaxiang
2011-04-01
During the past few decades, computerized tomography (CT) was widely used for non-destructive testing (NDT) and non-destructive examination (NDE) in the industrial area because of its non-invasiveness and visibility. Recently, CT technology has been applied to multi-phase flow measurement. Using the principle of radiation attenuation measurements along different directions through the investigated object, together with a special reconstruction algorithm, cross-sectional information of the scanned object can be worked out. It is a typical inverse problem and has always been a challenge because of its nonlinearity and ill-conditioning. The Tikhonov regularization method is widely used for similar ill-posed problems. However, the conventional Tikhonov method does not provide reconstructions of sufficient quality; the relative errors between the reconstructed images and the real distribution should be further reduced. In this paper, a modified conjugate gradient (CG) method is applied to a Tikhonov system (MCGT method) for reconstructing CT images. The computational load is dominated by the number of independent measurements m, and a preconditioner is introduced to lower the condition number of the Tikhonov system. Both simulation and experiment results indicate that the proposed method can reduce the computational time and improve the quality of image reconstruction. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
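A hedged sketch of a preconditioned conjugate gradient solve of the Tikhonov normal equations, using a simple Jacobi preconditioner to lower the condition number; the paper's specific MCGT preconditioner is not reproduced here, and the names and parameter values are illustrative.

```python
# CG on the Tikhonov system (A^T A + lam I) x = A^T b with a Jacobi preconditioner.
import numpy as np
from scipy.sparse.linalg import cg, LinearOperator

def preconditioned_tikhonov_cg(A, b, lam=1e-2):
    n = A.shape[1]
    H = A.T @ A + lam * np.eye(n)                                  # Tikhonov system matrix
    M = LinearOperator((n, n), matvec=lambda v: v / np.diag(H))    # Jacobi preconditioner
    x, info = cg(H, A.T @ b, M=M)
    return x
```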
Single photon emission computed tomography-guided Cerenkov luminescence tomography
NASA Astrophysics Data System (ADS)
Hu, Zhenhua; Chen, Xueli; Liang, Jimin; Qu, Xiaochao; Chen, Duofang; Yang, Weidong; Wang, Jing; Cao, Feng; Tian, Jie
2012-07-01
Cerenkov luminescence tomography (CLT) has become a valuable tool for preclinical imaging because of its ability to reconstruct the three-dimensional distribution and activity of radiopharmaceuticals. However, it is still far from a mature technology and suffers from relatively low spatial resolution due to the ill-posed inverse problem of the tomographic reconstruction. In this paper, we present a single photon emission computed tomography (SPECT)-guided reconstruction method for CLT, in which a priori information on the permissible source region (PSR) from SPECT imaging results is incorporated to effectively reduce the ill-posedness of the inverse reconstruction problem. The performance of the method was first validated with the experimental reconstruction of an adult athymic nude mouse implanted with a Na131I radioactive source and an adult athymic nude mouse that received an intravenous tail injection of Na131I. A tissue-mimicking phantom experiment was then conducted to illustrate the ability of the proposed method to resolve double sources. Compared with the traditional PSR strategy, in which the PSR is determined by the surface flux distribution, the proposed method obtained much more accurate and encouraging localization and resolution results. Preliminary results showed that the proposed SPECT-guided reconstruction method is insensitive to the choice of regularization method and to tissue heterogeneity, which avoids the need to segment the organs.
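A minimal sketch of how a permissible source region supplied by a second modality can act as a prior: unknowns outside the PSR mask are removed from the linear system before a nonnegative least-squares solve. Names and sizes are illustrative and this is not the authors' reconstruction pipeline:

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_meas, n_nodes = 80, 200
A = np.abs(rng.standard_normal((n_meas, n_nodes)))    # surface-photon sensitivity matrix (illustrative)
x_true = np.zeros(n_nodes); x_true[95:105] = 2.0      # Cerenkov source
y = A @ x_true + 0.01 * rng.standard_normal(n_meas)

# Permissible source region (PSR) provided by SPECT: keep only columns inside the mask.
psr = np.zeros(n_nodes, dtype=bool); psr[90:110] = True
A_psr = A[:, psr]

# Nonnegative least squares on the reduced (better-posed) system.
x_psr, _ = nnls(A_psr, y)
x_rec = np.zeros(n_nodes); x_rec[psr] = x_psr
print("recovered source centroid:", np.argmax(x_rec))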
Sparse radar imaging using 2D compressed sensing
NASA Astrophysics Data System (ADS)
Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying
2014-10-01
Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been shown to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that the processing of ISAR imaging can be expressed mathematically as a 2D sparse decomposition problem. Based on CS, we propose a novel measuring strategy for ISAR imaging radar and utilize random sub-sampling in both the range and azimuth dimensions, which greatly reduces the amount of sampled data. To handle the 2D reconstruction problem, the usual approach is to convert the 2D problem into a 1D one via a Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we introduce the 2D-SL0 algorithm into the reconstruction of the image. It is shown that 2D-SL0 achieves results equivalent to 1D reconstruction methods while significantly reducing computational complexity and memory usage. Moreover, we present simulation results demonstrating the effectiveness and feasibility of our method.
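A minimal sketch of why the separable 2D formulation is cheaper than the vectorized Kronecker one: for a scene X observed as Y = Phi_r X Phi_a^T, the 1D form needs the explicit dictionary Phi_a ⊗ Phi_r. The sizes are illustrative and the 2D-SL0 solver itself is not reproduced here:

import numpy as np

rng = np.random.default_rng(2)
Nr, Na = 256, 256          # range / azimuth grid (illustrative)
Mr, Ma = 64, 64            # random sub-samples kept in each dimension
Phi_r = rng.standard_normal((Mr, Nr))   # range measurement matrix
Phi_a = rng.standard_normal((Ma, Na))   # azimuth measurement matrix
X = np.zeros((Nr, Na)); X[10, 20] = X[100, 200] = 1.0; X[5, 3] = 0.5   # sparse scene

# Separable 2D model: Y = Phi_r @ X @ Phi_a.T  (storage ~ Mr*Nr + Ma*Na floats)
Y = Phi_r @ X @ Phi_a.T

# Equivalent 1D (Kronecker) model: vec(Y) = (Phi_a kron Phi_r) vec(X).
# The dictionary alone needs Mr*Ma x Nr*Na entries -- infeasible at realistic sizes.
kron_entries = (Mr * Ma) * (Nr * Na)
print("2D operator storage (floats):", Phi_r.size + Phi_a.size)
print("Kronecker dictionary storage (floats):", kron_entries)

# Sanity check of the equivalence on a small sub-problem (column-major vectorization).
y_vec = np.kron(Phi_a[:4, :8], Phi_r[:4, :8]) @ X[:8, :8].flatten(order="F")
assert np.allclose(y_vec, (Phi_r[:4, :8] @ X[:8, :8] @ Phi_a[:4, :8].T).flatten(order="F"))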
Improved real-time dynamics from imaginary frequency lattice simulations
NASA Astrophysics Data System (ADS)
Pawlowski, Jan M.; Rothkopf, Alexander
2018-03-01
The computation of real-time properties, such as transport coefficients or bound-state spectra of strongly interacting quantum fields in thermal equilibrium, is a pressing matter. Since the sign problem prevents a direct evaluation of these quantities, lattice data need to be analytically continued from the Euclidean domain of the simulation to Minkowski time, in general an ill-posed inverse problem. Here we report on a novel approach to improve the determination of real-time information in the form of spectral functions by setting up a simulation prescription in imaginary frequencies. By carefully distinguishing between initial conditions and quantum dynamics, one obtains access to correlation functions also outside the conventional Matsubara frequencies. In particular, the range between ω_0 and ω_1 = 2πT, which is most relevant for the inverse problem, may be resolved more finely. In combination with the fact that in imaginary frequencies the kernel of the inverse problem is not an exponential but only a rational function, we observe significant improvements in the reconstruction of spectral functions, demonstrated in a simple 0+1 dimensional scalar field theory toy model.
Fast reconstruction of optical properties for complex segmentations in near infrared imaging
NASA Astrophysics Data System (ADS)
Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador
2017-04-01
The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurement are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation, since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we show that a problem of practical interest can be successfully addressed by making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized on a multicore computer and a GPU, respectively.
Hessian Schatten-norm regularization for linear inverse problems.
Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael
2013-05-01
We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
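A minimal sketch of one ingredient described above: projecting a (Hessian-like) matrix onto a Schatten-1 (nuclear) norm ball by projecting its vector of singular values onto an l1 ball; the radius and test matrix are illustrative:

import numpy as np

def project_l1_ball(v, radius):
    # Euclidean projection of a nonnegative vector onto the l1 ball of given radius.
    if v.sum() <= radius:
        return v
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_schatten1_ball(M, radius):
    # Project matrix M onto {X : ||X||_S1 <= radius} by shrinking its singular values.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, radius)) @ Vt

H = np.array([[3.0, 1.0], [1.0, -2.0]])      # a per-pixel Hessian (illustrative)
P = project_schatten1_ball(H, radius=2.0)
print("nuclear norm after projection:", np.linalg.svd(P, compute_uv=False).sum())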
Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction
NASA Astrophysics Data System (ADS)
Mons, Vincent; Wang, Qi; Zaki, Tamer
2017-11-01
Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging due to several aspects. Firstly, the numerical estimation of the scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in actual practice, only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180 . This approach combines the components of variational data assimilation and ensemble Kalman filtering, and inherits the robustness from the former and the ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the condition of the inverse problem, which enhances the performances of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
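A minimal sketch of a stochastic ensemble Kalman analysis step for source parameters (location and strength) given sensor readings; it illustrates the ensemble flavour of the approach with a toy forward model, not the authors' hybrid ensemble-variational scheme or turbulent-flow solver:

import numpy as np

rng = np.random.default_rng(3)

def observe(theta):
    # Toy forward model: sensor readings produced by a source with parameters theta = (x0, q).
    sensors = np.linspace(0.0, 1.0, 8)
    x0, q = theta
    return q * np.exp(-((sensors - x0) ** 2) / 0.02)

theta_true = np.array([0.37, 1.5])
y_obs = observe(theta_true) + 0.02 * rng.standard_normal(8)
R = (0.02 ** 2) * np.eye(8)                      # observation-error covariance

# Prior ensemble of source parameters and the corresponding predicted observations.
N = 200
Theta = np.column_stack([rng.uniform(0, 1, N), rng.uniform(0.5, 3.0, N)])
Y = np.array([observe(t) for t in Theta])

# Stochastic EnKF analysis: K = C_ty (C_yy + R)^{-1}, update with perturbed observations.
t_mean, y_mean = Theta.mean(0), Y.mean(0)
C_ty = (Theta - t_mean).T @ (Y - y_mean) / (N - 1)
C_yy = (Y - y_mean).T @ (Y - y_mean) / (N - 1)
K = C_ty @ np.linalg.inv(C_yy + R)
Theta_a = Theta + (y_obs + rng.multivariate_normal(np.zeros(8), R, N) - Y) @ K.T
print("posterior mean source estimate:", Theta_a.mean(0), "truth:", theta_true)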
Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.
Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf
2018-05-12
We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry which consisted of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of quality of reconstructed epicardial potentials, estimated activation and recovery times, and estimated pacing locations, and compared with the performance of Tikhonov zeroth-order regularization. Solutions in the wavelet domain achieved higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of the origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing the substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application by allowing the choice of wavelet bases that are optimized for specific clinical questions. Graphical Abstract The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed and solving it requires additional constraints for regularization. We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing the substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application by allowing the choice of wavelet bases that are optimized for specific clinical questions.
Analyzing Pre-Service Primary Teachers' Fraction Knowledge Structures through Problem Posing
ERIC Educational Resources Information Center
Kilic, Cigdem
2015-01-01
This study aimed to determine pre-service primary teachers' knowledge structures of fractions through problem posing activities. A total of 90 pre-service primary teachers participated in this study. A problem posing test consisting of two questions was used, and the participants were asked to generate as many problems as possible based on the…
Students’ Mathematical Creative Thinking through Problem Posing Learning
NASA Astrophysics Data System (ADS)
Ulfah, U.; Prabawanto, S.; Jupri, A.
2017-09-01
The research aims to investigate the differences in the enhancement of mathematical creative thinking ability between students who received the problem posing approach assisted by manipulative media and students who received the problem posing approach without manipulative media. This was a quasi-experimental study with a non-equivalent control group design. The population of this research was third-grade students of a primary school in Bandung city in the 2016/2017 academic year. The sample consisted of two classes, an experiment class and a control class. The instrument used was a test of mathematical creative thinking ability. Based on the results of the research, the enhancement of the mathematical creative thinking ability of students who received the problem posing approach with manipulative media aid is higher than that of students who received the problem posing approach without manipulative media aid. Students who learn through problem posing become accustomed to arranging mathematical sentences into word problems, which makes it easier for them to comprehend story problems.
An Interview Forum on Interlibrary Loan/Document Delivery with Lynn Wiley and Tom Delaney
ERIC Educational Resources Information Center
Hasty, Douglas F.
2003-01-01
The Virginia Boucher-OCLC Distinguished ILL Librarian Award is the most prestigious commendation given to practitioners in the field. The following questions about ILL were posed to the two most recent recipients of the Boucher Award: Tom Delaney (2002), Coordinator of Interlibrary Loan Services at Colorado State University and Lynn Wiley (2001),…
Deinstitutionalization: Its Impact on Community Mental Health Centers and the Seriously Mentally Ill
ERIC Educational Resources Information Center
Kliewer, Stephen P.; McNally Melissa; Trippany, Robyn L.
2009-01-01
Deinstitutionalization has had a significant impact on the mental health system, including the client, the agency, and the counselor. For clients with serious mental illness, learning to live in a community setting poses challenges that are often difficult to overcome. Community mental health agencies must respond to these specific needs, thus…
NASA Astrophysics Data System (ADS)
Edjlali, Ehsan; Bérubé-Lauzière, Yves
2018-01-01
We present the first Lq-Lp optimization scheme for fluorescence tomographic imaging, which is then applied to small animal imaging. Fluorescence tomography is an ill-posed and, in full generality, nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq-Lp regularization framework (Lq discrepancy function, Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed, which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS (limited-memory BFGS) quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney as the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics, including QR, RMSE, CNR, and TVE, under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5-L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2-L2 schemes.
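A minimal sketch of an Lq data-discrepancy / Lp regularization objective minimized with SciPy's L-BFGS-B on a linear toy forward model; the actual problem is nonlinear and uses the simplified spherical harmonics model, so the matrix, exponents, and smoothing constant here are illustrative:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
m, n = 50, 120
A = rng.standard_normal((m, n))                 # linear surrogate of the forward model
x_true = np.zeros(n); x_true[60:70] = 1.0       # fluorophore concentration (illustrative)
y = A @ x_true + 0.01 * rng.standard_normal(m)

q, p, lam, eps = 1.5, 1.0, 1e-2, 1e-8           # Lq discrepancy, Lp penalty (smoothed)

def cost_grad(x):
    r = A @ x - y
    aq = (np.abs(r) ** 2 + eps) ** (q / 2)      # smoothed |r|^q
    ap = (np.abs(x) ** 2 + eps) ** (p / 2)      # smoothed |x|^p
    f = aq.sum() + lam * ap.sum()
    gq = A.T @ (q * (np.abs(r) ** 2 + eps) ** (q / 2 - 1) * r)
    gp = lam * p * (np.abs(x) ** 2 + eps) ** (p / 2 - 1) * x
    return f, gq + gp

res = minimize(cost_grad, np.zeros(n), jac=True, method="L-BFGS-B",
               options={"maxiter": 200})        # premature termination acts as extra regularization
print("relative error:", np.linalg.norm(res.x - x_true) / np.linalg.norm(x_true))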
NASA Astrophysics Data System (ADS)
Supianto, A. A.; Hayashi, Y.; Hirashima, T.
2017-02-01
Problem-posing is well known as an effective activity for learning problem-solving methods. Monsakun is an interactive problem-posing learning environment that facilitates the learning of arithmetic word problems for one operation of addition or subtraction. The characteristic feature of Monsakun is problem-posing as sentence integration, in which learners compose a problem from three sentences. Monsakun provides learners with five or six sentences, including dummies, which are carefully designed by an expert teacher as meaningful distractors that help learners grasp the structure of arithmetic word problems. The results of the practical use of Monsakun in elementary schools show that many learners have difficulties in arranging the proper answer in the higher-level assignments. The analysis of the problem-posing process of such learners found that their misconceptions about arithmetic word problems cause impasses in their thinking and mislead them into using dummies. This study proposes a method of changing assignments as a support for overcoming bottlenecks of thinking. In Monsakun, the bottlenecks are often detected as a frequently repeated use of a specific dummy. If such a dummy can be detected, it becomes the key to supporting learners in overcoming their difficulty. This paper discusses how to detect the bottlenecks and how to realize such support in learning by problem-posing.
The Problems Posed and Models Employed by Primary School Teachers in Subtraction with Fractions
ERIC Educational Resources Information Center
Iskenderoglu, Tuba Aydogdu
2017-01-01
Students have difficulties in solving problems of fractions in almost all levels, and in problem posing. Problem posing skills influence the process of development of the behaviors observed at the level of comprehension. That is why it is very crucial for teachers to develop activities for student to have conceptual comprehension of fractions and…
Problem-Posing Research in Mathematics Education: Looking Back, Looking Around, and Looking Ahead
ERIC Educational Resources Information Center
Silver, Edward A.
2013-01-01
In this paper, I comment on the set of papers in this special issue on mathematical problem posing. I offer some observations about the papers in relation to several key issues, and I suggest some productive directions for continued research inquiry on mathematical problem posing.
Depression and decision-making capacity for treatment or research: a systematic review
2013-01-01
Background Psychiatric disorders can pose problems in the assessment of decision-making capacity (DMC). This is so particularly where psychopathology is seen as the extreme end of a dimension that includes normality. Depression is an example of such a psychiatric disorder. Four abilities (understanding, appreciating, reasoning and ability to express a choice) are commonly assessed when determining DMC in psychiatry and uncertainty exists about the extent to which depression impacts capacity to make treatment or research participation decisions. Methods A systematic review of the medical ethical and empirical literature concerning depression and DMC was conducted. Medline, EMBASE and PsycInfo databases were searched for studies of depression and consent and DMC. Empirical studies and papers containing ethical analysis were extracted and analysed. Results 17 publications were identified. The clinical ethics studies highlighted appreciation of information as the ability that can be impaired in depression, indicating that emotional factors can impact on DMC. The empirical studies reporting decision-making ability scores also highlighted impairment of appreciation but without evidence of strong impact. Measurement problems, however, looked likely. The frequency of clinical judgements of lack of DMC in people with depression varied greatly according to acuity of illness and whether judgements are structured or unstructured. Conclusions Depression can impair DMC especially if severe. Most evidence indicates appreciation as the ability primarily impaired by depressive illness. Understanding and measuring the appreciation ability in depression remains a problem in need of further research. PMID:24330745
A Human Proximity Operations System test case validation approach
NASA Astrophysics Data System (ADS)
Huber, Justin; Straub, Jeremy
A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity that is introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so the HPOS can be shown to be able to perform safely in environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human, across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates (based on this) to the suitability of using test cases for AI validation in other areas of prospective application.
2012-01-01
Background It is estimated that world-wide up to 20 % of children suffer from debilitating mental illness. Mental disorders that pose a significant concern include learning disorders, hyperkinetic disorders (ADHD), depression, psychosis, pervasive development disorders, attachment disorders, anxiety disorders, conduct disorder, substance abuse and eating disorders. Living with such children can be very stressful for caregivers in the family. Therefore, determination of challenges of living with these children is important in the process of finding ways to help or support caregivers to provide proper care for their children. The purpose of this study was to explore the psychological and emotional, social, and economic challenges that parents or guardians experience when caring for mentally ill children and what they do to address or deal with them. Methodology A qualitative study design using in-depth interviews and focus group discussions was applied. The study was conducted at the psychiatric unit of Muhimbili National Hospital in Tanzania. Two focus groups discussions (FGDs) and 8 in-depth interviews were conducted with caregivers who attended the psychiatric clinic with their children. Data analysis was done using content analysis. Results The study revealed psychological and emotional, social, and economic challenges caregivers endure while living with mentally ill children. Psychological and emotional challenges included being stressed by caring tasks and having worries about the present and future life of their children. They had feelings of sadness, and inner pain or bitterness due to the disturbing behaviour of the children. They also experienced some communication problems with their children due to their inability to talk. Social challenges were inadequate social services for their children, stigma, burden of caring task, lack of public awareness of mental illness, lack of social support, and problems with social life. The economic challenges were poverty, child care interfering with various income generating activities in the family, and extra expenses associated with the child’s illness. Conclusion Caregivers of mentally ill children experience various psychological and emotional, social, and economic challenges. Professional assistance, public awareness of mental illnesses in children, social support by the government, private sector, and non-governmental organizations (NGOs) are important in addressing these challenges. PMID:22559084
3D first-arrival traveltime tomography with modified total variation regularization
NASA Astrophysics Data System (ADS)
Jiang, Wenbin; Zhang, Jie
2018-02-01
Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher-resolution models than conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacked section shows significant improvements with static corrections from the MTV traveltime tomography.
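A minimal sketch of the split-Bregman iteration for a 1D L2-total-variation subproblem, the kind of subproblem the decoupled scheme alternates with a Tikhonov-regularized tomography solve; it is not the authors' 3D implementation and the parameters are illustrative:

import numpy as np

def shrink(v, t):
    # Soft-thresholding: the closed-form proximal step for the l1 term.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_split_bregman(f, mu=10.0, lam=1.0, n_iter=100):
    # Solve min_u  mu/2 ||u - f||^2 + ||D u||_1  with the split-Bregman iteration (1D).
    n = f.size
    D = np.diff(np.eye(n), axis=0)             # forward-difference operator
    H = mu * np.eye(n) + lam * D.T @ D         # system matrix of the u-subproblem
    u = f.copy()
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    for _ in range(n_iter):
        u = np.linalg.solve(H, mu * f + lam * D.T @ (d - b))
        d = shrink(D @ u + b, 1.0 / lam)
        b = b + D @ u - d
    return u

# Piecewise-constant "velocity profile" with noise (illustrative).
rng = np.random.default_rng(5)
truth = np.repeat([1.0, 3.0, 2.0], 50)
u_rec = tv_denoise_split_bregman(truth + 0.3 * rng.standard_normal(truth.size))
print("RMS error:", np.sqrt(np.mean((u_rec - truth) ** 2)))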
NASA Astrophysics Data System (ADS)
Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.
2010-03-01
Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound (US) videos, where speckle noise levels can be significant. Motion estimation using optical flow models requires the adjustment of several parameters governing the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications such as motion analysis of US videos of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.
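A minimal sketch of sweeping optical-flow parameters (window size, iteration count) and scoring each setting against a known reference displacement, in the spirit of a parameter-optimization framework; it uses OpenCV's Farneback flow on synthetic speckle rather than the authors' estimators and carotid data:

import numpy as np
import cv2

rng = np.random.default_rng(6)
frame0 = (rng.random((128, 128)) * 255).astype(np.uint8)   # speckle-like texture
shift = (2, 3)                                             # known reference motion (rows, cols)
frame1 = np.roll(frame0, shift, axis=(0, 1))

best = None
for winsize in (9, 15, 21):
    for iterations in (3, 5, 10):
        flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                            pyr_scale=0.5, levels=3, winsize=winsize,
                                            iterations=iterations, poly_n=5,
                                            poly_sigma=1.1, flags=0)
        # Endpoint error against the known displacement (flow stores (dx, dy) per pixel).
        err = np.mean(np.hypot(flow[..., 0] - shift[1], flow[..., 1] - shift[0]))
        if best is None or err < best[0]:
            best = (err, winsize, iterations)

print("best mean endpoint error %.3f with winsize=%d, iterations=%d" % best)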
On decoupling of volatility smile and term structure in inverse option pricing
NASA Astrophysics Data System (ADS)
Egger, Herbert; Hein, Torsten; Hofmann, Bernd
2006-08-01
Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.
An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups
ERIC Educational Resources Information Center
Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi
2012-01-01
The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…
Problem Posing at All Levels in the Calculus Classroom
ERIC Educational Resources Information Center
Perrin, John Robert
2007-01-01
This article explores the use of problem posing in the calculus classroom using investigative projects. Specifically, four examples of student work are examined, each one differing in the originality of the problem posed. By allowing students to explore actual questions that they have about calculus, coming from their own work or class discussion, or…
Critical Inquiry across the Disciplines: Strategies for Student-Generated Problem Posing
ERIC Educational Resources Information Center
Nardone, Carroll Ferguson; Lee, Renee Gravois
2011-01-01
Problem posing is a higher-order, active-learning task that is important for students to develop. This article describes a series of interdisciplinary learning activities designed to help students strengthen their problem-posing skills, which requires that students become more responsible for their learning and that faculty move to a facilitator…
Developing Teachers' Subject Didactic Competence through Problem Posing
ERIC Educational Resources Information Center
Ticha, Marie; Hospesova, Alena
2013-01-01
Problem posing (not only in lesson planning but also directly in teaching whenever needed) is one of the attributes of a teacher's subject didactic competence. In this paper, problem posing in teacher education is understood as an educational and a diagnostic tool. The results of the study were gained in pre-service primary school teacher…
ERIC Educational Resources Information Center
Barlow, Angela T.; Cates, Janie M.
2006-01-01
This study investigated the impact of incorporating problem posing in elementary classrooms on the beliefs held by elementary teachers about mathematics and mathematics teaching. Teachers participated in a year-long staff development project aimed at facilitating the incorporation of problem posing into their classrooms. Beliefs were examined via…
The Posing of Arithmetic Problems by Mathematically Talented Students
ERIC Educational Resources Information Center
Espinoza González, Johan; Lupiáñez Gómez, José Luis; Segovia Alex, Isidoro
2016-01-01
Introduction: This paper analyzes the arithmetic problems posed by a group of mathematically talented students when given two problem-posing tasks, and compares these students' responses to those given by a standard group of public school students to the same tasks. Our analysis focuses on characterizing and identifying the differences between the…
Posing Problems to Understand Children's Learning of Fractions
ERIC Educational Resources Information Center
Cheng, Lu Pien
2013-01-01
In this study, ways in which problem posing activities aid our understanding of children's learning of addition of unlike fractions and multiplication of proper fractions were examined. In particular, how a simple problem posing activity helps teachers take a second, deeper look at children's understanding of fraction concepts will be discussed. The…
Development of the Structured Problem Posing Skills and Using Metaphoric Perceptions
ERIC Educational Resources Information Center
Arikan, Elif Esra; Unal, Hasan
2014-01-01
The purpose of this study was to introduce a problem posing activity to third grade students who had never encountered it before. This study also explored students' metaphorical images of the problem posing process. Participants were from a public school in the Marmara Region of Turkey. Data were analyzed both qualitatively (content analysis for difficulty and…
Integrating Worked Examples into Problem Posing in a Web-Based Learning Environment
ERIC Educational Resources Information Center
Hsiao, Ju-Yuan; Hung, Chun-Ling; Lan, Yu-Feng; Jeng, Yoau-Chau
2013-01-01
Most students lack experience with problem posing and perceive it as difficult. The study hypothesized that worked examples may have benefits for supporting students' problem posing activities. A quasi-experiment was conducted in the context of a business mathematics course to examine the effects of integrating worked examples into…
Roberts, Laura Weiss; Kim, Jane Paik
2014-01-01
Motivation Ethical controversy surrounds clinical research involving seriously ill participants. While many stakeholders have opinions, the extent to which protocol volunteers themselves see human research as ethically acceptable has not been documented. To address this gap of knowledge, authors sought to assess views of healthy and ill clinical research volunteers regarding the ethical acceptability of human studies involving individuals who are ill or are potentially vulnerable. Methods Surveys and semi-structured interviews were used to query clinical research protocol participants and a comparison group of healthy individuals. A total of 179 respondents participated in this study: 150 in protocols (60 mentally ill, 43 physically ill, and 47 healthy clinical research protocol participants) and 29 healthy individuals not enrolled in protocols. Main outcome measures included responses regarding ethical acceptability of clinical research when it presents significant burdens and risks, involves people with serious mental and physical illness, or enrolls people with other potential vulnerabilities in the research situation. Results Respondents expressed decreasing levels of acceptance of participation in research that posed burdens of increasing severity. Participation in protocols with possibly life-threatening consequences was perceived as least acceptable (mean = 1.82, sd = 1.29). Research on serious illnesses, including HIV, cancer, schizophrenia, depression, and post-traumatic stress disorder, was seen as ethically acceptable across respondent groups (range of means = [4.0, 4.7]). Mentally ill volunteers expressed levels of ethical acceptability for physical illness research and mental illness research as acceptable and similar, while physically ill volunteers expressed greater ethical acceptability for physical illness research than for mental illness research. Mentally ill, physically ill, and healthy participants expressed neutral to favorable perspectives regarding the ethical acceptability of clinical research participation by potentially vulnerable subpopulations (difference in acceptability perceived by mentally ill - healthy=−0.04, CI [−0.46, 0.39]; physically ill – healthy= −0.13, CI [−0.62, −.36]). Conclusions Clinical research volunteers and healthy clinical research-“naive” individuals view studies involving ill people as ethically acceptable, and their responses reflect concern regarding research that poses considerable burdens and risks and research involving vulnerable subpopulations. Physically ill research volunteers may be more willing to see burdensome and risky research as acceptable. Mentally ill research volunteers and healthy individuals expressed similar perspectives in this study, helping to dispel a misconception that those with mental illness should be presumed to hold disparate views. PMID:24931849
A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays
Hughes, Alec; Hynynen, Kullervo
2016-01-01
Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323
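A minimal sketch of Tikhonov-regularized computation of complex element drives for a small linear array, where the regularization parameter trades focal quality against array efficiency; the geometry, control points, and target pressures are illustrative simplifications of the focus-rotation problem:

import numpy as np

rng = np.random.default_rng(7)
k = 2 * np.pi / 1.5e-3                      # wavenumber for ~1 MHz ultrasound in water (illustrative)
elements = np.column_stack([np.linspace(-0.02, 0.02, 64), np.zeros(64), np.zeros(64)])
controls = np.array([[0.0, 0.0, 0.05], [0.002, 0.0, 0.05]])   # control points defining the focus

# Rayleigh-Sommerfeld-like propagation matrix from elements to control points.
d = np.linalg.norm(controls[:, None, :] - elements[None, :, :], axis=-1)
H = np.exp(1j * k * d) / d

p_target = np.array([1.0, 1.0j])            # desired complex pressures (sets the focal "orientation")
lam = 1e-3                                  # Tikhonov parameter: larger -> more efficient, less exact focus
u = np.linalg.solve(H.conj().T @ H + lam * np.eye(64), H.conj().T @ p_target)
print("achieved pressures:", H @ u)
print("drive power (efficiency proxy):", np.vdot(u, u).real)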
Applications of Electrical Impedance Tomography (EIT): A Short Review
NASA Astrophysics Data System (ADS)
Kanti Bera, Tushar
2018-03-01
Electrical Impedance Tomography (EIT) is a tomographic imaging method that solves an ill-posed inverse problem using boundary voltage-current data collected from the surface of the object under test. Although its spatial resolution is low compared to conventional tomographic imaging modalities, its several advantages have led EIT to be studied for a number of applications such as medical imaging, materials engineering, civil engineering, biotechnology, chemical engineering, MEMS, and other fields of engineering and applied sciences. In this paper, the applications of EIT are reviewed and presented as a short summary. The working principle, instrumentation, and advantages are briefly discussed, followed by a detailed discussion of the applications of EIT technology in different areas of engineering, technology, and applied sciences.
[Prevalence of patients with HIV infection in an emergency department].
Greco, G M; Paparo, R; Ventura, R; Migliardi, C; Tallone, R; Moccia, F
1995-01-01
The activity at an ED, primarily aiming at providing rational and qualified support to critically ill patients, is forced to manage very different nosographic entities, including infectious, often contagious, pathologies. In this context, the spread of HIV infection poses a number of problems concerning both the kind of patients presenting to the ED and the occupational risk of health-care workers. In the first four months of 1992, the incidence of patients with recognized or presumed HIV infection at the "Pronto Soccorso Medico" was 1.78% of the 2327 patients admitted. This study aims to contribute to the epidemiologic definition of the risk of HIV infection due to occupational exposure, stressing the peculiar conditions of urgency-emergency often characterizing the activity within the ED.
Donow, H S
1990-08-01
Care of an elder patient is often regarded by the children as an unwanted burden. Anderson's 1968 play, I Never Sang for My Father, and Ariyoshi's 1972 novel, Kokotsu no hito [The Twilight years], show how two different families of two different cultures (American and Japanese) respond to this crisis. Both texts arrive at dramatically different conclusions: in one the children, Gene and Alice, prove unwilling or unable to cope with the problems posed by their father's need; in the other Akiko, though nearly overwhelmed by the burden of her father-in-law's illness, emerges richer for the experience.
Improving chemical species tomography of turbulent flows using covariance estimation.
Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J
2017-05-01
Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of additive information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
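A minimal sketch of the Bayesian (MAP) linear reconstruction with a joint-normal prior whose spatial covariance encodes the flow statistics; the path-integral matrix, plume, and squared-exponential covariance are illustrative stand-ins for an actual CST setup:

import numpy as np

rng = np.random.default_rng(8)
n_paths, n_pix = 32, 100                      # laser beam paths and concentration pixels (illustrative)
A = rng.random((n_paths, n_pix)) * 0.1        # path-integral (absorption) matrix
x_true = np.exp(-0.5 * ((np.arange(n_pix) - 40.0) / 8.0) ** 2)   # smooth plume
b = A @ x_true + 0.01 * rng.standard_normal(n_paths)

# Joint-normal prior: mean and a squared-exponential spatial covariance estimated for the flow.
x_pr = np.full(n_pix, x_true.mean())
i = np.arange(n_pix)
G_pr = 0.2 * np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * 10.0 ** 2))
G_e = (0.01 ** 2) * np.eye(n_paths)           # measurement-noise covariance

# MAP estimate for the limited-data (rank-deficient) problem:
# x = x_pr + (A^T G_e^-1 A + G_pr^-1)^-1 A^T G_e^-1 (b - A x_pr)
lhs = A.T @ np.linalg.solve(G_e, A) + np.linalg.inv(G_pr + 1e-10 * np.eye(n_pix))
rhs = A.T @ np.linalg.solve(G_e, b - A @ x_pr)
x_map = x_pr + np.linalg.solve(lhs, rhs)
print("relative error:", np.linalg.norm(x_map - x_true) / np.linalg.norm(x_true))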
Locating an atmospheric contamination source using slow manifolds
NASA Astrophysics Data System (ADS)
Tang, Wenbo; Haller, George; Baik, Jong-Jin; Ryu, Young-Hee
2009-04-01
Finite-size particle motion in fluids obeys the Maxey-Riley equations, which become singular in the limit of infinitesimally small particle size. Because of this singularity, finding the source of a dispersed set of small particles is a numerically ill-posed problem that leads to exponential blowup. Here we use recent results on the existence of a slow manifold in the Maxey-Riley equations to overcome this difficulty in source inversion. Specifically, we locate the source of particles by projecting their dispersed positions on a time-varying slow manifold, and by advecting them on the manifold in backward time. We use this technique to locate the source of a hypothetical anthrax release in an unsteady three-dimensional atmospheric wind field in an urban street canyon.
Developing Pre-Service Teachers Understanding of Fractions through Problem Posing
ERIC Educational Resources Information Center
Toluk-Ucar, Zulbiye
2009-01-01
This study investigated the effect of problem posing on the understanding of fraction concepts among pre-service primary teachers enrolled in two different versions of a methods course at a university in Turkey. In the experimental version, problem posing was used as a teaching strategy. At the beginning of the study, the pre-service teachers'…
The Effects of Problem Posing on Student Mathematical Learning: A Meta-Analysis
ERIC Educational Resources Information Center
Rosli, Roslinda; Capraro, Mary Margaret; Capraro, Robert M.
2014-01-01
The purpose of the study was to meta-synthesize research findings on the effectiveness of problem posing and to investigate the factors that might affect the incorporation of problem posing in the teaching and learning of mathematics. The eligibility criteria for inclusion of literature in the meta-analysis were: published between 1989 and 2011,…
Teachers Implementing Mathematical Problem Posing in the Classroom: Challenges and Strategies
ERIC Educational Resources Information Center
Leung, Shuk-kwan S.
2013-01-01
This paper reports a study about how a teacher educator shared knowledge with teachers when they worked together to implement mathematical problem posing (MPP) in the classroom. It includes feasible methods for getting practitioners to use research-based tasks aligned to the curriculum in order to encourage children to pose mathematical problems.…
Problem-Posing in Education: Transformation of the Practice of the Health Professional.
ERIC Educational Resources Information Center
Casagrande, L. D. R.; Caron-Ruffino, M.; Rodrigues, R. A. P.; Vendrusculo, D. M. S.; Takayanagui, A. M. M.; Zago, M. M. F.; Mendes, M. D.
1998-01-01
Studied the use of a problem-posing model in health education. The model based on the ideas of Paulo Freire is presented. Four innovative experiences of teaching-learning in environmental and occupational health and patient education are reported. Notes that the problem-posing model has the capability to transform health-education practice.…
ERIC Educational Resources Information Center
Kar, Tugrul
2016-01-01
This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…
NASA Astrophysics Data System (ADS)
Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang
2015-12-01
As an advanced measurement technique that is non-radiative, non-intrusive, fast-responding, and low cost, electrical tomography (ET) has developed rapidly in recent decades. The imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm owing to its dynamic imaging capability, real-time response, and easy realization. However, the LBP algorithm has low spatial resolution due to the inherent ‘soft field’ effect and the ‘ill-posed solution’ problem; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed in which every ET measurement is decomposed into two independent new data values based on the positive and negative sensing areas of that measurement. Consequently, the total number of measurements is doubled, effectively reducing the ill-posedness. In addition, an index to quantify the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect in the ET imaging process. Building on the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improved spatial resolution.
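A minimal sketch of normalized linear back projection together with one plausible reading of the positive/negative decomposition (splitting each sensitivity row into its positive and negative parts and treating them as separate measurements); the authors' exact decomposition rule may differ, and the sensitivity matrix here is random rather than physics-based:

import numpy as np

rng = np.random.default_rng(9)
n_meas, n_pix = 28, 100
S = rng.standard_normal((n_meas, n_pix)) * 0.1 + 0.05   # sensitivity matrix (has + and - regions)
x_true = np.zeros(n_pix); x_true[30:40] = 1.0
b = S @ x_true

def lbp(S, b):
    # Normalized linear back projection: fast, one-step, low-resolution.
    den = S.T @ np.ones(S.shape[0])
    return (S.T @ b) / np.where(np.abs(den) > 1e-12, den, 1.0)

# One plausible decomposition: treat the positive and negative sensing areas of each
# measurement as two independent "measurements", doubling the data set. In a real system
# the decomposed data would come from the measurement model, not from the unknown x_true.
S_pos, S_neg = np.clip(S, 0, None), np.clip(S, None, 0)
S_ext = np.vstack([S_pos, S_neg])
b_ext = np.concatenate([S_pos @ x_true, S_neg @ x_true])

print("LBP image error           :", np.linalg.norm(lbp(S, b) - x_true))
print("decomposed LBP image error:", np.linalg.norm(lbp(S_ext, b_ext) - x_true))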
Scott, Elizabeth; Herbold, Nancie
2010-06-01
Foodborne illnesses pose a problem to all individuals but are especially significant for infants, the elderly, and individuals with compromised immune systems. Personal hygiene is recognized as the number-one way people can lower their risk. The majority of meals in the U.S. are eaten at home. Little is known, however, about the actual application of personal hygiene and sanitation behaviors in the home. The study discussed in this article assessed knowledge of hygiene practices compared to observed behaviors and determined whether knowledge equated to practice. It was a descriptive study involving a convenience sample of 30 households. Subjects were recruited from the Boston area and a researcher and/or a research assistant traveled to the homes of study participants to videotape a standard food preparation procedure preceded by floor mopping. The results highlight the differences between individuals' reported beliefs and actual practice. This information can aid food safety and other health professionals in targeting food safety education so that consumers understand their own critical role in decreasing their risk for foodborne illness.
Payne, John
1971-01-01
The new film of David Mercer's Family life poses some hard questions for psychiatry to answer and puts the Laingian case for 'schizophrenia' being an illness created within the family unit. PMID:27670980
Mighty Mathematicians: Using Problem Posing and Problem Solving to Develop Mathematical Power
ERIC Educational Resources Information Center
McGatha, Maggie B.; Sheffield, Linda J.
2006-01-01
This article describes a year-long professional development institute combined with a summer camp for students. Both were designed to help teachers and students develop their problem-solving and problem-posing abilities.
McEwan, Miranda; Friedman, Susan Hatters
2016-12-01
Psychiatrists are mandated to report suspicions of child abuse in America. Potential for harm to children should be considered when one is treating parents who are at risk. Although it is the commonly held wisdom that mental illness itself is a major risk factor for child abuse, there are methodologic issues with studies purporting to demonstrate this. Rather, the risk from an individual parent must be considered. Substance abuse and personality disorder pose a separate risk than serious mental illness. Violence risk from mental illness is dynamic, rather than static. When severe mental illness is well-treated, the risk is decreased. However, these families are in need of social support. Copyright © 2016 Elsevier Inc. All rights reserved.
An Analysis of Problem-Posing Tasks in Chinese and US Elementary Mathematics Textbooks
ERIC Educational Resources Information Center
Cai, Jinfa; Jiang, Chunlian
2017-01-01
This paper reports on 2 studies that examine how mathematical problem posing is integrated in Chinese and US elementary mathematics textbooks. Study 1 involved a historical analysis of the problem-posing (PP) tasks in 3 editions of the most widely used elementary mathematics textbook series published by People's Education Press in China over 3…
ERIC Educational Resources Information Center
Aydogdu Iskenderoglu, Tuba
2018-01-01
It is important for pre-service teachers to know the conceptual difficulties they have experienced regarding the concepts of multiplication and division of fractions, and problem posing is a way to reveal these conceptual difficulties. Problem posing is a synthetic activity that fundamentally has multiple answers. The purpose of this study is to…
ERIC Educational Resources Information Center
Cankoy, Osman; Özder, Hasan
2017-01-01
The aim of this study is to develop a scoring rubric to assess primary school students' problem posing skills. The rubric including five dimensions namely solvability, reasonability, mathematical structure, context and language was used. The raters scored the students' problem posing skills both with and without the scoring rubric to test the…
ERIC Educational Resources Information Center
Van Harpen, Xianwei Y.; Presmeg, Norma C.
2013-01-01
The importance of students' problem-posing abilities in mathematics has been emphasized in the K-12 curricula in the USA and China. There are claims that problem-posing activities are helpful in developing creative approaches to mathematics. At the same time, there are also claims that students' mathematical content knowledge could be highly…
An Investigation of Eighth Grade Students' Problem Posing Skills (Turkey Sample)
ERIC Educational Resources Information Center
Arikan, Elif Esra; Ünal, Hasan
2015-01-01
Posing a problem is a creative activity in mathematics education. The purpose of the study was to explore eighth grade students' problem posing ability. Three learning domains, namely the four operations, fractions, and geometry, were chosen for this reason. There were two classes, coded as class A and class B. Class A…
Mathematical Creative Process Wallas Model in Students Problem Posing with Lesson Study Approach
ERIC Educational Resources Information Center
Nuha, Muhammad 'Azmi; Waluya, S. B.; Junaedi, Iwan
2018-01-01
Creative thinking is very important in the modern era so that it should be improved by doing efforts such as making a lesson that train students to pose their own problems. The purposes of this research are (1) to give an initial description of students about mathematical creative thinking level in Problem Posing Model with Lesson Study approach…
Problem Posing with Realistic Mathematics Education Approach in Geometry Learning
NASA Astrophysics Data System (ADS)
Mahendra, R.; Slamet, I.; Budiyono
2017-09-01
One of the difficulties students face in learning geometry concerns the topic of the plane, which requires students to understand abstract material. The aim of this research is to determine the effect of the Problem Posing learning model with a Realistic Mathematics Education Approach on geometry learning. This quasi-experimental research was conducted in a junior high school in Karanganyar, Indonesia. The sample was taken using a stratified cluster random sampling technique. The results indicate that the Problem Posing learning model with the Realistic Mathematics Education Approach can significantly improve students' conceptual understanding in geometry learning, especially on plane topics. This is because, under Problem Posing with the Realistic Mathematics Education Approach, students become active in constructing their knowledge and in posing and solving problems in realistic contexts, so it is easier for them to understand concepts and solve problems. Therefore, the Problem Posing learning model with the Realistic Mathematics Education Approach is appropriate for mathematics learning, especially for geometry material. Furthermore, it can improve student achievement.
NASA Technical Reports Server (NTRS)
Zubko, V.; Dwek, E.; Arendt, R. G.; Oegerle, William (Technical Monitor)
2001-01-01
We present new interstellar dust models that are consistent with both the FUV to near-IR extinction and the infrared (IR) emission measurements from the diffuse interstellar medium. The models are characterized by different dust compositions and abundances. The problem we solve consists of determining the size distribution of the various dust components of the model. This is a typical ill-posed inversion problem, which we solve using a regularization approach. We reproduce the Li & Draine (2001, ApJ, 554, 778) results; however, their model requires an excessive amount of interstellar silicon (48 ppM of hydrogen, compared to the 36 ppM available for an ISM of solar composition) to be locked up in dust. We found that dust models consisting of PAHs, amorphous silicates, graphite, and composite grains made up of silicates, organic refractory material, and water ice provide an improved fit to the extinction and IR emission measurements, while still requiring a subsolar amount of silicon to be in the dust. This research was supported by NASA Astrophysical Theory Program NRA 99-OSS-01.
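A minimal sketch of the regularized inversion for a grain size distribution: discretize the kernel, add a second-difference smoothness penalty, and solve with a nonnegativity constraint; the kernel, grids, and data are synthetic placeholders rather than the astrophysical quantities:

import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(10)
n_lam, n_a = 40, 60
lam_grid = np.linspace(0.1, 3.0, n_lam)          # wavelengths in micron (illustrative)
a_grid = np.logspace(-3, 0, n_a)                 # grain radii in micron (illustrative)

# Synthetic kernel K[l, j] ~ extinction cross-section of size a_j at wavelength lam_l.
K = (a_grid[None, :] ** 2) / (1.0 + (lam_grid[:, None] / (2 * np.pi * a_grid[None, :])) ** 4)
n_true = np.exp(-0.5 * ((np.log10(a_grid) + 1.5) / 0.3) ** 2)   # log-normal-like size distribution
ext = K @ n_true * (1 + 0.01 * rng.standard_normal(n_lam))      # "observed" extinction curve

# Second-order Tikhonov (smoothness) regularization, solved as a nonnegative least-squares
# problem on the stacked system [K; sqrt(alpha) L] n = [ext; 0].
L = np.diff(np.eye(n_a), n=2, axis=0)
alpha = 1e-4
A_aug = np.vstack([K, np.sqrt(alpha) * L])
b_aug = np.concatenate([ext, np.zeros(L.shape[0])])
n_rec, _ = nnls(A_aug, b_aug)
print("relative error:", np.linalg.norm(n_rec - n_true) / np.linalg.norm(n_true))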
An efficient method for model refinement in diffuse optical tomography
NASA Astrophysics Data System (ADS)
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary-value optimization problem that necessitates regularization. Bayesian methods are also suitable because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, for which model-retrieval criteria, especially total least squares (TLS), must be used to refine the model error. However, TLS is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) to treat the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes the abnormality well.
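For illustration only, a sketch of the plain total least squares (TLS) building block that RTLS extends with Tikhonov-type constraints; the matrices below are random stand-ins, not a linearized DOT system.

```python
# For illustration only: the plain total least squares (TLS) building block that
# RTLS extends with Tikhonov-type constraints. The matrices are random stand-ins,
# not a linearized DOT system.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
A_noisy = A + 0.01 * rng.standard_normal(A.shape)       # errors in the model matrix
b_noisy = A @ x_true + 0.01 * rng.standard_normal(50)   # errors in the data

def tls(A, b):
    """Classical TLS: from the smallest right singular vector of [A | b]."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]
    return -v[:n] / v[n]

print("TLS estimate:", tls(A_noisy, b_noisy))
print("LS  estimate:", np.linalg.lstsq(A_noisy, b_noisy, rcond=None)[0])
```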
NASA Astrophysics Data System (ADS)
Petržala, Jaromír
2018-07-01
The knowledge of the emission function of a city is crucial for simulation of sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper presents a theoretical feasibility study of various approaches to solving this inverse problem, in particular testing the fitness of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model for calculation of the sky spectral radiance in the form of a functional of the emission spectral radiance. Subsequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and contaminated by random errors. The results demonstrate that the second-order Tikhonov regularization method, together with the regularization parameter chosen by the L-curve maximum-curvature criterion, provides solutions that are in good agreement with the assumed model emission functions.
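A hedged sketch of second-order Tikhonov regularization with the parameter picked near the L-curve corner; the forward operator, data and noise level are synthetic stand-ins for the sky-radiance functional, and the corner is found with a simple maximum-curvature heuristic.

```python
# Hedged sketch: second-order Tikhonov regularization with the parameter chosen
# near the L-curve corner (maximum-curvature heuristic). The kernel, data and
# noise level below are synthetic stand-ins for the sky-radiance functional.
import numpy as np

rng = np.random.default_rng(2)
n = 80
t = np.linspace(0.0, 1.0, n)
A = np.exp(-5.0 * np.abs(np.subtract.outer(t, t))) / n   # smoothing forward operator
x_true = np.sin(2 * np.pi * t) ** 2                      # mock emission function
b = A @ x_true + 1e-3 * rng.standard_normal(n)

L2 = np.diff(np.eye(n), 2, axis=0)                       # second-difference operator

def tikhonov(A, b, L, lam):
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ b)

lams = np.logspace(-8, 0, 60)
log_res, log_semi = [], []
for lam in lams:
    x = tikhonov(A, b, L2, lam)
    log_res.append(np.log(np.linalg.norm(A @ x - b)))
    log_semi.append(np.log(np.linalg.norm(L2 @ x)))
log_res, log_semi = np.array(log_res), np.array(log_semi)

# crude corner detection: maximum curvature of the (log_res, log_semi) curve
s = np.log(lams)
dr, de = np.gradient(log_res, s), np.gradient(log_semi, s)
d2r, d2e = np.gradient(dr, s), np.gradient(de, s)
kappa = (dr * d2e - d2r * de) / (dr**2 + de**2) ** 1.5
idx = 2 + np.argmax(np.abs(kappa[2:-2]))                 # ignore noisy endpoints
print("L-curve corner at lambda ~", lams[idx])
```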
Villotti, Patrizia; Corbière, Marc; Dewa, Carolyn S; Fraccaroli, Franco; Sultan-Taïeb, Hélène; Zaniboni, Sara; Lecomte, Tania
2017-09-12
Compared to groups with other disabilities, people with a severe mental illness face the greatest stigma and barriers to employment opportunities. This study contributes to the understanding of the relationship between workplace social support and work productivity in people with severe mental illness working in Social Enterprises by taking into account the mediating role of self-stigma and job tenure self-efficacy. A total of 170 individuals with a severe mental disorder employed in a Social Enterprise filled out questionnaires assessing personal and work-related variables at Phase-1 (baseline) and Phase-2 (6-month follow-up). Process modeling was used to test for serial mediation. In the Social Enterprise workplace, social support yields better perceptions of work productivity through lower levels of internalized stigma and higher confidence in facing job-related problems. When testing serial multiple mediations, the specific indirect effect of high workplace social support on work productivity through both low internalized stigma and high job tenure self-efficacy was significant with a point estimate of 1.01 (95% CI = 0.42, 2.28). Continued work in this area can provide guidance for organizations in the open labor market addressing the challenges posed by the work integration of people with severe mental illness. Implications for Rehabilitation: Work integration of people with severe mental disorders is difficult because of limited access to supportive and nondiscriminatory workplaces. Social enterprise represents an effective model for supporting people with severe mental disorders to integrate the labor market. In the social enterprise workplace, social support yields better perceptions of work productivity through lower levels of internalized stigma and higher confidence in facing job-related problems.
Chatzitomaris, Apostolos; Hoermann, Rudolf; Midgley, John E.; Hering, Steffen; Urban, Aline; Dietrich, Barbara; Abood, Assjana; Klein, Harald H.; Dietrich, Johannes W.
2017-01-01
The hypothalamus–pituitary–thyroid feedback control is a dynamic, adaptive system. In situations of illness and deprivation of energy representing type 1 allostasis, the stress response operates to alter both its set point and peripheral transfer parameters. In contrast, type 2 allostatic load, typically effective in psychosocial stress, pregnancy, metabolic syndrome, and adaptation to cold, produces a nearly opposite phenotype of predictive plasticity. The non-thyroidal illness syndrome (NTIS) or thyroid allostasis in critical illness, tumors, uremia, and starvation (TACITUS), commonly observed in hospitalized patients, displays a historically well-studied pattern of allostatic thyroid response. This is characterized by decreased total and free thyroid hormone concentrations and varying levels of thyroid-stimulating hormone (TSH) ranging from decreased (in severe cases) to normal or even elevated (mainly in the recovery phase) TSH concentrations. An acute versus chronic stage (wasting syndrome) of TACITUS can be discerned. The two types differ in molecular mechanisms and prognosis. The acute adaptation of thyroid hormone metabolism to critical illness may prove beneficial to the organism, whereas the far more complex molecular alterations associated with chronic illness frequently lead to allostatic overload. The latter is associated with poor outcome, independently of the underlying disease. Adaptive responses of thyroid homeostasis extend to alterations in thyroid hormone concentrations during fetal life, periods of weight gain or loss, thermoregulation, physical exercise, and psychiatric diseases. The various forms of thyroid allostasis pose serious problems in differential diagnosis of thyroid disease. This review article provides an overview of physiological mechanisms as well as major diagnostic and therapeutic implications of thyroid allostasis under a variety of developmental and straining conditions. PMID:28775711
Problem Posing and Solving with Mathematical Modeling
ERIC Educational Resources Information Center
English, Lyn D.; Fox, Jillian L.; Watters, James J.
2005-01-01
Mathematical modeling is explored as both problem posing and problem solving from two perspectives, that of the child and the teacher. Mathematical modeling provides rich learning experiences for elementary school children and their teachers.
Common mental health problems in immigrants and refugees: general approach in primary care
Kirmayer, Laurence J.; Narasiah, Lavanya; Munoz, Marie; Rashid, Meb; Ryder, Andrew G.; Guzder, Jaswant; Hassan, Ghayda; Rousseau, Cécile; Pottie, Kevin
2011-01-01
Background: Recognizing and appropriately treating mental health problems among new immigrants and refugees in primary care poses a challenge because of differences in language and culture and because of specific stressors associated with migration and resettlement. We aimed to identify risk factors and strategies in the approach to mental health assessment and to prevention and treatment of common mental health problems for immigrants in primary care. Methods: We searched and compiled literature on prevalence and risk factors for common mental health problems related to migration, the effect of cultural influences on health and illness, and clinical strategies to improve mental health care for immigrants and refugees. Publications were selected on the basis of relevance, use of recent data and quality in consultation with experts in immigrant and refugee mental health. Results: The migration trajectory can be divided into three components: premigration, migration and postmigration resettlement. Each phase is associated with specific risks and exposures. The prevalence of specific types of mental health problems is influenced by the nature of the migration experience, in terms of adversity experienced before, during and after resettlement. Specific challenges in migrant mental health include communication difficulties because of language and cultural differences; the effect of cultural shaping of symptoms and illness behaviour on diagnosis, coping and treatment; differences in family structure and process affecting adaptation, acculturation and intergenerational conflict; and aspects of acceptance by the receiving society that affect employment, social status and integration. These issues can be addressed through specific inquiry, the use of trained interpreters and culture brokers, meetings with families, and consultation with community organizations. Interpretation: Systematic inquiry into patients’ migration trajectory and subsequent follow-up on culturally appropriate indicators of social, vocational and family functioning over time will allow clinicians to recognize problems in adaptation and undertake mental health promotion, disease prevention or treatment interventions in a timely way. PMID:20603342
NASA Astrophysics Data System (ADS)
Horesh, L.; Haber, E.
2009-09-01
The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
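As a rough illustration of the sparse-representation step only (with the dictionary held fixed, i.e. not the dictionary-design problem the paper addresses), a minimal ISTA solver for the ℓ1-regularized synthesis problem; the observation operator, dictionary and sparsity pattern are invented for the example.

```python
# Minimal ISTA sketch for the l1-regularized synthesis problem
#   min_x 0.5 * ||A D x - b||^2 + mu * ||x||_1
# with the dictionary D held fixed; the dictionary-design step discussed in the
# abstract would optimize D itself and is not attempted here. All sizes are invented.
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(M, b, mu, n_iter=500):
    """Iterative shrinkage-thresholding for min 0.5||Mx - b||^2 + mu||x||_1."""
    L = np.linalg.norm(M, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(M.shape[1])
    for _ in range(n_iter):
        x = soft(x - M.T @ (M @ x - b) / L, mu / L)
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60))          # non-injective observation operator
D = rng.standard_normal((60, 120))         # mock prototype dictionary
x_true = np.zeros(120)
x_true[[5, 40, 90]] = [2.0, -1.5, 1.0]     # sparse coefficients
b = A @ (D @ x_true) + 0.01 * rng.standard_normal(30)
x_hat = ista(A @ D, b, mu=0.05)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.1))
```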
Inverse analysis and regularisation in conditional source-term estimation modelling
NASA Astrophysics Data System (ADS)
Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.
2014-05-01
Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
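A small synthetic sketch of the comparison described above: a discretized Fredholm equation of the first kind inverted with zeroth-order (identity) versus first-order (first-difference) Tikhonov regularization. The kernel is a generic smoother, not the CSE model, and the regularization weight is chosen by hand.

```python
# Synthetic sketch: a discretized Fredholm equation of the first kind inverted
# with zeroth-order (identity) versus first-order (first-difference) Tikhonov
# regularization. The kernel is a generic smoother, not the CSE model.
import numpy as np

rng = np.random.default_rng(4)
n = 100
z = np.linspace(0.0, 1.0, n)                              # mixture-fraction-like grid
K = np.exp(-50.0 * np.subtract.outer(z, z) ** 2) / n      # smooth first-kind kernel
f_true = np.exp(-((z - 0.3) / 0.1) ** 2)
g = K @ f_true + 1e-4 * rng.standard_normal(n)

I = np.eye(n)
D1 = np.diff(I, 1, axis=0)                                # first-difference operator

def tikhonov(K, g, L, lam):
    return np.linalg.solve(K.T @ K + lam * (L.T @ L), K.T @ g)

for name, L in [("zeroth-order", I), ("first-order ", D1)]:
    f = tikhonov(K, g, L, lam=1e-5)
    err = np.linalg.norm(f - f_true) / np.linalg.norm(f_true)
    print(f"{name} Tikhonov: relative error {err:.3f}")
```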
Diagnosis of organic brain syndrome: an emergency department dilemma.
Dubin, W R; Weiss, K J
1984-01-01
Delirium and dementia frequently pose a diagnostic dilemma for clinicians in the emergency department. The overlap of symptoms between organic brain syndrome and functional psychiatric illness, coupled with a dramatic presentation, often leads to a premature psychiatric diagnosis. In this paper, the authors discuss those symptoms of organic brain syndrome that most frequently generate diagnostic confusion in the emergency department and result in a misdiagnosis of functional illness.
Problem-posing in education: transformation of the practice of the health professional.
Casagrande, L D; Caron-Ruffino, M; Rodrigues, R A; Vendrúsculo, D M; Takayanagui, A M; Zago, M M; Mendes, M D
1998-02-01
This study was developed by a group of professionals from different areas (nurses and educators) concerned with health education. It proposes the use of a problem-posing model for the transformation of professional practice. The concept and functions of the model and their relationships with the educative practice of health professionals are discussed. The model of problem-posing education is presented (compared to traditional, "banking" education), and four innovative experiences of teaching-learning are reported based on this model. These experiences, carried out in areas of environmental and occupational health and patient education have shown the applicability of the problem-posing model to the practice of the health professional, allowing transformation.
The inverse problem of estimating the gravitational time dilation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.
2016-11-15
Precise testing of the gravitational time dilation effect suggests comparing the clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via the electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations of a wide class of distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f)^γ, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.
NASA Astrophysics Data System (ADS)
Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen
2018-04-01
Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. The L1-norm data-fidelity term and the total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from staircase-like artifacts leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV in order to eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.
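A simplified 1-D sketch of the ADMM splitting idea behind such models, using a quadratic fidelity term and plain TV rather than the paper's L1-TGV combination; all parameters are illustrative.

```python
# Simplified 1-D sketch of the ADMM splitting idea behind such models, using a
# quadratic fidelity term and plain TV instead of the paper's L1-TGV combination:
#   min_x 0.5 * ||x - y||^2 + lam * ||D x||_1,  split as z = D x.
import numpy as np

def tv_denoise_admm(y, lam, rho=1.0, n_iter=200):
    n = y.size
    D = np.diff(np.eye(n), 1, axis=0)              # first-difference (TV) operator
    z = np.zeros(n - 1)                            # split variable z = D x
    u = np.zeros(n - 1)                            # scaled dual variable
    M = np.eye(n) + rho * (D.T @ D)                # x-update system matrix
    for _ in range(n_iter):
        x = np.linalg.solve(M, y + rho * D.T @ (z - u))
        v = D @ x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)   # soft threshold
        u = u + D @ x - z
    return x

rng = np.random.default_rng(5)
clean = np.concatenate([np.zeros(50), np.ones(50), 0.4 * np.ones(50)])
noisy = clean + 0.1 * rng.standard_normal(clean.size)
print("noisy MSE   :", np.mean((noisy - clean) ** 2))
print("denoised MSE:", np.mean((tv_denoise_admm(noisy, lam=0.5) - clean) ** 2))
```

The L1 data-fidelity and TGV terms used in the paper fit the same splitting pattern, at the cost of extra auxiliary variables and proximal steps.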
Källén-Lehmann spectroscopy for (un)physical degrees of freedom
NASA Astrophysics Data System (ADS)
Dudal, David; Oliveira, Orlando; Silva, Paulo J.
2014-01-01
We consider the problem of "measuring" the Källén-Lehmann spectral density of a particle (be it elementary or a bound state) propagator by means of 4D lattice data. As the latter are obtained from operations at Euclidean momentum squared p^2 ≥ 0, we are facing the generically ill-posed problem of converting a limited data set over the positive real axis into an integral representation extending over the whole complex p^2 plane. We employ a linear regularization strategy, commonly known as the Tikhonov method with the Morozov discrepancy principle, with suitable adaptations to realistic data, e.g. with an unknown threshold. An important virtue over the (standard) maximum entropy method is the possibility to also probe unphysical spectral densities, for example, of a confined gluon. We apply our proposal here to "physical" mock spectral data as a litmus test and then to the lattice SU(3) Landau gauge gluon at zero temperature.
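A hedged sketch of the Morozov discrepancy principle for choosing the Tikhonov parameter: λ is adjusted until the data residual matches the known noise norm. The kernel below mimics a Källén-Lehmann-type integral transform but is otherwise invented.

```python
# Hedged sketch of the Morozov discrepancy principle: the Tikhonov parameter is
# chosen so that the data residual matches the known noise norm. The kernel
# mimics a Kallen-Lehmann-type integral transform but is otherwise invented.
import numpy as np

rng = np.random.default_rng(6)
n = 80
p = np.linspace(0.05, 4.0, n)                      # Euclidean momenta (mock)
w = np.linspace(0.1, 5.0, n)                       # spectral variable (mock)
K = 1.0 / np.add.outer(p ** 2, w ** 2)             # smooth, severely ill-conditioned kernel
rho_true = np.exp(-((w - 2.0) / 0.4) ** 2)         # mock spectral density
sigma = 1e-4
G = K @ rho_true + sigma * rng.standard_normal(n)
delta = sigma * np.sqrt(n)                         # expected noise norm

def residual(lam):
    x = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ G)
    return np.linalg.norm(K @ x - G)

lams = np.logspace(-12, 0, 200)
resids = np.array([residual(lam) for lam in lams])
lam_morozov = lams[np.argmin(np.abs(resids - delta))]   # residual closest to delta
print("Morozov-selected lambda ~", lam_morozov)
```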
Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments
Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun
2017-01-01
In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139
Berlow, Noah; Pal, Ranadip
2011-01-01
Genetic Regulatory Networks (GRNs) are frequently modeled as Markov Chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inferring the Markov Chain from noisy and limited experimental data is an ill-posed problem and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov Chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN, as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with the best expected behavior. The extreme computational complexity involved in the search for robust stationary control policies is mitigated by using a sequential approach to control policy generation and by utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank-one perturbation.
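An illustrative sketch (a plain re-solve, not the paper's efficient update formulas) of how a rank-one perturbation of a transition matrix shifts the stationary distribution; the chain itself is random and hypothetical.

```python
# Illustrative sketch (full re-solve rather than the paper's efficient update
# formulas): the stationary distribution of a Markov chain and its shift after a
# rank-one perturbation of the transition matrix. The chain itself is random.
import numpy as np

def stationary(P):
    """Solve pi P = pi with sum(pi) = 1 as an (overdetermined) linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.concatenate([np.zeros(n), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

rng = np.random.default_rng(7)
P = rng.random((8, 8))
P /= P.sum(axis=1, keepdims=True)                 # random stochastic matrix (mock GRN)
pi0 = stationary(P)

# rank-one perturbation: an intervention moves half of state 0's self-loop mass
# to state 7; the perturbed row still sums to one and stays non-negative.
u = np.zeros(8); u[0] = 1.0
shift = 0.5 * P[0, 0]
v = np.zeros(8); v[7] = shift; v[0] = -shift
P_pert = P + np.outer(u, v)
print("steady-state mass shift of state 7:", stationary(P_pert)[7] - pi0[7])
```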
[Problem-posing as a nutritional education strategy with obese teenagers].
Rodrigues, Erika Marafon; Boog, Maria Cristina Faber
2006-05-01
Obesity is a public health issue with relevant social determinants in its etiology and where interventions with teenagers encounter complex biopsychological conditions. This study evaluated intervention in nutritional education through a problem-posing approach with 22 obese teenagers, treated collectively and individually for eight months. Speech acts were collected through the use of word cards, observer recording, and tape-recording. The study adopted a qualitative methodology, and the approach involved content analysis. Problem-posing facilitated changes in eating behavior, triggering reflections on nutritional practices, family circumstances, social stigma, interaction with health professionals, and religion. Teenagers under individual care posed problems more effectively in relation to eating, while those under collective care posed problems in relation to family and psychological issues, with effective qualitative eating changes in both groups. The intervention helped teenagers understand their life history and determinants of eating behaviors, spontaneously implementing eating changes and making them aware of possibilities for maintaining the new practices and autonomously exercising their role as protagonists in their own health care.
ERIC Educational Resources Information Center
Contreras, José N.
2013-01-01
This paper discusses a classroom experience in which a group of prospective secondary mathematics teachers were asked to create, cooperatively (in class) and individually, problems related to Viviani's problem using a problem-posing framework. When appropriate, students used Sketchpad to explore the problem to better understand its attributes…
ERIC Educational Resources Information Center
Ünlü, Melihan
2017-01-01
The aim of the study was to determine mathematics teacher candidates' knowledge about problem solving strategies through problem posing. This qualitative research was conducted with 95 mathematics teacher candidates studying at education faculty of a public university during the first term of the 2015-2016 academic year in Turkey. Problem Posing…
The Chronically Ill Child in the School.
ERIC Educational Resources Information Center
Sexson, Sandra; Madan-Swain, Avi
1995-01-01
Examines the effects of chronic illness on the school-age population. Facilitating successful functioning of chronically ill youths is a growing problem. Focuses on problems encountered by the chronically ill student who has either been diagnosed with a chronic illness or who has survived such an illness. Discusses the role of the school…
Sleep Problems in Children and Adolescents with Common Medical Conditions
Lewandowski, Amy S.; Ward, Teresa M.; Palermo, Tonya M.
2011-01-01
Synopsis Sleep is critically important to children’s health and well-being. Untreated sleep disturbances and sleep disorders pose significant adverse daytime consequences and place children at considerable risk for poor health outcomes. Sleep disturbances occur at a greater frequency in children with acute and chronic medical conditions compared to otherwise healthy peers. Sleep disturbances in medically ill children can be associated with sleep disorders (e.g., sleep disordered breathing, restless leg syndrome), co-morbid with acute and chronic conditions (e.g., asthma, arthritis, cancer), or secondary to underlying disease-related mechanisms (e.g. airway restriction, inflammation) treatment regimens, or hospitalization. Clinical management should include a multidisciplinary approach with particular emphasis on routine, regular sleep assessments and prevention of daytime consequences and promotion of healthy sleep habits and health outcomes. PMID:21600350
Applications of quantum entropy to statistics
NASA Astrophysics Data System (ADS)
Silver, R. N.; Martz, H. F.
This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.
DLTPulseGenerator: A library for the simulation of lifetime spectra based on detector-output pulses
NASA Astrophysics Data System (ADS)
Petschke, Danny; Staab, Torsten E. M.
2018-01-01
The quantitative analysis of lifetime spectra relevant in both the life and materials sciences presents an ill-posed inverse problem and, hence, places stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++ 11, which provides a simulation of lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output pulses. Those pulses require the constant fraction discrimination (CFD) principle for the determination of the exact timing signal and, thus, for the calculation of the time difference, i.e. the lifetime. To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin.
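A small Python sketch of the constant-fraction timing idea the library relies on: the CFD signal is an attenuated copy of the pulse minus a delayed copy, and its zero crossing gives the timing pick. The pulse shape, fraction and delay below are illustrative values, not the library's defaults.

```python
# Small sketch of constant-fraction timing on a simulated detector pulse: the
# CFD signal is an attenuated copy of the pulse minus a delayed copy, and its
# zero crossing gives the timing pick. Pulse shape, fraction and delay are
# illustrative values, not the library's defaults.
import numpy as np

def cfd_time(t, pulse, fraction=0.35, delay_s=2e-9):
    dt = t[1] - t[0]
    shift = int(round(delay_s / dt))
    delayed = np.concatenate([np.zeros(shift), pulse[:-shift]])
    cfd = fraction * pulse - delayed
    i0 = np.argmax(cfd)                        # CFD maximum precedes the crossing
    i = i0 + np.argmax(cfd[i0:] <= 0.0)        # first sample at or below zero
    # linear interpolation of the zero crossing between samples i-1 and i
    return t[i - 1] + dt * cfd[i - 1] / (cfd[i - 1] - cfd[i])

t = np.arange(0.0, 200e-9, 0.1e-9)
t0, rise, fall = 50e-9, 5e-9, 40e-9            # mock detector pulse parameters
pulse = np.where(t > t0, (1 - np.exp(-(t - t0) / rise)) * np.exp(-(t - t0) / fall), 0.0)
print("CFD timing pick (ns):", cfd_time(t, pulse) * 1e9)
```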
Adaptive Leadership Framework for Chronic Illness
Anderson, Ruth A.; Bailey, Donald E.; Wu, Bei; Corazzini, Kirsten; McConnell, Eleanor S.; Thygeson, N. Marcus; Docherty, Sharron L.
2015-01-01
We propose the Adaptive Leadership Framework for Chronic Illness as a novel framework for conceptualizing, studying, and providing care. This framework is an application of the Adaptive Leadership Framework developed by Heifetz and colleagues for business. Our framework views health care as a complex adaptive system and addresses the intersection at which people with chronic illness interface with the care system. We shift focus from symptoms to symptoms and the challenges they pose for patients/families. We describe how providers and patients/families might collaborate to create shared meaning of symptoms and challenges to coproduce appropriate approaches to care. PMID:25647829
The 2-D magnetotelluric inverse problem solved with optimization
NASA Astrophysics Data System (ADS)
van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven
2011-02-01
The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
NASA Astrophysics Data System (ADS)
Polydorides, Nick; Lionheart, William R. B.
2002-12-01
The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional, inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist the development we have developed and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.
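As a generic illustration of the regularized Gauss-Newton iteration such toolkits implement, a toy nonlinear forward model with a finite-difference Jacobian; the actual EIT forward problem uses a finite element complete-electrode model with an analytic sensitivity matrix, which is not reproduced here.

```python
# Generic sketch of a regularized Gauss-Newton iteration of the kind such
# toolkits implement. The toy forward model and finite-difference Jacobian below
# stand in for the finite element complete-electrode model and its analytic
# sensitivity matrix; sizes and the reference conductivity are invented.
import numpy as np

def forward(sigma, design):
    """Toy nonlinear 'boundary measurement' model: d = W (1/sigma)."""
    return design @ (1.0 / sigma)

def jacobian_fd(f, x, eps=1e-6):
    d0 = f(x)
    J = np.zeros((d0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += eps
        J[:, j] = (f(xp) - d0) / eps
    return J

rng = np.random.default_rng(8)
design = rng.random((40, 10))                        # mock measurement weights
sigma_true = 1.0 + 0.5 * rng.random(10)
data = forward(sigma_true, design) + 1e-3 * rng.standard_normal(40)

lam = 1e-2                                           # Tikhonov weight
sigma = np.ones(10)                                  # homogeneous starting model
for _ in range(15):
    J = jacobian_fd(lambda s: forward(s, design), sigma)
    r = data - forward(sigma, design)
    step = np.linalg.solve(J.T @ J + lam * np.eye(10),
                           J.T @ r - lam * (sigma - 1.0))   # prior centred on 1.0
    sigma = sigma + step
print("max relative error:", np.max(np.abs(sigma - sigma_true) / sigma_true))
```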
Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao
2018-06-13
Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and a linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled released-mass parameter does not depend on prior information, which improves the identification efficiency. A hypothetical case study with different numbers of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the number of gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA. Copyright © 2018 Elsevier Ltd. All rights reserved.
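An illustrative sketch in the spirit of the approach (not the LR-BPM itself): source location, release time and mass are recovered by fitting a 1-D instantaneous-release advection-dispersion solution to noisy downstream concentrations with a Differential Evolution Algorithm. The velocity, dispersion coefficient and gauging layout are assumed values.

```python
# Illustrative sketch (not the LR-BPM itself): recover source location, release
# time and mass by fitting a 1-D instantaneous-release advection-dispersion
# solution to noisy downstream concentrations with differential evolution.
# Velocity, dispersion coefficient and the gauging layout are assumed values.
import numpy as np
from scipy.optimize import differential_evolution

U, D = 0.5, 10.0                                   # flow velocity (m/s), dispersion (m^2/s)

def conc(x, t, x0, t0, mass):
    tau = t - t0
    c = np.zeros(x.shape)
    ok = tau > 0
    c[ok] = (mass / np.sqrt(4 * np.pi * D * tau[ok])
             * np.exp(-(x[ok] - x0 - U * tau[ok]) ** 2 / (4 * D * tau[ok])))
    return c

rng = np.random.default_rng(9)
x_obs = np.repeat([2000.0, 3500.0], 50)            # two gauging sections (m)
t_obs = np.tile(np.linspace(600.0, 9000.0, 50), 2) # sampling times (s)
true = (500.0, 100.0, 80.0)                        # x0 (m), t0 (s), mass (per unit area)
data = conc(x_obs, t_obs, *true) * (1 + 0.05 * rng.standard_normal(x_obs.size))

def misfit(p):
    return np.sum((conc(x_obs, t_obs, *p) - data) ** 2)

res = differential_evolution(misfit, bounds=[(0, 1500), (0, 600), (1, 500)], seed=0)
print("recovered (x0, t0, mass):", np.round(res.x, 1))
```

Note that x0 and U*t0 are nearly interchangeable in this simple model, which is one face of the equifinality problem the abstract mentions.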
Folk concepts of mental disorders among Chinese-Australian patients and their caregivers.
Hsiao, Fei-Hsiu; Klimidis, Steven; Minas, Harry I; Tan, Eng S
2006-07-01
This paper reports a study of (a) popular conceptions of mental illness throughout history, (b) how current social and cultural knowledge about mental illness influences Chinese-Australian patients' and caregivers' understanding of mental illness and the consequences of this for explaining and labelling patients' problems. According to traditional Chinese cultural knowledge about health and illness, Chinese people believe that psychotic illness is the only type of mental illness, and that non-psychotic illness is a physical illness. Regarding patients' problems as not being due to mental illness may result in delaying use of Western mental health services. Data collection took place in 2001. Twenty-eight Chinese-Australian patients with mental illness and their caregivers were interviewed at home, drawing on Kleinman's explanatory model and studies of cultural transmission. Interviews were tape-recorded and transcribed, and analysed for plots and themes. Chinese-Australians combined traditional knowledge with Western medical knowledge to develop their own labels for various kinds of mental disorders, including 'mental illness', 'physical illness', 'normal problems of living' and 'psychological problems'. As they learnt more about Western conceptions of psychology and psychiatry, their understanding of some disorders changed. What was previously ascribed to non-mental disorders was often re-labelled as 'mental illness' or 'psychological problems'. Educational programmes aimed at introducing Chinese immigrants to counselling and other psychiatric services could be made more effective if designers gave greater consideration to Chinese understanding of mental illness.
A Problem-Solving Conceptual Framework and Its Implications in Designing Problem-Posing Tasks
ERIC Educational Resources Information Center
Singer, Florence Mihaela; Voica, Cristian
2013-01-01
The links between the mathematical and cognitive models that interact during problem solving are explored with the purpose of developing a reference framework for designing problem-posing tasks. When the process of solving is a successful one, a solver successively changes his/her cognitive stances related to the problem via transformations that…
Opportunities to Pose Problems Using Digital Technology in Problem Solving Environments
ERIC Educational Resources Information Center
Aguilar-Magallón, Daniel Aurelio; Fernández, Willliam Enrique Poveda
2017-01-01
This article reports and analyzes different types of problems that nine students in a Master's Program in Mathematics Education posed during a course on problem solving. What opportunities (affordances) can a dynamic geometry system (GeoGebra) offer to allow in-service and in-training teachers to formulate and solve problems, and what type of…
Do everyday problems of people with chronic illness interfere with their disease management?
van Houtum, Lieke; Rijken, Mieke; Groenewegen, Peter
2015-10-01
Being chronically ill is a continuous process of balancing the demands of the illness and the demands of everyday life. Understanding how everyday life affects self-management might help to provide better professional support. However, little attention has been paid to the influence of everyday life on self-management. The purpose of this study is to examine to what extent problems in everyday life interfere with the self-management behaviour of people with chronic illness, i.e. their ability to manage their illness. To estimate the effects of having everyday problems on self-management, cross-sectional linear regression analyses with propensity score matching were conducted. Data was used from 1731 patients with chronic disease(s) who participated in a nationwide Dutch panel-study. One third of people with chronic illness encounter basic (e.g. financial, housing, employment) or social (e.g. partner, children, sexual or leisure) problems in their daily life. Younger people, people with poor health and people with physical limitations are more likely to have everyday problems. Experiencing basic problems is related to less active coping behaviour, while experiencing social problems is related to lower levels of symptom management and less active coping behaviour. The extent of everyday problems interfering with self-management of people with chronic illness depends on the type of everyday problems encountered, as well as on the type of self-management activities at stake. Healthcare providers should pay attention to the life context of people with chronic illness during consultations, as patients' ability to manage their illness is related to it.
A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow
NASA Technical Reports Server (NTRS)
Baker, Gregory; Siegel, Michael; Tanveer, Saleh
1995-01-01
We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. This situation is disastrous for numerical computation, as small round-off errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out.
Algorithms and Array Design Criteria for Robust Imaging in Interferometry
NASA Astrophysics Data System (ADS)
Kurien, Binoy George
Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse-sampling of the object's Fourier Transform and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory, to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily-predictable reconstruction performance in optical interferometry.
The Structure of Ill-Structured (and Well-Structured) Problems Revisited
ERIC Educational Resources Information Center
Reed, Stephen K.
2016-01-01
In his 1973 article "The Structure of ill structured problems", Herbert Simon proposed that solving ill-structured problems could be modeled within the same information-processing framework developed for solving well-structured problems. This claim is reexamined within the context of over 40 years of subsequent research and theoretical…
Marshall, R C; McGurk, S R; Karow, C M; Kairy, T J; Flashman, L A
2006-06-01
Severe mental illness is associated with impairments in executive functions, such as conceptual reasoning, planning, and strategic thinking, all of which impact problem solving. The present study examined the utility of a novel assessment tool for problem solving, the Rapid Assessment of Problem Solving Test (RAPS), in persons with severe mental illness. Subjects were 47 outpatients with severe mental illness and an equal number of healthy controls matched for age and gender. Results confirmed all hypotheses with respect to how subjects with severe mental illness would perform on the RAPS. Specifically, the severely mentally ill subjects (1) solved fewer problems on the RAPS, (2) when they did solve problems on the test, they did so far less efficiently than their healthy counterparts, and (3) the two groups differed markedly in the types of questions asked on the RAPS. The healthy control subjects tended to take a systematic, organized, but not always optimal approach to solving problems on the RAPS. The subjects with severe mental illness used some of the problem solving strategies of the healthy controls, but their performance was less consistent and tended to deteriorate when the complexity of the problem solving task increased. This was reflected by a high degree of guessing in lieu of asking constraint questions, particularly if a category-limited question was insufficient to continue the problem solving effort.
Regolith thermal property inversion in the LUNAR-A heat-flow experiment
NASA Astrophysics Data System (ADS)
Hagermann, A.; Tanaka, S.; Yoshida, S.; Fujimura, A.; Mizutani, H.
2001-11-01
In 2003, two penetrators of the LUNAR-A mission of ISAS will investigate the internal structure of the Moon by conducting seismic and heat-flow experiments. Heat flow is the product of the thermal gradient ∂T/∂z and the thermal conductivity λ of the lunar regolith. For measuring the thermal conductivity (or diffusivity), each penetrator will carry five thermal property sensors consisting of small disc heaters. The thermal response Ts(t) of the heater itself to the constant known power supply of approx. 50 mW serves as the data for the subsequent interpretation. Horai et al. (1991) found a forward analytical solution to the problem of determining the thermal inertia λρc of the regolith for constant thermal properties and a simplified geometry. In the inversion, the problem of deriving the unknown thermal properties of a medium from known heat sources and temperatures is an Identification Heat Conduction Problem (IDHCP), an ill-posed inverse problem. Assuming that the thermal conductivity λ and the heat capacity ρc are linear functions of temperature (which is reasonable in most cases), one can apply a Kirchhoff transformation to linearize the heat conduction equation, which minimizes computing time. Then the error functional, i.e. the difference between the measured temperature response of the heater and the predicted temperature response, can be minimized, thus solving for the thermal diffusivity κ = λ/(ρc), which completes the set of parameters needed for a detailed description of the thermal properties of the lunar regolith. Results of model calculations will be presented, in which synthetic data and calibration data are used to invert the unknown thermal diffusivity of the medium by means of a modified Newton method. Due to the ill-posedness of the problem, the number of parameters to be solved for should be limited. As the model calculations reveal, a homogeneous regolith allows for a fast and accurate inversion.
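A short sketch of the Kirchhoff transformation step mentioned above, assuming a conductivity linear in temperature with invented coefficients: the transform and its inverse are closed-form, which is what makes the linearization cheap.

```python
# Short sketch of the Kirchhoff transformation for a conductivity linear in
# temperature; coefficients are invented, not LUNAR-A calibration values.
import numpy as np

lambda0, beta = 0.01, 2e-3            # W m^-1 K^-1 and linear coefficient (mock values)

def conductivity(T):
    return lambda0 * (1.0 + beta * T)

def kirchhoff(T):
    """theta(T) = (1/lambda0) * integral_0^T lambda(T') dT' = T + beta*T^2/2."""
    return T + 0.5 * beta * T ** 2

def kirchhoff_inverse(theta):
    return (np.sqrt(1.0 + 2.0 * beta * theta) - 1.0) / beta

T = np.linspace(40.0, 400.0, 5)       # illustrative regolith temperatures (K)
print("round trip exact:", np.allclose(kirchhoff_inverse(kirchhoff(T)), T))
```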
The challenge of gun control for mental health advocates.
Pandya, Anand
2013-09-01
Mass shootings, such as the 2012 Newtown massacre, have repeatedly led to political discourse about limiting access to guns for individuals with serious mental illness. Although the political climate after such tragic events poses a considerable challenge to mental health advocates who wish to minimize unsympathetic portrayals of those with mental illness, such media attention may be a rare opportunity to focus attention on risks of victimization of those with serious mental illness and barriers to obtaining psychiatric care. Current federal gun control laws may discourage individuals from seeking psychiatric treatment and describe individuals with mental illness using anachronistic, imprecise, and gratuitously stigmatizing language. This article lays out potential talking points that may be useful after future gun violence.
Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss Em
2018-02-01
Phase of Illness describes stages of advanced illness according to care needs of the individual, family and suitability of care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider strength of associations between these measures and Phase of Illness. Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function measured using Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs measured using items on Palliative Care Problem Severity Scale. Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in stable phase (65.9, 95% confidence interval = 63.4-68.3) and lowest in dying phase (16.6, 95% confidence interval = 15.3-17.8). Mean pain was highest in unstable phase (1.43, 95% confidence interval = 1.36-1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness ( χ 2 = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in deteriorating phase than unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01-1.49). Forty-nine percent of the variance in Phase of Illness is explained by Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation.
2016-04-27
Essential facts Scarlet fever is characterised by a rash that usually accompanies a sore throat and flushed cheeks. It is mainly a childhood illness. While this contagious disease rarely poses a danger to life today, outbreaks in the past led to many deaths.
28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.
Code of Federal Regulations, 2014 CFR
2014-07-01
... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...
28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.
Code of Federal Regulations, 2012 CFR
2012-07-01
... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...
28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.
Code of Federal Regulations, 2013 CFR
2013-07-01
... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...
Sheldon, S; Vandermorris, S; Al-Haj, M; Cohen, S; Winocur, G; Moscovitch, M
2015-02-01
It is well accepted that the medial temporal lobes (MTL), and the hippocampus specifically, support episodic memory processes. Emerging evidence suggests that these processes also support the ability to effectively solve ill-defined problems which are those that do not have a set routine or solution. To test the relation between episodic memory and problem solving, we examined the ability of individuals with single domain amnestic mild cognitive impairment (aMCI), a condition characterized by episodic memory impairment, to solve ill-defined social problems. Participants with aMCI and age and education matched controls were given a battery of tests that included standardized neuropsychological measures, the Autobiographical Interview (Levine et al., 2002) that scored for episodic content in descriptions of past personal events, and a measure of ill-defined social problem solving. Corroborating previous findings, the aMCI group generated less episodically rich narratives when describing past events. Individuals with aMCI also generated less effective solutions when solving ill-defined problems compared to the control participants. Correlation analyses demonstrated that the ability to recall episodic elements from autobiographical memories was positively related to the ability to effectively solve ill-defined problems. The ability to solve these ill-defined problems was related to measures of activities of daily living. In conjunction with previous reports, the results of the present study point to a new functional role of episodic memory in ill-defined goal-directed behavior and other non-memory tasks that require flexible thinking. Our findings also have implications for the cognitive and behavioural profile of aMCI by suggesting that the ability to effectively solve ill-defined problems is related to sustained functional independence. Copyright © 2015 Elsevier Ltd. All rights reserved.
Nardodkar, Renuka; Pathare, Soumitra; Ventriglio, Antonio; Castaldelli-Maia, João; Javate, Kenneth R; Torales, Julio; Bhugra, Dinesh
2016-08-01
The right to work and employment is indispensable for the social integration of persons with mental health problems. This study examined whether existing laws pose structural barriers to the realization of the right to work and employment of persons with mental health problems across the world. It reviewed disability-specific legislation, human rights legislation, and labour laws of all UN Member States in the context of Article 27 of the UN Convention on the Rights of Persons with Disabilities (CRPD). It was found that laws in 62% of countries explicitly mention mental disability/impairment/illness in the definition of disability. In 64% of countries, laws prohibit discrimination against persons with mental health problems during recruitment; in one-third of countries laws prohibit discontinuation of employment. More than half (56%) of the countries have laws in place which offer access to reasonable accommodation in the workplace. In 59% of countries laws promote employment of persons with mental health problems through different affirmative actions. Nearly 50 years after the adoption of the International Covenant on Economic, Social, and Cultural Rights and 10 years after the adoption of the CRPD by the UN General Assembly, legal discrimination against persons with mental health problems continues to exist globally. Countries and policy-makers need to implement legislative measures to ensure non-discrimination of persons with mental health problems during employment.
NASA Astrophysics Data System (ADS)
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model, that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX) where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
NASA Astrophysics Data System (ADS)
Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar
2017-11-01
Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, an estimation of the point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, called renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieved source estimate after minimizing the representativity errors.
NASA Astrophysics Data System (ADS)
Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu
2016-12-01
Ionospheric tomography is based on observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distribution. Due to the incomplete measurements provided by the satellite-receiver geometry, it is a typical ill-posed problem, and how to overcome this ill-posedness remains a crucial research topic. In this paper, the Tikhonov regularization method is used and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance and a location-dependent spatial correlation, and the correlation model is developed by using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. The Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used to make independent validations. Both the test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.
NASA Astrophysics Data System (ADS)
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake- and tsunami-related problems, in particular inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are presented. Long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and can solve both the direct and the inverse problem. It becomes possible to apply advanced mathematical results to improve models and to increase the resolution of inverse problems. With TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimal computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use an optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to implement the «no frost» technology, realizing a steady stream of direct and inverse problem solutions: solving the direct problem, visualization and comparison with observed data, and solving the inverse problem (correction of the model parameters). The main objective of further work is the creation of an operational emergency workstation tool that could be used by an emergency duty officer in real time.
Pre-Service Elementary Teachers' Motivation and Ill-Structured Problem Solving in Korea
ERIC Educational Resources Information Center
Kim, Min Kyeong; Cho, Mi Kyung
2016-01-01
This article examines the use and application of an ill-structured problem with pre-service elementary teachers in Korea in order to find implications for pre-service teacher education with regard to contextualized problem solving, by analyzing their experiences of ill-structured problem solving. Participants were divided into small groups depending on the…
ERIC Educational Resources Information Center
Kapur, Manu
2018-01-01
The goal of this paper is to isolate the preparatory effects of problem-generation from solution generation in problem-posing contexts, and their underlying mechanisms on learning from instruction. Using a randomized-controlled design, students were assigned to one of two conditions: (a) problem-posing with solution generation, where they…
ERIC Educational Resources Information Center
Xie, Jinxia; Masingila, Joanna O.
2017-01-01
Existing studies have quantitatively evidenced the relatedness between problem posing and problem solving, as well as the magnitude of this relationship. However, the nature and features of this relationship need further qualitative exploration. This paper focuses on exploring the interactions, i.e., mutual effects and supports, between problem…
Anderson, Ruth A; Bailey, Donald E; Wu, Bei; Corazzini, Kirsten; McConnell, Eleanor S; Thygeson, N Marcus; Docherty, Sharron L
2015-01-01
We propose the Adaptive Leadership Framework for Chronic Illness as a novel framework for conceptualizing, studying, and providing care. This framework is an application of the Adaptive Leadership Framework developed by Heifetz and colleagues for business. Our framework views health care as a complex adaptive system and addresses the intersection at which people with chronic illness interface with the care system. We shift the focus from symptoms alone to symptoms and the challenges they pose for patients and families. We describe how providers and patients/families might collaborate to create shared meaning of symptoms and challenges to coproduce appropriate approaches to care.
The influence of initial conditions on dispersion and reactions
NASA Astrophysics Data System (ADS)
Wood, B. D.
2016-12-01
In various generalizations of the reaction-dispersion problem, researchers have developed frameworks in which the apparent dispersion coefficient can be negative. Such dispersion coefficients raise several difficult questions. Most importantly, the presence of a negative dispersion coefficient at the macroscale leads to a macroscale representation that exhibits an apparent decrease in entropy with increasing time; this appears to violate basic thermodynamic principles. In addition, the proposition of a negative dispersion coefficient leads to an inherently ill-posed mathematical transport equation. The ill-posedness arises because there is no unique initial condition that corresponds to a later-time concentration distribution (assuming that discontinuous initial conditions are allowed). In this presentation, we explain how negative dispersion coefficients actually arise because the governing differential equation for early times should, when derived correctly, incorporate a term that depends upon the initial and boundary conditions. The process of reaction introduces a similar phenomenon, where the structure of the initial and boundary conditions influences the form of the macroscopic balance equations. When upscaling is done properly, new equations are developed that include source terms not present in the classical (late-time) reaction-dispersion equation. These source terms depend upon the structure of the initial condition of the reacting species, and they decrease exponentially in time (thus, the equations converge to the conventional ones at asymptotic times). With this formulation, the resulting dispersion tensor is always positive semi-definite, and the reaction terms directly incorporate information about the state of mixedness of the system. This formulation avoids many of the problems that would be engendered by defining negative-definite dispersion tensors, and properly represents the effective rate of reaction at early times.
Bayesian tomography by interacting Markov chains
NASA Astrophysics Data System (ADS)
Romary, T.
2017-12-01
In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill fitted to parallel implementation. Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they only exchange information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class enables the design of interacting schemes that can take advantage of the whole history of the chains, by allowing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first-arrival travel time tomography.
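The following is a toy Python sketch of parallel tempering in the spirit described above, not the interacting-chain generalization proposed in the talk; the bimodal log-posterior stands in for an actual tomography posterior, and all names are illustrative.

```python
import numpy as np

def parallel_tempering(log_post, x0, temps, n_iter=5000, step=0.5, seed=0):
    """Toy parallel tempering: one random-walk Metropolis chain per
    temperature, with occasional state swaps between neighbouring
    temperatures; samples from the coldest chain are returned."""
    rng = np.random.default_rng(seed)
    K = len(temps)
    x = [np.array(x0, dtype=float) for _ in range(K)]
    lp = [log_post(xi) for xi in x]
    samples = []
    for _ in range(n_iter):
        for k in range(K):                       # within-chain Metropolis update
            prop = x[k] + step * rng.normal(size=x[k].shape)
            lp_prop = log_post(prop)
            if np.log(rng.random()) < (lp_prop - lp[k]) / temps[k]:
                x[k], lp[k] = prop, lp_prop
        k = int(rng.integers(K - 1))             # propose a swap between chains k and k+1
        dbeta = 1.0 / temps[k] - 1.0 / temps[k + 1]
        if np.log(rng.random()) < dbeta * (lp[k + 1] - lp[k]):
            x[k], x[k + 1] = x[k + 1], x[k]
            lp[k], lp[k + 1] = lp[k + 1], lp[k]
        samples.append(x[0].copy())              # keep the cold chain only
    return np.array(samples)

# Toy bimodal posterior standing in for a tomography posterior
def log_post(v):
    return np.logaddexp(-0.5 * np.sum((v - 2) ** 2), -0.5 * np.sum((v + 2) ** 2))

draws = parallel_tempering(log_post, x0=np.zeros(2), temps=[1.0, 2.0, 4.0, 8.0], n_iter=2000)
print(draws.mean(axis=0))
```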
Psychiatric diagnostic dilemmas in the medical setting.
Strain, James J
2005-09-01
To review the problems posed for doctors by the failure of existing taxonomies to provide a satisfactory method for deriving diagnoses in cases of physical/psychiatric comorbidity, and of relating diagnoses on multiple axes. Review of existing taxonomies and key criticisms. The author was guided in selection by his experience as a member of the working parties involved in the creation of the American Psychiatric Association's DSM-IV. The attempts of the two major taxonomies, the ICD-10 and the American Psychiatric Association's DSM-IV, to address the problem by use of glossaries and multiple axes are described, and found wanting. Novel approaches, including McHugh and Slavney's perspectives of disease, dimensions, behaviour and life story, are described and evaluated. The problem of developing valid and reliable measures of physical/psychiatric comorbidity is addressed, including a discussion of genetic factors, neurobiological variables, target markers and other pathophysiological indicators. Finally, the concept of depression as a systemic illness involving brain, mind and body is raised and its implications discussed. Taxonomies require major revision in order to provide a useful basis for communication and research about one of the most frequent presentations in the community: physical/psychiatric comorbidity.
NASA Astrophysics Data System (ADS)
Alifanov, O. M.; Budnik, S. A.; Nenarokomov, A. V.; Netelev, A. V.; Titov, D. M.
2013-04-01
In many practical situations it is impossible to directly measure the thermal and thermokinetic properties of the composite materials under analysis. Often the only way to overcome this difficulty is indirect measurement, which is usually formulated as the solution of an inverse heat transfer problem. Such problems are ill-posed in the mathematical sense, and their main feature is instability of the solution. That is why special regularizing methods are needed to solve them. The general method of iterative regularization is considered in application to the estimation of material properties. The objective of this paper is to estimate the thermal and thermokinetic properties of advanced materials using an approach based on inverse methods. An experimental-computational system is presented for investigating the thermal and kinetic properties of composite materials by methods of inverse heat transfer problems; it was developed at the Thermal Laboratory of the Department of Space Systems Engineering, Moscow Aviation Institute (MAI). The system is aimed at investigating materials under unsteady contact and/or radiation heating over a wide range of temperature changes and heating rates, in a vacuum, air, or an inert gas medium.
Sensor fusion for structural tilt estimation using an acceleration-based tilt sensor and a gyroscope
NASA Astrophysics Data System (ADS)
Liu, Cheng; Park, Jong-Woong; Spencer, B. F., Jr.; Moon, Do-Soo; Fan, Jiansheng
2017-10-01
A tilt sensor can provide useful information regarding the health of structural systems. Most existing tilt sensors are gravity/acceleration based and can provide accurate measurements of static responses. However, for dynamic tilt, acceleration can dramatically affect the measured responses due to crosstalk. Thus, dynamic tilt measurement is still a challenging problem. One option is to integrate the output of a gyroscope sensor, which measures the angular velocity, to obtain the tilt; however, problems arise because the low-frequency sensitivity of the gyroscope is poor. This paper proposes a new approach to dynamic tilt measurements, fusing together information from a MEMS-based gyroscope and an acceleration-based tilt sensor. The gyroscope provides good estimates of the tilt at higher frequencies, whereas the acceleration measurements are used to estimate the tilt at lower frequencies. The Tikhonov regularization approach is employed to fuse these measurements together and overcome the ill-posed nature of the problem. The solution is carried out in the frequency domain and then implemented in the time domain using FIR filters to ensure stability. The proposed method is validated numerically and experimentally to show that it performs well in estimating both the pseudo-static and dynamic tilt measurements.
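A minimal Python sketch of the frequency-domain fusion idea, assuming a per-frequency Tikhonov functional |Theta - Theta_acc|^2 + lam * |i*w*Theta - Omega_gyro|^2 so that the accelerometer tilt dominates at low frequencies and the gyroscope at high frequencies; the paper's FIR-filter implementation and stability treatment are not reproduced here, and the signal names are illustrative.

```python
import numpy as np

def fuse_tilt(theta_acc, omega_gyro, dt, lam=1.0):
    """Frequency-domain fusion of an acceleration-based tilt signal
    (reliable at low frequencies) with a gyroscope rate signal
    (reliable at high frequencies), via the per-frequency minimizer of
        |Theta - Theta_acc|^2 + lam * |i*w*Theta - Omega_gyro|^2,
    which is Theta = (Theta_acc - i*lam*w*Omega_gyro) / (1 + lam*w^2)."""
    n = len(theta_acc)
    w = 2 * np.pi * np.fft.rfftfreq(n, d=dt)   # angular frequencies
    A = np.fft.rfft(theta_acc)                 # accelerometer-derived tilt spectrum
    G = np.fft.rfft(omega_gyro)                # gyroscope angular-rate spectrum
    Theta = (A - 1j * lam * w * G) / (1 + lam * w ** 2)
    return np.fft.irfft(Theta, n=n)
```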
Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction
NASA Astrophysics Data System (ADS)
Liang, Guanghui; Ren, Shangjie; Dong, Feng
2017-07-01
The free-interface detection problem commonly arises in industrial and biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, owing to its ill-posed and nonlinear character, the spatial resolution of EIT is low. To address this issue, an ultrasound-guided EIT method is proposed to directly reconstruct the geometric configuration of the target free interface. In the method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted on the opposite side of the object domain, and this position measurement is used as prior information to guide the EIT-based free-interface reconstruction. A constrained least-squares framework is used to fuse the information from the different measurement modalities, and a Lagrange multiplier-based Levenberg-Marquardt method provides the iterative solution of the constrained optimization problem. Numerical results show that the proposed ultrasound-guided EIT method for free-interface reconstruction is more accurate than the single-modality method, especially when the number of valid electrodes is limited.
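A generic Python sketch of one Lagrange-multiplier/Levenberg-Marquardt step for an equality-constrained least-squares problem of the kind described above; the EIT forward model, its Jacobian, and the ultrasound constraint are left as user-supplied placeholder functions, and the exact update used in the paper may differ.

```python
import numpy as np

def constrained_lm_step(residual, jac_r, constraint, jac_c, x, mu=1e-2):
    """One Lagrange-multiplier / Levenberg-Marquardt step for
        min ||residual(x)||^2  subject to  constraint(x) = 0,
    where the equality constraint could encode an ultrasound-measured
    interface point. `residual`, `jac_r`, `constraint`, `jac_c` are
    user-supplied placeholder functions."""
    r, J = residual(x), jac_r(x)
    c, C = constraint(x), jac_c(x)
    n, m = J.shape[1], C.shape[0]
    # KKT system: [J^T J + mu*I  C^T] [dx    ]   [-J^T r]
    #             [C             0  ] [lambda] = [-c    ]
    K = np.block([[J.T @ J + mu * np.eye(n), C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([-J.T @ r, -c])
    dx = np.linalg.solve(K, rhs)[:n]
    return x + dx
```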
The application of mean field theory to image motion estimation.
Zhang, J; Hanauer, G G
1995-01-01
Previously, Markov random field (MRF) model-based techniques have been proposed for image motion estimation. Since motion estimation is usually an ill-posed problem, various constraints are needed to obtain a unique and stable solution. The main advantage of the MRF approach is its capacity to incorporate such constraints, for instance, motion continuity within an object and motion discontinuity at the boundaries between objects. In the MRF approach, motion estimation is often formulated as an optimization problem, and two frequently used optimization methods are simulated annealing (SA) and iterated conditional modes (ICM). Although SA is theoretically optimal in the sense of finding the global optimum, it usually takes many iterations to converge. ICM, on the other hand, converges quickly, but its results are often unsatisfactory due to its "hard decision" nature. Previously, the authors applied mean field theory to image segmentation and image restoration problems, where it provided results nearly as good as SA but with much faster convergence. The present paper shows how mean field theory can be applied to MRF model-based motion estimation. The approach is demonstrated on both synthetic and real-world images, where it produced good motion estimates.
Improving attitudes toward mathematics learning with problem posing in class VIII
NASA Astrophysics Data System (ADS)
Vionita, Alfha; Purboningsih, Dyah
2017-08-01
This research is collaborative classroom action research aimed at improving students' attitudes toward mathematics and mathematics learning in class VIII by using the problem posing approach. The subjects were the 32 students of grade VIIIA. The research was conducted in two periods: the first period consisted of 3 meetings and the second period of 4 meetings. The instruments were an observation guide for the implementation of learning with the problem posing approach, cycle tests to measure cognitive competence, and a questionnaire to measure students' attitudes in the mathematics learning process. The results show that students' attitudes improved after using the problem posing approach: the attitude criterion increased from average in the first period to high in the second period. Furthermore, the test result percentage also improved, from 68.75% in the first period to 78.13% in the second period. The implementation of learning with the problem posing approach, as observed, also improved: the average percentage of teacher achievement was 89.2% and student achievement 85.8% in the first period, increasing to 94.4% and 91.11%, respectively, in the second period. As a result, students' attitudes toward the mathematics learning process in class VIII improved with the problem posing approach.
Human health effects and remotely sensed cyanobacteria
Cyanobacteria blooms (HAB) pose a potential health risk to beachgoers, including HAB-associated gastrointestinal, respiratory, and dermal illnesses. We conducted a prospective study of beachgoers at a Great Lakes beach during July–September 2003. We recorded each participan...
Non-ambiguous recovery of Biot poroelastic parameters of cellular panels using ultrasonicwaves
NASA Astrophysics Data System (ADS)
Ogam, Erick; Fellah, Z. E. A.; Sebaa, Naima; Groby, J.-P.
2011-03-01
The inverse problem of recovering the poroelastic parameters of open-cell soft plastic foam panels is solved by employing transmitted ultrasonic waves (USW) and the Biot-Johnson-Koplik-Champoux-Allard (BJKCA) model. By constructing the objective functional, given by the total squared difference between predictions of the BJKCA interaction model and experimental data obtained with transmitted USW, it is shown that the inverse problem is ill-posed, since the functional exhibits several local minima and maxima. To solve this problem, which is beyond the capability of most off-the-shelf iterative nonlinear least-squares optimization algorithms (such as the Levenberg-Marquardt or Nelder-Mead simplex methods), simple strategies are developed. The recovered acoustic parameters are compared with those obtained using simpler interaction models and a method employing the asymptotic phase velocity of the transmitted USW. The retrieved elastic moduli are validated by solving an inverse vibration spectroscopy problem with data obtained from beam-like specimens cut from the panels, using an equivalent solid elastodynamic model as the estimator. The phase velocities are reconstructed using computed and measured resonance frequencies and a time-frequency decomposition of transient waves induced in the beam specimen. These confirm that the elastic parameters recovered using vibration are valid over the frequency range of study.
A well-posed optimal spectral element approximation for the Stokes problem
NASA Technical Reports Server (NTRS)
Maday, Y.; Patera, A. T.; Ronquist, E. M.
1987-01-01
A method is proposed for the spectral element simulation of incompressible flow. This method constitutes a well-posed optimal approximation of the steady Stokes problem with no spurious modes in the pressure. The resulting method is analyzed, and numerical results are presented for a model problem.
Pose and Solve Varignon Converse Problems
ERIC Educational Resources Information Center
Contreras, José N.
2014-01-01
The activity of posing and solving problems can enrich learners' mathematical experiences because it fosters a spirit of inquisitiveness, cultivates their mathematical curiosity, and deepens their views of what it means to do mathematics. To achieve these goals, a mathematical problem needs to be at the appropriate level of difficulty,…
Applications: Students, the Mathematics Curriculum and Mathematics Textbooks
ERIC Educational Resources Information Center
Kilic, Cigdem
2013-01-01
Problem posing is one of the most important topics in mathematics education. Through problem posing, students gain mathematical abilities and concepts, and teachers can evaluate their students and arrange adequate learning environments. The aim of the present study is to investigate Turkish primary school teachers' opinions about problem posing…
Investigating the Impact of Field Trips on Teachers' Mathematical Problem Posing
ERIC Educational Resources Information Center
Courtney, Scott A.; Caniglia, Joanne; Singh, Rashmi
2014-01-01
This study examines the impact of field trip experiences on teachers' mathematical problem posing. Teachers from a large urban public school system in the Midwest participated in a professional development program that incorporated experiential learning with mathematical problem formulation experiences. During 2 weeks of summer 2011, 68 teachers…
Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule
NASA Astrophysics Data System (ADS)
Jin, Qinian; Wang, Wei
2018-03-01
The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
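A minimal Python sketch of the iteratively regularized Gauss-Newton iteration with a geometrically decaying regularization sequence; for simplicity it runs a fixed number of steps rather than implementing the paper's heuristic stopping rule, and the forward map F and its Jacobian are placeholders.

```python
import numpy as np

def irgn(F, jac, y, x0, alpha0=1.0, q=0.5, n_iter=10):
    """Iteratively regularized Gauss-Newton iteration
        x_{k+1} = x_k + (J^T J + a_k I)^{-1} (J^T (y - F(x_k)) + a_k (x0 - x_k)),
    with regularization parameters a_k = alpha0 * q^k. Deciding when to
    stop this loop is exactly the issue the paper's heuristic rule
    addresses; here the iteration count is simply fixed."""
    x = np.array(x0, dtype=float)
    x_prior = np.array(x0, dtype=float)
    alpha = alpha0
    for _ in range(n_iter):
        J = jac(x)                      # Jacobian of the forward map at x
        r = y - F(x)                    # data residual
        n = len(x)
        x = x + np.linalg.solve(J.T @ J + alpha * np.eye(n),
                                J.T @ r + alpha * (x_prior - x))
        alpha *= q                      # decrease the regularization
    return x
```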
Facial Affect Recognition Using Regularized Discriminant Analysis-Based Algorithms
NASA Astrophysics Data System (ADS)
Lee, Chien-Cheng; Huang, Shin-Sheng; Shih, Cheng-Yuan
2010-12-01
This paper presents a novel and effective method for facial expression recognition covering happiness, disgust, fear, anger, sadness, surprise, and the neutral state. The proposed method utilizes a regularized discriminant analysis-based boosting algorithm (RDAB) with effective Gabor features to recognize the facial expressions. An entropy criterion is applied to select the effective Gabor features, a subset of informative and nonredundant Gabor features. The proposed RDAB algorithm uses RDA as the learner in the boosting algorithm. RDA combines the strengths of linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA): it solves the small-sample-size and ill-posed problems suffered by QDA and LDA through a regularization technique. Additionally, this study uses the particle swarm optimization (PSO) algorithm to estimate the optimal parameters in RDA. Experimental results demonstrate that our approach can accurately and robustly recognize facial expressions.
Tang, Jun; Yao, Yibin; Zhang, Liang; Kong, Jian
2015-01-01
Insufficient data is the essential reason for the ill-posed problem in the computerized ionospheric tomography (CIT) technique. Therefore, a method integrating multi-source data is proposed. Currently, the multiple satellite navigation systems and various ionospheric observing instruments provide abundant data that can be employed to reconstruct the ionospheric electron density (IED). In order to improve the vertical resolution of the IED, we investigate IED reconstruction by integrating ground-based GPS data, occultation data from the LEO satellite, satellite altimetry data from Jason-1 and Jason-2, and ionosonde data. We compared the CIT results with incoherent scatter radar (ISR) observations and found that the multi-source data fusion was effective and reliable for reconstructing electron density, showing its superiority over CIT with GPS data alone. PMID:26266764
NASA Astrophysics Data System (ADS)
Jiang, Peng; Peng, Lihui; Xiao, Deyun
2007-06-01
This paper presents a regularization method for electrical capacitance tomography (ECT) image reconstruction that uses different window functions as regularizers. Image reconstruction for ECT is a typical ill-posed inverse problem: because of the small singular values of the sensitivity matrix, the solution is sensitive to measurement noise. The proposed method uses the spectral filtering properties of different window functions to stabilize the solution by suppressing the noise in the measurements. The window functions, such as the Hanning window and the cosine window, are modified for ECT image reconstruction. Simulations with respect to five typical permittivity distributions are carried out. The reconstructions are better, and some of the contours are clearer, than the results from Tikhonov regularization. Numerical results show the feasibility of the image reconstruction algorithm using different window functions as regularization.
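A minimal Python sketch of the general idea of window functions acting as spectral filters on an SVD-based reconstruction; the exact mapping from window function to filter factors used in the paper may differ, and the matrix and vector names are illustrative.

```python
import numpy as np

def windowed_svd_reconstruction(S, b, window="hanning"):
    """Filtered SVD solution of S x = b, where the filter factors come
    from a window function evaluated over the singular-value index:
    large singular values are kept, small (noise-amplifying) ones are
    damped. A sketch of window-function regularization, not the exact
    filters of the cited paper."""
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    t = np.linspace(0.0, 1.0, len(s))              # normalized index, 0 .. 1
    if window == "hanning":
        f = 0.5 * (1 + np.cos(np.pi * t))          # 1 at the largest singular value, 0 at the smallest
    elif window == "cosine":
        f = np.cos(0.5 * np.pi * t)
    else:
        f = np.ones_like(t)                        # no filtering (naive inverse)
    coeffs = f * (U.T @ b) / s
    return Vt.T @ coeffs
```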
Thermal diffusion of Boussinesq solitons.
Arévalo, Edward; Mertens, Franz G
2007-10-01
We consider the problem of soliton dynamics in the presence of an external noisy force for Boussinesq-type equations. A set of ordinary differential equations (ODEs) for the relevant coordinates of the system is derived. We show that for the improved Boussinesq (IBq) equation the set of ODEs has limiting cases leading to sets of ODEs that can be directly derived either from the ill-posed Boussinesq equation or from the Korteweg-de Vries (KdV) equation. The case of a soliton propagating in the presence of damping and thermal noise is considered for the IBq equation. Good agreement between theory and simulations is observed, showing the strong robustness of these excitations. The results obtained here generalize previous results obtained in the framework of the KdV equation for lattice solitons in the monatomic chain of atoms.
NASA Astrophysics Data System (ADS)
Huber, Franz J. T.; Will, Stefan; Daun, Kyle J.
2016-11-01
Inferring the size distribution of aerosolized fractal aggregates from the angular distribution of elastically scattered light is a mathematically ill-posed problem. This paper presents a procedure for analyzing Wide-Angle Light Scattering (WALS) data using Bayesian inference. The outcome is probability densities for the recovered size distribution and aggregate morphology parameters. This technique is applied to both synthetic data and experimental data collected on soot-laden aerosols, using a measurement equation derived from Rayleigh-Debye-Gans fractal aggregate (RDG-FA) theory. In the case of experimental data, the recovered aggregate size distribution parameters are generally consistent with TEM-derived values, but the accuracy is impaired by the well-known limited accuracy of RDG-FA theory. Finally, we show how this bias could potentially be avoided using the approximation error technique.
Robotic disaster recovery efforts with ad-hoc deployable cloud computing
NASA Astrophysics Data System (ADS)
Straub, Jeremy; Marsh, Ronald; Mohammad, Atif F.
2013-06-01
Autonomous operation of search and rescue (SaR) robots is an ill-posed problem, further complicated by the dynamic disaster recovery environment. In a typical SaR response scenario, responder robots will require different levels of processing capability during various parts of the response effort and will need to utilize multiple algorithms. Placing all of these capabilities onboard the robot precludes algorithm-specific performance optimization and results in mediocre performance. An architecture for an ad-hoc, deployable cloud environment suitable for use in a disaster response scenario is presented. Under this model, each service provider is optimized for its task and maintains a database of situation-relevant information. This service-oriented architecture (SOA 3.0) compliant framework also serves as an example of the efficient use of SOA 3.0 in an actual cloud application.
The mean field theory in EM procedures for blind Markov random field image restoration.
Zhang, J
1993-01-01
A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem, which is, in general, ill-posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using the mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are more visually pleasing.
London, L
2009-11-01
Little research into neurobehavioural methods and effects occurs in developing countries, where established neurotoxic chemicals continue to pose significant occupational and environmental burdens, and where agents newly identified as neurotoxic are also widespread. Much of the morbidity and mortality associated with neurotoxic agents remains hidden in developing countries as a result of poor case detection, lack of skilled personnel, facilities and equipment for diagnosis, inadequate information systems, limited resources for research and significant competing causes of ill-health, such as HIV/AIDS and malaria. Placing the problem in a human rights context enables researchers and scientists in developing countries to make a strong case for why the field of neurobehavioural methods and effects matters because there are numerous international human rights commitments that make occupational and environmental health and safety a human rights obligation.
ERIC Educational Resources Information Center
Darvin, Jacqueline
2009-01-01
One way to merge imagination with problem-posing and problem-solving in the English classroom is by asking students to respond to "cultural and political vignettes" (CPVs). CPVs are cultural and political situations that are presented to students so that they can practice the creative and essential decision-making skills that they will need to use…
ERIC Educational Resources Information Center
Huntley, Mary Ann; Davis, Jon D.
2008-01-01
A cross-curricular structured-probe task-based clinical interview study with 44 pairs of third year high-school mathematics students, most of whom were high achieving, was conducted to investigate their approaches to a variety of algebra problems. This paper presents results from three problems that were posed in symbolic form. Two problems are…
Management Issues in Critically Ill Pediatric Patients with Trauma.
Ahmed, Omar Z; Burd, Randall S
2017-10-01
The management of critically ill pediatric patients with trauma poses many challenges because of the infrequency and diversity of severe injuries and a paucity of high-level evidence to guide care for these uncommon events. This article discusses recent recommendations for early resuscitation and blood component therapy for hypovolemic pediatric patients with trauma. It also highlights the specific types of injuries that lead to severe injury in children and presents challenges related to their management. Copyright © 2017 Elsevier Inc. All rights reserved.
Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss EM
2017-01-01
Background: Phase of Illness describes stages of advanced illness according to care needs of the individual, family and suitability of care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. Aims: The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider strength of associations between these measures and Phase of Illness. Design and setting: Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function measured using Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs measured using items on Palliative Care Problem Severity Scale. Results: Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in stable phase (65.9, 95% confidence interval = 63.4–68.3) and lowest in dying phase (16.6, 95% confidence interval = 15.3–17.8). Mean pain was highest in unstable phase (1.43, 95% confidence interval = 1.36–1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness (χ2 = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in deteriorating phase than unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01–1.49). Forty-nine percent of the variance in Phase of Illness is explained by Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Conclusion: Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation. PMID:28812945
ERIC Educational Resources Information Center
Aguilar-Magallón, Daniel Aurelio; Reyes-Martìnez, Isaid
2016-01-01
We analyze and discuss ways in which prospective high school teachers pose and pursue questions or problems during the process of reconstructing dynamic configurations of figures given in problem statements. To what extent does the systematic use of a Dynamic Geometry System (DGS) help the participants engage in problem posing activities…
Luyckx, Koen; Rassart, Jessica; Aujoulat, Isabelle; Goubert, Liesbet; Weets, Ilse
2016-04-01
This long-term prospective study examined whether illness self-concept (or the degree to which chronic illness becomes integrated in the self) mediated the pathway from self-esteem to problem areas in diabetes in emerging adults with Type 1 diabetes. Having a central illness self-concept (i.e. feeling overwhelmed by diabetes) was found to relate to lower self-esteem, and more treatment, food, emotional, and social support problems. Furthermore, path analyses indicated that self-esteem was negatively related to both levels and relative changes in these problem areas in diabetes over a period of 5 years. Illness self-concept fully mediated these associations. © The Author(s) 2014.
Concepts, Structures, and Goals: Redefining Ill-Definedness
ERIC Educational Resources Information Center
Lynch, Collin; Ashley, Kevin D.; Pinkwart, Niels; Aleven, Vincent
2009-01-01
In this paper we consider prior definitions of the terms "ill-defined domain" and "ill-defined problem". We then present alternate definitions that better support research at the intersection of Artificial Intelligence and Education. In our view both problems and domains are ill-defined when essential concepts, relations, or criteria are un- or…
Asymptotic analysis of the local potential approximation to the Wetterich equation
NASA Astrophysics Data System (ADS)
Bender, Carl M.; Sarkar, Sarben
2018-06-01
This paper reports a study of the nonlinear partial differential equation that arises in the local potential approximation to the Wetterich formulation of the functional renormalization group equation. A cut-off-dependent shift of the potential in this partial differential equation is performed. This shift allows a perturbative asymptotic treatment of the differential equation for large values of the infrared cut-off. To leading order in perturbation theory the differential equation becomes a heat equation, where the sign of the diffusion constant changes as the space-time dimension D passes through 2. When D < 2, one obtains a forward heat equation whose initial-value problem is well-posed. However, for D > 2 one obtains a backward heat equation whose initial-value problem is ill-posed. For the special case D = 1 the asymptotic series for cubic and quartic models is extrapolated to the small infrared-cut-off limit by using Padé techniques. The effective potential thus obtained from the partial differential equation is then used in a Schrödinger-equation setting to study the stability of the ground state. For cubic potentials it is found that this Padé procedure distinguishes between a PT-symmetric igφ³ theory and a conventional Hermitian gφ³ theory (g real). For an igφ³ theory the effective potential is nonsingular and has a stable ground state, but for a conventional gφ³ theory the effective potential is singular. For a conventional Hermitian gφ⁴ theory and a PT-symmetric -gφ⁴ theory (g > 0) the results are similar; the effective potentials in both cases are nonsingular and possess stable ground states.
NASA Astrophysics Data System (ADS)
Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn
EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, raw EIT conductivity estimates cannot be used to correctly determine the shape and size of complex-shaped regional anomalies; an efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms used for this purpose, such as the Newton-Raphson method, is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex-shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features to solve global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.
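A toy Python sketch of estimating truncated Fourier boundary coefficients with differential evolution, using SciPy's differential_evolution; a simple radius-fitting misfit stands in for the EIT forward model, so the data, coefficient names, and bounds below are purely illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Toy stand-in for the EIT problem: recover the truncated Fourier
# coefficients of a star-shaped boundary r(theta) from noisy samples.
# In the paper the misfit would instead compare measured and simulated
# electrode voltages through an EIT forward model.
theta = np.linspace(0, 2 * np.pi, 64, endpoint=False)

def radius(coeffs, theta):
    a0, a1, b1, a2, b2 = coeffs
    return (a0 + a1 * np.cos(theta) + b1 * np.sin(theta)
               + a2 * np.cos(2 * theta) + b2 * np.sin(2 * theta))

rng = np.random.default_rng(1)
true_coeffs = np.array([1.0, 0.2, -0.1, 0.05, 0.15])
data = radius(true_coeffs, theta) + 0.01 * rng.normal(size=theta.size)

def misfit(coeffs):
    return np.sum((radius(coeffs, theta) - data) ** 2)

bounds = [(0.5, 1.5)] + [(-0.5, 0.5)] * 4
result = differential_evolution(misfit, bounds, seed=0)
print(result.x.round(3))
```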
Person Authentication Using Learned Parameters of Lifting Wavelet Filters
NASA Astrophysics Data System (ADS)
Niijima, Koichi
2006-10-01
This paper proposes a method for identifying persons by the use of lifting wavelet parameters learned by kurtosis minimization. Our learning method exploits desirable properties of the kurtosis and the wavelet coefficients of a facial image: the lifting parameters are trained so as to minimize the kurtosis of the lifting wavelet coefficients computed for the facial image. Since this minimization problem is ill-posed, it is solved with the aid of Tikhonov's regularization method. The learning algorithm is applied to each of the faces to be identified to generate a feature vector whose components are the learned parameters. The constructed feature vectors are stored together with the corresponding faces in a feature vector database. Person authentication is performed by comparing the feature vector of a query face with those stored in the database. In numerical experiments, the lifting parameters are trained for each of the neutral faces of 132 persons (74 males and 58 females) in the AR face database, and person authentication is performed using the smile and anger faces of the same persons in the database.
An efficient and flexible Abel-inversion method for noisy data
NASA Astrophysics Data System (ADS)
Antokhin, Igor I.
2016-12-01
We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.
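A minimal Python sketch of Abel inversion by discretizing the integral operator and applying Tikhonov regularization with a first-difference smoothness penalty; this is only a stand-in for the compact-set and regularization strategies described in the abstract, and the grid and parameter choices are illustrative.

```python
import numpy as np

def abel_matrix(n, R=1.0):
    """Simple midpoint discretization of the Abel transform
        g(y) = 2 * int_y^R f(r) * r / sqrt(r^2 - y^2) dr
    on n cells; radial nodes sit at cell midpoints so the square-root
    singularity is never hit exactly."""
    h = R / n
    y = h * np.arange(n)                # projection coordinates at cell edges
    r = h * (np.arange(n) + 0.5)        # radial nodes at cell midpoints
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * r[j] * h / np.sqrt(r[j] ** 2 - y[i] ** 2)
    return A, r

def abel_invert(g, lam=1e-3, R=1.0):
    """Tikhonov-regularized Abel inversion with a first-difference
    smoothness penalty."""
    n = len(g)
    A, r = abel_matrix(n, R)
    D = np.diff(np.eye(n), axis=0)      # first-difference operator
    f = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ g)
    return r, f

# Toy usage: forward-project a smooth profile, add noise, invert
n = 100
A, r = abel_matrix(n)
f_true = np.exp(-8 * r ** 2)
g = A @ f_true + 0.01 * np.random.default_rng(0).normal(size=n)
r, f_rec = abel_invert(g, lam=1e-3)
```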
Liao, Yu-Kai; Tseng, Sheng-Hao
2014-01-01
Accurately determining the optical properties of multi-layer turbid media using a layered diffusion model is often a difficult task and could be an ill-posed problem. In this study, an iterative algorithm was proposed for solving such problems. This algorithm employed a layered diffusion model to calculate the optical properties of a layered sample at several source-detector separations (SDSs). The optical properties determined at various SDSs were mutually referenced to complete one round of iteration and the optical properties were gradually revised in further iterations until a set of stable optical properties was obtained. We evaluated the performance of the proposed method using frequency domain Monte Carlo simulations and found that the method could robustly recover the layered sample properties with various layer thickness and optical property settings. It is expected that this algorithm can work with photon transport models in frequency and time domain for various applications, such as determination of subcutaneous fat or muscle optical properties and monitoring the hemodynamics of muscle. PMID:24688828
On the "Optimal" Choice of Trial Functions for Modelling Potential Fields
NASA Astrophysics Data System (ADS)
Michel, Volker
2015-04-01
There are many trial functions (e.g. on the sphere) available which can be used for the modelling of a potential field. Among them are orthogonal polynomials such as spherical harmonics and radial basis functions such as spline or wavelet basis functions. Their pros and cons have been widely discussed in the last decades. We present an algorithm, the Regularized Functional Matching Pursuit (RFMP), which is able to choose trial functions of different kinds in order to combine them to a stable approximation of a potential field. One main advantage of the RFMP is that the constructed approximation inherits the advantages of the different basis systems. By including spherical harmonics, coarse global structures can be represented in a sparse way. However, the additional use of spline basis functions allows a stable handling of scattered data grids. Furthermore, the inclusion of wavelets and scaling functions yields a multiscale analysis of the potential. In addition, ill-posed inverse problems (like a downward continuation or the inverse gravimetric problem) can be regularized with the algorithm. We show some numerical examples to demonstrate the possibilities which the RFMP provides.
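A simplified finite-dimensional Python sketch in the spirit of a regularized matching pursuit: the columns of a dictionary matrix stand in for discretized trial functions of different kinds, and a greedy loop selects the column giving the largest decrease of the regularized misfit. This is an analogue for illustration only, not the RFMP itself.

```python
import numpy as np

def regularized_matching_pursuit(D, y, lam=1e-3, n_iter=20):
    """Greedy selection over a mixed dictionary D (columns could be
    discretized spherical harmonics, splines, wavelets, ...):
    at each step the column giving the largest decrease of
        ||y - D c||^2 + lam * ||c||^2
    for a fresh coefficient is selected and its coefficient updated."""
    coeffs = np.zeros(D.shape[1])
    residual = y.astype(float).copy()
    denom = np.sum(D ** 2, axis=0) + lam     # per-column normalization
    for _ in range(n_iter):
        corr = D.T @ residual
        gains = corr ** 2 / denom            # misfit decrease per column
        k = int(np.argmax(gains))
        alpha = corr[k] / denom[k]           # optimal step for column k
        coeffs[k] += alpha
        residual -= alpha * D[:, k]
    return coeffs
```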
A model of recovering the parameters of fast nonlocal heat transport in magnetic fusion plasmas
NASA Astrophysics Data System (ADS)
Kukushkin, A. B.; Kulichenko, A. A.; Sdvizhenskii, P. A.; Sokolov, A. V.; Voloshinov, V. V.
2017-12-01
A model is elaborated for interpreting the initial stage of fast nonlocal transport events, which exhibit an immediate response, on the diffusion time scale, of the spatial profile of the electron temperature to its local perturbation, while the net heat flux is directed opposite to ordinary diffusion (i.e. along the temperature gradient). We solve the inverse problem of recovering the kernel of the integral equation that describes nonlocal (superdiffusive) transport of energy due to emission and absorption of electromagnetic (EM) waves with a long free path and strong reflection from the vacuum vessel's wall. To allow for the errors of the experimental data, we use a method based on regularized approximation of the available data with parametric models, in the framework of an ill-posed problem. The model is applied to interpreting data from the LHD stellarator and the TFTR tokamak. The EM wave transport is considered here in the single-group approximation; nevertheless, the limitations of the physics model enable us to identify the spectral range of the EM waves that might be responsible for the observed phenomenon.
Pixel-based parametric source depth map for Cerenkov luminescence imaging
NASA Astrophysics Data System (ADS)
Altabella, L.; Boschi, F.; Spinelli, A. E.
2016-01-01
Optical tomography represents a challenging problem in optical imaging because of the intrinsically ill-posed inverse problem due to photon diffusion. Cerenkov luminescence tomography (CLT) for optical photons produced in tissues by several radionuclides (i.e. 32P, 18F, 90Y) has been investigated using both a 3D multispectral approach and multiview methods. Difficulties in the convergence of 3D algorithms can discourage the use of this technique for obtaining information on source depth and intensity. For these reasons, we developed a faster, corrected 2D approach based on multispectral acquisitions to obtain the source depth and its intensity using a pixel-based fitting of the source intensity. Monte Carlo simulations and experimental data were used to develop and validate the method for obtaining the parametric map of source depth. With this approach we obtain parametric source depth maps with a precision between 3% and 7% for MC simulations and 5-6% for experimental data. Using this method we are able to obtain reliable information about the source depth of Cerenkov luminescence with a simple and flexible procedure.
Bee, Penny; Berzins, Kathryn; Calam, Rachel; Pryjmachuk, Steven; Abel, Kathryn M.
2013-01-01
Severe parental mental illness poses a challenge to quality of life (QoL) in a substantial number of children and adolescents, and improving the lives of these children is of urgent political and public health concern. This study used a bottom-up qualitative approach to develop a new stakeholder-led model of quality of life relevant to this population. Qualitative data were collected from 19 individuals participating in focus groups or individual interviews. Participants comprised 8 clinical academics, health and social care professionals or voluntary agency representatives; 5 parents and 6 young people (aged 13–18 yrs) with lived experience of severe parental mental illness. Data underwent inductive thematic analysis for the purposes of informing a population-specific quality of life model. Fifty nine individual themes were identified and grouped into 11 key ‘meta-themes’. Mapping each meta-theme against existing child-centred quality of life concepts revealed a multi-dimensional model that endorsed, to a greater or lesser degree, the core domains of generic quality of life models. Three new population-specific priorities were also observed: i) the alleviation of parental mental health symptoms, ii) improved problem-based coping skills and iii) increased mental health literacy. The identification of these priorities raises questions regarding the validity of generic quality of life measures to monitor the effectiveness of services for families and children affected by severe mental illness. New, age-appropriate instruments that better reflect the life priorities and unique challenges faced by the children of parents with severe mental illness may need to be developed. Challenges then remain in augmenting and adapting service design and delivery mechanisms better to meet these needs. Future child and adult mental health services need to work seamlessly alongside statutory education and social care services and a growing number of relevant third sector providers to address fully the quality of life priorities of these vulnerable families. PMID:24040050
The Stigma of Mental Illness as a Barrier to Self Labeling as Having a Mental Illness.
Stolzenburg, Susanne; Freitag, Simone; Evans-Lacko, Sara; Muehlan, Holger; Schmidt, Silke; Schomerus, Georg
2017-12-01
The aim of this study was to investigate whether personal stigma decreases self-identification as having a mental illness in individuals with untreated mental health problems. We interviewed 207 persons with a currently untreated mental health problem as confirmed by a structured diagnostic interview. Measures included symptom appraisal, self-identification as having a mental illness (SELFI), self-labeling (an open-ended question on the nature of their problem), stigma-related variables (explicit and implicit), as well as sociodemographics, current symptom severity, and previous treatment. Support for discrimination and implicit stigmatizing attitude were both associated with lower likelihood of self-identification. More social distance and support for discrimination were associated with less self-labeling. Previous treatment was the strongest predictor of symptom appraisal, SELFI, and self-labeling. Destigmatizing mental illness could increase awareness of personal mental health problems, potentially leading to lower rates of untreated mental illness.
ERIC Educational Resources Information Center
Currie-Rubin, Rachel
2012-01-01
This dissertation examines the problem-solving processes of seven graduate student novices enrolled in a course in educational assessment and ten educational assessment experts. Using Jonassen's (1997) ill- and well-structured problem-solving frameworks, I analyze think-aloud protocols of experts and novices as they examine ill-structured…
Reducing Self-Stigma by Coming Out Proud
Corrigan, Patrick W; Kosyluk, Kristin A; Rüsch, Nicolas
2013-01-01
Self-stigma has a pernicious effect on the lives of people with mental illness. Although a medical perspective might discourage patients from identifying with their illness, public disclosure may promote empowerment and reduce self-stigma. We reviewed the extensive research that supports this assertion and assessed a program that might diminish stigma’s effect by helping some people to disclose to colleagues, neighbors, and others their experiences with mental illness, treatment, and recovery. The program encompasses weighing the costs and benefits of disclosure in deciding whether to come out, considering different strategies for coming out, and obtaining peer support through the disclosure process. This type of program may also pose challenges for public health research. PMID:23488488
Weerasundera, Rajiv; Yogaratnam, Jegan
2013-01-01
Psychotic illness has a low incidence in the puerperal period. Peripartum cardiomyopathy as a complication of pregnancy is also rare. We report a case where the above two conditions occurred simultaneously in a patient and posed significant difficulties in the clinical management. She was diagnosed as having paranoid schizophrenia and peripartum cardiomyopathy. Many of the antipsychotics were contraindicated, and electroconvulsive therapy could not be administered due to the added risks involved with regard to anesthesia. She was therefore managed with clonazepam and olanzapine. This case highlights the challenges in a patient with a psychiatric illness presenting with comorbid physical illness. Copyright © 2013 Elsevier Inc. All rights reserved.
Nunstedt, Håkan; Rudolfsson, Gudrun; Alsen, Pia; Pennbrant, Sandra
2017-01-01
Background: Patients' understanding of their illness is of great importance for recovery. A lacking understanding of the illness is linked with the patients' level of reflection about and interest in understanding their illness. Objective: To describe patients' variations in reflection about and understanding of their illness and how this understanding affects their trust in themselves or others. Method: The study is based on the “Illness perception” model. Latent content analysis was used for the data analysis. Individual, semi-structured, open-ended and face-to-face interviews were conducted with patients (n=11) suffering from a long-term illness diagnosed at least six months prior to the interview. Data collection took place in the three primary healthcare centres treating the participants. Results: The results show variations in the degree of reflection about illness. Patients search for a deeper understanding of the illness for causal explanations, compare different perspectives for preventing complications of their illness, trust healthcare providers, and develop their own strategies to manage life. Conclusion: Whereas some patients search for a deeper understanding of their illness, other patients are less reflective and feel they can manage the illness without further understanding. Patients' understanding of their illness is related to their degree of trust in themselves or others. Patients whose illness poses an existential threat are more likely to reflect more about their illness and what treatment methods are available. PMID:28567169
A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, G.; Siegel, M.; Tanveer, S.
1995-09-01
We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. The situation is disastrous for numerical computation, as small roundoff errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out. 47 refs., 10 figs., 1 tab.
Human Pose Estimation from Monocular Images: A Comprehensive Survey
Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi
2016-01-01
Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003
NASA Astrophysics Data System (ADS)
Harper, Kathleen A.; Etkina, Eugenia
2002-10-01
As part of weekly reports [1], structured journals in which students answer three standard questions each week, they respond to the prompt, "If I were the instructor, what questions would I ask or problems assign to determine if my students understood the material?" An initial analysis of the results shows that some student-generated problems indicate fundamental misunderstandings of basic physical concepts. A further investigation explores the relevance of the problems to the week's material, whether the problems are solvable, and the type of problems (conceptual or calculation-based) written. Also, possible links between various characteristics of the problems and conceptual achievement are being explored. The results of this study spark many more questions for further work. A summary of current findings will be presented, along with its relationship to previous work concerning problem posing [2]. [1] Etkina, E., "Weekly Reports: A Two-Way Feedback Tool," Science Education, 84, 594-605 (2000). [2] Mestre, J.P., "Probing Adults' Conceptual Understanding and Transfer of Learning Via Problem Posing," Journal of Applied Developmental Psychology, 23, 9-50 (2002).
Research Projects in Physics: A Mechanism for Teaching Ill-Structured Problem Solving
NASA Astrophysics Data System (ADS)
Milbourne, Jeff; Bennett, Jonathan
2017-10-01
Physics education research has a tradition of studying problem solving, exploring themes such as physical intuition and differences between expert and novice problem solvers. However, most of this work has focused on traditional, or well-structured, problems, similar to what might appear in a textbook. Less work has been done with open-ended, or ill-structured, problems, similar to the types of problems students might face in their professional lives. Given the national discourse on educational system reform aligned with 21st century skills, including problem solving, it is critical to provide educational experiences that help students learn to solve all types of problems, including ill-structured problems.
In-the-wild facial expression recognition in extreme poses
NASA Astrophysics Data System (ADS)
Yang, Fei; Zhang, Qian; Zheng, Chi; Qiu, Guoping
2018-04-01
Facial expression recognition is an active research problem in computer vision. In recent years, the research has moved from the lab environment to in-the-wild circumstances, which is challenging, especially under extreme poses. Current expression detection systems typically try to avoid pose effects in order to remain generally applicable. In this work, we take the opposite approach: we consider the head pose and detect expressions within specific head poses. Our work includes two parts: detecting the head pose and grouping it into one of the pre-defined head pose classes, and recognizing facial expressions within each pose class. Our experiments show that recognition results with pose-class grouping are much better than those of direct recognition without considering poses. We combine hand-crafted features (SIFT, LBP, and geometric features) with deep learning features to represent the expressions; the hand-crafted features are added into the deep learning framework along with the high-level deep learning features. As a comparison, we implement SVM and random forest as the prediction models. To train and test our methodology, we labeled a face dataset with the 6 basic expressions.
Predictive Analytics for Safer Food Supply
USDA-ARS?s Scientific Manuscript database
Science based risk analysis improves the USDA Food Safety Inspection Service’s ability to combat threats to public health from food-borne illness by allowing the Agency to focus resources on hazards that pose the greatest risk. Innovative algorithms enable detection and containment of threat by an...
Post, Robert M; Altshuler, Lori L; Kupka, Ralph; McElroy, Susan L; Frye, Mark A; Rowe, Michael; Grunze, Heinz; Suppes, Trisha; Keck, Paul E; Nolen, Willem A
2017-01-01
Patients with bipolar disorder from the US have more early-onset illness and a greater familial loading for psychiatric problems than those from the Netherlands or Germany (abbreviated here as Europe). We hypothesized that these regional differences in illness burden would extend to the patients' siblings. Outpatients with bipolar disorder gave consent for participation in a treatment outcome network and for filling out detailed questionnaires. This included a family history of unipolar depression, bipolar disorder, suicide attempt, alcohol abuse/dependence, drug abuse/dependence, and "other" illness elicited for the patients' grandparents, parents, spouses, offspring, and siblings. Problems in the siblings were examined as a function of parental and grandparental problems and the patients' adverse illness characteristics or poor prognosis factors (PPFs). Each problem in the siblings was significantly (p<0.001) more prevalent in those from the US than in those from Europe. In the US, problems in the parents and grandparents were almost uniformly associated with the same problems in the siblings, and sibling problems were related to the number of PPFs observed in the patients. Family history was based on patient report. Increased familial loading for psychiatric problems extends through 4 generations of patients with bipolar disorder from the US compared to Europe, and appears to "breed true" into the siblings of the patients. In addition to early onset, a variety of PPFs are associated with the burden of psychiatric problems in the patients' siblings and offspring. Greater attention to the multigenerational prevalence of illness in patients from the US is indicated. Copyright © 2016 Elsevier B.V. All rights reserved.
Nonnegative least-squares image deblurring: improved gradient projection approaches
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.
2010-02-01
The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has still to be done. Iterative methods, converging to nonnegative least-squares solutions, have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose the application to these algorithms of special acceleration techniques that have been recently developed in the area of the gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP appears definitely the most efficient one.
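For illustration, a minimal sketch of the projected Landweber iteration mentioned in this abstract, written against a small hypothetical blur matrix rather than the paper's data or code; the step length, iteration count, and problem sizes are assumptions.

    import numpy as np

    def projected_landweber(A, b, tau, n_iter=200):
        # Projected Landweber (PL) for nonnegative least squares:
        # x <- max(0, x - tau * A^T (A x - b)).
        # Early stopping (small n_iter) plays the role of the
        # 'semi-convergence' regularization mentioned in the abstract.
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = np.maximum(0.0, x - tau * A.T @ (A @ x - b))
        return x

    # Toy example with a hypothetical 1D blur operator (not the paper's data).
    rng = np.random.default_rng(0)
    A = np.abs(rng.normal(size=(50, 30)))        # stand-in blurring matrix
    x_true = np.maximum(0, rng.normal(size=30))  # nonnegative "image"
    b = A @ x_true + 0.01 * rng.normal(size=50)  # noisy blurred data
    tau = 1.0 / np.linalg.norm(A, 2) ** 2        # step length below 2/||A||^2
    x_rec = projected_landweber(A, b, tau, n_iter=100)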
Yalavarthy, Phaneendra K; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D
2007-06-01
Diffuse optical tomography (DOT) involves estimation of tissue optical properties using noninvasive boundary measurements. The image reconstruction procedure is a nonlinear, ill-posed, and ill-determined problem, so overcoming these difficulties requires regularization of the solution. While the methods developed for solving the DOT image reconstruction problem have a long history, there is less direct evidence on the optimal regularization methods, or on a common theoretical framework for techniques that use least-squares (LS) minimization. A generalized least-squares (GLS) method is discussed here, which incorporates the variances and covariances among the individual data points and the optical properties in the image into a structured weight matrix. It is shown that most of the least-squares techniques applied in DOT can be considered as special cases of this more generalized LS approach. The performance of three minimization techniques using the same implementation scheme is compared using test problems with increasing noise level and increasing complexity within the imaging field. Techniques that use spatial-prior information as constraints can also be incorporated into the GLS formalism. It is also illustrated that inclusion of spatial priors reduces the image error by at least a factor of 2. The improvement of GLS minimization is even more apparent when the noise level in the data is high (as high as 10%), indicating that the benefits of this approach are important for reconstruction of data in a routine setting where the data variance can be known based upon the signal to noise properties of the instruments.
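A rough sketch of the weighted, prior-constrained normal equations behind a generalized least-squares update of the kind described above; the Jacobian, weight matrix, and prior matrix below are placeholders, not the paper's implementation.

    import numpy as np

    def gls_step(J, b, W, L):
        # One linearized GLS update: minimize
        # (b - J dx)^T W (b - J dx) + dx^T L^T L dx,
        # where W encodes data variances/covariances and L a (spatial) prior
        # on the optical-property update. Hypothetical matrices only.
        lhs = J.T @ W @ J + L.T @ L
        rhs = J.T @ W @ b
        return np.linalg.solve(lhs, rhs)

    # Toy dimensions: 40 boundary measurements, 25 image unknowns.
    rng = np.random.default_rng(1)
    J = rng.normal(size=(40, 25))               # stand-in Jacobian
    b = rng.normal(size=40)                     # stand-in data residual
    W = np.diag(1.0 / (0.05 + rng.random(40)))  # inverse data-variance weights
    L = 0.1 * np.eye(25)                        # simple identity prior
    dx = gls_step(J, b, W, L)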
The Analysis of the Problems the Pre-Service Teachers Experience in Posing Problems about Equations
ERIC Educational Resources Information Center
Isik, Cemalettin; Kar, Tugrul
2012-01-01
The present study aimed to analyse the potential difficulties in the problems posed by pre-service teachers about first degree equations with one unknown and equation pairs with two unknowns. It was carried out with 20 pre-service teachers studying in the Department of Elementary Mathematics Educations at a university in Eastern Turkey. The…
A Machine Learning Approach to Evaluating Illness-Induced Religious Struggle
Glauser, Joshua; Connolly, Brian; Nash, Paul; Grossoehme, Daniel H
2017-01-01
Religious or spiritual struggles are clinically important to health care chaplains because they are related to poorer health outcomes, involving both mental and physical health problems. Identifying persons experiencing religious struggle poses a challenge for chaplains. One potentially underappreciated means of triaging chaplaincy effort is prayers written in chapel notebooks. We show that religious struggle can be identified in these notebooks through instances of negative religious coping, such as feeling anger or abandonment toward God. We built a data set of entries in chapel notebooks and classified them as showing religious struggle, or not. We show that natural language processing techniques can be used to automatically classify the entries with respect to whether or not they reflect religious struggle with as much accuracy as humans. The work has potential applications to triaging chapel notebook entries for further attention from pastoral care staff. PMID:28469429
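The abstract does not state which classifier was used, so the following is only a generic bag-of-words sketch of text classification for this kind of task (TF-IDF features plus logistic regression); the entries and labels are invented for illustration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled entries; the actual data set and model are not shown here.
    entries = [
        "Why have you abandoned us in this illness?",
        "Thank you for the care of the nurses and doctors today.",
        "I am angry that my prayers go unanswered.",
        "Grateful for a peaceful night and good news from the scan.",
    ]
    labels = [1, 0, 1, 0]  # 1 = religious struggle, 0 = no struggle

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(entries, labels)
    print(clf.predict(["I feel God has turned away from me"]))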
Bellows, Spencer; Smith, Jordan; Mcguire, Peter; Smith, Andrew
2014-01-01
Accurate resuscitation of the critically ill patient using intravenous fluids and blood products is a challenging, time-sensitive task. Ultrasound of the inferior vena cava (IVC) is a non-invasive technique currently used to guide fluid administration, though multiple factors such as variable image quality, time, and operator skill challenge mainstream acceptance. This study represents a first attempt to develop and validate an algorithm capable of automatically tracking and measuring the IVC, compared against human operators, across a diverse range of image quality. Minimal tracking failures and high levels of agreement between manual and algorithm measurements were demonstrated on good-quality videos. Addressing problems such as gaps in the vessel wall and intra-lumen speckle should result in improved performance in average- and poor-quality videos. Semi-automated measurement of the IVC for the purpose of non-invasive estimation of circulating blood volume poses challenges but is feasible.
Huang, Lei; Goldsmith, Jeff; Reiss, Philip T.; Reich, Daniel S.; Crainiceanu, Ciprian M.
2013-01-01
Diffusion tensor imaging (DTI) measures water diffusion within white matter, allowing for in vivo quantification of brain pathways. These pathways often subserve specific functions, and impairment of those functions is often associated with imaging abnormalities. As a method for predicting clinical disability from DTI images, we propose a hierarchical Bayesian “scalar-on-image” regression procedure. Our procedure introduces a latent binary map that estimates the locations of predictive voxels and penalizes the magnitude of effect sizes in these voxels, thereby resolving the ill-posed nature of the problem. By inducing a spatial prior structure, the procedure yields a sparse association map that also maintains spatial continuity of predictive regions. The method is demonstrated on a simulation study and on a study of association between fractional anisotropy and cognitive disability in a cross-sectional sample of 135 multiple sclerosis patients. PMID:23792220
Status, Alert System, and Prediction of Cyanobacterial Bloom in South Korea
Srivastava, Ankita; Ahn, Chi-Yong; Asthana, Ravi Kumar; Lee, Hyung-Gwan; Oh, Hee-Mock
2015-01-01
Bloom-forming freshwater cyanobacterial genera pose a major ecological problem due to their ability to produce toxins and other bioactive compounds, which can have important implications in illnesses of humans and livestock. Cyanobacteria such as Microcystis, Anabaena, Oscillatoria, Phormidium, and Aphanizomenon species producing microcystins and anatoxin-a have been predominantly documented from most South Korean lakes and reservoirs. With the increase in frequency of such blooms, various monitoring approaches, treatment processes, and prediction models have been developed in due course. In this paper we review the field studies and current knowledge on toxin producing cyanobacterial species and ecological variables that regulate toxin production and bloom formation in major rivers (Han, Geum, Nakdong, and Yeongsan) and reservoirs in South Korea. In addition, development of new, fast, and high-throughput techniques for effective monitoring is also discussed with cyanobacterial bloom advisory practices, current management strategies, and their implications in South Korean freshwater bodies. PMID:25705675
Sparse Bayesian Inference and the Temperature Structure of the Solar Corona
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Harry P.; Byers, Jeff M.; Crump, Nicholas A.
Measuring the temperature structure of the solar atmosphere is critical to understanding how it is heated to high temperatures. Unfortunately, the temperature of the upper atmosphere cannot be observed directly, but must be inferred from spectrally resolved observations of individual emission lines that span a wide range of temperatures. Such observations are “inverted” to determine the distribution of plasma temperatures along the line of sight. This inversion is ill posed and, in the absence of regularization, tends to produce wildly oscillatory solutions. We introduce the application of sparse Bayesian inference to the problem of inferring the temperature structure of the solar corona. Within a Bayesian framework a preference for solutions that utilize a minimum number of basis functions can be encoded into the prior and many ad hoc assumptions can be avoided. We demonstrate the efficacy of the Bayesian approach by considering a test library of 40 assumed temperature distributions.
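As an illustration of sparsity-favoring inversion of line intensities, a sketch using scikit-learn's ARDRegression (automatic relevance determination) as a stand-in for the authors' sparse Bayesian scheme; the response matrix and basis functions are synthetic assumptions.

    import numpy as np
    from sklearn.linear_model import ARDRegression

    # Stand-in problem: intensities = K @ dem, with K a matrix of temperature
    # response functions and dem expanded in a small set of basis functions.
    rng = np.random.default_rng(2)
    n_lines, n_basis = 20, 40
    K = np.abs(rng.normal(size=(n_lines, n_basis)))  # hypothetical response matrix
    dem_true = np.zeros(n_basis)
    dem_true[[8, 21]] = [3.0, 1.5]                   # only a few active basis functions
    intensities = K @ dem_true + 0.05 * rng.normal(size=n_lines)

    ard = ARDRegression(fit_intercept=False)
    ard.fit(K, intensities)
    dem_est = ard.coef_                              # sparse coefficient estimate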
Tomographic reconstruction of tokamak plasma light emission using wavelet-vaguelette decomposition
NASA Astrophysics Data System (ADS)
Schneider, Kai; Nguyen van Yen, Romain; Fedorczak, Nicolas; Brochard, Frederic; Bonhomme, Gerard; Farge, Marie; Monier-Garbet, Pascale
2012-10-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we proposed in Nguyen van yen et al., Nucl. Fus., 52 (2012) 013005, an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
NASA Astrophysics Data System (ADS)
Nguyen van yen, R.; Fedorczak, N.; Brochard, F.; Bonhomme, G.; Schneider, K.; Farge, M.; Monier-Garbet, P.
2012-01-01
Images acquired by cameras installed in tokamaks are difficult to interpret because the three-dimensional structure of the plasma is flattened in a non-trivial way. Nevertheless, taking advantage of the slow variation of the fluctuations along magnetic field lines, the optical transformation may be approximated by a generalized Abel transform, for which we propose an inversion technique based on the wavelet-vaguelette decomposition. After validation of the new method using an academic test case and numerical data obtained with the Tokam 2D code, we present an application to an experimental movie obtained in the tokamak Tore Supra. A comparison with a classical regularization technique for ill-posed inverse problems, the singular value decomposition, allows us to assess the efficiency. The superiority of the wavelet-vaguelette technique is reflected in preserving local features, such as blobs and fronts, in the denoised emissivity map.
Medium-scale traveling ionospheric disturbances by three-dimensional ionospheric GPS tomography
NASA Astrophysics Data System (ADS)
Chen, C. H.; Saito, A.; Lin, C. H.; Yamamoto, M.; Suzuki, S.; Seemala, G. K.
2016-02-01
In this study, we develop a three-dimensional ionospheric tomography with ground-based Global Positioning System (GPS) total electron content observations. Because of the geometric limitations of the GPS observation paths, it is difficult to solve the ill-posed inverse problem for the ionospheric electron density. Different from methods given in previous studies, we consider an algorithm combining the least-squares method with a constraint condition, in which the gradient of electron density tends to be smooth in the horizontal direction and steep in the vicinity of the ionospheric F2 peak. This algorithm is designed to be independent of any ionospheric or plasmaspheric electron density model as the initial condition. An observation system simulation experiment method is applied to evaluate the performance of the GPS ionospheric tomography in detecting ionospheric electron density perturbations at a scale size of around 200 km in wavelength, such as the medium-scale traveling ionospheric disturbances.
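A simplified sketch of a least-squares inversion with a smoothness constraint, in the spirit of the constrained algorithm described above but reduced to a 1-D toy grid with a generic second-difference operator; the ray-path matrix, profile, and weighting are assumptions.

    import numpy as np

    def constrained_lsq(A, b, lam=1.0):
        # Minimize ||A x - b||^2 + lam * ||D x||^2, where D is a
        # second-difference operator on the (here 1-D) electron-density grid.
        # The real algorithm uses a 3-D grid and a direction-dependent
        # constraint; this only shows the regularized normal equations.
        n = A.shape[1]
        D = np.diff(np.eye(n), 2, axis=0)    # second-difference matrix
        lhs = A.T @ A + lam * D.T @ D
        return np.linalg.solve(lhs, A.T @ b)

    rng = np.random.default_rng(3)
    A = rng.random(size=(60, 30))                          # stand-in ray-path geometry
    x_true = np.exp(-((np.arange(30) - 15) ** 2) / 30.0)   # smooth density profile
    b = A @ x_true + 0.01 * rng.normal(size=60)
    x_est = constrained_lsq(A, b, lam=5.0)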
High-resolution wavefront reconstruction using the frozen flow hypothesis
NASA Astrophysics Data System (ADS)
Liu, Xuewen; Liang, Yonghui; Liu, Jin; Xu, Jieping
2017-10-01
This paper describes an approach to reconstructing wavefronts on a finer grid using the frozen flow hypothesis (FFH), which exploits spatial and temporal correlations between consecutive wavefront sensor (WFS) frames. Under the assumption of FFH, slope data from the WFS can be connected to a finer, composite slope grid using translation and downsampling, and the elements in the transformation matrices are determined by wind information. Frames of slopes are then combined, and slopes on the finer grid are reconstructed by solving a sparse, large-scale, ill-posed least squares problem. By using the reconstructed finer slope data and adopting the Fried geometry of the WFS, high-resolution wavefronts are then reconstructed. The results show that this method is robust even with detector noise and inaccurate wind information, and that under bad seeing conditions, high-frequency information in wavefronts can be recovered more accurately than when correlations in WFS frames are ignored.
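A minimal sketch of solving a large, sparse, ill-posed slope system with damped LSQR; the sparse operator below is a random stand-in, not the translation/downsampling matrix built from wind information in the paper.

    import numpy as np
    from scipy.sparse import random as sprandom
    from scipy.sparse.linalg import lsqr

    # Stand-in for the large, sparse slope system H s_fine = s_meas.
    # The damp parameter adds Tikhonov-style damping, a common way to
    # stabilize such ill-posed systems.
    rng = np.random.default_rng(4)
    H = sprandom(200, 400, density=0.02, random_state=5, format="csr")
    s_meas = rng.normal(size=200)
    result = lsqr(H, s_meas, damp=0.1)
    s_fine = result[0]          # reconstructed fine-grid slopes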
Inferring subunit stoichiometry from single molecule photobleaching
2013-01-01
Single molecule photobleaching is a powerful tool for determining the stoichiometry of protein complexes. By attaching fluorophores to proteins of interest, the number of associated subunits in a complex can be deduced by imaging single molecules and counting fluorophore photobleaching steps. Because some bleaching steps might be unobserved, the ensemble of steps will be binomially distributed. In this work, it is shown that inferring the true composition of a complex from such data is nontrivial because binomially distributed observations present an ill-posed inference problem. That is, a unique and optimal estimate of the relevant parameters cannot be extracted from the observations. Because of this, a method has not been firmly established to quantify confidence when using this technique. This paper presents a general inference model for interpreting such data and provides methods for accurately estimating parameter confidence. The formalization and methods presented here provide a rigorous analytical basis for this pervasive experimental tool. PMID:23712552
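A short sketch of the binomial likelihood that underlies the inference problem described above, showing how different stoichiometries can fit the same step counts almost equally well; the counts and detection probabilities are invented.

    import numpy as np
    from scipy.stats import binom

    def log_likelihood(step_counts, n_subunits, p_detect):
        # Log-likelihood of observed photobleaching step counts if each complex
        # has n_subunits fluorophores and each step is observed with probability
        # p_detect, so counts are Binomial(n_subunits, p_detect).
        return np.sum(binom.logpmf(step_counts, n_subunits, p_detect))

    # Hypothetical observed step counts from single-molecule traces.
    counts = np.array([3, 4, 4, 2, 3, 4, 3, 4, 2, 3])
    for n in (4, 5, 6):
        # profile over p on a coarse grid for each candidate stoichiometry
        best = max(log_likelihood(counts, n, p) for p in np.linspace(0.5, 0.99, 50))
        print(n, round(best, 2))   # similar values illustrate the ill-posedness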
Stokes paradox in electronic Fermi liquids
NASA Astrophysics Data System (ADS)
Lucas, Andrew
2017-03-01
The Stokes paradox is the statement that in a viscous two-dimensional fluid, the "linear response" problem of fluid flow around an obstacle is ill posed. We present a simple consequence of this paradox in the hydrodynamic regime of a Fermi liquid of electrons in two-dimensional metals. Using hydrodynamics and kinetic theory, we estimate the contribution of a single cylindrical obstacle to the global electrical resistance of a material, within linear response. Momentum relaxation, present in any realistic electron liquid, resolves the classical paradox. Nonetheless, this paradox imprints itself in the resistance, which can be parametrically larger than predicted by Ohmic transport theory. We find a remarkably rich set of behaviors, depending on whether or not the quasiparticle dynamics in the Fermi liquid should be treated as diffusive, hydrodynamic, or ballistic on the length scale of the obstacle. We argue that all three types of behavior are observable in present day experiments.
Royal ruptures: Caroline of Ansbach and the politics of illness in the 1730s.
Jones, Emrys D
2011-06-01
Caroline of Ansbach, wife of George II, occupied a crucial position in the public life of early 18th-century Britain. She was seen to exert considerable influence on the politics of the court and, as mother to the Hanoverian dynasty's next generation, she became an important emblem for the nation's political well-being. This paper examines how such emblematic significance was challenged and qualified when Caroline's body could no longer be portrayed as healthy and life giving. Using private memoirs and correspondence from the time of her death in 1737, the paper explores the metaphorical potential of the queen's strangulated hernia, as well as the particular problems it posed for the public image of her dynasty. Through these investigations, the paper will comment upon the haphazard nature of public discussion in the early 18th century, and reveal the complex relationship between political speculation and medical diagnosis.
Soft and hard classification by reproducing kernel Hilbert space methods.
Wahba, Grace
2002-12-24
Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set {(y_i, t_i), i = 1, ..., n}, where y_i is the response for the ith subject, and t_i is a vector of attributes for this subject. The value of y_i is a label that indicates which category it came from. For the first problem, we wish to build a model from the training set that assigns to each t in an attribute domain of interest an estimate of the probability p_j(t) that a (future) subject with attribute vector t is in category j. The second problem is in some sense less ambitious; it is to build a model that assigns to each t a label, which classifies a future subject with that t into one of the categories or possibly "none of the above." The approach to the first of these two problems discussed here is a special case of what is known as penalized likelihood estimation. The approach to the second problem is known as the support vector machine. We also note some alternate but closely related approaches to the second problem. These approaches are all obtained as solutions to optimization problems in RKHS. Many other problems, in particular the solution of ill-posed inverse problems, can be obtained as solutions to optimization problems in RKHS and are mentioned in passing. We caution the reader that although a large literature exists in all of these topics, in this inaugural article we are selectively highlighting work of the author, former students, and other collaborators.
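As a companion to the second problem discussed (classification with a support vector machine), a minimal kernel-SVM sketch on synthetic data; it does not reproduce the penalized-likelihood probability estimation of the first problem.

    from sklearn.svm import SVC
    from sklearn.datasets import make_classification

    # Synthetic attribute vectors t and labels y; the RBF kernel plays the
    # role of the reproducing kernel in the RKHS optimization problem.
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    svm = SVC(kernel="rbf", C=1.0)
    svm.fit(X, y)
    print(svm.predict(X[:5]))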
ERIC Educational Resources Information Center
Milbourne, Jeffrey David
2016-01-01
The purpose of this dissertation study was to explore the experiences of high school physics students who were solving complex, ill-structured problems, in an effort to better understand how self-regulatory behavior mediated the project experience. Consistent with Voss, Green, Post, and Penner's (1983) conception of an ill-structured problem in…
Van Loon, L M A; Van De Ven, M O M; Van Doesum, K T M; Hosman, C M H; Witteman, C L M
Children of parents with mental illness have an elevated risk of developing a range of mental health and psychosocial problems. Yet many of these children remain mentally healthy. The present study aimed to get insight into factors that protect these children from developing internalizing and externalizing problems. Several possible individual, parent-child, and family protective factors were examined cross-sectionally and longitudinally in a sample of 112 adolescents. A control group of 122 adolescents whose parents have no mental illness was included to explore whether the protective factors were different between adolescents with and without a parent with mental illness. Cross-sectional analyses revealed that high self-esteem and low use of passive coping strategies were related to fewer internalizing and externalizing problems. Greater self-disclosure was related to fewer internalizing problems and more parental monitoring was related to fewer externalizing problems. Active coping strategies, parental support, and family factors such as cohesion were unrelated to adolescent problem behavior. Longitudinal analyses showed that active coping, parental monitoring, and self-disclosure were protective against developing internalizing problems 2 years later. We found no protective factors for externalizing problems. Moderation analyses showed that the relationships between possible protective factors and adolescent problem behavior were not different for adolescents with and without a parent with mental illness. The findings suggest that adolescents' active coping strategies and parent-child communication may be promising factors to focus on in interventions aimed at preventing the development of internalizing problems by adolescents who have a parent with mental illness.
Setting Up a Mental Health Clinic in the Heart of Rural Africa.
Enow, Humphrey; Thalitaya, Madhusudan Deepak; Mbatia, Wallace; Kirpekar, Sheetal
2015-09-01
The World Health Organization defines health as a state of complete physical, mental and social well-being and not merely the absence of disease or infirmity (WHO 1948). In Africa, mental health issues often come last on the list of priorities for policy-makers, and people's attitudes towards mental illness are strongly influenced by traditional beliefs in supernatural causes and remedies. The massive burden attributed to mental illness in these communities poses a huge moral, cultural and economic challenge and requires a concerted and integrated approach involving policy makers, mental health practitioners, the general public, service users and their families, and other stakeholders to reverse the trend. The aims include improving community awareness of mental illness, changing the community's negative perception of mental illness, providing a screening and referral pathway for mental illnesses, providing supervision of patient care, and promoting community participation on issues regarding mental health with a view to challenging existing traditional attitudes and beliefs, reducing stigma and promoting health-seeking behaviour.
Relativistic Causality and Quasi-Orthomodular Algebras
NASA Astrophysics Data System (ADS)
Nobili, Renato
2006-05-01
The concept of fractionability or decomposability in parts of a physical system has its mathematical counterpart in the lattice-theoretic concept of orthomodularity. Systems with a finite number of degrees of freedom can be decomposed in different ways, corresponding to different groupings of the degrees of freedom. The orthomodular structure of these simple systems is trivially manifest. The problem then arises as to whether the same property is shared by physical systems with an infinite number of degrees of freedom, in particular by the quantum relativistic ones. The latter case was approached several years ago by Haag and Schroer (1962; Haag, 1992), who started by noting that the causally complete sets of Minkowski spacetime form an orthomodular lattice and posed the question of whether the subalgebras of local observables, with topological supports on such subsets, themselves form a corresponding orthomodular lattice. Were it so, the way would be paved to interpreting spacetime as an intrinsic property of a local quantum field algebra. Surprisingly enough, however, the hoped-for property does not hold for local algebras of free fields with superselection rules. The possibility seems instead to be open if the local currents that govern the superselection rules are driven by gauge fields. Thus, in the framework of local quantum physics, the request for algebraic orthomodularity seems to imply physical interactions! Despite its charm, however, such a request appears plagued by ambiguities and criticalities that make of it an ill-posed problem. The proposers themselves, indeed, concluded that the orthomodular correspondence hypothesis is too strong to have a chance of being practicable. Thus, the idea was neither taken seriously by the proposers nor further investigated by others up to a reasonable degree of clarification. This paper is an attempt to re-formulate and well-pose the problem. It will be shown that the idea is viable provided that the algebra of local observables: (1) is considered over the whole range of its irreducible representations; (2) is widened with the addition of the elements of a suitable intertwining group of automorphisms; (3) has the orthomodular correspondence requirement modified to an extent sufficient to impart a natural topological structure to the intertwined algebra of observables so obtained. A novel scenario then emerges in which local quantum physics appears to provide a general framework for non-perturbative quantum field dynamics.
The World in a Tomato: Revisiting the Use of "Codes" in Freire's Problem-Posing Education.
ERIC Educational Resources Information Center
Barndt, Deborah
1998-01-01
Gives examples of the use of Freire's notion of codes or generative themes in problem-posing literacy education. Describes how these applications expand Freire's conceptions by involving students in code production, including multicultural perspectives, and rethinking codes as representations. (SK)
A genetic algorithm approach to estimate glacier mass variations from GRACE data
NASA Astrophysics Data System (ADS)
Reimond, Stefan; Klinger, Beate; Krauss, Sandro; Mayer-Gürr, Torsten
2017-04-01
The application of a genetic algorithm (GA) to the inference of glacier mass variations with a point-mass modeling method is described. GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology serve as input for this study. The reformulation of the point-mass inversion method in terms of an optimization problem is motivated by two reasons: first, an improved choice of the positions of the modeled point-masses (with a particular focus on the depth parameter) is expected to increase the signal-to-noise ratio. Considering these coordinates as additional unknown parameters (besides the mass change magnitudes) results in a highly non-linear optimization problem. The second reason is that the mass inversion from satellite tracking data is an ill-posed problem, and hence regularization becomes necessary. The main task in this context is the determination of the regularization parameter, which is typically done by means of heuristic selection rules such as the L-curve criterion. In this study, however, the challenge of selecting a suitable balancing parameter (or even a matrix) is tackled by introducing regularization into the overall optimization problem. Based on this novel approach, estimates of ice-mass changes in various alpine glacier systems (e.g. Svalbard) are presented and compared to existing results and alternative inversion methods.
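A toy sketch of a genetic search over a point-mass parameter (here a single depth), with selection and Gaussian mutation; the forward model, bounds, and fitness function are placeholders and bear no relation to the actual GRACE processing.

    import numpy as np

    rng = np.random.default_rng(6)

    def fitness(depth, obs):
        # Toy fitness: how well a single point-mass depth explains a synthetic
        # "observation". Stands in for the GRACE residual plus regularization.
        pred = 1.0 / (depth + 1.0)          # hypothetical forward model
        return -(pred - obs) ** 2

    def genetic_search(obs, pop_size=30, n_gen=50, sigma=5.0):
        pop = rng.uniform(1.0, 200.0, size=pop_size)         # candidate depths (km)
        for _ in range(n_gen):
            fit = np.array([fitness(d, obs) for d in pop])
            parents = pop[np.argsort(fit)[-pop_size // 2:]]  # keep the fittest half
            children = parents + rng.normal(0.0, sigma, size=parents.size)
            pop = np.concatenate([parents, np.clip(children, 1.0, 200.0)])
        return pop[np.argmax([fitness(d, obs) for d in pop])]

    best_depth = genetic_search(obs=0.02)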
Communicating Climate Change: the Problem of Knowing and Doing.
NASA Astrophysics Data System (ADS)
Wildcat, D.
2008-12-01
The challenge of global warming and climate change may illustrate better than any recent phenomenon that, quite independent of the science associated with our assessment, modeling, mitigation strategies and adaptation to the multiple complex processes that characterize this phenomenon, our greatest challenge resides in creating systems where knowledge can be usefully communicated to the general public. Knowledge transfer will pose significant challenges when addressing a topic that often leaves the ill-informed and non-scientist overwhelmed with pieces of information and paralyzed with a sense that there is nothing to be done to address this global problem. This communication problem is very acute in North American indigenous communities where a first-hand, on-the-ground experience of climate change is indisputable, but where the charts, graphs and sophisticated models presented by scientists are treated with suspicion and often not explained very well. This presentation will discuss the efforts of the American Indian and Alaska Native Climate Change Working Group to prepare future generations of AI/AN geoscience professionals, educators, and a geoscience-literate AI/AN workforce, while ensuring that our Indigenous tribal knowledges of land- and sea-scapes, and climates are valued, used and incorporated into our tribal exercise of geoscience education and research. The Working Group's efforts already suggest that the communication problem for Indigenous communities will be best solved by 'growing' our own culturally competent Indigenous geoscience professionals.
Assessment of a Problem Posing Task in a Jamaican Grade Four Mathematics Classroom
ERIC Educational Resources Information Center
Munroe, Kayan Lloyd
2016-01-01
This paper analyzes how a teacher of mathematics used problem posing in the assessment of the cognitive development of 26 students at the grade-four level. The students, ages 8 to 10 years, were from a rural elementary school in western Jamaica. Using a picture as a prompt, students were asked to generate three arithmetic problems and to offer…
ERIC Educational Resources Information Center
Collins, Rachel H.
2014-01-01
In a society that is becoming more dynamic, complex, and diverse, the ability to solve ill-structured problems has become an increasingly critical skill. Emerging adults are at a critical life stage that is an ideal time to develop the skills needed to solve ill-structured problems (ISPs) as they are transitioning to adult roles and starting to…
NASA Astrophysics Data System (ADS)
Saito, Takahiro; Takahashi, Hiromi; Komatsu, Takashi
2006-02-01
The Retinex theory was first proposed by Land, and deals with the separation of irradiance from reflectance in an observed image. The separation problem is an ill-posed problem. Land and others proposed various Retinex separation algorithms. Recently, Kimmel and others proposed a variational framework that unifies the previous Retinex algorithms, such as the Poisson-equation-type Retinex algorithms developed by Horn and others, and presented a Retinex separation algorithm with the time-evolution of a linear diffusion process. However, Kimmel's separation algorithm cannot achieve physically rational separation if the true irradiance varies among color channels. To cope with this problem, we introduce a nonlinear diffusion process into the time-evolution. Moreover, as to its extension to color images, we present two approaches to treating the color channels: the independent approach, which treats each color channel separately, and the collective approach, which treats all color channels collectively. The latter approach outperforms the former. Furthermore, we apply our separation algorithm to a high-quality chroma key in which, before a foreground frame and a background frame are combined into an output image, the color of each pixel in the foreground frame is spatially adaptively corrected through transformation of the separated irradiance. Experiments demonstrate the superiority of our separation algorithm over Kimmel's separation algorithm.
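A heavily simplified Retinex-style separation in the log domain, using Gaussian smoothing in place of the paper's nonlinear diffusion process and applied per channel (the 'independent' approach); the image and parameters are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def simple_retinex(image, sigma=15.0):
        # Very simplified Retinex-style separation: the smooth component
        # (here a Gaussian-filtered log image, instead of the paper's
        # nonlinear diffusion) is taken as irradiance, the residual as
        # reflectance.
        log_img = np.log1p(image.astype(float))
        log_irradiance = gaussian_filter(log_img, sigma=sigma)
        log_reflectance = log_img - log_irradiance
        return np.expm1(log_irradiance), np.exp(log_reflectance)

    # Toy RGB image; each channel handled separately (independent approach).
    rng = np.random.default_rng(7)
    img = rng.integers(0, 255, size=(64, 64, 3))
    irr, refl = zip(*(simple_retinex(img[..., c]) for c in range(3)))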
Kang, Wonseok; Yu, Soohwan; Seo, Doochun; Jeong, Jaeheon; Paik, Joonki
2015-09-10
In very high-resolution (VHR) push-broom-type satellite sensor data, both destriping and denoising methods have become chronic problems and attracted major research advances in the remote sensing fields. Since the estimation of the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared to various existing methods on a set of push-broom-type sensor data acquired by Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise, and the results of the proposed method show significantly improved enhancement results over existing state-of-the-art methods in terms of both qualitative and quantitative assessments.
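A crude Fourier-domain destriping sketch that only conveys the general idea of suppressing stripe frequencies; the paper's method instead operates on wavelet sub-bands and adds multiscale non-local means denoising, neither of which is reproduced here.

    import numpy as np

    def fft_destripe(img, width=2):
        # Column stripes concentrate along the horizontal-frequency axis of the
        # 2-D spectrum, so a narrow band there is damped while the DC term is kept.
        F = np.fft.fftshift(np.fft.fft2(img.astype(float)))
        cy = F.shape[0] // 2
        mask = np.ones_like(F)
        mask[cy - width:cy + width + 1, :] = 0.1   # damp stripe frequencies
        mask[cy, F.shape[1] // 2] = 1.0            # keep the DC component
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

    rng = np.random.default_rng(8)
    clean = rng.random((128, 128))
    striped = clean + 0.3 * np.tile(rng.random((1, 128)), (128, 1))  # column stripes
    restored = fft_destripe(striped)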
NASA Astrophysics Data System (ADS)
Cousquer, Yohann; Pryet, Alexandre; Atteia, Olivier; Ferré, Ty P. A.; Delbart, Célestine; Valois, Rémi; Dupuy, Alain
2018-03-01
The inverse problem of groundwater models is often ill-posed and model parameters are likely to be poorly constrained. Identifiability is improved if diverse data types are used for parameter estimation. However, some models, including detailed solute transport models, are further limited by prohibitive computation times. This often precludes the use of concentration data for parameter estimation, even if those data are available. In the case of surface water-groundwater (SW-GW) models, concentration data can provide SW-GW mixing ratios, which efficiently constrain the estimate of exchange flow, but are rarely used. We propose to reduce computational limits by simulating SW-GW exchange at a sink (well or drain) based on particle tracking under steady state flow conditions. Particle tracking is used to simulate advective transport. A comparison between the particle tracking surrogate model and an advective-dispersive model shows that dispersion can often be neglected when the mixing ratio is computed for a sink, allowing for use of the particle tracking surrogate model. The surrogate model was implemented to solve the inverse problem for a real SW-GW transport problem with heads and concentrations combined in a weighted hybrid objective function. The resulting inversion showed markedly reduced uncertainty in the transmissivity field compared to calibration on head data alone.
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
Kang, Wonseok; Yu, Soohwan; Seo, Doochun; Jeong, Jaeheon; Paik, Joonki
2015-01-01
In very high-resolution (VHR) push-broom-type satellite sensor data, both destriping and denoising methods have become chronic problems and attracted major research advances in the remote sensing fields. Since the estimation of the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared to various existing methods on a set of push-broom-type sensor data acquired by Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise, and the results of the proposed method show significantly improved enhancement results over existing state-of-the-art methods in terms of both qualitative and quantitative assessments. PMID:26378532
Chronically ill rural women: self-identified management problems and solutions.
Cudney, Shirley; Sullivan, Therese; Winters, Charlene A; Paul, Lynn; Oriet, Pat
2005-03-01
To add to the knowledge base of illness management of chronically ill, rural women by describing the self-identified problems and solutions reported by women participants in the online health-education segment of the Women to Women (WTW) computer outreach project. WTW is a research-based computer intervention providing health education and online peer support for rural women with chronic diseases. Messages posted to the online chat room were examined to determine the women's self-management problems and solutions. The self-identified problems were: (1) difficulties in carrying through on self-management programmes; (2) negative fears and feelings; (3) poor communication with care providers; and (4) disturbed relationships with family and friends. The self-identified solutions to these problems included problem-solving techniques that were tailored to the rural lifestyle. Although not all problems were 'solvable', they could be 'lived with' if the women's prescriptions for self-management were used. Glimpses into the women's day-to-day experiences of living with chronic illness gleaned from the interactive health-education discussions will give health professionals insights into the women's efforts to manage their illnesses. The data provide health professionals with information to heighten their sensitivity to their clients' day-to-day care and educational needs.
Conceptualizations of illness among relatives of patients diagnosed with schizophrenia.
Villares, Cecília C; Redko, Cristina P; Mari, Jair J
2017-06-01
Family concepts of a relative's illness are an important part of the coping process and reveal the cultural construction of the experience of illness. As part of a qualitative study conducted in the Schizophrenia Outpatient Clinic of the Department of Psychiatry, Escola Paulista de Medicina - UNIFESP, 14 relatives of eight outpatients diagnosed with schizophrenia were interviewed and invited to talk freely about their ideas and feelings concerning their relative's problem. Qualitative analysis was used to identify categories of illness representations. Three main categories were discussed, including Problema de Nervoso, Problema na Cabeça and Problema Espiritual (Problem of the Nerves, Problem in the Head and Spiritual Problem). The authors present evidence of these categories as cultural constructions, and discuss the relevance of popular notions of illness to the understanding of the course and outcome of schizophrenia, and the planning of culturally meaningful interventions.
Roex, Ann; Clarebout, Geraldine; Dory, Valerie; Degryse, Jan
2009-01-01
Background: Epistemological beliefs (EB) are an individual's cognitions about knowledge and knowing. In several non-medical domains, EB have been found to contribute to the way individuals reason when faced with ill-structured problems (i.e. problems with no clear-cut, right or wrong solutions). Such problems are very common in medical practice. Determining whether EB are also influential in reasoning processes with regard to medical issues to which there is no straightforward answer could have implications for medical education. This study focused on 2 research questions: 1. Can ill-structured problems be used to elicit general practice trainees' and trainers' EB? and 2. What are the views of general practice trainees and trainers about knowledge and how do they justify knowing? Methods: 2 focus groups of trainees (n = 18) were convened on 3 occasions during their 1st year of postgraduate GP training. 2 groups of GP trainers (n = 11) met on one occasion. Based on the methodology of the Reflective Judgement Interview (RJI), participants were asked to comment on 11 ill-structured problems. The sessions were audio taped and transcribed, and an adapted version of the RJI scoring rules was used to assess the trainees' reasoning about ill-structured problems. Results: Participants made a number of statements illustrating their EB and their importance in clinical reasoning. The level of EB varied widely from one meeting to another and depending on the problem addressed. Overall, the EB expressed by trainees did not differ from those of trainers, except on a particular ill-structured problem regarding shoulder pain. Conclusion: The use of focus groups has entailed some difficulties in the interpretation of the results, but a number of preliminary conclusions can be drawn. Ill-structured medical problems can be used to elicit EB. Most trainees and trainers displayed pre-reflective and quasi-reflective EB. The way trainees and doctors view and justify knowledge is likely to be involved in medical reasoning processes. PMID:19775425
Mathematical Thinking and Creativity through Mathematical Problem Posing and Solving
ERIC Educational Resources Information Center
Ayllón, María F.; Gómez, Isabel A.; Ballesta-Claver, Julio
2016-01-01
This work shows the relationship between the development of mathematical thinking and creativity with mathematical problem posing and solving. Creativity and mathematics are disciplines that do not usually appear together. Both concepts constitute complex processes sharing elements, such as fluency (number of ideas), flexibility (range of ideas),…
Problem Posing Based on Investigation Activities by University Students
ERIC Educational Resources Information Center
da Ponte, Joao Pedro; Henriques, Ana
2013-01-01
This paper reports a classroom-based study involving investigation activities in a university numerical analysis course. The study aims to analyse students' mathematical processes and to understand how these activities provide opportunities for problem posing. The investigations were intended to stimulate students in asking questions, to trigger…
Examining Mathematics Classroom Interactions: Elevating Student Roles in Teaching and Learning
ERIC Educational Resources Information Center
Kent, Laura
2017-01-01
This article introduces a model entitled, "Responsive Teaching through Problem Posing" or RTPP, that addresses a type of reform oriented mathematics teaching based on posing relevant problems, positioning students as experts of mathematics, and facilitating discourse. RTPP incorporates decades of research on students' thinking in…
USDA-ARS?s Scientific Manuscript database
Vibrio parahaemolyticus is a marine and estuarine bacterium that poses a large threat to human health worldwide. It has been the leading bacterial cause of seafood-borne illness. This study investigated the prevalence and drug resistance of V. parahaemolyticus isolated from retail shellfish in Shang...
Vandermorris, Susan; Sheldon, Signy; Winocur, Gordon; Moscovitch, Morris
2013-11-01
The relationship of higher order problem solving to basic neuropsychological processes likely depends on the type of problems to be solved. Well-defined problems (e.g., completing a series of errands) may rely primarily on executive functions. Conversely, ill-defined problems (e.g., navigating socially awkward situations) may, in addition, rely on medial temporal lobe (MTL) mediated episodic memory processes. Healthy young (N = 18; M = 19; SD = 1.3) and old (N = 18; M = 73; SD = 5.0) adults completed a battery of neuropsychological tests of executive and episodic memory function, and experimental tests of problem solving. Correlation analyses and age group comparisons demonstrated differential contributions of executive and autobiographical episodic memory function to well-defined and ill-defined problem solving and evidence for an episodic simulation mechanism underlying ill-defined problem solving efficacy. Findings are consistent with the emerging idea that MTL-mediated episodic simulation processes support the effective solution of ill-defined problems, over and above the contribution of frontally mediated executive functions. Implications for the development of intervention strategies that target preservation of functional independence in older adults are discussed.
The Role of Content Knowledge in Ill-Structured Problem Solving for High School Physics Students
NASA Astrophysics Data System (ADS)
Milbourne, Jeff; Wiebe, Eric
2018-02-01
While Physics Education Research has a rich tradition of problem-solving scholarship, most of the work has focused on more traditional, well-defined problems. Less work has been done with ill-structured problems, problems that are better aligned with the engineering and design-based scenarios promoted by the Next Generation Science Standards. This study explored the relationship between physics content knowledge and ill-structured problem solving for two groups of high school students with different levels of content knowledge. Both groups of students completed an ill-structured problem set, using a talk-aloud procedure to narrate their thought process as they worked. Analysis of the data focused on identifying students' solution pathways, as well as the obstacles that prevented them from reaching "reasonable" solutions. Students with more content knowledge were more successful reaching reasonable solutions for each of the problems, experiencing fewer obstacles. These students also employed a greater variety of solution pathways than those with less content knowledge. Results suggest that a student's solution pathway choice may depend on how she perceives the problem.
A general approach to regularizing inverse problems with regional data using Slepian wavelets
NASA Astrophysics Data System (ADS)
Michel, Volker; Simons, Frederik J.
2017-12-01
Slepian functions are orthogonal function systems that live on subdomains (for example, geographical regions on the Earth’s surface, or bandlimited portions of the entire spectrum). They have been firmly established as a useful tool for the synthesis and analysis of localized (concentrated or confined) signals, and for the modeling and inversion of noise-contaminated data that are only regionally available or only of regional interest. In this paper, we consider a general abstract setup for inverse problems represented by a linear and compact operator between Hilbert spaces with a known singular-value decomposition (svd). In practice, such an svd is often only given for the case of a global expansion of the data (e.g. on the whole sphere) but not for regional data distributions. We show that, in either case, Slepian functions (associated to an arbitrarily prescribed region and the given compact operator) can be determined and applied to construct a regularization for the ill-posed regional inverse problem. Moreover, we describe an algorithm for constructing the Slepian basis via an algebraic eigenvalue problem. The obtained Slepian functions can be used to derive an svd for the combination of the regionalizing projection and the compact operator. As a result, standard regularization techniques relying on a known svd become applicable also to those inverse problems where the data are regionally given only. In particular, wavelet-based multiscale techniques can be used. An example for the latter case is elaborated theoretically and tested on two synthetic numerical examples.
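As a rough illustration of the projected-and-truncated SVD idea the abstract alludes to, the Python sketch below keeps only the best-conditioned singular components of a regionally restricted operator. The matrices A (forward operator) and P (regionalizing projection) are entirely made up, and this is a generic schematic, not the authors' Slepian construction.

    # Schematic truncated-SVD regularization of a regionally restricted operator.
    import numpy as np

    def truncated_svd_solve(A, P, y_region, k):
        """Regularized solution of (P A) x = y_region keeping the k largest singular values."""
        U, s, Vt = np.linalg.svd(P @ A, full_matrices=False)
        # Keep only the k best-conditioned components; discard the unstable ones.
        coeffs = (U[:, :k].T @ y_region) / s[:k]
        return Vt[:k].T @ coeffs

    # Toy usage with random data standing in for a real regional inverse problem.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((80, 60))
    P = np.diag((np.arange(80) < 40).astype(float))   # crude "regional" mask
    x_true = rng.standard_normal(60)
    y = P @ A @ x_true + 1e-3 * rng.standard_normal(80)
    x_rec = truncated_svd_solve(A, P, y, k=20)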
ERIC Educational Resources Information Center
Koichu, Boris; Harel, Guershon; Manaster, Alfred
2013-01-01
Twenty-four mathematics teachers were asked to think aloud when posing a word problem whose solution could be found by computing 4/5 divided by 2/3. The data consisted of verbal protocols along with the written notes made by the subjects. The qualitative analysis of the data was focused on identifying the structures of the problems produced and…
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise.
A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov-type and other regularization methods in Banach spaces is the use of variational-inequality-type assumptions that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity-promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumptions on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is an issue of high practical relevance.
It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
Cultural expressions of bodily awareness among chronically ill Filipino Americans.
Becker, Gay
2003-01-01
To describe Filipino Americans' cultural traditions surrounding bodily awareness, especially how the principle of balance informs their views, and the link to self-management of chronic illness. This qualitative study used semistructured interviews with 85 Filipino Americans between the ages of 46 and 97 years. Volunteers were recruited from numerous health care sites in 1 geographic location in the United States. Respondents had 1 or more chronic illnesses. Taped and transcribed interviews were coded and evaluated for themes. The concept of balance was central to Filipino Americans' portrayal of bodily awareness of signs and symptoms related to chronic illnesses, as well as to actions they took to manage their chronic illnesses. Efforts were made to control chronic illnesses through a variety of self-care practices. Diet posed a particular challenge because of the symbolic importance of food in Filipino culture and its use in the maintenance of social relationships. The ways in which Filipino Americans combine attention to the body, values of balance and harmony, and emphasis on social well-being result in heightened attention to bodily processes. Filipino Americans' emphasis on bodily awareness suggests that this particular cultural strength can be used to enhance chronic illness management. Awareness of the cultural traditions of Filipino Americans can facilitate patient education about how to manage chronic illnesses.
Sleep and Culture in Children with Medical Conditions
Koinis-Mitchell, Daphne
2010-01-01
Objectives To provide an integrative review of the existing literature on the interrelationships among sleep, culture, and medical conditions in children. Methods A comprehensive literature search was conducted using PubMed, Medline, and PsychINFO computerized databases and bibliographies of relevant articles. Results Children with chronic illnesses experience more sleep problems than healthy children. Cultural beliefs and practices are likely to impact the sleep of children with chronic illnesses. Few studies have examined cultural factors affecting the relationship between sleep and illness, but existing evidence suggests the relationship between sleep and illness is exacerbated for diverse groups. Conclusions Sleep is of critical importance to children with chronic illnesses. Cultural factors can predispose children both to sleep problems and to certain medical conditions. Additional research is needed to address the limitations of the existing literature, and to develop culturally sensitive interventions to treat sleep problems in children with chronic illnesses. PMID:20332222
ERIC Educational Resources Information Center
Ge, Xun; Law, Victor; Huang, Kun
2016-01-01
One of the goals for problem-based learning (PBL) is to promote self-regulation. Although self-regulation has been studied extensively, its interrelationships with ill-structured problem solving have been unclear. In order to clarify the interrelationships, this article proposes a conceptual framework illustrating the iterative processes among…
Meanings Given to Algebraic Symbolism in Problem-Posing
ERIC Educational Resources Information Center
Cañadas, María C.; Molina, Marta; del Río, Aurora
2018-01-01
Some errors in the learning of algebra suggest that students might have difficulties giving meaning to algebraic symbolism. In this paper, we use problem posing to analyze the students' capacity to assign meaning to algebraic symbolism and the difficulties that students encounter in this process, depending on the characteristics of the algebraic…
Enhancing Students' Communication Skills through Problem Posing and Presentation
ERIC Educational Resources Information Center
Sugito; E. S., Sri Mulyani; Hartono; Supartono
2017-01-01
This study explored how to enhance communication skills through the problem posing and presentation method. The subjects of this research were seventh grade students of a junior high school, including 20 males and 14 females. This research was conducted in two cycles, and each cycle consisted of four steps: planning, action, observation, and…
Image-based aircraft pose estimation: a comparison of simulations and real-world data
NASA Astrophysics Data System (ADS)
Breuers, Marcel G. J.; de Reus, Nico
2001-10-01
The problem of estimating aircraft pose information from monocular image data is considered using a Fourier descriptor-based algorithm. The dependence of pose estimation accuracy on image resolution and aspect angle is investigated through simulations using sets of synthetic aircraft images. Further evaluation shows that good pose estimation accuracy can be obtained in real-world image sequences.
The Role of Content Knowledge in Ill-Structured Problem Solving for High School Physics Students
ERIC Educational Resources Information Center
Milbourne, Jeff; Wiebe, Eric
2018-01-01
While Physics Education Research has a rich tradition of problem-solving scholarship, most of the work has focused on more traditional, well-defined problems. Less work has been done with ill-structured problems, problems that are better aligned with the engineering and design-based scenarios promoted by the Next Generation Science Standards. This…
Problem-Based Learning: Using Ill-Structured Problems in Biology Project Work
ERIC Educational Resources Information Center
Chin, Christine; Chia, Li-Gek
2006-01-01
This case study involved year 9 students carrying out project work in biology via problem-based learning. The purpose of the study was to (a) find out how students approach and work through ill-structured problems, (b) identify some issues and challenges related to the use of such problems, and (c) offer some practical suggestions on the…
Scaffolding Online Argumentation during Problem Solving
ERIC Educational Resources Information Center
Oh, S.; Jonassen, D. H.
2007-01-01
In this study, constraint-based argumentation scaffolding was proposed to facilitate online argumentation performance and ill-structured problem solving during online discussions. In addition, epistemological beliefs were presumed to play a role in solving ill-structured diagnosis-solution problems. Constraint-based discussion boards were…
The Spatial and the Visual in Mental Spatial Reasoning: An Ill-Posed Distinction
NASA Astrophysics Data System (ADS)
Schultheis, Holger; Bertel, Sven; Barkowsky, Thomas; Seifert, Inessa
It is an ongoing and controversial debate in cognitive science which aspects of knowledge humans process visually and which ones they process spatially. Similarly, artificial intelligence (AI) and cognitive science research, in building computational cognitive systems, tended to use strictly spatial or strictly visual representations. The resulting systems, however, were suboptimal both with respect to computational efficiency and cognitive plausibility. In this paper, we propose that the problems in both research strands stem from a misconception of the visual and the spatial in mental spatial knowledge processing. Instead of viewing the visual and the spatial as two clearly separable categories, they should be conceptualized as the extremes of a continuous dimension of representation. Regarding psychology, a continuous dimension avoids the need to exclusively assign processes and representations to either one of the categories and, thus, facilitates a more unambiguous rating of processes and representations. Regarding AI and cognitive science, the concept of a continuous spatial/visual dimension provides the possibility of representation structures which can vary continuously along the spatial/visual dimension. As a first step in exploiting these potential advantages of the proposed conception we (a) introduce criteria allowing for a non-dichotomic judgment of processes and representations and (b) present an approach towards representation structures that can flexibly vary along the spatial/visual dimension.
Problems faced and coping strategies used by adolescents with mentally ill parents in Delhi.
George, Shoba; Shaiju, Bindu; Sharma, Veena
2012-01-01
The present study was conducted to assess the problems faced by adolescents whose parents suffer from major mental illness at selected mental health institutes of Delhi. The objectives also included assessment of the coping strategies of the adolescents in dealing with these problems. The Stuart Stress Adaptation Model of Psychiatric Nursing Care was used as the conceptual framework. A descriptive survey approach with cross-sectional design was used in the study. A structured interview schedule was prepared. Purposive non-probability sampling technique was employed to interview 50 adolescents whose parents suffer from major mental illness. Data gathered were analysed and interpreted using both descriptive and inferential statistics. The study showed that the majority of the adolescents had moderate problems as a result of their parent's mental illness. Area-wise analysis of the problems revealed that the greatest problems were in family relationships and support, and that the majority of the adolescents used maladaptive coping strategies. A set of guidelines on effective coping strategies was disseminated to these adolescents.
Combination therapy for malaria in Africa: hype or hope?
Bloland, P. B.; Ettling, M.; Meek, S.
2000-01-01
The development of resistance to drugs poses one of the greatest threats to malaria control. In Africa, the efficacy of readily affordable antimalarial drugs is declining rapidly, while highly efficacious drugs tend to be too expensive. Cost-effective strategies are needed to extend the useful life spans of antimalarial drugs. Observations in South-East Asia on combination therapy with artemisinin derivatives and mefloquine indicate that the development of resistance to both components is slowed down. This suggests the possibility of a solution to the problem of drug resistance in Africa, where, however, there are major obstacles in the way of deploying combination therapy effectively. The rates of transmission are relatively high, a large proportion of asymptomatic infection occurs in semi-immune persons, the use of drugs is frequently inappropriate and ill-informed, there is a general lack of laboratory diagnoses, and public health systems in sub-Saharan Africa are generally weak. Furthermore, the cost of combination therapy is comparatively high. We review combination therapy as used in South-East Asia and outline the problems that have to be overcome in order to adopt it successfully in sub-Saharan Africa. PMID:11196485
NASA Astrophysics Data System (ADS)
Vogelgesang, Jonas; Schorr, Christian
2016-12-01
We present a semi-discrete Landweber-Kaczmarz method for solving linear ill-posed problems and its application to Cone Beam tomography and laminography. Using a basis function-type discretization in the image domain, we derive a semi-discrete model of the underlying scanning system. Based on this model, the proposed method provides an approximate solution of the reconstruction problem, i.e. reconstructing the density function of a given object from its projections, in suitable subspaces equipped with basis function-dependent weights. This approach intuitively allows the incorporation of additional information about the inspected object leading to a more accurate model of the X-rays through the object. Also, physical conditions of the scanning geometry, like flat detectors in computerized tomography as used in non-destructive testing applications as well as non-regular scanning curves e.g. appearing in computed laminography (CL) applications, are directly taken into account during the modeling process. Finally, numerical experiments of a typical CL application in three dimensions are provided to verify the proposed method. The introduction of geometric prior information leads to a significantly increased image quality and superior reconstructions compared to standard iterative methods.
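For orientation, a fully discrete Landweber-Kaczmarz sweep has the rough shape sketched below; the blocks A_i, b_i stand for individual projection subsystems and are placeholders, and the semi-discrete, basis-function-weighted variant of the paper is not reproduced here.

    # Generic (fully discrete) Landweber-Kaczmarz sweep: cycle over projection
    # blocks A_i x = b_i and apply a damped gradient step for each block.
    import numpy as np

    def landweber_kaczmarz(blocks, x0, step=1.0, sweeps=20):
        """blocks: list of (A_i, b_i) matrix/vector pairs; x0: initial image vector."""
        x = x0.copy()
        for _ in range(sweeps):
            for A_i, b_i in blocks:          # one projection block ("view") at a time
                r = A_i @ x - b_i            # residual of the current block
                x -= step / np.linalg.norm(A_i, 2) ** 2 * (A_i.T @ r)
        return x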
Non-rigid image registration using graph-cuts.
Tang, Tommy W H; Chung, Albert C S
2007-01-01
Non-rigid image registration is an ill-posed and challenging problem due to its extremely high number of degrees of freedom and its inherent requirement of smoothness. The graph-cuts method is a powerful combinatorial optimization tool which has been successfully applied to image segmentation and stereo matching. Under some specific constraints, the graph-cuts method yields either a global minimum or a local minimum in a strong sense. Thus, it is interesting to see the effects of using graph-cuts in non-rigid image registration. In this paper, we formulate non-rigid image registration as a discrete labeling problem. Each pixel in the source image is assigned a displacement label (which is a vector) indicating the position in the floating image to which it spatially corresponds. A smoothness constraint based on the first derivative is used to penalize sharp changes in displacement labels across pixels. The whole system can be optimized by using the graph-cuts method via alpha-expansions. We compare 2D and 3D registration results of our method with two state-of-the-art approaches. It is found that our method is more robust across different challenging non-rigid registration cases and achieves higher registration accuracy.
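A minimal sketch of the discrete labeling energy described above follows. It only evaluates the data and first-derivative smoothness terms for a candidate displacement labeling; the actual minimization via graph cuts / alpha-expansions would be delegated to a dedicated solver, and all names here are illustrative.

    # Evaluate the MRF energy of a candidate per-pixel displacement labeling.
    import numpy as np

    def registration_energy(src, flt, labels, displacements, lam=1.0):
        """src, flt: 2D images; labels: per-pixel index into the displacement set."""
        h, w = src.shape
        data = 0.0
        for y in range(h):
            for x in range(w):
                dy, dx = displacements[labels[y, x]]
                yy, xx = np.clip(y + dy, 0, h - 1), np.clip(x + dx, 0, w - 1)
                data += (src[y, x] - flt[yy, xx]) ** 2      # data (dissimilarity) term
        disp = np.asarray([displacements[l] for l in labels.ravel()]).reshape(h, w, 2)
        smooth = np.abs(np.diff(disp, axis=0)).sum() + np.abs(np.diff(disp, axis=1)).sum()
        return data + lam * smooth                          # total MRF energy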
NASA Astrophysics Data System (ADS)
Yang, Lei; Tian, Jie; Wang, Xiaoxiang; Hu, Jin
2005-04-01
The comprehensive understanding of human emotion processing requires consideration of both the spatial distribution and the temporal sequencing of neural activity. The aim of our work is to identify brain regions involved in emotional recognition and to follow the time sequence at millisecond-range resolution. The effect of visual stimuli from the International Affective Picture System (IAPS) on activation in different genders was examined. Hemodynamic and electrophysiological responses were measured in the same subjects. Both fMRI and ERP were employed in an event-related study. fMRI data were obtained with a 3.0 T Siemens Magnetom whole-body MRI scanner. 128-channel ERP data were recorded using an EGI system. ERP is sensitive to millisecond changes in mental activity, but source localization and timing are limited by the ill-posed 'inverse' problem. In this study we investigate the ERP source reconstruction problem using an fMRI constraint. We chose ICA as a pre-processing step of ERP source reconstruction to exclude artifacts and provide a prior estimate of the number of dipoles. The results indicate that males and females show differences in neural mechanisms during emotional visual stimulation.
Regularized two-step brain activity reconstruction from spatiotemporal EEG data
NASA Astrophysics Data System (ADS)
Alecu, Teodor I.; Voloshynovskiy, Sviatoslav; Pun, Thierry
2004-10-01
We are aiming at using EEG source localization in the framework of a Brain Computer Interface project. We propose here a new reconstruction procedure, targeting source (or equivalently mental task) differentiation. EEG data can be thought of as a collection of time continuous streams from sparse locations. The measured electric potential on one electrode is the result of the superposition of synchronized synaptic activity from sources in all the brain volume. Consequently, the EEG inverse problem is a highly underdetermined (and ill-posed) problem. Moreover, each source contribution is linear with respect to its amplitude but non-linear with respect to its localization and orientation. In order to overcome these drawbacks we propose a novel two-step inversion procedure. The solution is based on a double scale division of the solution space. The first step uses a coarse discretization and has the sole purpose of globally identifying the active regions, via a sparse approximation algorithm. The second step is applied only on the retained regions and makes use of a fine discretization of the space, aiming at detailing the brain activity. The local configuration of sources is recovered using an iterative stochastic estimator with adaptive joint minimum energy and directional consistency constraints.
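The sketch below captures the two-step spirit of the procedure under simplifying assumptions: a sparse coarse-grid solve (here an off-the-shelf Lasso, not the paper's sparse approximation algorithm) flags active regions, and a plain least-squares fit on the retained fine-grid columns stands in for the paper's constrained stochastic estimator. L_coarse, L_fine and region_of_fine_col are hypothetical lead-field inputs.

    # Two-step "coarse sparse, then fine restricted" inversion sketch.
    import numpy as np
    from sklearn.linear_model import Lasso

    def two_step_inverse(L_coarse, L_fine, region_of_fine_col, eeg, alpha=0.1, thresh=1e-6):
        # Step 1: sparse approximation on the coarse grid to flag active regions.
        coarse = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(L_coarse, eeg)
        active = {i for i, c in enumerate(coarse.coef_) if abs(c) > thresh}
        # Step 2: detailed estimate using only fine sources inside active regions.
        keep = [j for j, r in enumerate(region_of_fine_col) if r in active]
        x_fine, *_ = np.linalg.lstsq(L_fine[:, keep], eeg, rcond=None)
        full = np.zeros(L_fine.shape[1])
        full[keep] = x_fine
        return full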
Smart markers for watershed-based cell segmentation.
Koyuncu, Can Fahrettin; Arslan, Salim; Durmaz, Irem; Cetin-Atalay, Rengul; Gunduz-Demir, Cigdem
2012-01-01
Automated cell imaging systems facilitate fast and reliable analysis of biological events at the cellular level. In these systems, the first step is usually cell segmentation that greatly affects the success of the subsequent system steps. On the other hand, similar to other image segmentation problems, cell segmentation is an ill-posed problem that typically necessitates the use of domain-specific knowledge to obtain successful segmentations even by human subjects. The approaches that can incorporate this knowledge into their segmentation algorithms have potential to greatly improve segmentation results. In this work, we propose a new approach for the effective segmentation of live cells from phase contrast microscopy. This approach introduces a new set of "smart markers" for a marker-controlled watershed algorithm, for which the identification of its markers is critical. The proposed approach relies on using domain-specific knowledge, in the form of visual characteristics of the cells, to define the markers. We evaluate our approach on a total of 1,954 cells. The experimental results demonstrate that this approach, which uses the proposed definition of smart markers, is quite effective in identifying better markers compared to its counterparts. This will, in turn, be effective in improving the segmentation performance of a marker-controlled watershed algorithm.
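For context, a generic marker-controlled watershed looks like the scikit-image sketch below. The "smart markers" of the paper encode domain-specific visual characteristics of the cells, whereas this toy version simply uses distance-transform peaks as markers.

    # Generic marker-controlled watershed segmentation of a binary cell mask.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    def segment_cells(binary_mask):
        distance = ndi.distance_transform_edt(binary_mask)
        peaks = peak_local_max(distance, labels=binary_mask.astype(int), min_distance=5)
        markers = np.zeros_like(binary_mask, dtype=int)
        markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)   # one marker per peak
        return watershed(-distance, markers, mask=binary_mask)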
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-01-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452
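A compact way to state the optimization problem described above (non-negativity plus L1 regularization of the relaxation distribution) is sketched below, with a generic convex solver (cvxpy) standing in for PDCO; the kernel, time grid and noise level are toy values.

    # L1-regularized, non-negative inversion of a discrete Laplace kernel.
    import numpy as np
    import cvxpy as cp

    t = np.linspace(1e-3, 1.0, 200)           # acquisition times
    T2 = np.logspace(-3, 0, 100)              # relaxation-time grid
    K = np.exp(-t[:, None] / T2[None, :])     # discrete Laplace kernel
    signal = K @ np.maximum(0, np.random.randn(100)) + 1e-3 * np.random.randn(200)

    x = cp.Variable(100, nonneg=True)          # non-negativity constraint
    lam = 1e-2
    objective = cp.Minimize(cp.sum_squares(K @ x - signal) + lam * cp.norm1(x))
    cp.Problem(objective).solve()
    distribution = x.value                     # recovered relaxation distribution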
Iterative Nonlinear Tikhonov Algorithm with Constraints for Electromagnetic Tomography
NASA Technical Reports Server (NTRS)
Xu, Feng; Deshpande, Manohar
2012-01-01
Low frequency electromagnetic tomography, such as capacitance tomography (ECT), has been proposed for monitoring and mass-gauging of gas-liquid two-phase systems under microgravity conditions in NASA's future long-term space missions. Due to the ill-posed inverse problem of ECT, images reconstructed using conventional linear algorithms often suffer from limitations such as low resolution and blurred edges. Hence, new efficient high resolution nonlinear imaging algorithms are needed for accurate two-phase imaging. The proposed Iterative Nonlinear Tikhonov Regularized Algorithm with Constraints (INTAC) is based on an efficient finite element method (FEM) forward model of the quasi-static electromagnetic problem. It iteratively minimizes the discrepancy between FEM-simulated and actually measured capacitances by adjusting the reconstructed image using the Tikhonov regularized method. More importantly, in each iteration it enforces the known permittivity of the two phases on the unknown pixels which exceed the reasonable range of permittivity. This strategy not only stabilizes the convergence process, but also produces sharper images. Simulations show that a resolution improvement of over 2 times can be achieved by INTAC with respect to conventional approaches. Strategies to further improve spatial imaging resolution are suggested, as well as techniques to accelerate the nonlinear forward model and thus increase the temporal resolution.
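A hedged sketch of the kind of iteration described (linearize, take a Tikhonov-regularized update, clamp to the admissible permittivity range) is given below; forward() and jacobian() are placeholders for an FEM model of the sensor, and this illustrates the general scheme rather than the INTAC implementation itself.

    # Gauss-Newton/Tikhonov iteration with permittivity clamping.
    import numpy as np

    def intac_like(forward, jacobian, c_meas, eps0, eps_low, eps_high, alpha=1e-2, iters=20):
        eps = eps0.copy()
        for _ in range(iters):
            J = jacobian(eps)                              # sensitivity matrix
            r = c_meas - forward(eps)                      # capacitance mismatch
            # Tikhonov-regularized Gauss-Newton step.
            delta = np.linalg.solve(J.T @ J + alpha * np.eye(J.shape[1]), J.T @ r)
            eps = np.clip(eps + delta, eps_low, eps_high)  # enforce known two-phase bounds
        return eps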
Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.
Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev
2013-05-01
Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.
Incorporating contact angles in the surface tension force with the ACES interface curvature scheme
NASA Astrophysics Data System (ADS)
Owkes, Mark
2017-11-01
In simulations of gas-liquid flows interacting with solid boundaries, the contact line dynamics affect the interface motion and the flow field through the surface tension force. The surface tension force is directly proportional to the interface curvature, and the problem of accurately imposing a contact angle must be incorporated into the interface curvature calculation. Many commonly used algorithms to compute interface curvatures (e.g., the height function method) require extrapolating the interface, with a defined contact angle, into the solid to allow for the calculation of a curvature near a wall. Extrapolating can be an ill-posed problem, especially in three dimensions or when multiple contact lines are near each other. We have developed an accurate methodology to compute interface curvatures that allows for contact angles to be easily incorporated while avoiding extrapolation and the associated challenges. The method, known as Adjustable Curvature Evaluation Scale (ACES), leverages a least squares fit of a polynomial to points computed on the volume-of-fluid (VOF) representation of the gas-liquid interface. The method is tested by simulating canonical test cases and then applied to simulate the injection and motion of water droplets in a channel (relevant to PEM fuel cells).
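The least-squares idea can be illustrated in a much-reduced 2D form: fit a quadratic to points sampled on the interface and evaluate its curvature, as in the toy snippet below (the real ACES scheme operates on 3D VOF data and folds the contact angle into the point set).

    # Fit y = a x^2 + b x + c to interface points and evaluate curvature at x = 0.
    import numpy as np

    def curvature_from_points(xs, ys):
        a, b, _ = np.polyfit(xs, ys, 2)                 # least-squares quadratic fit
        return 2 * a / (1 + b ** 2) ** 1.5              # kappa = y'' / (1 + y'^2)^(3/2) at x = 0

    # Points sampled (with noise) from a circle of radius 2 -> curvature ~ 0.5.
    x = np.linspace(-0.5, 0.5, 11)
    y = 2 - np.sqrt(4 - x ** 2) + 1e-4 * np.random.randn(11)
    print(curvature_from_points(x, y))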
Evaluation of uncertainty for regularized deconvolution: A case study in hydrophone measurements.
Eichstädt, S; Wilkens, V
2017-06-01
An estimation of the measurand in dynamic metrology usually requires a deconvolution based on a dynamic calibration of the measuring system. Since deconvolution is, mathematically speaking, an ill-posed inverse problem, some kind of regularization is required to render the problem stable and obtain usable results. Many approaches to regularized deconvolution exist in the literature, but the corresponding evaluation of measurement uncertainties is, in general, an unsolved issue. In particular, the uncertainty contribution of the regularization itself is a topic of great importance, because it has a significant impact on the estimation result. Here, a versatile approach is proposed to express prior knowledge about the measurand based on a flexible, low-dimensional modeling of an upper bound on the magnitude spectrum of the measurand. This upper bound allows the derivation of an uncertainty associated with the regularization method in line with the guidelines in metrology. As a case study for the proposed method, hydrophone measurements in medical ultrasound with an acoustic working frequency of up to 7.5 MHz are considered, but the approach is applicable for all kinds of estimation methods in dynamic metrology, where regularization is required and which can be expressed as a multiplication in the frequency domain.
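As background, a generic frequency-domain regularized deconvolution has the form sketched below; the regularization weight beta is exactly the kind of choice whose uncertainty contribution the paper addresses (the spectral-upper-bound model itself is not reproduced here).

    # Tikhonov/Wiener-style regularized deconvolution in the frequency domain.
    import numpy as np

    def regularized_deconvolution(measured, impulse_response, beta=1e-3):
        n = len(measured)
        H = np.fft.rfft(impulse_response, n)               # system frequency response
        Y = np.fft.rfft(measured, n)
        X_hat = np.conj(H) * Y / (np.abs(H) ** 2 + beta)    # regularized inverse filter
        return np.fft.irfft(X_hat, n)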
Chambers, L W; Shimoda, F; Walter, S D; Pickard, L; Hunter, B; Ford, J; Deivanayagam, N; Cunningham, I
1989-01-01
The Hamilton-Wentworth regional health department was asked by one of its municipalities to determine whether the present water supply and sewage disposal methods used in a community without piped water and regional sewage disposal posed a threat to the health of its residents. Three approaches were used: assessments by public health inspectors of all households; bacteriological and chemical analyses of water samples; and completion of a specially designed questionnaire by residents in the target community and a control community. 89% of the 227 residences in the target community were found to have a drinking water supply that, according to the Ministry of Environment guidelines, was unsafe and/or unsatisfactory. According to on-site inspections, 32% of households had sewage disposal problems. Responses to the questionnaire revealed that the target community residents reported more symptoms associated with enteric infections due to the water supply. Two of these symptoms, diarrhea and stomach cramps, had a relative risk of 2.2 when compared to the control community (p less than 0.05). The study was successfully used by the municipality to argue for provincial funding of piped water.
Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms
NASA Astrophysics Data System (ADS)
Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy
2013-04-01
Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms due to computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the minimization problem. Using l1 norms on the data and regularization terms in EIT image reconstruction addresses both the problem of reconstructing sharp edges and that of dealing with measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with the l2 norm. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.
A Model of Self-Regulation for Control of Chronic Disease
ERIC Educational Resources Information Center
Clark, Noreen M.; Gong, Molly; Kaciroti, Niko
2014-01-01
Chronic disease poses increasing threat to individual and community health. The day-to-day manager of disease is the patient who undertakes actions with the guidance of a clinician. The ability of the patient to control the illness through an effective therapeutic plan is significantly influenced by social and behavioral factors. This article…
Influenza Vaccines: Challenges and Solutions
Houser, Katherine; Subbarao, Kanta
2015-01-01
Vaccination is the best method for the prevention and control of influenza. Vaccination can reduce illness and lessen severity of infection. This review focuses on how currently licensed influenza vaccines are generated in the U.S., why the biology of influenza poses vaccine challenges, and vaccine approaches on the horizon that address these challenges. PMID:25766291
Viewpoint Invariant Gesture Recognition and 3D Hand Pose Estimation Using RGB-D
ERIC Educational Resources Information Center
Doliotis, Paul
2013-01-01
The broad application domain of the work presented in this thesis is pattern classification with a focus on gesture recognition and 3D hand pose estimation. One of the main contributions of the proposed thesis is a novel method for 3D hand pose estimation using RGB-D. Hand pose estimation is formulated as a database retrieval problem. The proposed…
A perspective on the future public health: an integrative and ecological framework.
Hanlon, Phil; Carlisle, Sandra; Hannah, Margaret; Lyon, Andrew; Reilly, David
2012-11-01
Modernity has brought health and social benefits to many societies, not least through the insights of science and technology. Yet, modernity has also been associated with a number of cultural characteristics, such as materialism, individualism, consumerism and an addiction to continuing economic growth, that seem potentially harmful to health and well-being and inimical to social equity. There is an emerging body of evidence that suggests that, in the affluent world, some of our most intractable contemporary health problems are, in fact, the product of modernity. This suggests that the tools of modernity (its science and its technology) are ill suited to finding solutions. This poses a problem for public health, as this discipline is itself a product of modernity and thus appears ill equipped to deal with the conditions and challenges of a rapidly changing and unstable world, one where the very sustainability of human society is now in question. This paper argues that a new paradigm for the future public health is needed. It presents an integrative, ecological framework as a starting point from which public health might grasp the opportunities for change inherent in the 'modern' threats we face. It suggests a number of features that will need to underpin such a paradigm shift in thinking and practice. However, as this paper is written from the perspective of an affluent, developed society (albeit from a perspective that is explicitly critical of the goals, trends and values that seem to characterise such societies), other voices from other places need to be heard. We hope that others will want to engage with our arguments and suggestions, whether to challenge and refute these, or to further their development.
Care Coordination for the Chronically Ill: Understanding the Patient's Perspective
Maeng, Daniel D; Martsolf, Grant R; Scanlon, Dennis P; Christianson, Jon B
2012-01-01
Objective To identify factors associated with perception of care coordination problems among chronically ill patients. Methods Patient-level data were obtained from a random-digit dial telephone survey of adults with chronic conditions. The survey measured respondents' self-report of care coordination problems and level of patient activation, using the Patient Activation Measure (PAM-13). Logistic regression was used to assess association between respondents' self-report of care coordination problems and a set of patient characteristics. Results Respondents in the highest activation stage had roughly 30–40 percent lower odds of reporting care coordination problems compared to those in the lowest stage (p < .01). Respondents with multiple chronic conditions were significantly more likely to report coordination problems than those with hypertension only. Respondents' race/ethnicity, employment, insurance status, income, and length of illness were not significantly associated with self-reported care coordination problems. Conclusion We conclude that patient activation and complexity of chronic illness are strongly associated with patients' self-report of care coordination problems. Developing targeted strategies to improve care coordination around these patient characteristics may be an effective way to address the issue. PMID:22985032
Development of a Mobile Learning System Based on a Collaborative Problem-Posing Strategy
ERIC Educational Resources Information Center
Sung, Han-Yu; Hwang, Gwo-Jen; Chang, Ya-Chi
2016-01-01
In this study, a problem-posing strategy is proposed for supporting collaborative mobile learning activities. Accordingly, a mobile learning environment has been developed, and an experiment on a local culture course has been conducted to evaluate the effectiveness of the proposed approach. Three classes of an elementary school in southern Taiwan…
ERIC Educational Resources Information Center
Yilmaz, Yasemin; Durmus, Soner; Yaman, Hakan
2018-01-01
This study investigated the pattern problems posed by middle school mathematics preservice teachers using multiple representations to determine both their pattern knowledge levels and their abilities to transfer this knowledge to students. The design of the study is the survey method, one of the quantitative research methods. The study group was…
ERIC Educational Resources Information Center
Wang, Xiao-Ming; Hwang, Gwo-Jen
2017-01-01
Computer programming is a subject that requires problem-solving strategies and involves a great number of programming logic activities which pose challenges for learners. Therefore, providing learning support and guidance is important. Collaborative learning is widely believed to be an effective teaching approach; it can enhance learners' social…
ERIC Educational Resources Information Center
Solórzano, Lorena Salazar
2015-01-01
Beginning university training programs must focus on different competencies for mathematics teachers, i.e., not only on solving problems, but also on posing them and analyzing the mathematical activity. This paper reports the results of an exploratory study conducted with future secondary school mathematics teachers on the introduction of…
Seismic velocity estimation from time migration
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, Maria Kourkina
2007-01-01
This work is concerned with imaging and wave propagation in nonhomogeneous media, and includes a collection of computational techniques, such as level set methods with material transport, Dijkstra-like Hamilton-Jacobi solvers for first arrival Eikonal equations and techniques for data smoothing. The theoretical components include aspects of seismic ray theory, and the results rely on careful comparison with experiment and incorporation as input into large production-style geophysical processing codes. Producing an accurate image of the Earth's interior is a challenging aspect of oil recovery and earthquake analysis. The ultimate computational goal, which is to accurately produce a detailed interior map of the Earth's makeup on the basis of external soundings and measurements, is currently out of reach for several reasons. First, although vast amounts of data have been obtained in some regions, this has not been done uniformly, and the data contain noise and artifacts. Simply sifting through the data is a massive computational job. Second, the fundamental inverse problem, namely to deduce the local sound speeds of the earth that give rise to measured reflected signals, is exceedingly difficult: shadow zones and complex structures can make for ill-posed problems, and require vast computational resources. Nonetheless, seismic imaging is a crucial part of the oil and gas industry. Typically, one makes assumptions about the earth's substructure (such as laterally homogeneous layering), and then uses this model as input to an iterative procedure to build perturbations that more closely satisfy the measured data. Such models often break down when the material substructure is significantly complex: not surprisingly, this is often where the most interesting geological features lie. Data often come in a particular, somewhat non-physical coordinate system, known as time migration coordinates. The construction of substructure models from these data is less and less reliable as the earth becomes horizontally nonconstant. Even mild lateral velocity variations can significantly distort subsurface structures on the time migrated images. Conversely, depth migration provides the potential for more accurate reconstructions, since it can handle significant lateral variations. However, this approach requires good input data, known as a 'velocity model'. We address the problem of estimating seismic velocities inside the earth, i.e., the problem of constructing a velocity model, which is necessary for obtaining seismic images in regular Cartesian coordinates. The main goals are to develop algorithms to convert time-migration velocities to true seismic velocities, and to convert time-migrated images to depth images in regular Cartesian coordinates. Our main results are three-fold. First, we establish a theoretical relation between the true seismic velocities and the 'time migration velocities' using the paraxial ray tracing. Second, we formulate an appropriate inverse problem describing the relation between time migration velocities and depth velocities, and show that this problem is mathematically ill-posed, i.e., unstable to small perturbations. Third, we develop numerical algorithms to solve regularized versions of these equations which can be used to recover smoothed velocity variations. Our algorithms consist of efficient time-to-depth conversion algorithms, based on Dijkstra-like Fast Marching Methods, as well as level set and ray tracing algorithms for transforming Dix velocities into seismic velocities.
Our algorithms are applied to both two-dimensional and three-dimensional problems, and we test them on a collection of both synthetic examples and field data.
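The classical Dix step that such time-to-depth workflows build on can be written in a few lines (textbook formula, not the regularized algorithms developed in the thesis):

    # Standard Dix conversion: interval velocities from rms (time-migration) velocities.
    import numpy as np

    def dix_interval_velocities(t, v_rms):
        """t: two-way times (s), v_rms: rms velocities (m/s); both 1D, same length."""
        num = v_rms[1:] ** 2 * t[1:] - v_rms[:-1] ** 2 * t[:-1]
        return np.sqrt(num / (t[1:] - t[:-1]))

    t = np.array([0.5, 1.0, 1.5, 2.0])
    v = np.array([2000.0, 2200.0, 2350.0, 2500.0])
    print(dix_interval_velocities(t, v))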
[Health problems and illness of female workers in textile industries].
Soonthorndhada, K
1989-07-01
This paper examines 3 major health-related issues: 1) existing health problems and illnesses resulting from physical environmental conditions at workplaces; 2) female workers' perception of illness and health protection; and 3) the relationship between illness and risk factors. The study area is textile factories in Bangkok and its peripheries. Data are drawn from the 1987 Survey of Occupational Health and Textile Industrial Development in Thailand: Effect on Health and Socioeconomics of Female Migrant Workers. This study shows that about 20% of female workers have ill-health problems and illness after a period of working, mainly due to high levels of dust and noise and inadequate light. These conditions are hazardous to the respiratory system (resulting in cough and chest tightness), the hearing system (pain as well as impaired hearing and hearing loss) and the eyes (irritation, reduced visual capacity), and can cause skin allergies. Such illnesses are intensified in the long run. The analysis of variances reveals that education, section of work, and perception (particularly of masks and ear plugs) significantly affect these illnesses. This study concludes that health education and occupational health should be provided in factories, with emphasis on health prevention and promotion.
Rethinking 'risk' and self-management for chronic illness.
Morden, Andrew; Jinks, Clare; Ong, Bie Nio
2012-02-01
Self-management for chronic illness is a current high profile UK healthcare policy. Policy and clinical recommendations relating to chronic illnesses are framed within a language of lifestyle risk management. This article argues the enactment of risk within current UK self-management policy is intimately related to neo-liberal ideology and is geared towards population governance. The approach that dominates policy perspectives to 'risk' management is critiqued for positioning people as rational subjects who calculate risk probabilities and act upon them. Furthermore this perspective fails to understand the lay person's construction and enactment of risk, their agenda and contextual needs when living with chronic illness. Of everyday relevance to lay people is the management of risk and uncertainty relating to social roles and obligations, the emotions involved when encountering the risk and uncertainty in chronic illness, and the challenges posed by social structural factors and social environments that have to be managed. Thus, clinical enactments of self-management policy would benefit from taking a more holistic view to patient need and seek to avoid solely communicating lifestyle risk factors to be self-managed.
ERIC Educational Resources Information Center
Lyons, Zaza; Hood, Sean
2011-01-01
The stigmatisation of mental illness in Australian and other Western societies is now well documented. This article presents a description of the "stigmatisation" problem associated with mental illness, and discusses the impact that this problem has had on the demand for Psychiatry as a career. The approach taken at UWA to address the…
Stigmatization of mental illness among Nigerian schoolchildren.
Ronzoni, Pablo; Dogra, Nisha; Omigbodun, Olayinka; Bella, Tolulope; Atitola, Olayinka
2010-09-01
Despite the fact that about 10% of children experience mental health problems, they tend to hold negative views about mental illness. The objective of this study was to investigate the views of Nigerian schoolchildren towards individuals with mental illness or mental health problems. A cross-sectional design was used. Junior and senior secondary schoolchildren from rural and urban southwest Nigeria were asked: 'What sorts of words or phrases might you use to describe someone who experiences mental health problems?' The responses were tabulated, grouped and interpreted by qualitative thematic analysis. Of 164 students, 132 (80.5%) responded to the question. Six major themes emerged from the answers. The most popular descriptions were 'derogatory terms' (33%). This was followed by 'abnormal appearance and behaviour' (29.6%); 'don't know' answers (13.6%); 'physical illness and disability' (13.6%); 'negative emotional states' (6.8%); and 'language and communication difficulties' (3.4%). The results suggest that, similar to findings elsewhere, stigmatization of mental illness is highly prevalent among Nigerian children. This may be underpinned by lack of knowledge regarding mental health problems and/or fuelled by the media. Educational interventions and encouraging contact with mentally ill persons could play a role in reducing stigma among schoolchildren.
Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.
Lee, Donghwa; Myung, Hyun
2014-07-11
In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. Low dynamic environments refer to situations in which the positions of objects change over long intervals. Therefore, in low dynamic environments, robots have difficulty recognizing the repositioning of objects, unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environments then cause groups of false loop closings when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, which represent robot poses, are grouped according to the grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.
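A much-simplified sketch of the pruning idea is shown below: score each loop-closure constraint against the current pose estimates with a Mahalanobis (chi-square) error and drop inconsistent ones before re-optimizing. The simple vector pose model and the reoptimize() call are hypothetical stand-ins for a real pose-graph solver and do not reflect the paper's grouping rules.

    # Prune loop-closure constraints that disagree with the current pose estimates.
    import numpy as np

    def prune_constraints(poses, constraints, chi2_thresh=9.0):
        kept = []
        for (i, j, z_ij, info) in constraints:          # measured relative pose + information matrix
            pred = poses[j] - poses[i]                  # predicted relative pose (toy vector model)
            err = pred - z_ij
            chi2 = float(err.T @ info @ err)            # Mahalanobis error of the constraint
            if chi2 < chi2_thresh:
                kept.append((i, j, z_ij, info))         # keep only consistent constraints
        return kept

    # poses = reoptimize(poses, prune_constraints(poses, constraints))  # hypothetical solver call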
Effects of adaptive refinement on the inverse EEG solution
NASA Astrophysics Data System (ADS)
Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.
1995-10-01
One of the fundamental problems in electroencephalography can be characterized as an inverse problem. Given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we have constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.
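For reference, the quasi-static bioelectric Poisson problem referred to above is commonly written in the following standard form (textbook notation, not taken from the paper):

    \nabla \cdot (\sigma \nabla \phi) = \nabla \cdot \mathbf{J}^{p} \quad \text{in } \Omega,
    \qquad
    \sigma \, \frac{\partial \phi}{\partial n} = 0 \quad \text{on } \partial \Omega,

where \phi is the electric potential, \sigma the conductivity tensor, and \mathbf{J}^{p} the primary (source) current density; the inverse problem seeks \mathbf{J}^{p} from measurements of \phi on part of \partial\Omega.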
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
Greenberg, Henry; Shiau, Stephanie
2014-09-01
The Trans Pacific Partnership Agreement (TPPA) is a regional trade agreement currently being negotiated by 11 Pacific Rim countries, excluding China. While the negotiations are being conducted under a veil of secrecy, substantive leaks over the past 4 years have revealed a broad view of the proposed contents. As it stands the TPPA poses serious risks to global public health, particularly chronic, non-communicable diseases. At greatest risk are national tobacco regulations, regulations governing the emergence of generic drugs and controls over food imports by transnational corporations. Aside from a small group of public health professionals from Australia, the academic public health community has missed these threats to the global community, although many other health-related entities, international lawyers and health-conscious politicians have voiced serious concerns. As of mid-2014 there has been no comment in the leading public health journals. This large lacuna in interest or recognition reflects the larger problem that the public health education community has all but ignored global non-communicable diseases. Without such a focus, the risks are unseen and the threats not perceived. This cautionary tale of the TPPA reflects the vulnerability of being ill informed of contemporary realities.
ERIC Educational Resources Information Center
Lavy, Ilana; Shriki, Atara
2010-01-01
In the present study we explore changes in perceptions of our class of prospective mathematics teachers (PTs) regarding their mathematical knowledge. The PTs engaged in problem posing activities in geometry, using the "What If Not?" (WIN) strategy, as part of their work on computerized inquiry-based activities. Data received from the PTs'…
Mathematical Problem Posing as a Measure of Curricular Effect on Students' Learning
ERIC Educational Resources Information Center
Cai, Jinfa; Moyer, John C.; Wang, Ning; Hwang, Stephen; Nie, Bikai; Garber, Tammy
2013-01-01
In this study, we used problem posing as a measure of the effect of middle-school curriculum on students' learning in high school. Students who had used a standards-based curriculum in middle school performed equally well or better in high school than students who had used more traditional curricula. The findings from this study not only show…
Pose-Invariant Face Recognition via RGB-D Images.
Sang, Gaoli; Li, Jing; Zhao, Qijun
2016-01-01
Three-dimensional (3D) face models can intrinsically handle the large-pose face recognition problem. In this paper, we propose a novel pose-invariant face recognition method via RGB-D images. By employing depth, our method is able to handle self-occlusion and deformation, both of which are challenging problems in two-dimensional (2D) face recognition. Texture images in the gallery can be rendered to the same view as the probe via depth. Meanwhile, depth is also used for similarity measure via frontalization and symmetric filling. Finally, both texture and depth contribute to the final identity estimation. Experiments on Bosphorus, CurtinFaces, Eurecom, and Kiwi databases demonstrate that the additional depth information has improved the performance of face recognition with large pose variations and under even more challenging conditions.
How Instructional Design Experts Use Knowledge and Experience to Solve Ill-Structured Problems
ERIC Educational Resources Information Center
Ertmer, Peggy A.; Stepich, Donald A.; York, Cindy S.; Stickman, Ann; Wu, Xuemei (Lily); Zurek, Stacey; Goktas, Yuksel
2008-01-01
This study examined how instructional design (ID) experts used their prior knowledge and previous experiences to solve an ill-structured instructional design problem. Seven experienced designers used a think-aloud procedure to articulate their problem-solving processes while reading a case narrative. Results, presented in the form of four…
Messy Problems and Lay Audiences: Teaching Critical Thinking within the Finance Curriculum
ERIC Educational Resources Information Center
Carrithers, David; Ling, Teresa; Bean, John C.
2008-01-01
This article investigates the critical thinking difficulties of finance majors when asked to address ill-structured finance problems. The authors build on previous research in which they asked students to analyze an ill-structured investment problem and recommend a course of action. The results revealed numerous critical thinking weaknesses,…
Chen, Yung-Chi
2014-05-01
This study used data from Waves I and II of the Taiwan Educational Panel Survey (TEPS) to explore the potential short-term and long-term effects of parental illness and health condition on children's behavioral and educational functioning. A sample of 11,018 junior high school students and their parents and teachers in Taiwan were included in the present study. The results supported previous work that parental illness may place children at slight risk for poor psychosocial adjustment and behavioral problems. Parental illness was associated with lower adaptive skills and more behavioral problems in children. Children of ill parents nevertheless showed resilience in their educational functioning, as academic achievement and learning skills were not related to parental illness/health condition.
ERIC Educational Resources Information Center
Walsh, Irene P.
2008-01-01
Background: Some people with schizophrenia are considered to have communication difficulties because of concomitant language impairment and/or because of suppressed or "unusual" communication skills due to the often-chronic nature and manifestation of the illness process. Conversations with a person with schizophrenia pose many pragmatic…
How Do I Know if My Student Is Dangerous?
ERIC Educational Resources Information Center
Gecker, Ellen
2007-01-01
As a nurse who has worked at several campus health centers, including, before last spring's shootings, the health center at Virginia Tech, the author knows that more students than ever are entering college with diagnosed mental illness and are taking psychiatric medications. Very few of these students pose any threat to themselves or others. Only…
How Do I Know if My Student Is Dangerous?
ERIC Educational Resources Information Center
Gecker, Ellen
2008-01-01
As a nurse who has worked at several campus health centers, including, before last spring's shootings, the health center at Virginia Tech, the author knows that more students than ever have diagnosed mental illness and take psychiatric medications. Very few pose any threat to themselves or others. Only in extremely rare cases does mental illness…
ERIC Educational Resources Information Center
Koch, Steven; Borg, Terry
2011-01-01
An Illinois district brings a local university into the district to craft advanced learning embedded in the needs of specific schools. Community High School District 155 in Crystal Lake, Ill., and Northern Illinois University (NIU) College of Education engaged in a partnership that has provided significant benefits, posed limited challenges, and…
Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy
2014-11-01
In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Taking intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) as input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series from the acquired ultrasonic measurements. This problem is then treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity by employing SVD-based filtering in selecting the Zernike polynomials to be included in the series. The first approach, Tikhonov regularization without filtering, is adopted because it proves to be the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show a relative residual norm and an error in flow estimation of, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a physical flow model.
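To make the coefficient-inversion step concrete, a minimal Tikhonov sketch in Python is given below; the matrix Z of Zernike polynomials evaluated along the ultrasonic paths, the measurement vector tof, and the weight lam are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_zernike_tikhonov(Z, tof, lam=1e-2):
    """Sketch: Tikhonov-regularized least squares for the Zernike coefficients c,
    i.e. minimize ||Z c - tof||^2 + lam * ||c||^2 (all names assumed)."""
    n = Z.shape[1]
    # damped normal equations: (Z^T Z + lam I) c = Z^T tof
    c = np.linalg.solve(Z.T @ Z + lam * np.eye(n), Z.T @ tof)
    return c
```

The quadratically constrained ℓ1 alternative mentioned in the abstract would replace this ridge penalty with a sparsity-promoting one over the same coefficient vector.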
REVIEWS OF TOPICAL PROBLEMS: Prediction and discovery of new structures in spiral galaxies
NASA Astrophysics Data System (ADS)
Fridman, Aleksei M.
2007-02-01
A review is given of the last 20 years of published research into the nature, origin mechanisms, and observed features of spiral-vortex structures found in galaxies. The so-called rotating shallow water experiments are briefly discussed, carried out with a facility designed by the present author and built at the Russian Scientific Center 'Kurchatov Institute' to model the origin of galactic spiral structures. The discovery of new vortex-anticyclone structures in these experiments stimulated searching for them astronomically using the RAS Special Astrophysical Observatory's 6-meter BTA optical telescope, formerly the world's and now Europe's largest. Seven years after the pioneering experiments, Afanasyev and the present author discovered the predicted giant anticyclones in the galaxy Mrk 1040 by using BTA. Somewhat later, the theoretical prediction of giant cyclones in spiral galaxies was made, also to be verified by BTA afterwards. To use the observed line-of-sight velocity field for reconstructing the 3D velocity vector distribution in a galactic disk, a method for solving a problem from the class of ill-posed astrophysical problems was developed by the present author and colleagues. In addition to the vortex structure, other new features were discovered — in particular, slow bars (another theoretical prediction), for whose discovery an observational test capable of distinguishing them from their earlier-studied normal (fast) counterparts was designed.
Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart
2011-01-01
We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with the reproducing kernel structures adapted to the metrics of such a solution space. Next, the “model-free” variational analysis (VA)-based image enhancement approach and the “model-based” descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that aggregate the kernel adaptive anisotropic windowing with the projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered as a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of the DYED-related algorithms is constructed and their effectiveness is finally illustrated via numerical simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lawrence, Chris C.; Flaska, Marek; Pozzi, Sara A.
2016-08-14
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty verification challenges.
NASA Astrophysics Data System (ADS)
Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.
2016-08-01
Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix condition to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty verification challenges.
The use of the Kalman filter in the automated segmentation of EIT lung images.
Zifan, A; Liatsis, P; Chapman, B E
2013-06-01
In this paper, we present a new pipeline for the fast and accurate segmentation of impedance images of the lungs using electrical impedance tomography (EIT). EIT is an emerging, promising, non-invasive imaging modality that produces real-time, low spatial but high temporal resolution images of impedance inside a body. Recovering impedance itself constitutes a nonlinear ill-posed inverse problem; therefore, the problem is usually linearized, which produces impedance-change images rather than static impedance ones. Such images are highly blurry and fuzzy along object boundaries. We provide a mathematical reasoning behind the high suitability of the Kalman filter when it comes to segmenting and tracking conductivity changes in EIT lung images. Next, we use a two-fold approach to tackle the segmentation problem. First, we construct a global lung shape to restrict the search region of the Kalman filter. Next, we augment the Kalman filter by incorporating an adaptive foreground detection system to provide the boundary contours for the Kalman filter to carry out the tracking of the conductivity changes as the lungs undergo deformation in a respiratory cycle. The proposed method has been validated using performance statistics such as misclassified area and false positive rate, and compared to previous approaches. The results show that the proposed automated method can be a fast and reliable segmentation tool for EIT imaging.
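As background for the tracking step, the following is a minimal linear Kalman filter in Python; the state-transition, measurement, and noise matrices are generic placeholders rather than the paper's lung-contour model.

```python
import numpy as np

class KalmanTracker:
    """Minimal linear Kalman filter, as a sketch of how boundary parameters
    could be tracked frame to frame (models F, H, Q, R are assumptions)."""
    def __init__(self, F, H, Q, R, x0, P0):
        self.F, self.H, self.Q, self.R = F, H, Q, R
        self.x, self.P = x0, P0

    def predict(self):
        # propagate state and covariance through the motion model
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        # fuse the new measurement z with the prediction
        S = self.H @ self.P @ self.H.T + self.R        # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x
```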
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu Benzhuo; Holst, Michael J.; Center for Theoretical Biological Physics, University of California San Diego, La Jolla, CA 92093
2010-09-20
In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for simulating electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.
Lu, Benzhuo; Holst, Michael J; McCammon, J Andrew; Zhou, Y C
2010-09-20
In this paper we developed accurate finite element methods for solving 3-D Poisson-Nernst-Planck (PNP) equations with singular permanent charges for electrodiffusion in solvated biomolecular systems. The electrostatic Poisson equation was defined in the biomolecules and in the solvent, while the Nernst-Planck equation was defined only in the solvent. We applied a stable regularization scheme to remove the singular component of the electrostatic potential induced by the permanent charges inside biomolecules, and formulated regular, well-posed PNP equations. An inexact-Newton method was used to solve the coupled nonlinear elliptic equations for the steady problems; while an Adams-Bashforth-Crank-Nicolson method was devised for time integration for the unsteady electrodiffusion. We numerically investigated the conditioning of the stiffness matrices for the finite element approximations of the two formulations of the Nernst-Planck equation, and theoretically proved that the transformed formulation is always associated with an ill-conditioned stiffness matrix. We also studied the electroneutrality of the solution and its relation with the boundary conditions on the molecular surface, and concluded that a large net charge concentration is always present near the molecular surface due to the presence of multiple species of charged particles in the solution. The numerical methods are shown to be accurate and stable by various test problems, and are applicable to real large-scale biophysical electrodiffusion problems.
The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.
Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre
2016-10-01
Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the ℓ1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an ℓ0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
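A rough Python sketch of the reweighting idea follows: each outer iteration solves a weighted group-lasso (MxNE-like) surrogate by proximal gradient and then updates the block weights from the current block norms, mimicking an ℓ0.5-type penalty. The forward matrix, measurement matrix, grouping, and all parameter choices are illustrative assumptions, not the authors' solver (which uses block coordinate descent with an active set strategy).

```python
import numpy as np

def irmxne_sketch(G, M, groups, alpha=1.0, n_reweight=5, n_inner=200):
    """Illustrative iterative-reweighting loop (assumed setup, not the paper's code).
    G: forward/gain matrix, M: sensor measurements, groups: index arrays per block."""
    X = np.zeros((G.shape[1], M.shape[1]))
    w = np.ones(len(groups))                      # block weights
    step = 1.0 / np.linalg.norm(G, 2) ** 2        # safe proximal-gradient step
    for _ in range(n_reweight):
        for _ in range(n_inner):                  # weighted group-lasso surrogate
            X = X - step * (G.T @ (G @ X - M))    # gradient step on the data fit
            for k, g in enumerate(groups):        # group soft-thresholding
                nrm = np.linalg.norm(X[g])
                X[g] *= max(0.0, 1.0 - step * alpha * w[k] / nrm) if nrm > 0 else 0.0
        # reweight: blocks with small Frobenius norm are penalized more strongly
        w = 1.0 / (2.0 * np.sqrt(np.array([np.linalg.norm(X[g]) for g in groups])) + 1e-12)
    return X
```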
Experimental investigations on airborne gravimetry based on compressed sensing.
Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun
2014-03-18
Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
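For illustration, a minimal Orthogonal Matching Pursuit routine in Python is sketched below; the sensing matrix A, measurement vector y, and sparsity level k are generic placeholders rather than the paper's DFT-based airborne-gravimetry setup.

```python
import numpy as np

def omp(A, y, k, tol=1e-8):
    """Sketch of Orthogonal Matching Pursuit: greedily build a k-sparse x with y ≈ A x."""
    m, n = A.shape
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    x = np.zeros(n)
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit of y on the currently selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x
```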
Roy's safety-first portfolio principle in financial risk management of disastrous events.
Chiu, Mei Choi; Wong, Hoi Ying; Li, Duan
2012-11-01
Roy pioneers the concept and practice of risk management of disastrous events via his safety-first principle for portfolio selection. More specifically, his safety-first principle advocates an optimal portfolio strategy generated from minimizing the disaster probability, while subject to the budget constraint and the mean constraint that the expected final wealth is not less than a preselected disaster level. This article studies the dynamic safety-first principle in continuous time and its application in asset and liability management. We reveal that the distortion resulting from dropping the mean constraint, as a common practice to approximate the original Roy's setting, either leads to a trivial case or changes the problem nature completely to a target-reaching problem, which produces a highly leveraged trading strategy. Recognizing the ill-posed nature of the corresponding Lagrangian method when retaining the mean constraint, we invoke a wisdom observed from a limited funding-level regulation of pension funds and modify the original safety-first formulation accordingly by imposing an upper bound on the funding level. This model revision enables us to solve completely the safety-first asset-liability problem by a martingale approach and to derive an optimal policy that follows faithfully the spirit of the safety-first principle and demonstrates a prominent nature of fighting for the best and preventing disaster from happening.
Application of kernel method in fluorescence molecular tomography
NASA Astrophysics Data System (ADS)
Zhao, Yue; Baikejiang, Reheman; Li, Changqing
2017-02-01
Reconstruction of fluorescence molecular tomography (FMT) is an ill-posed inverse problem. Anatomical guidance can make FMT reconstruction more efficient. We have developed a kernel method to introduce such anatomical guidance into FMT robustly and easily. The kernel method comes from machine learning for pattern analysis and is an efficient way to represent anatomical features. For the finite element method based FMT reconstruction, we calculate a kernel function for each finite element node from an anatomical image, such as a micro-CT image. Then the fluorophore concentration at each node is represented by a kernel coefficient vector and the corresponding kernel function. In the FMT forward model, a new system matrix is obtained by multiplying the sensitivity matrix with the kernel matrix. Thus, the kernel coefficient vector is the unknown to be reconstructed following a standard iterative reconstruction process, and the FMT reconstruction problem is converted into a kernel coefficient reconstruction problem. The desired fluorophore concentration at each node can then be calculated accordingly. Numerical simulation studies have demonstrated that the proposed kernel-based algorithm can improve the spatial resolution of the reconstructed FMT images. In the proposed kernel method, the anatomical guidance is obtained directly from the anatomical image and is included in the forward modeling. One of the advantages is that we do not need to segment the anatomical image into targets and background.
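A minimal Python sketch of this kernel-guided reconstruction is shown below; the sensitivity matrix S, kernel matrix K, measurement vector y, and the simple Landweber-style iteration are illustrative assumptions rather than the authors' solver.

```python
import numpy as np

def kernel_fmt_reconstruction(S, K, y, lam=1e-3, n_iter=200):
    """Sketch: reconstruct kernel coefficients alpha from measurements y using the
    combined system matrix A = S @ K, then map back to node-wise concentration x = K @ alpha."""
    A = S @ K                                           # sensitivity matrix times kernel matrix
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + lam)      # conservative gradient step size
    alpha = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ alpha - y) + lam * alpha      # Tikhonov-damped gradient
        alpha -= step * grad
    return K @ alpha                                    # fluorophore concentration at each node
```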
NASA Astrophysics Data System (ADS)
Franck, I. M.; Koutsourelakis, P. S.
2017-01-01
This paper is concerned with the numerical solution of model-based, Bayesian inverse problems. We are particularly interested in cases where the cost of each likelihood evaluation (forward-model call) is expensive and the number of unknown (latent) variables is high. This is the setting in many problems in computational physics where forward models with nonlinear PDEs are used and the parameters to be calibrated involve spatio-temporally varying coefficients, which upon discretization give rise to a high-dimensional vector of unknowns. One of the consequences of the well-documented ill-posedness of inverse problems is the possibility of multiple solutions. While such information is contained in the posterior density in Bayesian formulations, the discovery of a single mode, let alone multiple, poses a formidable computational task. The goal of the present paper is two-fold. On one hand, we propose approximate, adaptive inference strategies using mixture densities to capture multi-modal posteriors. On the other, we extend our work in [1] with regard to effective dimensionality reduction techniques that reveal low-dimensional subspaces where the posterior variance is mostly concentrated. We validate the proposed model by employing Importance Sampling which confirms that the bias introduced is small and can be efficiently corrected if the analyst wishes to do so. We demonstrate the performance of the proposed strategy in nonlinear elastography where the identification of the mechanical properties of biological materials can inform non-invasive, medical diagnosis. The discovery of multiple modes (solutions) in such problems is critical in achieving the diagnostic objectives.
Design of Biomedical Robots for Phenotype Prediction Problems.
deAndrés-Galiana, Enrique J; Fernández-Martínez, Juan Luis; Sonis, Stephen T
2016-08-01
Genomics has been used with varying degrees of success in the context of drug discovery and in defining mechanisms of action for diseases like cancer and neurodegenerative and rare diseases in the quest for orphan drugs. To improve its utility, accuracy, and cost-effectiveness optimization of analytical methods, especially those that translate to clinically relevant outcomes, is critical. Here we define a novel tool for genomic analysis termed a biomedical robot in order to improve phenotype prediction, identifying disease pathogenesis and significantly defining therapeutic targets. Biomedical robot analytics differ from historical methods in that they are based on melding feature selection methods and ensemble learning techniques. The biomedical robot mathematically exploits the structure of the uncertainty space of any classification problem conceived as an ill-posed optimization problem. Given a classifier, there exist different equivalent small-scale genetic signatures that provide similar predictive accuracies. We perform the sensitivity analysis to noise of the biomedical robot concept using synthetic microarrays perturbed by different kinds of noises in expression and class assignment. Finally, we show the application of this concept to the analysis of different diseases, inferring the pathways and the correlation networks. The final aim of a biomedical robot is to improve knowledge discovery and provide decision systems to optimize diagnosis, treatment, and prognosis. This analysis shows that the biomedical robots are robust against different kinds of noises and particularly to a wrong class assignment of the samples. Assessing the uncertainty that is inherent to any phenotype prediction problem is the right way to address this kind of problem.
Essential oils as natural food antimicrobial agents: a review.
Vergis, Jess; Gokulakrishnan, P; Agarwal, R K; Kumar, Ashok
2015-01-01
Food-borne illnesses pose a real threat in the present scenario, as the consumption of packaged food has increased to a great extent. Pathogens entering packaged foods may survive longer, which needs a check. Antimicrobial agents, either alone or in combination, are added to the food or to packaging materials for this purpose. Exploiting their antimicrobial property, essential oils are considered a "natural" remedy to this problem, in addition to their flavoring role, instead of using synthetic agents. Essential oils are well known for their antibacterial, antiviral, antimycotic, antiparasitic, and antioxidant properties, owing to the presence of phenolic functional groups. Gram-positive organisms are found to be more susceptible to the action of essential oils. Essential oils improve the shelf-life of packaged products, control microbial growth, and address consumer concerns regarding the use of chemical preservatives. This review is intended to provide an overview of essential oils and their role as natural antimicrobial agents in the food industry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonova, A. O., E-mail: aoantonova@mail.ru; Savyolova, T. I.
2016-05-15
A two-dimensional mathematical model of a polycrystalline sample and an experiment on electron backscattering diffraction (EBSD) is considered. The measurement parameters are taken to be the scanning step and the threshold grain-boundary angle. Discrete pole figures for materials with hexagonal symmetry have been calculated based on the results of the model experiment. Discrete and smoothed (by the kernel method) pole figures of the model sample and the samples in the model experiment are compared using the homogeneity criterion χ², an estimate of the pole figure maximum and its coordinate, a deviation of the pole figures of the model in the experiment from the sample in the space of L1 measurable functions, and the RP-criterion for estimating the pole figure errors. It is shown that the problem of calculating pole figures is ill-posed and that their determination is not robust with respect to the measurement parameters.
Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.
Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens
2005-05-01
Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
Atmospheric turbulence profiling with unknown power spectral density
NASA Astrophysics Data System (ADS)
Helin, Tapio; Kindermann, Stefan; Lehtonen, Jonatan; Ramlau, Ronny
2018-04-01
Adaptive optics (AO) is a technology in modern ground-based optical telescopes to compensate for the wavefront distortions caused by atmospheric turbulence. One method that allows information about the atmosphere to be retrieved from telescope data is so-called SLODAR, where the atmospheric turbulence profile is estimated based on correlation data of Shack-Hartmann wavefront measurements. This approach relies on a layered Kolmogorov turbulence model. In this article, we propose a novel extension of the SLODAR concept by including a general non-Kolmogorov turbulence layer close to the ground with an unknown power spectral density. We prove that the joint estimation problem of the turbulence profile above ground simultaneously with the unknown power spectral density at the ground is ill-posed, and we propose three numerical reconstruction methods. We demonstrate by numerical simulations that our methods lead to substantial improvements in the turbulence profile reconstruction compared to the standard SLODAR-type approach. Also, our methods can accurately locate local perturbations in non-Kolmogorov power spectral densities.
Non-Parametric Blur Map Regression for Depth of Field Extension.
D'Andres, Laurent; Salvador, Jordi; Kochale, Axel; Susstrunk, Sabine
2016-04-01
Real camera systems have a limited depth of field (DOF) which may cause an image to be degraded due to visible misfocus or too shallow DOF. In this paper, we present a blind deblurring pipeline able to restore such images by slightly extending their DOF and recovering sharpness in regions slightly out of focus. To address this severely ill-posed problem, our algorithm relies first on the estimation of the spatially varying defocus blur. Drawing on local frequency image features, a machine learning approach based on the recently introduced regression tree fields is used to train a model able to regress a coherent defocus blur map of the image, labeling each pixel by the scale of a defocus point spread function. A non-blind spatially varying deblurring algorithm is then used to properly extend the DOF of the image. The good performance of our algorithm is assessed both quantitatively, using realistic ground truth data obtained with a novel approach based on a plenoptic camera, and qualitatively with real images.
Celik, Hasan; Bouhrara, Mustapha; Reiter, David A.; Fishbein, Kenneth W.; Spencer, Richard G.
2013-01-01
We propose a new approach to stabilizing the inverse Laplace transform of a multiexponential decay signal, a classically ill-posed problem, in the context of nuclear magnetic resonance relaxometry. The method is based on extension to a second, indirectly detected, dimension, that is, use of the established framework of two-dimensional relaxometry, followed by projection onto the desired axis. Numerical results for signals comprised of discrete T1 and T2 relaxation components and experiments performed on agarose gel phantoms are presented. We find markedly improved accuracy, and stability with respect to noise, as well as insensitivity to regularization in quantifying underlying relaxation components through use of the two-dimensional as compared to the one-dimensional inverse Laplace transform. This improvement is demonstrated separately for two different inversion algorithms, nonnegative least squares and non-linear least squares, to indicate the generalizability of this approach. These results may have wide applicability in approaches to the Fredholm integral equation of the first kind.
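For orientation, the classical one-dimensional baseline that the two-dimensional approach improves on can be sketched in a few lines of Python: a Tikhonov-regularized non-negative least-squares inversion of a multiexponential decay. The time grid, T2 grid, and regularization weight are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def t2_spectrum(t, signal, T2_grid, lam=1e-2):
    """Sketch of a 1-D inverse Laplace transform via regularized NNLS (names assumed)."""
    K = np.exp(-np.outer(t, 1.0 / T2_grid))          # discretized Laplace kernel
    # augment the system with sqrt(lam)*I to damp the ill-posed inversion
    A = np.vstack([K, np.sqrt(lam) * np.eye(len(T2_grid))])
    b = np.concatenate([signal, np.zeros(len(T2_grid))])
    f, _ = nnls(A, b)                                # non-negative component amplitudes
    return f
```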
Assigning uncertainties in the inversion of NMR relaxation data.
Parker, Robert L; Song, Yi-Qaio
2005-06-01
Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective in the face of the consequent ambiguity in the solutions is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and ratios of linear functionals, can be calculated using optimization theory. Those bounded quantities cover most of those commonly used in geophysical NMR, such as porosity, T2 log-mean, and bound fluid volume fraction, and include averages over any finite interval of the density function itself. In the theory presented, statistical considerations enter to account for the presence of significant noise in the signal, but not in a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere.
Work, Health, And Worker Well-Being: Roles And Opportunities For Employers.
McLellan, Robert K
2017-02-01
Work holds the promise of supporting and promoting health. It also carries the risk of injury, illness, and death. In addition to harms posed by traditional occupational health hazards, such as physically dangerous workplaces, work contributes to health problems with multifactorial origins such as unhealthy lifestyles, psychological distress, and chronic disease. Not only does work affect health, but the obverse is true: Unhealthy workers are more frequently disabled, absent, and less productive, and they use more health care resources, compared to their healthy colleagues. The costs of poor workforce health are collectively borne by workers, employers, and society. For business as well as altruistic reasons, employers may strive to cost-effectively achieve the safest, healthiest, and most productive workforce possible. Narrowly focused health goals are giving way to a broader concept of employee well-being. This article explores the relationship between health and work, outlines opportunities for employers to make this relationship health promoting, and identifies areas needing further exploration.
Yu, Feiqiao Brian; Blainey, Paul C; Schulz, Frederik; Woyke, Tanja; Horowitz, Mark A; Quake, Stephen R
2017-07-05
Metagenomics and single-cell genomics have enabled genome discovery from unknown branches of life. However, extracting novel genomes from complex mixtures of metagenomic data can still be challenging and represents an ill-posed problem which is generally approached with ad hoc methods. Here we present a microfluidic-based mini-metagenomic method which offers a statistically rigorous approach to extract novel microbial genomes while preserving single-cell resolution. We used this approach to analyze two hot spring samples from Yellowstone National Park and extracted 29 new genomes, including three deeply branching lineages. The single-cell resolution enabled accurate quantification of genome function and abundance, down to 1% in relative abundance. Our analyses of genome level SNP distributions also revealed low to moderate environmental selection. The scale, resolution, and statistical power of microfluidic-based mini-metagenomics make it a powerful tool to dissect the genomic structure of microbial communities while effectively preserving the fundamental unit of biology, the single cell.
Missing data reconstruction using Gaussian mixture models for fingerprint images
NASA Astrophysics Data System (ADS)
Agaian, Sos S.; Yeole, Rushikesh D.; Rao, Shishir P.; Mulawka, Marzena; Troy, Mike; Reinecke, Gary
2016-05-01
One of the most important areas in biometrics is matching partial fingerprints in fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems for missing fingerprint information. However, a dependable reconstruction of fingerprint images still remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed. This paper also presents a new similarity score to evaluate the performance of the reconstructed binary image. The offered fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, all-access control, and financial security, as well as for the verification of firearm purchasers, driver license applicants, etc.
Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Mueller-Lisse, Ullrich; Moeller, Knut
2016-06-01
Electrical impedance tomography (EIT) reconstructs the conductivity distribution of a domain using electrical data on its boundary. This is an ill-posed inverse problem usually solved on a finite element mesh. For this article, a special regularization method incorporating structural information of the targeted domain is proposed and evaluated. Structural information was obtained either from computed tomography images or from preliminary EIT reconstructions by a modified k-means clustering. The proposed regularization method integrates this structural information into the reconstruction as a soft constraint preferring sparsity at the group level. A first evaluation with Monte Carlo simulations indicated that the proposed solver is more robust to noise and the resulting images show fewer artifacts. This finding is supported by real data analysis. The structure-based regularization has the potential to balance structural a priori information with data-driven reconstruction. It is robust to noise, reduces artifacts and produces images that reflect anatomy and are thus easier to interpret for physicians.
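A compact Python sketch of a group-sparsity constraint of this kind is shown below, using proximal gradient with group soft-thresholding on a linearized EIT model; the Jacobian J, voltage-change vector dv, and the grouping obtained from structural information are all illustrative assumptions, not the authors' solver.

```python
import numpy as np

def group_sparse_eit(J, dv, groups, lam=1e-2, n_iter=300):
    """Sketch: conductivity-change reconstruction with a group-sparsity prior.
    J: linearized sensitivity (Jacobian), dv: boundary-voltage changes,
    groups: list of index arrays, one per structural region (assumed)."""
    step = 1.0 / np.linalg.norm(J, 2) ** 2
    x = np.zeros(J.shape[1])
    for _ in range(n_iter):
        x = x - step * (J.T @ (J @ x - dv))          # gradient step on the data fit
        for g in groups:                             # proximal (shrinkage) step per group
            norm_g = np.linalg.norm(x[g])
            if norm_g > 0.0:
                x[g] *= max(0.0, 1.0 - step * lam / norm_g)
    return x
```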
On the breakdown of the curvature perturbation ζ during reheating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Algan, Merve Tarman; Kaya, Ali; Kutluk, Emine Seyma, E-mail: merve.tarman@boun.edu.tr, E-mail: ali.kaya@boun.edu.tr, E-mail: seymakutluk@gmail.com
2015-04-01
It is known that in single scalar field inflationary models the standard curvature perturbation ζ, which is supposedly conserved at superhorizon scales, diverges during reheating at times φ̇ = 0, i.e. when the time derivative of the background inflaton field vanishes. This happens because the comoving gauge φ = 0, where φ denotes the inflaton perturbation, breaks down when φ̇ = 0. The issue is usually bypassed by averaging out the inflaton oscillations, but strictly speaking the evolution of ζ is ill posed mathematically. We solve this problem in the free theory by introducing a family of smooth gauges that still eliminates the inflaton fluctuation φ in the Hamiltonian formalism and gives a well behaved curvature perturbation ζ, which is now rigorously conserved at superhorizon scales. At the linearized level, this conserved variable can be used to unambiguously propagate the inflationary perturbations from the end of inflation to subsequent epochs. We discuss the implications of our results for the inflationary predictions.
Extracting a Purely Non-rigid Deformation Field of a Single Structure
NASA Astrophysics Data System (ADS)
Demirci, Stefanie; Manstad-Hulaas, Frode; Navab, Nassir
During endovascular aortic repair (EVAR) treatment, the aortic shape is subject to severe deformation that is imposed by medical instruments such as guide wires, catheters, and the stent graft. Deformable registration of images covering the entire abdominal region, however, is a highly ill-posed problem. We present a new method for extracting the deformation of an aneurysmatic aorta. The outline of the procedure includes initial rigid alignment of two abdominal scans, segmentation of abdominal vessel trees, and automatic reduction of their centerline structures to one specified region of interest around the aorta. Our non-rigid registration procedure then computes only the local non-rigid deformation and leaves out all remaining global rigid transformations. In order to evaluate our method, experiments for the extraction of aortic deformation fields were conducted on 15 patient datasets from endovascular aortic repair (EVAR) treatment. A visual assessment of the registration results was performed by two vascular surgeons and one interventional radiologist, all of whom are experts in EVAR procedures.
On the breakdown of the curvature perturbation ζ during reheating
NASA Astrophysics Data System (ADS)
Tarman Algan, Merve; Kaya, Ali; Seyma Kutluk, Emine
2015-04-01
It is known that in single scalar field inflationary models the standard curvature perturbation ζ, which is supposedly conserved at superhorizon scales, diverges during reheating at times φ̇ = 0, i.e. when the time derivative of the background inflaton field vanishes. This happens because the comoving gauge ϕ = 0, where ϕ denotes the inflaton perturbation, breaks down when φ̇ = 0. The issue is usually bypassed by averaging out the inflaton oscillations, but strictly speaking the evolution of ζ is ill posed mathematically. We solve this problem in the free theory by introducing a family of smooth gauges that still eliminates the inflaton fluctuation ϕ in the Hamiltonian formalism and gives a well behaved curvature perturbation ζ, which is now rigorously conserved at superhorizon scales. At the linearized level, this conserved variable can be used to unambiguously propagate the inflationary perturbations from the end of inflation to subsequent epochs. We discuss the implications of our results for the inflationary predictions.
A combined reconstruction-classification method for diffuse optical tomography.
Hiltunen, P; Prince, S J D; Arridge, S
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
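A compact Python sketch of such an alternation is given below: each outer iteration performs a Tikhonov step with a pixel-wise prior mean and variance taken from the current class assignment, then refits a mixture of Gaussians to the reconstructed values to update the classes. The linearized forward matrix J, data vector y, and all parameter choices are illustrative assumptions, not the authors' nonlinear DOT solver.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def reconstruct_classify(J, y, n_classes=3, n_outer=10, lam=1.0):
    """Sketch of an alternating reconstruction-classification loop (assumed setup)."""
    n = J.shape[1]
    mu_prior = np.zeros(n)          # pixel-wise prior means
    var_prior = np.ones(n)          # pixel-wise prior variances
    for _ in range(n_outer):
        # (1) reconstruction: minimize ||Jx - y||^2 + lam * sum((x - mu)^2 / var)
        W = np.diag(lam / var_prior)
        x = np.linalg.solve(J.T @ J + W, J.T @ y + W @ mu_prior)
        # (2) classification: fit a mixture of Gaussians to the pixel values
        gmm = GaussianMixture(n_components=n_classes).fit(x.reshape(-1, 1))
        labels = gmm.predict(x.reshape(-1, 1))
        mu_prior = gmm.means_[labels, 0]
        var_prior = gmm.covariances_[labels].reshape(-1) + 1e-12
    return x, labels
```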
Muhammad, Noor Azimah; Wan Ismail, Wan Salwina; Tan, Chai Eng; Jaffar, Aida; Sharip, Shalisah; Omar, Khairani
2011-12-01
Attention-deficit hyperactive disorder (ADHD) is a psychiatric illness commonly diagnosed during the early years of childhood. In many adolescents with undiagnosed ADHD, presentation may not be entirely similar to that in younger children. These adolescents pose significant challenges to parents and teachers coping with their disability. Often adolescents with behavioural problems are brought to medical attention as a last resort. This case describes an adolescent who presented to a primary care clinic with school truancy. He was initially treated for depression with oppositional defiant disorder and sibling rivalry. Only following a careful detailed history and further investigations was the diagnosis of ADHD made. He showed a positive improvement with the use of methylphenidate for his ADHD and escitalopram for his depression. The success of his management was further supported by the use of behavioural therapy and parenting interventions. There is a need to increase public awareness of ADHD, especially among parents and teachers so that early intervention can be instituted in these children.