Sample records for ill-posed problems error

  1. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson's equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.
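The Tikhonov approach that this note argues against can be sketched as an augmented least-squares problem. The following minimal sketch (with an illustrative Gaussian test kernel and parameter values chosen for demonstration, not taken from the paper) shows how the weighted penalty stabilizes an ill-conditioned reconstruction:

```python
import numpy as np

def tikhonov_solve(A, b, alpha, L=None):
    """Solve min ||A x - b||^2 + alpha * ||L x||^2 via an augmented
    least-squares system; L defaults to the identity."""
    n = A.shape[1]
    if L is None:
        L = np.eye(n)
    A_aug = np.vstack([A, np.sqrt(alpha) * L])
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x

# Mildly ill-posed test: a discretized Gaussian smoothing operator
n = 50
t = np.linspace(0, 1, n)
A = np.exp(-(t[:, None] - t[None, :])**2 / 0.01) / n
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

x_naive = np.linalg.lstsq(A, b, rcond=None)[0]  # noise-dominated
x_reg = tikhonov_solve(A, b, alpha=1e-6)        # stable reconstruction
```

The regularized solution tracks `x_true` closely, while the unregularized least-squares solution is destroyed by noise amplification.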

  2. A truncated generalized singular value decomposition algorithm for moving force identification with ill-posed problems

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.

    2017-08-01

This paper proposes a new methodology for moving force identification (MFI) from the responses of a bridge deck. Based on the existing time domain method (TDM), the MFI problem reduces to solving a linear algebraic equation of the form Ax = b. The vector b is usually contaminated by an unknown error e arising from measurement, commonly referred to as ''noise''. Because of the ill-posedness of this inverse problem, the identified force is sensitive to the noise e. The proposed truncated generalized singular value decomposition (TGSVD) method aims at obtaining an acceptable solution that is less sensitive to perturbations despite the ill-posedness. The illustrated results show that TGSVD offers higher precision, better adaptability and stronger noise immunity than TDM. In addition, choosing a proper regularization matrix L and truncation parameter k is very useful for improving the identification accuracy and overcoming ill-posedness when the method is used to identify moving forces on a bridge.
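TGSVD is built on the generalized SVD of the matrix pair (A, L). Since the paper's data and regularization matrix are not available here, the sketch below illustrates the special case L = I, where TGSVD reduces to the plain truncated SVD: the small singular values that amplify the noise e are simply discarded (the Hilbert-matrix test system and truncation level are illustrative assumptions):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b: keep only the k largest
    singular values, discarding the noise-amplifying small ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Classic ill-conditioned test system: a 12x12 Hilbert matrix
n = 12
i = np.arange(n)
A = 1.0 / (i[:, None] + i[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-8 * np.random.default_rng(1).standard_normal(n)

x_naive = np.linalg.solve(A, b)  # wrecked by noise amplification
x_tsvd = tsvd_solve(A, b, k=5)   # stable truncated solution
```

In the full TGSVD, the truncation acts on the generalized singular values of (A, L), which lets a nontrivial L (e.g. a discrete derivative) encode smoothness of the identified force.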

  3. Assimilating data into open ocean tidal models

    NASA Astrophysics Data System (ADS)

    Kivman, Gennady A.

The problem of deriving tidal fields from observations is ill-posed: because every practically available data set is incomplete and imperfect, an infinitely large number of allowable solutions fit the data within measurement errors. Therefore, interpolating the data always relies on some a priori assumptions concerning the tides, which provide a rule of sampling or, in other words, a regularization of the ill-posed problem. Data assimilation procedures used in large-scale tide modeling are viewed in a common mathematical framework as such regularizations. It is shown that all of them (basis function expansion, parameter estimation, nudging, objective analysis, general inversion, and extended general inversion), including those originally formulated in stochastic terms (objective analysis and general inversion), may be considered as applications of one of the three general methods suggested by the theory of ill-posed problems. The problem of grid refinement, critical for inverse methods and nudging, is discussed.

  4. The Effect of the Ill-posed Problem on Quantitative Error Assessment in Digital Image Correlation

    DOE Office of Scientific and Technical Information (OSTI.GOV); DOE PAGES

    Lehoucq, R. B.; Reu, P. L.; Turner, D. Z.

    2017-11-27

    Here, this work explores the effect of the ill-posed problem on uncertainty quantification for motion estimation using digital image correlation (DIC) (Sutton et al. 2009). We develop a correction factor for standard uncertainty estimates based on the cosine of the angle between the true motion and the image gradients, in an integral sense over a subregion of the image. This correction factor accounts for variability in the DIC solution previously unaccounted for when considering only image noise, interpolation bias, contrast, and the software settings such as subset size and spacing.
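The correction factor described above depends on the cosine of the angle between the true motion and the image gradients over a subset. The following is a hypothetical numpy proxy (not the authors' exact integral formulation) that makes the underlying aperture effect concrete: motion along the gradient direction is fully observable, motion perpendicular to it is not:

```python
import numpy as np

def gradient_alignment(gx, gy, u, v):
    """Mean |cos(angle)| between a motion field (u, v) and the image
    gradients (gx, gy) over a subset: a rough observability proxy."""
    g_norm = np.hypot(gx, gy)
    m_norm = np.hypot(u, v)
    mask = (g_norm > 0) & (m_norm > 0)
    cos = (gx * u + gy * v)[mask] / (g_norm * m_norm)[mask]
    return float(np.mean(np.abs(cos)))

# Synthetic 8x8 subset whose gradients point purely along x
gx, gy = np.ones((8, 8)), np.zeros((8, 8))
aligned = gradient_alignment(gx, gy, np.ones((8, 8)), np.zeros((8, 8)))
orthogonal = gradient_alignment(gx, gy, np.zeros((8, 8)), np.ones((8, 8)))
# Motion along the gradient is fully constrained (factor 1); motion
# perpendicular to it is invisible to the matching (factor 0).
```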

  6. Error analysis and correction in wavefront reconstruction from the transport-of-intensity equation

    PubMed Central

    Barbero, Sergio; Thibos, Larry N.

    2007-01-01

Wavefront reconstruction from the transport-of-intensity equation (TIE) is a well-posed inverse problem given smooth signals and appropriate boundary conditions. In practice, however, experimental errors lead to an ill-conditioned problem. A quantitative analysis of the effects of experimental errors is presented in simulations and experimental tests. The relative importance of numerical, misalignment, quantization, and photodetection errors is shown. It is proved that reduction of photodetection noise by wavelet filtering significantly improves the accuracy of wavefront reconstruction from simulated and experimental data. PMID:20052302

  7. Flaw reconstruction from eddy current sensor data with a differential forward model (Reconstruction de defauts a partir de donnees issues de capteurs a courants de foucault avec modele direct differentiel)

    NASA Astrophysics Data System (ADS)

    Trillon, Adrien

Eddy current tomography can be employed to characterize flaws in the metal plates of steam generators in nuclear power plants. Our goal is to evaluate a map of the relative conductivity that represents the flaw. This nonlinear ill-posed problem is difficult to solve, and a forward model is needed. First, we studied existing forward models to choose the one best adapted to our case; finite difference and finite element methods suited our application very well. We adapted contrast source inversion (CSI) type methods to the chosen model, and a new criterion was proposed. These methods are based on the minimization of the weighted errors of the model equations, coupling and observation, and thus allow an error on the equations. It appeared that reconstruction quality improves as the error on the coupling equation decreases. We resorted to augmented Lagrangian techniques to constrain the coupling equation and to avoid conditioning problems. To overcome the ill-posed character of the problem, prior information was introduced about the shape of the flaw and the values of the relative conductivity. The efficiency of the methods is illustrated with simulated flaws in the 2D case.

  8. Application of the Discrete Regularization Method to the Inverse of the Chord Vibration Equation

    NASA Astrophysics Data System (ADS)

    Wang, Linjun; Han, Xu; Wei, Zhouchao

The inverse problem of recovering the initial condition from boundary values of the chord vibration equation is ill-posed. First, we transform it into a Fredholm integral equation. Second, we discretize it by the trapezoidal rule and obtain a severely ill-conditioned linear system that is sensitive to perturbations of the data: even tiny errors in the right-hand side cause large oscillations in the solution, and traditional methods give poor results. In this paper, we solve the problem by the Tikhonov regularization method, and numerical simulations demonstrate that this method is feasible and effective.
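The pipeline in this abstract — trapezoidal discretization of a first-kind Fredholm equation followed by Tikhonov regularization — can be sketched as follows. The Gaussian kernel and polynomial solution are illustrative stand-ins, since the paper's chord-vibration kernel is not reproduced here:

```python
import numpy as np

def trapezoid_matrix(kernel, s, t):
    """Discretize (K f)(s) = integral of kernel(s, u) f(u) du with the
    trapezoidal rule: A[i, j] = w_j * kernel(s_i, t_j)."""
    w = np.full(t.size, t[1] - t[0])
    w[0] *= 0.5
    w[-1] *= 0.5
    return kernel(s[:, None], t[None, :]) * w

n = 40
s = t = np.linspace(0.0, 1.0, n)
A = trapezoid_matrix(lambda x, y: np.exp(-(x - y)**2), s, t)
f_true = t * (1.0 - t)
g = A @ f_true  # exact right-hand side

# A is severely ill-conditioned; Tikhonov stabilizes the solve
alpha = 1e-8
f_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ g)
```

Even with exact data, the discretized operator's condition number is enormous, which is precisely why the abstract reports that tiny right-hand-side errors destroy the naive solution.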

  9. A Flexible and Efficient Method for Solving Ill-Posed Linear Integral Equations of the First Kind for Noisy Data

    NASA Astrophysics Data System (ADS)

    Antokhin, I. I.

    2017-06-01

    We propose an efficient and flexible method for solving Fredholm and Abel integral equations of the first kind, frequently appearing in astrophysics. These equations present an ill-posed problem. Our method is based on solving them on a so-called compact set of functions and/or using Tikhonov's regularization. Both approaches are non-parametric and do not require any theoretic model, apart from some very loose a priori constraints on the unknown function. The two approaches can be used independently or in a combination. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact one, as the errors of input data tend to zero. Simulated and astrophysical examples are presented.

  10. Retrieval of LAI and leaf chlorophyll content from remote sensing data by agronomy mechanism knowledge to solve the ill-posed inverse problem

    NASA Astrophysics Data System (ADS)

    Li, Zhenhai; Nie, Chenwei; Yang, Guijun; Xu, Xingang; Jin, Xiuliang; Gu, Xiaohe

    2014-10-01

Leaf area index (LAI) and leaf chlorophyll content (LCC), the two most important crop growth variables, are major considerations in management decisions, agricultural planning and policy making. Estimation of these canopy biophysical variables from remote sensing data was investigated using a radiative transfer model. However, the ill-posed problem is unavoidable, owing to the non-uniqueness of the inverse solution and the uncertainty of measurements and model assumptions. This study focused on the use of agronomy mechanism knowledge to restrict and remove ill-posed inversion results. For this purpose, the inversion results obtained using the PROSAIL model alone (NAMK) and linked with agronomic mechanism knowledge (AMK) were compared. AMK did not significantly improve the accuracy of LAI inversion: LAI was already estimated with high accuracy without it. The validation results of the determination coefficient (R2) and the corresponding root mean square error (RMSE) between measured and estimated LAI were 0.635 and 1.022 for NAMK, and 0.637 and 0.999 for AMK, respectively. LCC estimation, by contrast, was significantly improved with agronomy mechanism knowledge: the R2 and RMSE values were 0.377 and 14.495 μg cm-2 for NAMK, and 0.503 and 10.661 μg cm-2 for AMK, respectively. These comparisons demonstrate the value of agronomy mechanism knowledge in radiative transfer model inversion.

  11. Implication of adaptive smoothness constraint and Helmert variance component estimation in seismic slip inversion

    NASA Astrophysics Data System (ADS)

    Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi

    2017-10-01

When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments with different magnitudes of imposed noise and different observation densities indicate that the ASC is superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method proves the best for selecting the regularization parameter, compared with other methods such as generalized cross-validation or the mean squared error criterion. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
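Generalized cross-validation, one of the parameter-selection methods the paper compares against, chooses the regularization parameter by minimizing the GCV score. Below is a minimal sketch for standard-form Tikhonov with synthetic data (Helmert variance component estimation itself is not shown; the kernel and noise level are illustrative assumptions):

```python
import numpy as np

def gcv_score(A, b, alpha):
    """GCV score n*||(I - H)b||^2 / (n - trace(H))^2 for Tikhonov
    regularization, where H = A (A^T A + alpha I)^-1 A^T."""
    n = A.shape[0]
    H = A @ np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T)
    resid = b - H @ b
    return n * float(resid @ resid) / (n - np.trace(H))**2

rng = np.random.default_rng(2)
n = 30
t = np.linspace(0, 1, n)
A = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.1) / n
b = A @ np.sin(np.pi * t) + 1e-5 * rng.standard_normal(n)

alphas = np.logspace(-12, 0, 25)
scores = [gcv_score(A, b, a) for a in alphas]
alpha_gcv = float(alphas[int(np.argmin(scores))])
```

Variance-component methods such as Helmert's instead estimate the noise and prior variance factors iteratively, then set the regularization parameter from their ratio.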

  12. A direct method for nonlinear ill-posed problems

    NASA Astrophysics Data System (ADS)

    Lakhal, A.

    2018-02-01

We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.

  13. A quasi-spectral method for the Cauchy problem of the 2-D Laplace equation on an annulus

    NASA Astrophysics Data System (ADS)

    Saito, Katsuyoshi; Nakada, Manabu; Iijima, Kentaro; Onishi, Kazuei

    2005-01-01

Real numbers are usually represented in a computer as floating-point numbers with a finite number of digits. Numerical analysis therefore often suffers from rounding errors, which particularly deteriorate the precision of numerical solutions of inverse and ill-posed problems. We attempt to use multi-precision arithmetic to reduce the effect of rounding errors. The multi-precision arithmetic system is used by courtesy of Dr Fujiwara of Kyoto University. In this paper we show the effectiveness of multi-precision arithmetic on two typical examples: the Cauchy problem for the Laplace equation in two dimensions, and the shape identification problem by inverse scattering in three dimensions. It is concluded from a few numerical examples that multi-precision arithmetic works well in resolving those numerical solutions, when combined with a high-order finite difference method for the Cauchy problem and with the eigenfunction expansion method for the inverse scattering problem.
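The effect the authors exploit can be reproduced with Python's standard-library decimal module, which lets the working precision be raised on demand. In this toy illustration (not Dr Fujiwara's system), a double-like 16-digit computation loses a small term to catastrophic cancellation, while a 40-digit computation retains it exactly:

```python
from decimal import Decimal, getcontext

def cancellation(prec):
    """Compute (1 + 1e-18) - 1 using prec significant decimal digits."""
    getcontext().prec = prec
    return Decimal(1) + Decimal("1e-18") - Decimal(1)

low = cancellation(16)    # ~double precision: the 1e-18 term is lost
high = cancellation(40)   # multi-precision: the term survives exactly
```

In an ill-posed problem the lost digits are exactly the ones the inversion amplifies, which is why raising the working precision improves the reconstructions reported in the record.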

  14. Ill Posed Problems: Numerical and Statistical Methods for Mildly, Moderately and Severely Ill Posed Problems with Noisy Data.

    DTIC Science & Technology

    1980-02-01

AD-A092 925, Wisconsin Univ-Madison, Dept of Statistics, Feb 80. ... to estimate f well, moderately well, or poorly. The sensitivity of a regularized estimate of f to the noise is made explicit. After giving the ... estimate f given z. We first define the intrinsic rank of the problem, where ∫₀¹ K(t,s) f(s) ds is known exactly. This definition is used to provide insight ...

  15. Control and System Theory, Optimization, Inverse and Ill-Posed Problems

    DTIC Science & Technology

    1988-09-14

AFOSR-87-0350, 1987-1988. ... a considerable variety of research investigations within the grant areas (control and system theory, optimization, and ill-posed problems). The ...

  16. The inverse problem of refraction travel times, part II: Quantifying refraction nonuniqueness using a three-layer model

    USGS Publications Warehouse

    Ivanov, J.; Miller, R.D.; Xia, J.; Steeples, D.

    2005-01-01

    This paper is the second of a set of two papers in which we study the inverse refraction problem. The first paper, "Types of Geophysical Nonuniqueness through Minimization," studies and classifies the types of nonuniqueness that exist when solving inverse problems depending on the participation of a priori information required to obtain reliable solutions of inverse geophysical problems. In view of the classification developed, in this paper we study the type of nonuniqueness associated with the inverse refraction problem. An approach for obtaining a realistic solution to the inverse refraction problem is offered in a third paper that is in preparation. The nonuniqueness of the inverse refraction problem is examined by using a simple three-layer model. Like many other inverse geophysical problems, the inverse refraction problem does not have a unique solution. Conventionally, nonuniqueness is considered to be a result of insufficient data and/or error in the data, for any fixed number of model parameters. This study illustrates that even for overdetermined and error free data, nonlinear inverse refraction problems exhibit exact-data nonuniqueness, which further complicates the problem of nonuniqueness. By evaluating the nonuniqueness of the inverse refraction problem, this paper targets the improvement of refraction inversion algorithms, and as a result, the achievement of more realistic solutions. The nonuniqueness of the inverse refraction problem is examined initially by using a simple three-layer model. The observations and conclusions of the three-layer model nonuniqueness study are used to evaluate the nonuniqueness of more complicated n-layer models and multi-parameter cell models such as in refraction tomography. For any fixed number of model parameters, the inverse refraction problem exhibits continuous ranges of exact-data nonuniqueness. Such an unfavorable type of nonuniqueness can be uniquely solved only by providing abundant a priori information. 
Insufficient a priori information during the inversion is the reason why refraction methods often may not produce desired results or even fail. This work also demonstrates that the application of the smoothing constraints, typical when solving ill-posed inverse problems, has a dual and contradictory role when applied to the ill-posed inverse problem of refraction travel times. This observation indicates that smoothing constraints may play such a two-fold role when applied to other inverse problems. Other factors that contribute to inverse-refraction-problem nonuniqueness are also considered, including indeterminacy, statistical data-error distribution, numerical error and instability, finite data, and model parameters. © Birkhäuser Verlag, Basel, 2005.

  17. A validated non-linear Kelvin-Helmholtz benchmark for numerical hydrodynamics

    NASA Astrophysics Data System (ADS)

    Lecoanet, D.; McCourt, M.; Quataert, E.; Burns, K. J.; Vasil, G. M.; Oishi, J. S.; Brown, B. P.; Stone, J. M.; O'Leary, R. M.

    2016-02-01

    The non-linear evolution of the Kelvin-Helmholtz instability is a popular test for code verification. To date, most Kelvin-Helmholtz problems discussed in the literature are ill-posed: they do not converge to any single solution with increasing resolution. This precludes comparisons among different codes and severely limits the utility of the Kelvin-Helmholtz instability as a test problem. The lack of a reference solution has led various authors to assert the accuracy of their simulations based on ad hoc proxies, e.g. the existence of small-scale structures. This paper proposes well-posed two-dimensional Kelvin-Helmholtz problems with smooth initial conditions and explicit diffusion. We show that in many cases numerical errors/noise can seed spurious small-scale structure in Kelvin-Helmholtz problems. We demonstrate convergence to a reference solution using both ATHENA, a Godunov code, and DEDALUS, a pseudo-spectral code. Problems with constant initial density throughout the domain are relatively straightforward for both codes. However, problems with an initial density jump (which are the norm in astrophysical systems) exhibit rich behaviour and are more computationally challenging. In the latter case, ATHENA simulations are prone to an instability of the inner rolled-up vortex; this instability is seeded by grid-scale errors introduced by the algorithm, and disappears as resolution increases. Both ATHENA and DEDALUS exhibit late-time chaos. Inviscid simulations are riddled with extremely vigorous secondary instabilities which induce more mixing than simulations with explicit diffusion. Our results highlight the importance of running well-posed test problems with demonstrated convergence to a reference solution. To facilitate future comparisons, we include as supplementary material the resolved, converged solutions to the Kelvin-Helmholtz problems in this paper in machine-readable form.

  18. Regularization techniques for backward-in-time evolutionary PDE problems

    NASA Astrophysics Data System (ADS)

    Gustafsson, Jonathan; Protas, Bartosz

    2007-11-01

Backward-in-time evolutionary PDE problems have applications in the recently proposed retrograde data assimilation. We consider the terminal value problem for the Kuramoto-Sivashinsky equation (KSE) in a 1D periodic domain as our model system. The KSE, proposed as a model for interfacial and combustion phenomena, is also often adopted as a toy model for hydrodynamic turbulence because of its multiscale and chaotic dynamics. Backward-in-time problems are typical examples of ill-posed problems, in which disturbances are amplified exponentially during the backward march. Regularization is required to solve such problems efficiently, and we consider approaches in which the original ill-posed problem is approximated by a less ill-posed problem obtained by adding a regularization term to the original equation. While such techniques are relatively well understood for linear problems, they are less understood in the present nonlinear setting. We consider regularization terms with fixed magnitudes and also explore a novel approach in which these magnitudes are adapted dynamically using simple concepts from control theory.
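The exponential amplification during the backward march, and the effect of regularizing it, are easiest to see for the linear heat equation rather than the KSE. In the toy sketch below (an illustration, not the authors' method), a spectral cutoff plays the role of the regularization term: mode k is amplified by exp(k²T) going backward, so the highest modes must be suppressed:

```python
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
k = np.fft.fftfreq(n, d=2 * np.pi / n) * 2 * np.pi  # integer wavenumbers

def heat(u, T):
    """March u_t = u_xx forward (T > 0) or backward (T < 0) spectrally."""
    return np.fft.ifft(np.fft.fft(u) * np.exp(-k**2 * T)).real

u0 = np.sin(x) + 0.5 * np.sin(3 * x)
uT = heat(u0, 0.1) + 1e-10 * np.random.default_rng(3).standard_normal(n)

# Naive backward march: the k = 32 mode is amplified by exp(102.4)
u_bad = heat(uT, -0.1)

def heat_backward_reg(u, T, kmax):
    """Zero out modes above kmax, then march backward stably."""
    U = np.fft.fft(u)
    U[np.abs(k) > kmax] = 0.0
    return np.fft.ifft(U * np.exp(k**2 * T)).real

u_reg = heat_backward_reg(uT, 0.1, kmax=8)
```

Even with measurement noise at the 1e-10 level, the naive march is destroyed while the truncated march recovers the terminal-value problem's solution.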

  19. An ambiguity of information content and error in an ill-posed satellite inversion

    NASA Astrophysics Data System (ADS)

    Koner, Prabhat

According to Rodgers (2000, stochastic approach), the averaging kernel (AK) is the representational matrix for understanding the information content of a stochastic inversion; in the deterministic approach the same object is referred to as the model resolution matrix (MRM, Menke 1989). Analysis of the AK/MRM can only give some understanding of how much regularization is imposed on the inverse problem. The trace of the AK/MRM matrix is the so-called degrees of freedom for signal (DFS; stochastic) or degrees of freedom in retrieval (DFR; deterministic). There is no physical/mathematical explanation in the literature of why the trace of this matrix is a valid way to calculate this quantity. We will present an ambiguity between information and error using a real-life problem of SST retrieval from GOES-13. The stochastic information content calculation is based on a linear assumption; the validity of such mathematics in satellite inversion will be questioned, because the underlying radiative transfer is nonlinear and the inverse problem is ill-conditioned. References: Menke, W., 1989: Geophysical Data Analysis: Discrete Inverse Theory. San Diego: Academic Press. Rodgers, C. D., 2000: Inverse Methods for Atmospheric Sounding: Theory and Practice. Singapore: World Scientific.
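The quantity under discussion — DFS as the trace of the averaging kernel — can be computed directly for a linear retrieval in the Rodgers formulation; the forward-model kernel and covariances below are synthetic placeholders:

```python
import numpy as np

def averaging_kernel(K, Se, Sa):
    """Averaging kernel A = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 K for a
    linear optimal-estimation retrieval (Rodgers 2000)."""
    KtSe = K.T @ np.linalg.inv(Se)
    return np.linalg.solve(KtSe @ K + np.linalg.inv(Sa), KtSe @ K)

rng = np.random.default_rng(4)
K = rng.standard_normal((5, 3))   # 5 channels, 3 state elements
Se = 0.01 * np.eye(5)             # measurement-noise covariance
Sa = np.eye(3)                    # prior covariance

A = averaging_kernel(K, Se, Sa)
dfs = float(np.trace(A))          # degrees of freedom for signal
# Each eigenvalue of A lies in [0, 1), so 0 <= dfs < 3 here.
```

The trace simply sums these per-mode fractions of "information retrieved from the measurement rather than the prior" — which is precisely the interpretation the abstract says is asserted rather than derived.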

  20. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

Estimation of an unknown atmospheric release from a finite set of concentration measurements is an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and carried out in three steps. First, the point source parameters (location and intensity) are estimated using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding values predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions, and source estimation is repeated with the modified adjoint functions to analyse the effect of the modification. The process is tested for two well-known inversion techniques, renormalization and least squares. The proposed methodology and inversion techniques are evaluated for a real scenario using concentration measurements from the Idaho diffusion experiment in low-wind stable conditions. With both inversion techniques, a significant improvement in the source estimation is observed after minimizing the representativity errors.
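For a single point source with known location, the least-squares step reduces to a one-parameter linear fit. The sketch below is a simplification (the paper's renormalization inversion and adjoint functions are not reproduced): given sensitivities a_i — the predicted concentration at receptor i per unit emission — the intensity minimizing ||c - q a||² is q = aᵀc / aᵀa, the basic estimator that the regression correction would then act on:

```python
import numpy as np

def estimate_intensity(a, c):
    """Least-squares source intensity for the linear model c ~ q * a."""
    return float(a @ c) / float(a @ a)

rng = np.random.default_rng(5)
a = rng.uniform(0.5, 2.0, size=10)  # sensitivities: conc. per unit emission
q_true = 3.0
c = q_true * a + 0.05 * rng.standard_normal(10)  # noisy measurements

q_hat = estimate_intensity(a, c)
```

A systematic mismatch between measured and predicted concentrations (the representativity error) would bias `a`, and hence `q_hat`, which is what the regression-based rescaling of the adjoint functions is meant to reduce.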

  1. Inverse modeling for seawater intrusion in coastal aquifers: Insights about parameter sensitivities, variances, correlations and estimation procedures derived from the Henry problem

    USGS Publications Warehouse

    Sanz, E.; Voss, C.I.

    2006-01-01

    Inverse modeling studies employing data collected from the classic Henry seawater intrusion problem give insight into several important aspects of inverse modeling of seawater intrusion problems and effective measurement strategies for estimation of parameters for seawater intrusion. Despite the simplicity of the Henry problem, it embodies the behavior of a typical seawater intrusion situation in a single aquifer. Data collected from the numerical problem solution are employed without added noise in order to focus on the aspects of inverse modeling strategies dictated by the physics of variable-density flow and solute transport during seawater intrusion. Covariances of model parameters that can be estimated are strongly dependent on the physics. The insights gained from this type of analysis may be directly applied to field problems in the presence of data errors, using standard inverse modeling approaches to deal with uncertainty in data. Covariance analysis of the Henry problem indicates that in order to generally reduce variance of parameter estimates, the ideal places to measure pressure are as far away from the coast as possible, at any depth, and the ideal places to measure concentration are near the bottom of the aquifer between the center of the transition zone and its inland fringe. These observations are located in and near high-sensitivity regions of system parameters, which may be identified in a sensitivity analysis with respect to several parameters. However, both the form of error distribution in the observations and the observation weights impact the spatial sensitivity distributions, and different choices for error distributions or weights can result in significantly different regions of high sensitivity. Thus, in order to design effective sampling networks, the error form and weights must be carefully considered. 
For the Henry problem, permeability and freshwater inflow can be estimated with low estimation variance from only pressure or only concentration observations. Permeability, freshwater inflow, solute molecular diffusivity, and porosity can be estimated with roughly equivalent confidence using observations of only the logarithm of concentration. Furthermore, covariance analysis allows a logical reduction of the number of estimated parameters for ill-posed inverse seawater intrusion problems. Ill-posed problems may exhibit poor estimation convergence, have a non-unique solution, have multiple minima, or require excessive computational effort, and the condition often occurs when estimating too many or co-dependent parameters. For the Henry problem, such analysis allows selection of the two parameters that control system physics from among all possible system parameters. © 2005 Elsevier Ltd. All rights reserved.
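The covariance analysis referred to above can be sketched for a linearized model: for a least-squares fit with Jacobian J of observation sensitivities and noise level sigma, the parameter covariance is (JᵀJ)⁻¹σ², and large off-diagonal correlations flag co-dependent parameters that should be dropped. A toy example with an assumed Jacobian:

```python
import numpy as np

def parameter_covariance(J, sigma=1.0):
    """Linearized parameter covariance (J^T J)^-1 * sigma^2 for an
    unweighted least-squares fit."""
    return sigma**2 * np.linalg.inv(J.T @ J)

# Three observations, two parameters with distinct sensitivity patterns
J = np.array([[1.0, 0.1],
              [0.1, 1.0],
              [1.0, 0.2]])
C = parameter_covariance(J, sigma=0.1)
corr = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])
# |corr| well below 1: both parameters are identifiable from these data.
```

If two columns of J were nearly parallel (observations in overlapping high-sensitivity regions), |corr| would approach 1 and the variances would blow up — the signature of an ill-posed estimation that motivates removing one of the parameters.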

  2. Sinc-Galerkin estimation of diffusivity in parabolic problems

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.; Bowers, Kenneth L.

    1991-01-01

    A fully Sinc-Galerkin method for the numerical recovery of spatially varying diffusion coefficients in linear partial differential equations is presented. Because the parameter recovery problems are inherently ill-posed, an output error criterion in conjunction with Tikhonov regularization is used to formulate them as infinite-dimensional minimization problems. The forward problems are discretized with a sinc basis in both the spatial and temporal domains thus yielding an approximate solution which displays an exponential convergence rate and is valid on the infinite time interval. The minimization problems are then solved via a quasi-Newton/trust region algorithm. The L-curve technique for determining an approximate value of the regularization parameter is briefly discussed, and numerical examples are given which show the applicability of the method both for problems with noise-free data as well as for those whose data contains white noise.
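The L-curve technique mentioned above plots the residual norm against the solution norm over a sweep of regularization parameters; the corner of the resulting "L" indicates a balanced choice. A minimal sketch with synthetic data (not the Sinc-Galerkin discretization itself; kernel and noise level are illustrative):

```python
import numpy as np

def lcurve_points(A, b, alphas):
    """Residual and solution norms of the Tikhonov solution for each
    alpha: the raw material of an L-curve plot."""
    n = A.shape[1]
    pts = []
    for a in alphas:
        x = np.linalg.solve(A.T @ A + a * np.eye(n), A.T @ b)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts

rng = np.random.default_rng(6)
n = 20
t = np.linspace(0, 1, n)
A = np.exp(-10 * (t[:, None] - t[None, :])**2) / n
b = A @ np.sin(np.pi * t) + 1e-6 * rng.standard_normal(n)

pts = lcurve_points(A, b, np.logspace(-10, 0, 11))
res, sol = zip(*pts)
# Residual norms grow with alpha while solution norms shrink: the two
# arms of the "L"; the corner suggests a good regularization parameter.
```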

  3. On regularization and error estimates for the backward heat conduction problem with time-dependent thermal diffusivity factor

    NASA Astrophysics Data System (ADS)

    Karimi, Milad; Moradlou, Fridoun; Hajipour, Mojtaba

    2018-10-01

This paper is concerned with a backward heat conduction problem with a time-dependent thermal diffusivity factor in an infinite "strip". This problem is drastically ill-posed because high-frequency components are amplified unboundedly. A new regularization method based on the Meyer wavelet technique is developed to solve the considered problem. Using the Meyer wavelet technique, some new stable estimates of Hölder and logarithmic type are proposed, which are optimal in the sense given by Tautenhahn. The stability and convergence rate of the proposed regularization technique are proved. The good performance and high accuracy of this technique are demonstrated through various one- and two-dimensional examples. Numerical simulations and some comparative results are presented.

  4. Pose-free structure from motion using depth from motion constraints.

    PubMed

    Zhang, Ji; Boutin, Mireille; Aliaga, Daniel G

    2011-10-01

    Structure from motion (SFM) is the problem of recovering the geometry of a scene from a stream of images taken from unknown viewpoints. One popular approach to estimate the geometry of a scene is to track scene features on several images and reconstruct their position in 3-D. During this process, the unknown camera pose must also be recovered. Unfortunately, recovering the pose can be an ill-conditioned problem which, in turn, can make the SFM problem difficult to solve accurately. We propose an alternative formulation of the SFM problem with fixed internal camera parameters known a priori. In this formulation, obtained by algebraic variable elimination, the external camera pose parameters do not appear. As a result, the problem is better conditioned in addition to involving much fewer variables. Variable elimination is done in three steps. First, we take the standard SFM equations in projective coordinates and eliminate the camera orientations from the equations. We then further eliminate the camera center positions. Finally, we also eliminate all 3-D point positions coordinates, except for their depths with respect to the camera center, thus obtaining a set of simple polynomial equations of degree two and three. We show that, when there are merely a few points and pictures, these "depth-only equations" can be solved in a global fashion using homotopy methods. We also show that, in general, these same equations can be used to formulate a pose-free cost function to refine SFM solutions in a way that is more accurate than by minimizing the total reprojection error, as done when using the bundle adjustment method. The generalization of our approach to the case of varying internal camera parameters is briefly discussed. © 2011 IEEE

  5. Validating a UAV artificial intelligence control system using an autonomous test case generator

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Huber, Justin

    2013-05-01

    The validation of safety-critical applications, such as autonomous UAV operations in an environment which may include human actors, is an ill-posed problem. To gain confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed and testing cost.

  6. CREKID: A computer code for transient, gas-phase combustion kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1984-01-01

    A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially-fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.

  7. Binary optimization for source localization in the inverse problem of ECG.

    PubMed

    Potyagaylo, Danila; Cortés, Elisenda Gil; Schulze, Walther H W; Dössel, Olaf

    2014-09-01

    The goal of ECG-imaging (ECGI) is to reconstruct heart electrical activity from body surface potential maps. The problem is ill-posed, which means that it is extremely sensitive to measurement and modeling errors. The most commonly used method to tackle this obstacle is Tikhonov regularization, which consists in converting the original problem into a well-posed one by adding a penalty term. The method, despite all its practical advantages, has however a serious drawback: The obtained solution is often over-smoothed, which can hinder precise clinical diagnosis and treatment planning. In this paper, we apply a binary optimization approach to the transmembrane voltage (TMV)-based problem. For this, we assume the TMV to take two possible values according to the heart abnormality under consideration: we investigate the localization of simulated ischemic areas and ectopic foci, as well as one clinical infarction case. The abnormality affects only the choice of the binary values, while the core of the algorithms remains the same, making the approximation easily adjustable to the application needs. Two methods were tested: a hybrid metaheuristic approach and the difference of convex functions (DC) algorithm. For this purpose, we performed realistic heart simulations for a complex thorax model and applied the proposed techniques to the obtained ECG signals. Both methods enabled localization of the areas of interest, hence showing their potential for application in ECGI. For the metaheuristic algorithm, it was necessary to subdivide the heart into regions in order to obtain a stable solution unsusceptible to the errors, while the analytical DC scheme can be efficiently applied to higher-dimensional problems. With the DC method, we also successfully reconstructed the activation pattern and origin of a simulated extrasystole. In addition, the DC algorithm enables iterative adjustment of the binary values, ensuring robust performance.

  8. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    NASA Astrophysics Data System (ADS)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  9. Experimental/clinical evaluation of EIT image reconstruction with l1 data and image norms

    NASA Astrophysics Data System (ADS)

    Mamatjan, Yasin; Borsic, Andrea; Gürsoy, Doga; Adler, Andy

    2013-04-01

    Electrical impedance tomography (EIT) image reconstruction is ill-posed, and the spatial resolution of reconstructed images is low due to the diffuse propagation of current and the limited number of independent measurements. Generally, image reconstruction is formulated using a regularized scheme in which l2 norms are preferred for both the data misfit and image prior terms for computational convenience, which results in smooth solutions. However, recent work on a Primal Dual-Interior Point Method (PDIPM) framework showed its effectiveness in dealing with the l1 minimization problem: l1 norms on the data and regularization terms in EIT image reconstruction address both the reconstruction of sharp edges and robustness to measurement errors. We aim for a clinical and experimental evaluation of the PDIPM method by selecting scenarios (human lung and dog breathing) with known electrode errors, which require rigorous regularization and cause the failure of reconstructions with l2 norms. Results demonstrate the applicability of PDIPM algorithms, especially l1 data and regularization norms, for clinical applications of EIT, showing that the l1 solution is not only more robust to measurement errors in a clinical setting, but also provides high contrast resolution on organ boundaries.
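
    To illustrate the robustness that l1 terms buy, here is a generic iteratively-reweighted-least-squares sketch, not the paper's PDIPM; the matrix sizes and parameter values below are illustrative:

```python
import numpy as np

def irls_l1(A, b, lam=1e-3, iters=50, eps=1e-6):
    """Approximate min ||Ax - b||_1 + lam * ||x||_1 by iteratively
    reweighted least squares: each |t| term is handled via a weight
    1/max(|t|, eps) refreshed from the previous iterate."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        wd = 1.0 / np.maximum(np.abs(r), eps)   # data weights ~ 1/|residual|
        wx = 1.0 / np.maximum(np.abs(x), eps)   # prior weights ~ 1/|x|
        # normal equations of the weighted least-squares subproblem
        H = A.T @ (wd[:, None] * A) + lam * np.diag(wx)
        x = np.linalg.solve(H, A.T @ (wd * b))
    return x
```

    A single grossly wrong measurement (analogous to an electrode error) barely moves the l1 solution, whereas it contaminates an l2 fit everywhere.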

  10. SAFE HANDLING OF FOODS

    EPA Science Inventory

    Microbial food-borne illnesses pose a significant health problem in Japan. In 1996 the world's largest outbreak of Escherichia coli food illness occurred in Japan. Since then, new regulatory measures were established, including strict hygiene practices in meat and food processi...

  11. On the Soil Roughness Parameterization Problem in Soil Moisture Retrieval of Bare Surfaces from Synthetic Aperture Radar

    PubMed Central

    Verhoest, Niko E.C; Lievens, Hans; Wagner, Wolfgang; Álvarez-Mozos, Jesús; Moran, M. Susan; Mattia, Francesco

    2008-01-01

    Synthetic Aperture Radar has shown its large potential for retrieving soil moisture maps at regional scales. However, since the backscattered signal is determined by several surface characteristics, the retrieval of soil moisture is an ill-posed problem when using single configuration imagery. Unless accurate surface roughness parameter values are available, retrieving soil moisture from radar backscatter usually provides inaccurate estimates. The characterization of soil roughness is not fully understood, and a large range of roughness parameter values can be obtained for the same surface when different measurement methodologies are used. In this paper, a literature review is made that summarizes the problems encountered when parameterizing soil roughness as well as the reported impact of the errors made on the retrieved soil moisture. A number of suggestions are made for resolving issues in roughness parameterization and for studying the impact of these roughness problems on the soil moisture retrieval accuracy and scale. PMID:27879932

  12. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

    Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled released-mass parameter does not depend on prior information, which improves the identification efficiency. A hypothetical case study with different numbers of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the number of gauging sections further improves identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA. Copyright © 2018 Elsevier Ltd. All rights reserved.
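
    A minimal sketch of the optimization step, assuming a toy 1-D advection-dispersion forward model in place of the paper's river model (the plume formula, parameter bounds and DE settings below are illustrative, not the LR-BPM itself):

```python
import numpy as np

def plume(x, t, x0, mass, u=1.0, D=0.5):
    """Toy 1-D instantaneous point-source concentration (illustrative)."""
    return mass / np.sqrt(4 * np.pi * D * t) * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

def diff_evolution(f, bounds, pop=30, gens=200, F=0.7, CR=0.9, seed=0):
    """Bare-bones rand/1/bin differential evolution minimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    d = len(bounds)
    X = lo + rng.random((pop, d)) * (hi - lo)
    fx = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = X[idx]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True       # guarantee one mutated gene
            trial = np.where(cross, mutant, X[i])
            ft = f(trial)
            if ft < fx[i]:                      # greedy selection
                X[i], fx[i] = trial, ft
    return X[np.argmin(fx)], fx.min()
```

    Given concentrations observed at a few gauging sections, DE searches the (location, mass) box for the parameters that minimize the data misfit.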

  13. Inverse statistical estimation via order statistics: a resolution of the ill-posed inverse problem of PERT scheduling

    NASA Astrophysics Data System (ADS)

    Pickard, William F.

    2004-10-01

    The classical PERT inverse statistics problem requires estimation of the mean, m̄, and standard deviation, s, of a unimodal distribution given estimates of its mode, m, and of the smallest, a, and largest, b, values likely to be encountered. After placing the problem in historical perspective and showing that it is ill-posed because it is underdetermined, this paper offers an approach to resolve the ill-posedness: (a) by interpreting a and b as modes of order statistic distributions; (b) by requiring also an estimate of the number of samples, N, considered in estimating the set {m, a, b}; and (c) by maximizing a suitable likelihood, having made the traditional assumption that the underlying distribution is beta. Exact formulae relating the four parameters of the beta distribution to {m, a, b, N} and the assumed likelihood function are then used to compute the four underlying parameters of the beta distribution; and from them, m̄ and s are computed using exact formulae.
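
    For reference, the classical PERT point estimates that the paper revisits follow from the traditional beta assumption (these are the textbook formulas, not the paper's order-statistics resolution):

```python
def pert_estimates(a, m, b):
    """Classical PERT estimates of the mean and standard deviation from
    the mode m and the extreme estimates a and b, under the traditional
    beta assumption: mean = (a + 4m + b) / 6, sd = (b - a) / 6."""
    return (a + 4 * m + b) / 6.0, (b - a) / 6.0
```

    The paper's point is that {m, a, b} alone underdetermine the four beta parameters; supplementing them with a sample count N restores well-posedness.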

  14. Robust penalty method for structural synthesis

    NASA Technical Reports Server (NTRS)

    Kamat, M. P.

    1983-01-01

    The Sequential Unconstrained Minimization Technique (SUMT) offers an easy way of solving nonlinearly constrained problems. However, this algorithm frequently suffers from the need to minimize an ill-conditioned penalty function. An ill-conditioned minimization problem can be solved very effectively by posing the problem as one of integrating a system of stiff differential equations utilizing concepts from singular perturbation theory. This paper evaluates the robustness and the reliability of such a singular perturbation based SUMT algorithm on two different problems of structural optimization of widely separated scales. The report concludes that whereas conventional SUMT can be bogged down by frequent ill-conditioning, especially in large scale problems, the singular perturbation SUMT has no such difficulty in converging to very accurate solutions.

  15. Cone Beam X-Ray Luminescence Tomography Imaging Based on KA-FEM Method for Small Animals.

    PubMed

    Chen, Dongmei; Meng, Fanzhen; Zhao, Fengjun; Xu, Cao

    2016-01-01

    Cone beam X-ray luminescence tomography can realize fast X-ray luminescence tomography imaging with relatively low scanning time compared with narrow beam X-ray luminescence tomography. However, cone beam X-ray luminescence tomography suffers from an ill-posed reconstruction problem. First, the feasibility of experiments with different penetration depths and multiple spectra in small animals was tested using a nanophosphor material. Then, the hybrid reconstruction algorithm with the KA-FEM method was applied in cone beam X-ray luminescence tomography for small animals to overcome the ill-posed reconstruction problem; its advantages and properties have been demonstrated in fluorescence tomography imaging. The in vivo mouse experiment proved the feasibility of the proposed method.

  16. Ill-posedness of the 3D incompressible hyperdissipative Navier–Stokes system in critical Fourier-Herz spaces

    NASA Astrophysics Data System (ADS)

    Nie, Yao; Zheng, Xiaoxin

    2018-07-01

    We study the Cauchy problem for the 3D incompressible hyperdissipative Navier–Stokes equations and consider the well-posedness and ill-posedness in critical Fourier-Herz spaces. We prove that if and , the system is locally well-posed for large initial data as well as globally well-posed for small initial data. Also, we obtain the same result for and . More importantly, we show that the system is ill-posed in the sense of norm inflation for and q > 2. The proof relies heavily on the particular structure of the initial data u_0 that we construct, which makes the first iteration of the solution inflate. Specifically, the special structure of u_0 transforms an infinite sum into a finite sum in the 'remainder term', which permits us to control the remainder.

  17. Multicollinearity in hierarchical linear models.

    PubMed

    Yu, Han; Jiang, Shanhe; Land, Kenneth C

    2015-09-01

    This study investigates an ill-posed problem (multicollinearity) in Hierarchical Linear Models from both the data and the model perspectives. We propose an intuitive, effective approach to diagnosing the presence of multicollinearity and its remedies in this class of models. A simulation study demonstrates the impacts of multicollinearity on coefficient estimates, associated standard errors, and variance components at various levels of multicollinearity for finite sample sizes typical in social science studies. We further investigate the role multicollinearity plays at each level for estimation of coefficient parameters in terms of shrinkage. Based on these analyses, we recommend a top-down method for assessing multicollinearity in HLMs that first examines the contextual predictors (Level-2 in a two-level model) and then the individual predictors (Level-1) and uses the results for data collection, research problem redefinition, model re-specification, variable selection and estimation of a final model. Copyright © 2015 Elsevier Inc. All rights reserved.
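
    A common single-level diagnostic in the same spirit is the variance inflation factor; this numpy sketch (a generic illustration, not the paper's HLM-specific top-down procedure) flags near-collinear predictors:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of a design matrix X
    (no intercept column): VIF_j = 1 / (1 - R_j^2), where R_j^2 comes
    from regressing column j on the remaining columns."""
    X = np.asarray(X, float)
    n, p = X.shape
    out = np.empty(p)
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
        out[j] = 1.0 / (1.0 - r2)
    return out
```

    Values near 1 indicate independent predictors; large values signal the multicollinearity that inflates coefficient standard errors.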

  18. Analysis of the Hessian for Aerodynamic Optimization: Inviscid Flow

    NASA Technical Reports Server (NTRS)

    Arian, Eyal; Ta'asan, Shlomo

    1996-01-01

    In this paper we analyze inviscid aerodynamic shape optimization problems governed by the full potential and the Euler equations in two and three dimensions. The analysis indicates that minimization of pressure-dependent cost functions results in Hessians whose eigenvalue distributions are identical for the full potential and the Euler equations. However, the optimization problems in two and three dimensions are inherently different. While the two-dimensional optimization problems are well-posed, the three-dimensional ones are ill-posed. Oscillations in the shape up to the smallest scale allowed by the design space can develop in the direction perpendicular to the flow, implying that a regularization is required. A natural choice of such a regularization is derived. The analysis also gives an estimate of the Hessian's condition number, which implies that the problems at hand are ill-conditioned. Infinite dimensional approximations for the Hessians are constructed and preconditioners for gradient based methods are derived from these approximate Hessians.

  19. Statistical analysis of nonlinearly reconstructed near-infrared tomographic images: Part I--Theory and simulations.

    PubMed

    Pogue, Brian W; Song, Xiaomei; Tosteson, Tor D; McBride, Troy O; Jiang, Shudong; Paulsen, Keith D

    2002-07-01

    Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores non-invasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE.
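
    The repeated-reconstruction protocol can be mimicked on a toy linear problem: reconstruct many noisy realizations with a Tikhonov (ridge) inverse and decompose the image MSE into bias and variance (all matrices and parameter values below are illustrative, not the NIR forward model):

```python
import numpy as np

def bias_variance(A, x_true, lam, noise_sd, trials=200, seed=0):
    """Monte-Carlo estimate of squared bias and total variance of a
    Tikhonov-regularized reconstruction over repeated noisy data."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    H = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T)  # regularized inverse
    b_clean = A @ x_true
    xs = np.array([H @ (b_clean + noise_sd * rng.standard_normal(len(b_clean)))
                   for _ in range(trials)])
    bias2 = np.sum((xs.mean(axis=0) - x_true) ** 2)
    var = np.sum(xs.var(axis=0))
    return bias2, var
```

    Sweeping lam traces the trade-off the paper describes: bias dominates at strong regularization, variance dominates as regularization is relaxed.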

  20. Meanings Given to Algebraic Symbolism in Problem-Posing

    ERIC Educational Resources Information Center

    Cañadas, María C.; Molina, Marta; del Río, Aurora

    2018-01-01

    Some errors in the learning of algebra suggest that students might have difficulties giving meaning to algebraic symbolism. In this paper, we use problem posing to analyze the students' capacity to assign meaning to algebraic symbolism and the difficulties that students encounter in this process, depending on the characteristics of the algebraic…

  1. Effects of adaptive refinement on the inverse EEG solution

    NASA Astrophysics Data System (ADS)

    Weinstein, David M.; Johnson, Christopher R.; Schmidt, John A.

    1995-10-01

    One of the fundamental problems in electroencephalography can be characterized as an inverse problem: given a subset of electrostatic potentials measured on the surface of the scalp and the geometry and conductivity properties within the head, calculate the current vectors and potential fields within the cerebrum. Mathematically, the generalized EEG problem can be stated as solving Poisson's equation of electrical conduction for the primary current sources. The resulting problem is mathematically ill-posed, i.e., the solution does not depend continuously on the data, such that small errors in the measurement of the voltages on the scalp can yield unbounded errors in the solution, and, for the general treatment of a solution of Poisson's equation, the solution is non-unique. However, if accurate solutions to such problems could be obtained, neurologists would gain noninvasive access to patient-specific cortical activity. Access to such data would ultimately increase the number of patients who could be effectively treated for pathological cortical conditions such as temporal lobe epilepsy. In this paper, we present the effects of spatial adaptive refinement on the inverse EEG problem and show that the use of adaptive methods allows for significantly better estimates of electric and potential fields within the brain through an inverse procedure. To test these methods, we constructed several finite element head models from magnetic resonance images of a patient. The finite element meshes ranged in size from 2724 nodes and 12,812 elements to 5224 nodes and 29,135 tetrahedral elements, depending on the level of discretization. We show that an adaptive meshing algorithm minimizes the error in the forward problem due to spatial discretization and thus increases the accuracy of the inverse solution.

  2. Hysteresis and Phase Transitions in a Lattice Regularization of an Ill-Posed Forward-Backward Diffusion Equation

    NASA Astrophysics Data System (ADS)

    Helmers, Michael; Herrmann, Michael

    2018-03-01

    We consider a lattice regularization for an ill-posed diffusion equation with a trilinear constitutive law and study the dynamics of phase interfaces in the parabolic scaling limit. Our main result guarantees for a certain class of single-interface initial data that the lattice solutions satisfy asymptotically a free boundary problem with a hysteretic Stefan condition. The key challenge in the proof is to control the microscopic fluctuations that are inevitably produced by the backward diffusion when a particle passes the spinodal region.

  3. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization based algorithms for large scale linear discrete ill-posed problems with general-form regularization: subject to , where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which is suitable only for small to medium scale problems, and randomized SVD (RSVD) algorithms that generate good low rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating the rank- RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
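
    The baseline these methods build on can be stated in a few lines: a standard-form (L = I) truncated-SVD solution that discards the noise-amplifying small singular values (a generic sketch, not the randomized MTRSVD algorithm itself):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD regularized solution of Ax = b: keep only the k
    largest singular values, filtering out the components in which
    noise is divided by tiny singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = min(k, len(s))
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```

    On a discretely ill-posed system, the truncated solution stays near the true one while the naive solve is destroyed by noise amplification.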

  4. Sparse Reconstruction of Regional Gravity Signal Based on Stabilized Orthogonal Matching Pursuit (SOMP)

    NASA Astrophysics Data System (ADS)

    Saadat, S. A.; Safari, A.; Needell, D.

    2016-06-01

    The main role of gravity field recovery is the study of dynamic processes in the interior of the Earth especially in exploration geophysics. In this paper, the Stabilized Orthogonal Matching Pursuit (SOMP) algorithm is introduced for sparse reconstruction of regional gravity signals of the Earth. In practical applications, ill-posed problems may be encountered regarding unknown parameters that are sensitive to the data perturbations. Therefore, an appropriate regularization method needs to be applied to find a stabilized solution. The SOMP algorithm aims to regularize the norm of the solution vector, while also minimizing the norm of the corresponding residual vector. In this procedure, a convergence point of the algorithm that specifies optimal sparsity-level of the problem is determined. The results show that the SOMP algorithm finds the stabilized solution for the ill-posed problem at the optimal sparsity-level, improving upon existing sparsity based approaches.
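
    For orientation, plain OMP (without the stabilization step that distinguishes SOMP) greedily grows the support and re-fits by least squares; the dictionary and sparsity level below are illustrative:

```python
import numpy as np

def omp(A, b, k):
    """Plain Orthogonal Matching Pursuit: at each step pick the column
    most correlated with the residual, then re-fit all selected columns
    by least squares and update the residual."""
    m, n = A.shape
    support, r = [], b.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        r = b - A[:, support] @ xs
    x = np.zeros(n)
    x[support] = xs
    return x
```

    With enough measurements relative to the sparsity level, OMP typically recovers the support exactly from far fewer samples than unknowns.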

  5. Iterative updating of model error for Bayesian inversion

    NASA Astrophysics Data System (ADS)

    Calvetti, Daniela; Dunlop, Matthew; Somersalo, Erkki; Stuart, Andrew

    2018-02-01

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only limited full model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations when the data is finite dimensional. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large particle limit under a simplifying assumption. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.

  6. A modified conjugate gradient method based on the Tikhonov system for computerized tomography (CT).

    PubMed

    Wang, Qi; Wang, Huaxiang

    2011-04-01

    During the past few decades, computerized tomography (CT) was widely used for non-destructive testing (NDT) and non-destructive examination (NDE) in the industrial area because of its characteristics of non-invasiveness and visibility. Recently, CT technology has been applied to multi-phase flow measurement. Using the principle of radiation attenuation measurements along different directions through the investigated object, with a special reconstruction algorithm, cross-sectional information of the scanned object can be worked out. It is a typical inverse problem and has always been a challenge due to its nonlinearity and ill-conditioning. The Tikhonov regularization method is widely used for similar ill-posed problems. However, the conventional Tikhonov method does not provide reconstructions of sufficient quality: the relative errors between the reconstructed images and the real distribution should be further reduced. In this paper, a modified conjugate gradient (CG) method is applied to a Tikhonov system (MCGT method) for reconstructing CT images. The computational load is dominated by the number of independent measurements m, and a preconditioner is introduced to lower the condition number of the Tikhonov system. Both simulation and experiment results indicate that the proposed method can reduce the computational time and improve the quality of image reconstruction. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
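
    The unpreconditioned core of such a scheme is ordinary CG applied to the Tikhonov normal equations (a sketch of the standard method; the paper's MCGT modifications and preconditioner are not reproduced here):

```python
import numpy as np

def cg_tikhonov(A, b, lam, iters=100, tol=1e-10):
    """Conjugate gradients on the Tikhonov normal equations
    (A^T A + lam * I) x = A^T b, which are symmetric positive definite
    for lam > 0, so plain CG applies."""
    n = A.shape[1]
    M = A.T @ A + lam * np.eye(n)
    x = np.zeros(n)
    r = A.T @ b - M @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```

    In exact arithmetic CG converges in at most n steps; preconditioning, as in the paper, reduces the condition number and hence the iteration count in practice.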

  7. The inverse problem of estimating the gravitational time dilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.

    2016-11-15

    Precise testing of the gravitational time dilation effect suggests comparing the clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via the electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations of a wide class of distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f)^γ, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.

  8. An efficient method for model refinement in diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Zirak, A. R.; Khademi, M.

    2007-11-01

    Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary-value optimization problem which necessitates regularization. Bayesian methods are also suitable because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, for which model-refinement criteria, especially total least squares (TLS), must be used to reduce the model error. The use of TLS is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) for treating the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regularizer. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations and then applying RTLS to the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves the image reconstruction performance and localizes the abnormality well.

  9. Feasibility of inverse problem solution for determination of city emission function from night sky radiance measurements

    NASA Astrophysics Data System (ADS)

    Petržala, Jaromír

    2018-07-01

    The knowledge of the emission function of a city is crucial for simulation of sky glow in its vicinity. Indirect methods to retrieve this function from radiances measured over a part of the sky have recently been developed. In principle, such methods represent an ill-posed inverse problem. This paper deals with a theoretical feasibility study of various approaches to solving the given inverse problem, namely testing the fitness of various stabilizing functionals within Tikhonov regularization. Further, the L-curve and generalized cross-validation methods were investigated as indicators of an optimal regularization parameter. First, we created a theoretical model for calculation of the sky spectral radiance in the form of a functional of the emission spectral radiance. Consequently, all the mentioned approaches were examined in numerical experiments with synthetic data generated for a fictitious city and perturbed by random errors. The results demonstrate that the second-order Tikhonov regularization method, together with choosing the regularization parameter by the L-curve maximum-curvature criterion, provides solutions in good agreement with the assumed model emission functions.

  10. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high-accuracy, large-scale gravity anomaly data reconstruction. Based on airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry data under the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
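
    A compact sketch of the OMP greedy iteration on synthetic sparse data (a generic implementation; the SGA-WZ flight data and the DFT dictionary of the paper are not reproduced, and the toy dimensions are ours):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select k columns of A,
    re-fitting y on the selected support by least squares each step."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
m, n, k = 60, 100, 4
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
idx = rng.choice(n, size=k, replace=False)
x_true[idx] = rng.uniform(1.0, 3.0, size=k) * rng.choice([-1.0, 1.0], size=k)
y = A @ x_true                                       # noiseless measurements
x_hat = omp(A, y, k)
```

    In the noiseless, well-conditioned regime OMP recovers the sparse vector exactly from far fewer measurements than unknowns, which is the mechanism behind the low compressed sampling rate reported above.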

  12. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    NASA Astrophysics Data System (ADS)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
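
    A sketch of the IRGN update itself on a toy 2-by-2 nonlinear system (the paper's heuristic stopping rule is more involved; here a fixed iteration count with a geometrically decaying regularization parameter stands in for it, and the test problem is ours):

```python
import numpy as np

# IRGN iteration:
# x_{k+1} = x_k - (J^T J + a_k I)^{-1} (J^T (F(x_k) - y) + a_k (x_k - x_0))
def F(x):
    return np.array([x[0] ** 2 + x[1], x[1] ** 2 + x[0]])

def J(x):
    return np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])

def irgn(y, x0, n_iter=30, alpha0=1.0, q=0.5):
    x = x0.copy()
    for k in range(n_iter):
        a = alpha0 * q ** k                 # geometrically decaying a_k
        Jk = J(x)
        rhs = Jk.T @ (F(x) - y) + a * (x - x0)
        x = x - np.linalg.solve(Jk.T @ Jk + a * np.eye(2), rhs)
    return x

x_true = np.array([1.0, 2.0])
y = F(x_true)                               # exact data (noise-free toy)
x_hat = irgn(y, np.array([0.5, 1.5]))
```

    With noisy data the iteration must be stopped early instead of run to convergence, which is where the noise-level-free heuristic rule of the paper enters.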

  13. Sizing aerosolized fractal nanoparticle aggregates through Bayesian analysis of wide-angle light scattering (WALS) data

    NASA Astrophysics Data System (ADS)

    Huber, Franz J. T.; Will, Stefan; Daun, Kyle J.

    2016-11-01

    Inferring the size distribution of aerosolized fractal aggregates from the angular distribution of elastically scattered light is a mathematically ill-posed problem. This paper presents a procedure for analyzing Wide-Angle Light Scattering (WALS) data using Bayesian inference. The outcome is probability densities for the recovered size distribution and aggregate morphology parameters. This technique is applied to both synthetic data and experimental data collected on soot-laden aerosols, using a measurement equation derived from Rayleigh-Debye-Gans fractal aggregate (RDG-FA) theory. In the case of experimental data, the recovered aggregate size distribution parameters are generally consistent with TEM-derived values, but the accuracy is impaired by the well-known limited accuracy of RDG-FA theory. Finally, we show how this bias could potentially be avoided using the approximation error technique.

  14. The mean field theory in EM procedures for blind Markov random field image restoration.

    PubMed

    Zhang, J

    1993-01-01

    A Markov random field (MRF) model-based EM (expectation-maximization) procedure for simultaneously estimating the degradation model and restoring the image is described. The MRF is a coupled one which provides continuity (inside regions of smooth gray tones) and discontinuity (at region boundaries) constraints for the restoration problem, which is, in general, ill posed. The computational difficulty associated with the EM procedure for MRFs is resolved by using mean field theory from statistical mechanics. An orthonormal blur decomposition is used to reduce the chances of undesirable locally optimal estimates. Experimental results on synthetic and real-world images show that this approach provides good blur estimates and restored images. The restored images are comparable to those obtained by a Wiener filter in mean-square error, but are more visually pleasing.

  15. Analysis and algorithms for a regularized Cauchy problem arising from a non-linear elliptic PDE for seismic velocity estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cameron, M.K.; Fomel, S.B.; Sethian, J.A.

    2009-01-01

    In the present work we derive and study a nonlinear elliptic PDE arising from the problem of estimating the sound speed inside the Earth. The physical setting of the PDE allows us to pose only a Cauchy problem, which is hence ill-posed. However, we are still able to solve it numerically on a time interval long enough to be of practical use. We used two approaches. The first is a finite-difference time-marching numerical scheme inspired by the Lax-Friedrichs method; its key features are the Lax-Friedrichs averaging and a wide stencil in space. The second is a spectral Chebyshev method with truncated series. We show that our schemes work because of (1) the special input corresponding to a positive finite seismic velocity, (2) special initial conditions corresponding to the image rays, (3) the fact that our finite-difference scheme contains small error terms which damp the high harmonics, and the truncation of the Chebyshev series, and (4) the need to compute the solution only for a short interval of time. We test our numerical scheme on a collection of analytic examples and demonstrate a dramatic improvement in accuracy in the estimation of the sound speed inside the Earth in comparison with the conventional Dix inversion. Our test on the Marmousi example confirms the effectiveness of the proposed approach.
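
    A sketch of a Lax-Friedrichs time-marching step, shown here for linear advection u_t + a u_x = 0 on a periodic grid (a stand-in problem; the paper's PDE is a nonlinear elliptic equation for seismic velocity):

```python
import numpy as np

def lax_friedrichs_step(u, a, dt, dx):
    """Lax-Friedrichs update: neighbour averaging plus a centred flux term."""
    up = np.roll(u, -1)   # u_{j+1}
    um = np.roll(u, 1)    # u_{j-1}
    return 0.5 * (up + um) - a * dt / (2.0 * dx) * (up - um)

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
a = 1.0
dt = 0.5 * dx / a                       # CFL number 0.5: stable regime
u = np.exp(-100.0 * (x - 0.5) ** 2)     # smooth initial bump
u0_max = u.max()
s0 = u.sum()                            # discrete mass (conserved)
for _ in range(100):
    u = lax_friedrichs_step(u, a, dt, dx)
```

    The neighbour averaging is exactly the damping of high harmonics mentioned above: under the CFL condition the update is a convex combination of neighbouring values, so the maximum cannot grow while the discrete mass is conserved.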

  16. A constrained regularization method for inverting data represented by linear algebraic or integral equations

    NASA Astrophysics Data System (ADS)

    Provencher, Stephen W.

    1982-09-01

    CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
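
    A hedged sketch of the kind of constrained regularized inversion CONTIN performs, Tikhonov plus a non-negativity constraint, solved here by simple projected gradient descent (CONTIN's actual constrained least-squares algorithms are more sophisticated, and the toy kernel below is ours):

```python
import numpy as np

def constrained_tikhonov(A, b, alpha, L, n_iter=5000):
    """Minimize ||A x - b||^2 + alpha ||L x||^2 subject to x >= 0
    by projected gradient descent."""
    H = A.T @ A + alpha * L.T @ L          # Hessian of the quadratic objective
    g0 = A.T @ b
    step = 1.0 / np.linalg.norm(H, 2)      # step <= 1/lambda_max: monotone
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - step * (H @ x - g0)        # gradient step
        x = np.maximum(x, 0.0)             # project onto the constraint set
    return x

# Toy Laplace-transform-like inversion of a non-negative distribution
# (illustrative; not one of CONTIN's USER setups).
m, n = 60, 40
s = np.linspace(0.01, 2.0, m)
tau = np.linspace(0.1, 10.0, n)
A = np.exp(-np.outer(s, tau))
x_true = np.exp(-0.5 * ((tau - 3.0) / 0.5) ** 2)
rng = np.random.default_rng(2)
b = A @ x_true + 1e-4 * rng.standard_normal(m)
x_hat = constrained_tikhonov(A, b, alpha=1e-4, L=np.eye(n))
```

    The inequality constraint encodes the "absolute prior knowledge" of the abstract (a distribution cannot be negative), while the regularizor carries the statistical prior.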

  17. Expanding the Space of Plausible Solutions in a Medical Tutoring System for Problem-Based Learning

    ERIC Educational Resources Information Center

    Kazi, Hameedullah; Haddawy, Peter; Suebnukarn, Siriwan

    2009-01-01

    In well-defined domains such as Physics, Mathematics, and Chemistry, solutions to a posed problem can objectively be classified as correct or incorrect. In ill-defined domains such as medicine, the classification of solutions to a patient problem as correct or incorrect is much more complex. Typical tutoring systems accept only a small set of…

  18. Minimum mean squared error (MSE) adjustment and the optimal Tykhonov-Phillips regularization parameter via reproducing best invariant quadratic uniformly unbiased estimates (repro-BIQUUE)

    NASA Astrophysics Data System (ADS)

    Schaffrin, Burkhard

    2008-02-01

    In a linear Gauss-Markov model, the parameter estimates from BLUUE (Best Linear Uniformly Unbiased Estimate) are not robust against possible outliers in the observations. Moreover, by giving up the unbiasedness constraint, the mean squared error (MSE) risk may be further reduced, in particular when the problem is ill-posed. In this paper, the α-weighted S-homBLE (Best homogeneously Linear Estimate) is derived via formulas originally used for variance component estimation on the basis of the repro-BIQUUE (reproducing Best Invariant Quadratic Uniformly Unbiased Estimate) principle in a model with stochastic prior information. In the present model, however, such prior information is not included, which allows the comparison of the stochastic approach (α-weighted S-homBLE) with the well-established algebraic approach of Tykhonov-Phillips regularization, also known as R-HAPS (Hybrid APproximation Solution), whenever the inverse of the “substitute matrix” S exists and is chosen as the R matrix that defines the relative impact of the regularizing term on the final result.
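
    The bias-variance mechanism invoked above (giving up unbiasedness to reduce MSE risk on an ill-posed problem) can be illustrated with the exact MSE of a ridge-type (Tykhonov-Phillips) estimator on a nearly collinear design; this is a toy stand-in, not the α-weighted S-homBLE derivation:

```python
import numpy as np

def ridge_mse(X, beta, sigma2, alpha):
    """Exact MSE of the ridge estimator for known beta:
    variance term + squared bias term, evaluated analytically."""
    n_feat = X.shape[1]
    G = np.linalg.inv(X.T @ X + alpha * np.eye(n_feat))
    var = sigma2 * np.trace(G @ X.T @ X @ G)
    bias = alpha * G @ beta
    return var + bias @ bias

# Ill-conditioned design: nearly collinear columns.
rng = np.random.default_rng(3)
z = rng.standard_normal(50)
X = np.column_stack([z, z + 1e-3 * rng.standard_normal(50)])
beta = np.array([1.0, 1.0])
sigma2 = 0.1

mse_ols = ridge_mse(X, beta, sigma2, 0.0)    # unbiased (BLUUE-like) case
mse_ridge = min(ridge_mse(X, beta, sigma2, a) for a in np.logspace(-6, 2, 50))
```

    On such an ill-conditioned design the variance of the unbiased estimate dwarfs the bias a small regularization introduces, so the biased estimator has far lower MSE risk.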

  19. Solving ill-posed inverse problems using iterative deep neural networks

    NASA Astrophysics Data System (ADS)

    Adler, Jonas; Öktem, Ozan

    2017-12-01

    We propose a partially learned approach for the solution of ill-posed inverse problems with not necessarily linear forward operators. The method builds on ideas from classical regularisation theory and recent advances in deep learning to perform learning while making use of prior information about the inverse problem encoded in the forward operator, noise model and a regularising functional. The method results in a gradient-like iterative scheme, where the ‘gradient’ component is learned using a convolutional network that includes the gradients of the data discrepancy and regulariser as input in each iteration. We present results of such a partially learned gradient scheme on a non-linear tomographic inversion problem with simulated data from both the Shepp-Logan phantom and a head CT. The outcome is compared against filtered backprojection and total variation reconstruction; the proposed method provides a 5.4 dB PSNR improvement over the total variation reconstruction while being significantly faster, giving reconstructions of 512 × 512 pixel images in about 0.4 s using a single graphics processing unit (GPU).

  20. Ill-posed problem and regularization in reconstruction of radiobiological parameters from serial tumor imaging data

    NASA Astrophysics Data System (ADS)

    Chvetsov, Alexei V.; Sandison, George A.; Schwartz, Jeffrey L.; Rengan, Ramesh

    2015-11-01

    The main objective of this article is to improve the stability of reconstruction algorithms for the estimation of radiobiological parameters from serial tumor imaging data acquired during radiation therapy. Serial images of tumor response to radiation therapy represent a complex summation of several exponential processes, such as treatment-induced cell inactivation, tumor growth, and cell loss. Accurate assessment of treatment response requires separating these processes, because they define the radiobiological determinants of treatment response and, correspondingly, the tumor control probability. However, the estimation of radiobiological parameters from imaging data is an inverse ill-posed problem, because a sum of several exponentials leads to a Fredholm integral equation of the first kind, which is ill posed. Therefore, the stability of the reconstruction of radiobiological parameters presents a problem even for the simplest models of tumor response. To study the stability of the parameter reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and the simplest case of a two-level cell population model of tumor response. Inverse reconstruction was performed using a simulated annealing algorithm to minimize a least-squares objective function. The results show that the reconstructed values of cell surviving fractions and cell doubling time exhibit significant nonphysical fluctuations if no stabilization algorithms are applied. However, after applying a stabilization algorithm based on variational regularization, the reconstruction produces statistical distributions for surviving fractions and doubling time that are comparable to published in vitro data. This algorithm is an advance over our previous work, in which only cell surviving fractions were reconstructed. We conclude that variational regularization allows for an increase in the number of free parameters in our model, which enables the development of more advanced parameter reconstruction algorithms.
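
    A toy sketch of simulated-annealing least-squares fitting of a two-exponential response (illustrative only; the two-level cell population model, the CT data, and the variational regularization of the paper are not reproduced, and the model form and parameters here are ours):

```python
import numpy as np

def model(t, p):
    s, tau = p                       # toy 'surviving fraction' and 'doubling time'
    return s * np.exp(-t) + (1.0 - s) * np.exp(t / tau)

def anneal(t, y, p0, n_iter=20000, seed=4):
    """Metropolis-style simulated annealing on the least-squares misfit."""
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    f = np.sum((model(t, p) - y) ** 2)
    best_p, best_f = p.copy(), f
    for i in range(n_iter):
        T = 1.0 * (1.0 - i / n_iter) + 1e-6        # linear cooling schedule
        q = p + 0.05 * rng.standard_normal(2)
        if q[0] <= 0.0 or q[0] >= 1.0 or q[1] <= 0.1:   # keep parameters physical
            continue
        fq = np.sum((model(t, q) - y) ** 2)
        if fq < f or rng.random() < np.exp(-(fq - f) / T):
            p, f = q, fq
            if f < best_f:
                best_p, best_f = p.copy(), f
    return best_p, best_f

t = np.linspace(0.0, 2.0, 30)
p_true = (0.6, 2.0)
y = model(t, p_true)
p0 = (0.3, 1.0)
f0 = np.sum((model(t, p0) - y) ** 2)
p_hat, f_hat = anneal(t, y, p0)
```

    Without a regularizing term the recovered parameters of such exponential sums fluctuate strongly under data noise, which is the instability the variational regularization above is designed to suppress.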

  1. Reducing errors in the GRACE gravity solutions using regularization

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2012-09-01

    The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. The L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method using Lanczos bidiagonalization, which is a computationally inexpensive approximation to the L-curve. Lanczos bidiagonalization is implemented with orthogonal transformations in a parallel computing environment and projects the large estimation problem onto a problem about two orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground-track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that constrains the geopotential coefficients as a function of degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time series of the candidate regularized solutions (Mar 2003-Feb 2010) shows markedly reduced error stripes compared with the unconstrained GRACE release 4 solutions (RL04) from the Center for Space Research (CSR). Post-fit residual analysis shows that the regularized solutions fit the data to within the noise level of GRACE. A time series of a filtered hydrological model is used to confirm that signal attenuation for basins in the Total Runoff Integrating Pathways (TRIP) database over 320 km radii is less than 1 cm equivalent water height RMS, which is within the noise level of GRACE.
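
    A sketch of the Golub-Kahan (Lanczos) bidiagonalization step that makes the L-curve affordable: k steps reduce min ||A x - b|| to a small (k+1)-by-k bidiagonal problem (the parallel, orthogonal-transformation implementation and GRACE-scale dimensions of the study are not reproduced here):

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization: A V_k = U_{k+1} B_k."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for i in range(k):
        r = A.T @ U[:, i]
        if i > 0:
            r -= B[i, i - 1] * V[:, i - 1]
        B[i, i] = np.linalg.norm(r)          # alpha_i (diagonal)
        V[:, i] = r / B[i, i]
        p = A @ V[:, i] - B[i, i] * U[:, i]
        B[i + 1, i] = np.linalg.norm(p)      # beta_{i+1} (subdiagonal)
        U[:, i + 1] = p / B[i + 1, i]
    return U, B, V

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 30))
b = rng.standard_normal(50)
U, B, V = golub_kahan(A, b, k=5)
```

    Tikhonov solutions and residual norms for many regularization parameters can then be evaluated on the small bidiagonal matrix B instead of on A, which is how the L-curve is approximated cheaply.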

  2. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that include the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
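
    A hedged sketch of the heuristic quasi-optimality principle for ordinary Tikhonov regularization: on a geometric grid of parameters, pick the one minimizing the jump between consecutive regularized solutions; no noise level is required (the paper's linear functional strategy and aggregation method are not reproduced, and the toy problem is ours):

```python
import numpy as np

def tikhonov(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

def quasi_optimality(A, b, alphas):
    """Choose alpha_j minimizing ||x_{alpha_{j+1}} - x_{alpha_j}||."""
    xs = [tikhonov(A, b, a) for a in alphas]
    jumps = [np.linalg.norm(xs[j + 1] - xs[j]) for j in range(len(xs) - 1)]
    j_star = int(np.argmin(jumps))
    return alphas[j_star], xs[j_star]

n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2)) / n
x_true = t * (1.0 - t)
rng = np.random.default_rng(6)
b = A @ x_true + 1e-5 * rng.standard_normal(n)

alphas = np.logspace(-12, -2, 40)       # geometric grid alpha_j = a0 * q^j
alpha_qo, x_qo = quasi_optimality(A, b, alphas)
```

    The structural noise conditions studied in the paper are precisely what guarantees that such a noise-level-free rule cannot be fooled in the mildly ill-posed Gaussian setting.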

  3. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    NASA Astrophysics Data System (ADS)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. 
Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.

  4. Proceedings of Colloquium on Stable Solutions of Some Ill-Posed Problems, October 9, 1979.

    DTIC Science & Technology

    1980-06-30

    4. In [24] iterative process (9) was applied for calculation of the magnetization of thin magnetic films. This problem is of interest for computer... equation ∫ k(x−t) f(t) dt = g(x), x > 1 (1). Its multidimensional analogue ∫ K(x−t) f(t) dt = g(x), x ∈ A (2), can be interpreted as the problem of

  5. Atmospheric inverse modeling via sparse reconstruction

    NASA Astrophysics Data System (ADS)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
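
    A hedged sketch of sparsity-constrained Tikhonov-type inversion via ISTA (iterative soft thresholding), a standard solver for min ½||A x - b||² + λ||x||₁; the paper's dictionary representation system and bound constraints are not reproduced, and the "hot spot" toy below is ours:

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, b, lam, n_iter=3000):
    """ISTA: gradient step on the data misfit, then soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # step <= 1/||A||^2: descent
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (b - A @ x), step * lam)
    return x

# Emission-field toy: a few localized 'hot spot' sources among many cells.
rng = np.random.default_rng(7)
m, n = 60, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)    # stand-in transport operator
x_true = np.zeros(n)
x_true[[10, 47, 95]] = [3.0, 2.0, 4.0]
b = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = ista(A, b, lam=0.05)
```

    Where a Gaussian prior would smear the point sources across neighbouring cells, the sparsity penalty concentrates the recovered emissions on a few grid cells, which is the behaviour the abstract highlights.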

  6. Backward semi-linear parabolic equations with time-dependent coefficients and local Lipschitz source

    NASA Astrophysics Data System (ADS)

    Nho Hào, Dinh; Van Duc, Nguyen; Van Thang, Nguyen

    2018-05-01

    Let H be a Hilbert space with inner product ⟨· , ·⟩ and norm ‖·‖, and let A(t) be a positive self-adjoint unbounded time-dependent operator on H. We establish stability estimates of Hölder type and propose a regularization method with error estimates of Hölder type for the ill-posed backward semi-linear parabolic equation u_t + A(t)u = f(t, u(t)) with the source function f satisfying a local Lipschitz condition.

  7. A Tale of Three Cases: Examining Accuracy, Efficiency, and Process Differences in Diagnosing Virtual Patient Cases

    ERIC Educational Resources Information Center

    Doleck, Tenzin; Jarrell, Amanda; Poitras, Eric G.; Chaouachi, Maher; Lajoie, Susanne P.

    2016-01-01

    Clinical reasoning is a central skill in diagnosing cases. However, diagnosing a clinical case poses several challenges that are inherent to solving multifaceted ill-structured problems. In particular, when solving such problems, the complexity stems from the existence of multiple paths to arriving at the correct solution (Lajoie, 2003). Moreover,…

  8. Rapid optimization of multiple-burn rocket flights.

    NASA Technical Reports Server (NTRS)

    Brown, K. R.; Harrold, E. F.; Johnson, G. W.

    1972-01-01

    Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.

  9. Three-dimensional ionospheric tomography reconstruction using the model function approach in Tikhonov regularization

    NASA Astrophysics Data System (ADS)

    Wang, Sicheng; Huang, Sixun; Xiang, Jie; Fang, Hanxian; Feng, Jian; Wang, Yu

    2016-12-01

    Ionospheric tomography is based on the observed slant total electron content (sTEC) along different satellite-receiver rays to reconstruct the three-dimensional electron density distribution. Due to the incomplete measurements provided by the satellite-receiver geometry, it is a typical ill-posed problem, and how to overcome the ill-posedness remains a crucial research question. In this paper, the Tikhonov regularization method is used, and the model function approach is applied to determine the optimal regularization parameter. This algorithm not only balances the weights between sTEC observations and the background electron density field but also converges globally and rapidly. The background error covariance is given by multiplying the background model variance and a location-dependent spatial correlation, and the correlation model is developed using sample statistics from an ensemble of International Reference Ionosphere 2012 (IRI2012) model outputs. Global Navigation Satellite System (GNSS) observations in China are used to present the reconstruction results, and measurements from two ionosondes are used as independent validation. Both the test cases using artificial sTEC observations and actual GNSS sTEC measurements show that the regularization method can effectively improve the background model outputs.

  10. Image reconstruction

    NASA Astrophysics Data System (ADS)

    Vasilenko, Georgii Ivanovich; Taratorin, Aleksandr Markovich

    Linear, nonlinear, and iterative image-reconstruction (IR) algorithms are reviewed. Theoretical results are presented concerning controllable linear filters, the solution of ill-posed functional minimization problems, and the regularization of iterative IR algorithms. Attention is also given to the problem of superresolution and analytical spectrum continuation, the solution of the phase problem, and the reconstruction of images distorted by turbulence. IR in optical and optical-digital systems is discussed with emphasis on holographic techniques.

  11. Least Squares Computations in Science and Engineering

    DTIC Science & Technology

    1994-02-01

    iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct... optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The... engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory

  12. An efficient and flexible Abel-inversion method for noisy data

    NASA Astrophysics Data System (ADS)

    Antokhin, Igor I.

    2016-12-01

    We propose an efficient and flexible method for solving the Abel integral equation of the first kind, frequently appearing in many fields of astrophysics, physics, chemistry, and applied sciences. This equation represents an ill-posed problem, thus solving it requires some kind of regularization. Our method is based on solving the equation on a so-called compact set of functions and/or using Tikhonov's regularization. A priori constraints on the unknown function, defining a compact set, are very loose and can be set using simple physical considerations. Tikhonov's regularization in itself does not require any explicit a priori constraints on the unknown function and can be used independently of such constraints or in combination with them. Various target degrees of smoothness of the unknown function may be set, as required by the problem at hand. The advantage of the method, apart from its flexibility, is that it gives uniform convergence of the approximate solution to the exact solution, as the errors of input data tend to zero. The method is illustrated on several simulated models with known solutions. An example of astrophysical application of the method is also given.

  13. A model of recovering the parameters of fast nonlocal heat transport in magnetic fusion plasmas

    NASA Astrophysics Data System (ADS)

    Kukushkin, A. B.; Kulichenko, A. A.; Sdvizhenskii, P. A.; Sokolov, A. V.; Voloshinov, V. V.

    2017-12-01

    A model is elaborated for interpreting the initial stage of the fast nonlocal transport events, which exhibit immediate response, in the diffusion time scale, of the spatial profile of electron temperature to its local perturbation, while the net heat flux is directed opposite to ordinary diffusion (i.e. along the temperature gradient). We solve the inverse problem of recovering the kernel of the integral equation, which describes nonlocal (superdiffusive) transport of energy due to emission and absorption of electromagnetic (EM) waves with long free path and strong reflection from the vacuum vessel’s wall. To allow for the errors of experimental data, we use the method based on the regularized (in the framework of an ill-posed problem, using the parametric models) approximation of available experimental data. The model is applied to interpreting the data from stellarator LHD and tokamak TFTR. The EM wave transport is considered here in the single-group approximation, however the limitations of the physics model enable us to identify the spectral range of the EM waves which might be responsible for the observed phenomenon.

  14. Time-Domain Impedance Boundary Conditions for Computational Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Tam, Christopher K. W.; Auriault, Laurent

    1996-01-01

    It is an accepted practice in aeroacoustics to characterize the properties of an acoustically treated surface by a quantity known as impedance. Impedance is a complex quantity. As such, it is designed primarily for frequency-domain analysis. Time-domain boundary conditions that are the equivalent of the frequency-domain impedance boundary condition are proposed. Both single frequency and model broadband time-domain impedance boundary conditions are provided. It is shown that the proposed boundary conditions, together with the linearized Euler equations, form well-posed initial boundary value problems. Unlike ill-posed problems, they are free from spurious instabilities that would render time-marching computational solutions impossible.

  15. Chemical approaches to solve mycotoxin problems and improve food safety

    USDA-ARS?s Scientific Manuscript database

    Foodborne illnesses are experienced by most of the population and are preventable. Agricultural produce can occasionally become contaminated with fungi capable of producing mycotoxins that pose health risks and reduce crop value. Many strategies are employed to keep food safe from mycotoxin contamination. ...

  16. An ill-posed problem for the Black-Scholes equation for a profitable forecast of prices of stock options on real market data

    NASA Astrophysics Data System (ADS)

    Klibanov, Michael V.; Kuzhuget, Andrey V.; Golubnichiy, Kirill V.

    2016-01-01

    A new empirical mathematical model for the Black-Scholes equation is proposed to forecast option prices. This model includes a new interval for the price of the underlying stock and new initial and boundary conditions. Conventional notions of maturity time and strike price are not used. The Black-Scholes equation is solved as a parabolic equation with reversed time, which is an ill-posed problem; thus, a regularization method is used to solve it. To verify the validity of our model, real market data for 368 randomly selected liquid options are used. A new trading strategy is proposed. Our results indicate that our method is profitable on those options. Furthermore, it is shown that the performance of two simple extrapolation-based techniques is much worse. We conjecture that our method might lead to significant profits for those financial institutions which trade large amounts of options. We caution, however, that further studies are necessary to verify this conjecture.

  17. Color correction pipeline optimization for digital cameras

    NASA Astrophysics Data System (ADS)

    Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo

    2013-04-01

    The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talks between the modules of the pipeline can lead to a higher color-rendition accuracy.

  18. Inverse problems and optimal experiment design in unsteady heat transfer processes identification

    NASA Technical Reports Server (NTRS)

    Artyukhin, Eugene A.

    1991-01-01

    Experimental-computational methods for estimating characteristics of unsteady heat transfer processes are analyzed. The methods are based on the principles of distributed parameter system identification. The theoretical basis of such methods is the numerical solution of nonlinear ill-posed inverse heat transfer problems and optimal experiment design problems. Numerical techniques for solving problems are briefly reviewed. The results of the practical application of identification methods are demonstrated when estimating effective thermophysical characteristics of composite materials and thermal contact resistance in two-layer systems.

  19. A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow

    NASA Technical Reports Server (NTRS)

    Baker, Gregory; Siegel, Michael; Tanveer, Saleh

    1995-01-01

    We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. This situation is disastrous for numerical computation, as small round-off errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out.

  20. Determination of the Geometric Form of a Plane of a Tectonic Gap as the Inverse Ill-posed Problem of Mathematical Physics

    NASA Astrophysics Data System (ADS)

    Sirota, Dmitry; Ivanov, Vadim

    2017-11-01

    Any mining operation influences the stability of natural and technogenic massifs and gives rise to sources of differences of mechanical tension. These sources generate a quasistationary electric field with a Newtonian potential. The paper reviews a method of determining the shape and size of a flat source of a field with this kind of potential. This common problem arises in many areas of mining: geological exploration of mineral resources, ore deposits, control of underground mining, determining coal self-heating sources, localization of the sources of rock cracks, and other applied problems of practical physics. These problems are inverse and ill-posed, and they are solved by conversion to a Fredholm-Urysohn integral equation of the first kind, which is in turn solved by A. N. Tikhonov's regularization method.
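
The Tikhonov step mentioned above is easy to sketch for a generic discretized first-kind Fredholm equation (a hedged illustration with a synthetic Gaussian smoothing kernel, not the paper's Newtonian-potential model):

```python
import numpy as np

# Discretized first-kind Fredholm equation K x = b on [0, 1] with a
# synthetic Gaussian smoothing kernel (a stand-in for the paper's kernel).
n = 100
s = np.linspace(0.0, 1.0, n)
t = np.linspace(0.0, 1.0, n)
K = np.exp(-((s[:, None] - t[None, :]) ** 2) / 0.01) / n

x_true = np.sin(np.pi * t) ** 2                 # smooth "source" to recover
b = K @ x_true + 1e-4 * np.random.default_rng(0).standard_normal(n)

# Tikhonov regularization: minimize ||K x - b||^2 + alpha * ||x||^2,
# i.e. solve the normal equations (K^T K + alpha * I) x = K^T b.
alpha = 1e-6
x_reg = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ b)

# A nearly unregularized solve amplifies the data noise enormously.
x_naive = np.linalg.solve(K.T @ K + 1e-15 * np.eye(n), K.T @ b)
```

With alpha chosen near the noise level, the regularized reconstruction stays close to the true source, while the naive solve illustrates the instability typical of first-kind equations.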

  1. Moving force identification based on modified preconditioned conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Chan, Tommy H. T.; Nguyen, Andy

    2018-06-01

    This paper develops a modified preconditioned conjugate gradient (M-PCG) method for moving force identification (MFI) by improving the conjugate gradient (CG) and preconditioned conjugate gradient (PCG) methods with a modified Gram-Schmidt algorithm. The method aims to obtain more accurate and more efficient identification results from the responses of the bridge deck caused by passing vehicles, which are known to be sensitive to the ill-posedness of the inverse problem. A simply supported beam model with biaxial time-varying forces is used to generate numerical simulations with various analysis scenarios to assess the effectiveness of the method. Evaluation results show that the regularization matrix L and the number of iterations j are very important factors influencing the identification accuracy and noise immunity of M-PCG. Compared with the conventional SVD counterpart embedded in the time domain method (TDM) and the standard form of CG, M-PCG with a proper regularization matrix has many advantages such as better adaptability and more robustness to ill-posed problems. More importantly, it is shown that the average optimal number of iterations of M-PCG can be reduced by more than 70% compared with PCG, which makes M-PCG a preferred choice for field MFI applications.
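
For readers unfamiliar with PCG, a minimal sketch on Tikhonov-regularized normal equations follows (a plain Jacobi-preconditioned CG on a synthetic system; the paper's M-PCG additionally re-orthogonalizes search directions with a modified Gram-Schmidt step, which is not reproduced here):

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    # Standard preconditioned conjugate gradient for a symmetric
    # positive-definite system A x = b.
    x = np.zeros_like(b)
    r = b.copy()
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for j in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, j + 1
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Ill-conditioned synthetic system standing in for the discretized MFI
# problem, solved via Tikhonov-regularized normal equations.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 80)) * np.logspace(0, -2, 80)  # decaying column scales
x_true = rng.standard_normal(80)
b = A @ x_true + 1e-6 * rng.standard_normal(200)

lam = 1e-6
H = A.T @ A + lam * np.eye(80)
M_inv = np.diag(1.0 / np.diag(H))            # Jacobi preconditioner
x, iters = pcg(H, A.T @ b, M_inv)
```

The Jacobi preconditioner equalizes the widely varying column scales of the normal-equations matrix, which is what makes the iteration count small on this kind of system.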

  2. Application of L1-norm regularization to epicardial potential reconstruction based on gradient projection.

    PubMed

    Wang, Liansheng; Qin, Jing; Wong, Tien Tsin; Heng, Pheng Ann

    2011-10-07

    The epicardial potential (EP)-targeted inverse problem of electrocardiography (ECG) has been widely investigated as it is demonstrated that EPs reflect underlying myocardial activity. It is a well-known ill-posed problem as small noises in input data may yield a highly unstable solution. Traditionally, L2-norm regularization methods have been proposed to solve this ill-posed problem. But the L2-norm penalty function inherently leads to considerable smoothing of the solution, which reduces the accuracy of distinguishing abnormalities and locating diseased regions. Directly using the L1-norm penalty function, however, may greatly increase computational complexity due to its non-differentiability. We propose an L1-norm regularization method in order to reduce the computational complexity and make rapid convergence possible. Variable splitting is employed to make the L1-norm penalty function differentiable based on the observation that both positive and negative potentials exist on the epicardial surface. Then, the inverse problem of ECG is further formulated as a bound-constrained quadratic problem, which can be efficiently solved by gradient projection in an iterative manner. Extensive experiments conducted on both synthetic data and real data demonstrate that the proposed method can handle both measurement noise and geometry noise and obtain more accurate results than previous L2- and L1-norm regularization methods, especially when the noises are large.
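
The variable-splitting idea described above can be sketched as a gradient projection loop on a synthetic underdetermined system (a hedged, generic illustration, not the authors' ECG implementation; the splitting x = u − v with u, v ≥ 0 turns the l1 penalty into a linear term over a bound-constrained quadratic program):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]       # few nonzero "potentials"
b = A @ x_true + 0.01 * rng.standard_normal(60)

# Variable splitting x = u - v with u, v >= 0 makes the l1 penalty linear:
#   min ||A(u - v) - b||^2 + lam * sum(u + v)   s.t.  u >= 0, v >= 0,
# a bound-constrained quadratic program solvable by gradient projection.
lam = 0.5
step = 0.25 / np.linalg.norm(A, 2) ** 2      # safe step for the quadratic term
u = np.zeros(100)
v = np.zeros(100)
for _ in range(5000):
    g = 2.0 * A.T @ (A @ (u - v) - b)        # gradient of the data-fit term
    u = np.maximum(0.0, u - step * (g + lam))    # project onto u >= 0
    v = np.maximum(0.0, v - step * (-g + lam))   # project onto v >= 0
x = u - v
```

Because both positive and negative entries are represented by nonnegative variables, the projection is a simple elementwise clamp, which is what keeps each iteration cheap.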

  3. Study of the influence of the parameters of an experiment on the simulation of pole figures of polycrystalline materials using electron microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonova, A. O., E-mail: aoantonova@mail.ru; Savyolova, T. I.

    2016-05-15

    A two-dimensional mathematical model of a polycrystalline sample and an experiment on electron backscattering diffraction (EBSD) is considered. The measurement parameters are taken to be the scanning step and the threshold grain-boundary angle. Discrete pole figures for materials with hexagonal symmetry have been calculated based on the results of the model experiment. Discrete and smoothed (by the kernel method) pole figures of the model sample and the samples in the model experiment are compared using the χ² homogeneity criterion, an estimate of the pole figure maximum and its coordinate, a deviation of the pole figures of the model experiment from the sample in the space of L₁ measurable functions, and the RP-criterion for estimating the pole figure errors. It is shown that the problem of calculating pole figures is ill-posed and their determination with respect to measurement parameters is not reliable.

  4. Singular value decomposition: a diagnostic tool for ill-posed inverse problems in optical computed tomography

    NASA Astrophysics Data System (ADS)

    Lanen, Theo A.; Watt, David W.

    1995-10-01

    Singular value decomposition has served as a diagnostic tool in optical computed tomography through its capability to provide insight into the condition of ill-posed inverse problems. Various tomographic geometries are compared to one another through the singular value spectra of their weight matrices. The number of significant singular values in the singular value spectrum of a weight matrix is a quantitative measure of the condition of the system of linear equations defined by a tomographic geometry. The analysis involves variation of the following five parameters characterizing a tomographic geometry: 1) the spatial resolution of the reconstruction domain, 2) the number of views, 3) the number of projection rays per view, 4) the total observation angle spanned by the views, and 5) the selected basis function. Five local basis functions are considered: the square pulse, the triangle, the cubic B-spline, the Hanning window, and the Gaussian distribution. The presence of noise in the views, the coding accuracy of the weight matrix, and the accuracy of the singular value decomposition procedure itself are also assessed.
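
The diagnostic itself, counting singular values above a noise-driven threshold, is straightforward to reproduce (a generic sketch in which a random matrix with a prescribed decaying spectrum stands in for a tomographic weight matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "weight matrix": rows = projection rays, cols = basis functions,
# built with a prescribed, rapidly decaying singular value spectrum.
U, _ = np.linalg.qr(rng.standard_normal((300, 120)))
V, _ = np.linalg.qr(rng.standard_normal((120, 120)))
sv = np.logspace(0, -10, 120)
W = U @ np.diag(sv) @ V.T

s = np.linalg.svd(W, compute_uv=False)
# Singular values below (noise level / signal level) mostly amplify noise;
# the count above that threshold measures the condition of the geometry.
noise_level = 1e-4
significant = int(np.sum(s > noise_level * s[0]))
print(significant)   # number of reliably recoverable components
```

Comparing this count across candidate geometries (varying views, rays per view, observation angle, basis function) is exactly the kind of quantitative comparison the abstract describes.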

  5. Human Pose Estimation from Monocular Images: A Comprehensive Survey

    PubMed Central

    Gong, Wenjuan; Zhang, Xuena; Gonzàlez, Jordi; Sobral, Andrews; Bouwmans, Thierry; Tu, Changhe; Zahzah, El-hadi

    2016-01-01

    Human pose estimation refers to the estimation of the location of body parts and how they are connected in an image. Human pose estimation from monocular images has wide applications (e.g., image indexing). Several surveys on human pose estimation can be found in the literature, but they focus on a certain category; for example, model-based approaches or human motion analysis, etc. As far as we know, an overall review of this problem domain has yet to be provided. Furthermore, recent advancements based on deep learning have brought novel algorithms for this problem. In this paper, a comprehensive survey of human pose estimation from monocular images is carried out including milestone works and recent advancements. Based on one standard pipeline for the solution of computer vision problems, this survey splits the problem into several modules: feature extraction and description, human body models, and modeling methods. Problem modeling methods are approached based on two means of categorization in this survey. One way to categorize includes top-down and bottom-up methods, and another way includes generative and discriminative methods. Considering the fact that one direct application of human pose estimation is to provide initialization for automatic video surveillance, there are additional sections for motion-related methods in all modules: motion features, motion models, and motion-based methods. Finally, the paper also collects 26 publicly available data sets for validation and provides error measurement methods that are frequently used. PMID:27898003

  6. Visualizing the ill-posedness of the inversion of a canopy radiative transfer model: A case study for Sentinel-2

    NASA Astrophysics Data System (ADS)

    Zurita-Milla, R.; Laurent, V. C. E.; van Gijsel, J. A. E.

    2015-12-01

    Monitoring biophysical and biochemical vegetation variables in space and time is key to understand the earth system. Operational approaches using remote sensing imagery rely on the inversion of radiative transfer models, which describe the interactions between light and vegetation canopies. The inversion required to estimate vegetation variables is, however, an ill-posed problem because of variable compensation effects that can cause different combinations of soil and canopy variables to yield extremely similar spectral responses. In this contribution, we present a novel approach to visualise the ill-posed problem using self-organizing maps (SOM), which are a type of unsupervised neural network. The approach is demonstrated with simulations for Sentinel-2 data (13 bands) made with the Soil-Leaf-Canopy (SLC) radiative transfer model. A look-up table of 100,000 entries was built by randomly sampling 14 SLC model input variables between their minimum and maximum allowed values while using both a dark and a bright soil. The Sentinel-2 spectral simulations were used to train a SOM of 200 × 125 neurons. The training projected similar spectral signatures onto either the same, or contiguous, neuron(s). Tracing back the inputs that generated each spectral signature, we created a 200 × 125 map for each of the SLC variables. The lack of spatial patterns and the variability in these maps indicate ill-posed situations, where similar spectral signatures correspond to different canopy variables. For Sentinel-2, our results showed that leaf area index, crown cover and leaf chlorophyll, water and brown pigment content are less confused in the inversion than variables with noisier maps like fraction of brown canopy area, leaf dry matter content and the PROSPECT mesophyll parameter. This study supports both educational and on-going research activities on inversion algorithms and might be useful to evaluate the uncertainties of retrieved canopy biophysical and biochemical state variables.
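
The SOM training loop behind such maps can be sketched in a few lines (a minimal, generic SOM far smaller than the 200 × 125 map used in the study; the 13-band data here are random stand-ins for the Sentinel-2 simulations):

```python
import numpy as np

def train_som(data, rows, cols, iters=2000, lr0=0.5, sigma0=None, seed=0):
    # Minimal rectangular self-organizing map, one sample per step.
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((rows, cols, dim))
    grid = np.stack(np.meshgrid(np.arange(rows), np.arange(cols),
                                indexing="ij"), axis=-1)   # neuron coordinates
    sigma0 = sigma0 or max(rows, cols) / 2.0
    for t in range(iters):
        x = data[rng.integers(len(data))]
        # best-matching unit (BMU): neuron whose weights are closest to x
        d2 = np.sum((weights - x) ** 2, axis=-1)
        bmu = np.unravel_index(np.argmin(d2), d2.shape)
        # exponentially shrinking learning rate and neighborhood radius
        frac = t / iters
        lr = lr0 * np.exp(-3.0 * frac)
        sigma = sigma0 * np.exp(-3.0 * frac)
        # Gaussian neighborhood pulls nearby neurons toward the sample
        g2 = np.sum((grid - np.array(bmu)) ** 2, axis=-1)
        h = np.exp(-g2 / (2.0 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

# 13-band "spectra" clustered onto the map, as in the Sentinel-2 case
data = np.random.default_rng(1).random((500, 13))
som = train_som(data, 10, 8)
```

Tracing each training sample back to its BMU and coloring the grid by the input variable that generated the sample is then enough to produce per-variable maps of the kind described above.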

  7. Local well-posedness for dispersion generalized Benjamin-Ono equations in Sobolev spaces

    NASA Astrophysics Data System (ADS)

    Guo, Zihua

    We prove that the Cauchy problem for the dispersion generalized Benjamin-Ono equation ∂_t u + |∂_x|^{1+α} ∂_x u + u∂_x u = 0, u(x,0) = u_0(x), is locally well-posed in the Sobolev spaces H^s for s > 1−α if 0⩽α⩽1. The new ingredient is that we generalize the methods of Ionescu, Kenig and Tataru (2008) [13] to approach the problem in a less perturbative way, in spite of the ill-posedness results of Molinet, Saut and Tzvetkov (2001) [21]. Moreover, as a by-product we prove that if 0<α⩽1 the corresponding modified equation (with the nonlinearity ±u²∂_x u) is locally well-posed in H^s for s⩾1/2−α/4.

  8. Regolith thermal property inversion in the LUNAR-A heat-flow experiment

    NASA Astrophysics Data System (ADS)

    Hagermann, A.; Tanaka, S.; Yoshida, S.; Fujimura, A.; Mizutani, H.

    2001-11-01

    In 2003, two penetrators of the LUNAR-A mission of ISAS will investigate the internal structure of the Moon by conducting seismic and heat-flow experiments. Heat flow is the product of the thermal gradient ∂T/∂z and the thermal conductivity λ of the lunar regolith. For measuring the thermal conductivity (or diffusivity), each penetrator will carry five thermal property sensors consisting of small disc heaters. The thermal response T_s(t) of the heater itself to a constant known power supply of approx. 50 mW serves as the data for the subsequent interpretation. Horai et al. (1991) found a forward analytical solution to the problem of determining the thermal inertia λρc of the regolith for constant thermal properties and a simplified geometry. In the inversion, the problem of deriving the unknown thermal properties of a medium from known heat sources and temperatures is an Identification Heat Conduction Problem (IDHCP), an ill-posed inverse problem. Assuming that the thermal conductivity λ and the heat capacity ρc are linear functions of temperature (which is reasonable in most cases), one can apply a Kirchhoff transformation to linearize the heat conduction equation, which minimizes computing time. Then the error functional, i.e. the difference between the measured temperature response of the heater and the predicted temperature response, can be minimized, thus solving for the thermal diffusivity κ = λ/(ρc), which completes the set of parameters needed for a detailed description of the thermal properties of the lunar regolith. Results of model calculations will be presented, in which synthetic data and calibration data are used to invert the unknown thermal diffusivity of the medium by means of a modified Newton method. Due to the ill-posedness of the problem, the number of parameters to be solved for should be limited. As the model calculations reveal, a homogeneous regolith allows for a fast and accurate inversion.

  9. A well-posed numerical method to track isolated conformal map singularities in Hele-Shaw flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, G.; Siegel, M.; Tanveer, S.

    1995-09-01

    We present a new numerical method for calculating an evolving 2D Hele-Shaw interface when surface tension effects are neglected. In the case where the flow is directed from the less viscous fluid into the more viscous fluid, the motion of the interface is ill-posed; small deviations in the initial condition will produce significant changes in the ensuing motion. The situation is disastrous for numerical computation, as small roundoff errors can quickly lead to large inaccuracies in the computed solution. Our method of computation is most easily formulated using a conformal map from the fluid domain into a unit disk. The method relies on analytically continuing the initial data and equations of motion into the region exterior to the disk, where the evolution problem becomes well-posed. The equations are then numerically solved in the extended domain. The presence of singularities in the conformal map outside of the disk introduces specific structures along the fluid interface. Our method can explicitly track the location of isolated pole and branch point singularities, allowing us to draw connections between the development of interfacial patterns and the motion of singularities as they approach the unit disk. In particular, we are able to relate physical features such as finger shape, side-branch formation, and competition between fingers to the nature and location of the singularities. The usefulness of this method in studying the formation of topological singularities (self-intersections of the interface) is also pointed out. 47 refs., 10 figs., 1 tab.

  10. Maximum likelihood bolometric tomography for the determination of the uncertainties in the radiation emission on JET TOKAMAK

    NASA Astrophysics Data System (ADS)

    Craciunescu, Teddy; Peluso, Emmanuele; Murari, Andrea; Gelfusa, Michela; JET Contributors

    2018-05-01

    The total emission of radiation is a crucial quantity to calculate the power balances and to understand the physics of any Tokamak. Bolometric systems are the main tool to measure this important physical quantity through quite sophisticated tomographic inversion methods. On the Joint European Torus, the coverage of the bolometric diagnostic, due to the availability of basically only two projection angles, is quite limited, rendering the inversion a very ill-posed mathematical problem. A new approach, based on the maximum likelihood, has therefore been developed and implemented to alleviate one of the major weaknesses of traditional tomographic techniques: the difficulty to determine routinely the confidence intervals in the results. The method has been validated by numerical simulations with phantoms to assess the quality of the results and to optimise the configuration of the parameters for the main types of emissivity encountered experimentally. The typical levels of statistical errors, which may significantly influence the quality of the reconstructions, have been identified. The systematic tests with phantoms indicate that the errors in the reconstructions are quite limited and their effect on the total radiated power remains well below 10%. A comparison with other approaches to the inversion and to the regularization has also been performed.

  11. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    NASA Astrophysics Data System (ADS)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction are to minimize the l2-norm of the desired force. However, these traditional regularization methods such as Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem in moderate computational cost. In this paper, taking into account the sparse characteristic of impact force, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments including the small-scale or medium-scale single impact force reconstruction and the relatively large-scale consecutive impact force reconstruction are conducted on a composite wind turbine blade and a shell structure to illustrate the advantage of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust whether in the single impact force reconstruction or in the consecutive impact force reconstruction.
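
As a point of comparison for the l1 model above, a compact proximal-gradient (ISTA) solver illustrates sparse deconvolution on a synthetic impact-like signal (a hedged stand-in: the paper solves this model with a primal-dual interior point method, which is considerably more involved):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# Lower-triangular Toeplitz system: response = impulse response (*) force.
h = np.array([1.0, 0.4, 0.16, 0.064])       # short synthetic impulse response
A = sum(np.diag(np.full(n - i, h[i]), -i) for i in range(len(h)))

f_true = np.zeros(n)
f_true[[40, 90, 91, 150]] = [3.0, 2.0, 1.0, -2.5]   # sparse impact sequence
y = A @ f_true + 0.01 * rng.standard_normal(n)

# ISTA (iterative soft-thresholding): a simple proximal-gradient solver for
#   min ||A f - y||^2 + lam * ||f||_1.
lam = 0.05
L = 2.0 * np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
f = np.zeros(n)
for _ in range(3000):
    z = f - 2.0 * A.T @ (A @ f - y) / L      # gradient step on the data fit
    f = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
```

The soft-thresholding step is what enforces sparsity: entries whose evidence in the data falls below the threshold are driven exactly to zero, mirroring the sparse character of impact forces noted above.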

  12. Aerosol Retrievals from Proposed Satellite Bistatic Lidar Observations: Algorithm and Information Content

    NASA Astrophysics Data System (ADS)

    Alexandrov, M. D.; Mishchenko, M. I.

    2017-12-01

    Accurate aerosol retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. We suggest addressing this ill-posedness by flying a bistatic lidar system. Such a system would consist of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and an additional platform hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar. Thus, bistatic lidar observations will be free of the deficiencies affecting both monostatic lidar measurements (caused by their highly limited information content) and passive photopolarimetric measurements (caused by vertical integration and surface reflection). We present a preliminary aerosol retrieval algorithm for a bistatic lidar system consisting of a high spectral resolution lidar (HSRL) and an additional receiver flown in formation with it at a scattering angle of 165 degrees. This algorithm was applied to synthetic data generated using Mie-theory computations. The model/retrieval parameters in our tests were the effective radius and variance of the aerosol size distribution, the complex refractive index of the particles, and their number concentration. Both mono- and bimodal aerosol mixtures were considered. Our algorithm allows for a definitive evaluation of error propagation from measurements to retrievals using a Monte Carlo technique, which involves random distortion of the observations and statistical characterization of the resulting retrieval errors. Our tests demonstrated that supplementing a conventional monostatic HSRL with an additional receiver dramatically increases the information content of the measurements and allows for a sufficiently accurate characterization of tropospheric aerosols.
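
The Monte Carlo error-propagation procedure described, randomly distorting the observations and statistically characterizing the resulting retrieval errors, can be sketched generically (the forward model and retrieval here are toy stand-ins, not the bistatic-lidar algorithm):

```python
import numpy as np

def retrieve(obs, A):
    # Toy retrieval: regularized least squares standing in for the inversion.
    return np.linalg.solve(A.T @ A + 1e-4 * np.eye(A.shape[1]), A.T @ obs)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))             # synthetic linear forward model
x_true = np.array([0.5, 1.2, -0.3, 2.0, 0.8])
obs = A @ x_true

# Monte Carlo error propagation: randomly distort the observations,
# rerun the retrieval, and characterize the spread of the results.
noise = 0.02
trials = np.array([retrieve(obs + noise * rng.standard_normal(50), A)
                   for _ in range(1000)])
bias = trials.mean(axis=0) - x_true          # systematic retrieval error
std = trials.std(axis=0)                     # statistical retrieval error
```

The per-parameter bias and standard deviation are exactly the quantities one would tabulate to decide which aerosol parameters are well constrained by a given measurement configuration.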

  13. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    NASA Technical Reports Server (NTRS)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.

  14. TOPICAL REVIEW: The stability for the Cauchy problem for elliptic equations

    NASA Astrophysics Data System (ADS)

    Alessandrini, Giovanni; Rondi, Luca; Rosset, Edi; Vessella, Sergio

    2009-12-01

    We discuss the ill-posed Cauchy problem for elliptic equations, which is pervasive in inverse boundary value problems modeled by elliptic equations. We provide essentially optimal stability results, in wide generality and under substantially minimal assumptions. As a general scheme in our arguments, we show that all such stability results can be derived by the use of a single building brick, the three-spheres inequality. Due to the current absence of research funding from the Italian Ministry of University and Research, this work has been completed without any financial support.

  15. Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction

    NASA Technical Reports Server (NTRS)

    Oliver, A. Brandon; Amar, Adam J.

    2016-01-01

    Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of specifying boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation nuances will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of one-dimensional and multi-dimensional problems.

  16. Weight-matrix structured regularization provides optimal generalized least-squares estimate in diffuse optical tomography.

    PubMed

    Yalavarthy, Phaneendra K; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2007-06-01

    Diffuse optical tomography (DOT) involves estimation of tissue optical properties from noninvasive boundary measurements. The image reconstruction procedure is a nonlinear, ill-posed, and ill-determined problem, so overcoming these difficulties requires regularization of the solution. While the methods developed for solving the DOT image reconstruction problem have a long history, there is less direct evidence on the optimal regularization methods, or on a common theoretical framework for techniques that use least-squares (LS) minimization. A generalized least-squares (GLS) method is discussed here, which incorporates the variances and covariances among the individual data points and the optical properties in the image into a structured weight matrix. It is shown that most of the least-squares techniques applied in DOT can be considered special cases of this more generalized LS approach. The performance of three minimization techniques using the same implementation scheme is compared using test problems with increasing noise level and increasing complexity within the imaging field. Techniques that use spatial-prior information as constraints can also be incorporated into the GLS formalism. It is also illustrated that inclusion of spatial priors reduces the image error by at least a factor of 2. The improvement of GLS minimization is even more apparent when the noise level in the data is high (as high as 10%), indicating that the benefits of this approach are important for reconstruction of data in a routine setting where the data variance can be known based upon the signal-to-noise properties of the instruments.
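
In the linearized case, the structured-weight-matrix estimate reduces to weighted, regularized normal equations (a schematic sketch; the Jacobian, weights, and prior matrix below are synthetic placeholders, not a DOT forward model):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((40, 10))            # Jacobian of the forward model
x_true = rng.standard_normal(10)

# Heteroscedastic measurement noise: variance differs per data point.
var = np.linspace(0.01, 1.0, 40) ** 2
y = J @ x_true + np.sqrt(var) * rng.standard_normal(40)

W = np.diag(1.0 / var)                       # data weight matrix (inverse covariance)
lam = 1e-3
L = np.eye(10)                               # parameter weight (prior) matrix

# GLS / weighted-regularized normal equations:
#   x = (J^T W J + lam * L^T L)^{-1} J^T W y
x_gls = np.linalg.solve(J.T @ W @ J + lam * L.T @ L, J.T @ W @ y)
x_ols = np.linalg.lstsq(J, y, rcond=None)[0]     # unweighted LS for comparison
```

Down-weighting the noisy measurements is what drives the accuracy gain over ordinary least squares, and spatial priors enter the same formalism by replacing the identity parameter-weight matrix L with a structured one.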

  17. Self-reported medical, medication and laboratory error in eight countries: risk factors for chronically ill adults.

    PubMed

    Scobie, Andrea

    2011-04-01

    To identify risk factors associated with self-reported medical, medication and laboratory error in eight countries. The Commonwealth Fund's 2008 International Health Policy Survey of chronically ill patients in eight countries. None. A multi-country telephone survey was conducted between 3 March and 30 May 2008 with patients in Australia, Canada, France, Germany, the Netherlands, New Zealand, the UK and the USA who self-reported being chronically ill. A bivariate analysis was performed to determine significant explanatory variables of medical, medication and laboratory error (P < 0.01) for inclusion in a binary logistic regression model. The final regression model included eight risk factors for self-reported error: age 65 and under, education level of some college or less, presence of two or more chronic conditions, high prescription drug use (four+ drugs), four or more doctors seen within 2 years, a care coordination problem, poor doctor-patient communication and use of an emergency department. Risk factors with the greatest ability to predict experiencing an error encompassed issues with coordination of care and provider knowledge of a patient's medical history. The identification of these risk factors could help policymakers and organizations to proactively reduce the likelihood of error through greater examination of system- and organization-level practices.

  18. Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking

    PubMed Central

    Tang, Shengjun; Chen, Wu; Wang, Weixi; Li, Xiaoming; Li, Wenbin; Huang, Zhengdong; Hu, Han; Guo, Renzhong

    2018-01-01

    Traditionally, visual-based RGB-D SLAM systems only use correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitation on measurement distance and view angle, such systems adopt only short-range constraints which may introduce larger drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploring visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with 3D correspondences, 2D-3D correspondences and 2D correspondences identified from frame pairs. The initial 3D location of the correspondence is determined in two ways: from the depth image, and by triangulation using the initial poses. The model iteratively improves the camera poses and decreases drift error during long-distance RGB-D tracking. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases the drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets, including both 2D and 3D features. PMID:29723974

  19. Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking.

    PubMed

    Tang, Shengjun; Chen, Wu; Wang, Weixi; Li, Xiaoming; Darwish, Walid; Li, Wenbin; Huang, Zhengdong; Hu, Han; Guo, Renzhong

    2018-05-01

    Traditionally, visual-based RGB-D SLAM systems only use correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitation on measurement distance and view angle, such systems adopt only short-range constraints which may introduce larger drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploring visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with 3D correspondences, 2D-3D correspondences and 2D correspondences identified from frame pairs. The initial 3D location of the correspondence is determined in two ways: from the depth image, and by triangulation using the initial poses. The model iteratively improves the camera poses and decreases drift error during long-distance RGB-D tracking. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases the drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets, including both 2D and 3D features.

  20. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for the two-dimensional electric field distribution is developed to deal with the nonlinear dependence of the electric field distribution on its boundary condition, and with the divergence caused by errors from the ill-conditioned sensitivity matrix and by noise from electrode modelling and instruments. This solution is based on a normalized linear approximation method, in which the change in mutual impedance is derived from the sensitivity theorem and a method of error-vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.
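
    The regularizing effect of limiting the number of conjugate gradient iterations can be demonstrated on a small synthetic ill-conditioned system. The sketch below is a generic CGLS-style iteration on the normal equations, not the paper's EIT solver; the matrix, noise level, and iteration counts are invented for the illustration.

```python
import numpy as np

def cg_normal_equations(A, b, n_iters):
    """Conjugate gradients on A^T A x = A^T b, stopped early.
    Limiting the iteration count acts as regularization: early
    iterations fit the well-conditioned directions, later ones
    amplify noise in the ill-conditioned directions."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Ill-conditioned symmetric system with noisy data.
rng = np.random.default_rng(1)
n = 20
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)              # rapidly decaying singular values
A = U @ np.diag(s) @ U.T
x_true = np.ones(n)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

x_early = cg_normal_equations(A, b, n_iters=5)    # truncated: regularized
x_late = cg_normal_equations(A, b, n_iters=100)   # converged: noise-dominated
```

    The truncated iterate is far closer to the true solution than the fully converged one, which is the semiconvergence behavior the abstract exploits.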

  1. A New Understanding for the Rain Rate retrieval of Attenuating Radars Measurement

    NASA Astrophysics Data System (ADS)

    Koner, P.; Battaglia, A.; Simmer, C.

    2009-04-01

    The retrieval of rain rate from attenuated radar (e.g. the Cloud Profiling Radar on board CloudSat, in orbit since June 2006) is a challenging problem. L'Ecuyer and Stephens [1] underlined this difficulty (for rain rates larger than 1.5 mm/h) and suggested the need for additional information (such as path-integrated attenuation (PIA) derived from surface reference techniques, or precipitation water path estimated from a co-located passive microwave radiometer) to constrain the retrieval. Based on optimal estimation theory, it is generally argued that in the case of appreciable attenuation there is no solution without constraining the problem, because there is not enough information content to solve it. However, when the problem is constrained by the additional measurement of PIA, there is a reasonable solution. This raises the obvious question: is all of the information contained in this additional measurement? This also contradicts information theory, because one measurement can introduce only one degree of freedom into the retrieval. Why is one degree of freedom so important in the above problem? This question cannot be explained using the estimation and information theories of OEM. On the other hand, Koner and Drummond [2] argued that the OEM is basically a regularization method, in which the a priori covariance is used as a stabilizer and the regularization strength is determined by the choices of the a priori and error covariance matrices. The regularization is required to reduce the condition number of the Jacobian, which drives the noise injection from the measurement and inversion spaces into the state space in an ill-posed inversion. In this work, the above question is discussed on the basis of regularization theory, error mitigation and eigenvalue mathematics. References: 1. L'Ecuyer TS and Stephens G. An estimation-based precipitation retrieval algorithm for attenuating radar. J. Appl. Met., 2002, 41, 272-85. 2. Koner PK, Drummond JR. A comparison of regularization techniques for atmospheric trace gases retrievals. JQSRT 2008; 109:514-26.

  2. Assessment of thyroid function in dogs with low plasma thyroxine concentration.

    PubMed

    Diaz Espineira, M M; Mol, J A; Peeters, M E; Pollak, Y W E A; Iversen, L; van Dijk, J E; Rijnberk, A; Kooistra, H S

    2007-01-01

    Differentiation between hypothyroidism and nonthyroidal illness in dogs poses specific problems, because plasma total thyroxine (TT4) concentrations are often low in nonthyroidal illness, and plasma thyroid stimulating hormone (TSH) concentrations are frequently not high in primary hypothyroidism. The serum concentrations of the common basal biochemical variables (TT4, free T4 [fT4], and TSH) overlap between dogs with hypothyroidism and dogs with nonthyroidal illness, but, with stimulation tests and quantitative measurement of thyroidal 99mTcO4(-) uptake, differentiation will be possible. In 30 dogs with low plasma TT4 concentration, the final diagnosis was based upon histopathologic examination of thyroid tissue obtained by biopsy. Fourteen dogs had primary hypothyroidism, and 13 dogs had nonthyroidal illness. Two dogs had secondary hypothyroidism, and 1 dog had metastatic thyroid cancer. The diagnostic value was assessed for (1) plasma concentrations of TT4, fT4, and TSH; (2) TSH-stimulation test; (3) plasma TSH concentration after stimulation with TSH-releasing hormone (TRH); (4) occurrence of thyroglobulin antibodies (TgAbs); and (5) thyroidal 99mTcO4(-) uptake. Plasma concentrations of TT4, fT4, TSH, and the hormone pairs TT4/TSH and fT4/TSH overlapped in the 2 groups, whereas, with TgAbs, there was 1 false-negative result. Results of the TSH- and TRH-stimulation tests did not meet earlier established diagnostic criteria, overlapped, or both. With a quantitative measurement of thyroidal 99mTcO4(-) uptake, there was no overlap between dogs with primary hypothyroidism and dogs with nonthyroidal illness. The results of this study confirm earlier observations that, in dogs, accurate biochemical diagnosis of primary hypothyroidism poses specific problems. Previous studies, in which the TSH-stimulation test was used as the "gold standard" for the diagnosis of hypothyroidism, may have suffered from misclassification. Quantitative measurement of thyroidal 99mTcO4(-) uptake has the highest discriminatory power with regard to the differentiation between primary hypothyroidism and nonthyroidal illness.

  3. History matching by spline approximation and regularization in single-phase areal reservoirs

    NASA Technical Reports Server (NTRS)

    Lee, T. Y.; Kravaris, C.; Seinfeld, J.

    1986-01-01

    An automatic history matching algorithm is developed based on bi-cubic spline approximations of permeability and porosity distributions and on the theory of regularization to estimate permeability or porosity in a single-phase, two-dimensional areal reservoir from well pressure data. The regularization feature of the algorithm is used to convert the ill-posed history matching problem into a well-posed problem. The algorithm employs the conjugate gradient method as its core minimization routine. A number of numerical experiments are carried out to evaluate the performance of the algorithm. Comparisons with conventional (non-regularized) automatic history matching algorithms indicate the superiority of the new algorithm with respect to the parameter estimates obtained. A quasi-optimal regularization parameter is determined without requiring a priori information on the statistical properties of the observations.

  4. The Analysis and Construction of Perfectly Matched Layers for the Linearized Euler Equations

    NASA Technical Reports Server (NTRS)

    Hesthaven, J. S.

    1997-01-01

    We present a detailed analysis of a recently proposed perfectly matched layer (PML) method for the absorption of acoustic waves. The split set of equations is shown to be only weakly well-posed, and ill-posed under small low-order perturbations. This analysis provides the explanation for the stability problems associated with the split field formulation and illustrates why applying a filter has a stabilizing effect. Utilizing recent results obtained within the context of electromagnetics, we develop strongly well-posed absorbing layers for the linearized Euler equations. The schemes are shown to be perfectly absorbing independent of frequency and angle of incidence of the wave in the case of a non-convecting mean flow. In the general case of a convecting mean flow, a number of techniques are combined to obtain absorbing layers exhibiting PML-like behavior. The efficacy of the proposed absorbing layers is illustrated through computation of benchmark problems in aero-acoustics.

  5. Convex Relaxation For Hard Problem In Data Mining And Sensor Localization

    DTIC Science & Technology

    2017-04-13

    Drusvyatskiy, S.A. Vavasis, and H. Wolkowicz. Extreme point inequalities and geometry of the rank sparsity ball. Math. Program., 152(1-2, Ser. A...521–544, 2015. [3] M-H. Lin and H. Wolkowicz. Hiroshima's theorem and matrix norm inequalities. Acta Sci. Math. (Szeged), 81(1-2):45–53, 2015. [4] D...9867-4. [8] D. Drusvyatskiy, G. Li, and H. Wolkowicz. Alternating projections for ill-posed semidefinite feasibility problems. Math. Program., 2016

  6. A Comparison of the Pencil-of-Function Method with Prony’s Method, Wiener Filters and Other Identification Techniques,

    DTIC Science & Technology

    1977-12-01

    exponentials encountered are complex and they are approximately at harmonic frequencies. Moreover, the real parts of the complex exponentials are much... functions as a basis for expanding the current distribution on an antenna by the method of moments results in a regularized ill-posed problem with respect to the current distribution on the antenna structure. However, the problem is not regularized with respect to charge because the charge distribution

  7. A prefiltering version of the Kalman filter with new numerical integration formulas for Riccati equations

    NASA Technical Reports Server (NTRS)

    Womble, M. E.; Potter, J. E.

    1975-01-01

    A prefiltering version of the Kalman filter is derived for both discrete and continuous measurements. The derivation consists of determining a single discrete measurement that is equivalent to either a time segment of continuous measurements or a set of discrete measurements. This prefiltering version of the Kalman filter easily handles numerical problems associated with rapid transients and ill-conditioned Riccati matrices. Therefore, the derived technique for extrapolating the Riccati matrix from one time to the next constitutes a new set of integration formulas which alleviate ill-conditioning problems associated with continuous Riccati equations. Furthermore, since a time segment of continuous measurements is converted into a single discrete measurement, Potter's square root formulas can be used to update the state estimate and its error covariance matrix. Therefore, if having the state estimate and its error covariance matrix at discrete times is acceptable, the prefilter extends square root filtering with all its advantages, to continuous measurement problems.
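
    The core of the prefiltering idea — replacing a batch of measurements by a single equivalent discrete measurement — can be illustrated in information form for a scalar state. This sketch is a deliberate simplification (direct measurements of a static scalar state), not the paper's derivation for time segments of continuous measurements; the numbers are invented for the example.

```python
import numpy as np

def fuse_measurements(zs, Rs):
    """Collapse scalar measurements z_i = x + v_i, v_i ~ N(0, R_i),
    into one equivalent measurement (z_eq, R_eq) in information form:
    the information adds, and z_eq is the information-weighted mean."""
    info = sum(1.0 / R for R in Rs)          # total information
    R_eq = 1.0 / info
    z_eq = R_eq * sum(z / R for z, R in zip(zs, Rs))
    return z_eq, R_eq

# Three noisy readings of the same scalar state.
z_eq, R_eq = fuse_measurements([1.0, 1.2, 0.8], [0.1, 0.2, 0.1])
```

    Updating the filter once with (z_eq, R_eq) yields the same posterior as processing the three measurements one at a time, which is what lets a square-root update such as Potter's be applied to the single equivalent measurement.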

  8. Application of Turchin's method of statistical regularization

    NASA Astrophysics Data System (ADS)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition, this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
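
    The role of regularization in such deconvolution problems can be sketched with a quadratic (Gaussian-prior) penalty, the simplest Bayesian-flavored stabilizer. This is not Turchin's full method; the apparatus function, noise level, and regularization weight below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 64
# Convolution (apparatus-function) matrix: a Gaussian blur kernel.
t = np.arange(n)
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 2.0) ** 2)
K /= K.sum(axis=1, keepdims=True)

signal = np.zeros(n)
signal[20:30] = 1.0                      # true signal: a box pulse
data = K @ signal + 1e-3 * rng.standard_normal(n)

# Naive inversion amplifies the noise enormously (ill-posedness) ...
x_naive = np.linalg.solve(K, data)

# ... while a quadratic penalty (Gaussian prior on the signal) tames it.
lam = 1e-2
x_reg = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ data)
```

    Even with 0.1% noise the naive inverse is useless, while the regularized estimate recovers a smoothed version of the pulse; Turchin's method chooses the strength of such a prior statistically rather than by hand.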

  9. Transition from the labor market: older workers and retirement.

    PubMed

    Peterson, Chris L; Murphy, Greg

    2010-01-01

    The new millennium has seen the projected growth of older populations as a source of many problems, not the least of which is how to sustain this increasingly aging population. Some decades ago, early retirement from work posed few problems for governments, but most nations are now trying to ensure that workers remain in the workforce longer. In this context, the role played by older employees can be affected by at least two factors: their productivity (or perceived productivity) and their acceptance by younger workers and management. If the goal of maintaining employees into older age is to be achieved and sustained, opportunities must be provided, for example, for more flexible work arrangements and more possibilities to pursue bridge employment (work after formal retirement). The retirement experience varies, depending on people's circumstances. Some people, for example, have retirement forced upon them by illness or injury at work, by ill-health (such as chronic illnesses), or by downsizing and associated redundancies. This article focuses on the problems and opportunities associated with working to an older age or leaving the workforce early, particularly due to factors beyond one's control.

  10. Potential challenges facing distributed leadership in health care: evidence from the UK National Health Service.

    PubMed

    Martin, Graeme; Beech, Nic; MacIntosh, Robert; Bushfield, Stacey

    2015-01-01

    The discourse of leaderism in health care has been a subject of much academic and practical debate. Recently, distributed leadership (DL) has been adopted as a key strand of policy in the UK National Health Service (NHS). However, there is some confusion over the meaning of DL and uncertainty over its application to clinical and non-clinical staff. This article examines the potential for DL in the NHS by drawing on qualitative data from three co-located health-care organisations that embraced DL as part of their organisational strategy. Recent theorising positions DL as a hybrid model combining focused and dispersed leadership; however, our data raise important challenges for policymakers and senior managers who are implementing such a leadership policy. We show that there are three distinct forms of disconnect and that these pose a significant problem for DL. However, we argue that instead of these disconnects posing a significant problem for the discourse of leaderism, they enable a fantasy of leadership that draws on and supports the discourse. © 2014 The Authors. Sociology of Health & Illness © 2014 Foundation for the Sociology of Health & Illness/John Wiley & Sons Ltd.

  11. Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.

    PubMed

    Lee, Donghwa; Myung, Hyun

    2014-07-11

    In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. Low dynamic environments refer to situations in which the positions of objects change over long intervals. Therefore, in low dynamic environments, robots have difficulty recognizing the repositioning of objects, unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environment then cause groups of false loop closures when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, which represent robot poses, are grouped according to grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system.
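
    The constraint-pruning step — discarding loop closures whose error is inconsistent with the current pose estimates and noise covariances — can be sketched on a toy 1-D pose graph. The threshold, data, and function names below are illustrative assumptions, not the paper's grouping rules.

```python
import numpy as np

def prune_constraints(poses, constraints, chi2_thresh=9.0):
    """Keep loop-closure constraints whose Mahalanobis error,
    given the current pose estimates, is below a chi-square threshold.
    Each constraint is (i, j, measured_offset, variance)."""
    kept = []
    for (i, j, meas, var) in constraints:
        residual = (poses[j] - poses[i]) - meas
        if residual**2 / var < chi2_thresh:
            kept.append((i, j, meas, var))
    return kept

# Toy 1-D poses; the second loop closure is a false match caused by a
# moved object, so its residual is far larger than its noise allows.
poses = np.array([0.0, 1.0, 2.0, 3.0])
constraints = [
    (0, 3, 3.05, 0.01),   # consistent closure
    (1, 3, 0.5, 0.01),    # false closure: expected offset is ~2.0
]
good = prune_constraints(poses, constraints)
```

    After pruning, only the consistent constraint survives and the graph can be reoptimized without being dragged toward the false match.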

  12. Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology

    NASA Astrophysics Data System (ADS)

    Barker, T.; Schaeffer, D. G.; Shearer, M.; Gray, J. M. N. T.

    2017-05-01

    Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities.

  13. Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology

    PubMed Central

    Schaeffer, D. G.; Shearer, M.; Gray, J. M. N. T.

    2017-01-01

    Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities. PMID:28588402

  14. Well-posed continuum equations for granular flow with compressibility and μ(I)-rheology.

    PubMed

    Barker, T; Schaeffer, D G; Shearer, M; Gray, J M N T

    2017-05-01

    Continuum modelling of granular flow has been plagued with the issue of ill-posed dynamic equations for a long time. Equations for incompressible, two-dimensional flow based on the Coulomb friction law are ill-posed regardless of the deformation, whereas the rate-dependent μ(I)-rheology is ill-posed when the non-dimensional inertial number I is too high or too low. Here, incorporating ideas from critical-state soil mechanics, we derive conditions for well-posedness of partial differential equations that combine compressibility with I-dependent rheology. When the I-dependence comes from a specific friction coefficient μ(I), our results show that, with compressibility, the equations are well-posed for all deformation rates provided that μ(I) satisfies certain minimal, physically natural, inequalities.

  15. Adaptive relative pose control of spacecraft with model couplings and uncertainties

    NASA Astrophysics Data System (ADS)

    Sun, Liang; Zheng, Zewei

    2018-02-01

    The spacecraft pose tracking control problem for an uncertain pursuer approaching a space target is investigated in this paper. After modeling the nonlinearly coupled dynamics of the relative translational and rotational motions between the two spacecraft, position tracking and attitude synchronization controllers are developed independently using a robust adaptive control approach. The unknown kinematic couplings, parametric uncertainties, and bounded external disturbances are handled with adaptive update laws. It is proved via the Lyapunov method that the pose tracking errors converge to zero asymptotically. Spacecraft close-range rendezvous and proximity operations are introduced as an example to validate the effectiveness of the proposed control approach.

  16. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and of prior model information obtained from sonic logs and geological data, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but is also strongly robust to noise.

  17. Regularization strategies for hyperplane classifiers: application to cancer classification with gene expression data.

    PubMed

    Andries, Erik; Hagstrom, Thomas; Atlas, Susan R; Willman, Cheryl

    2007-02-01

    Linear discrimination, from the point of view of numerical linear algebra, can be treated as solving an ill-posed system of linear equations. In order to generate a solution that is robust in the presence of noise, these problems require regularization. Here, we examine the ill-posedness involved in the linear discrimination of cancer gene expression data with respect to outcome and tumor subclasses. We show that a filter factor representation, based upon Singular Value Decomposition, yields insight into the numerical ill-posedness of the hyperplane-based separation when applied to gene expression data. We also show that this representation yields useful diagnostic tools for guiding the selection of classifier parameters, thus leading to improved performance.
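
    The filter-factor representation mentioned above can be sketched for Tikhonov (ridge) regularization: the SVD expresses the regularized solution as the unregularized one with each singular direction damped by a factor f_i = s_i²/(s_i² + λ). The code below is a generic illustration with invented data, not the paper's classifier.

```python
import numpy as np

def filter_factors(A, lam):
    """Tikhonov filter factors f_i = s_i^2 / (s_i^2 + lam): values near 1
    keep a singular direction, values near 0 suppress it."""
    s = np.linalg.svd(A, compute_uv=False)
    return s**2 / (s**2 + lam)

def ridge_via_svd(A, b, lam):
    """Regularized solution x = V diag(f_i / s_i) U^T b, equivalent to
    solving (A^T A + lam I) x = A^T b."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam)
    return Vt.T @ ((f / s) * (U.T @ b))

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 8))   # stand-in for a data matrix
b = rng.standard_normal(30)        # stand-in for class labels / targets
x = ridge_via_svd(A, b, lam=1.0)
```

    Directions with s_i² ≫ λ pass through almost unchanged (f_i ≈ 1), while noise-dominated directions with small s_i are suppressed (f_i ≈ 0); inspecting the factors alongside the singular spectrum is the diagnostic view the abstract describes.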

  18. Zernike ultrasonic tomography for fluid velocity imaging based on pipeline intrusive time-of-flight measurements.

    PubMed

    Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy

    2014-11-01

    In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Taking intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) as input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to the ill-posed problem of finding the coefficients of the series from the acquired ultrasonic measurements. Thereupon, this problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity by employing SVD-based filtering to select the Zernike polynomials included in the series. The first approach, Tikhonov regularization without filtering, proves to be the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show a relative residual norm and an error in flow estimation of ~0.3% and ~1.6%, respectively, for the less turbulent flow, and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by close matching of the derived tomograms with a physical flow model.

  19. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    NASA Astrophysics Data System (ADS)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, in particular inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The system TSS numerically simulates the source of a tsunami and/or earthquake and includes the possibility to solve both the direct and the inverse problem. It becomes possible to involve advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve this problem and SVD-analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to realize a continuous («no frost») pipeline of direct and inverse problems: solving the direct problem, visualization and comparison with observed data, and solving the inverse problem (correction of the model parameters). The main objective of further work is the creation of an operational emergency workstation tool that could be used by an emergency duty officer in real time.

  20. Parallelized Bayesian inversion for three-dimensional dental X-ray imaging.

    PubMed

    Kolehmainen, Ville; Vanne, Antti; Siltanen, Samuli; Järvenpää, Seppo; Kaipio, Jari P; Lassas, Matti; Kalke, Martti

    2006-02-01

    Diagnostic and operational tasks based on dental radiology often require three-dimensional (3-D) information that is not available in a single X-ray projection image. Comprehensive 3-D information about tissues can be obtained by computerized tomography (CT) imaging. However, in dental imaging a conventional CT scan may not be available or practical because of high radiation dose, low resolution or the cost of the CT scanner equipment. In this paper, we consider a novel type of 3-D imaging modality for dental radiology. We consider situations in which projection images of the teeth are taken from a few sparsely distributed projection directions using the dentist's regular (digital) X-ray equipment and the 3-D X-ray attenuation function is reconstructed. A complication in these experiments is that the reconstruction of the 3-D structure based on a few projection images becomes an ill-posed inverse problem. Bayesian inversion is a well-suited framework for reconstruction from such incomplete data. In Bayesian inversion, the ill-posed reconstruction problem is formulated in a well-posed probabilistic form in which a priori information is used to compensate for the incomplete information of the projection data. In this paper we propose a Bayesian method for 3-D reconstruction in dental radiology. The method is partially based on Kolehmainen et al. 2003. The prior model for dental structures consists of a weighted ℓ1- and total variation (TV)-prior together with a positivity prior. The inverse problem is stated as finding the maximum a posteriori (MAP) estimate. To make the 3-D reconstruction computationally feasible, a parallelized version of an optimization algorithm is implemented for a Beowulf cluster computer. The method is tested with projection data from dental specimens and patient data. Tomosynthetic reconstructions are given as a reference for the proposed method.

  1. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, an approximate problem posed on finite dimensional spaces, and a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from it are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.

  2. Load identification approach based on basis pursuit denoising algorithm

    NASA Astrophysics Data System (ADS)

    Ginsberg, D.; Ruby, M.; Fritzen, C. P.

    2015-07-01

The information about external loads is of great interest in many fields of structural analysis, such as structural health monitoring (SHM) systems or assessment of damage after extreme events. However, in most cases it is not possible to measure the external forces directly, so they need to be reconstructed. Load reconstruction refers to the problem of estimating an input to a dynamic system when the system output and the impulse response functions are known. Generally, this leads to a so-called ill-posed inverse problem, which involves solving an underdetermined linear system of equations. For most practical applications it can be assumed that the applied loads are not arbitrarily distributed in time and space; at least some specific characteristics of the external excitation are known a priori. In this contribution, this knowledge was used to develop a more suitable force reconstruction method, which allows identifying the time history and the force location simultaneously while employing significantly fewer sensors than other reconstruction approaches. The properties of the external force are used to transform the ill-posed problem into a sparse recovery task. The sparse solution is acquired by solving a minimization problem known as basis pursuit denoising (BPDN). The possibility of reconstructing loads from noisy structural measurement signals is demonstrated for two frequently occurring loading conditions, harmonic excitation and impact events, separately and combined. First a simulation study of a simple plate structure is carried out, and thereafter an experimental investigation of a real beam is performed.
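The BPDN formulation above can be illustrated with a minimal numpy sketch (not the authors' implementation; the matrix, dimensions and penalty weight below are illustrative): a sparse "force" vector is recovered from an underdetermined, noisy linear system by the iterative soft-thresholding algorithm (ISTA), a simple proximal-gradient solver for min_x 0.5*||Ax - b||^2 + lam*||x||_1.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 40, 100, 3                        # 40 measurements, 100 unknowns, 3 active loads
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(m)   # noisy measurement vector

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, with L the gradient Lipschitz constant
x = np.zeros(n)
for _ in range(3000):
    g = x - step * A.T @ (A @ x - b)        # gradient step on the quadratic term
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-thresholding (l1 prox)

support = np.flatnonzero(np.abs(x) > 0.1)   # indices of recovered active loads
```

Even though the system has more unknowns than equations, the l1 penalty singles out the sparse solution, so the three active entries are recovered at their correct positions.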

  3. Hand-Eye Calibration in Visually-Guided Robot Grinding.

    PubMed

    Li, Wen-Long; Xie, He; Zhang, Gang; Yan, Si-Jie; Yin, Zhou-Ping

    2016-11-01

    Visually-guided robot grinding is a novel and promising automation technique for blade manufacturing. One common problem encountered in robot grinding is hand-eye calibration, which establishes the pose relationship between the end effector (hand) and the scanning sensor (eye). This paper proposes a new calibration approach for robot belt grinding. The main contribution of this paper is its consideration of both joint parameter errors and pose parameter errors in a hand-eye calibration equation. The objective function of the hand-eye calibration is built and solved, from which 30 compensated values (corresponding to 24 joint parameters and six pose parameters) are easily calculated in a closed solution. The proposed approach is economic and simple because only a criterion sphere is used to calculate the calibration parameters, avoiding the need for an expensive and complicated tracking process using a laser tracker. The effectiveness of this method is verified using a calibration experiment and a blade grinding experiment. The code used in this approach is attached in the Appendix.

  4. Wavelet-sparsity based regularization over time in the inverse problem of electrocardiography.

    PubMed

    Cluitmans, Matthijs J M; Karel, Joël M H; Bonizzi, Pietro; Volders, Paul G A; Westra, Ronald L; Peeters, Ralf L M

    2013-01-01

Noninvasive, detailed assessment of electrical cardiac activity at the level of the heart surface has the potential to revolutionize diagnostics and therapy of cardiac pathologies. Due to the requirement of noninvasiveness, body-surface potentials are measured and have to be projected back to the heart surface, yielding an ill-posed inverse problem. Ill-posedness implies that solutions to this problem are non-unique, resulting in a problem of choice. In the current paper, it is proposed to restrict this choice by requiring that the time series of reconstructed heart-surface potentials be sparse in the wavelet domain. A local search technique is introduced that pursues a sparse solution, using an orthogonal wavelet transform. Epicardial potentials reconstructed with this method are compared to those from existing methods, and validated against actual intracardiac recordings. The new technique improves the reconstructions in terms of smoothness and recovers physiologically meaningful details. Additionally, reconstruction of activation timing seems to be improved when sparsity of the reconstructed signals is pursued in the wavelet domain.
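The idea of enforcing sparsity in the wavelet domain can be sketched with a one-level Haar transform (a minimal illustration, not the paper's local search method; the signal and threshold are hypothetical): a piecewise-smooth signal has few large detail coefficients, so hard-thresholding the small ones suppresses noise while preserving structure.

```python
import numpy as np

# One level of an orthogonal Haar transform.
def haar(x):
    a = (x[0::2] + x[1::2]) / np.sqrt(2)    # approximation (coarse) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)    # detail coefficients
    return a, d

def ihaar(a, d):
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 256)
clean = np.where(t < 0.5, 1.0, -1.0)        # piecewise-constant test signal
noisy = clean + 0.1 * rng.standard_normal(t.size)

a, d = haar(noisy)
d[np.abs(d) < 0.3] = 0.0                    # hard threshold: enforce sparse details
denoised = ihaar(a, d)
```

Because the clean signal is sparse in the Haar basis while white noise spreads evenly over all coefficients, zeroing the small detail coefficients reduces the reconstruction error.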

  5. Validating an artificial intelligence human proximity operations system with test cases

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    2013-05-01

An artificial intelligence-controlled robot (AICR) operating in close proximity to humans poses a risk to these humans. Validating the performance of an AICR is an ill-posed problem, due to the complexity introduced by erratic (non-computer) actors. In order to prove the AICR's usefulness, test cases must be generated to simulate the actions of these actors. This paper discusses AICR performance validation in the context of a common human activity, moving through a crowded corridor, using test cases created by an AI use case producer. This test is a two-dimensional simplification relevant to autonomous UAV navigation in the national airspace.

  6. Tomographic iterative reconstruction of a passive scalar in a 3D turbulent flow

    NASA Astrophysics Data System (ADS)

    Pisso, Ignacio; Kylling, Arve; Cassiani, Massimo; Solveig Dinger, Anne; Stebel, Kerstin; Schmidbauer, Norbert; Stohl, Andreas

    2017-04-01

Turbulence in the stable planetary boundary layers often encountered at high latitudes influences the exchange fluxes of heat, momentum, water vapor and greenhouse gases between the Earth's surface and the atmosphere. In climate and meteorological models, such effects of turbulence need to be parameterized, ultimately based on experimental data. A novel experimental approach is being developed within the COMTESSA project in order to study turbulence statistics at high resolution. Using controlled tracer releases, high-resolution camera images and estimates of the background radiation, different tomographic algorithms can be applied in order to obtain time series of 3-D representations of the scalar dispersion. In this preliminary work, using synthetic data, we investigate different reconstruction algorithms with emphasis on algebraic methods. We study the dependence of the reconstruction quality on the discretization resolution and the geometry of the experimental device in both the 2-D and 3-D cases. We assess the computational aspects of the iterative algorithms, focusing on the phenomenon of semi-convergence, and apply a variety of stopping rules. We discuss different strategies for error reduction and regularization of the ill-posed problem.

  7. Successive Over-Relaxation Technique for High-Performance Blind Image Deconvolution

    DTIC Science & Technology

    2015-06-08

Keywords: deconvolution, space surveillance, Gauss-Seidel iteration. …sensible approximate solutions to the ill-posed nonlinear inverse problem. These solutions are addressed as fixed points of an iteration which consists of …alternating approximations (AA) for the object and for the PSF, performed with a prescribed number of inner iterative descents from trivial (zero

  8. Prediction, Error, and Adaptation during Online Sentence Comprehension

    ERIC Educational Resources Information Center

    Fine, Alex Brabham

    2013-01-01

    A fundamental challenge for human cognition is perceiving and acting in a world in which the statistics that characterize available sensory data are non-stationary. This thesis focuses on this problem specifically in the domain of sentence comprehension, where linguistic variability poses computational challenges to the processes underlying…

  9. White light-informed optical properties improve ultrasound-guided fluorescence tomography of photoactive protoporphyrin IX

    NASA Astrophysics Data System (ADS)

    Flynn, Brendan P.; DSouza, Alisha V.; Kanick, Stephen C.; Davis, Scott C.; Pogue, Brian W.

    2013-04-01

Subsurface fluorescence imaging is desirable for medical applications, including protoporphyrin-IX (PpIX)-based skin tumor diagnosis, surgical guidance, and dosimetry in photodynamic therapy. While tissue optical properties and heterogeneities make true subsurface fluorescence mapping an ill-posed problem, ultrasound-guided fluorescence tomography (USFT) provides regional fluorescence mapping. Here USFT is implemented with spectroscopic decoupling of fluorescence signals (auto-fluorescence, PpIX, photoproducts) and bulk optical properties determined by white light spectroscopy. Segmented US images provide a priori spatial information for fluorescence reconstruction using region-based, diffuse FT. The method was tested in simulations, homogeneous and inclusion tissue phantoms, and an injected-inclusion animal model. Reconstructed fluorescence yield was linear with PpIX concentration, including the lowest concentration used, 0.025 μg/ml. The white light spectroscopy-informed optical properties improved fluorescence reconstruction accuracy compared to the use of fixed, literature-based optical properties, reducing reconstruction error and the standard deviation of the reconstructed fluorescence by factors of 8.9 and 2.0, respectively. Recovered contrast-to-background error was 25% and 74% for inclusion phantoms without and with a 2-mm skin-like layer, respectively. Preliminary mouse-model imaging demonstrated system feasibility for subsurface fluorescence measurement in vivo. These data suggest that this implementation of USFT is capable of regional PpIX mapping in human skin tumors during photodynamic therapy, to be used in dosimetric evaluations.

  10. Determination of the aerosol size distribution by analytic inversion of the extinction spectrum in the complex anomalous diffraction approximation.

    PubMed

    Franssens, G; De Maziére, M; Fonteyn, D

    2000-08-20

    A new derivation is presented for the analytical inversion of aerosol spectral extinction data to size distributions. It is based on the complex analytic extension of the anomalous diffraction approximation (ADA). We derive inverse formulas that are applicable to homogeneous nonabsorbing and absorbing spherical particles. Our method simplifies, generalizes, and unifies a number of results obtained previously in the literature. In particular, we clarify the connection between the ADA transform and the Fourier and Laplace transforms. Also, the effect of the particle refractive-index dispersion on the inversion is examined. It is shown that, when Lorentz's model is used for this dispersion, the continuous ADA inverse transform is mathematically well posed, whereas with a constant refractive index it is ill posed. Further, a condition is given, in terms of Lorentz parameters, for which the continuous inverse operator does not amplify the error.

  11. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed Central

    Wouda, Frank J.; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H.

    2016-01-01

Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible, with an average joint position error of approximately 7 cm and an average joint angle error of 7°. Additionally, the effects of the magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance under such disturbances. PMID:27983676

  12. Estimation of Full-Body Poses Using Only Five Inertial Sensors: An Eager or Lazy Learning Approach?

    PubMed

    Wouda, Frank J; Giuberti, Matteo; Bellusci, Giovanni; Veltink, Peter H

    2016-12-15

Human movement analysis has become easier with the wide availability of motion capture systems. Inertial sensing has made it possible to capture human motion without external infrastructure, therefore allowing measurements in any environment. As high-quality motion capture data is available in large quantities, this creates possibilities to further simplify hardware setups, by use of data-driven methods to decrease the number of body-worn sensors. In this work, we contribute to this field by analyzing the capabilities of using either artificial neural networks (eager learning) or nearest neighbor search (lazy learning) for such a problem. Sparse orientation features, resulting from sensor fusion of only five inertial measurement units with magnetometers, are mapped to full-body poses. Both eager and lazy learning algorithms are shown to be capable of constructing this mapping. The full-body output poses are visually plausible, with an average joint position error of approximately 7 cm and an average joint angle error of 7°. Additionally, the effects of the magnetic disturbances typical in orientation tracking on the estimation of full-body poses were also investigated, where nearest neighbor search showed better performance under such disturbances.

  13. A practical method to assess model sensitivity and parameter uncertainty in C cycle models

    NASA Astrophysics Data System (ADS)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2015-04-01

The carbon cycle combines multiple spatial and temporal scales, from minutes to hours for the chemical processes occurring in plant cells, to several hundred years for the exchange between the atmosphere and the deep ocean, and finally to millennia for the formation of fossil fuels. Together with our knowledge of the transformation processes involved in the carbon cycle, many Earth Observation systems are now available to help improve models and predictions using inverse modelling techniques. A generic inverse problem consists of finding an n-dimensional state vector x such that h(x) = y, for a given N-dimensional observation vector y, including random noise, and a given model h. The problem is well posed if the three following conditions hold: 1) a solution exists, 2) the solution is unique and 3) the solution depends continuously on the input data. If at least one of these conditions is violated, the problem is said to be ill-posed. The inverse problem is often ill-posed; a regularization method is then required to replace the original problem with a well-posed one, and a solution strategy amounts to 1) constructing a solution x, 2) assessing the validity of the solution, and 3) characterizing its uncertainty. The data assimilation linked ecosystem carbon (DALEC) model is a simple box model simulating the carbon budget allocation for terrestrial ecosystems. Intercomparison experiments have demonstrated the relative merit of various inverse modelling strategies (MCMC, EnKF) to estimate model parameters and initial carbon stocks for DALEC using eddy covariance measurements of net ecosystem exchange of CO2 and leaf area index observations. Most results agreed on the fact that parameters and initial stocks directly related to fast processes were best estimated with narrow confidence intervals, whereas those related to slow processes were poorly estimated with very large uncertainties.
While other studies have tried to overcome this difficulty by adding complementary data streams or by considering longer observation windows, no systematic analysis has been carried out so far to explain the large differences among results. We consider adjoint-based methods to investigate inverse problems using DALEC and various data streams. Using resolution matrices, we study the nature of the inverse problems (solution existence, uniqueness and stability) and show how standard regularization techniques affect resolution and stability properties. Instead of using standard prior information as a penalty term in the cost function to regularize the problems, we constrain the parameter space using ecological balance conditions and inequality constraints. The efficiency and rapidity of this approach allow us to compute ensembles of solutions to the inverse problems, from which we can establish the robustness of the variational method and obtain non-Gaussian posterior distributions for the model parameters and initial carbon stocks.
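For a linear model h(x) = Hx, the standard prior-as-penalty regularization mentioned above reduces to Tikhonov's method, which restores all three well-posedness conditions for any positive weight. A minimal numpy sketch (illustrative matrices, not DALEC itself):

```python
import numpy as np

# Tikhonov regularization replaces the ill-posed problem h(x) = y with the
# well-posed minimization  min_x ||Hx - y||^2 + lam*||x||^2, whose solution
# x = (H^T H + lam*I)^{-1} H^T y exists, is unique, and depends continuously
# on y for any lam > 0, even when H is (nearly) singular.
rng = np.random.default_rng(2)
H = rng.standard_normal((30, 30))
H[:, -1] = H[:, 0] + 1e-10 * rng.standard_normal(30)  # nearly dependent columns
x_true = rng.standard_normal(30)
y = H @ x_true

def tikhonov(H, y, lam):
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Continuity in the data: a small perturbation of y moves the regularized
# solution only slightly, despite the near-singularity of H.
x1 = tikhonov(H, y, lam=1e-3)
x2 = tikhonov(H, y + 1e-6 * rng.standard_normal(30), lam=1e-3)
```

The price of this stability is a bias toward small-norm solutions, which is why the choice of the regularization weight (or, as in the abstract above, of physically motivated constraints instead) matters.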

  14. Multiple-generator errors are unavoidable under model misspecification.

    PubMed

    Jewett, D L; Zhang, Z

    1995-08-01

    Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data because there is model misspecification and mathematically the equations used for the simultaneously active generators must be of a different form than the equations for each generator active alone.

  15. Solving Inverse Kinematics of Robot Manipulators by Means of Meta-Heuristic Optimisation

    NASA Astrophysics Data System (ADS)

    Wichapong, Kritsada; Bureerat, Sujin; Pholdee, Nantiwat

    2018-05-01

This paper presents the use of meta-heuristic algorithms (MHs) for solving the inverse kinematics of robot manipulators based on forward kinematics. The design variables are the joint angular displacements used to move a robot end-effector to a target in Cartesian space, and the design problem is posed as minimizing the error between the target points and the positions of the robot end-effector. The problem is dynamic, as the target points are constantly changed by the robot user. Several well-established MHs are used to solve the problem, and the results obtained with the different meta-heuristics are compared in terms of end-effector error and search speed. From this study, the best performer is identified as a baseline for future development of MH-based inverse kinematics solvers.
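As a minimal illustration of the approach (a hypothetical planar two-link arm, not the manipulator studied in the paper), one commonly used MH, differential evolution, can minimize the end-effector position error over the joint angles:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Forward kinematics of a hypothetical planar two-link arm (link lengths in meters).
LINK1, LINK2 = 1.0, 0.8

def fk(theta):
    t1, t2 = theta
    return np.array([LINK1 * np.cos(t1) + LINK2 * np.cos(t1 + t2),
                     LINK1 * np.sin(t1) + LINK2 * np.sin(t1 + t2)])

# A reachable target point, generated from a known joint configuration.
target = fk(np.array([0.6, -0.4]))

# Pose inverse kinematics as minimizing end-effector position error
# over the joint angles, and solve it with a meta-heuristic.
res = differential_evolution(lambda th: np.linalg.norm(fk(th) - target),
                             bounds=[(-np.pi, np.pi)] * 2, seed=1, tol=1e-10)
```

The meta-heuristic needs only forward-kinematics evaluations, no Jacobian, which is what makes the approach attractive for redundant or awkwardly parameterized manipulators.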

  16. Correcting electrode modelling errors in EIT on realistic 3D head models.

    PubMed

    Jehl, Markus; Avery, James; Malone, Emma; Holder, David; Betcke, Timo

    2015-12-01

Electrical impedance tomography (EIT) is a promising medical imaging technique which could aid differentiation of haemorrhagic from ischaemic stroke in an ambulance. One challenge in EIT is the ill-posed nature of the image reconstruction, i.e., that small measurement or modelling errors can result in large image artefacts. It is therefore important that reconstruction algorithms are improved with regard to stability against modelling errors. We identify that wrongly modelled electrode positions constitute one of the biggest sources of image artefacts in head EIT. Therefore, the use of the Fréchet derivative on the electrode boundaries in a realistic three-dimensional head model is investigated, in order to reconstruct electrode movements simultaneously with conductivity changes. We show a fast implementation and analyse the performance of electrode position reconstructions in time-difference and absolute imaging for simulated and experimental voltages. Reconstructing the electrode positions and conductivities simultaneously increased the image quality significantly in the presence of electrode movement.

  17. Treatment of Nuclear Data Covariance Information in Sample Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swiler, Laura Painton; Adams, Brian M.; Wieselquist, William

This report summarizes a NEAMS (Nuclear Energy Advanced Modeling and Simulation) project focused on developing a sampling capability that can handle the challenges of generating samples from nuclear cross-section data. The covariance information between energy groups tends to be very ill-conditioned and thus poses a problem for traditional methods of generating correlated samples. This report outlines a method that addresses sample generation from cross-section matrices.
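One common way to draw correlated samples from an ill-conditioned or rank-deficient covariance matrix, where a plain Cholesky factorization may fail, is to build the sampling factor from a clipped eigendecomposition. This is a generic sketch of that idea, not necessarily the method the report develops:

```python
import numpy as np

# A rank-deficient 6x6 covariance (rank 3), standing in for an
# ill-conditioned cross-section covariance between energy groups.
rng = np.random.default_rng(4)
B = rng.standard_normal((6, 3))
C = B @ B.T

# Cholesky requires strict positive definiteness; instead, clip the
# eigenvalues at zero and form a factor L with C = L L^T.
w, Q = np.linalg.eigh(C)
L = Q @ np.diag(np.sqrt(np.clip(w, 0.0, None)))

# Correlated Gaussian samples: x = L z with z ~ N(0, I).
samples = L @ rng.standard_normal((6, 100000))
C_emp = samples @ samples.T / samples.shape[1]   # empirical covariance
```

With enough samples, the empirical covariance of the generated vectors reproduces the target matrix, including its singular directions.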

  18. The determination of pair-distance distribution by double electron-electron resonance: regularization by the length of distance discretization with Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Dzuba, Sergei A.

    2016-08-01

The pulsed double electron-electron resonance technique (DEER, or PELDOR) is applied to study conformations and aggregation of peptides, proteins, nucleic acids, and other macromolecules. For a pair of spin labels, experimental data allow for the determination of their distance distribution function, P(r). P(r) is derived as a solution of a first-kind Fredholm integral equation, which is an ill-posed problem. Here, we suggest regularization by increasing the distance discretization length to its upper limit, where numerical integration still provides agreement with experiment. This upper limit is found to be well above the lower limit at which the solution instability appears because of the ill-posed nature of the problem. For solving the integral equation, Monte Carlo trials of P(r) functions are employed; this method has the obvious advantage that the non-negativity constraint for P(r) is fulfilled automatically. Regularization by increasing the distance discretization length may, in the case of overlapping broad and narrow distributions, be employed selectively, with the length being different for different distance ranges. The approach is checked for model distance distributions and for experimental data taken from the literature for doubly spin-labeled DNA and peptide antibiotics.
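The instability described above, and the effect of regularization by restricting the solution space, can be demonstrated with a truncated-SVD sketch on a synthetic ill-conditioned operator (illustrative only; this is not the DEER kernel itself):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12
# Build an operator with rapidly decaying singular values, mimicking the
# ill-conditioning of a discretized first-kind Fredholm kernel.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)       # singular values 1, 0.1, ..., 1e-11
A = U @ np.diag(s) @ V.T

x_true = V[:, 0] + 0.5 * V[:, 1]             # lies in the well-resolved modes
b = A @ x_true + 1e-8 * rng.standard_normal(n)   # small "measurement noise"

def tsvd_solve(A, b, k):
    """Truncated-SVD solution keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

x_naive = np.linalg.solve(A, b)              # noise amplified by up to 1/s_min
x_tsvd = tsvd_solve(A, b, k=6)               # discard the unstable modes

err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
```

Discarding the modes whose singular values fall below the noise level plays the same stabilizing role as coarsening the discretization: both restrict the solution to components the data can actually resolve.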

  19. Commentary: Definitely More than Measurement Error--But How Should We Understand and Deal with Informant Discrepancies?

    ERIC Educational Resources Information Center

    Achenbach, Thomas M.

    2011-01-01

    The special section articles demonstrate the importance of informant discrepancies. They also illustrate challenges posed by discrepancies, plus opportunities for advancing research and practice. This commentary addresses these cross-cutting issues: (a) Discrepancies affect many kinds of assessment besides ratings of children's problems. (b)…

  20. Sickness absence management: encouraging attendance or 'risk-taking' presenteeism in employees with chronic illness?

    PubMed

    Munir, Fehmidah; Yarker, Joanna; Haslam, Cheryl

    2008-01-01

To investigate the organizational perspectives on the effectiveness of their attendance management policies for chronically ill employees. A mixed-method approach was employed, involving a questionnaire survey with employees and in-depth interviews with key stakeholders of the organizational policies. Participants reported that attendance management policies, and the point at which systems were triggered, posed problems for employees managing chronic illness. These systems presented risks to health: employees were more likely to turn up for work despite feeling unwell (presenteeism) to avoid a disciplinary situation, but absence-related support was only provided once illness progressed to long-term sick leave. Attendance management policies also raised ethical concerns about 'forced' illness disclosure and placed immense pressure on line managers to manage attendance. Participants felt their current attendance management policies were unfavourable toward those managing a chronic illness. The policies focused heavily on attendance despite illness and on providing return-to-work support following long-term sick leave. Drawing on the results, the authors conclude that attendance management should promote job retention rather than merely prevent absence per se. They outline areas of improvement in the attendance management of employees with chronic illness.

  1. Neutrino tomography - Tevatron mapping versus the neutrino sky. [for X-rays of earth interior

    NASA Technical Reports Server (NTRS)

    Wilson, T. L.

    1984-01-01

    The feasibility of neutrino tomography of the earth's interior is discussed, taking the 80-GeV W-boson mass determined by Arnison (1983) and Banner (1983) into account. The opacity of earth zones is calculated on the basis of the preliminary reference earth model of Dziewonski and Anderson (1981), and the results are presented in tables and graphs. Proposed tomography schemes are evaluated in terms of the well-posedness of the inverse-Radon-transform problems involved, the neutrino generators and detectors required, and practical and economic factors. The ill-posed schemes are shown to be infeasible; the well-posed schemes (using Tevatrons or the neutrino sky as sources) are considered feasible but impractical.

  2. "It Was Not Me That Was Sick, It Was the Building": Rhetorical Identity Management Strategies in the Context of Observed or Suspected Indoor Air Problems in Workplaces.

    PubMed

    Finell, Eerika; Seppälä, Tuija; Suoninen, Eero

    2018-07-01

Suffering from a contested illness poses a serious threat to one's identity. We analyzed the rhetorical identity management strategies respondents used when depicting their health problems and lives in the context of observed or suspected indoor air (IA) problems in the workplace. The data consisted of essays collected by the Finnish Literature Society. We used discourse-oriented methods to interpret a variety of language uses in the construction of identity strategies. Six strategies were identified: respondents described themselves as normal and good citizens with strong characters, and as IA sufferers who received acknowledgement from others, offered positive meanings to their in-group, and demanded recognition. These identity strategies were located on two continua: (a) individual- and collective-level strategies and (b) dissolved and emphasized (sub)category boundaries. The practical conclusion is that professionals should be aware of these complex coping strategies when aiming to interact effectively with people suffering from contested illnesses.

  3. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE PAGES

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

    2017-07-10

We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints, which facilitates the variational approach.

  4. Constraining DALECv2 using multiple data streams and ecological constraints: analysis and application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delahaies, Sylvain; Roulstone, Ian; Nichols, Nancy

We use a variational method to assimilate multiple data streams into the terrestrial ecosystem carbon cycle model DALECv2 (Data Assimilation Linked Ecosystem Carbon). Ecological and dynamical constraints have recently been introduced to constrain unresolved components of this otherwise ill-posed problem. We recast these constraints as a multivariate Gaussian distribution to incorporate them into the variational framework and we demonstrate their advantage through a linear analysis. By using an adjoint method we study a linear approximation of the inverse problem: firstly we perform a sensitivity analysis of the different outputs under consideration, and secondly we use the concept of resolution matrices to diagnose the nature of the ill-posedness and evaluate regularisation strategies. We then study the non-linear problem with an application to real data. Finally, we propose a modification to the model: introducing a spin-up period provides us with a built-in formulation of some ecological constraints, which facilitates the variational approach.

  5. Pre-service teachers’ challenges in presenting mathematical problems

    NASA Astrophysics Data System (ADS)

    Desfitri, R.

    2018-01-01

The purpose of this study was to analyze how pre-service teachers prepared and assigned tasks in teaching practice situations. This study was also intended to discuss the kinds of tasks or assignments they gave to students. The participants of this study were 15 selected pre-service mathematics teachers from the mathematics education department who took part in a microteaching class as part of a teaching preparation program. Based on the data obtained, it was occasionally found that there were hidden errors in the questions or tasks assigned by pre-service teachers, which might prevent their students from reaching a logical or correct answer. Although some answers might seem to be true, they were illogical or unfavourable. It is strongly recommended that pre-service teachers be more careful when posing mathematical problems so that students do not misunderstand the problems or the concepts, since both teachers and students were sometimes unaware of errors in the problems being worked on.

  6. Estimation of the parameters of disturbances on long-range radio-communication paths

    NASA Astrophysics Data System (ADS)

    Gerasimov, Iu. S.; Gordeev, V. A.; Kristal, V. S.

    1982-09-01

    Radio propagation on long-range paths is disturbed by such phenomena as ionospheric density fluctuations, meteor trails, and the Faraday effect. In the present paper, the determination of the characteristics of such disturbances on the basis of received-signal parameters is considered as an inverse and ill-posed problem. A method for investigating the indeterminacy which arises in such determinations is proposed, and a quantitative analysis of this indeterminacy is made.

  7. Spotted star mapping by light curve inversion: Tests and application to HD 12545

    NASA Astrophysics Data System (ADS)

    Kolbin, A. I.; Shimansky, V. V.

    2013-06-01

    A code for mapping the surfaces of spotted stars is developed. The code works by analyzing rotationally modulated light curves. We simulate the reconstruction process for the stellar surface and present the simulation results. The reconstruction artifacts caused by the ill-posed nature of the problem are identified. The surface of the spotted component of the system HD 12545 is mapped using this procedure.

  8. Using the Hilbert uniqueness method in a reconstruction algorithm for electrical impedance tomography.

    PubMed

    Dai, W W; Marsili, P M; Martinez, E; Morucci, J P

    1994-05-01

    This paper presents a new version of the layer stripping algorithm in the sense that it works essentially by repeatedly stripping away the outermost layer of the medium after having determined the conductivity value in this layer. In order to stabilize the ill-posed boundary value problem related to each layer, we base our algorithm on the Hilbert uniqueness method (HUM) and implement it with the boundary element method (BEM).

  9. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model by using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching that processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and to generate simulated sensor point cloud data. It also provides the true pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments are performed; the results demonstrate the algorithm's capability of operating on point clouds directly and under large pose variations. A field testing experiment was also conducted, and the results show that the proposed method is effective. PMID:27271633

  10. The Analysis of Ratings Using Generalizability Theory for Student Outcome Assessment. AIR 1988 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Erwin, T. Dary

    Rating scales are a typical method for evaluating a student's performance in outcomes assessment. The analysis of the quality of information from rating scales poses special measurement problems when researchers work with faculty in their development. Generalizability measurement theory offers a set of techniques for estimating errors or…

  11. Greedy algorithms for diffuse optical tomography reconstruction

    NASA Astrophysics Data System (ADS)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons diffusing through the cross section of tissue. Conventional DOT imaging methods iteratively compute the solution of a forward diffusion equation solver, which makes the problem computationally expensive; these methods also fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem within a compressive sensing framework; various greedy algorithms, namely orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP), and simultaneous orthogonal matching pursuit (S-OMP), have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We have also studied conventional DOT methods, such as the least squares method and truncated singular value decomposition (TSVD), for comparison. One of the main features of this work is the use of fewer source-detector pairs, which can facilitate the use of DOT in routine screening applications. Performance metrics such as mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal to noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms the conventional DOT imaging methods in terms of computational efficiency. The main advantage of this study is that the forward diffusion equation solver need not be repeatedly solved.
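As an illustration of the greedy-pursuit family this record compares, here is a minimal orthogonal matching pursuit (OMP) sketch; the toy operator and sparse vector below are our own assumptions, not the paper's phantom setup:

```python
import numpy as np

def omp(A, b, sparsity):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then re-fit on the support by least squares."""
    support = []
    residual = b.copy()
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares re-fit restricted to the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
    x[support] = coef
    return x

# Toy demo: a 2-sparse vector observed through a random Gaussian operator
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 50))
x_true = np.zeros(50)
x_true[[7, 31]] = [1.5, -2.0]
x_hat = omp(A, A @ x_true, sparsity=2)
```

After the least-squares re-fit the residual is orthogonal to every selected column, which is what distinguishes OMP from plain matching pursuit.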

  12. Ill-posedness in modeling mixed sediment river morphodynamics

    NASA Astrophysics Data System (ADS)

    Chavarrías, Víctor; Stecca, Guglielmo; Blom, Astrid

    2018-04-01

    In this paper we analyze the Hirano active layer model used in mixed sediment river morphodynamics with respect to its ill-posedness. Ill-posedness causes the solution to be unstable to short-wave perturbations. This implies that the solution presents spurious oscillations, the amplitude of which depends on the domain discretization. Ill-posedness not only produces physically unrealistic results but may also cause failure of numerical simulations. By considering a two-fraction sediment mixture we obtain analytical expressions for the mathematical characterization of the model. Using these, we show that the ill-posed domain is larger than was found in previous analyses, comprising not only cases of bed degradation into a substrate finer than the active layer but also aggradational cases. Furthermore, by analyzing a three-fraction model we observe ill-posedness under conditions of bed degradation into a coarse substrate. We observe that oscillations in the numerical solution of ill-posed simulations grow until the model becomes well-posed, as the spurious mixing of the active layer sediment and substrate sediment acts as a regularization mechanism. Finally we conduct an eigenstructure analysis of a simplified vertically continuous model for mixed sediment, for which we show that ill-posedness occurs in a wider range of conditions than for the active layer model.

  13. Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.

    PubMed

    Hamilton, Sarah J; Mueller, J L; Alsaker, M

    2017-02-01

    Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise.

  14. A spatially adaptive total variation regularization method for electrical resistance tomography

    NASA Astrophysics Data System (ADS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2015-12-01

    The total variation (TV) regularization method has been used to solve the ill-posed inverse problem of electrical resistance tomography (ERT), owing to its good ability to preserve edges. However, the quality of the reconstructed images, especially in the flat region, is often degraded by noise. To optimize the regularization term and the regularization factor according to the spatial feature and to improve the resolution of reconstructed images, a spatially adaptive total variation (SATV) regularization method is proposed. A kind of effective spatial feature indicator named difference curvature is used to identify which region is a flat or edge region. According to different spatial features, the SATV regularization method can automatically adjust both the regularization term and regularization factor. At edge regions, the regularization term is approximate to the TV functional to preserve the edges; in flat regions, it is approximate to the first-order Tikhonov (FOT) functional to make the solution stable. Meanwhile, the adaptive regularization factor determined by the spatial feature is used to constrain the regularization strength of the SATV regularization method for different regions. Besides, a numerical scheme is adopted for the implementation of the second derivatives of difference curvature to improve the numerical stability. Several reconstruction image metrics are used to quantitatively evaluate the performance of the reconstructed results. Both simulation and experimental results indicate that, compared with the TV (mean relative error 0.288, mean correlation coefficient 0.627) and FOT (mean relative error 0.295, mean correlation coefficient 0.638) regularization methods, the proposed SATV (mean relative error 0.259, mean correlation coefficient 0.738) regularization method can endure a relatively high level of noise and improve the resolution of reconstructed images.
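The first-order Tikhonov (FOT) behaviour this record uses in flat regions can be sketched in one dimension. This toy denoiser is our own minimal formulation (not the paper's 2-D ERT implementation): it penalises the squared forward differences of the signal, which stabilises flat regions but rounds off edges, exactly the trade-off that motivates switching to the TV functional near edges.

```python
import numpy as np

def fot_denoise(y, lam):
    """First-order Tikhonov denoising of a 1-D signal:
    minimise ||x - y||^2 + lam * ||D x||^2, with D the forward-difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)       # (n-1) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

# A flat signal is a fixed point of the solve; a sharp edge gets smoothed.
noisy_edge = np.array([0.0, 0.1, 0.0, 5.0, 4.9, 5.0])
smoothed = fot_denoise(noisy_edge, lam=2.0)
```

The solve preserves the signal mean (the difference operator annihilates constants) while strictly pulling extremes toward the interior.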

  15. Projected Regression Methods for Inverting Fredholm Integrals: Formalism and Application to Analytical Continuation

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-Francois; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    We present a machine learning-based statistical regression approach to the inversion of Fredholm integrals of the first kind by studying an important example for the quantum materials community, the analytical continuation problem of quantum many-body physics. It involves reconstructing the frequency dependence of physical excitation spectra from data obtained at specific points in the complex frequency plane. The approach provides a natural regularization in cases where the inverse of the Fredholm kernel is ill-conditioned and yields robust error metrics. The stability of the forward problem permits the construction of a large database of input-output pairs. Machine learning methods applied to this database generate approximate solutions which are projected onto the subspace of functions satisfying relevant constraints. We show that for low input noise the method performs as well as or better than Maximum Entropy (MaxEnt) under standard error metrics, and is substantially more robust to noise. We expect the methodology to be similarly effective for any problem involving a formally ill-conditioned inversion, provided that the forward problem can be efficiently solved. AJM was supported by the Office of Science of the U.S. Department of Energy under Subcontract No. 3F-3138 and LFA by the Columbia University IDS-ROADS project, UR009033-05, which also provided partial support to RN and LH.

  16. High-resolution CSR GRACE RL05 mascons

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.

    2016-10-01

    The determination of the gravity model for the Gravity Recovery and Climate Experiment (GRACE) is susceptible to modeling errors, measurement noise, and observability issues. The ill-posed GRACE estimation problem causes the unconstrained GRACE RL05 solutions to have north-south stripes. We discuss the development of global equal area mascon solutions to improve the GRACE gravity information for the study of Earth surface processes. These regularized mascon solutions are developed with a 1° resolution using Tikhonov regularization in a geodesic grid domain. These solutions are derived from GRACE information only, and no external model or data is used to inform the constraints. The regularization matrix is time variable and will not bias or attenuate future regional signals to some past statistics from GRACE or other models. The resulting Center for Space Research (CSR) mascon solutions have no stripe errors and capture all the signals observed by GRACE within the measurement noise level. The solutions are not tailored for specific applications and are global in nature. This study discusses the solution approach and compares the resulting solutions with postprocessed results from the RL05 spherical harmonic solutions and other global mascon solutions for studies of Arctic ice sheet processes, ocean bottom pressure variation, and land surface total water storage change. This suite of comparisons leads to the conclusion that the mascon solutions presented here are an enhanced representation of the RL05 GRACE solutions and provide accurate surface-based gridded information that can be used without further processing.
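For readers unfamiliar with the Tikhonov step mentioned in this record, a minimal dense-matrix sketch (generic notation, nothing GRACE-specific) is:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimise ||A x - b||^2 + lam^2 ||x||^2 via the regularised
    normal equations (A^T A + lam^2 I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

# With A = I the solution is simply b shrunk by the factor 1 / (1 + lam^2)
b = np.array([2.0, -4.0, 6.0])
x = tikhonov_solve(np.eye(3), b, lam=1.0)
```

A time-variable regularisation matrix, as used for the mascon solutions, would replace `lam**2 * np.eye(n)` with `lam**2 * L.T @ L` for a chosen (possibly time-dependent) operator `L`.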

  17. A deep learning approach for pose estimation from volumetric OCT data.

    PubMed

    Gessert, Nils; Schlüter, Matthias; Schlaefer, Alexander

    2018-05-01

    Tracking the pose of instruments is a central problem in image-guided surgery. For microscopic scenarios, optical coherence tomography (OCT) is increasingly used as an imaging modality. OCT is suitable for accurate pose estimation due to its micrometer range resolution and volumetric field of view. However, OCT image processing is challenging due to speckle noise and reflection artifacts in addition to the images' 3D nature. We address pose estimation from OCT volume data with a new deep learning-based tracking framework. For this purpose, we design a new 3D convolutional neural network (CNN) architecture to directly predict the 6D pose of a small marker geometry from OCT volumes. We use a hexapod robot to automatically acquire labeled data points which we use to train 3D CNN architectures for multi-output regression. We use this setup to provide an in-depth analysis on deep learning-based pose estimation from volumes. Specifically, we demonstrate that exploiting volume information for pose estimation yields higher accuracy than relying on 2D representations with depth information. Supporting this observation, we provide quantitative and qualitative results that 3D CNNs effectively exploit the depth structure of marker objects. Regarding the deep learning aspect, we present efficient design principles for 3D CNNs, making use of insights from the 2D deep learning community. In particular, we present Inception3D as a new architecture which performs best for our application. We show that our deep learning approach reaches errors at our ground-truth label's resolution. We achieve a mean average error of 14.89 ± 9.3 µm and 0.096 ± 0.072° for position and orientation learning, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    NASA Astrophysics Data System (ADS)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.

  19. Investigating the impact of spatial priors on the performance of model-based IVUS elastography

    PubMed Central

    Richards, M S; Doyley, M M

    2012-01-01

    This paper describes methods that provide prerequisite information for computing circumferential stress in modulus elastograms recovered from vascular tissue—information that could help cardiologists detect life-threatening plaques and predict their propensity to rupture. The modulus recovery process is an ill-posed problem; therefore additional information is needed to produce useful elastograms. In this work, prior geometrical information was used to impose hard or soft constraints on the reconstruction process. We conducted simulation and phantom studies to evaluate and compare modulus elastograms computed with soft and hard constraints versus those computed without any prior information. The results revealed that (1) the contrast-to-noise ratio of modulus elastograms achieved using the soft prior and hard prior reconstruction methods exceeded that of elastograms computed without any prior information; (2) the soft prior and hard prior reconstruction methods could tolerate up to 8% measurement noise; and (3) the performance of soft and hard prior modulus elastograms degraded when incomplete spatial priors were employed. This work demonstrates that including spatial priors in the reconstruction process should improve the performance of model-based elastography, and the soft prior approach should enhance the robustness of the reconstruction process to errors in the geometrical information. PMID:22037648

  20. Recovering the 3d Pose and Shape of Vehicles from Stereo Images

    NASA Astrophysics Data System (ADS)

    Coenen, M.; Rottensteiner, F.; Heipke, C.

    2018-05-01

    The precise reconstruction and pose estimation of vehicles plays an important role, e.g. for autonomous driving. We tackle this problem on the basis of street level stereo images obtained from a moving vehicle. Starting from initial vehicle detections, we use a deformable vehicle shape prior learned from CAD vehicle data to fully reconstruct the vehicles in 3D and to recover their 3D pose and shape. To fit a deformable vehicle model to each detection by inferring the optimal parameters for pose and shape, we define an energy function leveraging reconstructed 3D data, image information, the vehicle model and derived scene knowledge. To minimise the energy function, we apply a robust model fitting procedure based on iterative Monte Carlo model particle sampling. We evaluate our approach using the object detection and orientation estimation benchmark of the KITTI dataset (Geiger et al., 2012). Our approach can deal with very coarse pose initialisations and we achieve encouraging results with up to 82 % correct pose estimations. Moreover, we are able to deliver very precise orientation estimation results with an average absolute error smaller than 4°.

  1. Moving from pixel to object scale when inverting radiative transfer models for quantitative estimation of biophysical variables in vegetation (Invited)

    NASA Astrophysics Data System (ADS)

    Atzberger, C.

    2013-12-01

    The robust and accurate retrieval of vegetation biophysical variables using RTM is seriously hampered by the ill-posedness of the inverse problem. This contribution presents our object-based inversion approach and evaluates it against measured data. The proposed method takes advantage of the fact that nearby pixels are generally more similar than those at a larger distance. For example, within a given vegetation patch, nearby pixels often share similar leaf angular distributions. This leads to spectral co-variations in the n-dimensional spectral feature space, which can be used for regularization purposes. Using a set of leaf area index (LAI) measurements (n=26) acquired over alfalfa, sugar beet and garlic crops of the Barrax test site (Spain), it is demonstrated that the proposed regularization using neighbourhood information yields more accurate results than the traditional pixel-based inversion. The principle of the ill-posed inverse problem and the proposed solution is illustrated in the red-nIR feature space using PROSAIL: [A] the spectral trajectory ('soil trajectory') obtained for one leaf angle (ALA) and one soil brightness (αsoil) when LAI varies between 0 and 10; [B] 'soil trajectories' for 5 soil brightness values and three leaf angles; [C] the ill-posed inverse problem: different combinations of ALA × αsoil yield an identical crossing point; [D] object-based RTM inversion: only one 'soil trajectory' fits all nine pixels within a gliding (3×3) window. The black dots (plus the rectangle = central pixel) represent the hypothetical positions of nine pixels within a 3×3 (gliding) window. Assuming that over short distances (× 1 pixel) variations in soil brightness can be neglected, the proposed object-based inversion searches for one common set of ALA × αsoil so that the resulting 'soil trajectory' best fits the nine measured pixels. Ground-measured vs. retrieved LAI values are shown for three crops (left: proposed object-based approach; right: pixel-based inversion).

  2. [Multidisciplinary approach in public health research. The example of accidents and safety at work].

    PubMed

    Lert, F; Thebaud, A; Dassa, S; Goldberg, M

    1982-01-01

    This article critically analyses the various scientific approaches taken to industrial accidents, particularly in epidemiology, ergonomics and sociology, by attempting to outline the epistemological limitations of each respective field. An occupational accident is by its very nature not only a physical injury but also an economic, social and legal phenomenon, which, more so than illness, enables us to examine the problems posed by the need for a multidisciplinary approach in public health research.

  3. Controlled wavelet domain sparsity for x-ray tomography

    NASA Astrophysics Data System (ADS)

    Purisha, Zenith; Rimpeläinen, Juho; Bubba, Tatiana; Siltanen, Samuli

    2018-01-01

    Tomographic reconstruction is an ill-posed inverse problem that calls for regularization. One possibility is to require sparsity of the unknown in an orthonormal wavelet basis. This, in turn, can be achieved by variational regularization, where the penalty term is the sum of the absolute values of the wavelet coefficients. The primal-dual fixed point algorithm showed that the minimizer of the variational regularization functional can be computed iteratively using a soft-thresholding operation. Choosing the soft-thresholding parameter…
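The soft-thresholding operation referred to in this record has a one-line closed form; a sketch follows (the threshold value is a generic illustration, since the record's discussion of the parameter choice is truncated):

```python
import numpy as np

def soft_threshold(w, mu):
    """Proximal operator of mu * ||w||_1: shrink each wavelet
    coefficient toward zero by mu, zeroing out the small ones."""
    return np.sign(w) * np.maximum(np.abs(w) - mu, 0.0)

coeffs = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])
shrunk = soft_threshold(coeffs, mu=1.0)
```

Coefficients with magnitude at most mu are set exactly to zero, which is what promotes sparsity in the wavelet basis.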

  4. Model-based elastography: a survey of approaches to the inverse elasticity problem

    PubMed Central

    Doyley, M M

    2012-01-01

    Elastography is emerging as an imaging modality that can distinguish normal versus diseased tissues via their biomechanical properties. This article reviews current approaches to elastography in three areas — quasi-static, harmonic, and transient — and describes inversion schemes for each elastographic imaging approach. Approaches include: first-order approximation methods; direct and iterative inversion schemes for linear elastic, isotropic materials; and advanced reconstruction methods for recovering parameters that characterize complex mechanical behavior. The paper’s objective is to document efforts to develop elastography within the framework of solving an inverse problem, so that elastography may provide reliable estimates of shear modulus and other mechanical parameters. We discuss issues that must be addressed if model-based elastography is to become the prevailing approach to quasi-static, harmonic, and transient elastography: (1) developing practical techniques to transform the ill-posed problem into a well-posed one; (2) devising better forward models to capture the transient behavior of soft tissue; and (3) developing better test procedures to evaluate the performance of modulus elastograms. PMID:22222839

  5. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source, in terms of its source characteristics, is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for the two- and three-dimensional cases with error-free and erroneous data.
Erroneous data was generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking of ANN model with proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence complexity of optimization model is reduced. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for the error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that mean values as predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.

  6. Predicting 3D pose in partially overlapped X-ray images of knee prostheses using model-based Roentgen stereophotogrammetric analysis (RSA).

    PubMed

    Hsu, Chi-Pin; Lin, Shang-Chih; Shih, Kao-Shang; Huang, Chang-Hung; Lee, Chian-Her

    2014-12-01

    After total knee replacement, the model-based Roentgen stereophotogrammetric analysis (RSA) technique has been used to monitor the status of prosthetic wear, misalignment, and even failure. However, overlap of the prosthetic outlines inevitably increases errors in the estimation of prosthetic poses due to the limited amount of available outline. In the literature, quite a few studies have investigated the problems induced by overlapped outlines, and manual adjustment remains the mainstream approach. This study proposes two methods to automate the image processing of overlapped outlines prior to the pose registration of prosthetic models. The outline-separated method defines the intersection points and segments the overlapped outlines. The feature-recognized method uses the point and line features of the remaining outlines to initiate registration. Overlap percentage is defined as the ratio of overlapped to non-overlapped outlines. Simulated images with five overlap percentages are used to evaluate the robustness and accuracy of the proposed methods. Compared with non-overlapped images, overlapped images reduce the number of outlines available for model-based RSA calculation. The maximum and root mean square errors for a prosthetic outline are 0.35 and 0.04 mm, respectively. The mean translation and rotation errors are 0.11 mm and 0.18°, respectively. The errors of the model-based RSA results increase when the overlap percentage exceeds about 9%. In conclusion, both the outline-separated and feature-recognized methods can be seamlessly integrated to automate the calculation of rough registration. This can significantly increase the clinical practicability of the model-based RSA technique.

  7. Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.

    PubMed

    Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn

    2016-01-01

    Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low-spatial-resolution HSIs with high-spatial-resolution multispectral images in order to obtain super-resolution HSIs. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases markedly because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold whose dimensionality is lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives for defining the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semireal data.
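    The locally-low-rank premise above can be checked numerically: build a toy data matrix whose pixels mix three hypothetical spectral signatures in spatially separated regions, and compare the effective rank of the whole image against that of each patch. The signatures, patch size, and layout below are illustrative assumptions, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "HSI": 3 materials, each confined to its own spatial
# region, so every patch sees fewer materials than the whole image does.
n_bands, patch = 8, 16
materials = rng.random((3, n_bands))      # spectral signatures (3 x bands)
abund = np.zeros((3, 48))                 # abundance of each material per pixel
abund[0, :16] = rng.random(16)
abund[1, 16:32] = rng.random(16)
abund[2, 32:] = rng.random(16)
X = materials.T @ abund                   # data matrix: bands x pixels

def effective_rank(M, tol=1e-8):
    # Count singular values above a relative threshold
    s = np.linalg.svd(M, compute_uv=False)
    return int((s > tol * s[0]).sum())

global_rank = effective_rank(X)
local_ranks = [effective_rank(X[:, i:i + patch]) for i in range(0, 48, patch)]
print(global_rank, local_ranks)
```

    Here each patch has rank 1 while the full matrix has rank 3, which is the situation the paper exploits by solving the fusion problem patch by patch.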

  8. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, J.G.

    2011-07-01

    A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It is proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: the linear least-squares adjustment formula that has been used in the past remains valid in the case of a singular covariance matrix of prior parameters, and it provides a unique solution. Statements in the literature to the effect that the problem is ill-posed are wrong, and no regularization of the problem is required. This has been proved in the present paper by two methods, while explicitly assuming that the covariance matrix of prior parameters is singular: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. (author)
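    The claim above, that the adjustment formulae need no inverse of the prior covariance, can be illustrated with a small numerical sketch: the standard update inverts only the response-space matrix A C Aᵀ + V, so a singular C causes no difficulty. The matrices and data below are illustrative toy values, not from the paper.

```python
import numpy as np

# Linear least-squares adjustment with a deliberately singular prior
# parameter covariance C (rank 1). Note that only (A C A^T + V) is
# inverted; C itself never is.
C = np.array([[1.0, 1.0],
              [1.0, 1.0]])            # prior parameter covariance, rank 1
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])            # sensitivity of responses to parameters
V = 0.1 * np.eye(2)                   # measurement covariance (invertible)
x0 = np.array([0.0, 0.0])             # prior parameter values
y = np.array([1.0, 1.0])              # measured responses

G = A @ C @ A.T + V                   # response-space covariance
x_adj = x0 + C @ A.T @ np.linalg.solve(G, y - A @ x0)   # adjusted parameters
C_adj = C - C @ A.T @ np.linalg.solve(G, A @ C)         # adjusted covariance
print(x_adj)
```

    The update is well defined and yields a unique adjusted parameter vector even though C has no inverse, which is exactly the paper's point.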

  9. Photometric theory for wide-angle phenomena

    NASA Technical Reports Server (NTRS)

    Usher, Peter D.

    1990-01-01

    An examination is made of the problem posed by wide-angle photographic photometry, in order to extract a photometric-morphological history of Comet P/Halley. Photometric solutions are presently achieved over wide angles through a generalization of an assumption-free moment-sum method. Standard stars in the field allow a complete solution to be obtained for extinction, sky brightness, and the characteristic curve. After formulating Newton's method for the solution of the general nonlinear least-squares problem, an implementation is undertaken for a canonical data set. Attention is given to the problem of random and systematic photometric errors.

  10. Geodesic active fields--a geometric framework for image registration.

    PubMed

    Zosso, Dominique; Bresson, Xavier; Thiran, Jean-Philippe

    2011-05-01

    In this paper we present a novel geometric framework called geodesic active fields for general image registration. In image registration, one looks for the underlying deformation field that best maps one image onto another. This is a classic ill-posed inverse problem, which is usually solved by adding a regularization term. Here, we propose a multiplicative coupling between the registration term and the regularization term, which turns out to be equivalent to embedding the deformation field in a weighted minimal surface problem. The deformation field is then driven by a minimization flow toward a harmonic map corresponding to the solution of the registration problem. This approach shares close similarities with the well-known geodesic active contours model in image segmentation, where the segmentation term (the edge detector function) is likewise coupled with the regularization term (the length functional) via multiplication. In fact, our geometric model is the exact mathematical generalization to vector fields of the weighted length problem for curves and surfaces introduced by Caselles-Kimmel-Sapiro. The energy of the deformation field is measured with the Polyakov energy weighted by a suitable image distance, borrowed from standard registration models. We investigate three different weighting functions: the squared error and the approximated absolute error for monomodal images, and the local joint entropy for multimodal images. Compared to specialized state-of-the-art methods tailored for specific applications, our geometric framework makes several important contributions. First, our general formulation works on any parametrizable, smooth and differentiable surface, including nonflat and multiscale images. In the latter case, multiscale images are registered at all scales simultaneously, and the relations between space and scale are intrinsically accounted for. Second, this method is, to the best of our knowledge, the first reparametrization-invariant registration method introduced in the literature. Third, the multiplicative coupling between the registration term, i.e., the local image discrepancy, and the regularization term naturally results in a data-dependent tuning of the regularization strength. Finally, by choosing the metric on the deformation field one can freely interpolate between classic Gaussian and more interesting anisotropic, TV-like regularization.

  11. Finite-horizon differential games for missile-target interception system using adaptive dynamic programming with input constraints

    NASA Astrophysics Data System (ADS)

    Sun, Jingliang; Liu, Chunsheng

    2018-01-01

    In this paper, the problem of intercepting a manoeuvring target within a fixed final time is posed in a non-linear constrained zero-sum differential game framework. The Nash equilibrium solution is found by solving the finite-horizon constrained differential game problem via an adaptive dynamic programming technique. In addition, a suitable non-quadratic functional is utilised to encode the control constraints into the differential game problem. A single critic network with constant weights and time-varying activation functions is constructed to approximate the solution of the associated time-varying Hamilton-Jacobi-Isaacs equation online. To properly satisfy the terminal constraint, an additional error term is incorporated in a novel weight-updating law such that the terminal constraint error is also minimised over time. By utilising Lyapunov's direct method, the closed-loop differential game system and the weight estimation error of the critic network are proved to be uniformly ultimately bounded. Finally, the effectiveness of the proposed method is demonstrated using a simple non-linear system and a non-linear missile-target interception system, assuming first-order dynamics for the interceptor and target.

  12. [Ethical questions related to nutrition and hydration: basic aspects].

    PubMed

    Collazo Chao, E; Girela, E

    2011-01-01

    Conditions that pose ethical problems related to nutrition and hydration are very common nowadays, particularly in hospitals among terminally ill patients and other patients who require nutrition and hydration. In this article we intend to analyze some of these circumstances, according to widely accepted ethical values, in order to outline a clear action model to help clinicians in making such difficult decisions. The problematic situations analyzed include whether hydration and nutrition should be considered basic care or therapeutic measures, and the ethical aspects of enteral versus parenteral nutrition.

  13. Evaluation of global equal-area mass grid solutions from GRACE

    NASA Astrophysics Data System (ADS)

    Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron

    2015-04-01

    The Gravity Recovery and Climate Experiment (GRACE) range-rate data were inverted into global equal-area mass grid solutions at the Center for Space Research (CSR) using Tikhonov regularization to stabilize the ill-posed inversion problem. These solutions are intended to be used for applications in hydrology, oceanography, the cryosphere, etc., without any need for post-processing. This paper evaluates these solutions with emphasis on the spatial and temporal characteristics of the signal content. These solutions will be validated against multiple models and in-situ data sets.
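    As a reminder of how Tikhonov regularization stabilizes an ill-posed inversion in the simplest linear setting, the sketch below damps the small singular values of the forward operator via the usual normal-equations form. The toy operator and noise level are illustrative stand-ins for the actual GRACE range-rate inversion.

```python
import numpy as np

# Toy ill-conditioned inversion stabilized by Tikhonov regularization.
rng = np.random.default_rng(1)
n = 10
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n)                 # singular values 1, 0.1, ..., 1e-9
A = U @ np.diag(s) @ V.T                  # ill-conditioned forward operator
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy "measurements"

def tikhonov(A, b, lam):
    # Solve (A^T A + lam^2 I) x = A^T b
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(A.shape[1]), A.T @ b)

x_naive = np.linalg.solve(A, b)           # unregularized: noise amplified by 1/s_min
x_reg = tikhonov(A, b, lam=1e-4)
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

    The regularized solve trades a small bias (components with singular values below about lam are suppressed) for stability against the measurement noise.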

  14. A kinetic study of jack-bean urease denaturation by a new dithiocarbamate bismuth compound

    NASA Astrophysics Data System (ADS)

    Menezes, D. C.; Borges, E.; Torres, M. F.; Braga, J. P.

    2012-10-01

    A kinetic study of the enzymatic inhibitory effect of a new bismuth dithiocarbamate complex on jack-bean urease is reported. A neural network approach is used to solve the ill-posed inverse problem arising from the numerical treatment of the subject. A reaction mechanism for the urease denaturation process is proposed, and the rate constants, relaxation time constants, equilibrium constants, activation Gibbs free energies for each reaction step, and Gibbs free energies for the transition species are determined.

  15. Hydrological Parameter Estimations from a Conservative Tracer Test With Variable-Density Effects at the Boise Hydrogeophysical Research Site

    DTIC Science & Technology

    2011-12-15

    the measured porosity values can be taken as equivalent to effective porosity values for this aquifer with the risk of only very limited overestimation...information to constrain/control an increasingly ill-posed problem, and (3) risk estimation of a model with more heterogeneity than is needed to explain...coarse fluvial deposits: Boise Hydrogeophysical Research Site, Geological Society of America Bulletin, 116(9–10), 1059–1073. Barrash, W., T. Clemo

  16. About probabilistic integration of ill-posed geophysical tomography and logging data: A knowledge discovery approach versus petrophysical transfer function concepts illustrated using cross-borehole radar-, P- and S-wave traveltime tomography in combination with cone penetration and dielectric logging data

    NASA Astrophysics Data System (ADS)

    Paasche, Hendrik

    2018-01-01

    Site characterization requires detailed and ideally spatially continuous information about the subsurface. Geophysical tomographic experiments allow for spatially continuous imaging of physical parameter variations, e.g., seismic wave propagation velocities. Such physical parameters are often related to typical geotechnical or hydrological target parameters, e.g., as obtained from 1D direct push or borehole logging. Here, the probabilistic inference of 2D tip resistance, sleeve friction, and relative dielectric permittivity distributions in near-surface sediments is constrained by ill-posed cross-borehole seismic P- and S-wave and radar wave traveltime tomography. In doing so, we follow a discovery-science strategy employing a fully data-driven approach capable of accounting for tomographic ambiguity and differences in spatial resolution between the geophysical tomograms and the geotechnical logging data used for calibration. We compare the outcome to results achieved with classical hypothesis-driven approaches, i.e., deterministic transfer functions derived empirically for the inference of 2D sleeve friction from S-wave velocity tomograms and theoretically for the inference of 2D dielectric permittivity from radar wave velocity tomograms. The data-driven approach offers maximal flexibility combined with very relaxed assumptions about the character of the expected links. This makes it a versatile tool applicable to almost any combination of data sets. However, error propagation may be critical and may justify a hypothesis-driven pre-selection of an optimal database, which goes along with the risk of excluding relevant information from the analyses. Results achieved by transfer functions rely on information about the nature of the link and optimal calibration settings drawn as retrospective hypotheses by other authors. Applying such transfer functions at other sites turns them into a priori valid hypotheses, which can, particularly for empirically derived transfer functions, result in poor predictions. However, a mindful utilization and critical evaluation of the consequences of turning a retrospectively drawn hypothesis into an a priori valid one can also yield good results for inference and prediction problems when classical transfer function concepts are used.

  17. On the use of the Reciprocity Gap Functional in inverse scattering with near-field data: An application to mammography

    NASA Astrophysics Data System (ADS)

    Delbary, Fabrice; Aramini, Riccardo; Bozza, Giovanni; Brignone, Massimo; Piana, Michele

    2008-11-01

    Microwave tomography is a non-invasive approach to the early diagnosis of breast cancer. However, the problem of visualizing tumors from diffracted microwaves is a difficult nonlinear ill-posed inverse scattering problem. We propose a qualitative approach to the solution of such a problem, whereby the shape and location of cancerous tissues can be detected by means of a combination of the Reciprocity Gap Functional method and the Linear Sampling method. We validate this approach against synthetic near-field data produced by a finite element method for boundary integral equations, where the breast is mimicked by the axial view of two nested cylinders, the external one representing the skin and the internal one representing the fat tissue.

  18. Stigma and work.

    PubMed

    Stuart, Heather

    2004-01-01

    This paper addresses what is known about workplace stigma and employment inequity for people with mental and emotional problems. For people with serious mental disorders, studies show profound consequences of stigma, including diminished employability, lack of career advancement and poor quality of working life. People with serious mental illnesses are more likely to be unemployed or to be under-employed in inferior positions that are incommensurate with their skills or training. If they return to work following an illness, they often face hostility and reduced responsibilities. The result may be self-stigma and increased disability. Little is yet known about how workplace stigma affects those with less disabling psychological or emotional problems, even though these are likely to be more prevalent in workplace settings. Despite the heavy burden posed by poor mental health in the workplace, there is no regular source of population data relating to workplace stigma, and no evidence base to support the development of best-practice solutions for workplace anti-stigma programs. Suggestions for research are made in light of these gaps.

  19. Beyond Criminalization: Toward a Criminologically Informed Framework for Mental Health Policy and Services Research

    PubMed Central

    Silver, Eric; Wolff, Nancy

    2010-01-01

    The problems posed by persons with mental illness involved with the criminal justice system are vexing ones that have received attention at the local, state and national levels. The conceptual model currently guiding research and social action around these problems is shaped by the “criminalization” perspective and the associated belief that reconnecting individuals with mental health services will by itself reduce risk for arrest. This paper argues that such efforts are necessary but possibly not sufficient to achieve that reduction. Arguing for the need to develop a services research framework that identifies a broader range of risk factors for arrest, we describe three potentially useful criminological frameworks—the “life course,” “local life circumstances” and “routine activities” perspectives. Their utility as platforms for research in a population of persons with mental illness is discussed and suggestions are provided with regard to how services research guided by these perspectives might inform the development of community-based services aimed at reducing risk of arrest. PMID:16791518

  20. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. 
The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function, and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar and the SOL compiler's parser was generated automatically from that grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the compiler recognizes all legal SOL programs, can recover from and correct many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute. Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.

  1. Impact of migration on illness experience and help-seeking strategies of patients from Turkey and Bosnia in primary health care in Basel.

    PubMed

    Gilgen, D; Maeusezahl, D; Salis Gross, C; Battegay, E; Flubacher, P; Tanner, M; Weiss, M G; Hatz, C

    2005-09-01

    Migration, particularly among refugees and asylum seekers, poses many challenges to the health systems of host countries. This study examined the impact of migration history on the illness experience, its meaning, and the help-seeking strategies of migrant patients from Bosnia and Turkey with a range of common health problems in general practice in Basel, Switzerland. The Explanatory Model Interview Catalogue, a data collection instrument for cross-cultural research which combines epidemiological and ethnographic research approaches, was used in semi-structured one-to-one patient interviews. Bosnian patients (n=36), who had more traumatic migration experiences than Turkish/Kurdish (n=62) or Swiss internal migrants (n=48), reported a larger number of health problems than the other groups. Psychological distress was reported most frequently by all three groups in response to focussed queries, but spontaneously reported symptoms indicated the prominence of somatic, rather than psychological or psychosocial, problems. Among Bosnians, 78% identified traumatic migration experiences as a cause of their illness, in addition to a range of psychological and biomedical causes. Help-seeking strategies for the current illness included a wide range of treatments, such as basic medical care at private surgeries and outpatient departments in hospitals, as well as alternative medical treatments, among all groups. The findings provide a useful guide to clinicians who work with migrants and should inform policy in medical care, information and health promotion for migrants in Switzerland, as well as further education of health professionals on issues concerning migrants' health.

  2. Stochastic simulation of spatially correlated geo-processes

    USGS Publications Warehouse

    Christakos, G.

    1987-01-01

    In this study, developments in the theory of stochastic simulation are discussed. The unifying element is the notion of Radon projection in Euclidean spaces. This notion provides a natural way of reconstructing the real process from a corresponding process observable on a reduced dimensionality space, where analysis is theoretically easier and computationally tractable. Within this framework, the concept of space transformation is defined and several of its properties, which are of significant importance within the context of spatially correlated processes, are explored. The turning bands operator is shown to follow from this. This strengthens considerably the theoretical background of the geostatistical method of simulation, and some new results are obtained in both the space and frequency domains. The inverse problem is solved generally and the applicability of the method is extended to anisotropic as well as integrated processes. Some ill-posed problems of the inverse operator are discussed. Effects of the measurement error and impulses at origin are examined. Important features of the simulated process as described by geomechanical laws, the morphology of the deposit, etc., may be incorporated in the analysis. The simulation may become a model-dependent procedure and this, in turn, may provide numerical solutions to spatial-temporal geologic models. Because the spatial simulation may be technically reduced to unidimensional simulations, various techniques of generating one-dimensional realizations are reviewed. To link theory and practice, an example is computed in detail. © 1987 International Association for Mathematical Geology.

  3. Multiple sclerosis in a postgraduate student of anaesthesia: illness in doctors and fitness to practice.

    PubMed

    Reyes, Antonio Jose; Ramcharan, Kanterpersad; Sharma, Sharda

    2016-01-28

    A 29-year-old previously healthy woman, a doctor, was diagnosed with relapsing-remitting multiple sclerosis after fulfilling the McDonald criteria for the diagnosis of definite multiple sclerosis. Despite 22 months of immunomodulatory treatment, the feasibility of her continuing to train in a stressful specialty of medicine became an ethical and practical dilemma. Fitness for practice and career advancement among doctors with illnesses, or with cognitive and physical decline from disease and/or ageing, is a global problem. The need to address this issue in a compassionate and comprehensive manner is discussed. Cognitive and physical fitness are required in doctors and other healthcare workers, since medical errors/adverse events are commonplace in medical practice. Public welfare is equally important in this global problem. 2016 BMJ Publishing Group Ltd.

  4. An ill-posed parabolic evolution system for dispersive deoxygenation-reaeration in water

    NASA Astrophysics Data System (ADS)

    Azaïez, M.; Ben Belgacem, F.; Hecht, F.; Le Bot, C.

    2014-01-01

    We consider an inverse problem that arises in the management of water resources and pertains to the analysis of surface water pollution by organic matter. Most physically relevant models used by engineers derive from various additions and corrections to the earlier deoxygenation-reaeration model proposed by Streeter and Phelps in 1925, the unknowns being the biochemical oxygen demand (BOD) and the dissolved oxygen (DO) concentrations. The model we deal with includes Taylor's dispersion to account for the heterogeneity of the contamination in all space directions. The system we obtain is then composed of two reaction-dispersion equations. Its particularity is that both Neumann and Dirichlet boundary conditions are available for the DO tracer while the BOD density is free of any boundary conditions. Indeed, for real-life concerns, measurements of the DO are easy to obtain and to store, whereas collecting data on the BOD is a sensitive task and a lengthy process. The global model pursues the reconstruction of the BOD density, and especially of its flux along the boundary. Not only is this problem plainly worth studying for its own interest, but it could also be a mandatory step in other applications, such as the identification of the location of pollution sources. The non-standard boundary conditions generate two difficulties on mathematical and computational grounds: they set up a severe coupling between both equations, and they are the cause of the ill-posedness of the data reconstruction problem; existence and stability fail. Identifiability is therefore the only positive result one can seek, and it is the central purpose of the paper. Finally, we have performed some computational experiments to assess the capability of the mixed finite element method in missing data recovery.
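    For reference, the original 1925 Streeter-Phelps model mentioned above (no dispersion term) has a well-known closed-form solution for the dissolved-oxygen deficit, sketched below; the rate constants and initial values are illustrative, and the paper's dispersive two-equation system generalizes this classical model.

```python
import math

# Classical Streeter-Phelps deoxygenation-reaeration model:
#   dL/dt = -kd * L          (BOD decays at rate kd)
#   dD/dt =  kd * L - ka * D (deficit grows with BOD decay, recovers at ka)
kd, ka = 0.3, 0.6        # deoxygenation / reaeration rates (1/day), illustrative
L0, D0 = 10.0, 1.0       # initial BOD and DO deficit (mg/L), illustrative

def bod(t):
    return L0 * math.exp(-kd * t)

def deficit(t):
    # Closed-form Streeter-Phelps solution for the DO deficit
    return (kd * L0 / (ka - kd)) * (math.exp(-kd * t) - math.exp(-ka * t)) \
        + D0 * math.exp(-ka * t)

# Critical time of maximum deficit (the "sag" point of the DO curve)
tc = (1.0 / (ka - kd)) * math.log((ka / kd) * (1 - D0 * (ka - kd) / (kd * L0)))
print(tc, deficit(tc))
```

    At the critical time the deficit satisfies ka·D = kd·L, which is the defining balance of the sag point and a quick self-check on the closed form.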

  5. Local search heuristic for the discrete leader-follower problem with multiple follower objectives

    NASA Astrophysics Data System (ADS)

    Kochetov, Yury; Alekseeva, Ekaterina; Mezmaz, Mohand

    2016-10-01

    We study a discrete bilevel problem, also known as the leader-follower problem, with multiple objectives at the lower level. It is assumed that constraints at the upper level can include variables of both levels. For such an ill-posed problem we define feasible and optimal solutions for the pessimistic case. A central point of this work is a two-stage method to obtain a feasible solution in the pessimistic case, given a leader decision. The target of the first stage is a follower solution that violates the leader constraints. The target of the second stage is a pessimistic feasible solution. Each stage calls a heuristic and a solver for a series of particular mixed integer programs. The method is integrated into a local-search-based heuristic designed to find near-optimal leader solutions.

  6. Inverse random source scattering for the Helmholtz equation in inhomogeneous media

    NASA Astrophysics Data System (ADS)

    Li, Ming; Chen, Chuchu; Li, Peijun

    2018-01-01

    This paper is concerned with an inverse random source scattering problem in an inhomogeneous background medium. The wave propagation is modeled by the stochastic Helmholtz equation with the source driven by additive white noise. The goal is to reconstruct the statistical properties of the random source such as the mean and variance from the boundary measurement of the radiated random wave field at multiple frequencies. Both the direct and inverse problems are considered. We show that the direct problem has a unique mild solution by a constructive proof. For the inverse problem, we derive Fredholm integral equations, which connect the boundary measurement of the radiated wave field with the unknown source function. A regularized block Kaczmarz method is developed to solve the ill-posed integral equations. Numerical experiments are included to demonstrate the effectiveness of the proposed method.
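    The core of a Kaczmarz-type solver is the row-wise projection step. The sketch below runs the plain cyclic variant on a small consistent toy system; the paper's regularized block version and its Fredholm integral operators are not reproduced here.

```python
import numpy as np

# Cyclic Kaczmarz iteration for a consistent linear system Ax = b:
# each step projects the iterate onto the hyperplane defined by one row.
rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))     # toy overdetermined operator
x_true = rng.standard_normal(10)
b = A @ x_true                        # consistent right-hand side

def kaczmarz(A, b, sweeps=200):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            # Project x onto the hyperplane {z : a.z = b_i}
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

x_hat = kaczmarz(A, b)
print(np.linalg.norm(x_hat - x_true))
```

    For a consistent, full-column-rank system the iteration converges linearly; block and regularized variants, as used in the paper, process groups of rows per step and control noise amplification.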

  7. Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.

    PubMed

    Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D

    2017-11-01

    We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution-space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the l1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution-space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
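    A minimal flat-space analogue of sparse regularization for an underdetermined inverse problem is the ISTA loop below, which solves min_x ½‖Ax−b‖² + λ‖x‖₁ on a toy system with a sparse ground truth. ISTA stands in here for the paper's solver, and the problem sizes are illustrative; the spherical sampling schemes and wavelet transforms are not modeled.

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 40, 100                             # underdetermined: fewer data than unknowns
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
support = rng.choice(n, 5, replace=False)
x_true[support] = 3.0 + rng.random(5)      # 5 nonzero entries in [3, 4]
b = A @ x_true                             # noiseless measurements

def soft(v, t):
    # Soft-thresholding: the proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.02, iters=3000):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on the data term, then shrink toward sparsity
        x = soft(x - A.T @ (A @ x - b) / L, lam / L)
    return x

x_hat = ista(A, b)
print(np.linalg.norm(x_hat - x_true))
```

    Restricting the solution space through the l1 penalty is what recovers the sparse signal here, mirroring the paper's point that a more restricted solution space improves reconstruction fidelity.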

  8. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    PubMed

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are expensive in memory, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm built on nonlinear conjugate gradient with a restart strategy is proposed to increase computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and high signal-to-noise ratio, as well as high localization accuracy, for fluorescence targets.

  9. Application of the sequential quadratic programming algorithm for reconstructing the distribution of optical parameters based on the time-domain radiative transfer equation.

    PubMed

    Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming

    2016-10-17

Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.

  10. Space structures insulating material's thermophysical and radiation properties estimation

    NASA Astrophysics Data System (ADS)

    Nenarokomov, A. V.; Alifanov, O. M.; Titov, D. M.

    2007-11-01

In many practical situations in aerospace technology it is impossible to measure directly such properties of the analyzed materials (for example, composites) as their thermal and radiation characteristics. Often the only way to overcome this difficulty is indirect measurement, which is usually formulated as the solution of an inverse heat transfer problem. Such problems are ill-posed in the mathematical sense, their main feature being instability of the solution, which is why special regularizing methods are needed to solve them. Experimental identification of mathematical models of heat transfer by solving inverse problems is one of the most effective modern approaches. The objective of this paper is to estimate thermal and radiation properties of advanced materials using an approach based on inverse methods.

  11. Fractional-order TV-L2 model for image denoising

    NASA Astrophysics Data System (ADS)

    Chen, Dali; Sun, Shenshen; Zhang, Congrong; Chen, YangQuan; Xue, Dingyu

    2013-10-01

This paper proposes a new fractional-order total variation (TV) denoising method, which provides a more elegant and effective way of treating algorithm implementation, the ill-posed inverse problem, regularization parameter selection, and the blocky effect. Two fractional-order TV-L2 models are constructed for image denoising. The majorization-minimization (MM) algorithm is used to decompose these two complex fractional TV optimization problems into a set of linear optimization problems, which can be solved by the conjugate gradient algorithm. The final adaptive numerical procedure is given. Finally, we report experimental results which show that the proposed methodology avoids the blocky effect and achieves state-of-the-art performance. In addition, two medical image processing experiments are presented to demonstrate the validity of the proposed methodology.

  12. High-performance image reconstruction in fluorescence tomography on desktop computers and graphics hardware.

    PubMed

    Freiberger, Manuel; Egger, Herbert; Liebmann, Manfred; Scharfetter, Hermann

    2011-11-01

Image reconstruction in fluorescence optical tomography is a three-dimensional nonlinear ill-posed problem governed by a system of partial differential equations. In this paper we demonstrate that a combination of state-of-the-art numerical algorithms and a careful hardware-optimized implementation makes it possible to solve this large-scale inverse problem in a few seconds on standard desktop PCs with modern graphics hardware. In particular, we present methods to solve not only the forward but also the nonlinear inverse problem by massively parallel programming on graphics processors. A comparison of optimized CPU and GPU implementations shows that the reconstruction can be accelerated by factors of about 15 through the use of the graphics hardware without compromising the accuracy of the reconstructed images.

  13. Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization

    NASA Astrophysics Data System (ADS)

    Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.

    2017-12-01

Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-ray. Most tasks during image-guided interventions are carried out under monoplane x-ray, which presents a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, the first sampled from projected 3D gradients and the second from the 2D image gradients, so as to recover 3D rigid-body rotations and out-of-plane translation. The objective for matching was the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided intervention, the 3D to monoplane 2D registration experiments were set up with a wide range of initial mean target registration errors, from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method and resulted in a high final registration accuracy (0.40 mm) and high success rate (> 96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows a high potential for application in clinical image-guidance systems.

  14. Combining energy and Laplacian regularization to accurately retrieve the depth of brain activity of diffuse optical tomographic data

    NASA Astrophysics Data System (ADS)

    Chiarelli, Antonio M.; Maclin, Edward L.; Low, Kathy A.; Mathewson, Kyle E.; Fabiani, Monica; Gratton, Gabriele

    2016-03-01

    Diffuse optical tomography (DOT) provides data about brain function using surface recordings. Despite recent advancements, an unbiased method for estimating the depth of absorption changes and for providing an accurate three-dimensional (3-D) reconstruction remains elusive. DOT involves solving an ill-posed inverse problem, requiring additional criteria for finding unique solutions. The most commonly used criterion is energy minimization (energy constraint). However, as measurements are taken from only one side of the medium (the scalp) and sensitivity is greater at shallow depths, the energy constraint leads to solutions that tend to be small and superficial. To correct for this bias, we combine the energy constraint with another criterion, minimization of spatial derivatives (Laplacian constraint, also used in low resolution electromagnetic tomography, LORETA). Used in isolation, the Laplacian constraint leads to solutions that tend to be large and deep. Using simulated, phantom, and actual brain activation data, we show that combining these two criteria results in accurate (error <2 mm) absorption depth estimates, while maintaining a two-point spatial resolution of <24 mm up to a depth of 30 mm. This indicates that accurate 3-D reconstruction of brain activity up to 30 mm from the scalp can be obtained with DOT.
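For a linear forward model, the combined energy-plus-Laplacian criterion described above amounts to adding two quadratic penalties to the least-squares fit. A minimal 1-D sketch under that assumption (the operator, weights, and discretization below are illustrative, not the paper's DOT setup):

```python
import numpy as np

def combined_solve(A, b, alpha, beta):
    """Solve min_x ||Ax - b||^2 + alpha*||x||^2 + beta*||Lx||^2,
    where the alpha term is the energy constraint and the beta term
    a discrete-Laplacian (LORETA-style) smoothness constraint."""
    n = A.shape[1]
    # 1-D second-difference (Laplacian) matrix
    L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    M = A.T @ A + alpha * np.eye(n) + beta * (L.T @ L)
    return np.linalg.solve(M, A.T @ b)
```

With alpha and beta both zero this reduces to ordinary least squares; the two weights trade off small-and-superficial against large-and-deep solutions, as the abstract describes.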

  15. Bayesian Recurrent Neural Network for Language Modeling.

    PubMed

    Chien, Jen-Tzung; Ku, Yuan-Chu

    2016-02-01

A language model (LM) computes the probability of a word sequence and provides the basis for word prediction in a variety of information systems. A recurrent neural network (RNN) is powerful for learning the large-span dynamics of a word sequence in continuous space. However, training an RNN-LM is an ill-posed problem because of the many parameters arising from a large dictionary size and a high-dimensional hidden layer. This paper presents a Bayesian approach to regularize the RNN-LM and applies it to continuous speech recognition. We penalize an overly complex RNN-LM by compensating for the uncertainty of the estimated model parameters, which is represented by a Gaussian prior. The objective function in a Bayesian classification network is formed as the regularized cross-entropy error function. The regularized model is constructed not only by calculating the regularized parameters according to the maximum a posteriori criterion but also by estimating the Gaussian hyperparameter by maximizing the marginal likelihood. A rapid approximation to the Hessian matrix is developed to implement the Bayesian RNN-LM (BRNN-LM) by selecting a small set of salient outer products. The proposed BRNN-LM achieves a sparser model than the RNN-LM. Experiments on different corpora show the robustness of system performance achieved by applying the rapid BRNN-LM under different conditions.
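At the level of the objective, the Gaussian prior described above adds a weight-decay (L2) term to the cross-entropy. A schematic of that regularized objective, with hypothetical names and far short of the paper's full Bayesian treatment:

```python
import numpy as np

def regularized_xent(logits, target, params, prior_precision):
    """Regularized cross-entropy: negative log-likelihood of the
    target word plus a Gaussian-prior (MAP / weight-decay) penalty
    on the model parameters."""
    z = logits - logits.max()                    # numerically stable softmax
    log_prob = z[target] - np.log(np.exp(z).sum())
    penalty = 0.5 * prior_precision * np.sum(params ** 2)
    return -log_prob + penalty
```

The prior precision plays the role of the Gaussian hyperparameter that the paper estimates by maximizing the marginal likelihood rather than fixing by hand.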

  16. Regularity Aspects in Inverse Musculoskeletal Biomechanics

    NASA Astrophysics Data System (ADS)

Lund, Marie; Ståhl, Fredrik; Gulliksson, Mårten

    2008-09-01

Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulty of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing simulation errors is to ensure that the mathematical problem is well-posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. For the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full-rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which would indicate that muscles push rather than pull; this is unrealistic, but the error may be small enough to be accepted for specific applications. These results are a first step toward ensuring better results of inverse musculoskeletal simulations from a numerical point of view.

  17. Bayesian extraction of the parton distribution amplitude from the Bethe-Salpeter wave function

    NASA Astrophysics Data System (ADS)

    Gao, Fei; Chang, Lei; Liu, Yu-xin

    2017-07-01

We propose a new numerical method to compute the parton distribution amplitude (PDA) from the Euclidean Bethe-Salpeter wave function. The essential step is to extract the weight function in the Nakanishi representation of the Bethe-Salpeter wave function in Euclidean space, which is an ill-posed inversion problem, via the maximum entropy method (MEM). The Nakanishi weight function, as well as the corresponding light-front PDA, can be well determined. We confirm prior work on PDA computations that was based on different methods.

18. Chopping Time of the FPU α-Model

    NASA Astrophysics Data System (ADS)

    Carati, A.; Ponno, A.

    2018-03-01

We study, both numerically and analytically, the time needed to observe the breaking of an FPU α-chain in two or more pieces, starting from an unbroken configuration at a given temperature. It is found that such a "chopping" time is given by a formula that, at low temperatures, is of the Arrhenius-Kramers form, so that the chain does not break up on an observable time-scale. The result explains why the study of the FPU problem is meaningful also in the ill-posed case of the α-model.

  19. A Toolbox for Imaging Stellar Surfaces

    NASA Astrophysics Data System (ADS)

    Young, John

    2018-04-01

    In this talk I will review the available algorithms for synthesis imaging at visible and infrared wavelengths, including both gray and polychromatic methods. I will explain state-of-the-art approaches to constraining the ill-posed image reconstruction problem, and selecting an appropriate regularisation function and strength of regularisation. The reconstruction biases that can follow from non-optimal choices will be discussed, including their potential impact on the physical interpretation of the results. This discussion will be illustrated with example stellar surface imaging results from real VLTI and COAST datasets.

  20. Mathematics and Measurement.

    PubMed

    Boisvert, R F; Donahue, M J; Lozier, D W; McMichael, R; Rust, B W

    2001-01-01

    In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST's current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, characterization of the accuracy of software for micromagnetic modeling, and in the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years.

  1. Computing motion using resistive networks

    NASA Technical Reports Server (NTRS)

    Koch, Christof; Luo, Jin; Mead, Carver; Hutchinson, James

    1988-01-01

Recent developments in the theory of early vision are described which lead from the formulation of the motion problem as an ill-posed one to its solution by minimizing certain 'cost' functions. These cost or energy functions can be mapped onto simple analog and digital resistive networks. It is shown how the optical flow can be computed by injecting currents into resistive networks and recording the resulting stationary voltage distribution at each node. These networks can be implemented in CMOS VLSI circuits and represent plausible candidates for biological vision systems.
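The mapping from a quadratic cost functional to a resistive network can be illustrated in one dimension: setting the functional's gradient to zero yields exactly the node (Kirchhoff current law) equations of a resistive ladder. A minimal sketch, with hypothetical data values:

```python
import numpy as np

def network_settle(d, lam):
    """Minimize sum_i (v_i - d_i)^2 + lam * sum_i (v_{i+1} - v_i)^2.

    Setting the gradient to zero gives (I + lam * D^T D) v = d: the
    data terms act as conductances to the input voltages and the
    smoothness terms as lateral resistors, so the minimizer is the
    network's stationary voltage distribution.
    """
    n = len(d)
    D = np.diff(np.eye(n), axis=0)   # first-difference operator
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), np.asarray(d, float))
```

With lam = 0 the network simply reports the data; as lam grows, the lateral resistors pull all nodes toward a common (smoothed) value, mirroring the regularization of the ill-posed motion problem.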

  2. Projected regression method for solving Fredholm integral equations arising in the analytic continuation problem of quantum physics

    NASA Astrophysics Data System (ADS)

    Arsenault, Louis-François; Neuberg, Richard; Hannah, Lauren A.; Millis, Andrew J.

    2017-11-01

We present a supervised machine learning approach to the inversion of Fredholm integrals of the first kind as they arise, for example, in the analytic continuation problem of quantum many-body physics. The approach provides a natural regularization for the ill-conditioned inverse of the Fredholm kernel, as well as an efficient and stable treatment of constraints. The key observation is that the stability of the forward problem permits the construction of a large database of outputs for physically meaningful inputs. Applying machine learning to this database generates a regression function of controlled complexity, which returns approximate solutions for previously unseen inputs; the approximate solutions are then projected onto the subspace of functions satisfying relevant constraints. Under standard error metrics the method performs as well as or better than the Maximum Entropy method for low input noise and is substantially more robust to increased input noise. We suggest that the methodology will be similarly effective for other problems involving a formally ill-conditioned inversion of an integral operator, provided that the forward problem can be efficiently solved.
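The core recipe, using the stable forward problem to build a large database and fitting a regression of controlled complexity that inverts it, can be sketched with ridge regression and a hypothetical smoothing kernel. Everything below is illustrative; the paper's regression method and constraint projection are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
s = np.linspace(0.0, 1.0, n)
K = np.exp(-5.0 * (s[:, None] - s[None, :]) ** 2) / n   # hypothetical smoothing kernel

# The forward problem is stable: build a database of (input, output) pairs.
F = rng.standard_normal((2000, n))   # training inputs (stand-ins for physical ones)
G = F @ K.T                          # corresponding outputs g = K f

# Learn the inverse map g -> f by ridge regression (controlled complexity).
lam = 1e-3
W = np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ F)

# Apply the learned inverse to a previously unseen input.
f_true = np.sin(2.0 * np.pi * s)
f_est = (K @ f_true) @ W
```

The ridge term regularizes exactly where the kernel's small singular values would otherwise amplify noise, which is the "natural regularization" the abstract refers to.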

  3. The Relationship between Students' Problem Posing and Problem Solving Abilities and Beliefs: A Small-Scale Study with Chinese Elementary School Children

    ERIC Educational Resources Information Center

    Limin, Chen; Van Dooren, Wim; Verschaffel, Lieven

    2013-01-01

    The goal of the present study is to investigate the relationship between pupils' problem posing and problem solving abilities, their beliefs about problem posing and problem solving, and their general mathematics abilities, in a Chinese context. Five instruments, i.e., a problem posing test, a problem solving test, a problem posing questionnaire,…

  4. Medication errors: the role of the patient.

    PubMed

    Britten, Nicky

    2009-06-01

    1. Patients and their carers will usually be the first to notice any observable problems resulting from medication errors. They will probably be unable to distinguish between medication errors, adverse drug reactions, or 'side effects'. 2. Little is known about how patients understand drug related problems or how they make attributions of adverse effects. Some research suggests that patients' cognitive models of adverse drug reactions bear a close relationship to models of illness perception. 3. Attributions of adverse drug reactions are related to people's previous experiences and to their level of education. The evidence suggests that on the whole patients' reports of adverse drug reactions are accurate. However, patients do not report all the problems they perceive and are more likely to report those that they do perceive as severe. Patients may not report problems attributed to their medications if they are fearful of doctors' reactions. Doctors may respond inappropriately to patients' concerns, for example by ignoring them. Some authors have proposed the use of a symptom checklist to elicit patients' reports of suspected adverse drug reactions. 4. Many patients want information about adverse drug effects, and the challenge for the professional is to judge how much information to provide and the best way of doing so. Professionals' inappropriate emphasis on adherence may be dangerous when a medication error has occurred. 5. Recent NICE guidelines recommend that professionals should ask patients if they have any concerns about their medicines, and this approach is likely to yield information conducive to the identification of medication errors.

5. Obtaining the Bidirectional Transfer Distribution Function of Isotropically Scattering Materials Using an Integrating Sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonsson, Jacob C.; Branden, Henrik

    2006-10-19

This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge of the illuminated area of the sample and the geometry of the sphere port, combined with the measured data, yields a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by applying Tikhonov regularization to the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterized using both two goniophotometers and the described method to verify the validity of the new method. The agreement is very good for the more diffuse samples. The solutions for the low-scattering samples contain unphysical oscillations, but still give the correct shape. The origin of the oscillations and why they are more prominent in low-scattering samples are discussed.
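Tikhonov regularization of the kind applied above has a compact closed form for linear problems. A minimal sketch on a deliberately ill-conditioned Hilbert-type matrix; the test system is illustrative, not the paper's BTDF equations:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution x = argmin ||Ax - b||^2 + lam*||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned Hilbert-type system with slightly noisy data
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(3)
b = A @ x_true + 1e-6 * rng.standard_normal(n)
```

For such a system the naive solve amplifies the measurement noise enormously, while the regularized solve trades a small bias for stability, which is why the unregularized BTDF system "rarely gives a physical solution".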

  6. Lassa fever: the challenges of curtailing a deadly disease.

    PubMed

    Ibekwe, Titus

    2012-01-01

Today Lassa fever is mainly a disease of the developing world; however, several imported cases have been reported in different parts of the world, and there are growing concerns about the potential of Lassa fever virus as a biological weapon. Yet no tangible solution to this problem has been developed nearly half a century after the disease's identification. Hence, this paper is aimed at appraising the problems associated with Lassa fever illness, the challenges in curbing the epidemic, and recommendations on important focal points. This is a review based on documents from the EFAS conference 2011 and a literature search of PubMed, Scopus and ScienceDirect. The retrieval of relevant papers was via the University of British Columbia and University of Toronto Libraries. The two major search engines returned 61 and 920 articles respectively; out of these, the final 26 articles that met the criteria were selected. Relevant information on epidemiology, burden of management and control was obtained. Prompt and effective containment of Lassa fever in Lassa village four decades ago could have saved the West African sub-region, and indeed the entire globe, from the devastating effects and threats posed by this illness. That was a hard lesson, calling for much more proactive measures towards the eradication of the illness at the primary, secondary and tertiary levels of health care.

  7. SU-E-T-398: Evaluation of Radiobiological Parameters Using Serial Tumor Imaging During Radiotherapy as An Inverse Ill-Posed Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chvetsov, A; Sandison, G; Schwartz, J

Purpose: Combination of serial tumor imaging with radiobiological modeling can provide more accurate information on the nature of treatment response and what underlies resistance. The purpose of this article is to improve the algorithms related to imaging-based radiobiological modeling of tumor response. Methods: Serial imaging of tumor response to radiation therapy represents a sum of tumor cell sensitivity, tumor growth rates, and the rate of cell loss, which are not separated explicitly. Accurate treatment response assessment would require separation of these radiobiological determinants of treatment response because they define tumor control probability. We show that the problem of reconstruction of radiobiological parameters from serial imaging data can be considered an inverse ill-posed problem described by the Fredholm integral equation of the first kind, because it is governed by a sum of several exponential processes. Therefore, the parameter reconstruction can be solved using regularization methods. Results: To study the reconstruction problem, we used a set of serial CT imaging data for head and neck cancer and a two-level cell population model of tumor response which separates the entire tumor cell population into two subpopulations of viable and lethally damaged cells. The reconstruction was done using a least-squares objective function and a simulated annealing algorithm. Using in vitro data for radiobiological parameters as reference data, we show that the reconstructed values of cell surviving fractions and potential doubling time exhibit non-physical fluctuations if no stabilization algorithms are applied. Variational regularization allowed us to obtain statistical distributions for cell surviving fractions and cell number doubling times comparable to in vitro data.
Conclusion: Our results indicate that using variational regularization can increase the number of free parameters in the model and open the way to development of more advanced algorithms which take into account tumor heterogeneity, for example related to hypoxia.

  8. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    PubMed

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix that is of a dimension dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, such as Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
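The equivalence underlying the computational saving, the Sherman-Morrison-Woodbury identity that turns an n-by-n inverse into an m-by-m one, can be checked numerically in a few lines. J below is a hypothetical Jacobian, not the paper's diffusion-model sensitivity matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 200                      # far fewer measurements than imaging parameters
J = rng.standard_normal((m, n))     # hypothetical sensitivity (Jacobian) matrix
d = rng.standard_normal(m)          # data-model misfit
lam = 0.1                           # regularization weight

# Parameter-space (LM/Tikhonov-type) update: inverts an n x n matrix.
x_param = np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ d)

# Data-space form via Sherman-Morrison-Woodbury:
# (J^T J + lam*I)^(-1) J^T = J^T (J J^T + lam*I)^(-1), only an m x m inverse.
x_data = J.T @ np.linalg.solve(J @ J.T + lam * np.eye(m), d)
```

Because the two updates agree exactly, the cheaper m x m form can always be substituted when parameters greatly outnumber measurements, which is the regime the abstract identifies.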

  9. Pre-Service Teachers' Free and Structured Mathematical Problem Posing

    ERIC Educational Resources Information Center

    Silber, Steven; Cai, Jinfa

    2017-01-01

    This exploratory study examined how pre-service teachers (PSTs) pose mathematical problems for free and structured mathematical problem-posing conditions. It was hypothesized that PSTs would pose more complex mathematical problems under structured posing conditions, with increasing levels of complexity, than PSTs would pose under free posing…

  10. Constitutive error based parameter estimation technique for plate structures using free vibration signatures

    NASA Astrophysics Data System (ADS)

    Guchhait, Shyamal; Banerjee, Biswanath

    2018-04-01

In this paper, a variant of the constitutive equation error based material parameter estimation procedure for linear elastic plates is developed from partially measured free vibration signatures. It has been reported in many research articles that mode shape curvatures are much more sensitive than the mode shapes themselves for localizing inhomogeneity. Complying with this idea, an identification procedure is framed as an optimization problem where the proposed cost function measures the error in the constitutive relation due to incompatible curvature/strain and moment/stress fields. Unlike the standard constitutive equation error based procedure, wherein a solution of a coupled system is unavoidable in each iteration, we generate these incompatible fields via two linear solves. A simple, yet effective, penalty based approach is followed to incorporate measured data. The penalization parameter not only helps in incorporating corrupted measurement data weakly but also acts as a regularizer against the ill-posedness of the inverse problem. Explicit linear update formulas are then developed for anisotropic linear elastic material. Numerical examples are provided to show the applicability of the proposed technique. Finally, an experimental validation is also provided.

  11. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    NASA Astrophysics Data System (ADS)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least-squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
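Of the two filters mentioned, the singular value decomposition filter is the easier to sketch in isolation. A generic truncated-SVD solver (illustrative only, not the I2DUPEN implementation):

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of A x = b: keep only the k largest
    singular values, discarding the small, noise-amplifying ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeff = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeff
```

Choosing k acts as a discrete regularization parameter: with k equal to the rank the solution coincides with ordinary least squares, while smaller k suppresses the components most corrupted by noise.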

  12. Regularization Reconstruction Method for Imaging Problems in Electrical Capacitance Tomography

    NASA Astrophysics Data System (ADS)

    Chu, Pan; Lei, Jing

    2017-11-01

Electrical capacitance tomography (ECT) is deemed to be a powerful visualization measurement technique for parametric measurement in multiphase flow systems. The inversion task in ECT is an ill-posed inverse problem, and seeking an efficient numerical method to improve the precision of the reconstructed images is important for practical measurements. By introducing the Tikhonov regularization (TR) methodology, this paper puts forward a loss function that emphasizes the robustness of the estimation and the low-rank property of the imaging targets, converting the solution of the inverse problem in the ECT reconstruction task into a minimization problem. Inspired by the split Bregman (SB) algorithm, an iteration scheme is developed for solving the proposed loss function. Numerical experiment results validate that the proposed inversion method not only reconstructs the fine structures of the imaging targets, but also improves robustness.

  13. [Prospective assessment of medication errors in critically ill patients in a university hospital].

    PubMed

    Salazar L, Nicole; Jirón A, Marcela; Escobar O, Leslie; Tobar, Eduardo; Romero, Carlos

    2011-11-01

Critically ill patients are especially vulnerable to medication errors (ME) due to their severe clinical situation and the complexities of their management. The aim was to determine the frequency and characteristics of ME and to identify shortcomings in the processes of medication management in an Intensive Care Unit. During a 3-month period, an observational, prospective and randomized study was carried out in the ICU of a university hospital. Every step of the patients' medication management (prescription, transcription, dispensation, preparation and administration) was evaluated by an external trained professional. Steps with a higher frequency of ME and the therapeutic groups involved were identified. Medication errors were classified according to the National Coordinating Council for Medication Error Reporting and Prevention. In 52 of 124 patients evaluated, 66 ME were found among 194 drugs prescribed. In 34% of prescribed drugs, there was at least 1 ME during their use. Half of the ME occurred during medication administration, mainly due to problems in infusion rates and schedule times. Antibacterial drugs had the highest rate of ME. We found a 34% rate of ME per drug prescribed, which is in concordance with international reports. The identification of the steps most prone to ME in the ICU will allow the implementation of an intervention program to improve the quality and safety of medication management.

  14. Regularized minimum I-divergence methods for the inverse blackbody radiation problem

    NASA Astrophysics Data System (ADS)

    Choi, Kerkil; Lanterman, Aaron D.; Shin, Jaemin

    2006-08-01

    This paper proposes iterative methods for estimating the area temperature distribution of a blackbody from its total radiated power spectrum measurements. This is called the inverse blackbody radiation problem. This problem is inherently ill-posed due to the characteristics of the kernel in the underlying integral equation given by Planck's law. The functions involved in the problem are all non-negative. Csiszár's I-divergence is an information-theoretic discrepancy measure between two non-negative functions. We derive iterative methods for minimizing Csiszár's I-divergence between the measured power spectrum and the power spectrum arising from the estimate according to the integral equation. Due to the ill-posedness of the problem, unconstrained algorithms often produce poor estimates, especially when the measurements are corrupted by noise. To alleviate this difficulty, we apply regularization methods to our algorithms. Penalties based on Shannon's entropy, the L1-norm and Good's roughness are chosen to suppress the undesirable artefacts. When a penalty is applied, the pertinent optimization that needs to be performed at each iteration is no longer trivial. In particular, Good's roughness causes couplings between estimate components. To handle this issue, we adapt Green's one-step-late method. This choice is based on the important fact that our minimum I-divergence algorithms can be interpreted as asymptotic forms of certain expectation-maximization algorithms. The effectiveness of our methods is illustrated via various numerical experiments.
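    The unpenalized core of such minimum I-divergence iterations is a multiplicative (Richardson-Lucy/EM-type) update; a minimal sketch with a hypothetical non-negative kernel, not Planck's-law kernel or the paper's penalized variants:

```python
import numpy as np

def min_idiv(K, g, n_iter=200):
    """Multiplicative updates that monotonically decrease Csiszar's
    I-divergence I(g || K f) over non-negative f (no penalty term)."""
    f = np.ones(K.shape[1])
    col_sums = K.sum(axis=0)                 # K^T 1, the normalizer
    for _ in range(n_iter):
        f *= (K.T @ (g / (K @ f))) / col_sums
    return f

# Toy non-negative kernel and noise-free data (assumed, for illustration)
rng = np.random.default_rng(1)
K = rng.uniform(0.1, 1.0, size=(30, 10))
f_true = rng.uniform(0.5, 2.0, size=10)
g = K @ f_true
f_hat = min_idiv(K, g)                       # stays non-negative by construction
```

    Non-negativity of the iterates is automatic because each update multiplies by a non-negative factor, which is one reason I-divergence methods suit problems where all functions are non-negative.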

  15. Creativity of Field-dependent and Field-independent Students in Posing Mathematical Problems

    NASA Astrophysics Data System (ADS)

    Azlina, N.; Amin, S. M.; Lukito, A.

    2018-01-01

    This study aims at describing the creativity of elementary school students with different cognitive styles in mathematical problem-posing. The posed problems were assessed on three components of creativity, namely fluency, flexibility, and novelty. The free-type problem posing was used in this study. This is a descriptive study with a qualitative approach. Data were collected through a written task and task-based interviews. The subjects were two elementary students, one Field Dependent (FD) and one Field Independent (FI), as measured by the GEFT (Group Embedded Figures Test). The data were then analyzed based on the creativity components. The results show that the FD student's posed problems fulfilled two components of creativity: fluency, in that the subject posed at least 3 mathematical problems, and flexibility, in that the subject posed problems in at least 3 different categories/ideas. Meanwhile, the FI student's posed problems fulfilled all three components of creativity: fluency and flexibility, as above, and novelty, in that the subject posed problems that were purely the result of her own ideas and different from problems she had known.

  16. Phillips-Tikhonov regularization with a priori information for neutron emission tomographic reconstruction on Joint European Torus

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bielecki, J.; Scholz, M.; Drozdowicz, K.

    A method of tomographic reconstruction of the neutron emissivity in the poloidal cross section of the Joint European Torus (JET, Culham, UK) tokamak was developed. Due to the very limited data set (two projection angles, 19 lines of sight only) provided by the neutron emission profile monitor (KN3 neutron camera), the reconstruction is an ill-posed inverse problem. The aim of this work is to contribute to the development of reliable plasma tomography reconstruction methods that could be routinely used at the JET tokamak. The proposed method is based on Phillips-Tikhonov regularization and incorporates a priori knowledge of the shape of the normalized neutron emissivity profile. For the purpose of the optimal selection of the regularization parameters, the shape of the normalized neutron emissivity profile is approximated by the shape of the normalized electron density profile measured by the LIDAR or high resolution Thomson scattering JET diagnostics. In contrast with some previously developed methods for the ill-posed plasma tomography reconstruction problem, the developed algorithms do not include any post-processing of the obtained solution, and the physical constraints on the solution are imposed during the regularization process. The accuracy of the method is first evaluated by several tests with synthetic data based on various plasma neutron emissivity models (phantoms). Then, the method is applied to the neutron emissivity reconstruction for JET D plasma discharge #85100. It is demonstrated that the method shows good performance and reliability and can be routinely used for plasma neutron emissivity reconstruction on JET.

  17. Skill Levels of Prospective Physics Teachers on Problem Posing

    ERIC Educational Resources Information Center

    Cildir, Sema; Sezen, Nazan

    2011-01-01

    Problem posing is one of the topics that educators thoroughly emphasize. Problem-posing skill is defined as an inward-directed activity of a student's learning. In this study, the skill levels of prospective physics teachers on problem posing were determined and their views on problem posing were evaluated. To this end, prospective teachers were given…

  18. The New Method of Tsunami Source Reconstruction With r-Solution Inversion Method

    NASA Astrophysics Data System (ADS)

    Voronina, T. A.; Romanenko, A. A.

    2016-12-01

    Application of the r-solution method to reconstructing the initial tsunami waveform is discussed. The methodology is based on the inversion of remote measurements of water-level data. Wave propagation is considered within the scope of linear shallow-water theory. The ill-posed inverse problem in question is regularized by a least-squares inversion using the truncated Singular Value Decomposition method; the result of the numerical process is an r-solution. The proposed method allows one to control the instability of the numerical solution and to obtain an acceptable result in spite of the ill-posedness of the problem. Applying this methodology to reconstruct the initial waveform of the 2013 Solomon Islands tsunami validates the theoretical conclusions for synthetic data and a model tsunami source: the inversion result depends strongly on the noisiness of the data and on the azimuthal and temporal coverage of the recording stations with respect to the source area. Furthermore, it is possible to make a preliminary selection of the most informative set of available recording stations for use in the inversion process.
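    The truncated-SVD construction behind an r-solution can be sketched in a few lines (a toy 2x2 operator stands in for the shallow-water forward map, and the truncation rank r is chosen by hand here, not by the paper's criteria):

```python
import numpy as np

def tsvd_solve(A, b, r):
    """Truncated-SVD (r-solution): invert only the r largest singular values,
    discarding the unstable directions associated with tiny ones."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:r].T @ ((U[:, :r].T @ b) / s[:r])

# Toy nearly-singular operator: the exact inverse is extremely sensitive
# (perturbing b by ~1e-8 changes the exact solution by O(1))
A = np.array([[1.0, 1.0], [1.0, 1.0 + 1e-8]])
b = np.array([2.0, 2.0])
x_full = np.linalg.solve(A, b)   # unstable under data noise
x_r = tsvd_solve(A, b, 1)        # stable r-solution with r = 1
```

    With r equal to the full rank, the r-solution reduces to the ordinary pseudo-inverse solution; choosing r below the rank trades resolution for stability.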

  19. Original and Mirror Face Images and Minimum Squared Error Classification for Visible Light Face Recognition.

    PubMed

    Wang, Rong

    2015-01-01

    In real-world applications, face images vary with illumination, facial expression, and pose, so additional training samples can reveal more of the possible appearances of a face. Although minimum squared error classification (MSEC) is a widely used method, its application to face recognition usually suffers from the limited number of training samples. In this paper, we improve MSEC by using mirror faces as virtual training samples: we generate mirror faces from the original training samples and combine the two kinds of samples into a new training set. Face recognition experiments show that our method achieves high classification accuracy.
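    The mirror-face augmentation plus minimum squared error classification can be sketched as follows (tiny random arrays stand in for face images, and ridge regression onto one-hot labels is used as a generic MSEC stand-in, not the paper's exact formulation):

```python
import numpy as np

def mirror(images):
    """Horizontally flip each (h, w) image to create virtual training samples."""
    return images[:, :, ::-1]

def msec_fit(X, y, n_classes, lam=1e-6):
    """Minimum squared error classification: regularized least squares
    mapping feature vectors onto one-hot class targets."""
    T = np.eye(n_classes)[y]                          # one-hot targets
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)

def msec_predict(W, X):
    return np.argmax(X @ W, axis=1)

rng = np.random.default_rng(2)
train = rng.uniform(size=(6, 4, 4))                   # 6 tiny synthetic "faces"
labels = np.array([0, 0, 1, 1, 2, 2])
aug = np.concatenate([train, mirror(train)])          # originals + mirror faces
aug_labels = np.concatenate([labels, labels])         # a mirror keeps its label
X = aug.reshape(len(aug), -1)
W = msec_fit(X, aug_labels, 3)
```

    Mirroring doubles the training set at no acquisition cost, which is the point of the paper: more samples make the squared-error fit less starved.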

  20. Fundamentals of diffusion MRI physics.

    PubMed

    Kiselev, Valerij G

    2017-03-01

    Diffusion MRI is commonly considered the "engine" for probing the cellular structure of living biological tissues. The difficulty of this task is threefold. First, in structurally heterogeneous media, diffusion is related to structure in quite a complicated way. The challenge of finding diffusion metrics for a given structure is equivalent to other problems in physics that have been known for over a century. Second, in most cases the MRI signal is related to diffusion in an indirect way dependent on the measurement technique used. Third, finding the cellular structure given the MRI signal is an ill-posed inverse problem. This paper reviews well-established knowledge that forms the basis for responding to the first two challenges. The inverse problem is briefly discussed and the reader is warned about a number of pitfalls on the way. Copyright © 2017 John Wiley & Sons, Ltd.

  1. PAN AIR modeling studies. [higher order panel method for aircraft design

    NASA Technical Reports Server (NTRS)

    Towne, M. C.; Strande, S. M.; Erickson, L. L.; Kroo, I. M.; Enomoto, F. Y.; Carmichael, R. L.; Mcpherson, K. F.

    1983-01-01

    PAN AIR is a computer program that predicts subsonic or supersonic linear potential flow about arbitrary configurations. The code's versatility and generality afford numerous possibilities for modeling flow problems. Although this generality provides great flexibility, it also means that studies are required to establish the dos and don'ts of modeling. The purpose of this paper is to describe and evaluate a variety of methods for modeling flows with PAN AIR. The areas discussed are effects of panel density, internal flow modeling, forebody modeling in subsonic flow, propeller slipstream modeling, effect of wake length, wing-tail-wake interaction, effect of trailing-edge paneling on the Kutta condition, well- and ill-posed boundary-value problems, and induced-drag calculations. These nine topics address problems that are of practical interest to the users of PAN AIR.

  2. Meta-analysis in evidence-based healthcare: a paradigm shift away from random effects is overdue.

    PubMed

    Doi, Suhail A R; Furuya-Kanamori, Luis; Thalib, Lukman; Barendregt, Jan J

    2017-12-01

    Each year up to 20 000 systematic reviews and meta-analyses are published whose results influence healthcare decisions, thus making the robustness and reliability of meta-analytic methods one of the world's top clinical and public health priorities. The evidence synthesis makes use of either fixed-effect or random-effects statistical methods. The fixed-effect method has largely been replaced by the random-effects method as heterogeneity of study effects led to poor error estimation. However, despite the widespread use and acceptance of the random-effects method to correct this, it too remains unsatisfactory and continues to suffer from defective error estimation, posing a serious threat to decision-making in evidence-based clinical and public health practice. We discuss here the problem with the random-effects approach and demonstrate that there exist better estimators under the fixed-effect model framework that can achieve optimal error estimation. We argue for an urgent return to the earlier framework with updates that address these problems and conclude that doing so can markedly improve the reliability of meta-analytical findings and thus decision-making in healthcare.
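    For reference alongside the critique above, the classic fixed-effect (inverse-variance weighted) pooled estimate is easy to state; a sketch with hypothetical study data (this is the textbook estimator, not the updated fixed-effect-framework estimator the authors advocate):

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Three hypothetical study effects (e.g. log odds ratios) with variances
effects = np.array([0.10, 0.30, 0.25])
variances = np.array([0.04, 0.01, 0.02])
est, se = fixed_effect_pool(effects, variances)
```

    Each study is weighted by the reciprocal of its variance, so the most precise studies dominate the pooled estimate; the dispute in the abstract is over how the error of this kind of estimator should be computed under heterogeneity.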

  3. WASP (Write a Scientific Paper): Special cases of selective non-treatment and/or DNR.

    PubMed

    Mallia, Pierre

    2018-05-03

    Fetuses at the low gestational age limit of viability, neonates with life-threatening or life-limiting congenital anomalies, and deteriorating acutely ill newborns in intensive care pose taxing ethical questions about whether to forego or stop treatment and allow them to die naturally. Although there is essentially no ethical difference between end-of-life decisions for neonates and those for other children and adults, the fact that we are dealing with a new life may pose greater problems for staff and parents. Good communication skills and involvement of the whole team and the parents should start from the beginning, to establish which treatments can be foregone or stopped in the best interests of the child. This article deals with the importance of clinical ethics in avoiding legal and moral showdowns and discusses accepted moral practice in this difficult area. Copyright © 2018. Published by Elsevier B.V.

  4. Determining the Performances of Pre-Service Primary School Teachers in Problem Posing Situations

    ERIC Educational Resources Information Center

    Kilic, Cigdem

    2013-01-01

    This study examined the problem posing strategies of pre-service primary school teachers in different problem posing situations (PPSs) and analysed the issues they encounter while posing problems. A problem posing task consisting of six PPSs (two free, two structured, and two semi-structured situations) was delivered to 40 participants.…

  5. Vision Assisted Navigation for Miniature Unmanned Aerial Vehicles (MAVs)

    DTIC Science & Technology

    2009-11-01

    commanded to orbit a target of known location. The error in target geolocation is shown for 200 frames with filtering (dashed line) and without (solid...so the performance of the filter was determined by using the estimated poses to solve a geolocation problem. An MAV flying at an altitude of 70 meters... geolocation as well as significantly reducing the short-term variance in the estimates based on the GPS/IMU alone. Due to the nature of the autopilot

  6. General design method for 3-dimensional, potential flow fields. Part 2: Computer program DIN3D1 for simple, unbranched ducts

    NASA Technical Reports Server (NTRS)

    Stanitz, J. D.

    1985-01-01

    The general design method for three-dimensional, potential, incompressible or subsonic-compressible flow developed in part 1 of this report is applied to the design of simple, unbranched ducts. A computer program, DIN3D1, is developed and five numerical examples are presented: a nozzle, two elbows, an S-duct, and the preliminary design of a side inlet for turbomachines. The two major inputs to the program are the upstream boundary shape and the lateral velocity distribution on the duct wall. As a result of these inputs, boundary conditions are overprescribed and the problem is ill-posed. However, it appears that there are degrees of compatibility between these two major inputs and that, for reasonably compatible inputs, satisfactory solutions can be obtained. By not prescribing the shape of the upstream boundary, the problem presumably becomes well-posed, but it is not clear how to formulate a practical design method under this circumstance. Nor does it appear desirable, because the designer usually needs to retain control over the upstream (or downstream) boundary shape. The problem is further complicated by the fact that, unlike in the two-dimensional case, and irrespective of the upstream boundary shape, some prescribed lateral velocity distributions do not have proper solutions.

  7. Multistatic aerosol-cloud lidar in space: A theoretical perspective

    NASA Astrophysics Data System (ADS)

    Mishchenko, M. I.; Alexandrov, M. D.; Cairns, B.; Travis, L. D.

    2016-12-01

    Accurate aerosol and cloud retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. In this Perspective, we formulate in general terms an aerosol and aerosol-cloud interaction space mission concept intended to provide detailed horizontal and vertical profiles of aerosol physical characteristics as well as identify mutually induced changes in the properties of aerosols and clouds. We argue that a natural and feasible way of addressing the ill-posedness of the inverse scattering problem while having an exquisite vertical-profiling capability is to fly a multistatic (including bistatic) lidar system. We analyze theoretically the capabilities of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and one or more additional platforms each hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar; address the ill-posedness of the inverse problem caused by the highly limited information content of monostatic lidar measurements; address the ill-posedness of the inverse problem caused by vertical integration and surface reflection in passive photopolarimetric measurements; relax polarization accuracy requirements; eliminate the need for exquisite radiative-transfer modeling of the atmosphere-surface system in data analyses; yield the day-and-night observation capability; provide direct characterization of ground-level aerosols as atmospheric pollutants; and yield direct measurements of polarized bidirectional surface reflectance. 
We demonstrate, in particular, that supplementing the conventional backscattering lidar with just one additional receiver flown in formation at a scattering angle close to 170° can dramatically increase the information content of the measurements. Although the specific subject of this Perspective is the multistatic lidar concept, all our conclusions equally apply to a multistatic radar system intended to study from space the global distribution of cloud and precipitation characteristics.

  8. Multistatic Aerosol Cloud Lidar in Space: A Theoretical Perspective

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Alexandrov, Mikhail D.; Cairns, Brian; Travis, Larry D.

    2016-01-01

    Accurate aerosol and cloud retrievals from space remain quite challenging and typically involve solving a severely ill-posed inverse scattering problem. In this Perspective, we formulate in general terms an aerosol and aerosol-cloud interaction space mission concept intended to provide detailed horizontal and vertical profiles of aerosol physical characteristics as well as identify mutually induced changes in the properties of aerosols and clouds. We argue that a natural and feasible way of addressing the ill-posedness of the inverse scattering problem while having an exquisite vertical-profiling capability is to fly a multistatic (including bistatic) lidar system. We analyze theoretically the capabilities of a formation-flying constellation of a primary satellite equipped with a conventional monostatic (backscattering) lidar and one or more additional platforms each hosting a receiver of the scattered laser light. If successfully implemented, this concept would combine the measurement capabilities of a passive multi-angle multi-spectral polarimeter with the vertical profiling capability of a lidar; address the ill-posedness of the inverse problem caused by the highly limited information content of monostatic lidar measurements; address the ill-posedness of the inverse problem caused by vertical integration and surface reflection in passive photopolarimetric measurements; relax polarization accuracy requirements; eliminate the need for exquisite radiative-transfer modeling of the atmosphere-surface system in data analyses; yield the day-and-night observation capability; provide direct characterization of ground-level aerosols as atmospheric pollutants; and yield direct measurements of polarized bidirectional surface reflectance. 
We demonstrate, in particular, that supplementing the conventional backscattering lidar with just one additional receiver flown in formation at a scattering angle close to 170deg can dramatically increase the information content of the measurements. Although the specific subject of this Perspective is the multistatic lidar concept, all our conclusions equally apply to a multistatic radar system intended to study from space the global distribution of cloud and precipitation characteristics.

  9. Performance of a Modern Glucose Meter in ICU and General Hospital Inpatients: 3 Years of Real-World Paired Meter and Central Laboratory Results.

    PubMed

    Zhang, Ray; Isakow, Warren; Kollef, Marin H; Scott, Mitchell G

    2017-09-01

    Due to accuracy concerns, the Food and Drug Administration issued guidances to manufacturers that led the Centers for Medicare & Medicaid Services to state that the use of meters in critically ill patients is "off-label" and constitutes "high complexity" testing. This is causing significant workflow problems in ICUs nationally. We wished to determine whether the real-world accuracy of modern glucose meters is worse in ICU patients than in non-ICU inpatients. We reviewed glucose results over the preceding 3 years, comparing paired glucose meter and central laboratory tests performed within 60 minutes of each other in ICU versus non-ICU settings. Seven ICU and 30 non-ICU wards at a 1,300-bed academic hospital in the United States. A total of 14,763 general medicine/surgery inpatients and 20,970 ICU inpatients. None. We compared meter results with near-simultaneously performed laboratory results from the same patient by applying the 2016 U.S. Food and Drug Administration accuracy criteria, determining the mean absolute relative difference, and examining where paired results fell within the Parkes consensus error grid zones. A higher percentage of glucose meter results from ICUs than from non-ICUs passed the 2016 Food and Drug Administration accuracy criteria (p < 10) when comparing meter results with laboratory results. At 1 minute, no meter result from ICUs posed dangerous or significant risk by error grid analysis, whereas at 10 minutes, less than 0.1% of ICU meter results did, which was not statistically different from non-ICU results. At our institution, the real-world accuracy of modern glucose meters in the ICU setting is at least as good as in the non-ICU setting.

  10. On epicardial potential reconstruction using regularization schemes with the L1-norm data term.

    PubMed

    Shou, Guofa; Xia, Ling; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart

    2011-01-07

    The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have less relative error values. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions.
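    The iteratively reweighted norm idea behind an L1-norm data term can be sketched as IRLS for min ||Ax - b||_1 (a toy line fit with one corrupted measurement stands in for BSP data with a lost electrode signal; this is not the paper's L1TV/L1L2 schemes):

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-8):
    """Iteratively reweighted least squares for min ||A x - b||_1.
    Each step solves a weighted L2 problem with weights 1/|residual|,
    so large (outlier) residuals are progressively downweighted."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # L2 warm start
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(A @ x - b), eps)  # eps guards division
        Aw = A * w[:, None]
        x = np.linalg.solve(A.T @ Aw, A.T @ (w * b))
    return x

# Line data with one grossly corrupted sample (a "lost electrode" analogue)
t = np.linspace(0.0, 1.0, 20)
A = np.column_stack([t, np.ones_like(t)])
b = 2.0 * t + 1.0
b[5] += 10.0
x_l1 = irls_l1(A, b)                                  # robust to the outlier
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]           # dragged by the outlier
```

    The L1 data term tolerates the single gross error while the L2 fit is visibly pulled, mirroring the abstract's finding for electrodes with lost signals.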

  11. Optimal secondary source position in exterior spherical acoustical holophony

    NASA Astrophysics Data System (ADS)

    Pasqual, A. M.; Martin, V.

    2012-02-01

    Exterior spherical acoustical holophony is a branch of spatial audio reproduction that deals with the rendering of a given free-field radiation pattern (the primary field) by using a compact spherical loudspeaker array (the secondary source). More precisely, the primary field is known on a spherical surface surrounding the primary and secondary sources and, since the acoustic fields are described in spherical coordinates, they are naturally subjected to spherical harmonic analysis. Besides, the inverse problem of deriving optimal driving signals from a known primary field is ill-posed because the secondary source cannot radiate high-order spherical harmonics efficiently, especially in the low-frequency range. As a consequence, a standard least-squares solution will overload the transducers if the primary field contains such harmonics. Here, this is avoided by discarding the strongly decaying spherical waves, which are identified through inspection of the radiation efficiency curves of the secondary source. However, such an unavoidable regularization procedure increases the least-squares error, which also depends on the position of the secondary source. This paper deals with the above-mentioned questions in the context of far-field directivity reproduction at low and medium frequencies. In particular, an optimal secondary source position is sought, which leads to the lowest reproduction error in the least-squares sense without overloading the transducers. In order to address this issue, a regularization quality factor is introduced to evaluate the amount of regularization required. It is shown that the optimal position improves significantly the holophonic reconstruction and maximizes the regularization quality factor (minimizes the amount of regularization), which is the main general contribution of this paper. Therefore, this factor can also be used as a cost function to obtain the optimal secondary source position.

  12. A simulation based method to assess inversion algorithms for transverse relaxation data

    NASA Astrophysics Data System (ADS)

    Ghosh, Supriyo; Keener, Kevin M.; Pan, Yong

    2008-04-01

    NMR relaxometry is a very useful tool for understanding various chemical and physical phenomena in complex multiphase systems. A Carr-Purcell-Meiboom-Gill (CPMG) [P.T. Callaghan, Principles of Nuclear Magnetic Resonance Microscopy, Clarendon Press, Oxford, 1991] experiment is an easy and quick way to obtain transverse relaxation constant (T2) in low field. Most of the samples usually have a distribution of T2 values. Extraction of this distribution of T2s from the noisy decay data is essentially an ill-posed inverse problem. Various inversion approaches have been used to solve this problem, to date. A major issue in using an inversion algorithm is determining how accurate the computed distribution is. A systematic analysis of an inversion algorithm, UPEN [G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data, Journal of Magnetic Resonance 132 (1998) 65-77; G.C. Borgia, R.J.S. Brown, P. Fantazzini, Uniform-penalty inversion of multiexponential decay data II. Data spacing, T2 data, systematic data errors, and diagnostics, Journal of Magnetic Resonance 147 (2000) 273-285] was performed by means of simulated CPMG data generation. Through our simulation technique and statistical analyses, the effects of various experimental parameters on the computed distribution were evaluated. We converged to the true distribution by matching up the inversion results from a series of true decay data and a noisy simulated data. In addition to simulation studies, the same approach was also applied on real experimental data to support the simulation results.
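    The simulation-based assessment starts from generating synthetic CPMG decays from a known T2 distribution; a minimal sketch (hypothetical echo spacing and T2 grid, no UPEN inversion) that also exposes the ill-conditioning any inversion algorithm must face:

```python
import numpy as np

# Build the Laplace-type kernel relating a discrete T2 distribution to the
# measured CPMG echo decay: signal(t) = sum_j amps_j * exp(-t / T2_j)
echo_times = np.linspace(0.002, 1.0, 200)        # s (hypothetical spacing)
T2_grid = np.logspace(-3, 1, 50)                 # candidate T2 values, s
K = np.exp(-echo_times[:, None] / T2_grid[None, :])

amps = np.zeros(50)
amps[[20, 35]] = [1.0, 0.5]                      # a known two-peak "distribution"
rng = np.random.default_rng(3)
signal = K @ amps + 1e-3 * rng.standard_normal(200)   # noisy synthetic decay

cond = np.linalg.cond(K)   # enormous: inverting K is an ill-posed problem
```

    Because neighboring exponentials are nearly indistinguishable, the kernel's condition number is astronomically large, which is why noise in the decay data destabilizes unregularized inversions and why algorithms such as UPEN must penalize the solution.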

  13. Embedding Game-Based Problem-Solving Phase into Problem-Posing System for Mathematics Learning

    ERIC Educational Resources Information Center

    Chang, Kuo-En; Wu, Lin-Jung; Weng, Sheng-En; Sung, Yao-Ting

    2012-01-01

    A problem-posing system is developed with four phases including posing problem, planning, solving problem, and looking back, in which the "solving problem" phase is implemented by game-scenarios. The system supports elementary students in the process of problem-posing, allowing them to fully engage in mathematical activities. In total, 92 fifth…

  14. Characteristics of Problem Posing of Grade 9 Students on Geometric Tasks

    ERIC Educational Resources Information Center

    Chua, Puay Huat; Wong, Khoon Yoong

    2012-01-01

    This is an exploratory study into the individual problem-posing characteristics of 480 Grade 9 Singapore students who were novice problem posers working on two geometric tasks. The students were asked to pose a problem for their friends to solve. Analyses of solvable posed problems were based on the problem type, problem information, solution type…

  15. Robustly Aligning a Shape Model and Its Application to Car Alignment of Unknown Pose.

    PubMed

    Li, Yan; Gu, Leon; Kanade, Takeo

    2011-09-01

    Precisely localizing in an image a set of feature points that form a shape of an object, such as car or face, is called alignment. Previous shape alignment methods attempted to fit a whole shape model to the observed data, based on the assumption of Gaussian observation noise and the associated regularization process. However, such an approach, though able to deal with Gaussian noise in feature detection, turns out not to be robust or precise because it is vulnerable to gross feature detection errors or outliers resulting from partial occlusions or spurious features from the background or neighboring objects. We address this problem by adopting a randomized hypothesis-and-test approach. First, a Bayesian inference algorithm is developed to generate a shape-and-pose hypothesis of the object from a partial shape or a subset of feature points. For alignment, a large number of hypotheses are generated by randomly sampling subsets of feature points, and then evaluated to find the one that minimizes the shape prediction error. This method of randomized subset-based matching can effectively handle outliers and recover the correct object shape. We apply this approach on a challenging data set of over 5,000 different-posed car images, spanning a wide variety of car types, lighting, background scenes, and partial occlusions. Experimental results demonstrate favorable improvements over previous methods on both accuracy and robustness.
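    The randomized hypothesis-and-test strategy is RANSAC-like; a minimal sketch on 2-D line fitting rather than shape-and-pose models (the point counts, inlier tolerance, and outlier fraction are illustrative assumptions):

```python
import numpy as np

def ransac_fit(pts, n_hyp=200, tol=0.05, rng=None):
    """Hypothesis-and-test: fit a line y = a x + b from random 2-point
    subsets and keep the hypothesis with the most inliers, so gross
    outliers (occlusions, spurious detections) cannot corrupt the fit."""
    if rng is None:
        rng = np.random.default_rng(4)
    best, best_inliers = None, -1
    for _ in range(n_hyp):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if abs(x2 - x1) < 1e-12:
            continue                                  # degenerate subset
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < tol)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

rng = np.random.default_rng(5)
x = np.linspace(0.0, 1.0, 30)
y = 3.0 * x + 0.5
y[:8] += rng.uniform(1.0, 5.0, 8)                     # gross outliers
line = ransac_fit(np.column_stack([x, y]))
```

    The same subset-sample-then-score loop underlies the paper's shape alignment, with the 2-point line hypothesis replaced by a shape-and-pose hypothesis inferred from a subset of feature points.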

  16. Modeling the 16 September 2015 Chile tsunami source with the inversion of deep-ocean tsunami records by means of the r - solution method

    NASA Astrophysics Data System (ADS)

    Voronina, Tatyana; Romanenko, Alexey; Loskutov, Artem

    2017-04-01

The key point in state-of-the-art tsunami forecasting is constructing a reliable tsunami source. In this study, we present an application of an original numerical inversion technique to modeling the source of the 16 September 2015 Chile tsunami. The problem of recovering a tsunami source from remote measurements of the incoming wave at deep-water tsunameters is treated as an inverse problem of mathematical physics in the class of ill-posed problems. The approach is based on least squares and the truncated singular value decomposition. Tsunami wave propagation is modeled within the scope of linear shallow-water theory. As in the inverse seismic problem, numerical solutions obtained by mathematical methods become unstable due to the presence of noise in real data. The method of r-solutions makes it possible to avoid instability in the solution of the ill-posed problem under study. The method is attractive from a computational point of view, since the main effort is spent only once, on computing the matrix whose columns consist of computed waveforms for each harmonic used as a source (the unknown tsunami source is represented as a partial sum of a spatial-harmonics series in the source area). Furthermore, by analyzing the singular spectrum of this matrix, one can estimate in advance how well a given observational system will constrain the inversion, which makes it possible to propose a more effective disposition of the tsunameters through precomputation. In other words, the results obtained suggest a way to improve the inversion by selecting the most informative set of available recording stations. The case study of the 6 February 2013 Solomon Islands tsunami highlights the critical role the arrangement of deep-water tsunameters plays in the inversion results.
Applying the proposed methodology to the 16 September 2015 Chile tsunami successfully produced a tsunami source model. The function recovered by the proposed method can find practical application both as an initial condition for various optimization approaches and in computer calculation of tsunami wave propagation.
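The truncated-SVD regularization underlying inversions of this kind can be sketched generically; the following is a toy with made-up dimensions, not the authors' r-solution code:

```python
import numpy as np

def tsvd_solve(A, b, r):
    """Truncated-SVD solution of A x = b, keeping the r largest singular values.

    Discarding the small singular values suppresses the noise amplification
    that makes ill-posed inversions unstable."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Invert only the r leading singular values; zero out the rest.
    s_inv = np.zeros_like(s)
    s_inv[:r] = 1.0 / s[:r]
    return Vt.T @ (s_inv * (U.T @ b))

# Mildly ill-conditioned toy system (Vandermonde columns)
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.1, 1.0, 8), 5)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true + 1e-6 * rng.standard_normal(8)
x_hat = tsvd_solve(A, b, r=4)   # drop the smallest singular value
```

With `r` equal to the full rank, the formula reduces to the ordinary pseudoinverse solution; decreasing `r` trades fidelity for stability.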

  17. Analysis of an optimization-based atomistic-to-continuum coupling method for point defects

    DOE PAGES

    Olson, Derek; Shapeev, Alexander V.; Bochev, Pavel B.; ...

    2015-11-16

    Here, we formulate and analyze an optimization-based Atomistic-to-Continuum (AtC) coupling method for problems with point defects. Application of a potential-based atomistic model near the defect core enables accurate simulation of the defect. Away from the core, where site energies become nearly independent of the lattice position, the method switches to a more efficient continuum model. The two models are merged by minimizing the mismatch of their states on an overlap region, subject to the atomistic and continuum force balance equations acting independently in their domains. We prove that the optimization problem is well-posed and establish error estimates.

  18. ADRC for spacecraft attitude and position synchronization in libration point orbits

    NASA Astrophysics Data System (ADS)

    Gao, Chen; Yuan, Jianping; Zhao, Yakun

    2018-04-01

This paper addresses the problem of spacecraft attitude and position synchronization in libration point orbits between a leader and a follower. Using dual quaternions, a dimensionless relative coupled dynamical model is derived with computational efficiency and accuracy in mind. A model-independent, dimensionless, cascade pose-feedback active disturbance rejection controller is then designed for the spacecraft attitude and position tracking problem in the presence of parameter uncertainties and external disturbances. Numerical simulations of the final approach phase of spacecraft rendezvous and docking, and of formation flying, show high-precision tracking and satisfactory convergence rates under bounded control torque and force, which validates the proposed approach.

  19. [Legal aspects of the use of footbaths for cattle and sheep].

    PubMed

    Kleiminger, E

    2012-04-24

Claw diseases pose a major problem for dairy and sheep farms. In addition to systemic treatment of these illnesses by drug injection, veterinarians discuss the application of footbaths for the local treatment of dermatitis digitalis or foot rot. On farms, footbaths are used with different substances and for various purposes. The author presents the requirements for veterinary medicinal products (marketing authorization and manufacturing authorization) and explains the operation of the "cascade" in the case of a treatment crisis. In addition, the distinction between veterinary-hygiene biocidal products, veterinary medicinal products, and substances for claw care is explained.

  20. Mathematics and Measurement

    PubMed Central

    Boisvert, Ronald F.; Donahue, Michael J.; Lozier, Daniel W.; McMichael, Robert; Rust, Bert W.

    2001-01-01

In this paper we describe the role that mathematics plays in measurement science at NIST. We first survey the history behind NIST’s current work in this area, starting with the NBS Math Tables project of the 1930s. We then provide examples of more recent efforts in the application of mathematics to measurement science, including the solution of ill-posed inverse problems, the characterization of the accuracy of software for micromagnetic modeling, and the development and dissemination of mathematical reference data. Finally, we comment on emerging issues in measurement science to which mathematicians will devote their energies in coming years. PMID:27500024

  1. Antinauseants in Pregnancy: Teratogens or Not?

    PubMed Central

    Biringer, Anne

    1984-01-01

Nausea and/or vomiting affect 50% of all pregnant women. For most women, this is a self-limited problem which responds well to conservative management. However, there are some situations where the risk to the mother and fetus posed by the illness is greater than the possible risks of teratogenicity of antinauseant drugs. Antihistamines have had the widest testing, and to date, there has been no evidence linking doxylamine, dimenhydrinate or promethazine to congenital malformations. Since no available drugs have official approval for use in nausea and vomiting of pregnancy, the physician is left alone to make this difficult decision. PMID:21279128

  2. On the reconstruction of the surface structure of the spotted stars

    NASA Astrophysics Data System (ADS)

    Kolbin, A. I.; Shimansky, V. V.; Sakhibullin, N. A.

    2013-07-01

    We have developed and tested a light-curve inversion technique for photometric mapping of spotted stars. The surface of a spotted star is partitioned into small area elements, over which a search is carried out for the intensity distribution providing the best agreement between the observed and model light curves within a specified uncertainty. We have tested mapping techniques based on the use of both a single light curve and several light curves obtained in different photometric bands. Surface reconstruction artifacts due to the ill-posed nature of the problem have been identified.

  3. Real-Time Identification of Wheel Terrain Interaction Models for Enhanced Autonomous Vehicle Mobility

    DTIC Science & Technology

    2014-04-24

[Figure residue: bar charts of position estimation error (cm) for color-statistics-based slip estimation (after Angelova et al.) and for global-pose position estimation; pose and odometry data were collected at the Taylor, Gascola, Somerset, and Fort Bliss sites.]

  4. Patient safety is not enough: targeting quality improvements to optimize the health of the population.

    PubMed

    Woolf, Steven H

    2004-01-06

    Ensuring patient safety is essential for better health care, but preoccupation with niches of medicine, such as patient safety, can inadvertently compromise outcomes if it distracts from other problems that pose a greater threat to health. The greatest benefit for the population comes from a comprehensive view of population needs and making improvements in proportion with their potential effect on public health; anything less subjects an excess of people to morbidity and death. Patient safety, in context, is a subset of health problems affecting Americans. Safety is a subcategory of medical errors, which also includes mistakes in health promotion and chronic disease management that cost lives but do not affect "safety." These errors are a subset of lapses in quality, which result not only from errors but also from systemic problems, such as lack of access, inequity, and flawed system designs. Lapses in quality are a subset of deficient caring, which encompasses gaps in therapeutics, respect, and compassion that are undetected by normative quality indicators. These larger problems arguably cost hundreds of thousands more lives than do lapses in safety, and the system redesigns to correct them should receive proportionately greater emphasis. Ensuring such rational prioritization requires policy and medical leaders to eschew parochialism and take a global perspective in gauging health problems. The public's well-being requires policymakers to view the system as a whole and consider the potential effect on overall population health when prioritizing care improvements and system redesigns.

  5. Object recognition and localization from 3D point clouds by maximum-likelihood estimation

    NASA Astrophysics Data System (ADS)

    Dantanarayana, Harshana G.; Huntley, Jonathan M.

    2017-08-01

    We present an algorithm based on maximum-likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike `interest point'-based algorithms which normally discard such data. Compared to the 6D Hough transform, it has negligible memory requirements, and is computationally efficient compared to iterative closest point algorithms. The same method is applicable to both the initial recognition/pose estimation problem as well as subsequent pose refinement through appropriate choice of the dispersion of the probability density functions. This single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2 degrees of freedom (d.f.) example is given, followed by a full 6 d.f. analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.

  6. Problem Posing with the Multiplication Table

    ERIC Educational Resources Information Center

    Dickman, Benjamin

    2014-01-01

    Mathematical problem posing is an important skill for teachers of mathematics, and relates readily to mathematical creativity. This article gives a bit of background information on mathematical problem posing, lists further references to connect problem posing and creativity, and then provides 20 problems based on the multiplication table to be…

  7. A globally well-posed finite element algorithm for aerodynamics applications

    NASA Technical Reports Server (NTRS)

    Iannelli, G. S.; Baker, A. J.

    1991-01-01

    A finite element CFD algorithm is developed for Euler and Navier-Stokes aerodynamic applications. For the linear basis, the resultant approximation is at least second-order-accurate in time and space for synergistic use of three procedures: (1) a Taylor weak statement, which provides for derivation of companion conservation law systems with embedded dispersion-error control mechanisms; (2) a stiffly stable second-order-accurate implicit Rosenbrock-Runge-Kutta temporal algorithm; and (3) a matrix tensor product factorization that permits efficient numerical linear algebra handling of the terminal large-matrix statement. Thorough analyses are presented regarding well-posed boundary conditions for inviscid and viscous flow specifications. Numerical solutions are generated and compared for critical evaluation of quasi-one- and two-dimensional Euler and Navier-Stokes benchmark test problems.

  8. Investigation of Problem-Solving and Problem-Posing Abilities of Seventh-Grade Students

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Ünal, Hasan

    2015-01-01

    This study aims to examine the effect of multiple problem-solving skills on the problem-posing abilities of gifted and non-gifted students and to assess whether the possession of such skills can predict giftedness or affect problem-posing abilities. Participants' metaphorical images of problem posing were also explored. Participants were 20 gifted…

  9. Density Imaging of Puy de Dôme Volcano by Joint Inversion of Muographic and Gravimetric Data

    NASA Astrophysics Data System (ADS)

    Barnoud, A.; Niess, V.; Le Ménédeu, E.; Cayol, V.; Carloganu, C.

    2016-12-01

We aim to jointly invert high-density muographic and gravimetric data to robustly infer the density structure of volcanoes. We use the puy de Dôme volcano in France as a proof of principle, since high-quality data sets are available for both muography and gravimetry. Gravimetric inversion and muography are independent methods that provide an estimation of density distributions. On the one hand, gravimetry allows 3D density variations to be reconstructed by inversion. This process is well known to be ill-posed and intrinsically non-unique, and thus requires additional constraints (e.g., an a priori density model). On the other hand, muography provides a direct measurement of 2D mean densities (radiographic images) from the detection of high-energy atmospheric muons crossing the volcanic edifice. 3D density distributions can be computed from several radiographic images, but the number of images is generally limited by field constraints and by the limited number of available telescopes. Thus, muon tomography is also ill-posed in practice. In the case of the puy de Dôme volcano, the density structures inferred from gravimetric data (Portal et al. 2016) and from muographic data (Le Ménédeu et al. 2016) show a qualitative agreement but cannot be compared quantitatively. Because each method has a different intrinsic resolution due to the physics (Jourde et al., 2015), joint inversion is expected to improve the robustness of the inversion. Such a joint inversion has already been applied in a volcanic context (Nishiyama et al., 2013). Volcano muography requires state-of-the-art, high-resolution and large-scale muon detectors (Ambrosino et al., 2015). Instrumental uncertainties and systematic errors may constitute an important limitation for muography and should not be overlooked.
For instance, low-energy muons are detected together with ballistic high-energy muons, decreasing the measured value of the mean density close to the topography. Here, we jointly invert the gravimetric and muographic data to characterize the 3D density distribution of the puy de Dôme volcano. We attempt to precisely identify and estimate the different uncertainties and systematic errors so that they can be accounted for in the inversion scheme.

  10. Digital imaging for dental caries.

    PubMed

    Wenzel, A

    2000-04-01

    Laboratory studies show that digital intraoral radiography systems are as accurate as dental film for the detection of caries when a good-quality image is obtained, although more re-takes might be necessary because of positioning errors with the digital systems, particularly the charge-coupled device sensors. The phosphor plate is more comfortable for the patient than nondigital systems, and the dose can be further reduced with the storage phosphors. Cross-contamination does not pose a problem with digital systems if simple hygiene procedures are observed.

  11. Renal and urologic manifestations of pediatric condition falsification/Munchausen by proxy.

    PubMed

    Feldman, Kenneth W; Feldman, Marc D; Grady, Richard; Burns, Mark W; McDonald, Ruth

    2007-06-01

    Renal and urologic problems in pediatric condition falsification (PCF)/Munchausen by proxy (MBP) can pose frustrating diagnostic and management problems. Five previously unreported victims of PCF/MBP are described. Symptoms included artifactual hematuria, recalcitrant urinary infections, dysfunctional voiding, perineal irritation, glucosuria, and "nutcracker syndrome", in addition to alleged sexual abuse. Falsifications included false or exaggerated history, specimen contamination, and induced illness. Caretakers also intentionally withheld appropriately prescribed treatment. Children underwent invasive diagnostic and surgical procedures because of the falsifications. They developed iatrogenic complications as well as behavioral problems stemming from their abuse. A PCF/MBP database was started in 1995 and includes the characteristics of 135 PCF/MBP victims examined by the first author between 1974 and 2006. Analysis of the database revealed that 25% of the children had renal or urologic issues. They were the presenting/primary issue for five. Diagnosis of PCF/MBP was delayed an average of 4.5 years from symptom onset. Almost all patients were victimized by their mothers, and maternal health falsification and somatization were common. Thirty-one of 34 children had siblings who were also victimized, six of whom died. In conclusion, falsifications of childhood renal and urologic illness are relatively uncommon; however, the deceits are prolonged and tortuous. Early recognition and intervention might limit the harm.

  12. Convex blind image deconvolution with inverse filtering

    NASA Astrophysics Data System (ADS)

    Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong

    2018-03-01

    Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
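The need for regularized inverse filtering can be illustrated with a much simpler, non-blind Tikhonov filter in the Fourier domain. This is only a stand-in to show why raw inverse filtering must be damped; it is not the paper's convex star-norm plus total-variation primal-dual scheme:

```python
import numpy as np

def tikhonov_deconv(blurred, kernel, lam=1e-2):
    """Tikhonov-regularized inverse filtering in the Fourier domain:
    X = conj(K) * B / (|K|^2 + lam).  Without lam, division by near-zero
    kernel frequencies amplifies noise arbitrarily (the ill-posedness)."""
    K = np.fft.fft2(kernel, s=blurred.shape)
    B = np.fft.fft2(blurred)
    X = np.conj(K) * B / (np.abs(K) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Toy noiseless example: circular blur with a 3x3 box kernel
rng = np.random.default_rng(4)
img = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel, s=img.shape)))
# Noiseless data, so a very small lam suffices; with noise, lam must grow.
restored = tikhonov_deconv(blurred, kernel, lam=1e-8)
```

The regularizer here penalizes energy uniformly over all frequencies; the paper's star-norm/TV formulation instead encodes the oscillation structure of the inverse filter and edge-preserving priors on the image.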

  13. Some Reflections on Problem Posing: A Conversation with Marion Walter

    ERIC Educational Resources Information Center

    Baxter, Juliet A.

    2005-01-01

Marion Walter, an internationally acclaimed mathematics educator, discusses problem posing, focusing on both the merits of problem posing and techniques to encourage it. She believes that a playful attitude toward problem variables is an essential part of an inquiring mind and the more opportunities that learners have, to change a…

  14. Wavelet methods in multi-conjugate adaptive optics

    NASA Astrophysics Data System (ADS)

    Helin, T.; Yudytskiy, M.

    2013-08-01

Next-generation ground-based telescopes rely heavily on adaptive optics to overcome the limitation of atmospheric turbulence. In future adaptive optics modalities, such as multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on the locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving for the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on OCTOPUS, the official end-to-end simulation tool of the European Southern Observatory.
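The conjugate-gradient solve at the core of such a MAP reconstruction can be sketched in generic form. This is plain CG on a toy symmetric positive-definite system; the atmospheric-tomography operator, wavelet representation, and preconditioning are all omitted:

```python
import numpy as np

def conjugate_gradient(M, b, n_iter=50, tol=1e-10):
    """Plain conjugate gradients for M x = b, M symmetric positive definite.

    In a MAP formulation M would be A^T C_n^{-1} A + C_x^{-1} (forward
    operator plus prior precision); here M is just a generic SPD matrix."""
    x = np.zeros_like(b)
    r = b - M @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(n_iter):
        Mp = M @ p
        alpha = rs / (p @ Mp)
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
M = B @ B.T + 6 * np.eye(6)       # well-conditioned SPD test matrix
x_true = rng.standard_normal(6)
x_hat = conjugate_gradient(M, M @ x_true)
```

In exact arithmetic CG converges in at most n iterations for an n-by-n system; preconditioning (as in the accelerated algorithm mentioned above) reduces the iteration count for ill-conditioned operators.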

  15. Fundamental concepts of problem-based learning for the new facilitator.

    PubMed Central

    Kanter, S L

    1998-01-01

    Problem-based learning (PBL) is a powerful small group learning tool that should be part of the armamentarium of every serious educator. Classic PBL uses ill-structured problems to simulate the conditions that occur in the real environment. Students play an active role and use an iterative process of seeking new information based on identified learning issues, restructuring the information in light of the new knowledge, gathering additional information, and so forth. Faculty play a facilitatory role, not a traditional instructional role, by posing metacognitive questions to students. These questions serve to assist in organizing, generalizing, and evaluating knowledge; to probe for supporting evidence; to explore faulty reasoning; to stimulate discussion of attitudes; and to develop self-directed learning and self-assessment skills. Professional librarians play significant roles in the PBL environment extending from traditional service provider to resource person to educator. Students and faculty usually find the learning experience productive and enjoyable. PMID:9681175

  16. A novel algorithm of super-resolution image reconstruction based on multi-class dictionaries for natural scene

    NASA Astrophysics Data System (ADS)

    Wu, Wei; Zhao, Dewei; Zhang, Huan

    2015-12-01

Super-resolution image reconstruction is an effective method for improving image quality and an important research topic in image processing. However, the choice of dictionary directly affects the efficiency of image reconstruction. Sparse representation theory is introduced into the nearest-neighbor selection problem, and a super-resolution image reconstruction algorithm based on multi-class dictionaries is analyzed, building on sparse-representation-based super-resolution methods. This approach avoids the redundancy of training a single over-complete dictionary, makes each sub-dictionary more representative, and replaces the traditional Euclidean distance computation to improve the quality of the whole reconstructed image. In addition, a non-local self-similarity regularization is introduced to address the ill-posed problem. Experimental results show that the algorithm achieves much better results than state-of-the-art algorithms in terms of both PSNR and visual perception.

  17. A space-frequency multiplicative regularization for force reconstruction problems

    NASA Astrophysics Data System (ADS)

    Aucejo, M.; De Smet, O.

    2018-05-01

Dynamic force reconstruction from vibration data is an ill-posed inverse problem. A standard approach to stabilizing the reconstruction consists in using prior information on the quantities to be identified. This is generally done by including a regularization term in the formulation of the inverse problem, as an additive or a multiplicative constraint. In the present article, a space-frequency multiplicative regularization is developed to identify mechanical forces acting on a structure. The proposed regularization strategy takes advantage of prior knowledge of the nature and location of the excitation sources, as well as of their spectral contents. Furthermore, it has the merit of being free from the preliminary definition of any regularization parameter. The validity of the proposed regularization procedure is assessed numerically and experimentally. It is pointed out, in particular, that properly exploiting the space-frequency characteristics of the excitation field to be identified can improve the quality of the force reconstruction.

  18. Calculation of susceptibility through multiple orientation sampling (COSMOS): a method for conditioning the inverse problem from measured magnetic field map to susceptibility source image in MRI.

    PubMed

    Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi

    2009-01-01

    Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B(0), and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
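The weighted linear least-squares step described above can be sketched in miniature. The dimensions and variable names below are illustrative only, not the authors' implementation:

```python
import numpy as np

def weighted_lls(A, b, w):
    """Weighted linear least squares: minimize || diag(sqrt(w)) (A x - b) ||^2.

    In a COSMOS-style reconstruction, rows of A would encode the field induced
    by unit susceptibility at each voxel for each sampling orientation, b would
    stack the measured field maps, and w would downweight noisy regions and
    zero out signal-void regions (toy quantities here)."""
    sw = np.sqrt(w)
    # Scaling rows by sqrt(w) turns the weighted problem into ordinary lstsq.
    return np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)[0]

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 6))       # stacked toy forward operators
x_true = rng.standard_normal(6)
b = A @ x_true
w = np.ones(30)
w[:5] = 0.0                            # e.g. a signal-void region gets zero weight
x_hat = weighted_lls(A, b, w)
```

Sampling at multiple orientations corresponds to stacking several well-conditioned row blocks into `A`, which is what removes the zero cone of the single-orientation dipole kernel.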

  19. Repetition and comprehension of spoken sentences by reading-disabled children.

    PubMed

    Shankweiler, D; Smith, S T; Mann, V A

    1984-11-01

    The language problems of reading-disabled elementary school children are not confined to written language alone. These children often exhibit problems of ordered recall of verbal materials that are equally severe whether the materials are presented in printed or in spoken form. Sentences that pose problems of pronoun reference might be expected to place a special burden on short-term memory because close grammatical relationships obtain between words that are distant from one another. With this logic in mind, third-grade children with specific reading disability and classmates matched for age and IQ were tested on five sentence types, each of which poses a problem in assigning pronoun reference. On one occasion the children were tested for comprehension of the sentences by a forced-choice picture verification task. On a later occasion they received the same sentences as a repetition test. Good and poor readers differed significantly in immediate recall of the reflexive sentences, but not in comprehension of them as assessed by picture choice. It was suggested that the pictures provided cues which lightened the memory load, a possibility that could explain why the poor readers were not demonstrably inferior in comprehension of the sentences even though they made significantly more errors than the good readers in recalling them.

  20. Accuracy and Precision of Visual Stimulus Timing in PsychoPy: No Timing Errors in Standard Usage

    PubMed Central

    Garaizar, Pablo; Vadillo, Miguel A.

    2014-01-01

    In a recent report published in PLoS ONE, we found that the performance of PsychoPy degraded with very short timing intervals, suggesting that it might not be perfectly suitable for experiments requiring the presentation of very brief stimuli. The present study aims to provide an updated performance assessment for the most recent version of PsychoPy (v1.80) under different hardware/software conditions. Overall, the results show that PsychoPy can achieve high levels of precision and accuracy in the presentation of brief visual stimuli. Although occasional timing errors were found in very demanding benchmarking tests, there is no reason to think that they can pose any problem for standard experiments developed by researchers. PMID:25365382

  1. Numerical reconstruction of tsunami source using combined seismic, satellite and DART data

    NASA Astrophysics Data System (ADS)

    Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor

    2014-05-01

Recent tsunamis, for instance in Japan (2011), in Sumatra (2004), and at the Indian coast (2004), showed that a system for producing exact and timely information about tsunamis is of vital importance. Numerical simulation is an effective instrument for providing such information. Bottom relief characteristics and the initial perturbation data (a tsunami source) are required for the direct simulation of tsunamis. Seismic data about the source are usually obtained within a few tens of minutes after an event has occurred (the seismic wave velocity being about five hundred kilometres per minute, while the velocity of tsunami waves is less than twelve kilometres per minute). The difference in the arrival times of seismic and tsunami waves can be used to operationally refine the tsunami source parameters and model the expected tsunami wave height on the shore. The most suitable physical models for tsunami simulation are based on the shallow water equations. The problem of identifying the parameters of a tsunami source using additional measurements of a passing wave is called the inverse tsunami problem. We investigate three inverse problems of determining a tsunami source from three different kinds of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements, satellite wave-form images, and seismic data. These problems are severely ill-posed. We apply regularization techniques to control the degree of ill-posedness, such as Fourier expansion, truncated singular value decomposition and numerical regularization. An algorithm for selecting the truncation number of singular values of the inverse problem operator consistent with the error level in the measured data is described and analyzed. In the numerical experiments we used gradient methods (Landweber iteration and the conjugate gradient method) for solving the inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function.
To calculate the gradient of the misfit function, the adjoint problem is solved. The conservative finite-difference schemes for solving the direct and adjoint problems in the approximation of shallow water are constructed. Results of numerical experiments of the tsunami source reconstruction are presented and discussed. We show that using a combination of three different types of data allows one to increase the stability and efficiency of tsunami source reconstruction. Non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction) in collaboration with Informap software development department developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions at wave run-ups and earthquakes. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.
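The Landweber iteration mentioned above can be sketched for a generic discrete linear problem; the shallow-water forward and adjoint solvers are replaced here by a plain matrix, so this is only an illustration of the iteration itself:

```python
import numpy as np

def landweber(A, b, n_iter, tau=None):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k).

    This is gradient descent on the misfit ||A x - b||^2 / 2; for ill-posed
    problems, stopping early acts as a regularizer."""
    if tau is None:
        # Any tau < 2 / s_max(A)^2 guarantees convergence.
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += tau * (A.T @ (b - A @ x))
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 5))      # toy, well-conditioned forward operator
x_true = rng.standard_normal(5)
b = A @ x_true                        # noiseless synthetic data
x_hat = landweber(A, b, n_iter=5000)
```

In the PDE setting, the product `A.T @ (...)` is exactly what the adjoint (shallow-water) problem computes: the gradient of the misfit with respect to the source.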

  2. Glimpse: Sparsity based weak lensing mass-mapping tool

    NASA Astrophysics Data System (ADS)

    Lanusse, F.; Starck, J.-L.; Leonard, A.; Pires, S.

    2018-02-01

    Glimpse, also known as Glimpse2D, is a weak lensing mass-mapping tool that relies on a robust sparsity-based regularization scheme to recover high resolution convergence from either gravitational shear alone or from a combination of shear and flexion. Including flexion allows the supplementation of the shear on small scales in order to increase the sensitivity to substructures and the overall resolution of the convergence map. To preserve all available small scale information, Glimpse avoids any binning of the irregularly sampled input shear and flexion fields and treats the mass-mapping problem as a general ill-posed inverse problem, regularized using a multi-scale wavelet sparsity prior. The resulting algorithm incorporates redshift, reduced shear, and reduced flexion measurements for individual galaxies and is made highly efficient by the use of fast Fourier estimators.
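Sparsity-regularized inversion of this kind can be illustrated with a generic iterative soft-thresholding (ISTA) solver. Glimpse itself uses a multi-scale wavelet-domain prior and fast Fourier estimators on irregularly sampled shear/flexion data, all of which this toy omits:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=20000):
    """Iterative soft-thresholding for min ||A x - b||^2 / 2 + lam ||x||_1,
    a standard solver for sparsity-regularized ill-posed inverse problems."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (b - A @ x) / L, lam / L)
    return x

# Toy sparse-recovery problem: 40 measurements of an 80-dim, 3-sparse signal
rng = np.random.default_rng(5)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = ista(A, b, lam=1e-3)
```

The sparsity prior is what lets the underdetermined system (more unknowns than measurements) be inverted stably, mirroring how a wavelet sparsity prior regularizes the mass-mapping problem.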

  3. Optical Breast Shape Capture and Finite Element Mesh Generation for Electrical Impedance Tomography

    PubMed Central

    Forsyth, J.; Borsic, A.; Halter, R.J.; Hartov, A.; Paulsen, K.D.

    2011-01-01

X-Ray mammography is the standard for breast cancer screening. The development of alternative imaging modalities is desirable because mammograms expose patients to ionizing radiation. Electrical Impedance Tomography (EIT) may be used to determine tissue conductivity, a property which is an indicator of cancer presence. EIT is also a low-cost imaging solution and does not involve ionizing radiation. In breast EIT, impedance measurements are made using electrodes placed on the surface of the patient’s breast. The complex conductivity of the volume of the breast is estimated by a reconstruction algorithm. EIT reconstruction is a severely ill-posed inverse problem. As a result, noisy instrumentation and incorrect modelling of the electrodes and domain shape produce significant image artefacts. In this paper, we propose a method that has the potential to reduce these errors by accurately modelling the patient breast shape. A 3D hand-held optical scanner is used to acquire the breast geometry and electrode positions. We develop methods for processing the data from the scanner and producing volume meshes accurately matching the breast surface and electrode locations, which can be used for image reconstruction. We demonstrate this method for a plaster breast phantom and a human subject. Using this approach will allow patient-specific finite element meshes to be generated, which has the potential to improve the clinical value of EIT for breast cancer diagnosis. PMID:21646711

  4. A trade-off solution between model resolution and covariance in surface-wave inversion

    USGS Publications Warehouse

    Xia, J.; Xu, Y.; Miller, R.D.; Zeng, C.

    2010-01-01

    Regularization is necessary for inversion of ill-posed geophysical problems. Appraisal of inverse models is essential for meaningful interpretation of these models. Because uncertainties are associated with regularization parameters, extra conditions are usually required to determine proper parameters for assessing inverse models. Commonly used techniques for assessment of a geophysical inverse model derived (generally iteratively) from a linear system are based on calculating the model resolution and the model covariance matrices. Because the model resolution and the model covariance matrices of the regularized solutions are controlled by the regularization parameter, direct assessment of inverse models using only the covariance matrix may provide incorrect results. To assess an inverted model, we use the concept of a trade-off between model resolution and covariance to find a proper regularization parameter with singular values calculated in the last iteration. We plot the singular values from large to small to form a singular value plot. A proper regularization parameter is normally the first singular value that approaches zero in the plot. With this regularization parameter, we obtain a trade-off solution between model resolution and model covariance in the vicinity of a regularized solution. The unit covariance matrix can then be used to calculate error bars of the inverse model at a resolution level determined by the regularization parameter. We demonstrate this approach with both synthetic and real surface-wave data. © 2010 Birkhäuser / Springer Basel AG.
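
    A minimal sketch of the singular-value-plot idea, assuming a linear(ized) system `G m = d` from the last inversion iteration; the names `G` and `d` and the near-zero criterion (`tol` times the largest singular value) are illustrative choices, not taken from the paper.

    ```python
    import numpy as np

    def truncation_index(s, tol=1e-8):
        """Index of the first singular value that 'approaches zero', here taken
        as the first value below tol * s_max (an assumed numerical criterion)."""
        s = np.asarray(s)
        small = np.where(s < tol * s[0])[0]
        return int(small[0]) if small.size else s.size

    def tsvd_solve(G, d, k):
        """Truncated-SVD solution of G m = d keeping the k largest singular values."""
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        s_inv = np.zeros_like(s)
        s_inv[:k] = 1.0 / s[:k]            # discard near-zero singular values
        return Vt.T @ (s_inv * (U.T @ d))

    # Tiny ill-conditioned example: the second singular value is effectively zero.
    G = np.array([[1.0, 0.0],
                  [0.0, 1e-12]])
    d = np.array([1.0, 1e-12])
    k = truncation_index(np.linalg.svd(G, compute_uv=False))
    m = tsvd_solve(G, d, k)
    ```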

  5. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

    While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decomposition in the projection domain yields a projection mass density (PMD) per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is performed by minimizing a cost function; a variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data, which is why this paper introduces a new data fidelity term that accounts for photonic noise. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; then a tomographic reconstruction creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in both 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
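
    The two data fidelity terms can be written down compactly. The sketch below uses illustrative variable names (`y` for measured counts, `f` for modelled mean intensities), and the inverse-variance weighting for WLS is an assumption of this sketch, not the paper's choice.

    ```python
    import numpy as np

    def wls(y, f, w):
        """Weighted least-squares misfit, suited to Gaussian noise."""
        return np.sum(w * (y - f) ** 2)

    def kl(y, f):
        """Kullback-Leibler distance for Poisson-distributed counts y given model f."""
        y = np.asarray(y, dtype=float)
        f = np.asarray(f, dtype=float)
        out = np.sum(f - y)
        m = y > 0                          # the y*log(y/f) term vanishes for zero counts
        return out + np.sum(y[m] * np.log(y[m] / f[m]))

    y = np.array([3.0, 0.0, 7.0])          # measured photon counts
    f = np.array([3.0, 0.1, 6.0])          # modelled mean intensities
    w = 1.0 / np.maximum(y, 1.0)           # inverse-variance weights (illustrative)
    ```

    Both terms are zero when the model matches the data exactly and positive otherwise; they differ in how strongly they penalize misfit at low counts, which is where the paper finds KL superior.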

  6. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

    Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE), in order to probe the internal structure of the medium, is investigated in the present work. The forward model of FD-RTE is solved via the finite volume method (FVM). A regularization term formed by the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and to increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have smaller errors than those of the absorption coefficient, which indicates the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).

  7. Confidence estimation for quantitative photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Gröhl, Janek; Kirchner, Thomas; Maier-Hein, Lena

    2018-02-01

    Quantification of photoacoustic (PA) images is one of the major challenges currently being addressed in PA research. Tissue properties can be quantified by correcting the recorded PA signal with an estimation of the corresponding fluence. Fluence estimation itself, however, is an ill-posed inverse problem which usually needs simplifying assumptions to be solved with state-of-the-art methods. These simplifications, as well as noise and artifacts in PA images reduce the accuracy of quantitative PA imaging (PAI). This reduction in accuracy is often localized to image regions where the assumptions do not hold true. This impedes the reconstruction of functional parameters when averaging over entire regions of interest (ROI). Averaging over a subset of voxels with a high accuracy would lead to an improved estimation of such parameters. To achieve this, we propose a novel approach to the local estimation of confidence in quantitative reconstructions of PA images. It makes use of conditional probability densities to estimate confidence intervals alongside the actual quantification. It encapsulates an estimation of the errors introduced by fluence estimation as well as signal noise. We validate the approach using Monte Carlo generated data in combination with a recently introduced machine learning-based approach to quantitative PAI. Our experiments show at least a two-fold improvement in quantification accuracy when evaluating on voxels with high confidence instead of thresholding signal intensity.

  8. Investigation of learning environment for arithmetic word problems by problem posing as sentence integration in Indonesian language

    NASA Astrophysics Data System (ADS)

    Hasanah, N.; Hayashi, Y.; Hirashima, T.

    2017-02-01

    Arithmetic word problems remain one of the most difficult areas in the teaching of mathematics. Learning by problem posing has been suggested as an effective way to improve students’ understanding. However, the practice is difficult in an ordinary classroom due to the extra time needed for assessing and giving feedback on students’ posed problems. To address this issue, we have developed a tablet PC software named Monsakun for learning by posing arithmetic word problems based on the Triplet Structure Model. It uses the mechanism of sentence integration, an efficient implementation of problem posing that enables agent assessment of posed problems. The learning environment has been used in actual Japanese elementary school classrooms and its effectiveness has been confirmed in previous studies. In this study, ten Indonesian elementary school students living in Japan participated in a learning session of problem posing using Monsakun in the Indonesian language. We analyzed their learning activities and show that students were able to interact with the structure of simple word problems in this learning environment. The results of the data analysis and a questionnaire suggest that Monsakun provides an interactive and fun environment for learning by problem posing for Indonesian elementary school students.

  9. Validation of US3D for Capsule Aerodynamics using 05-CA Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schwing, Alan

    2012-01-01

    RANS is ill-suited for analysis of these problems. For transonic and supersonic cases, US3D shows fairly good agreement using DES across all cases. Separation prediction and the resulting backshell pressure are problems across all portions of this analysis. This becomes more of an issue at lower Mach numbers: stagnation pressures are not as large, so the wake and backshell are more significant, and errors on the shoulder act on a large area, so small discrepancies manifest as large changes. Subsonic comparisons are mixed with regard to integrated loads and merit more attention. Dominant unsteady behavior (wake shedding) is resolved well, though.

  10. Computed inverse resonance imaging for magnetic susceptibility map reconstruction.

    PubMed

    Chen, Zikuan; Calhoun, Vince

    2012-01-01

    This article reports a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a 2-step computational approach. The forward T2*-weighted MRI (T2*MRI) process is broken down into 2 steps: (1) from magnetic susceptibility source to field map establishment via magnetization in the main field and (2) from field map to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes 2 inverse steps to reverse the T2*MRI procedure: field map calculation from MR-phase image and susceptibility source calculation from the field map. The inverse step from field map to susceptibility map is a 3-dimensional ill-posed deconvolution problem, which can be solved with 3 kinds of approaches: the Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from an MR-phase image with high fidelity (spatial correlation ≈ 0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by 2 computational steps: calculating the field map from the phase image and reconstructing the susceptibility map from the field map. The crux of CIMRI lies in an ill-posed 3-dimensional deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm.
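
    Of the three deconvolution approaches listed, inverse filtering with a truncated filter is the easiest to sketch. The 1D circular-convolution example below is a stand-in for the 3D dipole-kernel problem; the kernel and the truncation threshold are illustrative choices, not the paper's.

    ```python
    import numpy as np

    def truncated_inverse_filter(b, h, thresh=0.1):
        """Deconvolve b = h (*) x (circular convolution) by inverse filtering in
        the Fourier domain, zeroing frequencies where |H| < thresh to avoid
        amplifying noise at ill-conditioned frequencies."""
        H = np.fft.fft(h)
        B = np.fft.fft(b)
        mask = np.abs(H) > thresh
        Hinv = np.zeros_like(H)
        Hinv[mask] = 1.0 / H[mask]          # truncated inverse filter
        return np.real(np.fft.ifft(B * Hinv))

    # Toy 1D stand-in for the fieldmap -> susceptibility deconvolution step:
    x = np.zeros(16); x[4] = 1.0                               # point "susceptibility" source
    h = np.zeros(16); h[0] = 1.0; h[1] = 0.5                   # well-conditioned toy kernel
    b = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))    # simulated "fieldmap"
    x_rec = truncated_inverse_filter(b, h)
    ```

    In the real 3D problem the dipole kernel has zeros on a cone in k-space, which is exactly where truncation (or TV regularization, as the paper prefers) becomes essential.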

  11. Computed inverse MRI for magnetic susceptibility map reconstruction

    PubMed Central

    Chen, Zikuan; Calhoun, Vince

    2015-01-01

    Objective This paper reports on a computed inverse magnetic resonance imaging (CIMRI) model for reconstructing the magnetic susceptibility source from MRI data using a two-step computational approach. Methods The forward T2*-weighted MRI (T2*MRI) process is decomposed into two steps: 1) from magnetic susceptibility source to fieldmap establishment via magnetization in a main field, and 2) from fieldmap to MR image formation by intravoxel dephasing average. The proposed CIMRI model includes two inverse steps to reverse the T2*MRI procedure: fieldmap calculation from MR phase image and susceptibility source calculation from the fieldmap. The inverse step from fieldmap to susceptibility map is a 3D ill-posed deconvolution problem, which can be solved by three kinds of approaches: Tikhonov-regularized matrix inverse, inverse filtering with a truncated filter, and total variation (TV) iteration. By numerical simulation, we validate the CIMRI model by comparing the reconstructed susceptibility maps for a predefined susceptibility source. Results Numerical simulations of CIMRI show that the split Bregman TV iteration solver can reconstruct the susceptibility map from a MR phase image with high fidelity (spatial correlation≈0.99). The split Bregman TV iteration solver includes noise reduction, edge preservation, and image energy conservation. For applications to brain susceptibility reconstruction, it is important to calibrate the TV iteration program by selecting suitable values of the regularization parameter. Conclusions The proposed CIMRI model can reconstruct the magnetic susceptibility source of T2*MRI by two computational steps: calculating the fieldmap from the phase image and reconstructing the susceptibility map from the fieldmap. The crux of CIMRI lies in an ill-posed 3D deconvolution problem, which can be effectively solved by the split Bregman TV iteration algorithm. PMID:22446372

  12. Examining the Prevalence of Self-Reported Foodborne Illnesses and Food Safety Risks among International College Students in the United States

    ERIC Educational Resources Information Center

    Lyonga, Agnes Ngale; Eighmy, Myron A.; Garden-Robinson, Julie

    2010-01-01

    Foodborne illness and food safety risks pose health threats to everyone, including international college students who live in the United States and encounter new or unfamiliar foods. This study assessed the prevalence of self-reported foodborne illness among international college students by cultural regions and length of time in the United…

  13. The Necessity of Machine Learning and Epistemology in the Development of Categorization Theories: A Case Study in Prototype-Exemplar Debate

    NASA Astrophysics Data System (ADS)

    Gagliardi, Francesco

    In the present paper we discuss some aspects of the development of categorization theories in cognitive psychology and machine learning. We consider the thirty-year debate between prototype theory and exemplar theory in cognitive-psychology studies of categorization processes. We propose that this debate is ill-posed, because it neglects some theoretical and empirical results of machine learning concerning the bias-variance theorem and the existence of instance-based classifiers that can embed models subsuming both prototype and exemplar theories. Moreover, this debate rests on an epistemological error: the pursuit of a so-called experimentum crucis. We then present how an interdisciplinary approach, based on the synthetic method for cognitive modelling, can help advance both cognitive psychology and machine learning.

  14. An interactive framework for acquiring vision models of 3-D objects from 2-D images.

    PubMed

    Motai, Yuichi; Kak, Avinash

    2004-02-01

    This paper presents a human-computer interaction (HCI) framework for building vision models of three-dimensional (3-D) objects from their two-dimensional (2-D) images. Our framework is based on two guiding principles of HCI: 1) provide the human with as much visual assistance as possible to help the human make a correct input; and 2) verify each input provided by the human for its consistency with the inputs previously provided. For example, when stereo correspondence information is elicited from a human, his/her job is facilitated by superimposing epipolar lines on the images. Although that reduces the possibility of error in the human marked correspondences, such errors are not entirely eliminated because there can be multiple candidate points close together for complex objects. For another example, when pose-to-pose correspondence is sought from a human, his/her job is made easier by allowing the human to rotate the partial model constructed in the previous pose in relation to the partial model for the current pose. While this facility reduces the incidence of human-supplied pose-to-pose correspondence errors, such errors cannot be eliminated entirely because of confusion created when multiple candidate features exist close together. Each input provided by the human is therefore checked against the previous inputs by invoking situation-specific constraints. Different types of constraints (and different human-computer interaction protocols) are needed for the extraction of polygonal features and for the extraction of curved features. We will show results on both polygonal objects and object containing curved features.

  15. Extended infusion of beta-lactam antibiotics: optimizing therapy in critically-ill patients in the era of antimicrobial resistance.

    PubMed

    Rizk, Nesrine A; Kanafani, Zeina A; Tabaja, Hussam Z; Kanj, Souha S

    2017-07-01

    Beta-lactams are the cornerstone of therapy in critical care settings, but their clinical efficacy is challenged by the rise in bacterial resistance. Infections with multi-drug resistant organisms are frequent in intensive care units, posing significant therapeutic challenges. The problem is compounded by a dearth in the development of new antibiotics. In addition, critically-ill patients have unique physiologic characteristics that alter the drugs' pharmacokinetics and pharmacodynamics. Areas covered: The prolonged infusion of antibiotics (extended infusion [EI] and continuous infusion [CI]) has been the focus of research in the last decade. As beta-lactams have time-dependent killing characteristics that are altered in critically-ill patients, prolonged infusion is an attractive approach to maximizing their drug delivery and efficacy. Several studies have compared traditional dosing to EI/CI of beta-lactams with regard to clinical efficacy. The clinical data consist primarily of retrospective studies and some randomized controlled trials, and several reports show promising results. Expert commentary: Reviewing the currently available evidence, we conclude that EI/CI is probably beneficial in the treatment of critically-ill patients in whom an organism has been identified, particularly those with respiratory infections. Further studies are needed to evaluate the efficacy of EI/CI in the management of infections with resistant organisms.

  16. Are universities preparing nurses to meet the challenges posed by the Australian mental health care system?

    PubMed

    Wynaden, D; Orb, A; McGowan, S; Downie, J

    2000-09-01

    The preparedness of comprehensive nurses to work with the mentally ill is of concern to many mental health professionals. Whether current undergraduate nursing programs in Australia prepare graduates to work as beginning practitioners in the mental health area has been the centre of debate for most of the 1990s. This, along with the apparent lack of interest and motivation of these nurses to work in the mental health area following graduation, remains a major problem for mental health care providers. With one in five Australians now experiencing the burden of a major mental illness, the preparation of nurses who are competent to work with the mentally ill would appear to be a priority. The purpose of the present study was to determine third-year undergraduate nursing students' perceived level of preparedness to work with mentally ill clients. The results suggested significant differences in students' perceived levels of confidence, knowledge and skills prior to and following theoretical and clinical exposure to the mental health area. Pre-testing of students before entering their third year indicated that integration, the philosophy of comprehensive nursing, although aspired to in principle, does not appear to occur in reality.

  17. Problem Posing as a Pedagogical Strategy: A Teacher's Perspective

    ERIC Educational Resources Information Center

    Staebler-Wiseman, Heidi A.

    2011-01-01

    Student problem posing has been advocated for mathematics instruction, and it has been suggested that problem posing can be used to develop students' mathematical content knowledge. But, problem posing has rarely been utilized in university-level mathematics courses. The goal of this teacher-as-researcher study was to develop and investigate…

  18. Deconvolution of mixing time series on a graph

    PubMed Central

    Blocker, Alexander W.; Airoldi, Edoardo M.

    2013-01-01

    In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate the regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
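
    The core difficulty, recovering x_t from y_t = A x_t when A is a wide aggregation matrix, can be illustrated with a plain ridge (L2) regularizer standing in for the paper's multilevel prior. The routing matrix below is hypothetical; note that ridge returns an estimate consistent with the aggregates but not a sparse one, which is precisely why the paper's burstiness and sparsity modelling matters.

    ```python
    import numpy as np

    def ridge_recover(A, y, lam=1e-3):
        """Regularized least-squares estimate of x from the underdetermined
        system y = A x (closed-form ridge solution)."""
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

    # Hypothetical routing: three origin-destination flows aggregated onto two links.
    A = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0]])
    x_true = np.array([2.0, 0.0, 3.0])   # sparse, bursty flows (unknown in practice)
    y = A @ x_true                       # observed aggregate link loads
    x_hat = ridge_recover(A, y)
    ```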

  19. Distorted Born iterative T-matrix method for inversion of CSEM data in anisotropic media

    NASA Astrophysics Data System (ADS)

    Jakobsen, Morten; Tveit, Svenn

    2018-05-01

    We present a direct iterative solution to the nonlinear controlled-source electromagnetic (CSEM) inversion problem in the frequency domain, based on a volume integral equation formulation of the forward modelling problem in anisotropic conductive media. Our vectorial nonlinear inverse scattering approach effectively replaces an ill-posed nonlinear inverse problem with a series of linear ill-posed inverse problems, for which efficient (regularized) solution methods already exist. The solution updates the dyadic Green's functions from the source to the scattering volume and from the scattering volume to the receivers after each iteration. The T-matrix approach of multiple scattering theory is used for efficient updating of all dyadic Green's functions after each linearized inversion step. This means that we have developed a T-matrix variant of the Distorted Born Iterative (DBI) method, which is often used in the acoustic and electromagnetic (medical) imaging communities as an alternative to contrast-source inversion. The main advantage of using the T-matrix approach in this context is that it eliminates the need to perform a full forward simulation at each iteration of the DBI method, which is known to be consistent with the Gauss-Newton method. The T-matrix allows for a natural domain decomposition, in the sense that a large model can be decomposed into an arbitrary number of domains that can be treated independently and in parallel. The T-matrix we use for efficient model updating is also independent of the source-receiver configuration, which could be an advantage when performing fast-repeat modelling and time-lapse inversion. The T-matrix is also compatible with the use of modern renormalization methods that can potentially help reduce the sensitivity of the CSEM inversion results to the starting model.
To illustrate the performance and potential of our T-matrix variant of the DBI method for CSEM inversion, we performed numerical experiments based on synthetic CSEM data associated with 2D VTI and 3D orthorhombic model inversions. The results of our numerical experiments suggest that the DBIT method for inversion of CSEM data in anisotropic media is both accurate and efficient.

  20. Scene analysis in the natural environment

    PubMed Central

    Lewicki, Michael S.; Olshausen, Bruno A.; Surlykke, Annemarie; Moss, Cynthia F.

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish, all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals. PMID:24744740

  1. Students’ Creativity: Problem Posing in Structured Situation

    NASA Astrophysics Data System (ADS)

    Amalina, I. K.; Amirudin, M.; Budiarto, M. T.

    2018-01-01

    This is a qualitative study of students’ creativity in problem-posing tasks. The study aimed at describing students’ creative thinking ability to pose mathematics problems in structured situations with varied conditions of given problems. In order to assess creative thinking ability, students’ responses on a mathematics problem-posing test were analyzed in terms of fluency, novelty, and flexibility, complemented by interviews. The data analysis assessed the quality of the posed problems and categorized students into 4 levels of creativity. The results revealed that, of 29 grade-8 secondary students, a student in CTL (Creative Thinking Level) 1 met fluency, a student in CTL 2 met novelty, a student in CTL 3 met both fluency and novelty, and no one reached CTL 4. These results are affected by students’ mathematical experience. The findings of this study highlight that students’ problem-posing creativity depends on their experience in mathematics learning and on the point from which they start to pose problems.

  2. Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics

    NASA Astrophysics Data System (ADS)

    Kordy, Michal Adam

    The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency-domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). This allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed at low frequencies. In the second paper, we consider hexahedral finite element approximation of the electric field for the magnetotelluric forward problem. The near-null space of the system matrix at low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl, done by solving a Poisson equation using a discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done using the second norm of the logarithm of conductivity. The data-space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method on a number of synthetic inversions and apply it to real data collected in the Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Padé approximation through model order reduction and rational Krylov subspaces. The interpolating frequencies are chosen adaptively in order to minimize the maximum interpolation error. Two error indicator functions are compared.
We prove a theorem of almost-always lucky failure in the case of the right-hand side depending analytically on frequency. The operator's null space is treated by decomposing the solution into the part in the null space and the part orthogonal to it.

  3. Assessing Students' Mathematical Problem Posing

    ERIC Educational Resources Information Center

    Silver, Edward A.; Cai, Jinfa

    2005-01-01

    Specific examples are used to discuss assessment, an integral part of mathematics instruction, with problem posing and assessment of problem posing. General assessment criteria are suggested to evaluate student-generated problems in terms of their quantity, originality, and complexity.

  4. Engaging Pre-Service Middle-School Teacher-Education Students in Mathematical Problem Posing: Development of an Active Learning Framework

    ERIC Educational Resources Information Center

    Ellerton, Nerida F.

    2013-01-01

    Although official curriculum documents make cursory mention of the need for problem posing in school mathematics, problem posing rarely becomes part of the implemented or assessed curriculum. This paper provides examples of how problem posing can be made an integral part of mathematics teacher education programs. It is argued that such programs…

  5. Creativity and Mathematical Problem Posing: An Analysis of High School Students' Mathematical Problem Posing in China and the USA

    ERIC Educational Resources Information Center

    Van Harpen, Xianwei Y.; Sriraman, Bharath

    2013-01-01

    In the literature, problem-posing abilities are reported to be an important aspect/indicator of creativity in mathematics. The importance of problem-posing activities in mathematics is emphasized in educational documents in many countries, including the USA and China. This study was aimed at exploring high school students' creativity in…

  6. Interlocked Problem Posing and Children's Problem Posing Performance in Free Structured Situations

    ERIC Educational Resources Information Center

    Cankoy, Osman

    2014-01-01

    The aim of this study is to explore the mathematical problem posing performance of students in free structured situations. Two classes of fifth grade students (N = 30) were randomly assigned to experimental and control groups. The categories of the problems posed in free structured situations by the 2 groups of students were studied through…

  7. Problem-Posing Strategies Used by Years 8 and 9 Students

    ERIC Educational Resources Information Center

    Stoyanova, Elena

    2005-01-01

    According to Kilpatrick (1987), in the mathematics classrooms problem posing can be applied as a "goal" or as a means of instruction. Using problem posing as a goal of instruction involves asking students to respond to a range of problem-posing prompts. The main goal of this article is a classification of mathematics questions created by Years 8…

  8. 2D deblending using the multi-scale shaping scheme

    NASA Astrophysics Data System (ADS)

    Li, Qun; Ban, Xingan; Gong, Renbin; Li, Jinnuo; Ge, Qiang; Zu, Shaohuan

    2018-01-01

    Deblending can be posed as an inversion problem, which is ill-posed and requires constraints to obtain a unique and stable solution. In a blended record, the signal is coherent, whereas the interference is incoherent in some domains (e.g., the common receiver domain and the common offset domain). Because of this difference in sparsity, the coefficients of signal and interference lie in different curvelet scale domains and have different amplitudes. Taking these two differences into account, we propose a 2D multi-scale shaping scheme that constrains sparsity to separate the blended record. In the domains where the signal concentrates, the multi-scale scheme passes all the coefficients representing signal, while in the domains where the interference focuses, it suppresses the coefficients representing interference. Because the interference is suppressed markedly at each iteration, the constraints imposed by the multi-scale shaping operator in all scale domains can be weak, which guarantees the convergence of the algorithm. We evaluate the performance of the multi-scale shaping scheme and the traditional global shaping scheme using two synthetic data examples and one field data example.
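
The scale-dependent shaping idea can be sketched with a generic soft-thresholding operator. This is an illustrative stand-in only: real curvelet coefficients are replaced by plain arrays, and the two "scales" and threshold values below are hypothetical.

```python
import numpy as np

def shaping_threshold(coeffs, thresholds):
    """Scale-dependent soft thresholding: a stand-in for a multi-scale
    shaping operator (the paper works with curvelet coefficients; here
    each 'scale' is just a flat array with its own threshold)."""
    out = []
    for c, t in zip(coeffs, thresholds):
        out.append(np.sign(c) * np.maximum(np.abs(c) - t, 0.0))
    return out

# Coherent "signal" scale gets a weak threshold (pass almost everything);
# incoherent "interference" scale gets a strong threshold (suppress it).
signal_scale = np.array([2.0, -1.5, 1.8])
noise_scale = np.array([0.3, -0.4, 0.2])
shaped = shaping_threshold([signal_scale, noise_scale], [0.1, 0.5])
```

With these hypothetical thresholds, the signal-scale coefficients pass through nearly unchanged while the interference-scale coefficients are suppressed to zero.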

  9. When a Problem Is More than a Teacher's Question

    ERIC Educational Resources Information Center

    Olson, Jo Clay; Knott, Libby

    2013-01-01

    Not only are the problems teachers pose throughout their teaching of great importance but also the ways in which they use those problems make this a critical component of teaching. A problem-posing episode includes the problem setup, the statement of the problem, and the follow-up questions. Analysis of problem-posing episodes of precalculus…

  10. Glacier mass variations from recent ITSG-Grace solutions: Experiences with the point-mass modeling technique in the framework of project SPICE.

    NASA Astrophysics Data System (ADS)

    Reimond, S.; Klinger, B.; Krauss, S.; Mayer-Gürr, T.; Eicker, A.; Zemp, M.

    2017-12-01

    In recent years, remotely sensed observations have become one of the most ubiquitous and valuable sources of information for glacier monitoring. In addition to altimetry and interferometry data (as observed, e.g., by the CryoSat-2 and TanDEM-X satellites), time-variable gravity field data from the GRACE satellite mission have been used by several authors to assess mass changes in glacier systems. The main challenges in this context are i) the limited spatial resolution of GRACE, ii) the gravity signal attenuation in space and iii) the problem of isolating the glaciological signal from the gravitational signatures as detected by GRACE. In order to tackle challenges i) and ii), we thoroughly investigate the point-mass modeling technique to represent the local gravity field. Instead of simply evaluating global spherical harmonics, we operate on the normal equation level and make use of GRACE K-band ranging data (available since April 2002) processed at the Graz University of Technology. Assessing such small-scale mass changes from space-borne gravimetric data is an ill-posed problem, which we aim to stabilize by utilizing a Genetic-Algorithm-based Tikhonov regularization. Concerning issue iii), we evaluate three different hydrology models (i.e. GLDAS, LSDM and WGHM) for validation purposes and the derivation of error bounds. The non-glaciological signal is calculated for each region of interest and reduced from the GRACE results. We present mass variations of several alpine glacier systems (e.g. the European Alps, Svalbard or Iceland) and compare our results to glaciological observations provided by the World Glacier Monitoring Service (WGMS) and alternative inversion methods (surface density modeling).
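
Tikhonov stabilization on the normal-equation level can be sketched generically. The matrix `A`, its near-collinear columns, and the parameter `lam` below are hypothetical stand-ins for the point-mass design matrix and the Genetic-Algorithm-selected regularization parameter.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical linearized problem: A maps point-mass magnitudes (unknowns)
# to gravity observations.  Two near-collinear columns make the normal
# matrix ill-conditioned, mimicking the instability of downward continuation.
A = rng.normal(size=(50, 10))
A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=50)
x_true = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=50)

lam = 1e-2                      # regularization parameter (illustrative value)
N = A.T @ A                     # normal matrix
x_reg = np.linalg.solve(N + lam * np.eye(10), A.T @ b)   # stabilized solve
x_naive = np.linalg.lstsq(A, b, rcond=None)[0]           # unregularized solve
```

The regularized solution trades a small residual increase for a much smaller (and more stable) solution norm than the plain least-squares solve.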

  11. LF/MF Propagation Modeling for D-Region Ionospheric Remote Sensing

    NASA Astrophysics Data System (ADS)

    Higginson-Rollins, M. A.; Cohen, M.

    2017-12-01

    The D-region of the ionosphere is highly inaccessible because it is too high for continuous in-situ measurement techniques and too low for satellite measurements. Very-Low Frequency (VLF) signals have been developed and used as a diagnostic tool for this region of the ionosphere and are favorable because of the low ionospheric attenuation rates, allowing global propagation - but this also creates an ill-posed multi-mode propagation problem. As an alternative, Low-Frequency (LF) and Medium-Frequency (MF) signals could be used as a diagnostic tool of the D-region. These higher frequencies have a higher attenuation rate, and thus only a few modes propagate in the Earth-ionosphere waveguide, creating a much simpler problem to analyze. The United States Coast Guard (USCG) operates a national network of radio transmitters that serve as an enhancement to the Global Positioning System (GPS). This network is termed Differential Global Positioning System (DGPS) and uses fixed reference stations as a method of determining the error in received GPS satellite signals and transmits the correction value using low frequency and medium frequency radio signals between 285 kHz and 385 kHz. Using sensitive receivers, we can detect this signal many hundreds of km away. We present modeling of the propagation of these transmitters' signals for use as a diagnostic tool for characterizing the D-region. The Finite-Difference Time-Domain (FDTD) method is implemented to model the groundwave radiated by the DGPS beacons and account for environmental effects, such as changing soil conductivities and terrain. A full wave numerical solver is used to model the skywave component of the propagating signal and specifically to ascertain the reflection coefficients for various ionospheric conditions. Preliminary results are shown and discussed, and comparisons with collected data are presented.

  12. Trajectory prediction for ballistic missiles based on boost-phase LOS measurements

    NASA Astrophysics Data System (ADS)

    Yeddanapudi, Murali; Bar-Shalom, Yaakov

    1997-10-01

    This paper addresses the problem of estimating the trajectory of a tactical ballistic missile using line of sight (LOS) measurements from one or more passive sensors (typically satellites). The major difficulties of this problem include: the estimation of the unknown time of launch, incorporation of (inaccurate) target thrust profiles to model the target dynamics during the boost phase and an overall ill-conditioning of the estimation problem due to poor observability of the target motion via the LOS measurements. We present a robust estimation procedure based on the Levenberg-Marquardt algorithm that provides both the target state estimate and error covariance, taking into consideration the complications mentioned above. An important consideration in the defense against tactical ballistic missiles is the determination of the target position and error covariance at the acquisition range of a surveillance radar in the vicinity of the impact point. We present a systematic procedure to propagate the target state and covariance to a nominal time, when it is within the detection range of a surveillance radar, to obtain a cueing volume. Monte Carlo simulation studies on typical single and two sensor scenarios indicate that the proposed algorithms are accurate in terms of the estimates and the estimator calculated covariances are consistent with the errors.
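
A minimal Levenberg-Marquardt loop, of the kind underlying such estimators, can be sketched as follows. The exponential-fit example is purely illustrative; the paper's state model, thrust profiles, and covariance propagation are omitted.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, n_iter=50, lam=1e-3):
    """Minimal damped Gauss-Newton (Levenberg-Marquardt) loop: accept a step
    only if it reduces the residual norm, adapting the damping factor."""
    x = np.asarray(x0, float)
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        H = J.T @ J + lam * np.eye(x.size)        # damped normal matrix
        step = np.linalg.solve(H, -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.5          # accept: relax damping
        else:
            lam *= 2.0                            # reject: increase damping
    return x

# Fit y = a * exp(b t) to noiseless data; exact recovery is expected.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p_hat = levenberg_marquardt(res, jac, [1.0, 1.0])
```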

  13. An Analysis of Secondary and Middle School Teachers' Mathematical Problem Posing

    ERIC Educational Resources Information Center

    Stickles, Paula R.

    2011-01-01

    This study identifies the kinds of problems teachers pose when they are asked to (a) generate problems from given information and (b) create new problems from ones given to them. To investigate teachers' problem posing, preservice and inservice teachers completed background questionnaires and four problem-posing instruments. Based on previous…

  14. Challenges of caring for children with mental disorders: Experiences and views of caregivers attending the outpatient clinic at Muhimbili National Hospital, Dar es Salaam - Tanzania.

    PubMed

    Ambikile, Joel Semel; Outwater, Anne

    2012-07-05

    It is estimated that, worldwide, up to 20% of children suffer from debilitating mental illness. Mental disorders that pose a significant concern include learning disorders, hyperkinetic disorders (ADHD), depression, psychosis, pervasive development disorders, attachment disorders, anxiety disorders, conduct disorder, substance abuse and eating disorders. Living with such children can be very stressful for caregivers in the family. Therefore, determination of the challenges of living with these children is important in the process of finding ways to help or support caregivers to provide proper care for their children. The purpose of this study was to explore the psychological and emotional, social, and economic challenges that parents or guardians experience when caring for mentally ill children and what they do to address or deal with them. A qualitative study design using in-depth interviews and focus group discussions was applied. The study was conducted at the psychiatric unit of Muhimbili National Hospital in Tanzania. Two focus group discussions (FGDs) and 8 in-depth interviews were conducted with caregivers who attended the psychiatric clinic with their children. Data analysis was done using content analysis. The study revealed psychological and emotional, social, and economic challenges caregivers endure while living with mentally ill children. Psychological and emotional challenges included being stressed by caring tasks and having worries about the present and future life of their children. They had feelings of sadness, and inner pain or bitterness due to the disturbing behaviour of the children. They also experienced some communication problems with their children due to their inability to talk. Social challenges were inadequate social services for their children, stigma, the burden of the caring task, lack of public awareness of mental illness, lack of social support, and problems with social life. 
The economic challenges were poverty, child care interfering with various income generating activities in the family, and extra expenses associated with the child's illness. Caregivers of mentally ill children experience various psychological and emotional, social, and economic challenges. Professional assistance, public awareness of mental illnesses in children, social support by the government, private sector, and non-governmental organizations (NGOs) are important in addressing these challenges.

  15. Online absolute pose compensation and steering control of industrial robot based on six degrees of freedom laser measurement

    NASA Astrophysics Data System (ADS)

    Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu

    2017-03-01

    In-situ intelligent manufacturing for large-volume equipment requires industrial robots capable of high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so offline calibration cannot acquire a robot's actual parameters in real time or control the absolute pose of the robot with high accuracy within a large workspace. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately obtaining the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variables and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible, and the online absolute accuracy of a robot is substantially enhanced.

  16. Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry

    NASA Astrophysics Data System (ADS)

    Kersten, J.; Rodehorst, V.

    2016-06-01

    Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, frame-to-frame stereo visual odometry (VO) approaches are also known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as our own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.

  17. Using informative priors in facies inversion: The case of C-ISR method

    NASA Astrophysics Data System (ADS)

    Valakas, G.; Modis, K.

    2016-08-01

    Inverse problems involving the characterization of hydraulic properties of groundwater flow systems by conditioning on observations of the state variables are mathematically ill-posed because they have multiple solutions and are sensitive to small changes in the data. In the framework of MCMC methods for nonlinear optimization and under an iterative spatial resampling transition kernel, we present an algorithm for narrowing the prior and thus producing improved proposal realizations. To achieve this goal, we cosimulate the facies distribution conditionally on facies observations and normal-scores-transformed hydrologic response measurements, assuming a linear coregionalization model. The approach works by creating an importance sampling effect that steers the process to selected areas of the prior. The effectiveness of our approach is demonstrated by an example application on a synthetic underdetermined inverse problem in aquifer characterization.
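
The accept/reject mechanism with a resampling-style proposal can be sketched on a toy binary field. Everything here (field size, misfit function, subset size) is a hypothetical simplification of iterative spatial resampling, not the paper's conditional cosimulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an iterative-spatial-resampling transition kernel:
# propose a new binary "facies" realization by resampling a random subset
# of cells, then accept or reject via a Metropolis rule on the data misfit.
n_cells = 40
observed = (np.arange(n_cells) < 20).astype(float)    # synthetic "truth"

def misfit(field):
    return np.sum((field - observed) ** 2)

field = rng.integers(0, 2, n_cells).astype(float)     # random initial facies
init_misfit = misfit(field)
for _ in range(500):
    proposal = field.copy()
    idx = rng.choice(n_cells, size=5, replace=False)  # resample a subset
    proposal[idx] = rng.integers(0, 2, 5)
    # Metropolis acceptance (Gaussian likelihood, unit temperature)
    if rng.random() < np.exp(misfit(field) - misfit(proposal)):
        field = proposal
final_misfit = misfit(field)
```

The chain drifts toward realizations that honor the data, which is the importance-sampling effect the abstract describes, albeit in a drastically simplified setting.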

  18. Regional regularization method for ECT based on spectral transformation of Laplacian

    NASA Astrophysics Data System (ADS)

    Guo, Z. H.; Kan, Z.; Lv, D. C.; Shao, F. Q.

    2016-10-01

    Image reconstruction in electrical capacitance tomography is an ill-posed inverse problem, and regularization techniques are usually used to solve the problem and suppress noise. An anisotropic regional regularization algorithm for electrical capacitance tomography is constructed using a novel approach called spectral transformation. Its function is derived and applied to the weighted gradient magnitude of the sensitivity of the Laplacian as a regularization term. With the optimum regional regularizer, a priori knowledge of the local degree of nonlinearity of the forward map is incorporated into the proposed online reconstruction algorithm. Simulation experiments were performed to verify that the new regularization algorithm reconstructs images of higher quality than two conventional Tikhonov regularization approaches. The advantage of the new algorithm for improving performance and reducing shape distortion is demonstrated with the experimental data.
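
The effect of a regional (spatially varying) regularization weight versus a uniform Tikhonov weight can be illustrated on a deliberately simple problem. The forward map is taken as the identity for transparency, and the weights are hypothetical rather than derived from the sensitivity of the Laplacian as in the paper.

```python
import numpy as np

# Regional vs. uniform Tikhonov: the penalty weight varies per pixel.
n = 30
rng = np.random.default_rng(2)
x_true = np.zeros(n)
x_true[10:15] = 1.0                  # a localized "object" in the image
A = np.eye(n)                        # identity forward map for transparency
b = A @ x_true + 0.01 * rng.normal(size=n)

lam = 1.0
L = np.eye(n)                        # identity regularization operator
w = np.ones(n)
w[10:15] = 0.1                       # weaker penalty where structure is expected
W = np.diag(w)
x_regional = np.linalg.solve(A.T @ A + lam * L.T @ W @ L, A.T @ b)
x_uniform  = np.linalg.solve(A.T @ A + lam * L.T @ L,     A.T @ b)
```

Because the regional penalty shrinks the true nonzero region less, its reconstruction error is smaller than that of the uniform penalty.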

  19. Analysis of Problems Posed by Sixth-Grade Middle School Students for the Addition of Fractions in Terms of Semantic Structures

    ERIC Educational Resources Information Center

    Kar, Tugrul

    2015-01-01

    This study aimed to investigate how the semantic structures of problems posed by sixth-grade middle school students for the addition of fractions affect their problem-posing performance. The students were presented with symbolic operations involving the addition of fractions and asked to pose two different problems related to daily-life situations…

  20. Robust kernel representation with statistical local features for face recognition.

    PubMed

    Yang, Meng; Zhang, Lei; Shiu, Simon Chi-Keung; Zhang, David

    2013-06-01

    Factors such as misalignment, pose variation, and occlusion make robust face recognition a difficult problem. It is known that statistical features such as local binary pattern are effective for local feature extraction, whereas the recently proposed sparse or collaborative representation-based classification has shown interesting results in robust face recognition. In this paper, we propose a novel robust kernel representation model with statistical local features (SLF) for robust face recognition. Initially, multipartition max pooling is used to enhance the invariance of SLF to image registration error. Then, a kernel-based representation model is proposed to fully exploit the discrimination information embedded in the SLF, and robust regression is adopted to effectively handle the occlusion in face images. Extensive experiments are conducted on benchmark face databases, including extended Yale B, AR (A. Martinez and R. Benavente), multiple pose, illumination, and expression (multi-PIE), facial recognition technology (FERET), face recognition grand challenge (FRGC), and labeled faces in the wild (LFW), which have different variations of lighting, expression, pose, and occlusions, demonstrating the promising performance of the proposed method.

  1. The prevalence of medical error related to end-of-life communication in Canadian hospitals: results of a multicentre observational study.

    PubMed

    Heyland, Daren K; Ilan, Roy; Jiang, Xuran; You, John J; Dodek, Peter

    2016-09-01

    In the hospital setting, inadequate engagement between healthcare professionals and seriously ill patients and their families regarding end-of-life decisions is common. This problem may lead to medical orders for life-sustaining treatments that are inconsistent with patient preferences. The prevalence of this patient safety problem has not been previously described. Using data from a multi-institutional audit, we quantified the mismatch between patients' and family members' expressed preferences for care and orders for life-sustaining treatments. We recruited seriously ill, elderly medical patients and/or their family members to participate in this audit. We considered it a medical error if a patient preferred not to be resuscitated and there were orders to undergo resuscitation (overtreatment), or if a patient preferred resuscitation (cardiopulmonary resuscitation, CPR) and there were orders not to be resuscitated (undertreatment). From 16 hospitals in Canada, 808 patients and 631 family members were included in this study. When comparing expressed preferences and documented orders for use of CPR, 37% of patients experienced a medical error. Very few patients (8, 2%) expressed a preference for CPR and had CPR withheld in their documented medical orders (undertreatment). Of patients who preferred not to have CPR, 174 (35%) had orders to receive it (overtreatment). There was considerable variability in overtreatment rates across sites (range: 14-82%). Patients who were frail were less likely to be overtreated; patients who did not have a participating family member were more likely to be overtreated. Medical errors related to the use of life-sustaining treatments are very common in internal medicine wards. Many patients are at risk of receiving inappropriate end-of-life care.

  2. PARTICIPANT BLINDING AND GASTROINTESTINAL ILLNESS IN A RANDOMIZED, CONTROLLED TRIAL OF AN IN-HOME DRINKING WATER INTERVENTION

    EPA Science Inventory


    Background. There is no consensus about the level of risk of gastrointestinal illness posed by consumption of drinking water that meets all regulatory requirements. Earlier drinking water intervention trials from Canada suggested that 14% - 40% of such gastrointestinal il...

  3. A fractional-order accumulative regularization filter for force reconstruction

    NASA Astrophysics Data System (ADS)

    Wensong, Jiang; Zhongyu, Wang; Jing, Lv

    2018-02-01

    The ill-posed inverse problem of force reconstruction arises from the influence of noise on the measured responses and results in an inaccurate or non-unique solution. To overcome this ill-posedness, in this paper, the transfer function of the reconstruction model is redefined by a Fractional-order Accumulative Regularization Filter (FARF). First, the measured responses with noise are refined by a fractional-order accumulation filter based on a dynamic data refresh strategy. Second, a transfer function, generated from the filtering results of the measured responses, is manipulated by an iterative Tikhonov regularization with a series of iterative Landweber filter factors. Third, the regularization parameter is optimized by Generalized Cross-Validation (GCV) to mitigate the ill-posedness of the force reconstruction model. A Dynamic Force Measurement System (DFMS) for force reconstruction is designed to illustrate the application advantages of the suggested FARF method. The experimental results show that the FARF method with r = 0.1 and α = 20, which has a PRE of 0.36% and an RE of 2.45%, is superior to the other cases of the FARF method and to traditional regularization methods for dynamic force reconstruction.
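
Generalized Cross-Validation for choosing a Tikhonov parameter can be sketched via the SVD filter factors. The test problem below is a generic ill-conditioned matrix, not the paper's force-reconstruction model, and the fractional-order filtering step is omitted.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov solution via SVD filter factors f_i = s_i^2 / (s_i^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ ((f * (U.T @ b)) / s)

def gcv_choose(A, b, lambdas):
    """GCV: minimize ||A x_lam - b||^2 / (m - sum_i f_i)^2 over the grid."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    scores = []
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)
        resid2 = np.sum(((1.0 - f) * beta) ** 2)
        scores.append(resid2 / (m - np.sum(f)) ** 2)
    return lambdas[int(np.argmin(scores))]

# Ill-conditioned test problem with noise injected along the weakest mode.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.normal(size=(8, 8)))
V, _ = np.linalg.qr(rng.normal(size=(8, 8)))
s = np.logspace(0, -4, 8)
A = U @ np.diag(s) @ V.T
x_true = np.ones(8)
b = A @ x_true + 1e-2 * U[:, -1]

lam_hat = gcv_choose(A, b, [1e-4, 1e-3, 1e-2, 1e-1])
x_reg = tikhonov_svd(A, b, lam_hat)
x_naive = np.linalg.solve(A, b)
```

The unregularized solve amplifies the noise by the reciprocal of the smallest singular value, while any moderate GCV-selected parameter suppresses that amplification.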

  4. On the optimization of electromagnetic geophysical data: Application of the PSO algorithm

    NASA Astrophysics Data System (ADS)

    Godio, A.; Santilano, A.

    2018-01-01

    The Particle Swarm Optimization (PSO) algorithm solves constrained multi-parameter problems and is suitable for the simultaneous optimization of linear and nonlinear problems, under the assumption that the forward modeling rests on a good understanding of the ill-posed geophysical inverse problem. We apply PSO to the geophysical inverse problem of inferring an Earth model, i.e. the electrical resistivity at depth, consistent with the observed geophysical data. The method does not require an initial model and can be easily constrained, according to external information, for each single sounding. The optimization process for estimating the model parameters from the electromagnetic soundings focuses on the discussion of the objective function to be minimized. We discuss the possibility of introducing vertical and lateral constraints into the objective function, with an Occam-like regularization. A sensitivity analysis allowed us to check the performance of the algorithm. The reliability of the approach is tested on synthetic, real Audio-Magnetotelluric (AMT) and Long Period MT data. The method appears able to solve complex problems and allows us to estimate the a posteriori distribution of the model parameters.
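
A minimal global-best PSO loop illustrates the optimizer itself. The toy quadratic objective stands in for the electromagnetic misfit, and the Occam-like constraints discussed above are omitted; all parameter values here are conventional defaults, not the paper's settings.

```python
import numpy as np

def pso(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                   # velocities
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()               # global best
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)                     # keep within bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

# Toy misfit: recover a two-parameter "model" from its known minimum.
target = np.array([2.0, -1.0])
objective = lambda m: np.sum((m - target) ** 2)
m_best, f_best = pso(objective, (np.full(2, -5.0), np.full(2, 5.0)))
```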

  5. Unraveling the Mystery of the Origin of Mathematical Problems: Using a Problem-Posing Framework with Prospective Mathematics Teachers

    ERIC Educational Resources Information Center

    Contreras, Jose

    2007-01-01

    In this article, I model how a problem-posing framework can be used to enhance our abilities to systematically generate mathematical problems by modifying the attributes of a given problem. The problem-posing model calls for the application of the following fundamental mathematical processes: proving, reversing, specializing, generalizing, and…

  6. Estimation of Release History of Pollutant Source and Dispersion Coefficient of Aquifer Using Trained ANN Model

    NASA Astrophysics Data System (ADS)

    Srivastava, R.; Ayaz, M.; Jain, A.

    2013-12-01

    Knowledge of the release history of a groundwater pollutant source is critical in the prediction of the future trend of the pollutant movement and in choosing an effective remediation strategy. Moreover, for source sites which have undergone an ownership change, the estimated release history can be utilized for appropriate allocation of the costs of remediation among different parties who may be responsible for the contamination. Estimation of the release history with the help of concentration data is an inverse problem that becomes ill-posed because of the irreversible nature of the dispersion process. Breakthrough curves represent the temporal variation of pollutant concentration at a particular location, and contain significant information about the source and the release history. Several methodologies have been developed to solve the inverse problem of estimating the source and/or porous medium properties using the breakthrough curves as a known input. A common problem in the use of the breakthrough curves for this purpose is that, in most field situations, we have little or no information about the time of measurement of the breakthrough curve with respect to the time when the pollutant source becomes active. We develop an Artificial Neural Network (ANN) model to estimate the release history of a groundwater pollutant source through the use of breakthrough curves. It is assumed that the source location is known but the time dependent contaminant source strength is unknown. This temporal variation of the strength of the pollutant source is the output of the ANN model that is trained using the Levenberg-Marquardt algorithm utilizing synthetically generated breakthrough curves as inputs. A single hidden layer was used in the neural network and, to utilize just sufficient information and reduce the required sampling duration, only the upper half of the curve is used as the input pattern. The second objective of this work was to identify the aquifer parameters. 
An ANN model was developed to estimate the longitudinal and transverse dispersion coefficients following a philosophy similar to the one used earlier. Performance of the trained ANN model is evaluated for a three-dimensional case, first with perfect data and then with erroneous data with an error level of up to 10 percent. Since the solution is highly sensitive to errors in the input data, instead of using the raw data we smooth the upper half of the erroneous breakthrough curve by approximating it with a fourth-order polynomial, which is used as the input pattern for the ANN model. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and, in addition to minimizing the effect of uncertainties in the tail ends of the breakthrough curve, is capable of estimating both the release history and aquifer parameters reasonably well. Results for the case with erroneous data having different error levels demonstrate the practical applicability and robustness of the ANN models. It is observed that as the error level increases, the correlation coefficient of the training, testing and validation regressions tends to decrease, although the value stays within acceptable limits even for reasonably large error levels.
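
The polynomial preprocessing step can be sketched as follows. The Gaussian-shaped curve, noise level, and half-maximum window below are hypothetical choices for illustration, not the study's data.

```python
import numpy as np

# Hypothetical breakthrough curve (Gaussian-shaped) with additive noise;
# mimicking the paper's preprocessing, the upper half of the curve is
# approximated by a fourth-order polynomial before use as an ANN input.
t = np.linspace(0.0, 10.0, 101)
c_clean = np.exp(-((t - 5.0) ** 2) / 2.0)
rng = np.random.default_rng(4)
c_noisy = c_clean + 0.05 * rng.normal(size=t.size)     # noisy measurement

upper = c_clean >= 0.5 * c_clean.max()                 # "upper half" window
coef = np.polyfit(t[upper], c_noisy[upper], deg=4)     # 4th-order fit
c_smooth = np.polyval(coef, t[upper])
rmse_noisy = np.sqrt(np.mean((c_noisy[upper] - c_clean[upper]) ** 2))
rmse_smooth = np.sqrt(np.mean((c_smooth - c_clean[upper]) ** 2))
```

Averaging roughly two dozen noisy samples through five polynomial coefficients reduces the noise level of the input pattern, which is the point of the smoothing step.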

  7. A Quantitative Evaluation of Drive Pattern Selection for Optimizing EIT-Based Stretchable Sensors

    PubMed Central

    Nefti-Meziani, Samia; Carbonaro, Nicola

    2017-01-01

    Electrical Impedance Tomography (EIT) is a medical imaging technique that has been recently used to realize stretchable pressure sensors. In this method, voltage measurements are taken at electrodes placed at the boundary of the sensor and are used to reconstruct an image of the applied touch pressure points. The drawback with EIT-based sensors, however, is their low spatial resolution due to the ill-posed nature of the EIT reconstruction. In this paper, we show our performance evaluation of different EIT drive patterns, specifically strategies for electrode selection when performing current injection and voltage measurements. We compare voltage data with Signal-to-Noise Ratio (SNR) and Boundary Voltage Changes (BVC), and study image quality with Size Error (SE), Position Error (PE) and Ringing (RNG) parameters, in the case of one-point and two-point simultaneous contact locations. The study shows that, in order to improve the performance of EIT based sensors, the electrode selection strategies should change dynamically according to the location of the input stimuli. In fact, the selection of one drive pattern over another can improve the target size detection and position accuracy up to 4.7% and 18%, respectively. PMID:28858252

  8. A Quantitative Evaluation of Drive Pattern Selection for Optimizing EIT-Based Stretchable Sensors.

    PubMed

    Russo, Stefania; Nefti-Meziani, Samia; Carbonaro, Nicola; Tognetti, Alessandro

    2017-08-31

    Electrical Impedance Tomography (EIT) is a medical imaging technique that has been recently used to realize stretchable pressure sensors. In this method, voltage measurements are taken at electrodes placed at the boundary of the sensor and are used to reconstruct an image of the applied touch pressure points. The drawback with EIT-based sensors, however, is their low spatial resolution due to the ill-posed nature of the EIT reconstruction. In this paper, we show our performance evaluation of different EIT drive patterns, specifically strategies for electrode selection when performing current injection and voltage measurements. We compare voltage data with Signal-to-Noise Ratio (SNR) and Boundary Voltage Changes (BVC), and study image quality with Size Error (SE), Position Error (PE) and Ringing (RNG) parameters, in the case of one-point and two-point simultaneous contact locations. The study shows that, in order to improve the performance of EIT based sensors, the electrode selection strategies should change dynamically according to the location of the input stimuli. In fact, the selection of one drive pattern over another can improve the target size detection and position accuracy up to 4.7% and 18%, respectively.

  9. Optimal accelerometer placement on a robot arm for pose estimation

    NASA Astrophysics Data System (ADS)

    Wijayasinghe, Indika B.; Sanford, Joseph D.; Abubakar, Shamsudeen; Saadatzi, Mohammad Nasser; Das, Sumit K.; Popa, Dan O.

    2017-05-01

    The performance of robots to carry out tasks depends in part on the sensor information they can utilize. Usually, robots are fitted with angle joint encoders that are used to estimate the position and orientation (or the pose) of its end-effector. However, there are numerous situations, such as in legged locomotion, mobile manipulation, or prosthetics, where such joint sensors may not be present at every, or any joint. In this paper we study the use of inertial sensors, in particular accelerometers, placed on the robot that can be used to estimate the robot pose. Studying accelerometer placement on a robot involves many parameters that affect the performance of the intended positioning task. Parameters such as the number of accelerometers, their size, geometric placement and Signal-to-Noise Ratio (SNR) are included in our study of their effects for robot pose estimation. Due to the ubiquitous availability of inexpensive accelerometers, we investigated pose estimation gains resulting from using increasingly large numbers of sensors. Monte-Carlo simulations are performed with a two-link robot arm to obtain the expected value of an estimation error metric for different accelerometer configurations, which are then compared for optimization. Results show that, with a fixed SNR model, the pose estimation error decreases with increasing number of accelerometers, whereas for a SNR model that scales inversely to the accelerometer footprint, the pose estimation error increases with the number of accelerometers. It is also shown that the optimal placement of the accelerometers depends on the method used for pose estimation. The findings suggest that an integration-based method favors placement of accelerometers at the extremities of the robot links, whereas a kinematic-constraints-based method favors a more uniformly distributed placement along the robot links.

  10. The Sizing and Optimization Language (SOL): A computer language to improve the user/optimizer interface

    NASA Technical Reports Server (NTRS)

    Lucas, S. H.; Scotti, S. J.

    1989-01-01

The nonlinear mathematical programming method (formal optimization) has had many applications in engineering design. A figure illustrates the use of optimization techniques in the design process. The design process begins with the design problem, such as the classic example of the two-bar truss designed for minimum weight as seen in the leftmost part of the figure. If formal optimization is to be applied, the design problem must be recast in the form of an optimization problem consisting of an objective function, design variables, and constraint function relations. The middle part of the figure shows the two-bar truss design posed as an optimization problem. The total truss weight is the objective function, the tube diameter and truss height are design variables, with stress and Euler buckling considered as constraint function relations. Lastly, the designer develops or obtains analysis software containing a mathematical model of the object being optimized, and then interfaces the analysis routine with existing optimization software such as CONMIN, ADS, or NPSOL. This final stage of software development can be both tedious and error-prone. The Sizing and Optimization Language (SOL), a special-purpose computer language whose goal is to make the software implementation phase of optimum design easier and less error-prone, is presented.
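A toy version of the two-bar truss sizing problem described above, shown only to make the objective / design-variable / constraint structure concrete. All numbers (load, geometry, material) and the thin-walled-tube formulas are illustrative assumptions, and a modern solver stands in for CONMIN/ADS/NPSOL.

```python
import numpy as np
from scipy.optimize import minimize

P, B = 150e3, 0.75                     # apex load [N], half-span [m] (made up)
E, RHO, SMAX = 210e9, 7850.0, 250e6    # steel: modulus, density, allowable stress
T = 0.01                               # fixed tube wall thickness [m]

def members(x):
    d, h = x                  # design variables: tube diameter, truss height
    L = np.hypot(B, h)        # member length
    A = np.pi*d*T             # thin-walled tube cross-section area
    F = P*L/(2*h)             # compressive member force from statics
    return L, A, F

def weight(x):                # objective: total weight of both members
    L, A, _ = members(x)
    return 2*RHO*A*L

def g_stress(x):              # stress constraint sigma <= SMAX, dimensionless form
    _, A, F = members(x)
    return SMAX*A/F - 1.0

def g_buckling(x):            # Euler buckling F <= pi^2*E*I/L^2, dimensionless form
    d, _ = x
    L, _, F = members(x)
    I = np.pi*d**3*T/8.0      # thin-walled tube second moment of area
    return (np.pi**2*E*I/L**2)/F - 1.0

res = minimize(weight, x0=[0.08, 1.0], bounds=[(0.01, 0.5), (0.1, 5.0)],
               constraints=[{"type": "ineq", "fun": g_stress},
                            {"type": "ineq", "fun": g_buckling}])
```

Writing the constraints dimensionless (ratio minus one) keeps the gradients well scaled, which matters for the sequential programming methods these optimizers use.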

  11. An algorithm for variational data assimilation of contact concentration measurements for atmospheric chemistry models

    NASA Astrophysics Data System (ADS)

    Penenko, Alexey; Penenko, Vladimir

    2014-05-01

The problem of assimilating contact concentration measurements is considered for convection-diffusion-reaction models arising in atmospheric chemistry studies. The high dimensionality of the models imposes strict requirements on the computational efficiency of the algorithms. Data assimilation is carried out within the variational approach on a single time step of the approximated model. A control function is introduced into the source term of the model to provide flexibility for data assimilation. This function is evaluated as the minimizer of a target functional that connects its norm to the misfit between measured and model-simulated data. In this case the mathematical model acts as a natural Tikhonov regularizer for the ill-posed measurement data inversion problem. This provides a flow-dependent and physically plausible structure of the resulting analysis and reduces the need to calculate the model error covariance matrices required in the conventional approach to data assimilation. The advantage comes at the cost of solving an adjoint problem. This issue is addressed within the framework of a splitting-based realization of the basic convection-diffusion-reaction model. The model is split with respect to physical processes and spatial variables. Contact measurement data are assimilated on each one-dimensional convection-diffusion splitting stage. In this case a computationally efficient direct scheme for both the direct and adjoint problem solutions can be constructed based on the matrix sweep method. The data assimilation (or regularization) parameter that regulates the ratio between model and data in the resulting analysis is obtained with the Morozov discrepancy principle. For proper performance the algorithm requires an estimate of the measurement noise. In the case of Gaussian errors, the probability that the Chi-squared-based estimate is an upper bound acts as the assimilation parameter.
The solution obtained can be used as an initial guess for data assimilation algorithms that assimilate outside the splitting stages and involve iterations. The splitting stage responsible for chemical transformation processes is realized with an explicit discrete-analytical scheme in time. The scheme is based on analytical extraction of the exponential terms from the solution, which guarantees that the evaluated concentrations remain positive. The splitting-based structure of the algorithm provides means for efficient parallel realization. The work is partially supported by Programs No 4 of the Presidium of RAS and No 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187 and Integrating projects of SD RAS No 8 and 35. Our studies are in line with the goals of COST Action ES1004.
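A sketch of the Morozov discrepancy principle mentioned above, in the simplest Tikhonov setting. A generic ill-conditioned linear system (a Hilbert matrix) stands in for the convection-diffusion-reaction model; the idea is only to show how the regularization parameter is chosen so that the residual matches the noise level.

```python
import numpy as np

rng = np.random.default_rng(0)

def tikhonov(A, b, alpha):
    """Tikhonov-regularized solution x_alpha = (A^T A + alpha I)^-1 A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha*np.eye(n), A.T @ b)

def morozov_alpha(A, b, delta, lo=1e-14, hi=1e2, iters=80):
    """Geometric bisection on alpha: the residual ||A x_alpha - b|| grows
    monotonically with alpha, so bracket the value where it equals delta."""
    for _ in range(iters):
        mid = np.sqrt(lo*hi)
        if np.linalg.norm(A @ tikhonov(A, b, mid) - b) > delta:
            hi = mid
        else:
            lo = mid
    return np.sqrt(lo*hi)

# ill-conditioned test problem: 10x10 Hilbert matrix
n = 10
A = 1.0/(np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
noise = rng.standard_normal(n)
delta = 1e-4
b = A @ x_true + delta*noise/np.linalg.norm(noise)   # noise of norm exactly delta

alpha = morozov_alpha(A, b, delta)
x_reg = tikhonov(A, b, alpha)
x_naive = np.linalg.solve(A, b)   # unregularized solve amplifies the noise
```

The regularized solution stays near the true one while the naive inverse is destroyed by the measurement error, which is the point of choosing alpha by the discrepancy between model output and data.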

  12. Filtered maximum likelihood expectation maximization based global reconstruction for bioluminescence tomography.

    PubMed

    Yang, Defu; Wang, Lin; Chen, Dongmei; Yan, Chenggang; He, Xiaowei; Liang, Jimin; Chen, Xueli

    2018-05-17

The reconstruction of bioluminescence tomography (BLT) is severely ill-posed due to the insufficient measurements and the diffusive nature of light propagation. A predefined permissible source region (PSR) combined with regularization terms is one common strategy to reduce such ill-posedness. However, the region of the PSR is usually hard to determine and can easily be affected by subjective judgment. Hence, we theoretically developed a filtered maximum likelihood expectation maximization (fMLEM) method for BLT. Our method avoids predefining the PSR and provides a robust and accurate result for global reconstruction. In the method, the simplified spherical harmonics approximation (SPN) was applied to characterize diffuse light propagation in the medium, and the statistical estimation-based MLEM algorithm combined with a filter function was used to solve the inverse problem. We systematically demonstrated the performance of our method by regular geometry- and digital mouse-based simulations and a liver cancer-based in vivo experiment. Graphical abstract: The filtered MLEM-based global reconstruction method for BLT.
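For reference, the generic MLEM iteration underlying the method above, sketched for an arbitrary non-negative linear system (the paper's fMLEM additionally applies a filter function each iteration, which is omitted here; the test matrix and source are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlem(A, b, n_iter=500):
    """Multiplicative MLEM update  x <- x * A^T(b / (A x)) / A^T 1.
    Every iterate stays non-negative, so no explicit positivity constraint
    (and no permissible source region) is needed."""
    m, n = A.shape
    x = np.ones(n)
    sens = A.T @ np.ones(m)            # sensitivity (column sums of A)
    for _ in range(n_iter):
        x *= (A.T @ (b / (A @ x))) / sens
    return x

A = rng.random((40, 12)) + 0.1                        # strictly positive system matrix
x_true = np.zeros(12); x_true[[2, 7]] = [3.0, 1.5]    # sparse non-negative source
b = A @ x_true                                        # noiseless synthetic measurements
x_rec = mlem(A, b)
```

The built-in non-negativity of the multiplicative update is what distinguishes this family from the regularized least-squares reconstructions the abstract contrasts it with.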

  13. A New Problem-Posing Approach Based on Problem-Solving Strategy: Analyzing Pre-Service Primary School Teachers' Performance

    ERIC Educational Resources Information Center

    Kiliç, Çigdem

    2017-01-01

This study examined pre-service primary school teachers' performance in posing problems that require knowledge of problem-solving strategies. Quantitative and qualitative methods were combined. The 120 participants were asked to pose a problem that could be solved by using find-a-pattern, a particular problem-solving strategy. After that,…

  14. Evaluation of lens distortion errors using an underwater camera system for video-based motion analysis

    NASA Technical Reports Server (NTRS)

    Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.

    1994-01-01

Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water which simulates zero-gravity with neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR, and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
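An illustrative version of the error computation described above: compare known grid-point coordinates with their "digitized" positions under a simple radial (barrel) distortion model, and report the error as a percentage of the field of view. The distortion coefficient k1 is made up; it is not a WETF calibration value.

```python
import numpy as np

gx, gy = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
known = np.stack([gx.ravel(), gy.ravel()], axis=1)    # known grid coordinates

k1 = -0.05                                            # hypothetical barrel distortion
r2 = (known**2).sum(axis=1, keepdims=True)
digitized = known * (1 + k1*r2)                       # distorted "measured" points

err = np.linalg.norm(digitized - known, axis=1)       # error = point-to-point distance
pct = 100.0*err/2.0                                   # percent of the 2-unit field width

worst = pct.max()              # largest error occurs at the image corners
center = pct[len(pct)//2]      # the central grid point is essentially undistorted
```

Because the radial error grows with distance from the optical axis, restricting measurements to the inner portion of the image trims the worst-case error, consistent with the abstract's recommendation to avoid the outermost regions of the lens.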

  15. Artifacts as Sources for Problem-Posing Activities

    ERIC Educational Resources Information Center

    Bonotto, Cinzia

    2013-01-01

    The problem-posing process represents one of the forms of authentic mathematical inquiry which, if suitably implemented in classroom activities, could move well beyond the limitations of word problems, at least as they are typically utilized. The two exploratory studies presented sought to investigate the impact of "problem-posing" activities when…

  16. The Art of Problem Posing. 3rd Edition

    ERIC Educational Resources Information Center

    Brown, Stephen I.; Walter, Marion I.

    2005-01-01

    The new edition of this classic book describes and provides a myriad of examples of the relationships between problem posing and problem solving, and explores the educational potential of integrating these two activities in classrooms at all levels. "The Art of Problem Posing, Third Edition" encourages readers to shift their thinking…

  17. Identification errors in the blood transfusion laboratory: a still relevant issue for patient safety.

    PubMed

    Lippi, Giuseppe; Plebani, Mario

    2011-04-01

    Remarkable technological advances and increased awareness have both contributed to decrease substantially the uncertainty of the analytical phase, so that the manually intensive preanalytical activities currently represent the leading sources of errors in laboratory and transfusion medicine. Among preanalytical errors, misidentification and mistransfusion are still regarded as a considerable problem, posing serious risks for patient health and carrying huge expenses for the healthcare system. As such, a reliable policy of risk management should be readily implemented, developing through a multifaceted approach to prevent or limit the adverse outcomes related to transfusion reactions from blood incompatibility. This strategy encompasses root cause analysis, compliance with accreditation requirements, strict adherence to standard operating procedures, guidelines and recommendations for specimen collection, use of positive identification devices, rejection of potentially misidentified specimens, informatics data entry, query host communication, automated systems for patient identification and sample labeling and an adequate and safe environment. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. Model reduction by trimming for a class of semi-Markov reliability models and the corresponding error bound

    NASA Technical Reports Server (NTRS)

    White, Allan L.; Palumbo, Daniel L.

    1991-01-01

    Semi-Markov processes have proved to be an effective and convenient tool to construct models of systems that achieve reliability by redundancy and reconfiguration. These models are able to depict complex system architectures and to capture the dynamics of fault arrival and system recovery. A disadvantage of this approach is that the models can be extremely large, which poses both a model and a computational problem. Techniques are needed to reduce the model size. Because these systems are used in critical applications where failure can be expensive, there must be an analytically derived bound for the error produced by the model reduction technique. A model reduction technique called trimming is presented that can be applied to a popular class of systems. Automatic model generation programs were written to help the reliability analyst produce models of complex systems. This method, trimming, is easy to implement and the error bound easy to compute. Hence, the method lends itself to inclusion in an automatic model generator.

  19. A new approach to identify, classify and count drug-related events

    PubMed Central

    Bürkle, Thomas; Müller, Fabian; Patapovas, Andrius; Sonst, Anja; Pfistermeister, Barbara; Plank-Kiegele, Bettina; Dormann, Harald; Maas, Renke

    2013-01-01

Aims: The incidence of clinical events related to medication errors and/or adverse drug reactions reported in the literature varies by a degree that cannot solely be explained by the clinical setting, the varying scrutiny of investigators or varying definitions of drug-related events. Our hypothesis was that the individual complexity of many clinical cases may pose relevant limitations for current definitions and algorithms used to identify, classify and count adverse drug-related events. Methods: Based on clinical cases derived from an observational study we identified and classified common clinical problems that cannot be adequately characterized by the currently used definitions and algorithms. Results: It appears that some key models currently used to describe the relation of medication errors (MEs), adverse drug reactions (ADRs) and adverse drug events (ADEs) can easily be misinterpreted or contain logical inconsistencies that limit their accurate use to all but the simplest clinical cases. A key limitation of current models is the inability to deal with complex interactions such as one drug causing two clinically distinct side effects or multiple drugs contributing to a single clinical event. Using a large set of clinical cases we developed a revised model of the interdependence between MEs, ADEs and ADRs and extended current event definitions when multiple medications cause multiple types of problems. We propose algorithms that may help to improve the identification, classification and counting of drug-related events. Conclusions: The new model may help to overcome some of the limitations that complex clinical cases pose to current paper- or software-based drug therapy safety. PMID:24007453

  20. Reverse Flood Routing with the Lag-and-Route Storage Model

    NASA Astrophysics Data System (ADS)

    Mazi, K.; Koussis, A. D.

    2010-09-01

    This work presents a method for reverse routing of flood waves in open channels, which is an inverse problem of the signal identification type. Inflow determination from outflow measurements is useful in hydrologic forensics and in optimal reservoir control, but has been seldom studied. Such problems are ill posed and their solution is sensitive to small perturbations present in the data, or to any related uncertainty. Therefore the major difficulty in solving this inverse problem consists in controlling the amplification of errors that inevitably befall flow measurements, from which the inflow signal is to be determined. The lag-and-route model offers a convenient framework for reverse routing, because not only is formal deconvolution not required, but also reverse routing is through a single linear reservoir. In addition, this inversion degenerates to calculating the intermediate inflow (prior to the lag step) simply as the sum of the outflow and of its time derivative multiplied by the reservoir’s time constant. The remaining time shifting (lag) of the intermediate, reversed flow presents no complications, as pure translation causes no error amplification. Note that reverse routing with the inverted Muskingum scheme (Koussis et al., submitted to the 12th Plinius Conference) fails when that scheme is specialised to the Kalinin-Miljukov model (linear reservoirs in series). The principal functioning of the reverse routing procedure was verified first with perfect field data (outflow hydrograph generated by forward routing of a known inflow hydrograph). The field data were then seeded with random error. To smooth the oscillations caused by the imperfect (measured) outflow data, we applied a multipoint Savitzky-Golay low-pass filter. The combination of reverse routing and filtering achieved an effective recovery of the inflow signal extremely efficiently. 
Specifically, we compared the reverse routing results of the inverted lag-and-route model and of the inverted Kalinin-Miljukov model. The latter applies the lag-and-route model’s single-reservoir inversion scheme sequentially to its cascade of linear reservoirs, the number of which is related to the stream's hydromorphology. For this purpose, we used the example of Bruen & Dooge (2007), who back-routed flow hydrographs in a 100-km long prismatic channel using a scheme for the reverse solution of the St. Venant equations of flood wave motion. The lag-and-route reverse routing model recovered the inflow hydrograph with comparable accuracy to that of the multi-reservoir, inverted Kalinin-Miljukov model, both performing as well as the box-scheme for reverse routing with the St. Venant equations. In conclusion, the success in the regaining of the inflow signal by the devised single-reservoir reverse routing procedure, with multipoint low-pass filtering, can be attributed to its simple computational structure that endows it with remarkable robustness and exceptional efficiency.
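A sketch of the single-reservoir inversion described above: with storage S = K·Q, the intermediate inflow is simply I = Q + K·dQ/dt, and a Savitzky-Golay filter tames the noise amplified by the derivative. The hydrograph shape, K, and the noise level are illustrative choices, not the study's data.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(1)
dt, K = 0.1, 2.0                                # time step and reservoir constant [h]
t = np.arange(0.0, 48.0, dt)
inflow = 10.0 + 90.0*(t/6.0)**2*np.exp(-t/3.0)  # synthetic inflow hydrograph [m^3/s]

# forward routing through the linear reservoir: K*dQ/dt = I - Q (explicit Euler)
Q = np.empty_like(t)
Q[0] = inflow[0]
for k in range(len(t) - 1):
    Q[k+1] = Q[k] + dt*(inflow[k] - Q[k])/K

noisy = Q*(1.0 + 0.02*rng.standard_normal(len(t)))           # 2% measurement error
smooth = savgol_filter(noisy, window_length=21, polyorder=3)  # low-pass filtering

recovered = smooth + K*np.gradient(smooth, dt)                # reverse routing step
```

Applying the same formula directly to the unfiltered outflow amplifies the measurement noise through the time derivative, which is exactly the error-amplification issue the abstract attributes to this inverse problem.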

  1. An Investigation on Chinese Teachers' Realistic Problem Posing and Problem Solving Ability and Beliefs

    ERIC Educational Resources Information Center

    Chen, Limin; Van Dooren, Wim; Chen, Qi; Verschaffel, Lieven

    2011-01-01

    In the present study, which is a part of a research project about realistic word problem solving and problem posing in Chinese elementary schools, a problem solving and a problem posing test were administered to 128 pre-service and in-service elementary school teachers from Tianjin City in China, wherein the teachers were asked to solve 3…

  2. Enhancing students’ mathematical problem posing skill through writing in performance tasks strategy

    NASA Astrophysics Data System (ADS)

    Kadir; Adelina, R.; Fatma, M.

    2018-01-01

Many researchers have studied the Writing in Performance Task (WiPT) strategy in learning, but only a few have paid attention to its relation to the problem-posing skill in mathematics. The problem-posing skill in mathematics covers problem reformulation, reconstruction, and imitation. The purpose of the present study was to examine the effect of the WiPT strategy on students' mathematical problem-posing skill. The research was conducted at a Public Junior Secondary School in Tangerang Selatan. It used a quasi-experimental method with a randomized control-group post-test design. The sample consisted of 64 students: 32 in the experimental group and 32 in the control group. A cluster random sampling technique was used for sampling. The research data were obtained by testing. The research shows that the problem-posing skill of students taught by the WiPT strategy is higher than that of students taught by a conventional strategy. The research concludes that the WiPT strategy is more effective in enhancing students' mathematical problem-posing skill compared to the conventional strategy.

  3. Decoupled Method for Reconstruction of Surface Conditions From Internal Temperatures On Ablative Materials With Uncertain Recession Model

    NASA Technical Reports Server (NTRS)

    Oliver, A. Brandon

    2017-01-01

Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response, making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of equivalent accuracy to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion-limited and kinetically limited recession models.

  4. Reconstruction of electrical impedance tomography (EIT) images based on the expectation maximum (EM) method.

    PubMed

    Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi

    2012-11-01

Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. The image reconstruction for EIT is an inverse problem, which is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values in the solution. The negativity of the solution produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed to a non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses the strategies of choosing parameters. Simulation and experimental results indicate that reconstructed images with higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  5. FOREWORD: Tackling inverse problems in a Banach space environment: from theory to applications Tackling inverse problems in a Banach space environment: from theory to applications

    NASA Astrophysics Data System (ADS)

    Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara

    2012-10-01

    Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason.
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1.
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results.
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.

  6. The missions and means framework as an ontology

    NASA Astrophysics Data System (ADS)

    Deitz, Paul H.; Bray, Britt E.; Michaelis, James R.

    2016-05-01

The analysis of warfare frequently suffers from an absence of logical structure for (a) explicitly specifying the military mission and (b) quantitatively evaluating the mission utility of alternative products and services. In 2003, the Missions and Means Framework (MMF) was developed to redress these shortcomings. The MMF supports multiple combatants, levels of war and, in fact, is a formal embodiment of the Military Decision-Making Process (MDMP). A major effect of incomplete analytic discipline in military systems analyses is that they frequently fall into the category of ill-posed problems in which they are under-specified, under-determined, or under-constrained. Critical context is often missing. This is frequently the result of incomplete materiel requirements analyses which have unclear linkages to higher levels of warfare, system-of-systems linkages, tactics, techniques and procedures, and the effect of opposition forces. In many instances the capabilities of materiel are assumed to be immutable. This is a result of not assessing how platform components morph over time due to damage, logistics, or repair. Though ill-posed issues can be found in many places in military analysis, probably the greatest challenge comes in the disciplines of C4ISR supported by ontologies in which formal naming and definition of the types, properties, and interrelationships of the entities are fundamental to characterizing mission success. Though the MMF was not conceived as an ontology, over the past decade some workers, particularly in the field of communication, have labelled the MMF as such. This connection will be described and discussed.

  7. Fundamental Bounds for Sequence Reconstruction from Nanopore Sequencers.

    PubMed

    Magner, Abram; Duda, Jarosław; Szpankowski, Wojciech; Grama, Ananth

    2016-06-01

Nanopore sequencers are emerging as promising new platforms for high-throughput sequencing. As with other technologies, sequencer errors pose a major challenge for their effective use. In this paper, we present a novel information theoretic analysis of the impact of insertion-deletion (indel) errors in nanopore sequencers. In particular, we consider the following problems: (i) for given indel error characteristics and rate, what is the probability of accurate reconstruction as a function of sequence length; (ii) using replicated extrusion (the process of passing a DNA strand through the nanopore), what is the number of replicas needed to accurately reconstruct the true sequence with high probability? Our results provide a number of important insights: (i) the probability of accurate reconstruction of a sequence from a single sample in the presence of indel errors tends quickly (i.e., exponentially) to zero as the length of the sequence increases; and (ii) replicated extrusion is an effective technique for accurate reconstruction. We show that for typical distributions of indel errors, the required number of replicas is a slow function (polylogarithmic) of sequence length, implying that through replicated extrusion, we can sequence large reads using nanopore sequencers. Moreover, we show that in certain cases, the required number of replicas can be related to information-theoretic parameters of the indel error distributions.
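A much simplified, substitution-only illustration of point (ii) above (the paper treats the harder insertion/deletion case): if each replica reads a base correctly with probability p, a majority vote over n replicas is wrong at that base with probability P_fail(n), and requiring P_fail(n) ≤ delta/L for a length-L sequence (a union bound) makes the needed number of replicas grow only slowly with L. The values of p and delta here are arbitrary.

```python
import math

def majority_fail(n, p):
    """P(majority vote over n i.i.d. replicas is wrong at one base), n odd."""
    q = 1.0 - p
    return sum(math.comb(n, k)*q**k*p**(n - k) for k in range((n + 1)//2, n + 1))

def replicas_needed(L, p=0.9, delta=0.01):
    """Smallest odd n with whole-sequence failure probability <= delta (union bound)."""
    n = 1
    while majority_fail(n, p) > delta/L:
        n += 2
    return n

n_short, n_long = replicas_needed(10**3), replicas_needed(10**6)
# the sequence is 1000x longer, yet only a handful of extra replicas are required
```

The per-base failure probability decays exponentially in n, so the replica count needed scales like log L in this toy model, consistent in spirit with the polylogarithmic scaling the paper proves for indel channels.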

  8. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems

    PubMed Central

    Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui

    2017-01-01

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which a robot should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology that enables robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several feature points on the part before the robot moved to grasp it. To estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. The measuring poses of the camera are optimized based on uncertainty analysis. The principle of the robotic intelligent grasping system was also developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the described method were compared with those produced by a laser tracker; the results show RMS angle and position errors of about 0.0228° and 0.4603 mm, respectively. Robotic intelligent grasping tests were also performed successfully. PMID:28216555

  9. Monocular-Based 6-Degree of Freedom Pose Estimation Technology for Robotic Intelligent Grasping Systems.

    PubMed

    Liu, Tao; Guo, Yin; Yang, Shourui; Yin, Shibin; Zhu, Jigui

    2017-02-14

    Industrial robots are expected to undertake ever more advanced tasks in the modern manufacturing industry, such as intelligent grasping, in which a robot should be capable of recognizing the position and orientation of a part before grasping it. In this paper, a monocular-based 6-degree of freedom (DOF) pose estimation technology that enables robots to grasp large-size parts at informal poses is proposed. A camera was mounted on the robot end-flange and oriented to measure several feature points on the part before the robot moved to grasp it. To estimate the part pose, a nonlinear optimization model based on the camera object space collinearity error in different poses is established, and the initial iteration value is estimated with the differential transformation. The measuring poses of the camera are optimized based on uncertainty analysis. The principle of the robotic intelligent grasping system was also developed, with which the robot could adjust its pose to grasp the part. In experimental tests, the part poses estimated with the described method were compared with those produced by a laser tracker; the results show RMS angle and position errors of about 0.0228° and 0.4603 mm, respectively. Robotic intelligent grasping tests were also performed successfully.
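The paper's collinearity-error optimization model is not reproduced here, but the core idea of recovering a 6-DOF pose (rotation plus translation) from measured feature points can be sketched generically with the Kabsch/SVD method for matched 3-D point correspondences. This is a stand-in illustration, not the authors' algorithm; all point data below are synthetic.

```python
import numpy as np

# Generic 6-DOF pose recovery from matched 3-D feature points via the
# Kabsch (SVD) method. This is NOT the paper's collinearity-error model;
# it only illustrates estimating a rotation R and translation t such
# that q_i = R @ p_i + t for corresponding points p_i, q_i.

def estimate_pose(P, Q):
    """P, Q: (N, 3) arrays of matched points. Returns (R, t)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Synthetic check: rotate/translate random points and recover the pose.
rng = np.random.default_rng(0)
P = rng.normal(size=(8, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + t_true
R_est, t_est = estimate_pose(P, Q)
```

With noise-free correspondences the pose is recovered to machine precision; in practice the measured feature points would carry camera noise, which is what motivates the paper's uncertainty-based optimization of the measuring poses.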

  10. Analysis of jet-airfoil interaction noise sources by using a microphone array technique

    NASA Astrophysics Data System (ADS)

    Fleury, Vincent; Davy, Renaud

    2016-03-01

    The paper is concerned with the characterization of jet noise sources and jet-airfoil interaction sources by using microphone array data. The measurements were carried out in Cepra19, the anechoic open test section wind tunnel of Onera. The microphone array technique relies on the convected Lighthill and Ffowcs Williams-Hawkings acoustic analogy equations. The cross-spectrum of the source term of the analogy equation is sought as the optimal solution to a minimal-error equation using the measured microphone cross-spectra as reference. This inverse problem is, however, ill-posed, so a penalty term based on a localization operator is added to improve the recovery of jet noise sources. The analysis of isolated jet noise data in the subsonic regime shows the contribution of the conventional mixing noise source in the low frequency range, as expected, and of uniformly distributed, uncorrelated noise sources in the jet flow at higher frequencies. In the underexpanded supersonic regime, a shock-associated noise source is also clearly identified. An additional source is detected in the vicinity of the nozzle exit in both supersonic and subsonic regimes. In the presence of the airfoil, the distribution of the noise sources is deeply modified; in particular, a strong noise source is localized on the flap. For Strouhal numbers higher than about 2 (based on the jet mixing velocity and diameter), a significant contribution from the shear layer near the flap is also observed, as are indications of acoustic reflections on the airfoil.

  11. Minimal residual method provides optimal regularization parameter for diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Jagannath, Ravi Prasad K.; Yalavarthy, Phaneendra K.

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of regularization parameter dictates the reconstructed optical image quality and is typically made empirically or from prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  12. Minimal residual method provides optimal regularization parameter for diffuse optical tomography.

    PubMed

    Jagannath, Ravi Prasad K; Yalavarthy, Phaneendra K

    2012-10-01

    The inverse problem in diffuse optical tomography is known to be nonlinear, ill-posed, and sometimes under-determined, requiring regularization to obtain meaningful results, with Tikhonov-type regularization being the most popular choice. The choice of regularization parameter dictates the reconstructed optical image quality and is typically made empirically or from prior experience. An automated method for optimal selection of the regularization parameter, based on the regularized minimal residual method (MRM), is proposed and compared with the traditional generalized cross-validation method. The results obtained using numerical and gelatin phantom data indicate that the MRM-based method is capable of providing the optimal regularization parameter.

  13. Flow curve analysis of a Pickering emulsion-polymerized PEDOT:PSS/PS-based electrorheological fluid

    NASA Astrophysics Data System (ADS)

    Kim, So Hee; Choi, Hyoung Jin; Leong, Yee-Kwong

    2017-11-01

    The steady shear electrorheological (ER) response of poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate)/polystyrene (PEDOT:PSS/PS) composite particles, initially fabricated by Pickering emulsion polymerization, was tested with a 10 vol% ER fluid dispersed in silicone oil. The model-independent shear rate and yield stress obtained from the raw torque-rotational speed data, measured using a Couette-type rotational rheometer under an applied electric field, were then analyzed by Tikhonov regularization, a technique well suited to solving this ill-posed inverse problem. The shear stress-shear rate data also fitted well with the data extracted from the Bingham fluid model.
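The abstract's specific flow-curve kernel is not given here, but the Tikhonov regularization it relies on has a standard generic form: minimize ||Ax - b||² + λ||x||², solved through the regularized normal equations. The sketch below uses a synthetic ill-conditioned system (a Hilbert-like matrix), not rheometer data.

```python
import numpy as np

# Minimal generic Tikhonov regularization sketch (not the paper's specific
# flow-curve inversion): solve min_x ||A x - b||^2 + lam ||x||^2 via the
# regularized normal equations (A^T A + lam I) x = A^T b.

def tikhonov(A, b, lam):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned toy system: the Hilbert matrix, a classic test case.
n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(1).normal(size=n)

x_naive = np.linalg.solve(A, b)     # unregularized: noise is amplified hugely
x_reg = tikhonov(A, b, lam=1e-8)    # regularized: stays close to x_true
```

Even the tiny 1e-6 measurement noise destroys the naive solution because the matrix is nearly singular, while the Tikhonov solution remains stable; this is exactly the kind of amplification that makes raw torque-speed inversion ill-posed.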

  14. An estimate for the thermal photon rate from lattice QCD

    NASA Astrophysics Data System (ADS)

    Brandt, Bastian B.; Francis, Anthony; Harris, Tim; Meyer, Harvey B.; Steinberg, Aman

    2018-03-01

    We estimate the production rate of photons by the quark-gluon plasma in lattice QCD. We propose a new correlation function which provides better control over the systematic uncertainty in estimating the photon production rate at photon momenta in the range πT/2 to 2πT. The relevant Euclidean vector current correlation functions are computed with Nf = 2 Wilson clover fermions in the chirally-symmetric phase. In order to estimate the photon rate, an ill-posed problem for the vector-channel spectral function must be regularized. We use both a direct model for the spectral function and a model-independent estimate from the Backus-Gilbert method to give an estimate for the photon rate.

  15. Efficient generalized cross-validation with applications to parametric image restoration and resolution enhancement.

    PubMed

    Nguyen, N; Milanfar, P; Golub, G

    2001-01-01

    In many image restoration/resolution enhancement applications, the blurring process, i.e., the point spread function (PSF) of the imaging system, is not known or is known only to within a set of parameters. We estimate these PSF parameters for this class of ill-posed inverse problems from raw data, along with the regularization parameters required to stabilize the solution, using the generalized cross-validation (GCV) method. We propose efficient approximation techniques based on the Lanczos algorithm and Gauss quadrature theory, reducing the computational complexity of the GCV. Data-driven PSF and regularization parameter estimation experiments with synthetic and real image sequences are presented to demonstrate the effectiveness and robustness of our method.
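For a small problem, the GCV function that this paper accelerates can be evaluated directly from the SVD of the system matrix: GCV(λ) = m·||(I - H(λ))b||² / trace(I - H(λ))², with H(λ) the Tikhonov influence matrix. The sketch below is the naive full-SVD evaluation on synthetic data; the paper's contribution is precisely avoiding this full SVD via Lanczos and Gauss quadrature.

```python
import numpy as np

# Naive GCV evaluation for Tikhonov regularization via the SVD of A.
# H(lam) = A (A^T A + lam I)^{-1} A^T has eigenvalues f_i = s_i^2/(s_i^2+lam),
# so both the residual and trace(I - H) come cheaply from the singular values.
# Fine for small problems; the paper's Lanczos/Gauss-quadrature approach
# exists because this is too costly at scale.

def gcv_score(A, b, lam):
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    resid_out = b - U @ beta                 # component of b outside range(A)
    f = s**2 / (s**2 + lam)                  # Tikhonov filter factors
    resid2 = np.sum(((1 - f) * beta) ** 2) + resid_out @ resid_out
    return m * resid2 / (m - np.sum(f)) ** 2

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20)) @ np.diag(0.9 ** np.arange(20)) @ rng.normal(size=(20, 20))
x_true = rng.normal(size=20)
b = A @ x_true + 0.05 * rng.normal(size=50)
lams = np.logspace(-8, 2, 60)
lam_star = min(lams, key=lambda lam: gcv_score(A, b, lam))
```

The chosen λ* is the grid point minimizing the GCV score; in the paper this minimization is done with Gauss-quadrature bounds on the trace term instead of an explicit SVD.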

  16. Rigorous Numerics for ill-posed PDEs: Periodic Orbits in the Boussinesq Equation

    NASA Astrophysics Data System (ADS)

    Castelli, Roberto; Gameiro, Marcio; Lessard, Jean-Philippe

    2018-04-01

    In this paper, we develop computer-assisted techniques for the analysis of periodic orbits of ill-posed partial differential equations. As a case study, our proposed method is applied to the Boussinesq equation, which has been investigated extensively because of its role in the theory of shallow water waves. The idea is to use the symmetry of the solutions and a Newton-Kantorovich type argument (the radii polynomial approach) to obtain rigorous proofs of existence of the periodic orbits in a weighted ℓ1 Banach space of space-time Fourier coefficients with exponential decay. We present several computer-assisted proofs of the existence of periodic orbits at different parameter values.

  17. Dissecting Success Stories on Mathematical Problem Posing: A Case of the Billiard Task

    ERIC Educational Resources Information Center

    Koichu, Boris; Kontorovich, Igor

    2013-01-01

    "Success stories," i.e., cases in which mathematical problems posed in a controlled setting are perceived by the problem posers or other individuals as interesting, cognitively demanding, or surprising, are essential for understanding the nature of problem posing. This paper analyzes two success stories that occurred with individuals of different…

  18. What Makes a Problem Mathematically Interesting? Inviting Prospective Teachers to Pose Better Problems

    ERIC Educational Resources Information Center

    Crespo, Sandra; Sinclair, Nathalie

    2008-01-01

    School students of all ages, including those who subsequently become teachers, have limited experience posing their own mathematical problems. Yet problem posing, both as an act of mathematical inquiry and of mathematics teaching, is part of the mathematics education reform vision that seeks to promote mathematics as a worthy intellectual…

  19. Helping Young Students to Better Pose an Environmental Problem

    ERIC Educational Resources Information Center

    Pruneau, Diane; Freiman, Viktor; Barbier, Pierre-Yves; Langis, Joanne

    2009-01-01

    Grade 3 students were asked to solve a sedimentation problem in a local river. With scientists, students explored many aspects of the problem and proposed solutions. Graphic representation tools were used to help students to better pose the problem. Using questionnaires and interviews, researchers observed students' capacity to pose the problem…

  20. University Students' Problem Posing Abilities and Attitudes towards Mathematics.

    ERIC Educational Resources Information Center

    Grundmeier, Todd A.

    2002-01-01

    Explores the problem posing abilities and attitudes towards mathematics of students in a university pre-calculus class and a university mathematical proof class. Reports a significant difference in numeric posing versus non-numeric posing ability in both classes. (Author/MM)

  1. Effects of the Problem-Posing Approach on Students' Problem Solving Skills and Metacognitive Awareness in Science Education

    NASA Astrophysics Data System (ADS)

    Akben, Nimet

    2018-05-01

    The interrelationship between mathematics and science education has frequently been emphasized, and common goals and approaches have often been adopted between the disciplines. Improving students' problem-solving skills in mathematics and science education has always received special attention; however, the problem-posing approach, which plays a key role in mathematics education, has not been commonly utilized in science education. The purpose of this study was therefore to determine the effects of the problem-posing approach on students' problem-solving skills and metacognitive awareness in science education. This was a quasi-experimental study conducted with 61 chemistry and 40 physics students; a problem-solving inventory and a metacognitive awareness inventory were administered to participants both as a pre-test and a post-test. During the 2017-2018 academic year, problem-solving activities based on the problem-posing approach were performed with the participating students during their senior year in various university chemistry and physics departments throughout the Republic of Turkey. The study results suggested that structured, semi-structured, and free problem-posing activities improve students' problem-solving skills and metacognitive awareness. These findings indicate not only the usefulness of integrating problem-posing activities into science education programs but also the need for further research into this question.

  2. Pulse reflectometry as an acoustical inverse problem: Regularization of the bore reconstruction

    NASA Astrophysics Data System (ADS)

    Forbes, Barbara J.; Sharp, David B.; Kemp, Jonathan A.

    2002-11-01

    The theoretical basis of acoustic pulse reflectometry, a noninvasive method for the reconstruction of an acoustical duct from the reflections measured in response to an input pulse, is reviewed in terms of the inversion of the central Fredholm equation. It is known that this is an ill-posed problem in the context of finite-bandwidth experimental signals. Recent work by the authors has proposed the truncated singular value decomposition (TSVD) in the regularization of the transient input impulse response, a non-measurable quantity from which the spatial bore reconstruction is derived. In the present paper we further emphasize the relevance of the singular system framework to reflectometry applications, examining for the first time the transient bases of the system. In particular, by varying the truncation point for increasing condition numbers of the system matrix, it is found that the effects of out-of-bandwidth singular functions on the bore reconstruction can be systematically studied.
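The truncated singular value decomposition the abstract applies to the input impulse response has a simple generic form: discard the singular triplets below the truncation point so that noise on the small singular values is never amplified. The sketch below demonstrates this on a synthetic ill-conditioned system, not on reflectometry data.

```python
import numpy as np

# Generic truncated SVD (TSVD) regularization, the family of methods the
# abstract applies to the transient input impulse response. Data here are
# synthetic; the truncation point k plays the role discussed in the paper.

def tsvd_solve(A, b, k):
    """Least-squares solution using only the k largest singular triplets."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

n = 10
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)  # Hilbert matrix
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(2).normal(size=n)

x_tsvd = tsvd_solve(A, b, k=6)   # truncated: small singular values dropped
x_full = tsvd_solve(A, b, k=n)   # untruncated: dominated by amplified noise
```

Varying k while watching the solution is exactly the kind of systematic study of out-of-bandwidth singular functions the paper describes: too small a k loses resolution, too large a k lets noise through.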

  3. A frequency-domain seismic blind deconvolution based on Gini correlations

    NASA Astrophysics Data System (ADS)

    Wang, Zhiguo; Zhang, Bing; Gao, Jinghuai; Huo Liu, Qing

    2018-02-01

    In reflection seismic processing, seismic blind deconvolution is a challenging problem, especially when the signal-to-noise ratio (SNR) of the seismic record is low and the record is short. As a solution to this ill-posed inverse problem, we assume that the reflectivity sequence is independent and identically distributed (i.i.d.). To infer the i.i.d. relationships from seismic data, we first introduce the Gini correlations (GCs) to construct a new criterion for seismic blind deconvolution in the frequency domain. Owing to a unique feature, the GCs are robust, with a higher tolerance of low-SNR data and less dependence on record length. Applications of the GC-based seismic blind deconvolution demonstrate its capacity to estimate the unknown seismic wavelet and the reflectivity sequence, for both synthetic traces and field data, even with low SNR and short records.

  4. Quantitative imaging of aggregated emulsions.

    PubMed

    Penfold, Robert; Watson, Andrew D; Mackie, Alan R; Hibberd, David J

    2006-02-28

    Noise reduction, restoration, and segmentation methods are developed for the quantitative structural analysis in three dimensions of aggregated oil-in-water emulsion systems imaged by fluorescence confocal laser scanning microscopy. Mindful of typical industrial formulations, the methods are demonstrated for concentrated (30% volume fraction) and polydisperse emulsions. Following a regularized deconvolution step using an analytic optical transfer function and appropriate binary thresholding, novel application of the Euclidean distance map provides effective discrimination of closely clustered emulsion droplets with size variation over at least 1 order of magnitude. The a priori assumption of spherical nonintersecting objects provides crucial information to combat the ill-posed inverse problem presented by locating individual particles. Position coordinates and size estimates are recovered with sufficient precision to permit quantitative study of static geometrical features. In particular, aggregate morphology is characterized by a novel void distribution measure based on the generalized Apollonius problem. This is also compared with conventional Voronoi/Delauney analysis.

  5. User-assisted video segmentation system for visual communication

    NASA Astrophysics Data System (ADS)

    Wu, Zhengping; Chen, Chun

    2002-01-01

    Video segmentation plays an important role in efficient storage and transmission for visual communication. In this paper, we introduce a novel video segmentation system using point tracking and contour formation techniques. Inspired by results from the study of the human visual system, we decompose the video segmentation problem into three separate phases: user-assisted feature point selection, automatic feature point tracking, and contour formation. This splitting relieves the computer of ill-posed automatic segmentation problems and allows a higher level of flexibility of the method. First, precise feature points are found using a combination of user assistance and an eigenvalue-based adjustment. Second, the feature points in the remaining frames are obtained using motion estimation and point refinement. Finally, contour formation is used to extract the object, followed by a point insertion process that provides the feature points for the next frame's tracking.

  6. Regularized finite element modeling of progressive failure in soils within nonlocal softening plasticity

    NASA Astrophysics Data System (ADS)

    Huang, Maosong; Qu, Xie; Lü, Xilin

    2017-11-01

    By solving a nonlinear complementarity problem for the consistency condition, an improved implicit stress return iterative algorithm for a generalized over-nonlocal strain softening plasticity was proposed, and the consistent tangent matrix was obtained. The proposed algorithm was embedded in existing finite element codes, and it enables the nonlocal regularization of the ill-posed boundary value problems caused by pressure-independent and pressure-dependent strain softening plasticity. The algorithm was verified by numerical modeling of strain localization in a plane strain compression test. The results showed that fast convergence can be achieved and that the mesh dependency caused by strain softening can be effectively eliminated. The influences of the hardening modulus and the material characteristic length on the simulation were obtained. The proposed algorithm was further used in simulations of the bearing capacity of a strip footing; the results are mesh-independent, and the progressive failure process of the soil was well captured.

  7. Single photon emission computed tomography-guided Cerenkov luminescence tomography

    NASA Astrophysics Data System (ADS)

    Hu, Zhenhua; Chen, Xueli; Liang, Jimin; Qu, Xiaochao; Chen, Duofang; Yang, Weidong; Wang, Jing; Cao, Feng; Tian, Jie

    2012-07-01

    Cerenkov luminescence tomography (CLT) has become a valuable tool for preclinical imaging because of its ability to reconstruct the three-dimensional distribution and activity of radiopharmaceuticals. However, it is still far from a mature technology and suffers from relatively low spatial resolution due to the ill-posed inverse problem of the tomographic reconstruction. In this paper, we present a single photon emission computed tomography (SPECT)-guided reconstruction method for CLT, in which a priori information on the permissible source region (PSR) from SPECT imaging results is incorporated to effectively reduce the ill-posedness of the inverse reconstruction problem. The performance of the method was first validated by reconstructions of an adult athymic nude mouse implanted with a Na131I radioactive source and of another that received an intravenous tail injection of Na131I. A tissue-mimicking phantom experiment was then conducted to illustrate the ability of the proposed method to resolve double sources. Compared with the traditional PSR strategy, in which the PSR is determined from the surface flux distribution, the proposed method obtained much more accurate and encouraging localization and resolution results. Preliminary results showed that the proposed SPECT-guided reconstruction method is insensitive to the choice of regularization method and can ignore the heterogeneity of tissues, which avoids the organ segmentation procedure.

  8. Numerical Simulations of Reacting Flows Using Asynchrony-Tolerant Schemes for Exascale Computing

    NASA Astrophysics Data System (ADS)

    Cleary, Emmet; Konduri, Aditya; Chen, Jacqueline

    2017-11-01

    Communication and data synchronization between processing elements (PEs) are likely to pose a major challenge in scalability of solvers at the exascale. Recently developed asynchrony-tolerant (AT) finite difference schemes address this issue by relaxing communication and synchronization between PEs at a mathematical level while preserving accuracy, resulting in improved scalability. The performance of these schemes has been validated for simple linear and nonlinear homogeneous PDEs. However, many problems of practical interest are governed by highly nonlinear PDEs with source terms, whose solution may be sensitive to perturbations caused by communication asynchrony. The current work applies the AT schemes to combustion problems with chemical source terms, yielding a stiff system of PDEs with nonlinear source terms highly sensitive to temperature. Examples shown will use single-step and multi-step CH4 mechanisms for 1D premixed and nonpremixed flames. Error analysis will be discussed both in physical and spectral space. Results show that additional errors introduced by the AT schemes are negligible and the schemes preserve their accuracy. We acknowledge funding from the DOE Computational Science Graduate Fellowship administered by the Krell Institute.

  9. Errant life, molecular biology, and biopower: Canguilhem, Jacob, and Foucault.

    PubMed

    Talcott, Samuel

    2014-01-01

    This paper considers the theoretical circumstances that urged Michel Foucault to analyse modern societies in terms of biopower. Georges Canguilhem's account of the relations between science and the living forms an essential starting point for Foucault's own later explorations, though the challenges posed by the molecular revolution in biology and François Jacob's history of it allowed Foucault to extend and transform Canguilhem's philosophy of error. Using archival research into his 1955-1956 course on "Science and Error," I show that, for Canguilhem, it is inauthentic to treat a living being as an error, even if living things are capable of making errors in the domain of knowledge. The emergent molecular biology in the 1960s posed a grave challenge, however, since it suggested that individuals could indeed be errors of genetic reproduction. The paper discusses how Canguilhem and Foucault each responded to this by examining, among other texts, their respective reviews of Jacob's The Logic of the Living. For Canguilhem this was an opportunity to reaffirm the creativity of life in the living individual, which is not a thing to be evaluated, but the source of values. For Foucault, drawing on Jacob's work, this was the opportunity to develop a transformed account of valuation by posing biopower as the DNA of society. Despite their disagreements, the paper examines these three authors as different iterations of a historical epistemology attuned to errancy, error, and experimentation.

  10. Sparse radar imaging using 2D compressed sensing

    NASA Astrophysics Data System (ADS)

    Hou, Qingkai; Liu, Yang; Chen, Zengping; Su, Shaoying

    2014-10-01

    Radar imaging is an ill-posed linear inverse problem, and compressed sensing (CS) has been proved to have tremendous potential in this field. This paper surveys the theory of radar imaging and concludes that the processing of ISAR imaging can be expressed mathematically as a problem of 2D sparse decomposition. Based on CS, we propose a novel measuring strategy for ISAR imaging radar that uses random sub-sampling in both the range and azimuth dimensions, which reduces the amount of sampled data tremendously. To handle the 2D reconstruction problem, the usual approach converts the 2D problem into 1D by a Kronecker product, which sharply increases the size of the dictionary and the computational cost. In this paper, we instead introduce the 2D-SL0 algorithm into the image reconstruction. It is proved that 2D-SL0 achieves results equivalent to other 1D reconstruction methods, but with significantly reduced computational complexity and memory usage. Moreover, we present simulation results that demonstrate the effectiveness and feasibility of our method.
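The 2D-to-1D conversion the abstract criticizes rests on the standard identity vec(A X Bᵀ) = (B ⊗ A) vec(X), with vec denoting column stacking. The sketch below verifies the identity on small synthetic matrices; the dictionary names are illustrative, not the paper's notation. The Kronecker matrix has shape (m₁m₂, n₁n₂), which is why 2D-SL0-style solvers that keep the two small dictionaries separate are so much cheaper.

```python
import numpy as np

# The 2D-to-1D vectorization behind the Kronecker-product formulation:
# vec(A X B^T) = (B kron A) vec(X), with column-stacking vec (Fortran order).
# A and B stand for per-dimension dictionaries (illustrative names only);
# X is the 2-D coefficient matrix.

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))   # "range"-dimension operator (illustrative)
B = rng.normal(size=(5, 6))   # "azimuth"-dimension operator (illustrative)
X = rng.normal(size=(3, 6))   # 2-D coefficient matrix

Y = A @ X @ B.T                           # 2-D model: two small products
y = np.kron(B, A) @ X.flatten(order="F")  # 1-D model: one (20 x 18) matrix
```

Both routes give the same measurements, but the Kronecker route materializes a matrix whose size is the product of the two dictionary sizes, while the 2-D route never forms it.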

  11. Improved real-time dynamics from imaginary frequency lattice simulations

    NASA Astrophysics Data System (ADS)

    Pawlowski, Jan M.; Rothkopf, Alexander

    2018-03-01

    The computation of real-time properties, such as transport coefficients or bound state spectra of strongly interacting quantum fields in thermal equilibrium, is a pressing matter. Since the sign problem prevents a direct evaluation of these quantities, lattice data needs to be analytically continued from the Euclidean domain of the simulation to Minkowski time, in general an ill-posed inverse problem. Here we report on a novel approach to improve the determination of real-time information in the form of spectral functions by setting up a simulation prescription in imaginary frequencies. By carefully distinguishing between initial conditions and quantum dynamics one obtains access to correlation functions also outside the conventional Matsubara frequencies. In particular, the range between ω0 = 0 and ω1 = 2πT, which is most relevant for the inverse problem, may be resolved more finely. In combination with the fact that in imaginary frequencies the kernel of the inverse problem is not an exponential but only a rational function, we observe significant improvements in the reconstruction of spectral functions, demonstrated in a simple 0+1 dimensional scalar field theory toy model.

  12. Fast reconstruction of optical properties for complex segmentations in near infrared imaging

    NASA Astrophysics Data System (ADS)

    Jiang, Jingjing; Wolf, Martin; Sánchez Majos, Salvador

    2017-04-01

    The intrinsic ill-posed nature of the inverse problem in near infrared imaging makes the reconstruction of fine details of objects deeply embedded in turbid media challenging even for the large amounts of data provided by time-resolved cameras. In addition, most reconstruction algorithms for this type of measurements are only suitable for highly symmetric geometries and rely on a linear approximation to the diffusion equation since a numerical solution of the fully non-linear problem is computationally too expensive. In this paper, we will show that a problem of practical interest can be successfully addressed making efficient use of the totality of the information supplied by time-resolved cameras. We set aside the goal of achieving high spatial resolution for deep structures and focus on the reconstruction of complex arrangements of large regions. We show numerical results based on a combined approach of wavelength-normalized data and prior geometrical information, defining a fully parallelizable problem in arbitrary geometries for time-resolved measurements. Fast reconstructions are obtained using a diffusion approximation and Monte-Carlo simulations, parallelized in a multicore computer and a GPU respectively.

  13. Hessian Schatten-norm regularization for linear inverse problems.

    PubMed

    Lefkimmiatis, Stamatios; Ward, John Paul; Unser, Michael

    2013-05-01

    We introduce a novel family of invariant, convex, and non-quadratic functionals that we employ to derive regularized solutions of ill-posed linear inverse imaging problems. The proposed regularizers involve the Schatten norms of the Hessian matrix, which are computed at every pixel of the image. They can be viewed as second-order extensions of the popular total-variation (TV) semi-norm since they satisfy the same invariance properties. Meanwhile, by taking advantage of second-order derivatives, they avoid the staircase effect, a common artifact of TV-based reconstructions, and perform well for a wide range of applications. To solve the corresponding optimization problems, we propose an algorithm that is based on a primal-dual formulation. A fundamental ingredient of this algorithm is the projection of matrices onto Schatten norm balls of arbitrary radius. This operation is performed efficiently based on a direct link we provide between vector projections onto lq norm balls and matrix projections onto Schatten norm balls. Finally, we demonstrate the effectiveness of the proposed methods through experimental results on several inverse imaging problems with real and simulated data.
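The matrix-to-vector projection link the abstract describes can be sketched for the Schatten-1 (nuclear) norm: projecting a matrix onto a nuclear-norm ball reduces to projecting its singular values onto an ℓ1 ball. The sort-based ℓ1-ball projection below is the standard algorithm for nonnegative vectors; the matrix data are synthetic, and this is an illustration of the principle rather than the paper's Hessian-based regularizer.

```python
import numpy as np

# Schatten-1 (nuclear norm) instance of the vector/matrix projection link:
# project M onto {X : ||X||_* <= r} by projecting its (nonnegative)
# singular values onto the l1 ball and resynthesizing with the same
# singular vectors. Sketch only; the paper works with per-pixel Hessians.

def project_l1_ball(v, r):
    """Euclidean projection of a nonnegative vector v onto {x >= 0, sum(x) <= r}."""
    if v.sum() <= r:
        return v.copy()
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - r)[0][-1]
    theta = (css[rho] - r) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def project_nuclear_ball(M, r):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(project_l1_ball(s, r)) @ Vt

rng = np.random.default_rng(0)
M = rng.normal(size=(6, 4))
P = project_nuclear_ball(M, r=2.0)
```

After projection the singular values of P sum to the ball radius, so P lies exactly on the nuclear-norm sphere whenever M starts outside the ball.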

  14. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    NASA Astrophysics Data System (ADS)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and it is challenging in several respects. First, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Second, in practice only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Reτ = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, inheriting the robustness of the former and the ease of implementation of the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
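The ensemble-Kalman ingredient of such hybrid schemes can be sketched with a minimal stochastic analysis step for a linear observation y = Hx + noise. This is a generic illustration on a 3-state toy problem, not the authors' ensemble-variational source-localization method.

```python
import numpy as np

# Minimal stochastic ensemble Kalman analysis step for a linear observation
# y = H x + noise. Generic sketch of the ensemble-Kalman ingredient only;
# the paper combines it with a variational formulation for source terms.

def enkf_update(X, y, H, R, rng):
    """X: (n_state, n_ens) prior ensemble. Returns the posterior ensemble."""
    n_ens = X.shape[1]
    Xp = X - X.mean(axis=1, keepdims=True)       # ensemble perturbations
    Pf = Xp @ Xp.T / (n_ens - 1)                 # sample forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return X + K @ (Y - H @ X)                   # perturbed-observation update

rng = np.random.default_rng(3)
x_true = np.array([1.0, -2.0, 0.5])
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                  # observe the first two states
R = 0.01 * np.eye(2)
y = H @ x_true + rng.multivariate_normal(np.zeros(2), R)
X_prior = x_true[:, None] + rng.normal(size=(3, 200))   # spread-out prior
X_post = enkf_update(X_prior, y, H, R, rng)
```

The update pulls the observed components of the ensemble toward the measurement and shrinks their spread, while unobserved components move only through the sample correlations in Pf, which is how limited sensors constrain hidden source parameters.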

  15. Wavelet-promoted sparsity for non-invasive reconstruction of electrical activity of the heart.

    PubMed

    Cluitmans, Matthijs; Karel, Joël; Bonizzi, Pietro; Volders, Paul; Westra, Ronald; Peeters, Ralf

    2018-05-12

    We investigated a novel sparsity-based regularization method in the wavelet domain of the inverse problem of electrocardiography that aims at preserving the spatiotemporal characteristics of heart-surface potentials. In three normal, anesthetized dogs, electrodes were implanted around the epicardium and body-surface electrodes were attached to the torso. Potential recordings were obtained simultaneously on the body surface and on the epicardium. A CT scan was used to digitize a homogeneous geometry consisting of the body-surface electrodes and the epicardial surface. A novel multitask elastic-net-based method was introduced to regularize the ill-posed inverse problem. The method simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Performance was assessed in terms of quality of reconstructed epicardial potentials, estimated activation and recovery time, and estimated locations of pacing, and compared with the performance of Tikhonov zeroth-order regularization. Solutions in the wavelet domain achieved higher sparsity than those in the time domain. Epicardial potentials were non-invasively reconstructed with higher accuracy than with Tikhonov zeroth-order regularization (p < 0.05), and recovery times were improved (p < 0.05). No significant improvement was found in terms of activation times and localization of origin of pacing. Next to improved estimation of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias, this novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions. Graphical Abstract The inverse problem of electrocardiography is to reconstruct heart-surface potentials from recorded body-surface electrocardiograms (ECGs) and a torso-heart geometry. However, it is ill-posed and solving it requires additional constraints for regularization. We introduce a regularization method that simultaneously pursues a sparse wavelet representation in time-frequency and exploits correlations in space. Our approach reconstructs epicardial (heart-surface) potentials with higher accuracy than common methods. It also improves the reconstruction of recovery isochrones, which is important when assessing substrate for cardiac arrhythmias. This novel technique opens potentially powerful opportunities for clinical application, by allowing the choice of wavelet bases that are optimized for specific clinical questions.
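    The sparsity-promoting step at the heart of such wavelet-domain regularization is soft-thresholding of the detail coefficients. The sketch below is our own illustration using a one-level Haar transform on a 1-D signal, not the authors' multitask elastic-net method:

```python
import numpy as np

def haar_1level(x):
    # One-level Haar analysis: approximation (a) and detail (d) coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar_1level(a, d):
    # Inverse of haar_1level: interleave reconstructed even/odd samples
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(x, t):
    # Soft-thresholding: the proximal operator of the l1 norm, which promotes sparsity
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def wavelet_denoise(x, t):
    # Threshold only the details; keep the coarse approximation intact
    a, d = haar_1level(x)
    return ihaar_1level(a, soft(d, t))
```

    In the paper's setting the same thresholding idea operates on a spatiotemporal wavelet representation of heart-surface potentials rather than a 1-D signal.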

  16. Analyzing Pre-Service Primary Teachers' Fraction Knowledge Structures through Problem Posing

    ERIC Educational Resources Information Center

    Kilic, Cigdem

    2015-01-01

    In this study it was aimed to determine pre-service primary teachers' knowledge structures of fractions through problem-posing activities. A total of 90 pre-service primary teachers participated in this study. A problem-posing test consisting of two questions was used, and the participants were asked to generate as many problems as possible based on the…

  17. Students’ Mathematical Creative Thinking through Problem Posing Learning

    NASA Astrophysics Data System (ADS)

    Ulfah, U.; Prabawanto, S.; Jupri, A.

    2017-09-01

    The research aims to investigate the differences in enhancement of students’ mathematical creative thinking ability between those who received a problem-posing approach assisted by manipulative media and those who received a problem-posing approach without manipulative media. This study was a quasi-experimental research with a non-equivalent control group design. The population was third-grade students of a primary school in Bandung city in the 2016/2017 academic year. The sample consisted of two classes, an experiment class and a control class. The instrument used was a test of mathematical creative thinking ability. Based on the results of the research, the enhancement of the mathematical creative thinking ability of students who received the problem-posing approach with manipulative media aid is higher than that of students who received the problem-posing approach without manipulative media aid. Students who received problem-posing learning became accustomed to turning mathematical sentences into word problems, which helped them comprehend story problems.

  18. An Interview Forum on Interlibrary Loan/Document Delivery with Lynn Wiley and Tom Delaney

    ERIC Educational Resources Information Center

    Hasty, Douglas F.

    2003-01-01

    The Virginia Boucher-OCLC Distinguished ILL Librarian Award is the most prestigious commendation given to practitioners in the field. The following questions about ILL were posed to the two most recent recipients of the Boucher Award: Tom Delaney (2002), Coordinator of Interlibrary Loan Services at Colorado State University and Lynn Wiley (2001),…

  19. Deinstitutionalization: Its Impact on Community Mental Health Centers and the Seriously Mentally Ill

    ERIC Educational Resources Information Center

    Kliewer, Stephen P.; McNally Melissa; Trippany, Robyn L.

    2009-01-01

    Deinstitutionalization has had a significant impact on the mental health system, including the client, the agency, and the counselor. For clients with serious mental illness, learning to live in a community setting poses challenges that are often difficult to overcome. Community mental health agencies must respond to these specific needs, thus…

  20. Spouses' Effectiveness as End-of-Life Health Care Surrogates: Accuracy, Uncertainty, and Errors of Overtreatment or Undertreatment

    ERIC Educational Resources Information Center

    Moorman, Sara M.; Carr, Deborah

    2008-01-01

    Purpose: We document the extent to which older adults accurately report their spouses' end-of-life treatment preferences, in the hypothetical scenarios of terminal illness with severe physical pain and terminal illness with severe cognitive impairment. We investigate the extent to which accurate reports, inaccurate reports (i.e., errors of…

  1. Analysis of general aviation single-pilot IFR incident data obtained from the NASA Aviation Safety Reporting System

    NASA Technical Reports Server (NTRS)

    Bergeron, H. P.

    1983-01-01

    An analysis of incident data obtained from the NASA Aviation Safety Reporting System (ASRS) has been made to determine the problem areas in general aviation single-pilot IFR (SPIFR) operations. The Aviation Safety Reporting System data base is a compilation of voluntary reports of incidents from any person who has observed or been involved in an occurrence which was believed to have posed a threat to flight safety. This paper examines only those reported incidents specifically related to general aviation single-pilot IFR operations. The frequency of occurrence of factors related to the incidents was the criterion used to define significant problem areas and, hence, to suggest where research is needed. The data was cataloged into one of five major problem areas: (1) controller judgment and response problems, (2) pilot judgment and response problems, (3) air traffic control (ATC) intrafacility and interfacility conflicts, (4) ATC and pilot communication problems, and (5) IFR-VFR conflicts. In addition, several points common to all or most of the problems were observed and reported. These included human error, communications, procedures and rules, and work load.

  2. Lq -Lp optimization for multigrid fluorescence tomography of small animals using simplified spherical harmonics

    NASA Astrophysics Data System (ADS)

    Edjlali, Ehsan; Bérubé-Lauzière, Yves

    2018-01-01

    We present the first Lq -Lp optimization scheme for fluorescence tomographic imaging. This is then applied to small animal imaging. Fluorescence tomography is an ill-posed, and in full generality, a nonlinear problem that seeks to image the 3D concentration distribution of a fluorescent agent inside a biological tissue. Standard candidates for regularization to deal with the ill-posedness of the image reconstruction problem include L1 and L2 regularization. In this work, a general Lq -Lp regularization framework (Lq discrepancy function - Lp regularization term) is introduced for fluorescence tomographic imaging. A method to calculate the gradient for this general framework is developed which allows evaluating the performance of different cost functions/regularization schemes in solving the fluorescence tomographic problem. The simplified spherical harmonics approximation is used to accurately model light propagation inside the tissue. Furthermore, a multigrid mesh is utilized to decrease the dimension of the inverse problem and reduce the computational cost of the solution. The inverse problem is solved iteratively using an lm-BFGS quasi-Newton optimization method. The simulations are performed under different scenarios of noisy measurements. These are carried out on the Digimouse numerical mouse model with the kidney being the target organ. The evaluation of the reconstructed images is performed both qualitatively and quantitatively using several metrics including QR, RMSE, CNR, and TVE under rigorous conditions. The best reconstruction results under different scenarios are obtained with an L1.5 -L1 scheme with premature termination of the optimization process. This is in contrast to approaches commonly found in the literature relying on L2 -L2 schemes.
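    The gradient needed by such a framework has a simple closed form when q, p > 1. Below is a minimal sketch in our own notation (not the authors' implementation) for the cost (1/q)||Ax - b||_q^q + (lam/p)||x||_p^p, the kind of quantity an l-BFGS-type optimizer would consume:

```python
import numpy as np

def lq_lp_cost(x, A, b, q, p, lam):
    # Lq data-fit term plus Lp regularization term
    r = A @ x - b
    return np.sum(np.abs(r) ** q) / q + lam * np.sum(np.abs(x) ** p) / p

def lq_lp_grad(x, A, b, q, p, lam):
    # Gradient of the cost above; valid where the terms are differentiable (q, p > 1)
    r = A @ x - b
    return A.T @ (np.abs(r) ** (q - 1) * np.sign(r)) + lam * np.abs(x) ** (p - 1) * np.sign(x)
```

    Setting q = 2, p = 2 recovers classical Tikhonov-style least squares, while q or p closer to 1 trades smoothness of the cost for robustness or sparsity, as explored in the paper.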

  3. Communicating Scientific Findings to Lawyers, Policy-Makers, and the Public (Invited)

    NASA Astrophysics Data System (ADS)

    Thompson, W.; Velsko, S. P.

    2013-12-01

    This presentation will summarize the authors' collaborative research on inferential errors, bias and communication difficulties that have arisen in the area of WMD forensics. This research involves analysis of problems that have arisen in past national security investigations, interviews with scientists from various disciplines whose work has been used in WMD investigations, interviews with policy-makers, and psychological studies of lay understanding of forensic evidence. Implications of this research for scientists involved in nuclear explosion monitoring will be discussed. Among the issues covered will be: - Potential incompatibilities between the questions policy makers pose and the answers that experts can provide. - Common misunderstandings of scientific and statistical data. - Advantages and disadvantages of various methods for describing and characterizing the strength of scientific findings. - Problems that can arise from excessive hedging or, alternatively, insufficient qualification of scientific conclusions. - Problems that can arise from melding scientific and non-scientific evidence in forensic assessments.

  4. Process-based Assignment-Setting Change for Support of Overcoming Bottlenecks in Learning by Problem-Posing in Arithmetic Word Problems

    NASA Astrophysics Data System (ADS)

    Supianto, A. A.; Hayashi, Y.; Hirashima, T.

    2017-02-01

    Problem-posing is well known as an effective activity for learning problem-solving methods. Monsakun is an interactive problem-posing learning environment that facilitates learning of arithmetic word problems involving one operation of addition or subtraction. The characteristic of Monsakun is problem-posing as sentence integration, which lets learners compose a problem from three sentences. Monsakun provides learners with five or six sentences, including dummies, which are designed through careful consideration by an expert teacher as meaningful distractions that help learners grasp the structure of arithmetic word problems. The results of the practical use of Monsakun in elementary schools show that many learners have difficulty arranging the proper answer in high-level assignments. Analysis of the problem-posing process of such learners found that their misconception of arithmetic word problems causes impasses in their thinking and misleads them into using dummies. This study proposes a method of changing assignments as a support for overcoming such bottlenecks of thinking. In Monsakun, a bottleneck is often detected as frequently repeated use of a specific dummy. If such a dummy can be detected, it is the key factor in supporting learners to overcome their difficulty. This paper discusses how to detect the bottlenecks and realize such support in learning by problem-posing.

  5. The Problems Posed and Models Employed by Primary School Teachers in Subtraction with Fractions

    ERIC Educational Resources Information Center

    Iskenderoglu, Tuba Aydogdu

    2017-01-01

    Students have difficulties in solving problems with fractions at almost all levels, and in problem posing. Problem-posing skills influence the development of the behaviors observed at the level of comprehension. It is therefore crucial for teachers to develop activities for students to gain a conceptual comprehension of fractions and…

  6. Measuring mental illness stigma with diminished social desirability effects.

    PubMed

    Michaels, Patrick J; Corrigan, Patrick W

    2013-06-01

    For persons with mental illness, stigma diminishes employment and independent living opportunities as well as participation in psychiatric care. Public stigma interventions have sought to ameliorate these consequences. Evaluation of anti-stigma programs' impact is typically accomplished with self-report questionnaires. However, cultural mores encourage endorsement of answers that are socially preferred rather than one's true belief. This problem, social desirability, has been circumvented through the development of faux knowledge tests (KTs), i.e., error-choice tests, written to assess prejudice. Our KT uses error-choice test methodology to assess stigmatizing attitudes. Test content was derived from review of typical KTs for façade reinforcement. Answer endorsement suggests bias or stigma; such determinations were based on the empirical literature. KT psychometrics were examined in samples of college students, community members, and mental health providers and consumers. Test-retest reliability ranged from fair (0.50) to good (0.70). Construct validity analyses of public stigma indicated a positive relationship with the Attribution Questionnaire and inverse relationships with Self-Determination and Empowerment Scales. No significant relationships were observed with self-stigma measures (recovery, empowerment). This psychometric evaluation study suggests that a self-administered questionnaire may circumvent social desirability and have merit as a stigma measurement tool.

  7. Bayesian probabilistic approach for inverse source determination from limited and noisy chemical or biological sensor concentration measurements

    NASA Astrophysics Data System (ADS)

    Yee, Eugene

    2007-04-01

    Although a great deal of research effort has been focused on the forward prediction of the dispersion of contaminants (e.g., chemical and biological warfare agents) released into the turbulent atmosphere, much less work has been directed toward the inverse prediction of agent source location and strength from the measured concentration, even though the importance of this problem for a number of practical applications is obvious. In general, the inverse problem of source reconstruction is ill-posed and unsolvable without additional information. It is demonstrated that a Bayesian probabilistic inferential framework provides a natural and logically consistent method for source reconstruction from a limited number of noisy concentration data. In particular, the Bayesian approach permits one to incorporate prior knowledge about the source as well as additional information regarding both model and data errors. The latter enables a rigorous determination of the uncertainty in the inference of the source parameters (e.g., spatial location, emission rate, release time, etc.), hence extending the potential of the methodology as a tool for quantitative source reconstruction. A model (or, source-receptor relationship) that relates the source distribution to the concentration data measured by a number of sensors is formulated, and Bayesian probability theory is used to derive the posterior probability density function of the source parameters. A computationally efficient methodology for determination of the likelihood function for the problem, based on an adjoint representation of the source-receptor relationship, is described. Furthermore, we describe the application of efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) for sampling from the posterior distribution of the source parameters, the latter of which is required to undertake the Bayesian computation. 
The Bayesian inferential methodology for source reconstruction is validated against real dispersion data for two cases involving contaminant dispersion in highly disturbed flows over urban and complex environments where the idealizations of horizontal homogeneity and/or temporal stationarity in the flow cannot be applied to simplify the problem. Furthermore, the methodology is applied to the case of reconstruction of multiple sources.
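    As a toy illustration of the MCMC sampling step (our sketch, not the authors' adjoint-based system), a random-walk Metropolis sampler can recover a scalar source location from noisy sensor readings. The forward model c(s) = exp(-(s - x_src)^2), the sensor layout, and all names here are hypothetical:

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step, rng):
    # Random-walk Metropolis sampling of a scalar posterior density
    x, lp = x0, log_post(x0)
    out = []
    for _ in range(n_steps):
        xp = x + step * rng.normal()          # symmetric Gaussian proposal
        lpp = log_post(xp)
        if np.log(rng.uniform()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        out.append(x)
    return np.array(out)

# Hypothetical toy problem: sensors read c(s) = exp(-(s - x_src)^2) plus noise
rng = np.random.default_rng(42)
sensors = np.linspace(0.0, 5.0, 8)
x_true = 2.0
data = np.exp(-(sensors - x_true) ** 2) + 0.05 * rng.normal(size=sensors.size)

def log_post(x):
    # Flat prior; Gaussian measurement noise with known standard deviation 0.05
    return -0.5 * np.sum((data - np.exp(-(sensors - x) ** 2)) ** 2) / 0.05**2

samples = metropolis(log_post, 1.0, 4000, 0.3, rng)
x_est = samples[1000:].mean()  # posterior-mean estimate of the source location
```

    The posterior samples also quantify uncertainty in the estimate, which is the point of the Bayesian treatment; the paper does this for multi-parameter sources with an adjoint source-receptor model in place of the toy forward model.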

  8. Application of the L-curve in geophysical inverse problems: methodologies for the extraction of the optimal parameter

    NASA Astrophysics Data System (ADS)

    Bassrei, A.; Terra, F. A.; Santos, E. T.

    2007-12-01

    Inverse problems in Applied Geophysics are usually ill-posed. One way to reduce this deficiency is through derivative matrices, a particular case of a more general family of techniques known as regularization. Regularization by derivative matrices has an input parameter called the regularization parameter, whose choice is itself a problem. A heuristic approach later called the L-curve was suggested in the 1970s to provide the optimum regularization parameter. The L-curve is a parametric curve where each point is associated with a parameter λ. The horizontal axis represents the error between the observed and calculated data, and the vertical axis represents the norm of the product between the regularization matrix and the estimated model. The ideal point is the knee of the L-curve, where there is a balance between the quantities represented on the Cartesian axes. The L-curve has been applied to a variety of inverse problems, including in Geophysics. However, visualizing the knee is not always easy, especially when the L-curve does not have the L shape. In this work three methodologies are employed for finding the optimal regularization parameter from the L-curve. The first criterion uses Hansen's toolbox, which extracts λ automatically. The second criterion consists of visually extracting the optimal parameter. The third criterion constructs the first derivative of the L-curve and then automatically extracts the inflexion point. The L-curve with these three criteria was applied and validated in traveltime tomography and 2-D gravity inversion. After many simulations with synthetic data, both noise-free and corrupted with noise, with regularization orders 0, 1, and 2, we verified that the three criteria are valid and provide satisfactory results. The third criterion presented the best performance, especially in cases where the L-curve has an irregular shape.
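    A derivative-based corner search of the kind the abstract describes can be sketched in a few lines: treat the discrete L-curve in log-log coordinates and pick the point of maximum curvature by finite differences. This is our illustration, not the authors' code:

```python
import numpy as np

def lcurve_corner(res_norms, sol_norms):
    # Locate the L-curve "knee" as the point of maximum curvature of
    # (log residual norm, log solution seminorm), using finite differences.
    x, y = np.log(res_norms), np.log(sol_norms)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
    return int(np.argmax(np.abs(curvature)))  # index of the corner point
```

    In practice one would evaluate `res_norms` and `sol_norms` over a logarithmic grid of λ values and take the λ at the returned index.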

  9. Problem-Posing Research in Mathematics Education: Looking Back, Looking Around, and Looking Ahead

    ERIC Educational Resources Information Center

    Silver, Edward A.

    2013-01-01

    In this paper, I comment on the set of papers in this special issue on mathematical problem posing. I offer some observations about the papers in relation to several key issues, and I suggest some productive directions for continued research inquiry on mathematical problem posing.

  10. Depression and decision-making capacity for treatment or research: a systematic review

    PubMed Central

    2013-01-01

    Background Psychiatric disorders can pose problems in the assessment of decision-making capacity (DMC). This is so particularly where psychopathology is seen as the extreme end of a dimension that includes normality. Depression is an example of such a psychiatric disorder. Four abilities (understanding, appreciating, reasoning and ability to express a choice) are commonly assessed when determining DMC in psychiatry and uncertainty exists about the extent to which depression impacts capacity to make treatment or research participation decisions. Methods A systematic review of the medical ethical and empirical literature concerning depression and DMC was conducted. Medline, EMBASE and PsycInfo databases were searched for studies of depression and consent and DMC. Empirical studies and papers containing ethical analysis were extracted and analysed. Results 17 publications were identified. The clinical ethics studies highlighted appreciation of information as the ability that can be impaired in depression, indicating that emotional factors can impact on DMC. The empirical studies reporting decision-making ability scores also highlighted impairment of appreciation but without evidence of strong impact. Measurement problems, however, looked likely. The frequency of clinical judgements of lack of DMC in people with depression varied greatly according to acuity of illness and whether judgements are structured or unstructured. Conclusions Depression can impair DMC especially if severe. Most evidence indicates appreciation as the ability primarily impaired by depressive illness. Understanding and measuring the appreciation ability in depression remains a problem in need of further research. PMID:24330745

  11. Prostate Brachytherapy Seed Reconstruction with Gaussian Blurring and Optimal Coverage Cost

    PubMed Central

    Lee, Junghoon; Liu, Xiaofeng; Jain, Ameet K.; Song, Danny Y.; Burdette, E. Clif; Prince, Jerry L.; Fichtinger, Gabor

    2009-01-01

    Intraoperative dosimetry in prostate brachytherapy requires localization of the implanted radioactive seeds. A tomosynthesis-based seed reconstruction method is proposed. A three-dimensional volume is reconstructed from Gaussian-blurred projection images and candidate seed locations are computed from the reconstructed volume. A false positive seed removal process, formulated as an optimal coverage problem, iteratively removes “ghost” seeds that are created by tomosynthesis reconstruction. In an effort to minimize pose errors that are common in conventional C-arms, initial pose parameter estimates are iteratively corrected by using the detected candidate seeds as fiducials, which automatically “focuses” the collected images and improves successive reconstructed volumes. Simulation results imply that the implanted seed locations can be estimated with a detection rate of ≥ 97.9% and ≥ 99.3% from three and four images, respectively, when the C-arm is calibrated and the pose of the C-arm is known. The algorithm was also validated on phantom data sets successfully localizing the implanted seeds from four or five images. In a Phase-1 clinical trial, we were able to localize the implanted seeds from five intraoperative fluoroscopy images with 98.8% (STD=1.6) overall detection rate. PMID:19605321

  12. A Human Proximity Operations System test case validation approach

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    A Human Proximity Operations System (HPOS) poses numerous risks in a real-world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity that is introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so the HPOS can be shown to be able to perform safely in the environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates from this to the suitability of using test cases for AI validation in other areas of prospective application.

  13. Localization from near-source quasi-static electromagnetic fields

    NASA Astrophysics Data System (ADS)

    Mosher, J. C.

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  14. Localization from near-source quasi-static electromagnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, John Compton

    1993-09-01

    A wide range of research has been published on the problem of estimating the parameters of electromagnetic and acoustical sources from signals measured at an array of sensors. In the quasi-static electromagnetic cases examined here, the signal variation from a point source is relatively slow with respect to the signal propagation and the spacing of the array of sensors. As such, the location of the point sources can only be determined from the spatial diversity of the received signal across the array. The inverse source localization problem is complicated by unknown model order and strong local minima. The nonlinear optimization problem is posed for solving for the parameters of the quasi-static source model. The transient nature of the sources can be exploited to allow subspace approaches to separate out the signal portion of the spatial correlation matrix. Decomposition techniques are examined for improved processing, and an adaptation of MUltiple SIgnal Classification (MUSIC) is presented for solving the source localization problem. Recent results on calculating the Cramer-Rao error lower bounds are extended to the multidimensional problem here. This thesis focuses on the problem of source localization in magnetoencephalography (MEG), with a secondary application to thunderstorm source localization. Comparisons are also made between MEG and its electrical equivalent, electroencephalography (EEG). The error lower bounds are examined in detail for several MEG and EEG configurations, as well as localizing thunderstorm cells over Cape Canaveral and Kennedy Space Center. Time-eigenspectrum is introduced as a parsing technique for improving the performance of the optimization problem.

  15. Dense-HOG-based drift-reduced 3D face tracking for infant pain monitoring

    NASA Astrophysics Data System (ADS)

    Saeijs, Ronald W. J. J.; Tjon A Ten, Walther E.; de With, Peter H. N.

    2017-03-01

    This paper presents a new algorithm for 3D face tracking intended for clinical infant pain monitoring. The algorithm uses a cylinder head model and 3D head pose recovery by alignment of dynamically extracted templates based on dense-HOG features. The algorithm includes extensions for drift reduction, using re-registration in combination with multi-pose state estimation by means of a square-root unscented Kalman filter. The paper reports experimental results on videos of moving infants in hospital who are relaxed or in pain. Results show good tracking behavior for poses up to 50 degrees from upright-frontal. In terms of eye location error relative to inter-ocular distance, the mean tracking error is below 9%.

  16. Challenges of caring for children with mental disorders: Experiences and views of caregivers attending the outpatient clinic at Muhimbili National Hospital, Dar es Salaam - Tanzania

    PubMed Central

    2012-01-01

    Background It is estimated that world-wide up to 20 % of children suffer from debilitating mental illness. Mental disorders that pose a significant concern include learning disorders, hyperkinetic disorders (ADHD), depression, psychosis, pervasive development disorders, attachment disorders, anxiety disorders, conduct disorder, substance abuse and eating disorders. Living with such children can be very stressful for caregivers in the family. Therefore, determination of challenges of living with these children is important in the process of finding ways to help or support caregivers to provide proper care for their children. The purpose of this study was to explore the psychological and emotional, social, and economic challenges that parents or guardians experience when caring for mentally ill children and what they do to address or deal with them. Methodology A qualitative study design using in-depth interviews and focus group discussions was applied. The study was conducted at the psychiatric unit of Muhimbili National Hospital in Tanzania. Two focus groups discussions (FGDs) and 8 in-depth interviews were conducted with caregivers who attended the psychiatric clinic with their children. Data analysis was done using content analysis. Results The study revealed psychological and emotional, social, and economic challenges caregivers endure while living with mentally ill children. Psychological and emotional challenges included being stressed by caring tasks and having worries about the present and future life of their children. They had feelings of sadness, and inner pain or bitterness due to the disturbing behaviour of the children. They also experienced some communication problems with their children due to their inability to talk. Social challenges were inadequate social services for their children, stigma, burden of caring task, lack of public awareness of mental illness, lack of social support, and problems with social life. 
The economic challenges were poverty, child care interfering with various income generating activities in the family, and extra expenses associated with the child’s illness. Conclusion Caregivers of mentally ill children experience various psychological and emotional, social, and economic challenges. Professional assistance, public awareness of mental illnesses in children, social support by the government, private sector, and non-governmental organizations (NGOs) are important in addressing these challenges. PMID:22559084

  17. 3D first-arrival traveltime tomography with modified total variation regularization

    NASA Astrophysics Data System (ADS)

    Jiang, Wenbin; Zhang, Jie

    2018-02-01

Three-dimensional (3D) seismic surveys have become a major tool in the exploration and exploitation of hydrocarbons. 3D seismic first-arrival traveltime tomography is a robust method for near-surface velocity estimation. A common approach for stabilizing the ill-posed inverse problem is to apply Tikhonov regularization to the inversion. However, the Tikhonov regularization method recovers smooth local structures while blurring the sharp features in the model solution. We present a 3D first-arrival traveltime tomography method with modified total variation (MTV) regularization to preserve sharp velocity contrasts and improve the accuracy of velocity inversion. To solve the minimization problem of the new traveltime tomography method, we decouple the original optimization problem into the following two subproblems: a standard traveltime tomography problem with the traditional Tikhonov regularization and an L2 total variation problem. We apply the conjugate gradient method and the split-Bregman iterative method to solve these two subproblems, respectively. Our synthetic examples show that the new method produces higher resolution models than the conventional traveltime tomography with Tikhonov regularization. We apply the technique to field data. The stacking section shows significant improvements with static corrections from the MTV traveltime tomography.
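    The decoupling described above (a Tikhonov-regularized quadratic solve alternated with an L2 total-variation step handled by split-Bregman iterations) can be illustrated on a toy 1D denoising analogue. The operator, signal, and parameter values below are illustrative assumptions, not the paper's tomography setup:

```python
import numpy as np

def tv_split_bregman(f, mu=0.5, lam=1.0, n_iter=50):
    """Toy 1D split-Bregman solver for min_u 0.5*||u-f||^2 + mu*||Du||_1."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator
    A = np.eye(n) + lam * (D.T @ D)           # matrix for the quadratic u-update
    u, d, b = f.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iter):
        # u-update: a Tikhonov-like linear solve (cf. the CG subproblem)
        u = np.linalg.solve(A, f + lam * D.T @ (d - b))
        Du = D @ u
        # d-update: soft shrinkage promotes a sparse (blocky) gradient
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - mu / lam, 0.0)
        b = b + Du - d                        # Bregman variable update
    return u

rng = np.random.default_rng(0)
truth = np.repeat([0.0, 1.0, 0.3], 40)        # sharp-contrast "model"
noisy = truth + 0.3 * rng.standard_normal(truth.size)
denoised = tv_split_bregman(noisy)
```

    In the paper's setting the quadratic solve is the Tikhonov-regularized tomography subproblem handled by conjugate gradients; here it is a direct dense solve for clarity.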

  18. Global optimization for motion estimation with applications to ultrasound videos of carotid artery plaques

    NASA Astrophysics Data System (ADS)

    Murillo, Sergio; Pattichis, Marios; Soliz, Peter; Barriga, Simon; Loizou, C. P.; Pattichis, C. S.

    2010-03-01

    Motion estimation from digital video is an ill-posed problem that requires a regularization approach. Regularization introduces a smoothness constraint that can reduce the resolution of the velocity estimates. The problem is further complicated for ultrasound videos (US), where speckle noise levels can be significant. Motion estimation using optical flow models requires the modification of several parameters to satisfy the optical flow constraint as well as the level of imposed smoothness. Furthermore, except in simulations or mostly unrealistic cases, there is no ground truth to use for validating the velocity estimates. This problem is present in all real video sequences that are used as input to motion estimation algorithms. It is also an open problem in biomedical applications like motion analysis of US of carotid artery (CA) plaques. In this paper, we study the problem of obtaining reliable ultrasound video motion estimates for atherosclerotic plaques for use in clinical diagnosis. A global optimization framework for motion parameter optimization is presented. This framework uses actual carotid artery motions to provide optimal parameter values for a variety of motions and is tested on ten different US videos using two different motion estimation techniques.

  19. On decoupling of volatility smile and term structure in inverse option pricing

    NASA Astrophysics Data System (ADS)

    Egger, Herbert; Hein, Torsten; Hofmann, Bernd

    2006-08-01

    Correct pricing of options and other financial derivatives is of great importance to financial markets and one of the key subjects of mathematical finance. Usually, parameters specifying the underlying stochastic model are not directly observable, but have to be determined indirectly from observable quantities. The identification of local volatility surfaces from market data of European vanilla options is one very important example of this type. As with many other parameter identification problems, the reconstruction of local volatility surfaces is ill-posed, and reasonable results can only be achieved via regularization methods. Moreover, due to the sparsity of data, the local volatility is not uniquely determined, but depends strongly on the kind of regularization norm used and a good a priori guess for the parameter. By assuming a multiplicative structure for the local volatility, which is motivated by the specific data situation, the inverse problem can be decomposed into two separate sub-problems. This removes part of the non-uniqueness and allows us to establish convergence and convergence rates under weak assumptions. Additionally, a numerical solution of the two sub-problems is much cheaper than that of the overall identification problem. The theoretical results are illustrated by numerical tests.

  20. Using Weighted Constraints to Diagnose Errors in Logic Programming--The Case of an Ill-Defined Domain

    ERIC Educational Resources Information Center

    Le, Nguyen-Thinh; Menzel, Wolfgang

    2009-01-01

In this paper, we introduce logic programming as a domain that exhibits some characteristics of being ill-defined. In order to diagnose student errors in such a domain, we need a means to hypothesise the student's intention, that is, the strategy underlying her solution. This is achieved by weighting constraints, so that hypotheses about solution…

  1. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, computation efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3 mm of distance error and 2.5 degrees of orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates feasibility of micro-camera 3D guidance of a robotic surgical tool.

  2. An Exploratory Framework for Handling the Complexity of Mathematical Problem Posing in Small Groups

    ERIC Educational Resources Information Center

    Kontorovich, Igor; Koichu, Boris; Leikin, Roza; Berman, Avi

    2012-01-01

    The paper introduces an exploratory framework for handling the complexity of students' mathematical problem posing in small groups. The framework integrates four facets known from past research: task organization, students' knowledge base, problem-posing heuristics and schemes, and group dynamics and interactions. In addition, it contains a new…

  3. Problem Posing at All Levels in the Calculus Classroom

    ERIC Educational Resources Information Center

    Perrin, John Robert

    2007-01-01

This article explores the use of problem posing in the calculus classroom using investigative projects. Specifically, four examples of student work are examined, each one differing in the originality of the problem posed. By allowing students to explore actual questions that they have about calculus, coming from their own work or class discussion, or…

  4. Critical Inquiry across the Disciplines: Strategies for Student-Generated Problem Posing

    ERIC Educational Resources Information Center

    Nardone, Carroll Ferguson; Lee, Renee Gravois

    2011-01-01

    Problem posing is a higher-order, active-learning task that is important for students to develop. This article describes a series of interdisciplinary learning activities designed to help students strengthen their problem-posing skills, which requires that students become more responsible for their learning and that faculty move to a facilitator…

  5. Developing Teachers' Subject Didactic Competence through Problem Posing

    ERIC Educational Resources Information Center

    Ticha, Marie; Hospesova, Alena

    2013-01-01

    Problem posing (not only in lesson planning but also directly in teaching whenever needed) is one of the attributes of a teacher's subject didactic competence. In this paper, problem posing in teacher education is understood as an educational and a diagnostic tool. The results of the study were gained in pre-service primary school teacher…

  6. The Impact of Problem Posing on Elementary Teachers' Beliefs about Mathematics and Mathematics Teaching

    ERIC Educational Resources Information Center

    Barlow, Angela T.; Cates, Janie M.

    2006-01-01

    This study investigated the impact of incorporating problem posing in elementary classrooms on the beliefs held by elementary teachers about mathematics and mathematics teaching. Teachers participated in a year-long staff development project aimed at facilitating the incorporation of problem posing into their classrooms. Beliefs were examined via…

  7. The Posing of Arithmetic Problems by Mathematically Talented Students

    ERIC Educational Resources Information Center

    Espinoza González, Johan; Lupiáñez Gómez, José Luis; Segovia Alex, Isidoro

    2016-01-01

    Introduction: This paper analyzes the arithmetic problems posed by a group of mathematically talented students when given two problem-posing tasks, and compares these students' responses to those given by a standard group of public school students to the same tasks. Our analysis focuses on characterizing and identifying the differences between the…

  8. Posing Problems to Understand Children's Learning of Fractions

    ERIC Educational Resources Information Center

    Cheng, Lu Pien

    2013-01-01

In this study, the ways in which problem posing activities aid our understanding of children's learning of addition of unlike fractions and the product of proper fractions were examined. In particular, how a simple problem posing activity helps teachers take a second, deeper look at children's understanding of fraction concepts will be discussed. The…

  9. Development of the Structured Problem Posing Skills and Using Metaphoric Perceptions

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Unal, Hasan

    2014-01-01

The purpose of this study was to introduce a problem posing activity to third grade students who had never encountered one before. The study also explored students' metaphorical images of the problem posing process. Participants were from a public school in the Marmara Region of Turkey. Data were analyzed both qualitatively (content analysis for difficulty and…

  10. Integrating Worked Examples into Problem Posing in a Web-Based Learning Environment

    ERIC Educational Resources Information Center

    Hsiao, Ju-Yuan; Hung, Chun-Ling; Lan, Yu-Feng; Jeng, Yoau-Chau

    2013-01-01

Students often lack experience with problem posing and perceive it as difficult. The study hypothesized that worked examples may have benefits for supporting students' problem posing activities. A quasi-experiment was conducted in the context of a business mathematics course to examine the effects of integrating worked examples into…

  11. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    NASA Astrophysics Data System (ADS)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures included in the robots to a single link. As the established error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the error model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, some atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and some sensitive areas, where the pose errors of the moving platform are extremely sensitive to the geometric errors, are also figured out. By taking into account error factors which are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  12. Implications of clinical trial design on sample size requirements.

    PubMed

    Leon, Andrew C

    2008-07-01

    The primary goal in designing a randomized controlled clinical trial (RCT) is to minimize bias in the estimate of treatment effect. Randomized group assignment, double-blinded assessments, and control or comparison groups reduce the risk of bias. The design must also provide sufficient statistical power to detect a clinically meaningful treatment effect and maintain a nominal level of type I error. An attempt to integrate neurocognitive science into an RCT poses additional challenges. Two particularly relevant aspects of such a design often receive insufficient attention in an RCT. Multiple outcomes inflate type I error, and an unreliable assessment process introduces bias and reduces statistical power. Here we describe how both unreliability and multiple outcomes can increase the study costs and duration and reduce the feasibility of the study. The objective of this article is to consider strategies that overcome the problems of unreliability and multiplicity.
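    The inflation of type I error from multiple outcomes noted above can be quantified directly: for k independent tests at level α, the family-wise error rate is 1 − (1 − α)^k. A minimal sketch (the Bonferroni correction shown is one standard remedy, not necessarily the strategy the article recommends):

```python
def familywise_error(alpha: float, k: int) -> float:
    """Probability of at least one false positive among k independent tests."""
    return 1.0 - (1.0 - alpha) ** k

alpha, k = 0.05, 10
inflated = familywise_error(alpha, k)          # ~0.40 with ten outcomes
corrected = familywise_error(alpha / k, k)     # Bonferroni restores ~0.05
print(inflated, corrected)
```

    With ten outcomes the chance of at least one spurious finding is roughly 40%, which is why multiplicity must be addressed in the trial design rather than after the fact.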

  13. The Additional Error of Inertial Sensors Induced by Hypersonic Flight Conditions

    PubMed Central

    Karachun, Volodimir; Mel’nick, Viktorij; Korobiichuk, Igor; Nowicki, Michał; Szewczyk, Roman; Kobzar, Svitlana

    2016-01-01

The emergence of hypersonic technology poses a new challenge for inertial navigation sensors, which are widely used in the aerospace industry. The main problems are extremely high temperatures, vibration of the fuselage, penetrating acoustic radiation and shock N-waves. The nature of the additional errors of a gyroscopic inertial sensor with hydrostatic suspension components under operating conditions, generated by forced precession of the movable part of the suspension due to diffraction phenomena in acoustic fields, is explained. The cause of the disturbing moments, in the form of Coriolis inertia forces arising when the suspension surface transitions into an impedance surface, is revealed. The boundaries of occurrence of the features at the resonance wave match are described. The values of the “false” angular velocity resulting from the elastic-stress state of the suspension in acoustic fields are determined. PMID:26927122

  14. Effective Compiler Error Message Enhancement for Novice Programming Students

    ERIC Educational Resources Information Center

    Becker, Brett A.; Glanville, Graham; Iwashima, Ricardo; McDonnell, Claire; Goslin, Kyle; Mooney, Catherine

    2016-01-01

    Programming is an essential skill that many computing students are expected to master. However, programming can be difficult to learn. Successfully interpreting compiler error messages (CEMs) is crucial for correcting errors and progressing toward success in programming. Yet these messages are often difficult to understand and pose a barrier to…

  15. Mathematics of tsunami: modelling and identification

    NASA Astrophysics Data System (ADS)

    Krivorotko, Olga; Kabanikhin, Sergey

    2015-04-01

Tsunami (long waves in deep water) motion caused by underwater earthquakes is described by the shallow water equations

η_tt = div(gH(x,y)·grad η), (x,y) ∈ Ω, t ∈ (0,T);  η|_{t=0} = q(x,y), η_t|_{t=0} = 0, (x,y) ∈ Ω.  (1)

The bottom relief H(x,y) and the initial perturbation data (the tsunami source q(x,y)) are required for the direct simulation of tsunamis. The main difficulty of tsunami modelling is the very large size of the computational domain (Ω = 500 × 1000 kilometres in space and about one hour of computational time T for one meter of initial perturbation amplitude max|q|). The calculation of the function η(x,y,t) of three variables in Ω × (0,T) requires large computing resources. We construct a new algorithm to numerically determine the moving tsunami wave height S(x,y), based on a kinematic-type approach and an analytical representation of the fundamental solution. The proposed algorithm for determining the function of two variables S(x,y) reduces the number of operations by a factor of 1.5 compared with solving problem (1). If none of the functions depends on the variable y (the one-dimensional case), then the moving tsunami wave height satisfies the well-known Airy-Green formula: S(x) = S(0)·(H(0)/H(x))^{1/4}. The problem of identifying the parameters of a tsunami source using additional measurements of a passing wave is called the inverse tsunami problem. We investigate two different inverse problems of determining a tsunami source q(x,y) using two different kinds of additional data: Deep-ocean Assessment and Reporting of Tsunamis (DART) measurements and satellite altimeter wave-form images. These problems are severely ill-posed. The main idea consists in combining the two kinds of measured data to reconstruct the source parameters. We apply regularization techniques to control the degree of ill-posedness, such as Fourier expansion, truncated singular value decomposition and numerical regularization. 
The algorithm for selecting the truncation number of singular values of the inverse problem operator, consistent with the error level in the measured data, is described and analysed. In numerical experiments we used the conjugate gradient method for solving inverse tsunami problems. Gradient methods are based on minimizing the corresponding misfit function. To calculate the gradient of the misfit function, the adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed. We show that using a combination of the two types of data allows one to increase the stability and efficiency of tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Institute of Computational Mathematics and Mathematical Geophysics of SB RAS, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions under wave run-ups and earthquakes. The special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. We demonstrate the tsunami simulation plug-in for historical tsunami events (the 2004 Indian Ocean tsunami, the 2006 Simushir tsunami and others). This work was supported by the Ministry of Education and Science of the Russian Federation.
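    The truncated singular value decomposition used above can be sketched for a generic ill-posed linear system; the Hilbert matrix, noise level, and truncation number below are illustrative stand-ins for the tsunami source operator:

```python
import numpy as np

n = 8
# Hilbert matrix: a classic severely ill-conditioned operator (stand-in only)
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-6 * rng.standard_normal(n)   # data with measurement error

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(U, s, Vt, b, k):
    """Keep the k largest singular values; drop noise-dominated modes."""
    return Vt[:k].T @ ((U.T @ b)[:k] / s[:k])

x_naive = Vt.T @ ((U.T @ b) / s)   # full inverse amplifies the noise
x_tsvd = tsvd_solve(U, s, Vt, b, k=4)
```

    Choosing the truncation number k consistently with the error level in the data is exactly the selection problem the abstract describes; the naive full inverse divides by singular values far below the noise floor and destroys the reconstruction.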

  16. An inverse dynamics approach to face animation.

    PubMed

    Pitermann, M; Munhall, K G

    2001-09-01

Muscle-based models of the human face produce high quality animation but rely on recorded muscle activity signals or synthetic muscle signals that are often derived by trial and error. This paper presents a dynamic inversion of a muscle-based model (Lucero and Munhall, 1999) that permits the animation to be created from kinematic recordings of facial movements. Using a nonlinear optimizer (Powell's algorithm), the inversion produces a muscle activity set for seven muscles in the lower face that minimizes the root mean square error between kinematic data recorded with OPTOTRAK and the corresponding nodes of the modeled facial mesh. This inverted muscle activity is then used to animate the facial model. In three tests of the inversion, strong correlations were observed for kinematics produced from synthetic muscle activity, for OPTOTRAK kinematics recorded from a talker to whom the facial model is morphologically adapted, and finally for another talker with the model morphology adapted to a different individual. The correspondence between the animation kinematics and the three-dimensional OPTOTRAK data is very good and the animation is of high quality. Because the kinematic-to-electromyography (EMG) inversion is ill-posed, there is no relation between the actual EMG and the inverted EMG. The overall redundancy of the motor system means that many different EMG patterns can produce the same kinematic output.
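    The inversion loop described above can be sketched with a toy forward model. The map below (a random linear layer under a tanh nonlinearity) and the derivative-free coordinate-wise grid search standing in for Powell's conjugate-direction algorithm are both illustrative assumptions, not the authors' biomechanical model:

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((12, 7))          # hypothetical muscle-to-kinematics map
a_true = rng.uniform(0.1, 0.9, 7)         # unknown activities, 7 lower-face muscles
target = np.tanh(W @ a_true)              # "recorded" kinematic data

def rms_error(a):
    """RMS mismatch between modeled and recorded kinematics."""
    return np.sqrt(np.mean((np.tanh(W @ a) - target) ** 2))

# Coordinate-wise line search: a crude stand-in for Powell's algorithm,
# which instead builds up conjugate search directions.
a = np.full(7, 0.5)                       # neutral initial activity guess
grid = np.linspace(0.0, 1.0, 201)
for _ in range(20):                       # outer sweeps over all muscles
    for i in range(7):
        errs = []
        for g in grid:
            trial = a.copy()
            trial[i] = g
            errs.append(rms_error(trial))
        a[i] = grid[int(np.argmin(errs))]
```

    The recovered activity vector `a` reproduces the target kinematics; as the abstract notes, such an inverse is non-unique in general, so agreement in kinematics does not imply agreement with the true muscle signals.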

  17. Giving Voice to Study Volunteers: Comparing views of mentally ill, physically ill, and healthy protocol participants on ethical aspects of clinical research

    PubMed Central

    Roberts, Laura Weiss; Kim, Jane Paik

    2014-01-01

    Motivation Ethical controversy surrounds clinical research involving seriously ill participants. While many stakeholders have opinions, the extent to which protocol volunteers themselves see human research as ethically acceptable has not been documented. To address this gap of knowledge, authors sought to assess views of healthy and ill clinical research volunteers regarding the ethical acceptability of human studies involving individuals who are ill or are potentially vulnerable. Methods Surveys and semi-structured interviews were used to query clinical research protocol participants and a comparison group of healthy individuals. A total of 179 respondents participated in this study: 150 in protocols (60 mentally ill, 43 physically ill, and 47 healthy clinical research protocol participants) and 29 healthy individuals not enrolled in protocols. Main outcome measures included responses regarding ethical acceptability of clinical research when it presents significant burdens and risks, involves people with serious mental and physical illness, or enrolls people with other potential vulnerabilities in the research situation. Results Respondents expressed decreasing levels of acceptance of participation in research that posed burdens of increasing severity. Participation in protocols with possibly life-threatening consequences was perceived as least acceptable (mean = 1.82, sd = 1.29). Research on serious illnesses, including HIV, cancer, schizophrenia, depression, and post-traumatic stress disorder, was seen as ethically acceptable across respondent groups (range of means = [4.0, 4.7]). Mentally ill volunteers expressed levels of ethical acceptability for physical illness research and mental illness research as acceptable and similar, while physically ill volunteers expressed greater ethical acceptability for physical illness research than for mental illness research. 
Mentally ill, physically ill, and healthy participants expressed neutral to favorable perspectives regarding the ethical acceptability of clinical research participation by potentially vulnerable subpopulations (difference in acceptability perceived by mentally ill - healthy=−0.04, CI [−0.46, 0.39]; physically ill – healthy= −0.13, CI [−0.62, −.36]). Conclusions Clinical research volunteers and healthy clinical research-“naive” individuals view studies involving ill people as ethically acceptable, and their responses reflect concern regarding research that poses considerable burdens and risks and research involving vulnerable subpopulations. Physically ill research volunteers may be more willing to see burdensome and risky research as acceptable. Mentally ill research volunteers and healthy individuals expressed similar perspectives in this study, helping to dispel a misconception that those with mental illness should be presumed to hold disparate views. PMID:24931849

  18. Giving voice to study volunteers: comparing views of mentally ill, physically ill, and healthy protocol participants on ethical aspects of clinical research.

    PubMed

    Roberts, Laura Weiss; Kim, Jane Paik

    2014-09-01

    Ethical controversy surrounds clinical research involving seriously ill participants. While many stakeholders have opinions, the extent to which protocol volunteers themselves see human research as ethically acceptable has not been documented. To address this gap of knowledge, authors sought to assess views of healthy and ill clinical research volunteers regarding the ethical acceptability of human studies involving individuals who are ill or are potentially vulnerable. Surveys and semi-structured interviews were used to query clinical research protocol participants and a comparison group of healthy individuals. A total of 179 respondents participated in this study: 150 in protocols (60 mentally ill, 43 physically ill, and 47 healthy clinical research protocol participants) and 29 healthy individuals not enrolled in protocols. Main outcome measures included responses regarding ethical acceptability of clinical research when it presents significant burdens and risks, involves people with serious mental and physical illness, or enrolls people with other potential vulnerabilities in the research situation. Respondents expressed decreasing levels of acceptance of participation in research that posed burdens of increasing severity. Participation in protocols with possibly life-threatening consequences was perceived as least acceptable (mean = 1.82, sd = 1.29). Research on serious illnesses, including HIV, cancer, schizophrenia, depression, and post-traumatic stress disorder, was seen as ethically acceptable across respondent groups (range of means = [4.0, 4.7]). Mentally ill volunteers expressed levels of ethical acceptability for physical illness research and mental illness research as acceptable and similar, while physically ill volunteers expressed greater ethical acceptability for physical illness research than for mental illness research. 
Mentally ill, physically ill, and healthy participants expressed neutral to favorable perspectives regarding the ethical acceptability of clinical research participation by potentially vulnerable subpopulations (difference in acceptability perceived by mentally ill - healthy = -0.04, CI [-0.46, 0.39]; physically ill - healthy = -0.13, CI [-0.62, -.36]). Clinical research volunteers and healthy clinical research-"naïve" individuals view studies involving ill people as ethically acceptable, and their responses reflect concern regarding research that poses considerable burdens and risks and research involving vulnerable subpopulations. Physically ill research volunteers may be more willing to see burdensome and risky research as acceptable. Mentally ill research volunteers and healthy individuals expressed similar perspectives in this study, helping to dispel a misconception that those with mental illness should be presumed to hold disparate views. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. A Tikhonov Regularization Scheme for Focus Rotations with Focused Ultrasound Phased Arrays

    PubMed Central

    Hughes, Alec; Hynynen, Kullervo

    2016-01-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually-driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations. PMID:27913323

  20. A Tikhonov Regularization Scheme for Focus Rotations With Focused Ultrasound-Phased Arrays.

    PubMed

    Hughes, Alec; Hynynen, Kullervo

    2016-12-01

    Phased arrays have a wide range of applications in focused ultrasound therapy. By using an array of individually driven transducer elements, it is possible to steer a focus through space electronically and compensate for acoustically heterogeneous media with phase delays. In this paper, the concept of focusing an ultrasound-phased array is expanded to include a method to control the orientation of the focus using a Tikhonov regularization scheme. It is then shown that the Tikhonov regularization parameter used to solve the ill-posed focus rotation problem plays an important role in the balance between quality focusing and array efficiency. Finally, the technique is applied to the synthesis of multiple foci, showing that this method allows for multiple independent spatial rotations.
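    The role of the regularization parameter described above (balancing focal quality against array efficiency) can be sketched for a generic complex propagation matrix. The matrix H, the target pattern, and the λ values below are illustrative assumptions, not the authors' array model:

```python
import numpy as np

rng = np.random.default_rng(0)
m, k = 64, 32                      # field control points vs. array elements
# Hypothetical complex propagation matrix from element drives to field values
H = rng.standard_normal((m, k)) + 1j * rng.standard_normal((m, k))
u = np.zeros(m, dtype=complex)
u[m // 2] = 1.0                    # desired focal pattern: a single focus

def tikhonov_drive(H, u, lam):
    """Element drives p = (H^H H + lam^2 I)^{-1} H^H u."""
    k = H.shape[1]
    return np.linalg.solve(H.conj().T @ H + lam**2 * np.eye(k), H.conj().T @ u)

results = []
for lam in (0.01, 1.0, 10.0):
    p = tikhonov_drive(H, u, lam)
    # residual = focusing error; ||p|| = drive effort (efficiency proxy)
    results.append((lam, np.linalg.norm(H @ p - u), np.linalg.norm(p)))
```

    Increasing λ trades focal fidelity (a growing residual) for lower drive amplitudes, mirroring the focusing-quality/efficiency balance the paper analyzes.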

  1. Applications of Electrical Impedance Tomography (EIT): A Short Review

    NASA Astrophysics Data System (ADS)

    Kanti Bera, Tushar

    2018-03-01

Electrical Impedance Tomography (EIT) is a tomographic imaging method which solves an ill-posed inverse problem using the boundary voltage-current data collected from the surface of the object under test. Though its spatial resolution is low compared to conventional tomographic imaging modalities, EIT has several advantages and has been studied for a number of applications such as medical imaging, material engineering, civil engineering, biotechnology, chemical engineering, MEMS and other fields of engineering and applied sciences. In this paper, the applications of EIT are reviewed and presented as a short summary. The working principle, instrumentation and advantages are briefly discussed, followed by a detailed discussion of the applications of EIT technology in different areas of engineering, technology and applied sciences.

  2. [Prevalence of patients with HIV infection in an emergency department].

    PubMed

    Greco, G M; Paparo, R; Ventura, R; Migliardi, C; Tallone, R; Moccia, F

    1995-01-01

The activity at an ED, primarily aimed at providing rational and qualified support to critically ill patients, is forced to manage very different nosographic entities, including infectious, often contagious, pathologies. In this context the diffusion of HIV infection poses a number of problems concerning both the kinds of patients presenting to the ED and the occupational risk to health-care workers. In the first four months of 1992 the incidence of patients with recognized or presumed HIV infection at the "Pronto Soccorso Medico" was 1.78% of the 2327 patients admitted. This study aims to contribute to the epidemiologic definition of the risk of HIV infection due to occupational exposure, stressing the peculiar conditions of urgency-emergency that often characterize activity within the ED.

  3. Two approaches to the care of an elder parent: a study of Robert Anderson's I Never Sang for My Father and Sawako Ariyoshi's Kokotsu no hito [The Twilight Years].

    PubMed

    Donow, H S

    1990-08-01

    Care of an elder parent is often regarded by the children as an unwanted burden. Anderson's 1968 play, I Never Sang for My Father, and Ariyoshi's 1972 novel, Kokotsu no hito [The Twilight Years], show how two different families of two different cultures (American and Japanese) respond to this crisis. Both texts arrive at dramatically different conclusions: in one the children, Gene and Alice, prove unwilling or unable to cope with the problems posed by their father's need; in the other Akiko, though nearly overwhelmed by the burden of her father-in-law's illness, emerges richer for the experience.

  4. Improving chemical species tomography of turbulent flows using covariance estimation.

    PubMed

    Grauer, Samuel J; Hadwin, Paul J; Daun, Kyle J

    2017-05-01

    Chemical species tomography (CST) experiments can be divided into limited-data and full-rank cases. Both require solving ill-posed inverse problems, and thus the measurement data must be supplemented with prior information to carry out reconstructions. The Bayesian framework formalizes the role of this additional information, expressed as the mean and covariance of a joint-normal prior probability density function. We present techniques for estimating the spatial covariance of a flow under limited-data and full-rank conditions. Our results show that incorporating a covariance estimate into CST reconstruction via a Bayesian prior increases the accuracy of instantaneous estimates. Improvements are especially dramatic in real-time limited-data CST, which is directly applicable to many industrially relevant experiments.
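
    For a linear measurement model, the Bayesian reconstruction described above reduces to a regularized least-squares solve whose solution is the maximum a posteriori (MAP) estimate. A minimal sketch, using a toy 2x2 field and a hypothetical ray-sum matrix rather than the authors' data:

```python
import numpy as np

def bayes_map(A, b, x0, G_x, G_e):
    """MAP estimate for the linear model b = A x + e with
    joint-normal prior x ~ N(x0, G_x) and noise e ~ N(0, G_e)."""
    Gx_inv = np.linalg.inv(G_x)
    Ge_inv = np.linalg.inv(G_e)
    H = A.T @ Ge_inv @ A + Gx_inv            # posterior precision matrix
    rhs = A.T @ Ge_inv @ (b - A @ x0)
    return x0 + np.linalg.solve(H, rhs)

# Toy limited-data problem: 3 ray sums over a 2x2 field (4 unknowns).
A = np.array([[1., 1., 0., 0.],
              [0., 0., 1., 1.],
              [1., 0., 1., 0.]])
x_true = np.array([1., 2., 3., 4.])
b = A @ x_true
x0 = np.full(4, 2.5)        # prior mean
G_x = np.eye(4)             # a spatial covariance estimate would go here
G_e = 1e-6 * np.eye(3)      # small measurement noise
x_map = bayes_map(A, b, x0, G_x, G_e)
print(np.round(x_map, 2))
```

    The prior covariance `G_x` is where a flow-specific spatial covariance estimate, of the kind the paper advocates, would enter; here it is just the identity.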

  5. Locating an atmospheric contamination source using slow manifolds

    NASA Astrophysics Data System (ADS)

    Tang, Wenbo; Haller, George; Baik, Jong-Jin; Ryu, Young-Hee

    2009-04-01

    Finite-size particle motion in fluids obeys the Maxey-Riley equations, which become singular in the limit of infinitesimally small particle size. Because of this singularity, finding the source of a dispersed set of small particles is a numerically ill-posed problem that leads to exponential blowup. Here we use recent results on the existence of a slow manifold in the Maxey-Riley equations to overcome this difficulty in source inversion. Specifically, we locate the source of particles by projecting their dispersed positions on a time-varying slow manifold, and by advecting them on the manifold in backward time. We use this technique to locate the source of a hypothetical anthrax release in an unsteady three-dimensional atmospheric wind field in an urban street canyon.
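
    The backward-advection step can be sketched by integrating a velocity field in reverse time. Here the urban wind field is replaced by a hypothetical 2-D unsteady flow, and the projection onto the slow manifold is idealized as taking the fluid velocity itself (the leading-order slow-manifold dynamics for very small particles):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical 2-D unsteady velocity field standing in for the wind data.
def velocity(t, x):
    return np.array([-x[1] + 0.1 * np.sin(t), x[0]])

def advect_back(x_obs, t_obs, t_src=0.0):
    """Advect an observed particle position backward in time along the
    flow, from observation time t_obs to the candidate release time."""
    sol = solve_ivp(velocity, (t_obs, t_src), x_obs, rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

# A particle observed at t = 2 should map back onto its source location.
source = np.array([1.0, 0.0])
fwd = solve_ivp(velocity, (0.0, 2.0), source, rtol=1e-9, atol=1e-12)
x_obs = fwd.y[:, -1]
print(np.round(advect_back(x_obs, 2.0), 4))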

  6. Developing Pre-Service Teachers' Understanding of Fractions through Problem Posing

    ERIC Educational Resources Information Center

    Toluk-Ucar, Zulbiye

    2009-01-01

    This study investigated the effect of problem posing on the understanding of fraction concepts among pre-service primary teachers enrolled in two different versions of a methods course at a university in Turkey. In the experimental version, problem posing was used as a teaching strategy. At the beginning of the study, the pre-service teachers'…

  7. The Effects of Problem Posing on Student Mathematical Learning: A Meta-Analysis

    ERIC Educational Resources Information Center

    Rosli, Roslinda; Capraro, Mary Margaret; Capraro, Robert M.

    2014-01-01

    The purpose of the study was to meta-synthesize research findings on the effectiveness of problem posing and to investigate the factors that might affect the incorporation of problem posing in the teaching and learning of mathematics. The eligibility criteria for inclusion of literature in the meta-analysis was: published between 1989 and 2011,…

  8. Teachers Implementing Mathematical Problem Posing in the Classroom: Challenges and Strategies

    ERIC Educational Resources Information Center

    Leung, Shuk-kwan S.

    2013-01-01

    This paper reports a study about how a teacher educator shared knowledge with teachers when they worked together to implement mathematical problem posing (MPP) in the classroom. It includes feasible methods for getting practitioners to use research-based tasks aligned to the curriculum in order to encourage children to pose mathematical problems.…

  9. Problem-Posing in Education: Transformation of the Practice of the Health Professional.

    ERIC Educational Resources Information Center

    Casagrande, L. D. R.; Caron-Ruffino, M.; Rodrigues, R. A. P.; Vendrusculo, D. M. S.; Takayanagui, A. M. M.; Zago, M. M. F.; Mendes, M. D.

    1998-01-01

    Studied the use of a problem-posing model in health education. The model based on the ideas of Paulo Freire is presented. Four innovative experiences of teaching-learning in environmental and occupational health and patient education are reported. Notes that the problem-posing model has the capability to transform health-education practice.…

  10. Prospective Middle School Mathematics Teachers' Knowledge of Linear Graphs in Context of Problem-Posing

    ERIC Educational Resources Information Center

    Kar, Tugrul

    2016-01-01

    This study examined prospective middle school mathematics teachers' problem-posing skills by investigating their ability to associate linear graphs with daily life situations. Prospective teachers were given linear graphs and asked to pose problems that could potentially be represented by the graphs. Their answers were analyzed in two stages. In…

  11. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-12-01

    As an advanced measurement technique that is non-radiant, non-intrusive, fast-responding, and low cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most widely used ET algorithm due to its advantages of dynamic imaging, real-time response, and easy realization, but it suffers from low spatial resolution owing to the inherent 'soft field' effect and the ill-posedness of the inverse problem, which greatly limit its applicable range. In this paper, an original data decomposition method is proposed: each ET measurement is decomposed into two independent new data based on the positive and negative sensing areas of the measurement. Consequently, the total number of measurements is doubled, effectively reducing the ill-posedness. In addition, an index to quantify the 'soft field' effect is proposed. The index shows that the decomposed data can distinguish the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the 'soft field' effect in the ET imaging process. Based on the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and improved spatial resolution.
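
    Classical LBP, the baseline this paper improves on, is a single normalized matrix-vector product over a precomputed sensitivity map. A minimal sketch with a hypothetical all-positive sensitivity matrix (the paper's data-decomposition step is omitted):

```python
import numpy as np

def lbp(S, b):
    """Classical linear back projection: each pixel's grey level is the
    sensitivity-weighted sum of the measurements, normalized per pixel."""
    num = S.T @ b
    den = S.T @ np.ones_like(b)     # per-pixel normalization factor
    return num / den

# Toy sensitivity map: 3 electrode-pair measurements over 4 pixels.
S = np.array([[0.6, 0.4, 0.1, 0.1],
              [0.1, 0.1, 0.4, 0.6],
              [0.3, 0.2, 0.2, 0.3]])
b = np.array([1.0, 0.2, 0.5])       # normalized measurement vector
img = lbp(S, b)
print(np.round(img, 3))
```

    The decomposition in the paper would split each row of `S` into its positive and negative sensing areas, doubling the number of rows before back projection.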

  12. An in-home video study and questionnaire survey of food preparation, kitchen sanitation, and hand washing practices.

    PubMed

    Scott, Elizabeth; Herbold, Nancie

    2010-06-01

    Foodborne illnesses pose a problem to all individuals but are especially significant for infants, the elderly, and individuals with compromised immune systems. Personal hygiene is recognized as the number-one way people can lower their risk. The majority of meals in the U.S. are eaten at home. Little is known, however, about the actual application of personal hygiene and sanitation behaviors in the home. The study discussed in this article assessed knowledge of hygiene practices compared to observed behaviors and determined whether knowledge equated to practice. It was a descriptive study involving a convenience sample of 30 households. Subjects were recruited from the Boston area and a researcher and/or a research assistant traveled to the homes of study participants to videotape a standard food preparation procedure preceded by floor mopping. The results highlight the differences between individuals' reported beliefs and actual practice. This information can aid food safety and other health professionals in targeting food safety education so that consumers understand their own critical role in decreasing their risk for foodborne illness.

  13. A Framework for Modeling Human-Machine Interactions

    NASA Technical Reports Server (NTRS)

    Shafto, Michael G.; Rosekind, Mark R. (Technical Monitor)

    1996-01-01

    Modern automated flight-control systems employ a variety of different behaviors, or modes, for managing the flight. While developments in cockpit automation have resulted in workload reduction and economical advantages, they have also given rise to an ill-defined class of human-machine problems, sometimes referred to as 'automation surprises'. Our interest in applying formal methods for describing human-computer interaction stems from our ongoing research on cockpit automation. In this area of aeronautical human factors, there is much concern about how flight crews interact with automated flight-control systems, so that the likelihood of making errors, in particular mode-errors, is minimized and the consequences of such errors are contained. The goal of the ongoing research on formal methods in this context is: (1) to develop a framework for describing human interaction with control systems; (2) to formally categorize such automation surprises; and (3) to develop tests for identification of these categories early in the specification phase of a new human-machine system.

  14. [Building questions in forensic medicine and their logical basis].

    PubMed

    Kovalev, D; Shmarov, K; Ten'kov, D

    2015-01-01

    The authors characterize in brief the requirements to the correct formulation of the questions posed to forensic medical experts with special reference to the mistakes made in building the questions and the ways to avoid them. This article actually continues the series of publications of the authors concerned with the major logical errors encountered in expert conclusions. Further publications will be dedicated to the results of the in-depth analysis of the logical errors contained in the questions posed to forensic medical experts and encountered in the expert conclusions.

  15. Under Control

    PubMed Central

    Payne, John

    1971-01-01

    The new film of David Mercer's Family Life poses some hard questions for psychiatry to answer and puts the Laingian case for 'schizophrenia' being an illness created within the family unit. PMID:27670980

  16. Joint Denoising/Compression of Image Contours via Shape Prior and Context Tree

    NASA Astrophysics Data System (ADS)

    Zheng, Amin; Cheung, Gene; Florencio, Dinei

    2018-07-01

    With the advent of depth sensing technologies, the extraction of object contours in images---a common and important pre-processing step for later higher-level computer vision tasks like object detection and human action recognition---has become easier. However, acquisition noise in captured depth images means that detected contours suffer from unavoidable errors. In this paper, we propose to jointly denoise and compress detected contours in an image for bandwidth-constrained transmission to a client, who can then carry out aforementioned application-specific tasks using the decoded contours as input. We first prove theoretically that in general a joint denoising / compression approach can outperform a separate two-stage approach that first denoises then encodes contours lossily. Adopting a joint approach, we first propose a burst error model that models typical errors encountered in an observed string y of directional edges. We then formulate a rate-constrained maximum a posteriori (MAP) problem that trades off the posterior probability p(x'|y) of an estimated string x' given y with its code rate R(x'). We design a dynamic programming (DP) algorithm that solves the posed problem optimally, and propose a compact context representation called total suffix tree (TST) that can reduce complexity of the algorithm dramatically. Experimental results show that our joint denoising / compression scheme outperformed a competing separate scheme in rate-distortion performance noticeably.

  17. Bone orientation and position estimation errors using Cosserat point elements and least squares methods: Application to gait.

    PubMed

    Solav, Dana; Camomilla, Valentina; Cereatti, Andrea; Barré, Arnaud; Aminian, Kamiar; Wolf, Alon

    2017-09-06

    The aim of this study was to analyze the accuracy of bone pose estimation based on sub-clusters of three skin-markers characterized by triangular Cosserat point elements (TCPEs) and to evaluate the capability of four instantaneous physical parameters, which can be measured non-invasively in vivo, to identify the most accurate TCPEs. Moreover, TCPE pose estimations were compared with the estimations of two least squares minimization methods applied to the cluster of all markers, using rigid body (RBLS) and homogeneous deformation (HDLS) assumptions. Analysis was performed on previously collected in vivo treadmill gait data composed of simultaneous measurements of the gold-standard bone pose by bi-plane fluoroscopy tracking the subjects' knee prosthesis and a stereophotogrammetric system tracking skin-markers affected by soft tissue artifact. Femur orientation and position errors estimated from skin-marker clusters were computed for 18 subjects using clusters of up to 35 markers. Results based on gold-standard data revealed that instantaneous subsets of TCPEs exist which estimate the femur pose with reasonable accuracy (median root mean square error during stance/swing: 1.4/2.8 deg for orientation, 1.5/4.2 mm for position). A non-invasive and instantaneous criterion to select accurate TCPEs for pose estimation (4.8/7.3 deg, 5.8/12.3 mm) was compared with RBLS (4.3/6.6 deg, 6.9/16.6 mm) and HDLS (4.6/7.6 deg, 6.7/12.5 mm). Accounting for homogeneous deformation, using HDLS or selected TCPEs, yielded more accurate position estimations than the RBLS method, which, conversely, yielded more accurate orientation estimations. Further investigation is required to devise effective criteria for cluster selection, which could represent a significant improvement in bone pose estimation accuracy. Copyright © 2017 Elsevier Ltd. All rights reserved.
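
    The rigid-body least-squares (RBLS) fit mentioned above is the standard orthogonal Procrustes problem, commonly solved with the Kabsch SVD method. A sketch on synthetic noiseless markers (not the study's gait data):

```python
import numpy as np

def rbls_pose(P, Q):
    """Least-squares rigid-body fit (Kabsch): rotation R and translation t
    mapping reference marker positions P (n x 3) onto measured Q (n x 3)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Check: recover a known pose from five synthetic markers.
rng = np.random.default_rng(0)
P = rng.normal(size=(5, 3))
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])
t_true = np.array([10.0, -2.0, 5.0])
Q = P @ R_true.T + t_true
R, t = rbls_pose(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

    With soft-tissue artifact the markers deform non-rigidly, which is exactly why the paper compares this rigid fit against homogeneous-deformation and TCPE-based alternatives.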

  18. Mighty Mathematicians: Using Problem Posing and Problem Solving to Develop Mathematical Power

    ERIC Educational Resources Information Center

    McGatha, Maggie B.; Sheffield, Linda J.

    2006-01-01

    This article describes a year-long professional development institute combined with a summer camp for students. Both were designed to help teachers and students develop their problem-solving and problem-posing abilities.

  19. Violence by Parents Against Their Children: Reporting of Maltreatment Suspicions, Child Protection, and Risk in Mental Illness.

    PubMed

    McEwan, Miranda; Friedman, Susan Hatters

    2016-12-01

    Psychiatrists are mandated to report suspicions of child abuse in America. Potential for harm to children should be considered when one is treating parents who are at risk. Although it is the commonly held wisdom that mental illness itself is a major risk factor for child abuse, there are methodologic issues with studies purporting to demonstrate this. Rather, the risk from an individual parent must be considered. Substance abuse and personality disorder pose a separate risk than serious mental illness. Violence risk from mental illness is dynamic, rather than static. When severe mental illness is well-treated, the risk is decreased. However, these families are in need of social support. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. An Analysis of Problem-Posing Tasks in Chinese and US Elementary Mathematics Textbooks

    ERIC Educational Resources Information Center

    Cai, Jinfa; Jiang, Chunlian

    2017-01-01

    This paper reports on 2 studies that examine how mathematical problem posing is integrated in Chinese and US elementary mathematics textbooks. Study 1 involved a historical analysis of the problem-posing (PP) tasks in 3 editions of the most widely used elementary mathematics textbook series published by People's Education Press in China over 3…

  1. Fraction Multiplication and Division Word Problems Posed by Different Years of Pre-Service Elementary Mathematics Teachers

    ERIC Educational Resources Information Center

    Aydogdu Iskenderoglu, Tuba

    2018-01-01

    It is important for pre-service teachers to know the conceptual difficulties they have experienced regarding the concepts of multiplication and division in fractions and problem posing is a way to learn these conceptual difficulties. Problem posing is a synthetic activity that fundamentally has multiple answers. The purpose of this study is to…

  2. Generalizability Theory Research on Developing a Scoring Rubric to Assess Primary School Students' Problem Posing Skills

    ERIC Educational Resources Information Center

    Cankoy, Osman; Özder, Hasan

    2017-01-01

    The aim of this study is to develop a scoring rubric to assess primary school students' problem posing skills. A rubric comprising five dimensions, namely solvability, reasonability, mathematical structure, context, and language, was used. The raters scored the students' problem posing skills both with and without the scoring rubric to test the…

  3. An Investigation of Relationships between Students' Mathematical Problem-Posing Abilities and Their Mathematical Content Knowledge

    ERIC Educational Resources Information Center

    Van Harpen, Xianwei Y.; Presmeg, Norma C.

    2013-01-01

    The importance of students' problem-posing abilities in mathematics has been emphasized in the K-12 curricula in the USA and China. There are claims that problem-posing activities are helpful in developing creative approaches to mathematics. At the same time, there are also claims that students' mathematical content knowledge could be highly…

  4. An Investigation of Eighth Grade Students' Problem Posing Skills (Turkey Sample)

    ERIC Educational Resources Information Center

    Arikan, Elif Esra; Ünal, Hasan

    2015-01-01

    Posing a problem is a creative activity in mathematics education. The purpose of the study was to explore eighth grade students' problem posing ability. Three learning domains (the four arithmetic operations, fractions, and geometry) were chosen for this purpose. There were two classes, coded as class A and class B. Class A…

  5. Mathematical Creative Process Wallas Model in Students Problem Posing with Lesson Study Approach

    ERIC Educational Resources Information Center

    Nuha, Muhammad 'Azmi; Waluya, S. B.; Junaedi, Iwan

    2018-01-01

    Creative thinking is very important in the modern era, so it should be fostered through efforts such as designing lessons that train students to pose their own problems. The purposes of this research are (1) to give an initial description of students' mathematical creative thinking level in the Problem Posing Model with Lesson Study approach…

  6. Application of the epidemiological model in studying human error in aviation

    NASA Technical Reports Server (NTRS)

    Cheaney, E. S.; Billings, C. E.

    1981-01-01

    An epidemiological model is described in conjunction with the analytical process through which aviation occurrence reports are composed into the events and factors pertinent to it. The model represents a process in which disease, emanating from environmental conditions, manifests itself in symptoms that may lead to fatal illness, recoverable illness, or no illness depending on individual circumstances of patient vulnerability, preventive actions, and intervention. In the aviation system the analogy of the disease process is the predilection for error of human participants. This arises from factors in the operating or physical environment and results in errors of commission or omission that, again depending on the individual circumstances, may lead to accidents, system perturbations, or harmless corrections. A discussion of the previous investigations, each of which manifests the application of the epidemiological method, exemplifies its use and effectiveness.

  7. Problem Posing with Realistic Mathematics Education Approach in Geometry Learning

    NASA Astrophysics Data System (ADS)

    Mahendra, R.; Slamet, I.; Budiyono

    2017-09-01

    One of the difficulties students face in learning geometry is the topic of the plane, which requires them to understand abstract material. The aim of this research is to determine the effect of the Problem Posing learning model with a Realistic Mathematics Education approach on geometry learning. This quasi-experimental research was conducted in one of the junior high schools in Karanganyar, Indonesia. The sample was taken using a stratified cluster random sampling technique. The results of this research indicate that the Problem Posing learning model with a Realistic Mathematics Education approach can significantly improve students' conceptual understanding in geometry learning, especially on plane topics. This is because students taught with Problem Posing and the Realistic Mathematics Education approach become active in constructing their knowledge and in posing and solving problems in realistic contexts, making it easier for them to understand concepts and solve problems. Therefore, the Problem Posing learning model with the Realistic Mathematics Education approach is appropriate for mathematics learning, especially for geometry material, and can furthermore improve student achievement.

  8. New Interstellar Dust Models Consistent with Interstellar Extinction, Emission and Abundances Constraints

    NASA Technical Reports Server (NTRS)

    Zubko, V.; Dwek, E.; Arendt, R. G.; Oegerle, William (Technical Monitor)

    2001-01-01

    We present new interstellar dust models that are consistent with both the FUV to near-IR extinction and the infrared (IR) emission measurements from the diffuse interstellar medium. The models are characterized by different dust compositions and abundances. The problem we solve consists of determining the size distribution of the various dust components of the model. This is a typical ill-posed inversion problem, which we solve using the regularization approach. We reproduce the Li & Draine (2001, ApJ, 554, 778) results; however, their model requires an excessive amount of interstellar silicon (48 ppM of hydrogen compared to the 36 ppM available for an ISM of solar composition) to be locked up in dust. We found that dust models consisting of PAHs, amorphous silicate, graphite, and composite grains made up of silicates, organic refractory material, and water ice provide an improved fit to the extinction and IR emission measurements, while still requiring a subsolar amount of silicon to be in the dust. This research was supported by NASA Astrophysical Theory Program NRA 99-OSS-01.
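
    The size-distribution inversion is an instance of regularized least squares. A zeroth-order Tikhonov sketch on a toy smoothing kernel (the kernel, grids, and distribution below are hypothetical stand-ins, not the optical-property kernels used in the paper):

```python
import numpy as np

def tikhonov(K, d, lam):
    """Zeroth-order Tikhonov solution of the ill-posed system K f = d:
    minimize ||K f - d||^2 + lam * ||f||^2 via an augmented least squares."""
    n = K.shape[1]
    A = np.vstack([K, np.sqrt(lam) * np.eye(n)])
    rhs = np.concatenate([d, np.zeros(n)])
    return np.linalg.lstsq(A, rhs, rcond=None)[0]

# Toy forward model: "extinction" at wavelength i from grain sizes a_j.
a = np.linspace(0.01, 1.0, 40)                  # grain radii (um)
wl = np.linspace(0.1, 2.0, 25)                  # wavelengths (um)
K = np.exp(-np.subtract.outer(wl, 5 * a) ** 2)  # hypothetical smooth kernel
f_true = np.exp(-((a - 0.3) / 0.1) ** 2)        # "true" size distribution
d = K @ f_true
f_hat = tikhonov(K, d, lam=1e-6)
print(np.linalg.norm(K @ f_hat - d))
```

    The regularization parameter `lam` trades data misfit against smoothness of the recovered distribution; choosing it is the central practical difficulty of such inversions.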

  9. Responding to Students' Chronic Illnesses

    ERIC Educational Resources Information Center

    Shaw, Steven R.; Glaser, Sarah E.; Stern, Melissa; Sferdenschi, Corina; McCabe, Paul C.

    2010-01-01

    Chronic illnesses are long-term or permanent medical conditions that have recurring effects on everyday life. A large and growing number of students have chronic illnesses that affect their emotional development, physical development, academic performance, and family interactions. The primary error in educating these students is assuming that the…

  10. Segmentation, classification, and pose estimation of military vehicles in low resolution laser radar images

    NASA Astrophysics Data System (ADS)

    Neulist, Joerg; Armbruster, Walter

    2005-05-01

    Model-based object recognition in range imagery typically involves matching the image data to the expected model data for each feasible model and pose hypothesis. Since the matching procedure is computationally expensive, the key to efficient object recognition is the reduction of the set of feasible hypotheses. This is particularly important for military vehicles, which may consist of several large moving parts such as the hull, turret, and gun of a tank, and hence require an eight- or higher-dimensional pose space to be searched. The presented paper outlines techniques for reducing the set of feasible hypotheses based on an estimation of target dimensions and orientation. Furthermore, the presence of a turret and a main gun and their orientations are determined. The vehicle part dimensions as well as their error estimates restrict the number of model hypotheses, whereas the position and orientation estimates and their error bounds reduce the number of pose hypotheses needing to be verified. The techniques are applied to several hundred laser radar images of eight different military vehicles with various part classifications and orientations. On-target resolution in azimuth, elevation, and range is about 30 cm. The range images contain up to 20% dropouts due to atmospheric absorption. Additionally, some target retro-reflectors produce outliers due to signal crosstalk. The presented algorithms are extremely robust with respect to these and other error sources. The hypothesis space for hull orientation is reduced to about 5 degrees, as is the error for turret rotation and gun elevation, provided the main gun is visible.

  11. Good News for Borehole Climatology

    NASA Astrophysics Data System (ADS)

    Rath, Volker; Fidel Gonzalez-Rouco, J.; Goosse, Hugues

    2010-05-01

    Though the investigation of observed borehole temperatures has proved to be a valuable tool for the reconstruction of ground surface temperature histories, many open questions remain concerning the significance and accuracy of the reconstructions from these data. In particular, the temperature signal of the warming after the Last Glacial Maximum (LGM) is still present in borehole temperature profiles. It influences the relatively shallow boreholes used in current paleoclimate inversions to estimate temperature changes over the last centuries. This is shown using Monte Carlo experiments on past surface temperature change, with plausible distributions for the most important parameters, i.e., the amplitude and timing of the glacial-interglacial transition, the prior average temperature, and the petrophysical properties. It has been argued that the signature of the last glacial-interglacial transition could be responsible for the high amplitudes of millennial temperature reconstructions. However, in shallow boreholes the additional effect of past climate can reasonably be approximated by a linear variation of temperature with depth, and thus be accommodated by a "biased" background heat flow. This is good news for borehole climatology, but it implies that the geological heat-flow values have to be interpreted accordingly. Borehole climate reconstructions from these shallow profiles most probably underestimate past variability, due to the diffusive character of the heat conduction process and the smoothness constraints necessary for obtaining stable solutions of this ill-posed inverse problem. A simple correction based on subtracting an appropriate prior surface temperature history shows promising results, reducing these errors considerably, also with deeper boreholes, where the heat-flow signal cannot be approximated linearly, and it improves the comparisons with AOGCM modeling results.
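
    The claim that the post-LGM signal is nearly linear over shallow depths can be checked with the conductive half-space step response, T(z) = dT * erfc(z / (2*sqrt(kappa*t))). A sketch with illustrative parameter values (not the paper's Monte Carlo distributions):

```python
import numpy as np
from scipy.special import erfc

# Half-space response to a surface temperature step dT applied a time t ago.
kappa = 1e-6                 # thermal diffusivity (m^2/s), typical crustal value
t = 1e4 * 3.15e7             # a step 10 ka ago, in seconds (illustrative)
dT = 10.0                    # glacial-interglacial amplitude (K), illustrative
z = np.linspace(0.0, 300.0, 301)          # shallow borehole depths (m)
T = dT * erfc(z / (2.0 * np.sqrt(kappa * t)))

# Over the top 300 m the signal is close to linear in depth, so it can be
# absorbed into an apparent ("biased") background heat-flow gradient.
coeffs = np.polyfit(z, T, 1)
misfit = np.max(np.abs(T - np.polyval(coeffs, z)))
print(misfit)                # residual after removing the linear trend (K)
```

    The residual of the linear fit is a small fraction of a kelvin, supporting the abstract's point that in shallow boreholes the LGM signal masquerades as a biased heat flow rather than as resolvable structure.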

  12. Calibration of hydrological model with programme PEST

    NASA Astrophysics Data System (ADS)

    Brilly, Mitja; Vidmar, Andrej; Kryžanowski, Andrej; Bezak, Nejc; Šraj, Mojca

    2016-04-01

    PEST is a tool based on the minimization of an objective function related to the root mean square error between the model output and the measurements. We use the "singular value decomposition" section of the PEST control file, together with the Tikhonov regularization method, for successful estimation of the model parameters. PEST sometimes fails if the inverse problem is ill-posed, but singular value decomposition (SVD) ensures that PEST maintains numerical stability. The choice of the initial guess for the parameter values is an important issue in PEST and requires expert knowledge. The flexible nature of the PEST software and its ability to be applied to whole catchments at once allowed the calibration to perform extremely well across a high number of sub-catchments. The parallel computing version of PEST, called BeoPEST, was successfully used to speed up the calibration process. BeoPEST employs smart slaves and point-to-point communications to transfer data between the master and slave computers. The HBV-light model is a simple multi-tank-type model for simulating precipitation-runoff. It is a conceptual balance model of catchment hydrology which simulates discharge using rainfall, temperature, and estimates of potential evaporation. The HBV-light-CLI version allows the user to run HBV-light from the command line. Input and results files are in XML form, which makes it easy to connect the model with other applications, such as pre- and post-processing utilities and PEST itself. The procedure was applied to a hydrological model of the Savinja catchment (1852 km2), which consists of twenty-one sub-catchments. Data are processed on an hourly basis.
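
    The SVD safeguard mentioned above amounts to truncating the small singular values of the parameter-sensitivity (Jacobian) matrix before computing an update, so that near-redundant parameters do not destabilize the solve. A toy sketch of that idea (not PEST's actual implementation):

```python
import numpy as np

def tsvd_solve(J, r, k):
    """Truncated-SVD solution of the ill-posed least-squares problem
    J p = r: keep only the k largest singular values."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ r) / s[:k])

# Nearly rank-deficient Jacobian: two almost-identical parameters, as when
# two model parameters have indistinguishable effects on the observations.
J = np.array([[1.0, 1.0 + 1e-10],
              [2.0, 2.0 - 1e-10],
              [1.0, 1.0]])
r = np.array([2.0, 4.0, 2.0])
p = tsvd_solve(J, r, k=1)    # rank-1 truncation shares the update equally
print(np.round(p, 3))
```

    Dropping the tiny second singular value prevents the huge, oscillating parameter values a naive least-squares solve would produce, at the cost of resolving only the identifiable parameter combination.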

  13. A serial mediation model of workplace social support on work productivity: the role of self-stigma and job tenure self-efficacy in people with severe mental disorders.

    PubMed

    Villotti, Patrizia; Corbière, Marc; Dewa, Carolyn S; Fraccaroli, Franco; Sultan-Taïeb, Hélène; Zaniboni, Sara; Lecomte, Tania

    2017-09-12

    Compared to groups with other disabilities, people with a severe mental illness face the greatest stigma and barriers to employment opportunities. This study contributes to the understanding of the relationship between workplace social support and work productivity in people with severe mental illness working in Social Enterprises by taking into account the mediating role of self-stigma and job tenure self-efficacy. A total of 170 individuals with a severe mental disorder employed in a Social Enterprise filled out questionnaires assessing personal and work-related variables at Phase-1 (baseline) and Phase-2 (6-month follow-up). Process modeling was used to test for serial mediation. In the Social Enterprise workplace, social support yields better perceptions of work productivity through lower levels of internalized stigma and higher confidence in facing job-related problems. When testing serial multiple mediations, the specific indirect effect of high workplace social support on work productivity through both low internalized stigma and high job tenure self-efficacy was significant with a point estimate of 1.01 (95% CI = 0.42, 2.28). Continued work in this area can provide guidance for organizations in the open labor market addressing the challenges posed by the work integration of people with severe mental illness. Implications for Rehabilitation: Work integration of people with severe mental disorders is difficult because of limited access to supportive and nondiscriminatory workplaces. Social enterprise represents an effective model for supporting people with severe mental disorders to integrate the labor market. In the social enterprise workplace, social support yields better perceptions of work productivity through lower levels of internalized stigma and higher confidence in facing job-related problems.

  14. Thyroid Allostasis–Adaptive Responses of Thyrotropic Feedback Control to Conditions of Strain, Stress, and Developmental Programming

    PubMed Central

    Chatzitomaris, Apostolos; Hoermann, Rudolf; Midgley, John E.; Hering, Steffen; Urban, Aline; Dietrich, Barbara; Abood, Assjana; Klein, Harald H.; Dietrich, Johannes W.

    2017-01-01

    The hypothalamus–pituitary–thyroid feedback control is a dynamic, adaptive system. In situations of illness and deprivation of energy representing type 1 allostasis, the stress response operates to alter both its set point and peripheral transfer parameters. In contrast, type 2 allostatic load, typically effective in psychosocial stress, pregnancy, metabolic syndrome, and adaptation to cold, produces a nearly opposite phenotype of predictive plasticity. The non-thyroidal illness syndrome (NTIS) or thyroid allostasis in critical illness, tumors, uremia, and starvation (TACITUS), commonly observed in hospitalized patients, displays a historically well-studied pattern of allostatic thyroid response. This is characterized by decreased total and free thyroid hormone concentrations and varying levels of thyroid-stimulating hormone (TSH) ranging from decreased (in severe cases) to normal or even elevated (mainly in the recovery phase) TSH concentrations. An acute versus chronic stage (wasting syndrome) of TACITUS can be discerned. The two types differ in molecular mechanisms and prognosis. The acute adaptation of thyroid hormone metabolism to critical illness may prove beneficial to the organism, whereas the far more complex molecular alterations associated with chronic illness frequently lead to allostatic overload. The latter is associated with poor outcome, independently of the underlying disease. Adaptive responses of thyroid homeostasis extend to alterations in thyroid hormone concentrations during fetal life, periods of weight gain or loss, thermoregulation, physical exercise, and psychiatric diseases. The various forms of thyroid allostasis pose serious problems in differential diagnosis of thyroid disease. This review article provides an overview of physiological mechanisms as well as major diagnostic and therapeutic implications of thyroid allostasis under a variety of developmental and straining conditions. PMID:28775711

  15. Combining facial dynamics with appearance for age estimation.

    PubMed

    Dibeklioglu, Hamdi; Alnajar, Fares; Ali Salah, Albert; Gevers, Theo

    2015-06-01

    Estimating the age of a human from the captured images of his/her face is a challenging problem. In general, the existing approaches to this problem use appearance features only. In this paper, we show that in addition to appearance information, facial dynamics can be leveraged in age estimation. We propose a method to extract and use dynamic features for age estimation, using a person's smile. Our approach is tested on a large, gender-balanced database with 400 subjects, with an age range between 8 and 76. In addition, we introduce a new database on posed disgust expressions with 324 subjects in the same age range, and evaluate the reliability of the proposed approach when used with another expression. State-of-the-art appearance-based age estimation methods from the literature are implemented as baseline. We demonstrate that for each of these methods, the addition of the proposed dynamic features results in statistically significant improvement. We further propose a novel hierarchical age estimation architecture based on adaptive age grouping. We test our approach extensively, including an exploration of spontaneous versus posed smile dynamics, and gender-specific age estimation. We show that using spontaneity information reduces the mean absolute error by up to 21%, advancing the state of the art for facial age estimation.

  16. Problem Posing and Solving with Mathematical Modeling

    ERIC Educational Resources Information Center

    English, Lyn D.; Fox, Jillian L.; Watters, James J.

    2005-01-01

    Mathematical modeling is explored as both problem posing and problem solving from two perspectives, that of the child and the teacher. Mathematical modeling provides rich learning experiences for elementary school children and their teachers.

  17. Common mental health problems in immigrants and refugees: general approach in primary care

    PubMed Central

    Kirmayer, Laurence J.; Narasiah, Lavanya; Munoz, Marie; Rashid, Meb; Ryder, Andrew G.; Guzder, Jaswant; Hassan, Ghayda; Rousseau, Cécile; Pottie, Kevin

    2011-01-01

    Background: Recognizing and appropriately treating mental health problems among new immigrants and refugees in primary care poses a challenge because of differences in language and culture and because of specific stressors associated with migration and resettlement. We aimed to identify risk factors and strategies in the approach to mental health assessment and to prevention and treatment of common mental health problems for immigrants in primary care. Methods: We searched and compiled literature on prevalence and risk factors for common mental health problems related to migration, the effect of cultural influences on health and illness, and clinical strategies to improve mental health care for immigrants and refugees. Publications were selected on the basis of relevance, use of recent data and quality in consultation with experts in immigrant and refugee mental health. Results: The migration trajectory can be divided into three components: premigration, migration and postmigration resettlement. Each phase is associated with specific risks and exposures. The prevalence of specific types of mental health problems is influenced by the nature of the migration experience, in terms of adversity experienced before, during and after resettlement. Specific challenges in migrant mental health include communication difficulties because of language and cultural differences; the effect of cultural shaping of symptoms and illness behaviour on diagnosis, coping and treatment; differences in family structure and process affecting adaptation, acculturation and intergenerational conflict; and aspects of acceptance by the receiving society that affect employment, social status and integration. These issues can be addressed through specific inquiry, the use of trained interpreters and culture brokers, meetings with families, and consultation with community organizations. 
Interpretation: Systematic inquiry into patients’ migration trajectory and subsequent follow-up on culturally appropriate indicators of social, vocational and family functioning over time will allow clinicians to recognize problems in adaptation and undertake mental health promotion, disease prevention or treatment interventions in a timely way. PMID:20603342

  18. Sensitivity computation of the ℓ1 minimization problem and its application to dictionary design of ill-posed problems

    NASA Astrophysics Data System (ADS)

    Horesh, L.; Haber, E.

    2009-09-01

The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties which are associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
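The ℓ1 sparse-representation subproblem this record builds on can be illustrated with a minimal iterative soft-thresholding (ISTA) sketch. This is not the authors' algorithm; the dictionary, sparse signal, and parameter values below are invented for illustration:

```python
import numpy as np

def ista(D, y, lam, n_iter=500):
    """Solve min_x 0.5*||D x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x - step * (D.T @ (D @ x - y))      # gradient step on the quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(1)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]          # a 3-sparse code
y = D @ x_true                                  # noiseless observation
x_hat = ista(D, y, lam=0.05)                    # recovers the sparse support
```

For an incoherent dictionary and a sufficiently sparse code, the three active atoms dominate the recovered coefficients.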

  19. Inverse analysis and regularisation in conditional source-term estimation modelling

    NASA Astrophysics Data System (ADS)

    Labahn, Jeffrey W.; Devaud, Cecile B.; Sipkens, Timothy A.; Daun, Kyle J.

    2014-05-01

Conditional Source-term Estimation (CSE) obtains the conditional species mass fractions by inverting a Fredholm integral equation of the first kind. In the present work, a Bayesian framework is used to compare two different regularisation methods: zeroth-order temporal Tikhonov regularisation and first-order spatial Tikhonov regularisation. The objectives of the current study are: (i) to elucidate the ill-posedness of the inverse problem; (ii) to understand the origin of the perturbations in the data and quantify their magnitude; (iii) to quantify the uncertainty in the solution using different priors; and (iv) to determine the regularisation method best suited to this problem. A singular value decomposition shows that the current inverse problem is ill-posed. Perturbations to the data may be caused by the use of a discrete mixture fraction grid for calculating the mixture fraction PDF. The magnitude of the perturbations is estimated using a box filter and the uncertainty in the solution is determined based on the width of the credible intervals. The width of the credible intervals is significantly reduced with the inclusion of a smoothing prior and the recovered solution is in better agreement with the exact solution. The credible intervals for temporal and spatial smoothing are shown to be similar. Credible intervals for temporal smoothing depend on the solution from the previous time step and a smooth solution is not guaranteed. For spatial smoothing, the credible intervals are not dependent upon a previous solution and better predict characteristics for higher mixture fraction values. These characteristics make spatial smoothing a promising alternative method for recovering a solution from the CSE inversion process.
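The zeroth- versus first-order Tikhonov comparison at the heart of this record can be sketched on a discretized Fredholm integral of the first kind. The kernel, the smooth profile, and the regularisation weights below are invented stand-ins, not the CSE model:

```python
import numpy as np

def tikhonov(A, b, lam, L):
    # Solve min_x ||A x - b||^2 + lam^2 * ||L x||^2 via the normal equations.
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

# Discretize a smoothing kernel K(s, t) = exp(-(s - t)^2) on [0, 1]:
# a classic mildly ill-posed first-kind Fredholm operator.
n = 50
s = np.linspace(0.0, 1.0, n)
A = np.exp(-(s[:, None] - s[None, :]) ** 2) / n

x_true = np.sin(np.pi * s)                      # smooth target profile
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)  # perturbed data

L0 = np.eye(n)                                  # zeroth-order penalty ||x||
L1 = np.diff(np.eye(n), axis=0)                 # first-order penalty ||x'|| (smoothing)

x0 = tikhonov(A, b, 1e-3, L0)                   # zeroth-order solution
x1 = tikhonov(A, b, 1e-3, L1)                   # first-order (smoothing-prior) solution
```

The first-order operator penalizes differences between neighbouring grid values, which plays the role of the smoothing prior discussed in the abstract.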

  20. Diagnostic Error in Correctional Mental Health: Prevalence, Causes, and Consequences.

    PubMed

    Martin, Michael S; Hynes, Katie; Hatcher, Simon; Colman, Ian

    2016-04-01

    While they have important implications for inmates and resourcing of correctional institutions, diagnostic errors are rarely discussed in correctional mental health research. This review seeks to estimate the prevalence of diagnostic errors in prisons and jails and explores potential causes and consequences. Diagnostic errors are defined as discrepancies in an inmate's diagnostic status depending on who is responsible for conducting the assessment and/or the methods used. It is estimated that at least 10% to 15% of all inmates may be incorrectly classified in terms of the presence or absence of a mental illness. Inmate characteristics, relationships with staff, and cognitive errors stemming from the use of heuristics when faced with time constraints are discussed as possible sources of error. A policy example of screening for mental illness at intake to prison is used to illustrate when the risk of diagnostic error might be increased and to explore strategies to mitigate this risk. © The Author(s) 2016.

  1. Diagnosis of organic brain syndrome: an emergency department dilemma.

    PubMed

    Dubin, W R; Weiss, K J

    1984-01-01

    Delirium and dementia frequently pose a diagnostic dilemma for clinicians in the emergency department. The overlap of symptoms between organic brain syndrome and functional psychiatric illness, coupled with a dramatic presentation, often leads to a premature psychiatric diagnosis. In this paper, the authors discuss those symptoms of organic brain syndrome that most frequently generate diagnostic confusion in the emergency department and result in a misdiagnosis of functional illness.

  2. Problem-posing in education: transformation of the practice of the health professional.

    PubMed

    Casagrande, L D; Caron-Ruffino, M; Rodrigues, R A; Vendrúsculo, D M; Takayanagui, A M; Zago, M M; Mendes, M D

    1998-02-01

    This study was developed by a group of professionals from different areas (nurses and educators) concerned with health education. It proposes the use of a problem-posing model for the transformation of professional practice. The concept and functions of the model and their relationships with the educative practice of health professionals are discussed. The model of problem-posing education is presented (compared to traditional, "banking" education), and four innovative experiences of teaching-learning are reported based on this model. These experiences, carried out in areas of environmental and occupational health and patient education have shown the applicability of the problem-posing model to the practice of the health professional, allowing transformation.

  3. The Frame Constraint on Experimentally Elicited Speech Errors in Japanese

    ERIC Educational Resources Information Center

    Saito, Akie; Inoue, Tomoyoshi

    2017-01-01

    The so-called syllable position effect in speech errors has been interpreted as reflecting constraints posed by the frame structure of a given language, which is separately operating from linguistic content during speech production. The effect refers to the phenomenon that when a speech error occurs, replaced and replacing sounds tend to be in the…

  4. Spatially adapted second-order total generalized variational image deblurring model under impulse noise

    NASA Astrophysics Data System (ADS)

    Zhong, Qiu-Xiang; Wu, Chuan-Sheng; Shu, Qiao-Ling; Liu, Ryan Wen

    2018-04-01

Image deblurring under impulse noise is a typical ill-posed problem which requires regularization methods to guarantee high-quality imaging. L1-norm data-fidelity term and total variation (TV) regularizer have been combined to form a popular regularization method. However, the TV-regularized variational image deblurring model often suffers from the staircase-like artifacts leading to image quality degradation. To enhance image quality, the detail-preserving total generalized variation (TGV) was introduced to replace TV to eliminate the undesirable artifacts. The resulting nonconvex optimization problem was effectively solved using the alternating direction method of multipliers (ADMM). In addition, an automatic method for selecting spatially adapted regularization parameters was proposed to further improve deblurring performance. Our proposed image deblurring framework is able to remove blurring and impulse noise effects while maintaining the image edge details. Comprehensive experiments have been conducted to demonstrate the superior performance of our proposed method over several state-of-the-art image deblurring methods.

  5. Källén-Lehmann spectroscopy for (un)physical degrees of freedom

    NASA Astrophysics Data System (ADS)

    Dudal, David; Oliveira, Orlando; Silva, Paulo J.

    2014-01-01

We consider the problem of "measuring" the Källén-Lehmann spectral density of a particle (be it elementary or bound state) propagator by means of 4D lattice data. As the latter are obtained from operations at (Euclidean momentum squared) p²≥0, we are facing the generically ill-posed problem of converting a limited data set over the positive real axis to an integral representation, extending over the whole complex p² plane. We employ a linear regularization strategy, commonly known as the Tikhonov method with the Morozov discrepancy principle, with suitable adaptations to realistic data, e.g. with an unknown threshold. An important virtue over the (standard) maximum entropy method is the possibility to also probe unphysical spectral densities, for example, of a confined gluon. We apply our proposal here to "physical" mock spectral data as a litmus test and then to the lattice SU(3) Landau gauge gluon at zero temperature.
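The Morozov discrepancy principle the abstract invokes picks the Tikhonov weight λ so that the data misfit matches the noise level δ. A minimal sketch on an invented smoothing operator (not the lattice-propagator kernel, and with made-up grid and noise values):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    # Zeroth-order Tikhonov solution of A x ~ b with weight lam.
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def morozov_lambda(A, b, delta, lo=1e-12, hi=1e2, iters=60):
    """Bisect (on a log scale) for the lambda whose residual matches delta;
    the discrepancy ||A x_lam - b|| grows monotonically with lambda."""
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.linalg.norm(A @ tikhonov_solve(A, b, mid) - b)
        if r < delta:
            lo = mid        # under-regularized: residual below the noise level
        else:
            hi = mid        # over-regularized: residual above the noise level
    return np.sqrt(lo * hi)

# Toy ill-posed problem: a Gaussian smoothing kernel applied to a spectral peak.
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-5.0 * (t[:, None] - t[None, :]) ** 2) / n
rho_true = np.exp(-0.5 * ((t - 0.5) / 0.1) ** 2)
rng = np.random.default_rng(2)
noise = 1e-4 * rng.standard_normal(n)
b = A @ rho_true + noise
delta = np.linalg.norm(noise)                   # noise level assumed known

lam = morozov_lambda(A, b, delta)
rho = tikhonov_solve(A, b, lam)                 # regularized spectral estimate
```

By construction the returned λ leaves a residual of roughly δ: fitting the data any more closely would mean fitting the noise.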

  6. Novel Descattering Approach for Stereo Vision in Dense Suspended Scatterer Environments

    PubMed Central

    Nguyen, Chanh D. Tr.; Park, Jihyuk; Cho, Kyeong-Yong; Kim, Kyung-Soo; Kim, Soohyun

    2017-01-01

    In this paper, we propose a model-based scattering removal method for stereo vision for robot manipulation in indoor scattering media where the commonly used ranging sensors are unable to work. Stereo vision is an inherently ill-posed and challenging problem. It is even more difficult in the case of images of dense fog or dense steam scenes illuminated by active light sources. Images taken in such environments suffer attenuation of object radiance and scattering of the active light sources. To solve this problem, we first derive the imaging model for images taken in a dense scattering medium with a single active illumination close to the cameras. Based on this physical model, the non-uniform backscattering signal is efficiently removed. The descattered images are then utilized as the input images of stereo vision. The performance of the method is evaluated based on the quality of the depth map from stereo vision. We also demonstrate the effectiveness of the proposed method by carrying out the real robot manipulation task. PMID:28629139

  7. Generation of intervention strategy for a genetic regulatory network represented by a family of Markov Chains.

    PubMed

    Berlow, Noah; Pal, Ranadip

    2011-01-01

Genetic Regulatory Networks (GRNs) are frequently modeled as Markov Chains providing the transition probabilities of moving from one state of the network to another. The inverse problem of inference of the Markov Chain from noisy and limited experimental data is an ill-posed problem and often generates multiple model possibilities instead of a unique one. In this article, we address the issue of intervention in a genetic regulatory network represented by a family of Markov Chains. The purpose of intervention is to alter the steady-state probability distribution of the GRN as the steady states are considered to be representative of the phenotypes. We consider robust stationary control policies with best expected behavior. The extreme computational complexity involved in search of robust stationary control policies is mitigated by using a sequential approach to control policy generation and utilizing computationally efficient techniques for updating the stationary probability distribution of a Markov chain following a rank one perturbation.
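The core object here, the stationary distribution of a Markov chain and how an intervention (a rank-one change to the transition matrix) shifts it, can be sketched as follows. The 3-state chain and its row values are invented for illustration, not taken from the paper:

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi with pi P = pi, solved as the linear system
    (P^T - I) pi = 0 augmented with the normalization sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state chain; state 2 plays the role of an undesirable phenotype.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
pi = stationary(P)

# A control action that rewires the outgoing probabilities of one state is a
# rank-one perturbation of P: only a single row changes.
P_ctrl = P.copy()
P_ctrl[2] = [0.5, 0.4, 0.1]                  # steer state 2 back toward states 0 and 1
pi_ctrl = stationary(P_ctrl)                 # steady-state mass on state 2 drops
```

Recomputing π from scratch is fine at this size; the paper's point is that for large chains a rank-one update formula avoids the full resolve.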

  8. [Problem-posing as a nutritional education strategy with obese teenagers].

    PubMed

    Rodrigues, Erika Marafon; Boog, Maria Cristina Faber

    2006-05-01

    Obesity is a public health issue with relevant social determinants in its etiology and where interventions with teenagers encounter complex biopsychological conditions. This study evaluated intervention in nutritional education through a problem-posing approach with 22 obese teenagers, treated collectively and individually for eight months. Speech acts were collected through the use of word cards, observer recording, and tape-recording. The study adopted a qualitative methodology, and the approach involved content analysis. Problem-posing facilitated changes in eating behavior, triggering reflections on nutritional practices, family circumstances, social stigma, interaction with health professionals, and religion. Teenagers under individual care posed problems more effectively in relation to eating, while those under collective care posed problems in relation to family and psychological issues, with effective qualitative eating changes in both groups. The intervention helped teenagers understand their life history and determinants of eating behaviors, spontaneously implementing eating changes and making them aware of possibilities for maintaining the new practices and autonomously exercising their role as protagonists in their own health care.

  9. Fruit fly optimization based least square support vector regression for blind image restoration

    NASA Astrophysics Data System (ADS)

    Zhang, Jiao; Wang, Rui; Li, Junshan; Yang, Yawei

    2014-11-01

The goal of image restoration is to reconstruct the original scene from a degraded observation. It is a critical and challenging task in image processing. Classical restoration methods require explicit knowledge of the point spread function and a description of the noise as priors. However, such priors are not available in many real image-processing settings. The recovery must therefore be treated as blind image restoration. Since blind deconvolution is an ill-posed problem, many blind restoration methods need to make additional assumptions to construct restrictions. Because the PSF and noise energy differ from case to case, blurred images can vary widely. It is difficult to achieve a good balance between appropriate assumptions and high restoration quality in blind deconvolution. Recently, machine learning techniques have been applied to blind image restoration. The least square support vector regression (LSSVR) has been proven to offer strong potential in estimating and forecasting issues. Therefore, this paper proposes an LSSVR-based image restoration method. However, selecting the optimal parameters for support vector machine is essential to the training result. As a novel meta-heuristic algorithm, the fruit fly optimization algorithm (FOA) can be used to handle optimization problems, and has the advantages of fast convergence to the global optimal solution. In the proposed method, the training samples are created from a neighborhood in the degraded image to the central pixel in the original image. The mapping between the degraded image and the original image is learned by training LSSVR. The two parameters of LSSVR are optimized through FOA. The fitness function of FOA is calculated by the restoration error function. With the acquired mapping, the degraded image can be recovered. Experimental results show the proposed method obtains satisfactory restoration results. 
Compared with BP neural network regression, the SVR method, and the Lucy-Richardson algorithm, it restores images faster and performs better. Both objective and subjective restoration performances are studied in the comparison experiments.
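LSSVR training, the building block this record tunes with FOA, reduces to a single linear system in the standard Suykens formulation. A minimal 1-D regression sketch (the toy data stand in for the pixel-neighborhood training samples; the two hyperparameters that FOA would optimize are fixed by hand here):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    # Gaussian (RBF) kernel matrix between two sample sets.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    """LS-SVR training: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvr_predict(X_train, b, alpha, X_new, sigma):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy regression problem standing in for the degraded-to-original mapping.
X = np.linspace(0.0, 1.0, 40)[:, None]
y = np.sin(2.0 * np.pi * X[:, 0])
b, alpha = lssvr_fit(X, y, gamma=100.0, sigma=0.2)
y_hat = lssvr_predict(X, b, alpha, X, sigma=0.2)
```

Because training is one dense solve, evaluating a candidate (gamma, sigma) pair inside a meta-heuristic such as FOA is cheap for moderate sample counts.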

  10. Fostering Mathematical Creativity through Problem Posing and Modeling Using Dynamic Geometry: Viviani's Problem in the Classroom

    ERIC Educational Resources Information Center

    Contreras, José N.

    2013-01-01

    This paper discusses a classroom experience in which a group of prospective secondary mathematics teachers were asked to create, cooperatively (in class) and individually, problems related to Viviani's problem using a problem-posing framework. When appropriate, students used Sketchpad to explore the problem to better understand its attributes…

  11. Impact of Data Assimilation on Cost-Accuracy Tradeoff in Multi-Fidelity Models at the Example of an Infiltration Problem

    NASA Astrophysics Data System (ADS)

    Sinsbeck, Michael; Tartakovsky, Daniel

    2015-04-01

Infiltration into top soil can be described by alternative models with different degrees of fidelity: Richards equation and the Green-Ampt model. These models typically contain uncertain parameters and forcings, rendering predictions of the state variables uncertain as well. Within the probabilistic framework, solutions of these models are given in terms of their probability density functions (PDFs) that, in the presence of data, can be treated as prior distributions. The assimilation of soil moisture data into model predictions, e.g., via a Bayesian updating of solution PDFs, poses a question of model selection: Given a significant difference in computational cost, is a lower-fidelity model preferable to its higher-fidelity counterpart? We investigate this question in the context of heterogeneous porous media, whose hydraulic properties are uncertain. While low-fidelity (reduced-complexity) models introduce a model error, their moderate computational cost makes it possible to generate more realizations, which reduces the (e.g., Monte Carlo) sampling or stochastic error. The ratio between these two errors determines the model with the smallest total error. We found assimilation of measurements of a quantity of interest (the soil moisture content, in our example) to decrease the model error, increasing the probability that the predictive accuracy of a reduced-complexity model does not fall below that of its higher-fidelity counterpart.

  12. Investigating Mathematics Teachers Candidates' Knowledge about Problem Solving Strategies through Problem Posing

    ERIC Educational Resources Information Center

    Ünlü, Melihan

    2017-01-01

    The aim of the study was to determine mathematics teacher candidates' knowledge about problem solving strategies through problem posing. This qualitative research was conducted with 95 mathematics teacher candidates studying at education faculty of a public university during the first term of the 2015-2016 academic year in Turkey. Problem Posing…

  13. The Chronically Ill Child in the School.

    ERIC Educational Resources Information Center

    Sexson, Sandra; Madan-Swain, Avi

    1995-01-01

    Examines the effects of chronic illness on the school-age population. Facilitating successful functioning of chronically ill youths is a growing problem. Focuses on problems encountered by the chronically ill student who has either been diagnosed with a chronic illness or who has survived such an illness. Discusses the role of the school…

  14. The Difference between Uncertainty and Information, and Why This Matters

    NASA Astrophysics Data System (ADS)

    Nearing, G. S.

    2016-12-01

Earth science investigation and arbitration (for decision making) is very often organized around a concept of uncertainty. It seems relatively straightforward that the purpose of our science is to reduce uncertainty about how environmental systems will react and evolve under different conditions. I propose here that approaching a science of complex systems as a process of quantifying and reducing uncertainty is a mistake, and specifically a mistake that is rooted in certain rather historic logical errors. Instead I propose that we should be asking questions about information. I argue here that an information-based perspective facilitates almost trivial answers to environmental science questions that are either difficult or theoretically impossible to answer when posed as questions about uncertainty. In particular, I propose that an information-centric perspective leads to: Coherent and non-subjective hypothesis tests for complex system models. Process-level diagnostics for complex systems models. Methods for building complex systems models that allow for inductive inference without the need for a priori specification of likelihood functions or ad hoc error metrics. Asymptotically correct quantification of epistemic uncertainty. To put this in slightly more basic terms, I propose that an information-theoretic philosophy of science has the potential to resolve certain important aspects of the Demarcation Problem and the Duhem-Quine Problem, and that Hydrology and other Earth Systems Sciences can immediately capitalize on this to address some of our most difficult and persistent problems.

  15. Sleep Problems in Children and Adolescents with Common Medical Conditions

    PubMed Central

    Lewandowski, Amy S.; Ward, Teresa M.; Palermo, Tonya M.

    2011-01-01

Synopsis: Sleep is critically important to children’s health and well-being. Untreated sleep disturbances and sleep disorders pose significant adverse daytime consequences and place children at considerable risk for poor health outcomes. Sleep disturbances occur at a greater frequency in children with acute and chronic medical conditions compared to otherwise healthy peers. Sleep disturbances in medically ill children can be associated with sleep disorders (e.g., sleep disordered breathing, restless leg syndrome), co-morbid with acute and chronic conditions (e.g., asthma, arthritis, cancer), or secondary to underlying disease-related mechanisms (e.g., airway restriction, inflammation), treatment regimens, or hospitalization. Clinical management should include a multidisciplinary approach with particular emphasis on routine, regular sleep assessments and prevention of daytime consequences and promotion of healthy sleep habits and health outcomes. PMID:21600350

  16. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed inverse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles, such as conservation of information and smoothness, are proposed to set statistical regularization and other hyperparameters. ME provides an alternative to hierarchical Bayes methods.

  17. DLTPulseGenerator: A library for the simulation of lifetime spectra based on detector-output pulses

    NASA Astrophysics Data System (ADS)

    Petschke, Danny; Staab, Torsten E. M.

    2018-01-01

    The quantitative analysis of lifetime spectra relevant in both life and materials sciences presents an ill-posed inverse problem and, hence, imposes stringent requirements on the hardware specifications and the analysis algorithms. Here we present DLTPulseGenerator, a library written in native C++ 11, which provides a simulation of lifetime spectra according to the measurement setup. The simulation is based on pairs of non-TTL detector output-pulses. Those pulses require constant fraction discrimination (CFD) to determine the exact timing signal and, thus, to calculate the time difference, i.e. the lifetime. To verify the functionality, simulation results were compared to experimentally obtained data using Positron Annihilation Lifetime Spectroscopy (PALS) on pure tin.
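The timing step the abstract describes, constant fraction discrimination of a pair of detector pulses to extract a lifetime, can be sketched numerically. This is not the library's C++ API; the pulse shape, time constants, and the 0.4 ns lifetime below are invented for illustration:

```python
import numpy as np

def cfd_crossing(t, v, fraction=0.3):
    """Time at which the rising edge crosses `fraction` of the pulse amplitude,
    refined by linear interpolation between the two bracketing samples."""
    level = fraction * v.max()
    i = np.argmax(v >= level)                   # first sample at or above threshold
    t0, t1, v0, v1 = t[i - 1], t[i], v[i - 1], v[i]
    return t0 + (level - v0) * (t1 - t0) / (v1 - v0)

def pulse(t, t0, rise=2.0, fall=20.0):
    """Simple bi-exponential detector pulse starting at t0 (times in ns)."""
    s = np.clip(t - t0, 0.0, None)
    return (np.exp(-s / fall) - np.exp(-s / rise)) * (t > t0)

t = np.linspace(0.0, 100.0, 5000)               # ns sampling grid
start = pulse(t, 10.0)                          # "start" detector pulse
stop = pulse(t, 10.4)                           # "stop" pulse, 0.4 ns later
dt = cfd_crossing(t, stop) - cfd_crossing(t, start)   # recovered lifetime, ~0.4 ns
```

Triggering at a fixed fraction of the amplitude, rather than a fixed voltage, makes the timing insensitive to pulse-height variations, which is the point of CFD.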

  18. Topographic analysis of individual activation patterns in medial frontal cortex in schizophrenia

    PubMed Central

    Stern, Emily R.; Welsh, Robert C.; Fitzgerald, Kate D.; Taylor, Stephan F.

    2009-01-01

    Individual variability in the location of neural activations poses a unique problem for neuroimaging studies employing group averaging techniques to investigate the neural bases of cognitive and emotional functions. This may be especially challenging for studies examining patient groups, which often have limited sample sizes and increased intersubject variability. In particular, medial frontal cortex (MFC) dysfunction is thought to underlie performance monitoring dysfunction among patients with schizophrenia, yet previous studies using group averaging to compare schizophrenic patients to controls have yielded conflicting results. To examine individual activations in MFC associated with two aspects of performance monitoring, interference and error processing, functional magnetic resonance imaging (fMRI) data were acquired while 17 patients with schizophrenia and 21 healthy controls performed an event-related version of the multi-source interference task. Comparisons of averaged data revealed few differences between the groups. By contrast, topographic analysis of individual activations for errors showed that control subjects exhibited activations spanning across both posterior and anterior regions of MFC while patients primarily activated posterior MFC, possibly reflecting an impaired emotional response to errors in schizophrenia. This discrepancy between topographic and group-averaged results may be due to the significant dispersion among individual activations, particularly among healthy controls, highlighting the importance of considering intersubject variability when interpreting the medial frontal response to error commission. PMID:18819107

  19. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones

    PubMed Central

    Wang, Zhen; Jin, Bingwen; Geng, Weidong

    2017-01-01

    The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. 
We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
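
    The frame-chain step described above (a pose estimated in the camera frame, mapped into the Earth frame with the IMU-derived camera orientation, then read off as downtilt and azimuth) can be sketched as follows. The rotation values, the ENU Earth frame, and the choice of the antenna z-axis as boresight are illustrative assumptions, not the paper's exact conventions.

```python
# Hedged sketch of transforming an antenna pose from the camera frame to
# the Earth frame and extracting downtilt/azimuth. All inputs are synthetic.
import numpy as np

def rot_z(deg):  # rotation about the up (z) axis
    a = np.radians(deg)
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(deg):  # rotation about the x (east) axis
    a = np.radians(deg)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, np.cos(a), -np.sin(a)],
                     [0.0, np.sin(a),  np.cos(a)]])

def downtilt_azimuth(R_earth_ant):
    """Downtilt (deg below horizontal) and azimuth (deg east of north) of the
    antenna boresight, assumed here to be the antenna-frame z axis."""
    b = R_earth_ant @ np.array([0.0, 0.0, 1.0])   # boresight in ENU coordinates
    downtilt = np.degrees(np.arcsin(-b[2]))       # b[2] is the 'up' component
    azimuth = np.degrees(np.arctan2(b[0], b[1])) % 360.0  # east over north
    return downtilt, azimuth

R_cam_ant = rot_x(-95.0)     # pose from multi-view images (synthetic stand-in)
R_earth_cam = rot_z(40.0)    # camera orientation from the IMU (synthetic)
R_earth_ant = R_earth_cam @ R_cam_ant   # compose the frame chain

tilt, az = downtilt_azimuth(R_earth_ant)
print(round(tilt, 2), round(az, 2))   # prints: 5.0 320.0
```

    The design point mirrored here is that the two angles of interest are treated separately: an error model defined on downtilt and azimuth directly, rather than a single rotational error, matches how the result is actually consumed.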

  20. Metadata and annotations for multi-scale electrophysiological data.

    PubMed

    Bower, Mark R; Stead, Matt; Brinkmann, Benjamin H; Dufendach, Kevin; Worrell, Gregory A

    2009-01-01

    The increasing use of high-frequency (kHz), long-duration (days) intracranial monitoring from multiple electrodes during pre-surgical evaluation for epilepsy produces large amounts of data that are challenging to store and maintain. Descriptive metadata and clinical annotations of these large data sets also pose challenges to simple, often manual, methods of data analysis. The problems of reliable communication of metadata and annotations between programs, the maintenance of the meanings within that information over long time periods, and the flexibility to re-sort data for analysis place differing demands on data structures and algorithms. Solutions to these individual problem domains (communication, storage and analysis) can be configured to provide easy translation and clarity across the domains. The Multi-scale Annotation Format (MAF) provides an integrated metadata and annotation environment that maximizes code reuse, minimizes error probability and encourages future changes by reducing the tendency to over-fit information technology solutions to current problems. An example of a graphical utility for generating and evaluating metadata and annotations for "big data" files is presented.

  1. Adaptive Leadership Framework for Chronic Illness

    PubMed Central

    Anderson, Ruth A.; Bailey, Donald E.; Wu, Bei; Corazzini, Kirsten; McConnell, Eleanor S.; Thygeson, N. Marcus; Docherty, Sharron L.

    2015-01-01

    We propose the Adaptive Leadership Framework for Chronic Illness as a novel framework for conceptualizing, studying, and providing care. This framework is an application of the Adaptive Leadership Framework developed by Heifetz and colleagues for business. Our framework views health care as a complex adaptive system and addresses the intersection at which people with chronic illness interface with the care system. We shift the focus from symptoms alone to symptoms together with the challenges they pose for patients/families. We describe how providers and patients/families might collaborate to create shared meaning of symptoms and challenges to coproduce appropriate approaches to care. PMID:25647829

  2. The 2-D magnetotelluric inverse problem solved with optimization

    NASA Astrophysics Data System (ADS)

    van Beusekom, Ashley E.; Parker, Robert L.; Bank, Randolph E.; Gill, Philip E.; Constable, Steven

    2011-02-01

    The practical 2-D magnetotelluric inverse problem seeks to determine the shallow-Earth conductivity structure using finite and uncertain data collected on the ground surface. We present an approach based on using PLTMG (Piecewise Linear Triangular MultiGrid), a special-purpose code for optimization with second-order partial differential equation (PDE) constraints. At each frequency, the electromagnetic field and conductivity are treated as unknowns in an optimization problem in which the data misfit is minimized subject to constraints that include Maxwell's equations and the boundary conditions. Within this framework it is straightforward to accommodate upper and lower bounds or other conditions on the conductivity. In addition, as the underlying inverse problem is ill-posed, constraints may be used to apply various kinds of regularization. We discuss some of the advantages and difficulties associated with using PDE-constrained optimization as the basis for solving large-scale nonlinear geophysical inverse problems. Combined transverse electric and transverse magnetic complex admittances from the COPROD2 data are inverted. First, we invert penalizing size and roughness, giving solutions that are similar to those found previously. In a second example, conventional regularization is replaced by a technique that imposes upper and lower bounds on the model. In both examples the data misfit is better than that obtained previously, without any increase in model complexity.
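
    As a generic numerical illustration of why such ill-posed inverse problems demand regularization (a textbook sketch using the Hilbert matrix, not PLTMG or the magnetotelluric problem itself): a tiny Tikhonov penalty keeps the solve stable, while the unregularized solution is destroyed by noise.

```python
# Minimal Tikhonov illustration: minimize ||G m - d||^2 + lam ||m||^2 for a
# severely ill-conditioned linear system. Everything here is a stand-in.
import numpy as np

n = 12
idx = np.arange(n)
H = 1.0 / (idx[:, None] + idx[None, :] + 1.0)   # Hilbert matrix, cond ~ 1e16
m_true = np.ones(n)
rng = np.random.default_rng(1)
d = H @ m_true + 1e-8 * rng.standard_normal(n)  # tiny measurement noise

lam = 1e-8
m_tik = np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ d)  # regularized
m_naive = np.linalg.solve(H, d)                              # unregularized

err_tik = np.linalg.norm(m_tik - m_true)
err_naive = np.linalg.norm(m_naive - m_true)
print(err_tik < err_naive)   # regularization wins by orders of magnitude
```

    The penalty damps the components of the solution associated with tiny singular values, where the noise would otherwise be amplified without bound; penalizing roughness instead of size, or imposing bounds as in the abstract, plays the same stabilizing role.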

  3. A Matlab toolkit for three-dimensional electrical impedance tomography: a contribution to the Electrical Impedance and Diffuse Optical Reconstruction Software project

    NASA Astrophysics Data System (ADS)

    Polydorides, Nick; Lionheart, William R. B.

    2002-12-01

    The objective of the Electrical Impedance and Diffuse Optical Reconstruction Software project is to develop freely available software that can be used to reconstruct electrical or optical material properties from boundary measurements. Nonlinear and ill-posed problems such as electrical impedance and optical tomography are typically approached using a finite element model for the forward calculations and a regularized nonlinear solver for obtaining a unique and stable inverse solution. Most of the commercially available finite element programs are unsuitable for solving these problems because of their conventional, inefficient way of calculating the Jacobian and their lack of accurate electrode modelling. A complete package for the two-dimensional EIT problem was officially released by Vauhkonen et al in the second half of 2000. However, most industrial and medical electrical imaging problems are fundamentally three-dimensional. To assist this development, we have developed and released a free toolkit of Matlab routines which can be employed to solve the forward and inverse EIT problems in three dimensions, based on the complete electrode model, along with some basic visualization utilities, in the hope that it will stimulate further development. We also include a derivation of the formula for the Jacobian (or sensitivity) matrix based on the complete electrode model.
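
    The "regularized nonlinear solver" in such toolkits typically iterates damped Gauss-Newton steps of the form delta = (J^T J + alpha R)^(-1) J^T r, with J the Jacobian of the forward map. Here is a minimal sketch with a made-up smooth forward map standing in for the finite element solve; nothing below is the complete electrode model or the toolkit's actual code.

```python
# Toy sketch of regularized Gauss-Newton iteration for a nonlinear inverse
# problem. The forward map is a hypothetical stand-in for an FEM solve.
import numpy as np

def forward(sigma, A):
    """Hypothetical smooth nonlinear forward map (stand-in for FEM)."""
    return np.tanh(A @ sigma)

def jacobian(sigma, A):
    """Analytic Jacobian of the toy forward map."""
    return (1.0 - np.tanh(A @ sigma) ** 2)[:, None] * A

rng = np.random.default_rng(2)
n_meas, n_param = 30, 10
A = 0.3 * rng.standard_normal((n_meas, n_param))
sigma_true = rng.uniform(0.5, 1.5, n_param)
d = forward(sigma_true, A)                 # noiseless synthetic data

sigma = np.ones(n_param)                   # homogeneous starting guess
alpha, R = 1e-3, np.eye(n_param)           # Tikhonov term stabilizes each step
misfit0 = np.linalg.norm(d - forward(sigma, A))
for _ in range(8):                         # damped Gauss-Newton iterations
    r = d - forward(sigma, A)
    J = jacobian(sigma, A)
    sigma = sigma + np.linalg.solve(J.T @ J + alpha * R, J.T @ r)

misfit = np.linalg.norm(d - forward(sigma, A))
print(misfit < misfit0)
```

    The expensive step in real EIT is forming J efficiently, which is why the abstract singles out Jacobian calculation and derives it from the complete electrode model.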

  4. Identification of the population density of a species model with nonlocal diffusion and nonlinear reaction

    NASA Astrophysics Data System (ADS)

    Tuan, Nguyen Huy; Van Au, Vo; Khoa, Vo Anh; Lesnic, Daniel

    2017-05-01

    The identification of the population density of a logistic equation backwards in time associated with nonlocal diffusion and nonlinear reaction, motivated by the fields of biology and ecology, is investigated. The diffusion depends on an integral average of the population density whilst the reaction term is a global or local Lipschitz function of the population density. After discussing the ill-posedness of the problem, we apply the quasi-reversibility method to construct stable approximation problems. It is shown that the regularized solutions stemming from this method not only depend continuously on the final data, but also converge strongly to the exact solution in the L2-norm. New error estimates together with stability results are obtained. Furthermore, numerical examples are provided to illustrate the theoretical results.

  5. Technology utilization to prevent medication errors.

    PubMed

    Forni, Allison; Chu, Hanh T; Fanikos, John

    2010-01-01

    Medication errors have been increasingly recognized as a major cause of iatrogenic illness, and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of illness, the need for high-risk medications with a narrow therapeutic index, and the frequent use of intravenous infusions. Health information technology has been identified as a method to reduce medication errors as well as to improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process by improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions, and providing a means for standardization of practice. Electronic surveillance, reminders and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target prevention of errors in medication dispensing and administration where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and the achievement of a safe medication use process.

  6. The Effect of Random Error on Diagnostic Accuracy Illustrated with the Anthropometric Diagnosis of Malnutrition

    PubMed Central

    2016-01-01

    Background It is often thought that random measurement error has a minor effect upon the results of an epidemiological survey. Theoretically, errors of measurement should always increase the spread of a distribution. Defining an illness by having a measurement outside an established healthy range will lead to an inflated prevalence of that condition if there are measurement errors. Methods and results A Monte Carlo simulation of the anthropometric assessment of children with malnutrition was conducted. Random errors of increasing magnitude were imposed upon the populations; the standard deviation increased with each error, and this increase grew exponentially with the magnitude of the error. The potential magnitude of the resulting error in reported prevalence of malnutrition was compared with published international data and found to be large enough to make a number of surveys, and the numerous reports and analyses that used these data, unreliable. Conclusions The effect of random error in public health surveys, and in the data upon which diagnostic cut-off points are derived to define “health”, has been underestimated. Even quite modest random errors can more than double the reported prevalence of conditions such as malnutrition. Increasing sample size does not address this problem, and may even result in less accurate estimates. More attention needs to be paid to the selection, calibration and maintenance of instruments; measurer selection, training and supervision; routine estimation of the likely magnitude of errors using standardization tests; the use of the statistical likelihood of error to exclude data from analysis; and full reporting of these procedures, in order to judge the reliability of survey reports. PMID:28030627
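
    The central effect is easy to reproduce with a few lines of simulation. In this hedged sketch (the cut-off z < -2 mirrors common anthropometric practice; the noise level and sample size are arbitrary choices, not the paper's), zero-mean measurement error alone inflates the apparent prevalence of "abnormal" values.

```python
# Minimal Monte Carlo sketch: adding zero-mean random measurement error to a
# healthy population inflates the apparent prevalence of values below a
# diagnostic cut-off (here, z < -2). All parameters are illustrative.
import random

random.seed(42)
N = 200_000
true_z = [random.gauss(0.0, 1.0) for _ in range(N)]   # true z-scores

def prevalence_below_cutoff(measurement_sd):
    """Fraction of the SAME population measured below z = -2,
    after adding zero-mean Gaussian measurement error."""
    measured = [z + random.gauss(0.0, measurement_sd) for z in true_z]
    return sum(m < -2.0 for m in measured) / N

p_exact = prevalence_below_cutoff(0.0)   # ~2.3% by construction
p_noisy = prevalence_below_cutoff(0.8)   # same children, noisy measurement
print(p_noisy > 1.5 * p_exact)           # prevalence substantially inflated
```

    No child got worse between the two runs; only the measurement did. Note also that increasing N tightens the estimate of the wrong (inflated) prevalence, which is exactly why larger samples do not fix the problem.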

  7. [Errors in wound management].

    PubMed

    Filipović, Marinko; Novinscak, Tomislav

    2014-10-01

    Chronic ulcers have adverse effects on the patient quality of life and productivity, thus posing financial burden upon the healthcare system. Chronic wound healing is a complex process resulting from the interaction of the patient general health status, wound related factors, medical personnel skill and competence, and therapy related products. In clinical practice, considerable improvement has been made in the treatment of chronic wounds, which is evident in the reduced rate of the severe forms of chronic wounds in outpatient clinics. However, in spite of all the modern approaches, efforts invested by medical personnel and agents available for wound care, numerous problems are still encountered in daily practice. Most frequently, the problems arise from inappropriate education, of young personnel in particular, absence of multidisciplinary approach, and inadequate communication among the personnel directly involved in wound treatment. To perceive them more clearly, the potential problems or complications in the management of chronic wounds can be classified into the following groups: problems mostly related to the use of wound coverage and other etiology related specificities of wound treatment; problems related to incompatibility of the agents used in wound treatment; and problems arising from failure to ensure aseptic and antiseptic performance conditions.

  8. Folk concepts of mental disorders among Chinese-Australian patients and their caregivers.

    PubMed

    Hsiao, Fei-Hsiu; Klimidis, Steven; Minas, Harry I; Tan, Eng S

    2006-07-01

    This paper reports a study of (a) popular conceptions of mental illness throughout history, (b) how current social and cultural knowledge about mental illness influences Chinese-Australian patients' and caregivers' understanding of mental illness and the consequences of this for explaining and labelling patients' problems. According to traditional Chinese cultural knowledge about health and illness, Chinese people believe that psychotic illness is the only type of mental illness, and that non-psychotic illness is a physical illness. Regarding patients' problems as not being due to mental illness may result in delaying use of Western mental health services. Data collection took place in 2001. Twenty-eight Chinese-Australian patients with mental illness and their caregivers were interviewed at home, drawing on Kleinman's explanatory model and studies of cultural transmission. Interviews were tape-recorded and transcribed, and analysed for plots and themes. Chinese-Australians combined traditional knowledge with Western medical knowledge to develop their own labels for various kinds of mental disorders, including 'mental illness', 'physical illness', 'normal problems of living' and 'psychological problems'. As they learnt more about Western conceptions of psychology and psychiatry, their understanding of some disorders changed. What was previously ascribed to non-mental disorders was often re-labelled as 'mental illness' or 'psychological problems'. Educational programmes aimed at introducing Chinese immigrants to counselling and other psychiatric services could be made more effective if designers gave greater consideration to Chinese understanding of mental illness.

  9. A Problem-Solving Conceptual Framework and Its Implications in Designing Problem-Posing Tasks

    ERIC Educational Resources Information Center

    Singer, Florence Mihaela; Voica, Cristian

    2013-01-01

    The links between the mathematical and cognitive models that interact during problem solving are explored with the purpose of developing a reference framework for designing problem-posing tasks. When the process of solving is a successful one, a solver successively changes his/her cognitive stances related to the problem via transformations that…

  10. Opportunities to Pose Problems Using Digital Technology in Problem Solving Environments

    ERIC Educational Resources Information Center

    Aguilar-Magallón, Daniel Aurelio; Fernández, Willliam Enrique Poveda

    2017-01-01

    This article reports and analyzes different types of problems that nine students in a Master's Program in Mathematics Education posed during a course on problem solving. What opportunities (affordances) can a dynamic geometry system (GeoGebra) offer to allow in-service and in-training teachers to formulate and solve problems, and what type of…

  11. Do everyday problems of people with chronic illness interfere with their disease management?

    PubMed

    van Houtum, Lieke; Rijken, Mieke; Groenewegen, Peter

    2015-10-01

    Being chronically ill is a continuous process of balancing the demands of the illness and the demands of everyday life. Understanding how everyday life affects self-management might help to provide better professional support. However, little attention has been paid to the influence of everyday life on self-management. The purpose of this study is to examine to what extent problems in everyday life interfere with the self-management behaviour of people with chronic illness, i.e. their ability to manage their illness. To estimate the effects of having everyday problems on self-management, cross-sectional linear regression analyses with propensity score matching were conducted. Data was used from 1731 patients with chronic disease(s) who participated in a nationwide Dutch panel-study. One third of people with chronic illness encounter basic (e.g. financial, housing, employment) or social (e.g. partner, children, sexual or leisure) problems in their daily life. Younger people, people with poor health and people with physical limitations are more likely to have everyday problems. Experiencing basic problems is related to less active coping behaviour, while experiencing social problems is related to lower levels of symptom management and less active coping behaviour. The extent of everyday problems interfering with self-management of people with chronic illness depends on the type of everyday problems encountered, as well as on the type of self-management activities at stake. Healthcare providers should pay attention to the life context of people with chronic illness during consultations, as patients' ability to manage their illness is related to it.

  12. Programmable Infusion Pumps in ICUs: An Analysis of Corresponding Adverse Drug Events

    PubMed Central

    Bower, Anthony G.; Paddock, Susan M.; Hilborne, Lee H.; Wallace, Peggy; Rothschild, Jeffrey M.; Griffin, Anne; Fairbanks, Rollin J.; Carlson, Beverly; Panzer, Robert J.; Brook, Robert H.

    2007-01-01

    Background Patients in intensive care units (ICUs) frequently experience adverse drug events involving intravenous medications (IV-ADEs), which are often preventable. Objectives To determine how frequently preventable IV-ADEs in ICUs match the safety features of a programmable infusion pump with safety software (“smart pump”) and to suggest potential improvements in smart-pump design. Design Using retrospective medical-record review, we examined preventable IV-ADEs in ICUs before and after 2 hospitals replaced conventional pumps with smart pumps. The smart pumps alerted users when programmed to deliver duplicate infusions or continuous-infusion doses outside hospital-defined ranges. Participants 4,604 critically ill adults at 1 academic and 1 nonacademic hospital. Measurements Preventable IV-ADEs matching smart-pump features and errors involved in preventable IV-ADEs. Results Of 100 preventable IV-ADEs identified, 4 involved errors matching smart-pump features. Two occurred before and 2 after smart-pump implementation. Overall, 29% of preventable IV-ADEs involved overdoses; 37%, failures to monitor for potential problems; and 45%, failures to intervene when problems appeared. Error descriptions suggested that expanding smart pumps’ capabilities might enable them to prevent more IV-ADEs. Conclusion The smart pumps we evaluated are unlikely to reduce preventable IV-ADEs in ICUs because they address only 4% of them. Expanding smart-pump capabilities might prevent more IV-ADEs. PMID:18095043

  13. Outcomes and genotype-phenotype correlations in 52 individuals with VLCAD deficiency diagnosed by NBS and enrolled in the IBEM-IS database

    PubMed Central

    Pena, Loren D.M.; van Calcar, Sandra C.; Hansen, Joyanna; Edick, Mathew J.; Vockley, Cate Walsh; Leslie, Nancy; Cameron, Cynthia; Mohsen, Al-Walid; Berry, Susan A; Arnold, Georgianne L; Vockley, Jerry

    2016-01-01

    Very long chain acyl-CoA dehydrogenase (VLCAD) deficiency can present at various ages from the neonatal period to adulthood, and poses the greatest risk of complications during intercurrent illness or after prolonged fasting. Early diagnosis, treatment, and surveillance can reduce mortality; hence, the disorder is included in the newborn Recommended Uniform Screening Panel (RUSP) in the United States. The Inborn Errors of Metabolism Information System (IBEM-IS) was established in 2007 to collect longitudinal information on individuals with inborn errors of metabolism included in newborn screening (NBS) programs, including VLCAD deficiency. We retrospectively analyzed early outcomes for individuals who were diagnosed with VLCAD deficiency by NBS and describe initial presentations, diagnosis, clinical outcomes and treatment in a cohort of 52 individuals ages 1–18 years. Maternal prenatal symptoms were not reported, and most newborns remained asymptomatic. Cardiomyopathy was uncommon in the cohort, diagnosed in 2/52 cases. Elevations in creatine kinase were a common finding, and usually first occurred during the toddler period (1–3 years of age). Diagnostic evaluations required several testing modalities, most commonly plasma acylcarnitine profiles and molecular testing. Functional testing, including fibroblast acylcarnitine profiling and white blood cell or fibroblast enzyme assay, is a useful diagnostic adjunct if uncharacterized mutations are identified. PMID:27209629

  14. Refraction-compensated motion tracking of unrestrained small animals in positron emission tomography.

    PubMed

    Kyme, Andre; Meikle, Steven; Baldock, Clive; Fulton, Roger

    2012-08-01

    Motion-compensated radiotracer imaging of fully conscious rodents represents an important paradigm shift for preclinical investigations. In such studies, if motion tracking is performed through a transparent enclosure containing the awake animal, light refraction at the interface will introduce errors in stereo pose estimation. We have performed a thorough investigation of how this impacts the accuracy of pose estimates and the resulting motion correction, and developed an efficient method to predict and correct for refraction-based error. The refraction model underlying this study was validated using a state-of-the-art motion tracking system. Refraction-based error was shown to be dependent on tracking marker size, working distance, and interface thickness and tilt. Correcting for refraction error improved the spatial resolution and quantitative accuracy of motion-corrected positron emission tomography images. Since the methods are general, they may also be useful in other contexts where data are corrupted by refraction effects. Crown Copyright © 2012. Published by Elsevier B.V. All rights reserved.
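
    The size of the refraction effect the paper corrects can be estimated with the standard plane-parallel-plate displacement formula; the wall thickness and refractive index below are illustrative values, not those of the actual animal enclosure.

```python
# Back-of-envelope sketch: a ray crossing a flat transparent wall of
# thickness t is laterally displaced, biasing triangulated marker positions.
# d = t * sin(theta_i - theta_r) / cos(theta_r), with Snell's law
# sin(theta_r) = sin(theta_i) / n. Wall parameters are illustrative.
import math

def lateral_shift_mm(thickness_mm, incidence_deg, n_wall=1.49):  # acrylic ~1.49
    ti = math.radians(incidence_deg)
    tr = math.asin(math.sin(ti) / n_wall)   # refraction angle via Snell's law
    return thickness_mm * math.sin(ti - tr) / math.cos(tr)

# The displacement grows with both viewing angle (tilt) and wall thickness,
# matching the dependencies reported in the abstract.
for angle in (10, 30, 50):
    print(f"{angle:2d} deg -> {lateral_shift_mm(6.0, angle):.3f} mm shift")
```

    Shifts of this order are comparable to, or larger than, the spatial resolution of preclinical PET, which is why uncorrected refraction degrades the motion-corrected images.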

  15. Algorithms and Array Design Criteria for Robust Imaging in Interferometry

    NASA Astrophysics Data System (ADS)

    Kurien, Binoy George

    Optical interferometry is a technique for obtaining high-resolution imagery of a distant target by interfering light from multiple telescopes. Image restoration from interferometric measurements poses a unique set of challenges. The first challenge is that the measurement set provides only a sparse sampling of the object's Fourier transform, and hence image formation from these measurements is an inherently ill-posed inverse problem. Secondly, atmospheric turbulence causes severe distortion of the phase of the Fourier samples. We develop array design conditions for unique Fourier phase recovery, as well as a comprehensive algorithmic framework based on the notion of redundant-spaced-calibration (RSC), which together achieve reliable image reconstruction in spite of these challenges. Within this framework, we see that classical interferometric observables such as the bispectrum and closure phase can limit sensitivity, and that generalized notions of these observables can improve both theoretical and empirical performance. Our framework leverages techniques from lattice theory to resolve integer phase ambiguities in the interferometric phase measurements, and from graph theory to select a reliable set of generalized observables. We analyze the expected shot-noise-limited performance of our algorithm for both pairwise and Fizeau interferometric architectures and corroborate this analysis with simulation results. We apply techniques from the field of compressed sensing to perform image reconstruction from the estimates of the object's Fourier coefficients. The end result is a comprehensive strategy to achieve well-posed and easily predictable reconstruction performance in optical interferometry.
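
    The closure-phase observable mentioned above works because per-telescope atmospheric phase errors cancel when visibility phases are summed around a triangle of baselines. A tiny numerical check, with synthetic phases rather than real interferometric data:

```python
# Closure phase demo: the measured phase on baseline (i, j) is corrupted by
# the per-telescope piston difference atm[i] - atm[j]; summing around the
# loop 1-2-3-1 cancels these terms exactly. All phases are synthetic.
import math
import random

random.seed(7)
# True (atmosphere-free) visibility phases on baselines 1-2, 2-3, 3-1.
true_phase = {"12": 0.7, "23": -1.1, "31": 0.9}
# Unknown per-telescope atmospheric piston phases.
atm = {1: random.uniform(-math.pi, math.pi),
       2: random.uniform(-math.pi, math.pi),
       3: random.uniform(-math.pi, math.pi)}

def measured(ij, i, j):
    """Measured baseline phase = true phase + telescope phase difference."""
    return true_phase[ij] + atm[i] - atm[j]

closure_measured = (measured("12", 1, 2) + measured("23", 2, 3)
                    + measured("31", 3, 1))
closure_true = sum(true_phase.values())
# (a1 - a2) + (a2 - a3) + (a3 - a1) = 0, so the closure phase is unchanged.
print(abs(closure_measured - closure_true) < 1e-12)
```

    The price of this robustness is information loss: only phase combinations that close around loops survive, which is the sensitivity limitation the abstract's generalized observables aim to relax.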

  16. The Structure of Ill-Structured (and Well-Structured) Problems Revisited

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2016-01-01

    In his 1973 article "The Structure of ill structured problems", Herbert Simon proposed that solving ill-structured problems could be modeled within the same information-processing framework developed for solving well-structured problems. This claim is reexamined within the context of over 40 years of subsequent research and theoretical…

  17. Performance of subjects with and without severe mental illness on a clinical test of problem solving.

    PubMed

    Marshall, R C; McGurk, S R; Karow, C M; Kairy, T J; Flashman, L A

    2006-06-01

    Severe mental illness is associated with impairments in executive functions, such as conceptual reasoning, planning, and strategic thinking, all of which impact problem solving. The present study examined the utility of a novel assessment tool for problem solving, the Rapid Assessment of Problem Solving Test (RAPS), in persons with severe mental illness. Subjects were 47 outpatients with severe mental illness and an equal number of healthy controls matched for age and gender. Results confirmed all hypotheses with respect to how subjects with severe mental illness would perform on the RAPS. Specifically, the severely mentally ill subjects (1) solved fewer problems on the RAPS, (2) when they did solve problems on the test, did so far less efficiently than their healthy counterparts, and (3) differed markedly from controls in the types of questions asked on the RAPS. The healthy control subjects tended to take a systematic, organized, but not always optimal approach to solving problems on the RAPS. The subjects with severe mental illness used some of the problem solving strategies of the healthy controls, but their performance was less consistent and tended to deteriorate when the complexity of the problem solving task increased. This was reflected by a high degree of guessing in lieu of asking constraint questions, particularly if a category-limited question was insufficient to continue the problem solving effort.

  18. Application of identification techniques to remote manipulator system flight data

    NASA Technical Reports Server (NTRS)

    Shepard, G. D.; Lepanto, J. A.; Metzinger, R. W.; Fogel, E.

    1983-01-01

    This paper addresses the application of identification techniques to flight data from the Space Shuttle Remote Manipulator System (RMS). A description of the remote manipulator, including structural and control system characteristics, sensors, and actuators is given. A brief overview of system identification procedures is presented, and the practical aspects of implementing system identification algorithms are discussed. In particular, the problems posed by the data sampling rate, numerical error, and system nonlinearities are considered. Simulation predictions of damping, frequency, and system order are compared with values identified from flight data to support an evaluation of RMS structural and control system models. Finally, conclusions are drawn regarding the application of identification techniques to flight data obtained from a flexible space structure.

  19. True and false concerns about neuroenhancement: a response to 'Neuroenhancers, addiction and research ethics', by D M Shaw.

    PubMed

    Heinz, Andreas; Kipke, Roland; Müller, Sabine; Wiesing, Urban

    2014-04-01

    In his critical comment on our paper in this journal, Shaw argues that 'false assumptions' which we have criticised are in fact correct ('Neuroenhancers, addiction and research ethics'). He suggests that the risk of addiction to neuroenhancers may not be relevant, and that safety and research in regard to neuroenhancement do not pose unique ethical problems. Here, we demonstrate that Shaw ignores key empirical research results, trivialises addiction, commits logical errors, confuses addictions and passions, argues on a speculative basis, and fails to distinguish the specific ethical conditions of clinical research from those relevant for research in healthy volunteers. Therefore, Shaw's criticism cannot convince.

  20. The challenge of gun control for mental health advocates.

    PubMed

    Pandya, Anand

    2013-09-01

    Mass shootings, such as the 2012 Newtown massacre, have repeatedly led to political discourse about limiting access to guns for individuals with serious mental illness. Although the political climate after such tragic events poses a considerable challenge to mental health advocates who wish to minimize unsympathetic portrayals of those with mental illness, such media attention may be a rare opportunity to focus attention on risks of victimization of those with serious mental illness and barriers to obtaining psychiatric care. Current federal gun control laws may discourage individuals from seeking psychiatric treatment and describe individuals with mental illness using anachronistic, imprecise, and gratuitously stigmatizing language. This article lays out potential talking points that may be useful after future gun violence.

  1. Phase of Illness in palliative care: Cross-sectional analysis of clinical data from community, hospital and hospice patients.

    PubMed

    Mather, Harriet; Guo, Ping; Firth, Alice; Davies, Joanna M; Sykes, Nigel; Landon, Alison; Murtagh, Fliss Em

    2018-02-01

    Phase of Illness describes stages of advanced illness according to the care needs of the individual and family and the suitability of the care plan. There is limited evidence on its association with other measures of symptoms, and health-related needs, in palliative care. The aims of the study are as follows. (1) Describe function, pain, other physical problems, psycho-spiritual problems and family and carer support needs by Phase of Illness. (2) Consider the strength of associations between these measures and Phase of Illness. Secondary analysis of patient-level data; a total of 1317 patients in three settings. Function was measured using the Australia-modified Karnofsky Performance Scale. Pain, other physical problems, psycho-spiritual problems and family and carer support needs were measured using items on the Palliative Care Problem Severity Scale. Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale items varied significantly by Phase of Illness. Mean function was highest in the stable phase (65.9, 95% confidence interval = 63.4-68.3) and lowest in the dying phase (16.6, 95% confidence interval = 15.3-17.8). Mean pain was highest in the unstable phase (1.43, 95% confidence interval = 1.36-1.51). Multinomial regression: psycho-spiritual problems were not associated with Phase of Illness (χ² = 2.940, df = 3, p = 0.401). Family and carer support needs were greater in the deteriorating phase than in the unstable phase (odds ratio (deteriorating vs unstable) = 1.23, 95% confidence interval = 1.01-1.49). Forty-nine percent of the variance in Phase of Illness is explained by the Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. Phase of Illness has value as a clinical measure of overall palliative need, capturing additional information beyond the Australia-modified Karnofsky Performance Scale and Palliative Care Problem Severity Scale. The lack of significant association between psycho-spiritual problems and Phase of Illness warrants further investigation.

  2. Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles

    PubMed Central

    Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián

    2016-01-01

    In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method's accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, the method's accuracy is adequate for RAR. PMID:27403044
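
    The idea of mapping estimated joint angles to limb posture through forward kinematics can be illustrated with a generic planar two-link arm. This is a hypothetical minimal sketch, not the paper's exoskeleton or shoulder model; the link lengths `l1`, `l2` and the angle convention are invented for illustration:

```python
import math

def fk_2link(theta1, theta2, l1=0.3, l2=0.25):
    """Forward kinematics of a planar two-link arm: map joint angles
    (radians) to the end-effector position. Link lengths are illustrative."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Fully extended arm along the x-axis
ext = fk_2link(0.0, 0.0)
# Shoulder rotated 90 degrees, elbow straight: arm points along the y-axis
up = fk_2link(math.pi / 2, 0.0)
```

    A hybrid estimator along the lines described above would invert such a map (for the real limb model) using the optically measured marker poses as constraints.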

  3. Health IT for Patient Safety and Improving the Safety of Health IT.

    PubMed

    Magrabi, Farah; Ong, Mei-Sing; Coiera, Enrico

    2016-01-01

    Alongside their benefits, health IT applications can pose new risks to patient safety. Problems with IT have been linked to many different types of clinical error, including errors in the prescribing and administration of medications, as well as wrong-patient and wrong-site errors and delays in procedures. There is also growing concern about the risks of data breaches and cyber-security. IT-related clinical errors have their origins in the processes undertaken to design, build, implement and use software systems in a broader sociotechnical context. Safety can be improved with greater standardization of clinical software and by improving the quality of processes at different points in the technology life cycle, spanning design, build, implementation and use in clinical settings. Oversight processes can be set up at a regional or national level to ensure that clinical software systems meet specific standards. Certification and regulation are two mechanisms to improve oversight. In the absence of clear standards, guidelines are useful to promote safe design and implementation practices. Processes to identify and mitigate hazards can be formalised via a safety management system. Minimizing new patient safety risks is critical to realizing the benefits of IT.

  4. Scarlet fever.

    PubMed

    2016-04-27

    Essential facts Scarlet fever is characterised by a rash that usually accompanies a sore throat and flushed cheeks. It is mainly a childhood illness. While this contagious disease rarely poses a danger to life today, outbreaks in the past led to many deaths.

  5. 28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...

  6. 28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...

  7. 28 CFR 549.46 - Procedures for involuntary administration of psychiatric medication.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... an immediate threat of: (A) Bodily harm to self or others; (B) Serious destruction of property... the mental illness or disorder, the inmate is dangerous to self or others, poses a serious threat of...

  8. Ill-defined problem solving in amnestic mild cognitive impairment: linking episodic memory to effective solution generation.

    PubMed

    Sheldon, S; Vandermorris, S; Al-Haj, M; Cohen, S; Winocur, G; Moscovitch, M

    2015-02-01

    It is well accepted that the medial temporal lobes (MTL), and the hippocampus specifically, support episodic memory processes. Emerging evidence suggests that these processes also support the ability to effectively solve ill-defined problems, i.e., problems that do not have a set routine or solution. To test the relation between episodic memory and problem solving, we examined the ability of individuals with single-domain amnestic mild cognitive impairment (aMCI), a condition characterized by episodic memory impairment, to solve ill-defined social problems. Participants with aMCI and age- and education-matched controls were given a battery of tests that included standardized neuropsychological measures, the Autobiographical Interview (Levine et al., 2002), which was scored for episodic content in descriptions of past personal events, and a measure of ill-defined social problem solving. Corroborating previous findings, the aMCI group generated less episodically rich narratives when describing past events. Individuals with aMCI also generated less effective solutions when solving ill-defined problems compared to the control participants. Correlation analyses demonstrated that the ability to recall episodic elements from autobiographical memories was positively related to the ability to effectively solve ill-defined problems. The ability to solve these ill-defined problems was also related to measures of activities of daily living. In conjunction with previous reports, the results of the present study point to a new functional role of episodic memory in ill-defined goal-directed behavior and other non-memory tasks that require flexible thinking. Our findings also have implications for the cognitive and behavioural profile of aMCI by suggesting that the ability to effectively solve ill-defined problems is related to sustained functional independence. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Legal protection of the right to work and employment for persons with mental health problems: a review of legislation across the world.

    PubMed

    Nardodkar, Renuka; Pathare, Soumitra; Ventriglio, Antonio; Castaldelli-Maia, João; Javate, Kenneth R; Torales, Julio; Bhugra, Dinesh

    2016-08-01

    The right to work and employment is indispensable for the social integration of persons with mental health problems. This study examined whether existing laws pose structural barriers to the realization of the right to work and employment of persons with mental health problems across the world. It reviewed the disability-specific and human rights legislation, and the labour laws, of all UN Member States in the context of Article 27 of the UN Convention on the Rights of Persons with Disabilities (CRPD). It was found that laws in 62% of countries explicitly mention mental disability/impairment/illness in the definition of disability. In 64% of countries, laws prohibit discrimination against persons with mental health problems during recruitment; in one-third of countries laws prohibit discontinuation of employment. More than half (56%) the countries have laws in place which offer access to reasonable accommodation in the workplace. In 59% of countries laws promote employment of persons with mental health problems through different affirmative actions. Nearly 50 years after the adoption of the International Covenant on Economic, Social, and Cultural Rights and 10 years after the adoption of the CRPD by the UN General Assembly, legal discrimination against persons with mental health problems continues to exist globally. Countries and policy-makers need to implement legislative measures to ensure non-discrimination of persons with mental health problems during employment.

  10. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    NASA Astrophysics Data System (ADS)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility of also estimating all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
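
    The regularized baseline that such tuning-free methods are contrasted with can be sketched in miniature as plain Tikhonov (ridge) regularization of y = M x on a deliberately ill-conditioned 2 × 2 system. This is a generic illustration of why the inverse problem needs regularization, not the LS-APC algorithm itself; the matrix, data, and weight λ are invented:

```python
def solve_2x2(a, b, c, d, e, f):
    """Solve [[a, b], [c, d]] x = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - e * c) / det)

def tikhonov_2x2(M, y, lam):
    """Minimize ||M x - y||^2 + lam ||x||^2 via the normal equations
    (M^T M + lam I) x = M^T y, for a 2x2 matrix M."""
    (m11, m12), (m21, m22) = M
    a = m11 * m11 + m21 * m21 + lam      # (M^T M + lam I), entry (0, 0)
    b = m11 * m12 + m21 * m22            # off-diagonal entries (symmetric)
    d = m12 * m12 + m22 * m22 + lam      # entry (1, 1)
    e = m11 * y[0] + m21 * y[1]          # M^T y, first component
    f = m12 * y[0] + m22 * y[1]          # M^T y, second component
    return solve_2x2(a, b, b, d, e, f)

M = [[1.0, 1.0], [1.0, 1.0001]]          # nearly singular SRS-like matrix
y_noisy = [2.0, 2.0002]                  # data from x_true = (1, 1) plus noise
x_plain = tikhonov_2x2(M, y_noisy, 0.0)  # unregularized: wildly off
x_reg = tikhonov_2x2(M, y_noisy, 1e-3)   # regularized: close to (1, 1)
```

    The abstract's point is that λ (and the data uncertainties) are usually set by hand; the Bayesian reformulation estimates those quantities from the observations instead.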

  11. Pre-Service Elementary Teachers' Motivation and Ill-Structured Problem Solving in Korea

    ERIC Educational Resources Information Center

    Kim, Min Kyeong; Cho, Mi Kyung

    2016-01-01

    This article examines the use and application of an ill-structured problem to pre-service elementary teachers in Korea in order to find implications of pre-service teacher education with regard to contextualized problem solving by analyzing experiences of ill-structured problem solving. Participants were divided into small groups depending on the…

  12. Application of structured support vector machine backpropagation to a convolutional neural network for human pose estimation.

    PubMed

    Witoonchart, Peerajak; Chongstitvatana, Prabhas

    2017-08-01

    In this study, for the first time, we show how to formulate a structured support vector machine (SSVM) as two layers in a convolutional neural network, where the top layer is a loss-augmented inference layer and the bottom layer is the normal convolutional layer. We show that a deformable part model can be learned with the proposed structured SVM neural network by backpropagating the error of the deformable part model to the convolutional neural network. The forward propagation calculates the loss-augmented inference and the backpropagation calculates the gradient from the loss-augmented inference layer to the convolutional layer. Thus, we obtain a new type of convolutional neural network, called a structured SVM convolutional neural network, which we applied to the human pose estimation problem. This new neural network can be used as the final layers in deep learning. Our method jointly learns the structural model parameters and the appearance model parameters. We implemented our method as a new layer in the existing Caffe library. Copyright © 2017 Elsevier Ltd. All rights reserved.
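
    The loss-augmented inference step described above can be sketched for a toy case with a handful of flat candidate labels; the scores and 0/1 loss here are invented for illustration, and a real deformable-part model would maximize over structured pose configurations rather than a small label set:

```python
def loss_augmented_inference(scores, y_true):
    """Pick the label maximizing model score + task loss (0/1 loss here).
    The structured hinge loss is that label's margin violation w.r.t.
    the score of the true label."""
    best, best_val = None, float("-inf")
    for y, s in scores.items():
        val = s + (0.0 if y == y_true else 1.0)  # score augmented by loss
        if val > best_val:
            best, best_val = y, val
    hinge = max(0.0, best_val - scores[y_true])  # margin violation
    return best, hinge

scores = {"a": 2.0, "b": 1.5, "c": 0.0}          # toy per-label scores
best, hinge = loss_augmented_inference(scores, "a")
```

    In the layered formulation, this maximization is the forward pass of the top layer, and the hinge value is the quantity whose gradient is backpropagated into the convolutional layers.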

  13. Examining the Preparatory Effects of Problem Generation and Solution Generation on Learning from Instruction

    ERIC Educational Resources Information Center

    Kapur, Manu

    2018-01-01

    The goal of this paper is to isolate the preparatory effects of problem-generation from solution generation in problem-posing contexts, and their underlying mechanisms on learning from instruction. Using a randomized-controlled design, students were assigned to one of two conditions: (a) problem-posing with solution generation, where they…

  14. Examining Interactions between Problem Posing and Problem Solving with Prospective Primary Teachers: A Case of Using Fractions

    ERIC Educational Resources Information Center

    Xie, Jinxia; Masingila, Joanna O.

    2017-01-01

    Existing studies have quantitatively evidenced the relatedness between problem posing and problem solving, as well as the magnitude of this relationship. However, the nature and features of this relationship need further qualitative exploration. This paper focuses on exploring the interactions, i.e., mutual effects and supports, between problem…

  15. Adaptive leadership framework for chronic illness: framing a research agenda for transforming care delivery.

    PubMed

    Anderson, Ruth A; Bailey, Donald E; Wu, Bei; Corazzini, Kirsten; McConnell, Eleanor S; Thygeson, N Marcus; Docherty, Sharron L

    2015-01-01

    We propose the Adaptive Leadership Framework for Chronic Illness as a novel framework for conceptualizing, studying, and providing care. This framework is an application of the Adaptive Leadership Framework developed by Heifetz and colleagues for business. Our framework views health care as a complex adaptive system and addresses the intersection at which people with chronic illness interface with the care system. We shift the focus from symptoms alone to symptoms and the challenges they pose for patients/families. We describe how providers and patients/families might collaborate to create shared meaning of symptoms and challenges to coproduce appropriate approaches to care.

  16. A new computerized ionosphere tomography model using the mapping function and an application to the study of seismic-ionosphere disturbance

    NASA Astrophysics Data System (ADS)

    Kong, Jian; Yao, Yibin; Liu, Lei; Zhai, Changzhi; Wang, Zemin

    2016-08-01

    A new algorithm for ionosphere tomography using the mapping function is proposed in this paper. First, the new solution splits the integration process into four layers along the observation ray, and then the single-layer model (SLM) is applied to each integration part using a mapping function. Next, the model parameters are estimated layer by layer with the Kalman filtering method by introducing the scale factor (SF) γ to solve the ill-posed problem. Finally, the inverted images of the different layers are combined into the final CIT image. We utilized simulated data from 23 IGS GPS stations around Europe to verify the estimation accuracy of the new algorithm; the results show that the new CIT model has better accuracy than the SLM in dense data areas and that the CIT residuals are more closely grouped. Compared with the SLM, the RMS of the new CIT model is improved by 16.78%, and the CIT model can provide the three-dimensional variation in the ionosphere. The stability of the new algorithm is discussed by analyzing model accuracy under different error levels (the maximum errors are 5, 10, and 15 TECU, respectively). In addition, the key preset parameter, the SF γ, is given by the International Reference Ionosphere model (IRI2012); an experiment is designed to test the sensitivity of the new algorithm to SF variations. The results show that the IRI2012 is capable of providing initial SF values. Also in this paper, the seismic-ionosphere disturbance (SID) of the 2011 Japan earthquake is studied using the new CIT algorithm. Combined with the TEC time sequence of Sat.15, we find that the SID occurrence time and reaction area are highly related to the main shock time and epicenter. According to the CIT images, there is a clear vertical upward movement of electron density (from the 150-km layer to the 450-km layer) during this SID event; however, the peak value areas in the different layers were different, which means that the horizontal movement velocity is not consistent among the layers. The potential physical triggering mechanism is also discussed in this paper.
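
    The layer-by-layer Kalman filtering idea can be reduced to its scalar measurement update, in which a prior state estimate (e.g., an IRI-derived background electron content) is blended with a noisy observation according to their variances. The numbers below are invented, and this is a generic textbook update rather than the paper's full layered CIT estimator:

```python
def kalman_update(x_prior, p_prior, z, r):
    """Scalar Kalman measurement update: combine a prior estimate
    (variance p_prior) with an observation z (variance r)."""
    k = p_prior / (p_prior + r)       # Kalman gain in [0, 1]
    x_post = x_prior + k * (z - x_prior)
    p_post = (1.0 - k) * p_prior      # posterior variance shrinks
    return x_post, p_post

# Prior of 10 TECU (variance 4) fused with a 14 TECU measurement (variance 4):
# equal variances give a gain of 0.5, so the estimate lands halfway.
x_post, p_post = kalman_update(10.0, 4.0, 14.0, 4.0)
```

    The prior term is what regularizes the otherwise ill-posed layered inversion: where data are sparse, the gain stays small and the background value dominates.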

  17. Clinical review: Medication errors in critical care

    PubMed Central

    Moyen, Eric; Camiré, Eric; Stelfox, Henry Thomas

    2008-01-01

    Medication errors in critical care are frequent, serious, and predictable. Critically ill patients are prescribed twice as many medications as patients outside of the intensive care unit (ICU) and nearly all will suffer a potentially life-threatening error at some point during their stay. The aim of this article is to provide a basic review of medication errors in the ICU, identify risk factors for medication errors, and suggest strategies to prevent errors and manage their consequences. PMID:18373883

  18. The influence of initial conditions on dispersion and reactions

    NASA Astrophysics Data System (ADS)

    Wood, B. D.

    2016-12-01

    In various generalizations of the reaction-dispersion problem, researchers have developed frameworks in which the apparent dispersion coefficient can be negative. Such dispersion coefficients raise several difficult questions. Most importantly, the presence of a negative dispersion coefficient at the macroscale leads to a macroscale representation that exhibits an apparent decrease in entropy with increasing time; this, then, appears to violate basic thermodynamic principles. In addition, the proposition of a negative dispersion coefficient leads to an inherently ill-posed mathematical transport equation. The ill-posedness of the problem arises because there is no unique initial condition that corresponds to a later-time concentration distribution (assuming that discontinuous initial conditions are allowed). In this presentation, we explain how the phenomenon of negative dispersion coefficients actually arises because the governing differential equation for early times should, when derived correctly, incorporate a term that depends upon the initial and boundary conditions. The process of reaction introduces a similar phenomenon, where the structure of the initial and boundary conditions influences the form of the macroscopic balance equations. When upscaling is done properly, new equations are developed that include source terms that are not present in the classical (late-time) reaction-dispersion equation. These source terms depend upon the structure of the initial condition of the reacting species, and they decrease exponentially in time (thus, the equations converge to the conventional equations at asymptotic times). With this formulation, the resulting dispersion tensor is always positive semi-definite, and the reaction terms directly incorporate information about the state of mixedness of the system. This formulation avoids many of the problems that would be engendered by defining negative-definite dispersion tensors, and it properly represents the effective rate of reaction at early times.
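
    The ill-posedness of a negative dispersion coefficient can be seen numerically: an explicit finite-difference scheme for u_t = D u_xx damps a perturbation when D > 0 but amplifies it without bound when D < 0 (the backward heat equation). The grid and coefficient values below are arbitrary illustration choices, not from the presentation:

```python
def diffuse(u, d, dt, dx, steps):
    """Explicit finite differences for u_t = d * u_xx with zero boundary
    values; r = d*dt/dx^2 must satisfy 0 < r <= 1/2 for stability."""
    r = d * dt / dx ** 2
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = u[i] + r * (u[i + 1] - 2.0 * u[i] + u[i - 1])
        u = new
    return u

bump = [0.0] * 21
bump[10] = 1.0                               # unit perturbation at the center
u_pos = diffuse(bump, +1.0, 0.1, 1.0, 50)    # D > 0: peak decays (entropy grows)
u_neg = diffuse(bump, -1.0, 0.1, 1.0, 50)    # D < 0: perturbation blows up
```

    The blow-up of the D < 0 run is the discrete face of the non-uniqueness argument above: arbitrarily small changes in later-time data correspond to wildly different earlier states.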

  19. Bayesian tomography by interacting Markov chains

    NASA Astrophysics Data System (ADS)

    Romary, T.

    2017-12-01

    In seismic tomography, we seek to determine the velocity of the underground from noisy first-arrival travel-time observations. In most situations, this is an ill-posed inverse problem that admits several imperfect solutions. Given an a priori distribution over the parameters of the velocity model, the Bayesian formulation allows us to state this problem as a probabilistic one, with a solution in the form of a posterior distribution. The posterior distribution is generally high dimensional and may exhibit multimodality. Moreover, as it is known only up to a constant, the only sensible way to address this problem is to try to generate simulations from the posterior. The natural tools to perform these simulations are Markov chain Monte Carlo (MCMC) methods. Classical implementations of MCMC algorithms generally suffer from slow mixing: the generated states are slow to enter the stationary regime, that is, to fit the observations, and when one mode of the posterior is eventually identified, it may become difficult to visit others. Using a varying temperature parameter that relaxes the constraint on the data may help to enter the stationary regime. Besides, the sequential nature of MCMC makes it ill suited to parallel implementation. Running a large number of chains in parallel may be suboptimal, as the information gathered by each chain is not mutualized. Parallel tempering (PT) can be seen as a first attempt to make parallel chains at different temperatures communicate, but they only exchange information between current states. In this talk, I will show that PT actually belongs to a general class of interacting Markov chain algorithms. I will also show that this class enables the design of interacting schemes that can take advantage of the whole history of the chains, by authorizing exchanges toward already visited states. The algorithms will be illustrated with toy examples and an application to first-arrival travel-time tomography.
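
    A minimal two-temperature parallel tempering sketch illustrates the current-state exchange that the talk generalizes. The bimodal target, temperatures, and proposal scale are invented for illustration, and the history-exchange extension is not implemented here:

```python
import math
import random

def logp(x):
    """Log-density of a bimodal target: Gaussians centred at -4 and +4."""
    a = -2.0 * (x - 4.0) ** 2
    b = -2.0 * (x + 4.0) ** 2
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def metropolis_step(x, lp, temp, rng):
    """One random-walk Metropolis step on the tempered target p(x)^(1/temp)."""
    prop = x + rng.gauss(0.0, 1.0)
    lp_prop = logp(prop)
    if rng.random() < math.exp(min(0.0, (lp_prop - lp) / temp)):
        return prop, lp_prop
    return x, lp

def swap_log_prob(lp_cold, lp_hot, t_cold, t_hot):
    """Log acceptance probability for exchanging the two chains' states."""
    return (1.0 / t_cold - 1.0 / t_hot) * (lp_hot - lp_cold)

rng = random.Random(0)
temps = (1.0, 8.0)            # cold chain targets p, hot chain targets p^(1/8)
x = [0.0, 0.0]
lp = [logp(v) for v in x]
cold_samples = []
for _ in range(5000):
    for k in (0, 1):          # independent within-chain moves
        x[k], lp[k] = metropolis_step(x[k], lp[k], temps[k], rng)
    # propose exchanging the chains' current states
    if rng.random() < math.exp(min(0.0, swap_log_prob(lp[0], lp[1], *temps))):
        x[0], x[1] = x[1], x[0]
        lp[0], lp[1] = lp[1], lp[0]
    cold_samples.append(x[0])
```

    The hot chain roams freely between the two modes, and the swap moves feed those mode jumps to the cold chain; the class of algorithms described in the talk would additionally allow swaps toward states visited earlier in the chains' histories.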

  20. Adaptive Numerical Dissipation Control in High Order Schemes for Multi-D Non-Ideal MHD

    NASA Technical Reports Server (NTRS)

    Yee, H. C.; Sjoegreen, B.

    2005-01-01

    The required type and amount of numerical dissipation/filter to accurately resolve all relevant multiscales of complex MHD unsteady high-speed shock/shear/turbulence/combustion problems not only depend on the physical problem but also vary from one flow region to another. In addition, proper and efficient control of the divergence of the magnetic field (Div(B)) numerical error for high order shock-capturing methods poses extra requirements for the considered type of CPU-intensive computations. The goal is to extend our adaptive numerical dissipation control in high order filter schemes and our new divergence-free methods for ideal MHD to non-ideal MHD that includes viscosity and resistivity. The key idea consists of automatic detection of different flow features as distinct sensors to signal the appropriate type and amount of numerical dissipation/filter where needed, leaving the rest of the region free from numerical dissipation contamination. These scheme-independent detectors are capable of distinguishing shocks/shears, flame sheets, turbulent fluctuations and spurious high-frequency oscillations. The detection algorithm is based on an artificial compression method (ACM) (for shocks/shears) and redundant multiresolution wavelets (WAV) (for the above types of flow feature). These filters also provide a natural and efficient way to minimize the Div(B) numerical error.
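
    The notion of a sensor that flags only non-smooth regions, leaving smooth flow untouched by added dissipation, can be caricatured in one dimension with a normalized second-difference detector. This is a generic smoothness indicator with an arbitrary threshold, not the ACM or wavelet sensors of the paper:

```python
def flag_nonsmooth(u, threshold=0.5):
    """Flag interior cells where the second difference is large relative to
    the local first differences; smooth regions stay unflagged (and would
    receive no extra numerical dissipation)."""
    flags = [False] * len(u)
    for i in range(1, len(u) - 1):
        curvature = abs(u[i + 1] - 2.0 * u[i] + u[i - 1])
        scale = abs(u[i + 1] - u[i]) + abs(u[i] - u[i - 1]) + 1e-12
        flags[i] = curvature / scale > threshold
    return flags

smooth = flag_nonsmooth([0.0, 1.0, 2.0, 3.0, 4.0])     # linear ramp: no flags
step = flag_nonsmooth([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # discontinuity: flagged
```

    In an adaptive filter scheme, flags of this kind would gate where, and how strongly, the shock-capturing dissipation is applied.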

Top