Science.gov

Sample records for posteriori parameter choice

  1. Marginal Maximum A Posteriori Item Parameter Estimation for the Generalized Graded Unfolding Model

    ERIC Educational Resources Information Center

    Roberts, James S.; Thompson, Vanessa M.

    2011-01-01

    A marginal maximum a posteriori (MMAP) procedure was implemented to estimate item parameters in the generalized graded unfolding model (GGUM). Estimates from the MMAP method were compared with those derived from marginal maximum likelihood (MML) and Markov chain Monte Carlo (MCMC) procedures in a recovery simulation that varied sample size,…

  2. Parameter Choices for Approximation by Harmonic Splines

    NASA Astrophysics Data System (ADS)

    Gutting, Martin

    2016-04-01

    The approximation by harmonic trial functions allows the construction of solutions of boundary value problems in geoscience, e.g., in terms of harmonic splines. Due to their localizing properties, splines permit regional modeling or the improvement of a global model in a part of the Earth's surface. Fast multipole methods have been developed for some of the occurring kernels to obtain a fast matrix-vector multiplication. The main idea of the fast multipole algorithm is a hierarchical decomposition of the computational domain into cubes and a kernel approximation for the more distant points. This reduces the numerical effort of the matrix-vector multiplication from quadratic to linear in the number of points, for a prescribed accuracy of the kernel approximation. Applying the fast multipole method to spline approximation, which also allows the treatment of noisy data, requires the choice of a smoothing parameter. We investigate different methods to choose this parameter (ideally automatically), with and without prior knowledge of the noise level, and assess the performance of these methods for different types of noise in a large simulation study. Applications to gravitational field modeling are presented, as well as the extension to boundary value problems where the boundary is the known surface of the Earth itself.
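
    The two regimes mentioned above (noise level known vs. unknown) correspond to classical parameter choice rules such as the discrepancy principle and the L-curve. The following sketch is a generic illustration of the discrepancy principle for Tikhonov smoothing of a linear model A x ≈ b, not the authors' spline-specific procedure; the toy problem and all names are assumptions.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def discrepancy_principle(A, b, delta, tau=1.05, lams=None):
    """Pick the largest lam whose residual stays within tau * delta.

    delta is the (assumed known) noise level ||noise||_2; without such
    knowledge one falls back on heuristics such as the L-curve.
    """
    if lams is None:
        lams = np.logspace(-8, 2, 60)
    best = lams[0]
    for lam in np.sort(lams):
        x = tikhonov_solve(A, b, lam)
        if np.linalg.norm(A @ x - b) <= tau * delta:
            best = lam      # residual still within the noise budget
        else:
            break           # residual now exceeds the noise level
    return best

# toy problem: a mildly ill-conditioned design with additive noise
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 50), 12, increasing=True)
x_true = rng.standard_normal(12)
noise = 0.01 * rng.standard_normal(50)
b = A @ x_true + noise
print(discrepancy_principle(A, b, np.linalg.norm(noise)))
```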

  3. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    SciTech Connect

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, the quality of the reconstructed bioluminescent source obtained by regularization methods depends crucially on the choice of the regularization parameters. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors proposed a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated with an l2 data-fidelity term and a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norm. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used…
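
    The quantities the abstract says must be computed (residual norm and regularized-solution norm) suffice because of an envelope-theorem identity. A hedged sketch for standard-form Tikhonov regularization follows; this is the generic identity behind model-function methods, not necessarily the exact functional used by the authors:

```latex
F(\lambda) \;=\; \min_{x}\ \|Ax-b\|_2^2 + \lambda\|x\|_2^2
           \;=\; \|Ax_\lambda-b\|_2^2 + \lambda\|x_\lambda\|_2^2,
\qquad
F'(\lambda) \;=\; \|x_\lambda\|_2^2 .
```

    Since F(λ) and F'(λ) are computable from the residual and solution norms alone, a simple rational model function m(λ) ≈ F(λ) can be interpolated from these values and used to produce the next iterate for λ, with no estimate of the noise level required.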

  4. A Posteriori Transit Probabilities

    NASA Astrophysics Data System (ADS)

    Stevens, Daniel J.; Gaudi, B. Scott

    2013-08-01

    …masses in these regimes. We therefore suggest that companions with minimum masses in these regimes might be better-than-expected targets for transit follow-up, and we identify promising targets from RV-detected planets in the literature. Finally, we consider the uncertainty in the transit probability arising from uncertainties in the input parameters, and the effect of ignoring the dependence of the transit probability on the true semimajor axis on i.

  5. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue for the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first-order systems is extended to error estimation for mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be applied to moment-matching-based, Gramian-matrix-based, or modal model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
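
    The extension from first- to second-order systems rests on the standard reformulation of a mechanical system as a first-order one; a sketch in assumed notation (not taken from the paper):

```latex
M\ddot q + D\dot q + Kq = f
\quad\Longleftrightarrow\quad
\frac{d}{dt}\begin{pmatrix} q \\ \dot q \end{pmatrix}
=
\begin{pmatrix} 0 & I \\ -M^{-1}K & -M^{-1}D \end{pmatrix}
\begin{pmatrix} q \\ \dot q \end{pmatrix}
+
\begin{pmatrix} 0 \\ M^{-1}f \end{pmatrix}.
```

    A residual-based first-order error bound can then be evaluated for the reduced solution; exploiting the block structure above, rather than treating the system as a generic first-order one, is what yields the sharper second-order estimator.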

  6. Choice of Proton Driver Parameters for a Neutrino Factory.

    SciTech Connect

    KIRK, H.G.; BERG, J.S.; FERNOW, R.C.; GALLARDO, J.C.; SIMOS, N.; WENG, W.

    2006-06-23

    We discuss criteria for designing an optimal 'green field' proton driver for a neutrino factory. The driver parameters are determined by considerations of space charge, power capabilities of the target, beam loading and available RF peak power.

  7. Choice.

    PubMed

    Greenberg, Jay

    2008-09-01

    Understanding how and why analysands make the choices they do is central to both the clinical and the theoretical projects of psychoanalysis. And yet we know very little about the process of choice or about the relationship between choices and motives. A striking parallel is to be found between the ways choice is narrated in ancient Greek texts and the experience of analysts as they observe patients making choices in everyday clinical work. Pursuing this convergence of classical and contemporary sensibilities will illuminate crucial elements of the various meanings of choice, and of the way that these meanings change over the course of psychoanalytic treatment. PMID:18802123

  8. Specifics of Mode Parameters Choice Under Twin Arc Welding of Fillet Welds

    NASA Astrophysics Data System (ADS)

    Melnikov, A. U.; Fiveyskiy, A. M.; Sholokhov, M. A.

    2016-04-01

    The present article covers the specifics of choosing mode parameters for twin arc welding of fillet welds. The necessity of adjusting the mode parameters of the second arc, owing to the metal already heated by the first arc, was demonstrated. The obtained correction indexes allow the mode parameters to be determined with satisfactory accuracy for given dimensions of the weld joint.

  9. Choice of Proton Driver Parameters for a Neutrino Factory

    SciTech Connect

    Kirk,H.G.; Berg, J. S.; Fernow, R. C.; Gallardo, J. C.; Simos, N.; Weng, W.-T.; Brooks, S.

    2006-06-26

    We discuss criteria for designing an optimal 'green field' proton driver for a neutrino factory. The driver parameters are determined by considerations of space charge, power capabilities of the target, beam loading and available RF peak power. A neutrino factory may be the best experimental tool to unravel the physics involved in neutrino oscillation and CP violation phenomena [1]. To have sufficient neutrino flux for acceptable physics results within 5 years requires about 10²² protons on target per year, which corresponds to 1-4 MW of proton beam power from the proton driver, depending on the beam energy. In the past, there were individual proposals from different laboratories for particular proton driver designs capable of delivering beam power from 2 to 4 MW, without consistent attention paid to the needs or requirements of the downstream systems. In this study, we try to identify the requirements of those downstream systems first, then see whether it is possible to design a proton driver to meet those needs. Such a study will also help site-specific proposals to further improve their designs to better serve the needs of a proton driver for neutrino factory applications.

  10. Method study of parameter choice for a circular proton-proton collider

    NASA Astrophysics Data System (ADS)

    Su, Feng; Gao, Jie; Xiao, Ming; Wang, Dou; Wang, Yi-Wei; Bai, Sha; Bian, Tian-Jian

    2016-01-01

    In this paper we show a systematic method of appropriate parameter choice for a circular proton-proton collider by using an analytical expression for the beam-beam tune shift limit, starting from a given design goal and technical limitations. A suitable parameter space has been explored. Based on the parameter scan, sets of appropriate parameters designed for a 50 km and 100 km circular proton-proton collider are proposed. Supported by National Natural Science Foundation of China (11175192)

  11. Empirical estimation of consistency parameter in intertemporal choice based on Tsallis’ statistics

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2007-07-01

    Impulsivity and inconsistency in intertemporal choice have been attracting attention in econophysics and neuroeconomics. Although loss of self-control by substance abusers is strongly related to their inconsistency in intertemporal choice, researchers in neuroeconomics and psychopharmacology have usually studied impulsivity in intertemporal choice using a discount rate (e.g. hyperbolic k), with little effort being expended on parameterizing a subject's inconsistency in intertemporal choice. Recent studies using Tsallis' statistics-based econophysics have found a discount function (i.e. q-exponential discount function) which may continuously parameterize a subject's consistency in intertemporal choice. In order to examine the usefulness of the consistency parameter (0 ≤ q ≤ 1) in the q-exponential discounting function in behavioral studies, we experimentally estimated the consistency parameter q in Tsallis' statistics-based discounting function by assessing the points of subjective equality (indifference points) at seven delays (1 week to 25 years) in humans (N=24). We observed that most (N=19) subjects' intertemporal choice was completely inconsistent (q = 0, i.e. hyperbolic discounting), the mean consistency (0 ≤ q ≤ 1) was smaller than 0.5, and only one subject had completely consistent intertemporal choice (q = 1, i.e. exponential discounting). There was no significant correlation between impulsivity and inconsistency parameters. Our results indicate that individual differences in consistency in intertemporal choice can be parameterized by introducing a q-exponential discount function, and that most people discount delayed rewards hyperbolically rather than exponentially (i.e. mean q is smaller than 0.5). Further, impulsivity and inconsistency in intertemporal choice can be considered separate behavioral tendencies. The usefulness of the consistency parameter q in psychopharmacological studies of addictive behavior was demonstrated in the present study.
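
    The q-exponential discount function referred to here is V(D) = V(0) / [1 + (1 - q) k D]^(1/(1-q)), which reduces to hyperbolic discounting at q = 0 and to exponential discounting as q → 1. Below is a minimal sketch of fitting q and k to indifference points; the data and names are illustrative, not the study's.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exponential_value(delay, k, q, amount=1.0):
    """Subjective value under Tsallis' q-exponential discounting:
    V(D) = amount / [1 + (1 - q) k D]^(1 / (1 - q)).
    q = 0 gives hyperbolic discounting, q -> 1 exponential discounting."""
    q = np.clip(q, 0.0, 0.999)   # keep away from the q = 1 singularity
    return amount / (1.0 + (1.0 - q) * k * delay) ** (1.0 / (1.0 - q))

# seven delays (in weeks, 1 week to 25 years) and hypothetical
# indifference points for one subject
delays = np.array([1.0, 2.0, 4.0, 13.0, 26.0, 52.0, 1300.0])
indiff = np.array([0.95, 0.90, 0.83, 0.65, 0.52, 0.38, 0.05])

(k_hat, q_hat), _ = curve_fit(q_exponential_value, delays, indiff,
                              p0=[0.01, 0.5], bounds=([1e-6, 0.0], [10.0, 1.0]))
print(f"impulsivity k = {k_hat:.4f}, consistency q = {q_hat:.3f}")
```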

  12. [ETHICAL PRINCIPLES AND A POSTERIORI JUSTIFICATIONS].

    PubMed

    Heintz, Monica

    2015-12-01

    It is difficult to conceive that the human being, while being the same everywhere, could be cared for in such different ways in other societies. Anthropologists acknowledge that the diversity of cultures implies a diversity of moral values, and thus that in a multicultural society the individual can draw upon different moral frames to justify the peculiarities of her/his demand for care. But how can we determine which moral frame catalyzes behaviour when all we can record are a posteriori justifications of actions? In most multicultural societies where several moral frames coexist, there is an implicit hierarchy between ethical systems, derived from a hierarchy of power, which falsifies these a posteriori justifications. Moreover, anthropologists often fail to acknowledge that individual behaviour does not always reflect individual values, but is more often the result of negotiations between the moral frames available in society and her/his own desires and personal experience. This is certainly due to the difficulty of accounting for a dynamic and complex interplay of moral values that cannot be analysed as a system. The impact of individual experience on the way individuals give or receive care may also be only weakly linked to a moral system, even when this reference comes up explicitly in the a posteriori justifications. PMID:27120823

  13. Parameter choice matters: validating probe parameters for use in mixed-solvent simulations.

    PubMed

    Lexa, Katrina W; Goh, Garrett B; Carlson, Heather A

    2014-08-25

    Probe mapping is a common approach for identifying potential binding sites in structure-based drug design; however, it typically relies on energy minimizations of probes in the gas phase and a static protein structure. The mixed-solvent molecular dynamics (MixMD) approach was recently developed to account for full protein flexibility and solvation effects in hot-spot mapping. Our first study used only acetonitrile as a probe, and here, we have augmented the set of functional group probes through careful testing and parameter validation. A diverse range of probes are needed in order to map complex binding interactions. A small variation in probe parameters can adversely affect mixed-solvent behavior, which we highlight with isopropanol. We tested 11 solvents to identify six with appropriate behavior in TIP3P water to use as organic probes in the MixMD method. In addition to acetonitrile and isopropanol, we have identified acetone, N-methylacetamide, imidazole, and pyrimidine. These probe solvents will enable MixMD studies to recover hydrogen-bonding sites, hydrophobic pockets, protein-protein interactions, and aromatic hotspots. Also, we show that ternary-solvent systems can be incorporated within a single simulation. Importantly, these binary and ternary solvents do not require the artificial repulsion terms that other methods need. Within merely 5 ns, layered solvent boxes become evenly mixed for soluble probes. We used radial distribution functions to evaluate solvent behavior, determine adequate mixing, and confirm the absence of phase separation. We recommend that radial distribution functions should be used to assess adequate sampling in all mixed-solvent techniques rather than the current practice of examining the solvent ratios at the edges of the solvent box. PMID:25058662

  14. Robust maximum a posteriori image super-resolution

    NASA Astrophysics Data System (ADS)

    Vrigkas, Michalis; Nikou, Christophoros; Kondi, Lisimachos P.

    2014-07-01

    A global robust M-estimation scheme for maximum a posteriori (MAP) image super-resolution which efficiently addresses the presence of outliers in the low-resolution images is proposed. In iterative MAP image super-resolution, the objective function to be minimized involves the highly resolved image, a parameter controlling the step size of the iterative algorithm, and a parameter weighing the data fidelity term with respect to the smoothness term. Apart from the robust estimation of the high-resolution image, the contribution of the proposed method is twofold: (1) the robust computation of the regularization parameters controlling the relative strength of the prior with respect to the data fidelity term and (2) the robust estimation of the optimal step size in the update of the high-resolution image. Experimental results demonstrate that integrating these estimations into a robust framework leads to significant improvement in the accuracy of the high-resolution image.
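
    In generic form (notation assumed here, not quoted from the paper), the objective described is

```latex
\hat x \;=\; \arg\min_{x}\ \sum_{k}\rho\big(y_k - W_k x\big) \;+\; \lambda\,\|Qx\|_2^2,
\qquad
x^{(t+1)} \;=\; x^{(t)} - \beta\,\nabla_x J\big(x^{(t)}\big),
```

    where y_k are the low-resolution frames, W_k the warp/blur/decimation operators, Q a high-pass smoothness operator, and ρ(·) a robust M-estimator applied to the residual in place of the usual quadratic loss; the paper's contribution is the robust, data-driven estimation of the weight λ and the step size β.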

  15. A Small-Sample Choice of the Tuning Parameter in Ridge Regression

    PubMed Central

    Boonstra, Philip S.; Mukherjee, Bhramar; Taylor, Jeremy M. G.

    2015-01-01

    We propose new approaches for choosing the shrinkage parameter in ridge regression, a penalized likelihood method for regularizing linear regression coefficients, when the number of observations is small relative to the number of parameters. Existing methods may lead to extreme choices of this parameter, which will either not shrink the coefficients enough or shrink them by too much. Within this “small-n, large-p” context, we suggest a correction to the common generalized cross-validation (GCV) method that preserves the asymptotic optimality of the original GCV. We also introduce the notion of a “hyperpenalty”, which shrinks the shrinkage parameter itself, and make a specific recommendation regarding the choice of hyperpenalty that empirically works well in a broad range of scenarios. A simple algorithm jointly estimates the shrinkage parameter and regression coefficients in the hyperpenalized likelihood. In a comprehensive simulation study of small-sample scenarios, our proposed approaches offer superior prediction over nine other existing methods. PMID:26985140
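
    For reference, the classical GCV criterion being corrected is GCV(λ) = n * ||(I - H(λ)) y||^2 / [tr(I - H(λ))]^2 with hat matrix H(λ) = X (XᵀX + λI)⁻¹ Xᵀ. A minimal SVD-based sketch of this baseline follows (not the authors' corrected GCV or hyperpenalty methods); in the small-n, large-p regime it can select the extreme values of λ that motivate the paper.

```python
import numpy as np

def gcv_ridge(X, y, lams):
    """Ridge penalty minimizing the classical GCV criterion:
    GCV(lam) = n * ||(I - H) y||^2 / tr(I - H)^2,
    H = X (X'X + lam I)^{-1} X', computed cheaply from one SVD."""
    n = X.shape[0]
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    Uty = U.T @ y
    best_lam, best_score = None, np.inf
    for lam in lams:
        shrink = s**2 / (s**2 + lam)       # eigenvalues of H(lam)
        resid = y - U @ (shrink * Uty)     # (I - H) y
        score = n * (resid @ resid) / (n - shrink.sum()) ** 2
        if score < best_score:
            best_lam, best_score = lam, score
    return best_lam

rng = np.random.default_rng(1)
n, p = 30, 100                             # "small-n, large-p"
X = rng.standard_normal((n, p))
beta = np.concatenate([rng.standard_normal(5), np.zeros(p - 5)])
y = X @ beta + 0.5 * rng.standard_normal(n)
print(gcv_ridge(X, y, np.logspace(-3, 4, 80)))
```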

  16. On the choice of GARCH parameters for efficient modelling of real stock price dynamics

    NASA Astrophysics Data System (ADS)

    Pokhilchuk, K. A.; Savel'ev, S. E.

    2016-04-01

    We propose two different methods for the optimal choice of GARCH(1,1) parameters for the efficient modelling of stock prices using a particular return series. Using (as an example) stock return data for Intel Corporation, we vary parameters to fit the average volatility as well as the fourth (linked to kurtosis of the data) and eighth statistical moments, and observe poor convergence of our simulated eighth moment to the stock data. Results indicate that fitting higher-order moments of a return series might not be an optimal approach for choosing GARCH parameters. In contrast, the simulated exponent of the Fourier spectrum decay is much less noisy and can easily fit the corresponding decay of the empirical Fourier spectrum of the Intel return series, allowing us to efficiently define all GARCH parameters. We compare the estimates of GARCH parameters obtained by fitting price data Fourier spectra with the ones obtained from standard software packages and conclude that the estimates obtained here are deeper in the stability region of parameters. Thus, the proposed method of using Fourier spectra of stock data to estimate GARCH parameters results in a more robust and stable stochastic process, but with a shorter characteristic autocovariance time.
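
    For background, a GARCH(1,1) return series is r_t = σ_t ε_t with σ²_t = ω + α r²_{t-1} + β σ²_{t-1}, and stationarity requires α + β < 1. The sketch below simulates such a series and computes the power spectrum of the squared returns, the kind of quantity one would match against an empirical spectrum; the parameter values are illustrative, not the Intel estimates.

```python
import numpy as np

def simulate_garch11(omega, alpha, beta, n, rng):
    """Simulate returns r_t = sigma_t * eps_t with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    assert alpha + beta < 1.0, "stationarity requires alpha + beta < 1"
    r = np.empty(n)
    var = omega / (1.0 - alpha - beta)   # start at the unconditional variance
    for t in range(n):
        r[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * r[t] ** 2 + beta * var
    return r

rng = np.random.default_rng(7)
r = simulate_garch11(omega=1e-6, alpha=0.08, beta=0.90, n=4096, rng=rng)

# power spectrum of (centered) squared returns; its decay with frequency
# is the kind of curve fitted against the empirical Fourier spectrum
sq = r**2 - np.mean(r**2)
power = np.abs(np.fft.rfft(sq)) ** 2
freqs = np.fft.rfftfreq(sq.size)
print(freqs[1:6], power[1:6])
```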

  17. Maximum a posteriori decoder for digital communications

    NASA Technical Reports Server (NTRS)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.

  18. Ridge Regression in Prediction Problems: Automatic Choice of the Ridge Parameter

    PubMed Central

    Cule, Erika; De Iorio, Maria

    2013-01-01

    To date, numerous genetic variants have been identified as associated with diverse phenotypic traits. However, identified associations generally explain only a small proportion of trait heritability and the predictive power of models incorporating only known-associated variants has been small. Multiple regression is a popular framework in which to consider the joint effect of many genetic variants simultaneously. Ordinary multiple regression is seldom appropriate in the context of genetic data, due to the high dimensionality of the data and the correlation structure among the predictors. There has been a resurgence of interest in the use of penalised regression techniques to circumvent these difficulties. In this paper, we focus on ridge regression, a penalised regression approach that has been shown to offer good performance in multivariate prediction problems. One challenge in the application of ridge regression is the choice of the ridge parameter that controls the amount of shrinkage of the regression coefficients. We present a method to determine the ridge parameter based on the data, with the aim of good performance in high-dimensional prediction problems. We establish a theoretical justification for our approach, and demonstrate its performance on simulated genetic data and on a real data example. Fitting a ridge regression model to hundreds of thousands to millions of genetic variants simultaneously presents computational challenges. We have developed an R package, ridge, which addresses these issues. Ridge implements the automatic choice of ridge parameter presented in this paper, and is freely available from CRAN. PMID:23893343

  19. Robust contrast source inversion method with automatic choice rule of regularization parameters for ultrasound waveform tomography

    NASA Astrophysics Data System (ADS)

    Lin, Hongxiang; Azuma, Takashi; Qu, Xiaolei; Takagi, Shu

    2016-07-01

    We consider ultrasound waveform tomography using an ultrasound prototype equipped with ring-array transducers. For this purpose, we use robust contrast source inversion (robust CSI), viz. extended contrast source inversion, to reconstruct the sound-speed image from the wave-field data. The robust CSI method is implemented by the alternating minimization method. An automatic choice rule is incorporated into the alternating minimization method in order to heuristically determine a suitable regularization parameter while iterating. We prove the convergence of this algorithm. The numerical examples show that the robust CSI method with the automatic choice rule improves the spatial resolution of medical images and enhances robustness, even when wave-field data at a wavelength of 6.16 mm contaminated by 5% noise are used. The numerical results also show that the images reconstructed by the proposed method yield a spatial resolution of approximately half the wavelength, which may be adequate for imaging a breast tumor at Stage I.

  20. Parameters of rewards on choice behavior in Siamese fighting fish (Betta splendens).

    PubMed

    Shapiro, Martin S; Jensen, Ashley L

    2009-09-01

    Five experiments were conducted with Siamese fighting fish (Betta splendens) to investigate how choices in a T-maze were affected by parameters of a social reward (aggression display to another male): presence or absence, amount, delay and distance traveled. Bettas showed a preference for the side associated with the presence of another male rather than the side associated with nothing (Exp 1), a greater length of time of the reward (Exp 2) and shorter delay (Exp 3). The animals were indifferent when one side offered a longer delay to a longer reward time compared with a shorter delay to a shorter reward time (Exp 4). What was most surprising, however, was that fish preferred to choose the side that was associated with swimming a greater distance to reach an opponent male (Exp 5). These experiments demonstrate that, while some parameters of a visual reward affect behavior in predictable ways (greater amount, shorter delay), the complex motivations underlying inter-male aggression can produce what appear to be paradoxical results. PMID:19615613

  1. Optimal choice of the parameters for ventilation and methane drainage in a longwall face with caving

    SciTech Connect

    Dziurzynski, W.; Nawrat, S.

    1995-12-31

    An increasing concentration of coal production, especially in circumstances of intensive methane inflow, makes coal mine managing staff apply new techniques of safe mining. It also paves the way for scientists to develop new directions of investigation and to implement state-of-the-art technical solutions. At the same time, it can be noticed that the funds assigned for expensive 'in situ' investigations are continuously decreasing, while better and better results are achieved by applying computer techniques to calculate the parameters of the ventilation process. Recent theoretical and experimental investigations of air and gas (methane) flow in longwall areas with caving, combined with the implementation of a methane drainage system, allowed the creation of a mathematical model and, consequently, a computer-supported numerical simulation of the discussed phenomena. The mathematical model has been modified and the simulation program prepared in such a way that the software is convenient for a user looking for an optimal solution. The paper presents the methodology for the optimal choice of the following parameters: (1) the ventilation system; and (2) the rate of flow through the wall. The procedure takes into consideration keeping a safe level of methane concentration in the air flowing through the longwall, as well as the criterion of maximum methane concentration within the methane drainage pipeline. Results of variant computer simulations for the longwall with caving are shown in graphs and tables.

  2. Minimization of multi-penalty functionals by alternating iterative thresholding and optimal parameter choices

    NASA Astrophysics Data System (ADS)

    Naumova, Valeriya; Peter, Steffen

    2014-12-01

    Inspired by several recent developments in regularization theory, optimization, and signal processing, we present and analyze a numerical approach to multi-penalty regularization in spaces of sparsely represented functions. The sparsity prior is motivated by the largely expected geometrical/structured features of high-dimensional data, which may not be well-represented in the framework of typically more isotropic Hilbert spaces. In this paper, we are particularly interested in regularizers which are able to correctly model and separate the multiple components of additively mixed signals. This situation is rather common, as pure signals may be corrupted by additive noise. To this end, we consider a regularization functional composed of a data-fidelity term, where signal and noise are additively mixed, a non-smooth and non-convex sparsity-promoting term, and a penalty term to model the noise. We propose and analyze the convergence of an iterative alternating algorithm based on simple iterative thresholding steps to perform the minimization of the functional. By means of this algorithm, we explore the effect of choosing different regularization parameters and penalization norms in terms of the quality of recovering the pure signal and separating it from additive noise. For a given fixed noise level, numerical experiments confirm a significant improvement in performance compared to standard one-parameter regularization methods. By using high-dimensional data analysis methods such as principal component analysis, we are able to show the correct geometrical clustering of regularized solutions around the expected solution. Finally, for the compressive sensing problems considered in our experiments, we provide a guideline for the choice of regularization norms and parameters.
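
    A minimal sketch of this kind of alternating scheme, for the convex special case of minimizing ||Φu + v - y||² + λ||u||₁ + μ||v||₂² over (u, v); the paper treats non-convex sparsity terms, and the ℓ1 penalty plus all names here are simplifying assumptions:

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def multi_penalty(Phi, y, lam, mu, iters=300):
    """Alternating minimization of ||Phi u + v - y||^2 + lam ||u||_1 + mu ||v||^2
    (u: sparse signal component, v: noise component)."""
    m, n = Phi.shape
    u, v = np.zeros(n), np.zeros(m)
    L = 2.0 * np.linalg.norm(Phi, 2) ** 2   # Lipschitz constant of the gradient
    step = 1.0 / L
    for _ in range(iters):
        # u-step: one ISTA iteration on ||Phi u - (y - v)||^2 + lam ||u||_1
        grad = 2.0 * Phi.T @ (Phi @ u + v - y)
        u = soft_threshold(u - step * grad, step * lam)
        # v-step: exact minimizer, v = (y - Phi u) / (1 + mu)
        v = (y - Phi @ u) / (1.0 + mu)
    return u, v

rng = np.random.default_rng(2)
Phi = rng.standard_normal((60, 200)) / np.sqrt(60)
u_true = np.zeros(200)
u_true[rng.choice(200, 8, replace=False)] = 1.0
y = Phi @ u_true + 0.05 * rng.standard_normal(60)
u, v = multi_penalty(Phi, y, lam=0.05, mu=5.0)
print(np.nonzero(np.round(u, 2))[0])   # recovered support
```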

  3. Electron transport in magnetrons by a posteriori Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Costin, C.; Minea, T. M.; Popa, G.

    2014-02-01

    Electron transport across magnetic barriers is crucial in all magnetized plasmas. It governs not only the plasma parameters in the volume, but also the fluxes of charged particles towards the electrodes and walls. It is particularly important in high-power impulse magnetron sputtering (HiPIMS) reactors, influencing the quality of the deposited thin films, since this type of discharge is characterized by an increased ionization fraction of the sputtered material. Transport coefficients of electron clouds released both from the cathode and from several locations in the discharge volume are calculated for a HiPIMS discharge with pre-ionization, operated in argon at 0.67 Pa with very short pulses (a few µs), using the a posteriori Monte Carlo simulation technique. For this type of discharge, electron transport is characterized by strong temporal and spatial dependence. Both the drift velocity and the diffusion coefficient depend on the release position of the electron cloud. They exhibit minimum values at the centre of the race-track for the secondary electrons released from the cathode. The diffusion coefficient of the same electrons increases by a factor of 2 to 4 when the cathode voltage is doubled, in the first 1.5 µs of the pulse. These parameters are discussed with respect to empirical Bohm diffusion.

  4. A model selection algorithm for a posteriori probability estimation with neural networks.

    PubMed

    Arribas, Juan Ignacio; Cid-Sueiro, Jesús

    2005-07-01

    This paper proposes a novel algorithm to jointly determine the structure and the parameters of an a posteriori probability model based on neural networks (NNs). It makes use of well-known ideas of pruning, splitting, and merging neural components and takes advantage of the probabilistic interpretation of these components. The algorithm, called a posteriori probability model selection (PPMS), is applied to an NN architecture called the generalized softmax perceptron (GSP), whose outputs can be understood as probabilities, although the results can be extended to more general network architectures. Learning rules are derived from the application of the expectation-maximization algorithm to the GSP-PPMS structure. Simulation results show the advantages of the proposed algorithm with respect to other schemes. PMID:16121722

  5. Superconvergence and recovery type a posteriori error estimation for hybrid stress finite element method

    NASA Astrophysics Data System (ADS)

    Bai, YanHong; Wu, YongKe; Xie, XiaoPing

    2016-09-01

    Superconvergence and a posteriori error estimators of recovery type are analyzed for the 4-node hybrid stress quadrilateral finite element method proposed by Pian and Sumihara (Int. J. Numer. Meth. Engrg., 1984, 20: 1685-1695) for linear elasticity problems. Uniform superconvergence of order $O(h^{1+\min\{\alpha,1\}})$ with respect to the Lamé constant $\lambda$ is established for both the recovered gradients of the displacement vector and the stress tensor under a mesh assumption, where $\alpha>0$ is a parameter characterizing the distortion of meshes from parallelograms to quadrilaterals. A posteriori error estimators based on the recovered quantities are shown to be asymptotically exact. Numerical experiments confirm the theoretical results.

  6. A Two-Stage Algorithm for Origin-Destination Matrices Estimation Considering Dynamic Dispersion Parameter for Route Choice.

    PubMed

    Wang, Yong; Ma, Xiaolei; Liu, Yong; Gong, Ke; Henrickson, Kristian C; Xu, Maozeng; Wang, Yinhai

    2016-01-01

    This paper proposes a two-stage algorithm to simultaneously estimate origin-destination (OD) matrix, link choice proportion, and dispersion parameter using partial traffic counts in a congested network. A non-linear optimization model is developed which incorporates a dynamic dispersion parameter, followed by a two-stage algorithm in which Generalized Least Squares (GLS) estimation and a Stochastic User Equilibrium (SUE) assignment model are iteratively applied until the convergence is reached. To evaluate the performance of the algorithm, the proposed approach is implemented in a hypothetical network using input data with high error, and tested under a range of variation coefficients. The root mean squared error (RMSE) of the estimated OD demand and link flows are used to evaluate the model estimation results. The results indicate that the estimated dispersion parameter theta is insensitive to the choice of variation coefficients. The proposed approach is shown to outperform two established OD estimation methods and produce parameter estimates that are close to the ground truth. In addition, the proposed approach is applied to an empirical network in Seattle, WA to validate the robustness and practicality of this methodology. In summary, this study proposes and evaluates an innovative computational approach to accurately estimate OD matrices using link-level traffic flow data, and provides useful insight for optimal parameter selection in modeling travelers' route choice behavior. PMID:26761209
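
    A schematic of the two-stage loop, with fixed route costs and simple logit route choice standing in for the full congestion-dependent SUE assignment; the toy network and all names are assumptions:

```python
import numpy as np
from scipy.optimize import nnls

# toy network: 2 OD pairs, 4 routes, 5 links; route_link[r, l] = 1 if route r uses link l
route_link = np.array([[1, 0, 1, 0, 0],
                       [0, 1, 1, 0, 0],
                       [0, 0, 1, 1, 0],
                       [0, 0, 0, 1, 1]], dtype=float)
route_od = np.array([0, 0, 1, 1])        # routes 0-1 serve OD 0, routes 2-3 OD 1
route_cost = np.array([10.0, 12.0, 9.0, 11.0])

def link_proportions(theta):
    """Logit route choice -> link choice proportion matrix P (links x ODs)."""
    P = np.zeros((route_link.shape[1], 2))
    for od in (0, 1):
        mask = route_od == od
        w = np.exp(-theta * route_cost[mask])
        P[:, od] = (w / w.sum()) @ route_link[mask]
    return P

counts = np.array([30.0, 20.0, 50.0, 80.0, 40.0])   # observed link counts
theta = 0.1
for _ in range(20):
    P = link_proportions(theta)
    q, _ = nnls(P, counts)               # stage 1: least-squares (GLS-style) demand
    # stage 2 (schematic): pick the dispersion theta that best reproduces
    # the observed counts given the current demand estimate
    grid = np.linspace(0.01, 1.0, 50)
    errs = [np.linalg.norm(link_proportions(t) @ q - counts) for t in grid]
    theta = grid[int(np.argmin(errs))]
print("demand:", np.round(q, 1), "theta:", round(theta, 3))
```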

  7. Analysis of the geophysical data using a posteriori algorithms

    NASA Astrophysics Data System (ADS)

    Voskoboynikova, Gyulnara; Khairetdinov, Marat

    2016-04-01

    The problems of monitoring, prediction and prevention of extraordinary natural and technogenic events are among the priority problems of today. These events include earthquakes, volcanic eruptions, lunar-solar tides, landslides, falling celestial bodies, explosions of disposed ammunition stockpiles, and the numerous quarry explosions in open coal mines that provoke technogenic earthquakes. Monitoring is based on a number of successive stages, which include remote registration of the event responses and measurement of the main parameters, such as the arrival times of seismic waves or the original waveforms. At the final stage, the inverse problems associated with determining the geographic location and time of the registered event are solved. Therefore, improving the accuracy of parameter estimation from the original records under high noise is an important problem. As is known, the main measurement errors arise due to the influence of external noise, the difference between the real and model structures of the medium, imprecision of the time definition at the event epicenter, and instrumental errors. Therefore, a posteriori algorithms that are more accurate in comparison with known algorithms are proposed and investigated. They are based on a combination of a discrete optimization method and a fractal approach for joint detection and estimation of arrival times in quasi-periodic waveform sequences in problems of geophysical monitoring, with improved accuracy. The alternative approaches existing today for solving these problems do not provide the given accuracy. The proposed algorithms are considered for the tasks of vibration sounding of the Earth at times of lunar and solar tides, and for the problem of monitoring the borehole seismic source location in trade drilling.

  8. A Posteriori Analysis for Hydrodynamic Simulations Using Adjoint Methodologies

    SciTech Connect

    Woodward, C S; Estep, D; Sandelin, J; Wang, H

    2009-02-26

    This report contains results of analysis done during an FY08 feasibility study investigating the use of adjoint methodologies for a posteriori error estimation for hydrodynamics simulations. We developed an approach to adjoint analysis for these systems through use of modified equations and viscosity solutions. Targeting first the 1D Burgers equation, we include a verification of the adjoint operator for the modified equation for the Lax-Friedrichs scheme, then derivations of an a posteriori error analysis for a finite difference scheme and a discontinuous Galerkin scheme applied to this problem. We include some numerical results showing the use of the error estimate. Lastly, we develop a computable a posteriori error estimate for the MAC scheme applied to stationary Navier-Stokes.
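
    The adjoint (dual-weighted residual) idea behind these analyses can be summarized, for a linear problem and a linear quantity of interest, as follows; this is the generic identity, not the report's specific derivations:

```latex
Au = f,\qquad Q(u) = (u,\psi),\qquad A^{*}\varphi = \psi
\;\;\Longrightarrow\;\;
Q(u) - Q(u_h) \;=\; (u - u_h,\ A^{*}\varphi) \;=\; (A(u - u_h),\ \varphi)
             \;=\; (f - A u_h,\ \varphi),
```

    i.e. the error in the quantity of interest equals the computable residual of the numerical solution weighted by the adjoint solution φ. For nonlinear hydrodynamics the report works with modified equations and viscosity solutions to give the adjoint a meaning.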

  9. Cognitive Models of Risky Choice: Parameter Stability and Predictive Accuracy of Prospect Theory

    ERIC Educational Resources Information Center

    Glockner, Andreas; Pachur, Thorsten

    2012-01-01

    In the behavioral sciences, a popular approach to describe and predict behavior is cognitive modeling with adjustable parameters (i.e., which can be fitted to data). Modeling with adjustable parameters allows, among other things, measuring differences between people. At the same time, parameter estimation also bears the risk of overfitting. Are…

  10. A Posteriori Restoration of Block Transform-Compressed Data

    NASA Technical Reports Server (NTRS)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  11. Suitable parameter choice on quantitative morphology of A549 cell in epithelial–mesenchymal transition

    PubMed Central

    Ren, Zhou-Xin; Yu, Hai-Bin; Li, Jian-Sheng; Shen, Jun-Ling; Du, Wen-Sen

    2015-01-01

    Evaluation of morphological changes in cells is an integral part of studies of the epithelial to mesenchymal transition (EMT); however, only a few papers have reported changes in quantitative parameters, and no article has compared different parameters to identify the better ones. The purpose of this study was to investigate suitable parameters for the quantitative evaluation of EMT morphological changes. The A549 human lung adenocarcinoma cell line was selected for the study. Some cells were stimulated by transforming growth factor-β1 (TGF-β1) to undergo EMT, and other cells served as controls without TGF-β1 stimulation. Subsequently, cells were placed under a phase contrast microscope and three arbitrary fields were captured and saved with a personal computer. Using the tools of Photoshop software, some cells in an image were selected, segmented out and converted into a unique hue, and the remainder of the image was shifted into another unique hue. The cells were measured with 29 morphological parameters by Image Pro Plus software. Each parameter was compared statistically between cells with and without TGF-β1 stimulation, and nine parameters differed significantly between them. The receiver operating characteristic curve (ROC curve) of each parameter was described with SPSS software, and the F-test was used to compare two areas under the curves (AUCs) in Excel. Among them, roundness and radius ratio had the largest AUCs, significantly higher than those of the other parameters. The results provide a new method for the quantitative assessment of cell morphology during EMT and identify two parameters, roundness and radius ratio, as suitable for quantification. PMID:26182364
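
    For concreteness, a sketch of the two winning descriptors under one common convention (definitions vary between software packages; here roundness = P²/(4πA), equal to 1 for a circle and larger for elongated cells, and radius ratio = max/min centroid-to-boundary distance):

```python
import numpy as np

def shape_descriptors(xy):
    """Roundness and radius ratio for a closed polygon given as (N, 2) vertices."""
    x, y = xy[:, 0], xy[:, 1]
    xs, ys = np.roll(x, -1), np.roll(y, -1)
    cross = x * ys - xs * y
    signed_area = 0.5 * cross.sum()               # shoelace formula
    area = abs(signed_area)
    perim = np.hypot(xs - x, ys - y).sum()
    cx = ((x + xs) * cross).sum() / (6.0 * signed_area)
    cy = ((y + ys) * cross).sum() / (6.0 * signed_area)
    radii = np.hypot(x - cx, y - cy)
    return perim**2 / (4.0 * np.pi * area), radii.max() / radii.min()

# round (epithelial-like) outline vs. stretched (mesenchymal-like) outline
t = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
spindle = np.c_[3.0 * np.cos(t), 0.5 * np.sin(t)]
print(shape_descriptors(circle))    # ~ (1.0, 1.0)
print(shape_descriptors(spindle))   # both clearly > 1
```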

  12. Sensitivity of Human Choice to Manipulations of Parameters of Positive and Negative Sound Reinforcement

    ERIC Educational Resources Information Center

    Lambert, Joseph M.

    2013-01-01

    The purpose of this study was to determine whether altering parameters of positive and negative reinforcement in identical ways could influence behavior maintained by each in different ways. Three undergraduate students participated in a series of assessments designed to identify preferred and aversive sounds with similar reinforcing values.…

  13. The Impact of Escape Alternative Position Change in Multiple-Choice Test on the Psychometric Properties of a Test and Its Items Parameters

    ERIC Educational Resources Information Center

    Hamadneh, Iyad Mohammed

    2015-01-01

    This study aimed at investigating the impact of changing the escape alternative position in a multiple-choice test on the psychometric properties of the test and its item parameters (difficulty, discrimination & guessing), and on the estimation of examinee ability. To achieve the study objectives, a 4-alternative multiple choice type achievement test…

  14. Comparing species decisions in a dichotomous choice task: adjusting task parameters improves performance in monkeys.

    PubMed

    Prétôt, Laurent; Bshary, Redouan; Brosnan, Sarah F

    2016-07-01

    In comparative psychology, both similarities and differences among species are studied to better understand the evolution of their behavior. To do so, we first test species in tasks using similar procedures, but if differences are found, it is important to determine their underlying cause(s) (e.g., are they due to ecology, cognitive ability, an artifact of the study, and/or some other factor?). In our previous work, primates performed unexpectedly poorly on an apparently simple two-choice discrimination task based on the natural behavior of cleaner fish, while the fish did quite well. In this task, if the subjects first chose one of the options (ephemeral) they received both food items, but if they chose the other (permanent) option first, the ephemeral option disappeared. Here, we test several proposed explanations for primates' relatively poorer performance. In Study 1, we used a computerized paradigm that differed from the previous test by removing interaction with human experimenters, which may be distracting, and providing a more standardized testing environment. In Study 2, we adapted the computerized paradigm from Study 1 to be more relevant to primate ecology. Monkeys' overall performance in these adapted tasks matched the performance of the fish in the original study, showing that with the appropriate modifications they can solve the task. We discuss these results in light of comparative research, which requires balancing procedural similarity with considerations of how the details of the task or the context may influence how different species perceive and solve tasks differently. PMID:27086302

  15. Parameter Choice and Constraint in Hydrologic Models for Evaluating Land Use Change

    NASA Astrophysics Data System (ADS)

    Jackson, C. R.

    2011-12-01

    Hydrologic models are used to answer questions, from simple, "what is the expected 100-year peak flow for a basin?", to complex, "how will land use change alter flow pathways, flow time series, and water chemistry?" Appropriate model structure and complexity depend on the questions being addressed. Numerous studies of simple transfer models for converting climate signals into streamflows suggest that only three or four parameters are needed. The conceptual corollary to such models is a single hillslope bucket with storage, evapotranspiration, fast flow, and slow flow. While having the benefit of low uncertainty, such models are ill-suited to addressing land use questions. Land use questions require models that can simulate effects of changes in vegetation, alterations of soil characteristics, and resulting changes in flow pathways. For example, minimum goals for a hydrologic model evaluating bioenergy feedstock production might include: 1) calculate Horton overland flow based on surface conductivities and saturated surface flow based on relative moisture content in the topsoils, 2) allow reinfiltration of Horton overland flow created by bare soils, compacted soils, and pavement (roads, logging roads, skid trails, landings), 3) account for root zone depth and LAI in transpiration calculations, 4) allow mixing of hillslope flows in the riparian aquifer, 5) allow separate simulation of the riparian soils and vegetation and upslope soils and vegetation, 6) incorporate important aspects of topography and stratigraphy, and 7) estimate residence times in different flow paths. How many parameters are needed for such a model, and what information besides streamflow can be collected to constrain the parameters? Additional information that can be used for evaluating and testing watershed models includes in-situ conductivity measurements, soil porosity, soil moisture dynamics, shallow perched groundwater behavior, interflow occurrence, groundwater behavior, and regional ET estimates…

  16. Parameter choices for a muon recirculating linear accelerator from 5 to 63 GeV

    SciTech Connect

    Berg, J. S.

    2014-06-19

    A recirculating linear accelerator (RLA) has been proposed to accelerate muons from 5 to 63 GeV for a muon collider. It should be usable both for a Higgs factory and as a stage for a higher energy collider. First, the constraints due to beam loading are computed. Next, an expression for the longitudinal emittance growth, to lowest order in the longitudinal emittance, is worked out. A simplified model describing the arcs is then found, together with an approximate expression for the dependence of the time of flight on energy in those arcs. Finally, these results are used to estimate the parameters required for the RLA arcs and the linac phase.

  17. Dynamics, analytical solutions and choice of parameters for towed space debris with flexible appendages

    NASA Astrophysics Data System (ADS)

    Aslanov, Vladimir S.; Yudintsev, Vadim V.

    2015-01-01

    Active debris removal is one of the promising techniques for decreasing the population of large, non-functional spacecraft (space debris) in orbit. The properties of space debris should be taken into account when planning an active debris removal mission. In this paper the thrusting phase of tethered deorbiting of large space debris with flexible appendages is considered. The goal of the work is to investigate the mutual influence of the tether vibrations and the vibrations of the flexible appendages during the thrusting phase. A mathematical model of the space tug and the towed space debris with flexible appendages is developed. Parameters of the system are determined under the assumption that the system moves in a straight line, avoiding high-amplitude vibrations of the flexible appendages. The expression for the discriminant indicates that the vibrations of the tether and the flexible appendages influence each other. A critical tether stiffness exists for a given space tug mass and should be avoided.

  18. Guidelines in the Choice of Parameters for Hybrid Laser Arc Welding with Fiber Lasers

    NASA Astrophysics Data System (ADS)

    Eriksson, I.; Powell, J.; Kaplan, A.

    Laser arc hybrid welding has been a promising technology for three decades, and laser welding in combination with gas metal arc welding (GMAW) has proven to be an extremely promising technique. On the other hand, the process is often considered complicated and difficult to set up correctly. An important factor in setting up the hybrid welding process is an understanding of the GMAW process; it is especially important to understand how the wire feed rate and the arc voltage (the two main parameters) affect the process. In this paper the authors show that laser hybrid welding with a 1 μm laser is similar to ordinary GMAW, and several GMAW guidelines are therefore inherited by the laser hybrid process.

  19. Implications of the subjectivity in hydrologic model choice and parameter identification on the portrayal of climate change impact

    NASA Astrophysics Data System (ADS)

    Mendoza, Pablo; Clark, Martyn; Rajagopalan, Balaji; Mizukami, Naoki; Gutmann, Ethan; Newman, Andy; Barlage, Michael; Brekke, Levi; Arnold, Jeffrey

    2014-05-01

    Climate change studies involve several methodological choices that affect the hydrological sensitivities obtained, including emission scenarios, climate models, downscaling techniques and hydrologic modeling approaches. Among these, hydrologic model structure selection (i.e. the set of equations that describe catchment processes) and parameter identification are particularly relevant and usually have a strong subjective component. This subjectivity is not only limited to engineering applications, but also extends to many of our research studies, resulting in problems such as missing processes in our models, inappropriate parameterizations and compensatory effects of model parameters (i.e. getting the right answers for the wrong reasons). The goal of this research is to assess the impact of our modeling decisions on projected changes in water balance and catchment behavior for future climate scenarios. Additionally, we aim to better understand the relative importance of hydrologic model structures and parameters on the portrayal of climate change impact. Therefore, we compare hydrologic sensitivities coming from four different models structures (PRMS, VIC, Noah and Noah-MP) with those coming from parameter sets identified using different decisions related to model calibration (objective function, multiple local optima and calibration forcing dataset). We found that both model structure selection and parameter estimation strategy (objective function and forcing dataset) affect the direction and magnitude of climate change signal. Furthermore, the relative effect of subjective decisions on projected variations of catchment behavior depends on the hydrologic signature measure analyzed. Finally, parameter sets with similar values of the objective function may not affect current and future changes in water balance, but may lead to very different sensitivities in hydrologic behavior.

  1. A unified approach for a posteriori high-order curved mesh generation using solid mechanics

    NASA Astrophysics Data System (ADS)

    Poya, Roman; Sevilla, Ruben; Gil, Antonio J.

    2016-06-01

    The paper presents a unified approach for the a posteriori generation of arbitrary high-order curvilinear meshes via a solid mechanics analogy. The approach encompasses a variety of methodologies, ranging from the popular incremental linear elastic approach to very sophisticated non-linear elasticity. In addition, an intermediate consistent incrementally linearised approach is also presented and applied for the first time in this context. Utilising a consistent derivation from energy principles, a theoretical comparison of the various approaches is presented which enables a detailed discussion regarding the material characterisation (calibration) employed for the different solid mechanics formulations. Five independent quality measures are proposed and their relations with existing quality indicators, used in the context of a posteriori mesh generation, are discussed. Finally, a comprehensive range of numerical examples, both in two and three dimensions, including challenging geometries of interest to the solids, fluids and electromagnetics communities, are shown in order to illustrate and thoroughly compare the performance of the different methodologies. This comparison considers the influence of material parameters and number of load increments on the quality of the generated high-order mesh, overall computational cost and, crucially, the approximation properties of the resulting mesh when considering an isoparametric finite element formulation.

  2. Effects of using a posteriori methods for the conservation of integral invariants. [for weather forecasting]

    NASA Technical Reports Server (NTRS)

    Takacs, Lawrence L.

    1988-01-01

    The nature and effect of using a posteriori adjustments to nonconservative finite-difference schemes to enforce integral invariants of the corresponding analytic system are examined. The method of a posteriori integral constraint restoration is analyzed for the case of linear advection, and the harmonic response associated with the a posteriori adjustments is examined in detail. The conservative properties of the shallow water system are reviewed, and the constraint restoration algorithm applied to the shallow water equations is described. A comparison is made between forecasts obtained using implicit and a posteriori methods for the conservation of mass, energy, and potential enstrophy in the complete nonlinear shallow-water system.
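
    As a concrete illustration of an a posteriori restoration step (a generic variant, not necessarily the exact algorithm analyzed in this work), one can project the field back onto the constraint after each timestep, e.g. with a uniform additive correction that restores the mass integral:

```python
import numpy as np

def restore_invariant(q, dx, target_mass):
    """Uniform additive correction returning the discrete mass integral
    sum(q) * dx to its initial value after a nonconservative step."""
    return q + (target_mass - q.sum() * dx) / (q.size * dx)

# upwind advection of a Gaussian pulse; a small artificial damping mimics
# the conservation error of a nonconservative scheme
nx, c, dt = 200, 1.0, 0.002
dx = 1.0 / nx
x = np.arange(nx) * dx
q = np.exp(-200.0 * (x - 0.3) ** 2)
m0 = q.sum() * dx
for _ in range(100):
    q = q - c * dt / dx * (q - np.roll(q, 1))   # periodic upwind step
    q *= 0.999                                  # mimic a conservation error
    q = restore_invariant(q, dx, m0)
print(abs(q.sum() * dx - m0))                   # ~ machine precision
```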

  3. Application of a posteriori granddaughter and modified granddaughter designs to determine Holstein haplotype effects

    Technology Transfer Automated Retrieval System (TEKTRAN)

    A posteriori and modified granddaughter designs were applied to determine haplotype effects for Holstein bulls and cows with BovineSNP50 genotypes. The a posteriori granddaughter design was applied to 52 sire families, each with >100 genotyped sons with genetic evaluations based on progeny tests. Fo...

  6. A posteriori operation detection in evolving software models

    PubMed Central

    Langer, Philip; Wimmer, Manuel; Brosch, Petra; Herrmannsdörfer, Markus; Seidl, Martina; Wieland, Konrad; Kappel, Gerti

    2013-01-01

    Like every software artifact, software models are subject to continuous evolution. The operations applied between two successive versions of a model are crucial for understanding its evolution. Generic approaches for detecting operations a posteriori identify atomic operations but neglect composite operations, such as refactorings, which leads to cluttered difference reports. To tackle this limitation, we present an orthogonal extension of existing atomic operation detection approaches that also detects composite operations. Our approach searches for occurrences of composite operations within a set of detected atomic operations in a post-processing manner. One major benefit is that specifications available for executing composite operations can be reused for detecting applications of them. We evaluate the accuracy of the approach in a real-world case study and investigate the scalability of our implementation in an experiment. PMID:23471366

  7. A posteriori pointwise error estimates for the boundary element method

    SciTech Connect

    Paulino, G.H.; Gray, L.J.; Zarikian, V.

    1995-01-01

    This report presents a new approach for a posteriori pointwise error estimation in the boundary element method. The estimator relies upon the evaluation of hypersingular integral equations, and is therefore intrinsic to the boundary integral equation approach. This property allows some theoretical justification by mathematically correlating the exact and estimated errors. A methodology is developed for approximating the error on the boundary as well as in the interior of the domain. In the interior, error estimates for both the function and its derivatives (e.g., potential and interior gradients for potential problems, displacements and stresses for elasticity problems) are presented. Extensive computational experiments have been performed for the two-dimensional Laplace equation on interior domains, employing Dirichlet and mixed boundary conditions. The results indicate that the error estimates successfully track the form of the exact error curve. Moreover, a reasonable estimate of the magnitude of the actual error is also obtained.

  8. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    SciTech Connect

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical quantities of interest (QoIs) that depend on a large number of parameters. Activities included: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.

  9. A posteriori model validation for the temporal order of directed functional connectivity maps

    PubMed Central

    Beltz, Adriene M.; Molenaar, Peter C. M.

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data). PMID:26379489
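
    A minimal sketch of the white-noise check on one-step-ahead prediction errors, here with a Ljung-Box test from statsmodels; the AR(2) toy signal and the deliberately first-order-only model are illustrative stand-ins, not the study's unified structural equation model pipeline.

      import numpy as np
      from statsmodels.stats.diagnostic import acorr_ljungbox

      rng = np.random.default_rng(0)

      # Toy signal with a genuine second-order lag; fitting only the
      # first-order term leaves sequential dependence in the residuals.
      e = rng.standard_normal(500)
      x = np.zeros(500)
      for t in range(2, 500):
          x[t] = 0.5 * x[t - 1] + 0.3 * x[t - 2] + e[t]
      resid_lag1_only = x[2:] - 0.5 * x[1:-1]  # errors of the lag-1-only model

      lb = acorr_ljungbox(resid_lag1_only, lags=[10], return_df=True)
      print(lb)  # small p-value: residuals are not white noise, revise the map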

  11. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment

    PubMed Central

    Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-01-01

    Purpose This study employed a posteriori registration and subtraction of radiographic images to quantify apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. Materials and Methods A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported into the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using the ImageJ software. The mean EARR was expressed in pixels and as relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. Results The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. Conclusion A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment. PMID:27051635

  12. On the Least-Squares Fitting of Correlated Data: a Priori vs a Posteriori Weighting

    NASA Astrophysics Data System (ADS)

    Tellinghuisen, Joel

    1996-10-01

    One of the methods in common use for analyzing large data sets is a two-step procedure, in which subsets of the full data are first least-squares fitted to a preliminary set of parameters, and the latter are subsequently merged to yield the final parameters. The second step of this procedure is properly a correlated least-squares fit and requires the variance-covariance matrices from the first step to construct the weight matrix for the merge. There is, however, an ambiguity concerning the manner in which the first-step variance-covariance matrices are assessed, which leads to different statistical properties for the quantities determined in the merge. The issue is one of a priori vs a posteriori assessment of weights, which is an application of what was originally called internal vs external consistency by Birge [Phys. Rev. 40, 207-227 (1932)] and Deming ("Statistical Adjustment of Data." Dover, New York, 1964). In the present work the simplest case of a merge fit, that of an average as obtained from a global fit vs a two-step fit of partitioned data, is used to illustrate that only in the case of a priori weighting do the results have the usually expected and desired statistical properties: normal distributions for residuals, t distributions for parameters assessed a posteriori, and χ² distributions for variances.
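
    The distinction can be made concrete with the standard merge formulas (a textbook summary consistent with, but not quoted from, the abstract):

    $$ \hat{\theta} = \Big(\sum_i \mathbf{C}_i^{-1}\Big)^{-1} \sum_i \mathbf{C}_i^{-1}\,\hat{\theta}_i, \qquad \mathbf{C}_{\hat{\theta}} = \Big(\sum_i \mathbf{C}_i^{-1}\Big)^{-1}, $$

    where a priori weighting takes each subset covariance $\mathbf{C}_i$ from the known error structure of the data, while a posteriori weighting first rescales it by the subset's reduced chi-square, $\mathbf{C}_i \to (\chi_i^2/\nu_i)\,\mathbf{C}_i$, which is what alters the statistical properties of the merged parameters.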

  13. A posteriori subcell limiting of the discontinuous Galerkin finite element method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Zanotti, Olindo; Loubère, Raphaël; Diot, Steven

    2014-12-01

    The purpose of this work is to propose a novel a posteriori finite volume subcell limiter technique for the Discontinuous Galerkin finite element method for nonlinear systems of hyperbolic conservation laws in multiple space dimensions that works well for arbitrarily high orders of accuracy in space and time and that does not destroy the natural subcell resolution properties of the DG method. High order time discretization is achieved via a one-step ADER approach that uses a local space-time discontinuous Galerkin predictor method to evolve the data locally in time within each cell. Our new limiting strategy is based on the so-called MOOD paradigm, which a posteriori verifies the validity of a discrete candidate solution against physical and numerical detection criteria after each time step. Here, we employ a relaxed discrete maximum principle in the sense of piecewise polynomials and the positivity of the numerical solution as detection criteria. Within the DG scheme on the main grid, the discrete solution is represented by piecewise polynomials of degree N. For those troubled cells that need limiting, our new limiter approach recomputes the discrete solution by scattering the DG polynomials at the previous time step onto a set of Ns=2N+1 finite volume subcells per space dimension. A robust but accurate ADER-WENO finite volume scheme then updates the subcell averages of the conservative variables within the detected troubled cells. The recomputed subcell averages are subsequently gathered back into high order cell-centered DG polynomials on the main grid via a subgrid reconstruction operator. The choice of Ns=2N+1 subcells is optimal since it allows one to match the maximum admissible time step of the finite volume scheme on the subgrid with the maximum admissible time step of the DG scheme on the main grid, while at the same time minimizing the local truncation error of the subcell finite volume scheme. It furthermore provides an excellent subcell resolution of
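
    A minimal one-dimensional sketch of the detection step (illustrative thresholds; the paper applies a relaxed discrete maximum principle and positivity to the DG candidate solution in multiple space dimensions):

      import numpy as np

      def detect_troubled_cells(u_candidate, u_previous, eps=1e-12):
          # Flag cells whose candidate cell average violates a relaxed discrete
          # maximum principle (bounds taken from the previous solution over the
          # cell and its neighbours) or loses positivity (e.g. for a density).
          n = len(u_candidate)
          troubled = np.zeros(n, dtype=bool)
          for i in range(n):
              nbrs = u_previous[max(i - 1, 0):min(i + 2, n)]
              lo, hi = nbrs.min(), nbrs.max()
              relax = 1e-3 * max(1e-4, hi - lo)   # relaxation of the strict DMP
              dmp_ok = (lo - relax) <= u_candidate[i] <= (hi + relax)
              pos_ok = u_candidate[i] > eps
              troubled[i] = not (dmp_ok and pos_ok)
          return troubled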

  14. Rigorous A-Posteriori Assessment of Accuracy in EMG Decomposition

    PubMed Central

    McGill, Kevin C.; Marateb, Hamid R.

    2010-01-01

    If EMG decomposition is to be a useful tool for scientific investigation, it is essential to know that the results are accurate. Because of background noise, waveform variability, motor-unit action potential (MUAP) indistinguishability, and perplexing superpositions, accuracy assessment is not straightforward. This paper presents a rigorous statistical method for assessing decomposition accuracy based only on evidence from the signal itself. The method uses statistical decision theory in a Bayesian framework to integrate all the shape- and firing-time-related information in the signal to compute an objective a-posteriori measure of confidence in the accuracy of each discharge in the decomposition. The assessment is based on the estimated statistical properties of the MUAPs and noise and takes into account the relative likelihood of every other possible decomposition. The method was tested on 3 pairs of real EMG signals containing 4–7 active MUAP trains per signal that had been decomposed by a human expert. It rated 97% of the identified MUAP discharges as accurate to within ±0.5 ms with a confidence level of 99%, and detected 6 decomposition errors. Cross-checking between signal pairs verified all but 2 of these assertions. These results demonstrate that the approach is reliable and practical for real EMG signals. PMID:20639182

  15. A posteriori uncertainty quantification of PIV-based pressure data

    NASA Astrophysics Data System (ADS)

    Azijli, Iliass; Sciacchitano, Andrea; Ragni, Daniele; Palha, Artur; Dwight, Richard P.

    2016-05-01

    A methodology for a posteriori uncertainty quantification of pressure data retrieved from particle image velocimetry (PIV) is proposed. It relies upon the Bayesian framework, where the posterior distribution (probability distribution of the true velocity, given the PIV measurements) is obtained from the prior distribution (prior knowledge of properties of the velocity field, e.g., that it is divergence-free) and the statistical model of PIV measurement uncertainty. Once the posterior covariance matrix of the velocity is known, it is propagated through the discretized Poisson equation for pressure. Numerical assessment of the proposed method on a steady Lamb-Oseen vortex shows excellent agreement with Monte Carlo simulations, while linear uncertainty propagation underestimates the uncertainty in the pressure by up to 30%. The method is finally applied to an experimental test case of a turbulent boundary layer in air, obtained using time-resolved tomographic PIV. Simultaneously with the PIV measurements, microphone measurements were carried out at the wall. The pressure reconstructed from the tomographic PIV data is compared to the microphone measurements. Since the uncertainty of the microphone measurements is significantly smaller than that of the PIV-based pressure, this comparison yields an estimate of the true error of the reconstructed pressure. The comparison between the true error and the estimated uncertainty demonstrates the accuracy of the uncertainty estimates on the pressure. In addition, enforcing the divergence-free constraint is found to result in a significantly more accurate reconstructed pressure field. The estimated uncertainty confirms this result.
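
    A minimal one-dimensional sketch of propagating a velocity covariance through a discrete Poisson solve, with a Monte Carlo cross-check; the operators L and G below are stand-ins, not the paper's discretization.

      import numpy as np

      n = 50
      L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1))          # 1-D Dirichlet Laplacian
      G = np.eye(n)                          # stand-in velocity-to-source operator
      A = np.linalg.solve(L, G)              # linear pressure map: p = A u

      cov_u = 0.05**2 * np.eye(n)            # posterior velocity covariance
      cov_p = A @ cov_u @ A.T                # propagated pressure covariance
      print(np.sqrt(np.diag(cov_p))[:5])     # pointwise pressure uncertainty

      rng = np.random.default_rng(1)         # Monte Carlo cross-check
      u_samples = rng.multivariate_normal(np.zeros(n), cov_u, size=2000)
      print((u_samples @ A.T).std(axis=0)[:5])   # should match the line above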

  16. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    SciTech Connect

    Ju, Lili; Tian, Li; Wang, Desheng

    2009-01-01

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R³, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  17. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    NASA Technical Reports Server (NTRS)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  18. Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.

    PubMed

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-11-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of the preconditioned alternating projection algorithm. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. PMID:23271835

  20. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.

  1. A new approach to a maximum a posteriori-based kernel classification method.

    PubMed

    Nopriadi; Yamashita, Yukihiko

    2012-09-01

    This paper presents a new approach to maximum a posteriori (MAP)-based classification, specifically, MAP-based kernel classification trained by linear programming (MAPLP). Unlike traditional MAP-based classifiers, MAPLP does not directly estimate a posterior probability for classification. Instead, it introduces a kernelized function to an objective function that behaves similarly to a MAP-based classifier. To evaluate the performance of MAPLP, a binary classification experiment was performed with 13 datasets. The results of this experiment are compared with those from conventional MAP-based kernel classifiers and also from other state-of-the-art classification methods. The results show that MAPLP performs promisingly against the other classification methods. It is argued that the proposed approach makes a significant contribution to MAP-based classification research: the approach widens the freedom to choose an objective function, is not constrained to the strict-sense Bayesian setting, and can be solved by linear programming. A substantial advantage of our proposed approach is that the objective function is undemanding, having only a single parameter. This simplicity thus leaves room for further development. PMID:22721808

  2. Maximum a posteriori classification of multifrequency, multilook, synthetic aperture radar intensity data

    NASA Technical Reports Server (NTRS)

    Rignot, E.; Chellappa, R.

    1993-01-01

    We present a maximum a posteriori (MAP) classifier for classifying multifrequency, multilook, single polarization SAR intensity data into regions or ensembles of pixels of homogeneous and similar radar backscatter characteristics. A model for the prior joint distribution of the multifrequency SAR intensity data is combined with a Markov random field for representing the interactions between region labels to obtain an expression for the posterior distribution of the region labels given the multifrequency SAR observations. The maximization of the posterior distribution yields Bayes's optimum region labeling or classification of the SAR data or its MAP estimate. The performance of the MAP classifier is evaluated by using computer-simulated multilook SAR intensity data as a function of the parameters in the classification process. Multilook SAR intensity data are shown to yield higher classification accuracies than one-look SAR complex amplitude data. The MAP classifier is extended to the case in which the radar backscatter from the remotely sensed surface varies within the SAR image because of incidence angle effects. The results obtained illustrate the practicality of the method for combining SAR intensity observations acquired at two different frequencies and for improving classification accuracy of SAR data.
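
    Schematically, the classifier maximizes a posterior of the form (standard MAP-MRF notation, not quoted from the paper)

    $$ \hat{\ell} \;=\; \arg\max_{\ell} \Big[ \sum_{s} \log p\big(I_s \mid \ell_s\big) \;+\; \beta \sum_{\langle s,t\rangle} \delta\big(\ell_s,\ell_t\big) \Big], $$

    where the first sum is the multifrequency, multilook intensity likelihood at each pixel $s$ and the second is a Potts-type Markov random field prior rewarding identical labels on neighbouring pixel pairs $\langle s,t\rangle$, with interaction strength $\beta$.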

  3. Estimation of parameters in linear structural relationships: Sensitivity to the choice of the ratio of error variances

    NASA Technical Reports Server (NTRS)

    Lakshminarayanan, M. Y.; Gunst, R. F.

    1984-01-01

    Maximum likelihood estimation of parameters in linear structural relationships under normality assumptions requires knowledge of one or more of the model parameters if no replication is available. The most common assumption added to the model definition is that the ratio of the error variances of the response and predictor variates is known. The use of asymptotic formulae for variances and mean squared errors as a function of sample size and the assumed value for the error variance ratio is investigated.
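
    For reference, the classical maximum likelihood slope under an assumed error variance ratio is (a standard result, stated here for context)

    $$ \hat{\beta} \;=\; \frac{S_{yy} - \lambda S_{xx} + \sqrt{(S_{yy} - \lambda S_{xx})^{2} + 4\lambda S_{xy}^{2}}}{2 S_{xy}}, \qquad \lambda = \sigma_{\varepsilon}^{2}/\sigma_{\delta}^{2}, $$

    where $S_{xx}$, $S_{yy}$, and $S_{xy}$ are the centred sums of squares and cross-products of the observed variates; the sensitivity question is how $\hat{\beta}$ and its asymptotic variance degrade when the assumed value of $\lambda$ departs from the truth.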

  5. Ontology based log content extraction engine for a posteriori security control.

    PubMed

    Azkia, Hanieh; Cuppens-Boulahia, Nora; Cuppens, Frédéric; Coatrieux, Gouenou

    2012-01-01

    In a posteriori access control, users are accountable for the actions they perform and must provide evidence, when required by legal authorities for instance, to prove that these actions were legitimate. Generally, log files contain the data needed to achieve this goal. This logged data can be recorded in several formats; we consider here IHE-ATNA (Integrating the Healthcare Enterprise-Audit Trail and Node Authentication) as the log format. The difficulty lies in extracting useful information regardless of the log format. A posteriori access control frameworks often include a log filtering engine that provides this extraction function. In this paper we define and enforce this function by building an IHE-ATNA-based ontology model, which we query using SPARQL, and show how the a posteriori security controls are made effective and easier based on this function. PMID:22874291
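
    A minimal sketch of such an extraction function in Python with rdflib; the log: vocabulary below is hypothetical and only illustrates the ontology-plus-SPARQL pattern, it is not the actual IHE-ATNA ontology of the paper.

      import rdflib
      from rdflib import Literal, Namespace, RDF, URIRef

      LOG = Namespace("http://example.org/atna#")  # hypothetical audit vocabulary
      g = rdflib.Graph()
      ev = URIRef("http://example.org/atna#event1")
      g.add((ev, RDF.type, LOG.AuditEvent))
      g.add((ev, LOG.actor, Literal("dr.smith")))
      g.add((ev, LOG.action, Literal("read")))
      g.add((ev, LOG.object, Literal("patient/42/record")))

      q = """
      PREFIX log: <http://example.org/atna#>
      SELECT ?actor ?action ?object WHERE {
        ?e a log:AuditEvent ; log:actor ?actor ;
           log:action ?action ; log:object ?object .
      }
      """
      for actor, action, obj in g.query(q):
          print(actor, action, obj)  # evidence for the a posteriori access check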

  6. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

  7. A posteriori information effects on culpability judgments from a cross-cultural perspective.

    PubMed

    Wan, Wendy W N; Chiu, Chi-Yue; Luk, Chung-Leung

    2005-10-01

    A posteriori information about the moral attributes of the victim of a crime can affect an observer's judgment on the culpability of the actor of the crime so that negative moral attributes of the victim will lead to a lower judgment of culpability. The authors found this effect of a posteriori information among 118 American and 123 Chinese participants, but the underlying mechanisms were different between the two cultural groups. The Americans considered the psychological state of the actor during the crime, whereas the Chinese considered the morality of the actor during the crime. The authors discussed these results in light of the respondents' implicit theories of morality. PMID:16201675

  8. Application of the a posteriori granddaughter design to the Holstein genome

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An a posteriori granddaughter design was applied to determine haplotype effects for the Holstein genome. A total of 52 grandsire families, each with >=100 genotyped sons with genetic evaluations based on progeny tests, were analyzed for 33 traits (milk, fat, and protein yields; fat and protein perce...

  9. Weighted Maximum-a-Posteriori Estimation in Tests Composed of Dichotomous and Polytomous Items

    ERIC Educational Resources Information Center

    Sun, Shan-Shan; Tao, Jian; Chang, Hua-Hua; Shi, Ning-Zhong

    2012-01-01

    For mixed-type tests composed of dichotomous and polytomous items, polytomous items often yield more information than dichotomous items. To reflect the difference between the two types of items and to improve the precision of ability estimation, an adaptive weighted maximum-a-posteriori (WMAP) estimation is proposed. To evaluate the performance of…

  10. FORTRAN IV Program for Analysis of Covariance with A Priori or A Posteriori Mean Comparisons

    ERIC Educational Resources Information Center

    Fordyce, Michael W.

    1977-01-01

    A flexible Fortran program for computing a complete analysis of covariance is described. Requiring minimal core space, the program provides all group and overall summary statistics for the analysis, a test of homogeneity of regression, and all posttest mean comparisons for a priori or a posteriori testing. (Author/JKS)

  11. Application of a posteriori error estimates for the steady Stokes-Brinkman equation in 2D

    NASA Astrophysics Data System (ADS)

    Hasal, Martin; Burda, Pavel

    2016-06-01

    The paper deals with the Stokes-Brinkman equation. We investigate a posteriori error estimates for the Stokes-Brinkman equation on two-dimensional polygonal domains. Special attention is paid to the value of the hydraulic conductivity coefficients. We present numerical results for an incompressible flow problem in a domain with corners.

  12. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    ERIC Educational Resources Information Center

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  13. Nonmarket valuation of water quality in a rural transition economy in Turkey applying an a posteriori bid design

    NASA Astrophysics Data System (ADS)

    Bederli Tümay, Aylin; Brouwer, Roy

    2007-05-01

    In this paper, we investigate the economic benefits associated with public investments in wastewater treatment in one of the special protected areas along Turkey's touristic Mediterranean coast, the Köyceğiz-Dalyan watershed. The benefits, measured in terms of boatable, fishable, swimmable and drinkable water quality, are estimated using a public survey format following the contingent valuation (CV) method. The study presented here is the first of its kind in Turkey. The study's main objective is to assess public perception, understanding, and valuation of improved wastewater treatment facilities in the two largest population centers in the watershed, which face the same water pollution problems as a result of a lack of appropriate wastewater treatment. We test the validity and reliability of the application of the CV methodology to this specific environmental problem in a rural transition economy and evaluate the transferability of the results within the watershed. In order to facilitate willingness-to-pay (WTP) value elicitation, we apply a novel dichotomous choice procedure where bid design takes place a posteriori instead of a priori. The statistical efficiency of different bid vectors is evaluated in terms of the estimated welfare measures' mean square errors using Monte Carlo simulation. The robustness of the bid function specification is analyzed through the average WTP and standard deviation estimated using parametric and nonparametric methods.

  14. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra.

    PubMed

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-28

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI) for solving the vibrational Schrödinger equation was developed. The main advantage of this approach is to efficiently reduce the dimension of the active space generated in the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. This adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased by using an a posteriori error estimator (the residual) to select the most relevant direction in which to increase the space. Two examples have been selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface and for an active space reduced by about 90%. PMID:27250295
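
    A generic sketch of the residual-driven expansion loop (plain dense linear algebra for illustration; the actual A-VCI exploits the sum-of-products structure of the vibrational Hamiltonian and never builds the full matrix):

      import numpy as np

      def adaptive_eigensolve(H, start, max_size, batch=5, tol=1e-8):
          # Diagonalize H in the current active space, form the full-space
          # residual of the lowest state, and add the basis functions that
          # carry the largest residual components until the residual is small.
          active = sorted(start)
          while True:
              w, v = np.linalg.eigh(H[np.ix_(active, active)])
              x = np.zeros(H.shape[0])
              x[active] = v[:, 0]                 # lowest state in active space
              r = H @ x - w[0] * x                # a posteriori error estimator
              r[active] = 0.0                     # keep only out-of-space couplings
              if np.linalg.norm(r) < tol or len(active) >= max_size:
                  return w[0], active
              worst = np.argsort(np.abs(r))[::-1][:batch]
              active = sorted(set(active) | set(worst.tolist()))

      rng = np.random.default_rng(0)              # toy Hamiltonian
      A = 0.01 * rng.standard_normal((200, 200))
      H = np.diag(np.linspace(0.0, 10.0, 200)) + (A + A.T) / 2
      e0, space = adaptive_eigensolve(H, start=[0, 1, 2], max_size=60)
      print(e0, len(space))                       # converged energy, reduced space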

  15. The Impact of Microphysical Schemes and Parameter Choices on MM5 Simulations of Warm-Season High Latitude Cloud and Precipitation Systems

    NASA Astrophysics Data System (ADS)

    Tilley, J. S.; Kramm, G.

    2002-12-01

    Recently, an increasing variety of schemes to represent cloud microphysical processes have been incorporated into mesoscale models. These schemes, which are usually "bulk" approaches to the microphysics in order to reduce computational cost, range from the rather simple to relatively complex in terms of the processes represented and their formulation. The schemes are based upon various theoretical, laboratory, field measurement, and cloud modeling studies that have appeared in the literature over the past forty years, studies that have focused almost exclusively on mid-latitude and tropical areas. While significant effort has been exercised to validate such microphysical schemes in mid-latitude and tropical environments, relatively little systematic work has been done to consider how such schemes would behave in high latitudes. This is particularly the case for sophisticated regional models such as the Penn State/NCAR MM5, where the microphysical scheme used must interact with other physical schemes in complex and nonlinear ways. This issue is an important one to consider from the perspectives of aviation weather, quantitative precipitation forecasts and radiative transfer, the latter having importance to regional and global climate modeling applications. In this paper we examine the impacts of different cloud microphysical treatments on MM5 simulations of warm season high latitude cloud and precipitation systems. We examine the sensitivity of simulated mesoscale cloud, precipitation and dynamic fields to (1) the choice of the various microphysical schemes routinely available with the MM5 system, and (2) modifications to key parameters (baseline ice nuclei concentrations, temperature thresholds and supersaturation thresholds) within individual parameterization schemes. Our experiments focus on a period during mid-June 1998 during the Surface Heat Budget of the Arctic (SHEBA) Experiment. Through the period there is considerable cloud property data available over the

  16. Maximum a posteriori estimation of crystallographic phases in X-ray diffraction tomography

    PubMed Central

    Gürsoy, Doĝa; Biçer, Tekin; Almer, Jonathan D.; Kettimuthu, Raj; Stock, Stuart R.; De Carlo, Francesco

    2015-01-01

    A maximum a posteriori approach is proposed for X-ray diffraction tomography for reconstructing three-dimensional spatial distribution of crystallographic phases and orientations of polycrystalline materials. The approach maximizes the a posteriori density which includes a Poisson log-likelihood and an a priori term that reinforces expected solution properties such as smoothness or local continuity. The reconstruction method is validated with experimental data acquired from a section of the spinous process of a porcine vertebra collected at the 1-ID-C beamline of the Advanced Photon Source, at Argonne National Laboratory. The reconstruction results show significant improvement in the reduction of aliasing and streaking artefacts, and improved robustness to noise and undersampling compared to conventional analytical inversion approaches. The approach has the potential to reduce data acquisition times, and significantly improve beamtime efficiency. PMID:25939627
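
    Schematically, the reconstruction solves (standard MAP notation under a Poisson noise model, consistent with the abstract)

    $$ \hat{\mathbf{x}} \;=\; \arg\max_{\mathbf{x}\ge 0} \; \sum_{i} \Big( y_i \log\,[\mathbf{A}\mathbf{x}]_i - [\mathbf{A}\mathbf{x}]_i \Big) \;-\; \beta\, R(\mathbf{x}), $$

    where $y_i$ are the measured diffraction counts, $\mathbf{A}$ is the tomographic projection operator, and $R$ is the a priori penalty enforcing smoothness or local continuity with weight $\beta$.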

  17. A Posteriori Error Estimation for Discontinuous Galerkin Approximations of Hyperbolic Systems

    NASA Technical Reports Server (NTRS)

    Larson, Mats G.; Barth, Timothy J.

    1999-01-01

    This article considers a posteriori error estimation of specified functionals for first-order systems of conservation laws discretized using the discontinuous Galerkin (DG) finite element method. Using duality techniques, we derive exact error representation formulas for both linear and nonlinear functionals given an associated bilinear or nonlinear variational form. Weighted residual approximations of the exact error representation formula are then proposed and numerically evaluated for Ringleb flow, an exact solution of the 2-D Euler equations.

  18. [Methods of a posteriori identification of food patterns in Brazilian children: a systematic review].

    PubMed

    Carvalho, Carolina Abreu de; Fonsêca, Poliana Cristina de Almeida; Nobre, Luciana Neri; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro

    2016-01-01

    The objective of this study is to provide guidance for identifying dietary patterns using the a posteriori approach, and to analyze the methodological aspects of the studies conducted in Brazil that identified the dietary patterns of children. Articles were selected from the Latin American and Caribbean Literature on Health Sciences, Scientific Electronic Library Online and PubMed databases. The key words were: Dietary pattern; Food pattern; Principal Components Analysis; Factor analysis; Cluster analysis; Reduced rank regression. We included studies that identified dietary patterns of children using the a posteriori approach. Seven studies published between 2007 and 2014 were selected, six of which were cross-sectional and one cohort. Five studies used the food frequency questionnaire for dietary assessment; one used a 24-hour dietary recall and the other a food list. The exploratory method used in most publications was principal components factor analysis, followed by cluster analysis. The sample size of the studies ranged from 232 to 4231, the values of the Kaiser-Meyer-Olkin test from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies identified dietary patterns of children using the a posteriori approach, and principal components factor analysis was the technique most used. PMID:26816172

  19. Quantitative evaluation of efficiency of the methods for a posteriori filtration of the slip-rate time histories

    NASA Astrophysics Data System (ADS)

    Kristekova, M.; Galis, M.; Moczo, P.; Kristek, J.

    2012-04-01

    Simulated slip-rate time histories are often not free from spurious high-frequency oscillations. This is because the spatial grid used is not fine enough to properly discretize possibly broad-spectrum slip-rate and stress variations and the spatial breakdown zone of the propagating rupture. To reduce the oscillations, some numerical modelers apply artificial damping. An alternative is the adaptive smoothing algorithm (ASA, Galis et al. 2010). Other modelers, however, rely on a posteriori filtration. If the oscillations do not affect (change) the development and propagation of the rupture during simulations, it is possible to apply a posteriori filtration to reduce the oscillations. Often, however, a posteriori filtration is a problematic trade-off between suppression of oscillations and distortion of the true slip rate. We present a quantitative comparison of the efficiency of several methods. We have analyzed slip-rate time histories simulated by the FEM-TSN method. Signals containing spurious high-frequency oscillations and signals after application of a posteriori filtering have been compared to a reference signal. The reference signal was created by careful, iterative, adjusted denoising of the slip rate simulated using the finest (technically possible) spatial grid. We performed extensive numerical simulations in order to test the efficiency of a posteriori filtration for slip rates with different levels and natures of spurious oscillations. We show that time-frequency analysis and the time-frequency misfit criteria (Kristekova et al. 2006, 2009) are suitable tools for evaluating the efficiency of a posteriori filtration methods and are also clear indicators of possible distortions introduced by a posteriori filtration.

  20. Object detection and amplitude estimation based on maximum a posteriori reconstructions

    SciTech Connect

    Hanson, K.M.

    1990-01-01

    We report on the behavior of the linear maximum a posteriori (MAP) tomographic reconstruction technique as a function of the assumed rms noise σ_n in the measurements, which specifies the degree of confidence in the measurement data. The unconstrained MAP reconstructions are evaluated on the basis of the performance of two related tasks: object detection and amplitude estimation. It is found that the detectability of medium-sized discs remains constant up to relatively large σ_n before slowly diminishing. However, the amplitudes of the discs estimated from the MAP reconstructions increasingly deviate from their actual values as σ_n increases.
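
    For context, in this linear Gaussian setting the MAP estimate has the standard closed form (a textbook expression, not quoted from the report)

    $$ \hat{\mathbf{x}} \;=\; \bar{\mathbf{x}} + \mathbf{C}_x \mathbf{A}^{T} \big( \mathbf{A}\mathbf{C}_x\mathbf{A}^{T} + \sigma_n^{2}\mathbf{I} \big)^{-1} \big( \mathbf{y} - \mathbf{A}\bar{\mathbf{x}} \big), $$

    which makes the role of σ_n explicit: small values force the reconstruction to honour the measurements, while large values pull it toward the prior mean, consistent with amplitude estimates degrading before disc detectability is lost.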

  1. A Posteriori Error Estimation for a Nodal Method in Neutron Transport Calculations

    SciTech Connect

    Azmy, Y.Y.; Buscaglia, G.C.; Zamonsky, O.M.

    1999-11-03

    An a posteriori error analysis of the spatial approximation is developed for the one-dimensional Arbitrarily High Order Transport-Nodal method. The error estimator preserves the order of convergence of the method when the mesh size tends to zero with respect to the L² norm. It is based on the difference between two discrete solutions that are available from the analysis. The proposed estimator is decomposed into error indicators to allow the quantification of local errors. Some test problems with isotropic scattering are solved to compare the behavior of the true error to that of the estimated error.

  2. An a posteriori error estimator for shape optimization: application to EIT

    NASA Astrophysics Data System (ADS)

    Giacomini, M.; Pantz, O.; Trabelsi, K.

    2015-11-01

    In this paper we account for the numerical error introduced by the Finite Element approximation of the shape gradient to construct a guaranteed shape optimization method. We present a goal-oriented strategy inspired by the complementary energy principle to construct a constant-free, fully-computable a posteriori error estimator and to derive a certified upper bound of the error in the shape gradient. The resulting Adaptive Boundary Variation Algorithm (ABVA) is able to identify a genuine descent direction at each iteration and features a reliable stopping criterion for the optimization loop. Some preliminary numerical results for the inverse identification problem of Electrical Impedance Tomography are presented.

  3. Application of a posteriori granddaughter and modified granddaughter designs to determine Holstein haplotype effects.

    PubMed

    Weller, J I; VanRaden, P M; Wiggans, G R

    2013-08-01

    A posteriori and modified granddaughter designs were applied to determine haplotype effects for Holstein bulls and cows with BovineSNP50 [~50,000 single nucleotide polymorphisms (SNP); Illumina Inc., San Diego, CA] genotypes. The a posteriori granddaughter design was applied to 52 sire families, each with ≥100 genotyped sons with genetic evaluations based on progeny tests. For 33 traits (milk, fat, and protein yields; fat and protein percentages; somatic cell score; productive life; daughter pregnancy rate; heifer and cow conception rates; service-sire and daughter calving ease; service-sire and daughter stillbirth; 18 conformation traits; and net merit), the analysis was applied to the autosomal segment with the SNP with the greatest effect in the genomic evaluation of each trait. All traits except 2 had a within-family haplotype effect. The same design was applied with the genetic evaluations of sons corrected for SNP effects associated with chromosomes besides the one under analysis. The number of within-family contrasts was 166 without adjustment and 211 with adjustment. Of the 52 bulls analyzed, 36 had BovineHD (high density; Illumina Inc.) genotypes that were used to test for concordance between sire quantitative trait loci and SNP genotypes; complete concordance was not obtained for any effects. Of the 31 traits with effects from the a posteriori granddaughter design, 21 were analyzed with the modified granddaughter design. Only sires with a contrast for the a posteriori granddaughter design and ≥200 granddaughters with a record usable for genetic evaluation were included. Calving traits could not be analyzed because individual cow evaluations were not computed. Eight traits had within-family haplotype effects. With respect to milk and fat yields and fat percentage, the results on Bos taurus autosome (BTA) 14 corresponded to the hypothesis that a missense mutation in the diacylglycerol O-acyltransferase 1 (DGAT1) gene is the main causative mutation

  4. School Choice.

    ERIC Educational Resources Information Center

    The Progress of Education Reform 1999-2001, 1999

    1999-01-01

    This publication is the first in a series of reports that examine policy issues in education. It looks at the four major forms of school choice--charter schools, open enrollment, home schooling, and vouchers--and how they are changing the landscape of public education. School choice is one of the fastest-growing innovations in public education,…

  5. Real-time maximum a-posteriori image reconstruction for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Jabbar, Anwar A.; Dilipkumar, Shilpa; C K, Rasmi; Rajan, K.; Mondal, Partha P.

    2015-08-01

    Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging. This becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU)-based real-time maximum a-posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of small processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is ≈200-fold faster (for large datasets) when compared to existing CPU-based systems.
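
    A minimal CPU sketch of one quadratic-potential MAP deconvolution scheme (gradient descent on a Gaussian-prior objective); the per-pixel convolutions are the operations a GPU parallelizes across threads, but this is not the paper's actual kernel code.

      import numpy as np
      from scipy.ndimage import convolve

      def map_deconvolve(y, psf, lam=0.05, step=0.2, n_iter=200):
          # Minimize ||h*x - y||^2 + lam * ||grad x||^2: the quadratic potential
          # both deconvolves (data term) and suppresses noise (smoothness term).
          x = y.copy()
          psf_flip = psf[::-1, ::-1]
          lap = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
          for _ in range(n_iter):
              resid = convolve(x, psf, mode="reflect") - y
              grad = (convolve(resid, psf_flip, mode="reflect")
                      - lam * convolve(x, lap, mode="reflect"))
              x -= step * grad
          return x

      rng = np.random.default_rng(0)               # toy usage
      psf = np.ones((5, 5)) / 25.0
      truth = np.zeros((64, 64)); truth[28:36, 28:36] = 1.0
      y = convolve(truth, psf, mode="reflect") + 0.01 * rng.standard_normal((64, 64))
      x_hat = map_deconvolve(y, psf)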

  6. Quantitative optical coherence tomography by maximum a-posteriori estimation of signal intensity

    NASA Astrophysics Data System (ADS)

    Chan, Aaron C.; Kurokawa, Kazuhiro; Makita, Shuichi; Hong, Young-Joo; Miyazawa, Arata; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    A maximum a-posteriori (MAP) estimator for the signal amplitude of optical coherence tomography (OCT) is presented. This estimator provides an accurate and low-bias estimation of the correct OCT signal amplitude even at very low signal-to-noise ratios. As a result, contrast improvement of retinal OCT images is demonstrated. In addition, this estimation method allows for an estimation reliability to be calculated. By combining the MAP estimator with a previously demonstrated attenuation imaging algorithm, we present attenuation coefficient images of the retina. From the reliability derived from the MAP image one can also determine which regions of the attenuation images are unreliable. From Jones matrix OCT data of the optic nerve head (ONH), we also demonstrate that combining MAP with polarization diversity (PD) OCT images can generate intensity images with fewer birefringence artifacts, resulting in better attenuation images. Analysis of the MAP intensity images shows higher image SNR than averaging.

  7. A posteriori correction of camera characteristics from large image data sets.

    PubMed

    Afanasyev, Pavel; Ravelli, Raimond B G; Matadeen, Rishi; De Carlo, Sacha; van Duinen, Gijs; Alewijnse, Bart; Peters, Peter J; Abrahams, Jan-Pieter; Portugal, Rodrigo V; Schatz, Michael; van Heel, Marin

    2015-01-01

    Large datasets are emerging in many fields of image processing including: electron microscopy, light microscopy, medical X-ray imaging, astronomy, etc. Novel computer-controlled instrumentation facilitates the collection of very large datasets containing thousands of individual digital images. In single-particle cryogenic electron microscopy ("cryo-EM"), for example, large datasets are required for achieving quasi-atomic resolution structures of biological complexes. Based on the collected data alone, large datasets allow us to precisely determine the statistical properties of the imaging sensor on a pixel-by-pixel basis, independent of any "a priori" normalization routinely applied to the raw image data during collection ("flat field correction"). Our straightforward "a posteriori" correction yields clean linear images as can be verified by Fourier Ring Correlation (FRC), illustrating the statistical independence of the corrected images over all spatial frequencies. The image sensor characteristics can also be measured continuously and used for correcting upcoming images. PMID:26068909
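
    A minimal sketch of the per-pixel statistics underlying such a correction, under an assumed shot-noise-limited (Poisson) model; the gain recipe and toy numbers are illustrative, not the authors' pipeline.

      import numpy as np

      def estimate_sensor_maps(stack):
          # The temporal mean captures the fixed-pattern response; for Poisson
          # data the temporal variance satisfies var = gain * mean, so their
          # ratio recovers the per-pixel gain without an a priori flat field.
          mean = stack.mean(axis=0)
          var = stack.var(axis=0)
          gain = var / np.clip(mean, 1e-9, None)
          return mean, gain

      rng = np.random.default_rng(0)
      true_gain = 1.0 + 0.1 * rng.standard_normal((64, 64))
      stack = rng.poisson(100.0, size=(2000, 64, 64)) * true_gain
      mean, gain = estimate_sensor_maps(stack)
      print(np.corrcoef(gain.ravel(), true_gain.ravel())[0, 1])  # close to 1
      corrected = stack[0] / np.clip(gain, 1e-9, None)  # gain-corrected frame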

  8. Maximum a posteriori video super-resolution using a new multichannel image prior.

    PubMed

    Belekos, Stefanos P; Galatsanos, Nikolaos P; Katsaggelos, Aggelos K

    2010-06-01

    Super-resolution (SR) is the term used to define the process of estimating a high-resolution (HR) image or a set of HR images from a set of low-resolution (LR) observations. In this paper we propose a class of SR algorithms based on the maximum a posteriori (MAP) framework. These algorithms utilize a new multichannel image prior model, along with the state-of-the-art single channel image prior and observation models. A hierarchical (two-level) Gaussian nonstationary version of the multichannel prior is also defined and utilized within the same framework. Numerical experiments comparing the proposed algorithms among themselves and with other algorithms in the literature, demonstrate the advantages of the adopted multichannel approach. PMID:20129860

  9. Machine learning source separation using maximum a posteriori nonnegative matrix factorization.

    PubMed

    Gao, Bin; Woo, Wai Lok; Ling, Bingo W-K

    2014-07-01

    A novel unsupervised machine learning algorithm for single channel source separation is presented. The proposed method is based on nonnegative matrix factorization, which is optimized under the framework of maximum a posteriori probability and Itakura-Saito divergence. The method enables a generalized criterion for variable sparseness to be imposed onto the solution and prior information to be explicitly incorporated through the basis vectors. In addition, the method is scale invariant where both low and high energy components of a signal are treated with equal importance. The proposed algorithm is a more complete and efficient approach for matrix factorization of signals that exhibit temporal dependency of the frequency patterns. Experimental tests have been conducted and compared with other algorithms to verify the efficiency of the proposed method. PMID:24217003
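
    For reference, the maximum likelihood core of the method, nonnegative matrix factorization under the Itakura-Saito divergence, has well-known multiplicative updates; the sketch below omits the paper's a priori terms on the basis vectors and its variable-sparseness criterion.

      import numpy as np

      def is_nmf(V, rank, n_iter=200, seed=0):
          # Multiplicative updates for V ~ W @ H under the Itakura-Saito
          # divergence; V is a nonnegative power spectrogram. The updates are
          # scale invariant, treating low- and high-energy bins equally.
          rng = np.random.default_rng(seed)
          F, N = V.shape
          W = np.abs(rng.standard_normal((F, rank))) + 1e-6
          H = np.abs(rng.standard_normal((rank, N))) + 1e-6
          for _ in range(n_iter):
              X = W @ H
              W *= ((V * X**-2) @ H.T) / (X**-1 @ H.T)
              X = W @ H
              H *= (W.T @ (V * X**-2)) / (W.T @ X**-1)
          return W, H

      V = np.random.default_rng(1).gamma(2.0, 1.0, size=(64, 100))  # toy spectrogram
      W, H = is_nmf(V, rank=4)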

  10. Conjugate quasilinear Dirichlet and Neumann problems and a posteriori error bounds

    NASA Technical Reports Server (NTRS)

    Lavery, J. E.

    1976-01-01

    Quasilinear Dirichlet and Neumann problems on a rectangle D with boundary D′ are considered. Conjugate problems, that is, pairs consisting of one Dirichlet and one Neumann problem whose energy minima add to zero, are introduced. From the concept of conjugate problems, two-sided bounds for the energy of the exact solution of any given Dirichlet or Neumann problem are constructed. These two-sided bounds for the energy at the exact solution are in turn used to obtain a posteriori error bounds for the norm of the difference of the approximate and exact solutions of the problem. These bounds do not involve the unknown exact solution and are easily constructed numerically.
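
    In symbols, writing \(E_D\) and \(E_N\) for the two energies and \(u^\ast\), \(v^\ast\) for the exact solutions, the defining property \(E_D(u^\ast) + E_N(v^\ast) = 0\) gives, for any admissible approximations \(u\) and \(v\),

        \[
        -E_N(v) \;\le\; E_D(u^\ast) \;\le\; E_D(u),
        \]

    since each exact solution minimizes its own energy; the gap between the two computable bounds then controls the a posteriori error of the approximate solution.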

  11. Volcanic eruption source parameters from active and passive microwave sensors

    NASA Astrophysics Data System (ADS)

    Montopoli, Mario; Marzano, Frank S.; Cimini, Domenico; Mereu, Luigi

    2016-04-01

    It is well known in the volcanology community that precise information on the source parameters characterising an eruption is of predominant interest for the initialization of Volcanic Transport and Dispersion Models (VTDM). The source parameters of main interest are the top altitude of the volcanic plume, the flux of the mass ejected at the emission source (which is strictly related to the cloud top altitude), the distribution of volcanic mass concentration along the vertical column, as well as the duration of the eruption and the erupted volume. Usually, the combination of a posteriori field and numerical studies allows constraining the eruption source parameters for a given volcanic event, thus making possible the forecast of ash dispersion and deposition from future volcanic eruptions. So far, remote sensors working at visible and infrared channels (cameras and radiometers) have mainly been used to detect, track and provide estimates of the concentration content and the prevailing size of the particles propagating within ash clouds up to several thousands of kilometres from the source, as well as to check, a posteriori, the accuracy of the VTDM outputs, thus testing the initial choice made for the source parameters. Acoustic waves (infrasound) and microwave fixed-scan radar (Voldorad) have also been used to infer source parameters. In this work we focus on the role of sensors operating at microwave wavelengths as complementary tools for real-time estimation of source parameters. Microwaves benefit from day-and-night operability and a relatively negligible sensitivity to the presence of (non-precipitating) weather clouds, at the cost of limited coverage and coarser spatial resolution when compared with infrared sensors. Thanks to the aforementioned advantages, the products from microwave sensors are expected to be sensitive mostly to the whole path traversed along the tephra cloud making microwaves particularly

  12. Edge-based a posteriori error estimators for generation of d-dimensional quasi-optimal meshes

    SciTech Connect

    Lipnikov, Konstantin; Agouzal, Abdellatif; Vassilevski, Yuri

    2009-01-01

    We present a new method of metric recovery for minimization of L_p-norms of the interpolation error or its gradient. The method uses edge-based a posteriori error estimates. The method is analyzed for conformal simplicial meshes in spaces of arbitrary dimension d.

  13. FORTRAN IV Program for One-Way Analysis of Variance with A Priori or A Posteriori Mean Comparisons

    ERIC Educational Resources Information Center

    Fordyce, Michael W.

    1977-01-01

    A flexible Fortran program for computing one-way analysis of variance is described. Requiring minimal core space, the program provides a variety of useful group statistics, all summary statistics for the analysis, and all mean comparisons for a priori or a posteriori testing. (Author/JKS)

  14. A Maximum A Posteriori Probability and Time-Varying Approach for Inferring Gene Regulatory Networks from Time Course Gene Microarray Data.

    PubMed

    Chan, Shing-Chow; Zhang, Li; Wu, Ho-Chun; Tsui, Kai-Man

    2015-01-01

    Unlike most conventional techniques with a static model assumption, this paper aims to estimate the time-varying model parameters and identify significant genes involved at different timepoints from time-course gene microarray data. We first formulate the parameter identification problem as a new maximum a posteriori probability estimation problem so that prior information can be incorporated as regularization terms to reduce the large estimation variance of this high-dimensional estimation problem. Under this framework, sparsity and temporal consistency of the model parameters are imposed using L1 regularization and novel continuity constraints, respectively. The resulting problem is solved using the L-BFGS method with the initial guess obtained from the partial least squares method. A novel forward validation measure is also proposed for the selection of regularization parameters, based on both forward and current prediction errors. The proposed method is evaluated using synthetic benchmark testing data and publicly available yeast Saccharomyces cerevisiae cell-cycle microarray data. For the latter in particular, a number of significant genes identified at different timepoints are found to be biologically significant according to previous findings in biological experiments. These results suggest that the proposed approach may serve as a valuable tool for inferring time-varying gene regulatory networks in biological studies. PMID:26357083
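
    Schematically (in notation of my own; the paper's exact penalty forms may differ), the per-timepoint MAP estimate combines a least-squares fit with an L1 sparsity penalty and a temporal-continuity penalty:

        \[
        \hat{\beta}_t = \arg\min_{\beta_t}\;
          \|y_t - X_t \beta_t\|_2^2
          + \lambda_1 \|\beta_t\|_1
          + \lambda_2 \|\beta_t - \hat{\beta}_{t-1}\|_2^2,
        \]

    where \(X_t\) collects the expression data at timepoint \(t\), \(\beta_t\) the regulatory coefficients, and \(\lambda_1, \lambda_2\) the regularization parameters selected by the forward validation measure.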

  15. Choice Matters.

    ERIC Educational Resources Information Center

    Hicks, Darcy

    2001-01-01

    Describes how the author allows the children to make choices about their art and writing, enabling them to make connections between their own lives and work. Suggests that educators need to provide doorways to the things that give students ideas: books, music, objects, pictures, smells, sounds, and textures. (SG)

  16. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_theta, which is based on the spatial derivative of velocity direction fields, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables against the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator performs better than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_theta achieves the best match with the true theta field and is asymptotic to the true theta variation field, with the promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element classes and node classes can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different

  17. Resolution enhancement of hyperspectral imagery using maximum a posteriori estimation with a stochastic mixing model

    NASA Astrophysics Data System (ADS)

    Eismann, Michael Theodore

    A maximum a posteriori estimation method is developed and tested for enhancing the spatial resolution of hyperspectral imagery using higher resolution, coincident, panchromatic or multispectral imagery. The approach incorporates a stochastic mixing model of the underlying spectral scene content to develop a cost function that simultaneously optimizes the estimated hyperspectral scene relative to the observed hyperspectral and auxiliary imagery, as well as the local statistics of the spectral mixing model. The incorporation of the stochastic mixing model is found to be the key ingredient to reconstructing sub-pixel spectral information. It provides the necessary constraints for establishing a well-conditioned linear system of equations that can be solved for the high resolution image estimate. The research presented includes a mathematical formulation of the estimation approach and stochastic mixing model, as well as enhancement results for a variety of both synthetic and actual imagery. Both direct and iterative solution methodologies are developed, the latter being necessary to effectively treat imagery with arbitrarily specified spectral and spatial response functions. The performance of the method is qualitatively and quantitatively compared to that of previously developed resolution enhancement approaches. It is found that this novel approach is generally able to reconstruct sub-pixel information in several principal components of the high resolution hyperspectral image estimate. In contrast, the enhancement for conventional methods such as principal component substitution and least-squares estimation is mostly limited to the first principal component.

  18. On Evaluation of Recharge Model Uncertainty: a Priori and a Posteriori

    SciTech Connect

    Ming Ye; Karl Pohlmann; Jenny Chapman; David Shafer

    2006-01-30

    Hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Hydrologic analyses typically rely on a single conceptual-mathematical model, which ignores conceptual model uncertainty and may result in biased predictions and under-estimation of predictive uncertainty. This study assesses the conceptual model uncertainty residing in five recharge models developed to date by different researchers, based on different theories, for the Nevada and Death Valley area, CA. A recently developed statistical method, Maximum Likelihood Bayesian Model Averaging (MLBMA), is utilized for this analysis. In a Bayesian framework, the recharge model uncertainty is assessed, a priori, using expert judgments collected through an expert elicitation in the form of prior probabilities of the models. The uncertainty is then evaluated, a posteriori, by updating the prior probabilities to estimate posterior model probabilities. The updating is conducted through maximum likelihood inverse modeling, by calibrating the Death Valley Regional Flow System (DVRFS) model corresponding to each recharge model against observations of head and flow. Calibration results of the DVRFS for the five recharge models are used to estimate three information criteria (AIC, BIC, and KIC) used to rank and discriminate among these models. Posterior probabilities of the five recharge models, evaluated using KIC, are used as weights to average head predictions, which gives the posterior mean and variance. The posterior quantities incorporate both parametric and conceptual model uncertainties.
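
    The KIC-based weighting follows the usual Bayesian model averaging recipe; a minimal sketch (assuming per-model KIC values, prior probabilities, and head predictions are already in hand) is:

        import numpy as np

        def mlbma_average(kic, prior, mean, var):
            """Posterior model weights from KIC and priors, then the
            model-averaged mean and variance of a prediction.

            kic, prior, mean, var: 1-D arrays, one entry per recharge model.
            """
            # w_k ~ prior_k * exp(-0.5 * (KIC_k - min KIC)); the shift avoids underflow
            w = prior * np.exp(-0.5 * (kic - kic.min()))
            w /= w.sum()
            mu = np.sum(w * mean)
            # total variance = within-model variance + between-model spread
            total_var = np.sum(w * (var + (mean - mu)**2))
            return w, mu, total_var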

  19. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modeling combined with p-version finite elements, is described with specific application to a two-dimensional, steady-state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.

  20. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    SciTech Connect

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  2. The choice of the mathematical method for prediction of electrochemical accumulator parameters value in power installations of space-rocket objects

    NASA Astrophysics Data System (ADS)

    Bezruchko, K. V.; Davidov, A. O.; Katorgina, J. G.; Logvin, V. M.; Kharchenko, A. A.

    2013-11-01

    A review and analysis of several mathematical methods for predicting electrochemical accumulator parameters are provided in the article: prediction by mathematical expectation, by the latest entry, statistical prediction, the Box-Jenkins model, the Volta decomposition, ARMA, ARIMA, and Kalman filtering. The results of these methods for predicting the characteristics of the electrochemical battery 22НКГ-4CK, which is part of the power plant of a “Mikrosputnik”-type spacecraft, are given. The possible use of these methods for long-term prediction of electrochemical accumulator characteristics in the power plants of space-rocket objects is shown.

  3. Hard choices.

    PubMed

    Furedi, A

    1999-01-01

    The cultural discourse that frames the abortion debate has changed and become more complex over the years. To date, concerns about the need to defend the choice have shifted to moral and ethical issues surrounding abortion. The right of women to abortion can be situated in the context of ethical principles which are basic to what we hold valuable in modern society. The ethical principle of "procreative autonomy", the right of humans to control their own role in procreation, has an unusually significant place in modern political culture in which human dignity was an important feature. Central to human dignity was the principle that "people possess the moral right and responsibility to answer the basic questions about the value and meaning of their own lives." Another crucial issue is the need to defend the "bodily autonomy" of women. Forcing women to support the fetus against their will flies in the face of such principles as the need for voluntary consent to medical treatment. These arguments do not suggest moral indifference towards abortion choices, but, as Ronald Dworkin argues, "tolerance is a cost we must pay for our adventure in liberty." PMID:12178906

  4. A Posteriori Study of a DNS Database Describing Supercritical Binary-Species Mixing

    NASA Technical Reports Server (NTRS)

    Bellan, Josette; Taskinoglu, Ezgi

    2012-01-01

    Currently, the modeling of supercritical-pressure flows through Large Eddy Simulation (LES) uses models derived for atmospheric-pressure flows. Those atmospheric-pressure flows do not exhibit the high density-gradient-magnitude features observed both in experiments and simulations of supercritical-pressure flows in the case of two-species mixing. To assess whether the current LES modeling is appropriate, and to propose higher-fidelity models if it is found not to be, an a posteriori LES study has been conducted for a mixing layer that initially contains different species in the lower and upper streams, and where the initial pressure is larger than the critical pressure of either species. An initially imposed vorticity perturbation promotes roll-up and a double pairing of four initial span-wise vortices into an ultimate vortex that reaches a transitional state. The LES equations consist of the differential conservation equations coupled with a real-gas equation of state, and the equation set uses transport properties depending on the thermodynamic variables. Unlike all LES models to date, the differential equations contain, in addition to the subgrid-scale (SGS) fluxes, a new SGS term that is a pressure correction in the momentum equation. This additional term results from filtering the Direct Numerical Simulation (DNS) equations, and represents the gradient of the difference between the filtered pressure and the pressure computed from the filtered flow field. A previous a priori analysis, using a DNS database for the same configuration, found this term to be of leading order in the momentum equation, a fact traced to the existence of high density-gradient-magnitude regions that populated the entire flow; in that study, models were proposed for the SGS fluxes as well as for this new term. In the present study, the previously proposed constant-coefficient SGS-flux models of the a priori investigation are tested a posteriori in LES, devoid of, or including, the
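
    In symbols (my notation, with overbars denoting filtered quantities), the additional SGS term described above is

        \[
        \nabla\big(\overline{p} - p(\overline{\phi})\big),
        \]

    where \(p(\cdot)\) is the real-gas equation of state evaluated on the filtered flow variables \(\overline{\phi}\).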

  5. Relevance of the choice of spark plasma sintering parameters in obtaining a suitable microstructure for iodine-bearing apatite designed for the conditioning of I-129

    NASA Astrophysics Data System (ADS)

    Campayo, L.; Le Gallet, S.; Perret, D.; Courtois, E.; Cau Dit Coumes, C.; Grin, Yu.; Bernard, F.

    2015-02-01

    The high chemical durability of iodine-bearing apatite phases makes them potentially attractive for immobilizing radioactive iodine. Reactive spark plasma sintering provides a dense ceramic as a wasteform. A design-of-experiments (DOE) approach was adopted to identify the main process/material parameters and their first-order interactions, in order to specify experimental conditions guaranteeing complete reaction, a relative density of the wasteform exceeding 92%, and the largest possible grain size. For disposal of the wasteform in a deep geological repository, these characteristics allow minimization of the iodine release on contact with groundwater. It was found that sintering at a temperature of 450 °C with an initial specific surface area of 3.3 m² g⁻¹ for the powder reactants is sufficient in itself to achieve the targeted characteristics of the wasteform. However, this relies on a liquid-phase sintering regime whose efficiency can be limited by the initial lead iodide content in the mix as well as by its particle size.

  6. Choice-Based Conjoint Analysis: Classification vs. Discrete Choice Models

    NASA Astrophysics Data System (ADS)

    Giesen, Joachim; Mueller, Klaus; Taneva, Bilyana; Zolliker, Peter

    Conjoint analysis is a family of techniques that originated in psychology and later became popular in market research. The main objective of conjoint analysis is to measure an individual's or a population's preferences on a class of options that can be described by parameters and their levels. We consider preference data obtained in choice-based conjoint analysis studies, where one observes test persons' choices on small subsets of the options. There are many ways to analyze choice-based conjoint analysis data. Here we discuss the intuition behind a classification based approach, and compare this approach to one based on statistical assumptions (discrete choice models) and to a regression approach. Our comparison on real and synthetic data indicates that the classification approach outperforms the discrete choice models.

  7. Effect of parameter choice in root water uptake models - the arrangement of root hydraulic properties within the root architecture affects dynamics and efficiency of root water uptake

    NASA Astrophysics Data System (ADS)

    Bechmann, M.; Schneider, C.; Carminati, A.; Vetterlein, D.; Attinger, S.; Hildebrandt, A.

    2014-10-01

    Detailed three-dimensional models of root water uptake have become increasingly popular for investigating the process of root water uptake. However, they suffer from a lack of information on important parameters, particularly on the spatial distribution of root axial and radial conductivities, which vary greatly along a root system. In this paper we explore how the arrangement of those root hydraulic properties and branching within the root system affects modelled uptake dynamics, xylem water potential and the efficiency of root water uptake. We first apply a simple model to illustrate the mechanisms at the scale of single roots. By using two efficiency indices based on (i) the collar xylem potential ("effort") and (ii) the integral amount of unstressed root water uptake ("water yield"), we show that an optimal root length emerges, depending on the ratio between the root's axial and radial conductivities. Young roots with a high capacity for radial uptake are only efficient when they are short. Branching, in combination with mature transport roots, enables soil exploration and substantially increases active young root length at low collar potentials. Second, we investigate how this shapes uptake dynamics at the plant scale using a comprehensive three-dimensional root water uptake model. Plant-scale dynamics, such as the average uptake depth of entire root systems, were only minimally influenced by the hydraulic parameterization. However, other factors such as hydraulic redistribution, collar potential, internal redistribution patterns and instantaneous uptake depth depended strongly on the arrangement of root hydraulic properties. Root systems were most efficient when assembled of different root types, allowing for separation of root function into uptake (numerous short apical young roots) and transport (longer mature roots). Modelling results became similar when this heterogeneity was accounted for to some degree (i.e. if the root systems contained between

  8. Application of the a posteriori granddaughter design to the Holstein genome.

    PubMed

    Weller, J I; Cole, J B; Vanraden, P M; Wiggans, G R

    2014-04-01

    An a posteriori granddaughter design was applied to estimate quantitative trait loci genotypes of sires with many sons in the US Holstein population. The results of this analysis can be used to determine concordance between specific polymorphisms and segregating quantitative trait loci. Determination of the actual polymorphisms responsible for observed genetic variation should increase the accuracy of genomic evaluations and rates of genetic gain. A total of 52 grandsire families, each with ⩾100 genotyped sons with genetic evaluations based on progeny tests, were analyzed for 33 traits (milk, fat and protein yields; fat and protein percentages; somatic cell score (SCS); productive life; daughter pregnancy rate; heifer and cow conception rates; service-sire and daughter calving ease; service-sire and daughter stillbirth rates; 18 conformation traits; and net merit). Of 617 haplotype segments spanning the entire bovine genome and each including ~5×10^6 bp, 5 cM and 50 genes, 608 autosomal segments were analyzed. A total of 19 335 unique haplotypes were found among the 52 grandsires. There were a total of 133 chromosomal segment-by-trait combinations for which the nominal probability of significance for the haplotype effect was <10^-8, which corresponds to genome-wide significance of <10^-4. The number of chromosomal regions that met this criterion by trait ranged from one for rear legs (rear view) to seven for net merit. For each of the putative quantitative trait loci, at least one grandsire family had a within-family contrast with a t-value of >3. Confidence intervals (CIs) were estimated by the nonparametric bootstrap for the largest effect for each of nine traits. The bootstrap distribution generated by 100 samples was bimodal only for net merit, which had the widest 90% CI (eight haplotype segments). This may be due to the fact that net merit is a composite trait. For all other chromosomes, the CI spanned less than a third of the chromosome. The narrowest CI (a

  9. Blind deconvolution of images with model discrepancies using maximum a posteriori estimation with heavy-tailed priors

    NASA Astrophysics Data System (ADS)

    Kotera, Jan; Šroubek, Filip

    2015-02-01

    Single-image blind deconvolution aims to estimate the unknown blur from a single observed blurred image and recover the original sharp image. Such a task is severely ill-posed, and typical approaches involve some heuristic or other steps without clear mathematical explanation to arrive at an acceptable solution. We show that a straightforward maximum a posteriori estimation incorporating sparse priors and a mechanism to deal with boundary artifacts, combined with an efficient numerical method, can produce results which compete with or outperform much more complicated state-of-the-art methods. Our method is naturally extended to deal with overexposure in low-light photography, where the linear blurring model is violated.

  10. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, and uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  11. School Choice vs. School Choice. Policy Backgrounder.

    ERIC Educational Resources Information Center

    Goodman, John C.; Moore, Matt

    This paper recommends replacing the existing U.S. school choice system, which relies on the housing market to ration educational opportunity, with one that creates a level playing field upon which schools compete for students, and students and their parents exercise choice. Section 1 describes the current school choice system, which works well for…

  12. A Novel Gibbs Maximum A Posteriori (GMAP) Approach on Bayesian Nonlinear Mixed-Effects Population Pharmacokinetics (PK) Models

    PubMed Central

    Kim, Seongho; Hall, Stephen D.; Li, Lang

    2009-01-01

    In this paper, various Bayesian Markov chain Monte Carlo (MCMC) methods and the proposed algorithm, the Gibbs maximum a posteriori (GMAP) algorithm, are compared for implementing the nonlinear mixed-effects model in pharmacokinetics (PK) studies. An intravenous two-compartmental PK model is adopted to fit the PK data from the midazolam (MDZ) studies, which recruited 24 individuals with 9 different time points per subject. A three-stage hierarchical nonlinear mixed model is constructed. Data analysis and model performance comparisons show that GMAP converges the fastest and provides reliable results. Meanwhile, data augmentation (DA) methods are used for the Random-walk Metropolis method. Data analysis shows that the speed of convergence of Random-walk Metropolis can be improved by DA, but none of these variants is as fast as GMAP. The performance of GMAP and the various MCMC algorithms is compared through midazolam data analysis and simulation. PMID:20183435

  13. A posteriori error estimates for continuous/discontinuous Galerkin approximations of the Kirchhoff-Love buckling problem

    NASA Astrophysics Data System (ADS)

    Hansbo, Peter; Larson, Mats G.

    2015-11-01

    Second-order buckling theory involves a one-way coupled problem in which the stress tensor from a plane stress problem appears in an eigenvalue problem for the fourth-order Kirchhoff plate. In this paper we present an a posteriori error estimate for the critical buckling load and mode corresponding to the smallest eigenvalue and associated eigenvector. A particular feature of the analysis is that we take into account the effect of approximate computation of the stress tensor, and we also provide an error indicator for the plane stress problem. The Kirchhoff plate is discretized using a continuous/discontinuous finite element method based on standard continuous piecewise polynomial finite element spaces. The same finite element spaces can be used to solve the plane stress problem.

  14. Reliable and efficient a posteriori error estimation for adaptive IGA boundary element methods for weakly-singular integral equations

    PubMed Central

    Feischl, Michael; Gantner, Gregor; Praetorius, Dirk

    2015-01-01

    We consider the Galerkin boundary element method (BEM) for weakly-singular integral equations of the first-kind in 2D. We analyze some residual-type a posteriori error estimator which provides a lower as well as an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the frame of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698

  15. Choice in Public Education.

    ERIC Educational Resources Information Center

    Young, Timothy W.; Clinchy, Evans

    There has been much recent debate in both educational and political circles about the utility of choice as a means of improving the educational system. This book argues that any discussion of choice must address choice in public schools. The book is organized into seven chapters. Chapter 1 provides an overview of choice in public education,…

  16. Making Smart Food Choices

    MedlinePlus

    Making Smart Food Choices (NIH Healthy Aging feature, Winter 2015; www.nia.nih.gov/Go4Life): To maintain a healthy weight, balance the ...

  17. Making Smart Food Choices

    MedlinePlus

    Making Smart Food Choices (Go4Life tip sheet; PDF, 488.99 KB): Regular physical activity and a healthy ... through physical activity.

  18. Savage Misunderstandings about Choice.

    ERIC Educational Resources Information Center

    Gura, Mark

    1993-01-01

    Although Jonathan Kozol is well-informed about choice program imperfections, schools of choice are superior to traditional schools. In places like East Harlem, school choice is helping transform youngsters from captive, disenfranchised malcontents to true students involved in their education. The challenge is to make every district school worthy…

  19. Doing School Choice Right

    ERIC Educational Resources Information Center

    Hill, Paul T.

    2005-01-01

    School choice is growing, due to growth of charter schools, private voucher programs, and No Child Left Behind requirements that school districts offer options to children in low-performing schools. Growth can bring dangers if choice is implemented carelessly. Recent research on choice shows that program design and implementation matter: the…

  20. The impact of the choice of radiative transfer model and inversion method on the OSIRIS ozone and nitrogen dioxide retrievals

    NASA Astrophysics Data System (ADS)

    Haley, Craig; McLinden, Chris; Sioris, Christopher; Brohede, Samuel

    Key to the retrieval of stratospheric minor species information from limb-scatter measurements are the selections of a radiative transfer model (RTM) and inversion method (solver). Here we assess the impact of the choice of RTM and solver on the retrievals of stratospheric ozone and nitrogen dioxide from the OSIRIS instrument using the 'Ozone Triplet' and Differential Optical Absorption Spectroscopy (DOAS) techniques that are used in the operational Level 2 processing algorithms. The RTMs assessed are LIMBTRAN, VECTOR, SCIARAYS, and SASKTRAN. The solvers studied include the Maximum A Posteriori (MAP), Maximum Likelihood (ML), Iterative Least Squares (ILS), and Chahine methods.

  1. Your Genes, Your Choices

    MedlinePlus

    Your Genes, Your Choices describes the Human Genome Project, the science behind it, and the ethical, legal, and social issues that are ... Nothing could be further from the truth. Your Genes, Your Choices points out how the progress of ...

  2. The Psychology of Choice.

    ERIC Educational Resources Information Center

    Lickona, Thomas

    A basic quality of the open classroom is that children are encouraged to make choices. Psychological rationales for allowing children to make choices are taken from psychological theory: (1) the objective of education, stated by Piaget and others, is to develop creative and independent thinkers; (2) children are intrinsically motivated to learn:…

  3. The Choice Controversy.

    ERIC Educational Resources Information Center

    Cookson, Peter W., Jr., Ed.

    Issues in school choice--constitutionality, feasibility, equity, and educational productivity--are examined in this book. The controversy requires an ongoing analysis of the origins of the school-choice movement, the kinds of plans proposed and implemented, their educational and social consequences, and the philosophical assumptions underlying the…

  4. Children's Choices for 2008

    ERIC Educational Resources Information Center

    Reading Teacher, 2008

    2008-01-01

    Each year 12,500 school children from different regions of the United States read and vote on the newly published children's and young adults' trade books that they like best. The Children's Choices for 2008 list is the 34th in a series that first appeared as "Classroom Choices" in the November 1975 issue of "The Reading Teacher" (RT), a…

  5. Children's Choices for 2002.

    ERIC Educational Resources Information Center

    Reading Teacher, 2002

    2002-01-01

    Presents annotations of children's choices of the top 100 children's and young adults' trade books for 2002. Lists books selected for the Children's Choice by reading levels: beginning readers; young readers; intermediate readers; and advanced readers. Provides tips and activities for parents, primary caregivers, and educators. (SG)

  6. Making School Choice Work

    ERIC Educational Resources Information Center

    DeArmond, Michael; Jochim, Ashley; Lake, Robin

    2014-01-01

    School choice is increasingly the new normal in urban education. But in cities with multiple public school options, how can civic leaders create a choice system that works for all families, whether they choose a charter or district public school? To answer this question, the Center on Reinventing Public Education (CRPE) researchers surveyed 4,000…

  7. More Choice, Less Crime

    ERIC Educational Resources Information Center

    Dills, Angela K.; Hernandez-Julian, Rey

    2011-01-01

    Previous research debates whether public school choice improves students' academic outcomes, but there is little examination of its effects on their nonacademic outcomes. We use data from a nationally representative sample of high school students, a previously developed Tiebout choice measure, and metropolitan-level data on teenage arrest rates to…

  8. Latinos and School Choice

    ERIC Educational Resources Information Center

    Gastic, Billie; Coronado, Diana Salas

    2011-01-01

    The authors describe how Latino students are underrepresented in public schools of choice. They provide evidence to refute the claim that Latino students who choose to leave assigned public schools enroll in religious schools instead. Charter schools stand out as the type of public schools of choice where Latino students are well represented.…

  9. Career Choice Conflict.

    ERIC Educational Resources Information Center

    Behymer, Jo; Cockriel, Irvin W.

    1988-01-01

    The study attempted to determine the effect of availability of scholarships and loans on the career choice of high school juniors and seniors. A survey of 911 college-bound students revealed that 89 percent considered availability of scholarships important to career choice, and 84 percent considered loan availability important. (CH)

  10. Choice: The Historical Perspective.

    ERIC Educational Resources Information Center

    Wagoner, Jennings L., Jr.

    The issue of choice in U.S. education is traced historically. Consideration is given to the purposes of publicly supported education and reasons underlying the historic distinction between public and private education. It is suggested that the issue of choice concerns the rights and obligations of the individual and the state. The relationship…

  11. School Choice Marches forward

    ERIC Educational Resources Information Center

    Butcher, Jonathan

    2013-01-01

    One year ago, the "Wall Street Journal" dubbed 2011 "the year of school choice," opining that "this year is shaping up as the best for reformers in a very long time." School-choice laws took great strides in 2011, both in the number of programs that succeeded across states and also in the size and scope of the adopted programs. Yet education…

  12. Measuring saliency in images: which experimental parameters for the assessment of image quality?

    NASA Astrophysics Data System (ADS)

    Fredembach, Clement; Woolfe, Geoff; Wang, Jue

    2012-01-01

    Predicting which areas of an image are perceptually salient or attended to has become an essential pre-requisite of many computer vision applications. Because observers are notoriously unreliable in remembering where they look a posteriori, and because asking where they look while observing the image necessarily influences the results, ground truth about saliency and visual attention has to be obtained by gaze tracking methods. From the early work of Buswell and Yarbus to the most recent forays in computer vision there has been, perhaps unfortunately, little agreement on standardisation of eye tracking protocols for measuring visual attention. As the number of parameters involved in experimental methodology can be large, their individual influence on the final results is not well understood. Consequently, the performance of saliency algorithms, when assessed by correlation techniques, varies greatly across the literature. In this paper, we concern ourselves with the problem of image quality. Specifically: where people look when judging images. We show that in this case, the performance gap between existing saliency prediction algorithms and experimental results is significantly larger than otherwise reported. To understand this discrepancy, we first devise an experimental protocol that is adapted to the task of measuring image quality. In a second step, we compare our experimental parameters with those of existing methods and show that a lot of the variability can directly be ascribed to these differences in experimental methodology and choice of variables. In particular, the choice of a task, e.g., judging image quality vs. free viewing, has a great impact on measured saliency maps, suggesting that even for a mildly cognitive task, ground truth obtained by free viewing does not adapt well. Careful analysis of the prior art also reveals that systematic bias can occur depending on instrumental calibration and the choice of test images. We conclude this work by proposing a

  13. Colorado's clean energy choices

    SciTech Connect

    Strawn, N.; Jones, J.

    2000-04-15

    The daily choices made by consumers affect the environment and the economy. Based on the state of today's technology and economics, Colorado consumers can incorporate energy efficiency and renewable energy into many aspects of their lives. These choices include where they obtain electricity, how they use energy at home, and how they transport themselves from one place to another. In addition to outlining how they can use clean energy, Colorado's Clean Energy Choices gives consumers contacts and links to Web sites for more information.

  14. Model choice for decision making under uncertainty

    NASA Astrophysics Data System (ADS)

    Bárdossy, András

    2015-04-01

    Present and future water management decisions are often supported by modelling. The choice of the appropriate model and model parameters depends on the decision-related question, the quality of the model and the available information. While spatially detailed physics-based models might seem very transferable, the uncertainty of the parametrization and of the input may lead to highly diverging results, which are of no use for decision making. The optimal model choice requires a quantification of the input/natural parameter uncertainty. As a next step, the influence of this uncertainty on predictions using models with different complexity has to be quantified. Finally, the influence of this prediction uncertainty on the decisions to be taken has to be assessed. Different data/information availability and modelling questions thus might require different modelling approaches. A framework for this model choice and parametrization problem will be presented, together with examples from regions with very different data availability and data quality.

  15. Sequential sampling and paradoxes of risky choice.

    PubMed

    Bhatia, Sudeep

    2014-10-01

    The common-ratio, common-consequence, reflection, and event-splitting effects are some of the best-known findings in decision-making research. They represent robust violations of expected utility theory, and together form a benchmark against which descriptive theories of risky choice are tested. These effects are not currently predicted by sequential sampling models of risky choice, such as decision field theory (Busemeyer & Townsend 1993). This paper, however, shows that a minor extension to decision field theory, which allows for stochastic error in event sampling, can provide a parsimonious, cognitively plausible explanation for these effects. Moreover, these effects are guaranteed to emerge for a large range of parameter values, including best-fit parameters obtained from preexisting choice data. PMID:24898202
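
    To make the mechanism concrete, the sketch below (an illustrative Python simulation of my own, not the paper's fitted model) accumulates valence differences between two gambles in the spirit of decision field theory, with the proposed extension modeled as noise on the sampled event probabilities:

        import numpy as np

        def choose(payoffs_a, payoffs_b, probs, threshold=5.0,
                   sample_noise=0.1, max_steps=10_000, rng=None):
            """Random-walk preference accumulation between gambles A and B.

            Each step attends to one event; stochastic sampling error draws
            the event from perturbed probabilities, and the valence
            difference accumulates until a decision threshold is crossed.
            """
            rng = rng or np.random.default_rng()
            pref = 0.0
            for _ in range(max_steps):
                noisy = np.clip(probs + rng.normal(0.0, sample_noise, len(probs)),
                                1e-9, None)
                noisy /= noisy.sum()
                event = rng.choice(len(probs), p=noisy)
                pref += payoffs_a[event] - payoffs_b[event]
                if abs(pref) >= threshold:
                    break
            return "A" if pref > 0 else "B"

        # Illustrative common-ratio style gambles (numbers are made up)
        print(choose(payoffs_a=[3.0, 0.0], payoffs_b=[4.0, 0.0], probs=[0.8, 0.2]))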

  16. Choice, changeover, and travel

    PubMed Central

    Baum, William M.

    1982-01-01

    Since foraging in nature can be viewed as instrumental behavior, choice between sources of food, known as “patches,” can be viewed as choice between instrumental response alternatives. Whereas the travel required to change alternatives deters changeover in nature, the changeover delay (COD) usually deters changeover in the laboratory. In this experiment, pigeons were exposed to laboratory choice situations, concurrent variable-interval schedules, that were standard except for the introduction of a travel requirement for changeover. As the travel requirement increased, rate of changeover decreased and preference for a favored alternative strengthened. When the travel requirement was small, the relations between choice and relative reinforcement revealed the usual tendencies toward matching and undermatching. When the travel requirement was large, strong overmatching occurred. These results, together with those from experiments in which changeover was deterred by punishment or a fixed-ratio requirement, deviate from the matching law, even when a correction is made for cost of changeover. If one accepted an argument that the COD is analogous to travel, the results suggest that the norm in choice relations would be overmatching. This overmatching, however, might only be the sign of an underlying strategy approximating optimization. PMID:16812283
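
    For reference, the matching, undermatching, and overmatching discussed here are conventionally summarized by the generalized matching law,

        \[
        \log\frac{B_1}{B_2} \;=\; s\,\log\frac{r_1}{r_2} \;+\; \log b,
        \]

    where \(B_1, B_2\) are response rates, \(r_1, r_2\) reinforcement rates, \(b\) a bias term, and the sensitivity \(s\) equals 1 for strict matching, falls below 1 for undermatching, and exceeds 1 for the overmatching observed with large travel requirements.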

  17. Measuring improved patient choice.

    PubMed

    Holmes-Rovner, M; Rovner, D R

    2000-08-01

    Patient decision support (PDS) tools or decision aids have been developed as adjuncts to the clinical encounter. Their aim is to support evidence-based patient choice. Clinical trials of PDS tools have used an array of outcome measures to determine efficacy, including knowledge, satisfaction, health status and consistency between patient choice and values. This paper proposes that the correlation between 'subjective expected utility' (SEU) and decision may be the best primary endpoint for trials. SEU is a measure usually used in behavioural decision theory. The paper first describes how decision support tools may use decision analysis to structure the presentation of evidence and guide patient decision-making. Uses of expected utility (EU) are suggested for evaluating PDS tools when improving population health status is the objective. SEU is the theoretically better measure when internal consistency of patient choices is the objective. PMID:11083037

  18. Choosing health, constrained choices.

    PubMed

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena, for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption that merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease. PMID:20028669

  19. The problem of choice

    PubMed Central

    Naqvi, Hassan R; Mathur, Shawn; Covarrubias, David; Curcio, Josephine A; Schmidt, Christian

    2008-01-01

    Convictions are a driving force for actions. Considering that every individual has a different set of convictions and larger groups act once a consensus decision is reached, one can see that debate is an inherent exercise in decision-making. This requires a sustainably generated surplus to allow time for intellectual exchange, gathering of information and dissemination of findings. It is essential that the full spectrum of options remain treated equally. At the end of this process, a choice has to be made. Looking back at a later time point, a retrospective analysis sometimes reveals that the choice was neither completely free nor a truly conscious one. Leaving the issue of consequences of a once made decision aside, we wish to contribute to the debate of the problem of choice. PMID:19025607

  20. Special Issue Topic: School Choice.

    ERIC Educational Resources Information Center

    Brogan, Bernard R.; And Others

    1991-01-01

    Includes "The Choice Movement" (Brogan); "Choice in American Education" (Witte); "Role of Parents in Education" (Mawdsley); "As Arrows in the Hand" (Coons); "Vouchers in Wisconsin" (Underwood); "Milwaukee Parental Choice Program (MPCP)" (Grover); "Civil Liberties and the MPCP" (Bolick); "Comments on School Choice" (Jauch); "Two Classes of…

  1. The Choice for Learning

    ERIC Educational Resources Information Center

    Bennett, Scott

    2006-01-01

    We are building conventional library space without making the paradigm shift our digital environment requires. The chief obstacles to change lie in our conception of readers as information consumers, in our allegiance to library operations as the drivers of library design, and in the choice made between foundational and non-foundational views of…

  2. Deterministic Walks with Choice

    SciTech Connect

    Beeler, Katy E.; Berenhaut, Kenneth S.; Cooper, Joshua N.; Hunter, Meagan N.; Barr, Peter S.

    2014-01-10

    This paper studies deterministic movement over toroidal grids, integrating local information, bounded memory and choice at individual nodes. The research is motivated by recent work on deterministic random walks, and applications in multi-agent systems. Several results regarding passing tokens through toroidal grids are discussed, as well as some open questions.
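
    The abstract leaves the movement rule abstract; one concrete instance of deterministic, locally informed choice on a torus (a hypothetical rule of my own, not necessarily the authors') is a walker that always moves to the less-visited of two candidate neighbours:

        def walk_with_choice(n, steps):
            """Deterministic walk on an n x n toroidal grid.

            At each step the walker considers two neighbours (right and down)
            and moves to the one with the lower visit count, breaking ties to
            the right -- local information and bounded memory only.
            """
            visits = [[0] * n for _ in range(n)]
            x = y = 0
            for _ in range(steps):
                right = ((x + 1) % n, y)
                down = (x, (y + 1) % n)
                if visits[right[1]][right[0]] <= visits[down[1]][down[0]]:
                    x, y = right
                else:
                    x, y = down
                visits[y][x] += 1
            return visits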

  3. Children's Choices for 2003.

    ERIC Educational Resources Information Center

    Reading Teacher, 2003

    2003-01-01

    Presents 103 titles for the 2003 Children's Choice grouped by reading levels: beginning, young, intermediate, and advanced readers. Provides the title, author, illustrator, publisher, ISBN, and price for each title as well as a brief annotation prepared by a review team. (SG)

  4. Children's Choice for 2001.

    ERIC Educational Resources Information Center

    Reading Teacher, 2001

    2001-01-01

    Presents a 25-item annotated bibliography for beginning readers, 30 items for young readers, 19 items for intermediate readers, and 24 items for advanced readers--all selected by children. Gives tips for parents, primary caregivers, and educators. Describes the Children's Choice project and book selection. (SG)

  5. Geography in Parental Choice

    ERIC Educational Resources Information Center

    Bell, Courtney

    2009-01-01

    If we are to fully understand the demand side of school choice, we have to understand geography. But geography is not simply distance and commute time. It is also neighborhood and community. Using two conceptions of geography--space and place--I investigate how and when geography factored into parents' thinking. Drawing on spatial analyses of…

  6. Supporting Family Choice

    ERIC Educational Resources Information Center

    Murray, Mary M.; Christensen, Kimberly A.; Umbarger, Gardner T.; Rade, Karin C.; Aldridge, Kathryn; Niemeyer, Judith A.

    2007-01-01

    Supporting family choice in the decision-making process is recommended practice in the field of early childhood and early childhood special education. These decisions may relate to the medical, educational, social, recreational, therapeutic/rehabilitative, and community aspects of the child's disability. Although this practice conveys the message…

  7. Choices, Frameworks and Refinement

    NASA Technical Reports Server (NTRS)

    Campbell, Roy H.; Islam, Nayeem; Johnson, Ralph; Kougiouris, Panos; Madany, Peter

    1991-01-01

    In this paper we present a method for designing operating systems using object-oriented frameworks. A framework can be refined into subframeworks. Constraints specify the interactions between the subframeworks. We describe how we used object-oriented frameworks to design Choices, an object-oriented operating system.

  8. Variation, Repetition, and Choice

    ERIC Educational Resources Information Center

    Abreu-Rodrigues, Josele; Lattal, Kennon A.; dos Santos, Cristiano V.; Matos, Ricardo A.

    2005-01-01

    Experiment 1 investigated the controlling properties of variability contingencies on choice between repeated and variable responding. Pigeons were exposed to concurrent-chains schedules with two alternatives. In the REPEAT alternative, reinforcers in the terminal link depended on a single sequence of four responses. In the VARY alternative, a…

  9. Learning from School Choice.

    ERIC Educational Resources Information Center

    Peterson, Paul E., Ed.; Hassel, Bryan C., Ed.

    This volume contains revised versions of 16 essays presented at a conference, "Rethinking School Governance," hosted by Harvard's Program on Education Policy and Governance in June 1997. Part 1, "Introduction," contains two chapters: (1) "School Choice: A Report Card" (Paul E. Peterson); and (2) "The Case for Charter Schools" (Bryan C. Hassel).…

  10. Fixing the c Parameter in the Three-Parameter Logistic Model

    ERIC Educational Resources Information Center

    Han, Kyung T.

    2012-01-01

    For several decades, the "three-parameter logistic model" (3PLM) has been the dominant choice for practitioners in the field of educational measurement for modeling examinees' response data from multiple-choice (MC) items. Past studies, however, have pointed out that the c-parameter of 3PLM should not be interpreted as a guessing parameter. This…
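
    For reference, the 3PLM item response function is

        \[
        P_i(\theta) \;=\; c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}},
        \]

    where \(a_i\) is the item discrimination, \(b_i\) the difficulty, \(c_i\) the lower asymptote, and \(\theta\) the examinee ability; the critique cited above is that \(c_i\) is merely a lower asymptote, not literally the probability of a correct guess.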

  11. Influencing choice without awareness.

    PubMed

    Olson, Jay A; Amlani, Alym A; Raz, Amir; Rensink, Ronald A

    2015-12-01

    Forcing occurs when a magician influences the audience's decisions without their awareness. To investigate the mechanisms behind this effect, we examined several stimulus and personality predictors. In Study 1, a magician flipped through a deck of playing cards while participants were asked to choose one. Although the magician could influence the choice almost every time (98%), relatively few (9%) noticed this influence. In Study 2, participants observed rapid series of cards on a computer, with one target card shown longer than the rest. We expected people would tend to choose this card without noticing that it was shown longest. Both stimulus and personality factors predicted the choice of card, depending on whether the influence was noticed. These results show that combining real-world and laboratory research can be a powerful way to study magic and can provide new methods to study the feeling of free will. PMID:25666736

  12. Recursive rational choice

    SciTech Connect

    Lewis, A.A.

    1981-11-01

    The purpose of the present study is to indicate how Kramer's results may be generalized to stronger computing devices than the finite state automata considered in Kramer's approach, and to domains of alternatives having the cardinality of the continuum. The approach employs the theory of recursive functions in the context of Church's Thesis. The result, which we consider preliminary to a more general research program, shows that a choice function that is rational in the sense of Richter (not necessarily regular), when defined on a restricted family of subsets of a continuum of alternatives and recursively represented by a partial predicate on equivalence classes of approximations by rational numbers, is recursively unsolvable. By way of Church's Thesis, therefore, such a function cannot be realized by any member of a very general class of effectively computable procedures. An additional consequence of this recursive unsolvability is a minimal lower bound on the computational complexity entailed by any effective realization of rational choice.

  13. Alternative fuels and vehicles choice model

    SciTech Connect

    Greene, D.L.

    1994-10-01

    This report describes the theory and implementation of a model of alternative fuel and vehicle choice (AFVC), designed for use with the US Department of Energy's Alternative Fuels Trade Model (AFTM). The AFTM is a static equilibrium model of the world supply and demand for liquid fuels, encompassing resource production, conversion processes, transportation, and consumption. The AFTM also includes fuel-switching behavior by incorporating multinomial logit-type equations for choice of alternative fuel vehicles and alternative fuels. This allows the model to solve for market shares of vehicles and fuels, as well as for fuel prices and quantities. The AFVC model includes fuel-flexible, bi-fuel, and dedicated fuel vehicles. For multi-fuel vehicles, the choice of fuel is subsumed within the vehicle choice framework, resulting in a nested multinomial logit design. The nesting is shown to be required by the different price elasticities of fuel and vehicle choice. A unique feature of the AFVC is that its parameters are derived directly from the characteristics of alternative fuels and vehicle technologies, together with a few key assumptions about consumer behavior. This not only establishes a direct link between assumptions and model predictions, but facilitates sensitivity testing, as well. The implementation of the AFVC model as a spreadsheet is also described.
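
    A minimal sketch of the nested multinomial logit structure described above, in Python. All utilities, the nesting parameter, and the vehicle/fuel sets are invented for illustration; the AFVC's actual parameters are derived from fuel and vehicle characteristics not reproduced here.

        import math

        # Hypothetical systematic utilities for three vehicle types.
        vehicle_utility = {"gasoline": 0.4, "flex_fuel": 0.1, "dedicated_cng": -0.2}
        # Lower nest: fuel utilities within the multi-fuel vehicle.
        fuel_utility = {"flex_fuel": {"gasoline": 0.3, "ethanol": 0.0}}

        MU = 0.5  # nesting parameter < 1: fuel choice is more elastic than vehicle choice

        def nested_logit_shares():
            inclusive = {}
            for v, u in vehicle_utility.items():
                fuels = fuel_utility.get(v)
                if fuels:  # multi-fuel vehicle: add the logsum of its fuel nest
                    logsum = math.log(sum(math.exp(uf / MU) for uf in fuels.values()))
                    inclusive[v] = u + MU * logsum
                else:
                    inclusive[v] = u
            denom = sum(math.exp(u) for u in inclusive.values())
            return {v: math.exp(u) / denom for v, u in inclusive.items()}

        print(nested_logit_shares())  # market shares summing to 1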

  14. Optimal filtration of the atmospheric parameters profiles

    NASA Technical Reports Server (NTRS)

    Zuev, V. E.; Glazov, G. N.; Igonin, G. M.

    1986-01-01

    The idea of optimal Markovian filtration of fluctuating profiles from lidar signals is developed, as applied to double-frequency sounding, which allows the use of large cross sections of elastic scattering and the correct separation of the contributions of aerosol and Rayleigh scattering to the total lidar return. The filtration efficiency is demonstrated under different sounding conditions using a computer model. The accuracy of the reconstructed profiles (temperature, pressure, density) is determined by the elements of the a posteriori matrix K. The results obtained allow determination of the lidar power required to provide the necessary accuracy of reconstruction of the atmospheric parameter profiles at the chosen sounding wavelengths in the ultraviolet and visible ranges.
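
    The abstract does not reproduce the filter equations; as a generic illustration of Markovian (Kalman-type) filtration of a fluctuating profile from noisy measurements, a scalar Python sketch with wholly assumed dynamics and noise variances:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 100
        truth = 250 + np.cumsum(rng.normal(0, 0.5, n))  # random-walk "temperature profile"
        meas = truth + rng.normal(0, 2.0, n)            # noisy lidar-derived samples

        q, r = 0.25, 4.0       # assumed process and measurement noise variances
        x, p = meas[0], r      # initial state estimate and its variance
        est = []
        for z in meas:
            p += q                     # predict step (random-walk model)
            k = p / (p + r)            # Kalman gain
            x += k * (z - x)           # update with the new measurement
            p *= (1 - k)               # a posteriori error variance
            est.append(x)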

  15. The Malleability of Intertemporal Choice.

    PubMed

    Lempert, Karolina M; Phelps, Elizabeth A

    2016-01-01

    Intertemporal choices are ubiquitous: people often have to choose between outcomes realized at different times. Although it is generally believed that people have stable tendencies toward being impulsive or patient, an emerging body of evidence indicates that intertemporal choice is malleable and can be profoundly influenced by context. How the choice is framed, or the state of the decision-maker at the time of choice, can induce a shift in preference. Framing effects are underpinned by allocation of attention to choice attributes, reference dependence, and time construal. Incidental affective states and prospection also influence intertemporal choice. We advocate that intertemporal choice models account for these context effects, and encourage the use of this knowledge to nudge people toward making more advantageous choices. PMID:26483153

  16. Dynamics of Choice: A Tutorial

    ERIC Educational Resources Information Center

    Baum, William M.

    2010-01-01

    Choice may be defined as the allocation of behavior among activities. Since all activities take up time, choice is conveniently thought of as the allocation of time among activities, even if activities like pecking are most easily measured by counting. Since dynamics refers to change through time, the dynamics of choice refers to change of…

  17. Understanding Career Choices in Context.

    ERIC Educational Resources Information Center

    Minor, Carole W.; Vermeulen, Mary E.; Coy, Doris Rhea

    Over several years, challenges have been made to traditional theories of career choice. One of these challenges has been to consider the contexts in which individuals live and how this can influence career choices. The purpose of this model is to create a framework to explain the influences on career choices over the lifespan. The term "career…

  18. Overconfidence and Career Choice

    PubMed Central

    Schulz, Jonathan F.; Thöni, Christian

    2016-01-01

    People self-assess their relative ability when making career choices. Thus, confidence in their own abilities is likely an important factor for selection into various career paths. In a sample of 711 first-year students we examine whether there are systematic differences in confidence levels across fields of study. We find that our experimental confidence measures significantly vary between fields of study: While students in business-related academic disciplines (Political Science, Law, Economics, and Business Administration) exhibit the highest confidence levels, students of the Humanities are at the other end of the scale. This may have important implications for subsequent earnings and for the professions students select into. PMID:26808273

  19. Overconfidence and Career Choice.

    PubMed

    Schulz, Jonathan F; Thöni, Christian

    2016-01-01

    People self-assess their relative ability when making career choices. Thus, confidence in their own abilities is likely an important factor for selection into various career paths. In a sample of 711 first-year students we examine whether there are systematic differences in confidence levels across fields of study. We find that our experimental confidence measures significantly vary between fields of study: While students in business-related academic disciplines (Political Science, Law, Economics, and Business Administration) exhibit the highest confidence levels, students of the Humanities are at the other end of the scale. This may have important implications for subsequent earnings and for the professions students select into. PMID:26808273

  20. A Bayesian approach to tracking patients having changing pharmacokinetic parameters

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Jelliffe, Roger W.

    2004-01-01

    This paper considers the updating of Bayesian posterior densities for pharmacokinetic models associated with patients having changing parameter values. For estimation purposes it is proposed to use the Interacting Multiple Model (IMM) estimation algorithm, which is currently a popular algorithm in the aerospace community for tracking maneuvering targets. The IMM algorithm is described, and compared to the multiple model (MM) and Maximum A-Posteriori (MAP) Bayesian estimation methods, which are presently used for posterior updating when pharmacokinetic parameters do not change. Both the MM and MAP Bayesian estimation methods are used in their sequential forms, to facilitate tracking of changing parameters. Results indicate that the IMM algorithm is well suited for tracking time-varying pharmacokinetic parameters in acutely ill and unstable patients, incurring only about half of the integrated error compared to the sequential MM and MAP methods on the same example.
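
    A stripped-down scalar illustration of the IMM cycle (mixing, per-model Kalman update, model-probability update). The two models differ only in assumed process noise, standing in for "stable" versus "changing" parameters; all numbers are invented, and real pharmacokinetic models are multidimensional.

        import numpy as np

        QS = np.array([0.01, 1.0]); R = 0.5         # assumed process/measurement noise
        T = np.array([[0.95, 0.05], [0.05, 0.95]])  # model transition probabilities
        mu = np.array([0.5, 0.5])                   # model probabilities
        x = np.zeros(2); p = np.ones(2)             # per-model mean and variance

        def imm_step(z):
            """One IMM cycle for measurement z; returns the combined estimate."""
            global mu, x, p
            c = T.T @ mu                            # predicted model probabilities
            w = (T * mu[:, None]) / c[None, :]      # mixing weights w[i, j]
            x0 = w.T @ x                            # mixed initial means
            p0 = (w * (p[:, None] + (x[:, None] - x0[None, :])**2)).sum(axis=0)
            pp = p0 + QS                            # Kalman predict (random-walk models)
            v = z - x0; s = pp + R; k = pp / s      # innovation, its variance, gain
            x = x0 + k * v
            p = (1 - k) * pp
            lik = np.exp(-0.5 * v**2 / s) / np.sqrt(2 * np.pi * s)
            mu = c * lik / (c * lik).sum()          # update model probabilities
            return float(mu @ x)                    # probability-weighted estimate

        print([round(imm_step(z), 2) for z in (0.1, 0.0, 2.5, 2.6)])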

  1. Variation, Repetition, And Choice

    PubMed Central

    Abreu-Rodrigues, Josele; Lattal, Kennon A; dos Santos, Cristiano V; Matos, Ricardo A

    2005-01-01

    Experiment 1 investigated the controlling properties of variability contingencies on choice between repeated and variable responding. Pigeons were exposed to concurrent-chains schedules with two alternatives. In the REPEAT alternative, reinforcers in the terminal link depended on a single sequence of four responses. In the VARY alternative, a response sequence in the terminal link was reinforced only if it differed from the n previous sequences (lag criterion). The REPEAT contingency generated low, constant levels of sequence variation whereas the VARY contingency produced levels of sequence variation that increased with the lag criterion. Preference for the REPEAT alternative tended to increase directly with the degree of variation required for reinforcement. Experiment 2 examined the potential confounding effects in Experiment 1 of immediacy of reinforcement by yoking the interreinforcer intervals in the REPEAT alternative to those in the VARY alternative. Again, preference for REPEAT was a function of the lag criterion. Choice between varying and repeating behavior is discussed with respect to obtained behavioral variability, probability of reinforcement, delay of reinforcement, and switching within a sequence. PMID:15828592

  2. Motherhood as a choice.

    PubMed

    Mcfadden, P

    1994-06-01

    The choice of motherhood for women and women's rights have been forbidden in law by men, in religious doctrines by men, and within the medical system by men. Women in poverty have little say in determining whether to have children or not. When choice is exercised for abortion, poor women have unsafe and illegal abortions, which can be life-threatening. Rich women have safer options. Women historically have allowed their rights to be eroded by gender inequality and patriarchal manipulation. The religious right and the Roman Catholic church have been allowed to speak and decide for women. Abortion rights are not about western influences, but about maternal mortality. The right to make choices about one's life is the fundamental premise of the universal rights of all human beings. African governments have signed the UN Convention on the Elimination of All Forms of Discrimination against Women, but the practice of human rights has not been implemented at the local and family level. Motherhood needs to be demystified. Motherhood is linked with the absence of personhood and bodily integrity. The rhetoric of moral obligations and the rights of the unborn child take precedence over the rights of women. The right of an African woman not to have children is not recognized in most African societies. The issue of AIDS creates an even more difficult milieu for women. The interests of the family and the interests of men overwhelm the interests of women to protect themselves. Motherhood is essential to validating one's heterosexuality and gaining stature, and females without a child are marginalized and unrecognized. Women whose babies do not survive are marginalized even further than barren women. Men derive power from women's birthing. The terminology of male power is replete with expressions such as "pregnant with promise" and "miscarriage of justice"; no one says "uterus envy." Male psychologists only recognize "penis envy." Men need children for purposes of property, lineage, and

  3. A simple robust and accurate a posteriori sub-cell finite volume limiter for the discontinuous Galerkin method on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael; Loubère, Raphaël

    2016-08-01

    In this paper we propose a simple, robust and accurate nonlinear a posteriori stabilization of the Discontinuous Galerkin (DG) finite element method for the solution of nonlinear hyperbolic PDE systems on unstructured triangular and tetrahedral meshes in two and three space dimensions. This novel a posteriori limiter, which has been recently proposed for the simple Cartesian grid case in [62], is able to resolve discontinuities at a sub-grid scale and is substantially extended here to general unstructured simplex meshes in 2D and 3D. It can be summarized as follows: At the beginning of each time step, an approximation of the local minimum and maximum of the discrete solution is computed for each cell, taking into account also the vertex neighbors of an element. Then, an unlimited discontinuous Galerkin scheme of approximation degree N is run for one time step to produce a so-called candidate solution. Subsequently, an a posteriori detection step checks the unlimited candidate solution at time t^(n+1) for positivity, absence of floating point errors and whether the discrete solution has remained within or at least very close to the bounds given by the local minimum and maximum computed in the first step. Elements that do not satisfy all the previously mentioned detection criteria are flagged as troubled cells. For these troubled cells, the candidate solution is discarded as inappropriate and consequently needs to be recomputed. Within these troubled cells the old discrete solution at the previous time t^n is scattered onto small sub-cells (N_s = 2N + 1 sub-cells per element edge), in order to obtain a set of sub-cell averages at time t^n. Then, a more robust second order TVD finite volume scheme is applied to update the sub-cell averages within the troubled DG cells from time t^n to time t^(n+1). The new sub-grid data at time t^(n+1) are finally gathered back into a valid cell-centered DG polynomial of degree N by using a classical conservative and higher order
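
    A schematic of just the troubled-cell detection step, as a 1D scalar Python toy. The arrays and tolerances are invented; the paper's method operates on DG polynomials over unstructured simplex meshes.

        import numpy as np

        def detect_troubled_cells(candidate, old, eps=1e-12):
            """Flag cells whose candidate solution at t^(n+1) violates a relaxed
            discrete maximum principle, positivity, or floating-point validity."""
            n = len(old)
            troubled = np.zeros(n, dtype=bool)
            for i in range(n):
                nb = old[max(0, i - 1): min(n, i + 2)]  # cell and its neighbors at t^n
                lo, hi = nb.min(), nb.max()
                slack = max(eps, 1e-3 * (hi - lo))      # "very close to the bounds"
                u = candidate[i]
                if (not np.isfinite(u)) or u <= 0.0 or u < lo - slack or u > hi + slack:
                    troubled[i] = True                  # recompute on sub-cells with TVD FV
            return troubled

        print(detect_troubled_cells(np.array([1.0, 5.0, 1.1]),
                                    np.array([1.0, 1.2, 1.1])))  # [False  True False]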

  4. Parental Voucher Enrollment Decisions: Choice within Choice in New Orleans

    ERIC Educational Resources Information Center

    Beabout, Brian R.; Cambre, Belinda M.

    2013-01-01

    Set in the context of a choice-saturated public school system, this study examines the school choice process of low-income parents who participated in Louisiana's 2008 voucher program. Based on semistructured interviews with 16 parents at 1 Catholic school, we report that spirituality, small class and school size, character/values,…

  5. Addiction: Choice or Compulsion?

    PubMed Central

    Henden, Edmund; Melberg, Hans Olav; Røgeberg, Ole Jørgen

    2013-01-01

    Normative thinking about addiction has traditionally been divided between, on the one hand, a medical model which sees addiction as a disease characterized by compulsive and relapsing drug use over which the addict has little or no control and, on the other, a moral model which sees addiction as a choice characterized by voluntary behavior under the control of the addict. Proponents of the former appeal to evidence showing that regular consumption of drugs causes persistent changes in the brain structures and functions known to be involved in the motivation of behavior. On this evidence, it is often concluded that becoming addicted involves a transition from voluntary, chosen drug use to non-voluntary compulsive drug use. Against this view, proponents of the moral model provide ample evidence that addictive drug use involves voluntary chosen behavior. In this article we argue that although they are right about something, both views are mistaken. We present a third model that neither rules out the view of addictive drug use as compulsive, nor that it involves voluntary chosen behavior. PMID:23966955

  6. Hybrid discrete choice models: Gained insights versus increasing effort.

    PubMed

    Mariel, Petr; Meyerhoff, Jürgen

    2016-10-15

    Hybrid choice models expand the standard models in discrete choice modelling by incorporating psychological factors as latent variables. They could therefore provide further insights into choice processes and underlying taste heterogeneity, but the costs of estimating these models often increase significantly. This paper aims at comparing the results from a hybrid choice model and a classical random parameter logit. The point of departure for this analysis is whether researchers and practitioners should add hybrid choice models to the suite of models they routinely estimate. Our comparison reveals, in line with the few prior studies, that hybrid models gain in efficiency by the inclusion of additional information. The choice between the two approaches, however, depends on the objective of the analysis. If disentangling preference heterogeneity is most important, the hybrid model seems preferable. If the focus is on predictive power, a standard random parameter logit model might be the better choice. Finally, we give recommendations for the appropriate use of hybrid choice models based on known principles of elementary scientific inference. PMID:27310534

  7. Putting School Choice in Place.

    ERIC Educational Resources Information Center

    Glenn, Charles L.

    1989-01-01

    School choice should be promoted only under conditions guaranteeing that costs will be outweighed by benefits. Implementing choice means developing an effective assignment policy, conducting parent surveys, providing for adequate staff involvement, committing to parent outreach, managing effects on individual schools, and setting up a…

  8. School Choice: To What End?

    ERIC Educational Resources Information Center

    Wagner, Tony

    1996-01-01

    Debunks two fantasies: the feasibility of a free-market educational system and the idea that greater choice automatically means better schools. Public education is too labor-intensive and undercapitalized to be profitable. Communities need "skunk works" schools of choice to do research and development and smaller, collaboratively managed schools…

  9. School Choice: Examining the Evidence.

    ERIC Educational Resources Information Center

    Rasell, Edith, Ed.; Rothstein, Richard, Ed.

    This book presents a summary of school-choice issues, and is organized around a 1992 seminar entitled "Choice: What Role in American Education?" Each part presents a set of conference papers, followed by discussants' remarks and excerpts from audience discussion. The introduction summarizes the papers' positions and conclusions. Participants…

  10. PATERNAL INFLUENCE ON CAREER CHOICE.

    ERIC Educational Resources Information Center

    WERTS, CHARLES E.

    FATHER'S OCCUPATION WAS COMPARED WITH SON'S CAREER CHOICE FOR A SAMPLE OF 76,015 MALE, COLLEGE FRESHMEN. RESULTS INDICATED THAT CERTAIN TYPES OF FATHERS' OCCUPATIONS WERE ASSOCIATED WITH SIMILAR TYPES OF CAREER CHOICES BY SONS. BOYS WHOSE FATHERS WERE IN SCIENTIFIC OCCUPATIONS (ENGINEERS, MILITARY OFFICERS, ARCHITECTS, BIOLOGISTS, CHEMISTS, AND…

  11. Preference Reversal in Multiattribute Choice

    ERIC Educational Resources Information Center

    Tsetsos, Konstantinos; Usher, Marius; Chater, Nick

    2010-01-01

    A central puzzle for theories of choice is that people's preferences between options can be reversed by the presence of decoy options (that are not chosen) or by the presence of other irrelevant options added to the choice set. Three types of reversal effect reported in the decision-making literature, the attraction, compromise, and similarity…

  12. The Supply Side of Choice

    ERIC Educational Resources Information Center

    Hill, Paul T.

    2005-01-01

    New school creation is key to success of choice. For the last two decades, the struggle over school choice has focused on freeing up parents to choose. It continues to this day, with growing success in the forms of public and private voucher programs, charter school laws in 40 states and the District of Columbia, and state and federal laws that…

  13. Contextual Explanations of School Choice

    ERIC Educational Resources Information Center

    Lauen, Douglas Lee

    2007-01-01

    Participation in school-choice programs has been increasing across the country since the early 1990s. While some have examined the role that families play in the school-choice process, research has largely ignored the role of social contexts in determining where a student attends school. This article improves on previous research by modeling the…

  14. School Choice with Chinese Characteristics

    ERIC Educational Resources Information Center

    Wu, Xiaoxin

    2012-01-01

    This paper explores the major characteristics of school choice in the Chinese context. It highlights the involvement of cultural and economic capital, such as choice fees, donations, prize-winning certificates and awards in gaining school admission, as well as the use of social capital in the form of "guanxi". The requirement for these resources…

  15. School Choice: A Report Card.

    ERIC Educational Resources Information Center

    Peterson, Paul E.

    1998-01-01

    Locates school choice's theoretical underpinnings in market theory and communitarianism. Explains contributions of magnet schools, charter schools, and voucher systems to the choice movement. Summarizes preliminary findings for voucher plans, highlighting minority participation, family and teacher satisfaction, student mobility, and college…

  16. The Globalisation of School Choice?

    ERIC Educational Resources Information Center

    Forsey, Martin, Ed.; Davies, Scott, Ed.; Walford, Geoffrey, Ed.

    2008-01-01

    "Which school should I choose for my child?" For many parents, this question is one of the most important of their lives. "School choice" is a slogan being voiced around the globe, conjuring images of a marketplace with an abundance of educational options. Those promoting educational choice also promise equality, social advantage, autonomy, and…

  17. College Choice in the Philippines

    ERIC Educational Resources Information Center

    Tan, Christine Joy

    2009-01-01

    This descriptive and correlational study examined the applicability of major U.S. college choice factors to Philippine high school seniors. A sample of 226 students from a private school in Manila completed the College Choice Survey for High School Seniors. Cronbach's alpha for the survey composite index was 0.933. The purposes of this…

  18. Religious Education and Religious Choice

    ERIC Educational Resources Information Center

    Hand, Michael

    2015-01-01

    According to the "religious choice case" for compulsory religious education, pupils have a right to be made aware of the religious and irreligious paths open to them and equipped with the wherewithal to choose between them. A familiar objection to this argument is that the idea of religious choice reduces religion to a matter of taste. I…

  19. Eye Movements in Risky Choice

    PubMed Central

    Hermens, Frouke; Matthews, William J.

    2015-01-01

    We asked participants to make simple risky choices while we recorded their eye movements. We built a complete statistical model of the eye movements and found very little systematic variation in eye movements over the time course of a choice or across the different choices. The only exceptions were finding more (of the same) eye movements when choice options were similar, and an emerging gaze bias in which people looked more at the gamble they ultimately chose. These findings are inconsistent with prospect theory, the priority heuristic, or decision field theory. However, the eye movements made during a choice have a large relationship with the final choice, and this is mostly independent from the contribution of the actual attribute values in the choice options. That is, eye movements tell us not just about the processing of attribute values but also are independently associated with choice. The pattern is simple—people choose the gamble they look at more often, independently of the actual numbers they see—and this pattern is simpler than predicted by decision field theory, decision by sampling, and the parallel constraint satisfaction model. © 2015 The Authors. Journal of Behavioral Decision Making published by John Wiley & Sons Ltd. PMID:27522985

  20. The logistics of choice.

    PubMed

    Killeen, Peter R

    2015-07-01

    The generalized matching law (GML) is reconstructed as a logistic regression equation that privileges no particular value of the sensitivity parameter, a. That value will often approach 1 due to the feedback that drives switching that is intrinsic to most concurrent schedules. A model of that feedback reproduced some features of concurrent data. The GML is a law only in the strained sense that any equation that maps data is a law. The machine under the hood of matching is in all likelihood the very law that was displaced by the Matching Law. It is now time to return the Law of Effect to centrality in our science. PMID:25988932
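
    In its common logarithmic form the generalized matching law reads log(B1/B2) = a*log(r1/r2) + log(b), with sensitivity a and bias b, so fitting it amounts to a linear (logistic-type) regression. A Python sketch on fabricated data:

        import numpy as np

        log_r = np.log(np.array([0.25, 0.5, 1.0, 2.0, 4.0]))  # reinforcer ratios r1/r2
        noise = np.random.default_rng(1).normal(0, 0.05, 5)
        log_B = 0.9 * log_r + 0.1 + noise                     # simulated behavior ratios

        a, log_b = np.polyfit(log_r, log_B, 1)                # slope = a, intercept = log b
        print(f"sensitivity a = {a:.2f}, bias b = {np.exp(log_b):.2f}")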

  1. Eye Movements in Strategic Choice

    PubMed Central

    Gächter, Simon; Noguchi, Takao; Mullett, Timothy L.

    2015-01-01

    In risky and other multiattribute choices, the process of choosing is well described by random walk or drift diffusion models in which evidence is accumulated over time to threshold. In strategic choices, level‐k and cognitive hierarchy models have been offered as accounts of the choice process, in which people simulate the choice processes of their opponents or partners. We recorded the eye movements in 2 × 2 symmetric games including dominance‐solvable games like prisoner's dilemma and asymmetric coordination games like stag hunt and hawk–dove. The evidence was most consistent with the accumulation of payoff differences over time: we found longer duration choices with more fixations when payoff differences were more finely balanced, an emerging bias to gaze more at the payoffs for the action ultimately chosen, and that a simple count of transitions between payoffs—whether or not the comparison is strategically informative—was strongly associated with the final choice. The accumulator models do account for these strategic choice process measures, but the level‐k and cognitive hierarchy models do not. © 2015 The Authors. Journal of Behavioral Decision Making published by John Wiley & Sons Ltd.
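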

  2. Motivational Basis of Choice in Three-Choice Decomposed Games

    ERIC Educational Resources Information Center

    McClintock, Charles G.; And Others

    1973-01-01

    One of the purposes of the present study was to determine the effectiveness of a three-choice situation that displays payoffs in a simple, direct and flexible manner for discriminating social motives. (Author/RK)

  3. Understanding Parameter Invariance in Unidimensional IRT Models

    ERIC Educational Resources Information Center

    Rupp, Andre A.; Zumbo, Bruno D.

    2006-01-01

    One theoretical feature that makes item response theory (IRT) models those of choice for many psychometric data analysts is parameter invariance, the equality of item and examinee parameters from different examinee populations or measurement conditions. In this article, using the well-known fact that item and examinee parameters are identical only…

  4. Economic analysis of the first 20 years of universal hepatitis B vaccination program in Italy: an a posteriori evaluation and forecast of future benefits.

    PubMed

    Boccalini, Sara; Taddei, Cristina; Ceccherini, Vega; Bechini, Angela; Levi, Miriam; Bartolozzi, Dario; Bonanni, Paolo

    2013-05-01

    Italy was one of the first countries in the world to introduce a routine vaccination program against HBV for newborns and 12-y-old children. From a clinical point of view, such a strategy was clearly successful. The objective of our study was to verify whether, at 20 y from its implementation, hepatitis B universal vaccination had positive effects also from an economic point of view. An a posteriori analysis evaluated the impact that the hepatitis B immunization program has had up to the present day. The implementation of vaccination brought an extensive reduction of the burden of hepatitis B-related diseases in the Italian population. As a consequence, the past and future savings due to clinical costs avoided are particularly high. We obtained a return on investment nearly equal to 1 from the National Health Service perspective, and a benefit-to-cost ratio slightly less than 1 from the Societal perspective, considering only the first 20 y from the start of the program. Over a longer time horizon, ROI and BCR values were positive (2.78 and 2.46, respectively). The break-even point was already achieved a few years ago for the NHS and for Society, and more and more money has been saved since then. The implementation of universal hepatitis B vaccination was very favorable during its first 20 y, and further benefits will become increasingly evident in the future. The hepatitis B vaccination program in Italy is a clear example of the great impact that universal immunization can provide in the medium to long term when health care authorities are wise enough to invest in prevention. PMID:23376840
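
    The economic endpoints here are simple ratios of savings to program costs; a sketch with placeholder figures (the study's actual cost inputs are not reproduced here):

        # Placeholder figures, not the study's data.
        program_cost = 1.0e9         # cumulative vaccination cost
        nhs_savings = 2.78e9         # clinical costs avoided, NHS perspective
        societal_savings = 2.46e9    # costs avoided, societal perspective

        roi = nhs_savings / program_cost       # return on investment
        bcr = societal_savings / program_cost  # benefit-to-cost ratio
        print(roi, bcr)  # values above 1 mean the program has passed break-even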

  5. SU-E-J-170: Beyond Single-Cycle 4DCT: Maximum a Posteriori (MAP) Reconstruction-Based Binning-Free Multicycle 4DCT for Lung Radiotherapy

    SciTech Connect

    Cheung, Y; Sawant, A; Hinkle, J; Joshi, S

    2014-06-01

    Purpose: Thoracic motion changes from cycle to cycle and day to day. Conventional 4DCT does not capture these cycle-to-cycle variations. We present initial results of a novel 4DCT reconstruction technique based on maximum a posteriori (MAP) reconstruction. The technique uses the same acquisition process (and therefore dose) as a conventional 4DCT in order to create a high spatiotemporal resolution cine CT that captures several breathing cycles. Methods: Raw 4DCT data were acquired from a lung cancer patient. The continuous 4DCT was reconstructed using the MAP algorithm, which uses the raw, time-stamped CT data to reconstruct images while simultaneously estimating deformation in the subject's anatomy. This framework incorporates physical effects such as hysteresis and is robust to detector noise and irregular breathing patterns. The 4D image is described in terms of a 3D reference image defined at one end of the hysteresis loop, and two deformation vector fields (DVFs) corresponding to inhale motion and exhale motion respectively. The MAP method uses all of the CT projection data and maximizes the log posterior in order to iteratively estimate a time-variant deformation vector field that describes the entire moving and deforming volume. Results: The MAP 4DCT yielded CT-quality images for multiple cycles corresponding to the entire duration of CT acquisition, unlike the conventional 4DCT, which only yielded a single cycle. Variations such as amplitude and frequency changes and baseline shifts were clearly captured by the MAP 4DCT. Conclusion: We have developed a novel, binning-free, parameterized 4DCT reconstruction technique that can capture cycle-to-cycle variations of respiratory motion. This technique provides an invaluable tool for respiratory motion management research. This work was supported by funding from the National Institutes of Health and VisionRT Ltd. Amit Sawant receives research funding from Varian Medical Systems, Vision RT and Elekta.
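
    A heavily simplified sketch of the 4D parameterization described (a reference image warped by an amplitude-scaled deformation vector field). The real MAP estimator iterates on raw time-stamped projection data, which is not shown; names, shapes, and values here are invented.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def deformed_image(ref, dvf, amplitude):
            """Warp a 2D reference image by amplitude * DVF, a toy stand-in for
            the time-variant deformation the MAP method estimates."""
            ny, nx = ref.shape
            yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
            coords = np.array([yy + amplitude * dvf[0], xx + amplitude * dvf[1]])
            return map_coordinates(ref, coords, order=1, mode="nearest")

        ref = np.zeros((64, 64)); ref[24:40, 24:40] = 1.0              # toy "anatomy"
        dvf = np.stack([3.0 * np.ones((64, 64)), np.zeros((64, 64))])  # inhale field
        inhale = deformed_image(ref, dvf, amplitude=1.0)               # one breathing phase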

  6. How do stereotypes influence choice?

    PubMed

    Chaxel, Anne-Sophie

    2015-05-01

    In the study reported here, I tracked one process through which stereotypes affect choice. The Implicit Association Test (IAT) and a measurement of predecisional information distortion were used to assess the influence of the association between male gender and career on the evaluation of information related to the job performance of stereotypical targets (male) and nonstereotypical targets (female). When the IAT revealed a strong association between male gender and career and the installed leader in the choice process was a stereotypical target, decision makers supported the leader with more proleader distortion; when the IAT revealed a strong association between male gender and career and the installed leader in the choice process was a nonstereotypical target, decision makers supported the trailer with less antitrailer distortion. A stronger association between male gender and career therefore resulted in an upward shift of the evaluation related to the stereotypical target (both as a trailer and a leader), which subsequently biased choice. PMID:25749702

  7. Connecting cognition and consumer choice.

    PubMed

    Bartels, Daniel M; Johnson, Eric J

    2015-02-01

    We describe what can be gained from connecting cognition and consumer choice by discussing two contexts ripe for interaction between the two fields. The first, context effects on choice, has already been addressed by cognitive science, yielding insights about cognitive process, but there is promise for more interaction. The second is learning and representation in choice, where relevant theories in cognitive science could be informed by consumer choice and, in return, could pose and answer new questions. We conclude by discussing how these two fields of research stand to benefit from more interaction, citing examples of how interfaces of cognitive science with other fields have been illuminating for theories of cognition. PMID:25527275

  8. Evoked emotions predict food choice.

    PubMed

    Dalenberg, Jelle R; Gutjar, Swetlana; Ter Horst, Gert J; de Graaf, Kees; Renken, Remco J; Jager, Gerry

    2014-01-01

    In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over liking scores alone. Previous research has shown that liking measures correlate with choice. However, liking is not a strong predictor of food choice in real-life environments. Therefore, the focus within recent studies has shifted towards emotion-profiling methods that can successfully discriminate between products that are equally liked. However, it is unclear how well scores from emotion-profiling methods predict actual food choice and/or consumption. To test this, we proposed to decompose emotion scores into valence and arousal scores using Principal Component Analysis (PCA) and to apply Multinomial Logit Models (MLM) to estimate food choice using liking, valence, and arousal as possible predictors. For this analysis, we used an existing data set comprising liking and food-evoked emotion scores from 123 participants, who rated 7 unlabeled breakfast drinks. Liking scores were measured using a 100-mm visual analogue scale, while food-evoked emotions were measured using 2 existing emotion-profiling methods: a verbal and a non-verbal method (EsSense Profile and PrEmo, respectively). After 7 days, participants were asked to choose 1 breakfast drink from the experiment to consume during breakfast in a simulated restaurant environment. Cross validation showed that we were able to correctly predict individualized food choice (1 out of 7 products) for over 50% of the participants. This number increased to nearly 80% when looking at the top 2 candidates. Model comparisons showed that evoked emotions better predict food choice than perceived liking alone. However, the strongest predictive strength was achieved by the combination of evoked emotions and liking. Furthermore, we showed that non-verbal food-evoked emotion scores more accurately predict food choice than verbal food-evoked emotion scores. PMID:25521352
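
    A sketch of the analysis pipeline as described (PCA to reduce emotion ratings to valence/arousal-like components, then a multinomial logistic model of choice), on random stand-in data rather than the study's ratings; the paper's MLM and cross-validation setup is simplified here.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        emotions = rng.normal(size=(123, 10))   # stand-in emotion-profile ratings
        liking = rng.normal(size=(123, 1))      # stand-in liking scores
        choice = rng.integers(0, 7, size=123)   # chosen breakfast drink (1 of 7)

        va = PCA(n_components=2).fit_transform(emotions)  # valence/arousal components
        X = np.hstack([liking, va])                       # liking + evoked emotions
        model = LogisticRegression(max_iter=1000).fit(X, choice)
        print(model.score(X, choice))           # in-sample accuracy of choice prediction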

  9. Modeling one-choice and two-choice driving tasks

    PubMed Central

    Ratcliff, Roger

    2015-01-01

    An experiment is presented in which subjects were tested on both one-choice and two-choice driving tasks and on non-driving versions of them. Diffusion models for one- and two-choice tasks were successful in extracting model-based measures from the response time and accuracy data. These include measures of the quality of the information from the stimuli that drove the decision process (drift rate in the model), the time taken up by processes outside the decision process and, for the two-choice model, the speed/accuracy decision criteria that subjects set. Drift rates were only marginally different between the driving and non-driving tasks, indicating that nearly the same information was used in the two kinds of tasks. The tasks differed in the time taken up by other processes, reflecting the difference between them in response processing demands. Drift rates were significantly correlated across the two two-choice tasks showing that subjects that performed well on one task also performed well on the other task. Nondecision times were correlated across the two driving tasks, showing common abilities on motor processes across the two tasks. These results show the feasibility of using diffusion modeling to examine decision making in driving and so provide for a theoretical examination of factors that might impair driving, such as extreme aging, distraction, sleep deprivation, and so on. PMID:25944448
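
    A minimal simulation of the two-choice diffusion process underlying such fits; the drift rate, boundary separation, and non-decision time below are invented for illustration.

        import numpy as np

        def diffusion_trial(drift=0.2, bound=1.0, ndt=0.3, dt=0.001, sigma=1.0, rng=None):
            """Simulate one two-choice diffusion trial; returns (choice, RT in s)."""
            if rng is None:
                rng = np.random.default_rng()
            x, t = 0.0, 0.0
            while abs(x) < bound:                    # accumulate noisy evidence
                x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
                t += dt
            return (1 if x > 0 else 0), t + ndt      # add non-decision time

        rng = np.random.default_rng(2)
        trials = [diffusion_trial(rng=rng) for _ in range(1000)]
        print(np.mean([c for c, _ in trials]), np.mean([t for _, t in trials]))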

  10. Developing a concept of choice.

    PubMed

    Kushnir, Tamar

    2012-01-01

    Our adult concept of choice is not a simple idea, but rather a complex set of beliefs about the causes of actions. These beliefs are situation-, individual- and culture-dependent, and are thus likely constructed through social learning. This chapter takes a rational constructivist approach to examining the development of a concept of choice in young children. Initially, infants combine assumptions of rational agency with their capacity for statistical inference to reason about alternative possibilities for, and constraints on, action. Preschoolers build on this basic understanding by integrating domain-specific causal knowledge of physical, biological, and psychological possibility into their appraisal of their own and others' ability to choose. However, preschoolers continue to view both psychological and social motivations as constraints on choice--for example, stating that one cannot choose to harm another, or to act against personal desires. It is not until later that children share the adult belief that choice mediates between conflicting motivations for action. The chapter concludes by suggesting avenues for future research--to better characterize conceptual changes in beliefs about choice, and to understand how such beliefs arise from children's everyday experiences. PMID:23205412

  11. Semiparametric Thurstonian Models for Recurrent Choices: A Bayesian Analysis

    ERIC Educational Resources Information Center

    Ansari, Asim; Iyengar, Raghuram

    2006-01-01

    We develop semiparametric Bayesian Thurstonian models for analyzing repeated choice decisions involving multinomial, multivariate binary or multivariate ordinal data. Our modeling framework has multiple components that together yield considerable flexibility in modeling preference utilities, cross-sectional heterogeneity and parameter-driven…

  12. Voice and choice by delegation.

    PubMed

    van de Bovenkamp, Hester; Vollaard, Hans; Trappenburg, Margo; Grit, Kor

    2013-02-01

    In many Western countries, options for citizens to influence public services are increased to improve the quality of services and democratize decision making. Possibilities to influence are often cast into Albert Hirschman's taxonomy of exit (choice), voice, and loyalty. In this article we identify delegation as an important addition to this framework. Delegation gives individuals the chance to practice exit/choice or voice without all the hard work that is usually involved in these options. Empirical research shows that not many people use their individual options of exit and voice, which could lead to inequality between users and nonusers. We identify delegation as a possible solution to this problem, using Dutch health care as a case study to explore this option. Notwithstanding various advantages, we show that voice and choice by delegation also entail problems of inequality and representativeness. PMID:23052688

  13. Does health affect portfolio choice?

    PubMed

    Love, David A; Smith, Paul A

    2010-12-01

    A number of recent studies find that poor health is empirically associated with a safer portfolio allocation. It is difficult to say, however, whether this relationship is truly causal. Both health status and portfolio choice are influenced by unobserved characteristics such as risk attitudes, impatience, information, and motivation, and these unobserved factors, if not adequately controlled for, can induce significant bias in the estimates of asset demand equations. Using the 1992-2006 waves of the Health and Retirement Study, we investigate how much of the connection between health and portfolio choice is causal and how much is due to the effects of unobserved heterogeneity. Accounting for unobserved heterogeneity with fixed effects and correlated random effects models, we find that health does not appear to significantly affect portfolio choice among single households. For married households, we find a small effect (about 2-3 percentage points) from being in the lowest of five self-reported health categories. PMID:19937612

  14. Evaluation of uncertainties in 90Sr-body-burdens obtained by whole-body count: application of Bayes' rule to derive detection limits by analysis of a posteriori data

    SciTech Connect

    Kozheurov, V. P.; Zalyapin, V. I.; Shagina, N. B.; Tokareva, E. E.; Degteva, M. O.; Tolstykh, E. I.; Anspaugh, L. R.; Napier, Bruce A.

    2002-10-01

    A whole body counter (WBC) designed to measure bremsstrahlung from 90Y, the short-lived daughter of 90Sr, has been used since 1974 to measure 90Sr-body burdens in residents along the Techa River, which was contaminated by releases from the Mayak Production Association. Bayes' rule has been applied to the a posteriori WBC data in order to derive the uncertainties associated with the data: The lower limit of reliable detection is 2.0 kBq and the uncertainty of routine measurements is 1.6 kBq.
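
    The abstract reports only the derived limits; as a generic illustration of applying Bayes' rule to a single counting measurement, a Python sketch in which the grid, the flat prior, the Gaussian likelihood, and the reading are all assumed (only the 1.6 kBq uncertainty comes from the study):

        import numpy as np

        burden = np.linspace(0, 10, 1001)   # candidate 90Sr body burdens, kBq
        prior = np.ones_like(burden)        # flat prior (assumed)

        measured, sigma = 2.5, 1.6          # hypothetical WBC reading; 1.6 kBq uncertainty

        likelihood = np.exp(-0.5 * ((measured - burden) / sigma) ** 2)
        posterior = prior * likelihood      # Bayes' rule, up to normalization
        posterior /= posterior.sum() * (burden[1] - burden[0])
        print(burden[np.argmax(posterior)]) # posterior mode of the body burden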

  15. Evaluation of uncertainties in 90Sr-body-burdens obtained by whole-body count: application of Bayes' rule to derive detection limits by analysis of a posteriori data.

    PubMed

    Kozheurov, V P; Zalyapin, V I; Shagina, N B; Tokareva, E E; Degteva, M O; Tolstykh, E I; Anspaugh, L R; Napier, B A

    2002-10-01

    A whole body counter (WBC) designed to measure bremsstrahlung from 90Y, the short-lived daughter of 90Sr, has been used since 1974 to measure 90Sr-body burdens in residents along the Techa River, which was contaminated by releases from the Mayak Production Association. Bayes' rule has been applied to the a posteriori WBC data in order to derive the uncertainties associated with the data: The lower limit of reliable detection is 2.0 kBq and the uncertainty of routine measurements is 1.6 kBq. PMID:12361332

  16. From School Choice to Educational Choice. Education Outlook. No. 3

    ERIC Educational Resources Information Center

    Hess, Frederick M.; Meeks, Olivia; Manno, Bruno V.

    2011-01-01

    In recent decades, many calls for transformative change in American schooling have advocated school choice. Yet these calls themselves have too often accepted the orthodoxies of the nineteenth-century schoolhouse. In the new book "Customized Schooling: Beyond Whole-School Reform" (Harvard Education Press, 2011), the authors worked with the Walton…

  17. Public School Choice: National Trends and Initiatives.

    ERIC Educational Resources Information Center

    New Jersey State Dept. of Education, Trenton.

    This report offers a framework and conceptual base for a statewide discussion of public school choice. A review of choice activities in other states and an analysis of typical components in a choice program are provided. Organized into four main chapters, the report starts with an explanation of the concept of choice followed by a review of the…

  18. Florida CHOICES Counselor's Manual 1983-84.

    ERIC Educational Resources Information Center

    Glenn, Thomas R.; Rogers, Zelda

    This manual for counselors is intended for use with CHOICES, a computer assisted career guidance system. Following a brief introduction to CHOICES, the structure (in chart form) and an overview of the contents of the CHOICES system are given. Chapter 2 focuses on counseling clients, emphasizing the three-step helping process, i.e., preCHOICES, to…

  19. Florida CHOICES Counselor Manual, 1982-1983.

    ERIC Educational Resources Information Center

    Thomas, Glenn R.; And Others

    This manual is intended to acquaint counselors with CHOICES, a computer-assisted career information program. Following an overview of the CHOICES system, and a brief discussion of the usefulness of the program for counselors, the three-step CHOICES process is presented: Step 1, the Initial Interview (pre-CHOICES), involves determining student…

  20. More Choice Isn't Always Better

    ERIC Educational Resources Information Center

    Schuller, Tom

    2012-01-01

    Choice is important to everyone, for one's identity as well as one's material satisfaction. Everyone has choices, but even the head of state's choices are constrained. In recent years choice has risen up the political agenda in the UK. It has become a key component of the drive to reform public services such as health and education. The…

  1. Minnesota's Public School Choice Options.

    ERIC Educational Resources Information Center

    Colopy, Kelly W.; Tarr, Hope C.

    This document presents findings of a study that identified patterns of use among a broad array of open-enrollment options available to elementary and secondary students in Minnesota. During the period 1985-91, the Minnesota legislature passed several pieces of new legislation designed to: (1) increase the educational choices available to students,…

  2. Fresh Perspectives on School Choice

    ERIC Educational Resources Information Center

    Ferrero, David J.

    2004-01-01

    School choice advocacy is dominated by perspectives that reflect a tendency to regard public schooling as a private service commodity. In recent years, numerous works of Anglo-American political philosophy, sociology and legal theory have attempted to restore a conception of public schooling as an institution that cultivates civic virtue.…

  3. No Easy Road to Choice

    ERIC Educational Resources Information Center

    Robelen, Erik W.

    2008-01-01

    In the new educational landscape of New Orleans--where public school choice is a fundamental element--pounding the pavement to drum up students has become a familiar pursuit. Proponents say a central idea of the education system that has emerged since Hurricane Katrina hit in 2005 is to provide a diverse array of high-quality school options, with…

  4. Accommodations for Multiple Choice Tests

    ERIC Educational Resources Information Center

    Trammell, Jack

    2011-01-01

    Students with learning or learning-related disabilities frequently struggle with multiple choice assessments due to difficulty discriminating between items, filtering out distracters, and framing a mental best answer. This Practice Brief suggests accommodations and strategies that disability service providers can utilize in conjunction with…

  5. A Choice for the Chosen.

    ERIC Educational Resources Information Center

    Rabkin, Jeremy

    1999-01-01

    Examines reasons for opposition to school-choice programs by the American Jewish Congress and the Anti-Defamation League of B'nai Brith. There is skepticism that more Jewish families would send their children to separate schools, and there is concern that government aid would foster a more religious tone in the country. Suggests that these…

  6. Moral Dimensions of Curriculum Choices.

    ERIC Educational Resources Information Center

    MacMillan, C. J. B.

    This paper argues that just as subject matter is inherently value-laden, educators should not feel trepidation about morally justifying their criteria for choosing curricula to be taught in the classroom. It recommends that true "moral" choices should be made on the bases of relevance to student experiences; moral propriety of subject matter…

  7. Self-Determination and Choice

    ERIC Educational Resources Information Center

    Wehmeyer, Michael L.; Abery, Brian H.

    2013-01-01

    Promoting self-determination and choice opportunities for people with intellectual and developmental disabilities has become best practice in the field. This article reviews the research and development activities conducted by the authors over the past several decades and provides a synthesis of the knowledge in the field pertaining to efforts to…

  8. Coming Around on School Choice.

    ERIC Educational Resources Information Center

    Viteritti, Joseph P.

    2002-01-01

    Asserts that opponent's predictions that school choice would result in mass exodus of students and a disparate impact on public schools have failed to materialize. Argues that disadvantaged students, especially blacks, in inner-city schools are the principal beneficiaries of voucher programs. (Contains 13 references.) (PKP)

  9. "America's Choice" Taps Profit Motive

    ERIC Educational Resources Information Center

    Trotter, Andrew

    2004-01-01

    In this article, the author features the America's Choice School Design, a school improvement program that has enlisted 547 schools in 16 states in its brand of comprehensive reform, and describes the program's move to loosen its nonprofit moorings and change to a for-profit company. The purpose of the move to for-profit status is to raise capital…

  10. How to make moral choices.

    PubMed

    Chambers, David W

    2011-01-01

    Moral choice is committing to act for what one believes is right and good. It is less about what we know than about defining who we are. Three cases typical of those used in the principles or dilemmas approach to teaching ethics are presented. But they are analyzed using an alternative approach based on seven moral choice heuristics--approaches proven to increase the likelihood of locating the best course of action. The approaches suggested for analyzing moral choice situations include: (a) identify the outcomes of available alternative courses of action; (b) rule out strategies that involve deception, coercion, reneging on promises, collusion, and contempt for others; (c) be authentic (do not deceive yourself); (d) relate to others on a human basis; (e) downplay rational justifications; (f) match the solution to the problem, not the other way around; (g) execute on the best solution, do not hold out for the perfect one; and (h) take action to improve the choice after it has been made. PMID:22416620

  11. Denver Makes a Fairer Choice

    ERIC Educational Resources Information Center

    Teske, Paul; Yettick, Holly; Ely, Todd; Klute, Mary

    2015-01-01

    Denver Public Schools traditional and charter schools combined to create a single system that allowed all students to indicate their school choice preferences, replacing a system of more than 60 different selection processes. The new system also gave families a wealth of information regarding school quality. A study of the new system found it was…

  12. Paradigmatic Choices in Evaluation Methodology.

    ERIC Educational Resources Information Center

    Heilman, John G.

    1980-01-01

    The choice between experimental research or process-oriented oriented research as the only valid paradigm of evaluation research is rejected. It is argued that there is a middle ground. Suggestions are made for mixing the two approaches to suit particular research settings. (Author/GK)

  13. Choices in Cataloging Electronic Journals

    ERIC Educational Resources Information Center

    Leathem, Cecilia A.

    2005-01-01

    Libraries and catalogers face choices in the treatment of the growing collections of electronic journals. Policies issued by CONSER and the Library of Congress allow libraries to edit existing print records to accommodate information pertaining to the electronic versions (single record option) or to create new records for them. The discussion…

  14. Educational Choice and Educational Space

    ERIC Educational Resources Information Center

    Thomson, Kathleen Sonia

    2016-01-01

    This dissertation entitled "Educational choice and educational space" aims to explore the confluence of constructed space and geographic space using a supply-side context for New Zealand's public school system of quasi-open enrollment. In Part I, New Zealand's state and state-integrated school system across four urban areas is analyzed…

  15. Positive Adolescent Choices Training (PACT).

    ERIC Educational Resources Information Center

    Hammond, W. Rodney; And Others

    Positive Adolescent Choices Training (PACT) is a health promotion program providing violence prevention programming targeted at black youth, at high risk for becoming either perpetrators or victims of violence. Conducted by the School of Professional Psychology of Wright State University in Dayton, Ohio, in cooperation with Dayton Public Schools,…

  16. Impulsive Choice and Workplace Safety: A New Area of Inquiry for Research in Occupational Settings

    ERIC Educational Resources Information Center

    Reynolds, Brady; Schiffbauer, Ryan M.

    2004-01-01

    A conceptual argument is presented for the relevance of behavior-analytic research on impulsive choice to issues of occupational safety and health. Impulsive choice is defined in terms of discounting, which is the tendency for the value of a commodity to decrease as a function of various parameters (e.g., having to wait or expend energy to receive…
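
    Discounting of this kind is commonly modeled with a hyperbolic form V = A / (1 + kD) for amount A at delay D; a quick Python illustration with an invented discount rate k:

        def hyperbolic_value(amount, delay, k=0.05):
            """Subjective value of `amount` after `delay` under hyperbolic discounting."""
            return amount / (1.0 + k * delay)

        # A larger k devalues delayed outcomes faster -- more "impulsive" choice.
        print(hyperbolic_value(100, 0), hyperbolic_value(100, 30))  # 100.0 vs 40.0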

  17. A priori and a posteriori approaches for finding genes of evolutionary interest in non-model species: osmoregulatory genes in the kidney transcriptome of the desert rodent Dipodomys spectabilis (banner-tailed kangaroo rat).

    PubMed

    Marra, Nicholas J; Eo, Soo Hyung; Hale, Matthew C; Waser, Peter M; DeWoody, J Andrew

    2012-12-01

    One common goal in evolutionary biology is the identification of genes underlying adaptive traits of evolutionary interest. Recently next-generation sequencing techniques have greatly facilitated such evolutionary studies in species otherwise depauperate of genomic resources. Kangaroo rats (Dipodomys sp.) serve as exemplars of adaptation in that they inhabit extremely arid environments, yet require no drinking water because of ultra-efficient kidney function and osmoregulation. As a basis for identifying water conservation genes in kangaroo rats, we conducted a priori bioinformatics searches in model rodents (Mus musculus and Rattus norvegicus) to identify candidate genes with known or suspected osmoregulatory function. We then obtained 446,758 reads via 454 pyrosequencing to characterize genes expressed in the kidney of banner-tailed kangaroo rats (Dipodomys spectabilis). We also determined candidates a posteriori by identifying genes that were overexpressed in the kidney. The kangaroo rat sequences revealed nine different a priori candidate genes predicted from our Mus and Rattus searches, as well as 32 a posteriori candidate genes that were overexpressed in kidney. Mutations in two of these genes, Slc12a1 and Slc12a3, cause human renal diseases that result in the inability to concentrate urine. These genes are likely key determinants of physiological water conservation in desert rodents. PMID:22841684

  18. A priori and a posteriori investigations for developing large eddy simulations of multi-species turbulent mixing under high-pressure conditions

    SciTech Connect

    Borghesi, Giulio; Bellan, Josette

    2015-03-15

    , and the filtered species mass fluxes. Improved models were developed based on a scale-similarity approach and were found to perform considerably better than the classical ones. These improved models were also assessed in an a posteriori study. Different combinations of the standard models and the improved ones were tested. At the relatively small Reynolds numbers achievable in DNS and at the relatively small filter widths used here, the standard models for the filtered pressure, the filtered heat flux, and the filtered species fluxes were found to yield accurate results for the morphology of the large-scale structures present in the flow. Analysis of the temporal evolution of several volume-averaged quantities representative of the mixing layer growth, and of the cross-stream variation of homogeneous-plane averages and second-order correlations, as well as of visualizations, indicated that the models performed equivalently for the conditions of the simulations. The expectation is that at the much larger Reynolds numbers and much larger filter widths used in practical applications, the improved models will have much more accurate performance than the standard one.
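
    The scale-similarity idea can be stated compactly: model an unclosed subgrid flux of two filtered fields by the analogous resolved-scale flux evaluated with a test filter. A 1D periodic Python toy (filter width and fields invented; the study's models close the filtered pressure, heat flux, and species fluxes in 3D):

        import numpy as np

        def tophat_filter(u, w=5):
            """Top-hat test filter of width w on a 1D periodic field."""
            k = np.ones(w) / w
            return np.convolve(np.r_[u[-(w // 2):], u, u[:w // 2]], k, mode="valid")

        def scale_similarity_flux(u, v):
            """Model the subgrid flux tau ~ bar(uv) - bar(u)bar(v)."""
            return tophat_filter(u * v) - tophat_filter(u) * tophat_filter(v)

        x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        u = np.sin(x) + 0.2 * np.sin(8 * x)          # "resolved" velocity field
        print(scale_similarity_flux(u, u)[:4])       # modeled subgrid stress samples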

  19. A Probabilistic, Dynamic, and Attribute-wise Model of Intertemporal Choice

    PubMed Central

    Dai, Junyi; Busemeyer, Jerome R.

    2014-01-01

    Most theoretical and empirical research on intertemporal choice assumes a deterministic and static perspective, leading to the widely adopted delay discounting models. As a form of preferential choice, however, intertemporal choice may be generated by a stochastic process that requires some deliberation time to reach a decision. We conducted three experiments to investigate how choice and decision time varied as a function of manipulations designed to examine the delay duration effect, the common difference effect, and the magnitude effect in intertemporal choice. The results, especially those associated with the delay duration effect, challenged the traditional deterministic and static view and called for alternative approaches. Consequently, various static or dynamic stochastic choice models were explored and fit to the choice data, including alternative-wise models derived from the traditional exponential or hyperbolic discount function and attribute-wise models built upon comparisons of direct or relative differences in money and delay. Furthermore, for the first time, dynamic diffusion models, such as those based on decision field theory, were also fit to the choice and response time data simultaneously. The results revealed that the attribute-wise diffusion model with direct differences, power transformations of objective value and time, and varied diffusion parameter performed the best and could account for all three intertemporal effects. In addition, the empirical relationship between choice proportions and response times was consistent with the prediction of diffusion models and thus favored a stochastic choice process for intertemporal choice that requires some deliberation time to make a decision. PMID:24635188
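
    As a concrete illustration of the best-performing model class, the sketch below simulates an attribute-wise diffusion process for intertemporal choice: the drift is the difference between power-transformed money amounts minus the difference between power-transformed delays, and evidence accumulates noisily until it reaches a decision boundary. All parameter names and values are illustrative assumptions, not the authors' estimates.

```python
# A minimal sketch of an attribute-wise diffusion model with direct
# differences and power transformations of value and time. Parameter
# values are illustrative, not fitted estimates from the study.
import numpy as np

def simulate_choice(x_ll, t_ll, x_ss, t_ss,
                    w_money=1.0, w_time=0.5, a=0.7, b=0.7,
                    threshold=1.0, dt=0.01, sigma=1.0, rng=None):
    """Return (choice, rt): choice 1 = larger-later, 0 = smaller-sooner."""
    rng = rng or np.random.default_rng()
    # Drift: attribute-wise comparison of transformed amounts and delays
    drift = w_money * (x_ll**a - x_ss**a) - w_time * (t_ll**b - t_ss**b)
    evidence, t = 0.0, 0.0
    while abs(evidence) < threshold:          # accumulate to a boundary
        evidence += drift * dt + sigma * np.sqrt(dt) * rng.normal()
        t += dt
    return (1 if evidence > 0 else 0), t

rng = np.random.default_rng(2)
sims = [simulate_choice(30, 7, 20, 0, rng=rng) for _ in range(1000)]
choices, rts = zip(*sims)
print(f"P(larger-later) = {np.mean(choices):.2f}, mean RT = {np.mean(rts):.2f} s")
```

    A property of this model family worth noting: trials with weaker drift produce both less consistent choices and longer deliberation times, which is the kind of choice-proportion/response-time relationship the abstract reports.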

  20. Diet Selection Fact Sheet - Choices, Choices, Choices- Interpreting the Pasture "Salad Bar"

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This fact sheet summarizes some of the current knowledge regarding grazing behavior. A grazing ruminant is presented with a smorgasbord of choices when turned out onto a pasture. However, little is understood on how selection decisions are made by the animal. Grazing behavior research is attempting...

  1. Emotional arousal predicts intertemporal choice.

    PubMed

    Lempert, Karolina M; Johnson, Eli; Phelps, Elizabeth A

    2016-08-01

    People generally prefer immediate rewards to rewards received after a delay, often even when the delayed reward is larger. This phenomenon is known as temporal discounting. It has been suggested that preferences for immediate rewards may be due to their being more concrete than delayed rewards. This concreteness may evoke an enhanced emotional response. Indeed, manipulating the representation of a future reward to make it more concrete has been shown to heighten the reward's subjective emotional intensity, making people more likely to choose it. Here the authors use an objective measure of arousal, pupil dilation, to investigate whether emotional arousal mediates the influence of delayed reward concreteness on choice. They recorded pupil dilation responses while participants made choices between immediate and delayed rewards. They manipulated concreteness through time interval framing: delayed rewards were presented either with the date on which they would be received (e.g., "$30, May 3"; DATE condition, more concrete) or in terms of delay to receipt (e.g., "$30, 7 days"; DAYS condition, less concrete). Contrary to prior work, participants were not overall more patient in the DATE condition. However, there was individual variability in response to time framing, and this variability was predicted by differences in pupil dilation between conditions. Emotional arousal increased as the subjective value of delayed rewards increased, and predicted choice of the delayed reward on each trial. This study advances our understanding of the role of emotion in temporal discounting. PMID:26882337

  2. Quickly making the correct choice.

    PubMed

    Brenner, Eli; Smeets, Jeroen B J

    2015-08-01

    In daily life, unconscious choices guide many of our ongoing actions. Such choices need to be made quickly, because the options change as the action progresses. We confirmed that people make reasonable choices when they have to decide quickly between two alternatives, and we studied the basis of such decisions. The task was to tap with a finger on as many targets as possible within 2 min. A new target appeared after every tap, sometimes accompanied by a second target that was easier to hit. When there was only one target, subjects had to find the right balance between speed and accuracy. When there were two targets, they also had to choose between them. We examined to what extent subjects switched to the target that was easier to hit when it appeared some time after the original one. Subjects generally switched to the easier target whenever doing so would help them hit more targets within the 2-min session. This was so irrespective of whether the different delays were presented in separate sessions or were interleaved within one session. Whether or not they switched did not depend on how successful they were at hitting targets on earlier attempts, but it did depend on the position of the finger at the moment that the easy target appeared. We conclude that people have continuous access to reasonable estimates of how long various movement options would take and of how precise the endpoints are likely to be, given the instantaneous circumstances. PMID:25913027

  3. Street Choice Logit Model for Visitors in Shopping Districts

    PubMed Central

    Kawada, Ko; Yamada, Takashi; Kishimoto, Tatsuya

    2014-01-01

    In this study, we propose two models for predicting people’s activity. The first is a pedestrian distribution prediction (or postdiction) model based on multiple regression analysis, using space syntax indices of the urban fabric and people distribution data obtained from a field survey. The second is a street choice model for visitors based on a multinomial logit model. We performed a field questionnaire survey to investigate the strolling routes of 46 visitors and obtained a total of 1211 street choices along their routes. We proposed a utility function, defined as a weighted sum of space syntax indices and other indices, and estimated the weights by maximum likelihood. These models consider street networks, distance from destination, direction of the street choice, and other spatial features (numbers of pedestrians, cars, and shops, and elevation). The first model explains the characteristics of the streets where many people tend to walk or stay. The second model explains the mechanism underlying the street choice of visitors and clarifies the differences in the weights of street choice parameters among the various attributes, such as gender, existence of destinations, number of people, etc. For all the attributes considered, the influences of DISTANCE and DIRECTION are strong. On the other hand, the influences of Int.V, SHOPS, CARS, ELEVATION, and WIDTH differ by attribute. People with defined destinations tend to choose streets that “have more shops, and are wider and lower”. In contrast, people with undefined destinations tend to choose streets of high Int.V. The choice of males is affected by Int.V, SHOPS, and WIDTH (positively) and CARS (negatively). Females prefer streets that have many shops, and couples tend to choose downhill streets. The behavior of individual visitors is affected by all variables. The behavior of people visiting in groups is affected by SHOPS and WIDTH (positive). PMID:25379274
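
    To make the multinomial logit mechanics concrete, the sketch below computes softmax choice probabilities from a weighted sum of street attributes and estimates the weights by maximum likelihood. The feature count, the toy data, and the data-generating weights are illustrative assumptions, not values from the study.

```python
# A minimal sketch of a multinomial logit street-choice model, assuming a
# linear utility V_i = beta . x_i over street attributes. Features and
# data are illustrative placeholders, not the study's survey data.
import numpy as np
from scipy.optimize import minimize

def choice_probabilities(beta, X):
    """X: (n_streets, n_features) attribute matrix for one decision point."""
    v = X @ beta
    v -= v.max()                      # numerical stability
    expv = np.exp(v)
    return expv / expv.sum()

def neg_log_likelihood(beta, choice_sets, chosen):
    """choice_sets: one attribute matrix per observed decision;
    chosen: index of the street actually taken in each set."""
    return -sum(np.log(choice_probabilities(beta, X)[k])
                for X, k in zip(choice_sets, chosen))

# Toy data: 50 decisions, each among 4 candidate streets with 3 features
rng = np.random.default_rng(0)
true_beta = np.array([1.0, -0.5, 0.2])
choice_sets = [rng.normal(size=(4, 3)) for _ in range(50)]
chosen = [int(np.argmax(X @ true_beta + rng.gumbel(size=4)))  # MNL-consistent
          for X in choice_sets]

result = minimize(neg_log_likelihood, x0=np.zeros(3),
                  args=(choice_sets, chosen), method="BFGS")
print("estimated weights:", result.x)   # should approximate true_beta
```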

  4. Choices in health care: the European experience.

    PubMed

    Thomson, Sarah; Dixon, Anna

    2006-07-01

    This paper examines some policies to increase or restrict consumer choice in western European health systems as regards four decisions: choice between public and private insurance; choice of public insurance fund; choice of first contact care provider and choice of hospital. Choice between public and private insurance is limited and arose for historical reasons in Germany. Owing to significant constraints, few people choose the private option. Choice of public insurance fund tends to be exercised by younger and healthier people, the decision to change fund is mainly associated with price and, despite complex risk adjustment mechanisms, it has led to risk selection by funds. Choice of first contact care provider is widespread in Europe. In countries where choice has traditionally been restricted, reforms aim to make services more accessible and convenient to patients. Reforms to restrict direct access to specialists aim to reduce unnecessary and inappropriate care but have been unpopular with the public and professionals. Patients' take up of choice of hospital has been surprisingly low, given their stated willingness to travel. Only where choice is actively supported in the context of long waiting times is take up higher. The objectives, implementation and impact of policies about choice have varied across western Europe. Culture and embedded norms may be significant in determining the extent to which patients exercise choice. PMID:16824264

  5. Grading School Choice: Evaluating School Choice Programs by the Friedman Gold Standard. School Choice Issues in Depth

    ERIC Educational Resources Information Center

    Enlow, Robert C.

    2008-01-01

    In 2004, The Friedman Foundation for Educational Choice published a report titled "Grading Vouchers: Ranking America's School Choice Programs." Its purpose was to measure every existing school choice program against the gold standard set by Milton and Rose Friedman: that the most effective way to improve K-12 education and thus ensure a stable…

  6. Vegetarian Choices in the Protein Foods Group

    MedlinePlus

  7. Identifying when choice helps: clarifying the relationships between choice making, self-construal, and pain.

    PubMed

    Fox, Jacob; Close, Shane R; Rose, Jason P; Geers, Andrew L

    2016-06-01

    Prior research indicates that making choices before a painful task can sometimes reduce pain. We examined the possibility that independent and interdependent self-construals moderate the effect of choice on pain. Further, we tested between two types of choice: instrumental and non-instrumental. Healthy normotensive undergraduates were randomly assigned to one of three conditions prior to the cold pressor task. Participants in an instrumental choice condition selected which hand to immerse in the water and were told this choice might help reduce their pain. Non-instrumental choice participants selected which hand to immerse but were given no information about potential pain reduction. Control participants were given no choice or additional instructions. Individuals low in interdependence reported less pain than individuals high in interdependence, but only when given an instrumental choice. These data indicate that not all forms of choice reduce pain and not all individuals benefit from choice. Instead, individuals low in interdependence exhibit pain relief from instrumental choices. PMID:26743202

  8. The Influence of Prior Choices on Current Choice

    PubMed Central

    de la Piedad, Xochitl; Field, Douglas; Rachlin, Howard

    2006-01-01

    Three pigeons chose between random-interval (RI) and tandem, continuous-reinforcement, fixed-interval (crf-FI) reinforcement schedules by pecking either of two keys. As long as a pigeon pecked on the RI key, both keys remained available. If a pigeon pecked on the crf-FI key, then the RI key became unavailable and the crf-FI timer began to time out. With this procedure, once the RI key was initially pecked, the prospective value of both alternatives remained constant regardless of time spent pecking on the RI key without reinforcement (RI waiting time). Despite this constancy, the rate at which pigeons switched from the RI to the crf-FI decreased sharply as RI waiting time increased. That is, prior choices influenced current choice—an exercise effect. It is argued that such influence (independent of reinforcement contingencies) may serve as a sunk-cost commitment device in self-control situations. In a second experiment, extinction was programmed if RI waiting time exceeded a certain value. Rate of switching to the crf-FI first decreased and then increased as the extinction point approached, showing sensitivity to both prior choices and reinforcement contingencies. In a third experiment, crf-FI availability was limited to a brief window during the RI waiting time. When constrained in this way, switching occurred at a high rate regardless of when, during the RI waiting time, the crf-FI became available. PMID:16602373

  9. A Framework for Choice Remedy Litigation

    ERIC Educational Resources Information Center

    Bolick, Clint

    2008-01-01

    Although school choice proponents have generally been on the offensive in legislative arenas over the past 2 decades, they have played almost constant defense in the judiciary, seeking to prevent courts from undoing school choice programs. Opponents typically wield state constitutional provisions against school choice programs. Properly construed,…

  10. Discrepancy between Snack Choice Intentions and Behavior

    ERIC Educational Resources Information Center

    Weijzen, Pascalle L. G.; de Graaf, Cees; Dijksterhuis, Garmt B.

    2008-01-01

    Objective: To investigate dietary constructs that affect the discrepancy between intended and actual snack choice. Design: Participants indicated their intended snack choice from a set of 4 snacks (2 healthful, 2 unhealthful). One week later, they actually chose a snack from the same set. Within 1 week after the actual choice, they completed…

  11. School Choice: Structured through Markets and Morality

    ERIC Educational Resources Information Center

    Lasley, Thomas J., II; Ridenour, Carolyn R.

    2005-01-01

    School choice is increasingly promulgated as a promising education reform policy for failing urban schools, but no solid evidence has yet shown the promise fulfilled. The authors argue that choice based on market theory without a moral center is insufficient. Without a moral foundation, such market-driven choice programs may actually disadvantage…

  12. Risk and Career Choice: Evidence from Turkey

    ERIC Educational Resources Information Center

    Caner, Asena; Okten, Cagla

    2010-01-01

    In this paper, we examine the college major choice decision in a risk and return framework using university entrance exam data from Turkey. Specifically we focus on the choice between majors with low income risk such as education and health and others with riskier income streams. We use a unique dataset that allows us to control for the choice set…

  13. School Choice Acceptance: An Exploratory Explication

    ERIC Educational Resources Information Center

    Koven, Steven G.; Khan, Mobin

    2014-01-01

    School choice is presented by some as a panacea to the challenges facing education in the United States. Acceptance of choice as a solution, however, is far from universal. This article examines two possible contributors to choice adoption: ideology and political culture. Political culture was found to better explain the complex phenomenon of…

  14. The Challenge of Diversity and Choice

    ERIC Educational Resources Information Center

    Glenn, Charles

    2005-01-01

    Schools of equal educational quality need not be identical, and the recent trend toward increased choice and diversity in American schooling has if anything made the system more equitable for children who previously had no choice but to attend poorly performing schools. That is not to say that all forms of school choice are good public policy:…

  15. School Choice as a Bounded Ideal

    ERIC Educational Resources Information Center

    Ben-Porath, Sigal R.

    2009-01-01

    School choice is most often viewed through the lens of provision: most of the debate on the issue searches for desirable ways to offer vouchers, scholarships or other tools that provide choice as a way to achieve equality and/or freedom. This paper focuses on the consumer side of school choice, and utilises behavioural economics as well as…

  16. Choice: The Route to Community Control?

    ERIC Educational Resources Information Center

    Margonis, Frank; Parker, Laurence

    1999-01-01

    While school choice offers inner-city parents a means of educating their children well, it represents further deterioration of society's commitment to educating all students. This paper describes: the push for private school choice; parent choice in context (historical context and failures of desegregation); and segregationist strategies and…

  17. On Becoming an Institution of First Choice.

    ERIC Educational Resources Information Center

    Gelin, Frank; Jardine, Doug

    An overview is provided of the marketing and recruitment efforts designed to make Capilano College (CC) an "institution of first choice" in the minds of its community and prospective students. The presentation by Doug Jardine defines what CC means by and hopes to accomplish by becoming a "first choice" institution, indicating that a "first choice"…

  18. Understanding cognition, choice, and behavior.

    PubMed

    Corcoran, K J

    1995-09-01

    Bandura (1995) suggests that a "crusade against the causal efficacy of human thought" exists. The present paper disputes that claim, suggesting that the quest which does exist involves an understanding of self-efficacy. Examined are Bandura's shifting definitions of self-efficacy, his misunderstandings of others' work, and implications of some of his attempts to defend the construct. In the remainder of the paper Rotter's Social Learning Theory is discussed as a model of human choice behavior which recognizes the contributions of both cognitive and behavioral traditions within psychology, and has proven to be of great heuristic value. PMID:8576399

  19. Neural Activity Reveals Preferences Without Choices

    PubMed Central

    Smith, Alec; Bernheim, B. Douglas; Camerer, Colin

    2014-01-01

    We investigate the feasibility of inferring the choices people would make (if given the opportunity) based on their neural responses to the pertinent prospects when they are not engaged in actual decision making. The ability to make such inferences is of potential value when choice data are unavailable, or limited in ways that render standard methods of estimating choice mappings problematic. We formulate prediction models relating choices to “non-choice” neural responses and use them to predict out-of-sample choices for new items and for new groups of individuals. The predictions are sufficiently accurate to establish the feasibility of our approach. PMID:25729468

  20. Social determinants of food choice.

    PubMed

    Shepherd, R

    1999-11-01

    Food choice is influenced by a large number of factors, including social and cultural factors. One method for trying to understand the impact of these factors is through the study of attitudes. Research is described which utilizes social psychological attitude models of attitude-behaviour relationships, in particular the Theory of Planned Behaviour. This approach has shown good prediction of behaviour, but there are a number of possible extensions to this basic model which might improve its utility. One such extension is the inclusion of measures of moral concern, which have been found to be important both for the choice of genetically-modified foods and also for foods to be eaten by others. It has been found to be difficult to effect dietary change, and there are a number of insights from social psychology which might address this difficulty. One is the phenomenon of optimistic bias, where individuals believe themselves to be at less risk from various hazards than the average person. This effect has been demonstrated for nutritional risks, and this might lead individuals to take less note of health education messages. Another concern is that individuals do not always have clear-cut attitudes, but rather can be ambivalent about food and about healthy eating. It is important, therefore, to have measures for this ambivalence, and an understanding of how it might impact on behaviour. PMID:10817147

  1. Multiplexed modulation of behavioral choice

    PubMed Central

    Palmer, Chris R.; Barnett, Megan N.; Copado, Saul; Gardezy, Fred; Kristan, William B.

    2014-01-01

    Stimuli in the environment, as well as internal states, influence behavioral choice. Of course, animals are often exposed to multiple external and internal factors simultaneously, which makes the ultimate determinants of behavior quite complex. We observed the behavioral responses of European leeches, Hirudo verbana, as we varied one external factor (surrounding water depth) with either another external factor (location of tactile stimulation along the body) or an internal factor (body distention following feeding). Stimulus location proved to be the primary indicator of behavioral response. In general, anterior stimulation produced shortening behavior, midbody stimulation produced local bending, and posterior stimulation usually produced either swimming or crawling but sometimes a hybrid of the two. By producing a systematically measured map of behavioral responses to body stimulation, we found wide areas of overlap between behaviors. When we varied the surrounding water depth, this map changed significantly, and a new feature – rotation of the body along its long axis prior to swimming – appeared. We found additional interactions between water depth and time since last feeding. A large blood meal initially made the animals crawl more and swim less, an effect that was attenuated as water depth increased. The behavioral map returned to its pre-feeding form after approximately 3 weeks as the leeches digested their blood meal. In summary, we found multiplexed impacts on behavioral choice, with the map of responses to tactile stimulation modified by water depth, which itself modulated the impact that feeding had on the decision to swim or crawl. PMID:24902753

  2. Choice as a Global Language in Local Practice: A Mixed Model of School Choice in Taiwan

    ERIC Educational Resources Information Center

    Mao, Chin-Ju

    2015-01-01

    This paper uses school choice policy as an example to demonstrate how local actors adopt, mediate, translate, and reformulate "choice" as neo-liberal rhetoric informing education reform. Complex processes exist between global policy about school choice and the local practice of school choice. Based on the theoretical sensibility of…

  3. Supergranular Parameters

    NASA Astrophysics Data System (ADS)

    Udayashankar, Paniveni

    2016-07-01

    I study the complexity of supergranular cells using intensity patterns from the Kodaikanal solar observatory. The chaotic and turbulent aspects of solar supergranulation can be studied by examining the interrelationships amongst the parameters characterizing supergranular cells, namely size, horizontal flow field, lifetime, and physical dimensions, together with the fractal dimension deduced from the size data. The data consist of visually identified supergranular cells, from which a fractal dimension 'D' for supergranulation is obtained according to the relation P ∝ A^(D/2), where 'A' is the area and 'P' is the perimeter of the supergranular cells. I find a fractal dimension close to 1.3, which is consistent with that of isobars and suggests a possible turbulent origin. The cell circularity shows a dependence on the perimeter, with a peak around (1.1-1.2) × 10^5 m. The findings are supportive of Kolmogorov's theory of turbulence.
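
    The area-perimeter relation above implies that, in log-log coordinates, the slope of perimeter against area equals D/2, so D can be estimated by linear regression. A minimal sketch with synthetic cell data (the Kodaikanal measurements themselves are not reproduced here):

```python
# Estimate a fractal dimension D from the area-perimeter relation
# P ~ A**(D/2): log(P) = (D/2) * log(A) + const. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
areas = rng.uniform(2e8, 2e9, size=200)        # synthetic cell areas (m^2)
true_D = 1.3
perimeters = areas ** (true_D / 2) * np.exp(rng.normal(0, 0.05, size=200))

slope, intercept = np.polyfit(np.log(areas), np.log(perimeters), 1)
print(f"estimated fractal dimension D = {2 * slope:.2f}")   # ~1.3
```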

  4. A Simplified Model of Choice Behavior under Uncertainty

    PubMed Central

    Lin, Ching-Hung; Lin, Yu-Kai; Song, Tzu-Jiun; Huang, Jong-Tsun; Chiu, Yao-Chu

    2016-01-01

    The Iowa Gambling Task (IGT) has been standardized as a clinical assessment tool (Bechara, 2007). Nonetheless, numerous research groups have attempted to modify IGT models to optimize parameters for predicting the choice behavior of normal controls and patients. A decade ago, most researchers considered the expected utility (EU) model (Busemeyer and Stout, 2002) to be the optimal model for predicting choice behavior under uncertainty. However, in recent years, studies have demonstrated that models with the prospect utility (PU) function are more effective than the EU models in the IGT (Ahn et al., 2008). Nevertheless, after some preliminary tests based on our behavioral dataset and modeling, it was determined that the Ahn et al. (2008) PU model is not optimal due to some incompatible results. This study modifies the Ahn et al. (2008) PU model into a simplified model and uses the IGT performance of 145 subjects as the benchmark data for comparison. In our simplified PU model, the best goodness-of-fit was found mostly as the value of α approached zero. More specifically, we retested the key parameters α, λ, and A in the PU model. Notably, the influence of the parameters α, λ, and A has a hierarchical power structure in terms of manipulating the goodness-of-fit in the PU model. Additionally, we found that the parameters λ and A may be ineffective when the parameter α is close to zero in the PU model. The present simplified model demonstrated that decision makers mostly adopted the strategy of gain-stay loss-shift rather than foreseeing the long-term outcome. However, there are other behavioral variables that are not well revealed under these dynamic-uncertainty situations. Therefore, the optimal behavioral model may not have been found yet, and the best model for predicting choice behavior under dynamic-uncertainty situations should be further evaluated. PMID:27582715
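
    For readers unfamiliar with the roles of α, λ, and A, the sketch below implements a common PVL-style prospect utility (a power function with loss aversion) together with a decay-type recency update. This is an assumed, standard parameterization given for illustration; it is not the authors' exact simplified model.

```python
# A minimal sketch of a PVL-style prospect utility with a decay (recency)
# update. The parameterization u(x) = x**alpha for gains and
# -lam * abs(x)**alpha for losses is a common assumption, not necessarily
# the exact form used in the paper.
import numpy as np

def prospect_utility(x, alpha, lam):
    """Subjective utility of a net outcome x."""
    return x ** alpha if x >= 0 else -lam * abs(x) ** alpha

def decay_update(E, deck, outcome, alpha, lam, A):
    """Decay all deck expectancies by A, then add the utility of the
    latest outcome to the chosen deck."""
    E = A * E
    E[deck] += prospect_utility(outcome, alpha, lam)
    return E

E = np.zeros(4)                                 # four IGT decks
E = decay_update(E, deck=1, outcome=100,  alpha=0.5, lam=2.0, A=0.8)
E = decay_update(E, deck=1, outcome=-250, alpha=0.5, lam=2.0, A=0.8)
print(E)
# Note: as alpha -> 0, x**alpha -> 1 for any nonzero x, so utilities carry
# essentially only the sign of each outcome; magnitude information vanishes.
```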

  6. Relationship between dual-domain parameters and practical characterization data.

    PubMed

    Flach, Gregory P

    2012-01-01

    Dual-domain solute transport models produce significantly better agreement with observations than single-domain (advection-dispersion) models when used in an a posteriori data fitting mode. However, the use of dual-domain models in a general predictive manner has been a difficult and persistent challenge, particularly at field scale, where characterization of permeability and flow is inherently limited. Numerical experiments were conducted in this study to better understand how single-rate mass transfer parameters vary with aquifer attributes and contaminant exposure. High-resolution reference simulations considered 30 different scenarios involving variations in permeability distribution, flow field, mass transfer timescale, and contaminant exposure time. Optimal dual-domain transport parameters were empirically determined by matching to breakthrough curves from the high-resolution simulations. Numerical results show that mobile porosity increases with lower permeability contrast/variance, smaller spatial correlation length, lower connectivity of high-permeability zones, and flow transverse to strata. A nonzero non-participating porosity improves empirical fitting, and becomes larger for flow aligned with strata, smaller diffusion coefficient, and larger spatial correlation length. The non-dimensional mass transfer coefficient or Damköhler number tends to be close to 1.0 and to decrease with contaminant exposure time, in agreement with prior studies. The best empirical fit is generally achieved with a combination of macrodispersion and first-order mass transfer. Quantitative prediction of ensemble-average dual-domain parameters as a function of measurable aquifer attributes proved only marginally successful. PMID:21696389
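
    For orientation, the single-rate dual-domain (mobile-immobile) formulation discussed here is conventionally written as below, with mobile and immobile porosities θm and θim, concentrations Cm and Cim, Darcy flux q, dispersion coefficient D, and first-order mass transfer coefficient ω. This is a standard textbook form, not an equation reproduced from the paper; the Damköhler number then compares the mass transfer rate ω with the advective transport rate over the domain.

```latex
% Standard 1-D single-rate dual-domain (mobile-immobile) transport;
% a textbook form shown for orientation, not taken from the paper.
\theta_m \frac{\partial C_m}{\partial t}
  + \theta_{im} \frac{\partial C_{im}}{\partial t}
  = \theta_m D \frac{\partial^2 C_m}{\partial x^2}
  - q \frac{\partial C_m}{\partial x},
\qquad
\theta_{im} \frac{\partial C_{im}}{\partial t}
  = \omega \,(C_m - C_{im})
```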

  7. Psychophysics of time perception and intertemporal choice models

    NASA Astrophysics Data System (ADS)

    Takahashi, Taiki; Oono, Hidemi; Radford, Mark H. B.

    2008-03-01

    Intertemporal choice and the psychophysics of time perception have been attracting attention in econophysics and neuroeconomics. Several models have been proposed for intertemporal choice: exponential discounting; general hyperbolic discounting (exponential discounting with logarithmic time perception following the Weber-Fechner law, equivalent to a q-exponential discount model based on Tsallis statistics); simple hyperbolic discounting; and Stevens' power law-exponential discounting (exponential discounting with Stevens' power-law time perception). In order to examine the fit of the models to behavioral data, we estimated the parameters and AICc (Akaike Information Criterion with small-sample correction) of the intertemporal choice models by assessing the points of subjective equality (indifference points) at seven delays. Our results show that the order of goodness-of-fit for both group and individual data was [Weber-Fechner discounting (general hyperbola) > Stevens' power-law discounting > simple hyperbolic discounting > exponential discounting], indicating that human time perception in intertemporal choice may follow the Weber-Fechner law. Implications of the results for neuropsychopharmacological treatments of addiction and for the biophysical processing underlying temporal discounting and time perception are discussed.
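
    A minimal sketch of the model-comparison procedure described above: fit each candidate discount function to indifference points and rank the models by AICc. The functional forms below are common parameterizations of these models (the authors' exact forms may differ), and the data are synthetic placeholders rather than the study's measurements.

```python
# Fit competing discount functions to indifference points and compare by
# AICc. Forms are common parameterizations; data are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def exponential(D, k):          return np.exp(-k * D)
def hyperbolic(D, k):           return 1.0 / (1.0 + k * D)
def stevens(D, k, s):           return np.exp(-k * D**s)     # power time perception
def general_hyperbola(D, k, s): return (1.0 + k * D) ** (-s) # Weber-Fechner form

def aicc(rss, n, p):
    """AICc for a least-squares fit: n points, p free parameters."""
    return n * np.log(rss / n) + 2 * p + 2 * p * (p + 1) / (n - p - 1)

delays = np.array([1, 7, 30, 90, 180, 365, 1825], dtype=float)  # days
values = np.array([0.95, 0.85, 0.70, 0.55, 0.45, 0.35, 0.15])   # normalized

for name, f, p0 in [("exponential",       exponential,       [0.001]),
                    ("simple hyperbolic", hyperbolic,        [0.01]),
                    ("Stevens power",     stevens,           [0.05, 0.8]),
                    ("general hyperbola", general_hyperbola, [0.05, 1.0])]:
    popt, _ = curve_fit(f, delays, values, p0=p0, maxfev=10000)
    rss = np.sum((values - f(delays, *popt)) ** 2)
    print(f"{name:17s} AICc = {aicc(rss, len(delays), len(popt)):7.2f}")
```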

  8. Suboptimal Choice in Pigeons: Stimulus Value Predicts Choice over Frequencies.

    PubMed

    Smith, Aaron P; Bailey, Alexandria R; Chow, Jonathan J; Beckmann, Joshua S; Zentall, Thomas R

    2016-01-01

    Pigeons have shown suboptimal gambling-like behavior when preferring a stimulus that infrequently signals reliable reinforcement over alternatives that provide greater reinforcement overall. As a mechanism for this behavior, recent research proposed that the stimulus value of alternatives with more reliable signals for reinforcement will be preferred relatively independently of their frequencies. The present study tested this hypothesis using a simplified design of a Discriminative alternative that, 50% of the time, led to either a signal for 100% reinforcement or a blackout period indicative of 0% reinforcement, against a Nondiscriminative alternative that always led to a signal that predicted 50% reinforcement. Pigeons showed a strong preference for the Discriminative alternative that remained despite reducing the frequency of the signal for reinforcement in subsequent phases to 25% and then 12.5%. In Experiment 2, using the original design of Experiment 1, the reinforcement probability signaled by the stimulus following choice of the Nondiscriminative alternative was increased to 75% and then to 100%. Results showed that preference for the Discriminative alternative decreased only when the signals for reinforcement for the two alternatives predicted the same probability of reinforcement. The ability of several models to predict this behavior is discussed, but the terminal link stimulus value offers the most parsimonious account of this suboptimal behavior. PMID:27441394

  10. Behavioural social choice: a status report

    PubMed Central

    Regenwetter, Michel; Grofman, Bernard; Popova, Anna; Messner, William; Davis-Stober, Clintin P.; Cavagnaro, Daniel R.

    2008-01-01

    Behavioural social choice has been proposed as a social choice parallel to seminal developments in other decision sciences, such as behavioural decision theory, behavioural economics, behavioural finance and behavioural game theory. Behavioural paradigms compare how rational actors should make certain types of decisions with how real decision makers behave empirically. We highlight that important theoretical predictions in social choice theory change dramatically under even minute violations of standard assumptions. Empirical data violate those critical assumptions. We argue that the nature of preference distributions in electorates is ultimately an empirical question, which social choice theory has often neglected. We also emphasize important insights for research on decision making by individuals. When researchers aggregate individual choice behaviour in laboratory experiments to report summary statistics, they are implicitly applying social choice rules. Thus, they should be aware of the potential for aggregation paradoxes. We hypothesize that such problems may substantially mar the conclusions of a number of (sometimes seminal) papers in behavioural decision research. PMID:19073478
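
    The fragility of aggregate conclusions that the authors emphasize can be seen in the classic Condorcet cycle, where perfectly transitive individual preferences aggregate into an intransitive majority preference. A minimal illustration (not drawn from the paper):

```python
# Three voters with transitive rankings produce a cyclic majority
# relation (the Condorcet paradox), illustrating how aggregation can
# create preference structures that no individual voter holds.
voters = [("A", "B", "C"),   # each tuple is one voter's ranking, best first
          ("B", "C", "A"),
          ("C", "A", "B")]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in voters)
    return wins > len(voters) / 2

for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
# All three lines print True: the majority relation cycles A > B > C > A.
```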