Sample records for guessing reliable solution

  1. An Investigation of the Impact of Guessing on Coefficient α and Reliability

    PubMed Central

    2014-01-01

    Guessing is known to influence the test reliability of multiple-choice tests. Although there are many studies that have examined the impact of guessing, they used rather restrictive assumptions (e.g., parallel test assumptions, homogeneous inter-item correlations, homogeneous item difficulty, and homogeneous guessing levels across items) to evaluate the relation between guessing and test reliability. Based on the item response theory (IRT) framework, this study investigated the extent of the impact of guessing on reliability under more realistic conditions where item difficulty, item discrimination, and guessing levels vary across items, with three different test lengths (TL). By accommodating multiple item characteristics simultaneously, this study also examined interaction effects between guessing and the other variables entered into the simulation. The simulation of the more realistic conditions and the calculations of reliability and classical test theory (CTT) item statistics were facilitated by expressing CTT item statistics, coefficient α, and reliability in terms of IRT model parameters. In addition to the general negative impact of guessing on reliability, results showed interaction effects between TL and guessing and between guessing and test difficulty.
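
The relationship this abstract describes can be illustrated with a small simulation: responses are generated under a three-parameter logistic (3PL) IRT model with and without a guessing parameter, and coefficient α is computed for each case. This is a sketch of the general setup only, not the paper's actual simulation design; test length, parameter ranges, and seeds here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_responses(theta, a, b, c):
    """Simulate 0/1 responses under the 3PL IRT model:
    P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    p = c + (1 - c) / (1 + np.exp(-a * (theta[:, None] - b)))
    return (rng.random(p.shape) < p).astype(float)

def cronbach_alpha(X):
    """Coefficient alpha for an examinees-by-items score matrix."""
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

n_persons, n_items = 2000, 40
theta = rng.normal(size=n_persons)          # abilities
a = rng.uniform(0.8, 2.0, n_items)          # discriminations vary across items
b = rng.normal(size=n_items)                # difficulties vary across items

alpha_no_guess = cronbach_alpha(simulate_responses(theta, a, b, np.zeros(n_items)))
alpha_guess = cronbach_alpha(simulate_responses(theta, a, b, np.full(n_items, 0.25)))
print(alpha_no_guess, alpha_guess)  # guessing typically lowers alpha
```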

  2. Effective calculation of power system low-voltage solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Overbye, T.J.; Klump, R.P.

    1996-02-01

    This paper develops a method for reliably determining the set of low-voltage solutions which are closest to the operable power flow solution. These solutions are often used in conjunction with techniques such as energy methods and the voltage instability proximity index (VIPI) for assessing system voltage stability. This paper presents an algorithm which provides good initial guesses for these solutions. The results are demonstrated on a small system and on larger systems with up to 2,000 buses.
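
For intuition about high- and low-voltage solution pairs, a lossless two-bus system already exhibits both: its power flow reduces to a quartic in the load-bus voltage magnitude with two positive roots. This toy model is an illustration only, not the paper's algorithm, and the per-unit values below are made up.

```python
import numpy as np

def two_bus_voltages(P, Q, X):
    """High- and low-voltage power-flow solutions of a lossless two-bus
    system (slack bus at 1.0 pu, line reactance X, load P + jQ in pu),
    from the quartic V^4 + (2QX - 1)V^2 + X^2(P^2 + Q^2) = 0."""
    b = 2 * Q * X - 1
    disc = b * b - 4 * X * X * (P * P + Q * Q)
    if disc < 0:
        raise ValueError("no real solution: load beyond maximum transfer")
    v2_hi = (-b + np.sqrt(disc)) / 2
    v2_lo = (-b - np.sqrt(disc)) / 2
    return np.sqrt(v2_hi), np.sqrt(v2_lo)

v_hi, v_lo = two_bus_voltages(P=0.5, Q=0.2, X=0.3)
print(v_hi, v_lo)  # operable solution near 1.0 pu; low-voltage solution below it
```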

  3. Fuel Optimal, Finite Thrust Guidance Methods to Circumnavigate with Lighting Constraints

    NASA Astrophysics Data System (ADS)

    Prince, E. R.; Carr, R. W.; Cobb, R. G.

    This paper details improvements made to the authors' most recent work to find fuel optimal, finite-thrust guidance to inject an inspector satellite into a prescribed natural motion circumnavigation (NMC) orbit about a resident space object (RSO) in geosynchronous orbit (GEO). Better initial guess methodologies are developed for the low-fidelity nonlinear programming problem (NLP) solver, including Clohessy-Wiltshire (CW) targeting, a modified particle swarm optimization (PSO), and MATLAB's genetic algorithm (GA). These solutions are then fed as initial guesses into a different NLP solver, IPOPT. Celestial lighting constraints are taken into account in addition to the sunlight constraint, ensuring that the resulting NMC also adheres to Moon and Earth lighting constraints. The guidance is initially calculated for a fixed final time, and solutions are then also calculated for fixed final times before and after the original one, allowing mission planners to choose the lowest-cost solution in the resulting range that satisfies all constraints. The developed algorithms provide computationally fast and highly reliable methods for determining fuel optimal guidance for NMC injections while adhering to multiple lighting constraints.
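
Clohessy-Wiltshire targeting of the kind used for initial guesses can be sketched with the standard CW state-transition matrix: solve the position block for the departure velocity that reaches a desired relative position in a given time. This is a generic in-plane CW sketch, not the authors' implementation, and the numbers are illustrative.

```python
import numpy as np

def cw_target_dv(r0, v0, r_target, t, n):
    """In-plane Clohessy-Wiltshire targeting: impulse needed at t=0 so the
    relative position [x (radial), y (along-track)] reaches r_target at
    time t. n is the mean motion of the reference (e.g., GEO) orbit."""
    s, c = np.sin(n * t), np.cos(n * t)
    phi_rr = np.array([[4 - 3 * c, 0.0],
                       [6 * (s - n * t), 1.0]])
    phi_rv = np.array([[s / n, 2 * (1 - c) / n],
                       [2 * (c - 1) / n, (4 * s - 3 * n * t) / n]])
    v_req = np.linalg.solve(phi_rv, r_target - phi_rr @ r0)
    return v_req - v0  # first impulse

n_geo = 7.2921e-5            # rad/s, roughly the GEO mean motion
r0 = np.array([10.0, 0.0])   # km, start 10 km radially offset from the RSO
v0 = np.zeros(2)
dv = cw_target_dv(r0, v0, np.array([0.0, -20.0]), t=6 * 3600.0, n=n_geo)
print(dv)  # km/s impulse components
```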

  4. The Potential Use of the Discouraging Random Guessing (DRG) Approach in Multiple-Choice Exams in Medical Education.

    ERIC Educational Resources Information Center

    Friedman, Miriam; And Others

    1987-01-01

    Test performances of sophomore medical students on a pretest and final exam (under guessing and no-guessing instructions) were compared. Discouraging random guessing produced test information with improved test reliability and less distortion of item difficulty. More able examinees were less compliant than less able examinees. (Author/RH)

  5. Parallel Guessing: A Strategy for High-Speed Computation

    DTIC Science & Technology

    1984-09-19

    for using additional hardware to obtain higher processing speed). In this paper we argue that parallel guessing for image analysis is a useful...from a true solution, or the correctness of a guess, can be readily checked. We review image-analysis algorithms having a parallel guessing or

  6. Determination of an Optimal Control Strategy for a Generic Surface Vehicle

    DTIC Science & Technology

    2014-06-18

    paragraphs uses the numerical procedure in MATLAB’s BVP (bvp4c) algorithm using the continuation method. The goal is to find a solution to the set of...solution. Solving the BVP problem using bvp4c requires an initial guess for the solution. Note that the algorithm is very sensitive to the particular...form of the initial guess. The quality of the initial guess is paramount in convergence speed of the BVP algorithm and often determines if the
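
The initial-guess sensitivity described in this record is easy to reproduce with SciPy's solve_bvp (a collocation BVP solver analogous to MATLAB's bvp4c) on Bratu's problem, which has two solutions: the initial guess alone decides which one the solver returns.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Bratu's problem y'' + exp(y) = 0, y(0) = y(1) = 0 has two solutions;
# which one the solver finds depends entirely on the initial guess,
# mirroring the sensitivity bvp4c shows.
def ode(x, y):
    return np.vstack([y[1], -np.exp(y[0])])

def bc(ya, yb):
    return np.array([ya[0], yb[0]])

x = np.linspace(0, 1, 5)
y_flat = np.zeros((2, x.size))                    # guess y ≈ 0
y_tall = np.zeros((2, x.size)); y_tall[0] = 3.0   # guess y ≈ 3

sol1 = solve_bvp(ode, bc, x, y_flat)
sol2 = solve_bvp(ode, bc, x, y_tall)
print(sol1.sol(0.5)[0], sol2.sol(0.5)[0])  # two distinct converged solutions
```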

  7. The Effect of Guessing on Item Reliability under Answer-Until-Correct Scoring

    ERIC Educational Resources Information Center

    Kane, Michael; Moloney, James

    1978-01-01

    The answer-until-correct (AUC) procedure requires that examinees respond to a multi-choice item until they answer it correctly. Using a modified version of Horst's model for examinee behavior, this paper compares the effect of guessing on item reliability for the AUC procedure and the zero-one scoring procedure. (Author/CTM)

  8. Global convergence of inexact Newton methods for transonic flow

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1990-01-01

    In computational fluid dynamics, nonlinear differential equations are essential to represent important effects such as shock waves in transonic flow. Discretized versions of these nonlinear equations are solved using iterative methods. In this paper an inexact Newton method using the GMRES algorithm of Saad and Schultz is examined in the context of the full potential equation of aerodynamics. In this setting, reliable and efficient convergence of Newton methods is difficult to achieve. A poor initial solution guess often leads to divergence or very slow convergence. This paper examines several possible solutions to these problems, including a standard local damping strategy for Newton's method and two continuation methods, one of which utilizes interpolation from a coarse grid solution to obtain the initial guess on a finer grid. It is shown that the continuation methods can be used to augment the local damping strategy to achieve convergence for difficult transonic flow problems. These include simple wings with shock waves as well as problems involving engine power effects. These latter cases are modeled using the assumption that each exhaust plume is isentropic but has a different total pressure and/or temperature than the freestream.
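
The local damping strategy mentioned above can be sketched on a scalar problem: plain Newton on arctan(x) = 0 diverges from a poor initial guess, while halving the step until the residual decreases restores convergence. This toy deliberately omits GMRES and the continuation methods.

```python
import numpy as np

def damped_newton(f, fprime, x0, tol=1e-10, max_iter=100):
    """Newton's method with a simple backtracking (damping) line search:
    halve the step until the residual magnitude decreases."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        step = -fx / fprime(x)
        lam = 1.0
        while abs(f(x + lam * step)) >= abs(fx) and lam > 1e-8:
            lam *= 0.5   # damp the Newton step
        x = x + lam * step
    return x

# arctan(x) = 0: undamped Newton diverges for |x0| > ~1.39, but the
# damped iteration still converges from a poor initial guess.
root = damped_newton(np.arctan, lambda x: 1 / (1 + x * x), x0=10.0)
print(root)  # ~0
```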

  9. Misinformation, partial knowledge and guessing in true/false tests.

    PubMed

    Burton, Richard F

    2002-09-01

    Examiners disagree on whether or not multiple choice and true/false tests should be negatively marked. Much of the debate has been clouded by neglect of the role of misinformation and by vagueness regarding both the specification of test types and "partial knowledge" in relation to guessing. Moreover, variations in risk-taking in the face of negative marking have too often been treated in absolute terms rather than in relation to the effect of guessing on test unreliability. This paper aims to clarify these points and to compare the ill-effects on test reliability of guessing and of variable risk-taking. Three published studies on medical students are examined. These compare responses in true/false tests obtained with both negative marking and number-right scoring. The studies yield data on misinformation and on the extent to which students may fail to benefit from distrusted partial knowledge when there is negative marking. A simple statistical model is used to compare variations in risk-taking with test unreliability due to blind guessing under number-right scoring conditions. Partial knowledge should be least problematic with independent true/false items. The effect on test reliability of blind guessing under number-right conditions is generally greater than that due to the over-cautiousness of some students when there is negative marking.

  10. Stochastic approach to data analysis in fluorescence correlation spectroscopy.

    PubMed

    Rao, Ramachandra; Langoju, Rajesh; Gösch, Michael; Rigler, Per; Serov, Alexandre; Lasser, Theo

    2006-09-21

    Fluorescence correlation spectroscopy (FCS) has emerged as a powerful technique for measuring low concentrations of fluorescent molecules and their diffusion constants. In FCS, the experimental data are conventionally fit using standard local search techniques, for example, the Marquardt-Levenberg (ML) algorithm. A prerequisite for this category of algorithms is sound knowledge of the behavior of the fit parameters and, in most cases, good initial guesses for accurate fitting; otherwise fitting artifacts result. For known fit models, and with user experience about the behavior of the fit parameters, these local search algorithms work extremely well. However, for heterogeneous systems, or where automated data analysis is a prerequisite, there is a need for a procedure that treats FCS data fitting as a black box and generates reliable, accurate fit parameters for the chosen model. We present a computational approach to analyze FCS data by means of a stochastic algorithm for global search called PGSL, an acronym for Probabilistic Global Search Lausanne. This algorithm does not require any initial guesses and performs the fitting by searching for solutions through global sampling. It is both flexible and computationally fast for multiparameter evaluations. We present a performance study of PGSL for two-component fits with a triplet state. The statistical study and the goodness-of-fit criterion for PGSL are also presented, and the robustness of PGSL for parameter estimation on noisy experimental data is verified. We further extend the scope of PGSL by a hybrid analysis wherein the output of PGSL is fed as the initial guess to ML. Reliability studies show that PGSL, and the hybrid combination of both, perform better than ML for various thresholds of the mean-squared error (MSE).
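
Since PGSL itself is not widely packaged, the hybrid global-then-local idea can be sketched with SciPy: a stochastic global search (differential evolution here, standing in for PGSL) fits a one-component FCS model without any initial guess, and its result seeds a local least-squares refinement. The model, noise level, and bounds below are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.optimize import differential_evolution, curve_fit

# One-component FCS autocorrelation model (2D case for simplicity):
# G(tau) = (1/N) / (1 + tau/tau_D)
def fcs_model(tau, N, tau_D):
    return (1.0 / N) / (1.0 + tau / tau_D)

rng = np.random.default_rng(1)
tau = np.logspace(-6, 0, 200)                  # lag times, seconds
true = (5.0, 1e-3)                             # N, tau_D
data = fcs_model(tau, *true) + rng.normal(0, 0.002, tau.size)

# global stage: no initial guess, only box bounds on the parameters
cost = lambda p: np.sum((fcs_model(tau, *p) - data) ** 2)
glob = differential_evolution(cost, bounds=[(0.1, 100), (1e-6, 1.0)], seed=1)

# local stage: refine the global result with least-squares (ML-style) fitting
popt, _ = curve_fit(fcs_model, tau, data, p0=glob.x)
print(glob.x, popt)
```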

  11. Puzzler Solution: Just Making an Observation | Poster

    Cancer.gov

    Editor’s Note: It looks like we stumped you. None of the puzzler guesses were correct, but our winner was the closest to getting it right. He guessed it was a sanitary sewer clean-out pipe, and that’s what the photo looks like, according to our source at Facilities Maintenance and Engineering. Please continue reading for the correct puzzler solution. By Ashley DeVine, Staff Writer

  13. Guessing right for the next war: streamlining, pooling, and right-timing force design decisions for an environment of uncertainty

    DTIC Science & Technology

    2017-05-25

    Guessing Right for the Next War: Streamlining, Pooling, and Right-Timing Force Design Decisions for an Environment of Uncertainty...JUN 2016 – MAY 2017...committing to one force design solution to modern combat. The Army after World War II shied away from temporary organizational systems like these in

  14. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm

    NASA Astrophysics Data System (ADS)

    Mehdinejadiani, Behrouz

    2017-08-01

    This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to verify the integrity of the Bees Algorithm; the experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm: the Bees Algorithm showed relatively similar performance across all cases, while the Genetic Algorithm and LSQNONLIN performed differently from case to case. The performance of LSQNONLIN depends strongly on the initial guess values, so that with suitable initial guesses it can estimate the sFADE parameters more accurately than the Genetic Algorithm. In sum, the Bees Algorithm was found to be a very simple, robust, and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation.
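
A minimal version of the Bees Algorithm can be sketched as follows. The toy quadratic objective merely stands in for the sFADE misfit function, and all population and patch parameters are illustrative, not the paper's settings.

```python
import numpy as np

def bees_algorithm(f, bounds, n_scouts=30, n_best=5, n_recruits=10,
                   patch=0.1, shrink=0.95, iters=100, seed=0):
    """Minimal Bees Algorithm sketch: scout bees sample the search space at
    random, recruited bees search shrinking neighborhoods ("patches") around
    the best sites, and abandoned sites are replaced by fresh scouts."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    sites = lo + rng.random((n_scouts, lo.size)) * (hi - lo)
    for _ in range(iters):
        sites = sites[np.argsort([f(s) for s in sites])]
        elites = []
        for s in sites[:n_best]:
            cand = s + rng.normal(0.0, patch * (hi - lo), (n_recruits, lo.size))
            cand = np.clip(cand, lo, hi)
            elites.append(min([s, *cand], key=f))  # keep site unless a recruit beats it
        scouts = lo + rng.random((n_scouts - n_best, lo.size)) * (hi - lo)
        sites = np.vstack([elites, scouts])
        patch *= shrink                            # neighborhood shrinking
    return min(sites, key=f)

# toy quadratic objective standing in for the sFADE misfit function
sphere = lambda p: float(np.sum((p - np.array([0.5, -1.2])) ** 2))
best = bees_algorithm(sphere, [(-2, 2), (-2, 2)])
print(best, sphere(best))  # near the optimum [0.5, -1.2]
```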

  15. Estimating the solute transport parameters of the spatial fractional advection-dispersion equation using Bees Algorithm.

    PubMed

    Mehdinejadiani, Behrouz

    2017-08-01

    This study represents the first attempt to estimate the solute transport parameters of the spatial fractional advection-dispersion equation (sFADE) using the Bees Algorithm. Numerical studies as well as experimental studies were performed to verify the integrity of the Bees Algorithm; the experimental ones were conducted in a sandbox for homogeneous and heterogeneous soils. A detailed comparative study was carried out between the results obtained from the Bees Algorithm and those from the Genetic Algorithm and the LSQNONLIN routines in the FracFit toolbox. The results indicated that, in general, the Bees Algorithm estimated the sFADE parameters much more accurately than the Genetic Algorithm and LSQNONLIN, especially in the heterogeneous soil and for α values near 1 in the numerical study. The results obtained from the Bees Algorithm were also more reliable than those from the Genetic Algorithm: the Bees Algorithm showed relatively similar performance across all cases, while the Genetic Algorithm and LSQNONLIN performed differently from case to case. The performance of LSQNONLIN depends strongly on the initial guess values, so that with suitable initial guesses it can estimate the sFADE parameters more accurately than the Genetic Algorithm. In sum, the Bees Algorithm was found to be a very simple, robust, and accurate approach for estimating the transport parameters of the spatial fractional advection-dispersion equation. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Direct Methods for Predicting Movement Biomechanics Based Upon Optimal Control Theory with Implementation in OpenSim.

    PubMed

    Porsa, Sina; Lin, Yi-Chung; Pandy, Marcus G

    2016-08-01

    The aim of this study was to compare the computational performances of two direct methods for solving large-scale, nonlinear, optimal control problems in human movement. Direct shooting and direct collocation were implemented on an 8-segment, 48-muscle model of the body (24 muscles on each side) to compute the optimal control solution for maximum-height jumping. Both algorithms were executed on a freely-available musculoskeletal modeling platform called OpenSim. Direct collocation converged to essentially the same optimal solution up to 249 times faster than direct shooting when the same initial guess was assumed (3.4 h of CPU time for direct collocation vs. 35.3 days for direct shooting). The model predictions were in good agreement with the time histories of joint angles, ground reaction forces and muscle activation patterns measured for subjects jumping to their maximum achievable heights. Both methods converged to essentially the same solution when started from the same initial guess, but computation time was sensitive to the initial guess assumed. Direct collocation demonstrates exceptional computational performance and is well suited to performing predictive simulations of movement using large-scale musculoskeletal models.

  17. Direct Multiple Shooting Optimization with Variable Problem Parameters

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Ocampo, Cesar A.

    2009-01-01

    Taking advantage of a novel approach to the design of the orbital transfer optimization problem and advanced non-linear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. This method treats the fixed known gravitational constants as optimization variables in order to reduce the need for an advanced initial guess. Complex periodic orbits are targeted with very simple guesses and the ability to find optimal transfers in spite of these bad guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the 2-body frame as well as the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increasing robustness for all types of orbit transfer problems.

  18. Maximum entropy analysis of polarized fluorescence decay of (E)GFP in aqueous solution

    NASA Astrophysics Data System (ADS)

    Novikov, Eugene G.; Skakun, Victor V.; Borst, Jan Willem; Visser, Antonie J. W. G.

    2018-01-01

    The maximum entropy method (MEM) was used for the analysis of polarized fluorescence decays of enhanced green fluorescent protein (EGFP) in buffered water/glycerol mixtures, obtained with time-correlated single-photon counting (Visser et al 2016 Methods Appl. Fluoresc. 4 035002). To this end, we used a general-purpose software module of MEM that was earlier developed to analyze (complex) laser photolysis kinetics of ligand rebinding reactions in oxygen binding proteins. We demonstrate that the MEM software provides reliable results and is easy to use for the analysis of both total fluorescence decay and fluorescence anisotropy decay of aqueous solutions of EGFP. The rotational correlation times of EGFP in water/glycerol mixtures, obtained by MEM as maxima of the correlation-time distributions, are identical to the single correlation times determined by global analysis of parallel and perpendicular polarized decay components. The MEM software is also able to determine homo-FRET in another dimeric GFP, for which the transfer correlation time is an order of magnitude shorter than the rotational correlation time. One important advantage of MEM analysis is that no initial guesses of parameters are required, since MEM is able to select the least correlated solution from the feasible set of solutions.

  19. Medicine is not science: guessing the future, predicting the past.

    PubMed

    Miller, Clifford

    2014-12-01

    Irregularity limits human ability to know, understand and predict. A better understanding of irregularity may improve the reliability of knowledge. Irregularity and its consequences for knowledge are considered. Reliable predictive empirical knowledge of the physical world has always been obtained by observation of regularities, without needing science or theory. Prediction from observational knowledge can remain reliable despite some theories based on it proving false. A naïve theory of irregularity is outlined. Reducing irregularity and/or increasing regularity can increase the reliability of knowledge. Beyond long experience and specialization, improvements include implementing supporting knowledge systems of libraries of appropriately classified prior cases and clinical histories and education about expertise, intuition and professional judgement. A consequence of irregularity and complexity is that classical reductionist science cannot provide reliable predictions of the behaviour of complex systems found in nature, including of the human body. Expertise, expert judgement and their exercise appear overarching. Diagnosis involves predicting the past will recur in the current patient applying expertise and intuition from knowledge and experience of previous cases and probabilistic medical theory. Treatment decisions are an educated guess about the future (prognosis). Benefits of the improvements suggested here are likely in fields where paucity of feedback for practitioners limits development of reliable expert diagnostic intuition. Further analysis, definition and classification of irregularity is appropriate. Observing and recording irregularities are initial steps in developing irregularity theory to improve the reliability and extent of knowledge, albeit some forms of irregularity present inherent difficulties. © 2014 John Wiley & Sons, Ltd.

  20. Finite-difference solution of the compressible stability eigenvalue problem

    NASA Technical Reports Server (NTRS)

    Malik, M. R.

    1982-01-01

    A compressible stability analysis computer code is developed. The code uses a matrix finite difference method for local eigenvalue solution when a good guess for the eigenvalue is available and is significantly more computationally efficient than the commonly used initial value approach. The local eigenvalue search procedure also results in eigenfunctions and, at little extra work, group velocities. A globally convergent eigenvalue procedure is also developed which may be used when no guess for the eigenvalue is available. The global problem is formulated in such a way that no unstable spurious modes appear so that the method is suitable for use in a black box stability code. Sample stability calculations are presented for the boundary layer profiles of a Laminar Flow Control (LFC) swept wing.
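
A local eigenvalue search that depends on a good initial guess, as this record describes, can be illustrated with shift-and-invert (inverse) iteration: the iteration converges to the eigenvalue nearest the guess. This generic sketch is not the paper's finite-difference stability code; the matrix is an arbitrary example.

```python
import numpy as np

def shift_invert_eig(A, sigma, iters=50):
    """Local eigenvalue search by shift-and-invert (inverse) iteration:
    converges to the eigenpair of A whose eigenvalue is nearest the
    initial guess sigma, illustrating why a good guess matters."""
    n = A.shape[0]
    I = np.eye(n)
    v = np.ones(n) / np.sqrt(n)
    for _ in range(iters):
        v = np.linalg.solve(A - sigma * I, v)  # amplify the nearest mode
        v /= np.linalg.norm(v)
    lam = v @ A @ v                            # Rayleigh quotient estimate
    return lam, v

# symmetric test matrix with eigenvalues 3 - sqrt(3), 3, 3 + sqrt(3)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
lam, v = shift_invert_eig(A, sigma=4.5)  # guess near the largest eigenvalue
print(lam)  # ~4.732, i.e. 3 + sqrt(3)
```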

  1. An Exploration Of Fuel Optimal Two-impulse Transfers To Cyclers in the Earth-Moon System

    NASA Astrophysics Data System (ADS)

    Hosseinisianaki, Saghar

    2011-12-01

    This research explores optimal two-impulse transfers between a low Earth orbit and cycler orbits in the Earth-Moon circular restricted three-body framework, emphasizing the optimization strategy. Cyclers are periodic orbits that encounter both the Earth and the Moon periodically; a spacecraft on such a trajectory is under the influence of both the Earth's and the Moon's gravitational fields. Cyclers have gained recent interest as baseline orbits for several Earth-Moon mission concepts, notably in relation to human exploration. This thesis shows that a direct optimization starting from the classic Lambert initial guess may not be adequate for these problems, and proposes a three-step optimization solver to improve the domain of convergence toward an optimal solution. The first step consists of finding feasible trajectories with a given transfer time: Lambert's problem provides an initial guess for minimizing the error in arrival position, and the viability of Lambert's solution as an initial guess is analyzed. Once a feasible trajectory is found, the velocity impulse is a function only of the transfer time and the departure and arrival points' phases. The second step optimizes the impulse over the transfer time, which yields the minimum-impulse transfer for fixed end points. Finally, the third step maps the optimal solutions as the end points are varied.

  3. Application of artificial neural networks and genetic algorithms to modeling molecular electronic spectra in solution

    NASA Astrophysics Data System (ADS)

    Lilichenko, Mark; Kelley, Anne Myers

    2001-04-01

    A novel approach is presented for finding the vibrational frequencies, Franck-Condon factors, and vibronic linewidths that best reproduce typical, poorly resolved electronic absorption (or fluorescence) spectra of molecules in condensed phases. While calculation of the theoretical spectrum from the molecular parameters is straightforward within the harmonic oscillator approximation for the vibrations, "inversion" of an experimental spectrum to deduce these parameters is not. Standard nonlinear least-squares fitting methods such as Levenberg-Marquardt are highly susceptible to becoming trapped in local minima in the error function unless very good initial guesses for the molecular parameters are made. Here we employ a genetic algorithm to force a broad search through parameter space and couple it with the Levenberg-Marquardt method to speed convergence to each local minimum. In addition, a neural network trained on a large set of synthetic spectra is used to provide an initial guess for the fitting parameters and to narrow the range searched by the genetic algorithm. The combined algorithm provides excellent fits to a variety of single-mode absorption spectra with experimentally negligible errors in the parameters. It converges more rapidly than the genetic algorithm alone and more reliably than the Levenberg-Marquardt method alone, and is robust in the presence of spectral noise. Extensions to multimode systems, and/or to include other spectroscopic data such as resonance Raman intensities, are straightforward.

  4. Low-Thrust Transfers from Distant Retrograde Orbits to L2 Halo Orbits in the Earth-Moon System

    NASA Technical Reports Server (NTRS)

    Parrish, Nathan L.; Parker, Jeffrey S.; Hughes, Steven P.; Heiligers, Jeannette

    2016-01-01

    This paper presents a study of transfers between distant retrograde orbits (DROs) and L2 halo orbits in the Earth-Moon system that could be flown by a spacecraft with solar electric propulsion (SEP). Two collocation-based optimal control methods are used to optimize these highly-nonlinear transfers: Legendre pseudospectral and Hermite-Simpson. Transfers between DROs and halo orbits using low-thrust propulsion have not been studied previously. This paper offers a study of several families of trajectories, parameterized by the number of orbital revolutions in a synodic frame. Even with a poor initial guess, a method is described to reliably generate families of solutions. The circular restricted 3-body problem (CRTBP) is used throughout the paper so that the results are autonomous and simpler to understand.

  5. Branch and bound algorithm for accurate estimation of analytical isotropic bidirectional reflectance distribution function models.

    PubMed

    Yu, Chanki; Lee, Sang Wook

    2016-05-20

    We present a reliable and accurate global optimization framework for estimating parameters of isotropic analytical bidirectional reflectance distribution function (BRDF) models. This approach is based on a branch and bound strategy with linear programming and interval analysis. Conventional local optimization is often very inefficient for BRDF estimation since its fitting quality is highly dependent on initial guesses due to the nonlinearity of analytical BRDF models. The algorithm presented in this paper employs L1-norm error minimization to estimate BRDF parameters in a globally optimal way and interval arithmetic to derive our feasibility problem and lower bounding function. Our method is developed for the Cook-Torrance model but with several normal distribution functions such as the Beckmann, Berry, and GGX functions. Experiments have been carried out to validate the presented method using 100 isotropic materials from the MERL BRDF database, and our experimental results demonstrate that the L1-norm minimization provides a more accurate and reliable solution than the L2-norm minimization.
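
The L1-norm error minimization at the heart of this method can be illustrated on a linear model, where it reduces exactly to a linear program: minimize the sum of slack variables bounding the absolute residuals. The BRDF models themselves are nonlinear; this sketch only shows why an L1 fit is robust where an L2 fit is not.

```python
import numpy as np
from scipy.optimize import linprog

def l1_fit(A, b):
    """L1-norm (least absolute deviations) fit via linear programming:
    minimize sum(t) subject to -t <= A x - b <= t."""
    m, k = A.shape
    c = np.concatenate([np.zeros(k), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * k + [(0, None)] * m
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[:k]

# L1 fitting is robust to the gross outlier that drags the L2 fit away
rng = np.random.default_rng(2)
x_true = np.array([2.0, -1.0])
A = np.column_stack([np.linspace(0, 1, 30), np.ones(30)])
b = A @ x_true + rng.normal(0, 0.01, 30)
b[5] += 5.0                                   # gross outlier
x_l1 = l1_fit(A, b)
x_l2 = np.linalg.lstsq(A, b, rcond=None)[0]
print(x_l1, x_l2)  # L1 stays near [2, -1]; L2 is pulled off
```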

  6. A rigorous and simpler method of image charges

    NASA Astrophysics Data System (ADS)

    Ladera, C. L.; Donoso, G.

    2016-07-01

    The method of image charges relies on the proven uniqueness of the solution of the Laplace differential equation for an electrostatic potential which satisfies some specified boundary conditions. Granted that uniqueness, the method of images is rightly described as nothing but shrewdly guessing which image charges are to be placed, and where, to solve the given electrostatics problem. Here we present an alternative image charges method that is based not on guessing but on rigorous and simpler theoretical grounds, namely the constant potential inside any conductor and the application of powerful geometric symmetries. The aforementioned required uniqueness and, more importantly, guessing are therefore both altogether dispensed with. Our two new theoretical foundations also allow the image charges method to be introduced in earlier physics courses for engineering and sciences students, instead of its present and usual introduction in electromagnetic theory courses that demand familiarity with the Laplace differential equation and its boundary conditions.
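
The uniqueness argument is easy to check numerically for the textbook case of a point charge above a grounded plane: the charge plus its image produce exactly zero potential everywhere on the plane, which is the boundary condition the conductor imposes. The values below are illustrative.

```python
import numpy as np

# Point charge q at height d above a grounded conducting plane (z = 0):
# the image construction places -q at z = -d.
k = 8.9875517923e9      # Coulomb constant, N m^2 / C^2
q, d = 1e-9, 0.05       # 1 nC charge, 5 cm above the plane

def potential(x, y, z):
    r_real = np.sqrt(x**2 + y**2 + (z - d)**2)   # distance to +q at (0, 0, d)
    r_image = np.sqrt(x**2 + y**2 + (z + d)**2)  # distance to -q at (0, 0, -d)
    return k * q * (1 / r_real - 1 / r_image)

# sample points on the plane z = 0: the potential must vanish there
pts = np.random.default_rng(0).uniform(-1, 1, (100, 2))
on_plane = potential(pts[:, 0], pts[:, 1], 0.0)
print(np.max(np.abs(on_plane)))  # ~0 up to rounding
```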

  7. Test Design Project: Studies in Test Adequacy. Annual Report.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…

  8. Exact self-similarity solution of the Navier-Stokes equations for a porous channel with orthogonally moving walls

    NASA Astrophysics Data System (ADS)

    Dauenhauer, Eric C.; Majdalani, Joseph

    2003-06-01

    This article describes a self-similarity solution of the Navier-Stokes equations for a laminar, incompressible, and time-dependent flow that develops within a channel possessing permeable, moving walls. The case considered here pertains to a channel that exhibits either injection or suction across two opposing porous walls while undergoing uniform expansion or contraction. Instances of direct application include the modeling of pulsating diaphragms, sweat cooling or heating, isotope separation, filtration, paper manufacturing, irrigation, and the grain regression during solid propellant combustion. To start, the stream function and the vorticity equation are used in concert to yield a partial differential equation that lends itself to a similarity transformation. Following this similarity transformation, the original problem is reduced to solving a fourth-order differential equation in one similarity variable η that combines both space and time dimensions. Since two of the four auxiliary conditions are of the boundary value type, a numerical solution becomes dependent upon two initial guesses. In order to achieve convergence, the governing equation is first transformed into a function of three variables: the two guesses and η. At the outset, a suitable numerical algorithm is applied by solving the resulting set of twelve first-order ordinary differential equations with two unspecified start-up conditions. In seeking the two unknown initial guesses, the rapidly converging inverse Jacobian method is applied in an iterative fashion. Numerical results are later used to ascertain a deeper understanding of the flow character. The numerical scheme enables us to extend the solution range to physical settings not considered in previous studies. Moreover, the numerical approach broadens the scope to cover both suction and injection cases occurring with simultaneous wall motion.
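    The shooting procedure described above can be sketched with a one-unknown analogue (the article's problem has two unknown initial guesses; this is a hedged, minimal stand-in): solve y'' = -y with y(0) = 0 and y(π/2) = 1 by guessing s = y'(0), integrating with RK4, and correcting s with the secant method.

```python
import math

def integrate(s, n=2000):
    # RK4 integration of the system y' = v, v' = -y from t = 0 to pi/2,
    # starting from y(0) = 0 and the guessed slope v(0) = s.
    y, v, h = 0.0, s, (math.pi / 2) / n
    for _ in range(n):
        k1y, k1v = v, -y
        k2y, k2v = v + h/2*k1v, -(y + h/2*k1y)
        k3y, k3v = v + h/2*k2v, -(y + h/2*k2y)
        k4y, k4v = v + h*k3v, -(y + h*k3y)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return y                                # value at the far boundary

def shoot(target=1.0):
    s0, s1 = 0.0, 2.0                       # two initial guesses for y'(0)
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    while abs(f1) > 1e-10:                  # secant correction of the guess
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
        f0, f1 = f1, integrate(s1) - target
    return s1

print(round(shoot(), 6))   # 1.0, matching the exact solution y = sin(t)
```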

  9. Reliability of functional and predictive methods to estimate the hip joint centre in human motion analysis in healthy adults.

    PubMed

    Kainz, Hans; Hajek, Martin; Modenese, Luca; Saxby, David J; Lloyd, David G; Carty, Christopher P

    2017-03-01

    In human motion analysis, predictive or functional methods are used to estimate the location of the hip joint centre (HJC). It has been shown that the Harrington regression equations (HRE) and the geometric sphere fit (GSF) method are the most accurate predictive and functional methods, respectively. To date, the comparative reliability of both approaches has not been assessed. The aims of this study were to (1) compare the reliability of the HRE and the GSF methods, (2) analyse the impact of the number of thigh markers used in the GSF method on the reliability, (3) evaluate how alterations to the movements that comprise the functional trials impact HJC estimations using the GSF method, and (4) assess the influence of the initial guess in the GSF method on the HJC estimation. Fourteen healthy adults were tested on two occasions using a three-dimensional motion capture system. Skin surface marker positions were acquired while participants performed quiet stance, perturbed and non-perturbed functional trials, and walking trials. Results showed that the HRE were more reliable in locating the HJC than the GSF method. However, comparison of inter-session hip kinematics during gait did not show any significant difference between the approaches. Different initial guesses in the GSF method did not result in significant differences in the final HJC location. The GSF method was sensitive to the functional trial performance and therefore it is important to standardize the functional trial performance to ensure a repeatable estimate of the HJC when using the GSF method. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. σ -SCF: A Direct Energy-targeting Method To Mean-field Excited States

    NASA Astrophysics Data System (ADS)

    Ye, Hongzhou; Welborn, Matthew; Ricke, Nathan; van Voorhis, Troy

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g. Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF, tend to fall into the lowest solution consistent with a given symmetry - a problem known as “variational collapse”. In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states - ground or excited - are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). This work was funded by a grant from NSF (CHE-1464804).

  11. Multiple steady states in atmospheric chemistry

    NASA Technical Reports Server (NTRS)

    Stewart, Richard W.

    1993-01-01

    The equations describing the distributions and concentrations of trace species are nonlinear and may thus possess more than one solution. This paper develops methods for searching for multiple physical solutions to chemical continuity equations and applies these to subsets of equations describing tropospheric chemistry. The calculations are carried out with a box model and use two basic strategies. The first strategy is a 'search' method. This involves fixing model parameters at specified values, choosing a wide range of initial guesses at a solution, and using a Newton-Raphson technique to determine if different initial points converge to different solutions. The second strategy involves a set of techniques known as homotopy methods. These do not require an initial guess, are globally convergent, and are guaranteed, in principle, to find all solutions of the continuity equations. The first method is efficient but essentially 'hit or miss' in the sense that it cannot guarantee that all solutions which may exist will be found. The second method is computationally burdensome but can, in principle, determine all the solutions of a photochemical system. Multiple solutions have been found for models that contain a basic complement of photochemical reactions involving O(x), HO(x), NO(x), and CH4. In the present calculations, transitions occur between stable branches of a multiple solution set as a control parameter is varied. These transitions are manifestations of hysteresis phenomena in the photochemical system and may be triggered by increasing the NO flux or decreasing the CH4 flux from current mean tropospheric levels.
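    The first, 'search' strategy can be sketched on a scalar stand-in (hedged: a toy cubic, not the tropospheric chemistry system): run Newton-Raphson from a grid of initial guesses and collect the distinct solutions found.

```python
# f(x) = x**3 - x has three roots (-1, 0, 1), playing the role of multiple
# steady states; different initial guesses converge to different roots.
def newton(f, df, x, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x -= fx / df(x)
    return None                                   # failed to converge

f  = lambda x: x**3 - x
df = lambda x: 3 * x**2 - 1

roots = set()
for guess in [i / 10 for i in range(-30, 31)]:    # guesses from -3.0 to 3.0
    r = newton(f, df, guess)
    if r is not None:
        roots.add(round(r, 8) + 0.0)              # + 0.0 normalizes -0.0
print(sorted(roots))   # [-1.0, 0.0, 1.0] -- three distinct solutions
```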

  12. Generalized gradient algorithm for trajectory optimization

    NASA Technical Reports Server (NTRS)

    Zhao, Yiyuan; Bryson, A. E.; Slattery, R.

    1990-01-01

    The generalized gradient algorithm, presented and verified as a basis for solving trajectory optimization problems, improves the performance index while reducing violations of the path equality constraints and the terminal equality constraints. The algorithm is conveniently divided into two phases, of which the first, 'feasibility' phase yields a solution satisfying both path and terminal constraints, while the second, 'optimization' phase uses the results of the first phase as initial guesses.

  13. The Double Star Orbit Initial Value Problem

    NASA Astrophysics Data System (ADS)

    Hensley, Hagan

    2018-04-01

    Many precise algorithms exist to find a best-fit orbital solution for a double star system, given a good enough initial value. Desmos is an online graphing calculator tool with extensive capabilities to support animations and defining functions. It can provide a useful visual means of analyzing double star data to arrive at a best-guess approximation of the orbital solution. This is a necessary requirement before using a gradient-descent algorithm to find the best-fit orbital solution for a binary system.

  14. Social Cognition as Reinforcement Learning: Feedback Modulates Emotion Inference.

    PubMed

    Zaki, Jamil; Kallman, Seth; Wimmer, G Elliott; Ochsner, Kevin; Shohamy, Daphna

    2016-09-01

    Neuroscientific studies of social cognition typically employ paradigms in which perceivers draw single-shot inferences about the internal states of strangers. Real-world social inference involves very different parameters: people often encounter and learn about particular social targets (e.g., friends) over time and receive feedback about whether their inferences are correct or incorrect. Here, we examined this process and, more broadly, the intersection between social cognition and reinforcement learning. Perceivers were scanned using fMRI while repeatedly encountering three social targets who produced conflicting visual and verbal emotional cues. Perceivers guessed how targets felt and received feedback about whether they had guessed correctly. Visual cues reliably predicted one target's emotion, verbal cues predicted a second target's emotion, and neither reliably predicted the third target's emotion. Perceivers successfully used this information to update their judgments over time. Furthermore, trial-by-trial learning signals, estimated using two reinforcement learning models, tracked activity in ventral striatum and ventromedial pFC, structures associated with reinforcement learning, and regions associated with updating social impressions, including TPJ. These data suggest that learning about others' emotions, like other forms of feedback learning, relies on domain-general reinforcement mechanisms as well as domain-specific social information processing.
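    The kind of trial-by-trial learning signal the authors model can be sketched with a generic delta rule (a hedged stand-in, not the paper's specific fitted models; the learning rate and feedback sequence below are illustrative):

```python
# The perceiver's estimate that a cue predicts the target's emotion is nudged
# by the prediction error after each feedback trial.
def delta_rule(outcomes, alpha=0.1):
    value, errors = 0.5, []                  # start at an uninformative prior
    for outcome in outcomes:                 # outcome: 1 = guessed correctly
        error = outcome - value              # prediction error ("learning signal")
        errors.append(error)
        value += alpha * error
    return value, errors

# A target whose cues are reliable: feedback is correct on 80% of trials.
value, errors = delta_rule([1, 1, 0, 1, 1, 1, 1, 0, 1, 1] * 5)
print(round(value, 2))    # 0.8 -- tracks the 80% reinforcement rate
```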

  15. Closed-loop endo-atmospheric ascent guidance for reusable launch vehicle

    NASA Astrophysics Data System (ADS)

    Sun, Hongsheng

    This dissertation focuses on the development of a closed-loop endo-atmospheric ascent guidance algorithm for the 2nd generation reusable launch vehicle. Special attention has been given to the issues that affect viability, complexity and reliability in on-board implementation. The algorithm is called once every guidance update cycle to recalculate the optimal solution based on the current flight condition, taking into account atmospheric effects and path constraints. This is different from traditional ascent guidance algorithms, which operate in a simple open-loop mode inside the atmosphere and later switch to a closed-loop vacuum ascent guidance scheme. The classical finite difference method is shown to be well suited for fast solution of the constrained optimal three-dimensional ascent problem. The initial guesses for the solutions are generated using an analytical vacuum optimal ascent guidance algorithm. A homotopy method is employed to gradually introduce the aerodynamic forces and thereby generate the optimal solution from the optimal vacuum solution. The vehicle chosen for this study is the Lockheed Martin X-33 lifting-body reusable launch vehicle. To verify the algorithm presented in this dissertation, a series of open-loop and closed-loop tests are performed for three different missions. Wind effects are also studied in the closed-loop simulations. For comparison, the solutions for the same missions are also obtained by two independent optimization software packages. The results clearly establish the feasibility of closed-loop endo-atmospheric ascent guidance of rocket-powered launch vehicles. ATO cases are also tested to assess the adaptability of the algorithm to autonomously incorporate the abort modes.

  16. Low-Thrust Trajectory Optimization with Simplified SQP Algorithm

    NASA Technical Reports Server (NTRS)

    Parrish, Nathan L.; Scheeres, Daniel J.

    2017-01-01

    The problem of low-thrust trajectory optimization in highly perturbed dynamics is a stressing case for many optimization tools. Highly nonlinear dynamics and continuous thrust are each, separately, non-trivial problems in the field of optimal control, and when combined, the problem is even more difficult. This paper describes a fast, robust method to design a trajectory in the CRTBP (circular restricted three body problem), beginning with no or very little knowledge of the system. The approach is inspired by the SQP (sequential quadratic programming) algorithm, in which a general nonlinear programming problem is solved via a sequence of quadratic problems. A few key simplifications make the presented algorithm fast and robust to the initial guess: a quadratic cost function, neglecting the line search step when the solution is known to be far away, judicious use of end-point constraints, and mesh refinement on multiple shooting with fixed-step integration. In comparison to the traditional approach of plugging the problem into a “black-box” NLP solver, the methods shown converge even when given no knowledge of the solution at all. It was found that the only piece of information that the user needs to provide is a rough guess for the time of flight, as the transfer time guess will dictate which set of local solutions the algorithm could converge on. This robustness to initial guess is a compelling feature, as three-body orbit transfers are challenging to design with intuition alone. Of course, if a high-quality initial guess is available, the methods shown are still valid. We have shown that endpoints can be efficiently constrained to lie on 3-body repeating orbits, and that time of flight can be optimized as well. When optimizing the endpoints, we must make a trade between converging quickly on sub-optimal endpoints or converging more slowly on endpoints that are arbitrarily close to optimal. It is easy for the mission design engineer to adjust this trade based on the problem at hand. The biggest limitation to the algorithm at this point is that multi-revolution transfers (greater than 2 revolutions) do not work nearly as well. This restriction comes in because the relationship between node 1 and node N becomes increasingly nonlinear as the angular distance grows. Transfers with more than about 1.5 complete revolutions generally require the line search to improve convergence. Future work includes: comparison of this algorithm with other established tools; improvements to how multiple-revolution transfers are handled; parallelization of the Jacobian computation; increased efficiency for the line search; and optimization of many more trajectories between a variety of 3-body orbits.

  17. Monte Carlo criticality source convergence in a loosely coupled fuel storage system.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blomquist, R. N.; Gelbard, E. M.

    2003-06-10

    The fission source convergence of a very loosely coupled array of 36 fuel subassemblies with slightly non-symmetric reflection is studied. The fission source converges very slowly from a uniform guess to the fundamental mode, in which about 40% of the fissions occur in one corner subassembly. Eigenvalue and fission source estimates are analyzed using a set of statistical tests similar to those used in MCNP, including the "drift-in-mean" test and a new drift-in-mean test using a linear fit to the cumulative estimate drift, the Shapiro-Wilk test for normality, the relative error test, and the "1/N" test. The normality test does not detect a drifting eigenvalue or fission source. Applied to eigenvalue estimates, the other tests generally fail to detect an unconverged solution, but they are sometimes effective when evaluating fission source distributions. None of the tests provides a completely reliable indication of convergence, although they can detect nonconvergence.
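    The drift-in-mean idea with a linear fit can be sketched as follows (a hedged illustration; the data, thresholds, and function names are ours, not MCNP's): fit a line to the running mean of an estimator and flag drift when the slope is large.

```python
def running_means(xs):
    total, out = 0.0, []
    for i, x in enumerate(xs, 1):
        total += x
        out.append(total / i)                    # cumulative estimate
    return out

def slope(ys):                                   # least-squares slope vs index
    n = len(ys)
    xbar, ybar = (n - 1) / 2, sum(ys) / n
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

converged = running_means([1.0 if i % 2 else 1.2 for i in range(200)])
drifting  = running_means([1.0 + 0.005 * i for i in range(200)])
print(abs(slope(converged)) < 1e-4)   # True: running mean has flattened
print(abs(slope(drifting))  > 1e-3)   # True: estimate is still drifting
```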

  18. Mixture Rasch model for guessing group identification

    NASA Astrophysics Data System (ADS)

    Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling

    2013-04-01

    Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for the guessing effect in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all the subjects but may vary across items. This is not realistic, because subjects can guess worse or better than the pseudo-guessing level. Deriving from the three-parameter logistic IRT model improves the situation by incorporating ability in guessing. However, it does not model a non-monotone function. This paper proposes to study guessing from a subject-related aspect, namely guessing test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items. The subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method to identify the guessing group modeled by the hybrid model.
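    The three-parameter logistic (3PL) model referred to above has the standard item response function P(correct | θ) = c + (1 - c) / (1 + exp(-a(θ - b))), where c is the pseudo-guessing lower asymptote, a the discrimination, and b the difficulty; the parameter values below are illustrative only.

```python
import math

def p_3pl(theta, a=1.5, b=0.0, c=0.25):
    # Probability of a correct response never drops below the guessing floor c.
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

print(round(p_3pl(-6.0), 3))   # 0.25  -- very low ability: floor at guessing level
print(round(p_3pl(0.0), 3))    # 0.625 -- at item difficulty: halfway to ceiling
print(round(p_3pl(3.0), 3))    # 0.992 -- high ability: near certainty
```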

  19. Design of Optimally Robust Control Systems.

    DTIC Science & Technology

    1980-01-01

    approach is that the optimization framework is an artificial device. While some design constraints can easily be incorporated into a single cost function...indicating that that point was indeed the solution. Also, an intelligent initial guess for k was important in order to avoid being hung up at the double

  20. Optimal thrust level for orbit insertion

    NASA Astrophysics Data System (ADS)

    Cerf, Max

    2017-07-01

    The minimum-fuel orbital transfer is analyzed in the case of a launcher upper stage using a constantly thrusting engine. The thrust level is assumed to be constant and its value is optimized together with the thrust direction. A closed-loop solution for the thrust direction is derived from the extremal analysis for a planar orbital transfer. The optimal control problem reduces to two unknowns, namely the thrust level and the final time. Guessing and propagating the costates is no longer necessary and the optimal trajectory is easily found from a rough initialization. On the other hand the initial costates are assessed analytically from the initial conditions and they can be used as initial guess for transfers at different thrust levels. The method is exemplified on a launcher upper stage targeting a geostationary transfer orbit.

  1. A geometric initial guess for localized electronic orbitals in modular biological systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beckman, P. G.; Fattebert, J. L.; Lau, E. Y.

    Recent first-principles molecular dynamics algorithms using localized electronic orbitals have achieved O(N) complexity and controlled accuracy in simulating systems with finite band gaps. However, accurately determining the centers of these localized orbitals during simulation setup may require O(N^3) operations, which is computationally infeasible for many biological systems. We present an O(N) approach for approximating orbital centers in proteins, DNA, and RNA which uses non-localized solutions for a set of fixed-size subproblems to create a set of geometric maps applicable to larger systems. This scalable approach, used as an initial guess in the O(N) first-principles molecular dynamics code MGmol, facilitates first-principles simulations in biological systems of sizes which were previously impossible.

  2. Response Time Differences between Computers and Tablets

    ERIC Educational Resources Information Center

    Kong, Xiaojing; Davis, Laurie Laughlin; McBride, Yuanyuan; Morrison, Kristin

    2018-01-01

    Item response time data were used in investigating the differences in student test-taking behavior between two device conditions: computer and tablet. Analyses were conducted to address the questions of whether or not the device condition had a differential impact on rapid guessing and solution behaviors (with response time effort used as an…

  3. Sometimes "Newton's Method" Always "Cycles"

    ERIC Educational Resources Information Center

    Latulippe, Joe; Switkes, Jennifer

    2012-01-01

    Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…

  4. Optimal UAS Assignments and Trajectories for Persistent Surveillance and Data Collection from a Wireless Sensor Network

    DTIC Science & Technology

    2015-12-24

    minimizing a weighted sum of the time and control effort needed to collect sensor data. This problem formulation is a modified traveling salesman...2.5 The Shortest Path Problem...2.5.1 Traveling Salesman Problem...3.3.1 Initial Guess by Traveling Salesman Problem Solution

  5. Two-Stage Path Planning Approach for Designing Multiple Spacecraft Reconfiguration Maneuvers

    NASA Technical Reports Server (NTRS)

    Aoude, Georges S.; How, Jonathan P.; Garcia, Ian M.

    2007-01-01

    The paper presents a two-stage approach for designing optimal reconfiguration maneuvers for multiple spacecraft. These maneuvers involve well-coordinated and highly-coupled motions of the entire fleet of spacecraft while satisfying an arbitrary number of constraints. This problem is particularly difficult because of the nonlinearity of the attitude dynamics, the non-convexity of some of the constraints, and the coupling between the positions and attitudes of all spacecraft. As a result, the trajectory design must be solved as a single 6N DOF problem instead of N separate 6 DOF problems. The first stage of the solution approach quickly provides a feasible initial solution by solving a simplified version without differential constraints using a bi-directional Rapidly-exploring Random Tree (RRT) planner. A transition algorithm then augments this guess with feasible dynamics that are propagated from the beginning to the end of the trajectory. The resulting output is a feasible initial guess to the complete optimal control problem that is discretized in the second stage using a Gauss pseudospectral method (GPM) and solved using an off-the-shelf nonlinear solver. This paper also places emphasis on the importance of the initialization step in pseudospectral methods in order to decrease their computation times and enable the solution of a more complex class of problems. Several examples are presented and discussed.

  6. The Exploration of the Relationship between Guessing and Latent Ability in IRT Models

    ERIC Educational Resources Information Center

    Gao, Song

    2011-01-01

    This study explored the relationship between successful guessing and latent ability in IRT models. A new IRT model was developed with a guessing function integrating probability of guessing an item correctly with the examinee's ability and the item parameters. The conventional 3PL IRT model was compared with the new 2PL-Guessing model on…

  7. Guessing versus Choosing an Upcoming Task

    PubMed Central

    Kleinsorge, Thomas; Scheil, Juliane

    2016-01-01

    We compared the effects of guessing vs. choosing an upcoming task. In a task-switching paradigm with four tasks, two groups of participants were asked to either guess or choose which task will be presented next under otherwise identical conditions. The upcoming task corresponded to participants’ guesses or choices in 75 % of the trials. However, only participants in the Choosing condition were correctly informed about this, whereas participants in the Guessing condition were told that tasks were determined at random. In the Guessing condition, we replicated previous findings of a pronounced reduction of switch costs in case of incorrect guesses. This switch cost reduction was considerably less pronounced with denied choices in the Choosing condition. We suggest that in the Choosing condition, the signaling of prediction errors associated with denied choices is attenuated because a certain proportion of denied choices is consistent with the overall representation of the situation as conveyed by task instructions. In the Guessing condition, in contrast, the mismatch of guessed and actual task is resolved solely on the level of individual trials by strengthening the representation of the actual task. PMID:27047423

  8. Defectors Cannot Be Detected during “Small Talk” with Strangers

    PubMed Central

    Manson, Joseph H.; Gervais, Matthew M.; Kline, Michelle A.

    2013-01-01

    To account for the widespread human tendency to cooperate in one-shot social dilemmas, some theorists have proposed that cooperators can be reliably detected based on ethological displays that are difficult to fake. Experimental findings have supported the view that cooperators can be distinguished from defectors based on “thin slices” of behavior, but the relevant cues have remained elusive, and the role of the judge's perspective remains unclear. In this study, we followed triadic conversations among unacquainted same-sex college students with unannounced dyadic one-shot prisoner's dilemmas, and asked participants to guess the PD decisions made toward them and among the other two participants. Two other sets of participants guessed the PD decisions after viewing videotape of the conversations, either with foreknowledge (informed), or without foreknowledge (naïve), of the post-conversation PD. Only naïve video viewers approached better-than-chance prediction accuracy, and they were significantly accurate at predicting the PD decisions of only opposite-sexed conversation participants. Four ethological displays recently proposed to cue defection in one-shot social dilemmas (arms crossed, lean back, hand touch, and face touch) failed to predict either actual defection or guesses of defection by any category of observer. Our results cast doubt on the role of “greenbeard” signals in the evolution of human prosociality, although they suggest that eavesdropping may be more informative about others' cooperative propensities than direct interaction. PMID:24358201

  9. Insight solutions are correct more often than analytic solutions

    PubMed Central

    Salvi, Carola; Bricolo, Emanuela; Kounios, John; Bowden, Edward; Beeman, Mark

    2016-01-01

    How accurate are insights compared to analytical solutions? In four experiments, we investigated how participants’ solving strategies influenced their solution accuracies across different types of problems, including one that was linguistic, one that was visual and two that were mixed visual-linguistic. In each experiment, participants’ self-judged insight solutions were, on average, more accurate than their analytic ones. We hypothesised that insight solutions have superior accuracy because they emerge into consciousness in an all-or-nothing fashion when the unconscious solving process is complete, whereas analytic solutions can be guesses based on conscious, prematurely terminated, processing. This hypothesis is supported by the finding that participants’ analytic solutions included relatively more incorrect responses (i.e., errors of commission) than timeouts (i.e., errors of omission) compared to their insight responses. PMID:27667960

  10. A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems

    NASA Astrophysics Data System (ADS)

    Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong

    2017-09-01

    In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two-level of grids (current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. And the resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with the obtained initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples including two smooth problems with both constant and variable coefficients, an H3-regular problem as well as an anisotropic problem are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
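    The Richardson-extrapolation ingredient of the method can be illustrated on a much simpler second-order scheme (the trapezoidal rule, not the paper's finite element solver; a hedged sketch): combining results at mesh widths h and h/2 as (4A(h/2) - A(h))/3 cancels the leading O(h^2) error term.

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n subintervals (second-order accurate).
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

exact  = 2.0                                    # integral of sin on [0, pi]
coarse = trapezoid(math.sin, 0.0, math.pi, 16)
fine   = trapezoid(math.sin, 0.0, math.pi, 32)
richardson = (4 * fine - coarse) / 3            # cancels the O(h^2) error

print(abs(coarse - exact))      # ~6e-3
print(abs(richardson - exact))  # ~1e-6: two extra orders from extrapolation
```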

  11. σ-SCF: A direct energy-targeting method to mean-field excited states

    NASA Astrophysics Data System (ADS)

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D.; Van Voorhis, Troy

    2017-12-01

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry—a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states—ground or excited—are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.
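    The energy-targeting idea admits a toy illustration (not the actual σ-SCF working equations, which optimize over orbitals): for a diagonal Hamiltonian, an eigenstate with energy E has variance ⟨(H - ω)²⟩ = (E - ω)², so minimizing the variance selects the state whose energy is closest to the guess ω, ground or excited alike.

```python
# The energies below are made-up "mean-field state" energies for illustration.
def sigma_target(energies, omega):
    # Variance of an eigenstate with energy e is (e - omega)**2; minimize it.
    return min(energies, key=lambda e: (e - omega) ** 2)

levels = [-1.0, 0.3, 0.9, 2.4]
print(sigma_target(levels, -2.0))   # -1.0: a guess below the spectrum finds the ground state
print(sigma_target(levels, 1.0))    # 0.9:  an interior guess lands on an excited state
```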

  12. σ-SCF: A direct energy-targeting method to mean-field excited states.

    PubMed

    Ye, Hong-Zhou; Welborn, Matthew; Ricke, Nathan D; Van Voorhis, Troy

    2017-12-07

    The mean-field solutions of electronic excited states are much less accessible than ground state (e.g., Hartree-Fock) solutions. Energy-based optimization methods for excited states, like Δ-SCF (self-consistent field), tend to fall into the lowest solution consistent with a given symmetry-a problem known as "variational collapse." In this work, we combine the ideas of direct energy-targeting and variance-based optimization in order to describe excited states at the mean-field level. The resulting method, σ-SCF, has several advantages. First, it allows one to target any desired excited state by specifying a single parameter: a guess of the energy of that state. It can therefore, in principle, find all excited states. Second, it avoids variational collapse by using a variance-based, unconstrained local minimization. As a consequence, all states-ground or excited-are treated on an equal footing. Third, it provides an alternate approach to locate Δ-SCF solutions that are otherwise hardly accessible by the usual non-aufbau configuration initial guess. We present results for this new method for small atoms (He, Be) and molecules (H2, HF). We find that σ-SCF is very effective at locating excited states, including individual, high energy excitations within a dense manifold of excited states. Like all single determinant methods, σ-SCF shows prominent spin-symmetry breaking for open shell states and our results suggest that this method could be further improved with spin projection.

  13. The speed of metacognition: taking time to get to know one's structural knowledge.

    PubMed

    Mealor, Andy D; Dienes, Zoltan

    2013-03-01

    The time course of different metacognitive experiences of knowledge was investigated using artificial grammar learning. Experiment 1 revealed that when participants are aware of the basis of their judgments (conscious structural knowledge), decisions are made most rapidly, followed by decisions made with conscious judgment but without conscious knowledge of the underlying structure (unconscious structural knowledge); guess responses (unconscious judgment knowledge) are made most slowly, even when controlling for differences in confidence and accuracy. In Experiment 2, short response deadlines decreased the accuracy of unconscious but not conscious structural knowledge. Conversely, the deadline decreased the proportion of conscious structural knowledge in favour of guessing. Unconscious structural knowledge can be applied rapidly but becomes more reliable with additional metacognitive processing time, whereas conscious structural knowledge is an all-or-nothing response that cannot always be applied rapidly. These dissociations corroborate quite separate theories of recognition (dual-process) and metacognition (higher-order thought and cross-order integration). Copyright © 2012 Elsevier Inc. All rights reserved.

  14. Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm.

    PubMed

    He, Xiaoqi; Zheng, Zizhao; Hu, Chao

    2015-01-01

    The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, some important problems remain to be solved, one of which is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and it depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model and a magnetic sensor array, we propose a nonlinear optimization algorithm using a random complex method, applied to the optimization of the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and anti-noise ability of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, in terms of the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoise" capacity, with a larger range of valid initial guess values.
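    Independent of which optimizer is used, the core of such a localization scheme is a nonlinear least-squares fit of dipole parameters to the sensor-array readings. A noise-free sketch with SciPy's nonlinear least-squares solver, a made-up 4×4 sensor grid, and a known dipole moment (the real system also fits the two orientation angles):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    MU0_4PI = 1e-7  # mu_0 / (4*pi) in SI units

    def dipole_field(pos_sensor, pos_dipole, m):
        """Magnetic flux density of a point dipole with moment m at pos_dipole."""
        r = pos_sensor - pos_dipole
        rn = np.linalg.norm(r)
        rhat = r / rn
        return MU0_4PI * (3 * np.dot(m, rhat) * rhat - m) / rn**3

    # Hypothetical 4x4 sensor grid on the z = 0 plane (units: meters)
    xs, ys = np.meshgrid(np.linspace(-0.1, 0.1, 4), np.linspace(-0.1, 0.1, 4))
    sensors = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(16)])

    true_pos = np.array([0.02, -0.01, 0.08])   # "capsule" 8 cm above the array
    m = np.array([0.0, 0.0, 0.5])              # known dipole moment (A*m^2)

    # Simulated, noise-free sensor readings (3 components per sensor)
    data = np.array([dipole_field(s, true_pos, m) for s in sensors]).ravel()

    def residuals(p):
        model = np.array([dipole_field(s, p, m) for s in sensors]).ravel()
        return model - data

    # Fit the 3 position parameters from a deliberately offset initial guess
    fit = least_squares(residuals, x0=true_pos + np.array([0.03, 0.02, -0.02])).x
    ```

    With noise added to `data`, the quality of the initial guess starts to matter, which is the sensitivity the abstract's comparison is about.
    
    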

  15. Setting and changing feature priorities in visual short-term memory.

    PubMed

    Kalogeropoulou, Zampeta; Jagadeesh, Akshay V; Ohl, Sven; Rolfs, Martin

    2017-04-01

    Many everyday tasks require prioritizing some visual features over competing ones, both during the selection from the rich sensory input and while maintaining information in visual short-term memory (VSTM). Here, we show that observers can change priorities in VSTM when, initially, they attended to a different feature. Observers reported from memory the orientation of one of two spatially interspersed groups of black and white gratings. Using colored pre-cues (presented before stimulus onset) and retro-cues (presented after stimulus offset) predicting the to-be-reported group, we manipulated observers' feature priorities independently during stimulus encoding and maintenance, respectively. Valid pre-cues reliably increased observers' performance (reduced guessing, increased report precision) as compared to neutral ones; invalid pre-cues had the opposite effect. Valid retro-cues also consistently improved performance (by reducing random guesses), even if the unexpected group suddenly became relevant (invalid-valid condition). Thus, feature-based attention can reshape priorities in VSTM protecting information that would otherwise be forgotten.

  16. A Case for Soft Error Detection and Correction in Computational Chemistry.

    PubMed

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore, they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
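    One standard detection-and-correction mechanism for matrix data is algorithm-based fault tolerance (ABFT): augment the matrix with row and column checksums, so that a single corrupted element shows up as exactly one inconsistent row sum and one inconsistent column sum and can be repaired in place. A sketch of the idea (not the paper's specific mechanism):

    ```python
    import numpy as np

    def add_checksums(A):
        """Append a checksum row and column (Huang-Abraham-style ABFT encoding)."""
        C = np.zeros((A.shape[0] + 1, A.shape[1] + 1))
        C[:-1, :-1] = A
        C[-1, :-1] = A.sum(axis=0)   # column checksums
        C[:-1, -1] = A.sum(axis=1)   # row checksums
        C[-1, -1] = A.sum()
        return C

    def detect_and_correct(C, tol=1e-9):
        """Locate a single corrupted element via checksum mismatches and repair it."""
        row_err = C[:-1, :-1].sum(axis=1) - C[:-1, -1]
        col_err = C[:-1, :-1].sum(axis=0) - C[-1, :-1]
        bad_rows = np.where(np.abs(row_err) > tol)[0]
        bad_cols = np.where(np.abs(col_err) > tol)[0]
        if len(bad_rows) == 1 and len(bad_cols) == 1:
            i, j = bad_rows[0], bad_cols[0]
            C[i, j] -= row_err[i]    # subtract the discrepancy to restore the value
        return C[:-1, :-1]
    ```

    The checksums are maintained cheaply through linear-algebra updates, which is why this style of protection adds only moderate cost.
    
    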

  17. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
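    The ɛ-continuation idea can be sketched on a scalar stand-in problem: Newton's method solves x = exp(−x/ɛ) easily at ɛ = 1, and each converged root seeds the iteration at the next, smaller ɛ until the stiff target value is reached. The equation and the halving schedule below are illustrative only, not the paper's PDE setting:

    ```python
    import numpy as np

    def newton(f, fprime, x0, tol=1e-12, maxit=50):
        """Plain Newton iteration; raises if the step never gets small."""
        x = x0
        for _ in range(maxit):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("Newton failed to converge")

    def solve_with_continuation(eps_target, eps0=1.0, factor=0.5):
        """March eps down from an easy value, reusing each root as the next guess."""
        eps, x = eps0, 0.5
        while True:
            f = lambda x, e=eps: x - np.exp(-x / e)
            fp = lambda x, e=eps: 1.0 + np.exp(-x / e) / e
            x = newton(f, fp, x)          # warm-started by the previous root
            if eps <= eps_target:
                return x
            eps = max(eps * factor, eps_target)
    ```

    A backtracking variant, in the spirit of the paper's second strategy, would catch the `RuntimeError` and retry with a smaller reduction factor instead of a fixed halving.
    
    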

  18. Comment on 3PL IRT Adjustment for Guessing

    ERIC Educational Resources Information Center

    Chiu, Ting-Wei; Camilli, Gregory

    2013-01-01

    Guessing behavior is an issue discussed widely with regard to multiple choice tests. Its primary effect is on number-correct scores for examinees at lower levels of proficiency. This is a systematic error or bias, which increases observed test scores. Guessing also can inflate random error variance. Correction or adjustment for guessing formulas…

  19. Children's Awareness of Their Own Certainty and Understanding of Deduction and Guessing

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Anderson, Katherine L.

    2006-01-01

    We conducted three studies that investigated first through third grade children's ability to identify and remember deductive inference or guessing as the source of a belief, to detect and retain the certainty of a belief generated through inference or guessing and to evaluate another observer's inferences and guesses. Immediately following a…

  20. Evaluating the Impact of Guessing and Its Interactions With Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    PubMed Central

    Paek, Insu

    2015-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test characteristics on four confidence interval (CI) procedures for coefficient alpha in terms of coverage rate (CR), length, and the degree of asymmetry of CI estimates. In addition, interval estimates of coefficient alpha when data follow the essentially tau-equivalent condition were investigated as a supplement to the case of dichotomous data with examinee guessing. For dichotomous data with guessing, the results did not reveal salient negative effects of guessing and its interactions with other test characteristics (sample size, test length, coefficient alpha levels) on CR and the degree of asymmetry, but the effect of guessing was salient as a main effect and an interaction effect with sample size on the length of the CI estimates, producing longer CI estimates as guessing increased, especially when combined with a small sample size. Other important effects (e.g., CI procedures on CR) are also discussed. PMID:29795863

  1. Rotorcraft Brownout: Advanced Understanding, Control and Mitigation

    DTIC Science & Technology

    2008-12-31

    the Gauss-Seidel iterative method. The overall steps of the SIMPLER algorithm can be summarized as: 1. Guess velocity field, 2. Calculate the momentum…techniques and numerical methods, and the team will begin to develop a methodology that is capable of integrating these solutions and highlighting…rotorcraft design optimization techniques will then be undertaken using the validated computational methods.
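    The Gauss-Seidel step named in this fragment is a classic stationary iteration: sweep through the unknowns in order, using each freshly updated value immediately within the same sweep. A generic sketch for a linear system (not the report's CFD solver):

    ```python
    import numpy as np

    def gauss_seidel(A, b, x0=None, tol=1e-10, maxit=500):
        """Solve A x = b by Gauss-Seidel sweeps; A should be diagonally dominant
        (or symmetric positive definite) for guaranteed convergence."""
        n = len(b)
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        for _ in range(maxit):
            x_old = x.copy()
            for i in range(n):
                # Use already-updated x[:i] and not-yet-updated x[i+1:]
                s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                x[i] = (b[i] - s) / A[i, i]
            if np.linalg.norm(x - x_old, np.inf) < tol:
                break
        return x
    ```

    In a pressure-correction scheme such as SIMPLER, an iteration of this kind is typically applied to the discretized pressure and momentum equations at each outer step.
    
    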

  2. Thermal Vegetation Canopy Model Studies.

    DTIC Science & Technology

    1981-08-01

    optical and thermal canopy radiation models, and the interpretation of these measurements. Previous technical reports in this series have described…The initial guess is taken to be air temperature; thus, the solution approach may be interpreted as determining the modification to the air…provided assistance for interpreting the micrometeorological data. In addition, Dr. L. W. Gay of the School of Renewable Natural Resources, Arizona

  3. An Investigation of Preservice Teachers' Use of Guess and Check in Solving a Semi Open-Ended Mathematics Problem

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret; An, Song A.; Ma, Tingting; Rangel-Chavez, A. Fabiola; Harbaugh, Adam

    2012-01-01

    Open-ended problems have been regarded as powerful tools for teaching mathematics. This study examined the problem solving of eight preservice middle-school mathematics/science teachers (PTs). A semi-structured interview was conducted with the PTs after they completed an open-ended triangle task with four unique solutions. Of particular emphasis was how the PTs used a…

  4. Theory and computation of optimal low- and medium-thrust transfers

    NASA Technical Reports Server (NTRS)

    Chuang, C.-H.

    1993-01-01

    This report presents the formulation of the optimal low- and medium-thrust orbit transfer control problem and methods for numerical solution of the problem. The problem formulation is for final mass maximization and allows for second-harmonic oblateness, atmospheric drag, and three-dimensional, non-coplanar, non-aligned elliptic terminal orbits. We set up some examples to demonstrate the ability of two indirect methods to solve the resulting TPBVPs. The methods demonstrated are the multiple-point shooting method as formulated in H. J. Oberle's subroutine BOUNDSCO, and the minimizing boundary-condition method (MBCM). We find that although both methods can converge to solutions, there are trade-offs to using either method. BOUNDSCO has very poor convergence for guesses that do not exhibit the correct switching structure, whereas MBCM converges for a wider range of guesses. On the other hand, BOUNDSCO's multi-point structure allows more freedom in guesses by increasing the node points, as opposed to guessing only the initial state in MBCM. Finally, we note an additional drawback of BOUNDSCO: the routine does not supply switching-function polarity information to the user's routines, but only the location of a preset number of switching points.

  5. Fedosov differentials and Catalan numbers

    NASA Astrophysics Data System (ADS)

    Löffler, Johannes

    2010-06-01

    The aim of the paper is to establish a non-recursive formula for the general solution of Fedosov's 'quadratic' fixed-point equation (Fedosov 1994 J. Diff. Geom. 40 213-38). Fedosov's geometrical fixed-point equation for a differential is rewritten in a form similar to the functional equation for the generating function of the Catalan numbers, which allows us to guess the solution. An adapted example for Kaehler manifolds of constant sectional curvature is considered in detail. A familiar classical differential is also introduced for every connection on a manifold. Dedicated to the memory of Nikolai Neumaier.
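    The analogy rests on the Catalan generating function C(x) = 1 + x·C(x)², a quadratic fixed-point equation whose truncated power-series iteration converges coefficient by coefficient, so the solution can be "guessed" and then verified against the closed form C_n = binom(2n, n)/(n + 1). A quick numerical check of that fixed-point structure:

    ```python
    from math import comb

    def catalan_series(n_terms):
        """Iterate c <- 1 + x*c(x)^2 on a power series truncated to n_terms
        coefficients; coefficient k stabilizes after k+1 sweeps."""
        c = [0] * n_terms
        for _ in range(n_terms):
            # Cauchy product of c with itself, truncated to n_terms coefficients
            sq = [sum(c[i] * c[k - i] for i in range(k + 1)) for k in range(n_terms)]
            c = [1] + sq[:n_terms - 1]   # coefficients of 1 + x*c^2
        return c

    cats = catalan_series(10)
    closed_form = [comb(2 * n, n) // (n + 1) for n in range(10)]
    ```

    The non-recursive formula in the paper plays the role of `closed_form` here: it bypasses the iteration entirely.
    
    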

  6. Development of Scatterometer-Derived Surface Pressures

    NASA Astrophysics Data System (ADS)

    Hilburn, K. A.; Bourassa, M. A.; O'Brien, J. J.

    2001-12-01

    SeaWinds scatterometer-derived wind fields can be used to estimate surface pressure fields. The method to be used has been developed and tested with Seasat-A and NSCAT wind measurements. The method involves blending two dynamically consistent values of vorticity. Geostrophic relative vorticity is calculated from an initial-guess surface pressure field (the AVN analysis in this case). Relative vorticity is calculated from SeaWinds winds, adjusted to a geostrophic value, and then blended with the initial guess. An objective method is then applied to minimize the differences between the initial-guess field and the scatterometer field, subject to regularization. The long-term goal of this project is to derive research-quality pressure fields from the SeaWinds winds for the Southern Ocean from the Antarctic ice sheet to 30 deg S. The intermediate goal of this report involves generation of pressure fields over the northern hemisphere for testing purposes. Specifically, two issues need to be addressed. First, the most appropriate initial-guess field will be determined: the pure AVN analysis or the previously assimilated pressure field. The independent comparison data to be used in answering this question will involve data near land, ship data, and ice data that were not included in the AVN analysis. Second, the smallest number of pressure observations required to anchor the assimilated field will be determined. This study will use Neumann (derivative) boundary conditions on the region of interest. Such boundary conditions only determine the solution to within a constant that must be determined by a number of anchoring points. The smallness of the number of anchoring points will demonstrate the viability of the general use of the scatterometer as a barometer over the oceans.
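    The anchoring issue is easiest to see in one dimension: derivative (Neumann-type) data determine a field only up to an additive constant, and a single point observation pins that constant down. A toy sketch, integrating a known gradient by the trapezoidal rule and anchoring with one hypothetical observation (the 2-D pressure case works the same way, one constant per connected region):

    ```python
    import numpy as np

    x = np.linspace(0.0, 1.0, 101)
    g = np.cos(2 * np.pi * x)        # "gradient" data (analog of derivative-only info)

    # Integrating the gradient fixes the field only up to an additive constant
    h = x[1] - x[0]
    f = np.concatenate([[0.0], np.cumsum((g[1:] + g[:-1]) / 2.0) * h])

    # One anchoring observation (hypothetical value) removes that constant
    anchor_idx, anchor_value = 50, 2.0
    f += anchor_value - f[anchor_idx]
    ```

    Before the last line, adding any constant to `f` leaves it equally consistent with `g`; after it, the field is unique, which is why only a small number of anchoring observations is needed.
    
    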

  7. Acquiring Different Senses of the Verb "To Know."

    ERIC Educational Resources Information Center

    Richards, Meredith Martin; Brown, Melissa Leath

    Children's understanding of the epistemological terms "know" and "guess" was investigated in two studies with four- to ten-year-old subjects. Two adult players guessed at the location of a ball hidden in one of two boxes. On each trial the child was asked questions about "knowing" and "guessing" both before and after the guessing took place.…

  8. No-signaling quantum key distribution: solution by linear programming

    NASA Astrophysics Data System (ADS)

    Hwang, Won-Young; Bae, Joonwoo; Killoran, Nathan

    2015-02-01

    We outline a straightforward approach for obtaining a secret key rate using only no-signaling constraints and linear programming. Assuming an individual attack, we consider all possible joint probabilities. Initially, we study only the case where Eve has binary outcomes, and we impose constraints due to the no-signaling principle and given measurement outcomes. Within the remaining space of joint probabilities, we use linear programming to obtain a bound on the probability of Eve correctly guessing Bob's bit. We then make use of an inequality that relates this guessing probability to the mutual information between Bob and a more general Eve, who is not binary-restricted. Putting our computed bound together with the Csiszár-Körner formula, we obtain a positive key generation rate. The optimal value of this rate agrees with known results, but was calculated in a more straightforward way, offering the potential of generalization to different scenarios.

  9. The Costs and Benefits of Testing and Guessing on Recognition Memory

    PubMed Central

    Huff, Mark J.; Balota, David A.; Hutchison, Keith A.

    2016-01-01

    We examined whether two types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler tasks, particularly when study lists were semantically related. However, both retrieval practice and guessing also generally inflated false recognition for the non-presented critical words. These patterns were found when final recognition was completed during a short delay within the same experimental session (Experiment 1) and following a 24-hr delay (Experiment 2). In Experiment 3, task instructions were presented randomly after each list to determine whether retrieval-practice and guessing effects were influenced by task-expectancy processes. In contrast to Experiments 1 and 2, final recognition following retrieval practice and guessing was equivalent to restudy, suggesting that the observed retrieval-practice and guessing advantages were in part due to preparatory task-based processing during study. PMID:26950490

  10. Children's and adults' evaluation of the certainty of deductive inferences, inductive inferences, and guesses.

    PubMed

    Pillow, Bradford H

    2002-01-01

    Two experiments investigated kindergarten through fourth-grade children's and adults' (N = 128) ability to (1) evaluate the certainty of deductive inferences, inductive inferences, and guesses; and (2) explain the origins of inferential knowledge. When judging their own cognitive state, children in first grade and older rated deductive inferences as more certain than guesses; but when judging another person's knowledge, children did not distinguish valid inferences from invalid inferences and guesses until fourth grade. By third grade, children differentiated their own deductive inferences from inductive inferences and guesses, but only adults both differentiated deductive inferences from inductive inferences and differentiated inductive inferences from guesses. Children's recognition of their own inferences may contribute to the development of knowledge about cognitive processes, scientific reasoning, and a constructivist epistemology.

  11. Children's and adults' judgments of the certainty of deductive inferences, inductive inferences, and guesses.

    PubMed

    Pillow, Bradford H; Pearson, Raeanne M; Hecht, Mary; Bremer, Amanda

    2010-01-01

    Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults differentiated strong inductions, weak inductions, and informed guesses from pure guesses. By Grade 3, participants also gave different types of explanations for their deductions and inductions. These results are discussed in relation to children's concepts of cognitive processes, logical reasoning, and epistemological development.

  12. Global Search Capabilities of Indirect Methods for Impulsive Transfers

    NASA Astrophysics Data System (ADS)

    Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong

    2015-09-01

    An optimization method which combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of adjoints, unlike previous adjoint-estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach can guarantee obtaining multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.

  13. Evaluating the Impact of Guessing and Its Interactions with Other Test Characteristics on Confidence Interval Procedures for Coefficient Alpha

    ERIC Educational Resources Information Center

    Paek, Insu

    2016-01-01

    The effect of guessing on the point estimate of coefficient alpha has been studied in the literature, but the impact of guessing and its interactions with other test characteristics on the interval estimators for coefficient alpha has not been fully investigated. This study examined the impact of guessing and its interactions with other test…

  14. Puzzler Solution: Perfect Weather for a Picnic | Poster

    Cancer.gov

    It looks like we stumped you. We did not receive any correct guesses for the current Poster Puzzler, which is an image of the top of the Building 434 picnic table, with a view looking towards Building 472. This picnic table and others across campus were supplied by the NCI at Frederick Campus Improvement Committee. Building 434, located on Wood Street, is home to the staff of

  15. Age-related differences in guessing on free and forced recall tests.

    PubMed

    Huff, Mark J; Meade, Michelle L; Hutchison, Keith A

    2011-05-01

    This study examined possible age-related differences in recall, guessing, and metacognition on free recall tests and forced recall tests. Participants studied categorised and unrelated word lists and were asked to recall the items under one of the following test conditions: standard free recall, free recall with a penalty for guessing, free recall with no penalty for guessing, or forced recall. The results demonstrated interesting age differences regarding the impact of liberal test instructions (i.e., forced recall and no penalty) relative to more conservative test instructions (i.e., standard free recall and penalty) on memory performance. Specifically, once guessing was controlled, younger adults' recall of categorised lists varied in accordance with test instructions while older adults' recall of categorised lists did not differ between conservative and liberal test instructions, presumably because older adults approach standard free recall tests of categorised lists with a greater propensity towards guessing than young adults.

  16. Deriving mesoscale temperature and moisture fields from satellite radiance measurements over the United States

    NASA Technical Reports Server (NTRS)

    Hillger, D. W.; Vonder Haar, T. H.

    1977-01-01

    The ability to provide mesoscale temperature and moisture fields from operational satellite infrared sounding radiances over the United States is explored. High-resolution sounding information for mesoscale analysis and forecasting is shown to be obtainable in mostly clear areas. An iterative retrieval algorithm applied to NOAA-VTPR radiances uses a mean radiosonde sounding as a best initial-guess profile. Temperature soundings are then retrieved at a horizontal resolution of about 70 km, as is an indication of the precipitable water content of the vertical sounding columns. Derived temperature values may be biased in general by the initial-guess sounding or in certain areas by the cloud correction technique, but the resulting relative temperature changes across the field when not contaminated by clouds will be useful for mesoscale forecasting and models. The derived moisture, affected only by high clouds, proves to be reliable to within 0.5 cm of precipitable water and contains valuable horizontal information. Present-day applications from polar-orbiting satellites as well as possibilities from upcoming temperature and moisture sounders on geostationary satellites are noted.
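    The role of the first-guess profile in such retrievals can be sketched with a linearized, regularized update: the radiances correct the guess only in the row space of the weighting-function matrix, so profile components the channels cannot see retain the first-guess values, which is one way an initial-guess bias persists in the retrieval. A synthetic, noise-free sketch (the matrix, profile, and regularization weight are all made up, and the real retrieval is iterative and nonlinear):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_levels, n_channels = 20, 8
    K = rng.normal(size=(n_channels, n_levels))  # toy linearized weighting functions

    x_true = np.linspace(250.0, 290.0, n_levels)  # "true" temperature profile (K)
    x_guess = np.full(n_levels, 270.0)            # mean-radiosonde first guess
    y = K @ x_true                                # synthetic radiance measurements

    # Tikhonov-regularized update toward the data, anchored at the first guess:
    # minimizes ||K x - y||^2 + gamma * ||x - x_guess||^2
    gamma = 1e-3
    lhs = K.T @ K + gamma * np.eye(n_levels)
    x_hat = x_guess + np.linalg.solve(lhs, K.T @ (y - K @ x_guess))
    ```

    Because `K` has far fewer rows than unknowns, the null-space part of the error is untouched: the retrieval improves on the guess without fully recovering the true profile.
    
    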

  17. The terminal area automated path generation problem

    NASA Technical Reports Server (NTRS)

    Hsin, C.-C.

    1977-01-01

    The automated terminal area path generation problem in the advanced Air Traffic Control (ATC) system has been studied. Definitions, input, output, and the interrelationships with other ATC functions have been discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum-effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses for the boundary conditions, etc. Recommendations are made on real-world implementations.
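    The dependence on the initial guess for the boundary conditions can be illustrated with a simple shooting method: guess the unknown initial slope, integrate the dynamics forward, and drive the terminal miss distance to zero with a Newton-type (here secant) update. A toy stand-in for the trajectory problem, using a pendulum equation and illustrative boundary values rather than the report's dynamics:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def shoot(s):
        """Integrate u'' = -sin(u) from u(0) = 0, u'(0) = s; return the miss
        distance against the target boundary condition u(1) = 1."""
        sol = solve_ivp(lambda t, y: [y[1], -np.sin(y[0])], (0.0, 1.0), [0.0, s],
                        rtol=1e-10, atol=1e-12)
        return sol.y[0, -1] - 1.0

    # Secant iteration on the unknown initial slope (a Newton-type update that
    # avoids differentiating through the integrator)
    s_prev, s = 0.5, 1.5
    f_prev, f = shoot(s_prev), shoot(s)
    for _ in range(25):
        if abs(f) < 1e-10:
            break
        s_new = s - f * (s - s_prev) / (f - f_prev)
        s_prev, f_prev = s, f
        s, f = s_new, shoot(s_new)
    ```

    A poor starting bracket for the slope can send the iteration to a different solution branch or make it diverge, which is the sensitivity to boundary-condition guesses noted in the report.
    
    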

  18. Nonlinear equation of the modes in circular slab waveguides and its application.

    PubMed

    Zhu, Jianxin; Zheng, Jia

    2013-11-20

    In this paper, circularly curved inhomogeneous waveguides are first transformed into straight inhomogeneous waveguides by a conformal mapping. Then, the differential transfer matrix method is introduced and adopted to deduce the exact dispersion relation for the modes. This relation is complex and difficult to solve, but in practical applications it can be approximated by a simpler nonlinear equation that is close to the exact relation and much easier to analyze. Afterward, optimized asymptotic solutions are obtained and act as initial guesses for the subsequent Newton iteration. Finally, very accurate solutions are achieved in the numerical experiment.

  19. Agency affects adults', but not children's, guessing preferences in a game of chance.

    PubMed

    Harris, Adam J L; Rowley, Martin G; Beck, Sarah R; Robinson, Elizabeth J; McColgan, Kerry L

    2011-09-01

    Adults and children have recently been shown to prefer guessing the outcome of a die roll after the die has been rolled (but remained out of sight) rather than before it has been rolled. This result is contrary to the predictions of the competence hypothesis (Heath & Tversky, 1991), which proposes that people are sensitive to the degree of their relative ignorance and therefore prefer to guess about an outcome it is impossible to know, rather than one that they could know, but do not. We investigated the potential role of agency in guessing preferences about a novel game of chance. When the experimenter controlled the outcome, we replicated the finding that adults and 5- to 6-year-old children preferred to make their guess after the outcome had been determined. For adults only, this preference reversed when they exerted control over the outcome about which they were guessing. The adult data appear best explained by a modified version of the competence hypothesis that highlights the notion of control or responsibility. It is proposed that potential attributions of blame are related to the guesser's role in determining the outcome. The child data were consistent with an imagination-based account of guessing preferences.

  20. Exploring the perceptual biases associated with believing and disbelieving in paranormal phenomena.

    PubMed

    Simmonds-Moore, Christine

    2014-08-01

    Ninety-five participants (32 believers, 30 disbelievers and 33 neutral believers in the paranormal) participated in an experiment comprising one visual and one auditory block of trials. Each block included one ESP, two degraded stimuli and one random trial. Each trial included 8 screens or epochs of "random" noise. Participants entered a guess if they perceived a stimulus or changed their mind about stimulus identity, rated guesses for confidence and made notes during each trial. Believers and disbelievers did not differ in the number of guesses made, or in their ability to detect degraded stimuli. Believers displayed a trend toward making faster guesses for some conditions and significantly higher confidence and more misidentifications concerning guesses than disbelievers. Guesses, misidentifications and faster response latencies were generally more likely in the visual than auditory conditions. ESP performance was no different from chance. ESP performance did not differ between belief groups or sensory modalities. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Influences of Source-Item Contingency and Schematic Knowledge on Source Monitoring: Tests of the Probability-Matching Account

    PubMed Central

    Bayen, Ute J.; Kuhlmann, Beatrice G.

    2010-01-01

    The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source guessing probabilities to the perceived contingency between sources and item types. When they do not have a representation of a contingency, they base their guesses on prior schematic knowledge. The authors provide support for this account in two experiments with sources presenting information that was expected for one source and somewhat unexpected for another. Schema-relevant information about the sources was provided at the time of encoding. When contingency perception was impeded by dividing attention, participants showed schema-based guessing (Experiment 1). Manipulating source-item contingency also affected guessing (Experiment 2). When this contingency was schema-inconsistent, it superseded schema-based expectations and led to schema-inconsistent guessing. PMID:21603251

  2. Predictive ability of an early diagnostic guess in patients presenting with chest pain; a longitudinal descriptive study

    PubMed Central

    2010-01-01

    Background The intuitive early diagnostic guess could play an important role in reaching a final diagnosis. However, no study to date has attempted to quantify the importance of general practitioners' (GPs) ability to correctly appraise the origin of chest pain within the first minutes of an encounter. Methods The validation study was nested in a multicentre cohort study with a one year follow-up and included 626 successive patients who presented with chest pain and were attended by 58 GPs in Western Switzerland. The early diagnostic guess was assessed prior to a patient's history being taken by a GP and was then compared to a diagnosis of chest pain observed over the next year. Results Using summary measures clustered at the GP's level, the early diagnostic guess was confirmed by further investigation in 51.0% (CI 95%; 49.4% to 52.5%) of patients presenting with chest pain. The early diagnostic guess was more accurate in patients with a life threatening illness (65.4%; CI 95% 64.5% to 66.3%) and in patients who did not feel anxious (62.9%; CI 95% 62.5% to 63.3%). The predictive abilities of an early diagnostic guess were consistent among GPs. Conclusions The GPs early diagnostic guess was correct in one out of two patients presenting with chest pain. The probability of a correct guess was higher in patients with a life-threatening illness and in patients not feeling anxious about their pain. PMID:20170544

  3. Predictive ability of an early diagnostic guess in patients presenting with chest pain; a longitudinal descriptive study.

    PubMed

    Verdon, François; Junod, Michel; Herzig, Lilli; Vaucher, Paul; Burnand, Bernard; Bischoff, Thomas; Pécoud, Alain; Favrat, Bernard

    2010-02-21

    The intuitive early diagnostic guess could play an important role in reaching a final diagnosis. However, no study to date has attempted to quantify the importance of general practitioners' (GPs) ability to correctly appraise the origin of chest pain within the first minutes of an encounter. The validation study was nested in a multicentre cohort study with a one-year follow-up and included 626 successive patients who presented with chest pain and were attended by 58 GPs in Western Switzerland. The early diagnostic guess was assessed prior to a patient's history being taken by a GP and was then compared to the diagnosis of chest pain observed over the next year. Using summary measures clustered at the GP level, the early diagnostic guess was confirmed by further investigation in 51.0% (95% CI 49.4% to 52.5%) of patients presenting with chest pain. The early diagnostic guess was more accurate in patients with a life-threatening illness (65.4%; 95% CI 64.5% to 66.3%) and in patients who did not feel anxious (62.9%; 95% CI 62.5% to 63.3%). The predictive abilities of an early diagnostic guess were consistent among GPs. The GPs' early diagnostic guess was correct in one out of two patients presenting with chest pain. The probability of a correct guess was higher in patients with a life-threatening illness and in patients not feeling anxious about their pain.

  4. Evaluating the contributions of task expectancy in the testing and guessing benefits on recognition memory.

    PubMed

    Huff, Mark J; Yates, Tyler J; Balota, David A

    2018-05-03

    Recently, we have shown that two types of initial testing (recall of a list or guessing of critical items, repeated over 12 study/test cycles) improved final recognition of related and unrelated word lists relative to restudy. These benefits were eliminated, however, when test instructions were manipulated within subjects and presented after study of each list, procedures designed to minimise expectancy of a specific type of upcoming test [Huff, Balota, & Hutchison, 2016. The costs and benefits of testing and guessing on recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 1559-1572. doi: 10.1037/xlm0000269], suggesting that testing and guessing effects may be influenced by encoding strategies specific to the type of upcoming task. We follow up these experiments by examining test-expectancy processes in guessing and testing. Testing and guessing benefits over restudy were not found when test instructions were presented either after (Experiment 1) or before (Experiment 2) a single study/task cycle was completed, nor were benefits found when instructions were presented before study/task cycles and the task was repeated three times (Experiment 3). Testing and guessing benefits emerged only when instructions were presented before a study/task cycle and the task was repeated six times (Experiments 4A and 4B). These experiments demonstrate that initial testing and guessing can produce memory benefits in recognition, but only following substantial task repetitions, which likely promote task-expectancy processes.

  5. Use of the maximum entropy method to retrieve the vertical atmospheric ozone profile and predict atmospheric ozone content

    NASA Technical Reports Server (NTRS)

    Turner, B. Curtis

    1992-01-01

    A method is developed for prediction of ozone levels in planetary atmospheres. The method is formulated in terms of the error covariance matrices associated with the direct measurements, the a priori first-guess profile, and a weighting function matrix, and is described by the linearized equation y = Ax + η, where A is the weighting function matrix, x the profile, y the measurements, and η the noise. The problems with this approach are: (1) the matrix A is nearly singular; (2) the number of unknowns in the profile exceeds the number of data points, so the solution may not be unique; and (3) even if a unique solution exists, the noise η may cause the solution to be ill-conditioned.
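The ill-conditioning described in (1)-(3) is easy to demonstrate numerically. The sketch below uses a made-up 3x3 system and generic Tikhonov (ridge) stabilization, not the paper's maximum entropy method, to show how a nearly singular A amplifies the noise η and how regularization tames it:

```python
import numpy as np

# Hypothetical example (not from the paper): a nearly singular
# weighting matrix A makes the naive inverse amplify the noise,
# while a small ridge (Tikhonov) term stabilizes the retrieval.
rng = np.random.default_rng(0)

A = np.array([[1.0, 1.0,      0.0],
              [1.0, 1.000001, 0.0],
              [0.0, 0.0,      1.0]])  # first two rows nearly identical
x_true = np.array([1.0, 2.0, 3.0])
eta = 1e-4 * rng.standard_normal(3)   # measurement noise
y = A @ x_true + eta

# Naive solution x = A^{-1} y amplifies the noise badly
x_naive = np.linalg.solve(A, y)

# Ridge-regularized solution: minimize ||Ax - y||^2 + alpha * ||x||^2
alpha = 1e-6
x_ridge = np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true)
err_ridge = np.linalg.norm(x_ridge - x_true)
print(err_naive, err_ridge)
```

The regularized error stays below one while the naive error is orders of magnitude larger, which is exactly the sense in which problem (1) makes the retrieval ill-conditioned.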

  6. Children's and Adults' Evaluation of Their Own Inductive Inferences, Deductive Inferences, and Guesses

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Pearson, RaeAnne M.

    2009-01-01

    Adults' and kindergarten through fourth-grade children's evaluations and explanations of inductive inferences, deductive inferences, and guesses were assessed. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Beginning in third grade, deductions were rated as more certain than strong…

  7. The Ranschburg Effect: Tests of the Guessing-Bias and Proactive Interference Hypotheses

    ERIC Educational Resources Information Center

    Walsh, Michael F.; Schwartz, Marian

    1977-01-01

    The guessing-bias and proactive interference hypotheses of the Ranschburg Effect were investigated by giving three groups different instructions as to guessing during recall. Results failed to support the prediction that the effect should be reduced or eliminated on shift trials. Neither hypothesis received significant support. (CHK)

  8. Generically Used Expert Scheduling System (GUESS): User's Guide Version 1.0

    NASA Technical Reports Server (NTRS)

    Liebowitz, Jay; Krishnamurthy, Vijaya; Rodens, Ira

    1996-01-01

    This user's guide contains instructions explaining how to best operate the program GUESS, a generic expert scheduling system. GUESS incorporates several important features for a generic scheduler, including automatic scheduling routines to generate a 'first' schedule for the user, a user interface that includes Gantt charts and enables the human scheduler to manipulate schedules manually, diagnostic report generators, and a variety of scheduling techniques. The current version of GUESS runs on an IBM PC or compatible in the Windows 3.1 or Windows 95 environment.

  9. Evaluation of a neutron spectrum from Bonner spheres measurements using a Bayesian parameter estimation combined with the traditional unfolding methods

    NASA Astrophysics Data System (ADS)

    Mazrou, H.; Bezoubiri, F.

    2018-07-01

    In this work, a new program developed under the MATLAB environment and supported by the Bayesian software WinBUGS was combined with the traditional unfolding codes MAXED and GRAVEL to evaluate a neutron spectrum from Bonner sphere counts measured around a shielded 241AmBe-based neutron irradiator located at the Secondary Standards Dosimetry Laboratory (SSDL) at CRNA. In the first step, the results obtained by the standalone Bayesian program, using a parametric neutron spectrum model based on a linear superposition of three components (a thermal Maxwellian distribution, an epithermal 1/E component, and a Watt fission/evaporation model representing the fast component), were compared to those issued from MAXED and GRAVEL assuming a Monte Carlo default spectrum. Through the selection of new upper limits for some free parameters of both models, taking into account the physical characteristics of the irradiation source, good agreement was obtained for the investigated integral quantities, i.e., fluence rate and ambient dose equivalent rate, compared to the MAXED and GRAVEL results. The difference was generally below 4% for the investigated parameters, suggesting the reliability of the proposed models. In the second step, the Bayesian results obtained from the previous calculations were used as initial guess spectra for the traditional unfolding codes MAXED and GRAVEL to derive the solution spectra. Here again the results were in very good agreement, confirming the stability of the Bayesian solution.

  10. The Costs and Benefits of Testing and Guessing on Recognition Memory

    ERIC Educational Resources Information Center

    Huff, Mark J.; Balota, David A.; Hutchison, Keith A.

    2016-01-01

    We examined whether 2 types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler…

  11. Children's and Adults' Judgments of the Certainty of Deductive Inferences, Inductive Inferences, and Guesses

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Pearson, RaeAnne M.; Hecht, Mary; Bremer, Amanda

    2010-01-01

    Children and adults rated their own certainty following inductive inferences, deductive inferences, and guesses. Beginning in kindergarten, participants rated deductions as more certain than weak inductions or guesses. Deductions were rated as more certain than strong inductions beginning in Grade 3, and fourth-grade children and adults…

  12. Use of mathematical decomposition to optimize investments in gas production and distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dougherty, E.L.; Lombardino, E.; Hutchinson, P.

    1986-01-01

    This paper presents an analytical approach based upon the decomposition method of mathematical programming for determining the optimal investment sequence in each year of a planning horizon for a group of reservoirs that produce gas and gas liquids through a trunk-line network and a gas processing plant. The paper describes the development of the simulation and investment planning system (SIPS) to perform the required calculations. Net present value (NPV) is maximized with the requirement that the incremental present value ratio (PWPI) of any investment in any reservoir be greater than a specified minimum value. A unique feature is a gas reservoir simulation model that aids SIPS in evaluating field development investments. The optimal solution supplies specified dry gas offtake requirements through time until the remaining reserves are insufficient to meet requirements economically. The sales value of recovered liquids contributes significantly to NPV, while the required spare gas-producing capacity reduces NPV. SIPS was used successfully for 4 years to generate annual investment plans and operating budgets, and to perform many special studies for a producing complex containing over 50 reservoirs. This experience is reviewed. In considering this large problem, SIPS converges to the optimal solution in 10 to 20 iterations. The primary factor that determines this number is how good the starting guess is. Although SIPS can generate a starting guess, beginning with a previous optimal solution ordinarily results in faster convergence. Computing time increases in proportion to the number of reservoirs because more than 90% of computing time is spent solving the reservoir subproblems.

  13. Method for guessing the response of a physical system to an arbitrary input

    DOEpatents

    Wolpert, David H.

    1996-01-01

    Stacked generalization is used to minimize the generalization errors of one or more generalizers acting on a known set of input values and output values representing a physical manifestation and a transformation of that manifestation, e.g., hand-written characters to ASCII characters, spoken speech to computer commands, etc. Stacked generalization acts to deduce the biases of the generalizer(s) with respect to a known learning set and then correct for those biases. This deduction proceeds by generalizing in a second space whose inputs are the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is the correct guess. Stacked generalization can be used to combine multiple generalizers or to provide a correction to a guess from a single generalizer.
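A minimal numeric sketch of the stacked-generalization idea, with made-up data and two toy level-0 generalizers (a least-squares line and a mean predictor); following the scheme described above, the level-1 generalizer is trained on their out-of-fold guesses:

```python
import numpy as np

# Hypothetical data: noisy linear relationship
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 1))
y = 2.0 * X[:, 0] + 0.1 * rng.standard_normal(200)

def fit_linear(Xtr, ytr):
    # level-0 generalizer: least-squares line
    A = np.c_[Xtr, np.ones(len(Xtr))]
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return lambda Xte: np.c_[Xte, np.ones(len(Xte))] @ coef

def fit_mean(Xtr, ytr):
    # level-0 generalizer: constant (mean) predictor
    m = ytr.mean()
    return lambda Xte: np.full(len(Xte), m)

level0 = [fit_linear, fit_mean]

# Second space: guesses made on held-out parts of the learning set
folds = np.array_split(np.arange(len(X)), 2)
Z = np.zeros((len(X), len(level0)))
for hold in folds:
    train = np.setdiff1d(np.arange(len(X)), hold)
    for j, fit in enumerate(level0):
        Z[hold, j] = fit(X[train], y[train])(X[hold])

# Level-1 generalizer: linear combination of the level-0 guesses
W, *_ = np.linalg.lstsq(Z, y, rcond=None)

# Final predictor: refit level-0 on all data, combine with W
models = [fit(X, y) for fit in level0]
def stacked_predict(Xte):
    return np.column_stack([m(Xte) for m in models]) @ W

rmse = np.sqrt(np.mean((stacked_predict(X) - y) ** 2))
print(rmse)
```

The level-1 weights learn to trust the generalizer whose out-of-fold guesses track the data, which is the bias-correction step the patent describes.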

  14. The dynamics of fidelity over the time course of long-term memory.

    PubMed

    Persaud, Kimele; Hemmer, Pernille

    2016-08-01

    Bayesian models of cognition assume that prior knowledge about the world influences judgments. Recent approaches have suggested that the loss of fidelity from working to long-term (LT) memory is simply due to an increased rate of guessing (e.g., Brady, Konkle, Gill, Oliva, & Alvarez, 2013). That is, recall is the result of either remembering (with some noise) or guessing. This stands in contrast to Bayesian models, which assume that recall is a combination of expectations learned from the environment and noisy memory representations. Here, we evaluate the time course of fidelity in LT episodic memory, and the relative contribution of prior category knowledge and guessing, using a continuous recall paradigm. At an aggregate level, performance reflects a high rate of guessing. However, when the aggregate data are partitioned by lag (i.e., the number of presentations from study to test), or are un-aggregated, performance appears to be more complex than just remembering with some noise and guessing. We implemented three models: the standard remember-guess model, a three-component remember-guess model, and a Bayesian mixture model, and evaluated these models against the data. The results emphasize the importance of taking into account the influence of prior category knowledge on memory. Copyright © 2016 Elsevier Inc. All rights reserved.
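The standard remember-guess model referred to above is commonly written as a mixture of a noisy-memory component and a uniform guessing component over a circular response space. A sketch with hypothetical parameters (von Mises noise is a conventional choice for continuous recall; the specific values are illustrative):

```python
import numpy as np

# Remember-guess mixture for continuous recall (hypothetical parameters):
# with probability p_mem the response is a noisy memory of the target
# (von Mises around zero error), otherwise it is a uniform random guess.
def mixture_pdf(err, p_mem, kappa):
    """Density of recall error on (-pi, pi]."""
    vm = np.exp(kappa * np.cos(err)) / (2 * np.pi * np.i0(kappa))
    uniform = 1.0 / (2 * np.pi)
    return p_mem * vm + (1 - p_mem) * uniform

# Sanity check: the mixture density should integrate to 1 over the circle
errs = np.linspace(-np.pi, np.pi, 20001)
total = mixture_pdf(errs, p_mem=0.6, kappa=8.0).mean() * 2 * np.pi
print(total)
```

Fitting p_mem and kappa to observed recall errors is how the guessing rate is estimated in this family of models; the Bayesian mixture alternative replaces the uniform guess with category-based expectations.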

  15. One Way or Another

    NASA Technical Reports Server (NTRS)

    Zazzali, Christian

    2003-01-01

    Even experienced project managers can't anticipate every potential problem. Part of planning ahead should include allowing oneself the flexibility to rethink the plan and improvise if necessary. Unique solutions to problems sometimes create a set of new problems unique in nature as well. In dealing with sudden changes in planning, try to consider what other elements of the project will be affected, but don't second-guess yourself into a state of inaction because you can't anticipate every contingency.

  16. Correction for Guessing in the Framework of the 3PL Item Response Theory

    ERIC Educational Resources Information Center

    Chiu, Ting-Wei

    2010-01-01

    Guessing behavior is an important topic in assessing proficiency on multiple-choice tests, particularly for examinees at lower levels of proficiency, where there is greater potential for systematic error or bias that inflates observed test scores. Methods that incorporate a correction for guessing on high-stakes tests generally rely…
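For context, the classical correction for guessing that the 3PL framework refines is the formula score R - W/(k-1), which subtracts the expected number of lucky guesses. A sketch with made-up counts:

```python
# Classical correction for guessing ("formula scoring"): subtract the
# expected number of lucky guesses from the number of right answers.
# The counts below are an illustrative example, not from the study.
def formula_score(right, wrong, options):
    """Corrected score = R - W/(k-1) for k-option items."""
    return right - wrong / (options - 1)

# 60 right, 15 wrong, 25 omitted on a 100-item, 4-option test:
print(formula_score(60, 15, 4))  # 55.0
```

Unlike this blanket correction, the 3PL model estimates a per-item pseudo-guessing parameter, so the adjustment varies with item difficulty and examinee ability.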

  17. Children's Understanding of the Words "Know" and "Guess."

    ERIC Educational Resources Information Center

    Miscione, John L.; And Others

    This study investigated preschool children's understanding of the words "know" and "guess." Subjects for the study were 48 male and female preschool children ranging in age from 3.6 to 6.6 years. The children were divided into three age groups representing one year intervals. The task for the study involved a "guessing" game in which a colored…

  18. Children's Evaluation of the Certainty of Another Person's Inductive Inferences and Guesses

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Pearson, RaeAnne M.

    2012-01-01

    In three studies, 5-10-year-old children and an adult comparison group judged another's certainty in making inductive inferences and guesses. Participants observed a puppet make strong inductions, weak inductions, and guesses. Participants either had no information about the correctness of the puppet's conclusion, knew that the puppet was correct,…

  19. IRT Models for Ability-Based Guessing

    ERIC Educational Resources Information Center

    Martin, Ernesto San; del Pino, Guido; De Boeck, Paul

    2006-01-01

    An ability-based guessing model is formulated and applied to several data sets regarding educational tests in language and in mathematics. The formulation of the model is such that the probability of a correct guess does not only depend on the item but also on the ability of the individual, weighted with a general discrimination parameter. By so…

  20. The effect of compliant walls on three-dimensional primary and secondary instabilities in boundary layer transition

    NASA Astrophysics Data System (ADS)

    Joslin, R. D.

    1991-04-01

    The use of passive devices to obtain drag and noise reduction or transition delays in boundary layers is highly desirable. One such device that shows promise for hydrodynamic applications is the compliant coating. The present study extends the mechanical model to allow for three-dimensional waves. This study also looks at the effect of compliant walls on three-dimensional secondary instabilities. For the primary and secondary instability analyses, spectral and shooting approximations are used to obtain solutions of the governing equations and boundary conditions. The spectral approximation consists of local and global methods of solution, while the shooting approach is local. The global method is used to determine the discrete spectrum of eigenvalues without any initial guess. The local method requires a sufficiently accurate initial guess to converge to an eigenvalue. Eigenvectors may be obtained with either local approach. In the initial stage of this analysis, two- and three-dimensional primary instabilities propagating over compliant coatings are considered. Results for the compliant walls are compared with the rigid-wall case. Three-dimensional instabilities are found to dominate transition over the compliant walls considered. However, transition delays are still obtained and compared with transition-delay predictions for rigid walls. The angles of wave propagation are plotted against Reynolds number and frequency. Low-frequency waves are found to be highly three-dimensional.

  1. Improved initial guess with semi-subpixel level accuracy in digital image correlation by feature-based method

    NASA Astrophysics Data System (ADS)

    Zhang, Yunlu; Yan, Lei; Liou, Frank

    2018-05-01

    The quality of the initial guess of deformation parameters in digital image correlation (DIC) has a serious impact on the convergence, robustness, and efficiency of the subsequent subpixel-level searching stage. In this work, an improved feature-based initial guess (FB-IG) scheme is presented to provide initial guesses for points of interest (POIs) inside a large region. Oriented FAST and Rotated BRIEF (ORB) features are semi-uniformly extracted from the region of interest (ROI) and matched to provide initial deformation information. False matched pairs are eliminated by the novel feature-guided Gaussian mixture model (FG-GMM) point set registration algorithm, and nonuniform deformation parameters of the versatile reproducing kernel Hilbert space (RKHS) function are calculated simultaneously. Validations on simulated images and a real-world mini tensile test verify that this scheme can robustly and accurately compute initial guesses with semi-subpixel-level accuracy in cases with small or large translation, deformation, or rotation.

  2. Understanding inference as a source of knowledge: children's ability to evaluate the certainty of deduction, perception, and guessing.

    PubMed

    Pillow, B H; Hill, V; Boyce, A; Stein, C

    2000-03-01

    Three experiments investigated children's understanding of inference as a source of knowledge. Children observed a puppet make a statement about the color of one of two hidden toys after the puppet (a) looked directly at the toy (looking), (b) looked at the other toy (inference), or (c) looked at neither toy (guessing). Most 4-, 5-, and 6-year-olds did not rate the puppet as being more certain of the toy's color after the puppet looked directly at it or inferred its color than they did after the puppet guessed its color. Most 8- and 9-year-olds distinguished inference and looking from guessing. The tendency to explain the puppet's knowledge by referring to inference increased with age. Children who referred to inference in their explanations were more likely to judge deductive inference as more certain than guessing.

  3. Numerical solutions of 3-dimensional Navier-Stokes equations for closed bluff-bodies

    NASA Technical Reports Server (NTRS)

    Abolhassani, J. S.; Tiwari, S. N.

    1985-01-01

    The Navier-Stokes equations are solved numerically. These equations are unsteady, compressible, viscous, and three-dimensional, without neglecting any terms. The time dependency of the governing equations allows the solution to progress naturally from an arbitrary initial guess to an asymptotic steady state, if one exists. The equations are transformed from physical coordinates to computational coordinates, allowing the solution of the governing equations in a rectangular parallelepiped domain. The equations are solved by the MacCormack time-split technique, which is vectorized and programmed to run on the CDC VPS 32 computer. The codes are written in 32-bit (half-word) FORTRAN, which provides an approximate factor-of-two decrease in computational time and doubles the effective memory size compared to the 64-bit word size.

  4. An explicit solution to the exoatmospheric powered flight guidance and trajectory optimization problem for rocket propelled vehicles

    NASA Technical Reports Server (NTRS)

    Jaggers, R. F.

    1977-01-01

    A derivation of an explicit solution to the two-point boundary-value problem of exoatmospheric guidance and trajectory optimization is presented. Fixed initial conditions and continuous-burn, multistage thrusting are assumed. Any number of end conditions from one to six (throttling is required in the case of six) can be satisfied in an explicit and practically optimal manner. The explicit equations converge for off-nominal conditions such as engine failure, abort, target switch, etc. The self-starting predictor/corrector solution involves no Newton-Raphson iterations, numerical integration, or first-guess values, and converges rapidly if physically possible. A form of this algorithm has been chosen for onboard guidance, as well as real-time and preflight ground targeting and trajectory shaping, for the NASA Space Shuttle Program.

  5. Computerized Classification Testing under the One-Parameter Logistic Response Model with Ability-Based Guessing

    ERIC Educational Resources Information Center

    Wang, Wen-Chung; Huang, Sheng-Yun

    2011-01-01

    The one-parameter logistic model with ability-based guessing (1PL-AG) has been recently developed to account for effect of ability on guessing behavior in multiple-choice items. In this study, the authors developed algorithms for computerized classification testing under the 1PL-AG and conducted a series of simulations to evaluate their…

  6. Getting Lucky: How Guessing Threatens the Validity of Performance Classifications

    ERIC Educational Resources Information Center

    Foley, Brett P.

    2016-01-01

    There is always a chance that examinees will answer multiple choice (MC) items correctly by guessing. Design choices in some modern exams have created situations where guessing at random through the full exam--rather than only for a subset of items where the examinee does not know the answer--can be an effective strategy to pass the exam. This…

  7. An age-related attenuation of selectivity of choice in a modified guessing task.

    PubMed

    Sanford, A J; Jack, E; Maule, A J

    1977-01-01

    Previous research has shown that older Ss tend to be less selective in multi-source monitoring tasks in that they do not observe the more likely source of information as frequently as do the young. On the other hand, it has also been found that in a simple guessing-game or probability-matching task older Ss are no different in their patterns of prediction. An experiment is described below in which old and young Ss take part in a simple guessing-game task where uncertainty as to the success of a guess is made artificially high by the introduction of a proportion of trials on which the stimulus event occurring could not be guessed. Under these conditions old Ss were less selective in their responses. It is suggested that the results support a view that older Ss are less selective at high levels of uncertainty in the likelihood of a guess being the correct one; this is consistent with both types of earlier results, goes part-way towards clarifying the differences, and provides a further example of a situation in which attenuated guessing-selectivity is associated with age.

  8. Targeting Ballistic Lunar Capture Trajectories Using Periodic Orbits in the Sun-Earth CRTBP

    NASA Technical Reports Server (NTRS)

    Cooley, D.S.; Griesemer, Paul Ricord; Ocampo, Cesar

    2009-01-01

    A particular periodic orbit in the Earth-Sun circular restricted three body problem is shown to have the characteristics needed for a ballistic lunar capture transfer. An injection from a circular parking orbit into the periodic orbit serves as an initial guess for a targeting algorithm. By targeting appropriate parameters incrementally in increasingly complicated force models and using precise derivatives calculated from the state transition matrix, a reliable algorithm is produced. Ballistic lunar capture trajectories in restricted four body systems are shown to be able to be produced in a systematic way.

  9. Interrater reliability: the kappa statistic.

    PubMed

    McHugh, Mary L

    2012-01-01

    The kappa statistic is frequently used to test interrater reliability. The importance of rater reliability lies in the fact that it represents the extent to which the data collected in the study are correct representations of the variables measured. Measurement of the extent to which data collectors (raters) assign the same score to the same variable is called interrater reliability. While there have been a variety of methods to measure interrater reliability, traditionally it was measured as percent agreement, calculated as the number of agreement scores divided by the total number of scores. In 1960, Jacob Cohen critiqued use of percent agreement due to its inability to account for chance agreement. He introduced Cohen's kappa, developed to account for the possibility that raters actually guess on at least some variables due to uncertainty. Like most correlation statistics, the kappa can range from -1 to +1. While the kappa is one of the most commonly used statistics to test interrater reliability, it has limitations. Judgments about what level of kappa should be acceptable for health research are questioned. Cohen's suggested interpretation may be too lenient for health-related studies because it implies that a score as low as 0.41 might be acceptable. Kappa and percent agreement are compared, and levels for both kappa and percent agreement that should be demanded in healthcare studies are suggested.
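The kappa computation described above, κ = (p_o - p_e)/(1 - p_e), is easy to sketch directly from a two-rater agreement table; the 2x2 counts below are illustrative, not from the article:

```python
import numpy as np

# Cohen's kappa from a square agreement table: rows are rater 1's
# categories, columns are rater 2's, cells are counts.
def cohens_kappa(table):
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_o = np.trace(t) / n                     # observed agreement
    p_e = (t.sum(0) * t.sum(1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Two raters agree on 20 + 15 of 50 cases (70% raw agreement):
table = [[20, 5],
         [10, 15]]
print(round(cohens_kappa(table), 3))  # 0.4
```

Note how 70% raw agreement shrinks to κ = 0.4 once chance agreement (50% here) is removed, which is exactly Cohen's critique of percent agreement.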

  10. Denoising forced-choice detection data.

    PubMed

    García-Pérez, Miguel A

    2010-02-01

    Observers in a two-alternative forced-choice (2AFC) detection task face the need to produce a response at random (a guess) on trials in which neither presentation appeared to display a stimulus. Observers could alternatively be instructed to use a 'guess' key on those trials, a key that would produce a random guess and would also record the resultant correct or wrong response as emanating from a computer-generated guess. A simulation study shows that 'denoising' 2AFC data with information regarding which responses are a result of guesses yields estimates of detection threshold and spread of the psychometric function that are far more precise than those obtained in the absence of this information, and parallel the precision of estimates obtained with yes-no tasks running for the same number of trials. Simulations also show that partial compliance with the instructions to use the 'guess' key reduces the quality of the estimates, which nevertheless continue to be more precise than those obtained from conventional 2AFC data if the observers are still moderately compliant. An empirical study testing the validity of simulation results showed that denoised 2AFC estimates of spread were clearly superior to conventional 2AFC estimates and similar to yes-no estimates, but variations in threshold across observers and across sessions hid the benefits of denoising for threshold estimation. The empirical study also proved the feasibility of using a 'guess' key in addition to the conventional response keys defined in 2AFC tasks.

  11. Optimization of Low-Thrust Spiral Trajectories by Collocation

    NASA Technical Reports Server (NTRS)

    Falck, Robert D.; Dankanich, John W.

    2012-01-01

    As NASA examines potential missions in the post space shuttle era, there has been a renewed interest in low-thrust electric propulsion for both crewed and uncrewed missions. While much progress has been made in the field of software for the optimization of low-thrust trajectories, many of the tools utilize higher-fidelity methods which, while excellent, result in extremely high run-times and poor convergence when dealing with planetocentric spiraling trajectories deep within a gravity well. Conversely, faster tools like SEPSPOT provide a reasonable solution but typically fail to account for other forces such as third-body gravitation, aerodynamic drag, and solar radiation pressure. SEPSPOT is further constrained by its solution method, which may require a very good guess to yield a converged optimal solution. Here the authors have developed an approach using collocation intended to provide solution times comparable to those given by SEPSPOT while allowing for greater robustness and extensible force models.

  12. Multiple-Choice Exams and Guessing: Results from a One-Year Study of General Chemistry Tests Designed to Discourage Guessing

    ERIC Educational Resources Information Center

    Campbell, Mark L.

    2015-01-01

    Multiple-choice exams, while widely used, are necessarily imprecise due to the contribution of guessing to the final student score. This past year at the United States Naval Academy, the construction and grading scheme for the department-wide general chemistry multiple-choice exams were revised with the goal of decreasing the contribution of…

  13. Verbal problem solving in high functioning autistic individuals.

    PubMed

    Minshew, N J; Siegel, D J; Goldstein, G; Weldy, S

    1994-01-01

    The verbal problem-solving and abstract reasoning ability of 25 high-functioning autistic individuals aged 11 to 41 was compared with that of normal controls individually matched on age, gender, race, IQ, and educational level. The Twenty Questions Procedure was administered using a grid of 42 common objects. Time to complete the task, number of correct solutions, and number and type of questions asked were analyzed. Results indicated that controls were more often successful in achieving solutions, and in formulating constraint-seeking questions that conceptually grouped, ordered, and sorted the objects. In contrast, the autistic individuals relied primarily on guessing. Findings are consistent with prior studies reporting a core deficit in autism involving abstract reasoning ability.

  14. Robust iterative method for nonlinear Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Yuan, Lijun; Lu, Ya Yan

    2017-08-01

    A new iterative method is developed for solving the two-dimensional nonlinear Helmholtz equation which governs polarized light in media with the optical Kerr nonlinearity. In the strongly nonlinear regime, the nonlinear Helmholtz equation could have multiple solutions related to phenomena such as optical bistability and symmetry breaking. The new method exhibits a much more robust convergence behavior than existing iterative methods, such as frozen-nonlinearity iteration, Newton's method and damped Newton's method, and it can be used to find solutions when good initial guesses are unavailable. Numerical results are presented for the scattering of light by a nonlinear circular cylinder based on the exact nonlocal boundary condition and a pseudospectral method in the polar coordinate system.

  15. A Naval Aviation Maintenance Organizational Activity Strategic Information System (OASIS)

    DTIC Science & Technology

    1990-03-01

    regarding how a problem should be solved" [Ref. 22, p. 5]. They amount to "educated guesses" about the solution to a problem. ... Digital Equipment...user provides the developers with the requirements, feedback about progress, and approval of the final system. End-user involvement has become a...of confidence. If MCCs had access to such data they could make better educated decisions about the likelihood of a particular part being the cause

  16. Arc-Length Continuation and Multi-Grid Techniques for Nonlinear Elliptic Eigenvalue Problems,

    DTIC Science & Technology

    1981-03-19

    size of the finest grid. We use the (AM) adaptive version of the Cycle C algorithm, unless otherwise stated. The first modified algorithm is the...by computing the derivative, uk, at a known solution and using it to get a better initial guess for the next value of X in a predictor-corrector fashion...factorization of the Jacobian Gu computed already in the Newton step. Using such a predictor-corrector method will often allow us to take a much bigger step

  17. An Efficient Searchable Encryption Against Keyword Guessing Attacks for Sharable Electronic Medical Records in Cloud-based System.

    PubMed

    Wu, Yilun; Lu, Xicheng; Su, Jinshu; Chen, Peixin

    2016-12-01

    Preserving the privacy of electronic medical records (EMRs) is extremely important, especially when medical systems adopt cloud services to store patients' electronic medical records. Considering both the privacy and the utilization of EMRs, some medical systems apply searchable encryption to encrypt EMRs and enable authorized users to search over these encrypted records. Since individuals would like to share their EMRs with multiple persons, designing an efficient searchable encryption scheme for sharable EMRs remains a challenging task. In this paper, we propose a cost-efficient secure channel free searchable encryption (SCF-PEKS) scheme for sharable EMRs. Compared with existing SCF-PEKS solutions, our scheme reduces the storage overhead and achieves better computation performance. Moreover, our scheme can guard against keyword guessing attacks, which are neglected by most existing schemes. Finally, we implement both our scheme and a recent medical-based scheme to evaluate the performance. The evaluation results show that our scheme performs much better than the latest one for sharable EMRs.

  18. The ironic effect of guessing: increased false memory for mediated lists in younger and older adults

    PubMed Central

    Coane, Jennifer H.; Huff, Mark J.; Hutchison, Keith A.

    2016-01-01

    Younger and older adults studied lists of words directly (e.g., creek, water) or indirectly (e.g., beaver, faucet) related to a nonpresented critical lure (CL; e.g., river). Indirect (i.e., mediated) lists presented items that were only related to CLs through nonpresented mediators (i.e., directly related items). Following study, participants completed a condition-specific task, math, a recall test with or without a warning about the CL, or tried to guess the CL. On a final recognition test, warnings (vs. math and recall without warning) decreased false recognition for direct lists, and guessing increased mediated false recognition (an ironic effect of guessing) in both age groups. The observed age-invariance of the ironic effect of guessing suggests that processes involved in mediated false memory are preserved in aging and confirms the effect is largely due to activation in semantic networks during encoding and to the strengthening of these networks during the interpolated tasks. PMID:26393390

  20. A New Neural Network Approach Including First-Guess for Retrieval of Atmospheric Water Vapor, Cloud Liquid Water Path, Surface Temperature and Emissivities Over Land From Satellite Microwave Observations

    NASA Technical Reports Server (NTRS)

    Aires, F.; Prigent, C.; Rossow, W. B.; Rothstein, M.; Hansen, James E. (Technical Monitor)

    2000-01-01

    The analysis of microwave observations over land to determine atmospheric and surface parameters is still limited due to the complexity of the inverse problem. Neural network techniques have already proved successful as the basis of efficient retrieval methods for non-linear cases; however, first-guess estimates, which are used in variational methods to avoid problems of solution non-uniqueness or other forms of solution irregularity, have up to now not been used with neural network methods. In this study, a neural network approach is developed that uses a first guess. Conceptual bridges are established between the neural network and variational methods. The new neural method retrieves the surface skin temperature, the integrated water vapor content, the cloud liquid water path, and the microwave surface emissivities between 19 and 85 GHz over land from SSM/I observations. Retrieving all these quantities in parallel improves the results for consistency reasons. A data base to train the neural network is calculated with a radiative transfer model and a global collection of coincident surface and atmospheric parameters extracted from the National Center for Environmental Prediction reanalysis, from the International Satellite Cloud Climatology Project data, and from microwave emissivity atlases previously calculated. The results of the neural network inversion are very encouraging. The r.m.s. error of the surface temperature retrieval over the globe is 1.3 K in clear sky conditions and 1.6 K in cloudy scenes. Water vapor is retrieved with a r.m.s. error of 3.8 kg/sq m in clear conditions and 4.9 kg/sq m in cloudy situations. The r.m.s. error in cloud liquid water path is 0.08 kg/sq m. The surface emissivities are retrieved with an accuracy of better than 0.008 in clear conditions and 0.010 in cloudy conditions. Microwave land surface temperature retrieval presents a very attractive complement to the infrared estimates in cloudy areas: a time record of land surface temperature will be produced.

  1. Inverse method predicting spinning modes radiated by a ducted fan from free-field measurements.

    PubMed

    Lewy, Serge

    2005-02-01

    In this study, the inverse problem of deducing the modal structure of the acoustic field generated by a ducted turbofan is addressed using conventional far-field directivity measurements. The final objective is to make input data available for predicting noise radiation in other configurations that have not been tested. The present paper is devoted to the analytical part of that study. The proposed method is based on the equations governing ducted sound propagation and free-field radiation. It leads to fast computations checked on Rolls-Royce tests made in the framework of previous European projects. Results seem reliable although the system of equations to be solved is generally underdetermined (more propagating modes than acoustic measurements). A limited number of modes are thus selected according to any a priori knowledge of the sources. A first guess of the source amplitudes is obtained by adjusting the calculated maximum of radiation of each mode to the measured sound pressure level at the same angle. A least-squares fitting gives the final solution. A simple correction can be made to take account of the mean flow velocity inside the nacelle, which shifts the directivity patterns. It consists of modifying the actual frequency to keep the cut-off ratios unchanged.

  2. Solving free-plasma-boundary problems with the SIESTA MHD code

    NASA Astrophysics Data System (ADS)

    Sanchez, R.; Peraza-Rodriguez, H.; Reynolds-Barredo, J. M.; Tribaldos, V.; Geiger, J.; Hirshman, S. P.; Cianciosa, M.

    2017-10-01

    SIESTA is a recently developed MHD equilibrium code designed to perform fast and accurate calculations of ideal MHD equilibria for 3D magnetic configurations. It is an iterative code that uses the solution obtained by the VMEC code to provide a background coordinate system and an initial guess of the solution. The final solution that SIESTA finds can exhibit magnetic islands and stochastic regions. In its original implementation, SIESTA addressed only fixed-boundary problems. This fixed boundary condition somewhat restricts its possible applications. In this contribution we describe a recent extension of SIESTA that enables it to address free-plasma-boundary situations, opening up the possibility of investigating problems with SIESTA in which the plasma boundary is perturbed either externally or internally. As an illustration, the extended version of SIESTA is applied to a configuration of the W7-X stellarator.

  3. Newton's method applied to finite-difference approximations for the steady-state compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Bailey, Harry E.; Beam, Richard M.

    1991-01-01

    Finite-difference approximations for steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are presently solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
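
    The Jacobian-freezing idea behind the approximate Newton method can be shown on a toy 2-D system. The refresh interval and the test system below are hypothetical, chosen only to illustrate the mechanics:

```python
def approx_newton(F, J, x, refresh=5, tol=1e-10, max_iter=200):
    """Newton iteration that freezes the Jacobian: J is re-evaluated only
    every `refresh` steps, trading quadratic convergence for much cheaper
    iterations. A toy 2-D sketch of the 'frozen Jacobian' idea."""
    a = b = c = d = det = 0.0
    for k in range(max_iter):
        f1, f2 = F(x)
        if abs(f1) + abs(f2) < tol:
            break
        if k % refresh == 0:          # refresh (and "factor") the Jacobian
            (a, b), (c, d) = J(x)
            det = a * d - b * c
        # Cramer's rule for the 2x2 system  J * delta = -F
        dx = (-f1 * d + f2 * b) / det
        dy = (-a * f2 + c * f1) / det
        x = (x[0] + dx, x[1] + dy)
    return x

# intersection of the circle x^2 + y^2 = 4 with the hyperbola x*y = 1
F = lambda p: (p[0] ** 2 + p[1] ** 2 - 4.0, p[0] * p[1] - 1.0)
J = lambda p: ((2 * p[0], 2 * p[1]), (p[1], p[0]))
sol = approx_newton(F, J, (2.0, 0.5))
```

    With the Jacobian frozen, convergence drops from quadratic to linear, which is the trade-off the abstract refers to.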

  4. Towards understanding the guessing game: a dynamical systems’ perspective

    NASA Astrophysics Data System (ADS)

    Reimann, Stefan

    2004-08-01

    The so-called “Guessing Game” or α-Beauty Contest serves as a paradigmatic conceptual framework for competitive price formation on financial markets beyond traditional equilibrium finance. It highlights features that are reasonable to consider when dealing with price formation on real markets. Nonetheless this game is still poorly understood. We propose a model which is essentially based on two assumptions: (1) players consider intervals rather than exact numbers to cope with incomplete knowledge and (2) players iteratively update their recent guesses. It provides an explanation for typical patterns observed in real data, such as the strict positivity of outcomes in the 1-shot setting, the skew background distribution of guessed numbers, as well as the polynomial convergence towards the game-theoretic Nash equilibrium in the iterative setting.
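
    The iterative convergence toward the Nash equilibrium can be mimicked with a toy simulation. The update rule below, in which each player best-responds to the previous round's mean plus a little noise, is our simplification for illustration, not the authors' interval-based model:

```python
import random

def simulate_beauty_contest(n_players=50, alpha=2/3, rounds=20, seed=1):
    """Toy iterated alpha-beauty contest: each round every player guesses
    alpha times the previous round's mean, plus noise, clipped to stay
    non-negative. The mean guess then contracts geometrically toward the
    Nash equilibrium at 0."""
    rng = random.Random(seed)
    guesses = [rng.uniform(0.0, 100.0) for _ in range(n_players)]
    means = []
    for _ in range(rounds):
        m = sum(guesses) / n_players
        means.append(m)
        guesses = [max(0.0, alpha * m + rng.uniform(-1.0, 1.0)) for _ in guesses]
    return means

means = simulate_beauty_contest()
```

    Under this simplified rule the contraction per round is the factor alpha; the authors' model additionally explains the skewed distribution of guesses and the polynomial (rather than geometric) convergence seen in real data.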

  5. Guess what? Here is a new tool that finds some new guessing attacks

    DTIC Science & Technology

    2003-01-01

    Ricardo Corin, Sreekanth Malladi, Jim Alves-Foss, and Sandro Etalle. A type-flaw occurs when a message of one type is received by a...satisfying condition 1), but not before guessing (satisfying condition 2). ...The only case...Feb 2003. ...4.1 Examples. Example 4.1. Consider the following protocol: Msg 1. a

  6. Puzzler Solution: Perfect Weather for a Picnic | Poster

    Cancer.gov

    It looks like we stumped you. We did not receive any correct guesses for the current Poster Puzzler, which is an image of the top of the Building 434 picnic table, with a view looking towards Building 472. This picnic table and others across campus were supplied by the NCI at Frederick Campus Improvement Committee. Building 434, located on Wood Street, is home to the staff of Scientific Publications, Graphics & Media (SPGM), the Central Repository, and the NCI Experimental Therapeutics Program support group, Applied and Developmental Research Directorate.

  7. The neural encoding of guesses in the human brain.

    PubMed

    Bode, Stefan; Bogler, Carsten; Soon, Chun Siong; Haynes, John-Dylan

    2012-01-16

    Human perception depends heavily on the quality of sensory information. When objects are hard to see, we often believe ourselves to be purely guessing. Here we investigated whether such guesses use brain networks involved in perceptual decision making or independent networks. We used a combination of fMRI and pattern classification to test how visibility affects the signals that determine choices. We found that decisions regarding clearly visible objects are predicted by signals in sensory brain regions, whereas different regions in parietal cortex became predictive when subjects were shown invisible objects and believed themselves to be purely guessing. This parietal network overlapped strongly with regions that have previously been shown to encode free decisions. Thus, the brain might use a dedicated network for determining choices when insufficient sensory information is available. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Priming guesses on a forced-recall test.

    PubMed

    Gibson, Janet M; Meade, Michelle L

    2004-07-01

    The forced-recall paradigm requires participants to fill all spaces on the memory test even if they cannot remember all the list words. In the present study, the authors used that paradigm to examine the influence of implicit memory on guessing--when participants fill remaining spaces after they cannot remember list items. They measured explicit memory as the percentage of targets that participants designated as remembered from the list and implicit memory as the percentage of targets they wrote but did not designate as remembered (beyond chance level). The authors examined implicit memory on guessing with forced recall (Experiment 1), forced cued recall with younger and older adults (Experiment 2), and forced free and cued recall under a depth-of-processing manipulation (Experiment 3). They conclude that implicit memory influences guesses of targets in the forced-recall paradigm.

  9. An analytical-numerical approach for parameter determination of a five-parameter single-diode model of photovoltaic cells and modules

    NASA Astrophysics Data System (ADS)

    Hejri, Mohammad; Mokhtari, Hossein; Azizian, Mohammad Reza; Söder, Lennart

    2016-04-01

    Parameter extraction of the five-parameter single-diode model of solar cells and modules from experimental data is a challenging problem. These parameters are evaluated from a set of nonlinear equations that cannot be solved analytically. On the other hand, a numerical solution of such equations needs a suitable initial guess to converge. This paper presents a new set of approximate analytical solutions for the parameters of a five-parameter single-diode model of photovoltaic (PV) cells and modules. The proposed solutions provide a good initial point which guarantees convergence of the numerical analysis. The proposed technique needs only a few data points from the PV current-voltage characteristics, i.e., the open-circuit voltage Voc, the short-circuit current Isc, and the maximum power point current and voltage (Im, Vm), making it a fast and low-cost parameter determination technique. The accuracy of the presented theoretical I-V curves is verified by experimental data.
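
    The role of an analytical initial guess in the numerical solution can be sketched on the forward problem: the explicit ideal-diode current (Rs = 0, Rsh → ∞) starts a Newton solve of the implicit single-diode equation. The parameter values are hypothetical and the initial-guess formula is a textbook approximation, not the paper's:

```python
import math

def diode_current(V, Iph, I0, n, Rs, Rsh, T=298.15):
    """Current of the single-diode model at voltage V, found by Newton's
    method on the implicit equation
        I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh.
    The explicit ideal-diode current (Rs = 0, Rsh = infinity) serves as the
    analytical initial guess that makes the iteration converge reliably."""
    Vt = 1.380649e-23 * T / 1.602176634e-19   # thermal voltage kT/q
    a = n * Vt
    I = Iph - I0 * (math.exp(V / a) - 1.0)    # analytical initial guess
    for _ in range(50):
        e = math.exp((V + I * Rs) / a)
        f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I
        dfdI = -I0 * e * Rs / a - Rs / Rsh - 1.0
        step = f / dfdI
        I -= step
        if abs(step) < 1e-12:
            break
    return I

# hypothetical single-cell parameters, for illustration only
Isc = diode_current(0.0, Iph=5.0, I0=1e-9, n=1.3, Rs=0.01, Rsh=20.0)
```

    The paper's contribution is the analogous closed-form starting point for the inverse problem (recovering Iph, I0, n, Rs, Rsh from measured Voc, Isc, Im, Vm), which this forward sketch does not attempt.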

  10. Iterative Methods to Solve Linear RF Fields in Hot Plasma

    NASA Astrophysics Data System (ADS)

    Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo

    2014-10-01

    Most magnetic plasma confinement devices use radio frequency (RF) waves for current drive and/or heating. Numerical modeling of RF fields is an important part of performance analysis of such devices and a predictive tool aiding design and development of future devices. Prior attempts at this modeling have mostly used direct solvers to solve the formulated linear equations. Full wave modeling of RF fields in hot plasma with 3D nonuniformities is largely prohibitive: the memory demands of a direct solver place a significant limitation on spatial resolution. Iterative methods can significantly increase spatial resolution. We explore the feasibility of using iterative methods in 3D full wave modeling. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating along test particle orbits. The wave equation is discretized using a finite difference approach. The initial guess is important in iterative methods, and we examine different initial guesses, including the solution to the cold plasma wave equation. Work is supported by the U.S. DOE SBIR program.
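
    The payoff of a good initial guess in an iterative solver can be seen even on a toy linear system. The sketch below uses Jacobi iteration on a diagonally dominant tridiagonal matrix as a stand-in for the discretized wave operator:

```python
def jacobi(b, x0, tol=1e-12, max_iter=10000):
    """Jacobi iteration for A x = b with A = tridiag(-1, 4, -1). The error
    contracts by about 0.5 per sweep, so the iteration count scales with the
    log of the initial error: a good initial guess directly buys speed,
    which is the role a cold-plasma solution can play for the hot-plasma
    solve. (Toy stand-in, not the actual wave equation.)"""
    n = len(b)
    x = x0[:]
    for k in range(max_iter):
        x_new = [(b[i] + (x[i - 1] if i else 0.0)
                        + (x[i + 1] if i < n - 1 else 0.0)) / 4.0
                 for i in range(n)]
        if max(abs(u - v) for u, v in zip(x, x_new)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

b = [1.0] * 50
x_cold, it_cold = jacobi(b, [0.0] * 50)                  # cold start: zero guess
x_warm, it_warm = jacobi(b, [v + 1e-6 for v in x_cold])  # near-converged guess
```

    The warm-started run needs roughly half the sweeps because its initial error is six orders of magnitude smaller; Krylov methods such as GMRES, which are more typical for wave problems, benefit from a good starting point in the same way.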

  11. Counterfactual quantum computation through quantum interrogation

    NASA Astrophysics Data System (ADS)

    Hosten, Onur; Rakher, Matthew T.; Barreiro, Julio T.; Peters, Nicholas A.; Kwiat, Paul G.

    2006-02-01

    The logic underlying the coherent nature of quantum information processing often deviates from intuitive reasoning, leading to surprising effects. Counterfactual computation constitutes a striking example: the potential outcome of a quantum computation can be inferred, even if the computer is not run. Relying on similar arguments to interaction-free measurements (or quantum interrogation), counterfactual computation is accomplished by putting the computer in a superposition of `running' and `not running' states, and then interfering the two histories. Conditional on the as-yet-unknown outcome of the computation, it is sometimes possible to counterfactually infer information about the solution. Here we demonstrate counterfactual computation, implementing Grover's search algorithm with an all-optical approach. It was believed that the overall probability of such counterfactual inference is intrinsically limited, so that it could not perform better on average than random guesses. However, using a novel `chained' version of the quantum Zeno effect, we show how to boost the counterfactual inference probability to unity, thereby beating the random guessing limit. Our methods are general and apply to any physical system, as illustrated by a discussion of trapped-ion systems. Finally, we briefly show that, in certain circumstances, counterfactual computation can eliminate errors induced by decoherence.
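
    The scaling behind the chained-Zeno boost can be checked numerically: with N interrogation cycles, each rotating the polarization by π/2N, the survival probability cos²ᴺ(π/2N) approaches unity. This is the textbook interaction-free-measurement bound, shown only to illustrate the trend, not the experiment's full protocol:

```python
import math

def zeno_success(N):
    """Survival probability after N quantum-Zeno interrogation cycles, each
    rotating the polarization by pi/(2N): p = cos(pi/(2N))**(2N). As N grows,
    p ~ 1 - pi**2/(4N) -> 1, which is how chaining lets the counterfactual
    inference probability beat the random-guessing limit."""
    return math.cos(math.pi / (2 * N)) ** (2 * N)

probs = {N: zeno_success(N) for N in (1, 5, 25, 125)}
```

    A single cycle (N = 1) never survives, while 125 cycles already exceed 97% success, at the cost of proportionally more passes through the interferometer.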

  12. Incorrect predictions reduce switch costs.

    PubMed

    Kleinsorge, Thomas; Scheil, Juliane

    2015-07-01

    In three experiments, we combined two sources of conflict within a modified task-switching procedure. The first source of conflict was the one inherent in any task switching situation, namely the conflict between a task set activated by the recent performance of another task and the task set needed to perform the actually relevant task. The second source of conflict was induced by requiring participants to guess aspects of the upcoming task (Exps. 1 & 2: task identity; Exp. 3: position of task precue). In case of an incorrect guess, a conflict accrues between the representation of the guessed task and the actually relevant task. In Experiments 1 and 2, incorrect guesses led to an overall increase of reaction times and error rates, but they reduced task switch costs compared to conditions in which participants predicted the correct task. In Experiment 3, incorrect guesses resulted in faster performance overall and to a selective decrease of reaction times in task switch trials when the cue-target interval was long. We interpret these findings in terms of an enhanced level of controlled processing induced by a combination of two sources of conflict converging upon the same target of cognitive control. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Sampling strategies based on singular vectors for assimilated models in ocean forecasting systems

    NASA Astrophysics Data System (ADS)

    Fattorini, Maria; Brandini, Carlo; Ortolani, Alberto

    2016-04-01

    Meteorological and oceanographic models do need observations, not only as a ground truth element to verify the quality of the models, but also to keep model forecast error acceptable: through data assimilation techniques which merge measured and modelled data, natural divergence of numerical solutions from reality can be reduced / controlled and a more reliable solution - called analysis - is computed. Although this concept is valid in general, its application, especially in oceanography, raises many problems due to three main reasons: the difficulties that have ocean models in reaching an acceptable state of equilibrium, the high measurements cost and the difficulties in realizing them. The performances of the data assimilation procedures depend on the particular observation networks in use, well beyond the background quality and the used assimilation method. In this study we will present some results concerning the great impact of the dataset configuration, in particular measurements position, on the evaluation of the overall forecasting reliability of an ocean model. The aim consists in identifying operational criteria to support the design of marine observation networks at regional scale. In order to identify the observation network able to minimize the forecast error, a methodology based on Singular Vectors Decomposition of the tangent linear model is proposed. Such a method can give strong indications on the local error dynamics. In addition, for the purpose of avoiding redundancy of information contained in the data, a minimal distance among data positions has been chosen on the base of a spatial correlation analysis of the hydrodynamic fields under investigation. This methodology has been applied for the choice of data positions starting from simplified models, like an ideal double-gyre model and a quasi-geostrophic one. 
Model configurations and data assimilation are based on available ROMS routines, where a variational assimilation algorithm (4D-var) is included as part of the code These first applications have provided encouraging results in terms of increased predictability time and reduced forecast error, also improving the quality of the analysis used to recover the real circulation patterns from a first guess quite far from the real state.

  14. Parallel Monotonic Basin Hopping for Low Thrust Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    McCarty, Steven L.; McGuire, Melissa L.

    2018-01-01

    Monotonic Basin Hopping has been shown to be an effective method of solving low thrust trajectory optimization problems. This paper outlines an extension to the common serial implementation by parallelizing it over any number of available compute cores. The Parallel Monotonic Basin Hopping algorithm described herein is shown to be an effective way to more quickly locate feasible solutions, and improve locally optimal solutions in an automated way, without requiring a feasible initial guess. The increased speed achieved through parallelization enables the algorithm to be applied to more complex problems that would otherwise be impractical for a serial implementation. Low thrust cislunar transfers and a hybrid Mars example case demonstrate the effectiveness of the algorithm. Finally, a preliminary scaling study quantifies the expected decrease in solve time compared to a serial implementation.
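
    The serial core of Monotonic Basin Hopping is compact: perturb the incumbent, re-optimize locally, and accept only strict improvements. The 1-D test function and crude local search below are stand-ins for the trajectory-optimization NLP solver a real implementation would call:

```python
import math
import random

def local_descent(f, x, step=0.05, iters=200):
    """Crude derivative-free local search: accept small left/right moves
    that reduce f, shrinking the step (a stand-in for the NLP solver)."""
    for _ in range(iters):
        for cand in (x - step, x + step):
            if f(cand) < f(x):
                x = cand
        step *= 0.97
    return x

def monotonic_basin_hopping(f, x0, hops=200, hop_size=1.5, seed=7):
    """Monotonic Basin Hopping: perturb the incumbent, re-optimize locally,
    and accept only strict improvements (the 'monotonic' acceptance rule).
    A 1-D toy analogue of the trajectory-optimization usage in the paper."""
    rng = random.Random(seed)
    best = local_descent(f, x0)
    for _ in range(hops):
        trial = local_descent(f, best + rng.uniform(-hop_size, hop_size))
        if f(trial) < f(best):        # monotonic: never accept a worse point
            best = trial
    return best

# rugged test function: global minimum at x = 0 surrounded by local minima
f = lambda x: x * x + 10.0 - 10.0 * math.cos(2.0 * math.pi * x)
best = monotonic_basin_hopping(f, x0=4.3)
```

    The parallel extension described in the paper amounts to running many such hop loops concurrently and sharing incumbents, which the serial sketch above does not attempt.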

  15. Children's Developing Understanding of Mental Verbs: Remember, Know, and Guess.

    ERIC Educational Resources Information Center

    Johnson, Carl Nils; Wellman, Henry M.

    1980-01-01

    Preschoolers interpreted mental verbs with respect to their mental state in contrast to external state. These children were nonetheless ignorant of definitive distinctions between the mental verbs, completely confusing cases of remembering, knowing, and guessing. (Author/RH)

  16. The role of guessing and boundaries on date estimation biases.

    PubMed

    Lee, Peter James; Brown, Norman R

    2004-08-01

    This study investigates the causes of event-dating biases. Two hundred participants provided knowledge ratings and date estimates for 64 news events. Four independent groups dated the same events under different boundary constraints. Analysis across all responses showed that forward telescoping decreased with boundary age, concurring with the boundary-effects model. With guesses removed from the data set, backward telescoping was greatly reduced, but forward telescoping was unaffected by boundaries. This dissociation indicates that multiple factors (e.g., guessing and reconstructive strategies) are responsible for different dating biases and argues against a boundary explanation of forward telescoping.

  17. Target-present guessing as a function of target prevalence and accumulated information in visual search.

    PubMed

    Peltier, Chad; Becker, Mark W

    2017-05-01

    Target prevalence influences visual search behavior. At low target prevalence, miss rates are high and false alarms are low, while the opposite is true at high prevalence. Several models of search aim to describe search behavior, one of which was specifically intended to model search at varying prevalence levels. The multiple decision model (Wolfe & Van Wert, Current Biology, 20(2), 121-124, 2010) posits that all searches that end before the observer detects a target result in a target-absent response. However, researchers have found very high false alarm rates in high-prevalence searches, suggesting that prevalence rates may be used as a source of information to make "educated guesses" after search termination. Here, we further examine the ability of prevalence level and knowledge gained during visual search to influence guessing rates. We manipulate target prevalence and the amount of information that an observer accumulates about a search display prior to making a response to test whether these sources of evidence are used to inform target-present guess rates. We find that observers use both information about target prevalence rates and information about the proportion of the array inspected prior to making a response, allowing them to make an informed and statistically driven guess about the target's presence.
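
    One way the two evidence sources could combine is a simple Bayesian account, assuming detection is perfect within the inspected region. This is our illustrative model, not the authors':

```python
def p_present_given_no_find(prevalence, fraction_inspected):
    """Posterior probability that a target is present after inspecting a
    fraction of the display without finding it, assuming perfect detection
    inside the inspected region. A Bayesian sketch of how prevalence and
    accumulated information could jointly drive target-present guesses."""
    miss = prevalence * (1.0 - fraction_inspected)   # present but not yet seen
    absent = 1.0 - prevalence
    return miss / (miss + absent)

# high prevalence but little of the array inspected -> lean toward 'present'
high = p_present_given_no_find(0.9, 0.2)
# same prevalence after inspecting most of the array -> lean toward 'absent'
low = p_present_given_no_find(0.9, 0.9)
```

    The posterior falls as more of the array is inspected without a find, which matches the qualitative pattern the abstract reports: both prevalence and proportion inspected inform the guess.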

  18. Using patients' narratives to reveal gender stereotypes among medical students.

    PubMed

    Andersson, Jenny; Salander, Pär; Hamberg, Katarina

    2013-07-01

    Gender bias exists in patient treatment, and, like most people, health care providers harbor gender stereotypes. In this study, the authors examined the gender stereotypes that medical students hold about patients. In 2005, in Umeå, Sweden, the authors collected 81 narratives written by patients who had undergone cancer treatment; all information that might reveal the patients' gender was removed from the texts. Eighty-seven medical students read 40 or 41 narratives each, guessed the patient's gender, and explained their guess. The authors analyzed the students' explanations qualitatively and quantitatively to reveal the students' gender stereotypes and to determine whether those stereotypes had any predictive value for correctly guessing a patient's gender. The students' explanations contained 21 categories of justifications, 12 of which were significantly associated with the students guessing one gender or the other. Only three categories successfully predicted a correct identification of gender; two categories were more often associated with incorrect guesses. Medical students enter their training program with culturally shared stereotypes about male and female patients that could cause bias during their future careers as physicians. To prevent this, medical curricula must address gender stereotypes and their possible consequences. The impact of implicit stereotypes must be included in discussions about gender bias in health care.

  19. Immediate Feedback Assessment Technique in a Chemistry Classroom

    NASA Astrophysics Data System (ADS)

    Taylor, Kate R.

    The Immediate Feedback Assessment Technique, or IFAT, is a new testing system that turns a student's traditional multiple-choice testing into a chance for hands-on learning, and provides teachers with an opportunity to obtain more information about a student's knowledge during testing. In the current study we wanted to know whether, when students are given the second chance afforded by the IFAT system, they are guessing or using prior knowledge when making their second-chance choice. Additionally, while there has been some adoption of this testing system in non-science disciplines, we wanted to study whether the IFAT system would be well received among faculty in the sciences, more specifically chemistry faculty. By comparing the students' rate of success on the second chance afforded by the IFAT system with the statistical likelihood of guessing correctly, statistical analysis was used to determine whether we observed enough students earning the second-chance points to reject the hypothesis that students were guessing randomly. Our data analysis revealed that it is statistically highly unlikely that students were only guessing when the IFAT system was utilized. (It is important to note that while we can find that students answer correctly at a much higher rate than random guessing, we can never truly know whether every student is using thought or not.)
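
    The comparison against random guessing amounts to a one-sided binomial test. The counts below are hypothetical; only the calculation is shown:

```python
from math import comb

def p_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    correct second-chance answers if every student guessed uniformly at
    random among the remaining choices."""
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i) for i in range(k, n + 1))

# After one wrong pick on a four-option item, a random second guess succeeds
# with p = 1/3. Hypothetical counts: 30 successes out of 60 second chances.
p_value = p_at_least(30, 60, 1.0 / 3.0)
```

    A small p-value here is exactly the "statistically highly unlikely that students were only guessing" conclusion the abstract describes, applied to made-up numbers.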

  20. A fast multi-resolution approach to tomographic PIV

    NASA Astrophysics Data System (ADS)

    Discetti, Stefano; Astarita, Tommaso

    2012-03-01

    Tomographic particle image velocimetry (Tomo-PIV) is a recently developed three-component, three-dimensional, non-intrusive anemometric measurement technique, based on an optical tomographic reconstruction applied to simultaneously recorded images of the distribution of light intensity scattered by seeding particles immersed in the flow. Nowadays, the reconstruction process is carried out mainly by iterative algebraic reconstruction techniques, well suited to handling the problem of a limited number of views, but computationally intensive and memory demanding. The adoption of the multiplicative algebraic reconstruction technique (MART) has become more and more accepted. In the present work, a novel multi-resolution approach is proposed, relying on the adoption of a coarser grid in the first step of the reconstruction to obtain a fast estimate of a reliable and accurate first guess. A performance assessment, carried out on three-dimensional computer-generated distributions of particles, shows a substantial acceleration of the reconstruction process for all the tested seeding densities with respect to the standard method based on 5 MART iterations; a relevant reduction in memory storage is also achieved. Furthermore, a slight accuracy improvement is noticed. A modified version, improved by a multiplicative line-of-sight estimation of the first guess on the compressed configuration, is also tested, exhibiting a further remarkable decrease in both memory storage and computational effort, mostly at the lowest tested seeding densities, while retaining the same performance in terms of accuracy.
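
    The MART update at the core of the reconstruction is multiplicative, which preserves positivity of the voxel intensities. A bare-bones sketch on a hypothetical 3-voxel toy system (omitting the paper's coarse-grid first guess and camera model) is:

```python
def mart(A, b, x0, sweeps=500, mu=1.0):
    """Multiplicative ART: for each ray i, scale every voxel it touches by
    (b_i / (A_i . x)) ** (mu * A_ij). Positivity is preserved automatically,
    which is one reason MART suits tomographic particle reconstruction."""
    x = x0[:]
    for _ in range(sweeps):
        for Ai, bi in zip(A, b):
            proj = sum(a * xj for a, xj in zip(Ai, x))
            ratio = bi / proj
            x = [xj * ratio ** (mu * a) for a, xj in zip(Ai, x)]
    return x

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]   # 3 rays over 3 voxels
b = [3.0, 5.0, 4.0]                      # projections of the true field [1, 2, 3]
x = mart(A, b, x0=[1.0, 1.0, 1.0])
```

    In the paper's multi-resolution variant, a few MART sweeps on a coarser grid would supply the first guess `x0`, replacing the uniform start used here.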

  1. Modulation of financial deprivation on deception and its neural correlates.

    PubMed

    Sun, Peng; Ling, Xiaoli; Zheng, Li; Chen, Jia; Li, Lin; Liu, Zhiyuan; Cheng, Xuemei; Guo, Xiuyan

    2017-11-01

    Deception is a universal phenomenon in human society and plays an important role in everyday life. Previous studies have revealed that people may hold an internalized moral norm of honesty, and that deceptive behavior reliably correlates with activation in executive regions of the prefrontal cortices that over-ride intuitive honest responses. Using functional magnetic resonance imaging, this study investigated how financial position modulates the neural responses during deceptive decisions. Twenty-one participants were scanned while they played a series of adapted Dictator Games with different partners after a ball-guessing game. Specifically, participants gained or lost money in the ball-guessing game and then had opportunities for further financial gain by cheating in the adapted Dictator Game. Behavioral results indicated that participants did not cheat to the full extent possible; moreover, they were more likely to lie after losing money than after gaining money. At the neural level, weaker activity in the dorsolateral prefrontal cortices was observed when participants lied after losing money than after gaining money. Together, our data indicate that people indeed hold an internalized norm of honesty, but that it becomes more lenient when people feel financially deprived. Suppressing the truthful response arising from this norm was associated with increased activation in the dorsolateral prefrontal cortices, and this association weakened under financial deprivation.

  2. Accelerated gradient based diffuse optical tomographic image reconstruction.

    PubMed

    Biswas, Samir Kumar; Rajan, K; Vasu, R M

    2011-01-01

    We present fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The commonly used Newton-based MOBIIR algorithm requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time many-fold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities, and the algorithms are validated in an experimental study on pork tissue with fat acting as an inhomogeneity. The results obtained with the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm, using mean squared error and execution time as metrics. We have shown through experimental and simulation studies that the Broyden-based and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and in a tissue-mimicking phantom. They are computationally simple and result in much faster implementations because they avoid direct evaluation of the Jacobian. Image reconstructions were carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution; when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
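
The low-rank Jacobian update at the heart of the Broyden approach is the classic rank-1 secant formula. A toy sketch, with an invented 2-by-2 nonlinear system standing in for the DOT forward model:

```python
import numpy as np

def broyden_update(J, dx, df):
    """Broyden's 'good' rank-1 update: the cheapest revision of the Jacobian
    estimate J that satisfies the secant condition J_new @ dx = df."""
    return J + np.outer(df - J @ dx, dx) / (dx @ dx)

# Invented toy system f(x) = 0 with root (1, 2), standing in for the
# forward model.
def f(x):
    return np.array([x[0] ** 2 - 1.0, x[0] * x[1] - 2.0])

x = np.array([1.5, 1.5])
J = np.array([[2 * x[0], 0.0],
              [x[1], x[0]]])          # exact Jacobian only at the start
for _ in range(25):
    dx = np.linalg.solve(J, -f(x))    # quasi-Newton step, no re-evaluation of J
    if np.linalg.norm(dx) < 1e-12:    # converged
        break
    x_new = x + dx
    J = broyden_update(J, dx, f(x_new) - f(x))
    x = x_new
```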

  3. A multi-frequency iterative imaging method for discontinuous inverse medium problem

    NASA Astrophysics Data System (ADS)

    Zhang, Lei; Feng, Lixin

    2018-06-01

    The inverse medium problem with a discontinuous refractive index is a challenging class of inverse problem. We employ primal-dual theory and fast solution of integral equations to propose a new iterative imaging method. The regularization parameter is selected by generalized cross-validation. Based on multi-frequency measurements of the scattered field, a recursive linearization algorithm is presented, proceeding from low to high frequency. We also discuss the initial-guess selection strategy using semi-analytical approaches. Numerical experiments are presented to show the effectiveness of the proposed method.

  4. Explicitly computing geodetic coordinates from Cartesian coordinates

    NASA Astrophysics Data System (ADS)

    Zeng, Huaien

    2013-04-01

    This paper presents a new form of quartic equation based on Lagrange's extremum law and a Groebner basis, under the constraint that the geodetic height is the shortest distance between a given point and the reference ellipsoid. A very explicit and concise formula for the quartic equation, obtained via Ferrari's method, is found, which avoids the need for the good starting guess that iterative methods require. A new explicit algorithm is then proposed to compute geodetic coordinates from Cartesian coordinates. The convergence region of the algorithm is investigated and the corresponding correct solution is given. Lastly, the algorithm is validated with numerical experiments.
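
The shortest-distance constraint the derivation starts from can be illustrated numerically in the meridian plane: minimize the distance from the point to the ellipse and read latitude and height off the foot point. This brute-force golden-section sketch (WGS84 semi-axes assumed) only illustrates the constraint; it is not the paper's closed-form quartic solution:

```python
import math

def geodetic_from_cartesian(x, z, a=6378137.0, b=6356752.314245):
    """Find the parametric angle t minimizing the distance from (x, z) to the
    meridian ellipse (a cos t, b sin t); the foot point gives the geodetic
    latitude, and the minimal distance is the geodetic height (point assumed
    outside the ellipsoid, in the first quadrant)."""
    d2 = lambda t: (x - a * math.cos(t)) ** 2 + (z - b * math.sin(t)) ** 2
    lo, hi = 0.0, math.pi / 2
    g = (math.sqrt(5.0) - 1.0) / 2.0            # golden-section search on t
    while hi - lo > 1e-12:
        t1, t2 = hi - g * (hi - lo), lo + g * (hi - lo)
        if d2(t1) < d2(t2):
            hi = t2
        else:
            lo = t1
    t = 0.5 * (lo + hi)
    lat = math.atan2(a * math.sin(t), b * math.cos(t))  # tan(lat) = (a/b) tan t
    return lat, math.sqrt(d2(t))
```

Feeding it a point built from a known latitude and height recovers both, which is exactly the property the closed-form quartic exploits analytically.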

  5. Guessing imagined and live chance events: adults behave like children with live events.

    PubMed

    Robinson, E J; Pendle, J E C; Rowley, M G; Beck, S R; McColgan, K L T

    2009-11-01

    An established finding is that adults prefer to guess before rather than after a chance event has happened. This is interpreted in terms of aversion to guessing when relatively incompetent: After throwing, the fall could be known. Adults (N=71, mean age 18;11, N=28, mean age 48;0) showed this preference with imagined die-throwing as in the published studies. With live die-throwing, children (N=64, aged 6 and 8 years; N=50, aged 5 and 6 years) and 15-year-olds (N=93, 46) showed the opposite preference, as did 17 adults. Seventeen-year-olds (N=82) were more likely to prefer to guess after throwing with live rather than imagined die-throwing. Reliance on imagined situations in the literature on decision-making under uncertainty ignores the possibility that adults imagine inaccurately how they would really feel: After a real die has been thrown, adults, like children, may feel there is less ambiguity about the outcome.

  6. Implicit recognition based on lateralized perceptual fluency.

    PubMed

    Vargas, Iliana M; Voss, Joel L; Paller, Ken A

    2012-02-06

    In some circumstances, accurate recognition of repeated images in an explicit memory test is driven by implicit memory. We propose that this "implicit recognition" results from perceptual fluency that influences responding without awareness of memory retrieval. Here we examined whether recognition would vary if images appeared in the same or different visual hemifield during learning and testing. Kaleidoscope images were briefly presented left or right of fixation during divided-attention encoding. Presentation in the same visual hemifield at test produced higher recognition accuracy than presentation in the opposite visual hemifield, but only for guess responses. These correct guesses likely reflect a contribution from implicit recognition, given that when the stimulated visual hemifield was the same at study and test, recognition accuracy was higher for guess responses than for responses with any level of confidence. The dramatic difference in guessing accuracy as a function of lateralized perceptual overlap between study and test suggests that implicit recognition arises from memory storage in visual cortical networks that mediate repetition-induced fluency increments.

  7. Integration of social information by human groups

    PubMed Central

    Granovskiy, Boris; Gold, Jason M.; Sumpter, David; Goldstone, Robert L.

    2015-01-01

    We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy but with information about the decisions made by peers in their group. The “wisdom of crowds” hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0 and 100% (e.g., ‘What percentage of Americans are left-handed?’). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move towards the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. 
This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased. PMID:26189568
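
The model sketched at the end of the abstract (decide whether to use peer guesses; if so, move toward the group mean by a step proportional to the distance) can be written down directly; the use-probability and step size below are hypothetical free parameters, not the paper's fitted values:

```python
import random

def update_guesses(guesses, p_use=0.6, step=0.5, seed=0):
    """Each person decides (with hypothetical probability p_use) whether to
    take the social information into account; if so, they move toward the
    group mean by a step proportional to their distance from it."""
    rng = random.Random(seed)
    mean = sum(guesses) / len(guesses)
    return [g + step * (mean - g) if rng.random() < p_use else g
            for g in guesses]

updated = update_guesses([10.0, 15.0, 90.0])   # one outlying guess at 90
```

With these dynamics, outliers (which sit farthest from the mean) take the largest moves, reproducing the abstract's observation that outlying opinions are systematically pulled back toward the group.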

  8. Integration of Social Information by Human Groups.

    PubMed

    Granovskiy, Boris; Gold, Jason M; Sumpter, David J T; Goldstone, Robert L

    2015-07-01

    We consider a situation in which individuals search for accurate decisions without direct feedback on their accuracy, but with information about the decisions made by peers in their group. The "wisdom of crowds" hypothesis states that the average judgment of many individuals can give a good estimate of, for example, the outcomes of sporting events and the answers to trivia questions. Two conditions for the application of wisdom of crowds are that estimates should be independent and unbiased. Here, we study how individuals integrate social information when answering trivia questions with answers that range between 0% and 100% (e.g., "What percentage of Americans are left-handed?"). We find that, consistent with the wisdom of crowds hypothesis, average performance improves with group size. However, individuals show a consistent bias to produce estimates that are insufficiently extreme. We find that social information provides significant, albeit small, improvement to group performance. Outliers with answers far from the correct answer move toward the position of the group mean. Given that these outliers also tend to be nearer to 50% than do the answers of other group members, this move creates group polarization away from 50%. By looking at individual performance over different questions we find that some people are more likely to be affected by social influence than others. There is also evidence that people differ in their competence in answering questions, but lack of competence is not significantly correlated with willingness to change guesses. We develop a mathematical model based on these results that postulates a cognitive process in which people first decide whether to take into account peer guesses, and if so, to move in the direction of these guesses. The size of the move is proportional to the distance between their own guess and the average guess of the group. 
This model closely approximates the distribution of guess movements and shows how outlying incorrect opinions can be systematically removed from a group resulting, in some situations, in improved group performance. However, improvement is only predicted for cases in which the initial guesses of individuals in the group are biased. Copyright © 2015 Cognitive Science Society, Inc.

  9. Controlling Guessing Bias in the Dichotomous Rasch Model Applied to a Large-Scale, Vertically Scaled Testing Program

    PubMed Central

    Andrich, David; Marais, Ida; Humphry, Stephen Mark

    2015-01-01

    Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The consequence is that the proficiencies of the more proficient students are increased relative to those of the less proficient. Not controlling the guessing bias underestimates the progress of students across 7 years of schooling with important educational implications. PMID:29795871

  10. Partitioned-Interval Quantum Optical Communications Receiver

    NASA Technical Reports Server (NTRS)

    Vilnrotter, Victor A.

    2013-01-01

    The proposed quantum receiver in this innovation partitions each binary signal interval into two unequal segments: a short "pre-measurement" segment at the beginning of the symbol interval, used to form an initial guess with better than 50/50 odds, and a much longer segment used for high-sensitivity signal detection via field cancellation and photon-counting detection. It was found that by assigning as little as 10% of the total signal energy to the pre-measurement segment, the initial 50/50 guess can be improved to about 70/30 using the best available measurements, such as classical coherent or "optimized Kennedy" detection.

  11. Multiple-choice examinations: adopting an evidence-based approach to exam technique.

    PubMed

    Hammond, E J; McIndoe, A K; Sansome, A J; Spargo, P M

    1998-11-01

    Negatively marked multiple-choice questions (MCQs) are part of the assessment process in both the Primary and Final examinations for the fellowship of the Royal College of Anaesthetists. It is said that candidates who guess will lose marks in the MCQ paper. We studied candidates attending a pre-examination revision course and have shown that an evaluation of examination technique is an important part of an individual's preparation. All candidates benefited substantially from backing their educated guesses while only 3 out of 27 lost marks from backing their wild guesses. Failure to appreciate the relationship between knowledge and technique may significantly affect a candidate's performance in the examination.
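
The arithmetic behind backing educated guesses is a simple expected-value calculation; under symmetric +1/-1 negative marking (illustrative values), a wild guess breaks even on average and any better-than-even guess gains marks:

```python
def expected_mark(p_correct, reward=1.0, penalty=1.0):
    """Expected mark per attempted item under negative marking:
    gain `reward` with probability p_correct, lose `penalty` otherwise."""
    return p_correct * reward - (1.0 - p_correct) * penalty

# A wild guess on a true/false item (p = 0.5) breaks even on average;
# an educated guess (say p = 0.6) has positive expected value, which is
# why backing educated guesses paid off for nearly all candidates.
wild = expected_mark(0.5)
educated = expected_mark(0.6)
```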

  12. Number Guessing

    ERIC Educational Resources Information Center

    Sezin, Fatin

    2009-01-01

    It is instructive and interesting to find hidden numbers by using different positional numeration systems. Most of the present guessing techniques use the binary system expressed as less-than, greater-than or present-absent type information. This article describes how, by employing four cards having integers 1-64 written in different colours, one…

  13. Orbital Battleship: A Guessing Game to Reinforce Atomic Structure

    ERIC Educational Resources Information Center

    Kurushkin, Mikhail; Mikhaylenko, Maria

    2016-01-01

    A competitive educational guessing game "Orbital Battleship" which reinforces Madelung's and Hund's rules, values of quantum numbers, and understanding of periodicity was designed. The game develops strategic thinking, is not time-consuming, requires minimal preparation and supervision, and is an efficient and fun alternative to more…

  14. Hybrid Differential Dynamic Programming with Stochastic Search

    NASA Technical Reports Server (NTRS)

    Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob

    2016-01-01

    Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, most notably with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation, augmenting the HDDP algorithm with a wider search of the solution space.
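
Monotonic basin hopping itself is easy to sketch: perturb the best solution found so far, re-run the local optimizer, and accept only improvements. In this toy sketch a trivial basin-snapping routine stands in for the HDDP inner loop, and the objective is invented:

```python
import random

def f(x):
    """Invented multimodal objective: a local minimum at every integer n with
    value |n|; the global minimum is at x = 0."""
    return (x - round(x)) ** 2 + abs(round(x))

def local_opt(x):
    """Stand-in for the gradient-based inner loop (HDDP in the paper):
    descend to the bottom of the current basin."""
    xm = float(round(x))
    return xm, f(xm)

def monotonic_basin_hopping(x0, n_hops=200, step=3.0, seed=1):
    rng = random.Random(seed)
    best_x, best_f = local_opt(x0)
    for _ in range(n_hops):
        x_try, f_try = local_opt(best_x + rng.uniform(-step, step))
        if f_try < best_f:            # monotonic: accept improvements only
            best_x, best_f = x_try, f_try
    return best_x, best_f

best_x, best_f = monotonic_basin_hopping(7.3)   # hops basin to basin toward 0
```

Because a gradient-based inner loop only ever reaches the minimum of the basin it starts in, the random hops are what let the search escape to better basins.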

  15. The effect of unsuccessful retrieval on children's subsequent learning.

    PubMed

    Carneiro, Paula; Lapa, Ana; Finn, Bridgid

    2018-02-01

    It is well known that successful retrieval enhances adults' subsequent learning by promoting long-term retention. Recent research has also found benefits from unsuccessful retrieval, but the evidence is not as clear-cut when the participants are children. In this study, we employed a methodology based on guessing, the weak associate paradigm, to test whether children can learn from generated errors or whether errors are harmful to learning. We tested second- and third-grade children in Experiment 1 and preschool and kindergarten children in Experiment 2. With slight differences in method, in both experiments children heard the experimenter say one word (the cue) and were asked either to guess an associated word (guess condition) or to listen to the corresponding target word (study condition), followed by corrective feedback in both conditions. At the end of the guessing phase, the children undertook a cued-recall task in which they were presented with each cue and asked to say the correct target. Together, the results showed that older children, those in kindergarten and early elementary school, benefited from unsuccessful retrieval: they showed more correct responses and fewer errors in the guess condition. In contrast, preschoolers produced similar levels of correct and error responses in the two conditions. In conclusion, generating errors seems to be beneficial for the future learning of children older than 5 years. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Length scales involved in decoherence of trapped bosons by buffer-gas scattering

    NASA Astrophysics Data System (ADS)

    Gilz, Lukas; Rico-Pérez, Luis; Anglin, James R.

    2014-05-01

    We ask and answer a basic question about the length scales involved in quantum decoherence: how far apart in space do two parts of a quantum system have to be before a common quantum environment decoheres them as if they were entirely separate? We frame this question specifically in a cold atom context. How far apart do two populations of bosons have to be before an environment of thermal atoms of a different species ("buffer gas") responds to their two particle numbers separately? An initial guess for this length scale is the thermal coherence length of the buffer gas; we show that a standard Born-Markov treatment partially supports this guess, but predicts only inverse-square saturation of decoherence rates with distance, and not the much more abrupt Gaussian behavior of the buffer gas's first-order coherence. We confirm this Born-Markov result with a more rigorous theory, based on an exact solution of a two-scatterer scattering problem, which also extends the result beyond weak scattering. Finally, however, we show that when interactions within the buffer-gas reservoir are taken into account, an abrupt saturation of the decoherence rate does occur, exponentially on the length scale of the buffer gas's mean free path.

  17. A hybrid computational-experimental approach for automated crystal structure solution

    NASA Astrophysics Data System (ADS)

    Meredig, Bryce; Wolverton, C.

    2013-02-01

    Crystal structure solution from diffraction experiments is one of the most fundamental tasks in materials science, chemistry, physics and geology. Unfortunately, numerous factors render this process labour intensive and error prone. Experimental conditions, such as high pressure or structural metastability, often complicate characterization. Furthermore, many materials of great modern interest, such as batteries and hydrogen storage media, contain light elements such as Li and H that only weakly scatter X-rays. Finally, structural refinements generally require significant human input and intuition, as they rely on good initial guesses for the target structure. To address these many challenges, we demonstrate a new hybrid approach, first-principles-assisted structure solution (FPASS), which combines experimental diffraction data, statistical symmetry information and first-principles-based algorithmic optimization to automatically solve crystal structures. We demonstrate the broad utility of FPASS to clarify four important crystal structure debates: the hydrogen storage candidates MgNH and NH3BH3; Li2O2, relevant to Li-air batteries; and high-pressure silane, SiH4.

  18. Uneducated Guesses: Using Evidence to Uncover Misguided Education Policies

    ERIC Educational Resources Information Center

    Wainer, Howard

    2011-01-01

    "Uneducated Guesses" challenges everything our policymakers thought they knew about education and education reform, from how to close the achievement gap in public schools to admission standards for top universities. In this explosive book, Howard Wainer uses statistical evidence to show why some of the most widely held beliefs in…

  19. Guess the Number of . . .

    ERIC Educational Resources Information Center

    Housen, Monica

    2017-01-01

    In this article, Monica Housen describes how she uses Guess the Number of . . . , a game that develops estimation skills and persistence, to provide a fun, meaningful experience for her high school students. Each week she displays objects in a clear plastic container, like those for pretzels sold in bulk. Students enter a…

  20. Guess what? Implicit motivation boosts the influence of subliminal information on choice.

    PubMed

    Milyavsky, Maxim; Hassin, Ran R; Schul, Yaacov

    2012-09-01

    When is choice affected by subliminal messages? This question has fascinated scientists and lay people alike, but it is only recently that reliable empirical data began to emerge. In the current paper we bridge the literature on implicit motivation and that on subliminal persuasion. We suggest that motivation in general, and implicit motivation more specifically, plays an important role in subliminal persuasion: It sensitizes us to subliminal cues. To examine this hypothesis we developed a new paradigm that allows powerful tests of subliminal influences as well as stringent assessments of subliminality. The results of two experiments suggest that implicit motivation can enhance the effects of subliminal priming on choice. Copyright © 2012 Elsevier Inc. All rights reserved.

  1. Observational tests for stellar evolution and pulsation theory. I - The globular clusters M 4 and M 15

    NASA Astrophysics Data System (ADS)

    Caputo, F.

    1987-01-01

    It is shown that the pulsational properties of RR Lyrae variables in globular clusters can be used together with the Red Giant Branch location to derive reliable information on the cluster reddening and distance modulus. By demanding full agreement with some key observables, the reddening and distance modulus of the globular clusters M4 and M15 are derived as a function of the mass of the variables and of the adopted cluster metallicity. Thus, from the comparison between observations and theoretical isochrones, the cluster age can be evaluated. A best guess for the age of M4 and M15 can be presented: 16×10^9 yr, with a total uncertainty of 2×10^9 yr.

  2. Size and emotion averaging: costs of dividing attention after all.

    PubMed

    Brand, John; Oriet, Chris; Tottenham, Laurie Sykes

    2012-03-01

    Perceptual averaging is a process by which sets of similar items are represented by summary statistics such as their average size, luminance, or orientation. Researchers have argued that this process is automatic, able to be carried out without interference from concurrent processing. Here, we challenge this conclusion and demonstrate a reliable cost of computing the mean size of circles distinguished by colour (Experiments 1 and 2) and the mean emotionality of faces distinguished by sex (Experiment 3). We also test the viability of two strategies that could have allowed observers to guess the correct response without computing the average size or emotionality of both sets concurrently. We conclude that although two means can be computed concurrently, doing so incurs a cost of dividing attention.

  3. On computations of the integrated space shuttle flowfield using overset grids

    NASA Technical Reports Server (NTRS)

    Chiu, I-T.; Pletcher, R. H.; Steger, J. L.

    1990-01-01

    Numerical simulations using the thin-layer Navier-Stokes equations and the chimera (overset) grid approach were carried out for flows around the integrated space shuttle vehicle over a range of Mach numbers. Body-conforming grids were used for all component grids. Test cases include a three-component overset grid configuration consisting of the external tank (ET), the solid rocket booster (SRB), and the orbiter (ORB), and a five-component configuration that adds the forward and aft attach hardware. The results were compared with wind tunnel and flight data. In addition, a Poisson solution procedure (a special case of the vorticity-velocity formulation) using primitive variables was developed to solve three-dimensional, irrotational, inviscid flows for single as well as overset grids. The solutions were validated by comparison with other analytical or numerical solutions and/or experimental results for various geometries. The Poisson solution was also used as an initial guess for the thin-layer Navier-Stokes solution procedure to improve the efficiency of the numerical flow simulations. It was found that this approach yielded roughly a 30 percent CPU time savings compared with solving the thin-layer Navier-Stokes equations from a uniform free-stream flowfield.

  4. Subjective qualities of memories associated with the picture superiority effect in schizophrenia.

    PubMed

    Huron, Caroline; Danion, Jean-Marie; Rizzo, Lydia; Killofer, Valérie; Damiens, Annabelle

    2003-02-01

    Patients with schizophrenia (n = 24) matched with 24 normal subjects were presented with both words and pictures. On a recognition memory task, they were asked to give remember, know, or guess responses to items that were recognized on the basis of conscious recollection, familiarity, or guessing, respectively. Compared with normal subjects, patients exhibited a lower picture superiority effect selectively related to remember responses. Unlike normal subjects, they did not exhibit any word superiority effect in relation to guess responses; this explains why the overall picture superiority effect appeared to be intact. These results emphasize the need to take into account the subjective states of awareness when analyzing memory impairments in schizophrenia.

  5. Older adults' use of metacognitive knowledge in source monitoring: spared monitoring but impaired control.

    PubMed

    Kuhlmann, Beatrice G; Touron, Dayna R

    2011-03-01

    While episodic memory declines with age, metacognitive monitoring is spared. The current study explored whether older adults can use their preserved metacognitive knowledge to make source guesses in the absence of source memory. Through repetition, words from two sources (italic vs. bold text type) differed in memorability. There were no age differences in monitoring this difference despite an age difference in memory. Older adults used their metacognitive knowledge to make source guesses but showed a deficit in varying their source guessing based on word recognition. Therefore, older adults may not fully benefit from metacognitive knowledge about sources in source monitoring. (c) 2011 APA, all rights reserved.

  6. Simultaneous Retrieval of Temperature, Water Vapor and Ozone Atmospheric Profiles from IASI: Compression, De-noising, First Guess Retrieval and Inversion Algorithms

    NASA Technical Reports Server (NTRS)

    Aires, F.; Rossow, W. B.; Scott, N. A.; Chedin, A.; Hansen, James E. (Technical Monitor)

    2001-01-01

    A fast temperature, water vapor, and ozone atmospheric profile retrieval algorithm is developed for the high-spectral-resolution Infrared Atmospheric Sounding Interferometer (IASI) space-borne instrument. Compression and de-noising of IASI observations are performed using Principal Component Analysis. This preprocessing methodology also allows for fast pattern recognition in a climatological data set to obtain a first guess. A neural network using the first-guess information is then developed to retrieve temperature, water vapor, and ozone atmospheric profiles simultaneously. The performance of the resulting fast and accurate inverse model is evaluated with a large, diversified data set of radiosonde atmospheres, including rare events.
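
The preprocessing chain (PCA compression and de-noising of the spectra, then nearest-neighbour pattern matching in the compressed space to pull a climatological first guess) can be sketched with stand-in random data; the array sizes and the data themselves are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 500 climatological 'spectra' (100 channels), each paired
# with an atmospheric 'profile' (20 levels). Sizes are hypothetical.
spectra = rng.normal(size=(500, 100))
profiles = rng.normal(size=(500, 20))

# PCA via SVD of the centered spectra; keeping k leading components both
# compresses and de-noises the observations.
mean = spectra.mean(axis=0)
_, _, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
k = 10

def compress(x):
    """Project (centered) observations onto the k leading components."""
    return (x - mean) @ Vt[:k].T

library = compress(spectra)            # climatological search space

def first_guess(obs):
    """First-guess profile: the profile paired with the nearest
    climatological spectrum in the compressed space."""
    i = int(np.argmin(np.linalg.norm(library - compress(obs), axis=1)))
    return profiles[i]
```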

  7. Optimizing energy growth as a tool for finding exact coherent structures

    NASA Astrophysics Data System (ADS)

    Olvera, D.; Kerswell, R. R.

    2017-08-01

    We discuss how searching for finite-amplitude disturbances of a given energy that maximize their subsequent energy growth after a certain later time T can be used to probe the phase space around a reference state and ultimately to find other nearby solutions. The procedure relies on the fact that of all the initial disturbances on a constant-energy hypersphere, the optimization procedure will naturally select the one that lies closest to the stable manifold of a nearby solution in phase space if T is large enough. Then, when in its subsequent evolution the optimal disturbance transiently approaches the new solution, a flow state at this point can be used as an initial guess to converge the solution to machine precision. We illustrate this approach in plane Couette flow by rediscovering the spanwise-localized "snake" solutions of Schneider et al. [Phys. Rev. Lett. 104, 104501 (2010), 10.1103/PhysRevLett.104.104501], probing phase space at very low Reynolds numbers (less than 127.7) where the constant-shear solution is believed to be the global attractor and examining how the edge between laminar and turbulent flow evolves when stable stratification eliminates the turbulence. We also show that the steady snake solution smoothly delocalizes as unstable stratification is gradually turned on until it connects (via an intermediary global three-dimensional solution) to two-dimensional Rayleigh-Bénard roll solutions.

  8. Does Incorrect Guessing Impair Fact Learning?

    ERIC Educational Resources Information Center

    Kang, Sean H. K.; Pashler, Harold; Cepeda, Nicholas J.; Rohrer, Doug; Carpenter, Shana K.; Mozer, Michael C.

    2011-01-01

    Taking a test has been shown to produce enhanced retention of the retrieved information. On tests, however, students often encounter questions whose answers they are unsure of. Should they guess anyway, even if they are likely to answer incorrectly? Or do errors become ingrained, impairing subsequent learning of the correct answer? We sought to…

  9. A New Procedure for Detection of Students' Rapid Guessing Responses Using Response Time

    ERIC Educational Resources Information Center

    Guo, Hongwen; Rios, Joseph A.; Haberman, Shelby; Liu, Ou Lydia; Wang, Jing; Paek, Insu

    2016-01-01

    Rapid guessing by unmotivated test takers can negatively affect validity studies and evaluations of teacher and institution performance, making it critical to identify these test takers. The authors propose a new nonparametric method for finding response-time thresholds for flagging item responses that result from rapid-guessing…

  10. Guessing and the Rasch Model

    ERIC Educational Resources Information Center

    Holster, Trevor A.; Lake, J.

    2016-01-01

    Stewart questioned Beglar's use of Rasch analysis of the Vocabulary Size Test (VST) and advocated the use of 3-parameter logistic item response theory (3PLIRT) on the basis that it models a non-zero lower asymptote for items, often called a "guessing" parameter. In support of this theory, Stewart presented fit statistics derived from…

  11. Analyzing Algebraic Thinking Using "Guess My Number" Problems

    ERIC Educational Resources Information Center

    Patton, Barba; De Los Santos, Estella

    2012-01-01

    The purpose of this study was to assess student knowledge of numeric, visual and algebraic representations. A definite gap between arithmetic and algebra has been documented in the research. The researchers' goal was to identify a link between the two. Using four "Guess My Number" problems, seventh and tenth grade students were asked to write…

  12. An Effectiveness Index and Profile for Instructional Media.

    ERIC Educational Resources Information Center

    Bond, Jack H.

    A scale was developed for judging the relative value of various media in teaching children. Posttest scores were partitioned into several components: error, prior knowledge, guessing, and gain from the learning exercise. By estimating the amounts of prior knowledge, guessing, and error, and then subtracting these from the total score, an index of…

  13. The Effect of Testing Condition on Word Guessing in Elementary School Children

    ERIC Educational Resources Information Center

    Mannamaa, Mairi; Kikas, Eve; Raidvee, Aire

    2008-01-01

    Elementary school children's word guessing is studied, and the results from individual and collective testing conditions are compared. The participants are 764 students from the second, third, and fourth grades (ages 8-11, 541 students from mainstream regular classes and 223 students with learning disabilities). About half of these students are…

  14. A Two-Parameter Latent Trait Model. Methodology Project.

    ERIC Educational Resources Information Center

    Choppin, Bruce

    On well-constructed multiple-choice tests, the most serious threat to measurement is not variation in item discrimination, but the guessing behavior that may be adopted by some students. Ways of ameliorating the effects of guessing are discussed, especially for problems in latent trait models. A new item response model, including an item parameter…

  15. A simple mechanical system mimicking phase transitions in a one-dimensional medium

    NASA Astrophysics Data System (ADS)

    Charru, François

    1997-11-01

    We study a simple mechanical oscillator the bifurcations of which illustrate first- and second-order phase transitions. The phase diagram of the oscillator exhibits a coexistence curve. This curve ends at a critical point, where three critical exponents can be defined. A metronome may be used to illustrate the main results. We then consider a linear array of such oscillators with elastic coupling, which is governed by the damped Klein-Gordon equation. The classical solutions of this equation, such as fronts propagating in an unstable or in a metastable state, can be guessed at and discussed from the point of view of a mechanical model.

  16. Contaminant point source localization error estimates as functions of data quantity and model quality

    NASA Astrophysics Data System (ADS)

    Hansen, Scott K.; Vesselinov, Velimir V.

    2016-10-01

    We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
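
    The multiple-initial-guess strategy described above can be sketched in a few lines. The misfit function below is a hypothetical one-dimensional double-well stand-in for the authors' space-time misfit surface, and the gradient-descent solver stands in for their optimizer; a single guess started in the wrong basin stalls in a local minimum, while a spread of guesses recovers the global one:

    ```python
    import numpy as np

    def misfit(x):
        # Toy double-well misfit: local minimum near x = +0.96, global near x = -1.04
        return (x**2 - 1.0)**2 + 0.3 * x

    def grad(x):
        return 4.0 * x * (x**2 - 1.0) + 0.3

    def descend(x0, lr=0.01, steps=2000):
        # Plain gradient descent: converges to whichever basin x0 starts in
        x = x0
        for _ in range(steps):
            x -= lr * grad(x)
        return x

    single = descend(0.8)                  # stalls in the shallow right-hand basin
    guesses = np.linspace(-2.0, 2.0, 9)    # spread of initial guesses over the domain
    best = min((descend(g) for g in guesses), key=misfit)
    ```

    A lone start reports the poorer local solution; keeping the best of many converged candidates, as in the paper's space-time localization experiments, recovers the deeper minimum.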

  17. Hybrid Differential Dynamic Programming with Stochastic Search

    NASA Technical Reports Server (NTRS)

    Aziz, Jonathan; Parker, Jeffrey; Englander, Jacob A.

    2016-01-01

    Differential dynamic programming (DDP) has been demonstrated as a viable approach to low-thrust trajectory optimization, most notably with the recent success of NASA's Dawn mission. The Dawn trajectory was designed with the DDP-based Static/Dynamic Optimal Control algorithm used in the Mystic software. Another recently developed method, Hybrid Differential Dynamic Programming (HDDP), is a variant of the standard DDP formulation that leverages both first-order and second-order state transition matrices in addition to nonlinear programming (NLP) techniques. Areas of improvement over standard DDP include constraint handling, convergence properties, continuous dynamics, and multi-phase capability. DDP is a gradient-based method and will converge to a solution near an initial guess. In this study, monotonic basin hopping (MBH) is employed as a stochastic search method to overcome this limitation by augmenting the HDDP algorithm for a wider search of the solution space.
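
    A minimal sketch of a monotonic basin hopping outer loop is below. The one-dimensional objective and the gradient-descent inner solver are toy stand-ins (not the HDDP trajectory cost or its solver): MBH repeatedly perturbs the incumbent solution, re-converges locally, and accepts a trial only if it improves on the best found so far:

    ```python
    import math
    import random

    def objective(x):
        # Toy multimodal landscape standing in for the trajectory cost
        return x * x + 10.0 * math.sin(3.0 * x)

    def local_solve(x, lr=1e-3, steps=5000):
        # Stand-in for the gradient-based inner solver: converges near its start
        for _ in range(steps):
            x -= lr * (2.0 * x + 30.0 * math.cos(3.0 * x))
        return x

    def basin_hop(x0, hops=50, step=2.0, seed=0):
        rng = random.Random(seed)
        best = local_solve(x0)
        for _ in range(hops):
            trial = local_solve(best + rng.uniform(-step, step))
            if objective(trial) < objective(best):  # monotonic: accept improvements only
                best = trial
        return best

    stuck = local_solve(3.0)   # inner solver alone stays in a poor local basin
    found = basin_hop(3.0)     # MBH hops between basins toward a much better solution
    ```

    The perturbation size must be large enough to clear basin boundaries; a too-small `step` leaves MBH trapped exactly like the inner solver alone.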

  18. Mathematics in the Making: Mapping Verbal Discourse in Polya's "Let Us Teach Guessing" Lesson

    ERIC Educational Resources Information Center

    Truxaw, Mary P.; DeFranco, Thomas C.

    2007-01-01

    This paper describes a detailed analysis of verbal discourse within an exemplary mathematics lesson--that is, George Polya teaching in the Mathematics Association of America [MAA] video classic, "Let Us Teach Guessing" (1966). The results of the analysis reveal an inductive model of teaching that represents recursive cycles rather than linear…

  19. Guess Who's Coming to Dinner: The Importance of Multiculturalism in the Aftermath of Hurricane Katrina

    ERIC Educational Resources Information Center

    Moore, Alicia L.

    2007-01-01

    The importance of multiculturalism in the aftermath of Hurricane Katrina can be illustrated through a comparative view of the 1967 controversial, seminal, and Academy Award winning film, "Guess Who's Coming to Dinner". In the film, a multicultural cast starred in a groundbreaking tale of interracial marriage--then still illegal in some United…

  20. Young Children's Reasoning in Games of Nonsocial and Social Logic: "Tic Tac Toe" and a "Guessing Game".

    ERIC Educational Resources Information Center

    Fernie, David E.; DeVries, Rheta

    This research study tests Selman's (1980) hypothesis that different games pull players toward particular kinds of reasoning through a developmental comparison of children's reasoning in two games, Tic Tac Toe and the Guessing Game. The present study focuses on two basic questions and their educational implications: (1) What differences and…

  1. Identifying Measurement Disturbance Effects Using Rasch Item Fit Statistics and the Logit Residual Index.

    ERIC Educational Resources Information Center

    Mount, Robert E.; Schumacker, Randall E.

    1998-01-01

    A Monte Carlo study was conducted using simulated dichotomous data to determine the effects of guessing on Rasch item fit statistics and the Logit Residual Index. Results indicate that no significant differences were found between the mean Rasch item fit statistics for each distribution type as the probability of guessing the correct answer…

  2. A Response to Holster and Lake Regarding Guessing and the Rasch Model

    ERIC Educational Resources Information Center

    Stewart, Jeffrey; McLean, Stuart; Kramer, Brandon

    2017-01-01

    Stewart questioned vocabulary size estimation methods proposed by Beglar and Nation for the Vocabulary Size Test, further arguing Rasch mean square (MSQ) fit statistics cannot determine the proportion of random guesses contained in the average learner's raw score, because the average value will be near 1 by design. He illustrated this by…

  3. "A Spinach with a V on It": What 3-Year-Olds See in Standard and Enhanced Blissymbols.

    ERIC Educational Resources Information Center

    Raghavendra, Parimala; Fristoe, Macalyne

    1990-01-01

    Standard or enhanced Blissymbols, designed to represent familiar actions, attributes, and objects, were shown to 20 three-year-olds, who guessed their meaning. The number of their guesses that referred to the enhancements was twice as great as the number that referred to the standard Blissymbol base. (Author/JDD)

  4. Influences of Source-Item Contingency and Schematic Knowledge on Source Monitoring: Tests of the Probability-Matching Account

    ERIC Educational Resources Information Center

    Bayen, Ute J.; Kuhlmann, Beatrice G.

    2011-01-01

    The authors investigated conditions under which judgments in source-monitoring tasks are influenced by prior schematic knowledge. According to a probability-matching account of source guessing (Spaniol & Bayen, 2002), when people do not remember the source of information, they match source-guessing probabilities to the perceived contingency…

  5. Network Authentication Protocol Studies

    DTIC Science & Technology

    2009-04-01

    the 37th Annual Hawaii International Conference on System Sciences (HICSS’04), 2004. [86] R. Corin, S. Malladi, J. Alves-Foss, and S. Etalle. Guess...requirement work products Corin03a [Corin03a] R. Corin, S. Malladi, J. Alves-Foss, and S. Etalle. Guess what? Here is a new tool that finds some new guessing...Cryptosystem… 7; Figure 3.1: A Bundle… 43; Figure 5.1: Penetrator strands combining a) F, R strands

  6. On the adequacy of identified Cole-Cole models

    NASA Astrophysics Data System (ADS)

    Xiang, Jianping; Cheng, Daizhan; Schlindwein, F. S.; Jones, N. B.

    2003-06-01

    The Cole-Cole model has been widely used to interpret electrical geophysical data. Normally an iterative computer program is used to invert the frequency-domain complex impedance data, and simple error estimation is obtained from the squared difference of the measured (field) and calculated values over the full frequency range. Recently a new direct inversion algorithm was proposed for the "optimal" estimation of the Cole-Cole parameters, which differs from existing inversion algorithms in that the estimated parameters are direct solutions of a set of equations without the need for an initial guess for initialisation. This paper first briefly investigates the advantages and disadvantages of the new algorithm compared to the standard Levenberg-Marquardt "ridge regression" algorithm. Then, and more importantly, we address the adequacy of the models resulting from both the "ridge regression" and the new algorithm, using two different statistical tests, and we give objective statistical criteria for acceptance or rejection of the estimated models. The first is the standard χ2 technique. The second is a parameter-accuracy-based test that uses a joint multi-normal distribution. Numerical results that illustrate the performance of both testing methods are given. The main goals of this paper are (i) to provide the source code for the new "direct inversion" algorithm in Matlab and (ii) to introduce and demonstrate two methods to determine the reliability of a set of data before data processing, i.e., to consider the adequacy of the resulting Cole-Cole model.
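
    The first of the two adequacy tests, the standard χ2 check, can be sketched as follows. The residual vectors and the parameter count of 4 (the Cole-Cole parameters) are illustrative, and the acceptance window is a simple normal approximation to the χ2 distribution rather than its exact quantiles:

    ```python
    import math

    def chi2_adequate(residuals, n_params, z=2.0):
        """Accept the model if the chi-squared statistic of the noise-normalized
        residuals lies within ~z standard deviations of its expected value (the
        degrees of freedom). The check is two-sided: a chi2 far below the dof is
        also suspicious (over-fitting or overestimated noise)."""
        dof = len(residuals) - n_params
        chi2 = sum(r * r for r in residuals)
        return abs(chi2 - dof) < z * math.sqrt(2.0 * dof)

    # A model fitting within noise: normalized residuals of order one
    good = [(-1.0) ** i for i in range(40)]   # chi2 = 40 vs dof = 36 -> adequate
    # A model with systematic misfit: residuals of three sigma throughout
    bad = [3.0] * 40                          # chi2 = 360 -> rejected
    ```

    For a production check one would use the exact χ2 quantiles instead of the normal approximation, but the accept/reject logic is the same.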

  7. State and parameter estimation of the heat shock response system using Kalman and particle filters.

    PubMed

    Liu, Xin; Niranjan, Mahesan

    2012-06-01

    Traditional models of systems biology describe dynamic biological phenomena as solutions to ordinary differential equations, which, when parameters in them are set to correct values, faithfully mimic observations. Often parameter values are tweaked by hand until desired results are achieved, or computed from biochemical experiments carried out in vitro. Of interest in this article is the use of probabilistic modelling tools with which parameters and unobserved variables, modelled as hidden states, can be estimated from limited noisy observations of parts of a dynamical system. Here we focus on sequential filtering methods and take a detailed look at the capabilities of three members of this family: (i) the extended Kalman filter (EKF), (ii) the unscented Kalman filter (UKF) and (iii) the particle filter, in estimating parameters and unobserved states of cellular response to sudden temperature elevation of the bacterium Escherichia coli. While previous literature has studied this system with the EKF, we show that parameter estimation is only possible with this method when the initial guesses are sufficiently close to the true values. The same turns out to be true for the UKF. In this thorough empirical exploration, we show that the non-parametric method of particle filtering is able to reliably estimate parameters and states, converging from initial distributions relatively far away from the underlying true values. Software implementation of the three filters on this problem can be freely downloaded from http://users.ecs.soton.ac.uk/mn/HeatShock
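
    A bootstrap particle filter for joint state and parameter estimation can be sketched as below. The scalar linear-Gaussian system, the noise levels, and the parameter-jitter trick are all illustrative choices (not the heat shock model of the paper); the parameter particles start from a deliberately vague prior, i.e. an initial distribution far from concentrated on the true value:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulate a toy system: x_t = a * x_{t-1} + process noise, y_t = x_t + obs noise
    a_true, T, q, r = 0.8, 200, 0.5, 0.5
    x, ys = 0.0, []
    for _ in range(T):
        x = a_true * x + rng.normal(0.0, q)
        ys.append(x + rng.normal(0.0, r))

    # Bootstrap particle filter with the unknown parameter a augmented into the state
    N = 2000
    a = rng.uniform(0.0, 1.0, N)             # vague prior over the parameter
    xs = np.zeros(N)
    for y in ys:
        a = a + rng.normal(0.0, 0.01, N)     # small artificial jitter keeps a-diversity
        xs = a * xs + rng.normal(0.0, q, N)  # propagate state particles
        w = np.exp(-0.5 * ((y - xs) / r) ** 2) + 1e-300  # Gaussian likelihood weights
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)     # multinomial resampling
        a, xs = a[idx], xs[idx]

    a_est = float(a.mean())                  # posterior mean of the parameter
    ```

    With enough observations the parameter particles concentrate near the true value even though none of the initial guesses was close, which is the qualitative behaviour the paper reports for the particle filter.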

  8. Time-periodic solutions of the Benjamin-Ono equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ambrose, D.M.; Wilkening, Jon

    2008-04-01

    We present a spectrally accurate numerical method for finding non-trivial time-periodic solutions of non-linear partial differential equations. The method is based on minimizing a functional (of the initial condition and the period) that is positive unless the solution is periodic, in which case it is zero. We solve an adjoint PDE to compute the gradient of this functional with respect to the initial condition. We include additional terms in the functional to specify the free parameters, which, in the case of the Benjamin-Ono equation, are the mean, a spatial phase, a temporal phase and the real part of one of the Fourier modes at t = 0. We use our method to study global paths of non-trivial time-periodic solutions connecting stationary and traveling waves of the Benjamin-Ono equation. As a starting guess for each path, we compute periodic solutions of the linearized problem by solving an infinite dimensional eigenvalue problem in closed form. We then use our numerical method to continue these solutions beyond the realm of linear theory until another traveling wave is reached (or until the solution blows up). By experimentation with data fitting, we identify the analytical form of the solutions on the path connecting the one-hump stationary solution to the two-hump traveling wave. We then derive exact formulas for these solutions by explicitly solving the system of ODEs governing the evolution of solitons using the ansatz suggested by the numerical simulations.

  9. The Development and Validation of the Game User Experience Satisfaction Scale (GUESS).

    PubMed

    Phan, Mikki H; Keebler, Joseph R; Chaparro, Barbara S

    2016-12-01

    The aim of this study was to develop and psychometrically validate a new instrument that comprehensively measures video game satisfaction based on key factors. Playtesting is often conducted in the video game industry to help game developers build better games by providing insight into the players' attitudes and preferences. However, quality feedback is difficult to obtain from playtesting sessions without a quality gaming assessment tool. There is a need for a psychometrically validated and comprehensive gaming scale that is appropriate for playtesting and game evaluation purposes. The process of developing and validating this new scale followed current best practices of scale development and validation. As a result, a mixed-method design that consisted of item pool generation, expert review, questionnaire pilot study, exploratory factor analysis (N = 629), and confirmatory factor analysis (N = 729) was implemented. A new instrument measuring video game satisfaction, called the Game User Experience Satisfaction Scale (GUESS), with nine subscales emerged. The GUESS was demonstrated to have content validity, internal consistency, and convergent and discriminant validity. The GUESS was developed and validated based on the assessments of over 450 unique video game titles across many popular genres. Thus, it can be applied across many types of video games in the industry both as a way to assess what aspects of a game contribute to user satisfaction and as a tool to aid in debriefing users on their gaming experience. The GUESS can be administered to evaluate user satisfaction of different types of video games by a variety of users. © 2016, Human Factors and Ergonomics Society.

  10. Ultrasonic prediction of term birth weight in Hispanic women. Accuracy in an outpatient clinic.

    PubMed

    Nahum, Gerard G; Pham, Krystle Q; McHugh, John P

    2003-01-01

    This study investigated the accuracy of ultrasonic fetal biometric algorithms for estimating term fetal weight. Ultrasonographic fetal biometric assessments were made in 74 Hispanic women who delivered at 37-42 weeks of gestation. Measurements were taken of the fetal biparietal diameter, head circumference, abdominal circumference, and femur length. Twenty-seven standard fetal biometric algorithms were assessed for their accuracy in predicting fetal weight, and the results were compared to those obtained by merely guessing the mean term birth weight in each case. The correlation between ultrasonically predicted and actual birth weights ranged from 0.52 to 0.79. The different ultrasonic algorithms estimated fetal weight to within +/- 8.6-15.0% (+/- 295-520 g) of actual birth weight as compared with +/- 13.6% (+/- 449 g) for guessing the mean birth weight in each case (mean +/- SD). The mean absolute prediction errors for 17 of the ultrasonic equations (63%) were superior to those obtained by guessing the mean birth weight by 3.2-5.0% (96-154 g) (P < .05). Fourteen algorithms (52%) were more accurate for predicting fetal weight to within +/- 15%, and 20 algorithms (74%) were more accurate for predicting fetal weight to within +/- 10% of actual birth weight than simply guessing the mean birth weight (P < .05). Ten ultrasonic equations (37%) showed significant utility for predicting fetal weight > 4,000 g (likelihood ratio > 5.0). Term fetal weight predictions using the majority of sonographic fetal biometric equations are more accurate, by up to 154 g and 5%, than simply guessing the population-specific mean birth weight.
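
    The paper's baseline, guessing the population mean for every fetus, amounts to the comparison below. The numbers here are hypothetical (a normal birth-weight distribution and a synthetic "ultrasound" estimator), not the study's data:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical term birth weights (g) and an imperfect ultrasound-style estimate
    actual = rng.normal(3400.0, 450.0, 500)
    predicted = actual + rng.normal(0.0, 300.0, 500)

    def mean_abs_pct_error(pred, truth):
        # Mean absolute prediction error as a percentage of each actual weight
        return float(np.mean(np.abs(pred - truth) / truth) * 100.0)

    baseline = mean_abs_pct_error(np.full_like(actual, actual.mean()), actual)
    ultrasound = mean_abs_pct_error(predicted, actual)
    ```

    Under these assumptions the estimator beats the mean-guessing baseline by a few percentage points, mirroring the paper's finding that most biometric equations outperform guessing the mean.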

  11. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-02-01

    To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM LPJ-GUESS to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km), sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this end, we applied the recently developed method GAPPARD (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by a factor of approximately 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations regarding extreme climatic events. It allowed us, for the first time, to obtain area-wide, detailed, high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.

  12. Game of Words: Prototype of a Digital Game Focusing on Oral Production (and Comprehension) through Asynchronous Interaction

    ERIC Educational Resources Information Center

    Loiseau, Mathieu; Hallal, Racha; Ballot, Pauline; Gazidedja, Ada

    2016-01-01

    In this paper, we present a learning game designed according to a strategy focusing on favouring the learners' "playful attitude". The game's modalities pertain to what we might call "guessing games". The chosen variant of such guessing games exists both as learning games and as Commercial Off The Shelf (COTS) board games. We explain in…

  13. Detection of Differential Item Functioning with Nonlinear Regression: A Non-IRT Approach Accounting for Guessing

    ERIC Educational Resources Information Center

    Drabinová, Adéla; Martinková, Patrícia

    2017-01-01

    In this article we present a general approach not relying on item response theory models (non-IRT) to detect differential item functioning (DIF) in dichotomous items with presence of guessing. The proposed nonlinear regression (NLR) procedure for DIF detection is an extension of method based on logistic regression. As a non-IRT approach, NLR can…

  14. The Incidence of Clueing in Multiple Choice Testbank Questions in Accounting: Some Evidence from Australia

    ERIC Educational Resources Information Center

    Ibbett, Nicole L.; Wheldon, Brett J.

    2016-01-01

    In 2014 Central Queensland University (CQU) in Australia banned the use of multiple choice questions (MCQs) as an assessment tool. One of the reasons given for this decision was that MCQs provide an opportunity for students to "pass" by merely guessing their answers. The mathematical likelihood of a student passing by guessing alone can…

  15. Improving Preschoolers' Recognition Memory for Faces with Orienting Information.

    ERIC Educational Resources Information Center

    Montepare, Joann M.

    To determine whether preschool children's memory for unfamiliar faces could be facilitated by giving them orienting information about faces, 4- and 5-year-old subjects were told that they were going to play a guessing game in which they would be looking at faces and guessing which ones they had seen before. In study 1, 6 boys and 6 girls within…

  16. An Alternative to the 3PL: Using Asymmetric Item Characteristic Curves to Address Guessing Effects

    ERIC Educational Resources Information Center

    Lee, Sora; Bolt, Daniel M.

    2018-01-01

    Both the statistical and interpretational shortcomings of the three-parameter logistic (3PL) model in accommodating guessing effects on multiple-choice items are well documented. We consider the use of a residual heteroscedasticity (RH) model as an alternative, and compare its performance to the 3PL with real test data sets and through simulation…

  17. An Examination of At-Risk College Freshmen's Expository Literacy Skills Using Interactive Online Writing Activities

    ERIC Educational Resources Information Center

    Mongillo, Geraldine; Wilder, Hilary

    2012-01-01

    This qualitative study focused on at-risk college freshmen's ability to read and write expository text using game-like, online expository writing activities. These activities required participants to write descriptions of a target object so that peers could guess what the object was, after which they were given the results of those guesses as…

  18. Grade of Membership Response Time Model for Detecting Guessing Behaviors

    ERIC Educational Resources Information Center

    Pokropek, Artur

    2016-01-01

    A response model that is able to detect guessing behaviors and produce unbiased estimates in low-stake conditions using timing information is proposed. The model is a special case of the grade of membership model in which responses are modeled as partial members of a class that is affected by motivation and a class that responds only according to…

  19. A monogamy-of-entanglement game with applications to device-independent quantum cryptography

    NASA Astrophysics Data System (ADS)

    Tomamichel, Marco; Fehr, Serge; Kaniewski, Jędrzej; Wehner, Stephanie

    2013-10-01

    We consider a game in which two separate laboratories collaborate to prepare a quantum system and are then asked to guess the outcome of a measurement performed by a third party in a random basis on that system. Intuitively, by the uncertainty principle and the monogamy of entanglement, the probability that both players simultaneously succeed in guessing the outcome correctly is bounded. We are interested in the question of how the success probability scales when many such games are performed in parallel. We show that any strategy that maximizes the probability to win every game individually is also optimal for the parallel repetition of the game. Our result implies that the optimal guessing probability can be achieved without the use of entanglement. We explore several applications of this result. Firstly, we show that it implies security for standard BB84 quantum key distribution when the receiving party uses fully untrusted measurement devices, i.e. we show that BB84 is one-sided device independent. Secondly, we show how our result can be used to prove security of a one-round position-verification scheme. Finally, we generalize a well-known uncertainty relation for the guessing probability to quantum side information.

  20. A Robust and Efficient Method for Steady State Patterns in Reaction-Diffusion Systems

    PubMed Central

    Lo, Wing-Cheong; Chen, Long; Wang, Ming; Nie, Qing

    2012-01-01

    An inhomogeneous steady state pattern of nonlinear reaction-diffusion equations with no-flux boundary conditions is usually computed by solving the corresponding time-dependent reaction-diffusion equations using temporal schemes. Nonlinear solvers (e.g., Newton’s method) take less CPU time in direct computation for the steady state; however, their convergence is sensitive to the initial guess, often leading to divergence or convergence to spatially homogeneous solution. Systematically numerical exploration of spatial patterns of reaction-diffusion equations under different parameter regimes requires that the numerical method be efficient and robust to initial condition or initial guess, with better likelihood of convergence to an inhomogeneous pattern. Here, a new approach that combines the advantages of temporal schemes in robustness and Newton’s method in fast convergence in solving steady states of reaction-diffusion equations is proposed. In particular, an adaptive implicit Euler with inexact solver (AIIE) method is found to be much more efficient than temporal schemes and more robust in convergence than typical nonlinear solvers (e.g., Newton’s method) in finding the inhomogeneous pattern. Application of this new approach to two reaction-diffusion equations in one, two, and three spatial dimensions, along with direct comparisons to several other existing methods, demonstrates that AIIE is a more desirable method for searching inhomogeneous spatial patterns of reaction-diffusion equations in a large parameter space. PMID:22773849
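
    The sensitivity of Newton's method to the initial guess, and the robustness gained by leading with a few temporal (pseudo-time) steps, can be seen in a one-variable toy problem. Solving arctan(u) = 0 stands in for the steady-state system here (it is not the reaction-diffusion model of the paper), and the plain damped pseudo-time loop stands in for the adaptive implicit scheme:

    ```python
    import math

    def f(u):
        return math.atan(u)

    def df(u):
        return 1.0 / (1.0 + u * u)

    def newton(u, iters=50):
        # Returns None if the iteration blows up instead of converging
        for _ in range(iters):
            u -= f(u) / df(u)
            if abs(u) > 1e6:
                return None
        return u

    diverged = newton(3.0)   # plain Newton overshoots for |u0| greater than ~1.39

    # A few explicit pseudo-time steps du/dt = -f(u) first, then Newton to finish
    u = 3.0
    for _ in range(20):
        u -= 0.5 * f(u)
    root = newton(u)
    ```

    The pseudo-time phase is slow but shrinks the iterate into Newton's basin of attraction; Newton then supplies the fast terminal convergence, which is the division of labour the AIIE approach exploits.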

  1. Evaluating the coupled vegetation-fire model, LPJ-GUESS-SPITFIRE, against observed tropical forest biomass

    NASA Astrophysics Data System (ADS)

    Spessa, Allan; Forrest, Matthew; Werner, Christian; Steinkamp, Joerg; Hickler, Thomas

    2013-04-01

    Wildfire is a fundamental Earth System process. It is the most important disturbance worldwide in terms of area and variety of biomes affected; a major mechanism by which carbon is transferred from the land to the atmosphere (2-4 Pg per annum, equivalent to 20-30% of global fossil-fuel emissions over the last decade); and globally a significant source of particulate aerosols and trace greenhouse gases. Fire is also potentially important as a feedback in the climate system. If climate change favours more intense fire regimes, this would result in a net transfer of carbon from ecosystems to the atmosphere, as well as higher emissions, and, under certain circumstances, increased tropospheric ozone production, all contributing to positive climate-land surface feedbacks. Quantitative analysis of fire-vegetation-climate interactions has been held back until recently by a lack of consistent global data sets on fire, and by the underdeveloped state of dynamic vegetation-fire modelling. Dynamic vegetation-fire modelling is an essential part of our forecasting armoury for examining the possible impacts of climate, fire regimes and land-use on ecosystems and emissions from biomass burning beyond the observation period, as part of future climate or paleo-climate studies. LPJ-GUESS is a process-based model of vegetation dynamics designed for regional to global applications. It combines features of the Lund-Potsdam-Jena Dynamic Global Vegetation Model (LPJ-DGVM) with those of the General Ecosystem Simulator (GUESS) in a single, flexible modelling framework. The models have identical representations of eco-physiological and biogeochemical processes, including the hydrological cycle. However, they differ in the detail with which vegetation dynamics and canopy structure are simulated. 
Simplified, computationally efficient representations are used in the LPJ-DGVM, while LPJ-GUESS employs a gap-model approach, which better captures ecological succession and hence ecosystem changes due to disturbances such as fire. SPITFIRE (SPread and InTensity of FIRe and Emissions) mechanistically simulates the number of fires, area burnt, fire intensity, crown fires, fire-induced plant mortality, and emissions of carbon, trace gases and aerosols from biomass burning. Originally developed as an embedded model within LPJ-DGVM, SPITFIRE has since been coupled to LPJ-GUESS. However, neither LPJ-DGVM-SPITFIRE nor LPJ-GUESS-SPITFIRE has been fully benchmarked, especially in terms of how well each model simulates vegetation patterns and biomass in areas where fire is known to be important. This information is crucial if we are to have confidence in the models in forecasting fire, emissions from biomass burning and fire-climate impacts on ecosystems. Here we report on the benchmarking of the LPJ-GUESS-SPITFIRE model. We benchmarked LPJ-GUESS-SPITFIRE driven by a combination of daily reanalysis climate data (Sheffield 2012), monthly GFEDv3 burnt area data (1997-2009) (van der Werf et al. 2010) and long-term annual fire statistics (1901 to 2000) (Mouillot and Field 2005) against new Lidar-based biomass data for tropical forests and savannas (Saatchi et al. 2011; Baccini et al., 2012). Our new work has focused on revising the way GUESS simulates tree allometry, light penetration through the tree canopy and sapling recruitment, and the way GUESS-SPITFIRE simulates fire-induced mortality, all based on recent literature, as well as a more explicit accounting of land cover change (JRC's GLC 2009). We present how these combined changes result in a much improved simulation of tree carbon across the tropics, including the Americas, Africa, Asia and Australia. Our results are also compared with more empirically based approaches to calculating emissions from biomass burning. 
We discuss our findings in terms of improved forecasting of fire, emissions from biomass burning and fire-climate impacts on ecosystems.

  2. Rapid space trajectory generation using a Fourier series shape-based approach

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan

With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, the fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides larger cumulative momentum change in comparison with conventional chemical propulsion, whereas the latter results in almost ballistic trajectories with a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. Therefore, the goal of the thesis is to address the above problem by developing a methodology to simplify and facilitate the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution will not only provide mission designers with a better understanding of the problem and solution but also serve as a good initial guess for high-fidelity optimal control solvers and increase their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, we seek a robust technique for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, which significantly reduces the number of design variables. Emphasis is placed on simplifying the equations of motion as far as possible and on avoiding approximation of the controls. These facts contribute to speeding up the solution-finding procedure. 
Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
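The core idea of the shape-based representation can be sketched in isolation: a truncated Fourier series gives a smooth trajectory shape whose time derivatives are available analytically, so only a handful of coefficients remain as design variables. The sketch below (function name and the one-dimensional shape variable are illustrative choices, not the thesis's actual state representation) evaluates such a shape and its analytic rate:

```python
import numpy as np

def fourier_shape(coeffs_a, coeffs_b, a0, t, T):
    """Evaluate a truncated Fourier-series shape and its time derivative:

        r(t) = a0 + sum_n [ a_n cos(2 pi n t / T) + b_n sin(2 pi n t / T) ]

    The derivative comes for free analytically, which is the point of
    shape-based methods: no numerical differentiation of the trajectory.
    """
    t = np.asarray(t, dtype=float)
    r = np.full_like(t, a0)
    rdot = np.zeros_like(t)
    for n, (an, bn) in enumerate(zip(coeffs_a, coeffs_b), start=1):
        w = 2.0 * np.pi * n / T
        r += an * np.cos(w * t) + bn * np.sin(w * t)
        rdot += -an * w * np.sin(w * t) + bn * w * np.cos(w * t)
    return r, rdot
```

In a shape-based workflow the handful of `(a_n, b_n)` coefficients would be the design variables of the outer optimization, and the analytic derivatives feed the dynamics residuals.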

  3. Development of a Response Planner using the UCT Algorithm for Cyber Defense

    DTIC Science & Technology

    2013-03-01

[Table residue from the original report: a list of KDD-Cup-style intrusion categories (guess passwd r2l, imap r2l, ipsweep probe, land dos, loadmodule u2r, multihop r2l, neptune dos, nmap probe, perl u2r, phf r2l, pod dos, portsweep probe, ftp write, rootkit, buffer overflow, back) with per-class instance counts and a confusion matrix; the tabular layout is not recoverable from the extracted text.]

  4. Efficient and Accurate Optimal Linear Phase FIR Filter Design Using Opposition-Based Harmony Search Algorithm

    PubMed Central

    Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.

    2013-01-01

In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and an opposition-based approach is applied. During initialization, a randomly generated population of solutions is created, their opposite solutions are also evaluated, and the fitter of each pair is retained as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
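The opposition-based initialization step described above is easy to isolate: for each random candidate, also evaluate its "opposite" point reflected through the bounds, and keep the fitter of the pair. A minimal sketch (the function name and the generic fitness interface are my own; the paper applies this inside harmony search for FIR coefficient vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

def opposition_init(fitness, lo, hi, pop_size, dim):
    """Opposition-based initialization: draw a random population in [lo, hi],
    form the opposite population lo + hi - x, and keep the fitter member of
    each (candidate, opposite) pair as the a priori guess."""
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    opp = lo + hi - pop                      # opposite solutions
    f_pop = np.apply_along_axis(fitness, 1, pop)
    f_opp = np.apply_along_axis(fitness, 1, opp)
    # keep the lower-fitness (better) member of each pair
    return np.where((f_pop <= f_opp)[:, None], pop, opp)
```

The same pair-wise "evaluate the opposite and keep the fitter" move is what the abstract's generation-jumping step reapplies during the search.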

  5. Efficient and accurate optimal linear phase FIR filter design using opposition-based harmony search algorithm.

    PubMed

    Saha, S K; Dutta, R; Choudhury, R; Kar, R; Mandal, D; Ghoshal, S P

    2013-01-01

In this paper, opposition-based harmony search has been applied for the optimal design of linear phase FIR filters. RGA, PSO, and DE have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and an opposition-based approach is applied. During initialization, a randomly generated population of solutions is created, their opposite solutions are also evaluated, and the fitter of each pair is retained as the a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm results in the balancing of exploration and exploitation of the search space. Low pass, high pass, band pass, and band stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparative optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems.

  6. Minimal gain marching schemes: searching for unstable steady-states with unsteady solvers

    NASA Astrophysics Data System (ADS)

    de S. Teixeira, Renan; S. de B. Alves, Leonardo

    2017-12-01

    Reference solutions are important in several applications. They are used as base states in linear stability analyses as well as initial conditions and reference states for sponge zones in numerical simulations, just to name a few examples. Their accuracy is also paramount in both fields, leading to more reliable analyses and efficient simulations, respectively. Hence, steady-states usually make the best reference solutions. Unfortunately, standard marching schemes utilized for accurate unsteady simulations almost never reach steady-states of unstable flows. Steady governing equations could be solved instead, by employing Newton-type methods often coupled with continuation techniques. However, such iterative approaches do require large computational resources and very good initial guesses to converge. These difficulties motivated the development of a technique known as selective frequency damping (SFD) (Åkervik et al. in Phys Fluids 18(6):068102, 2006). It adds a source term to the unsteady governing equations that filters out the unstable frequencies, allowing a steady-state to be reached. This approach does not require a good initial condition and works well for self-excited flows, where a single nonzero excitation frequency is selected by either absolute or global instability mechanisms. On the other hand, it seems unable to damp stationary disturbances. Furthermore, flows with a broad unstable frequency spectrum might require the use of multiple filters, which delays convergence significantly. Both scenarios appear in convectively, absolutely or globally unstable flows. An alternative approach is proposed in the present paper. It modifies the coefficients of a marching scheme in such a way that makes the absolute value of its linear gain smaller than one within the required unstable frequency spectra, allowing the respective disturbance amplitudes to decay given enough time. These ideas are applied here to implicit multi-step schemes. 
A few chosen test cases show that these schemes enable convergence toward solutions that are unstable to stationary and oscillatory disturbances, with either a single or multiple frequency content. Finally, comparisons with SFD are also performed, showing a significant reduction in computational cost for complex flows by using the implicit multi-step MGM schemes.

  7. Numerical solutions of Navier-Stokes equations for a Butler wing

    NASA Technical Reports Server (NTRS)

    Abolhassani, J. S.; Tiwari, S. N.

    1985-01-01

The flow field is simulated on the surface of a given delta wing (Butler wing) at zero incidence in a uniform stream. The simulation is done by integrating a set of flow field equations. This set of equations governs the unsteady, viscous, compressible, heat conducting flow of an ideal gas. The equations are written in curvilinear coordinates so that the wing surface is represented accurately. These equations are solved by the finite difference method, and results obtained for high-speed freestream conditions are compared with theoretical and experimental results. In this study, the Navier-Stokes equations are solved numerically. These equations are unsteady, compressible, viscous, and three-dimensional, without neglecting any terms. The time dependency of the governing equations allows the solution to progress naturally from an arbitrary initial guess to an asymptotic steady state, if one exists. The equations are transformed from physical coordinates to computational coordinates, allowing the solution of the governing equations in a rectangular parallelepiped domain. The equations are solved by the MacCormack time-split technique, which is vectorized and programmed to run on the CDC VPS 32 computer.
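The "march in pseudo-time until a steady state emerges" idea in the abstract can be shown on a much smaller problem than the Navier-Stokes equations. The sketch below (my own toy: a 1D diffusion equation with a source, explicit time stepping) marches until the update norm stalls; the steady state reached is independent of the initial guess, which is exactly the property the abstract relies on:

```python
import numpy as np

def march_to_steady(u0, dx=0.05, nu=1.0, tol=1e-8, max_steps=200000):
    """Explicit pseudo-time marching of du/dt = nu*u_xx + 1 with u = 0 at
    both ends; iterate until the per-step update drops below tol, i.e. an
    asymptotic steady state has been reached."""
    u = u0.copy()
    dt = 0.4 * dx**2 / nu                     # stable explicit time step
    for _ in range(max_steps):
        unew = u.copy()
        unew[1:-1] = u[1:-1] + dt * (
            nu * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2 + 1.0)
        if np.max(np.abs(unew - u)) < tol:
            return unew
        u = unew
    return u
```

The analytic steady state here is u(x) = x(1 - x)/(2*nu), and two different initial guesses march to the same answer.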

  8. Automated Generation of Finite-Element Meshes for Aircraft Conceptual Design

    NASA Technical Reports Server (NTRS)

    Li, Wu; Robinson, Jay

    2016-01-01

    This paper presents a novel approach for automated generation of fully connected finite-element meshes for all internal structural components and skins of a given wing-body geometry model, controlled by a few conceptual-level structural layout parameters. Internal structural components include spars, ribs, frames, and bulkheads. Structural layout parameters include spar/rib locations in wing chordwise/spanwise direction and frame/bulkhead locations in longitudinal direction. A simple shell thickness optimization problem with two load conditions is used to verify versatility and robustness of the automated meshing process. The automation process is implemented in ModelCenter starting from an OpenVSP geometry and ending with a NASTRAN 200 solution. One subsonic configuration and one supersonic configuration are used for numerical verification. Two different structural layouts are constructed for each configuration and five finite-element meshes of different sizes are generated for each layout. The paper includes various comparisons of solutions of 20 thickness optimization problems, as well as discussions on how the optimal solutions are affected by the stress constraint bound and the initial guess of design variables.

  9. An Automatic Medium to High Fidelity Low-Thrust Global Trajectory Toolchain; EMTG-GMAT

    NASA Technical Reports Server (NTRS)

    Beeson, Ryne T.; Englander, Jacob A.; Hughes, Steven P.; Schadegg, Maximillian

    2015-01-01

Solving the global optimization, low-thrust, multiple-flyby interplanetary trajectory problem with high-fidelity dynamical models requires an unreasonable amount of computational resources. A better approach, and one that is demonstrated in this paper, is a multi-step process whereby the aforementioned problem is first solved at lower fidelity and that solution is used as an initial guess for a higher-fidelity solver. The framework presented in this work uses two tools developed by NASA Goddard Space Flight Center: the Evolutionary Mission Trajectory Generator (EMTG) and the General Mission Analysis Tool (GMAT). EMTG is a medium to medium-high fidelity low-thrust interplanetary global optimization solver, which now has the capability to automatically generate GMAT script files for seeding a high-fidelity solution using GMAT's local optimization capabilities. A discussion of the dynamical models as well as thruster and power modeling for both EMTG and GMAT is given in this paper. Current capabilities are demonstrated with examples that highlight the toolchain's ability to efficiently solve the difficult low-thrust global optimization problem with little human intervention.

  10. A Study on the Relationship between English Vocabulary Threshold and Word Guessing Strategy for Pre-University Chinese Students in Malaysia

    ERIC Educational Resources Information Center

    Juan, Wu Xiao; Abidin, Mohamad Jafre Zainol; Eng, Lin Siew

    2013-01-01

This survey aims at studying the relationship between English vocabulary threshold and the word guessing strategy used in reading comprehension learning among 80 pre-university Chinese students in Malaysia. The t-test is the main statistical test for this research, and the collected data are analysed using SPSS. From the standard deviation test…

  11. Controlling Guessing Bias in the Dichotomous Rasch Model Applied to a Large-Scale, Vertically Scaled Testing Program

    ERIC Educational Resources Information Center

    Andrich, David; Marais, Ida; Humphry, Stephen Mark

    2016-01-01

    Recent research has shown how the statistical bias in Rasch model difficulty estimates induced by guessing in multiple-choice items can be eliminated. Using vertical scaling of a high-profile national reading test, it is shown that the dominant effect of removing such bias is a nonlinear change in the unit of scale across the continuum. The…

  12. "Playing with Words": Effects of an Anagram Solving Game-Like Application for Primary Education Students

    ERIC Educational Resources Information Center

    Panagiotakopoulos, Chris T.; Sarris, Menelaos E.

    2013-01-01

    The present study reports the basic characteristics of a game-like application entitled "Playing with Words-PwW". PwW is a single-user application where a word must be guessed given an anagram of that word. Anagrams are presented from a predefined word list and users can repeatedly try to guess the word, from which the anagram is…

  13. Question structure impacts efficiency and performance in an interactive guessing game: implications for strategy engagement and executive functioning.

    PubMed

    Longenecker, Julia; Liu, Kristy; Chen, Eric Y H

    2012-12-30

In an interactive guessing game, controls had higher performance and efficiency than patients with schizophrenia in correct trials. Patients' difficulties generating efficient questions suggest an increased taxation of working memory and an inability to engage an appropriate strategy, leading to impulsive behavior and reduced success.

  14. Interpolation of unevenly spaced data using a parabolic leapfrog correction method and cubic splines

    Treesearch

    Julio L. Guardado; William T. Sommers

    1977-01-01

    The technique proposed allows interpolation of data recorded at unevenly spaced sites to a regular grid or to other sites. Known data are interpolated to an initial guess field grid of unevenly spaced rows and columns by a simple distance weighting procedure. The initial guess field is then adjusted by using a parabolic leapfrog correction and the known data. The final...

  15. Children's Perception of Black and White Boxes and Bobo Dolls as a Reflection of How They Regard Their Own and Other's Racial Membership.

    ERIC Educational Resources Information Center

    Stabler, John R.; Johnson, Edward E.

An investigation of how children's responses to black and white objects reflect racial concepts is reported. One series of experiments asked Head Start children to guess whether objects they liked or disliked were hidden in black or white boxes. Although white children guessed more often that positively evaluated objects were in white boxes, black…

  16. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, B.; Polizzi, E.

    2013-05-01

The self-consistent procedure in electronic structure calculations is revisited using a highly efficient and robust algorithm for solving the non-linear eigenvector problem, i.e., H({ψ})ψ = Eψ. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm to account for the non-linearity of the Hamiltonian with the occupied eigenvectors. Using a series of numerical examples and the Kohn-Sham density functional theory model, it will be shown that our approach can outperform the traditional SCF mixing-scheme techniques by providing a higher convergence rate, convergence to the correct solution regardless of the choice of the initial guess, and a significant reduction of the eigenvalue solve time in simulations.
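For contrast with the eigensolver-based approach above, the traditional SCF mixing loop it aims to outperform can be sketched on a toy non-linear eigenproblem. Everything below is an assumed toy model (a 2x2 Hamiltonian whose diagonal depends on the occupied density), not the paper's method or system:

```python
import numpy as np

def scf(H0, U, alpha=0.5, tol=1e-10, max_iter=500):
    """Toy SCF loop for H(n) = H0 + U*diag(n), where n = |psi|^2 of the
    lowest eigenvector. Linear mixing n <- (1-alpha)*n + alpha*n_new is the
    classic stabilization trick the abstract refers to."""
    n = np.full(H0.shape[0], 1.0 / H0.shape[0])   # initial density guess
    for _ in range(max_iter):
        w, v = np.linalg.eigh(H0 + U * np.diag(n))
        n_new = v[:, 0] ** 2                      # density of ground state
        if np.max(np.abs(n_new - n)) < tol:
            return w[0], n_new                    # self-consistent pair
        n = (1.0 - alpha) * n + alpha * n_new
    raise RuntimeError("SCF did not converge")
```

At convergence the returned density reproduces itself: diagonalizing H(n) again yields the same |psi|^2, which is the self-consistency condition.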

  17. Steady axisymmetric vortex flows with swirl and shear

    NASA Astrophysics Data System (ADS)

    Elcrat, Alan R.; Fornberg, Bengt; Miller, Kenneth G.

    A general procedure is presented for computing axisymmetric swirling vortices which are steady with respect to an inviscid flow that is either uniform at infinity or includes shear. We consider cases both with and without a spherical obstacle. Choices of numerical parameters are given which yield vortex rings with swirl, attached vortices with swirl analogous to spherical vortices found by Moffatt, tubes of vorticity extending to infinity and Beltrami flows. When there is a spherical obstacle we have found multiple solutions for each set of parameters. Flows are found by numerically solving the Bragg-Hawthorne equation using a non-Newton-based iterative procedure which is robust in its dependence on an initial guess.

  18. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
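The core numerical machinery of the abstract, Newton-Raphson on an overdetermined nonlinear system with the pseudo-inverse of an analytic Jacobian obtained via SVD, can be shown on a small stand-in problem. The sketch fits y = a*exp(b*x) rather than camera calibration parameters; the model and function name are illustrative:

```python
import numpy as np

def gauss_newton(x, y, p0, n_iter=50):
    """Least-squares solution of an overdetermined nonlinear system
    y = a*exp(b*x) by Newton-Raphson/Gauss-Newton iteration. The Jacobian
    is analytic and its pseudo-inverse is computed by SVD (np.linalg.pinv)."""
    a, b = p0
    for _ in range(n_iter):
        r = y - a * np.exp(b * x)                 # residuals
        J = np.column_stack([np.exp(b * x),       # d(model)/da
                             a * x * np.exp(b * x)])  # d(model)/db
        step = np.linalg.pinv(J) @ r              # SVD-based pseudo-inverse
        a, b = a + step[0], b + step[1]
        if np.linalg.norm(step) < 1e-12:
            break
    return a, b
```

As in the abstract, the pseudo-inverse handles the non-square Jacobian, solving each linearized step in the least-squares sense.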

  19. Application of a computationally efficient method to approximate gap model results with a probabilistic approach

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Lischke, H.

    2014-07-01

To be able to simulate climate change effects on forest dynamics over the whole of Switzerland, we adapted the second-generation DGVM (dynamic global vegetation model) LPJ-GUESS (Lund-Potsdam-Jena General Ecosystem Simulator) to the Alpine environment. We modified model functions, tuned model parameters, and implemented new tree species to represent the potential natural vegetation of Alpine landscapes. Furthermore, we increased the computational efficiency of the model to enable area-covering simulations at a fine resolution (1 km) sufficient for the complex topography of the Alps, which resulted in more than 32 000 simulation grid cells. To this aim, we applied the recently developed method GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) (Scherstjanoi et al., 2013) to LPJ-GUESS. GAPPARD derives mean output values from a combination of simulation runs without disturbances and a patch age distribution defined by the disturbance frequency. With this computationally efficient method, which increased the model's speed by approximately a factor of 8, we were able to detect shortcomings of LPJ-GUESS functions and parameters more quickly. We used the adapted LPJ-GUESS together with GAPPARD to assess the influence of one climate change scenario on the dynamics of tree species composition and biomass throughout the 21st century in Switzerland. To allow for comparison with the original model, we additionally simulated forest dynamics along a north-south transect through Switzerland. The results from this transect confirmed the high value of the GAPPARD method despite some limitations in representing extreme climatic events. For the first time, it allowed us to obtain area-wide, detailed high-resolution LPJ-GUESS simulation results for a large part of the Alpine region.
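The aggregation idea behind GAPPARD, combining a single undisturbed run with a patch age distribution set by the disturbance frequency instead of simulating many disturbed patches, can be sketched in a few lines. The exponential age distribution and the function name below are my own assumptions for illustration; the actual method is defined in Scherstjanoi et al. (2013):

```python
import numpy as np

def gappard_mean(undisturbed, freq):
    """GAPPARD-style aggregation sketch: weight the output time series of a
    single undisturbed run (indexed by patch age in years) by a discrete
    exponential patch-age distribution implied by disturbance frequency
    `freq`, giving the landscape mean without per-patch simulation."""
    ages = np.arange(len(undisturbed))
    w = freq * np.exp(-freq * ages)   # probability a patch has this age
    w /= w.sum()                      # normalise the discrete distribution
    return float(np.dot(w, undisturbed))
```

Because one deterministic run replaces an ensemble of stochastic disturbance realizations, the speed-up reported in the abstract comes essentially for free.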

  20. Reliability of MEG source imaging of anterior temporal spikes: analysis of an intracranially characterized spike focus.

    PubMed

    Wennberg, Richard; Cheyne, Douglas

    2014-05-01

To assess the reliability of MEG source imaging (MSI) of anterior temporal spikes through detailed analysis of the localization and orientation of source solutions obtained for a large number of spikes that were separately confirmed by intracranial EEG to be focally generated within a single, well-characterized spike focus. MSI was performed on 64 identical right anterior temporal spikes from an anterolateral temporal neocortical spike focus. The effects of different volume conductors (sphere and realistic head model), removal of noise with low frequency filters (LFFs) and averaging multiple spikes were assessed in terms of the reliability of the source solutions. MSI of single spikes resulted in scattered dipole source solutions that showed reasonable reliability for localization at the lobar level, but only for solutions with a goodness-of-fit exceeding 80% using a LFF of 3 Hz. Reliability at a finer level of intralobar localization was limited. Spike averaging significantly improved the reliability of source solutions and averaging 8 or more spikes reduced dependency on goodness-of-fit and data filtering. MSI performed on topographically identical individual spikes from an intracranially defined classical anterior temporal lobe spike focus was limited by low reliability (i.e., scattered source solutions) in terms of fine, sublobar localization within the ipsilateral temporal lobe. Spike averaging significantly improved reliability. MSI performed on individual anterior temporal spikes is limited by low reliability. Reduction of background noise through spike averaging significantly improves the reliability of MSI solutions.

  1. Contaminant point source localization error estimates as functions of data quantity and model quality

    DOE PAGES

    Hansen, Scott K.; Vesselinov, Velimir Valentinov

    2016-10-01

We develop empirically-grounded error envelopes for localization of a point contamination release event in the saturated zone of a previously uncharacterized heterogeneous aquifer into which a number of plume-intercepting wells have been drilled. We assume that flow direction in the aquifer is known exactly and velocity is known to within a factor of two of our best guess from well observations prior to source identification. Other aquifer and source parameters must be estimated by interpretation of well breakthrough data via the advection-dispersion equation. We employ high performance computing to generate numerous random realizations of aquifer parameters and well locations, simulate well breakthrough data, and then employ unsupervised machine optimization techniques to estimate the most likely spatial (or space-time) location of the source. Tabulating the accuracy of these estimates from the multiple realizations, we relate the size of 90% and 95% confidence envelopes to the data quantity (number of wells) and model quality (fidelity of ADE interpretation model to actual concentrations in a heterogeneous aquifer with channelized flow). We find that for purely spatial localization of the contaminant source, increased data quantities can make up for reduced model quality. For space-time localization, we find similar qualitative behavior, but significantly degraded spatial localization reliability and less improvement from extra data collection. Since the space-time source localization problem is much more challenging, we also tried a multiple-initial-guess optimization strategy. This greatly enhanced performance, but gains from additional data collection remained limited.
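The multiple-initial-guess strategy mentioned in the abstract is a generic pattern: restart a local optimizer from several random guesses and keep the best local solution found. The sketch below uses my own stand-ins (a clipped gradient descent and the Himmelblau test function), not the paper's machine-optimization setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def himmelblau(p):
    """Multimodal test function with four global minima of value 0."""
    x, y = p
    return (x**2 + y - 11.0)**2 + (x + y**2 - 7.0)**2

def grad(p):
    x, y = p
    gx = 4.0 * x * (x**2 + y - 11.0) + 2.0 * (x + y**2 - 7.0)
    gy = 2.0 * (x**2 + y - 11.0) + 4.0 * y * (x + y**2 - 7.0)
    return np.array([gx, gy])

def multistart(n_starts=20, lr=0.005, n_iter=5000):
    """Multiple-initial-guess optimization: run a local descent from several
    random guesses and keep the best local solution found."""
    best_p, best_f = None, np.inf
    for _ in range(n_starts):
        p = rng.uniform(-4.0, 4.0, size=2)
        for _ in range(n_iter):
            g = grad(p)
            norm = np.linalg.norm(g)
            if norm > 100.0:              # clip huge far-field gradients
                g = g * (100.0 / norm)
            p = p - lr * g
        f = himmelblau(p)
        if f < best_f:
            best_p, best_f = p, f
    return best_p, best_f
```

Restarts trade extra compute for robustness against local minima, the same trade-off the abstract reports for the harder space-time localization problem.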

  2. A morphing-based scheme for large deformation analysis with stereo-DIC

    NASA Astrophysics Data System (ADS)

    Genovese, Katia; Sorgente, Donato

    2018-05-01

A key step in the DIC-based image registration process is the definition of the initial guess for the non-linear optimization routine aimed at finding the parameters describing the pixel subset transformation. This initialization may prove very challenging, and possibly fail, when dealing with pairs of largely deformed images such as those obtained from two angled views of non-flat objects or from temporal undersampling of rapidly evolving phenomena. To address this problem, we developed a procedure that generates a sequence of intermediate synthetic images for gradually tracking the pixel subset transformation between the two extreme configurations. To this end, a proper image warping function is defined over the entire image domain through the adoption of a robust feature-based algorithm followed by a NURBS-based interpolation scheme. This allows a fast and reliable estimation of the initial guess of the deformation parameters for the subsequent refinement stage of the DIC analysis. The proposed method is described step-by-step by illustrating the measurement of the large and heterogeneous deformation of a circular silicone membrane undergoing axisymmetric indentation. A comparative analysis of the results is carried out by taking as a benchmark a standard reference-updating approach. Finally, the morphing scheme is extended to the most general case of the correspondence search between two largely deformed textured 3D geometries. The feasibility of this latter approach is demonstrated on a very challenging case: the full-surface measurement of the severe deformation (> 150% strain) suffered by an aluminum sheet blank subjected to a pneumatic bulge test.

  3. Single trial discrimination of individual finger movements on one hand: A combined MEG and EEG study☆

    PubMed Central

    Quandt, F.; Reichert, C.; Hinrichs, H.; Heinze, H.J.; Knight, R.T.; Rieger, J.W.

    2012-01-01

It is crucial to understand what brain signals can be decoded from single trials with different recording techniques for the development of Brain-Machine Interfaces. A specific challenge for non-invasive recording methods is activations confined to small spatial areas on the cortex, such as the finger representation of one hand. Here we study the information content of single trial brain activity in non-invasive MEG and EEG recordings elicited by finger movements of one hand. We investigate the feasibility of decoding which of four fingers of one hand performed a slight button press. With MEG we demonstrate reliable discrimination of single button presses performed with the thumb, the index, the middle or the little finger (average over all subjects and fingers 57%, best subject 70%, empirical guessing level: 25.1%). EEG decoding performance was less robust (average over all subjects and fingers 43%, best subject 54%, empirical guessing level 25.1%). Spatiotemporal patterns of amplitude variations in the time series provided the best information for discriminating finger movements. Non-phase-locked changes of mu and beta oscillations were less predictive. Movement related high gamma oscillations were observed in average induced oscillation amplitudes in the MEG but did not provide sufficient information about the finger's identity in single trials. Importantly, pre-movement neuronal activity provided information about the preparation of the movement of a specific finger. Our study demonstrates the potential of non-invasive MEG to provide informative features for individual finger control in a Brain-Machine Interface neuroprosthesis. PMID:22155040

  4. Algorithm based on the Thomson problem for determination of equilibrium structures of metal nanoclusters

    NASA Astrophysics Data System (ADS)

    Arias, E.; Florez, E.; Pérez-Torres, J. F.

    2017-06-01

    A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
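The Thomson-style guessing step described above can be sketched with the classic unit-charge version of the problem: scatter random points on the unit sphere and relax them under pairwise repulsion by projected gradient descent. Function names, step sizes, and the use of plain Coulomb energy are my illustrative choices; the paper restricts the nuclear potential energy function instead:

```python
import numpy as np

rng = np.random.default_rng(2)

def thomson_energy(X):
    """Coulomb repulsion energy of unit charges at positions X (n, 3)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    iu = np.triu_indices(len(X), k=1)
    return float(np.sum(1.0 / D[iu]))

def relax_on_sphere(n, steps=2000, lr=0.01):
    """Stochastic guess + projected descent: random points on the unit
    sphere, moved along the pairwise repulsive force and renormalised back
    onto the sphere after every step."""
    X = rng.normal(size=(n, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    for _ in range(steps):
        diff = X[:, None, :] - X[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        np.fill_diagonal(d, np.inf)                     # ignore self-pairs
        force = np.sum(diff / d[..., None]**3, axis=1)  # -grad of sum 1/r
        X += lr * force
        X /= np.linalg.norm(X, axis=1, keepdims=True)   # project to sphere
    return X
```

In the paper's scheme, configurations found this way seed a subsequent total-energy optimization, playing the same role as VSEPR-style guesses in molecular quantum chemistry.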

  5. Detecting Patterns of Anomalies

    DTIC Science & Technology

    2009-03-01

    [Extraction artifact: the abstract was lost and replaced by fragments of a results table reporting detection rates with confidence intervals for attack categories including guess passwd (e.g., 0.7316 ± 0.0133), mailbomb, neptune (0.9938 ± 0.003), and smurf.]

  6. Algorithm based on the Thomson problem for determination of equilibrium structures of metal nanoclusters.

    PubMed

    Arias, E; Florez, E; Pérez-Torres, J F

    2017-06-28

    A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model to guess initial configurations in traditional molecular quantum chemistry. The framework is illustrated on several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.

  7. Objective analysis of pseudostress over the Indian Ocean using a direct-minimization approach

    NASA Technical Reports Server (NTRS)

    Legler, David M.; Navon, I. M.; O'Brien, James J.

    1989-01-01

    A technique not previously used in objective analysis of meteorological data is used here to produce monthly average surface pseudostress data over the Indian Ocean. An initial guess field is derived and a cost functional is constructed with five terms: approximation to the initial guess, approximation to climatology, a smoothness term, and two kinematic terms. The functional is minimized using a conjugate-gradient technique, and the weight for the climatology term controls the overall balance of influence between the climatology and the initial guess. Results from various weight combinations are presented for January and July 1984. Quantitative and qualitative comparisons to the subjective analysis are made to determine which weight combination provides the best results. The weight on the approximation to climatology is found to balance the influence of the original field and climatology.
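
    As a rough sketch of this kind of variational blending, the quadratic functional below keeps only three of the five terms (fit to the initial guess, fit to climatology, smoothness) on a 1-D field and minimizes it with a standard linear conjugate-gradient iteration. The weights and the test fields are made-up illustrations, not the paper's pseudostress analysis:

```python
import numpy as np

def analyze(guess, clim, w_guess=1.0, w_clim=0.5, w_smooth=2.0, iters=200):
    # Quadratic cost J(u) = w_guess||u-guess||^2 + w_clim||u-clim||^2 + w_smooth||D u||^2,
    # minimized by solving the normal equations A u = b with conjugate gradient.
    n = len(guess)
    D = np.diff(np.eye(n), axis=0)            # first-difference (smoothness) operator
    A = (w_guess + w_clim) * np.eye(n) + w_smooth * D.T @ D
    b = w_guess * guess + w_clim * clim
    u = guess.astype(float).copy()
    r = b - A @ u
    p = r.copy()
    for _ in range(iters):                    # standard linear conjugate gradient
        rr = r @ r
        if rr < 1e-20:
            break
        Ap = A @ p
        alpha = rr / (p @ Ap)
        u = u + alpha * p
        r = r - alpha * Ap
        p = r + (r @ r / rr) * p
    return u
```

    Raising the climatology weight pulls the analysis toward the climatological field, mirroring the balance-of-influence role the weight plays in the paper.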

  8. Guess Again (and Again and Again): Measuring Password Strength by Simulating Password-Cracking Algorithms

    DTIC Science & Technology

    2011-08-31

    [Fragmentary report text:] ... large numbers of hashed passwords (Booz Allen Hamilton, HBGary, Gawker, Sony PlayStation, etc.), coupled with the availability of botnets that offer ... when evaluating the strength of different password-composition policies. ... We investigate the effectiveness of entropy as a measure of password ...

  9. An Empirical Test of a Strategy for Training Examinees in the Use of Partial Information in Taking Multiple Choice Tests.

    ERIC Educational Resources Information Center

    Bliss, Leonard B.

    The aim of this study was to show that the superiority of corrected-for-guessing scores over number right scores as true score estimates depends on the ability of examinees to recognize situations where they can eliminate one or more alternatives as incorrect and to omit items where they would only be guessing randomly. Previous investigations…

  10. Understanding Inference as a Source of Knowledge: Children's Ability To Evaluate the Certainty of Deduction, Perception, and Guessing.

    ERIC Educational Resources Information Center

    Pillow, Bradford H.; Hill, Valerie; Boyce, April; Stein, Catherine

    2000-01-01

    Three experiments investigated children's understanding of inference as a knowledge source. Most 4- to 6-year-olds did not rate a puppet as more certain of a toy's color after the puppet looked at the toy or inferred its color than they did after the puppet guessed the color. Most 8- and 9-year-olds distinguished inference and looking from…

  11. A Generic Expert Scheduling System Architecture and Toolkit: GUESS (Generically Used Expert Scheduling System)

    NASA Technical Reports Server (NTRS)

    Liebowitz, Jay; Krishnamurthy, Vijaya; Rodens, Ira; Houston, Chapman; Liebowitz, Alisa; Baek, Seung; Radko, Joe; Zeide, Janet

    1996-01-01

    Scheduling has become an increasingly important element in today's society and workplace. Within the NASA environment, scheduling is one of the most frequently performed and challenging functions. Towards meeting NASA's scheduling needs, a research version of a generic expert scheduling system architecture and toolkit has been developed. This final report describes the development and testing of GUESS (Generically Used Expert Scheduling System).

  12. Telling good from bad news: ADHD differentially affects processing of positive and negative feedback during guessing.

    PubMed

    van Meel, Catharina S; Oosterlaan, Jaap; Heslenfeld, Dirk J; Sergeant, Joseph A

    2005-01-01

    Neuroimaging studies on ADHD suggest abnormalities in brain regions associated with decision-making and reward processing such as the anterior cingulate cortex (ACC) and orbitofrontal cortex. Recently, event-related potential (ERP) studies demonstrated that the ACC is involved in processing feedback signals during guessing and gambling. The resulting negative deflection, the 'feedback-related negativity' (FRN), has been interpreted as reflecting an error in reward prediction. In the present study, ERPs elicited by positive and negative feedback were recorded in children with ADHD and normal controls during guessing. 'Correct' and 'incorrect' guesses resulted in monetary gains and losses, respectively. The FRN amplitude to losses was more pronounced in the ADHD group than in normal controls. Positive and negative feedback differentially affected long-latency components in the ERP waveforms of normal controls, but not ADHD children. These later deflections might be related to further emotional or strategic processing. The present findings suggest an enhanced sensitivity to unfavourable outcomes in children with ADHD, probably due to abnormalities in mesolimbic reward circuits. In addition, further processing, such as affective evaluation and the assessment of future consequences of the feedback signal, seems to be altered in ADHD. These results may further our understanding of the neural basis of decision-making deficits in ADHD.

  13. Influences of Age and Emotion on Source Guessing: Are Older Adults More Likely to Show Fear-Relevant Illusory Correlations?

    PubMed

    Meyer, Miriam Magdalena; Buchner, Axel; Bell, Raoul

    2016-09-01

    The present study investigates age differences in the vulnerability to illusory correlations between fear-relevant stimuli and threatening information. Younger and older adults saw pictures of threatening snakes and nonthreatening fish, paired with threatening and nonthreatening context information ("poisonous" and "nonpoisonous") with a null contingency between animal type and poisonousness. In a source monitoring test, participants were required to remember whether an animal was associated with poisonousness or nonpoisonousness. Illusory correlations were implicitly measured via a multinomial model. One advantage of this approach is that memory and guessing processes can be assessed independently. An illusory correlation would be reflected in a higher probability of guessing that a snake rather than a fish was poisonous if the poisonousness of the animal was not remembered. Older adults showed evidence of illusory correlations in source guessing while younger adults did not; instead they showed evidence of probability matching. Moreover, snake fear was associated with increased vulnerability to illusory correlations in older adults. The findings confirm that older adults are more susceptible to fear-relevant illusory correlations than younger adults. © The Author 2015. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  14. Memory and the Korsakoff syndrome: not remembering what is remembered.

    PubMed

    d'Ydewalle, Géry; Van Damme, Ilse

    2007-03-14

    Following the distinction between involuntary unconscious memory, involuntary conscious memory, and intentional retrieval, the focus of the present paper is whether there is an impairment of involuntary conscious memory among Korsakoff patients. At study, participants generated associations versus counted the number of letters with enclosed spaces or the number of vowels in the target words (semantic versus perceptual processing). In the Direct tests, stems were to be used to retrieve the targets with either guessing or no guessing allowed; in the Opposition tests, the stems were to be completed with the first word that came to mind but using another word if that first word was a target word; and in the Indirect tests, no reference was made to the target words from the study phase. In the Direct tests, the performance of Korsakoff patients was not necessarily worse than that of healthy controls, provided guessing was allowed. More critical for the Korsakoff patients was the deficient involuntary conscious memory. The deficiency explained the suppression failures in the Opposition tests, the absence of performance differences between the Indirect and Opposition tests, the absence of a beneficial effect in providing information about the status of the stem, the performance boost when allowed to guess, and the very low rate of "Know"/"Remember" responses.

  15. Automated Tests for Telephone Telepathy Using Mobile Phones.

    PubMed

    Sheldrake, Rupert; Smart, Pamela; Avraamides, Leonidas

    2015-01-01

    The aim was to carry out automated experiments on mobile phones to test for telepathy in connection with telephone calls. Subjects, aged from 10 to 83, registered online with the names and mobile telephone numbers of three or two senders. A computer selected a sender at random, and asked him to call the subject via the computer. The computer then asked the subject to guess the caller's name, and connected the caller and the subject after receiving the guess. A test consisted of six trials. Outcome measures were the effects of subjects' sex and age, the effects of time delays on guesses, and the proportion of correct guesses of the caller's name compared with the 33.3% or 50% mean chance expectations. In 2080 trials with three callers there were 869 hits (41.8%), above the 33.3% chance level (P < 1 × 10(-15)). The hit rate in incomplete tests was 43.8% (P = .00003), showing that optional stopping could not explain the positive results. In 745 trials with two callers, there were 411 hits (55.2%), above the 50% chance level (P = .003). An analysis of the data made it very unlikely that cheating could explain the positive results. These experiments showed that automated tests for telephone telepathy can be carried out using mobile phones. Copyright © 2015 Elsevier Inc. All rights reserved.
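
    The reported comparison of the 41.8% hit rate against the 33.3% chance level can be checked with a simple one-sided normal-approximation test. The abstract does not state which test was actually used, so this is only a plausibility check of the quoted P-value:

```python
from math import sqrt, erfc

def binom_ztest_upper(hits, trials, p0):
    # One-sided z-test of an observed hit rate against chance level p0
    phat = hits / trials
    z = (phat - p0) / sqrt(p0 * (1 - p0) / trials)
    return phat, z, 0.5 * erfc(z / sqrt(2))   # upper-tail p-value

# 869 hits in 2080 three-caller trials vs. the 33.3% chance expectation
rate, z, p = binom_ztest_upper(869, 2080, 1 / 3)
```

    The resulting p-value is far below 1e-15, consistent with the figure quoted in the abstract.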

  16. Beyond semantic accuracy: preschoolers evaluate a speaker's reasons.

    PubMed

    Koenig, Melissa A

    2012-01-01

    Children's sensitivity to the quality of epistemic reasons and their selective trust in the more reasonable of 2 informants was investigated in 2 experiments. Three-, 4-, and 5-year-old children (N = 90) were presented with speakers who stated different kinds of evidence for what they believed. Experiment 1 showed that children of all age groups appropriately judged looking, reliable testimony, and inference as better reasons for belief than pretense, guessing, and desiring. Experiment 2 showed that 3- and 4-year-old children preferred to seek and accept new information from a speaker who was previously judged to use the "best" way of thinking. The findings demonstrate that children distinguish certain good from bad reasons and prefer to learn from those who showcased good reasoning in the past. © 2012 The Author. Child Development © 2012 Society for Research in Child Development, Inc.

  17. Taboo: Working memory and mental control in an interactive task

    PubMed Central

    Hansen, Whitney A.; Goldinger, Stephen D.

    2014-01-01

    Individual differences in working memory (WM) predict principled variation in tasks of reasoning, response time, memory, and other abilities. Theoretically, a central function of WM is keeping task-relevant information easily accessible while suppressing irrelevant information. The present experiment was a novel study of mental control, using performance in the game Taboo as a measure. We tested effects of WM capacity on several indices, including perseveration errors (repeating previous guesses or clues) and taboo errors (saying at least part of a taboo or target word). By most measures, high-span participants were superior to low-span participants: High-spans were better at guessing answers, better at encouraging correct guesses from teammates, and less likely to either repeat themselves or produce taboo clues. Differences in taboo errors occurred only in an easy control condition. The results suggest that WM capacity predicts behavior in tasks requiring mental control, extending this finding to an interactive group setting. PMID:19827699

  18. The effect of guessing on the speech reception thresholds of children.

    PubMed

    Moodley, A

    1990-01-01

    Speech audiometry is an essential part of the assessment of hearing-impaired children and it is now widely used throughout the United Kingdom. Although instructions are universally agreed upon as an important aspect in the administration of any form of audiometric testing, there has been little, if any, research evaluating the influence that the instructions given to a listener have on the Speech Reception Threshold obtained. This study attempts to evaluate what effect guessing has on the Speech Reception Threshold of children. A sample of 30 secondary school pupils between 16 and 18 years of age with normal hearing was used in the study. It is argued that the type of instruction normally used for Speech Reception Threshold in audiometric testing may not provide sufficient control for guessing, and the implications of this, using data obtained in the study, are examined.

  19. Quantifying the effects of social influence

    PubMed Central

    Mavrodiev, Pavlin; Tessone, Claudio J.; Schweitzer, Frank

    2013-01-01

    How do humans respond to indirect social influence when making decisions? We analysed an experiment where subjects had to guess the answer to factual questions, having only aggregated information about the answers of others. While the response of humans to aggregated information is a widely observed phenomenon, it has not been investigated quantitatively, in a controlled setting. We found that the adjustment of individual guesses depends linearly on the distance to the mean of all guesses. This is a remarkable, and yet surprisingly simple regularity. It holds across all questions analysed, even though the correct answers differ by several orders of magnitude. Our finding supports the assumption that individual diversity does not affect the response to indirect social influence. We argue that the nature of the response crucially changes with the level of information aggregation. This insight contributes to the empirical foundation of models for collective decisions under social influence. PMID:23449043
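
    The reported regularity can be written down directly: each subject's adjustment is proportional to the distance between the group mean and their own guess. The adjustment coefficient below is a hypothetical value for illustration, not the slope fitted in the experiment:

```python
def adjust(guesses, alpha=0.4):
    # Each subject moves a fraction alpha of the distance between the
    # group mean and their own guess (alpha is an illustrative value).
    m = sum(guesses) / len(guesses)
    return [g + alpha * (m - g) for g in guesses]
```

    Note that the group mean is invariant under this update, so repeated exposure to aggregated information contracts the individual guesses toward a fixed consensus value.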

  20. Assessing the Reliability of Regional Depth-Duration-Frequency Equations for Gauged and Ungauged Sites

    NASA Astrophysics Data System (ADS)

    Castellarin, A.; Montanari, A.; Brath, A.

    2002-12-01

    The study derives Regional Depth-Duration-Frequency (RDDF) equations for a wide region of northern-central Italy (37,200 km²) by following an adaptation of the approach originally proposed by Alila [WRR, 36(7), 2000]. The proposed RDDF equations have a rather simple structure and allow an estimation of the design storm, defined as the rainfall depth expected for a given storm duration and recurrence interval, in any location of the study area for storm durations from 1 to 24 hours and for recurrence intervals up to 100 years. The reliability of the proposed RDDF equations represents the main concern of the study and is assessed at two different levels. The first level considers the gauged sites and compares estimates of the design storm obtained with the RDDF equations with at-site estimates based upon the observed annual maximum series of rainfall depth and with design storm estimates resulting from a regional estimator recently developed for the study area through a Hierarchical Regional Approach (HRA) [Gabriele and Arnell, WRR, 27(6), 1991]. The second level performs a reliability assessment of the RDDF equations for ungauged sites by means of a jack-knife procedure. Using the HRA estimator as a reference term, the jack-knife procedure assesses the reliability of design storm estimates provided by the RDDF equations for a given location when dealing with the complete absence of pluviometric information. The results of the analysis show that the proposed RDDF equations represent practical and effective computational means for producing a first guess of the design storm at the available raingauges and reliable design storm estimates for ungauged locations. The first author gratefully acknowledges D.H. Burn for sponsoring the submission of the present abstract.

  1. Learning automata-based solutions to the nonlinear fractional knapsack problem with applications to optimal resource allocation.

    PubMed

    Granmo, Ole-Christoffer; Oommen, B John; Myrer, Svein Arild; Olsen, Morten Goodwin

    2007-02-01

    This paper considers the nonlinear fractional knapsack problem and demonstrates how its solution can be effectively applied to two resource allocation problems dealing with the World Wide Web. The novel solution involves a "team" of deterministic learning automata (LA). The first real-life problem relates to resource allocation in web monitoring so as to "optimize" information discovery when the polling capacity is constrained. The disadvantages of the currently reported solutions are explained in this paper. The second problem concerns allocating limited sampling resources in a "real-time" manner with the purpose of estimating multiple binomial proportions. This is the scenario encountered when the user has to evaluate multiple web sites by accessing a limited number of web pages, and the proportions of interest are the fractions of each web site that are successfully validated by an HTML validator. Using the general LA paradigm to tackle both of the real-life problems, the proposed scheme improves a current solution in an online manner through a series of informed guesses that move toward the optimal solution. At the heart of the scheme, a team of deterministic LA performs a controlled random walk on a discretized solution space. Comprehensive experimental results demonstrate that the discretization resolution determines the precision of the scheme, and that for a given precision, the current solution (to both problems) is consistently improved until a nearly optimal solution is found, even for switching environments. Thus, the scheme, while being novel to the entire field of LA, also efficiently handles a class of resource allocation problems previously not addressed in the literature.
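
    A toy version of such a scheme for two web sites can illustrate the idea: a single automaton performs a random walk on a discretized split of the polling capacity, nudged one grid point toward whichever site just yielded a reward. The environment function, reward probabilities, and resolution below are all invented for illustration; the paper uses a team of automata and a more general construction:

```python
import random

random.seed(3)

def la_allocate(rewarded, resolution=200, steps=20000):
    # k/resolution is the current share of polling capacity given to site 0.
    k = resolution // 2
    xs = []
    for t in range(steps):
        i = random.randint(0, 1)                    # probe one site at random
        share = k / resolution if i == 0 else 1 - k / resolution
        if rewarded(i, share):
            # controlled random walk: one grid step toward the rewarded site
            k = min(resolution - 1, k + 1) if i == 0 else max(1, k - 1)
        if t >= steps // 2:
            xs.append(k / resolution)               # time-average the second half
    return sum(xs) / len(xs)

def diminishing_returns(i, share):
    # Hypothetical environment: reward probability falls off with the share
    return random.random() < (0.8 if i == 0 else 0.4) * (1 - share)

split = la_allocate(diminishing_returns)
```

    The walk settles where the marginal reward probabilities equalize, 0.8(1 - x) = 0.4x, i.e. a share of about 2/3 for the more rewarding site.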

  2. Blending Determinism with Evolutionary Computing: Applications to the Calculation of the Molecular Electronic Structure of Polythiophene.

    PubMed

    Sarkar, Kanchan; Sharma, Rahul; Bhattacharyya, S P

    2010-03-09

    A density matrix based soft-computing solution to the quantum mechanical problem of computing the molecular electronic structure of fairly long polythiophene (PT) chains is proposed. The soft-computing solution is based on a "random mutation hill climbing" scheme which is modified by blending it with a deterministic method based on a trial single-particle density matrix [P((0))(R)] for the guessed structural parameters (R), which is allowed to evolve under a unitary transformation generated by the Hamiltonian H(R). The Hamiltonian itself changes as the geometrical parameters (R) defining the polythiophene chain undergo mutation. The scale (λ) of the transformation is optimized by making the energy [E(λ)] stationary with respect to λ. The robustness and the performance levels of variants of the algorithm are analyzed and compared with those of other derivative free methods. The method is further tested successfully with optimization of the geometry of bipolaron-doped long PT chains.
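
    The stochastic half of the scheme, random mutation hill climbing over the structural parameters, looks roughly like this generic sketch. The density-matrix evolution and the λ-optimization that the authors blend in are omitted, and the energy surface and mutation scale are placeholders:

```python
import random

random.seed(5)

def hill_climb(energy, x0, sigma=0.05, steps=5000):
    # Random mutation hill climbing: perturb all parameters with Gaussian
    # noise and keep the mutant only if it lowers the energy.
    x, e = list(x0), energy(x0)
    for _ in range(steps):
        trial = [xi + random.gauss(0.0, sigma) for xi in x]
        et = energy(trial)
        if et < e:
            x, e = trial, et
    return x, e

# Placeholder energy surface with its minimum at (1, 1)
quad = lambda p: sum((v - 1.0) ** 2 for v in p)
best, e_best = hill_climb(quad, [0.0, 0.0])
```

    In the paper's blended scheme, each accepted mutation of the geometry would additionally trigger the deterministic density-matrix step before the energy comparison.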

  3. A General Approach to the Geostationary Transfer Orbit Mission Recovery

    NASA Technical Reports Server (NTRS)

    Faber, Nicolas; Aresini, Andrea; Wauthier, Pascal; Francken, Philippe

    2007-01-01

    This paper discusses recovery scenarios for geosynchronous satellites injected in a non-nominal orbit due to a launcher underperformance. The theory on minimum-fuel orbital transfers is applied to develop an operational tool capable to design a recovery mission. To obtain promising initial guesses for the recovery three complementary techniques are used: p-optimized impulse function contouring, a numerical impulse function minimization and the solutions to the switching equations. The tool evaluates the feasibility of a recovery with the on-board propellant of the spacecraft and performs the complete mission design. This design takes into account for various mission operational constraints such as e.g., the requirement of multiple finite-duration burns, third-body orbital perturbations, spacecraft attitude constraints and ground station visibility. In a final case study, we analyze the consequences of a premature breakdown of an upper rocket stage engine during injection on a geostationary transfer orbit, as well as the possible recovery solution with the satellite on-board propellant.

  4. Evolution of Cosmology

    NASA Astrophysics Data System (ADS)

    Ross, Charles H.

    2005-04-01

    Aristotle thought that the universe was finite and Earth centered. Newton thought that it was infinite. Einstein guessed that the universe was finite, spherical, static, warped, and closed. Hubble's 1930 discovery of the expanding universe, Penzias and Wilson's 1968 discovery of the isotropic CMB, and measurements on light element abundances, however, established a big bang origin. Vera Rubin's 1980 dark matter discovery significantly impacted contending theories. However, 1998 is the year when sufficiently accurate supernova and primordial deuterium data was available to truly explore the universe. CMB anisotropy measurements further extended our cosmological database in 2003. On the theoretical side, Friedmann's 1922 perturbation solution of Einstein's general relativity equations for a static universe has shaped the thought and direction in cosmology for the past 80 years. It describes 3D space as a dynamic function of time. However, 80 years of trying to fit Friedmann's solution to observational data has been a bumpy road - resulting in such counter-intuitive, but necessary, features as rapid inflation, precision tuning, esoteric dark matter, and an accelerating input of esoteric dark energy.

  5. Steady-state configuration and tension calculations of marine cables under complex currents via separated particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Xu, Xue-song

    2014-12-01

    Under complex currents, the motion governing equations of marine cables are complex and nonlinear, and the calculations of cable configuration and tension become difficult compared with those under uniform or simple currents. To obtain numerical results, the usual Newton-Raphson iteration is often adopted, but its stability depends on the guessed initial solution to the governing equations. To improve the stability of the numerical calculation, this paper proposes the separated particle swarm optimization, in which the variables are separated into several groups and the dimension of the search space is reduced to facilitate the particle swarm optimization. Via the separated particle swarm optimization, these governing nonlinear equations can be solved successfully with any initial solution, and the process of numerical calculation is very stable. For the calculations of cable configuration and tension of marine cables under complex currents, the proposed separated particle swarm optimization is more effective than other particle swarm optimizations.
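
    A minimal cooperative sketch of the idea: the solution vector is split into variable groups, and a small swarm optimizes each group in turn while the other groups stay frozen at the incumbent solution. The test function, group sizes, and PSO coefficients are illustrative, not those used for the cable equations:

```python
import numpy as np

rng = np.random.default_rng(1)

def separated_pso(f, dim, groups, passes=5, n_particles=20, steps=50,
                  w=0.7, c1=1.5, c2=1.5, span=5.0):
    best = np.zeros(dim)
    for _ in range(passes):
        for idx in groups:                    # optimize one variable group at a time
            idx = np.array(idx)

            def score(sub):                   # other groups stay at the incumbent
                trial = best.copy()
                trial[idx] = sub
                return f(trial)

            x = best[idx] + rng.uniform(-span, span, (n_particles, len(idx)))
            x[0] = best[idx]                  # keep the incumbent in the swarm
            v = np.zeros_like(x)
            pbest = x.copy()
            pval = np.array([score(p) for p in x])
            g = pbest[np.argmin(pval)].copy()
            for _ in range(steps):            # standard PSO update in the subspace
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = x + v
                val = np.array([score(p) for p in x])
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[np.argmin(pval)].copy()
            best[idx] = g
    return best
```

    On a separable test function the decomposition is exact; on coupled problems the outer passes re-coordinate the groups, which is what makes the low-dimensional swarms effective.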

  6. Knowing about guessing and guessing about knowing: preschoolers' understanding of indeterminacy.

    PubMed

    Fay, A L; Klahr, D

    1996-04-01

    In this article we investigate preschool children's understanding of indeterminacy by examining their ability to distinguish between determinate situations--in which the available evidence eliminates all uncertainty about an outcome--and indeterminate situations--in which it does not. We argue that a full understanding of indeterminacy requires the coordination of 3 processes: search, evaluation, and mapping. We describe 3 experiments aimed at discovering the extent to which these processes, each of which has been implicated in previous accounts of indeterminate reasoning, are developed in preschoolers and the extent to which different children organize the processes into different strategies. Experiment 1 examines 5-year-olds' performance on 1- versus 2-solution problems having different configurations of irrelevant information. Experiments 2 and 3 extend the possible sources of indeterminacy from 2 to 4 and vary the amount of consistent, inconsistent, and to-be-discovered evidence. Our results show that 4- and 5-year-old children readily give "Can tell" responses to determinate problems, as well as "Can't tell" responses when they think that the evidence warrants such a response. In addition, we report 2 new findings: (a) different children use different strategies to process determinate evidence, and these strategies, in turn, predict their performance on indeterminate problems; (b) evidence patterns in which a single positive instance is contrasted with 1 or more negative or unknown instances are particularly difficult to resolve. Many children use a decision rule--the Positive Capture rule--that produces consistent errors on this type of problem.

  7. On the Convenience of Using the Complete Linearization Method in Modelling the BLR of AGN

    NASA Astrophysics Data System (ADS)

    Patriarchi, P.; Perinotto, M.

    The Complete Linearization Method (Mihalas, 1978) consists in the determination of the radiation field (at a set of frequency points), atomic level populations, temperature, electron density etc., by resolving the system of radiative transfer, thermal equilibrium, and statistical equilibrium equations simultaneously and self-consistently. Since the system is not linear, it must be solved by iteration after linearization, using a perturbative method, starting from an initial guess solution. Of course the Complete Linearization Method is more time-consuming than the previous one. But how great can this disadvantage be in the age of supercomputers? It is possible to approximately evaluate the CPU time needed to run a model by computing the number of multiplications necessary to solve the system.

  8. How to Prevent Type-Flaw Guessing Attacks on Password Protocols

    DTIC Science & Technology

    2003-01-01

    [Fragmentary report text:] Sreekanth Malladi, Jim Alves-Foss, Center for Secure and Dependable Systems ... R Retagging 〈−(t, f), +(t′, f)〉. The retagging strand captures the concept of receiving a message of one type and sending it, with a claim of a ...

  9. Cleared Hot: A Forward Air Control (Airborne) Concepts Trainer

    DTIC Science & Technology

    2006-09-01

    [Fragmentary report text:] ... a list of high-level objectives imitating a detailed requirements document. In those cases, software developers are forced to make best guesses about how to meet those objectives. Is there a better method? We embarked on a project to create a ... with participants at the end of an 18-month development cycle, we did the next best thing: Cleared Hot was taken to the mission subject matter ...

  10. Refinements of Stout’s Procedure for Assessing Latent Trait Unidimensionality

    DTIC Science & Technology

    1992-08-01

    [Fragmentary report text:] ... in the presence of guessing when coupled with many high-discriminating items. A revision of DIMTEST is proposed to overcome this limitation. ... When guessing is present in the responses to items, however, linear factor analysis of tetrachoric correlations can produce ... maintaining significance when d=1 and good power when d=2, even when the correlation between the abilities is as high as .7. The present study provides a ...

  11. Testing of Strategies for the Acceleration of the Cost Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ponciroli, Roberto; Vilim, Richard B.

    The general problem addressed in the Nuclear-Renewable Hybrid Energy System (N-R HES) project is finding the optimum economical dispatch (ED) and capacity planning solutions for hybrid energy systems. In the present test-problem configuration, the N-R HES unit is composed of three electrical power-generating components, i.e. the Balance of Plant (BOP), the Secondary Energy Source (SES), and the Energy Storage (ES). In addition, there is an Industrial Process (IP), which is devoted to hydrogen generation. At this preliminary stage, the goal is to find the power outputs of each of the N-R HES unit components (BOP, SES, ES) and the IP hydrogen production level that maximize the unit profit while simultaneously satisfying individual component operational constraints. The optimization problem is meant to be solved in the Risk Analysis Virtual Environment (RAVEN) framework. The dynamic response of the N-R HES unit components is simulated by using dedicated object-oriented models written in the Modelica modeling language. Though this code coupling provides very accurate predictions, the ensuing optimization problem is characterized by a very large number of solution variables. To ease the computational burden and to improve the path to a converged solution, a method to better estimate the initial guess for the optimization problem solution was developed. The proposed approach led to the definition of a suitable Monte Carlo-based optimization algorithm (called the preconditioner), which provides an initial guess for the optimal N-R HES power dispatch and the optimal installed capacity for each of the unit components. The preconditioner samples a set of stochastic power scenarios for each of the N-R HES unit components, and then for each of them the corresponding value of a suitably defined cost function is evaluated. After having simulated a sufficient number of power histories, the configuration which ensures the highest profit is selected as the optimal one. The component physical dynamics are represented through suitable ramp constraints, which considerably simplify the numerical solving. In order to test the capabilities of the proposed approach, the present report tackles the dispatch problem only, i.e. a reference unit configuration is assumed, and each of the N-R HES unit components is assumed to have a fixed installed capacity. As for the next steps, the main improvement will concern the operation strategy of the ES facility. In particular, in order to describe a more realistic battery commitment strategy, the ES operation will be regulated according to electricity price forecasts.
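
    The preconditioner's sample-and-select loop can be sketched for a single ramp-constrained generator dispatched against a price signal. The linear profit model, the names, and the constraint form are simplified stand-ins for the actual N-R HES cost function:

```python
import numpy as np

rng = np.random.default_rng(7)

def precondition(price, capacity, ramp, n_samples=2000):
    # Sample ramp-constrained random dispatch trajectories and keep the
    # most profitable one as the initial guess for the full optimization.
    T = len(price)
    best_profit, best_path = -np.inf, None
    for _ in range(n_samples):
        p = np.empty(T)
        p[0] = rng.uniform(0.0, capacity)
        for t in range(1, T):
            lo = max(0.0, p[t - 1] - ramp)     # ramp-rate constraint
            hi = min(capacity, p[t - 1] + ramp)
            p[t] = rng.uniform(lo, hi)
        profit = float(price @ p)              # toy linear profit model
        if profit > best_profit:
            best_profit, best_path = profit, p
    return best_path, best_profit
```

    Every sampled trajectory satisfies the ramp and capacity constraints by construction, so the selected best sample is always a feasible starting point for the downstream optimizer.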

  12. Non-penetrating sham needle, is it an adequate sham control in acupuncture research?

    PubMed

    Lee, Hyangsook; Bang, Heejung; Kim, Youngjin; Park, Jongbae; Lee, Sangjae; Lee, Hyejung; Park, Hi-Joon

    2011-01-01

    This study aimed to determine whether a non-penetrating sham needle can serve as an adequate sham control. We conducted a randomised, subject-blind, sham-controlled trial in both acupuncture-naïve and experienced healthy volunteers. Participants were randomly allocated to receive either real acupuncture (n=39) or non-penetrating sham acupuncture (n=40) on the hand (LI4), abdomen (CV12) and leg (ST36). The procedures were standardised and identical for both groups. Participants rated acupuncture sensations on a 10-point scale. A blinding index was calculated based on the participants' guesses on the type of acupuncture they had received (real, sham or do not know) for each acupuncture point. The association of knowledge about and experience in acupuncture with correct guessing was also examined. The subjects in both groups were similar with respect to age, gender, and experience or knowledge about acupuncture. The sham needle tended to produce less penetration, pain and soreness only at LI4. Blinding appeared to be successfully achieved for ST36. Although 41% of participants in the real acupuncture group made correct guesses for LI4, 31% guessed incorrectly for CV12, beyond chance level. People with more experience and knowledge about acupuncture were more likely to correctly guess the type of needle they received at ST36 only, compared to that at the other points. A non-penetrating sham needle may successfully blind participants and thus may be a credible sham control. However, the small sample size, the different needle sensations, and the degree and direction of unblinding across acupuncture points warrant further studies in Korea as well as other countries to confirm our finding. Our results also justify the incorporation of formal testing of the use of sham controls in clinical trials of acupuncture. Copyright © 2010 Elsevier Ltd. All rights reserved.

  13. Adaptive time stepping for fluid-structure interaction solvers

    DOE PAGES

    Mayr, M.; Wall, W. A.; Gee, M. W.

    2017-12-22

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost makes this algorithm very appealing in all kinds of FSI applications.
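The abstract does not give the estimator's formulas, but the step-size selection such a scheme feeds can be illustrated with the standard elementary controller, where `order` is the order of the time integrator and `err` the estimated local error. The safety factor and clipping bounds below are illustrative defaults, not values from the paper.

```python
def adapt_dt(dt, err, tol, order, safety=0.9, fac_min=0.5, fac_max=2.0):
    """Elementary step-size controller: scale dt by (tol/err)^(1/(order+1)),
    clipped to avoid abrupt changes. Returns (new_dt, accept_flag)."""
    if err == 0.0:
        return dt * fac_max, True
    fac = safety * (tol / err) ** (1.0 / (order + 1))
    fac = max(fac_min, min(fac_max, fac))  # clip the growth/shrink factor
    return dt * fac, err <= tol            # reject the step if err > tol
```

A rejected step is typically repeated with the reduced `dt`; an accepted step advances the solution and may enlarge `dt` for the next one.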

  14. A finite element based method for solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.; Calise, Anthony J.

    1989-01-01

    A temporal finite element based on a mixed form of the Hamiltonian weak principle is presented for optimal control problems. The mixed form of this principle contains both states and costates as primary variables that are expanded in terms of elemental values and simple shape functions. Unlike other variational approaches to optimal control problems, however, time derivatives of the states and costates do not appear in the governing variational equation. Instead, the only quantities whose time derivatives appear therein are virtual states and virtual costates. Also noteworthy among characteristics of the finite element formulation is the fact that in the algebraic equations which contain costates, they appear linearly. Thus, the remaining equations can be solved iteratively without initial guesses for the costates; this reduces the size of the problem by about a factor of two. Numerical results are presented herein for an elementary trajectory optimization problem which show very good agreement with the exact solution along with excellent computational efficiency and self-starting capability. The goal is to evaluate the feasibility of this approach for real-time guidance applications. To this end, a simplified two-stage, four-state model for an advanced launch vehicle application is presented which is suitable for finite element solution.

  15. Adaptive time stepping for fluid-structure interaction solvers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mayr, M.; Wall, W. A.; Gee, M. W.

    In this work, a novel adaptive time stepping scheme for fluid-structure interaction (FSI) problems is proposed that allows for controlling the accuracy of the time-discrete solution. Furthermore, it eases practical computations by providing an efficient and very robust time step size selection. This has proven to be very useful, especially when addressing new physical problems, where no educated guess for an appropriate time step size is available. The fluid and the structure field, but also the fluid-structure interface are taken into account for the purpose of a posteriori error estimation, rendering it easy to implement and only adding negligible additional cost. The adaptive time stepping scheme is incorporated into a monolithic solution framework, but can straightforwardly be applied to partitioned solvers as well. The basic idea can be extended to the coupling of an arbitrary number of physical models. Accuracy and efficiency of the proposed method are studied in a variety of numerical examples ranging from academic benchmark tests to complex biomedical applications like the pulsatile blood flow through an abdominal aortic aneurysm. Finally, the demonstrated accuracy of the time-discrete solution in combination with reduced computational cost makes this algorithm very appealing in all kinds of FSI applications.

  16. Parameter Estimation and Model Selection in Computational Biology

    PubMed Central

    Lillacci, Gabriele; Khammash, Mustafa

    2010-01-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess if it is not accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection. PMID:20221262
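The first stage of such an approach — an extended Kalman filter that appends the unknown parameters to the state vector and estimates them recursively — can be sketched for a toy scalar decay model. Everything here (the model, the tuning values `q`, `r`, `p0`) is illustrative; it is not the paper's filter variant.

```python
import numpy as np

def ekf_joint(y, dt, q, r, x0, th0, p0):
    """Joint EKF for the scalar system x_{k+1} = x_k - th*x_k*dt with
    measurements y_k = x_k + noise. The unknown rate 'th' is appended to
    the state vector and estimated alongside x."""
    z = np.array([x0, th0], float)     # augmented state [x, theta]
    P = np.eye(2) * p0                 # state covariance
    Q = np.diag([q, q])                # process noise (theta as random walk)
    H = np.array([[1.0, 0.0]])         # we observe x only
    I2 = np.eye(2)
    for yk in y:
        x, th = z
        # predict: linearize the dynamics around the current estimate
        F = np.array([[1.0 - th * dt, -x * dt],
                      [0.0, 1.0]])
        z = np.array([x - th * x * dt, th])
        P = F @ P @ F.T + Q
        # update with the measurement yk
        S = (H @ P @ H.T)[0, 0] + r
        K = (P @ H.T).ravel() / S
        z = z + K * (yk - z[0])
        P = (I2 - np.outer(K, H)) @ P
    return z  # final [x, theta] estimate
```

Running the filter over a simulated trajectory pulls the parameter estimate from its initial guess toward the true value, which would then serve as the starting point for the refinement stage.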

  17. Concentrations of Volatiles in the Lunar Regolith

    NASA Technical Reports Server (NTRS)

    Taylor, Jeff; Taylor, Larry; Duke, Mike

    2007-01-01

    To set lower and upper limits on the overall amounts and types of volatiles released during heating of polar regolith, we examined the data for equatorial lunar regolith and for the compositions of comets. The purpose, specifically, was to answer these questions: 1. Upper/Lower limits and 'best guess' for total amount of volatiles (by weight %) released from lunar regolith up to 150C 2. Upper/Lower limit and 'best guess' for composition of the volatiles released from the lunar regolith by weight %

  18. Statistical Image Recovery From Laser Speckle Patterns With Polarization Diversity

    DTIC Science & Technology

    2010-09-01

    Fourier Transform is taken mapping the data to the pupil plane. The computed phase from this operation is multiplied to the amplitude of the pupil... guess generated by a uniform random number generator (−π to π). The guessed phase is multiplied to the measured amplitude in the image plane and the... plane data. Again, a Fourier transform is performed mapping the manipulated data set back to the image plane. The computed phase in this operation is

  19. Innovation, imitation, and problem-solving in a networked group.

    PubMed

    Wisdom, Thomas N; Goldstone, Robert L

    2011-04-01

    We implemented a problem-solving task in which groups of participants simultaneously played a simple innovation game in a complex problem space, with score feedback provided after each of a number of rounds. Each participant in a group was allowed to view and imitate the guesses of others during the game. The results showed the use of social learning strategies previously studied in other species, and demonstrated benefits of social learning and nonlinear effects of group size on strategy and performance. Rather than simply encouraging conformity, groups provided information to each individual about the distribution of useful innovations in the problem space. Imitation facilitated innovation rather than displacing it, because the former allowed good solutions to be propagated and preserved for further cumulative innovations in the group. Participants generally improved their solutions through the use of fairly conservative strategies, such as changing only a small portion of one's solution at a time, and tending to imitate solutions similar to one's own. Changes in these strategies over time had the effect of making solutions increasingly entrenched, both at individual and group levels. These results showed evidence of nonlinear dynamics in the decentralization of innovation, the emergence of group phenomena from complex interactions of individual efforts, stigmergy in the use of social information, and dynamic tradeoffs between exploration and exploitation of solutions. These results also support the idea that innovation and creativity can be recognized at the group level even when group members are generally cautious and imitative.

  20. Iterative discrete ordinates solution of the equation for surface-reflected radiance

    NASA Astrophysics Data System (ADS)

    Radkevich, Alexander

    2017-11-01

    This paper presents a new method of numerical solution of the integral equation for the radiance reflected from an anisotropic surface. The equation relates the radiance at the surface level with BRDF and solutions of the standard radiative transfer problems for a slab with no reflection on its surfaces. It is also shown that the kernel of the equation satisfies the condition of the existence of a unique solution and the convergence of the successive approximations to that solution. The developed method features two basic steps: discretization on a 2D quadrature, and solving the resulting system of algebraic equations with the successive over-relaxation method based on the Gauss-Seidel iterative process. Presented numerical examples show good agreement between the surface-reflected radiance obtained with DISORT and the proposed method. Analysis of contributions of the direct and diffuse (but not yet reflected) parts of the downward radiance to the total solution is performed. Together, they represent a very good initial guess for the iterative process. This fact ensures fast convergence. Numerical evidence is given that the fastest convergence occurs with a relaxation parameter of 1 (no relaxation). An integral equation for BRDF is derived as inversion of the original equation. The potential of this new equation for BRDF retrievals is analyzed. The approach is found not viable as the BRDF equation appears to be an ill-posed problem, and it requires knowledge of the surface-reflected radiance on the entire domain of both Sun and viewing zenith angles.
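The successive over-relaxation step built on a Gauss-Seidel sweep can be sketched generically for a linear system (this is the textbook iteration, not the paper's 2D-quadrature code; setting `omega=1` gives plain Gauss-Seidel, the case the paper reports as fastest):

```python
import numpy as np

def sor_solve(A, b, x0, omega=1.0, tol=1e-10, max_iter=500):
    """Successive over-relaxation: each unknown is updated in place from the
    newest available values (Gauss-Seidel), then relaxed by omega."""
    x = np.array(x0, float)
    n = len(b)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # dot with already-updated x[:i] and not-yet-updated x[i+1:]
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_gs = (b[i] - sigma) / A[i, i]
            x[i] = (1.0 - omega) * x[i] + omega * x_gs
        if np.linalg.norm(x - x_old, np.inf) < tol:
            return x, it + 1
    return x, max_iter
```

For diagonally dominant systems the sweep converges from any starting vector; a good initial guess, as the paper notes, mainly reduces the number of iterations required.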

  1. An advanced temporal credential-based security scheme with mutual authentication and key agreement for wireless sensor networks.

    PubMed

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi

    2013-07-24

    Wireless sensor networks (WSNs) can be quickly and randomly deployed in any harsh and unattended environment, and only authorized users are allowed to access reliable sensor nodes in WSNs with the aid of gateways (GWNs). Secure authentication models among the users, the sensor nodes and the GWN are important research issues for ensuring communication security and data privacy in WSNs. In 2013, Xue et al. proposed a temporal-credential-based mutual authentication and key agreement scheme for WSNs. However, in this paper, we point out that Xue et al.'s scheme cannot resist stolen-verifier, insider, off-line password guessing, lost smart card, and many-logged-in-users attacks, and these security weaknesses make the scheme inapplicable to practical WSN applications. To tackle these problems, we suggest a simple countermeasure that prevents the proposed attacks while leaving the other merits of Xue et al.'s authentication scheme unchanged.

  2. An Advanced Temporal Credential-Based Security Scheme with Mutual Authentication and Key Agreement for Wireless Sensor Networks

    PubMed Central

    Li, Chun-Ta; Weng, Chi-Yao; Lee, Cheng-Chi

    2013-01-01

    Wireless sensor networks (WSNs) can be quickly and randomly deployed in any harsh and unattended environment, and only authorized users are allowed to access reliable sensor nodes in WSNs with the aid of gateways (GWNs). Secure authentication models among the users, the sensor nodes and the GWN are important research issues for ensuring communication security and data privacy in WSNs. In 2013, Xue et al. proposed a temporal-credential-based mutual authentication and key agreement scheme for WSNs. However, in this paper, we point out that Xue et al.'s scheme cannot resist stolen-verifier, insider, off-line password guessing, lost smart card, and many-logged-in-users attacks, and these security weaknesses make the scheme inapplicable to practical WSN applications. To tackle these problems, we suggest a simple countermeasure that prevents the proposed attacks while leaving the other merits of Xue et al.'s authentication scheme unchanged. PMID:23887085

  3. A digital memories based user authentication scheme with privacy preservation.

    PubMed

    Liu, JunLiang; Lyu, Qiuyun; Wang, Qiuhua; Yu, Xiangxiang

    2017-01-01

    The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attack, key-logger, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information or using extra hardware (such as a USB Key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users' privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results.

  4. Computer simulations of phase field drops on super-hydrophobic surfaces

    NASA Astrophysics Data System (ADS)

    Fedeli, Livio

    2017-09-01

    We present a novel quasi-Newton continuation procedure that efficiently solves the system of nonlinear equations arising from the discretization of a phase field model for wetting phenomena. We perform a comparative numerical analysis that shows the improved speed of convergence gained with respect to other numerical schemes. Moreover, we discuss the conditions that, on a theoretical level, guarantee the convergence of this method. At each iterative step, a suitable continuation procedure develops and passes to the nonlinear solver an accurate initial guess. Discretization is performed with cell-centered finite differences. The resulting system of equations is solved on a composite grid that uses dynamic mesh refinement and multi-grid techniques. The final code enables realistic three-dimensional computer experiments comparable to those produced in laboratory settings. This code offers not only new insights into the phenomenology of super-hydrophobicity, but also serves as a reliable predictive tool for the study of hydrophobic surfaces.

  5. Optimizing the rapid measurement of detection thresholds in infants

    PubMed Central

    Jones, Pete R.; Kalwarowsky, Sarah; Braddick, Oliver J.; Atkinson, Janette; Nardini, Marko

    2015-01-01

    Accurate measures of perceptual threshold are difficult to obtain in infants. In a clinical context, the challenges are particularly acute because the methods must yield meaningful results quickly and within a single individual. The present work considers how best to maximize speed, accuracy, and reliability when testing infants behaviorally and suggests some simple principles for improving test efficiency. Monte Carlo simulations, together with empirical (visual acuity) data from 65 infants, are used to demonstrate how psychophysical methods developed with adults can produce misleading results when applied to infants. The statistical properties of an effective clinical infant test are characterized, and based on these, it is shown that (a) a reduced (false-positive) guessing rate can greatly increase test efficiency, (b) the ideal threshold to target is often below 50% correct, and (c) simply taking the max correct response can often provide the best measure of an infant's perceptual sensitivity. PMID:26237298

  6. A digital memories based user authentication scheme with privacy preservation

    PubMed Central

    Liu, JunLiang; Lyu, Qiuyun; Wang, Qiuhua; Yu, Xiangxiang

    2017-01-01

    The traditional username/password or PIN based authentication scheme, which still remains the most popular form of authentication, has been proved insecure, unmemorable and vulnerable to guessing, dictionary attack, key-logger, shoulder-surfing and social engineering. Based on this, a large number of new alternative methods have recently been proposed. However, most of them rely on users being able to accurately recall complex and unmemorable information or using extra hardware (such as a USB Key), which makes authentication more difficult and confusing. In this paper, we propose a Digital Memories based user authentication scheme adopting homomorphic encryption and a public key encryption design which can protect users’ privacy effectively, prevent tracking and provide multi-level security in an Internet & IoT environment. Also, we prove the superior reliability and security of our scheme compared to other schemes and present a performance analysis and promising evaluation results. PMID:29190659

  7. Retrospective Attention Gates Discrete Conscious Access to Past Sensory Stimuli.

    PubMed

    Thibault, Louis; van den Berg, Ronald; Cavanagh, Patrick; Sergent, Claire

    2016-01-01

    Cueing attention after the disappearance of visual stimuli biases which items will be remembered best. This observation has historically been attributed to the influence of attention on memory as opposed to subjective visual experience. We recently challenged this view by showing that cueing attention after the stimulus can improve the perception of a single Gabor patch at threshold levels of contrast. Here, we test whether this retro-perception actually increases the frequency of consciously perceiving the stimulus, or simply allows for a more precise recall of its features. We used retro-cues in an orientation-matching task and performed mixture-model analysis to independently estimate the proportion of guesses and the precision of non-guess responses. We find that the improvements in performance conferred by retrospective attention are overwhelmingly determined by a reduction in the proportion of guesses, providing strong evidence that attracting attention to the target's location after its disappearance increases the likelihood of perceiving it consciously.
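The mixture-model analysis described here separates a uniform "guess" component from a precise "perceived" component of the response errors. A minimal EM fit illustrates the idea, using a Gaussian in place of the von Mises distribution typically used for circular data; the function names and initial values are illustrative, not the authors' analysis code.

```python
import math
import random

def sample_errors(n, guess_rate, sigma, seed=0):
    """Generate response errors: uniform guesses plus Gaussian-centred responses."""
    rng = random.Random(seed)
    return [rng.uniform(-math.pi, math.pi) if rng.random() < guess_rate
            else rng.gauss(0.0, sigma)
            for _ in range(n)]

def fit_guess_mixture(errors, n_iter=200):
    """EM fit of a two-component mixture to response errors (radians):
    a uniform 'guess' component and a Gaussian 'non-guess' component.
    Returns (guess_proportion, sigma of the non-guess responses)."""
    g, sigma = 0.5, 0.5                      # initial parameter guesses
    u = 1.0 / (2.0 * math.pi)                # uniform density on [-pi, pi)
    for _ in range(n_iter):
        # E-step: responsibility of the guess component for each error
        resp = []
        for e in errors:
            pg = g * u
            pn = (1.0 - g) * math.exp(-e * e / (2.0 * sigma * sigma)) \
                 / (sigma * math.sqrt(2.0 * math.pi))
            resp.append(pg / (pg + pn))
        # M-step: update mixture weight and width of the non-guess component
        g = sum(resp) / len(resp)
        w = [1.0 - r for r in resp]
        sigma = math.sqrt(sum(wi * e * e for wi, e in zip(w, errors)) / sum(w))
    return g, sigma
```

A drop in the fitted guess proportion with no change in sigma is exactly the pattern the study reports for retrospective cueing.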

  8. Elementary School Children’s Cheating Behavior and its Cognitive Correlates

    PubMed Central

    Ding, Xiao Pan; Omrin, Danielle S.; Evans, Angela D.; Fu, Genyue; Chen, Guopeng; Lee, Kang

    2014-01-01

    Elementary school children’s cheating behavior and its cognitive correlates were investigated using a guessing game. Children (N = 95) between 8 and 12 years of age were asked to guess which side of the screen a coin would appear on and received rewards based on their self-reported accuracy. Children’s cheating behavior was measured by examining whether children failed to adhere to the game rules by falsely reporting their accuracy. Children’s theory-of-mind understanding and executive functioning skills were also assessed. The majority of children cheated during the guessing game, and cheating behavior decreased with age. Children with better working memory and inhibitory control were less likely to cheat. However, among the cheaters, those with greater cognitive flexibility used more tactics while cheating. Results revealed the unique role that executive functioning plays in children’s cheating behavior: like a double-edged sword, executive functioning can inhibit children’s cheating behavior on the one hand, while promoting the sophistication of children’s cheating tactics on the other. PMID:24464240

  9. Production and discrimination of facial expressions by preschool children.

    PubMed

    Field, T M; Walden, T A

    1982-10-01

    Production and discrimination of the 8 basic facial expressions were investigated among 34 3-5-year-old preschool children. The children's productions were elicited and videotaped under 4 different prompt conditions (imitation of photographs of children's facial expressions, imitation of those in front of a mirror, imitation of those when given labels for the expressions, and when given only labels). Adults' "guesses" of the children's productions as well as the children's guesses of their own expressions on videotape were more accurate for the happy than afraid or angry expressions and for those expressions elicited during the imitation conditions. Greater accuracy of guessing by the adult than the child suggests that the children's productions were superior to their discriminations, although these skills appeared to be related. Children's production skills were also related to sociometric ratings by their peers and expressivity ratings by their teachers. These were not related to the child's age and only weakly related to the child's expressivity during classroom free-play observations.

  10. Effects of prospective-user factors and sign design features on guessability of pharmaceutical pictograms.

    PubMed

    Chan, Alan H S; Chan, Ken W L

    2013-02-01

    To examine the associations between the guessing performance of 25 pharmaceutical pictograms and five sign features for naïve participants. The effect of prospective-user factors on guessing performance was also investigated. A total of 160 Hong Kong Chinese people, drawn largely from a young student population, guessed the meanings of 25 pharmaceutical pictograms that were generally not familiar to them. Participants then completed a questionnaire about their drug buying and drug label reading habits, and their demographics and medication history. Finally they rated five features (familiarity, concreteness, complexity, meaningfulness, and semantic distance) of the pharmaceutical pictograms using 0-100 scales. For all pharmaceutical pictograms, mean and standard deviation of guessability score were 64.8 and 17.1, respectively. Prospective-user factors of 'occupation', 'age' and 'education level' significantly affected guessing performance. For sign features, semantic closeness was the best predictor of guessability score, followed by simplicity, concreteness, meaningfulness and familiarity. User characteristics and sign features are critical for pharmaceutical pictograms. To be effective, pharmaceutical pictograms should have obvious and direct connections with familiar things and it is recommended that pharmaceutical pictograms should be designed with consideration of the five sign features investigated here. This study provides useful information and recommendations to assist interface designers to create and evaluate icons for pharmaceutical products and to design more user-friendly pharmaceutical pictograms. However, further work is needed to see how older people respond to such pharmaceutical pictograms. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. Performance evaluation of algebraic reconstruction technique (ART) for prototype chest digital tomosynthesis (CDT) system

    NASA Astrophysics Data System (ADS)

    Lee, Haenghwa; Choi, Sunghoon; Jo, Byungdu; Kim, Hyemi; Lee, Donghoon; Kim, Dohyeon; Choi, Seungyeon; Lee, Youngjin; Kim, Hee-Joung

    2017-03-01

    Chest digital tomosynthesis (CDT) is a new 3D imaging technique that can be expected to improve the detection of subtle lung disease over conventional chest radiography. Algorithm development for a CDT system is challenging in that a limited number of low-dose projections are acquired over a limited angular range. To confirm the feasibility of the algebraic reconstruction technique (ART) method under variations in key imaging parameters, image quality metrics were evaluated using a LUNGMAN phantom that included a ground-glass opacity (GGO) tumor. Reconstructed images were obtained from 41 projection images acquired over a total angular range of ±20°. We evaluated contrast-to-noise ratio (CNR) and artifact spread function (ASF) to investigate the effect of reconstruction parameters such as the number of iterations, the relaxation parameter, and the initial guess on image quality. We found that a proper value of the ART relaxation parameter could improve image quality from the same projections. In this study, proper values of the relaxation parameter for zero-image (ZI) and back-projection (BP) initial guesses were 0.4 and 0.6, respectively. Also, the maximum CNR values and the minimum full width at half maximum (FWHM) of ASF were acquired in the reconstructed images after 20 iterations and 3 iterations, respectively. According to the results, a BP initial guess for the ART method could provide better image quality than a ZI initial guess. In conclusion, the ART method with proper reconstruction parameters could improve image quality despite the limited angular range in a CDT system.
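The ART update itself is a relaxed Kaczmarz sweep over the projection rays; a minimal sketch on a toy system (not the CDT geometry) shows where the relaxation parameter and the initial guess enter:

```python
import numpy as np

def art(A, p, x0, relax=0.5, n_iter=10):
    """Kaczmarz-type ART: each iteration sweeps all rays, correcting the
    image x along each system-matrix row A[i] so that A[i] @ x moves toward
    the measured projection p[i], scaled by the relaxation parameter."""
    x = np.array(x0, float)   # the initial guess (e.g., zero image or BP image)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            a = A[i]
            denom = a @ a
            if denom > 0:
                x += relax * (p[i] - a @ x) / denom * a
    return x
```

In this framing, the ZI versus BP comparison in the study amounts to passing a zero vector versus a back-projected image as `x0`, and the 0.4 versus 0.6 values are choices of `relax`.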

  12. Identification of Arbitrary Zonation in Groundwater Parameters using the Level Set Method and a Parallel Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Lei, H.; Lu, Z.; Vesselinov, V. V.; Ye, M.

    2017-12-01

    Simultaneous identification of both the zonation structure of aquifer heterogeneity and the hydrogeological parameters associated with these zones is challenging, especially for complex subsurface heterogeneity fields. In this study, a new approach, based on the combination of the level set method and a parallel genetic algorithm is proposed. Starting with an initial guess for the zonation field (including both zonation structure and the hydraulic properties of each zone), the level set method ensures that material interfaces are evolved through the inverse process such that the total residual between the simulated and observed state variables (hydraulic head) always decreases, which means that the inversion result depends on the initial guess field and the minimization process might fail if it encounters a local minimum. To find the global minimum, the genetic algorithm (GA) is utilized to explore the parameters that define initial guess fields, and the minimal total residual corresponding to each initial guess field is considered as the fitness function value in the GA. Due to the expensive evaluation of the fitness function, a parallel GA is adapted in combination with a simulated annealing algorithm. The new approach has been applied to several synthetic cases in both steady-state and transient flow fields, including a case with real flow conditions at the chromium contaminant site at the Los Alamos National Laboratory. The results show that this approach is capable of identifying the arbitrary zonation structures of aquifer heterogeneity and the hydrogeological parameters associated with these zones effectively.

  13. Adjoint assimilation of altimetric, surface drifter, and hydrographic data in a quasi-geostrophic model of the Azores Current

    NASA Astrophysics Data System (ADS)

    Morrow, Rosemary; de Mey, Pierre

    1995-12-01

    The flow characteristics in the region of the Azores Current are investigated by assimilating TOPEX/POSEIDON and ERS 1 altimeter data into the multilevel Harvard quasigeostrophic (QG) model with open boundaries (Miller et al., 1983) using an adjoint variational scheme (Moore, 1991). The study site lies in the path of the Azores Current, where a branch retroflects to the south in the vicinity of the Madeira Rise. The region was the site of an intensive field program in 1993, SEMAPHORE. We had two main aims in this adjoint assimilation project. The first was to see whether the adjoint method could be applied locally to optimize an initial guess field, derived from the continuous assimilation of altimetry data using optimal interpolation (OI). The second aim was to assimilate a variety of different data sets and evaluate their importance in constraining our QG model. The adjoint assimilation of surface data was effective in optimizing the initial conditions from OI. After 20 iterations the cost function was generally reduced by 50-80%, depending on the chosen data constraints. The primary adjustment process was via the barotropic mode. Altimetry proved to be a good constraint on the variable flow field, in particular for constraining the barotropic field. The excellent data quality of the TOPEX/POSEIDON (T/P) altimeter data provided smooth and reliable forcing, but for our mesoscale study in a region of long decorrelation times O(30 days), the spatial coverage from the combined T/P and ERS 1 data sets was more important for constraining the solution and providing stable flow at all levels. Surface drifters provided an excellent constraint on both the barotropic and baroclinic model fields. More importantly, the drifters provided a reliable measure of the mean field. Hydrographic data were also applied as a constraint; in general, hydrography provided a weak but effective constraint on the vertical Rossby modes in the model.
Finally, forecasts run over a 2-month period indicate that the initial conditions optimized by the 20-day adjoint assimilation provide more stable, longer-term forecasts.
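
    The optimization loop described above (guess the initial conditions, run the model, measure the misfit, and correct the guess using the gradient supplied by the adjoint) can be illustrated on a toy scalar model. Everything below (the dynamics x[t+1] = a*x[t], the step size, the window length) is a hypothetical stand-in for the Harvard QG model and its adjoint:

```python
# Toy adjoint-style assimilation: optimize an initial condition x0 so
# that the forecast of the scalar linear model x[t+1] = a * x[t] fits
# observations.  Cost J(x0) = 0.5 * sum_t (x[t] - y[t])**2; its gradient
# with respect to x0 follows from the (here, analytic) adjoint relation
# dx[t]/dx0 = a**t, and x0 is improved by gradient descent.

a = 0.9                      # model dynamics (hypothetical)
T = 20                       # assimilation window length
x0_true = 2.0
y = [x0_true * a**t for t in range(T)]   # perfect observations

def cost_and_grad(x0):
    x = [x0 * a**t for t in range(T)]    # forward run
    J = 0.5 * sum((xi - yi) ** 2 for xi, yi in zip(x, y))
    g = sum((xi - yi) * a**t for t, (xi, yi) in enumerate(zip(x, y)))
    return J, g

x0 = 0.5                     # first guess (playing the role of the OI field)
J0, _ = cost_and_grad(x0)
for _ in range(20):          # 20 iterations, as in the abstract
    J, g = cost_and_grad(x0)
    x0 -= 0.05 * g           # fixed-step gradient descent
Jf, _ = cost_and_grad(x0)
print(J0, Jf)                # the cost drops sharply
```

    In the actual study the gradient comes from integrating the adjoint model backwards rather than from a closed-form expression, but the structure of the loop is the same.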

  14. Validation of AIRS Retrievals of CO2 via Comparison to In Situ Measurements

    NASA Technical Reports Server (NTRS)

    Olsen, Edward T.; Chahine, Moustafa T.; Chen, Luke L.; Jiang, Xun; Pagano, Thomas S.; Yung, Yuk L.

    2008-01-01

    Topics include AIRS on Aqua, 2002-present, with discussion about continued operation to 2011 and beyond and background, including spectrum, weighting functions, and initialization; comparison with aircraft and FTIR measurements, including Matsueda (CONTRAIL) JAL flask measurements, Park Falls, WI FTIR, Bremen, GDF, and Spitsbergen, Norway; AIRS retrievals over additional FTIR sites in Darwin, AU and Lauder, NZ; and mid-tropospheric carbon dioxide weather and contribution from major surface sources. Slide titles include typical AIRS infrared spectrum, AIRS sensitivity for retrieving CO2 profiles, independence of CO2 solution with respect to the initial guess, available in situ measurements for validation and comparison, comparison of collocated V1.5x AIRS CO2 (N_coll greater than or equal to 9) with INTEX-NA and SPURT;

  15. The More Things Change the More They Stay the Same

    NASA Astrophysics Data System (ADS)

    Moore, John W.

    1998-01-01

    In what year would you guess that these statements appeared in this Journal? Students can be classified as problem oriented or answer oriented. The answer-oriented student ... does little or no reflective thinking. ...To simply work a problem for a student may not be educational at all. The student should be taught the process used in the solution. ...My experience indicates that an answer-oriented attitude can be changed. ...But one can't do much teaching of problem-solving techniques and at the same time get on with the day's lecture. ...Problem-solving technique is a tool of learning. ...To teach it well should be about the most rewarding academic activity. ...A year of stressing methods of problem solving would alter the orientation and motivation of many students we now call poor.

  16. Technology Insertion (TI)/Industrial Process Improvement (IP) Task Order Number 18. Contract Summary Report/Quick Fix Plan for OO-ALC (Optical Repair).

    DTIC Science & Technology

    1991-01-30

    states that continual education and training at all levels of the company is the most important element in enabling companies to gain competitive...staked on information known to be inaccurate and educated guesses from the same people who provided much of the original inaccurate information. The second... educated guesses. 7.1.2.6 Implementation Cost/Schedule Refer to Paragraph 7.1.1.6. 7.1-8 TASK ORDER NO. 18 PROCESS CHARACTERIZATION SCHEDULER RECEIVES ITEM

  17. A comparison of methods for estimating the weight of preterm infants.

    PubMed

    Elser, A S; Vessey, J A

    1995-09-01

    Four methods of predicting a preterm infant's weight (upper mid-arm circumference, gestational age, tape measure nomogram, and guessing) were investigated to see which was the most accurate. The weights of 37 preterm neonates were initially guessed by an experienced clinician, then estimated by the other three approaches applied in a random order, and then confirmed through actual weighing. The correlations between the four estimated weights and the actual weights were .96, .84, .97, and .98, respectively. The tape measure nomogram method was the best overall approach for clinical use.

  18. Autonomous Adaptation and Collaboration of Unmanned Vehicles for Tracking Submerged Contacts

    DTIC Science & Technology

    2012-06-01

    filter: CRS RANGE REPORT = "name=archie,range=23.4,target=jackal,time=2342551.213"
    • Line 8: ping wait is the time delay between range pulses.
    • Line 13: ...
    uFldContactRangeSensor Settings:
    1: ProcessConfig = uFldContactRangeSensor
    2: {
    3:   AppTick = 4
    4:   CommsTick = 4
    5:
    6:   reply distance = jackal = 50
    7:   reach distance = ... REPORT = CRS RANGE REPORT
    8:   MY SHIP = archie
    9:   MY FRIEND = betty
    10:  MY CONTACT = jackal
    11:  MY BEST GUESS = besttarget
    12:  MY AVG GUESS = avgtarget
    13:  ...

  19. Negative Marking and the Student Physician–-A Descriptive Study of Nigerian Medical Schools

    PubMed Central

    Ndu, Ikenna Kingsley; Ekwochi, Uchenna; Di Osuorah, Chidiebere; Asinobi, Isaac Nwabueze; Nwaneri, Michael Osita; Uwaezuoke, Samuel Nkachukwu; Amadi, Ogechukwu Franscesca; Okeke, Ifeyinwa Bernadette; Chinawa, Josephat Maduabuchi; Orjioke, Casmir James Ginikanwa

    2016-01-01

    Background There is considerable debate about the two most commonly used scoring methods, namely, the formula scoring (popularly referred to as negative marking method in our environment) and number right scoring methods. Although the negative marking scoring system attempts to discourage students from guessing in order to increase test reliability and validity, there is the view that it is an excessive and unfair penalty that also increases anxiety. Feedback from students is part of the education process; thus, this study assessed the perception of medical students about negative marking method for multiple choice question (MCQ) examination formats and also the effect of gender and risk-taking behavior on scores obtained with this assessment method. Methods This was a prospective multicenter survey carried out among fifth year medical students in Enugu State University and the University of Nigeria. A structured questionnaire was administered to 175 medical students from the two schools, while a class test was administered to medical students from Enugu State University. Qualitative statistical methods including frequencies, percentages, and chi square were used to analyze categorical variables. Quantitative statistics using analysis of variance was used to analyze continuous variables. Results Inquiry into assessment format revealed that most of the respondents preferred MCQs (65.9%). One hundred and thirty students (74.3%) had an unfavorable perception of negative marking. Thirty-nine students (22.3%) agreed that negative marking reduces the tendency to guess and increases the validity of MCQs examination format in testing knowledge content of a subject compared to 108 (61.3%) who disagreed with this assertion (χ2 = 23.0, df = 1, P = 0.000). The median score of the students who were not graded with negative marking was significantly higher than the score of the students graded with negative marking (P = 0.001). 
There was no statistically significant difference in the risk-taking behavior between male and female students in their MCQ answering patterns with negative marking method (P = 0.618). Conclusions In the assessment of students, it is more desirable to adopt fair penalties for discouraging guessing rather than excessive penalties for incorrect answers, which could intimidate students in negative marking schemes. There is no consensus on the penalty for an incorrect answer. Thus, there is a need for continued research into an effective and objective assessment tool that will ensure that the students’ final score in a test truly represents their level of knowledge. PMID:29349304
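
    The formula-scoring ("negative marking") correction debated above is, in its classical textbook form, R - W/(k-1) for k-option items; a minimal sketch (the 1/(k-1) penalty is the standard form, not a detail of this study):

```python
def formula_score(right, wrong, options):
    """Classical formula score: R - W/(k-1).

    A pure guesser on k-option items expects 1 correct per k guesses and
    loses (k-1) * 1/(k-1) = 1 point for the misses, so random guessing
    has zero expected value under this penalty.
    """
    return right - wrong / (options - 1)

# 100 five-option items: 60 right, 30 wrong, 10 omitted
print(formula_score(60, 30, 5))    # 52.5

# a pure guesser on 100 five-option items expects 20 right, 80 wrong
print(formula_score(20, 80, 5))    # 0.0
```

    The debate in the abstract is precisely over whether this penalty (or a harsher one) is fair, not over the arithmetic.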

  20. LPJ-GUESS Simulated North America Vegetation for 21-0 ka Using the TraCE-21ka Climate Simulation

    NASA Astrophysics Data System (ADS)

    Shafer, S. L.; Bartlein, P. J.

    2016-12-01

    Transient climate simulations that span multiple millennia (e.g., TraCE-21ka) have become more common as computing power has increased, allowing climate models to complete long simulations in relatively short periods of time (i.e., months). These climate simulations provide information on the potential rate, variability, and spatial expression of past climate changes. They also can be used as input data for other environmental models to simulate transient changes for different components of paleoenvironmental systems, such as vegetation. Long, transient paleovegetation simulations can provide information on a range of ecological processes, describe the spatial and temporal patterns of changes in species distributions, and identify the potential locations of past species refugia. Paleovegetation simulations also can be used to fill in spatial and temporal gaps in observed paleovegetation data (e.g., pollen records from lake sediments) and to test hypotheses of past vegetation change. We used the TraCE-21ka transient climate simulation for 21-0 ka from CCSM3, a coupled atmosphere-ocean general circulation model. The TraCE-21ka simulated temperature, precipitation, and cloud data were regridded onto a 10-minute grid of North America. These regridded climate data, along with soil data and atmospheric carbon dioxide concentrations, were used as input to LPJ-GUESS, a general ecosystem model, to simulate North America vegetation from 21-0 ka. LPJ-GUESS simulates many of the processes controlling the distribution of vegetation (e.g., competition), although some important processes (e.g., dispersal) are not simulated. We evaluate the LPJ-GUESS-simulated vegetation (in the form of plant functional types and biomes) for key time periods and compare the simulated vegetation with observed paleovegetation data, such as data archived in the Neotoma Paleoecology Database. 
In general, vegetation simulated by LPJ-GUESS reproduces the major North America vegetation patterns (e.g., forest, grassland) with regional areas of disagreement between simulated and observed vegetation. We describe the regions and time periods with the greatest data-model agreement and disagreement, and discuss some of the strengths and weaknesses of both the simulated climate and simulated vegetation data.

  1. Comparing bird and human soaring strategies

    PubMed Central

    Ákos, Zsuzsa; Nagy, Máté; Vicsek, Tamás

    2008-01-01

    Gliding saves much energy, and covering large distances using only this form of flight represents a great challenge for both birds and people. The solution is to make use of so-called thermals, which are localized, warmer regions in the atmosphere moving upwards with a speed exceeding the descent rate of birds and planes. Whereas birds use this technique mainly for foraging, humans do it as a sporting activity. Thermalling involves efficient optimization, including the skilful localization of thermals, guessing the most favorable route, estimating the best descending rate, etc. In this study, we address the question whether there are any analogies between the solutions birds and humans find to handle the above task. High-resolution track logs were taken from thermalling falcons and paraglider pilots to determine the essential parameters of the flight patterns. We find that there are relevant common features in the ways birds and humans use thermals. In particular, falcons seem to reproduce the MacCready formula widely used by glider pilots to calculate the best slope to take before an upcoming thermal. PMID:18316724
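
    The MacCready calculation mentioned above chooses the inter-thermal glide speed (equivalently, the glide slope) that maximizes average cross-country speed for an expected climb rate in the next thermal. A sketch with a purely hypothetical quadratic glide polar:

```python
# MacCready-style speed-to-fly: choose the glide speed v that maximizes
# average cross-country speed  V_xc = v * c / (c + s(v)),  where c is
# the expected climb rate in the next thermal and s(v) is the sink rate.
# (Gliding a distance d at speed v loses altitude d*s(v)/v, which takes
# d*s(v)/(v*c) to climb back, giving the formula above.)
# The quadratic polar below is hypothetical, purely illustrative.

def sink(v):                    # sink rate (m/s) at airspeed v (m/s)
    return 0.5 + 0.002 * (v - 25.0) ** 2

def cross_country_speed(v, climb):
    return v * climb / (climb + sink(v))

def speed_to_fly(climb):
    # brute-force search over a plausible speed range
    speeds = [20 + 0.1 * i for i in range(601)]   # 20..80 m/s
    return max(speeds, key=lambda v: cross_country_speed(v, climb))

# stronger expected climbs command faster glides between thermals
print(speed_to_fly(1.0), speed_to_fly(3.0))
```

    The qualitative signature the study looks for in the falcon tracks is exactly this monotone relation between expected climb and chosen glide.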

  2. Loss of information in quantum guessing game

    NASA Astrophysics Data System (ADS)

    Plesch, Martin; Pivoluska, Matej

    2018-02-01

    Incompatibility of certain measurements—impossibility of obtaining deterministic outcomes simultaneously—is a well known property of quantum mechanics. This feature can be utilized in many contexts, ranging from Bell inequalities to device-independent QKD protocols. Typically, in these applications the measurements are chosen from a predetermined set based on a classical random variable. One can naturally ask whether the non-determinism of the outcomes is due to an intrinsic hiding property of quantum mechanics, or rather to the fact that classical, incoherent information entered the system via the choice of the measurement. Rozpedek et al (2017 New J. Phys. 19 023038) examined this question for the specific case of two mutually unbiased measurements on systems of different dimensions. They have, somewhat surprisingly, shown that in the case of qubits, if the measurements are chosen coherently with the use of a controlled unitary, the outcomes of both measurements can be guessed deterministically. Here we extend their analysis and show that, specifically for qubits, the measurement result for any set of measurements with any a priori probability distribution can be faithfully guessed by a suitable state preparation and measurement. We also show that, up to a small set of specific cases, this is not possible for higher dimensions. This result manifests a deep difference in the properties of qubits and higher dimensional systems and suggests that the latter might offer higher security in specific cryptographic protocols. More fundamentally, the results show that the impossibility of predicting a result of a measurement is not caused solely by a loss of coherence between the choice of the measurement and the guessing procedure.
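
    The gap between incoherent and coherent measurement choice can be made concrete for the simplest qubit case: with a classical coin choosing between the Z and X bases, a guesser holding a fixed state can succeed with probability at most cos²(π/8) < 1, whereas the coherent (controlled-unitary) choice studied in the paper reaches 1. A brute-force check of the incoherent bound (the grid search and the two-basis example are illustrative only):

```python
# For an *incoherently* chosen measurement (a classical coin picks axis
# m_i with probability p_i), the best a guesser holding a fixed qubit
# state (Bloch vector r) can do is  P = sum_i p_i * (1 + |r . m_i|) / 2.
# We scan Bloch vectors for the Z/X example with equal priors; since only
# the x and z components of r matter here, scanning the x-z great circle
# suffices.

import math

def guess_prob(r, axes_priors):
    return sum(p * (1 + abs(sum(ri * mi for ri, mi in zip(r, m)))) / 2
               for m, p in axes_priors)

axes = [((0.0, 0.0, 1.0), 0.5),   # Z basis, prior 1/2
        ((1.0, 0.0, 0.0), 0.5)]   # X basis, prior 1/2

best = 0.0
steps = 200
for i in range(steps + 1):
    t = math.pi * i / steps
    r = (math.sin(t), 0.0, math.cos(t))
    best = max(best, guess_prob(r, axes))

print(best)    # ~0.8536 = cos^2(pi/8), strictly below 1
```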

  3. Elementary school children's cheating behavior and its cognitive correlates.

    PubMed

    Ding, Xiao Pan; Omrin, Danielle S; Evans, Angela D; Fu, Genyue; Chen, Guopeng; Lee, Kang

    2014-05-01

    Elementary school children's cheating behavior and its cognitive correlates were investigated using a guessing game. Children (n=95) between 8 and 12 years of age were asked to guess which side of the screen a coin would appear on and received rewards based on their self-reported accuracy. Children's cheating behavior was measured by examining whether children failed to adhere to the game rules by falsely reporting their accuracy. Children's theory-of-mind understanding and executive functioning skills were also assessed. The majority of children cheated during the guessing game, and cheating behavior decreased with age. Children with better working memory and inhibitory control were less likely to cheat. However, among the cheaters, those with greater cognitive flexibility used more tactics while cheating. Results revealed the unique role that executive functioning plays in children's cheating behavior: Like a double-edged sword, executive functioning can inhibit children's cheating behavior, on the one hand, while it can promote the sophistication of children's cheating tactics, on the other. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  4. Logical synchronization: how evidence and hypotheses steer atomic clocks

    NASA Astrophysics Data System (ADS)

    Myers, John M.; Madjid, F. Hadi

    2014-05-01

    A clock steps a computer through a cycle of phases. For the propagation of logical symbols from one computer to another, each computer must mesh its phases with arrivals of symbols from other computers. Even the best atomic clocks drift unforeseeably in frequency and phase; feedback steers them toward aiming points that depend on a chosen wave function and on hypotheses about signal propagation. A wave function, always under-determined by evidence, requires a guess. Guessed wave functions are coded into computers that steer atomic clocks in frequency and position—clocks that step computers through their phases of computations, as well as clocks, some on space vehicles, that supply evidence of the propagation of signals. Recognizing the dependence of the phasing of symbol arrivals on guesses about signal propagation elevates `logical synchronization' from its practice in computer engineering to a discipline essential to physics. Within this discipline we begin to explore questions invisible under any concept of time that fails to acknowledge the unforeseeable. In particular, variation of spacetime curvature is shown to limit the bit rate of logical communication.

  5. On the asymptotic standard error of a class of robust estimators of ability in dichotomous item response models.

    PubMed

    Magis, David

    2014-11-01

    In item response theory, the classical estimators of ability are highly sensitive to response disturbances and can return strongly biased estimates of the true underlying ability level. Robust methods were introduced to lessen the impact of such aberrant responses on the estimation process. The computation of asymptotic (i.e., large-sample) standard errors (ASE) for these robust estimators, however, has not yet been fully considered. This paper focuses on a broad class of robust ability estimators, defined by an appropriate selection of the weight function and the residual measure, for which the ASE is derived from the theory of estimating equations. The maximum likelihood (ML) and the robust estimators, together with their estimated ASEs, are then compared in a simulation study by generating random guessing disturbances. It is concluded that both the estimators and their ASE perform similarly in the absence of random guessing, while the robust estimator and its estimated ASE are less biased and outperform their ML counterparts in the presence of random guessing with large impact on the item response process. © 2013 The British Psychological Society.
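
    The contrast above between ML and robust estimating equations can be sketched for a simple 2PL model. The item parameters, the idealized response pattern, and the Huber-type weight below are hypothetical illustrative choices, not those of the paper:

```python
# Robust vs. ML ability estimation in a 2PL model (a sketch).  The ML
# estimating equation is  sum_j a_j * (x_j - P_j(theta)) = 0;  the
# robust version downweights items whose standardized residual is large,
# which is exactly what a lucky guess on a very hard item produces.

import math

a_disc = 1.0
b = [(j - 20) * 0.2 for j in range(40)]      # difficulties -4.0 .. 3.8
theta_true = -1.0

def P(theta, bj):
    return 1.0 / (1.0 + math.exp(-a_disc * (theta - bj)))

# idealized response pattern for theta_true, then a random-guessing
# disturbance: lucky correct answers on the four hardest items
x = [1 if bj < theta_true else 0 for bj in b]
for j in range(36, 40):
    x[j] = 1

def estimate(robust):
    def score(theta):
        s = 0.0
        for xj, bj in zip(x, b):
            p = P(theta, bj)
            res = xj - p
            w = 1.0
            if robust:
                z = abs(res) / math.sqrt(p * (1 - p))
                w = min(1.0, 2.0 / z)        # Huber-type weight, cutoff 2
            s += w * a_disc * res
        return s
    lo, hi = -6.0, 6.0                       # bisection on the score
    for _ in range(60):
        mid = (lo + hi) / 2
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

ml, rob = estimate(False), estimate(True)
print(ml, rob)   # the robust estimate stays much closer to theta_true
```

    The same mechanism drives the simulation results in the abstract: with no disturbance the two estimators essentially coincide, while under random guessing the downweighting limits the upward bias.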

  6. Axisymmetric Vortices with Swirl

    NASA Astrophysics Data System (ADS)

    Elcrat, A.

    2007-11-01

    This talk is concerned with finding solutions of the Euler equations by solving elliptic boundary value problems for the Bragg-Hawthorne equation Lu = -u_rr + (1/r)u_r - u_zz = r^2 f(u) + h(u). Theoretical results have been given previously (Elcrat and Miller, Differential and Integral Equations 16(4) 2003, 949-968) for problems with swirl and general classes of profile functions f, h by iterating Lu^(n+1) = r^2 f(u^(n)) + h(u^(n)) and showing that u^(n) converges monotonically to a solution. The solutions obtained depend on the initial guess, which can be thought of as prescribing level sets of the vortex. When a computational program was attempted, these monotone iterations turned out to be numerically unstable, and a stable computation was achieved by fixing the moment of the cross section of the vortex in the meridional plane. (This generalizes previous computational results in Elcrat, Fornberg and Miller, JFM 433 2001, 315-328.) We obtain families of vortices related to vortex rings with swirl, Moffatt's generalization of Hill's vortex, and tubes of vorticity with swirl wrapped around the symmetry axis. The vortices are embedded in either an irrotational flow or a flow with shear, and we deal with the transition from no swirl in the vortex to flow with only swirl, a Beltrami flow.
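
    The monotone iteration described above has a one-dimensional analogue that is easy to demonstrate: solve the linear problem with the previous iterate in the right-hand side, and the iterates increase monotonically toward a solution. The boundary value problem and right-hand side below are hypothetical stand-ins for the Bragg-Hawthorne problem:

```python
# Monotone iteration for a 1-D analogue of the scheme above:
# solve -u'' = f(u), u(0) = u(1) = 0, by repeatedly solving the *linear*
# problem -u_{n+1}'' = f(u_n).  With f nonnegative and nondecreasing,
# the iterates increase monotonically from the initial guess u_0 = 0
# (the initial guess selects the solution branch, as in the talk).

N = 50                      # interior grid points
h = 1.0 / (N + 1)

def f(u):
    return 1.0 + 0.5 * u    # nonnegative, nondecreasing (hypothetical)

def solve_linear(rhs):
    """Thomas algorithm for the finite-difference system of -u'' = rhs."""
    # (-u[i-1] + 2u[i] - u[i+1]) / h^2 = rhs[i], zero boundary values
    a, diag, c = -1.0, 2.0, -1.0
    d = [r * h * h for r in rhs]
    cp, dp = [0.0] * N, [0.0] * N
    cp[0], dp[0] = c / diag, d[0] / diag
    for i in range(1, N):
        m = diag - a * cp[i - 1]
        cp[i] = c / m
        dp[i] = (d[i] - a * dp[i - 1]) / m
    u = [0.0] * N
    u[-1] = dp[-1]
    for i in range(N - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

u = [0.0] * N
maxima = []
for _ in range(30):
    u = solve_linear([f(ui) for ui in u])
    maxima.append(max(u))

print(maxima[0], maxima[-1])   # max of u_n increases monotonically
```

    The exact solution of this toy problem peaks near 0.132 at x = 1/2, and the discrete maxima climb monotonically to that value, mirroring the convergence claimed for the full iteration.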

  7. Fluid dynamics of the magnetic field dependent thermosolutal convection and viscosity between coaxial contracting discs

    NASA Astrophysics Data System (ADS)

    Khan, Aamir; Shah, Rehan Ali; Shuaib, Muhammad; Ali, Amjad

    2018-06-01

    The effects of magnetic field dependent (MFD) thermosolutal convection and MFD viscosity on the fluid dynamics are investigated between squeezing discs rotating with different velocities. The unsteady constitutive expressions of mass conservation, modified Navier-Stokes, Maxwell and MFD thermosolutal convection are coupled as a system of ordinary differential equations. The corresponding solutions for the transformed radial and azimuthal momentum, as well as solutions for the azimuthal and axial induced magnetic field equations, are determined; the MHD pressure and the torque which the fluid exerts on the upper disc are also derived and discussed in detail. In the case of smooth discs the self-similar equations are solved using the Homotopy Analysis Method (HAM) with appropriate initial guesses and auxiliary parameters to produce an algorithm with accelerated and assured convergence. The validity and accuracy of the HAM results are confirmed by comparison of the HAM solutions with the numerical solver package BVP4c. It has been shown that increasing the magnetic Reynolds number decreases the magnetic field distributions, fluid temperature, and axial and tangential velocity. Also, the azimuthal and axial components of the magnetic field behave oppositely with an increase in MFD viscosity. Applications of the study include automotive magneto-rheological shock absorbers, novel aircraft landing gear systems, heating and cooling processes, biological sensor systems, biological prosthetics, etc.

  8. Orbital Maneuvers for Spacecrafts Travelling to/from the Lagrangian Points

    NASA Astrophysics Data System (ADS)

    Bertachini, A.

    The well-known Lagrangian points that appear in the planar restricted three-body problem (Szebehely, 1967) are very important for astronautical applications. They are five points of equilibrium in the equations of motion, which means that a particle located at one of those points with zero velocity will remain there indefinitely. The collinear points (L1, L2 and L3) are always unstable and the triangular points (L4 and L5) are stable in the case studied here (the Sun-Earth system). They are all very good points to locate a space station, since they require a small amount of ΔV (and fuel) for station-keeping control. The triangular points are especially good for this purpose, since they are stable equilibrium points. In this paper, the planar restricted three-body problem is regularized (using Lemaître regularization) and combined with numerical integration and gradient methods to solve the two-point boundary value problem (the Lambert three-body problem). This combination is applied to the search for families of transfer orbits between the Lagrangian points and the Earth, in the Sun-Earth system, with the minimum possible cost of the control used. So, the final goal of this paper is to find the magnitude and direction of the two impulses to be applied to the spacecraft to complete the transfer: the first one when leaving/arriving at the Lagrangian point and the second one when arriving at/leaving the Earth. This paper is a continuation of two previous papers that studied transfers in the Earth-Moon system: Broucke (1979), which studied transfer orbits between the Lagrangian points and the Moon, and Prado (1996), which studied transfer orbits between the Lagrangian points and the Earth.
    So, the equations of motion are ẍ - 2ẏ = ∂Ω/∂x and ÿ + 2ẋ = ∂Ω/∂y, where Ω is the pseudo-potential given by Ω = (x² + y²)/2 + (1 - μ)/r₁ + μ/r₂. To solve the TPBVP in the regularized variables the following steps are used: i) Guess an initial velocity Vi, so that together with the prescribed initial position ri the complete initial state is known; ii) Guess a final regularized time τf and integrate the regularized equations of motion from τ = 0 until τf; iii) Check the final position rf obtained from the numerical integration against the prescribed final position, and the final real time against the specified time of flight. If they agree (difference less than a specified allowed error), the solution is found and the process stops here. If they do not, increments are made in the guessed initial velocity Vi and in the guessed final regularized time, and the process goes back to step i). The method used to find the increments in the guessed variables is the standard gradient method, as described in Press et al., 1989. The routines available in this reference are also used in this research with minor modifications. After this algorithm is implemented, the Lambert three-body problem between the Earth and the Lagrangian points is solved for several values of the time of flight. Since the regularized system is used to solve this problem, there is no need to specify the final position of M3 as lying in a primary's parking orbit (to avoid the singularity). Then, to make a comparison with previous papers (Broucke, 1979 and Prado, 1996), the centre of the primary is used as the final position for M3. The results are organized in plots of the energy and the initial flight path angle (the control to be used) in the rotating frame against the time of flight. The definition of the angle is such that zero is on the "x" axis (pointing in the positive direction) and it increases in the counterclockwise sense.
    This problem, like Lambert's original version, has two solutions for a given transfer time: one in the counterclockwise direction and one in the clockwise direction in the inertial frame. In this paper, emphasis is given to finding the families with the smallest possible energy (and velocity), although many other families do exist. Broucke, R. (1979), Travelling Between the Lagrange Points and the Moon, Journal of Guidance and Control, Vol. 2; Prado, A.F.B.A. (1996), Travelling Between the Lagrangian Points and the Earth, Acta Astronautica, Vol. 39, No. 7; Press, W. H., B. P. Flannery, S. A. Teukolsky and W. T. Vetterling (1989), Numerical Recipes, Cambridge University Press; Szebehely, V. (1967), Theory of Orbits, Academic Press, New York.
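
    The guess-integrate-compare-correct loop in steps i)-iii) above is the classical shooting method. A sketch on a deliberately simple stand-in dynamics (a linear oscillator instead of the regularized three-body equations, and a secant correction standing in for the gradient method):

```python
# Shooting solution of a two-point boundary value problem: guess the
# initial velocity, integrate the equations of motion, compare the final
# position with the prescribed one, and correct the guess.  The dynamics
# x'' = -x and the boundary data are hypothetical, chosen so the sketch
# stays short and verifiable.

import math

def integrate(v0, T=2.0, steps=400):
    """RK4 integration of x'' = -x from x(0)=1, x'(0)=v0; returns x(T)."""
    def deriv(s):
        x, v = s
        return (v, -x)
    x, v = 1.0, v0
    h = T / steps
    for _ in range(steps):
        k1 = deriv((x, v))
        k2 = deriv((x + h/2 * k1[0], v + h/2 * k1[1]))
        k3 = deriv((x + h/2 * k2[0], v + h/2 * k2[1]))
        k4 = deriv((x + h * k3[0], v + h * k3[1]))
        x += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return x

target = 0.3                          # prescribed final position
v0, v1 = 0.0, 1.0                     # two initial guesses
f0, f1 = integrate(v0) - target, integrate(v1) - target
for _ in range(20):                   # secant correction of the guess
    if abs(f1) < 1e-12:
        break
    v0, v1 = v1, v1 - f1 * (v1 - v0) / (f1 - f0)
    f0, f1 = f1, integrate(v1) - target

print(v1, integrate(v1))              # hits x(T) = target
```

    For this linear test case the converged velocity can be checked against the closed form (target - cos T)/sin T; in the paper the same loop runs on the regularized three-body equations, where no closed form exists.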

  9. Effect of slip on existence, uniqueness, and behavior of similarity solutions for steady incompressible laminar flow in porous tubes and channels

    NASA Astrophysics Data System (ADS)

    Chellam, Shankararaman; Liu, Mei

    2006-08-01

    The existence and multiplicity of similarity solutions for steady, fully developed, incompressible laminar flow in uniformly porous tubes and channels with one or two permeable walls is investigated from first principles. A fourth-order ordinary differential equation, obtained by simplifying the Navier-Stokes equations through Berman's stream function [A. S. Berman, J. Appl. Phys. 24, 1232 (1953)] and Terrill's transformation [R. M. Terrill, Aeronaut. Q. 15, 299 (1964)], is probed analytically. In this work, which considers only symmetric flows in symmetric ducts, the no-slip boundary condition at the porous walls is relaxed to account for momentum transfer within the porous walls. By employing the Saffman [P. G. Saffman, Stud. Appl. Math. 50, 93 (1971)] form of the slip boundary condition, the uniqueness of similarity solutions is investigated theoretically in terms of the signs of the guesses for the missing initial conditions. Solutions were obtained for all wall Reynolds numbers for channel flows, whereas no solutions existed at intermediate values for tube flows. Introducing slip did not fundamentally change the number or the character of solutions corresponding to different sections. However, the range of wall Reynolds numbers for which similarity solutions are theoretically impossible in tube flows was found to be a weak function of the slip coefficient. Slip also weakly influenced the transition wall Reynolds number from flow in the direction of a favorable axial pressure gradient to flow in the direction of an adverse pressure gradient. Momentum transfer from the longitudinal axis to the walls appears to occur more efficiently in porous channels than in porous tubes, even in the presence of slip.

  10. Significant consequences of heat generation/absorption and homogeneous-heterogeneous reactions in second grade fluid due to rotating disk

    NASA Astrophysics Data System (ADS)

    Hayat, Tasawar; Qayyum, Sumaira; Alsaedi, Ahmed; Ahmad, Bashir

    2018-03-01

    Flow of a second grade fluid due to a rotating disk with heat and mass transfer is discussed. Additional effects of heat generation/absorption are also analyzed. The flow is further subjected to homogeneous-heterogeneous reactions. The convergence of the computed solution is assured through appropriate choices of initial guesses and auxiliary parameters. The effects of the involved parameters on the velocities (radial, axial, tangential), temperature and concentration are investigated. Skin friction and Nusselt number are also analyzed. Graphical results depict that an increase in the viscoelastic parameter enhances the axial, radial and tangential velocities. Opposite behavior of the temperature is observed for larger values of the viscoelastic and heat generation/absorption parameters. The concentration profile is an increasing function of the Schmidt number, viscoelastic parameter and heterogeneous reaction parameter. The magnitudes of the skin friction and Nusselt number are enhanced for a larger viscoelastic parameter.

  11. Traffic congestion and reliability : linking solutions to problems.

    DOT National Transportation Integrated Search

    2004-07-19

    The Traffic Congestion and Reliability: Linking Solutions to Problems Report provides : a snapshot of congestion in the United States by summarizing recent trends in : congestion, highlighting the role of unreliable travel times in the effects of con...

  12. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks.

    PubMed

    Dâmaso, Antônio; Rosa, Nelson; Maciel, Paulo

    2017-11-05

    Power consumption is a primary interest in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and of the network stack while also considering their reliability. To solve this problem, we introduce a fully automatic solution for designing power-consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations, and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way.

  13. Dynamic one-dimensional modeling of secondary settling tanks and system robustness evaluation.

    PubMed

    Li, Ben; Stenstrom, M K

    2014-01-01

    One-dimensional secondary settling tank models are widely used in current engineering practice for design and optimization, and usually can be expressed as a nonlinear hyperbolic or nonlinear strongly degenerate parabolic partial differential equation (PDE). Reliable numerical methods are needed to produce approximate solutions that converge to the exact analytical solutions. In this study, we introduced a reliable numerical technique, the Yee-Roe-Davis (YRD) method as the governing PDE solver, and compared its reliability with the prevalent Stenstrom-Vitasovic-Takács (SVT) method by assessing their simulation results at various operating conditions. The YRD method also produced a similar solution to the previously developed Method G and Enquist-Osher method. The YRD and SVT methods were also used for a time-to-failure evaluation, and the results show that the choice of numerical method can greatly impact the solution. Reliable numerical methods, such as the YRD method, are strongly recommended.
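
    A minimal example of a "reliable" conservative update for a scalar settling model is sketched below. This is a simplified stand-in for the Yee-Roe-Davis solver discussed above: a first-order Lax-Friedrichs flux rather than the YRD flux, with a hypothetical Kynch-type batch flux and hypothetical parameters. It illustrates the two properties a reliable settler scheme must keep: exact mass conservation and concentrations staying in the physical range.

```python
# Conservative finite-volume update for the scalar settling equation
# u_t + f(u)_x = 0 on a closed column (zero flux at top and bottom).
# The Lax-Friedrichs numerical flux is monotone under the CFL condition
# dt * alpha / dx <= 1, which guarantees u stays in [0, 1]; the
# telescoping flux differences conserve total mass exactly.

N = 100                    # cells over a settling column of unit depth
dx = 1.0 / N
v0 = 1.0                   # hindered-settling velocity scale (hypothetical)

def f(u):
    return v0 * u * (1.0 - u) ** 2      # Kynch-type batch settling flux

alpha = v0                 # global bound on |f'(u)| for u in [0, 1]
dt = 0.4 * dx / alpha      # CFL-limited time step

def lf_flux(ul, ur):
    return 0.5 * (f(ul) + f(ur)) - 0.5 * alpha * (ur - ul)

u = [0.3] * N              # initially well-mixed suspension
for _ in range(400):
    # zero flux through the top (x=0) and bottom (x=1) of the column
    flux = [0.0] + [lf_flux(u[i], u[i + 1]) for i in range(N - 1)] + [0.0]
    u = [u[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(N)]

total = sum(u) * dx
print(total, min(u), max(u))   # mass conserved; top clears, bottom thickens
```

    Higher-order schemes such as YRD sharpen the sludge-blanket interface that this first-order flux smears, but the conservation and boundedness requirements are the same.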

  14. Ultrasound discloses entheseal involvement in inactive and low active inflammatory bowel disease without clinical signs and symptoms of spondyloarthropathy.

    PubMed

    Bandinelli, Francesca; Milla, Monica; Genise, Stefania; Giovannini, Leonardo; Bagnoli, Siro; Candelieri, Antonio; Collaku, Ledio; Biagini, Silvia; Cerinic, Marco Matucci

    2011-07-01

    To investigate the presence of lower limb entheseal abnormalities in IBD patients without clinical signs and symptoms of SpA and their correlation with IBD clinical variables. A total of 81 IBD patients [55 Crohn's disease (CD) and 26 ulcerative colitis (UC), 43 females and 38 males, mean age 41.3 (12.4) years, BMI 24 (2)] with low active (12) and inactive (67) disease were consecutively studied with US (LOGIQ5 General Electric 10-MHz linear array transducer) of lower limb entheses and compared with 40 healthy controls matched for sex, age and BMI. Quadriceps, patellar, Achilles and plantar fascia entheses were scored according to the 0-36 Glasgow Ultrasound Enthesitis Scoring System (GUESS) and power Doppler (PD). Correlations of GUESS and PD with IBD features [duration, type (CD/UC) and activity (disease activity index for CD/Truelove score for UC)] were investigated. The intra- and inter-reader agreements for US were estimated in all images detected in patients and controls. Of the 81 patients, 71 (92.6%) presented at least one tendon alteration, with mean GUESS 5.1 (3.5): 81.5% thickness (higher than controls, P < 0.05), 67.9% enthesophytosis, 27.1% bursitis and 16.1% erosions. PD was positive in 13/81 (16%) patients. In controls, US showed only enthesophytes (5%) and no PD. GUESS and PD were independent of duration, activity or type (CD/UC) of IBD. The intra- and inter-reader agreements were high (intra-class correlation coefficient >0.9). US entheseal abnormalities are present in IBD patients without clinical signs and symptoms of SpA. US enthesopathy is independent of activity, duration and type of gut disease.

  15. Using functional neuroimaging combined with a think-aloud protocol to explore clinical reasoning expertise in internal medicine.

    PubMed

    Durning, Steven J; Graner, John; Artino, Anthony R; Pangaro, Louis N; Beckman, Thomas; Holmboe, Eric; Oakes, Terrance; Roy, Michael; Riedy, Gerard; Capaldi, Vincent; Walter, Robert; van der Vleuten, Cees; Schuwirth, Lambert

    2012-09-01

    Clinical reasoning is essential to medical practice, but because it entails internal mental processes, it is difficult to assess. Functional magnetic resonance imaging (fMRI) and think-aloud protocols may improve understanding of clinical reasoning, as these methods can more directly assess these processes. The objective of our study was to use a combination of fMRI and think-aloud procedures to examine fMRI correlates of a leading theoretical model in clinical reasoning based on experimental findings to date: analytic (i.e., actively comparing and contrasting diagnostic entities) and nonanalytic (i.e., pattern recognition) reasoning. We hypothesized that there would be functional neuroimaging differences between analytic and nonanalytic reasoning. Seventeen board-certified experts in internal medicine answered and reflected on validated U.S. Medical Licensing Exam and American Board of Internal Medicine multiple-choice questions (easy and difficult) during an fMRI scan. This procedure was followed by completion of a formal think-aloud procedure. fMRI findings provide some support for the presence of analytic and nonanalytic reasoning systems. Statistically significant activation of prefrontal cortex distinguished answering incorrectly versus correctly (p < 0.01), whereas activation of precuneus and midtemporal gyrus distinguished not guessing from guessing (p < 0.01). We found limited fMRI evidence to support analytic and nonanalytic reasoning theory, as our results indicate functional differences with correct vs. incorrect answers and guessing vs. not guessing. However, our findings did not suggest one consistent fMRI activation pattern of internal medicine expertise. This model of employing fMRI correlates offers opportunities to enhance our understanding of theory, as well as to improve our teaching and assessment of clinical reasoning, a key outcome of medical education.

  16. A proactive password checker

    NASA Technical Reports Server (NTRS)

    Bishop, Matt

    1990-01-01

    Password selection has long been a difficult issue; traditionally, passwords are either assigned by the computer or chosen by the user. When the computer does the assignment, the passwords are often hard to remember; when the user makes the selection, the passwords are often easy to guess. This paper describes a technique, and a mechanism, to allow users to select passwords which to them are easy to remember but to others would be very difficult to guess. The technique is site, user, and group compatible, and allows rapid changing of constraints imposed upon the password. Although experience with this technique is limited, it appears to have much promise.
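    The flavor of a proactive checker can be sketched as a rule pipeline that rejects guessable passwords at selection time. The rules below are hypothetical illustrations, not Bishop's actual constraint mechanism, which was site-, user-, and group-configurable:

```python
import re

# Hypothetical rule set: each predicate must hold for the password to pass.
# These rules stand in for the site-configurable constraints of a real checker.
RULES = [
    (lambda pw: len(pw) >= 8, "too short"),
    (lambda pw: not pw.islower(), "all lowercase"),
    (lambda pw: not pw.isdigit(), "all digits"),
    (lambda pw: re.search(r"\d", pw) is not None, "no digit"),
    (lambda pw: pw.lower() not in {"password", "letmein", "qwerty"}, "common word"),
]

def check_password(pw):
    """Return an empty list if pw passes every rule, else the failure reasons."""
    return [reason for rule, reason in RULES if not rule(pw)]
```

    A password is accepted only when `check_password` returns no failure reasons, so constraints can be tightened or relaxed simply by editing the rule list.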

  17. Quantum gambling using two nonorthogonal states

    NASA Astrophysics Data System (ADS)

    Hwang, Won Young; Ahn, Doyeol; Hwang, Sung Woo

    2001-12-01

    We give a (remote) quantum-gambling scheme that makes use of the fact that quantum nonorthogonal states cannot be distinguished with certainty. In the proposed scheme, two participants Alice and Bob can be regarded as playing a game of making guesses on identities of quantum states that are in one of two given nonorthogonal states: if Bob makes a correct (an incorrect) guess on the identity of a quantum state that Alice has sent, he wins (loses). It is shown that the proposed scheme is secure against the nonentanglement attack. It can also be shown heuristically that the scheme is secure in the case of the entanglement attack.
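    The indistinguishability underlying such schemes is quantified by the Helstrom bound, a standard result not restated in the abstract: for two equiprobable pure states with overlap s = |⟨ψ0|ψ1⟩|, the optimal probability of a correct guess is (1 + √(1 − s²))/2, which is strictly below 1 whenever the states are nonorthogonal.

```python
import math

def helstrom_guess_probability(overlap):
    """Optimal probability of correctly identifying which of two equiprobable
    pure states was sent, given |<psi0|psi1>| = overlap (Helstrom bound)."""
    return 0.5 * (1.0 + math.sqrt(1.0 - overlap ** 2))
```

    Orthogonal states (overlap 0) can be guessed with certainty; identical states (overlap 1) admit nothing better than a coin flip.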

  18. Are Birds Smarter Than Mathematicians? Pigeons (Columba livia) Perform Optimally on a Version of the Monty Hall Dilemma

    PubMed Central

    Herbranson, Walter T.; Schroeder, Julia

    2011-01-01

    The “Monty Hall Dilemma” (MHD) is a well known probability puzzle in which a player tries to guess which of three doors conceals a desirable prize. After an initial choice is made, one of the remaining doors is opened, revealing no prize. The player is then given the option of staying with their initial guess or switching to the other unopened door. Most people opt to stay with their initial guess, despite the fact that switching doubles the probability of winning. A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy. Replication of the procedure with human participants showed that humans failed to adopt optimal strategies, even with extensive training. PMID:20175592
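    The 2/3 advantage of switching is easy to verify by simulation. The sketch below is a generic Monte Carlo check of the puzzle itself, not a model of the operant procedure used with the birds:

```python
import random

def play_mhd(switch, rng):
    """One round of the Monty Hall Dilemma; returns True on a win."""
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the player's pick nor the prize.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

def win_rate(switch, trials=100_000, seed=1):
    rng = random.Random(seed)
    return sum(play_mhd(switch, rng) for _ in range(trials)) / trials
```

    Over many trials the switching strategy wins about twice as often as staying, matching the analytic probabilities of 2/3 and 1/3.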

  19. An adaptive procedure for defect identification problems in elasticity

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Sergio; Mura, J.

    2010-07-01

    In the context of inverse problems in mechanics, it is well known that the most typical situation is that neither the interior nor all the boundary is available to obtain data to detect the presence of inclusions or defects. We propose here an adaptive method that uses loads and measures of displacements only on part of the surface of the body, to detect defects in the interior of an elastic body. The method is based on Small Amplitude Homogenization, that is, we work under the assumption that the contrast on the values of the Lamé elastic coefficients between the defect and the matrix is not very large. The idea is that given the data for one loading state and one location of the displacement sensors, we use an optimization method to obtain a guess for the location of the inclusion and then, using this guess, we adapt the position of the sensors and the loading zone, hoping to refine the current guess. Numerical results show that the method is quite efficient in some cases, using in those cases no more than three loading positions and three different positions of the sensors.

  20. Peer norm guesses and self-reported attitudes towards performance-related pay.

    PubMed

    Georgantzis, Nikolaos; Vasileiou, Efi; Kotzaivazoglou, Iordanis

    2017-01-01

    For a variety of reasons, people see themselves differently from how they see others. This basic asymmetry has broad consequences. It leads people to judge themselves and their own behavior differently from how they judge others and others' behavior. This research, first, studies the perceptions and attitudes of Greek Public Sector employees towards the introduction of Performance-Related Pay (PRP) systems, trying to reveal whether there is a divergence between individual attitudes and guesses on peers' attitudes. Secondly, it is investigated whether divergence between own self-reported and peer norm guesses could mediate the acceptance of the aforementioned implementation once job status has been controlled for. This study uses a unique questionnaire of 520 observations which was designed to address the questions outlined above. Our econometric results indicate that workers have heterogeneous attitudes and hold heterogeneous beliefs on others' expectations regarding a successful implementation of PRP. Specifically, individual perceptions are less skeptical towards PRP than are beliefs on others' attitudes. Additionally, we found that managers are significantly more optimistic than lower rank employees regarding the expected success of PRP systems in their jobs. However, they both expect their peers to be more negative than they themselves are.

  1. Peer norm guesses and self-reported attitudes towards performance-related pay

    PubMed Central

    Vasileiou, Efi; Kotzaivazoglou, Iordanis

    2017-01-01

    For a variety of reasons, people see themselves differently from how they see others. This basic asymmetry has broad consequences. It leads people to judge themselves and their own behavior differently from how they judge others and others’ behavior. This research, first, studies the perceptions and attitudes of Greek Public Sector employees towards the introduction of Performance-Related Pay (PRP) systems, trying to reveal whether there is a divergence between individual attitudes and guesses on peers’ attitudes. Secondly, it is investigated whether divergence between own self-reported and peer norm guesses could mediate the acceptance of the aforementioned implementation once job status has been controlled for. This study uses a unique questionnaire of 520 observations which was designed to address the questions outlined above. Our econometric results indicate that workers have heterogeneous attitudes and hold heterogeneous beliefs on others’ expectations regarding a successful implementation of PRP. Specifically, individual perceptions are less skeptical towards PRP than are beliefs on others’ attitudes. Additionally, we found that managers are significantly more optimistic than lower rank employees regarding the expected success of PRP systems in their jobs. However, they both expect their peers to be more negative than they themselves are. PMID:28414737

  2. Are birds smarter than mathematicians? Pigeons (Columba livia) perform optimally on a version of the Monty Hall Dilemma.

    PubMed

    Herbranson, Walter T; Schroeder, Julia

    2010-02-01

    The "Monty Hall Dilemma" (MHD) is a well known probability puzzle in which a player tries to guess which of three doors conceals a desirable prize. After an initial choice is made, one of the remaining doors is opened, revealing no prize. The player is then given the option of staying with their initial guess or switching to the other unopened door. Most people opt to stay with their initial guess, despite the fact that switching doubles the probability of winning. A series of experiments investigated whether pigeons (Columba livia), like most humans, would fail to maximize their expected winnings in a version of the MHD. Birds completed multiple trials of a standard MHD, with the three response keys in an operant chamber serving as the three doors and access to mixed grain as the prize. Across experiments, the probability of gaining reinforcement for switching and staying was manipulated, and birds adjusted their probability of switching and staying to approximate the optimal strategy. Replication of the procedure with human participants showed that humans failed to adopt optimal strategies, even with extensive training.

  3. The art of fault-tolerant system reliability modeling

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1990-01-01

    A step-by-step tutorial of the methods and tools used for the reliability analysis of fault-tolerant systems is presented. Emphasis is on the representation of architectural features in mathematical models. Details of the mathematical solution of complex reliability models are not presented. Instead the use of several recently developed computer programs--SURE, ASSIST, STEM, PAWS--which automate the generation and solution of these models is described.
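    For intuition about what such reliability models compute, a hypothetical closed-form example (not drawn from the tutorial, which treats Markov models solved by tools like SURE) is the 2-out-of-3 majority-voting architecture common in fault-tolerant systems:

```python
def tmr_reliability(r):
    """Reliability of a 2-out-of-3 majority-voting (triple modular redundancy)
    system whose components each survive independently with probability r:
    P(>= 2 of 3 survive) = 3*r^2*(1 - r) + r^3 = 3*r^2 - 2*r^3."""
    return 3.0 * r ** 2 - 2.0 * r ** 3
```

    Note the characteristic crossover: redundancy helps only when components are already reliable (r > 0.5); below that, voting among three unreliable components is worse than a single one.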

  4. Effect of Cage-Induced Stereotypies on Measures of Affective State and Recurrent Perseveration in CD-1 and C57BL/6 Mice

    PubMed Central

    Novak, Janja; Bailoo, Jeremy D.; Melotti, Luca; Würbel, Hanno

    2016-01-01

    Stereotypies are abnormal repetitive behaviour patterns that are highly prevalent in laboratory mice and are thought to reflect impaired welfare. They have been associated with impaired behavioural inhibition and may also reflect negative affective states. However, in mice the relationship between stereotypies and behavioural inhibition is inconclusive, and reliable measures of affective valence are lacking. Here we used an exploration-based task to assess cognitive bias as a measure of affective valence and a two-choice guessing task to assess recurrent perseveration as a measure of impaired behavioural inhibition to test mice with different forms and expression levels of stereotypic behaviour. We trained 44 CD-1 and 40 C57BL/6 female mice to discriminate between positively and negatively cued arms in a radial maze and tested their responses to previously inaccessible ambiguous arms. In CD-1 mice (i) mice with higher stereotypy levels displayed a negative cognitive bias and this was influenced by the form of stereotypy performed, (ii) negative cognitive bias was evident in back-flipping mice, and (iii) no such effect was found in mice displaying bar-mouthing or cage-top twirling. In C57BL/6 mice neither route-tracing nor bar-mouthing was associated with cognitive bias, indicating that in this strain these stereotypies may not reflect negative affective states. Conversely, while we found no relation of stereotypy to recurrent perseveration in CD-1 mice, C57BL/6 mice with higher levels of route-tracing, but not bar-mouthing, made more repetitive responses in the guessing task. Our findings confirm previous research indicating that the implications of stereotypies for animal welfare may strongly depend on the species and strain of animal as well as on the form and expression level of the stereotypy. Furthermore, they indicate that variation in stereotypic behaviour may represent an important source of variation in many animal experiments. PMID:27145080

  5. Integrated Evaluation of Reliability and Power Consumption of Wireless Sensor Networks

    PubMed Central

    Dâmaso, Antônio; Maciel, Paulo

    2017-01-01

    Power consumption is a primary concern in Wireless Sensor Networks (WSNs), and a large number of strategies have been proposed to evaluate it. However, those approaches usually consider neither reliability issues nor the power consumption of applications executing in the network. A central concern is the lack of consolidated solutions that enable us to evaluate the power consumption of applications and the network stack while also considering their reliabilities. To solve this problem, we introduce a fully automatic solution to design power consumption-aware WSN applications and communication protocols. The solution presented in this paper comprises a methodology to evaluate the power consumption based on the integration of formal models, a set of power consumption and reliability models, a sensitivity analysis strategy to select WSN configurations and a toolbox named EDEN to fully support the proposed methodology. This solution allows accurately estimating the power consumption of WSN applications and the network stack in an automated way. PMID:29113078

  6. Community detection in networks with unequal groups.

    PubMed

    Zhang, Pan; Moore, Cristopher; Newman, M E J

    2016-01-01

    Recently, a phase transition has been discovered in the network community detection problem below which no algorithm can tell which nodes belong to which communities with success any better than a random guess. This result has, however, so far been limited to the case where the communities have the same size or the same average degree. Here we consider the case where the sizes or average degrees differ. This asymmetry allows us to assign nodes to communities with better-than-random success by examining their local neighborhoods. Using the cavity method, we show that this removes the detectability transition completely for networks with four groups or fewer, while for more than four groups the transition persists up to a critical amount of asymmetry but not beyond. The critical point in the latter case coincides with the point at which local information percolates, causing a global transition from a less-accurate solution to a more-accurate one.

  7. Balloon Ascent: 3-D Simulation Tool for the Ascent and Float of High-Altitude Balloons

    NASA Technical Reports Server (NTRS)

    Farley, Rodger E.

    2005-01-01

    The BalloonAscent balloon flight simulation code represents a from-scratch development using Visual Basic 5 as the software platform. The simulation code is a transient analysis of balloon flight, predicting the skin and gas temperatures along with the 3-D position and velocity in a time and spatially varying environment. There are manual and automated controls for gas valving and the dropping of ballast. Also, there are many handy calculators, such as appropriate free lift, and steady-state thermal solutions with temperature gradients. The strength of this simulation model over others in the past is that the infrared environment is deterministic rather than guessed at. The ground temperature is specified along with the emissivity, which creates a ground level IR environment that is then partially absorbed as it travels upward through the atmosphere to the altitude of the balloon.

  8. An interior-point method-based solver for simulation of aircraft parts riveting

    NASA Astrophysics Data System (ADS)

    Stefanova, Maria; Yakunin, Sergey; Petukhova, Margarita; Lupuleac, Sergey; Kokkolaras, Michael

    2018-05-01

    The particularities of the aircraft parts riveting process simulation necessitate the solution of a large number of contact problems. A primal-dual interior-point method-based solver is proposed for solving such problems efficiently. The proposed method features a worst-case polynomial complexity bound of O(√n log(n/ε)) on the number of iterations, where n is the dimension of the problem and ε is a threshold related to the desired accuracy. In practice, the convergence is often faster than this worst-case bound, which makes the method applicable to large-scale problems. The computational challenge is solving the system of linear equations because the associated matrix is ill-conditioned. To that end, the authors introduce a preconditioner and a strategy for determining effective initial guesses based on the physics of the problem. Numerical results are compared with ones obtained using the Goldfarb-Idnani algorithm. The results demonstrate the efficiency of the proposed method.
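    The value of a physics-based initial guess can be illustrated generically: in any iterative solver, a warm start near the solution cuts the iteration count. The toy below uses Jacobi iteration on a small tridiagonal system as a stand-in (it is not the authors' preconditioned interior-point solver):

```python
def jacobi(A, b, x0, tol=1e-10, max_iter=10_000):
    """Jacobi iteration for a diagonally dominant system; returns (x, iterations)."""
    n = len(b)
    x = list(x0)
    for k in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

# Small strictly diagonally dominant tridiagonal system (a stand-in for a
# well-behaved stiffness matrix).
n = 30
A = [[4.0 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
b = [1.0] * n

x_cold, it_cold = jacobi(A, b, [0.0] * n)                    # naive zero guess
x_warm, it_warm = jacobi(A, b, [v * 0.99 for v in x_cold])   # guess near the solution
```

    Starting within 1% of the solution removes several iterations relative to the cold start; for ill-conditioned systems like those in the paper, the payoff of a good guess is far larger.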

  9. Coupling Conceptual and Quantitative Problems to Develop Expertise in Introductory Physics Students

    NASA Astrophysics Data System (ADS)

    Singh, Chandralekha

    2008-10-01

    We discuss the effect of administering conceptual and quantitative isomorphic problem pairs (CQIPP) back to back vs. asking students to solve only one of the problems in the CQIPP in introductory physics courses. Students who answered both questions in a CQIPP often performed better on the conceptual questions than those who answered the corresponding conceptual questions only. Although students often took advantage of the quantitative counterpart to answer a conceptual question of a CQIPP correctly, when only given the conceptual question, students seldom tried to convert it into a quantitative question, solve it and then reason about the solution conceptually. Even in individual interviews, when students who were only given conceptual questions had difficulty and the interviewer explicitly encouraged them to convert the conceptual question into the corresponding quantitative problem by choosing appropriate variables, a majority of students were reluctant and preferred to guess the answer to the conceptual question based upon their gut feeling.

  10. Integrated performance and reliability specification for digital avionics systems

    NASA Technical Reports Server (NTRS)

    Brehm, Eric W.; Goettge, Robert T.

    1995-01-01

    This paper describes an automated tool for performance and reliability assessment of digital avionics systems, called the Automated Design Tool Set (ADTS). ADTS is based on an integrated approach to design assessment that unifies traditional performance and reliability views of system designs, and that addresses interdependencies between performance and reliability behavior via the exchange of parameters and results between mathematical models of each type. A multi-layer tool set architecture has been developed for ADTS that separates the concerns of system specification, model generation, and model solution. Performance and reliability models are generated automatically as a function of candidate system designs, and model results are expressed within the system specification. The layered approach helps deal with the inherent complexity of the design assessment process, and preserves long-term flexibility to accommodate a wide range of models and solution techniques within the tool set structure. ADTS research and development to date has focused on development of a language for specification of system designs as a basis for performance and reliability evaluation. A model generation and solution framework has also been developed for ADTS that will ultimately encompass an integrated set of analytic and simulation-based techniques for performance, reliability, and combined design assessment.

  11. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    NASA Astrophysics Data System (ADS)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M. N.; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-01

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.
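    The extended Lagrangian idea can be sketched on a toy self-consistency problem (the fixed-point equation, the value of kappa, and the time grid below are invented for illustration and are not the AMOEBA or Onetep equations): an auxiliary variable, propagated by a time-reversible Verlet-like update, supplies the initial guess for each step's self-consistent solve, reducing total iterations versus a cold start without feeding dissipative memory back into the dynamics.

```python
import math

def scf(c, guess, tol=1e-10):
    """Fixed-point 'SCF' iteration for the toy problem x = 0.5*x + c,
    whose self-consistent solution is x* = 2c. Returns (x, iterations)."""
    x, it = guess, 0
    while abs((0.5 * x + c) - x) > tol:
        x = 0.5 * x + c
        it += 1
    return x, it

dt, steps, kappa = 0.05, 200, 1.0
times = [n * dt for n in range(steps)]
c_of_t = lambda t: math.sin(t)  # slowly varying external parameter

# Cold start: guess zero at every time step.
cold_total = sum(scf(c_of_t(t), 0.0)[1] for t in times)

# Extended-Lagrangian style: auxiliary variable p supplies the guess and is
# propagated by a time-reversible Verlet-like update.
x0, it0 = scf(c_of_t(times[0]), 0.0)
x1, it1 = scf(c_of_t(times[1]), x0)
xl_total = it0 + it1
p_prev, p, x = x0, x1, x1
for t in times[2:]:
    p_prev, p = p, 2 * p - p_prev + kappa * (x - p)  # reversible propagation
    x, it = scf(c_of_t(t), p)                        # p is the initial guess
    xl_total += it
```

    Because the auxiliary trajectory tracks the slowly varying solution, each solve starts close to convergence, and the update rule is symmetric under time reversal, unlike simply re-using the last converged result.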

  12. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory.

    PubMed

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; Niklasson, Anders M N; Head-Gordon, Teresa; Skylaris, Chris-Kriton

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  13. Non-linear eigensolver-based alternative to traditional SCF methods

    NASA Astrophysics Data System (ADS)

    Gavin, Brendan; Polizzi, Eric

    2013-03-01

    The self-consistent iterative procedure in Density Functional Theory calculations is revisited using a new, highly efficient and robust algorithm for solving the non-linear eigenvector problem (i.e., H(X)X = EX) of the Kohn-Sham equations. This new scheme is derived from a generalization of the FEAST eigenvalue algorithm, and provides a fundamental and practical numerical solution for addressing the non-linearity of the Hamiltonian with the occupied eigenvectors. In contrast to SCF techniques, the traditional outer iterations are replaced by subspace iterations that are intrinsic to the FEAST algorithm, while the non-linearity is handled at the level of a projected reduced system which is orders of magnitude smaller than the original one. Using a series of numerical examples, it will be shown that our approach can outperform the traditional SCF mixing techniques such as Pulay-DIIS by providing a high convergence rate and by converging to the correct solution regardless of the choice of the initial guess. We also discuss a practical implementation of the technique that can be achieved effectively using the FEAST solver package. This research is supported by NSF under Grant #ECCS-0846457 and Intel Corporation.

  14. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE PAGES

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex; ...

    2017-03-28

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  15. Performance of extended Lagrangian schemes for molecular dynamics simulations with classical polarizable force fields and density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vitale, Valerio; Dziedzic, Jacek; Albaugh, Alex

    Iterative energy minimization with the aim of achieving self-consistency is a common feature of Born-Oppenheimer molecular dynamics (BOMD) and classical molecular dynamics with polarizable force fields. In the former, the electronic degrees of freedom are optimized, while the latter often involves an iterative determination of induced point dipoles. The computational effort of the self-consistency procedure can be reduced by re-using converged solutions from previous time steps. However, this must be done carefully, so as not to break time-reversal symmetry, which negatively impacts energy conservation. Self-consistent schemes based on the extended Lagrangian formalism, where the initial guesses for the optimized quantities are treated as auxiliary degrees of freedom, constitute one elegant solution. We report on the performance of two integration schemes with the same underlying extended Lagrangian structure, which we both employ in two radically distinct regimes—in classical molecular dynamics simulations with the AMOEBA polarizable force field and in BOMD simulations with the Onetep linear-scaling density functional theory (LS-DFT) approach. Furthermore, both integration schemes are found to offer significant improvements over the standard (unpropagated) molecular dynamics formulation in both the classical and LS-DFT regimes.

  16. Predictive Simulations of Neuromuscular Coordination and Joint-Contact Loading in Human Gait.

    PubMed

    Lin, Yi-Chung; Walter, Jonathan P; Pandy, Marcus G

    2018-04-18

    We implemented direct collocation on a full-body neuromusculoskeletal model to calculate muscle forces, ground reaction forces and knee contact loading simultaneously for one cycle of human gait. A data-tracking collocation problem was solved for walking at normal speed to establish the practicality of incorporating a 3D model of articular contact and a model of foot-ground interaction explicitly in a dynamic optimization simulation. The data-tracking solution then was used as an initial guess to solve predictive collocation problems, where novel patterns of movement were generated for walking at slow and fast speeds, independent of experimental data. The data-tracking solutions accurately reproduced joint motion, ground forces and knee contact loads measured for two total knee arthroplasty patients walking at their preferred speeds. RMS errors in joint kinematics were < 2.0° for rotations and < 0.3 cm for translations, while errors in the model-computed ground-reaction and knee-contact forces were < 0.07 BW and < 0.4 BW, respectively. The predictive solutions were also consistent with joint kinematics, ground forces, knee contact loads and muscle activation patterns measured for slow and fast walking. The results demonstrate the feasibility of performing computationally efficient, predictive, dynamic optimization simulations of movement using full-body, muscle-actuated models with realistic representations of joint function.

  17. Migration of antioxidants from polylactic acid films, a parameter estimation approach: Part I - A model including convective mass transfer coefficient.

    PubMed

    Samsudin, Hayati; Auras, Rafael; Burgess, Gary; Dolan, Kirk; Soto-Valdez, Herlinda

    2018-03-01

    A two-step solution based on the boundary conditions of Crank's equations for mass transfer in a film was developed. Three driving factors, the diffusion (D), partition (K_p,f) and convective mass transfer (h) coefficients, govern the sorption and/or desorption kinetics of migrants from polymer films. These three parameters were simultaneously estimated. They provide in-depth insight into the physics of a migration process. The first step was used to find the combination of D, K_p,f and h that minimized the sum of squared errors (SSE) between the predicted and actual results. In step 2, an ordinary least squares (OLS) estimation was performed by using the proposed analytical solution containing D, K_p,f and h. Three selected migration studies of PLA/antioxidant-based films were used to demonstrate the use of this two-step solution. Additional parameter estimation approaches, such as sequential and bootstrap, were also performed to acquire a better knowledge about the kinetics of migration. The proposed model successfully provided the initial guesses for D, K_p,f and h. The h value was determined without performing a specific experiment for it. By determining h together with D, under- or overestimation issues pertaining to a migration process can be avoided, since these two parameters are correlated.
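    The two-step idea can be sketched on synthetic data. The model below is a hypothetical single-exponential stand-in for the Crank-type solution, with invented parameter values: a coarse grid search over the parameter space (step 1) supplies the initial guess that a local refinement (step 2, here a simple pattern search standing in for the paper's OLS estimation) then polishes.

```python
import math

def model(t, m_inf, k):
    """Hypothetical migration curve: M(t) = m_inf * (1 - exp(-k*t))."""
    return m_inf * (1.0 - math.exp(-k * t))

def sse(params, data):
    """Objective for both steps: sum of squared errors against the data."""
    m_inf, k = params
    return sum((model(t, m_inf, k) - y) ** 2 for t, y in data)

# Synthetic "measurements" generated from known (invented) parameters.
true_params = (2.0, 0.3)
data = [(t, model(t, *true_params)) for t in range(20)]

# Step 1: coarse grid search over (m_inf, k) supplies the initial guess.
grid = [(0.5 * i, 0.05 * j) for i in range(1, 9) for j in range(1, 21)]
guess = min(grid, key=lambda p: sse(p, data))

# Step 2: local refinement from that guess.
best, step = guess, 0.1
while step > 1e-6:
    improved = False
    for dm, dk in [(step, 0.0), (-step, 0.0), (0.0, step), (0.0, -step)]:
        cand = (best[0] + dm, best[1] + dk)
        if sse(cand, data) < sse(best, data):
            best, improved = cand, True
    if not improved:
        step *= 0.5
```

    Without step 1, a local method started far from the optimum can converge to the wrong basin; the coarse search makes the subsequent refinement reliable.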

  18. Simultaneous Intrinsic and Extrinsic Parameter Identification of a Hand-Mounted Laser-Vision Sensor

    PubMed Central

    Lee, Jong Kwang; Kim, Kiho; Lee, Yongseok; Jeong, Taikyeong

    2011-01-01

    In this paper, we propose simultaneous intrinsic and extrinsic parameter identification of a hand-mounted laser-vision sensor (HMLVS). A laser-vision sensor (LVS), consisting of a camera and a laser stripe projector, is used as a sensor component of the robotic measurement system, and it measures range data with respect to the robot base frame using the robot forward kinematics and the optical triangulation principle. For optimal estimation of the model parameters, we applied two optimization techniques: a nonlinear least-squares optimizer and a particle swarm optimizer. Best-fit parameters, including both the intrinsic and extrinsic parameters of the HMLVS, are obtained simultaneously based on the least-squares criterion. The simulation and experimental results show that the parameter identification problem considered is characterized by a highly multimodal landscape; thus, a global optimization technique such as particle swarm optimization can be a promising tool for identifying the model parameters of an HMLVS, whereas the nonlinear least-squares optimizer often failed to find an optimal solution even when the initial candidate solutions were selected close to the true optimum. The proposed optimization method does not require good initial guesses of the system parameters to converge to a stable solution, and it can be applied to a kinematically dissimilar robot system without loss of generality. PMID:22164104
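
    A minimal particle swarm optimizer of the kind the abstract favors for multimodal landscapes can be sketched as follows. The Rastrigin function stands in for the (unavailable) HMLVS calibration objective, and the swarm size and inertia/acceleration weights are conventional assumed values, not the paper's settings.

```python
import math, random

def rastrigin(x):
    # Standard multimodal test function (many local minima, global min 0
    # at the origin), standing in for the calibration landscape.
    return sum(xi * xi - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)

def pso(f, dim=2, n_particles=30, iters=200, bound=5.12, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-bound, bound) for _ in range(dim)]
          for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_val = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.72, 1.49, 1.49  # common constriction-style weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity update: inertia + cognitive + social terms.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(bound, max(-bound, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = xs[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = xs[i][:], val
    return gbest, gbest_val
```

    Unlike a local least-squares iteration, the swarm needs no good initial guess: particles are scattered uniformly over the search box and share the best point found so far.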

  19. Automating Initial Guess Generation for High Fidelity Trajectory Optimization Tools

    NASA Technical Reports Server (NTRS)

    Villa, Benjamin; Lantoine, Gregory; Sims, Jon; Whiffen, Gregory

    2013-01-01

    Many academic studies in spaceflight dynamics rely on simplified dynamical models, such as restricted three-body models or averaged forms of the equations of motion of an orbiter. In practice, the end result of these preliminary orbit studies needs to be transformed into more realistic models, in particular to generate good initial guesses for high-fidelity trajectory optimization tools like Mystic. This paper reviews and extends some of the approaches used in the literature to perform such a task, and explores the inherent trade-offs of such a transformation with a view toward automating it for the case of ballistic arcs. Sample test cases in the libration point regimes and small body orbiter transfers are presented.

  20. Using composite images to assess accuracy in personality attribution to faces.

    PubMed

    Little, Anthony C; Perrett, David I

    2007-02-01

    Several studies have demonstrated some accuracy in personality attribution based on visual appearance alone. Using composite images of those scoring high and low on a particular trait, the current study shows that judges perform better than chance in guessing others' personality, particularly for the traits conscientiousness and extraversion. This study also shows that attractiveness, masculinity and age may all provide cues for assessing personality accurately, and that accuracy is affected by the sex of both those judging and those being judged. Individuals do perform better than chance at guessing another's personality from facial information alone, providing some support for the popular belief that it is possible to assess personality accurately from faces.

  1. Search of exploration opportunity for near earth objects based on analytical gradients

    NASA Astrophysics Data System (ADS)

    Ren, Y.; Cui, P. Y.; Luan, E. J.

    2008-01-01

    The problem of searching for exploration opportunities for near-Earth objects is investigated. For rendezvous missions, the analytical gradients of the performance index with respect to the free parameters are derived by combining the calculus of variations with the theory of the state-transition matrix. Initial guesses are then generated randomly in the search space, and the performance index is optimized from these initial guesses under the guidance of the analytical gradients. This method retains the global-search property of the traditional approach while avoiding its blindness in the search for exploration opportunities; hence, the computing speed can be increased greatly. Furthermore, the search precision can be controlled effectively with this method.
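
    The random-multistart-plus-gradient strategy described above can be illustrated on a toy performance index with an analytical gradient (the real index and its variational gradients are specific to the rendezvous problem and are not reproduced here; the function, step size and start count below are illustrative assumptions).

```python
import random

def f(x):
    # Toy multimodal performance index with two local minima (assumed).
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad_f(x):
    # Analytical gradient of f, playing the role of the derived gradients.
    return 4.0 * x * (x * x - 1.0) + 0.3

def gradient_descent(x0, lr=0.01, steps=500):
    # Local refinement guided by the analytical gradient.
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)
    return x

def multistart(n_starts=20, lo=-2.0, hi=2.0, seed=0):
    # Random initial guesses in the search space, each refined locally;
    # the best refined candidate approximates the global optimum.
    rng = random.Random(seed)
    candidates = [gradient_descent(rng.uniform(lo, hi)) for _ in range(n_starts)]
    return min(candidates, key=f)
```

    The random starts preserve global coverage of the search space, while the gradient guidance makes each local refinement fast and non-blind, which is the trade the abstract describes.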

  2. Quality assessment and control of finite element solutions

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Babuska, Ivo

    1987-01-01

    Status and some recent developments in techniques for assessing the reliability of finite element solutions are summarized. Discussion focuses on a number of aspects, including: the major types of errors in finite element solutions; techniques used for a posteriori error estimation and the reliability of these estimators; feedback and adaptive strategies for improving finite element solutions; and postprocessing approaches used for improving the accuracy of stresses and other important engineering data. Future directions for the research needed to make error estimation and adaptive improvement practical are also identified.

  3. Preliminary Analysis of Low-Thrust Gravity Assist Trajectories by An Inverse Method and a Global Optimization Technique.

    NASA Astrophysics Data System (ADS)

    de Pascale, P.; Vasile, M.; Casotto, S.

    The design of interplanetary trajectories requires the solution of an optimization problem, which has traditionally been solved by resorting to various local optimization techniques. All such approaches, apart from the specific method employed (direct or indirect), require an initial guess, which deeply influences convergence to the optimal solution. Recent developments in low-thrust propulsion have widened the perspectives for exploration of the Solar System, while at the same time increasing the difficulty of the trajectory design process. Continuous-thrust transfers, typically characterized by multiple spiraling arcs, have a large number of design parameters and, thanks to the flexibility offered by such engines, typically present a multi-modal domain with a consequently larger number of optimal solutions. The definition of first guesses is thus even more challenging, particularly for a broad search over the design parameters, and it requires an extensive investigation of the domain in order to locate the largest number of candidate optimal solutions and possibly the global optimum. In this paper a tool for the preliminary definition of interplanetary transfers with coast-thrust arcs and multiple swing-bys is presented. This goal is achieved by combining a novel methodology for the description of low-thrust arcs with a global optimization algorithm based on a hybridization of an evolutionary step and a deterministic step. Low-thrust arcs are described in a 3D model, in order to account for the beneficial effects of low-thrust propulsion on inclination changes, by resorting to a new methodology based on an inverse method.
The two-point boundary value problem (TPBVP) associated with a thrust arc is solved by imposing a properly parameterized evolution of the orbital parameters, from which the acceleration required to follow the given trajectory, subject to the constraint set, is obtained through simple algebraic computation. By this method a low-thrust transfer satisfying the boundary conditions on position and velocity can be quickly assessed, with low computational effort, since no numerical propagation is required. The hybrid global optimization algorithm comprises two steps. The evolutionary search locates a large number of optima, and eventually the global one, while the deterministic step consists of a branching process that exhaustively partitions the domain in order to characterize this complex space of solutions extensively. Furthermore, the approach implements a novel direct constraint-handling technique that allows the treatment of the mixed-integer nonlinear programming (MINLP) problems typical of multiple-swing-by trajectories. A low-thrust transfer to Mars is studied as a test bed for the low-thrust model, presenting the main characteristics of the different shapes proposed and the possible sub-arc segmentations between two planets with respect to different objective functions: minimum-time and minimum-fuel transfers. Various other test cases are also shown and further optimized, proving the capability of the proposed tool.
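
    The inverse-method idea described above (prescribe the evolution of the trajectory, then recover the required control acceleration algebraically, with no numerical propagation) can be sketched in one dimension. The cubic shape, the trivial dynamics, and the finite-difference evaluation are all illustrative assumptions, not the orbital-element parameterization of the paper.

```python
# Toy 1D inverse method: prescribe a motion x(t) satisfying the boundary
# conditions, then recover the control acceleration u(t) = x''(t) - g(x)
# algebraically (here via finite differences), with no propagation of
# the dynamics. The cubic shape and g are illustrative stand-ins.
def shaped_position(t, T, x0, x1):
    # Cubic polynomial meeting x(0)=x0, x(T)=x1 with zero end velocities.
    s = t / T
    return x0 + (x1 - x0) * (3 * s * s - 2 * s ** 3)

def required_acceleration(T, x0, x1, g, n=100):
    dt = T / n
    xs = [shaped_position(i * dt, T, x0, x1) for i in range(n + 1)]
    # Second central difference for x''(t), minus the natural dynamics.
    return [(xs[i - 1] - 2 * xs[i] + xs[i + 1]) / dt ** 2 - g(xs[i])
            for i in range(1, n)]
```

    Because the boundary conditions are satisfied by construction, the TPBVP reduces to evaluating the control history along the prescribed shape, which is why such methods are cheap enough for broad first-guess searches.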

  4. Subjective measures of unconscious knowledge.

    PubMed

    Dienes, Zoltán

    2008-01-01

    The chapter gives an overview of the use of subjective measures of unconscious knowledge. Unconscious knowledge is knowledge we have, and could very well be using, but are not aware of having. Hence appropriate methods for indicating unconscious knowledge must show that the person (a) has knowledge but (b) does not know that she has it. One way of determining awareness of knowing is to take confidence ratings after judgments are made. If the judgments are above baseline but the person believes they are guessing (guessing criterion), or confidence does not relate to accuracy (zero-correlation criterion), there is evidence of unconscious knowledge. How these methods can deal with the problem of bias is discussed, as is the use of different types of confidence scales. The guessing and zero-correlation criteria show whether or not the person is aware of knowing the content of the judgment, but not whether the person is aware of the knowledge that enabled the judgment. Thus, a distinction is made between judgment knowledge and structural knowledge, and it is shown how the conscious status of the latter can also be assessed. Finally, the use of control over the use of knowledge as a subjective measure of judgment knowledge is illustrated. Experiments using artificial grammar learning and a serial reaction time task explore these issues.
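
    As a sketch, the two criteria can be computed from per-trial (accuracy, confidence) pairs; the data layout, the chance level and the coding of "pure guess" as confidence 0 are illustrative assumptions, not the chapter's exact procedure.

```python
from statistics import mean

def guessing_criterion(trials, chance=0.5):
    # Accuracy on trials the person claims are guesses (confidence 0),
    # relative to chance; a positive value suggests unconscious knowledge.
    guesses = [acc for acc, conf in trials if conf == 0]
    return mean(guesses) - chance if guesses else 0.0

def zero_correlation_criterion(trials):
    # Pearson correlation between confidence and accuracy; a value near
    # zero despite above-chance accuracy suggests unconscious knowledge.
    accs = [a for a, _ in trials]
    confs = [c for _, c in trials]
    ma, mc = mean(accs), mean(confs)
    cov = sum((a - ma) * (c - mc) for a, c in trials)
    var_a = sum((a - ma) ** 2 for a in accs)
    var_c = sum((c - mc) ** 2 for c in confs)
    if var_a == 0 or var_c == 0:
        return 0.0
    return cov / (var_a ** 0.5 * var_c ** 0.5)
```

    For example, a participant who is 70% accurate both when claiming to guess and when confident shows a positive guessing criterion together with a near-zero confidence-accuracy correlation.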

  5. Glyph guessing for ‘oo’ and ‘ee’: spatial frequency information in sound symbolic matching for ancient and unfamiliar scripts

    PubMed Central

    2017-01-01

    In three experiments, we asked whether diverse scripts contain interpretable information about the speech sounds they represent. When presented with a pair of unfamiliar letters, adult readers correctly guess which is /i/ (the ‘ee’ sound in ‘feet’), and which is /u/ (the ‘oo’ sound in ‘shoe’) at rates higher than expected by chance, as shown in a large sample of Singaporean university students (Experiment 1) and replicated in a larger sample of international Internet users (Experiment 2). To uncover what properties of the letters contribute to different scripts' ‘guessability,’ we analysed the visual spatial frequencies in each letter (Experiment 3). We predicted that the lower spectral frequencies in the formants of the vowel /u/ would pattern with lower spatial frequencies in the corresponding letters. Instead, we found that across all spatial frequencies, the letter with more black/white cycles (i.e. more ink) was more likely to be guessed as /u/, and the larger the difference between the glyphs in a pair, the higher the script's guessability. We propose that diverse groups of humans across historical time and geographical space tend to employ similar iconic strategies for representing speech in visual form, and provide norms for letter pairs from 56 diverse scripts. PMID:28989784

  6. The impact of age stereotypes on source monitoring in younger and older adults.

    PubMed

    Kuhlmann, Beatrice G; Bayen, Ute J; Meuser, Katharina; Kornadt, Anna E

    2016-12-01

    In 2 experiments, we examined reliance on age stereotypes when reconstructing the sources of statements. Two sources presented statements (half typical for a young adult, half for an old adult). Afterward, the sources' ages (23 and 70 years) were revealed and participants completed a source-monitoring task requiring attribution of statements to the sources. Multinomial model-based analyses revealed no age-typicality effect on source memory; however, age-typicality biased source guessing: when not remembering the source, participants predominantly guessed the source for whose age the statement was typical. People thereby retrospectively described the sources as having made more statements fitting stereotypes about their age group than they had truly made. In Experiment 1, older (60-84 years) participants' guessing bias was stronger than younger (17-26 years) participants', but they also had poorer source memory. Furthermore, older adults with better source memory were less biased than those with poorer source memory. Similarly, younger adults' age-stereotype reliance was larger when source memory was impaired in Experiment 2. Thus, age stereotypes bias source attributions, and individuals with poor source memory are particularly prone to this bias, which may contribute to the maintenance of age stereotypes over time. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  7. Bethe, Oppenheimer, Teller and the Fermi Award: Norris Bradbury Speaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meade, Roger Allen

    In 1956 the Enrico Fermi Presidential Award was established to recognize scientists, engineers, and science policymakers who gave unstintingly over their careers to advance energy science and technology. The first recipient was John von Neumann. Among the scientists thought eligible for the award were Hans Bethe, J. Robert Oppenheimer, and Edward Teller. In 1959 Norris Bradbury was asked to comment on the relative merits of each of these three men, whom he knew well from their affiliation with Los Alamos. Below is a reproduction of the letter Bradbury sent to Dr. Warren C. Johnson of the AEC's General Advisory Committee (GAC) containing his evaluation of each man. The letter might surprise those not accustomed to Bradbury's modus operandi of providing very detailed and forthright answers to the AEC. The letter itself was found in a cache of old microfilm. Whether because of the age of the microfilm or the quality of the filming process, portions of the letter are not legible. Where empty brackets appear, the word or words could not be read or deduced. Words appearing in brackets are guesses that appear, from the image, to be what was written. These guesses, of course, are just that: guesses.

  8. Treatment assignment guesses by study participants in a double-blind dose escalation clinical trial of saw palmetto.

    PubMed

    Lee, Jeannette Y; Moore, Page; Kusek, John; Barry, Michael

    2014-01-01

    This report assesses participant perception of treatment assignment in a randomized, double-blind, placebo-controlled trial of saw palmetto for the treatment of benign prostatic hyperplasia (BPH). Participants randomized to receive saw palmetto were instructed to take one 320 mg gelcap daily for the first 24 weeks, two 320 mg gelcaps daily for the second 24 weeks, and three 320 mg gelcaps daily for the third 24 weeks. Study participants assigned to placebo were instructed to take the same number of matching placebo gelcaps in each time period. At 24, 48, and 72 weeks postrandomization, the American Urological Association Symptom Index (AUA-SI) was administered and participants were asked to guess their treatment assignment. The study was conducted at 11 clinical centers in North America. Study participants were men, 45 years and older, with moderate to severe BPH symptoms, randomized to saw palmetto (N=151) or placebo (N=155). Treatment arms were compared with respect to the distribution of participant guesses of treatment assignment. For participants assigned to saw palmetto, 22.5%, 24.7%, and 29.8% correctly thought they were taking saw palmetto, and 37.3%, 40.0%, and 44.4% incorrectly thought they were on placebo at 24, 48, and 72 weeks, respectively. For placebo participants, 21.8%, 27.4%, and 25.2% incorrectly thought they were on saw palmetto, and 41.6%, 39.9%, and 42.6% correctly thought they were on placebo at 24, 48, and 72 weeks, respectively. The treatment arms did not differ with respect to the distributions of participants who guessed they were on saw palmetto (p=0.823) or placebo (p=0.893). Participants who experienced an improvement in AUA-SI were 2.16 times more likely to think they were on saw palmetto. Blinding of treatment assignment was successful in this study, and improvement in BPH-related symptoms was associated with the perception of taking saw palmetto.

  9. Projecting optimal land-use and -management strategies under population growth and climate change using a coupled ecosystem & land use model framework

    NASA Astrophysics Data System (ADS)

    Rabin, Sam; Alexander, Peter; Anthoni, Peter; Henry, Roslyn; Huntingford, Chris; Pugh, Thomas; Rounsevell, Mark; Arneth, Almut

    2017-04-01

    A major question facing humanity is how well agricultural production systems will be able to feed the world in a future of rapid climate change, population growth, and demand shifts—all while minimizing our impact on the natural world. Global modeling has frequently been used to investigate certain aspects of this question, but in order to properly address the challenge, no one part of the human-environmental system can be assessed in isolation. It is especially critical that the effect on agricultural yields of changing temperature and precipitation regimes (including seasonal timing and frequency and intensity of extreme events), as well as rising atmospheric carbon dioxide levels, be taken into account when planning for future food security. Coupled modeling efforts, where changes in various parts of the Earth system are allowed to feed back onto one another, represent a powerful strategy in this regard. This presentation describes the structure and initial results of an effort to couple a biologically-representative vegetation and crop production simulator, LPJ-GUESS, with the climate emulator IMOGEN and the land-use model PLUMv2. With IMOGEN providing detailed future weather simulations, LPJ-GUESS simulates natural vegetation as well as cropland and pasture/rangeland; the simulated exchange of greenhouse gases between the land and atmosphere feeds back into IMOGEN's predictions. LPJ-GUESS also produces potential vegetation yields for irrigated vs. rainfed crops under three levels of nitrogen fertilizer addition. PLUMv2 combines these potential yields with endogenous demand and agricultural commodity price to calculate an optimal set of land use distributions and management strategies across the world for the next five years of simulation, based on socio-economic scenario data. These land uses are then fed back into LPJ-GUESS, and the cycle of climate, greenhouse gas emissions, crop yields, and land-use change continues. 
The globally gridded nature of the model—at 0.5-degree resolution across the world—generates spatially explicit projections at a sub-national scale relevant to individual land managers. Here, we present the results of using the LPJ-GUESS-PLUM-IMOGEN coupled model to project agricultural production and management strategies under several scenarios of greenhouse gas emissions (the Representative Concentration Pathways) and socioeconomic futures (the Shared Socioeconomic Pathways) through the year 2100. In the future, the coupled model could be used to generate projections for alternative scenarios: for example, to consider the implications from land-based climate change mitigation policies, or changes to international trade tariffs regimes.

  10. Payoff Information Biases a Fast Guess Process in Perceptual Decision Making under Deadline Pressure: Evidence from Behavior, Evoked Potentials, and Quantitative Model Comparison.

    PubMed

    Noorbaloochi, Sharareh; Sharon, Dahlia; McClelland, James L

    2015-08-05

    We used electroencephalography (EEG) and behavior to examine the role of payoff bias in a difficult two-alternative perceptual decision under deadline pressure in humans. The findings suggest that a fast guess process, biased by payoff and triggered by stimulus onset, occurred on a subset of trials and raced with an evidence accumulation process informed by stimulus information. On each trial, the participant judged whether a rectangle was shifted to the right or left and responded by squeezing a right- or left-hand dynamometer. The payoff for each alternative (which could be biased or unbiased) was signaled 1.5 s before stimulus onset. The choice response was assigned to the first hand reaching a squeeze force criterion and reaction time was defined as time to criterion. Consistent with a fast guess account, fast responses were strongly biased toward the higher-paying alternative and the EEG exhibited an abrupt rise in the lateralized readiness potential (LRP) on a subset of biased payoff trials contralateral to the higher-paying alternative ∼ 150 ms after stimulus onset and 50 ms before stimulus information influenced the LRP. This rise was associated with poststimulus dynamometer activity favoring the higher-paying alternative and predicted choice and response time. Quantitative modeling supported the fast guess account over accounts of payoff effects supported in other studies. Our findings, taken with previous studies, support the idea that payoff and prior probability manipulations produce flexible adaptations to task structure and do not reflect a fixed policy for the integration of payoff and stimulus information. Humans and other animals often face situations in which they must make choices based on uncertain sensory information together with information about expected outcomes (gains or losses) about each choice. 
We investigated how differences in payoffs between available alternatives affect neural activity, overt choice, and the timing of choice responses. In our experiment, in which participants were under strong time pressure, neural and behavioral findings together with model fitting suggested that our human participants often made a fast guess toward the higher reward rather than integrating stimulus and payoff information. Our findings, taken with findings from other studies, support the idea that payoff and prior probability manipulations produce flexible adaptations to task structure and do not reflect a fixed policy. Copyright © 2015 the authors 0270-6474/15/3510989-23$15.00/0.

  11. Reliability and Validity of the Chinese Version of the Solution-Focused Inventory in College Students

    ERIC Educational Resources Information Center

    Yang, Hongfei; Hai, Tang

    2015-01-01

    The psychometrics of the Chinese Solution-Focused Inventory (CSFI) was studied in Chinese college students. Confirmatory factor analysis confirmed the 3-factor structure. All subscales showed good reliability and convergent and incremental validity. Results of hierarchical regression analyses indicated that the 3 subscales accounted for additional…

  12. Global Emissions of Terpenoid VOCs from Terrestrial Vegetation in the Last Millennium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Acosta Navarro, J. C.; Smolander, S.; Struthers, H.

    2014-06-16

    We investigated the millennial variability of global BVOC emissions by using two independent numerical models: the Model of Emissions of Gases and Aerosols from Nature (MEGAN), for isoprene, monoterpenes and sesquiterpenes, and the Lund-Potsdam-Jena General Ecosystem Simulator (LPJ-GUESS), for isoprene and monoterpenes. We found the millennial trends of global isoprene emissions to be mostly affected by land cover and atmospheric carbon dioxide changes, whereas monoterpene and sesquiterpene emissions were dominated by temperature change. Isoprene emissions declined substantially in regions with large and rapid land cover change. In addition, isoprene emission sensitivity to drought proved to have significant short-term global effects. By the end of the past millennium, MEGAN isoprene emissions were 634 TgC yr-1 (13% and 19% less than during 1750-1850 and 1000-1200, respectively) and LPJ-GUESS emissions were 323 TgC yr-1 (15% and 20% less than during 1750-1850 and 1000-1200, respectively). Monoterpene emissions were 89 TgC yr-1 (10% and 6% higher than during 1750-1850 and 1000-1200, respectively) in MEGAN, and 24 TgC yr-1 (2% higher and 5% less than during 1750-1850 and 1000-1200, respectively) in LPJ-GUESS. MEGAN sesquiterpene emissions were 36 TgC yr-1 (10% and 4% higher than during 1750-1850 and 1000-1200, respectively). Although both models capture similar emission trends, the magnitudes of the emissions differ, highlighting the importance of building better constraints on VOC emissions from terrestrial vegetation.

  13. A self-adaptive memeplexes robust search scheme for solving stochastic demands vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Chen, Xianshun; Feng, Liang; Ong, Yew Soon

    2012-07-01

    In this article, we propose a self-adaptive memeplex robust search (SAMRS) for finding robust and reliable solutions to the vehicle routing problem with stochastic demands (VRPSD), i.e., solutions that are less sensitive to the stochastic behaviour of customer demands and have a low probability of route failure. The contribution of this article is three-fold. First, the proposed SAMRS employs the robust solution search scheme (RS3) as an approximation of the computationally intensive Monte Carlo simulation, thus reducing the computation cost of fitness evaluation in the VRPSD while directing the search towards robust and reliable solutions. Second, a self-adaptive individual learning scheme based on the conceptual modelling of the memeplex is introduced in the SAMRS. Finally, SAMRS incorporates a gene-meme co-evolution model with genetic and memetic representations to manage the search for VRPSD solutions effectively. Extensive experimental results on benchmark problems demonstrate that the proposed SAMRS serves as an effective means of generating high-quality robust and reliable solutions for the VRPSD.

  14. A Secured Authentication Protocol for SIP Using Elliptic Curves Cryptography

    NASA Astrophysics Data System (ADS)

    Chen, Tien-Ho; Yeh, Hsiu-Lien; Liu, Pin-Chuan; Hsiang, Han-Chen; Shih, Wei-Kuan

    Session Initiation Protocol (SIP) is a technology regularly employed in Internet telephony, and Hypertext Transfer Protocol (HTTP) digest authentication is one of the major methods in the SIP authentication mechanism. In 2005, Yang et al. pointed out that HTTP digest authentication could not resist server spoofing and off-line password guessing attacks, and proposed an authentication scheme based on the Diffie-Hellman concept. In 2009, Tsai proposed a nonce-based authentication protocol for SIP. In this paper, we demonstrate that their protocol cannot resist password guessing and insider attacks. Furthermore, we propose an ECC-based authentication mechanism to solve these issues and present a security analysis of our protocol to show that it is suitable for applications with higher security requirements.
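
    The off-line guessing vulnerability mentioned above can be illustrated in miniature: when the response is a deterministic function of public values and a low-entropy password, an eavesdropper who captured one (nonce, response) pair can test candidate passwords locally, with no further interaction with the server. The field layout below is a toy stand-in, not the actual SIP/HTTP digest format.

```python
import hashlib

def response(nonce, password):
    # Toy challenge-response: hash of public nonce and secret password.
    return hashlib.sha256((nonce + ":" + password).encode()).hexdigest()

def offline_guess(nonce, captured, dictionary):
    # Attacker's off-line dictionary attack on an eavesdropped exchange:
    # recompute the response for each candidate and compare.
    for candidate in dictionary:
        if response(nonce, candidate) == captured:
            return candidate
    return None
```

    Schemes based on Diffie-Hellman or elliptic-curve key agreement defeat this attack because the transcript no longer lets a passive attacker verify password guesses off-line.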

  15. Reliability Analysis and Reliability-Based Design Optimization of Circular Composite Cylinders Under Axial Compression

    NASA Technical Reports Server (NTRS)

    Rais-Rohani, Masoud

    2001-01-01

    This report describes the preliminary results of an investigation on component reliability analysis and reliability-based design optimization of thin-walled circular composite cylinders with average diameter and average length of 15 inches. Structural reliability is based on axial buckling strength of the cylinder. Both Monte Carlo simulation and First Order Reliability Method are considered for reliability analysis with the latter incorporated into the reliability-based structural optimization problem. To improve the efficiency of reliability sensitivity analysis and design optimization solution, the buckling strength of the cylinder is estimated using a second-order response surface model. The sensitivity of the reliability index with respect to the mean and standard deviation of each random variable is calculated and compared. The reliability index is found to be extremely sensitive to the applied load and elastic modulus of the material in the fiber direction. The cylinder diameter was found to have the third highest impact on the reliability index. Also the uncertainty in the applied load, captured by examining different values for its coefficient of variation, is found to have a large influence on cylinder reliability. The optimization problem for minimum weight is solved subject to a design constraint on element reliability index. The methodology, solution procedure and optimization results are included in this report.
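
    The reliability quantities discussed above can be sketched for the simplest limit state, g = R - S, with independent normal strength R and load S; in that case the reliability index has a closed form (which FORM recovers exactly for a linear limit state) that a Monte Carlo estimate should reproduce. The normal distributions and parameters are illustrative assumptions, not the cylinder model of the report.

```python
import math, random

def mc_failure_probability(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=7):
    # Crude Monte Carlo: count samples where the limit state g = R - S
    # is violated (strength below load).
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n)
                if rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0.0)
    return fails / n

def reliability_index(mu_r, sd_r, mu_s, sd_s):
    # Exact reliability index for independent normal R and S.
    return (mu_r - mu_s) / math.hypot(sd_r, sd_s)

def phi(z):
    # Standard normal CDF, so that P(failure) = phi(-beta).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

    The sensitivity of beta to each mean and standard deviation, which the report computes and compares, can be read directly off the closed form: beta grows with the strength margin and shrinks as either scatter grows.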

  16. Probabilistic Finite Element Analysis & Design Optimization for Structural Designs

    NASA Astrophysics Data System (ADS)

    Deivanayagam, Arumugam

    This study focuses on incorporating the probabilistic nature of material properties (Kevlar® 49) into the existing deterministic finite element analysis (FEA) of a fabric-based engine containment system through Monte Carlo simulations (MCS), and on implementing probabilistic analysis in engineering design through reliability-based design optimization (RBDO). First, the emphasis is on experimental data analysis focusing on probabilistic distribution models that characterize the randomness associated with the experimental data. The material properties of Kevlar® 49 are modeled using experimental data analysis and implemented along with an existing spiral modeling scheme (SMS) and a user-defined constitutive model (UMAT) for fabric-based engine containment simulations in LS-DYNA. MCS of the model are performed to observe the failure patterns and exit velocities of the models, and the solutions are compared with NASA experimental tests and deterministic results. MCS with probabilistic material data give a better perspective on the results than a single deterministic simulation. The next part of the research is to implement the probabilistic material properties in engineering designs. The main aim of structural design is to obtain optimal solutions. However, in a deterministic optimization problem, even though the structure is cost-effective, it may be highly unreliable if the uncertainty that may be associated with the system (material properties, loading, etc.) is not represented or considered in the solution process. A reliable and optimal solution can be obtained by performing reliability optimization along with deterministic optimization, which is RBDO. In the RBDO problem formulation, reliability constraints are considered in addition to structural performance constraints.
This part of research starts with introduction to reliability analysis such as first order reliability analysis, second order reliability analysis followed by simulation technique that are performed to obtain probability of failure and reliability of structures. Next, decoupled RBDO procedure is proposed with a new reliability analysis formulation with sensitivity analysis, which is performed to remove the highly reliable constraints in the RBDO, thereby reducing the computational time and function evaluations. Followed by implementation of the reliability analysis concepts and RBDO in finite element 2D truss problems and a planar beam problem are presented and discussed.
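
    As a toy illustration of the Monte Carlo reliability analysis described above (not the study's FEA model), the sketch below estimates a probability of failure for the simple limit state g = R - S, with normally distributed resistance R and load effect S, and compares the sampled result with the closed-form reliability index for the linear-normal case. All distribution parameters are assumed for illustration.

```python
# Minimal Monte Carlo reliability sketch: failure occurs when the load
# effect S exceeds the resistance R, i.e., when g = R - S < 0.
import math
import random

def mc_failure_probability(mu_r, sd_r, mu_s, sd_s, n=200_000, seed=1):
    """Estimate P(g < 0) by direct sampling of R and S."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(n)
        if rng.gauss(mu_r, sd_r) - rng.gauss(mu_s, sd_s) < 0
    )
    return failures / n

def reliability_index(mu_r, sd_r, mu_s, sd_s):
    """Closed-form beta for the linear-normal limit state, for comparison."""
    return (mu_r - mu_s) / math.sqrt(sd_r ** 2 + sd_s ** 2)

beta = reliability_index(10.0, 1.0, 6.0, 1.5)   # ~2.22
pf = mc_failure_probability(10.0, 1.0, 6.0, 1.5)  # ~Phi(-beta) ~ 0.013
```

    For the linear-normal case the two quantities are linked by pf = Φ(-β), which is why the sampled estimate should hover near 0.013 here; FORM generalizes the index to nonlinear limit states such as the buckling strength above.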

  17. A Quantitative Risk Analysis Framework for Evaluating and Monitoring Operational Reliability of Cloud Computing

    ERIC Educational Resources Information Center

    Islam, Muhammad Faysal

    2013-01-01

    Cloud computing offers the advantage of on-demand, reliable and cost efficient computing solutions without the capital investment and management resources to build and maintain in-house data centers and network infrastructures. Scalability of cloud solutions enable consumers to upgrade or downsize their services as needed. In a cloud environment,…

  18. Increasing the Reliability of Ability-Achievement Difference Scores: An Example Using the Kaufman Assessment Battery for Children.

    ERIC Educational Resources Information Center

    Caruso, John C.; Witkiewitz, Katie

    2002-01-01

    As an alternative to equally weighted difference scores, examined an orthogonal reliable component analysis (RCA) solution and an oblique principal components analysis (PCA) solution for the standardization sample of the Kaufman Assessment Battery for Children (KABC; A. Kaufman and N. Kaufman, 1983). Discusses the practical implications of the…

  19. Reliability and clinical utility of Enzyme-linked immunosorbent assay for detection of anti-aminoacyl-tRNA synthetase antibody.

    PubMed

    Abe, Takeo; Tsunoda, Shinichiro; Nishioka, Aki; Azuma, Kouta; Tsuboi, Kazuyuki; Ogita, Chie; Yokoyama, Yuichi; Furukawa, Tetsuya; Maruoka, Momo; Tamura, Masao; Yoshikawa, Takahiro; Saito, Atsushi; Sekiguchi, Masahiro; Azuma, Naoto; Kitano, Masayasu; Matsui, Kiyoshi; Hosono, Yuji; Nakashima, Ran; Ohmura, Koichiro; Mimori, Tsuneyo; Sano, Hajime

    2016-01-01

    Anti-aminoacyl-tRNA synthetase (ARS) antibodies are myositis-specific autoantibodies used in the diagnosis of polymyositis (PM) and dermatomyositis (DM). Recently, a new enzyme-linked immunosorbent assay (ELISA) kit that concurrently detects anti-ARS antibodies (anti-Jo-1, anti-PL-7, anti-PL-12, anti-EJ and anti-KS) has become available in the clinical setting. To evaluate the reliability of this ELISA kit, we measured anti-ARS antibodies in 75 PM and DM patients using the kit and compared the results with those of an RNA immunoprecipitation (RNA-IP) assay. Between the measurements of anti-PL-7, anti-PL-12, anti-EJ and anti-KS autoantibodies by the ELISA and RNA-IP assays, the concordance rate is 95.1%, the positive agreement rate 90.9%, the negative agreement rate 96.0%, and the kappa statistic 0.841. Between the existing anti-Jo-1 antibody ELISA kit and the anti-ARS antibody ELISA kit, the concordance rate is 96.9%, the positive agreement rate 100%, the negative agreement rate 96.1%, and the kappa statistic 0.909. Around 70% of PM and DM patients with lung involvement are positive for anti-ARS antibodies or anti-melanoma differentiation-associated gene 5 (MDA5) antibody. Thus, the most life-threatening interstitial lung disease, that of anti-MDA5-positive clinically amyopathic dermatomyositis patients, can be strongly suspected when anti-ARS antibodies are negative.

  20. Gamma activity modulated by naming of ambiguous and unambiguous images: intracranial recording

    PubMed Central

    Cho-Hisamoto, Yoshimi; Kojima, Katsuaki; Brown, Erik C; Matsuzaki, Naoyuki; Asano, Eishi

    2014-01-01

    OBJECTIVE Humans sometimes need to recognize objects based on vague and ambiguous silhouettes. Recognition of such images may require an intuitive guess. We determined the spatial-temporal characteristics of intracranially-recorded gamma activity (at 50–120 Hz) augmented differentially by naming of ambiguous and unambiguous images. METHODS We studied ten patients who underwent epilepsy surgery. Ambiguous and unambiguous images were presented during extraoperative electrocorticography recording, and patients were instructed to overtly name each object as it was first perceived. RESULTS Both naming tasks were commonly associated with gamma-augmentation sequentially involving the occipital and occipital-temporal regions, bilaterally, within 200 ms after the onset of image presentation. Naming of ambiguous images elicited gamma-augmentation specifically involving portions of the inferior-frontal, orbitofrontal, and inferior-parietal regions at 400 ms and after. Unambiguous images were associated with more intense gamma-augmentation in portions of the occipital and occipital-temporal regions. CONCLUSIONS Frontal-parietal gamma-augmentation specific to ambiguous images may reflect the additional cortical processing involved in making an intuitive guess. Occipital gamma-augmentation enhanced during naming of unambiguous images can be explained by visual processing of stimuli with richer detail. SIGNIFICANCE Our results support the theoretical model that guessing processes in the visual domain occur following the accumulation of sensory evidence resulting from bottom-up processing in the occipital-temporal visual pathways. PMID:24815577

  1. Reliability Problems of the Datum: Solutions for Questionnaire Responses.

    ERIC Educational Resources Information Center

    Bastick, Tony

    Questionnaires often ask for estimates, and these estimates are given with different reliabilities. It is difficult to know the different reliabilities of single estimates and to take these into account in subsequent analyses. This paper contains a practical example to show that not taking the reliability of different responses into account can…

  2. Spectral edge: gradient-preserving spectral mapping for image fusion.

    PubMed

    Connah, David; Drew, Mark S; Finlayson, Graham D

    2015-12-01

    This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.

  3. Initial value problem of space dynamics in universal Stumpff anomaly

    NASA Astrophysics Data System (ADS)

    Sharaf, M. A.; Dwidar, H. R.

    2018-05-01

    In this paper, the initial value problem of space dynamics in the universal Stumpff anomaly ψ is set up and developed in analytical and computational approaches. For the analytical expansions, the linear independence of the functions U_{j}(ψ;σ), j=0,1,2,3, is proved. The differential and recurrence equations satisfied by them and their relations with the elementary functions are given. The universal Kepler equation and its validations for different conic orbits are established together with the Lagrangian coefficients. Efficient representations of these functions are developed in terms of continued fractions. For the computational developments we consider the following items: 1. Top-down algorithm for continued fraction evaluation. 2. One-point iteration formulae. 3. Determination of the coefficients of Kepler's equation. 4. Derivatives of Kepler's equation of any integer order. 5. Determination of the initial guess for the solution of the universal Kepler equation. Finally we give a summary of the computational design for the initial value problem of space dynamics in the universal Stumpff anomaly. This design is based on the solution of the universal Kepler equation by iterative schemes of quadratic up to any desired order ℓ of convergence.
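
    The role of the initial guess in iterative solutions of Kepler-type equations can be illustrated with a minimal sketch. This uses the classical elliptic Kepler equation M = E - e·sin(E) with a quadratically convergent Newton scheme, not the universal Stumpff formulation of the paper; the tolerance and starting guess are illustrative choices.

```python
# Newton iteration for the elliptic Kepler equation E - e*sin(E) = M.
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Return the eccentric anomaly E satisfying E - e*sin(E) = M."""
    E = M + e * math.sin(M)          # cheap initial guess, good for small e
    for _ in range(max_iter):
        f = E - e * math.sin(E) - M  # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)   # derivative; nonzero for e < 1
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

E = solve_kepler(1.0, 0.3)
residual = E - 0.3 * math.sin(E) - 1.0   # ~0 at convergence
```

    Higher-order iteration formulae (Halley and beyond) follow the same pattern but use more derivatives per step, which is the trade-off behind the "up to any desired order ℓ" schemes mentioned above.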

  4. Improvement of a uniqueness-and-anonymity-preserving user authentication scheme for connected health care.

    PubMed

    Xie, Qi; Liu, Wenhao; Wang, Shengbao; Han, Lidong; Hu, Bin; Wu, Ting

    2014-09-01

    Privacy preservation, security and mutual authentication between the patient and the medical server are important mechanisms in connected health care applications, such as telecare medical information systems and personally controlled health records systems. In 2013, Wen showed that Das et al.'s scheme is vulnerable to the replay attack, user impersonation attacks and off-line guessing attacks, and then proposed an improved scheme using biometrics, a password and a smart card to overcome these weaknesses. However, we show that Wen's scheme is still vulnerable to off-line password guessing attacks and does not provide user anonymity or perfect forward secrecy. Further, we propose an improved scheme to fix these weaknesses, and use ProVerif, a formal verification tool based on the applied pi calculus, to prove its security and authentication properties.
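
    A minimal sketch of why off-line password guessing works, assuming only that an attacker has captured a password verifier (here a plain salted SHA-256 digest; real schemes derive more complex verifiers, but the enumeration idea is the same). The salt, password and dictionary are made up for illustration.

```python
# Off-line guessing: with the verifier in hand, the attacker can test
# candidate passwords locally, with no rate limit from the server.
import hashlib

def verifier(salt, password):
    """Toy password verifier: hex digest of SHA-256(salt || password)."""
    return hashlib.sha256((salt + password).encode()).hexdigest()

salt = "a1b2"
intercepted = verifier(salt, "sunshine")        # the victim's weak password

dictionary = ["password", "letmein", "sunshine", "qwerty"]
recovered = next(
    (p for p in dictionary if verifier(salt, p) == intercepted), None
)
```

    This is why schemes such as the one above aim for verifiers from which no off-line-checkable function of the password alone can be derived.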

  5. Major Upgrades to the AIRS Version-6 Water Vapor Profile Methodology

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Blaisdell, John; Iredell, Lena; Lee, Jae N.

    2015-01-01

    Additional changes in Version-6.19 include all previous updates made to the q(p) retrieval since Version-6: a modified Neural-Net q0(p) guess above the tropopause; a linear taper of the neural-net guess to match climatology at 70 mb, rather than at the top of the atmosphere; and a change of the 11 trapezoid q(p) perturbation functions used in Version-6 to match the 24 functions used in the T(p) retrieval step. These modifications resulted in improved water vapor profiles in Version-6.19 compared to Version-6. Version-6.19 was tested for all of August 2013 and August 2014, as well as for select other days. Before it is finalized and made operational in 2016, V-6.19 data can be acquired upon request for limited time intervals.

  6. An Integrated Approach to Parameter Learning in Infinite-Dimensional Space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boyd, Zachary M.; Wendelberger, Joanne Roth

    The availability of sophisticated modern physics codes has greatly extended the ability of domain scientists to understand the processes underlying their observations of complicated processes, but it has also introduced the curse of dimensionality via the many user-set parameters available to tune. Many of these parameters are naturally expressed as functional data, such as initial temperature distributions, equations of state, and controls. Thus, when attempting to find parameters that match observed data, being able to navigate parameter-space becomes highly non-trivial, especially considering that accurate simulations can be expensive both in terms of time and money. Existing solutions include batch-parallel simulations, high-dimensional, derivative-free optimization, and expert guessing, all of which make some contribution to solving the problem but do not completely resolve the issue. In this work, we explore the possibility of coupling together all three of the techniques just described by designing user-guided, batch-parallel optimization schemes. Our motivating example is a neutron diffusion partial differential equation where the time-varying multiplication factor serves as the unknown control parameter to be learned. We find that a simple, batch-parallelizable, random-walk scheme is able to make some progress on the problem but does not by itself produce satisfactory results. After reducing the dimensionality of the problem using functional principal component analysis (fPCA), we are able to track the progress of the solver in a visually simple way as well as viewing the associated principal components. This allows a human to make reasonable guesses about which points in the state space the random walker should try next. 
Thus, by combining the random walker's ability to find descent directions with the human's understanding of the underlying physics, it is possible to use expensive simulations more efficiently and more quickly arrive at the desired parameter set.
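
    The batch-parallel random-walk idea can be sketched in a few lines: propose a batch of random perturbations of the current parameter vector, evaluate them all (in the paper each evaluation would be an expensive simulation run in parallel), and move to the best one if it improves the objective. The quadratic objective, step size and batch size here are stand-ins, not the paper's settings.

```python
# Greedy batch random walk on a toy objective with known minimum at (1,1,1).
import random

def objective(x):
    # stand-in for an expensive simulation's mismatch with observed data
    return sum((xi - 1.0) ** 2 for xi in x)

def random_walk(x0, step=0.5, batch=16, iters=200, seed=0):
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    for _ in range(iters):
        # each candidate in the batch could be evaluated on a separate node
        candidates = [
            [xi + rng.uniform(-step, step) for xi in x] for _ in range(batch)
        ]
        best = min(candidates, key=objective)
        if objective(best) < fx:            # greedy acceptance
            x, fx = best, objective(best)
    return x, fx

x, fx = random_walk([0.0, 0.0, 0.0])
```

    The human-in-the-loop step described above would amount to occasionally resetting `x` to a point the analyst picks from the fPCA view, which the walker then refines.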

  7. Solving fuel-optimal low-thrust orbital transfers with bang-bang control using a novel continuation technique

    NASA Astrophysics Data System (ADS)

    Zhu, Zhengfan; Gan, Qingbo; Yang, Xin; Gao, Yang

    2017-08-01

    We have developed a novel continuation technique to solve optimal bang-bang control for low-thrust orbital transfers considering the first-order necessary optimality conditions derived from Lawden's primer vector theory. Continuation on the thrust amplitude is mainly described in this paper. Firstly, a finite-thrust transfer with an "On-Off-On" thrusting sequence is modeled using a two-impulse transfer as the initial solution, and then the thrust amplitude is decreased gradually to find an optimal solution with minimum thrust. Secondly, the thrust amplitude is continued from its minimum value to positive infinity to find the optimal bang-bang control, and a thrust switching principle is employed to determine the control structure by monitoring the variation of the switching function. In the continuation process, a bifurcation of bang-bang control is revealed and the concept of critical thrust is proposed to illustrate this phenomenon. The same thrust switching principle is also applicable to continuation on other parameters, such as transfer time, orbital phase angle, etc. By this continuation technique, fuel-optimal orbital transfers with variable mission parameters can be found via an automated algorithm, and there is no need to provide an initial guess for the costate variables. Moreover, continuation is implemented in the solution space of bang-bang control that is either optimal or non-optimal, which shows that a desired solution of bang-bang control is obtained via continuation on a single parameter starting from an existing solution of bang-bang control. Finally, numerical examples are presented to demonstrate the effectiveness of the proposed continuation technique. 
Specifically, this continuation technique provides an approach to find multiple solutions satisfying the first-order necessary optimality conditions to the same orbital transfer problem, and a continuation strategy is presented as a preliminary approach for solving the bang-bang control of many-revolution orbital transfers.
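
    The core continuation idea, far simpler than the bang-bang control problem above, can be shown on a scalar root-finding example: sweep a parameter c and use each converged solution as the Newton initial guess for the next c, so that no separate guess is ever needed. The equation x = c·cos(x) and the sweep range are illustrative.

```python
# Parameter continuation: warm-start Newton's method along a sweep in c.
import math

def newton_root(c, x0, tol=1e-12):
    """Solve x - c*cos(x) = 0 by Newton iteration starting from x0."""
    x = x0
    for _ in range(50):
        dx = (x - c * math.cos(x)) / (1.0 + c * math.sin(x))
        x -= dx
        if abs(dx) < tol:
            break
    return x

x = 0.0                        # trivial solution at c = 0 seeds the sweep
roots = []
for i in range(1, 51):
    c = 0.1 * i                # continue c from 0.1 up to 5.0
    x = newton_root(c, x)      # warm start from the previous solution
    roots.append((c, x))

final_residual = x - 5.0 * math.cos(x)   # residual at the last step
```

    For c = 5 a cold start far from the root could diverge or land on a different branch; the warm start keeps the solver on one solution branch, which is exactly what continuation on thrust amplitude accomplishes for the costate variables in the paper.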

  8. Techniques for modeling the reliability of fault-tolerant systems with the Markov state-space approach

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Johnson, Sally C.

    1995-01-01

    This paper presents a step-by-step tutorial of the methods and the tools that were used for the reliability analysis of fault-tolerant systems. The approach used in this paper is the Markov (or semi-Markov) state-space method. The paper is intended for design engineers with a basic understanding of computer architecture and fault tolerance, but little knowledge of reliability modeling. The representation of architectural features in mathematical models is emphasized. This paper does not present details of the mathematical solution of complex reliability models. Instead, it describes the use of several recently developed computer programs SURE, ASSIST, STEM, and PAWS that automate the generation and the solution of these models.
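
    A toy Markov reliability model in the spirit of the tutorial (the paper itself relies on the SURE, ASSIST, STEM, and PAWS tools rather than hand-written integration): a triplex system starts with 3 good units, degrades through unit failures at an assumed rate, and fails once fewer than 2 units remain. The state-probability ODEs are integrated with simple Euler steps purely for illustration.

```python
# States: 0 = three good units, 1 = two good units, 2 = system failed.
lam = 1e-4            # per-hour unit failure rate (assumed)
dt = 0.1              # Euler integration step, hours
t_end = 10.0          # mission time, hours
p = [1.0, 0.0, 0.0]   # initial state probabilities

for _ in range(int(t_end / dt)):
    d0 = -3 * lam * p[0]                 # leave state 0 (any of 3 units fails)
    d1 = 3 * lam * p[0] - 2 * lam * p[1] # enter from 0, leave when a 2nd fails
    d2 = 2 * lam * p[1]                  # absorbing failure state
    p = [p[0] + dt * d0, p[1] + dt * d1, p[2] + dt * d2]

unreliability = p[2]   # probability of system failure by t_end (~3*(lam*t)^2)
```

    Real fault-tolerant models add reconfiguration and coverage transitions, which is where the semi-Markov machinery and the dedicated tools become necessary.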

  9. LPJ-GUESS Simulated Western North America Mid-latitude Vegetation Changes for 15-10 ka Using the CCSM3 TraCE Climate Simulation

    NASA Astrophysics Data System (ADS)

    Shafer, S. L.; Bartlein, P. J.

    2017-12-01

    The period from 15-10 ka was a time of rapid vegetation changes in North America. Continental ice sheets in northern North America were receding, exposing new habitat for vegetation, and regions distant from the ice sheets experienced equally large environmental changes. Northern hemisphere temperatures during this period were increasing, promoting transitions from cold-adapted to temperate plant taxa at mid-latitudes. Long, transient paleovegetation simulations can provide important information on vegetation responses to climate changes, including both the spatial dynamics and rates of species distribution changes over time. Paleovegetation simulations also can fill the spatial and temporal gaps in observed paleovegetation records (e.g., pollen data from lake sediments), allowing us to test hypotheses about past vegetation changes (e.g., the location of past refugia). We used the CCSM3 TraCE transient climate simulation as input for LPJ-GUESS, a general ecosystem model, to simulate vegetation changes from 15-10 ka for parts of western North America at mid-latitudes (about 35-55° N). For these simulations, LPJ-GUESS was parameterized to simulate key tree taxa for western North America (e.g., Pseudotsuga, Tsuga, Quercus, etc.). The CCSM3 TraCE transient climate simulation data were regridded onto a 10-minute grid of the study area. We analyzed the simulated spatial and temporal dynamics of these taxa and compared the simulated changes with observed paleovegetation changes recorded in pollen and plant macrofossil data (e.g., data from the Neotoma Paleoecology Database). In general, the LPJ-GUESS simulations reproduce the general patterns of paleovegetation responses to climate change, although the timing of some simulated vegetation changes does not match the observed paleovegetation record. 
We describe the areas and time periods with the greatest data-model agreement and disagreement, and discuss some of the strengths and weaknesses of the simulated climate and vegetation data. The magnitude and rate of the simulated past vegetation changes are compared with projected future vegetation changes for the region.

  10. Treatment Assignment Guesses by Study Participants in a Double-Blind Dose Escalation Clinical Trial of Saw Palmetto

    PubMed Central

    Moore, Page; Kusek, John; Barry, Michael

    2014-01-01

    Abstract Objectives: This report assesses participant perception of treatment assignment in a randomized, double-blind, placebo-controlled trial of saw palmetto for the treatment of benign prostatic hyperplasia (BPH). Design: Participants randomized to receive saw palmetto were instructed to take one 320 mg gelcap daily for the first 24 weeks, two 320 mg gelcaps daily for the second 24 weeks, and three 320 mg gelcaps daily for the third 24 weeks. Study participants assigned to placebo were instructed to take the same number of matching placebo gelcaps in each time period. At 24, 48, and 72 weeks postrandomization, the American Urological Association Symptom Index (AUA-SI) was administered and participants were asked to guess their treatment assignment. Settings: The study was conducted at 11 clinical centers in North America. Participants: Study participants were men, 45 years and older, with moderate to severe BPH symptoms, randomized to saw palmetto (N=151) or placebo (N=155). Outcome measures: Treatment arms were compared with respect to the distribution of participant guesses of treatment assignment. Results: For participants assigned to saw palmetto, 22.5%, 24.7%, and 29.8% correctly thought they were taking saw palmetto, and 37.3%, 40.0%, and 44.4% incorrectly thought they were on placebo at 24, 48, and 72 weeks, respectively. For placebo participants, 21.8%, 27.4%, and 25.2% incorrectly thought they were on saw palmetto, and 41.6%, 39.9%, and 42.6% correctly thought they were on placebo at 24, 48, and 72 weeks, respectively. The treatment arms did not vary with respect to the distributions of participants who guessed they were on saw palmetto (p=0.823) or placebo (p=0.893). Participants who experienced an improvement in AUA-SI were 2.16 times more likely to think they were on saw palmetto. Conclusions: Blinding of treatment assignment was successful in this study. 
Improvement in BPH-related symptoms was associated with the perception that participants were taking saw palmetto. PMID:23383975

  11. Diagnosis of Processes Controlling Dissolved Organic Carbon (DOC) Export in a Subarctic Region by a Dynamic Ecosystem Model

    NASA Astrophysics Data System (ADS)

    Tang, J.

    2015-12-01

    Permafrost thawing in high latitudes allows more soil organic carbon (SOC) to become hydrologically accessible. This can increase dissolved organic carbon (DOC) exports and carbon release to the atmosphere as CO2 and CH4, with a positive feedback to regional and global climate warming. However, this portion of carbon loss through DOC export is often neglected in ecosystem models. In this paper, we incorporate a set of DOC-related processes (DOC production, mineralization, diffusion, sorption-desorption and leaching) into an Arctic-enabled version of the dynamic ecosystem model LPJ-GUESS (LPJ-GUESS WHyMe) to mechanistically model the DOC export, and to link this flux to other ecosystem processes. The extended LPJ-GUESS WHyMe with these DOC processes is applied to the Stordalen catchment in northern Sweden. The relative importance of the different DOC-related processes for mineral and peatland soils in this region has been explored at both monthly and annual scales based on a detailed variance-based Sobol sensitivity analysis. For mineral soils, the annual DOC export is dominated by DOC fluxes in snowmelt seasons, and the peak in spring is related to the runoff passing through the top organic-rich layers. Two processes, DOC sorption-desorption and production, are found to contribute most to the annual variance in DOC export. For peatland soils, the DOC export during snowmelt seasons is constrained by frozen soils, and the processes of DOC production and mineralization, which determine the magnitudes of DOC desorption in snowmelt seasons as well as DOC sorption in the remaining months, play the most important role in the annual variance of DOC export. Generally, the seasonality of DOC fluxes is closely correlated with runoff seasonality in this region. The current implementation has demonstrated that DOC-related processes in the framework of LPJ-GUESS WHyMe are at an appropriate level of complexity to represent the main mechanisms of DOC dynamics in soils. 
The quantified contributions from different processes on DOC export dynamics could be further linked to the climate change, vegetation composition change and permafrost thawing in this region.

  12. Evaluation of DGVMs in tropical areas: linking patterns of vegetation cover, climate and fire to ecological processes

    NASA Astrophysics Data System (ADS)

    D'Onofrio, Donatella; von Hardenberg, Jost; Baudena, Mara

    2017-04-01

    Many current Dynamic Global Vegetation Models (DGVMs), including those incorporated into Earth System Models (ESMs), are able to realistically reproduce the distribution of most of the world's biomes. However, they display high uncertainty in predicting the forest, savanna and grassland distributions and the transitions between them in tropical areas. These biomes are the most productive terrestrial ecosystems, and owing to their different biogeophysical and biogeochemical characteristics, future changes in their distributions could also have impacts on climate states. In particular, increasing temperature and CO2, modified precipitation regimes, and increasing land-use intensity are expected to have large impacts on global biogeochemical cycles and precipitation, affecting land-climate interactions. The difficulty DGVMs have in simulating tropical vegetation, especially savanna structure and occurrence, has been associated with the way they represent ecological processes and the feedbacks between biotic and abiotic conditions. The inclusion of appropriate ecological mechanisms under present climatic conditions is essential for obtaining reliable future projections of vegetation and climate states. In this work we analyse observed relationships of tree and grass cover with climate and fire, and the current ecological understanding of the mechanisms driving the forest-savanna-grassland transition in Africa, to evaluate the outcomes of a current state-of-the-art DGVM and to assess which ecological processes need to be included or improved within the model. Specifically, we analyse patterns of woody and herbaceous cover and fire return times from MODIS satellite observations, rainfall annual averages and seasonality from TRMM satellite measurements, and tree phenology information from the ESA global land cover map, comparing them with the outcomes of the LPJ-GUESS DGVM, also used by the EC-Earth global climate model. 
The comparison analysis with the LPJ-GUESS simulations suggests possible improvements in the model representations of tree-grass competition for water and in the vegetation-fire interaction. The proposed method could be useful for evaluating DGVMs in tropical areas, especially in the phase of model setting-up, before the coupling with Earth System Models. This could help in improving the simulations of ecological processes and consequently of land-climate interactions.

  13. Monte Carlo simulations for 20 MV X-ray spectrum reconstruction of a linear induction accelerator

    NASA Astrophysics Data System (ADS)

    Wang, Yi; Li, Qin; Jiang, Xiao-Guo

    2012-09-01

    To study the spectrum reconstruction of the 20 MV X-ray generated by the Dragon-I linear induction accelerator, the Monte Carlo method is applied to simulate the attenuations of the X-ray in the attenuators of different thicknesses and thus provide the transmission data. As is known, the spectrum estimation from transmission data is an ill-conditioned problem. The method based on iterative perturbations is employed to derive the X-ray spectra, where initial guesses are used to start the process. This algorithm takes into account not only the minimization of the differences between the measured and the calculated transmissions but also the smoothness feature of the spectrum function. In this work, various filter materials are put to use as the attenuator, and the condition for an accurate and robust solution of the X-ray spectrum calculation is demonstrated. The influences of the scattering photons within different intervals of emergence angle on the X-ray spectrum reconstruction are also analyzed.
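
    The inversion of transmission data can be sketched with a heavily simplified stand-in for the iterative-perturbation algorithm: a discretized spectrum s_j obeys T_i = Σ_j s_j·exp(-μ_j·t_i) for attenuator thickness t_i, and starting from a flat initial guess each bin is refined by a multiplicative least-squares update. The attenuation coefficients, thicknesses and "true" spectrum below are invented, and this update scheme is a generic one, not the authors'.

```python
# Spectrum reconstruction from synthetic, noise-free transmission data.
import math

mu = [0.5, 1.0, 2.0]                  # assumed per-bin attenuation coefficients
thick = [0.0, 0.5, 1.0, 2.0, 4.0]     # attenuator thicknesses
true_s = [0.2, 0.5, 0.3]              # "unknown" normalized spectrum
T = [sum(s * math.exp(-m * t) for s, m in zip(true_s, mu)) for t in thick]

def reconstruct(T, mu, thick, iters=20000):
    n, m = len(mu), len(thick)
    A = [[math.exp(-mu[j] * thick[i]) for j in range(n)] for i in range(m)]
    s = [1.0 / n] * n                  # flat initial guess
    for _ in range(iters):
        model = [sum(A[i][j] * s[j] for j in range(n)) for i in range(m)]
        # multiplicative update s_j <- s_j * (A^T T)_j / (A^T A s)_j,
        # which preserves positivity of the spectrum estimate
        s = [
            s[j] * sum(A[i][j] * T[i] for i in range(m))
                 / sum(A[i][j] * model[i] for i in range(m))
            for j in range(n)
        ]
    return s

s_est = reconstruct(T, mu, thick)
```

    The slow convergence for correlated attenuation curves hints at the ill-conditioning the abstract mentions; the paper additionally enforces spectral smoothness to stabilize the solution against measurement noise.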

  14. Efficient preconditioning of the electronic structure problem in large scale ab initio molecular dynamics simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schiffmann, Florian; VandeVondele, Joost, E-mail: Joost.VandeVondele@mat.ethz.ch

    2015-06-28

    We present an improved preconditioning scheme for electronic structure calculations based on the orbital transformation method. First, a preconditioner is developed which includes information from the full Kohn-Sham matrix but avoids computationally demanding diagonalisation steps in its construction. This reduces the computational cost of its construction, eliminating a bottleneck in large scale simulations, while maintaining rapid convergence. In addition, a modified form of Hotelling’s iterative inversion is introduced to replace the exact inversion of the preconditioner matrix. This method is highly effective during molecular dynamics (MD), as the solution obtained in earlier MD steps is a suitable initial guess. Filtering small elements during sparse matrix multiplication leads to linear scaling inversion, while retaining robustness, already for relatively small systems. For system sizes ranging from a few hundred to a few thousand atoms, which are typical for many practical applications, the improvements to the algorithm lead to a 2-5 fold speedup per MD step.

  15. Random-subset fitting of digital holograms for fast three-dimensional particle tracking [invited].

    PubMed

    Dimiduk, Thomas G; Perry, Rebecca W; Fung, Jerome; Manoharan, Vinothan N

    2014-09-20

    Fitting scattering solutions to time series of digital holograms is a precise way to measure three-dimensional dynamics of microscale objects such as colloidal particles. However, this inverse-problem approach is computationally expensive. We show that the computational time can be reduced by an order of magnitude or more by fitting to a random subset of the pixels in a hologram. We demonstrate our algorithm on experimentally measured holograms of micrometer-scale colloidal particles, and we show that 20-fold increases in speed, relative to fitting full frames, can be attained while introducing errors in the particle positions of 10 nm or less. The method is straightforward to implement and works for any scattering model. It also enables a parallelization strategy wherein random-subset fitting is used to quickly determine initial guesses that are subsequently used to fit full frames in parallel. This approach may prove particularly useful for studying rare events, such as nucleation, that can only be captured with high frame rates over long times.
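
    The random-subset idea can be demonstrated without any scattering model: fit parameters to only a random fraction of the "pixels" and compare with the full-frame fit. The synthetic signal below, a damped cosine standing in for a hologram fringe profile, and the brute-force grid search replacing a proper optimizer are both illustrative simplifications.

```python
# Fit (amplitude a, frequency k) to all samples vs. a 5% random subset.
import math
import random

def model(x, a, k):
    """Damped cosine standing in for a radial hologram fringe profile."""
    return a * math.exp(-0.1 * x) * math.cos(k * x)

random.seed(3)
xs = [i * 0.05 for i in range(2000)]              # "pixel" coordinates
data = [model(x, 1.5, 4.0) + random.gauss(0, 0.01) for x in xs]

def fit(points):
    """Brute-force least squares over a coarse parameter grid (illustrative)."""
    best = None
    for a in [1.0 + 0.1 * i for i in range(11)]:      # a in [1.0, 2.0]
        for k in [3.5 + 0.1 * j for j in range(11)]:  # k in [3.5, 4.5]
            err = sum((data[i] - model(xs[i], a, k)) ** 2 for i in points)
            if best is None or err < best[0]:
                best = (err, a, k)
    return best[1], best[2]

subset = random.sample(range(len(xs)), 100)       # fit only 5% of the pixels
a_sub, k_sub = fit(subset)                        # ~20x fewer model evaluations
a_full, k_full = fit(range(len(xs)))
```

    The subset fit touches 20x fewer samples per objective evaluation yet lands on essentially the same parameters, mirroring the paper's observed speedups at small cost in positional precision.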

  16. AUTONOMOUS GAUSSIAN DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, and also their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
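
    The derivative-spectroscopy idea behind AGD can be sketched on a noise-free toy spectrum (the real algorithm adds smoothing regularization tuned by machine learning): strongly negative local minima of the second derivative flag candidate Gaussian components, yielding initial guesses for their locations and amplitudes. The component parameters and the -0.5 threshold below are invented for illustration.

```python
# Candidate Gaussian components from second-derivative minima of a spectrum.
import math

xs = [i * 0.1 for i in range(200)]
spectrum = [
    1.0 * math.exp(-((x - 5.0) ** 2) / (2 * 0.6 ** 2))    # component at x=5
    + 0.6 * math.exp(-((x - 12.0) ** 2) / (2 * 1.0 ** 2)) # component at x=12
    for x in xs
]

def second_derivative(y, h):
    """Central-difference second derivative on a uniform grid of spacing h."""
    return [
        (y[i - 1] - 2 * y[i] + y[i + 1]) / h ** 2 for i in range(1, len(y) - 1)
    ]

d2 = second_derivative(spectrum, 0.1)
# a Gaussian of amplitude A and width sigma has d2 = -A/sigma^2 at its center,
# so strongly negative local minima of d2 mark component centers
guesses = [
    (xs[i + 1], spectrum[i + 1])
    for i in range(1, len(d2) - 1)
    if d2[i] < d2[i - 1] and d2[i] < d2[i + 1] and d2[i] < -0.5
]
```

    In practice these (location, amplitude) pairs would seed a least-squares fit; AGD's contribution is making the threshold and smoothing choices autonomous and robust to noise.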

  17. Study on the influence of stochastic properties of correction terms on the reliability of instantaneous network RTK

    NASA Astrophysics Data System (ADS)

    Próchniewicz, Dominik

    2014-03-01

    The reliability of precision GNSS positioning primarily depends on correct carrier-phase ambiguity resolution. An optimal estimation and correct validation of ambiguities necessitates a proper definition of the mathematical positioning model. Of particular importance in the model definition is taking into account the atmospheric errors (ionospheric and tropospheric refraction) as well as orbital errors. The use of a network of reference stations in kinematic positioning, known as the Network-based Real-Time Kinematic (Network RTK) solution, facilitates the modeling of such errors and their incorporation, in the form of correction terms, into the functional description of the positioning model. Lowered accuracy of corrections, especially during atmospheric disturbances, results in the occurrence of unaccounted biases, the so-called residual errors. Taking such errors into account in the Network RTK positioning model is possible by incorporating the accuracy characteristics of the correction terms into the stochastic model of observations. In this paper we investigate the impact of expanding the stochastic model to include correction term variances on the reliability of the model solution. In particular, the results of the instantaneous solution, which utilizes only a single epoch of GPS observations, are analyzed. This solution mode, due to its low number of degrees of freedom, is very sensitive to an inappropriate mathematical model definition, so a high level of solution reliability is very difficult to achieve. Numerical tests performed for a test network located in a mountainous area during ionospheric disturbances allow the described method to be verified under poor measurement conditions. The results of the ambiguity resolution as well as the rover positioning accuracy show that the proposed method of stochastic modeling can increase the reliability of instantaneous Network RTK performance.
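
    The effect of expanding the stochastic model can be shown on a deliberately tiny example (a single estimated parameter rather than a GNSS adjustment, with made-up numbers): when an observation carries a degraded correction term, inflating its variance in the weight matrix stops it from dragging the estimate away from the truth.

```python
# Weighted mean of three observations of a quantity whose true value is 10.0;
# the third observation carries a large residual error from a poor correction.
obs = [10.02, 9.98, 10.40]
var_obs = [0.01, 0.01, 0.01]      # nominal observation variances
var_corr = [0.0, 0.0, 0.25]       # variance of the applied correction term

def weighted_mean(values, variances):
    """Least-squares estimate with weights 1/variance."""
    w = [1.0 / v for v in variances]
    return sum(wi * vi for wi, vi in zip(w, values)) / sum(w)

naive = weighted_mean(obs, var_obs)                # treats all obs as equal
expanded = weighted_mean(
    obs, [v + c for v, c in zip(var_obs, var_corr)]  # stochastic model expanded
)
```

    The same mechanism, applied epoch-by-epoch to the full GNSS observation covariance, is what makes the single-epoch ambiguity resolution above more robust during ionospheric disturbances.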

  18. Improving Metrological Reliability of Information-Measuring Systems Using Mathematical Modeling of Their Metrological Characteristics

    NASA Astrophysics Data System (ADS)

    Kurnosov, R. Yu; Chernyshova, T. I.; Chernyshov, V. N.

    2018-05-01

    The algorithms for improving the metrological reliability of analogue blocks of measuring channels and information-measuring systems are developed. The proposed algorithms ensure the optimum values of their metrological reliability indices for a given analogue circuit block solution.

  19. Virtual Vesta

    NASA Image and Video Library

    2011-03-10

    This image shows NASA's Dawn scientists' best guess to date of what the surface of the protoplanet Vesta might look like; it incorporates the best data on dimples and bulges from ground-based telescopes and NASA's Hubble Space Telescope.

  20. Mesoscale temperature and moisture fields from satellite infrared soundings

    NASA Technical Reports Server (NTRS)

    Hillger, D. W.; Vonderhaar, T. H.

    1976-01-01

    The combined use of radiosonde and satellite infrared soundings can provide mesoscale temperature and moisture fields at the time of satellite coverage. Radiance data from the vertical temperature profile radiometer on NOAA polar-orbiting satellites can be used along with a radiosonde sounding as an initial guess in an iterative retrieval algorithm. The mesoscale temperature and moisture fields at 9 - 10 a.m. local time, produced by retrieving temperature profiles at each scan spot of the radiometer (every 70 km), can be used for analysis or as a forecasting tool for subsequent weather events during the day. The better horizontal resolution of satellite soundings can thus be coupled with the radiosonde temperature and moisture profile, which serves both as a best initial-guess profile and as a means of mitigating the limited vertical resolution of satellite soundings.

  1. Close binding of identity and location in visual feature perception

    NASA Technical Reports Server (NTRS)

    Johnston, J. C.; Pashler, H.

    1990-01-01

    The binding of identity and location information in disjunctive feature search was studied. Ss searched a heterogeneous display for a color or a form target, and reported both target identity and location. To avoid better than chance guessing of target identity (by choosing the target less likely to have been seen), the difficulty of the two targets was equalized adaptively; a mathematical model was used to quantify residual effects. A spatial layout was used that minimized postperceptual errors in reporting location. Results showed strong binding of identity and location perception. After correction for guessing, no perception of identity without location was found. A weak trend was found for accurate perception of target location without identity. We propose that activated features generate attention-calling "interrupt" signals, specifying only location; attention then retrieves the properties at that location.

  2. Identifying arbitrary parameter zonation using multiple level set functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    In this paper, we extended the analytical level set method [1, 2] for identifying a piecewise heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) are minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on the two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guess fields may converge to a similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  3. Cognitive Load Does Not Affect the Behavioral and Cognitive Foundations of Social Cooperation.

    PubMed

    Mieth, Laura; Bell, Raoul; Buchner, Axel

    2016-01-01

    The present study serves to test whether the cognitive mechanisms underlying social cooperation are affected by cognitive load. Participants interacted with trustworthy-looking and untrustworthy-looking partners in a sequential Prisoner's Dilemma Game. Facial trustworthiness was manipulated to stimulate expectations about the future behavior of the partners, which were either violated or confirmed by the partners' cheating or cooperation during the game. In a source memory test, participants were required to recognize the partners and to classify them as cheaters or cooperators. A multinomial model was used to disentangle item memory, source memory and guessing processes. We found an expectancy-congruent bias toward guessing that trustworthy-looking partners were more likely to be associated with cooperation than untrustworthy-looking partners. Source memory was enhanced for cheating that violated the participants' positive expectations about trustworthy-looking partners. We were interested in whether this expectancy-violation effect, which helps to revise unjustified expectations about trustworthy-looking partners, depends on cognitive load induced via a secondary continuous reaction time task. Although this secondary task interfered with working memory processes in a validation study, both the expectancy-congruent guessing bias and the expectancy-violation effect were obtained with and without cognitive load. These findings support the hypothesis that the expectancy-violation effect is due to a simple mechanism that does not rely on demanding elaborative processes. We conclude that most cognitive mechanisms underlying social cooperation presumably operate automatically, so that they remain unaffected by cognitive load.

  4. Integrating planning perception and action for informed object search.

    PubMed

    Manso, Luis J; Gutierrez, Marco A; Bustos, Pablo; Bachiller, Pilar

    2018-05-01

    This paper presents a method to reduce the time spent by a robot with cognitive abilities when looking for objects in unknown locations. It describes how machine learning techniques can be used to decide which places should be inspected first, based on images that the robot acquires passively. The proposal is composed of two concurrent processes. The first one uses the aforementioned images to generate a description of the types of objects found in each object container seen by the robot. This is done passively, regardless of the task being performed. The containers can be tables, boxes, shelves or any other kind of container of known shape whose contents can be seen from a distance. The second process uses the previously computed estimation of the contents of the containers to decide which container is most likely to hold the object to be found. This second process is deliberative and takes place only when the robot needs to find an object, whether because it is explicitly asked to locate one or because it is needed as a step to fulfil the mission of the robot. Upon failure to guess the right container, the robot can continue making guesses until the object is found. Guesses are made based on the semantic distance between the object to find and the description of the types of the objects found in each object container. The paper provides quantitative results comparing the efficiency of the proposed method and two baseline approaches.
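
    The semantic-distance ranking step can be illustrated with a toy sketch (the embeddings, object names and containers below are invented for illustration; the paper derives its container descriptions from passively acquired images and a learned semantic distance):

    ```python
    # Toy container-selection sketch: given a per-container summary of observed
    # object types, guess the container most semantically similar to the target.
    # Hand-made feature vectors stand in for learned semantic embeddings.

    import math

    def cosine(u, v):
        """Cosine similarity between two equal-length vectors."""
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv)

    # Toy feature axes: (kitchen-ness, office-ness, bathroom-ness).
    embedding = {
        "mug":        (0.9, 0.3, 0.0),
        "stapler":    (0.0, 1.0, 0.0),
        "toothbrush": (0.0, 0.0, 1.0),
        "plate":      (1.0, 0.1, 0.0),
    }
    containers = {
        "kitchen_shelf": ["mug", "plate"],
        "desk":          ["stapler"],
    }

    def rank_containers(target):
        """Order containers by mean similarity between target and their contents."""
        def score(items):
            return sum(cosine(embedding[target], embedding[i]) for i in items) / len(items)
        return sorted(containers, key=lambda c: score(containers[c]), reverse=True)

    print(rank_containers("mug")[0])  # → kitchen_shelf
    ```

    A failed guess simply moves the search to the next container in the ranking, mirroring the retry behavior described in the abstract.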

  5. Cognitive Load Does Not Affect the Behavioral and Cognitive Foundations of Social Cooperation

    PubMed Central

    Mieth, Laura; Bell, Raoul; Buchner, Axel

    2016-01-01

    The present study serves to test whether the cognitive mechanisms underlying social cooperation are affected by cognitive load. Participants interacted with trustworthy-looking and untrustworthy-looking partners in a sequential Prisoner’s Dilemma Game. Facial trustworthiness was manipulated to stimulate expectations about the future behavior of the partners which were either violated or confirmed by the partners’ cheating or cooperation during the game. In a source memory test, participants were required to recognize the partners and to classify them as cheaters or cooperators. A multinomial model was used to disentangle item memory, source memory and guessing processes. We found an expectancy-congruent bias toward guessing that trustworthy-looking partners were more likely to be associated with cooperation than untrustworthy-looking partners. Source memory was enhanced for cheating that violated the participants’ positive expectations about trustworthy-looking partners. We were interested in whether or not this expectancy-violation effect—that helps to revise unjustified expectations about trustworthy-looking partners—depends on cognitive load induced via a secondary continuous reaction time task. Although this secondary task interfered with working memory processes in a validation study, both the expectancy-congruent guessing bias as well as the expectancy-violation effect were obtained with and without cognitive load. These findings support the hypothesis that the expectancy-violation effect is due to a simple mechanism that does not rely on demanding elaborative processes. We conclude that most cognitive mechanisms underlying social cooperation presumably operate automatically so that they remain unaffected by cognitive load. PMID:27630597

  6. Identifying arbitrary parameter zonation using multiple level set functions

    DOE PAGES

    Lu, Zhiming; Vesselinov, Velimir Valentinov; Lei, Hongzhuan

    2018-03-14

    In this paper, we extended the analytical level set method [1, 2] for identifying a piece-wisely heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) is minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity ofmore » the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guesses fields may converge to the similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.« less

  7. Identifying arbitrary parameter zonation using multiple level set functions

    NASA Astrophysics Data System (ADS)

    Lu, Zhiming; Vesselinov, Velimir V.; Lei, Hongzhuan

    2018-07-01

    In this paper, we extended the analytical level set method [1,2] for identifying a piece-wisely heterogeneous (zonation) binary system to the case with an arbitrary number of materials with unknown material properties. In the developed level set approach, starting from an initial guess, the material interfaces are propagated through iterations such that the residuals between the simulated and observed state variables (hydraulic head) is minimized. We derived an expression for the propagation velocity of the interface between any two materials, which is related to the permeability contrast between the materials on two sides of the interface, the sensitivity of the head to permeability, and the head residual. We also formulated an expression for updating the permeability of all materials, which is consistent with the steepest descent of the objective function. The developed approach has been demonstrated through many examples, ranging from totally synthetic cases to a case where the flow conditions are representative of a groundwater contaminant site at the Los Alamos National Laboratory. These examples indicate that the level set method can successfully identify zonation structures, even if the number of materials in the model domain is not exactly known in advance. Although the evolution of the material zonation depends on the initial guess field, inverse modeling runs starting with different initial guesses fields may converge to the similar final zonation structure. These examples also suggest that identifying interfaces of spatially distributed heterogeneities is more important than estimating their permeability values.

  8. On the reliability of computed chaotic solutions of non-linear differential equations

    NASA Astrophysics Data System (ADS)

    Liao, Shijun

    2009-08-01

    A new concept, namely the critical predictable time Tc, is introduced to give a more precise description of computed chaotic solutions of non-linear differential equations: it is suggested that computed chaotic solutions are unreliable and doubtful when t > Tc. This provides a strategy for detecting the reliable part of a given computed result. In this way, computational phenomena such as computational chaos (CC), computational periodicity (CP) and computational prediction uncertainty, which are mainly based on long-term properties of computed time series, can be completely avoided. Using this concept, the famous conclusion 'accurate long-term prediction of chaos is impossible' should be replaced by the more precise conclusion that 'accurate prediction of chaos beyond the critical predictable time Tc is impossible'. The concept thus also provides a timescale for determining whether or not a particular time is long enough for a given non-linear dynamic system. Besides, the influence of data inaccuracy and of various numerical schemes on the critical predictable time is investigated in detail using symbolic computation software as a tool. A reliable chaotic solution of the Lorenz equation over the rather large interval 0 <= t < 1200 non-dimensional Lorenz time units is obtained for the first time. It is found that the precision of the initial condition and of the computed data at each time step, which is mathematically necessary to obtain such a reliable chaotic solution over such a long time, is so high that it is physically unattainable due to the Heisenberg uncertainty principle of quantum physics. This, however, yields a so-called 'precision paradox of chaos', which suggests that the prediction uncertainty of chaos is physically unavoidable, and that even macroscopic phenomena might be essentially stochastic and thus more economically described by probability.
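
    The idea behind Tc can be illustrated numerically (an illustrative sketch, not the paper's method: the paper varies numerical precision and schemes, whereas here we simply perturb the initial condition of the Lorenz system and record when two fixed-step RK4 trajectories separate by an O(1) amount):

    ```python
    # Sketch of a critical-predictable-time estimate for the Lorenz system:
    # integrate twice from initial conditions differing by eps and report the
    # time at which the x-components differ by more than a threshold. The
    # parameters, step size and threshold are illustrative.

    def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    def rk4_step(s, dt):
        def add(a, b, c):  # componentwise a + c*b
            return tuple(ai + c * bi for ai, bi in zip(a, b))
        k1 = lorenz_rhs(s)
        k2 = lorenz_rhs(add(s, k1, dt / 2))
        k3 = lorenz_rhs(add(s, k2, dt / 2))
        k4 = lorenz_rhs(add(s, k3, dt))
        return tuple(si + dt / 6 * (a + 2 * b + 2 * c + d)
                     for si, a, b, c, d in zip(s, k1, k2, k3, k4))

    def divergence_time(eps=1e-10, dt=0.002, t_max=60.0, threshold=1.0):
        """Time at which two trajectories, eps apart initially, separate by O(1)."""
        a, b = (1.0, 1.0, 1.0), (1.0 + eps, 1.0, 1.0)
        t = 0.0
        while t < t_max:
            a, b = rk4_step(a, dt), rk4_step(b, dt)
            t += dt
            if abs(a[0] - b[0]) > threshold:
                return t
        return t_max

    print(round(divergence_time(), 2))
    ```

    Because the separation grows roughly like eps times the exponential of the leading Lyapunov exponent times t, reducing eps (i.e., increasing precision) only pushes the divergence time out logarithmically, which is the heart of the precision paradox described above.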

  9. Model of Vesta

    NASA Image and Video Library

    2011-03-10

    This image incorporates the best data on dimples and bulges of the protoplanet Vesta from ground-based telescopes and NASA's Hubble Space Telescope. This model of Vesta uses scientists' best guess to date of what the surface might look like.

  10. Children's Use of Context in Word Recognition: A Psycholinguistic Guessing Game.

    ERIC Educational Resources Information Center

    Schwantes, Frederick M.; And Others

    1980-01-01

    Two experiments were conducted to investigate the effect of varying the amount of preceding-sentence context upon the lexical decision speed of third- and sixth-grade and college-level students. (Author/MP)

  11. Mass breakdown model of solar-photon sail shuttle: The case for Mars

    NASA Astrophysics Data System (ADS)

    Vulpetti, Giovanni; Circi, Christian

    2016-02-01

    The main aim of this paper is to set up a many-parameter model of mass breakdown to be applied to a reusable Earth-Mars-Earth solar-photon sail shuttle, and analyze the system behavior in two sub-problems: (1) the zero-payload shuttle, and (2) given the sailcraft sail loading and the gross payload mass, find the sail area of the shuttle. The solution to the subproblem-1 is of technological and programmatic importance. The general analysis of subproblem-2 is presented as a function of the sail side length, system mass, sail loading and thickness. In addition to the behaviors of the main system masses, useful information for future work on the sailcraft trajectory optimization is obtained via (a) a detailed mass model for the descent/ascent Martian Excursion Module, and (b) the fifty-fifty solution to the sailcraft sail loading breakdown equation. Of considerable importance is the evaluation of the minimum altitude for the rendezvous between the ascent rocket vehicle and the solar-photon sail propulsion module, a task performed via the Mars Climate Database 2014-2015. The analysis shows that such altitude is 300 km; below it, the atmospheric drag prevails over the solar-radiation thrust. By this value, an example of excursion module of 1500 kg in total mass is built, and the sailcraft sail loading and the return payload are calculated. Finally, the concept of launch opportunity-wide for a shuttle driven by solar-photon sail is introduced. The previous fifty-fifty solution may be a good initial guess for the trajectory optimization of this type of shuttle.

  12. Structural, thermodynamic, and electrical properties of polar fluids and ionic solutions on a hypersphere: Results of simulations

    NASA Astrophysics Data System (ADS)

    Caillol, J. M.; Levesque, D.

    1992-01-01

    The reliability and the efficiency of a new method suitable for the simulation of dielectric fluids and ionic solutions are established by numerical computations. The efficiency depends on the use of a simulation cell which is the surface of a four-dimensional sphere. The reliability originates from a charge-charge potential that is a solution of the Poisson equation in this confining volume. The computation time, for systems of a few hundred molecules, is reduced by a factor of 2 or 3 compared to that of a simulation performed in a cubic volume with periodic boundary conditions and the Ewald charge-charge potential.

  13. A Flexible Latent Class Approach to Estimating Test-Score Reliability

    ERIC Educational Resources Information Center

    van der Palm, Daniël W.; van der Ark, L. Andries; Sijtsma, Klaas

    2014-01-01

    The latent class reliability coefficient (LCRC) is improved by using the divisive latent class model instead of the unrestricted latent class model. This results in the divisive latent class reliability coefficient (DLCRC), which unlike LCRC avoids making subjective decisions about the best solution and thus avoids judgment error. A computational…

  14. Final Report to the National Energy Technology Laboratory on FY09-FY13 Cooperative Research with the Consortium for Electric Reliability Technology Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vittal, Vijay

    2015-11-04

    The Consortium for Electric Reliability Technology Solutions (CERTS) was formed in 1999 in response to a call from the U.S. Congress to restart a federal transmission reliability R&D program to address concerns about the reliability of the U.S. electric power grid. CERTS is a partnership between industry, universities, national laboratories, and government agencies. It researches, develops, and disseminates new methods, tools, and technologies to protect and enhance the reliability of the U.S. electric power system and the efficiency of competitive electricity markets. It is funded by the U.S. Department of Energy's Office of Electricity Delivery and Energy Reliability (OE). This report provides an overview of PSERC and CERTS, of the overall objectives and scope of the research, a summary of the major research accomplishments, highlights of the work done under the various elements of the NETL cooperative agreement, and brief reports written by the PSERC researchers on their accomplishments, including research results, publications, and software tools.

  15. Reliability Based Design for a Raked Wing Tip of an Airframe

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.

    2011-01-01

    A reliability-based optimization methodology has been developed to design the raked wing tip of the Boeing 767-400 extended range airliner, which is made of composite and metallic materials. The design is formulated for an accepted level of risk or reliability. The design variables, weight and constraints became functions of reliability. Uncertainties in the load, strength and material properties, as well as in the design variables, were modeled as random parameters with specified distributions, such as the normal, Weibull or Gumbel distribution. The objective function and each constraint, or failure mode, became derived functions of the risk level. The solution to the problem produced the optimum design, with weight, variables and constraints as functions of the risk level. Optimum weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to a 50 percent probability of success, or one failure in two samples. Under some assumptions, this design would be quite close to the deterministic optimum solution. The weight increased when reliability exceeded 50 percent, and decreased when the reliability was compromised. A design can be selected depending on the level of risk acceptable in a given situation. The optimization process achieved up to a 20-percent reduction in weight over the traditional design.

  16. A Comparative Study of Different Deblurring Methods Using Filters

    NASA Astrophysics Data System (ADS)

    Srimani, P. K.; Kavitha, S.

    2011-12-01

    This paper studies the restoration of Gaussian-blurred images using four deblurring techniques, viz., the Wiener filter, the regularized filter, the Lucy-Richardson deconvolution algorithm and the blind deconvolution algorithm, given knowledge of the Point Spread Function (PSF) that corrupted the blurred image. The techniques are applied to a scanned image of a seven-month fetus in the womb and compared with one another, so as to choose the best technique for restoring the image. The paper also studies the restoration of the blurred image using a regularized filter (RF) when no information about the PSF is available, applying the same four techniques after first estimating the PSF. The number of iterations and the weight threshold used to choose the best PSF guesses for restoration are determined.
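
    The Lucy-Richardson update at the heart of the comparison above can be sketched in one dimension (a toy, pure-Python sketch with an invented spike signal and 3-tap PSF; real image deblurring operates in two dimensions and must contend with noise):

    ```python
    # Minimal 1-D Richardson-Lucy deconvolution sketch. The signal, PSF and
    # iteration count are illustrative, not taken from the paper.

    def convolve(x, k):
        """'Same'-size convolution with zero padding; kernel length assumed odd."""
        h = len(k) // 2
        return [sum(k[j] * x[i + j - h]
                    for j in range(len(k)) if 0 <= i + j - h < len(x))
                for i in range(len(x))]

    def richardson_lucy(observed, psf, iters=200):
        """Iteratively refine a flat initial guess toward the unblurred signal."""
        est = [1.0] * len(observed)
        psf_flip = psf[::-1]
        for _ in range(iters):
            denom = convolve(est, psf)                       # re-blur current guess
            ratio = [o / max(d, 1e-12) for o, d in zip(observed, denom)]
            corr = convolve(ratio, psf_flip)                 # correlate with PSF
            est = [e * c for e, c in zip(est, corr)]         # multiplicative update
        return est

    truth = [0, 0, 5, 0, 0, 0, 2, 0, 0]   # two sharp spikes
    psf = [0.25, 0.5, 0.25]               # known blur kernel
    blurred = convolve(truth, psf)
    restored = richardson_lucy(blurred, psf)
    # The restored signal lies closer to the truth than the blurred one.
    ```

    The multiplicative update keeps the estimate nonnegative, which is one reason Lucy-Richardson is popular for imaging data.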

  17. RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT.

    PubMed

    Carlis, John; Bruso, Kelsey

    2012-03-01

    Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT-predicted K and the Bayesian information criterion (BIC) predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing.

  18. Estimating the cost of compensating victims of medical negligence.

    PubMed Central

    Fenn, P.; Hermans, D.; Dingwall, R.

    1994-01-01

    The current system in Britain for compensating victims of medical injury depends on an assessment of negligence. Despite the sporadic pressure on the government to adopt a "no fault" approach, such as exists in Sweden, the negligence system will probably remain for the immediate future. The cost of this system was estimated to be 52.3m pounds for England 1990-1. The problem for the future, however, is one of forecasting accuracy at provider level: too high a guess and current patient care will suffer; too low a guess and future patient care will suffer. The introduction of a mutual insurance scheme may not resolve these difficulties, as someone will have to set the rates. Moreover, the figures indicate that if a no fault scheme was introduced the cost might be four times that of the current system, depending on the type of scheme adopted. PMID:8081145

  19. Event-related potential evidence suggesting voters remember political events that never happened

    PubMed Central

    Federmeier, Kara D.; Gonsalves, Brian D.

    2014-01-01

    Voters tend to misattribute issue positions to political candidates that are consistent with their partisan affiliation, even though these candidates have never explicitly stated or endorsed such stances. The prevailing explanation in political science is that voters misattribute candidates’ issue positions because they use their political knowledge to make educated but incorrect guesses. We suggest that voter errors can also stem from a different source: false memories. The current study examined event-related potential (ERP) responses to misattributed and accurately remembered candidate issue information. We report here that ERP responses to misattributed information can elicit memory signals similar to that of correctly remembered old information—a pattern consistent with a false memory rather than educated guessing interpretation of these misattributions. These results suggest that some types of voter misinformation about candidates may be harder to correct than previously thought. PMID:23202775

  20. RSQRT: AN HEURISTIC FOR ESTIMATING THE NUMBER OF CLUSTERS TO REPORT

    PubMed Central

    Bruso, Kelsey

    2012-01-01

    Clustering can be a valuable tool for analyzing large datasets, such as in e-commerce applications. Anyone who clusters must choose how many item clusters, K, to report. Unfortunately, one must guess at K or some related parameter. Elsewhere we introduced a strongly-supported heuristic, RSQRT, which predicts K as a function of the attribute or item count, depending on attribute scales. We conducted a second analysis where we sought confirmation of the heuristic, analyzing data sets from the UCI machine learning benchmark repository. For the 25 studies where sufficient detail was available, we again found strong support. Also, in a side-by-side comparison of 28 studies, the RSQRT-predicted K and the Bayesian information criterion (BIC) predicted K are the same. RSQRT has a lower cost of O(log log n) versus O(n^2) for BIC, and is more widely applicable. Using RSQRT prospectively could be much better than merely guessing. PMID:22773923

  1. Analyzing force concept inventory with item response theory

    NASA Astrophysics Data System (ADS)

    Wang, Jing; Bao, Lei

    2010-10-01

    Item response theory is a popular assessment method used in education. It rests on the assumption of a probability framework that relates students' innate ability and their performance on test questions. Item response theory transforms students' raw test scores into a scaled proficiency score, which can be used to compare results obtained with different test questions. The scaled score also addresses the issues of ceiling effects and guessing, which commonly exist in quantitative assessment. We used item response theory to analyze the force concept inventory (FCI). Our results show that item response theory can be useful for analyzing physics concept surveys such as the FCI and produces results about the individual questions and student performance that are beyond the capability of classical statistics. The theory yields detailed measurement parameters regarding the difficulty, discrimination features, and probability of correct guess for each of the FCI questions.
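
    The guessing behavior described above is commonly captured by the three-parameter logistic (3PL) IRT model, in which the guessing parameter c puts a floor on the probability of a correct response; a minimal sketch (the item parameters below are invented for illustration):

    ```python
    # 3PL item response function: even a student with very low ability theta
    # answers correctly with probability at least c (the guessing floor).

    import math

    def p_correct(theta, a, b, c):
        """3PL probability: discrimination a, difficulty b, guessing floor c."""
        return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

    # A 5-option multiple-choice item might have c near 0.2 (pure guessing).
    low  = p_correct(theta=-3.0, a=1.5, b=0.0, c=0.2)   # weak student: near the floor
    high = p_correct(theta=3.0, a=1.5, b=0.0, c=0.2)    # strong student: near 1
    print(round(low, 3), round(high, 3))                # → 0.209 0.991
    ```

    Fitting a, b and c for each FCI question is what yields the per-item difficulty, discrimination and guessing estimates the abstract mentions.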

  2. Recollective experience in odor recognition: influences of adult age and familiarity.

    PubMed

    Larsson, Maria; Oberg, Christina; Bäckman, Lars

    2006-01-01

    We examined recollective experience in odor memory as a function of age, intention to learn, and familiarity. Young and older adults studied a set of familiar and unfamiliar odors with incidental or intentional encoding instructions. At recognition, participants indicated whether their response was based on explicit recollection (remembering), a feeling of familiarity (knowing), or guessing. The results indicated no age-related differences in the distribution of experiential responses for unfamiliar odors. By contrast, for familiar odors the young demonstrated more explicit recollection than the older adults, who produced more "know" and "guess" responses. Intention to learn was unrelated to recollective experience. In addition, the observed age differences in "remember" responses for familiar odors were eliminated when odor naming was statistically controlled. This suggests that age-related deficits in activating specific odor knowledge (i.e., odor names) play an important role for age differences in recollective experience of olfactory information.

  3. Quantum Chemistry on Quantum Computers: A Polynomial-Time Quantum Algorithm for Constructing the Wave Functions of Open-Shell Molecules.

    PubMed

    Sugisaki, Kenji; Yamamoto, Satoru; Nakazawa, Shigeaki; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji

    2016-08-18

    Quantum computers are capable of efficiently performing full configuration interaction (FCI) calculations of atoms and molecules by using the quantum phase estimation (QPE) algorithm. Because the success probability of the QPE depends on the overlap between the approximate and exact wave functions, efficient methods to prepare initial guess wave functions accurate enough to have sufficiently large overlap with the exact ones are highly desired. Here, we propose a quantum algorithm, based on the addition theorem of angular momentum, to construct a wave function consisting of one configuration state function, which is suitable as the initial guess wave function in QPE-based FCI calculations of open-shell molecules. The proposed quantum algorithm enables us to prepare a wave function consisting of an exponential number of Slater determinants using only a polynomial number of quantum operations.

  4. Basic Principles of Electrical Network Reliability Optimization in Liberalised Electricity Market

    NASA Astrophysics Data System (ADS)

    Oleinikova, I.; Krishans, Z.; Mutule, A.

    2008-01-01

    The authors propose to select long-term solutions to the reliability problems of electrical networks in the stage of development planning. The guide lines or basic principles of such optimization are: 1) its dynamical nature; 2) development sustainability; 3) integrated solution of the problems of network development and electricity supply reliability; 4) consideration of information uncertainty; 5) concurrent consideration of the network and generation development problems; 6) application of specialized information technologies; 7) definition of requirements for independent electricity producers. In the article, the major aspects of liberalized electricity market, its functions and tasks are reviewed, with emphasis placed on the optimization of electrical network development as a significant component of sustainable management of power systems.

  5. Reliable Transition State Searches Integrated with the Growing String Method.

    PubMed

    Zimmerman, Paul

    2013-07-09

    The growing string method (GSM) is highly useful for locating reaction paths connecting two molecular intermediates. GSM has often been used in a two-step procedure to locate exact transition states (TS), where GSM creates a quality initial structure for a local TS search. This procedure and others like it, however, do not always converge to the desired transition state because the local search is sensitive to the quality of the initial guess. This article describes an integrated technique for simultaneous reaction path and exact transition state search. This is achieved by implementing an eigenvector following optimization algorithm in internal coordinates with Hessian update techniques. After partial convergence of the string, an exact saddle point search begins under the constraint that the maximized eigenmode of the TS node Hessian has significant overlap with the string tangent near the TS. Subsequent optimization maintains connectivity of the string to the TS as well as locks in the TS direction, all but eliminating the possibility that the local search leads to the wrong TS. To verify the robustness of this approach, reaction paths and TSs are found for a benchmark set of more than 100 elementary reactions.
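
The overlap constraint described above can be sketched numerically (a hypothetical 3-D model Hessian, not GSM itself): the eigenmode being maximized in the saddle-point search is checked for alignment with the string tangent near the TS.

```python
import numpy as np

def mode_tangent_overlap(hessian, tangent):
    """|<v_min | tangent>| for the lowest-eigenvalue mode of the Hessian."""
    vals, vecs = np.linalg.eigh(hessian)
    v_min = vecs[:, 0]                     # eigh sorts eigenvalues ascending
    t = tangent / np.linalg.norm(tangent)
    return float(abs(np.dot(v_min, t)))

H = np.diag([-1.0, 2.0, 3.0])              # one negative curvature, along x
tangent = np.array([0.9, 0.1, 0.0])        # string tangent roughly along x
overlap = mode_tangent_overlap(H, tangent)
accept = overlap > 0.5                     # illustrative threshold, not GSM's
```

If the overlap drops, the optimizer is climbing the wrong mode, which is precisely the failure mode the integrated search guards against.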

  6. Determination of elastic moduli from measured acoustic velocities.

    PubMed

    Brown, J Michael

    2018-06-01

    Methods are evaluated for the solution of the inverse problem associated with determination of elastic moduli for crystals of arbitrary symmetry from elastic wave velocities measured in many crystallographic directions. A package of MATLAB functions provides a robust and flexible environment for analysis of ultrasonic, Brillouin, or Impulsive Stimulated Light Scattering datasets. Three inverse algorithms are considered: the gradient-based methods of Levenberg-Marquardt and Backus-Gilbert, and a non-gradient-based (Nelder-Mead) simplex approach. Several data types are considered: body wave velocities alone, surface wave velocities plus a side constraint on X-ray-diffraction-based axis compressibilities, or joint body and surface wave velocities. The numerical algorithms are validated through comparisons with prior published results and through analysis of synthetic datasets. Although all approaches succeed in finding low-misfit solutions, the Levenberg-Marquardt method consistently demonstrates effectiveness and computational efficiency. However, linearized gradient-based methods, when applied to a strongly non-linear problem, may not adequately converge to the global minimum. The simplex method, while slower, is less susceptible to being trapped in local misfit minima. A "multi-start" strategy (initiate searches from more than one initial guess) provides better assurance that global minima have been located. Numerical estimates of parameter uncertainties based on Monte Carlo simulations are compared to formal uncertainties based on covariance calculations.
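
The multi-start strategy mentioned above can be sketched as follows; the two-minima objective, the crude derivative-free local search, and the starting points are all illustrative stand-ins for the paper's velocity inversion.

```python
def misfit(x):
    """Two-minima toy objective: global minimum at x = 2, local near x = -1."""
    return (x - 2.0) ** 2 * ((x + 1.0) ** 2 + 0.1)

def local_min(f, x0, step=0.5, tol=1e-8):
    """Crude derivative-free descent (a stand-in for a simplex search)."""
    x, fx = x0, f(x0)
    while step > tol:
        moved = False
        for d in (step, -step):
            if f(x + d) < fx:
                x, fx = x + d, f(x + d)
                moved = True
                break
        if not moved:
            step *= 0.5                 # refine once no step improves
    return x, fx

starts = [-3.0, -1.0, 0.5, 3.5]         # several initial guesses
results = [local_min(misfit, s) for s in starts]
best_x, best_f = min(results, key=lambda r: r[1])
```

Searches started near -1 converge to the local minimum; only the multi-start comparison across all runs recovers the global minimum at x = 2, which is the assurance the strategy provides.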

  7. Determination of hyporheic travel time distributions and other parameters from concurrent conservative and reactive tracer tests by local-in-global optimization

    NASA Astrophysics Data System (ADS)

    Knapp, Julia L. A.; Cirpka, Olaf A.

    2017-06-01

    The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free models gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, whose application is hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-Chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
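
The local-in-global idea can be sketched on a toy problem (the model, noise level, and step sizes are illustrative, not the paper's): an outer random-walk Metropolis search over a nonlinear decay rate, with the linear amplitude solved exactly by least squares inside each proposal.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)
y_obs = 2.0 * np.exp(-0.7 * t)               # synthetic, noise-free data

def inner_fit(k):
    """Closed-form least-squares amplitude and misfit for a fixed rate k."""
    g = np.exp(-k * t)
    a = float(g @ y_obs / (g @ g))           # nested local (linear) solve
    return a, float(np.sum((a * g - y_obs) ** 2))

k = 0.2
_, sse = inner_fit(k)
best_k, best_sse = k, sse
for _ in range(2000):
    k_prop = k + 0.05 * rng.standard_normal()    # random-walk proposal
    if k_prop <= 0:
        continue
    _, sse_prop = inner_fit(k_prop)
    # Metropolis acceptance with a Gaussian likelihood, sigma = 0.05
    if rng.random() < np.exp(min(0.0, (sse - sse_prop) / (2 * 0.05**2))):
        k, sse = k_prop, sse_prop
        if sse < best_sse:
            best_k, best_sse = k, sse
```

Solving the linear part exactly inside each proposal shrinks the dimension the global sampler must explore, which is the point of nesting the local method in the global one.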

  8. Quantum chromodynamics near the confinement limit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quigg, C.

    1985-09-01

    These nine lectures deal at an elementary level with the strong interaction between quarks and its implications for the structure of hadrons. Quarkonium systems are studied as a means for measuring the interquark interaction. This is presumably (part of) the answer a solution to QCD must yield, if it is indeed the correct theory of the strong interactions. Some elements of QCD are reviewed, and metaphors for QCD as a confining theory are introduced. The 1/N expansion is summarized as a way of guessing the consequences of QCD for hadron physics. Lattice gauge theory is developed as a means for going beyond perturbation theory in the solution of QCD. The correspondence between statistical mechanics, quantum mechanics, and field theory is made, and simple spin systems are formulated on the lattice. The lattice analog of local gauge invariance is developed, and analytic methods for solving lattice gauge theory are considered. The strong-coupling expansion indicates the existence of a confining phase, and the renormalization group provides a means for recovering the consequences of continuum field theory. Finally, Monte Carlo simulations of lattice theories give evidence for the phase structure of gauge theories, yield an estimate for the string tension characterizing the interquark force, and provide an approximate description of the quarkonium potential in encouragingly good agreement with what is known from experiment.

  9. Glenn-ht/bem Conjugate Heat Transfer Solver for Large-scale Turbomachinery Models

    NASA Technical Reports Server (NTRS)

    Divo, E.; Steinthorsson, E.; Rodriquez, F.; Kassab, A. J.; Kapat, J. S.; Heidmann, James D. (Technical Monitor)

    2003-01-01

    A coupled Boundary Element/Finite Volume Method temperature-forward/flux-back algorithm is developed for conjugate heat transfer (CHT) applications. A loosely coupled strategy is adopted, with each field solution providing boundary conditions for the other in an iteration seeking continuity of temperature and heat flux at the fluid-solid interface. The NASA Glenn Navier-Stokes code Glenn-HT is coupled to a 3-D BEM steady-state heat conduction code developed at the University of Central Florida. Results from CHT simulation of a 3-D film-cooled blade section are presented and compared with those computed by a two-temperature approach. Also presented are current developments of an iterative domain decomposition strategy accommodating large numbers of unknowns in the BEM. The blade is artificially sub-sectioned in the span-wise direction, 3-D BEM solutions are obtained in the subdomains, and interface temperatures are averaged symmetrically when the flux is updated, while the fluxes are averaged anti-symmetrically to maintain continuity of heat flux when the temperatures are updated. An initial guess for interface temperatures uses a physically-based 1-D conduction argument to provide an effective starting point and significantly reduce the iteration count. 2-D and 3-D results show the process converges efficiently and offers substantial computational and storage savings. Future developments include a parallel multi-grid implementation of the approach under MPI for computation on PC clusters.

  10. Gravity-Assist Trajectories to the Ice Giants: An Automated Method to Catalog Mass- Or Time-Optimal Solutions

    NASA Technical Reports Server (NTRS)

    Hughes, Kyle M.; Knittel, Jeremy M.; Englander, Jacob A.

    2017-01-01

    This work presents an automated method of calculating mass (or time) optimal gravity-assist trajectories without a priori knowledge of the flyby-body combination. Since gravity assists are particularly crucial for reaching the outer Solar System, we use the Ice Giants, Uranus and Neptune, as example destinations for this work. Catalogs are also provided that list the most attractive trajectories found over launch dates ranging from 2024 to 2038. The tool developed to implement this method, called the Python EMTG Automated Trade Study Application (PEATSA), iteratively runs the Evolutionary Mission Trajectory Generator (EMTG), a NASA Goddard Space Flight Center in-house trajectory optimization tool. EMTG finds gravity-assist trajectories with impulsive maneuvers using a multiple-shooting structure along with stochastic methods (such as monotonic basin hopping) and may be run with or without an initial guess provided. PEATSA runs instances of EMTG in parallel over a grid of launch dates. After each set of runs completes, the best results within a neighborhood of launch dates are used to seed all other cases in that neighborhood, allowing the solutions across the range of launch dates to improve over each iteration. The results here are compared against trajectories found using a grid-search technique, and PEATSA is found to outperform the grid-search results for most launch years considered.
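
The neighborhood-seeding loop described above can be sketched with an invented cost model (this is not EMTG/PEATSA; all numbers are made up): each launch date keeps its best-known solution, and each pass re-seeds every date with the best result found within its neighborhood before a "local improvement" step.

```python
import random

random.seed(1)
dates = list(range(15))                            # grid of launch dates
true_best = {d: 10.0 + 0.1 * d for d in dates}     # unknown optimum per date
best = {d: float("inf") for d in dates}            # best cost found so far

def local_run(date, seed_cost):
    """Stand-in for one optimizer run."""
    if seed_cost == float("inf"):
        # Cold start: only occasionally finds a (poor) feasible solution.
        return random.choice([float("inf")] * 4 + [true_best[date] + 2.0])
    # Seeded run: improves a little on the seed, bounded by the optimum.
    return max(true_best[date], seed_cost - 0.5)

for _ in range(20):                                # PEATSA-style iterations
    seeds = {d: min(best[n] for n in dates if abs(n - d) <= 2) for d in dates}
    for d in dates:
        best[d] = min(best[d], local_run(d, seeds[d]))
```

Once any date finds a feasible trajectory, the solution spreads through overlapping neighborhoods and every date's cost ratchets toward its optimum, which is the improvement-per-iteration behavior the abstract describes.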

  12. Towards New Metrics for High-Performance Computing Resilience

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hukerikar, Saurabh; Ashraf, Rizwan A; Engelmann, Christian

    Ensuring the reliability of applications is becoming an increasingly important challenge as high-performance computing (HPC) systems experience an ever-growing number of faults, errors and failures. While the HPC community has made substantial progress in developing various resilience solutions, it continues to rely on platform-based metrics to quantify application resiliency improvements. The resilience of an HPC application is concerned with the reliability of the application outcome as well as the fault handling efficiency. To understand the scope of impact, effective coverage and performance efficiency of existing and emerging resilience solutions, there is a need for new metrics. In this paper, we develop new ways to quantify resilience that consider both the reliability and the performance characteristics of the solutions from the perspective of HPC applications. As HPC systems continue to evolve in terms of scale and complexity, it is expected that applications will experience various types of faults, errors and failures, which will require applications to apply multiple resilience solutions across the system stack. The proposed metrics are intended to be useful for understanding the combined impact of these solutions on an application's ability to produce correct results and to evaluate their overall impact on an application's performance in the presence of various modes of faults.

  13. The impact of early visual cortex transcranial magnetic stimulation on visual working memory precision and guess rate.

    PubMed

    Rademaker, Rosanne L; van de Ven, Vincent G; Tong, Frank; Sack, Alexander T

    2017-01-01

    Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise.
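
The mixture-model fit described above can be sketched as follows; simulated responses and a coarse grid search stand in for a proper EM fit. Errors are modeled as a mixture of a von Mises distribution centered on the target (precision, via concentration kappa) and a uniform distribution (guess rate).

```python
import numpy as np

rng = np.random.default_rng(42)
n, true_guess, true_kappa = 2000, 0.25, 8.0
is_guess = rng.random(n) < true_guess
errors = np.where(is_guess,
                  rng.uniform(-np.pi, np.pi, n),     # guesses: uniform
                  rng.vonmises(0.0, true_kappa, n))  # memory: von Mises

def neg_log_lik(kappa, g):
    """Negative log-likelihood of the von Mises + uniform mixture."""
    von_mises = np.exp(kappa * np.cos(errors)) / (2 * np.pi * np.i0(kappa))
    mix = (1 - g) * von_mises + g / (2 * np.pi)
    return -np.sum(np.log(mix))

# Coarse grid search in place of an EM or gradient fit:
kappas = np.linspace(1.0, 20.0, 39)
gs = np.linspace(0.0, 0.9, 46)
k_hat, g_hat = min(((k, g) for k in kappas for g in gs),
                   key=lambda p: neg_log_lik(*p))
```

A higher fitted kappa means more precise memory; a higher fitted g means more random guesses, the two quantities the TMS manipulations dissociate.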

  14. The impact of early visual cortex transcranial magnetic stimulation on visual working memory precision and guess rate

    PubMed Central

    van de Ven, Vincent G.; Tong, Frank; Sack, Alexander T.

    2017-01-01

    Neuroimaging studies have demonstrated that activity patterns in early visual areas predict stimulus properties actively maintained in visual working memory. Yet, the mechanisms by which such information is represented remain largely unknown. In this study, observers remembered the orientations of 4 briefly presented gratings, one in each quadrant of the visual field. A 10Hz Transcranial Magnetic Stimulation (TMS) triplet was applied directly at stimulus offset, or midway through a 2-second delay, targeting early visual cortex corresponding retinotopically to a sample item in the lower hemifield. Memory for one of the four gratings was probed at random, and participants reported this orientation via method of adjustment. Recall errors were smaller when the visual field location targeted by TMS overlapped with that of the cued memory item, compared to errors for stimuli probed diagonally to TMS. This implied topographic storage of orientation information, and a memory-enhancing effect at the targeted location. Furthermore, early pulses impaired performance at all four locations, compared to late pulses. Next, response errors were fit empirically using a mixture model to characterize memory precision and guess rates. Memory was more precise for items proximal to the pulse location, irrespective of pulse timing. Guesses were more probable with early TMS pulses, regardless of stimulus location. Thus, while TMS administered at the offset of the stimulus array might disrupt early-phase consolidation in a non-topographic manner, TMS also boosts the precise representation of an item at its targeted retinotopic location, possibly by increasing attentional resources or by injecting a beneficial amount of noise. PMID:28384347

  15. Global emissions of terpenoid VOCs from terrestrial vegetation in the last millennium.

    PubMed

    Acosta Navarro, J C; Smolander, S; Struthers, H; Zorita, E; Ekman, A M L; Kaplan, J O; Guenther, A; Arneth, A; Riipinen, I

    2014-06-16

    We investigated the millennial variability (1000 A.D.-2000 A.D.) of global biogenic volatile organic compound (BVOC) emissions by using two independent numerical models: The Model of Emissions of Gases and Aerosols from Nature (MEGAN), for isoprene, monoterpene, and sesquiterpene, and Lund-Potsdam-Jena-General Ecosystem Simulator (LPJ-GUESS), for isoprene and monoterpenes. We found the millennial trends of global isoprene emissions to be mostly affected by land cover and atmospheric carbon dioxide changes, whereas monoterpene and sesquiterpene emission trends were dominated by temperature change. Isoprene emissions declined substantially in regions with large and rapid land cover change. In addition, isoprene emission sensitivity to drought proved to have significant short-term global effects. By the end of the past millennium, MEGAN isoprene emissions were 634 TgC yr⁻¹ (13% and 19% less than during 1750-1850 and 1000-1200, respectively), and LPJ-GUESS emissions were 323 TgC yr⁻¹ (15% and 20% less than during 1750-1850 and 1000-1200, respectively). Monoterpene emissions were 89 TgC yr⁻¹ (10% and 6% higher than during 1750-1850 and 1000-1200, respectively) in MEGAN, and 24 TgC yr⁻¹ (2% higher and 5% less than during 1750-1850 and 1000-1200, respectively) in LPJ-GUESS. MEGAN sesquiterpene emissions were 36 TgC yr⁻¹ (10% and 4% higher than during 1750-1850 and 1000-1200, respectively). Although both models capture similar emission trends, the magnitudes of the emissions differ. This highlights the importance of building better constraints on VOC emissions from terrestrial vegetation.

  16. An algebraic equation solution process formulated in anticipation of banded linear equations.

    DOT National Transportation Integrated Search

    1971-01-01

    A general method for the solution of large, sparsely banded, positive-definite, coefficient matrices is presented. The goal in developing the method was to produce an efficient and reliable solution process and to provide the user-programmer with a p...

  17. A new approach for improving reliability of personal navigation devices under harsh GNSS signal conditions.

    PubMed

    Dhital, Anup; Bancroft, Jared B; Lachapelle, Gérard

    2013-11-07

    In natural and urban canyon environments, Global Navigation Satellite System (GNSS) signals suffer from various challenges such as signal multipath, limited or lack of signal availability and poor geometry. Inertial sensors are often employed to improve the solution continuity under poor GNSS signal quality and availability conditions. Various fault detection schemes have been proposed in the literature to detect and remove biased GNSS measurements to obtain a more reliable navigation solution. However, many of these methods are found to be sub-optimal and often lead to unavailability of reliability measures, mostly because of the improper characterization of the measurement errors. A robust filtering architecture is thus proposed which assumes a heavy-tailed distribution for the measurement errors. Moreover, the proposed filter is capable of adapting to the changing GNSS signal conditions such as when moving from open sky conditions to deep canyons. Results obtained by processing data collected in various GNSS challenged environments show that the proposed scheme provides a robust navigation solution without having to excessively reject usable measurements. The tests reported herein show improvements of nearly 15% and 80% for position accuracy and reliability, respectively, when applying the above approach.
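
The heavy-tailed treatment of measurement errors can be illustrated with a Huber-style iteratively reweighted estimate (a generic sketch with invented numbers, not the paper's filter): large residuals are down-weighted rather than rejected outright, so a multipath-contaminated measurement degrades the solution only slightly.

```python
import numpy as np

def robust_estimate(z, k=1.345, iters=20):
    """Huber-weighted IRLS estimate of a scalar state from redundant data."""
    x = float(np.median(z))
    for _ in range(iters):
        r = z - x
        mad = float(np.median(np.abs(r - np.median(r))))
        scale = max(1.4826 * mad, 1e-9)        # robust noise-scale estimate
        # Weight 1 inside the Huber threshold, shrinking beyond it:
        w = np.minimum(1.0, k * scale / np.maximum(np.abs(r), 1e-12))
        x = float(np.sum(w * z) / np.sum(w))
    return x

# Nine consistent range-like measurements plus one multipath-style outlier:
z = np.array([100.1, 99.9, 100.0, 100.2, 99.8,
              100.05, 99.95, 100.1, 99.9, 130.0])
x_robust = robust_estimate(z)
x_naive = float(z.mean())
```

The naive mean is dragged 3 units toward the outlier, while the robust estimate stays with the consistent cluster without discarding any measurement entirely.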

  18. A New Approach for Improving Reliability of Personal Navigation Devices under Harsh GNSS Signal Conditions

    PubMed Central

    Dhital, Anup; Bancroft, Jared B.; Lachapelle, Gérard

    2013-01-01

    In natural and urban canyon environments, Global Navigation Satellite System (GNSS) signals suffer from various challenges such as signal multipath, limited or lack of signal availability and poor geometry. Inertial sensors are often employed to improve the solution continuity under poor GNSS signal quality and availability conditions. Various fault detection schemes have been proposed in the literature to detect and remove biased GNSS measurements to obtain a more reliable navigation solution. However, many of these methods are found to be sub-optimal and often lead to unavailability of reliability measures, mostly because of the improper characterization of the measurement errors. A robust filtering architecture is thus proposed which assumes a heavy-tailed distribution for the measurement errors. Moreover, the proposed filter is capable of adapting to the changing GNSS signal conditions such as when moving from open sky conditions to deep canyons. Results obtained by processing data collected in various GNSS challenged environments show that the proposed scheme provides a robust navigation solution without having to excessively reject usable measurements. The tests reported herein show improvements of nearly 15% and 80% for position accuracy and reliability, respectively, when applying the above approach. PMID:24212120

  19. GAPPARD: a computationally efficient method of approximating gap-scale disturbance in vegetation models

    NASA Astrophysics Data System (ADS)

    Scherstjanoi, M.; Kaplan, J. O.; Thürig, E.; Lischke, H.

    2013-02-01

    Models of vegetation dynamics that are designed for application at spatial scales larger than individual forest gaps suffer from several limitations. Typically, either a population average approximation is used that results in unrealistic tree allometry and forest stand structure, or models have a high computational demand because they need to simulate both a series of age-based cohorts and a number of replicate patches to account for stochastic gap-scale disturbances. The detail required by the latter method increases the number of calculations by two to three orders of magnitude compared to the less realistic population average approach. In an effort to increase the efficiency of dynamic vegetation models without sacrificing realism, and to explore patterns of spatial scaling in forests, we developed a new method for simulating stand-replacing disturbances that is both accurate and 10-50x faster than approaches that use replicate patches. The GAPPARD (approximating GAP model results with a Probabilistic Approach to account for stand Replacing Disturbances) method works by postprocessing the output of deterministic, undisturbed simulations of a cohort-based vegetation model by deriving the distribution of patch ages at any point in time on the basis of a disturbance probability. With this distribution, the expected value of any output variable can be calculated from the output values of the deterministic undisturbed run at the time corresponding to the patch age. To account for temporal changes in model forcing, e.g., as a result of climate change, GAPPARD performs a series of deterministic simulations and interpolates between the results in the postprocessing step. We integrated the GAPPARD method in the forest models LPJ-GUESS and TreeM-LPJ, and evaluated these in a series of simulations along an altitudinal transect of an inner-alpine valley. 
With GAPPARD applied to LPJ-GUESS, results were not significantly different from the output of the original LPJ-GUESS model using 100 replicate patches, but simulation time was reduced by approximately a factor of 10. Our new method is therefore highly suited to rapidly approximating LPJ-GUESS results; it provides the opportunity for future studies over large spatial domains, and allows easier parameterization of tree species, faster identification of areas with interesting simulation results, and comparisons with large-scale datasets and forest models.
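
The post-processing idea behind GAPPARD can be sketched as follows (the disturbance probability and biomass curve are invented): with an annual stand-replacing disturbance probability p, patch age follows a geometric distribution, and the expected value of any output variable is a weighted average over a single undisturbed deterministic run.

```python
import numpy as np

p = 0.01                                   # annual disturbance probability
ages = np.arange(1000)                     # patch ages considered (years)
weights = p * (1 - p) ** ages              # P(patch age = a), geometric
weights /= weights.sum()                   # renormalize the truncated tail

# Deterministic undisturbed run: a made-up biomass recovery curve.
biomass_undisturbed = 200.0 * (1 - np.exp(-ages / 80.0))

# GAPPARD-style expectation over the patch-age distribution:
biomass_expected = float(weights @ biomass_undisturbed)
```

A single undisturbed trajectory thus replaces many stochastic replicate patches; the expected landscape biomass sits well below the old-growth value because young, recently disturbed patches always occupy part of the landscape.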

  20. Crowdsourcing Participatory Evaluation of Medical Pictograms Using Amazon Mechanical Turk

    PubMed Central

    Willis, Matt; Sun, Peiyuan; Wang, Jun

    2013-01-01

    Background Consumer and patient participation proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive approach that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, who are usually referred to as the “turkers”. Objective To answer two research questions: (1) Is the turkers’ collective effort effective for identifying design problems in medical pictograms? and (2) Do the turkers’ demographic characteristics affect their performance in medical pictogram comprehension? Methods We designed a Web-based survey (open-ended tests) to ask 100 US turkers to type in their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the turkers’ guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured by the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis on the turkers’ interpretations to identify misunderstandings and assess whether the misunderstandings were common. We also conducted a statistical analysis to examine the relationship between turkers’ demographic characteristics and their pictogram comprehension performance. Results The survey was completed within 3 days of our posting the task to the MTurk, and the collected data are publicly available in the multimedia appendix for download. The comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. The comprehensibility scores of 10 pictograms were strongly correlated to the scores of the same pictograms reported in another study that used oral response–based open-ended testing with local people. The turkers’ misinterpretations shared common errors that exposed design problems in the pictograms. 
Participant performance was positively correlated with their educational level. Conclusions The results confirmed that crowdsourcing can be used as an effective and inexpensive approach for participatory evaluation of medical pictograms. Through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Since low-literate people are underrepresented in the turker population, further investigation is needed to examine to what extent turkers’ misunderstandings overlap with those elicited from low-literate people. PMID:23732572
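
The scoring rule described above is simple enough to state in code (the response counts here are invented for illustration):

```python
def comprehensibility(correct, partial, total):
    """Percent correct, counting each partially correct guess as 0.5."""
    return 100.0 * (correct + 0.5 * partial) / total

# E.g., 60 correct and 25 partially correct guesses out of 100 responses:
score = comprehensibility(correct=60, partial=25, total=100)   # -> 72.5
```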

  1. Crowdsourcing participatory evaluation of medical pictograms using Amazon Mechanical Turk.

    PubMed

    Yu, Bei; Willis, Matt; Sun, Peiyuan; Wang, Jun

    2013-06-03

    Consumer and patient participation proved to be an effective approach for medical pictogram design, but it can be costly and time-consuming. We proposed and evaluated an inexpensive approach that crowdsourced the pictogram evaluation task to Amazon Mechanical Turk (MTurk) workers, who are usually referred to as the "turkers". To answer two research questions: (1) Is the turkers' collective effort effective for identifying design problems in medical pictograms? and (2) Do the turkers' demographic characteristics affect their performance in medical pictogram comprehension? We designed a Web-based survey (open-ended tests) to ask 100 US turkers to type in their guesses of the meaning of 20 US pharmacopeial pictograms. Two judges independently coded the turkers' guesses into four categories: correct, partially correct, wrong, and completely wrong. The comprehensibility of a pictogram was measured by the percentage of correct guesses, with each partially correct guess counted as 0.5 correct. We then conducted a content analysis on the turkers' interpretations to identify misunderstandings and assess whether the misunderstandings were common. We also conducted a statistical analysis to examine the relationship between turkers' demographic characteristics and their pictogram comprehension performance. The survey was completed within 3 days of our posting the task to the MTurk, and the collected data are publicly available in the multimedia appendix for download. The comprehensibility for the 20 tested pictograms ranged from 45% to 98%, with an average of 72.5%. The comprehensibility scores of 10 pictograms were strongly correlated to the scores of the same pictograms reported in another study that used oral response-based open-ended testing with local people. The turkers' misinterpretations shared common errors that exposed design problems in the pictograms. Participant performance was positively correlated with their educational level. 
The results confirmed that crowdsourcing can be used as an effective and inexpensive approach for participatory evaluation of medical pictograms. Through Web-based open-ended testing, the crowd can effectively identify problems in pictogram designs. The results also confirmed that education has a significant effect on the comprehension of medical pictograms. Since low-literate people are underrepresented in the turker population, further investigation is needed to examine to what extent turkers' misunderstandings overlap with those elicited from low-literate people.

  2. Exploration of the (Interrater) Reliability and Latent Factor Structure of the Alcohol Use Disorders Identification Test (AUDIT) and the Drug Use Disorders Identification Test (DUDIT) in a Sample of Dutch Probationers.

    PubMed

    Hildebrand, Martin; Noteborn, Mirthe G C

    2015-01-01

    The use of brief, reliable, valid, and practical measures of substance use is critical for conducting individual (risk and need) assessments in probation practice. In this exploratory study, the basic psychometric properties of the Alcohol Use Disorders Identification Test (AUDIT) and the Drug Use Disorders Identification Test (DUDIT) are evaluated. The instruments were administered as an oral interview instead of a self-report questionnaire. The sample comprised 383 offenders (339 men, 44 women). A subset of 56 offenders (49 men, 7 women) participated in the interrater reliability study. Data collection took place between September 2011 and November 2012. Overall, both instruments have acceptable levels of interrater reliability for total scores and acceptable to good interrater reliabilities for most of the individual items. Confirmatory factor analyses (CFA) indicated that the a priori one-, two- and three-factor solutions for the AUDIT did not fit the observed data very well. Principal axis factoring (PAF) supported a two-factor solution for the AUDIT that included a level of alcohol consumption/consequences factor (Factor 1) and a dependence factor (Factor 2), with both factors explaining substantial variance in AUDIT scores. For the DUDIT, CFA and PAF suggest that a one-factor solution is the preferred model (accounting for 62.61% of total variance). The Dutch language versions of the AUDIT and the DUDIT are reliable screening instruments for use with probationers and both instruments can be reliably administered by probation officers in probation practice. However, future research on concurrent and predictive validity is warranted.

  3. Reliable design of a closed loop supply chain network under uncertainty: An interval fuzzy possibilistic chance-constrained model

    NASA Astrophysics Data System (ADS)

    Vahdani, Behnam; Tavakkoli-Moghaddam, Reza; Jolai, Fariborz; Baboli, Arman

    2013-06-01

    This article seeks to offer a systematic approach to establishing a reliable network of facilities in closed loop supply chains (CLSCs) under uncertainty. The facilities located by this approach concurrently satisfy both traditional objective functions and reliability considerations in CLSC network design. To address this problem, a novel mathematical model is developed that integrates the network design decisions in both the forward and reverse supply chain networks, and that utilizes an effective reliability approach to find a robust network design. To make the results more realistic, a CLSC case study in the iron and steel industry has been explored. The considered CLSC is multi-echelon, multi-facility, multi-product and multi-supplier; furthermore, multiple facilities exist in the reverse logistics network, leading to high complexity. Since the collection centres play an important role in this network, the reliability of these facilities is taken into consideration. To solve the proposed model, a novel interactive hybrid solution methodology is developed by combining a number of efficient solution approaches from the recent literature: a bi-objective interval fuzzy possibilistic chance-constrained mixed-integer linear program (BOIFPCCMILP). Finally, computational experiments demonstrate the applicability and suitability of the proposed model in a supply chain environment and help decision makers facilitate their analyses.

  4. Modelling Holocene peatland and permafrost dynamics with the LPJ-GUESS dynamic vegetation model

    NASA Astrophysics Data System (ADS)

    Chaudhary, Nitin; Miller, Paul A.; Smith, Benjamin

    2016-04-01

    Dynamic global vegetation models (DGVMs) are an important platform for studying past, present and future vegetation patterns together with the associated biogeochemical cycles and climate feedbacks (e.g. Sitch et al. 2008, Smith et al. 2001). However, very few attempts have been made to simulate peatlands using DGVMs (Kleinen et al. 2012, Tang et al. 2015, Wania et al. 2009a). In the present study, we have improved the peatland dynamics in the state-of-the-art dynamic vegetation model LPJ-GUESS in order to understand the long-term evolution of northern peatland ecosystems and to assess the effect of a changing climate on the peatland carbon balance. We combined a dynamic multi-layer approach (Frolking et al. 2010, Hilbert et al. 2000) with soil freezing-thawing functionality (Ekici et al. 2015, Wania et al. 2009a) in LPJ-GUESS. The new model is named LPJ-GUESS Peatland (LPJ-GUESS-P) (Chaudhary et al., in prep.). The model was calibrated and tested at the sub-arctic Stordalen mire in Sweden, where it was able to capture the reported long-term vegetation dynamics and peat accumulation patterns (Kokfelt et al. 2010). For evaluation, the model was run at 13 grid points along a north-to-south transect across Europe, and the modelled peat accumulation values were found to be consistent with published data for each grid point (Loisel et al. 2014). Finally, a series of additional experiments was carried out to investigate the vulnerability of high-latitude peatlands to climate change. We find that the Stordalen mire will sequester more carbon in the future owing to milder and wetter climate conditions, longer growing seasons, and the carbon fertilization effect. References: - Chaudhary et al. (in prep.). Modelling Holocene peatland and permafrost dynamics with the LPJ-GUESS dynamic vegetation model. - Ekici A, et al. 2015. Site-level model intercomparison of high latitude and high altitude soil thermal dynamics in tundra and barren landscapes.
The Cryosphere 9: 1343-1361. - Frolking S, Roulet NT, Tuittila E, Bubier JL, Quillet A, Talbot J, Richard PJH. 2010. A new model of Holocene peatland net primary production, decomposition, water balance, and peat accumulation. Earth Syst. Dynam., 1, 1-21, doi:10.5194/esd-1-1-2010, 2010. - Hilbert DW, Roulet N, Moore T. 2000. Modelling and analysis of peatlands as dynamical systems. Journal of Ecology 88: 230-242. - Kleinen T, Brovkin V, Schuldt RJ. 2012. A dynamic model of wetland extent and peat accumulation: results for the Holocene. Biogeosciences 9: 235-248. - Kokfelt U, Reuss N, Struyf E, Sonesson M, Rundgren M, Skog G, Rosen P, Hammarlund D. 2010. Wetland development, permafrost history and nutrient cycling inferred from late Holocene peat and lake sediment records in subarctic Sweden. Journal of Paleolimnology 44: 327-342. - Loisel J, et al. 2014. A database and synthesis of northern peatland soil properties and Holocene carbon and nitrogen accumulation. Holocene 24: 1028-1042. - Sitch S, et al. 2008. Evaluation of the terrestrial carbon cycle, future plant geography and climate-carbon cycle feedbacks using five Dynamic Global Vegetation Models (DGVMs). Global Change Biology 14: 2015-2039. - Smith B, Prentice IC, Sykes MT. 2001. Representation of vegetation dynamics in the modelling of terrestrial ecosystems: comparing two contrasting approaches within European climate space. Global Ecology and Biogeography 10: 621-637. - Tang J, et al. 2015. Carbon budget estimation of a subarctic catchment using a dynamic ecosystem model at high spatial resolution. Biogeosciences 12: 2791-2808. - Wania R, Ross I, Prentice IC. 2009a. Integrating peatlands and permafrost into a dynamic global vegetation model: 1. Evaluation and sensitivity of physical land surface processes. Global Biogeochemical Cycles 23.

  5. Decision-theoretic methodology for reliability and risk allocation in nuclear power plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, N.Z.; Papazoglou, I.A.; Bari, R.A.

    1985-01-01

    This paper describes a methodology for allocating reliability and risk to various reactor systems, subsystems, components, operations, and structures in a consistent manner, based on a set of global safety criteria which are not rigid. The problem is formulated as a multiattribute decision analysis paradigm; the multiobjective optimization, which is performed on a PRA model and reliability cost functions, serves as the guiding principle for reliability and risk allocation. The concept of noninferiority is used in the multiobjective optimization problem, and finding the noninferior solution set is the main theme of the current approach. The assessment of the decision maker's preferences could then be performed more easily on the noninferior solution set. Some results of applying the methodology to a nontrivial risk model are provided, and several outstanding issues, such as generic allocation and preference assessment, are discussed.
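
    The noninferiority concept used above can be made concrete: an allocation is noninferior (non-dominated) if no other candidate is at least as good in every objective and strictly better in at least one. A minimal sketch for minimized objectives, with invented (risk, cost) values rather than anything from the paper's PRA model:

```python
def noninferior(points):
    """Return the noninferior (non-dominated) subset of candidate
    allocations, assuming every objective is to be minimized."""
    def dominates(a, b):
        # a dominates b: no worse in all objectives, strictly better in one
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Candidate allocations as (risk, cost) pairs; values are illustrative.
alternatives = [(0.10, 5.0), (0.05, 9.0), (0.08, 6.0), (0.12, 4.0), (0.10, 7.0)]
print(noninferior(alternatives))  # the last point is dominated by the first
```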

  6. Experimental application of OMA solutions on the model of industrial structure

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Mironovs, D.

    2017-10-01

    It is very important, and sometimes even vital, to maintain the reliability of industrial structures. High-quality control during production and structural health monitoring (SHM) during operation provide reliable functioning of large, massive and remote structures such as wind generators, pipelines and power line posts. This paper introduces a complex of technological and methodical solutions for SHM and diagnostics of industrial structures, including those actuated by periodic forces. The solutions were verified on a scaled wind generator model with an integrated system of piezo-film deformation sensors. Simultaneous and multi-patch Operational Modal Analysis (OMA) approaches were implemented as the methodical means for structural diagnostics and monitoring. Specially designed data processing algorithms provide objective evaluation of structural state modification.

  7. Inverse modelling of radionuclide release rates using gamma dose rate observations

    NASA Astrophysics Data System (ADS)

    Hamburger, Thomas; Evangeliou, Nikolaos; Stohl, Andreas; von Haustein, Christoph; Thummerer, Severin; Wallner, Christian

    2015-04-01

    Severe accidents in nuclear power plants, such as the historical accident in Chernobyl in 1986 or the more recent disaster at the Fukushima Dai-ichi nuclear power plant in 2011, have drastic impacts on the population and environment. Observations and dispersion modelling of the released radionuclides help to assess the regional impact of such nuclear accidents. Modelling the increase in regional radionuclide activity concentrations that results from a nuclear accident is subject to a multiplicity of uncertainties. One of the most significant is the estimation of the source term, that is, the time-dependent quantification of the spectrum of radionuclides released during the course of the accident. The quantification of the source term may either remain uncertain (e.g. Chernobyl, Devell et al., 1995) or rely on estimates given by the operators of the nuclear power plant; precise measurements are mostly missing due to practical limitations during the accident. The release rates of radionuclides at the accident site can be estimated using inverse modelling (Davoine and Bocquet, 2007). The accuracy of the method depends, among other things, on the availability, reliability, and resolution in time and space of the observations used. Radionuclide activity concentrations are observed on a relatively sparse grid, and the temporal resolution of the available data may be low, on the order of hours or a day. Gamma dose rates, on the other hand, are observed routinely on a much denser grid and with higher temporal resolution, and therefore provide a wider basis for inverse modelling (Saunier et al., 2013). We present a new inversion approach, which combines an atmospheric dispersion model and observations of radionuclide activity concentrations and gamma dose rates to obtain the source term of radionuclides. We use the Lagrangian particle dispersion model FLEXPART (Stohl et al., 1998; Stohl et al., 2005) to model the atmospheric transport of the released radionuclides.
The inversion method uses a Bayesian formulation considering uncertainties in both the a priori source term and the observations (Eckhardt et al., 2008; Stohl et al., 2012). The a priori information on the source term is a first guess; the gamma dose rate observations are used to improve this first guess and to retrieve a reliable source term. The details of the method will be presented at the conference. This work is funded by the Bundesamt für Strahlenschutz BfS, Forschungsvorhaben 3612S60026. References: Davoine, X. and Bocquet, M., Atmos. Chem. Phys., 7, 1549-1564, 2007. Devell, L., et al., OCDE/GD(96)12, 1995. Eckhardt, S., et al., Atmos. Chem. Phys., 8, 3881-3897, 2008. Saunier, O., et al., Atmos. Chem. Phys., 13, 11403-11421, 2013. Stohl, A., et al., Atmos. Environ., 32, 4245-4264, 1998. Stohl, A., et al., Atmos. Chem. Phys., 5, 2461-2474, 2005. Stohl, A., et al., Atmos. Chem. Phys., 12, 2313-2343, 2012.
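
    The Bayesian formulation sketched above balances a first-guess (a priori) source term against the observations. As a toy illustration of that idea only, and not of the FLEXPART-based inversion itself, here is the analytic minimiser of a Tikhonov-style variational cost for a two-parameter source term; the source-receptor matrix H, the observations y, and the uncertainty values are all invented for the example:

```python
def posterior_source(H, y, xb, s_obs, s_b):
    """Analytic minimiser of the variational cost
    J(x) = |Hx - y|^2 / s_obs^2 + |x - xb|^2 / s_b^2
    for a two-parameter source term (normal equations, 2x2 solve)."""
    a = sum(h[0] * h[0] for h in H) / s_obs ** 2 + 1.0 / s_b ** 2
    b = sum(h[0] * h[1] for h in H) / s_obs ** 2
    c = sum(h[1] * h[1] for h in H) / s_obs ** 2 + 1.0 / s_b ** 2
    r0 = sum(h[0] * yi for h, yi in zip(H, y)) / s_obs ** 2 + xb[0] / s_b ** 2
    r1 = sum(h[1] * yi for h, yi in zip(H, y)) / s_obs ** 2 + xb[1] / s_b ** 2
    det = a * c - b * b
    return ((c * r0 - b * r1) / det, (a * r1 - b * r0) / det)

# Invented source-receptor matrix for two release periods and three
# detectors; the observations are consistent with true rates (2, 3),
# and the first guess xb is deliberately poor.
H = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
y = [2.0, 3.0, 5.0]
x = posterior_source(H, y, xb=(1.0, 1.0), s_obs=1.0, s_b=10.0)
print(x)  # close to (2, 3), pulled slightly toward the first guess
```

    Tightening the prior (smaller s_b) pulls the estimate toward the first guess; a loose prior lets the observations dominate, which is the trade-off the inversion tunes.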

  8. Implementing eco friendly highly reliable upload feature using multi 3G service

    NASA Astrophysics Data System (ADS)

    Tanutama, Lukas; Wijaya, Rico

    2017-12-01

    The current trend favours eco-friendly Internet access; in this research, "eco-friendly" is understood as minimum power consumption. The selected devices have low power consumption in operation and essentially none while hibernating in the idle state. For reliability, a router with an internal load-balancing feature improves on our previous work on multi-3G services for broadband lines. Previous studies emphasized accessing and downloading information files from Web servers residing in the public cloud. The demand is not only for speed but for high reliability of access as well: high reliability mitigates both the direct and indirect costs of repeated attempts to upload and download large files, and nomadic and mobile computer users need a viable solution. A solution for downloading information was previously proposed and tested, with promising results. That result is now extended to providing a reliable access line, by means of redundancy and automatic reconfiguration, for uploading and downloading large information files to a Web server in the cloud. The technique takes advantage of the internal load-balancing feature to provision a redundant line acting as a backup. A router able to load-balance across several WAN lines is chosen, with the WAN lines constructed from multiple 3G lines. The router supports Internet access over more than one 3G line, which increases the reliability and availability of the Internet access, as the second line immediately takes over if the first line is disturbed.

  9. Reliability of ionic polymer metallic composite for opto-mechanical applications

    NASA Astrophysics Data System (ADS)

    Yu, Chung-Yi; Su, Guo-Dung J.

    2014-09-01

    Electroactive polymers (EAPs) are capable of exhibiting large shape changes in response to electrical stimulation and can produce large deformations at low applied voltages in actuation applications. The ionic polymer metal composite (IPMC) is a well-known ionic EAP with numerous attractive advantages, such as low electrical energy consumption and light weight. The IPMC actuation mechanism relies on ionic diffusion under an applied voltage gradient, so the type of ionic solution has a large impact on the physical properties of the IPMC. In this paper, reliability tests of IPMCs with non-aqueous ionic solutions are demonstrated. Pt-IPMC with aqueous LiOH solution exhibits the best maximum displacement, but the water in the LiOH solution is electrolyzed because of water's low electrolysis voltage of 1.23 V. To mitigate electrolysis and extend operation time in air, solvents with a high electrolysis voltage and low vapor pressure should be considered. The reliability tests focus on the durability of the IPMC in air; the surface resistance, tip displacement and response time of the IPMC are presented. Further improvements to IPMC fabrication, such as Ag-IPMC, were also developed in this paper.

  10. Simulated Vesta from the South Pole

    NASA Image and Video Library

    2011-03-10

    This image shows the scientists' best guess to date of what the surface of the protoplanet Vesta might look like from the south pole; it incorporates the best available data on dimples and bulges on Vesta from ground-based telescopes and NASA's Hubble Space Telescope.

  11. Eight Cs and a G.

    ERIC Educational Resources Information Center

    Brown, Dorothy F.

    1988-01-01

    A discussion of vocabulary development for intermediate and advanced students preparing for the Australian certification test for Teaching English as a Foreign Language focuses on nine areas: collocations, clines, clusters, cloze procedures, context, consultation or checking, cards, creativity, and guessing. (seven references) (LB)

  12. 75 FR 76962 - Application To Export Electric Energy; MAG Energy Solutions, Inc.

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-10

    ... DEPARTMENT OF ENERGY [OE Docket No. EA-306-A] Application To Export Electric Energy; MAG Energy Solutions, Inc. AGENCY: Office of Electricity Delivery and Energy Reliability, DOE. ACTION: Notice of Application. SUMMARY: MAG Energy Solutions, Inc. (MAG E.S.) has applied to renew its authority to transmit...

  13. The PAWS and STEM reliability analysis programs

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Stevenson, Philip H.

    1988-01-01

    The PAWS and STEM programs are new design/validation tools. They provide a flexible, user-friendly, language-based interface for the input of Markov models describing the behavior of fault-tolerant computer systems, produce exact solutions of the probability of system failure, and give a conservative estimate of the number of significant digits in the solution. PAWS uses a Pade approximation as its solution technique; STEM uses a Taylor series. Both programs can solve numerically stiff models. PAWS and STEM possess complementary properties with regard to their input space, and an additional strength is that they accept input compatible with the SURE program. Used in conjunction with SURE, PAWS and STEM provide a powerful suite of programs for analyzing the reliability of fault-tolerant computer systems.
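
    The Taylor-series solution technique attributed to STEM can be sketched for a toy Markov model. The routine below is an illustration of the general technique only, not PAWS/STEM code (a Pade approximant, as in PAWS, would replace the truncated series), and the failure rate is invented:

```python
import math

def expm_taylor(Q, t, terms=40):
    """Approximate exp(Q*t) for a small Markov generator matrix with a
    truncated Taylor series, in the spirit of STEM's solution technique."""
    n = len(Q)
    A = [[Q[i][j] * t for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]                               # holds A^k / k!
    for k in range(1, terms):
        term = [[sum(term[i][m] * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Two-state model: operational -> failed at rate lam (numbers invented).
lam, t = 1e-4, 10.0
Q = [[-lam, lam], [0.0, 0.0]]
P = expm_taylor(Q, t)
print(P[0][1], 1.0 - math.exp(-lam * t))  # the two agree closely
```

    For this two-state model the failure probability has the closed form 1 - exp(-lam*t), which makes the truncation error directly checkable.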

  14. Silicon Nanophotonics for Many-Core On-Chip Networks

    NASA Astrophysics Data System (ADS)

    Mohamed, Moustafa

    The number of cores in many-core architectures is scaling to unprecedented levels, requiring ever-increasing communication capacity. Traditionally, architects follow the path of higher throughput at the expense of latency; this trend has become problematic for performance in many-core architectures. Moreover, power consumption is increasing with system scaling, mandating nontraditional solutions. Nanophotonics can address these problems, offering benefits on the three frontiers of many-core processor design: latency, bandwidth, and power. Nanophotonics leverages circuit-switched flow control, allowing low latency; in addition, the power consumption of optical links is significantly lower than that of their electrical counterparts for intermediate and long links. Finally, through wavelength division multiplexing, we can keep up with bandwidth trends without sacrificing throughput. This thesis focuses on realizing nanophotonic communication in many-core architectures at different design levels, taking into account reliability challenges that our fabrication and measurements reveal. First, we study how to design on-chip networks for low latency, low power, and high bandwidth by exploiting the full potential of nanophotonics. The design process considers device-level limitations and capabilities on one hand, and system-level demands in terms of power and performance on the other; it involves the choice of devices and the design of the optical link, the topology, the arbitration technique, and the routing mechanism. Next, we address the problem of reliability in on-chip networks. Reliability issues not only degrade performance but can block communication entirely. Hence, we propose a reliability-aware design flow and present a reliability management technique based on this flow, in which reliability is modeled and analyzed at the device, architecture, and system levels. Our reliability management technique is superior to existing solutions in terms of power and performance; in fact, our solution can scale to a thousand cores with low overhead.

  15. Thick-foils activation technique for neutron spectrum unfolding with the MINUIT routine-Comparison with GEANT4 simulations

    NASA Astrophysics Data System (ADS)

    Vagena, E.; Theodorou, K.; Stoulos, S.

    2018-04-01

    The neutron activation technique has been applied using a proposed set of twelve thick metal foils (Au, As, Cd, In, Ir, Er, Mn, Ni, Se, Sm, W, Zn) for off-site measurements to obtain the neutron spectrum over a wide energy range (from thermal up to a few MeV) in intense mixed neutron-gamma fields, such as those around medical Linacs. The unfolding procedure takes into account the activation rates measured using thirteen (n, γ) and two (n, p) reactions, without imposing a guess solution-spectrum. The MINUIT minimization routine unfolds a neutron spectrum that is dominated by fast neutrons (70%) peaking at 0.3 MeV, while the thermal peak corresponds to 15% of the total neutron fluence, equal to that of the epithermal-resonances area. Comparison of the unfolded neutron spectrum against one simulated with the GEANT4 Monte Carlo code shows reasonable agreement within the measurement uncertainties. The proposed set of thick activation foils could therefore be a useful tool for determining low-flux neutron spectra in intense mixed fields.

  16. Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT

    PubMed Central

    Nguyen, Thu L. N.; Shin, Yoan

    2016-01-01

    Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT), which offers automatically discoverable services, and localization accuracy is a key issue in evaluating the quality of those services. In this paper, we develop a framework for solving the Euclidean distance matrix completion problem, an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank Euclidean distance matrix completion problem with known nodes. The task is to find the sensor locations by recovering the missing entries of a squared distance matrix when the dimension of the data is small compared with the number of data points. We solve a relaxation of the optimization problem using a modification of Newton's method, where the cost function depends on the squared distance matrix. The solution obtained by our scheme achieves lower complexity and can perform better if used as an initial guess for an iterative local search in other, higher-precision localization schemes. Simulation results show the effectiveness of our approach. PMID:27213378
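
    The underlying idea of Newton-type iterations on a cost built from squared distances to known nodes can be shown in miniature. The sketch below is an illustrative Gauss-Newton trilateration, not the paper's matrix completion algorithm; the anchor coordinates and node position are invented:

```python
def locate(anchors, sq_dists, guess, iters=50):
    """Gauss-Newton refinement of a 2-D node position from squared
    distances to known anchor nodes (three or more anchors)."""
    x, y = guess
    for _ in range(iters):
        # Residuals and Jacobian of r_i = |p - a_i|^2 - d_i^2
        r = [(x - ax) ** 2 + (y - ay) ** 2 - d2
             for (ax, ay), d2 in zip(anchors, sq_dists)]
        J = [(2.0 * (x - ax), 2.0 * (y - ay)) for ax, ay in anchors]
        # Normal equations (J^T J) dx = -J^T r, solved as a 2x2 system
        a = sum(jx * jx for jx, _ in J)
        b = sum(jx * jy for jx, jy in J)
        c = sum(jy * jy for _, jy in J)
        gx = sum(jx * ri for (jx, _), ri in zip(J, r))
        gy = sum(jy * ri for (_, jy), ri in zip(J, r))
        det = a * c - b * b
        if abs(det) < 1e-12:
            break
        x += (-c * gx + b * gy) / det
        y += (b * gx - a * gy) / det
    return x, y

anchors = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]   # known nodes (invented)
true_pos = (1.0, 2.0)
sq_dists = [(true_pos[0] - ax) ** 2 + (true_pos[1] - ay) ** 2
            for ax, ay in anchors]
est = locate(anchors, sq_dists, guess=(2.0, 2.0))
print(est)  # close to (1.0, 2.0)
```

    As in the record, the rough starting point only needs to be in the right neighbourhood; the local iterations supply the precision.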

  17. A modelling approach to assessing the timescale uncertainties in proxy series with chronological errors

    NASA Astrophysics Data System (ADS)

    Divine, D. V.; Godtliebsen, F.; Rue, H.

    2012-01-01

    The paper proposes an approach to the assessment of timescale errors in proxy-based series with chronological uncertainties. The method relies on approximating the physical process(es) forming a proxy archive by a random Gamma process, whose parameters are partly data-driven and partly determined from prior assumptions. For the particular case of a linear accumulation model and absolutely dated tie points, an analytical solution is found, yielding a Beta-distributed probability density for the age estimates along the length of the proxy archive. In the general situation of uncertainties in the ages of the tie points, the proposed method employs MCMC simulations of age-depth profiles, yielding empirical confidence intervals on the constructed piecewise-linear best-guess timescale. It is suggested that the approach can be further extended to the more general case of a time-varying expected accumulation between the tie points. The approach is illustrated using two ice cores and two lake/marine sediment cores, representing typical examples of paleoproxy archives with age models based on tie points of mixed origin.
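
    For the absolutely dated tie-point case, the Gamma-process model and its Beta-distributed ages can be reproduced by simulation. The sketch below normalises simulated Gamma increments to the known total age and reads off empirical quantiles; all parameter values are invented, and this is an illustration of the model class rather than the paper's MCMC procedure:

```python
import random

def age_quantiles(n_steps=50, total_age=1000.0, shape=1.0,
                  n_sims=2000, q=(0.05, 0.5, 0.95), seed=1):
    """Empirical age quantiles along a core between two absolutely dated
    tie points (age 0 at the top, total_age at the bottom). Accumulation
    is modelled as Gamma-process increments normalised to the known
    total, so the normalised ages have Beta marginals, matching the
    analytical result quoted in the record."""
    rng = random.Random(seed)
    ages = [[] for _ in range(n_steps + 1)]
    for _ in range(n_sims):
        inc = [rng.gammavariate(shape, 1.0) for _ in range(n_steps)]
        scale = total_age / sum(inc)
        t = 0.0
        ages[0].append(0.0)
        for depth, x in enumerate(inc, start=1):
            t += x * scale
            ages[depth].append(t)
    out = []
    for a in ages:
        a.sort()
        out.append(tuple(a[int(qq * (len(a) - 1))] for qq in q))
    return out

qs = age_quantiles()
print(qs[25])  # (5%, 50%, 95%) age quantiles at mid-depth, centred near 500
```

    The confidence band collapses to zero width at both tie points and is widest in between, exactly the behaviour the record describes.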

  18. Time-optimal trajectory planning for underactuated spacecraft using a hybrid particle swarm optimization algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Yufei; Huang, Haibin

    2014-02-01

    A hybrid algorithm combining the particle swarm optimization (PSO) algorithm with the Legendre pseudospectral method (LPM) is proposed for solving the time-optimal trajectory planning problem of underactuated spacecraft. In the initial phase of the search, an initialization generator is constructed using the PSO algorithm because of its strong global search ability and robustness to random initial values; however, the PSO algorithm's convergence rate near the global optimum is slow. Therefore, once the change in the fitness function falls below a predefined value, the search is switched to the LPM to accelerate the process. Thus, with the solutions obtained by the PSO algorithm as a set of proper initial guesses, the hybrid algorithm can find the global optimum more quickly and accurately. Results of 200 Monte Carlo simulations demonstrate that the proposed hybrid PSO-LPM algorithm outperforms both the PSO algorithm and the LPM alone in terms of global search capability and convergence rate. Moreover, the PSO-LPM algorithm is robust to random initial values.
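
    The two-stage idea (a global stochastic search supplying the initial guess for a fast local method) can be sketched on a toy cost function. Here plain gradient descent stands in for the pseudospectral stage, and the cost function and all parameters are invented for illustration:

```python
import random

def pso_then_refine(f, grad, bounds, n_particles=20, iters=200, seed=7):
    """Two-stage search: PSO supplies a rough global optimum, which then
    seeds a fast local method (plain gradient descent here, standing in
    for the pseudospectral stage of the hybrid)."""
    rng = random.Random(seed)
    lo, hi = bounds
    dim = 2
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]
    w, c1, c2 = 0.7, 1.5, 1.5            # standard PSO coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    # Local refinement seeded with the PSO result.
    x = gbest[:]
    for _ in range(300):
        gr = grad(x)
        x = [xi - 0.1 * gi for xi, gi in zip(x, gr)]
    return x

def f(x):      # invented smooth cost with its optimum at (1, -2)
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def grad(x):
    return [2.0 * (x[0] - 1.0), 2.0 * (x[1] + 2.0)]

best = pso_then_refine(f, grad, bounds=(-5.0, 5.0))
print(best)  # close to [1.0, -2.0]
```

    The division of labour mirrors the abstract: the swarm is robust to poor starting points, while the local stage delivers the final convergence rate.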

  19. Capture of near-Earth objects with low-thrust propulsion and invariant manifolds

    NASA Astrophysics Data System (ADS)

    Tang, Gao; Jiang, Fanghua

    2016-01-01

    In this paper, a mission incorporating low-thrust propulsion and invariant manifolds to capture near-Earth objects (NEOs) is investigated. The mission begins with the spacecraft rendezvousing with the NEO and terminates once the joint system is inserted into a libration point orbit (LPO). The spacecraft takes advantage of stable invariant manifolds for low-energy ballistic capture, while low-thrust propulsion is employed to retrieve the joint spacecraft-asteroid system. Global optimization methods are proposed for the preliminary design; local direct and indirect methods are applied to optimize the two-impulse transfers, and indirect methods are implemented to optimize the low-thrust trajectory and estimate the largest retrievable mass. To overcome the difficulty that arises from bang-bang control, a homotopic approach is applied to find an approximate solution. By detecting the switching moments of the bang-bang control, the efficiency and accuracy of the numerical integration are guaranteed, and using the homotopic solution as the initial guess makes the shooting function easy to solve. The relationship between the maximum thrust and the retrievable mass is investigated, and we find, both numerically and theoretically, that a larger thrust is preferred.

  20. Cool Cow Quiz.

    ERIC Educational Resources Information Center

    DeRosa, Bill

    1988-01-01

    Provides a game to help develop the skill of estimating and making educated guesses. Uses facts about cows to explain some problems associated with the dairy industry. Includes cards and rules for playing, class adaptation procedures, follow-up activities, and availability of background information on humane concerns. (RT)

  1. The Scaling of Sociometric Nominations.

    ERIC Educational Resources Information Center

    Veldman, Donald J.; Sheffield, John R.

    1979-01-01

    A sociometric nominations instrument called Guess Who was administered to 13,045 elementary school children and then subjected to an image analysis. Four factors were extracted--disruptive, bright, dull, and quiet/well-behaved--and related to teacher ratings, self-reports and other measures. (Author/JKS)

  2. The Good Language Learner: Another Look.

    ERIC Educational Resources Information Center

    Reiss, Mary-Ann

    1985-01-01

    A study of the learning techniques and strategies of successful learners revealed these strategies: monitoring which often involves silent speaking, attending to form and meaning, guessing, practicing, motivation to communicate, and mnemonics. It also revealed a high tolerance for ambiguity in successful learners. (MSE)

  3. Catalyzing Genetic Thinking in Undergraduate Mathematics Education

    ERIC Educational Resources Information Center

    King, Samuel Olugbenga

    2016-01-01

    In undergraduate mathematics education, atypical problem solving approaches are usually discouraged because they are not adaptive to systematic deduction on which undergraduate instructional systems are predicated. I present preliminary qualitative research evidence that indicates that these atypical approaches, such as genetic guessing, which…

  4. Happy Time Mathematics

    ERIC Educational Resources Information Center

    Chilcote, Elinor; And Others

    1975-01-01

    Games and activities, which are fun, practical, and related to the child's world, are presented. Suggestions are given for building skills in estimating lengths, guessing how many, recognizing patterns in counting, multiplying with "waffles," classifying by attributes, and adding and subtracting with special cards, relays, and play…

  5. On the next generation of reliability analysis tools

    NASA Technical Reports Server (NTRS)

    Babcock, Philip S., IV; Leong, Frank; Gai, Eli

    1987-01-01

    The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and on providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to give a top-down design description of the system from which a Markov reliability model is automatically constructed. The user is thus relieved of the tedious and error-prone process of model construction, an efficient exploration of the design space is permitted, and an independent validation of the system's operation is obtained. An additional benefit of automating model construction is the opportunity to reduce the specialized knowledge required: the user need only be an expert in the system being analyzed, while the expertise in reliability analysis techniques is supplied by the tool.

  6. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DOE PAGES

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.; ...

    2017-01-18

    Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude, and data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists, while exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We therefore developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and the other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.

  7. Reliable and efficient solution of genome-scale models of Metabolism and macromolecular Expression

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Ding; Yang, Laurence; Fleming, Ronan M. T.

    Currently, Constraint-Based Reconstruction and Analysis (COBRA) is the only methodology that permits integrated modeling of Metabolism and macromolecular Expression (ME) at genome-scale. Linear optimization computes steady-state flux solutions to ME models, but flux values are spread over many orders of magnitude. Data values also have greatly varying magnitudes. Furthermore, standard double-precision solvers may return inaccurate solutions or report that no solution exists. Exact simplex solvers based on rational arithmetic require a near-optimal warm start to be practical on large problems (current ME models have 70,000 constraints and variables and will grow larger). We also developed a quadruple-precision version of our linear and nonlinear optimizer MINOS, and a solution procedure (DQQ) involving Double and Quad MINOS that achieves reliability and efficiency for ME models and other challenging problems tested here. DQQ will enable extensive use of large linear and nonlinear models in systems biology and other applications involving multiscale data.

  8. Predicting chemical degradation during storage from two successive concentration ratios: Theoretical investigation.

    PubMed

    Peleg, Micha; Normand, Mark D

    2015-09-01

    When a vitamin's, pigment's or other food component's chemical degradation follows a known fixed order kinetics, and its rate constant's temperature-dependence follows a two parameter model, then, at least theoretically, it is possible to extract these two parameters from two successive experimental concentration ratios determined during the food's non-isothermal storage. This requires numerical solution of two simultaneous equations, themselves the numerical solutions of two differential rate equations, with a program especially developed for the purpose. Once calculated, these parameters can be used to reconstruct the entire degradation curve for the particular temperature history and predict the degradation curves for other temperature histories. The concept and computation method were tested with simulated degradation under rising and/or falling oscillating temperature conditions, employing the exponential model to characterize the rate constant's temperature-dependence. In computer simulations, the method's predictions were robust against minor errors in the two concentration ratios. The program to do the calculations was posted as freeware on the Internet. The temperature profile can be entered as an algebraic expression that can include 'If' statements, or as an imported digitized time-temperature data file, to be converted into an Interpolating Function by the program. The numerical solution of the two simultaneous equations requires close initial guesses of the exponential model's parameters. Programs were devised to obtain these initial values by matching the two experimental concentration ratios with a generated degradation curve whose parameters can be varied manually with sliders on the screen. These programs too were made available as freeware on the Internet and were tested with published data on vitamin A. Copyright © 2015 Elsevier Ltd. All rights reserved.
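
    The two-ratio idea can be sketched under first-order kinetics with the exponential model for the rate constant's temperature-dependence. All parameter values, the temperature profile, and the search ranges below are made up, and a coarse grid search stands in for the paper's simultaneous-equation solver (which also needs close initial guesses).

```python
import math

T_ref = 25.0

def temp(t):
    """Made-up non-isothermal storage temperature history (degC)."""
    return 25.0 + 10.0 * math.sin(t / 10.0)

def ratio(k_ref, m, t_end, dt=0.05):
    """C(t_end)/C(0) for dC/dt = -k(T(t))*C with k(T) = k_ref*exp(m*(T - T_ref))."""
    c, t = 1.0, 0.0
    while t < t_end:
        c -= k_ref * math.exp(m * (temp(t) - T_ref)) * c * dt
        t += dt
    return c

# Two "experimental" concentration ratios at successive storage times.
true_kref, true_m = 0.02, 0.08
r1_obs = ratio(true_kref, true_m, 20.0)
r2_obs = ratio(true_kref, true_m, 40.0)

# Recover both parameters by matching the two ratios.
best, best_err = None, float("inf")
for i in range(21):
    for j in range(21):
        kr = 0.01 + 0.001 * i        # candidate k_ref in 0.01..0.03
        mm = 0.04 + 0.004 * j        # candidate m in 0.04..0.12
        err = (ratio(kr, mm, 20.0) - r1_obs) ** 2 + (ratio(kr, mm, 40.0) - r2_obs) ** 2
        if err < best_err:
            best, best_err = (kr, mm), err

print(best)  # close to (0.02, 0.08)
```

    With the two parameters in hand, the same integrator reconstructs the full degradation curve for this or any other temperature history.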

  9. Global emissions of terpenoid VOCs from terrestrial vegetation in the last millennium

    PubMed Central

    Acosta Navarro, J C; Smolander, S; Struthers, H; Zorita, E; Ekman, A M L; Kaplan, J O; Guenther, A; Arneth, A; Riipinen, I

    2014-01-01

    We investigated the millennial variability (1000 A.D.–2000 A.D.) of global biogenic volatile organic compound (BVOC) emissions by using two independent numerical models: the Model of Emissions of Gases and Aerosols from Nature (MEGAN), for isoprene, monoterpenes, and sesquiterpenes, and the Lund-Potsdam-Jena-General Ecosystem Simulator (LPJ-GUESS), for isoprene and monoterpenes. We found the millennial trends of global isoprene emissions to be mostly affected by land cover and atmospheric carbon dioxide changes, whereas monoterpene and sesquiterpene emission trends were dominated by temperature change. Isoprene emissions declined substantially in regions with large and rapid land cover change. In addition, isoprene emission sensitivity to drought proved to have significant short-term global effects. By the end of the past millennium, MEGAN isoprene emissions were 634 TgC yr⁻¹ (13% and 19% less than during 1750–1850 and 1000–1200, respectively), and LPJ-GUESS emissions were 323 TgC yr⁻¹ (15% and 20% less than during 1750–1850 and 1000–1200, respectively). Monoterpene emissions were 89 TgC yr⁻¹ (10% and 6% higher than during 1750–1850 and 1000–1200, respectively) in MEGAN, and 24 TgC yr⁻¹ (2% higher and 5% less than during 1750–1850 and 1000–1200, respectively) in LPJ-GUESS. MEGAN sesquiterpene emissions were 36 TgC yr⁻¹ (10% and 4% higher than during 1750–1850 and 1000–1200, respectively). Although both models capture similar emission trends, the magnitudes of the emissions are different. This highlights the importance of building better constraints on VOC emissions from terrestrial vegetation. PMID:25866703

  10. Application Of Multi-grid Method On China Seas' Temperature Forecast

    NASA Astrophysics Data System (ADS)

    Li, W.; Xie, Y.; He, Z.; Liu, K.; Han, G.; Ma, J.; Li, D.

    2006-12-01

    Correlation scales have been used for decades in the traditional scheme of 3-dimensional variational (3D-Var) data assimilation to estimate the background error covariance for numerical forecasts and reanalyses of the atmosphere and ocean. However, this scheme still has some drawbacks. First, the correlation scales are difficult to determine accurately. Second, the positive definiteness of the first-guess error covariance matrix cannot be guaranteed unless the correlation scales are sufficiently small. Xie et al. (2005) indicated that a traditional 3D-Var corrects only errors of certain wavelengths and that its accuracy depends on the accuracy of the first-guess covariance. In general, short-wavelength errors cannot be well corrected until long-wavelength errors are corrected, so an inaccurate first-guess covariance may mistakenly treat long-wave errors as short-wave ones and produce an erroneous analysis. For the purpose of quickly minimizing the errors of long and short waves successively, a new 3D-Var data assimilation scheme, called the multi-grid data assimilation scheme, is proposed in this paper. By assimilating shipboard SST and temperature profiles into a numerical model of the China Seas, we applied this scheme in a two-month data assimilation and forecast experiment with favorable results. Compared with the traditional 3D-Var scheme, the new scheme has higher forecast accuracy and a lower forecast root-mean-square (RMS) error. Furthermore, the scheme was applied to assimilate shipboard SST, AVHRR Pathfinder Version 5.0 SST, and temperature profiles simultaneously in a ten-month forecast experiment on the sea temperature of the China Seas, which also produced successful forecasts. In particular, the new scheme demonstrated great numerical efficiency in these analyses.
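
    The coarse-to-fine rationale (correct long-wavelength errors before short ones) can be shown with a toy one-dimensional example in which simple block averaging stands in for the coarse-grid variational analysis; the grid sizes, wave content, and perfect observations are all illustrative.

```python
import math

n = 64
x = [2 * math.pi * i / n for i in range(n)]
truth = [math.sin(xi) + 0.3 * math.sin(8 * xi) for xi in x]  # long + short wave
background = [0.0] * n                                        # first guess
obs = truth[:]                                                # perfect obs, for clarity

# Pass 1 (coarse grid, 8 cells): block-average the innovations.
# This captures the long-wavelength error but cannot represent the short wave.
analysis = background[:]
cell = n // 8
for c in range(8):
    inc = sum(obs[i] - analysis[i] for i in range(c * cell, (c + 1) * cell)) / cell
    for i in range(c * cell, (c + 1) * cell):
        analysis[i] += inc

err_coarse = max(abs(analysis[i] - truth[i]) for i in range(n))

# Pass 2 (fine grid): fit what the coarse pass could not represent.
for i in range(n):
    analysis[i] += obs[i] - analysis[i]

err_fine = max(abs(analysis[i] - truth[i]) for i in range(n))
print(err_coarse > err_fine)  # True
```

    After the coarse pass only the short-wavelength residual remains, which the fine pass then removes; reversing the order would let the coarse analysis smear short-wave error into long-wave corrections.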

  11. Systematic review of blinding assessment in randomized controlled trials in schizophrenia and affective disorders 2000-2010.

    PubMed

    Baethge, Christopher; Assall, Oliver P; Baldessarini, Ross J

    2013-01-01

    Blinding is an integral part of many randomized controlled trials (RCTs). However, both blinding and blinding assessment seem to be rarely documented in trial reports. We conducted a systematic review of articles on RCTs in schizophrenia and affective disorders research during 2000-2010. Among 2,467 publications, 61 (2.5%; 95% confidence interval: 1.9-3.1%) reported assessing participant, rater, or clinician blinding: 5/672 reports on schizophrenia (0.7%; 0.3-1.6%) and 33/1,079 (3.1%; 2.1-4.2%) on affective disorders, without significant trends across the decade. Blinding was rarely assessed at the beginning of a trial; in most studies, assessment was at the end. The proportions of patients' and raters' correct guesses of study arm averaged 54.4% and 62.0% per study, with slightly more correct guesses in treatment arms than in placebo arms. Three fourths of responders correctly guessed that they received the active agent. Blinding assessment was more frequently reported in papers on psychotherapy and brain stimulation than in drug trials (5.1%, 1.7-11.9%, vs. 8.3%, 4.3-14.4%, vs. 2.1%, 1.5-2.8%). Lack of assessment of blinding was associated with: (a) positive findings, (b) full industrial sponsorship, and (c) diagnosis of schizophrenia. There was a moderate association between treatment success and the blinding status of both trial participants (r = 0.51, p = 0.002) and raters (r = 0.55, p = 0.067). Many RCT reports did not meet CONSORT standards regarding documentation of persons blinded (60%) or of efforts to match interventions (50%). Recent treatment trials in major psychiatric disorders rarely reported on or evaluated blinding. We recommend routine documentation of blinding strategies in reports. Copyright © 2013 S. Karger AG, Basel.

  12. Development of a Preventive HIV Vaccine Requires Solving Inverse Problems Which Is Unattainable by Rational Vaccine Design

    PubMed Central

    Van Regenmortel, Marc H. V.

    2018-01-01

    Hypotheses and theories are essential constituents of the scientific method. Many vaccinologists are unaware that the problems they try to solve are mostly inverse problems that consist in imagining what could bring about a desired outcome. An inverse problem starts with the result and tries to guess the multiple causes that could have produced it. Compared to the usual direct scientific problems that start with the causes and derive or calculate the results using deductive reasoning and known mechanisms, solving an inverse problem uses a less reliable inductive approach and requires the development of a theoretical model that may have different solutions or none at all. Unsuccessful attempts to solve inverse problems in HIV vaccinology by reductionist methods, systems biology and structure-based reverse vaccinology are described. The popular strategy known as rational vaccine design is unable to solve the multiple inverse problems faced by HIV vaccine developers. The term "rational" is derived from "rational drug design", which uses the 3D structure of a biological target for designing molecules that will selectively bind to it and inhibit its biological activity. In vaccine design, however, the word "rational" simply means that the investigator is concentrating on parts of the system for which molecular information is available. The economist and Nobel laureate Herbert Simon introduced the concept of "bounded rationality" to explain why the complexity of the world economic system makes it impossible, for instance, to predict an event like the financial crash of 2007–2008. Humans always operate under unavoidable constraints such as insufficient information, a limited capacity to process huge amounts of data and a limited amount of time available to reach a decision. Such limitations always prevent us from achieving the complete understanding and optimization of a complex system that would be needed to achieve a truly rational design process. This is why the complexity of the human immune system prevents us from rationally designing an HIV vaccine by solving inverse problems. PMID:29387066

  13. Diagnostic and model dependent uncertainty of simulated Tibetan permafrost area

    NASA Astrophysics Data System (ADS)

    Wang, W.; Rinke, A.; Moore, J. C.; Cui, X.; Ji, D.; Li, Q.; Zhang, N.; Wang, C.; Zhang, S.; Lawrence, D. M.; McGuire, A. D.; Zhang, W.; Delire, C.; Koven, C.; Saito, K.; MacDougall, A.; Burke, E.; Decharme, B.

    2015-03-01

    We perform a land surface model intercomparison to investigate how the simulation of permafrost area on the Tibetan Plateau (TP) varies among 6 modern stand-alone land surface models (CLM4.5, CoLM, ISBA, JULES, LPJ-GUESS, UVic). We also examine the variability in simulated permafrost area and distribution introduced by 5 different methods of diagnosing permafrost (from modeled monthly ground temperature, mean annual ground and air temperatures, and air and surface frost indexes). There is good agreement (99-135 × 10⁴ km²) between the two diagnostic methods based on air temperature, which are also consistent with the best current observation-based estimate of actual permafrost area (101 × 10⁴ km²). However, the uncertainty (1-128 × 10⁴ km²) using the three methods that require simulation of ground temperature is much greater. Moreover, simulated permafrost distribution on the TP is generally only fair to poor for these three methods (diagnosis of permafrost from monthly and mean annual ground temperature, and the surface frost index), while permafrost distribution using the air temperature based methods is generally good. Model evaluation at field sites highlights specific problems in process simulations, likely related to soil texture specification and snow cover. Models are particularly poor at simulating permafrost distribution using the definition that soil temperature remains at or below 0°C for 24 consecutive months, which requires reliable simulation of both mean annual ground temperature and its seasonal cycle, and hence is relatively demanding. Although models can produce better permafrost maps using mean annual ground temperature and the surface frost index, analysis of simulated soil temperature profiles reveals substantial biases. The current generation of land surface models needs to reduce biases in simulated soil temperature profiles before reliable contemporary permafrost maps and predictions of changes in permafrost distribution can be made for the Tibetan Plateau.
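
    The strictest diagnostic mentioned above (soil temperature at or below 0°C for 24 consecutive months) is straightforward to state in code; the monthly temperature series below are synthetic, for illustration only.

```python
import math

def is_permafrost(monthly_t, months=24):
    """True if temperature stays <= 0 degC for at least `months` consecutive months."""
    run = 0
    for t in monthly_t:
        run = run + 1 if t <= 0.0 else 0
        if run >= months:
            return True
    return False

# Synthetic 4-year monthly series: mean -2 degC, seasonal amplitude 1.5 degC,
# so the ground never thaws.
cold_site = [-2.0 + 1.5 * math.cos(2 * math.pi * m / 12) for m in range(48)]
# Mean +1 degC with a larger seasonal cycle: thaws every summer, so no
# 24-month frozen run ever accumulates.
warm_site = [1.0 + 5.0 * math.cos(2 * math.pi * m / 12) for m in range(48)]

print(is_permafrost(cold_site), is_permafrost(warm_site))  # True False
```

    The demand that an entire 24-month window stay frozen is why this diagnostic punishes any bias in either the annual mean or the seasonal amplitude of simulated ground temperature.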

  14. Assessment of representational competence in kinematics

    NASA Astrophysics Data System (ADS)

    Klein, P.; Müller, A.; Kuhn, J.

    2017-06-01

    A two-tier instrument for representational competence in the field of kinematics (KiRC) is presented, designed for a standard (first-year) calculus-based introductory mechanics course. It comprises 11 multiple-choice (MC) and 7 multiple true-false (MTF) questions involving multiple representational formats, such as graphs, pictures, and formal (mathematical) expressions (first tier). Furthermore, students express their answer confidence for selected items, providing additional information (second tier). Measurement characteristics of KiRC were assessed in a validation sample (pre- and post-test, N = 83 and N = 46, respectively), including usefulness for measuring learning gain. Validity is checked by interviews and by benchmarking KiRC against related measures. Values for item difficulty, discrimination, and consistency are in the desired ranges; in particular, a good reliability was obtained (KR-20 = 0.86). Confidence intervals were computed, and a replication study yielded values within them. For practical and research purposes, KiRC as a diagnostic tool goes beyond related extant instruments both in the representational formats (e.g., mathematical expressions) and in the scope of content covered (e.g., choice of coordinate systems). Together with the satisfactory psychometric properties, it appears a versatile and reliable tool for assessing students' representational competency in kinematics (and its potential change). Confidence judgments add further information to the diagnostic potential of the test, in particular for representational misconceptions. Moreover, we present an analytic result for the question, arising from guessing correction or educational considerations, of how the total effect size (Cohen's d) varies upon combination of two test components with known individual effect sizes, and then discuss the results in the case of KiRC (MC and MTF combination). The introduced method of test combination analysis can be applied to any test comprising two components for the purpose of finding effect size ranges.
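
    The paper's analytic result is not reproduced here, but the flavor of combining two components' effect sizes can be sketched under a simple assumption: the total score is the sum of two components with known standard deviations and correlation, so the combined Cohen's d follows from the mean and variance of the sum.

```python
import math

def combined_d(d1, d2, s1=1.0, s2=1.0, rho=0.0):
    """Cohen's d of the summed score of two components.

    Assumes total = component1 + component2 with SDs s1, s2 and correlation rho:
      mean difference of the sum:  d1*s1 + d2*s2
      SD of the sum:               sqrt(s1^2 + s2^2 + 2*rho*s1*s2)
    """
    return (d1 * s1 + d2 * s2) / math.sqrt(s1**2 + s2**2 + 2 * rho * s1 * s2)

# Two components with d = 0.5 each, uncorrelated, equal variance:
print(round(combined_d(0.5, 0.5), 3))           # 0.707  (= 0.5 * sqrt(2))
# Perfectly correlated components add no new information:
print(round(combined_d(0.5, 0.5, rho=1.0), 3))  # 0.5
```

    The correlation between components thus bounds how much the combined instrument can gain over either part alone, which is the kind of effect-size range the abstract refers to.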

  15. New solitary wave solutions of (3 + 1)-dimensional nonlinear extended Zakharov-Kuznetsov and modified KdV-Zakharov-Kuznetsov equations and their applications

    NASA Astrophysics Data System (ADS)

    Lu, Dianchen; Seadawy, A. R.; Arshad, M.; Wang, Jun

    In this paper, new exact solitary wave, soliton, and elliptic function solutions are constructed in various forms for three-dimensional nonlinear partial differential equations (PDEs) in mathematical physics by utilizing the modified extended direct algebraic method. Soliton solutions in different forms, such as bell and anti-bell periodic, dark soliton, bright soliton, and bright and dark solitary waves in periodic form, are obtained; these have wide applications in different branches of physics and other areas of the applied sciences. The obtained solutions are also presented graphically. Furthermore, many other nonlinear evolution equations arising in mathematical physics and engineering can also be solved by this powerful, reliable, and capable method. The nonlinear three-dimensional extended Zakharov-Kuznetsov dynamical equation and the (3 + 1)-dimensional modified KdV-Zakharov-Kuznetsov equation are selected to show the reliability and effectiveness of the current method.

  16. Probabilistic structural mechanics research for parallel processing computers

    NASA Technical Reports Server (NTRS)

    Sues, Robert H.; Chen, Heh-Chyun; Twisdale, Lawrence A.; Martin, William R.

    1991-01-01

    Aerospace structures and spacecraft are a complex assemblage of structural components that are subjected to a variety of complex, cyclic, and transient loading conditions. Significant modeling uncertainties are present in these structures, in addition to the inherent randomness of material properties and loads. To properly account for these uncertainties in evaluating and assessing the reliability of these components and structures, probabilistic structural mechanics (PSM) procedures must be used. Much research has focused on basic theory development and the development of approximate analytic solution methods in random vibrations and structural reliability. Practical application of PSM methods was hampered by their computationally intense nature. Solution of PSM problems requires repeated analyses of structures that are often large, and exhibit nonlinear and/or dynamic response behavior. These methods are all inherently parallel and ideally suited to implementation on parallel processing computers. New hardware architectures and innovative control software and solution methodologies are needed to make solution of large scale PSM problems practical.

  17. Simple and Effective Algorithms: Computer-Adaptive Testing.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  18. Backpocket: Activities for Nature Study.

    ERIC Educational Resources Information Center

    Hendry, Ian; And Others

    1995-01-01

    Leading naturalist-teachers share outdoor learning activities and techniques, including using binoculars as magnifiers, scavenger hunts, games such as "what's it called" and "I spy," insect study, guessing the age of trees by examining the bark, leading bird walks, exploring nature in the community, and enhancing nature hikes…

  19. Sustainability - What are the Odds? Guessing the Future of our Environment, Economy, and Society

    EPA Science Inventory

    This article examines the concept of sustainability from a global perspective, describing how alternative futures might develop in the environmental, economic, and social dimensions. The alternatives to sustainability appear to be (a) a catastrophic failure of life support, econo...

  20. The Cognitive Dimensions of Information Structures.

    ERIC Educational Resources Information Center

    Green, T. R. G.

    1994-01-01

    Describes a set of terms (viscosity, hidden dependencies, imposes guess-ahead, abstraction level, and secondary notation) intended as a set of discussion tools for nonspecialists to converse about the structural features of a range of information artifacts. Explains the terms using spreadsheets as an example. (SR)

  1. Field Notes

    ERIC Educational Resources Information Center

    Parrone, Edward G.; Montalto, Michael P.

    2008-01-01

    The importance of athletic fields has increased in today's society because of the popularity of sporting events. As a result, education administrators face challenges when dealing with their athletic facilities. Decisionmakers constantly are being second-guessed in regard to outdated, overused facilities and lack of budget. In this article, the…

  2. Keep Them Guessing

    ERIC Educational Resources Information Center

    Riendeay, Diane, Ed.

    2013-01-01

    Discrepant events are surprising occurrences that challenge learners' preconceptions. These events puzzle students because the results are contrary to what they believe should happen. Due to the unexpected outcome, students experience cognitive disequilibrium, and this often leads to a desire to solve the problem. Discrepant events are great…

  3. An improved authenticated key agreement protocol for telecare medicine information system.

    PubMed

    Liu, Wenhao; Xie, Qi; Wang, Shengbao; Hu, Bin

    2016-01-01

    In telecare medicine information systems (TMIS), identity authentication of patients plays an important role and has been widely studied in the research field. Generally, it is realized by an authenticated key agreement protocol, and many such protocols have been proposed in the literature. Recently, Zhang et al. pointed out that Islam et al.'s protocol suffers from the following security weaknesses: (1) any legal but malicious patient can reveal another user's identity; (2) an attacker can launch an off-line password guessing attack and an impersonation attack if the patient's identity is compromised. Zhang et al. also proposed an improved authenticated key agreement scheme with privacy protection for TMIS. However, in this paper, we point out that Zhang et al.'s scheme cannot resist off-line password guessing attacks, and it fails to provide revocation of lost/stolen smartcards. In order to overcome these weaknesses, we propose an improved protocol, the security and authentication of which can be proven using ProVerif, a formal verification tool based on the applied pi calculus.
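
    Why off-line password guessing is so damaging can be illustrated generically (this is a salted-hash toy, not the TMIS protocol itself): once an attacker holds any value derived from a low-entropy password, every candidate can be tested locally, with no server to throttle or detect the attempts.

```python
import hashlib

# Captured password-derived value (in the protocols above, this role is
# played by intercepted protocol messages; here it is a salted hash).
salt = b"a1b2c3"
stolen = hashlib.sha256(salt + b"sunshine").hexdigest()

# The attacker replays the same derivation over a candidate dictionary.
dictionary = ["123456", "password", "qwerty", "sunshine", "letmein"]
recovered = next(
    (w for w in dictionary if hashlib.sha256(salt + w.encode()).hexdigest() == stolen),
    None,
)
print(recovered)  # sunshine
```

    Resistant protocols are designed so that no intercepted or stored value lets a guess be verified off-line; that property is what tools like ProVerif are used to check.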

  4. Aligning Spinoza with Descartes: An informed Cartesian account of the truth bias.

    PubMed

    Street, Chris N H; Kingstone, Alan

    2017-08-01

    There is a bias towards believing information is true rather than false. The Spinozan account claims there is an early, automatic bias towards believing. Only afterwards can people engage in an effortful re-evaluation and disbelieve the information. Supporting this account, there is a greater bias towards believing information is true when under cognitive load. However, developing on the Adaptive Lie Detector (ALIED) theory, the informed Cartesian can equally explain this data. The account claims the bias under load is not evidence of automatic belief; rather, people are undecided, but if forced to guess they can rely on context information to make an informed judgement. The account predicts, and we found, that if people can explicitly indicate their uncertainty, there should be no bias towards believing because they are no longer required to guess. Thus, we conclude that belief formation can be better explained by an informed Cartesian account - an attempt to make an informed judgment under uncertainty. © 2016 The British Psychological Society.

  5. Serial consolidation of orientation information into visual short-term memory.

    PubMed

    Liu, Taosheng; Becker, Mark W

    2013-06-01

    Previous research suggests that there is a limit to the rate at which items can be consolidated in visual short-term memory (VSTM). This limit could be due to either a serial or a limited-capacity parallel process. Historically, it has proven difficult to distinguish between these two types of processes. In the present experiment, we took a novel approach that allowed us to do so. Participants viewed two oriented gratings either sequentially or simultaneously and reported one of the gratings' orientation via method of adjustment. Performance was worse for the simultaneous than for the sequential condition. We fit the data with a mixture model that assumes performance is limited by a noisy memory representation plus random guessing. Critically, the serial and limited-capacity parallel processes made distinct predictions regarding the model's guessing and memory-precision parameters. We found strong support for a serial process, which implies that one can consolidate only a single orientation into VSTM at a time.
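
    The mixture model referred to above (noisy memory responses plus uniform random guesses) can be sketched as a simulation. The guess rate, noise SD, and the crude tail-based estimator below are illustrative choices, not the authors' fitting procedure.

```python
import math, random

random.seed(1)
g_true, sd, n = 0.3, 0.3, 200_000

# Each report error is either a random guess (uniform on the circle)
# or a noisy memory response (Gaussian around the true orientation).
errors = [
    random.uniform(-math.pi, math.pi) if random.random() < g_true
    else random.gauss(0.0, sd)
    for _ in range(n)
]

# Errors beyond 2 rad are ~6.7 SDs out: essentially all guesses, so the
# guess rate can be read off the tail mass and scaled back up.
tail = sum(1 for e in errors if abs(e) > 2.0) / n
g_hat = tail * (2 * math.pi) / (2 * math.pi - 4.0)

print(round(g_hat, 2))  # close to 0.3
```

    In the actual literature the two parameters (guess rate and memory precision) are estimated jointly by maximum likelihood, which is what lets the serial and limited-capacity parallel accounts make distinct, testable predictions.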

  6. ClueConnect: a word array game to promote student comprehension of key terminology in an introductory anatomy and physiology course.

    PubMed

    Burleson, Kathryn M; Olimpo, Jeffrey T

    2016-06-01

    The sheer amount of terminology and conceptual knowledge required for anatomy and physiology can be overwhelming for students. Educational games are one approach to reinforce such knowledge. In this activity, students worked collaboratively to review anatomy and physiology concepts by creating arrays of descriptive tiles to define a term. Once guessed, students located the structure or process within diagrams of the body. The game challenged students to think about course vocabulary in novel ways and to use their collective knowledge to get their classmates to guess the terms. Comparison of pretest/posttest/delayed posttest data revealed that students achieved statistically significant learning gains for each unit after playing the game, and a survey of student perceptions demonstrated that the game was helpful for learning vocabulary as well as fun to play. The game is easily adaptable for a variety of lower- and upper-division courses. Copyright © 2016 The American Physiological Society.

  7. Solving the Swath Segment Selection Problem

    NASA Technical Reports Server (NTRS)

    Knight, Russell; Smith, Benjamin

    2006-01-01

    Several artificial-intelligence search techniques have been tested as means of solving the swath segment selection problem (SSSP) -- a real-world problem that is not only of interest in its own right, but is also useful as a test bed for search techniques in general. In simplest terms, the SSSP is the problem of scheduling the observation times of an airborne or spaceborne synthetic-aperture radar (SAR) system to effect the maximum coverage of a specified area (denoted the target), given a schedule of downlinks (opportunities for radio transmission of SAR scan data to a ground station), given the limit on the quantity of SAR scan data that can be stored in an onboard memory between downlink opportunities, and given the limit on the achievable downlink data rate. The SSSP is NP-complete (short for "nondeterministic polynomial-time complete" -- characteristic of a class of intractable problems that could be solved efficiently only by hypothetical computers capable of making guesses and then checking the guesses in polynomial time).

  8. Retrieved Products from Simulated Hyperspectral Observations of a Hurricane

    NASA Technical Reports Server (NTRS)

    Susskind, Joel; Kouvaris, Louis; Iredell, Lena; Blaisdell, John

    2015-01-01

    Retrievals were run using the AIRS Science Team Version-6 AIRS-Only retrieval algorithm, which generates a Neural-Net first guess T_s^0, T(p)^0, and q(p)^0 as a function of observed AIRS radiances. AIRS Science Team Neural-Net coefficients performed very well beneath 300 mb using the simulated radiances. This means the simulated radiances are very realistic. First-guess and retrieved values of T(p) above 300 mb were biased cold, but both represented the model spatial structure very well. QC'd T(p) and q(p) retrievals for all experiments had similar accuracies compared to their own truth fields, and were roughly consistent with results obtained using real data. Spatial coverage of retrievals, as well as the representativeness of the spatial structure of the storm, improved dramatically with decreasing size of the instrument's FOV. We sent QC'd values of T(p) and q(p) to Bob Atlas at AOML for use as input to OSSE data assimilation experiments.

  9. Combined neural network/Phillips-Tikhonov approach to aerosol retrievals over land from the NASA Research Scanning Polarimeter

    NASA Astrophysics Data System (ADS)

    Di Noia, Antonio; Hasekamp, Otto P.; Wu, Lianghai; van Diedenhoven, Bastiaan; Cairns, Brian; Yorks, John E.

    2017-11-01

    In this paper, an algorithm for the retrieval of aerosol and land surface properties from airborne spectropolarimetric measurements - combining neural networks and an iterative scheme based on Phillips-Tikhonov regularization - is described. The algorithm - which is an extension of a scheme previously designed for ground-based retrievals - is applied to measurements from the Research Scanning Polarimeter (RSP) on board the NASA ER-2 aircraft. A neural network, trained on a large data set of synthetic measurements, is applied to perform aerosol retrievals from real RSP data, and the neural network retrievals are subsequently used as a first guess for the Phillips-Tikhonov retrieval. The resulting algorithm appears capable of accurately retrieving aerosol optical thickness, fine-mode effective radius and aerosol layer height from RSP data. Among the advantages of using a neural network as initial guess for an iterative algorithm are a decrease in processing time and an increase in the number of converging retrievals.
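
    The role of the neural-network first guess in a Phillips-Tikhonov retrieval can be seen in a scalar toy problem (the linear forward model and all numbers are made up): the regularization term pulls the solution toward the first guess, so a good one both speeds convergence and improves the answer.

```python
def tikhonov(F, y, x0, lam):
    """Minimizer of (F*x - y)^2 + lam*(x - x0)^2 for a scalar linear model.

    Setting d/dx [(F*x - y)^2 + lam*(x - x0)^2] = 0 gives the closed form.
    """
    return (F * y + lam * x0) / (F * F + lam)

F, x_true = 2.0, 5.0
y = F * x_true + 0.4               # noisy measurement of the state

good_guess = tikhonov(F, y, x0=4.9, lam=10.0)  # NN-like first guess near truth
poor_guess = tikhonov(F, y, x0=0.0, lam=10.0)  # uninformative first guess

print(round(good_guess, 3), round(poor_guess, 3))  # 4.986 1.486
```

    In the real (nonlinear, multivariate) retrieval the same minimization is done iteratively, but the dependence on the prior/first guess is the same, which is why warm-starting from the neural network cuts processing time and increases the number of converging retrievals.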

  10. Failure of self-consistency in the discrete resource model of visual working memory.

    PubMed

    Bays, Paul M

    2018-06-03

    The discrete resource model of working memory proposes that each individual has a fixed upper limit on the number of items they can store at one time, due to division of memory into a few independent "slots". According to this model, responses on short-term memory tasks consist of a mixture of noisy recall (when the tested item is in memory) and random guessing (when the item is not in memory). This provides two opportunities to estimate capacity for each observer: first, based on their frequency of random guesses, and second, based on the set size at which the variability of stored items reaches a plateau. The discrete resource model makes the simple prediction that these two estimates will coincide. Data from eight published visual working memory experiments provide strong evidence against such a correspondence. These results present a challenge for discrete models of working memory that impose a fixed capacity limit. Copyright © 2018 The Author. Published by Elsevier Inc. All rights reserved.

  11. An Adaptive Buddy Check for Observational Quality Control

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.; Rukhovets, Leonid; Todling, Ricardo; DaSilva, Arlindo M.; Larson, Jay W.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    An adaptive buddy check algorithm is presented that adjusts tolerances for outlier observations based on the variability of surrounding data. The algorithm derives from a statistical hypothesis test combined with maximum-likelihood covariance estimation. Its stability is shown to depend on the initial identification of outliers by a simple background check. The adaptive feature ensures that the final quality control decisions are not very sensitive to prescribed statistics of first-guess and observation errors, nor on other approximations introduced into the algorithm. The implementation of the algorithm in a global atmospheric data assimilation is described. Its performance is contrasted with that of a non-adaptive buddy check, for the surface analysis of an extreme storm that took place in Europe on 27 December 1999. The adaptive algorithm allowed the inclusion of many important observations that differed greatly from the first guess and that would have been excluded on the basis of prescribed statistics. The analysis of the storm development was much improved as a result of these additional observations.
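
    A minimal sketch of the buddy-check logic: the tolerance for an observation scales with the spread of its neighbours ("buddies"), so highly variable regions, such as the storm described above, are treated leniently. The values and the simple tolerance rule are invented; the actual algorithm derives its tolerances from a hypothesis test with maximum-likelihood covariance estimation.

```python
import statistics

def buddy_check(obs, buddies, k=3.0):
    """Accept obs if it lies within k local standard deviations of the buddy mean."""
    mean = statistics.mean(buddies)
    spread = statistics.stdev(buddies)
    tolerance = k * spread          # adaptive: grows with local variability
    return abs(obs - mean) <= tolerance

calm_buddies = [1010.0, 1011.0, 1010.5, 1009.5]   # quiet surface-pressure field
storm_buddies = [990.0, 975.0, 1002.0, 968.0]     # highly variable storm region

print(buddy_check(995.0, calm_buddies))    # False: outlier in a quiet field
print(buddy_check(955.0, storm_buddies))   # True: extreme but locally consistent
```

    This is the behavior the abstract describes: observations far from the first guess survive quality control when the surrounding data are themselves highly variable.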

  12. MWR3C physical retrievals of precipitable water vapor and cloud liquid water path

    DOE Data Explorer

    Cadeddu, Maria

    2016-10-12

    The data set contains physical retrievals of PWV and cloud LWP retrieved from MWR3C measurements during the MAGIC campaign. Additional data used in the retrieval process include radiosonde and ceilometer measurements. The retrieval is based on an optimal estimation technique that starts from a first guess and iteratively repeats the forward model calculations until a predefined convergence criterion is satisfied. The first guess is a vector of [PWV, LWP] from the neural network retrieval fields in the netcdf file. When convergence is achieved, the 'a posteriori' covariance is computed and its square root is expressed in the file as the retrieval 1-sigma uncertainty. The closest radiosonde profile is used for the radiative transfer calculations and ceilometer data are used to constrain the cloud base height. The RMS error between the brightness temperatures is computed at the last iteration as a consistency check and is written in the last column of the output file.
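
    The retrieval loop described above (first guess, forward model, convergence test, posterior covariance) follows the standard optimal-estimation pattern, which can be sketched in a Rodgers-style Gauss-Newton form. This is a generic illustration, not the MWR3C code; the function names, the linear test model and the defaults are assumptions.

```python
# Generic Gauss-Newton optimal estimation: iterate from a first guess until a
# convergence criterion on the state update is met, then report the square
# root of the a posteriori covariance as the 1-sigma retrieval uncertainty.
import numpy as np

def optimal_estimation(x_first_guess, y_obs, forward, jacobian,
                       S_a_inv, S_e_inv, x_a, tol=1e-6, max_iter=50):
    x = np.asarray(x_first_guess, dtype=float)
    for _ in range(max_iter):
        K = jacobian(x)                                  # forward-model Jacobian
        S_post_inv = S_a_inv + K.T @ S_e_inv @ K         # posterior information
        dx = np.linalg.solve(
            S_post_inv,
            K.T @ S_e_inv @ (y_obs - forward(x)) - S_a_inv @ (x - x_a))
        x = x + dx
        if np.sqrt(dx @ S_post_inv @ dx) < tol:          # convergence criterion
            break
    sigma = np.sqrt(np.diag(np.linalg.inv(S_post_inv)))  # 1-sigma uncertainty
    return x, sigma
```

    For a linear forward model the loop converges in one Gauss-Newton step; for a real radiative-transfer forward model, each pass recomputes brightness temperatures from the current [PWV, LWP] state.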

  13. Analysis and use of VAS satellite data

    NASA Technical Reports Server (NTRS)

    Fuelberg, Henry E.; Andrews, Mark J.; Beven, John L., II; Moore, Steven R.; Muller, Bradley M.

    1989-01-01

    Four interrelated investigations have examined the analysis and use of VAS satellite data. A case study of VAS-derived mesoscale stability parameters suggested that they would have been a useful supplement to conventional data in the forecasting of thunderstorms on the day of interest. A second investigation examined the roles of first guess and VAS radiometric data in producing sounding retrievals. Broad-scale patterns of the first guess, radiances, and retrievals frequently were similar, whereas small-scale retrieval features, especially in the dew points, were often of uncertain origin. Two research tasks considered 6.7 micron middle tropospheric water vapor imagery. The first utilized radiosonde data to examine causes for two areas of warm brightness temperature. Subsidence associated with a translating jet streak was important. The second task involving water vapor imagery investigated simulated imagery created from LAMPS output and a radiative transfer algorithm. Simulated image patterns were found to compare favorably with those actually observed by VAS. Furthermore, the mass/momentum fields from LAMPS were powerful tools for understanding causes for the image configurations.

  14. CLAES Product Improvement by use of GSFC Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Kumer, J. B.; Douglass, Anne (Technical Monitor)

    2001-01-01

    Recent developments in chemistry transport models (CTM) and data assimilation systems (DAS) indicate impressive predictive capability for the movement of air parcels and the chemistry that goes on within them. This project was aimed at exploring the use of this capability to achieve improved retrieval of geophysical parameters from remote sensing data. The specific goal was to improve retrieval of the CLAES CH4 data obtained during the active north high-latitude dynamics event of 18 to 25 February 1992. The model capabilities would be used (1) in place of climatology to improve the first-guess and a priori fields, and (2) to provide horizontal gradients for inclusion in the retrieval forward model. The retrieval would be implemented with the first forward DAS prediction. The results would feed back to the DAS, and a second DAS prediction of first guess, a priori fields and gradients would feed to the retrieval. The process would repeat to convergence and then proceed to the next day.

  15. Interpreting the Coulomb-field approximation for generalized-Born electrostatics using boundary-integral equation theory.

    PubMed

    Bardhan, Jaydeep P

    2008-10-14

    The importance of molecular electrostatic interactions in aqueous solution has motivated extensive research into physical models and numerical methods for their estimation. The computational costs associated with simulations that include many explicit water molecules have driven the development of implicit-solvent models, with generalized-Born (GB) models among the most popular of these. In this paper, we analyze a boundary-integral equation interpretation for the Coulomb-field approximation (CFA), which plays a central role in most GB models. This interpretation offers new insights into the nature of the CFA, which traditionally has been assessed using only a single point charge in the solute. The boundary-integral interpretation of the CFA allows the use of multiple point charges, or even continuous charge distributions, leading naturally to methods that eliminate the interpolation inaccuracies associated with the Still equation. This approach, which we call boundary-integral-based electrostatic estimation by the CFA (BIBEE/CFA), is most accurate when the molecular charge distribution generates a smooth normal displacement field at the solute-solvent boundary, and CFA-based GB methods perform similarly. Conversely, both methods are least accurate for charge distributions that give rise to rapidly varying or highly localized normal displacement fields. Supporting this analysis are comparisons of the reaction-potential matrices calculated using GB methods and boundary-element-method (BEM) simulations. An approximation similar to BIBEE/CFA exhibits complementary behavior, with superior accuracy for charge distributions that generate rapidly varying normal fields and poorer accuracy for distributions that produce smooth fields. This approximation, BIBEE by preconditioning (BIBEE/P), essentially generates initial guesses for preconditioned Krylov-subspace iterative BEMs. 
Thus, iterative refinement of the BIBEE/P results recovers the BEM solution; excellent agreement is obtained in only a few iterations. The boundary-integral-equation framework may also provide a means to derive rigorous results explaining how the empirical correction terms in many modern GB models significantly improve accuracy despite their simple analytical forms.

  16. Stochastic level-set variational implicit-solvent approach to solute-solvent interfacial fluctuations

    PubMed Central

    Zhou, Shenggao; Sun, Hui; Cheng, Li-Tien; Dzubiella, Joachim; McCammon, J. Andrew

    2016-01-01

    Recent years have seen the initial success of a variational implicit-solvent model (VISM), implemented with a robust level-set method, in capturing efficiently different hydration states and providing quantitatively good estimation of solvation free energies of biomolecules. The level-set minimization of the VISM solvation free-energy functional of all possible solute-solvent interfaces or dielectric boundaries predicts an equilibrium biomolecular conformation that is often close to an initial guess. In this work, we develop a theory in the form of Langevin geometrical flow to incorporate solute-solvent interfacial fluctuations into the VISM. Such fluctuations are crucial to biomolecular conformational changes and binding processes. We also develop a stochastic level-set method to numerically implement such a theory. We describe the interfacial fluctuation through the “normal velocity” that is the solute-solvent interfacial force, derive the corresponding stochastic level-set equation in the sense of Stratonovich so that the surface representation is independent of the choice of implicit function, and develop numerical techniques for solving such an equation and processing the numerical data. We apply our computational method to study the dewetting transition in the system of two hydrophobic plates and a hydrophobic cavity of a synthetic host molecule cucurbit[7]uril. Numerical simulations demonstrate that our approach can describe an underlying system jumping out of a local minimum of the free-energy functional and can capture dewetting transitions of hydrophobic systems. In the case of two hydrophobic plates, we find that the wavelength of interfacial fluctuations has a strong influence on the dewetting transition. In addition, we find that the estimated energy barrier of the dewetting transition scales quadratically with the inter-plate distance, agreeing well with existing studies of molecular dynamics simulations.
Our work is a first step toward the inclusion of fluctuations into the VISM and understanding the impact of interfacial fluctuations on biomolecular solvation with an implicit-solvent approach.

  17. A Most Probable Point-Based Method for Reliability Analysis, Sensitivity Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Hou, Gene J.-W; Newman, Perry A. (Technical Monitor)

    2004-01-01

    A major step in a most probable point (MPP)-based method for reliability analysis is to determine the MPP. This is usually accomplished by using an optimization search algorithm. The minimum distance associated with the MPP provides a measurement of safety probability, which can be obtained by approximate probability integration methods such as FORM or SORM. The reliability sensitivity equations are derived first in this paper, based on the derivatives of the optimal solution. Examples are provided later to demonstrate the use of these derivatives for better reliability analysis and reliability-based design optimization (RBDO).
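
    The MPP search that anchors such methods can be illustrated with the classic HL-RF (Hasofer-Lind/Rackwitz-Fiessler) iteration in standard normal space: the MPP is the point on the limit state g(u) = 0 closest to the origin, and the reliability index beta is its distance. This is a generic textbook sketch, not the paper's algorithm; the finite-difference gradient and the example limit state are illustrative assumptions.

```python
# HL-RF iteration: repeatedly project the current point onto the linearized
# limit state until the iterate stops moving; its norm is the reliability index.
import numpy as np

def find_mpp(g, u0, tol=1e-8, max_iter=100, h=1e-6):
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        # Central-difference gradient of the limit-state function
        grad = np.array([(g(u + h * e) - g(u - h * e)) / (2 * h)
                         for e in np.eye(len(u))])
        # HL-RF update: closest point on the linearization of g(u) = 0
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            return u_new, np.linalg.norm(u_new)  # MPP and beta
        u = u_new
    return u, np.linalg.norm(u)
```

    For a linear limit state such as g(u) = 3 - u1 - u2 the iteration lands on the MPP (1.5, 1.5) immediately, with beta = 1.5·sqrt(2); FORM then approximates the failure probability as Phi(-beta).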

  18. The reliable solution and computation time of variable parameters logistic model

    NASA Astrophysics Data System (ADS)

    Wang, Pengfei; Pan, Xinnong

    2018-05-01

    The study investigates the reliable computation time (RCT, termed Tc) by applying a double-precision computation of a variable parameters logistic map (VPLM). Firstly, by using the proposed method, we obtain the reliable solutions for the logistic map. Secondly, we construct 10,000 samples of reliable experiments from a time-dependent non-stationary parameters VPLM and then calculate the mean Tc. The results indicate that, for each different initial value, the Tc values of the VPLM are generally different. However, the mean Tc tends to a constant value when the sample number is large enough. The maximum, minimum, and probability distribution functions of Tc are also obtained, which can help us to identify the robustness of applying a nonlinear time series theory to forecasting by using the VPLM output. In addition, the Tc of the fixed-parameter experiments of the logistic map is obtained, and the results suggest that this Tc matches the value predicted by the theoretical formula.
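
    The notion of a reliable computation time can be illustrated with a small experiment (not the paper's method): iterate the logistic map once in double precision and once at much higher precision, and record the first step at which the two trajectories disagree beyond a tolerance. The parameters, tolerance and precision below are illustrative assumptions.

```python
# First step at which a double-precision logistic-map trajectory departs from
# a 100-digit reference trajectory by more than `tol`.
from decimal import Decimal, getcontext

def reliable_steps(x0=0.1, r=4.0, tol=1e-3, max_steps=200, digits=100):
    getcontext().prec = digits
    x_double = x0                        # double-precision trajectory
    x_ref = Decimal(str(x0))             # high-precision reference
    r_ref = Decimal(str(r))
    for n in range(1, max_steps + 1):
        x_double = r * x_double * (1.0 - x_double)
        x_ref = r_ref * x_ref * (1 - x_ref)
        if abs(x_double - float(x_ref)) > tol:
            return n                     # double precision unreliable from here
    return max_steps
```

    In the chaotic r = 4 regime, rounding error roughly doubles per step, so a ~1e-16 representation error exhausts three decimal digits of agreement after a few tens of iterations, which is why any double-precision run has a finite reliable computation time.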

  19. Development of a nanosatellite de-orbiting system by reliability based design optimization

    NASA Astrophysics Data System (ADS)

    Nikbay, Melike; Acar, Pınar; Aslan, Alim Rüstem

    2015-12-01

    This paper presents design approaches to develop a reliable and efficient de-orbiting system for the 3USAT nanosatellite to provide a beneficial orbital decay process at the end of a mission. A de-orbiting system is initially designed by employing the aerodynamic drag augmentation principle where the structural constraints of the overall satellite system and the aerodynamic forces are taken into account. Next, an alternative de-orbiting system is designed with new considerations and further optimized using deterministic and reliability based design techniques. For the multi-objective design, the objectives are chosen to maximize the aerodynamic drag force through the maximization of the Kapton surface area while minimizing the de-orbiting system mass. The constraints are related in a deterministic manner to the required deployment force, the height of the solar panel hole and the deployment angle. The length and the number of layers of the deployable Kapton structure are used as optimization variables. In the second stage of this study, uncertainties related to both manufacturing and operating conditions of the deployable structure in space environment are considered. These uncertainties are then incorporated into the design process by using different probabilistic approaches such as Monte Carlo Simulation, the First-Order Reliability Method and the Second-Order Reliability Method. The reliability based design optimization seeks optimal solutions using the former design objectives and constraints with the inclusion of a reliability index. Finally, the de-orbiting system design alternatives generated by different approaches are investigated and the reliability based optimum design is found to yield the best solution since it significantly improves both system reliability and performance requirements.

  20. solveME: fast and reliable solution of nonlinear ME models.

    PubMed

    Yang, Laurence; Ma, Ding; Ebrahim, Ali; Lloyd, Colton J; Saunders, Michael A; Palsson, Bernhard O

    2016-09-22

    Genome-scale models of metabolism and macromolecular expression (ME) significantly expand the scope and predictive capabilities of constraint-based modeling. ME models present considerable computational challenges: they are much (>30 times) larger than corresponding metabolic reconstructions (M models), are multiscale, and growth maximization is a nonlinear programming (NLP) problem, mainly due to macromolecule dilution constraints. Here, we address these computational challenges. We develop a fast and numerically reliable solution method for growth maximization in ME models using a quad-precision NLP solver (Quad MINOS). Our method was up to 45 % faster than binary search for six significant digits in growth rate. We also develop a fast, quad-precision flux variability analysis that is accelerated (up to 60× speedup) via solver warm-starts. Finally, we employ the tools developed to investigate growth-coupled succinate overproduction, accounting for proteome constraints. Just as genome-scale metabolic reconstructions have become an invaluable tool for computational and systems biologists, we anticipate that these fast and numerically reliable ME solution methods will accelerate the widespread adoption of ME models for researchers in these fields.
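
    The binary-search baseline against which the quad-precision method is timed can be sketched as a bisection on the growth rate; the feasibility predicate below is a stand-in for the LP feasibility check of the actual ME model, and the bounds and tolerance are illustrative.

```python
# Bisect on the growth rate mu: feasible candidates raise the lower bound,
# infeasible ones lower the upper bound, until mu is pinned down to `tol`.
def max_growth_binary_search(is_feasible, mu_lo=0.0, mu_hi=2.0, tol=1e-6):
    while mu_hi - mu_lo > tol:
        mu = 0.5 * (mu_lo + mu_hi)
        if is_feasible(mu):
            mu_lo = mu
        else:
            mu_hi = mu
    return mu_lo
```

    Each probe costs one full feasibility solve, and resolving the rate to around six digits takes roughly 20 solves, which is why replacing the search with a direct NLP solve can pay off.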

  1. Mystery Boxes: Helping Children Improve Their Reasoning

    ERIC Educational Resources Information Center

    Rule, Audrey C.

    2007-01-01

    This guest editorial describes ways teachers can use guessing games about an unknown item in a "mystery box" to help children improve their abilities to listen to others, recall information, ask purposeful questions, classify items by class, make inferences, synthesize information, and draw conclusions. The author presents information…

  2. Snowy Entomology

    ERIC Educational Resources Information Center

    Schmidt, Pamela; Chadde, Joan Schumaker; Buenzli, Michael

    2003-01-01

    Insects can be useful for investigations because they are numerous, relatively easy to find, and fascinating to students. Most elementary students have limited understandings of what exactly becomes of insects during the winter, often guessing that insects must "go to sleep" or "they just die." In this winter activity, students learn about insect…

  3. Producibility Engineering and Planning (PEP)

    DTIC Science & Technology

    1977-01-01

    Materiel System, May 1976. c. Cesare Raimondi, "Estimating Drafting Time - Art, Science, Guesswork," Machine Design, 7 September 1972. d. Current Wage...Comprehensive 8 16 24 32 40 86 45 70 90 80 1/ Cesare Raimondi, "Estimating Drafting Time - Art, Science, Guesswork," Machine Design, September

  4. Minimum Fuel Trajectory Design in Multiple Dynamical Environments Utilizing Direct Transcription Methods and Particle Swarm Optimization

    DTIC Science & Technology

    2016-03-01

    89 3.1.3 NLP Improvement...3.2.1.2 NLP Improvement 98 3.2.2 Multiple-burn Planar LEO to GEO Transfer 101 3.2.2.1 PSO Initial Guess Generation 101 3.2.2.2 NLP Improvement

  5. Soliton and periodic solutions for time-dependent coefficient non-linear equation

    NASA Astrophysics Data System (ADS)

    Guner, Ozkan

    2016-01-01

    In this article, we establish exact solutions for the generalized (3+1)-dimensional variable coefficient Kadomtsev-Petviashvili (GVCKP) equation. Using solitary wave ansatz in terms of ? functions and the modified sine-cosine method, we find exact analytical bright soliton solutions and exact periodic solutions for the considered model. The physical parameters in the soliton solutions are obtained as function of the dependent model coefficients. The effectiveness and reliability of the method are shown by its application to the GVCKP equation.

  6. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
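
    The perturbation-and-fit strategy outlined above can be sketched as follows; plain Monte Carlo sampling of the fitted polynomial stands in here for the fast probability integration step, and the function names, sample sizes and defaults are illustrative assumptions.

```python
# Response-surface sketch: k runs of the expensive routine, a polynomial fit,
# then a cheap probability estimate on the explicit surrogate.
import numpy as np

def surrogate_failure_probability(expensive_y, x_mean, x_std, threshold,
                                  k=9, degree=2, n_mc=100_000, seed=0):
    # 1) k perturbed evaluations of the expensive response Y(X)
    xs = x_mean + x_std * np.linspace(-3.0, 3.0, k)
    ys = np.array([expensive_y(x) for x in xs])
    # 2) explicit approximating polynomial Y ~ p(X)
    p = np.polynomial.Polynomial.fit(xs, ys, degree)
    # 3) probability that the response exceeds the threshold, from the surrogate
    rng = np.random.default_rng(seed)
    x_samples = rng.normal(x_mean, x_std, n_mc)
    return float(np.mean(p(x_samples) > threshold))
```

    With Y = X² and X standard normal, the exceedance probability P(Y > 1) = P(|X| > 1) ≈ 0.317, which the surrogate reproduces because a quadratic fits this response exactly; the expensive routine itself is only ever called k times.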

  7. Minimize system cost by choosing optimal subsystem reliability and redundancy

    NASA Technical Reports Server (NTRS)

    Suich, Ronald C.; Patterson, Richard L.

    1993-01-01

    The basic question which we address in this paper is how to choose among competing subsystems. This paper utilizes both reliabilities and costs to find the subsystems with the lowest overall expected cost. The paper begins by reviewing some of the concepts of expected value. We then address the problem of choosing among several competing subsystems. These concepts are then applied to k-out-of-n: G subsystems. We illustrate the use of the authors' basic program in viewing a range of possible solutions for several different examples. We then discuss the implications of various solutions in these examples.
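
    The expected-cost comparison for k-out-of-n:G subsystems can be sketched directly from the binomial reliability formula; the cost figures in the usage note are illustrative, not from the paper.

```python
# A k-out-of-n:G subsystem works when at least k of its n independent
# components (each working with probability p) work.
from math import comb

def k_out_of_n_reliability(k, n, p):
    """P(at least k of n i.i.d. components work)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def expected_cost(subsystem_cost, failure_cost, k, n, p):
    """Purchase cost plus failure cost weighted by the failure probability."""
    return subsystem_cost + failure_cost * (1 - k_out_of_n_reliability(k, n, p))
```

    For example, a 2-out-of-3:G subsystem with p = 0.9 has reliability 0.972; at a purchase cost of 100 and a failure cost of 10,000, its expected cost is 380, a figure that can be compared directly against competing subsystems.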

  8. Forward Period Analysis Method of the Periodic Hamiltonian System.

    PubMed

    Wang, Pengfei

    2016-01-01

    Using the forward period analysis (FPA), we obtain the period of a Morse oscillator and a mathematical pendulum system with an accuracy of 100 significant digits. From these results, the long-term [0, 10^60] (time unit) solutions, ranging from the Planck time to the age of the universe, are computed reliably and quickly with a parallel multiple-precision Taylor series (PMT) scheme. The application of FPA to periodic systems can greatly reduce the computation time of long-term reliable simulations. This scheme provides an efficient way to generate reference solutions, against which long-term simulations using other schemes can be tested.

  9. Reliability and availability modeling of coupled communication networks - A simplified modeling approach

    NASA Technical Reports Server (NTRS)

    Shooman, Martin L.; Cortes, Eladio R.

    1991-01-01

    The complexity of LANs, and of LANs interconnected by bridges and routers, poses a challenging reliability-modeling problem. The present effort attempts to simplify such problems by reducing their number of states through truncation and state merging, as suggested by Shooman and Laemmel (1990). Through the use of state merging, it becomes possible to reduce the 161-state Bateman-Cortes model to a two-state model with a closed-form solution. In the case of coupled networks, a technique which allows for problem decomposition must be used.

  10. Reliability issues of free-space communications systems and networks

    NASA Astrophysics Data System (ADS)

    Willebrand, Heinz A.

    2003-04-01

    Free space optics (FSO) is a high-speed point-to-point connectivity solution traditionally used in the enterprise campus networking market for building-to-building LAN connectivity. However, more recently some wireline and wireless carriers have started to deploy FSO systems in their networks. The requirements on FSO system reliability, meaning both system availability and component reliability, are far more stringent in the carrier market when compared to the requirements in the enterprise market segment. This paper outlines some of the aspects that are important to ensure carrier-class system reliability.

  11. Challenges Regarding IP Core Functional Reliability

    NASA Technical Reports Server (NTRS)

    Berg, Melanie D.; LaBel, Kenneth A.

    2017-01-01

    For many years, intellectual property (IP) cores have been incorporated into field programmable gate array (FPGA) and application specific integrated circuit (ASIC) design flows. However, the usage of large, complex IP cores was limited within products that required a high level of reliability. This is no longer the case. IP core insertion has become mainstream, including use in highly reliable products. Due to limited visibility and control, challenges exist when using IP cores that can subsequently compromise product reliability. We discuss these challenges and suggest potential solutions for critical-application IP insertion.

  12. TH-A-9A-02: BEST IN PHYSICS (THERAPY) - 4D IMRT Planning Using Highly- Parallelizable Particle Swarm Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Modiri, A; Gu, X; Sawant, A

    2014-06-15

    Purpose: We present a particle swarm optimization (PSO)-based 4D IMRT planning technique designed for dynamic MLC tracking delivery to lung tumors. The key idea is to utilize the temporal dimension as an additional degree of freedom rather than a constraint in order to achieve improved sparing of organs at risk (OARs). Methods: The target and normal structures were manually contoured on each of the ten phases of a 4DCT scan acquired from a lung SBRT patient who exhibited 1.5 cm tumor motion despite the use of abdominal compression. Ten corresponding IMRT plans were generated using the Eclipse treatment planning system. These plans served as initial guess solutions for the PSO algorithm. Fluence weights were optimized over the entire solution space, i.e., 10 phases × 12 beams × 166 control points. The size of the solution space motivated our choice of PSO, which is a highly parallelizable stochastic global optimization technique that is well-suited for such large problems. A summed fluence map was created using an in-house B-spline deformable image registration. Each plan was compared with a corresponding, internal target volume (ITV)-based IMRT plan. Results: The PSO 4D IMRT plan yielded comparable PTV coverage and significantly higher dose sparing for parallel and serial OARs compared to the ITV-based plan. The dose sparing achieved via PSO 4D IMRT was: lung Dmean = 28%; lung V20 = 90%; spinal cord Dmax = 23%; esophagus Dmax = 31%; heart Dmax = 51%; heart Dmean = 64%. Conclusion: Truly 4D IMRT that uses the temporal dimension as an additional degree of freedom can achieve significant dose sparing of serial and parallel OARs. Given the large solution space, PSO represents an attractive, parallelizable tool to achieve globally optimal solutions for such problems. This work was supported through funding from the National Institutes of Health and Varian Medical Systems. Amit Sawant has research funding from Varian Medical Systems, VisionRT Ltd. and Elekta.
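
    The core PSO update used by such planners can be sketched in a few lines. This is a generic textbook swarm, not the treatment-planning implementation; the hyperparameters are standard illustrative values.

```python
# Minimal PSO: velocities blend inertia with pulls toward each particle's own
# best position ("cognitive") and the swarm's best position ("social").
import random

def pso_minimize(f, dim, n_particles=30, n_iter=200, bounds=(-5.0, 5.0),
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

    On a toy quadratic objective the swarm collapses onto the minimum; in the planning problem each "position" would instead be the full vector of fluence weights, and the per-particle evaluations are what makes the method highly parallelizable.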

  13. Coupled orbit-attitude mission design in the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Guzzetti, Davide

    Trajectory design increasingly leverages multi-body dynamical structures that are based on an understanding of various types of orbits in the Circular Restricted Three-Body Problem (CR3BP). Given the more complex dynamical environment, mission applications may also benefit from deeper insight into the attitude motion. In this investigation, the attitude dynamics are coupled with the trajectories in the CR3BP. In a highly sensitive dynamical model, such as the orbit-attitude CR3BP, periodic solutions allow delineation of the fundamental dynamical structures. Periodic solutions are also a subset of motions that are bounded over an infinite time-span (assuming no perturbing factors), without the necessity to integrate over an infinite time interval. Euler equations of motion and quaternion kinematics describe the rotational behavior of the spacecraft, whereas the translation of the center of mass is modeled in the CR3BP equations. A multiple shooting and continuation procedure are employed to target orbit-attitude periodic solutions in this model. Application of Floquet theory, Poincare mapping, and grid search to identify initial guesses for the targeting algorithm is described. In the Earth-Moon system, representative scenarios are explored for axisymmetric vehicles with various inertia characteristics, assuming that the vehicles move along Lyapunov, halo as well as distant retrograde orbits. A rich structure of possible periodic behaviors appears to pervade the solution space in the coupled problem. The stability analysis of the attitude dynamics for the selected families is included. Among the computed solutions, marginally stable and slowly diverging rotational behaviors exist and may offer interesting mission applications. Additionally, the solar radiation pressure is included and a fully coupled orbit-attitude model is developed. 
With specific application to solar sails, various guidance algorithms are explored to direct the spacecraft along a desired path, when the mutual interaction between orbit and attitude dynamics is considered. Each strategy implements a different form of control input, ranging from instantaneous reorientation of the sail pointing direction to the application of control torques, and it is demonstrated within a simple station keeping scenario.

  14. Challenges for Wireless Mesh Networks to provide reliable carrier-grade services

    NASA Astrophysics Data System (ADS)

    von Hugo, D.; Bayer, N.

    2011-08-01

    Provision of mobile and wireless services today within a competitive environment and driven by a huge amount of steadily emerging new services and applications is both challenge and chance for radio network operators. Deployment and operation of an infrastructure for mobile and wireless broadband connectivity generally requires planning effort and large investments. A promising approach to reduce expenses for radio access networking is offered by Wireless Mesh Networks (WMNs). Here traditional dedicated backhaul connections to each access point are replaced by wireless multi-hop links between neighbouring access nodes and few gateways to the backbone employing standard radio technology. Such a solution provides at the same time high flexibility in both deployment and the amount of offered capacity and shall reduce overall expenses. On the other hand currently available mesh solutions do not provide carrier grade service quality and reliability and often fail to cope with high traffic load. EU project CARMEN (CARrier grade MEsh Networks) was initiated to incorporate different heterogeneous technologies and new protocols to allow for reliable transmission over "best effort" radio channels, to support a reliable mobility and network management, self-configuration and dynamic resource usage, and thus to offer a permanent or temporary broadband access at high cost efficiency. The contribution provides an overview on preliminary project results with focus on main technical challenges from a research and implementation point of view. Especially impact of mesh topology on the overall system performance in terms of throughput and connection reliability and aspects of a dedicated hybrid mobility management solution will be discussed.

  15. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
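
    The layer-wise strategy above is a variant of reliability-guided tracking, which can be sketched with a priority queue: points are processed in order of the correlation quality of an already-computed neighbour, so every new point starts from a trustworthy initial guess. This is a schematic illustration only; the names and the `correlate` interface are assumptions, not the paper's API.

```python
# Reliability-guided ordering: a max-priority queue on correlation quality
# decides which point is processed next; its result seeds its neighbours.
import heapq

def reliability_guided_track(neighbours, correlate, seed):
    """neighbours: point id -> list of adjacent ids.
    correlate(point, initial_guess) -> (quality, displacement).
    Returns a dict mapping each reachable point id to its displacement."""
    quality, disp = correlate(seed, None)   # seed point has no initial guess
    results = {seed: disp}
    heap = [(-quality, seed)]               # negate: heapq is a min-heap
    while heap:
        _, p = heapq.heappop(heap)
        for q in neighbours[p]:
            if q in results:
                continue
            quality, disp = correlate(q, results[p])  # neighbour's result as guess
            results[q] = disp
            heapq.heappush(heap, (-quality, q))
    return results
```

    Because low-quality matches sink to the back of the queue, unreliable regions (voids, low-texture areas) are reached last and never contaminate the initial guesses of well-correlated points.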

  16. Antenatal fetal heart rate and "maternal intuition" as predictors of fetal sex.

    PubMed

    Genuis, S; Genuis, S K; Chang, W C

    1996-06-01

    To determine if the antenatal fetal heart rate is a reliable predictor of fetal sex, if there is any correlation between "maternal intuition" and fetal gender, and if maternal intuition favors one sex over the other. Two hundred twelve consecutive maternity patients with singleton gestations underwent a total of 2,261 antepartum visits. Fetal heart rate assessment was carried out between 14 and 41 weeks of gestation. At 32 weeks, participants were asked if they had a strong intuitive feeling regarding the fetal gender. Following birth, data on the infant were recorded, and the information was analyzed. There was no significant difference in the baseline fetal heart rate between male and female fetuses at any recorded gestational age. One hundred ten patients (51.9%) in the sample indicated a strong belief about the sex of their fetuses, with the majority (63.6%) predicting a male. The accuracy of maternal intuition, however, was not significantly different from that of random guessing. In the current era of declining family size, an increased focus on absolute reproductive choice and proliferating reproductive technological services, prenatal sex determination and sex selection will continue to provoke increasing attention. Fetal heart rate determination and maternal intuition, however, are not valid predictors of fetal gender.

  17. Setting the scene for SWOT: global maps of river reach hydrodynamic variables

    NASA Astrophysics Data System (ADS)

    Schumann, Guy J.-P.; Durand, Michael; Pavelsky, Tamlin; Lion, Christine; Allen, George

    2017-04-01

    Credible and reliable characterization of discharge from the Surface Water and Ocean Topography (SWOT) mission using the Manning-based algorithms needs a prior estimate constraining reach-scale channel roughness, base flow and river bathymetry. For some places, any one of those variables may exist locally or even regionally as a measurement, which is often only at a station, or sometimes as a basin-wide model estimate. However, to date none of those exist at the scale required for SWOT and thus need to be mapped at a continental scale. The prior estimates will be employed for producing initial discharge estimates, which will be used as starting-guesses for the various Manning-based algorithms, to be refined using the SWOT measurements themselves. A multitude of reach-scale variables were derived, including Landsat-based width, SRTM slope and accumulation area. As a possible starting point for building the prior database of low flow, river bathymetry and channel roughness estimates, we employed a variety of sources, including data from all GRDC records, simulations from the long-time runs of the global water balance model (WBM), and reach-based calculations from hydraulic geometry relationships as well as Manning's equation. Here, we present the first global maps of this prior database with some initial validation, caveats and prospective uses.

  18. Fast human pose estimation using 3D Zernike descriptors

    NASA Astrophysics Data System (ADS)

    Berjón, Daniel; Morán, Francisco

    2012-03-01

    Markerless video-based human pose estimation algorithms face a high-dimensional problem that is frequently broken down into several lower-dimensional ones by estimating the pose of each limb separately. However, in order to do so they need to reliably locate the torso, for which they typically rely on time coherence and tracking algorithms. Their losing track usually results in catastrophic failure of the process, requiring human intervention and thus precluding their usage in real-time applications. We propose a very fast rough pose estimation scheme based on global shape descriptors built on 3D Zernike moments. Using an articulated model that we configure in many poses, a large database of descriptor/pose pairs can be computed off-line. Thus, the only steps that must be done on-line are the extraction of the descriptors for each input volume and a search against the database to get the most likely poses. While the result of such process is not a fine pose estimation, it can be useful to help more sophisticated algorithms to regain track or make more educated guesses when creating new particles in particle-filter-based tracking schemes. We have achieved a performance of about ten fps on a single computer using a database of about one million entries.
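    The on-line lookup described above can be sketched as a nearest-neighbor search over precomputed descriptor/pose pairs. A minimal brute-force version (the Euclidean metric and all names are assumptions for illustration; the paper's actual search strategy may differ):

    ```python
    import numpy as np

    def nearest_poses(query_desc, db_descs, db_poses, k=5):
        """Return the k database poses whose descriptors are closest
        (Euclidean distance) to the query descriptor.
        db_descs: (N, D) descriptor matrix, db_poses: (N, ...) pose records."""
        dists = np.linalg.norm(db_descs - query_desc, axis=1)
        idx = np.argsort(dists)[:k]
        return db_poses[idx], dists[idx]

    # Toy database of three descriptor/pose pairs
    db_descs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
    db_poses = np.array([10, 11, 12])  # pose IDs standing in for full poses
    poses, dists = nearest_poses(np.array([0.9, 0.1]), db_descs, db_poses, k=1)
    ```

    With a database of around a million entries, a spatial index (e.g., a KD-tree) rather than brute force would typically be needed to sustain the reported frame rates.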

  19. Effects of an Environmental Education Course on Consensus Estimates for Proenvironmental Intentions

    ERIC Educational Resources Information Center

    Hovardas, Tasos; Korfiatis, Konstantinos

    2012-01-01

    An environmental education intervention in a university conservation-related course was designed to decrease students' errors in consensus estimates for proenvironmental intentions, that is, their errors in guessing their classmates' proenvironmental intentions. Before and after the course, the authors measured two intentions regarding willingness…

  20. What the Computer Taught Me About My Students...or Is Binary Search "Natural"?

    ERIC Educational Resources Information Center

    Pasquino, Anne

    1978-01-01

    Several examples of student-written programs "teaching" a computer to guess systematically in finding a number between 0 and 10,000 are illustrated. These lend support to the contention that rather than being a "natural" application, using a binary search is a learned technique. (MN)
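    The systematic guessing the students programmed can be sketched as a standard binary search over the 0 to 10,000 range, assuming an oracle that reports whether each guess is too high or too low:

    ```python
    def guess_number(secret, lo=0, hi=10_000):
        """Binary search: guess the midpoint, halve the remaining range
        based on whether the guess was too high or too low."""
        guesses = 0
        while lo <= hi:
            mid = (lo + hi) // 2
            guesses += 1
            if mid == secret:
                return mid, guesses
            if mid < secret:
                lo = mid + 1   # guess too low: discard lower half
            else:
                hi = mid - 1   # guess too high: discard upper half
        raise ValueError("secret outside range")
    ```

    Any number in the range is found in at most 14 guesses (since 2^14 > 10,001), which is precisely the "learned technique" the article contrasts with naive sequential guessing.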

  1. Destroying a Racial Myth

    ERIC Educational Resources Information Center

    Wells, Elmer E.

    1978-01-01

    Describes a research study to determine if blindfolded subjects could tell the race (white or black) of members of a basketball team on the basis of each team member's body odor. Subjects, who were both black and white, were unable to guess team members' racial identity with any degree of accuracy. (AV)

  2. Computational Immunology for the Defense of Large Scale Systems

    DTIC Science & Technology

    2002-07-01

    or unusual activity (e.g., multiple login attempts, possibly in order to guess a password). We can summarize our results as follows: • Our...such as those used in SRI’s Emerald project. There are two important characteristics of the approach introduced in [5]. First, it identifies a simple

  3. Health Policy and the Economy: Guessing about the Future.

    ERIC Educational Resources Information Center

    Helms, Robert B.

    1989-01-01

    This paper looks at demographic and financial trends that can affect the health care sector; the government's reliance on projections of budget expenditures and the current budget deficit; and trends in health care expenditures and their effects on the future of Social Security and Medicare. (MLW)

  4. God Bless You, Mrs. Liddy.

    ERIC Educational Resources Information Center

    Otto, Wayne

    1994-01-01

    Discusses the hyperbole surrounding statements and testimonials about the effectiveness of phonics instruction. Suggests that reading is more than sounding out words one at a time and that most children figure it out for themselves with little or no help. Suggests that reading is also more than a psycholinguistic guessing game. (RS)

  5. Would You Hire This Person?

    ERIC Educational Resources Information Center

    Claudis, Penny Todd

    1982-01-01

    Secondary students complete an application for employment using a person they have studied in their U.S. history, government, or psychology courses. All information has to be historically correct. The teacher reads the application to the class and students guess the name of the applicant. A sample application form is included. (AM)

  6. Groups for Parents with Developmental Disabilities.

    ERIC Educational Resources Information Center

    Johnson, Paul L.

    The Parent Group Development Program was established to provide information and support for parents with developmental disabilities. Parent group activities focused on offering information about child development (through a guessing game in which behavior was matched to one of four age groups) and meal planning and budgeting (with a task that…

  7. Guess Who's Coming to Graduation?

    ERIC Educational Resources Information Center

    Miller, Ann

    2010-01-01

    In this article, the author shares her experience in assisting seniors and two teachers at Kalamazoo (Michigan) Central High School when they joined a national contest to have the President of the United States deliver the commencement address to the class of 2010. One of the author's favorite memories surrounds the distribution of commencement…

  8. A Review of Scoring Algorithms for Ability and Aptitude Tests.

    ERIC Educational Resources Information Center

    Chevalier, Shirley A.

    In conventional practice, most educators and educational researchers score cognitive tests using a dichotomous right-wrong scoring system. Although simple and straightforward, this method does not take into consideration other factors, such as partial knowledge or guessing tendencies and abilities. This paper discusses alternative scoring models:…

  9. The Role of Response Confusion in Proactive Interference

    ERIC Educational Resources Information Center

    Dillon, Richard F.; Thomas, Heather

    1975-01-01

    In two experiments using the Brown-Peterson memory paradigm, instructions to guess had small effects on recall but sizeable effects on the incidence of prior-list intrusions. However, results indicate that proactive interference is primarily the result of inability to generate correct items, rather than confusion between present and previous items.…

  10. Dream Map to a Mind Seized

    ERIC Educational Resources Information Center

    Leal, Amy

    2012-01-01

    Parents of children on the autism spectrum often talk about a number of comorbid conditions that can accompany the disorder--immunological dysfunctions, frequent ear infections, intractable strep, gastrointestinal disorders, rampant yeast, inexplicable regressions, allergies. The author did not guess that her son would have all of those as well as…

  11. Computer supplies insulation recipe for Cookie Company Roof

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    Roofing contractors no longer have to rely on complicated calculations and educated guesses to determine cost-efficient levels of roof insulation. A simple hand-held calculator and printer offers seven different programs for fast figuring insulation thickness based on job type, roof size, tax rates, and heating and cooling cost factors.

  12. Bayesian Estimation of the DINA Model with Gibbs Sampling

    ERIC Educational Resources Information Center

    Culpepper, Steven Andrew

    2015-01-01

    A Bayesian model formulation of the deterministic inputs, noisy "and" gate (DINA) model is presented. Gibbs sampling is employed to simulate from the joint posterior distribution of item guessing and slipping parameters, subject attribute parameters, and latent class probabilities. The procedure extends concepts in Béguin and Glas,…
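    The guessing and slipping parameters enter the DINA model through its response rule: an examinee who masters every attribute an item requires answers correctly with probability 1 − s_j, and otherwise succeeds only by guessing, with probability g_j. A minimal simulation of that rule (variable names are illustrative; this is the data-generating model, not the Gibbs sampler itself):

    ```python
    import numpy as np

    def dina_simulate(alpha, Q, guess, slip, rng):
        """Simulate DINA item responses.
        alpha: (N, K) 0/1 examinee attribute-mastery matrix
        Q:     (J, K) 0/1 item-attribute Q-matrix
        guess, slip: (J,) item guessing / slipping parameters."""
        # eta[i, j] = 1 iff examinee i masters every attribute item j requires
        eta = (alpha @ Q.T >= Q.sum(axis=1)).astype(int)
        # "and" gate: full mastery -> 1 - slip, otherwise -> guess
        p_correct = np.where(eta == 1, 1.0 - slip, guess)
        return (rng.random(p_correct.shape) < p_correct).astype(int)

    rng = np.random.default_rng(0)
    alpha = np.array([[1, 1], [0, 0]])        # one master, one non-master
    Q = np.array([[1, 0], [1, 1]])            # item 1 needs skill 1; item 2 needs both
    X = dina_simulate(alpha, Q, np.array([0.2, 0.2]), np.array([0.1, 0.1]), rng)
    ```

    A Gibbs sampler such as the one in the article alternates between drawing these latent attribute profiles and drawing the guess/slip parameters conditional on them.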

  13. Multilevel and Latent Variable Modeling with Composite Links and Exploded Likelihoods

    ERIC Educational Resources Information Center

    Rabe-Hesketh, Sophia; Skrondal, Anders

    2007-01-01

    Composite links and exploded likelihoods are powerful yet simple tools for specifying a wide range of latent variable models. Applications considered include survival or duration models, models for rankings, small area estimation with census information, models for ordinal responses, item response models with guessing, randomized response models,…

  14. Robust Estimation of Latent Ability in Item Response Models

    ERIC Educational Resources Information Center

    Schuster, Christof; Yuan, Ke-Hai

    2011-01-01

    Because of response disturbances such as guessing, cheating, or carelessness, item response models often can only approximate the "true" individual response probabilities. As a consequence, maximum-likelihood estimates of ability will be biased. Typically, the nature and extent to which response disturbances are present is unknown, and, therefore,…
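    Guessing is commonly accommodated in IRT through the three-parameter logistic (3PL) model, whose lower asymptote c keeps the correct-response probability above chance even at very low ability. A minimal sketch of the response function (not the authors' robust estimator):

    ```python
    import math

    def p_3pl(theta, a, b, c):
        """3PL item response function:
        P(theta) = c + (1 - c) / (1 + exp(-a * (theta - b)))
        theta: ability, a: discrimination, b: difficulty,
        c: pseudo-guessing lower asymptote."""
        return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))
    ```

    With c = 0.2 (a typical floor for five-option multiple choice), even an examinee far below the item's difficulty retains roughly a 20% chance of a correct response, which is exactly the disturbance that biases maximum-likelihood ability estimates when it is ignored.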

  15. Probability Matching in the Right Hemisphere

    ERIC Educational Resources Information Center

    Miller, M.B.; Valsangkar-Smyth, M.

    2005-01-01

    Previously it has been shown that the left hemisphere, but not the right, of split-brain patients tends to match the frequency of previous occurrences in probability-guessing paradigms (Wolford, Miller, & Gazzaniga, 2000). This phenomenon has been attributed to an "interpreter," a mechanism for making interpretations and forming hypotheses,…

  16. Amazing Animals

    ERIC Educational Resources Information Center

    Al-Kuwari, Najat Saad

    2007-01-01

    "Animals" is a three-part lesson plan for young learners with a zoo animal theme. The first lesson is full of activities to describe animals, with Simon Says, guessing games, and learning stations. The second lesson is about desert animals, but other types of animals could be chosen depending on student interest. This lesson teaches…

  17. Correcting the Normalized Gain for Guessing

    ERIC Educational Resources Information Center

    Stewart, John; Stewart, Gay

    2010-01-01

    The normalized gain, "g", has been an important tool for the characterization of conceptual improvement in physics courses since its use in Hake's extensive study on conceptual learning in introductory physics. The normalized gain is calculated from the score on a pre-test administered before instruction and a post-test administered…
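    Hake's normalized gain compares the realized improvement to the maximum improvement still available at pre-test. A minimal sketch of the uncorrected gain (the article's own guessing correction is not reproduced here):

    ```python
    def normalized_gain(pre_pct, post_pct):
        """Hake's normalized gain: fraction of the possible improvement realized.
        g = (post - pre) / (100 - pre), with scores as percentages."""
        if pre_pct >= 100:
            raise ValueError("pre-test already at ceiling")
        return (post_pct - pre_pct) / (100.0 - pre_pct)

    g = normalized_gain(40.0, 70.0)  # half of the available improvement realized
    ```

    Because random guessing inflates both pre- and post-test scores, the raw g can misstate conceptual improvement, which motivates the correction the article proposes.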

  18. The Pogo Principle: We Have Met the Enemy, and Guess Who It Is?

    ERIC Educational Resources Information Center

    Blaine, Robert

    1994-01-01

    Despite recent criticisms, U.S. society is getting a good value for its education dollar. High schools are beset by college influences on the curriculum; special education requirements; overemphasis on student activities; unreasonable international comparisons; the influences of TV, teenage employment, and pathological behaviors; and the…

  19. Propagation of Innovations in Networked Groups

    ERIC Educational Resources Information Center

    Mason, Winter A.; Jones, Andy; Goldstone, Robert L.

    2008-01-01

    A novel paradigm was developed to study the behavior of groups of networked people searching a problem space. The authors examined how different network structures affect the propagation of information in laboratory-created groups. Participants made numerical guesses and received scores that were also made available to their neighbors in the…

  20. Leveraging Code Comments to Improve Software Reliability

    ERIC Educational Resources Information Center

    Tan, Lin

    2009-01-01

    Commenting source code has long been a common practice in software development. This thesis, consisting of three pieces of work, made novel use of the code comments written in natural language to improve software reliability. Our solution combines Natural Language Processing (NLP), Machine Learning, Statistics, and Program Analysis techniques to…
